Smooth voxels (Marching Cubes algorithm)


Hello everyone. I am an old Second Life user, but new to High Fidelity… brought here by the news that Second Life is getting a successor. I looked into the project a bit, and its current capabilities are very impressive so far. For starters, there's one thing in particular I'm intrigued by:

Something I always dreamed of is a virtual world that properly makes use of voxels… as some might say, a hybrid between Second Life and Minecraft. I was excited to find out that Hifi is voxel based and aims to be just that! But after looking into its existing voxel system, I was a bit disappointed. It appears the voxel editor only lets you place colored cubes… a few models too, but that is beside the point I wish to get to.

The idea I’ve had for several years is an in-world editing system that lets you sculpt and create entire objects out of voxels… objects that ultimately look as detailed as standard 3D models. While the ability to import polygon-based meshes is of course essential, I believe this is the only way to have a real in-world creation mechanism, where you can build and modify anything freely.

Of course this raises a few problems. Since today’s computer hardware doesn’t allow making voxels as small as molecules, that level of detail isn’t yet achievable this way, and everything ultimately looks blocky. But smooth objects can be achieved if, instead of drawing each voxel as a block, you generate a continuous surface between the voxel positions. This is typically done with the Marching Cubes algorithm and metaballs, which is how some realtime diggable / constructible voxel terrains work. More info and some examples of this:
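To give a rough idea of what I mean, here is a minimal Python sketch (my own made-up names, nothing to do with Hifi's code) of the metaball part: a density field that Marching Cubes would sample at each voxel corner, emitting triangles wherever the value crosses a chosen iso-level, so the result is a smooth surface rather than per-voxel blocks.

```python
# Sketch: a metaball density field. Marching Cubes samples such a field
# at every voxel corner and builds triangles where it crosses ISO_LEVEL,
# producing a smooth, continuous surface between voxel positions.

def metaball_field(point, balls):
    """Sum of inverse-square falloffs from each metaball center."""
    total = 0.0
    for (cx, cy, cz, radius) in balls:
        d2 = (point[0] - cx) ** 2 + (point[1] - cy) ** 2 + (point[2] - cz) ** 2
        if d2 > 0:
            total += radius * radius / d2
    return total

ISO_LEVEL = 1.0  # the surface lies where the field equals ISO_LEVEL
balls = [(0.0, 0.0, 0.0, 1.0), (1.5, 0.0, 0.0, 1.0)]

# Classify two sample points: near a ball center the field exceeds the
# iso-level (inside the surface), far away it falls below (outside).
inside = metaball_field((0.2, 0.0, 0.0), balls) > ISO_LEVEL
outside = metaball_field((10.0, 10.0, 10.0), balls) > ISO_LEVEL
print(inside, outside)  # True False
```

The actual triangulation step uses the well-known 256-case Marching Cubes lookup table over those corner classifications, which I've left out for brevity.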

Smoothing the surfaces between voxels is of course not enough on its own to fully control the shape of a voxel object. First, you’d need to specify how and where voxels connect… so different materials / parts of an object don’t soften or erase one another when they shouldn’t. Second, you must define the texture and its blending / scaling / offset / rotation on each voxel manually… since there is no UV mapping here and the resulting surface is procedural. This should all be doable and relatively easy.
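To sketch what per-voxel texturing without UV mapping might look like (again my own hypothetical names, just illustrating the idea): each voxel corner stores a weight per material, and where the surface crosses an edge you interpolate the weights and mix the material colors, giving smooth transitions between textures.

```python
# Sketch: per-voxel material weights in place of UV mapping. Corners
# store a weight per material; at a surface crossing we interpolate the
# weights and blend the material colors accordingly.

MATERIAL_COLORS = {
    "grass": (0.2, 0.6, 0.2),
    "rock":  (0.5, 0.5, 0.5),
}

def blend_materials(weights_a, weights_b, t):
    """Interpolate two corners' material weights at parameter t in [0, 1],
    then mix the material colors by the normalized interpolated weights."""
    blended = {m: (1 - t) * weights_a.get(m, 0.0) + t * weights_b.get(m, 0.0)
               for m in MATERIAL_COLORS}
    total = sum(blended.values()) or 1.0
    color = [0.0, 0.0, 0.0]
    for m, w in blended.items():
        for i in range(3):
            color[i] += (w / total) * MATERIAL_COLORS[m][i]
    return tuple(round(c, 3) for c in color)

# A pure-grass corner next to a pure-rock corner, sampled halfway along
# the edge: the result is an even mix of the two material colors.
print(blend_materials({"grass": 1.0}, {"rock": 1.0}, 0.5))  # (0.35, 0.55, 0.35)
```

A real renderer would blend sampled texture colors in a shader rather than flat colors, but the weighting scheme is the same.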

Will Marching Cubes (or another voxel smoothing mechanism) ever be implemented? And will the in-world building tools allow users to make full detail objects out of voxels in realtime?


I discussed this a bit in the chat… apparently I was missing some essential information before making this post. It was mentioned that a system to support smooth voxels is already in place, called the metavoxel heightmap. I was misled by the term “height map”, since that typically means a 2D image offsetting vertex points on a flat plane. Supposedly this already allows smooth 3D voxel terrain and digging tunnels in any direction through its volume… as well as texturing various areas differently, with textures blending smoothly into one another. That sounds exactly like the functionality I’ve been imagining!

I’m still curious what level of complexity this allows for exactly. Terrain is soft and rough, so it’s a low-detail entity… but what about higher-precision stuff? Can this be used to make medium-detail models, like simple architecture? What about high-detail contraptions with multiple shapes and materials… like whole cars or home electronics? How clean and controllable are the shapes you can get via metavoxels?


Yes, the metavoxel system combines heightmaps with a voxel representation using the “dual contour” approach, which is related to Marching Cubes. If you’re interested in the technical details, there’s a paper at … that you can read.

As described in that paper, the dual contour approach can handle both smooth and “creased” surfaces, though with limitations due to topology: each cube can only contain one vertex, so it can’t in a general sense handle features that come to sharp points, like cones. Still, it works pretty well for a variety of different features.

At the moment, any dual contour features you make with the metavoxel editor are limited to a fixed resolution (1/8 of a meter, I believe). When you make them on an existing heightfield, we clear out that section of heightfield and convert it to dual contour data. The plan for the near future is to make that approach more seamless (both figuratively and literally, since there are currently seams between the two representations) and more efficient in terms of memory and bandwidth usage. Notably, the features you make with the dual contour tools will match the resolution of the underlying heightfield.

After we get this approach working well for terrain, we can consider using it for more fine-grained features, as well as providing a mechanism for instancing objects made with dual contour tools, so that you can create an object with the tools once and instantiate it in different locations (as you would an FBX-based entity). For now, however, the focus is on terrain and relatively coarse-grained features.


Thank you, that explains a lot. I imagine there are limitations as with any approach… although ideally this could be used to build any type of object in proper detail. As for sharp parts, I imagine it would still be possible to create cubes and cones, although they wouldn’t be perfect and would have soft edges.

It will indeed be nice to have multiple objects made out of metavoxels, which could be taken into inventory and rezzed anywhere in the world… as is the case in Second Life.