One recipe for in-world terrain and collision hull editing


#1
  1. Include THREE.js and its OBJExporter.js on the Scripting side.
  2. Devise a way to spatially-interact with “control point” vertices (like maybe how we edit Bézier curves in 2D?).
  3. Utilize an in-memory THREE.js geometry to represent the structure – which can easily be exported to OBJ format.
  4. Then upload the OBJ data via scripting to ATP – producing a virtual .obj file.
  5. And then reference the resulting atp:// link as usual from a modelURL, compoundShapeURL, etc… (a rough sketch of these last steps follows below).
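Here's a rough, untested sketch of steps 3 through 5, assuming THREE.js plus OBJExporter.js are already loaded into the script; the exact Assets.uploadData callback shape is from memory, so treat those details as assumptions:

    // Rough sketch: build an in-memory THREE.js geometry, export it to OBJ text,
    // upload that text to ATP, then spawn a Model entity pointing at the atp:// URL.
    // Assumes THREE.js + OBJExporter.js are loaded; the Assets/Entities details are
    // from memory and may need adjusting against the current scripting API docs.
    var geometry = new THREE.BoxGeometry(1, 1, 1);          // step 3: in-memory geometry (stand-in for the edited structure)
    var mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial());
    var objText = new THREE.OBJExporter().parse(mesh);      // step 3/4: export the geometry to OBJ text

    Assets.uploadData(objText, function (url /*, hash */) { // step 4: upload, yielding a virtual .obj on ATP
        print("uploaded OBJ to " + url);
        Entities.addEntity({                                // step 5: reference the atp:// link as usual
            type: "Model",
            modelURL: url,
            position: Vec3.sum(MyAvatar.position, { x: 0, y: 0, z: -2 }),
            dimensions: { x: 1, y: 1, z: 1 }
        });
    });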

The last three steps are reversible – it’s possible to read back the .obj data and import it into a THREE.js geometry again using OBJLoader.js.
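A similarly rough sketch of that reverse direction (the Assets.downloadData call and its callback shape are assumptions on my part):

    // Rough sketch: fetch previously uploaded OBJ text back from ATP and re-import
    // it via OBJLoader. objUrl is an atp:// link produced earlier; the exact
    // Assets.downloadData signature is an assumption and may need adjusting.
    Assets.downloadData(objUrl, function (objText) {
        var object3d = new THREE.OBJLoader().parse(objText);  // returns a THREE.Group of meshes
        // ... tweak vertices here, then re-export and re-upload as in the recipe above
    });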

So (technically) the building blocks already exist to forge some kind of poor man’s in-world model editor in pure JavaScript.

I’ve not tried decoding OBJ files from scripting yet – but generating them and exporting to ATP seems to work fine (in fact, that’s how I implemented my dynamic 3D text experiment way back when – using THREE.js’s TextGeometry to extrude OBJ models on the fly while in-world).
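For what it’s worth, the 3D-text part boiled down to something along these lines (sketched from memory against a newer THREE.js FontLoader-style API; the font URL is a placeholder):

    // Rough sketch: extrude 3D text with TextGeometry and hand it to the same
    // OBJ-export/upload path as above. Assumes the FontLoader/TextGeometry and
    // OBJExporter examples from THREE.js are loaded; the font URL is a placeholder.
    new THREE.FontLoader().load("helvetiker_regular.typeface.json", function (font) {
        var textGeometry = new THREE.TextGeometry("hello in-world", {
            font: font,
            size: 0.5,    // glyph height (meters)
            height: 0.1   // extrusion depth
        });
        var textMesh = new THREE.Mesh(textGeometry, new THREE.MeshBasicMaterial());
        var objText = new THREE.OBJExporter().parse(textMesh);
        // ... upload objText to ATP exactly as in the recipe above
    });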

Might be a tad too slow CPU-wise to work on super-dense models, but if applied conservatively (like to manage a few tens of thousands of vertices at a time) it seems plenty feasible.

Anyway, just tossing the idea out there as a potential stop-gap – while waiting on even better ways to do in-world editing. If anybody has a strong interest in exploring this approach, let me know and I’ll attempt to share a few starting points.


#2

We are heading into beta - beta 1 - and the stopgap measures, nice as they may seem, need to come to an end. It is time to think about making this sort of stuff easy for people. It should not take a ton of complex code to work around a missing feature as fundamental as this one.

I was looking at what Ebbe recently said of Project Sansar: “Creating your own VR experience shouldn’t require an engineering team, and Project Sansar will make that possible.” It is time for all of us to get down and start thinking through what core features need to be in beta 2 of High Fidelity (beta 1 is already locked down), so that enabling interfaces are in place throughout the code base to make it easy to do things here.


#3

Something like this? https://www.youtube.com/watch?v=6HLQz9yMyGA&feature=youtu.be (Cospaces)


#4

Cospaces is a nice assembling tool. It seems to lack editing tools, resizing, deformations, etc. I like how it does rotations as compared to here.


#5

I think I can relate to some of your frustration – sometimes it feels like we’ve been waiting “forever” for certain core features to emerge (with the ability to easily create one’s own VR experience topping the list).

But it seems like HiFi wants that same thing, and that maybe a lack of focus isn’t the underlying problem…

Perhaps VR editing isn’t easy because it’s a more expansive challenge than first thought.

Imagine what the situation might look like if an industry wanted and aimed for the first Microsoft Word… but somehow arrived at the first Photoshop instead. Both programs can create arbitrary stuff, edit a variety of documents, support drag and drop, etc.

However, how could we usefully specify either piece of software in terms like those? And without counterparts for comparison, how could we efficiently tell actual deficits apart from having simply built the wrong (if similar) tool?

Anyhow… after diving pretty deep into UX editing code, I’m convinced there are promising approaches being overlooked right now. This is why I continue to suggest the need for more experimentation – and, rather than placing premature demands on the next beta, that anyone interested might help shine light into editing’s darkest corners…

I haven’t checked out Project Sansar or Cospaces in gory detail yet – so I appreciate the links and will investigate both further. Maybe there’s a more wiki-like place where we could start gathering together anything we find and like about particular VR editing systems?

For every complex problem there is a solution that is concise, clear, simple, and wrong. - H.L. Mencken


#6

It is known art. You can even dig into Blender to get not only a glimpse but the editing code itself. But yes, we might as well start a discussion here on ways to do it. There are some big questions to address, notably around change/take, but that big one can be set aside for now.

When an entity is instantiated, a copy of its structure/models is kept in the domain. It is also saved locally. It should be possible, with appropriate script methods, to deform the model in-situ and have those changes persisted in the domain.
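Purely as a strawman, the kind of script methods I have in mind might look something like this (none of these calls exist today; the names and parameters are hypothetical):

    // Hypothetical API sketch only: neither getModelVertices nor setModelVertices
    // exists in the current scripting interface. The idea is to read a model's
    // vertices, deform them in-situ, and have the domain persist the change.
    var entityID = Entities.addEntity({ type: "Model", modelURL: "atp:/terrain.obj" }); // placeholder model

    var vertices = Entities.getModelVertices(entityID);   // hypothetical read-back

    for (var i = 0; i < vertices.length; i++) {
        if (vertices[i].y > 10.0) {
            vertices[i].y += 0.5;                          // raise high ground by half a meter
        }
    }

    Entities.setModelVertices(entityID, vertices);         // hypothetical write-back, persisted by the domain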


#7

Yup, been there / done that. :) Blender’s native RNA/DNA stuff is intriguing. It’s too bad it doesn’t maintain full surface-level parity with the UI and Python bindings though. On a side note – it’s possible to drive Blender using JavaScript, which might put a new spin on its integration potential.

My sights however are more towards user-generated editing systems, where even things like the mechanics of rotation can be personalized and scripted. What I’d really like to see is a VR system that can be programmed with smarts about different spatial reasoning modes; for example – between editing a forest, its trees or their collective relationships.


#8

Do you mean driving Blender via .js just as a keyboard-command translator for an outside Blender instance? Or something else?


#9

Like fractal leaf creation? That would be very nice.


#10

… I meant having a JavaScript engine available inside of Blender, and was thinking it could be used strategically over Python for interoperability stuff.

Technically it’s possible to do this just-in-time (ie: without recompiling Blender) by using Python’s built-in ctypes and a pre-built Mozilla SpiderMonkey js.dll (.so, .dylib, etc.).

   jsNNN.dll + Python glue + JavaScript glue == JS pseudo-Blender API

But of course it’d probably perform a lot better if done as a native Blender add-on instead.


#11

Yup.

And in a similar vein there’s a really cool WebGL example that effectively produces an endless supply of unique flowers – with its whole “editor” consisting essentially of a pseudo-random refresh button.

What’s maybe even more amazing is that when “intelligence” can be placed at the substrate level… sometimes a radical kind of compression emerges. The “genetic diversity” across the flowers below, for example, exists within a mere 16-bit seed value:




And since it’s deterministic, that seed value (along with the flower logic and shaders) represents everything needed to reproduce a flower exactly. For example, a field of 65,536 such wildflowers – each distinct and at least slightly different from all the others – suddenly fits into an uncompressed two and a half megabytes or so of instance data (ie: 64K * [vec3, quat, vec3, uint16] = 64K * 42 bytes).

If 16 bits can choose between 65,536 options… 64 bits could choose between 18,446,744,073,709,551,616 options.
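To make the compression point a bit more concrete, here's an illustrative sketch of the 42-byte instance record and a deterministic seed-to-parameter step (the hash and names below are mine, not taken from that WebGL demo):

    // Illustrative sketch only: pack one flower instance as position (vec3, 12 bytes),
    // rotation (quat, 16 bytes), scale (vec3, 12 bytes) and a 16-bit seed (2 bytes),
    // i.e. 42 bytes each, so 65,536 instances come to roughly 2.75 MB uncompressed.
    function packInstance(buffer, index, position, rotation, scale, seed) {
        var view = new DataView(buffer, index * 42);
        [position.x, position.y, position.z,
         rotation.x, rotation.y, rotation.z, rotation.w,
         scale.x, scale.y, scale.z].forEach(function (value, i) {
            view.setFloat32(i * 4, value, true);
        });
        view.setUint16(40, seed, true);
    }

    // Illustrative deterministic "genome": a small integer hash expands the 16-bit
    // seed into however many stable pseudo-random parameters the flower logic needs.
    function flowerParam(seed, n) {
        var h = Math.imul(seed ^ (n * 0x9E37), 0x85EBCA6B);
        h ^= h >>> 13;
        h = Math.imul(h, 0xC2B2AE35);
        h ^= h >>> 16;
        return (h >>> 0) / 4294967296;                 // same seed + n => same value in [0, 1)
    }

    var field = new ArrayBuffer(65536 * 42);           // the whole field of wildflowers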


#12

This is all very cool, shiny stuff. Truly, I want to see it happen. Until then, let us have the basics working: the ability to do static collision trimeshes.


#13

… the flower technology is also about efficiently registering millions of collisions against virtual surfaces in real time… and sure – I mean virtual photons – but physics simulations in general have started leaping onto the GPU, leading to an increasing variety of similarities.

Separately from that, I would like to better understand which aspects of collisions you think trimesh hulls can help with the most. What do you think about something like below as a set of minimum acceptance criteria?

  • At the human level… collisions are reasonably believable.
  • At the construction level… collisions are reasonably configurable.
  • At the interaction level… collisions are reasonably scriptable.
  • At the runtime level… collisions are reasonably resumable.
  • At the networking level… collisions are reasonably recoverable.
  • At the cloud level… collisions are reasonably divisible.
  • At the frame level… collisions are reasonably deferrable.

(have I missed anything critical – or included anything unimportant?)


#14

I answered those questions in several earlier posts, detailing the reasons why it is necessary (search for trimesh or tri-mesh). And, if you are not one of the HF developers, I’d rather continue that conversation with one of them. We as alphas not part of HF cannot do much more than suggest things, so suggesting things to one another, though a rich way to formulate consensus, is probably not the best way to get matters moving.

I’m quite in agreement that various procedural texturing and procedural build techniques would indeed be great things to have. But that is going to take a while, and right now we need to be able to quickly make static meshes work well with dynamic elements. For example, a mesh in the InWorldz grid has (as in SL) an option to derive its physics (tri-mesh) shape from a decimated render mesh, with accuracy to 0.05 m. Anything less detailed is low-fidelity.


#15

vertex shaders are on my personal wishlist, and some of the others too. that would enable a lot of great procedurally generated content!


#16

http://algorithmicbotany.org/papers/


#17

I like the cell tissue partitioning paper. Makes great leaves.


#18

cool – do you have any favorite use cases or applications in mind?

a while back i explored a lightweight virtual geometry/vertex/fragment approach – feeding it simple shapes like a pre-generated UV sphere or toroid, and then applying a vec4 floating-point texture as a single dynamic input to both the vertex and fragment shaders.

it’s not as flexible as having direct access to geometry+vertex shading – but at the same time, since it only varies a texture between frames, it’s amazingly GPU-friendly (and compatible with just about everything including WebGL 1.0).
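here's a rough sketch of that texture-driven setup in THREE.js terms (the names are illustrative; it assumes float textures and vertex texture fetch are available, which most desktop WebGL 1.0 implementations expose):

    // illustrative sketch: displace a pre-built sphere along its normals using a
    // vec4 float texture sampled in the vertex shader, and reuse the same texel
    // in the fragment shader. assumes OES_texture_float + vertex texture fetch.
    var size = 64;
    var controlTex = new THREE.DataTexture(
        new Float32Array(size * size * 4), size, size, THREE.RGBAFormat, THREE.FloatType);
    controlTex.needsUpdate = true;

    var material = new THREE.ShaderMaterial({
        uniforms: { controlTex: { type: "t", value: controlTex } },
        vertexShader: [
            "uniform sampler2D controlTex;",
            "varying vec4 vControl;",
            "void main() {",
            "  vControl = texture2D(controlTex, uv);",             // vertex texture fetch
            "  vec3 displaced = position + normal * vControl.x;",  // deform along the normal
            "  gl_Position = projectionMatrix * modelViewMatrix * vec4(displaced, 1.0);",
            "}"
        ].join("\n"),
        fragmentShader: [
            "varying vec4 vControl;",
            "void main() { gl_FragColor = vec4(vControl.yzw, 1.0); }"
        ].join("\n")
    });

    var blob = new THREE.Mesh(new THREE.SphereGeometry(1, 128, 128), material);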

here are two examples of the level of detail / controlled deformations to primitives that can be achieved in real time that way: