Processor Overhead Question - Three.js CLOTH versus HiFi_engine simulation?


#1

I’m going to need a lot of the devs and alphas to chime in on this one.

While looking at the posts by @humbletim I found
http://threejs.org/examples/#webgl_animation_cloth

I also recall @Twa_Hinkle showing us an OpenGL/WebGL water shader in JS that ((myself or someone else)) might be able to apply to a cube. Pictured: not the same one, but another example from the same site where the cloth demo was found:

My question is this: I want to leverage the most efficient methods for cloth simulation. Things like banners and flags aboard the Alpha Quintessance would come to life in the sea breeze, along with the audio that comes with it.

Should I simply start building JS-heavy objects, or wait until the Engine supports it? @Menithal, can I create a hack ((cause I’m that kinda guy)) whereby a bot avatar located at an arbitrary origin has a soft attachment that is affected by wind?

Special note: JS dev by trade. I would be fine with integrating Node.js or similar library dependencies into my StackManager or SandboxConsoles.

@c, @Balpien.Hammerer, @Caitlyn, @thoys, @Adrian


#2

A simpler method for cloth would be to just have an entity made out of a grid of armatures, and animate / parent them to one another instead: no need to go use a bot avatar with an arbitrary origin :slight_smile: since that is what you would be doing anyway with the “flag avatar”.
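Very roughly, a sketch of that parented-chain idea, assuming a build that supports entity parenting (parentID / localPosition / localRotation). Untested, and all the sizes, angles and names below are just illustrative:

```javascript
// Rough sketch: a "cloth" strip built from parented box entities,
// waved by driving each segment's localRotation in an update loop.
var SEGMENTS = 8;
var SEGMENT_SIZE = { x: 0.5, y: 0.1, z: 0.02 };

var root = Entities.addEntity({
    type: "Box",
    name: "flag-root",
    position: Vec3.sum(MyAvatar.position, { x: 0, y: 1.5, z: -2 }),
    dimensions: SEGMENT_SIZE,
    color: { red: 200, green: 40, blue: 40 }
});

// Each segment is parented to the previous one, so rotating a parent
// drags every child below it along -- a crude hinge chain.
var segments = [root];
for (var i = 1; i < SEGMENTS; i++) {
    segments.push(Entities.addEntity({
        type: "Box",
        name: "flag-segment-" + i,
        parentID: segments[i - 1],
        localPosition: { x: 0, y: -SEGMENT_SIZE.y, z: 0 },
        dimensions: SEGMENT_SIZE,
        color: { red: 200, green: 200, blue: 200 }
    }));
}

var elapsed = 0;
Script.update.connect(function (dt) {
    elapsed += dt;
    for (var i = 1; i < SEGMENTS; i++) {
        // Phase-shifted sine per segment to fake a travelling wave / gust.
        var angle = 15 * Math.sin(2 * elapsed + i * 0.8);
        Entities.editEntity(segments[i], {
            localRotation: Quat.fromPitchYawRollDegrees(0, 0, angle)
        });
    }
});

Script.scriptEnding.connect(function () {
    segments.forEach(function (id) { Entities.deleteEntity(id); });
});
```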

But yeah, a JS-heavy object would be another way, though I do not know how you would render planar surfaces without entities, let alone make something curved with them.

Also, gotta remember that with a JS-heavy object the entity server and the clients are going to get quite busy with the calculations and updates.


#3

Ultimately I think dynamic cloth simulations will have to be done on the GPU, but there might be some fun ways to approximate them if you’re willing to jump through some hoops…

For example, you could pre-rig a planar grid in Blender (which I think is what Menithal was also suggesting?) and then experiment with manipulating those “joints” dynamically.
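A hypothetical sketch of that, assuming your build exposes the model-entity joint calls (Entities.getJointIndex / Entities.setLocalJointRotation – worth double-checking against your API). The model URL and bone names are placeholders:

```javascript
// Untested sketch: wiggle two named bones on a pre-rigged planar FBX model entity.
var flag = Entities.addEntity({
    type: "Model",
    modelURL: "http://example.com/flag-32x32.fbx",   // placeholder URL
    position: Vec3.sum(MyAvatar.position, { x: 0, y: 1.5, z: -2 })
});

var joint0, joint1;
Script.setTimeout(function () {   // give the model a moment to load
    joint0 = Entities.getJointIndex(flag, "Bone");       // placeholder bone names
    joint1 = Entities.getJointIndex(flag, "Bone.001");
}, 2000);

var t = 0;
Script.update.connect(function (dt) {
    t += dt;
    if (joint0 === undefined) { return; }
    // Drive each joint with a phase-shifted sine to fake wind.
    Entities.setLocalJointRotation(flag, joint0,
        Quat.fromPitchYawRollDegrees(0, 0, 20 * Math.sin(3 * t)));
    Entities.setLocalJointRotation(flag, joint1,
        Quat.fromPitchYawRollDegrees(0, 0, 30 * Math.sin(3 * t + 1.2)));
});

Script.scriptEnding.connect(function () { Entities.deleteEntity(flag); });
```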

As a quick experiment I tried a 32x32 unit (1024 face) planar FBX model with two arbitrary bones – and from within Interface applied a separate FBX animation sequence to it:

For casual cloth simulations like flags I think pre-baked animations could work, and in theory that can even be done from Interface (or perhaps from AC scripting or a Node.js sister server).

The internal memory structures for animation curves seem pretty efficient – so I don’t see how having tens of thousands of frames would present a problem. Maybe you could use that like a spool of animations – choosing sub-sequences via animation.firstFrame and animation.lastFrame (and varying animation.fps in response to gust levels, etc.).
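Something along these lines, assuming a model entity whose animation.url already points at one long baked clip. The entity ID, frame ranges and gust logic below are made up for illustration:

```javascript
// Sketch of the "spool of animations" idea: one long pre-baked animation,
// with firstFrame/lastFrame picking out a sub-sequence and fps scaled by gust strength.
var flagEntityID = "{your-flag-entity-uuid}";        // hypothetical existing model entity

var CALM  = { firstFrame: 0,  lastFrame: 31  };      // made-up frame ranges
var GUSTY = { firstFrame: 32, lastFrame: 127 };

function setWind(entityID, gustLevel) {              // gustLevel in [0, 1]
    var range = gustLevel > 0.5 ? GUSTY : CALM;
    Entities.editEntity(entityID, {
        animation: {
            running: true,
            loop: true,
            firstFrame: range.firstFrame,
            lastFrame: range.lastFrame,
            fps: 15 + 30 * gustLevel                 // faster playback in stronger wind
        }
    });
}

// e.g. re-roll the wind every 10 seconds
Script.setInterval(function () {
    setWind(flagEntityID, Math.random());
}, 10000);
```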

As a reference point, the blue flag was a 32-frame animation with four curves – coming in at roughly 128 (96+32) floating point values (d|X/Y/Z Lcl Rotation for the first joint and d|Y Lcl Rotation for the second).


#4

Cool stuff. I hope I can find time to try this and the .js debug stuff, etc. I saw some other things there that I think might be doable in HiFi too.


#5

The only downside is that animations have to be frame based, not curve based, so one has to bake all the frames into the animation file, so it might use a bit more space.

But yes, that was pretty much what I was suggesting :slight_smile: