I was wondering if anyone thinks that cloth simulation for avatar clothing might be possible in HiFi. For the last year I have been doing some research on it (strange hobby, I know) and have learned that most 3D content programs (Maya, Blender, and Marvelous Designer are just a few) allow the creator to assign cloth physics to a mesh that uses the avatar as a collision object. I am not talking about environment physics interacting with the cloth (wind and such); just the motion of the avatar needs to affect the cloth mesh. I know this has been done in games and movies, BUT those environments are preconditioned and controlled. They are not like a virtual world where things change hourly and freely by other users, which creates more of a challenge for sure. I just feel there has to be a way to upload those properties with the mesh itself.
The short answer is yes, we will probably implement some cloth features on the avatar. Philip is currently experimenting with simulating hair.
The long answer:
At the moment your avatar's motion is computed entirely on your interface client, whether via animation, inverse kinematics following motion-detection hardware (such as a Hydra or a mo-cap suit), and/or ragdoll physics. The result of that simulation is streamed up to a server (specifically the “Avatar Mixer” (AM) to which you’re connected) and then relayed to everyone who can see you – each joint’s rotation is continuously streamed and updated. This spares your client from needing to do all of the detailed physics for each avatar.
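To make the streaming model concrete, here is a minimal sketch of the kind of per-joint payload an interface client might send up to the Avatar Mixer: one quaternion per joint, packed into a compact binary message. The function names and packet layout are illustrative assumptions, not High Fidelity's actual wire format.

```python
import struct

def pack_joint_rotations(rotations):
    """Pack a list of (x, y, z, w) joint quaternions into a byte payload.

    Hypothetical layout: a 2-byte joint count, then 16 bytes (four
    little-endian float32s) per joint.
    """
    payload = struct.pack("<H", len(rotations))
    for quat in rotations:
        payload += struct.pack("<4f", *quat)
    return payload

def unpack_joint_rotations(payload):
    """Inverse of pack_joint_rotations, as the mixer or a peer would do."""
    (count,) = struct.unpack_from("<H", payload, 0)
    offset = 2
    rotations = []
    for _ in range(count):
        rotations.append(struct.unpack_from("<4f", payload, offset))
        offset += 16
    return rotations
```

At 16 bytes per joint per update, a 50-joint skeleton costs well under a kilobyte per frame, which is why streaming joint state is tractable while streaming full cloth/hair vertex state (below) is not.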
The bandwidth and complexity required to stream cloth/hair state is probably higher than we want to support, so the cloth simulation would be done as a client-side-only effect – no streaming/relay through the servers. This means that your client would have to do the simulation for ALL of the avatars in view. That's not a problem for 10 avatars, but it becomes a scaling issue when that number increases to 100 or 1000.
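The scaling concern can be sketched with the usual core of a client-side cloth solver: a Verlet integration step over every cloth particle, run once per visible avatar per frame. This is a generic mass-spring-style sketch (without constraint solving or collision), not High Fidelity's implementation, and the particle and iteration counts in the cost model are illustrative assumptions.

```python
def verlet_step(positions, prev_positions, gravity=-9.8, dt=1/60):
    """Advance each cloth particle one frame of Verlet integration.

    new = pos + (pos - prev) + acceleration * dt^2
    Returns (new_positions, positions) so the caller can feed the pair
    back in as (positions, prev_positions) next frame.
    """
    new_positions = []
    for (x, y, z), (px, py, pz) in zip(positions, prev_positions):
        nx = x + (x - px)
        ny = y + (y - py) + gravity * dt * dt
        nz = z + (z - pz)
        new_positions.append((nx, ny, nz))
    return new_positions, positions

def frame_cost(num_avatars, particles_per_avatar, solver_iterations=4):
    """Rough per-frame work estimate: linear in the number of avatars."""
    return num_avatars * particles_per_avatar * solver_iterations
```

With, say, 500 cloth particles per avatar, the per-frame work for 100 avatars is 100x that of a single avatar – there is no mixer in the loop to amortize it, which is exactly the scaling issue described above.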