Hello to all
I would like to thank all of you for all the information available here, which has helped me get my noob head around what's new in the world of rigging avatars.
I come from the ZX Spectrum generation; much like @Judas, I spent my youth 'coding' stuff like Breakout in BASIC from the backs of magazines. For many years I was out of touch with computers, but I've always been an artist/modeller.
It was SL which blew my imagination wide open, and like many I wanted to build and create things, first with prims, then with mesh… I particularly wanted to make moving characters.
Eventually, after growing some grey hairs and learning a thing or two about modelling packages, I managed this a couple of years ago [please don't laugh]:
The idea was that if I could make custom mesh characters, I could make a machinima animation which would escape the normal avi type altogether, like a 3D cartoon using SL for the mechanics to hang my creations on, saving me having to learn rigging/animation from the bottom up… Just trying to model was enough for now!
I gave up after making Acid Cat …
I wanted his jaw to move, and his eyes to turn just like the standard avi could…
Although I played with the idea of moving the jaw with a script purchased from the Marketplace, it never quite responded to sounds via mic properly… so my animation plans began to come apart, and perhaps Acid Cat and Mauski would never frolic with mad moving eyes and mic-responsive jaws. This gave the cat the blues, as you can see…
So here I am looking at hifi and wondering if things might be different… I'm prepared to learn how to make my own armatures; I would have done so long ago if I'd thought SL would have worked with them, but it seemed only the standard avi skeleton could be used to parent mesh to, and it was flawed.
I know from what I have read that currently the Mixamo avis are 'aligned' in terms of how the bones hang in 3D space in hifi [forgive my lack of full understanding of the terms]. I read that one can drop a Mixamo avi, via Fuse, into hifi more easily because the orientation of the bones in either realm is the same; to me this is the crucial part.
My musings have brought me to a point where I'm wondering: would any properly designed bone system, built in Blender or whatever, be compatible in hifi if certain positions/orientations of joints were adhered to?
For example, say I build [in Blender] a typical human armature of superhero proportions to suit my base mesh, careful to make sure the joints are talking the same language as the Mixamo ones in hifi. Would it move fine in-world? Would the proportions 'snap to suit' the Mixamo-type proportions? If so, could sliders be used to return to superhuman proportions?
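For what it's worth, the 'talking the same language' part can be made concrete on the naming side: Mixamo rigs use a fixed bone-naming convention (mixamorig:Hips, mixamorig:Spine, and so on), so one approach is to rename a custom rig's bones to match before export. A minimal sketch in plain Python (the custom bone names on the left are made up for illustration; only the mixamorig targets follow the real convention, and this says nothing about rest orientations, which also have to match):

```python
# Map hypothetical custom bone names to the standard Mixamo naming
# convention that Mixamo-centric pipelines expect.
MIXAMO_MAP = {
    "pelvis":      "mixamorig:Hips",
    "spine_lower": "mixamorig:Spine",
    "spine_upper": "mixamorig:Spine1",
    "neck":        "mixamorig:Neck",
    "head":        "mixamorig:Head",
    "upperarm.L":  "mixamorig:LeftArm",
    "forearm.L":   "mixamorig:LeftForeArm",
    "hand.L":      "mixamorig:LeftHand",
}

def rename_bones(bone_names):
    """Return the bone list renamed to the Mixamo convention.

    Raises KeyError for any bone with no mapping, so missing
    joints are caught before export rather than in-world.
    """
    missing = [b for b in bone_names if b not in MIXAMO_MAP]
    if missing:
        raise KeyError(f"No Mixamo mapping for bones: {missing}")
    return [MIXAMO_MAP[b] for b in bone_names]
```

In Blender the same idea would apply to the armature's bone names; the point is just that consistent names (and rest orientations) are what let an engine retarget animations onto a custom rig.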
It seems so far that everyone is looking at Mixamo… or if not, their avatars are moved via Mixamo skeleton rules.
I have made a kind of superhuman mesh WIP I'd like to bring into hifi.
It's far too dense in polys at the moment; I'll more than likely have to re-topologise. What would be the right target range for density on a human? Is 10K acceptable these days?
Still much work is needed on facial forms, fingers and toes… I did work up some more refined head/facial features on a separate duplicate of this guy.
Presumably all I need to do now is follow a Blender tutorial or three… [there are some great [tough] ones on these forums already, I know]… and thus make the bones/armature properly, including facial bones.
I know… I’m going to a new hard place!!
But once rigged in Blender, can my figure be 'driven' by hifi fairly easily? How can I make sure hifi understands 'Them Bones'?
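One way to gain a little confidence before uploading is to dump the rig's parent/child structure and compare it against a reference hierarchy. A sketch of the idea, assuming a simplified reference skeleton (this is an illustrative subset I've made up; a real humanoid rig has many more joints):

```python
# Reference {bone: parent} map for a simplified humanoid skeleton.
# Illustrative subset only; the root bone has parent None.
REFERENCE_PARENTS = {
    "Hips": None,
    "Spine": "Hips",
    "Neck": "Spine",
    "Head": "Neck",
    "LeftUpLeg": "Hips",
    "LeftLeg": "LeftUpLeg",
}

def hierarchy_mismatches(rig_parents):
    """Compare a rig's {bone: parent} map against the reference.

    Returns a list of human-readable problems; an empty list
    means the hierarchy matches the reference.
    """
    problems = []
    for bone, parent in REFERENCE_PARENTS.items():
        if bone not in rig_parents:
            problems.append(f"missing bone: {bone}")
        elif rig_parents[bone] != parent:
            problems.append(
                f"{bone} parented to {rig_parents[bone]!r}, "
                f"expected {parent!r}")
    return problems
```

The same check could be fed from Blender by walking the armature's bones and recording each bone's parent name; mismatched parenting is a common reason a rig animates strangely after import.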
Like Plato once thought with his 'ideals', is it not possible to have the underlying form of the 'ideal avatar', in terms of joint positions and rotations for all the bones, agreed upon for creators to work with? Then we would have some common understanding, like an 'Adam and Eve' of bone/joint positions and rotations, which could be altered to some degree, or morphed to provide variations in human type [ectomorphic vs endomorphic body types, elves or trolls].
Hypothetically, I make my custom superhero mesh to suit 'Adam's' ideal bone proportions and drop it on, and it works! If his proportions are adjusted to be slightly more like a normal human in the process, I can fix that with sliders. This almost worked with SL, but I could never get faces responding to the mic. Also, the SL skeleton was fixed in space as a T-pose, legs together, which made it more awkward to fit/skin/weight a nice human to it; an A-pose seems more intuitive.
I guess my idea is that if there's an agreed 'ideal' hifi armature and bone system, with easily customizable/movable joint locations to account for variations [like satyrs and demons], then we are all free to dress that armature with whatever we like… It would be possible for a Pixar-style human or a realistic human mesh to be driven by the same rules in hifi, even though their eyes aren't in exactly the same location due to style. I could be way wrong here, but I'm new to all this.
I hope you are all kind enough to accommodate my rather rambling post, which essentially is backing up the idea that if we have an agreed default humanoid/bipedal armature system, with the ability to customize [for example the size of / distance between eyes], we are free to start bringing in our meshes with some certainty that they will move in hifi. For now I'm still overwhelmed by the number of programs/options from what I have read.
I'll post as I learn more about these things. It would be great to establish a clear route from mesh creation to moving and working in hifi for those of only intermediate technical knowledge. I can make a human form and make it well, and I can re-topologise to make it efficient, but what is the next best step in application and workflow from there to get it moving?
Lastly, I can't find much in the way of tutorials for hifi/Interface on YouTube; where can I see a bunch of getting-started in-world/building-related tuts?
Thanks for taking the time to read; apologies if it's a bit waffly and amateurish.