HiFi + Blender -- quaternion RTC experiment


#1

On the left is HiFi Interface and on the right is Blender – the two apps kept in sync over an improvised digital tether.


#2

That’s bloody clever, tell us more


#3

Very cool stuff. Blender’s quaternions seem to transfer over just fine to High Fidelity, even if the axes are a tad different. Although I do see some issues with the last armature going out of sync when animating.

Are you rendering the armatures via the UI onto the scene?


#4

Fantastic!!! In another thread you wrote that you use a local file of the model and update it with a script. Is this done the same way?


#5

That’s excellent, thanks for sharing. I can see everybody’s minds exploding right now :slight_smile:


#6

Here are a few technical details for the curious. :wink:

On the Blender side, custom bone shapes were used mainly to help with visual debugging. To produce the data feed an embedded web server was grafted onto Blender, exposing raw armature and viewport information as a JSON web service.
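For anyone curious what "grafting a web server onto Blender" could look like, here is a minimal sketch of that idea: a background HTTP thread serving pose data as JSON. Inside Blender the snapshot would come from `bpy` (e.g. iterating `pose.bones` and reading `rotation_quaternion`); the stub `snapshot_pose()` below stands in for that so the sketch is self-contained, and all names here are illustrative rather than the actual code used.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def snapshot_pose():
    # Stub; inside Blender this would iterate bpy pose bones instead.
    return {"bones": [{"name": "Bone",
                       "rotation": [1.0, 0.0, 0.0, 0.0],   # (w, x, y, z)
                       "position": [0.0, 0.0, 0.0]}]}

class PoseHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the current pose snapshot as a JSON document.
        body = json.dumps(snapshot_pose()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the host app's console quiet

def start_server(port=0):
    # port=0 lets the OS pick a free port; run the loop on a daemon
    # thread so it doesn't block the host application.
    server = HTTPServer(("127.0.0.1", port), PoseHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Running the server loop on a daemon thread matters in Blender, since blocking the main thread would freeze the UI.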

On the HiFi side, what looks like wireframe models for bones and axis indicators are actually script-rezzed Line entities (rezzed once and then manipulated as the data comes in). And the data is simply pulled using continuous XHR requests (which might not be very efficient, and is one reason things are sorta laggy, but it certainly worked better than expected).
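The pull loop itself is simple. The real client is HiFi JavaScript issuing repeated XMLHttpRequest calls; this Python sketch just shows the pattern (fetch the JSON feed, then hand each bone's fresh transform to an update callback that would move the already-rezzed Line entities). The URL and callback are illustrative, not the actual script.

```python
import json
import urllib.request

def poll_once(url, update_bone):
    """One polling step: fetch the JSON pose feed and push each bone's
    new position/rotation into an already-rezzed entity via update_bone."""
    with urllib.request.urlopen(url) as resp:
        feed = json.loads(resp.read())
    for bone in feed["bones"]:
        update_bone(bone["name"], bone["position"], bone["rotation"])
    return len(feed["bones"])
```

Calling this in a tight loop reproduces the continuous-XHR behaviour described above, including its inefficiency: every frame is a full round trip, which is where some of the lag comes from.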

HiFi doesn’t yet support parent-child entities for position and rotation, so it actually took a lot of trial and error to find the right conversions to get from raw Blender quaternions into something HiFi comprehends. Basically the raw pose data had to be “upconverted” into an armature again (because each bone’s position depends on all ancestor orientations), and then everything had to be decomposed back into standalone (but now locally-absolute) data for HiFi rendering.
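The "upconvert" step boils down to walking each bone's parent chain: a bone's absolute orientation is the quaternion product of its ancestors' rotations with its own, and its absolute position is the parent's position plus the bone's rest offset rotated by the parent's absolute orientation. A sketch of that composition (the names `bones` and `solve_bone`, and the `(parent, offset, local_q)` record layout, are illustrative, not Blender or HiFi API):

```python
def q_mul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def q_rotate(q, v):
    """Rotate vector v by unit quaternion q: q * (0, v) * conj(q)."""
    w, x, y, z = q
    qv = q_mul(q, (0.0,) + tuple(v))
    return q_mul(qv, (w, -x, -y, -z))[1:]

def solve_bone(bones, name, cache):
    """Absolute transform of a bone = parent's absolute transform applied
    to the bone's local offset and local rotation (recursing to the root).

    bones maps name -> (parent_name_or_None, rest_offset, local_quaternion).
    Returns (position, quaternion); cache memoizes shared ancestors."""
    if name in cache:
        return cache[name]
    parent, offset, local_q = bones[name]
    if parent is None:
        world = (tuple(offset), local_q)
    else:
        p_pos, p_q = solve_bone(bones, parent, cache)
        pos = tuple(p + d for p, d in zip(p_pos, q_rotate(p_q, offset)))
        world = (pos, q_mul(p_q, local_q))
    cache[name] = world
    return world
```

For example, a root bone rotated 90° about Z with a child offset one unit along Y puts the child at roughly (-1, 0, 0) in absolute terms, which is exactly the "locally-absolute" data a standalone Line entity needs.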

PS: this example was run side-by-side on a single computer… but to me the real fun begins when this kind of tech is used in the future across split construction teams – some working in-world (where the perspective is better) and others working off-world (where the tools are currently more mature, efficient and accessible).