I’m looking for some advice from people who know the API better than I do on the best design for this task.
I’m using High Fidelity to run a research study in virtual reality. Three people will share a virtual space containing a table and three chairs, and each physical room will hold the same table and three chairs. As the participants move around, I want to keep the physical-to-virtual correspondence fixed, so that they don’t run into the real table and chairs unless they mean to sit down.
I’ve been digging through the API and found getSensorToWorldMatrix(), which looks like exactly what I need. If someone teleports, rotates, or otherwise gets moved, I can detect it (because this matrix changes) and compute the transformation that puts them back where the physical and virtual worlds align.
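To make sure I have the math right, here’s the correction I have in mind as a self-contained sketch (plain JavaScript, no High Fidelity API calls). The idea: given the sensor-to-world matrix captured at calibration time and the current one, compute the rigid transform that undoes whatever moved the avatar. The matrix names and calling convention are my own; in an actual script the 4×4 matrices would come from MyAvatar.getSensorToWorldMatrix().

```javascript
// 4x4 row-major matrix multiply.
function matMul(a, b) {
  const out = [[0,0,0,0],[0,0,0,0],[0,0,0,0],[0,0,0,0]];
  for (let i = 0; i < 4; i++)
    for (let j = 0; j < 4; j++)
      for (let k = 0; k < 4; k++)
        out[i][j] += a[i][k] * b[k][j];
  return out;
}

// Inverse of a rigid transform (rotation + translation only):
// the inverse rotation is the transpose, and the inverse
// translation is -(R transpose) * t.
function rigidInverse(m) {
  const r = [[m[0][0], m[1][0], m[2][0]],
             [m[0][1], m[1][1], m[2][1]],
             [m[0][2], m[1][2], m[2][2]]];
  const t = [m[0][3], m[1][3], m[2][3]];
  const ti = r.map(row => -(row[0]*t[0] + row[1]*t[1] + row[2]*t[2]));
  return [
    [r[0][0], r[0][1], r[0][2], ti[0]],
    [r[1][0], r[1][1], r[1][2], ti[1]],
    [r[2][0], r[2][1], r[2][2], ti[2]],
    [0, 0, 0, 1],
  ];
}

// correction * current == reference, so applying `correction` to
// the avatar's frame undoes the teleport/rotation that changed
// the sensor-to-world matrix.
function correctionMatrix(reference, current) {
  return matMul(reference, rigidInverse(current));
}
```

If High Fidelity exposes Mat4 helpers to scripts, the same computation could presumably use those instead of hand-rolled multiply/inverse.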
I assumed it would be easy to get and set the position, rotation, and scale of the user in HF, but that’s where I’m stuck. I found a function that returns the head position (appropriately named getHeadPosition) and a function that moves the avatar (goToLocation), but goToLocation appears intended for getting around a world rather than fine adjustment, because the position you specify is where the avatar’s feet land, not the head. I could query the position of the feet, or the height of the avatar, and do some math in between, but this problem smells like I’m not understanding the API correctly. Which API functions should I be using for this task?
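For what it’s worth, the “math in between” I’d do if no better API exists is just an offset subtraction: since the setter anchors the feet, subtract the current head-above-feet offset from the desired head position. A minimal sketch (plain JavaScript, not High Fidelity API; in a real script I assume headPos would come from MyAvatar.getHeadPosition() and the feet position from the avatar’s base position):

```javascript
// Component-wise vector subtraction for {x, y, z} objects.
function sub(a, b) {
  return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z };
}

// Where to send the feet so the head lands on desiredHeadPos:
// target = desiredHead - (currentHead - currentFeet).
function feetTargetForHead(desiredHeadPos, currentHeadPos, currentFeetPos) {
  const headOffset = sub(currentHeadPos, currentFeetPos);
  return sub(desiredHeadPos, headOffset);
}
```

This assumes the head-above-feet offset doesn’t change between measuring it and applying the move, which is part of why it feels fragile to me.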
In short, I’m looking for the right way to prevent users from rotating or translating in the virtual world without doing the same in the real world. I suspect the answer is to monitor the matrix returned by getSensorToWorldMatrix() and correct against it, but I’d welcome any suggestion that solves the problem.
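Concretely, the monitoring I have in mind is a per-frame comparison against the matrix captured at calibration: if any entry drifts beyond a tolerance, the avatar was teleported or rotated and needs to be snapped back. A sketch of just the drift check (plain JavaScript; my assumption is that in High Fidelity this would run on a per-frame update callback such as Script.update):

```javascript
// Returns true if any entry of the current 4x4 sensor-to-world
// matrix differs from the calibration reference by more than
// epsilon, meaning the avatar was moved in the virtual world.
function matrixDrifted(reference, current, epsilon) {
  for (let i = 0; i < 4; i++)
    for (let j = 0; j < 4; j++)
      if (Math.abs(reference[i][j] - current[i][j]) > epsilon) return true;
  return false;
}
```

A small epsilon matters here because head tracking jitters slightly every frame even when the user hasn’t been teleported.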