Development Roadmap, H2 2016


Here are some big categories covering what we are working toward now and through the end of the year; I'm going to post something similar on the blog later today:


We’ve made a conscious choice to go to beta early, with the goal of getting people building quickly and the hope that the presence of early creators in the system will help us better prioritize our work. However, it leaves us with a number of known bugs and partially complete features. So a good portion - probably about a third of our development time - will be spent fixing bugs and finishing features. We are actively accepting bug reports from the forums, and we’ll also be monitoring issues submitted through GitHub.


With the rise and success of the HTC Vive and its great hand controllers, we need to prioritize making our UI work well on the Vive, particularly when both hands are holding controllers and the user is walking around in room-scale VR.

Right now there are many actions in HiFi which require a mouse or keyboard. We’d like to make all of these actions available using only the hand controllers. That means making the HUD/UI easily controlled with them, providing a way to type without a keyboard when needed, and redesigning the UX to put key capabilities at the ‘fingertips’ of a person with only a controller - things like navigation or importing content.


We need to make it easy to find and share content made by others. We will build submission mechanisms for sharing content through our own marketplace, improve import, export, and backup tools, and work on a secure proof of purchase mechanism that can be used both to advertise and authenticate content, as well as get us ready for a paid marketplace.
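To make the proof-of-purchase idea concrete, here is a minimal sketch. Everything here is an illustrative assumption, not the planned design: the field names are invented, and a real marketplace would almost certainly use asymmetric signatures so domains can verify receipts without holding any secret. The shape of the idea is a receipt the marketplace signs and a domain can later authenticate:

```python
import hashlib
import hmac
import json

# Illustrative only: a real marketplace would sign with a private key so
# that anyone can verify the receipt without holding the secret.
MARKETPLACE_KEY = b"demo-secret"

def sign_receipt(buyer, item_id):
    """Marketplace side: issue a tamper-evident purchase receipt."""
    payload = json.dumps({"buyer": buyer, "item": item_id}, sort_keys=True)
    sig = hmac.new(MARKETPLACE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_receipt(receipt):
    """Domain side: authenticate content by checking the signature."""
    expected = hmac.new(MARKETPLACE_KEY, receipt["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["sig"])
```

The same signed receipt can serve both roles mentioned above: advertising (anyone can display it) and authentication (a domain can check it before rendering the content).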


We will work on creating solid controls allowing server operators to regulate their spaces: At the simplest level, we need tools that allow a person running a domain to decide who to allow in, without necessarily needing to know (or caring to know) their username. Formation of groups, and the ability to delegate capabilities will likely be part of this work.
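A hedged sketch of how groups and delegation might compose with the "admit without knowing usernames" requirement (all class and method names here are hypothetical, not an actual High Fidelity API):

```python
# Hypothetical sketch: operators admit whole groups rather than individual
# usernames, and can delegate the ability to admit to trusted accounts.
class Domain:
    def __init__(self, operator):
        self.operator = operator
        self.delegates = {operator}    # accounts allowed to admit groups
        self.allowed_groups = set()

    def delegate(self, actor, account):
        if actor != self.operator:
            raise PermissionError("only the operator may delegate")
        self.delegates.add(account)

    def admit_group(self, actor, group):
        if actor not in self.delegates:
            raise PermissionError("only delegates may admit groups")
        self.allowed_groups.add(group)

    def may_enter(self, user_groups):
        # No username check needed: membership in any admitted group suffices.
        return bool(self.allowed_groups & set(user_groups))
```

The point of the shape is that the operator never needs to see or care about an individual's username; they only reason about groups and about whom they trust to manage them.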


We need a first cut at making it easy to find and explore the different domains that are coming online. An event calendar of some kind will likely be part of this work, as well as screenshots or photospheres that can capture a particular vantage point for easy sharing.


Right now we are a web download, but we would like to have versions of High Fidelity available through Steam and the Oculus store.


I’ll meet up with anyone interested in Playa, near the stacks, at 3 on Friday to discuss this plan.

In-world meeting Friday 2:30 - 4 PM at Playa
Anti-Harassment in VR worlds

I see lots of important tools. But still missing is the important one that keeps you from doing bigger things.

Where are the in-world terrain tools?
All my projects keep getting delayed because there’s no in-world terrain!
And mesh terrain is really a bad option; besides, it’s not walkable.

Maybe I’ll look into terrain things myself, but it would possibly be terrible.


OK. We’ll think about that. Point taken. Very much agree we will need a terrain data type.


I’m not clear on what you mean here. A mesh based terrain could be given a collision hull that matches the mesh itself, so it should be ‘walkable’. Even if we had some sort of in-world editable / deformable terrain system, in the end for both rendering and collision functionality we would be generating a mesh of some kind.


I tried it, and need to try it again.
But the biggest obstacle is that I cannot modify a mesh terrain in-world to adjust it to the buildings. You can never plan a terrain ahead; terrain is something you create in the process of building the domain and filling it. Not to mention texturing.

Still thinking how to overcome the obstacles myself.


Interacting without a keyboard and mouse is essential; however, I’m yet to experience any solution that hasn’t been dire (think typing using chopsticks). The problem stems from the fact that coders make this stuff rather than designers. In my water, I don’t feel that replicating a keyboard in VR is a good solution. I would be more interested in exploring handwriting recognition…


@Philip, @Jherico, what he means by not workable is that there is no means to deform mesh. The quality of ‘terrain’ in those legacy grids is that there are in-world planar mesh deformation tools. We can generalize this to say we need a set of mesh deformation tools. And, in keeping with the HF minimalist design concepts, what we really need are core means and methods that let scripts obtain metrics of an entity and apply various kinds of deformations to elected sections of the mesh. With that core feature-set, others can develop some compelling in-world mesh editing tools.
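The "core means and methods" idea above can be made concrete with a minimal sketch (in Python, with entirely hypothetical names): a single primitive that selects part of a mesh and applies a displacement, on top of which in-world editing tools could be built:

```python
import math

# Hypothetical core primitive: select vertices, apply a displacement.
def deform(vertices, select, displace):
    """Return a new vertex list with only the selected vertices displaced."""
    return [displace(v) if select(v) else v for v in vertices]

# Example tool built on top of it: raise a smooth hill of the given height
# within radius r of (cx, cz) on a list of (x, y, z) vertices.
def raise_hill(vertices, cx, cz, r, height):
    def inside(v):
        x, _, z = v
        return math.hypot(x - cx, z - cz) < r
    def lift(v):
        x, y, z = v
        d = math.hypot(x - cx, z - cz)
        return (x, y + height * (1 - d / r), z)  # falls off linearly to the rim
    return deform(vertices, inside, lift)
```

With only `deform` exposed to scripts, flatten, smooth, and crater tools become small script-side functions, which is exactly the minimalist division of labor described above.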


On server security, keep in mind that a person may have many domains, or that they may hire a security firm to be their overall provider. So whatever means exists to allow/disallow someone from accessing one domain needs to apply to other domains; that is, whatever opaque token applies in one domain should also work when set in other domains.

Another important aspect to this mechanism is that it has to apply to the agent before they enter a domain; that is, it has to be done before the connection occurs. Whether it takes the form of a ban/access list built into the domain stack or something else, the determination of whether an avatar can be in a domain must absolutely happen before they are permitted to connect to it. Otherwise, a griefer can dispatch an attack in the few milliseconds that he is connected to the domain.
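To sketch the pre-connection requirement (class and field names are my own, not the domain stack's): the decision is made on the connect request itself, before any avatar or entity traffic is exchanged, so a banned agent never gets a foothold:

```python
# Illustrative sketch: the gatekeeper runs before the handshake completes.
class DomainGatekeeper:
    def __init__(self, banned=(), allowlist=None):
        self.banned = set(banned)
        # allowlist of None means "open to anyone not banned"
        self.allowlist = set(allowlist) if allowlist is not None else None

    def handle_connect_request(self, agent_token):
        if agent_token in self.banned:
            return "REJECT"            # dropped before any connection exists
        if self.allowlist is not None and agent_token not in self.allowlist:
            return "REJECT"
        return "ACCEPT"
```

The token checked here is deliberately opaque, matching the earlier point that the same token should be settable across many domains run by one operator or security provider.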


My own personal opinion, but I think it may actually be too early to try to address the ideas of authentication, authorization and access control. The basic protocols should support the idea of rejecting a packet / change because it’s not authorized (the equivalent of an HTTP 403 response), but anything beyond that may be overly constraining. Consider that HTTP contains a mechanism for authentication that is almost universally not used. Instead, individual sites and organizations tend to do authentication and identification using various different in-page mechanisms. I imagine that similarly organizations that create domains might want to do their own kind of auth.
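The "protocol only knows how to say no" idea can be sketched like this (all names are hypothetical): the packet layer defines a 403-style rejection, while the authorization predicate itself is supplied by domain-specific scripting:

```python
# Sketch: the protocol defines only the rejection; the auth logic is
# whatever the domain's own scripts decide to plug in.
FORBIDDEN = 403   # analogous to the HTTP status
OK = 200

def handle_edit(packet, is_authorized):
    """Apply an edit packet only if the domain's own auth logic approves it."""
    if not is_authorized(packet["sender"], packet["entity"]):
        return {"status": FORBIDDEN}
    # ... apply the change here (omitted) ...
    return {"status": OK}
```

This mirrors the HTTP analogy above: the status code is standardized, but how a site decides who is authorized is entirely its own business.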

If Amazon created a domain, you can be certain that the only kind of authentication they would want to do would be against their own user database, and would be implemented via scripting that triggered when you arrived to verify that you were either already authenticated, or to present some kind of auth user interface.

Of course since Hifi is inherently social this offers some interesting challenges. If my avatar meets your avatar on some hypothetical Amazon domain and then we meet again on some other domain, how do I verify that you’re the same person, and not someone else wearing the same avatar?
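One way to answer "same person, not just the same avatar?" without a shared provider is a keypair: the person's durable identity is their public key, and proving it means signing a fresh challenge from the domain. As a toy illustration only (not HiFi's design), here is a Lamport one-time signature, chosen because it needs nothing beyond a hash function; note each keypair can safely sign only once:

```python
import hashlib
import secrets

H = lambda data: hashlib.sha256(data).digest()

def keygen():
    # Private key: 256 pairs of random secrets; public key: their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def _bits(msg):
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    # Reveal one secret per bit of the message digest.
    return [sk[i][bit] for i, bit in enumerate(_bits(msg))]

def verify(pk, msg, sig):
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(_bits(msg)))
```

If the person I met on the Amazon domain gives me their public key, then on any other domain I can send a fresh challenge and check the signature: the same key verifying proves the same keyholder, regardless of what avatar they happen to be wearing.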


Also - we are going to try using the TriMesh primitive in Bullet to allow triangle-accurate surfaces for static objects, which should make it easy to import and use an arbitrary mesh to walk on (as opposed to generating a convex hull which for a terrain will be inaccurate by a great enough distance to be visually wrong).
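A tiny 2D cross-section shows why a convex hull is the wrong collision shape for terrain: the hull is the shape a rubber band stretched over the profile would take, so every valley gets bridged. Assuming a toy height profile of my own invention:

```python
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def upper_hull(points):
    """Upper convex envelope of (x, height) samples, sorted by x."""
    hull = []
    for p in points:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()
        hull.append(p)
    return hull

def hull_height(hull, x):
    """Height of the envelope at x, by linear interpolation."""
    for (x0, y0), (x1, y1) in zip(hull, hull[1:]):
        if x0 <= x <= x1:
            return y0 + (x - x0) / (x1 - x0) * (y1 - y0)

# Two hills with a valley between them: the hull reports height 2.0 at the
# valley floor (x = 2), where the true terrain height is 0.0.
profile = [(0, 0.0), (1, 2.0), (2, 0.0), (3, 2.0), (4, 0.0)]
```

An avatar standing in that valley would float two units above the ground, which is exactly the kind of visually wrong error that a triangle-accurate TriMesh avoids.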


To have a workable terrain datatype, we also need some form of view-dependent LOD (as with Second Life), because for the triangle/texel density at your feet to be sufficiently high, you need to rapidly decimate the triangles for areas farther away if you are to have a good horizon. We did this in SL with variable-compression heightfield terrain patches, but suffered from edge artifacts and the inability to fuse patches at greater distances into a single patch to achieve a longer draw distance. This time we must do better! :slight_smile:
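The distance-driven decimation described above can be sketched in a few lines; the constants and patch size here are illustrative assumptions, not SL's or HiFi's actual numbers:

```python
import math

BASE_DISTANCE = 32.0   # metres within which a patch renders at full detail
MAX_LEVEL = 5          # coarsest decimation level

def lod_level(distance):
    """Each doubling of distance beyond BASE_DISTANCE drops one level."""
    if distance <= BASE_DISTANCE:
        return 0
    return min(MAX_LEVEL, int(math.log2(distance / BASE_DISTANCE)))

def triangles_per_patch(level, full=2 * 64 * 64):
    # Halving resolution in each axis quarters the triangle count,
    # so a distant patch costs a tiny fraction of a nearby one.
    return full // (4 ** level)
```

The hard part this sketch deliberately omits is the seams: adjacent patches at different levels must be stitched (or fused into one coarser patch), which is precisely where the SL edge artifacts came from.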


@judas: Handwriting recognition still results in poor ‘typing’ speed, though. Not sure there can be a big win, although aesthetically I like the idea.


Truly! Typewriter tech developed as a need to speed up the process of handwritten documentation. In fact, the use of fingers in parallel processing mode was so fast and the latencies were so low that the machinery of the time could not keep up - they would jam. And so, the QWERTY layout was born to slow down the fingers.


Well, there will likely still be some people who would want the ability to type something while in full VR gear (i.e. when they’re not sitting down at an actual keyboard). I figure someday at least some VR suits/gloves will have good enough tactile feedback that you could simulate a keyboard suspended in mid-air in front of the user, one that “feels” like a real keyboard when they place their hands upon it and gives the right clicky-clacky tactile feedback when they press the virtual keys. Probably not tomorrow, or next month, or maybe even next year, but it will come, and it’s probably best to have at least a placeholder in the system to support it (a place for the system calls and such to be bolted on later) once the details eventually get worked out.

As for user-account stuff… I’m wondering if the user credentials will be tied to one provider or another (i.e. whatever web portal they originally created the account on), or if the user-account stuff will be portal-agnostic. Ideally the user-account credentials need to exist independently of whatever place they were originally created on, such that if that particular web-portal outfit goes out of business, you don’t lose your VR-world identity. They also need to exist in such a way that no one in the universe other than the person who originally created the user account can revoke them, and maybe not even then.


Yes, I think that was discussed in the last HF meetup. BTW, we really need speech to text transcripts of those meetings.

What was said is that a likely approach would be OAuth-based tokens with scopes. That way a script/domain would ask for and cache the token, and be able to apply, for example, an identity scope (literally, are you ‘you’ and nothing more). But the act of getting that token would be permission-based; that is, the avatar would accept or decline that request (it happens but once, and CAN be revoked).

That then would be the basis for tying things like inventory to an avatar across all domains, or it can be the basis for letting an avatar be in a domain.
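A sketch of the consent-gated, scoped, revocable token flow described above (the class and method names are my own, not an actual HiFi or OAuth API):

```python
import secrets

class TokenService:
    def __init__(self):
        self._tokens = {}   # token -> (avatar, granted scopes)

    def request(self, avatar, scopes, user_consents):
        """A domain asks for a token; the avatar may accept or decline."""
        if not user_consents:
            return None
        token = secrets.token_hex(16)
        self._tokens[token] = (avatar, frozenset(scopes))
        return token

    def check(self, token, scope):
        """e.g. check(token, 'identity'): are you 'you', and nothing more."""
        entry = self._tokens.get(token)
        return entry is not None and scope in entry[1]

    def revoke(self, token):
        # Revocation invalidates whatever the domain has cached.
        self._tokens.pop(token, None)
```

Because the token itself is opaque, the same cached token could anchor inventory or domain access across many domains, as suggested above, without the domain ever learning more than the scopes it was granted.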


“And so, the QWERTY layout was born to slow down the fingers.”

That is actually a bit of a myth. The layout was chosen to keep commonly-used-together letters mechanically separated to avoid jams; the slowing down was minor and a side effect. Independent (truly independent) tests didn’t find much difference between QWERTY and Dvorak. Dvorak is a little faster, but not by enough to really make it worth a change. OTOH, the top row of a QWERTY keyboard contains the word TYPEWRITER rearranged (so salespeople could dash off the word quickly without learning to type properly), so I don’t think anyone can claim QWERTY was designed with the most optimal layout for actual typing in mind! It just appears that, within certain limits, layout isn’t as important as a lot of layout junkies like to kid themselves.

(I say that as a Colemak layout user who dislikes QWERTY, though for RSI-avoidance reasons rather than any (in reality) marginal speed differences.)


And when I say Colemak user, I mean I have remapped my Truly Ergonomic keyboard (raked keys are the real wrist-twisting throwback to mechanical typewriters!) to Colemak for programming use, mainly because all the brackets and symbols are easier to type on a matrix keyboard than on my Bat, which I far prefer for general typing (I’m using one to type this), as it leaves my other hand free on the trackball (or, ultimately, some sort of 6DoF controller).


No, the Bat isn’t particularly hard to learn. I got proficient in a couple of weeks, which - from memory - is quicker than I learned my way around a QWERTY hunt-and-peck style.

Also, despite the claims from some quarters, chorded keyboards are not noticeably faster than touch typing on a matrix; it is the hands-free aspect that I like, though.

I am aware of all the ‘one handed typing’ jokes. I make most of them regularly myself! :stuck_out_tongue:

Another non-raked keyboard I considered, which is probably more portable if you need that, is the TypeMatrix. It looks nice, but ultimately I found the Truly Ergonomic’s arched-and-splayed layout more comfortable for desk use (I printed out mock-ups and sat in front of them for quite a while before making my purchase decision).

(I am not affiliated with any of these. I just take my input devices a bit too seriously!)


But typing on a virtual keyboard is a horrible experience compared to using a real keyboard. It’s something that looks cool and impressive until you try to use it, lol. It would be a better experience to have the Rift set up like one of those flip-up welding masks and the keyboard glued to your belly :grin:. We need to understand the visual language of VR and move away from writing, because the solutions are all so complex and contrived they can’t be right.


Typing on a virtual keyboard is horrid for the same reason as typing on a flat surface: bad haptics.


The truth of the keyboard thing is that if you can touch type, you’re good to go. If anyone has the passion to type, then they should learn to play the instrument. The Rift and the Vive both have mics; typing is only needed for the location bar, and Philip suggested ideas for navigation. If you want to type, sit at your desk. I’m sure it’s only a matter of time until someone does a video on how to hook up Google Talk and Google Translate to do live voice-to-text in any imaginable language. And then no one will use it.