Alpha meetup - Jan 29th @ 2pm PST


Hi All,

Let’s meet at the usual spot (hifi://sandbox/winter). There isn’t much to update on since Monday; we have been bug fixing. I know some of you want to talk about DDE, and I am happy to do that.



I’ll just list the questions I’d ask, so you get a heads-up and can get the right people to answer. Some of these I asked in the previous thread on the subject, so let me boil it down to the bones:

  1. Is this a temporary measure? As in: is the licensing not being pursued at all, just on hold for the moment, or is it uncertain? (I guess this is due to the face data that is bundled with DDE.exe, rather than just the DDE algorithm.)

  2. Why wasn’t the licensing ever brought up while it was being implemented? Did something change in HiFi’s direction? I thought this was one of the most media-publicized features to date.

  3. Hooks? Right now the face-tracking systems are built for Faceshift, with DDE support built more as an adapter to the Faceshift interface. Will you be opening up the API and documenting it?

  4. This is probably the Nth time I’ve mentioned this, but now that face tracking is removed by default, there needs to be a whole other way to express an avatar. Is there one, and what would be the timeline for improved shapekey/blendshape support for Entities/MyAvatar via script? Avatars look lifeless without life in their faces, and I’d say facial expression is the most important part of human communication, especially with HMDs.

  5. What is the new MVP for HiFi? How do you plan to differentiate from the other VRWPs for the end user?

Regarding #4: it would also be good to have control of expressions via script, so that HMD users can express themselves too. This would also help with, for example, doing body shapes, modified faces, or making entities that are animated in an interesting or organic way. Additionally, we need to gain control of limb scaling as well.
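To illustrate what script-driven expression control could look like, here is a minimal sketch in plain JavaScript. To be clear, `setBlendshape` and the "Smile" shape name are hypothetical placeholders I made up for illustration, not an existing HiFi API; the sketch only shows the timing/coefficient side of the idea.

```javascript
// Minimal sketch of script-driven blendshape animation.
// NOTE: setBlendshape and the "Smile" shape name are hypothetical
// placeholders, not an existing High Fidelity API.

// Smoothly pulse a blendshape coefficient between 0 and 1 over periodMs.
function expressionCoefficient(timeMs, periodMs) {
    var phase = (timeMs % periodMs) / periodMs;       // 0..1
    return 0.5 - 0.5 * Math.cos(2 * Math.PI * phase); // cosine ease in/out
}

// Drive any blendshape setter with the animated coefficient.
function animateExpression(setBlendshape, shapeName, timeMs, periodMs) {
    var value = expressionCoefficient(timeMs, periodMs);
    setBlendshape(shapeName, value);
    return value;
}

// Example: record what a setter would receive at the pulse's midpoint.
var applied = {};
animateExpression(function (name, v) { applied[name] = v; }, "Smile", 1000, 2000);
// At t = 1000 ms with a 2000 ms period, the coefficient peaks at 1.0.
```

Hooking something like this into an update loop would let HMD users trigger expressions from a script, which is the kind of control being asked for above.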

Also, I prepared some pitchforks. Not sure why they are not appearing on the market though; I’ll look into it when I get home.


Just to give an example of why faces are so important, I thought I would get everybody going for the meeting, and have a bit of fun as well:


Tell everyone hi. I’m still flying away, so I can’t even hear. :frowning:


My Internet connection is still not up to the task of so many avatars … I’ll watch the video … and hopefully have a better connection for next week!


Well, I’m there, but all I have is stars and silence. oO The stuff in-world hasn’t loaded, and I hear no voice at all. :expressionless:


After all the cache clearing on sandbox, I teleported back to my ATP domain and some things did not load (until a relog, I think).


Well, after more than an hour, objects finally loaded… and I found I was in the wrong section of Sandbox. No wonder I couldn’t hear voice. :confused: I could’a sworn I’d left myself parked at the Winter Is Coming section of Sandbox last time I logged out. oO


@chris Is there a video of the meetup, please?


Any thoughts on how to improve the venue and structure of meetups, for better, more meeting-like communication and discourse flow?

Having a bunch of avis stand (or whatever) around in a big open space doesn’t seem optimal. Has anyone kept track of attendance and participant behavior? It might be useful to know how many people attend, how long they stay, how much they participate actively versus just listening, etc. Could the event be given more structure suited to its purpose, for efficiency and to encourage participation? How can discussion be better supported? How do we know who’s talking, how do we ‘get the floor’? The usual dynamics of meetings, in other words.

Also, is there an archive of recordings? Can they be edited and annotated for efficient later viewing?


I stayed away for a couple of weeks because my frustration with the HF alpha rose too high. Now, looking back at what has transpired, I see DDE is gone (it was the next-generation example I would show to people asking about HF), and several more bugs have appeared.

I understand there is some kind of HMD immersive-VR development death march going on for a March semi-MVP deliverable, and so the swath of destruction through previously working features will widen until then. OK, focus is good, and we are sidelined; I can live with that, up to a point.

Is anyone at HF formulating a list of what will get fixed, and when? If so, would you please post that list? Now, with sound broken, which used to work flawlessly, going to meetups is rather pointless. There is no decent audio chat, and text chat has been staked yet again.

When will audio chat, and the very basics of cached entity rendering support get fixed?

Oh, and where’s the vid for the Jan 29th meetup?

Now going to play with Hololens development for a while.


Here is my summary of what I remember:

  1. They announced that face tracking is a good feature to have, but they won’t be looking into the licensing until much later. It sounds like it is being left to third-party devs.
  2. DDE status is uncertain due to the licensing; there was no precise answer on which part of it, as @ctrlaltdavid couldn’t attend.
  3. They are going to continue with aggressive continuous integration to hunt down bugs, along with the QA discussed last week, which should be ramping up.
  4. Focus is on getting HiFi to HMD Beta MVP state prior to the first shipments of the Rift.
  5. Usability, UI cleanup, and some other minor stuff I can’t remember anymore.
  6. Script-based shapekey control isn’t currently on the horizon.
  7. Some talk about PBR textures via algorithmic (Substance Designer/Painter) tools.


They fixed that bug in the Sunday build. You should be rezzing where you last left off, regardless of whether you crashed or not.


I know nothing about the DDE licensing. The DDE code is still in the codebase if HiFi does sort out the licensing at some stage.


Thanks @Menithal. It looks like even the MVP requirements are not available. I’ll check again in another two weeks, or sooner if there is breaking news.


Happy Monday!

Here is the video:

(apologies for the note from my wife that pops up during the video)

Here are the older meetup videos:


@chris, at time index 1023 in the meetup, you mentioned that there are few virtual worlds that can deliver smooth frame rates with VR gear (like the DK2) with many avatars present. What level of graphics capability does the computer you used to make the video have? Is the minimum requirement for that smooth experience an NVIDIA 970 or 980? Also, what was the network bandwidth to/from your computer at the time of the meetup?

I need to know this (I am sure others want to know too) in order to determine the minimum system requirements (MSR) for the MVP.


Hi @Balpien_Hammere, for the Rift this is the recommended spec:

The laptop I was working on has a GTX 980M, which is actually below the recommended GTX 970, but not by much. It may have more trouble with the CV1 and getting to 90 FPS.

I am not sure what my bandwidth was at the time of this meetup, I could just pull up that stat next time.



@Nathan_Adored - sandbox/winter will take you right to the meeting spot. :slight_smile:


Yes, I’m very familiar with the Oculus spec, but there is more to smoothness than raw graphics capability: the client’s render engine, all the bandwidth-minimization tricks like occlusion and vertex reduction (LOD), and, really important, the minimum bandwidth between client and server. If that bandwidth ends up being 20 Mb/s, or even 10 Mb/s continuous, the comfortable audience size might be quite small. But yes, definitely check the up/down bandwidth when many people are present, especially people using HMDs and precision controllers.
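As a rough back-of-the-envelope check of that capacity argument, here is a small sketch. The ~0.5 Mb/s per remote avatar (voice plus avatar state) and the 2 Mb/s reserved for entity/asset traffic are my own placeholder assumptions, not measured HiFi numbers:

```javascript
// Rough capacity estimate: how many remote avatars fit in a bandwidth budget.
// ASSUMPTION: perAvatarMbps (~0.5 Mb/s for voice + avatar state) and the
// reserved headroom are placeholder figures, not measured HiFi numbers.
function maxAvatars(linkMbps, reservedMbps, perAvatarMbps) {
    var usable = linkMbps - reservedMbps; // headroom for entities, assets, etc.
    if (usable <= 0) return 0;
    return Math.floor(usable / perAvatarMbps);
}

// A 10 Mb/s link with 2 Mb/s reserved for non-avatar traffic:
var capacity = maxAvatars(10, 2, 0.5); // 16 avatars under these assumptions
```

Under those assumptions a 10 Mb/s link tops out at a fairly small audience, which is why the real per-avatar figure from an actual meetup would be so useful to know.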