Hypothetical query about non-linear FOV


#1

I am curious how one might go about implementing a camera view that was distorted in the following way:

Where in the stack would it be implemented, for example? And how difficult would it be to add such a function to the code?


#2

It’s a great idea with much potential. Look into ‘foveation’. The human eye works like this, with just a small central area in hi-res and color; the periphery is mostly gray-scale and tuned for sensitivity to motion.


#3

That’s all good until the person looks left or right; then their prime field of view will be looking at a degraded image. To make that approach work properly, you need pupil-tracking cameras in the HMD to report where the person is looking on the screen, and the renderer then has to shift the high-resolution zone accordingly.


#4

Well, the idea is that the 1:1 area in the middle is quite large. Large enough to have a normal interaction experience. The edges are to expand the FOV beyond what a desktop monitor is capable of.

Ideally, with eye-tracking, looking at the compressed part of the image (possibly while a button is pressed) will move the camera to face in that direction.

Also, while I have shown a 1:2:1 use of screen real estate, I am imagining using the mouse scroll wheel to zoom the 1:1 view area larger (increasing compression of the peripheral area) or back down to suit whatever one is doing at the time: running around you might want more peripheral vision, while standing still doing something you likely want more of the 1:1 view space.

And yes, the idea is inspired by foveation, though the 1:1 view area is much larger and you aren’t meant to just stare at the dead centre of it! :smile:

(I also believe that pre-distorting the image is how Oculus, and other companies following their lead, get away with much cheaper and lighter optics in their devices than would otherwise be needed: the rendered image is distorted with the inverse of what the optics do, so the user ends up seeing a flat view. Where in the stack does this occur? A distorted rendering target would, I assume, be more efficient, but post-render distortion of a prepared flat view would likely be easier to implement.)
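For the post-render option, here is a minimal sketch of the kind of remapping such a full-screen pass could do. Everything concrete in it (the resolutions, the 1:2:1 split, the quadratic easing) is an assumption for illustration, not how any particular engine or HMD runtime does it; a real implementation would evaluate the same function per fragment in a shader.

```cpp
// Sketch of the "post-render distortion" option: render a wide flat view
// off-screen, then remap each visible pixel column to a source column so the
// middle 50% of the screen stays 1:1 and each outer 25% packs in the rest of
// the wide render with smoothly increasing compression. The widths and the
// quadratic easing below are assumptions for illustration only.
#include <cmath>
#include <cstdio>

constexpr double kDstWidth = 1920.0;   // visible screen width (assumed)
constexpr double kSrcWidth = 3200.0;   // wider off-screen flat render (assumed)

// Destination column -> source column (both measured from the left edge).
double sourceColumn(double xDst)
{
    const double dstC    = kDstWidth / 2.0;   // screen centre line
    const double srcC    = kSrcWidth / 2.0;   // source centre line
    const double inner   = kDstWidth / 4.0;   // half-width of the 1:1 zone (480 px)
    const double srcEdge = srcC - inner;      // source pixels left for each edge (1120 px)

    const double d = xDst - dstC;             // signed offset from the centre line
    const double a = std::fabs(d);

    if (a <= inner)                            // middle 50%: 1:1 mapping
        return srcC + d;

    // Outer 25%: quadratic ramp, 1:1 at the inner boundary, increasing
    // compression towards the screen edge.
    const double t    = (a - inner) / inner;   // 0 at the boundary, 1 at the screen edge
    const double lin  = inner;                 // keeps the mapping 1:1 at t = 0
    const double quad = srcEdge - inner;       // remaining source pixels, eased in
    const double s    = inner + lin * t + quad * t * t;
    return srcC + std::copysign(s, d);
}

int main()
{
    // Print how the mapping behaves across the right half of the screen.
    for (double x = kDstWidth / 2.0; x <= kDstWidth; x += 240.0)
        std::printf("dst %6.0f -> src %7.1f\n", x, sourceColumn(x));
}
```

Fed a wider-than-screen flat render, a pass like this keeps the middle 50% of the screen at 1:1 and ramps up to roughly 3.7 source pixels per screen pixel at the very edge.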


#5

Microsoft ran the computations and gave Oculus, for free, the build files for a radically curved lens model that eliminates the optical distortions in the DK1/2. I truly hope the CV1 uses that lens system, because that would then give us a clean image across the entire field.


#6

Theoretically, any OpenGL application could do this by rendering the scene repeatedly using only portions of the total projection matrix, and then copying and scaling the results into an output framebuffer. However, this would be pretty expensive because you’d be executing the same scene draw calls over and over.
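As a rough sketch of what “portions of the projection matrix” could mean in practice: split the horizontal FOV into a centre slice and two side slices, build an asymmetric off-axis frustum for each, and draw each slice into its strip of the framebuffer. The angles, resolution, and 1:2:1 split below are assumptions for illustration only.

```cpp
// Sketch: carve a wide horizontal FOV into three slices (1:2:1 on screen),
// build an asymmetric off-axis frustum for each, and render the scene once
// per slice into its strip of the output framebuffer. All numbers here are
// assumptions for illustration.
#include <cmath>
#include <cstdio>

struct FrustumBounds { double l, r, b, t, n, f; };   // as passed to glFrustum / glm::frustum
struct Viewport      { int x, y, w, h; };

// Build off-axis frustum bounds from signed horizontal view angles
// (degrees, 0 = straight ahead) and a symmetric vertical FOV.
FrustumBounds sliceFrustum(double leftDeg, double rightDeg, double vFovDeg,
                           double n = 0.1, double f = 1000.0)
{
    const double d2r   = 3.14159265358979323846 / 180.0;
    const double halfV = std::tan(0.5 * vFovDeg * d2r) * n;
    return { n * std::tan(leftDeg * d2r), n * std::tan(rightDeg * d2r),
             -halfV, halfV, n, f };
}

int main()
{
    const int W = 1920, H = 1080;

    // Assumed split: 150 deg total, centre 60 deg at 1:1, and 45 deg
    // squeezed into each outer quarter of the screen.
    const FrustumBounds slices[3] = {
        sliceFrustum(-75.0, -30.0, 60.0),   // left slice, compressed
        sliceFrustum(-30.0,  30.0, 60.0),   // centre slice, "normal" projection
        sliceFrustum( 30.0,  75.0, 60.0),   // right slice, compressed
    };
    const Viewport targets[3] = {
        { 0,         0, W / 4, H },         // 25% of the screen
        { W / 4,     0, W / 2, H },         // 50%
        { 3 * W / 4, 0, W / 4, H },         // 25%
    };

    // In a real renderer: for each slice, set the viewport to targets[i],
    // load slices[i] into the projection matrix, and issue the scene draw calls.
    for (int i = 0; i < 3; ++i)
        std::printf("slice %d: l=%.3f r=%.3f -> viewport x=%d w=%d\n",
                    i, slices[i].l, slices[i].r, targets[i].x, targets[i].w);
}
```

With these assumed numbers the centre slice averages roughly 0.06° per pixel and each side slice roughly 0.09°, at the cost of three sets of draw calls per frame, which is exactly the expense mentioned above.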

That said, nVidia has an extension in their GameWorks VR code called ‘multi-res shading’ that does pretty much this: it lets you divide the screen into 9 viewports and specify a resolution scale for each, so you only issue the draw calls once and the driver does the rest of the work.


#7

Eye-tracking is key to getting the advantages of faster processing and resource efficiency. Being able to change gaze within the field of view while keeping those efficiencies (more bang for the hardware buck) is pretty important for a realistic experience as well.


#8

The other nice part of eye-gaze tracking is that software could send new hardware render-assist hints to the graphics engine to set the FOV center point. We’re at the very beginning of hardware-optimized, gaze-driven rendering.


#9

Thanks for all the interesting responses so far!

Keep in mind, as cool as foveated rendering is, that isn’t my intention here. I just want to extend the FOV without sacrificing detail at the centre of the view.


#10

Sounds interesting, but not quite what I am talking about, as I want to compress the sides. Lowering their effective resolution is just a necessary side-effect: the per-pixel detail should stay the same, it’s just that each edge pixel represents a larger angular area.
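(For a rough sense of scale, with assumed numbers: if the middle 960 px of a 1920 px wide screen covered 60° of view, that’s about 0.06° per pixel, while packing a further 45° into each outer 480 px averages roughly 0.09° per pixel at the edges.)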


#11

Oh, you mean like a fish-eye effect (wide-angle lens)?


#12

Yes, but non-linear distortion, so you get a normal view forward and a compressed view of the sides. A poor person’s wrap-around view on a standard monitor.

(Or, if I can convince someone over at the optics lab to calculate and grind me an appropriate lens that I can retro-fit to a spare video projector, maybe a rather good solution!)