While VR has reached an incredible level of fluidity and realism, there’s still a lot of work to be done before we can produce all the visual inputs the human eye and brain expect. Ever since Facebook bought Oculus, they’ve been investing heavily in basic research to push the VR industry forward.
One of the big problems with existing VR is that your eyes sit a fixed distance from the screen, behind a fixed lens. So any depth of field or visual blur has to be simulated on screen. Apart from feeling quite unnatural, the effect is hard to pull off convincingly. The folks at Facebook’s Reality Labs demoed an awesome potential answer to this: a “varifocal” headset known as Half Dome. It combines eye-tracking, a wide field of view and advanced optomechanical systems to provide an unprecedented level of visual fidelity when it comes to focus and blur.
Now we’re being introduced to DeepFocus, the AI-powered rendering system that drives Half Dome. DeepFocus lets the headset defocus whatever the user isn’t looking at in real time. Accurately rendered blur and variable focus are essential if VR is ever to look completely realistic and believable. At least, that’s what the research team believes.
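To get a feel for what gaze-driven defocus means, here’s a minimal sketch. It is not Facebook’s DeepFocus network; it just uses the classic thin-lens circle-of-confusion formula to blur a 1-D scanline of pixels based on how far each pixel’s depth is from the depth the user is looking at. All function names, the toy image, and the `focal_len`, `aperture`, and `px_per_unit` values are illustrative assumptions.

```python
# Hypothetical sketch of gaze-driven defocus (NOT the actual DeepFocus
# system): a thin-lens circle-of-confusion model drives a per-pixel box
# blur on a 1-D scanline, given a matching depth map and a focus depth.

def coc_radius(depth, focus_depth, focal_len=0.05, aperture=0.02):
    """Circle-of-confusion radius (thin-lens model), in scene units."""
    if depth <= focal_len or focus_depth <= focal_len:
        return 0.0
    return abs(aperture * focal_len * (depth - focus_depth)
               / (depth * (focus_depth - focal_len)))

def defocus_scanline(intensities, depths, focus_depth, px_per_unit=2000):
    """Blur each pixel with a box kernel sized by its CoC radius."""
    out = []
    n = len(intensities)
    for i in range(n):
        r = int(round(coc_radius(depths[i], focus_depth) * px_per_unit))
        lo, hi = max(0, i - r), min(n, i + r + 1)
        out.append(sum(intensities[lo:hi]) / (hi - lo))
    return out

# A bright pixel at depth 1.0: sharp when the gaze focuses at that
# depth, smeared across its neighbours when the gaze focuses far away.
img = [0.0, 0.0, 1.0, 0.0, 0.0]
near = defocus_scanline(img, [1.0] * 5, focus_depth=1.0)  # in focus
far = defocus_scanline(img, [1.0] * 5, focus_depth=5.0)   # out of focus
```

The point of the real system is doing something far more sophisticated than this at full resolution, in stereo, at 90 frames per second, which is why a learned network rather than a per-pixel filter is needed.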
Achieving seamless, accurate blur rendering meant relying on sophisticated deep learning algorithms. The goal was a rendering technology that would immediately make any VR application more realistic.
Opening the Way
After putting all that time, money and effort into making Half Dome and DeepFocus, you’d think Facebook and Oculus would be rather precious about their new toy. Instead, they’ve opened up the project to other researchers and developers, helping the whole industry adopt and explore this technology.
For us consumers, that can only be good news, and it promises even more breathtaking VR experiences in the future.