Being productive requires some basis in reality: knowing what's going on around you. Knowing where real objects sit relative to the virtual world becomes important after inadvertently clearing your desk for the second time while flailing about in VR. Simple tasks like finding objects after putting on the headset become an awkward dance. I think this is why Apple and Microsoft are selling us on augmented reality devices rather than fully enclosed virtual reality headsets. Augmented reality lets the user stay in the real world by projecting onto a transparent surface rather than obscuring the user's vision with a display. It maintains productivity by supplementing our vision rather than replacing it, even though the display technology has far fewer pixels: today's augmented reality headsets have narrower fields of view, lower resolutions, and inferior image contrast and brightness compared to the virtual reality headsets on the market. This post documents using a pair of inexpensive webcams to give sight to a VR headset user while preserving the headset's higher resolution.
The initial motivation was being able to use a keyboard in virtual reality by finding it on my desk quickly. This was first tested with a single USB webcam hot-glued to the top of an Oculus DK2. To represent the webcam feed virtually, a plane was placed in front of the user at arm's length, about 1 ft [30 cm] wide, with its texture set to the webcam feed.
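As a rough sanity check on those numbers (the arm's-length distance of about 0.6 m is my assumption, not measured in the original setup), basic trigonometry gives the angle such a plane subtends in the viewer's field of view:

```python
import math

def angular_width_deg(plane_width_m: float, distance_m: float) -> float:
    """Horizontal angle (in degrees) a flat plane subtends when viewed head-on."""
    return math.degrees(2 * math.atan(plane_width_m / (2 * distance_m)))

# A 0.3 m (~1 ft) wide plane held 0.6 m away covers about 28 degrees of view.
print(round(angular_width_deg(0.3, 0.6), 1))  # → 28.1
```

That 28 degrees is a small window compared to the DK2's roughly 100-degree field of view, which is consistent with the feed feeling like a small picture floating in space.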
Although it worked, the lack of depth perception became a hassle very quickly since distance was constantly misjudged. The feedback of the live video feed allowed tasks to eventually be completed, but adding depth perception was the obvious next step. Interestingly, no one noticed that the camera wasn't in front of their eyes until they tried to cover their “eyes.”
Realizing that depth perception is a large and underappreciated part of being human, two cameras became the focus of the second test. Two Logitech C210 webcams were used because they’re inexpensive and were already in my junk box. The plastic housings were removed and replaced with custom 3D-printed mounts. Since a Leap Motion was already attached to the headset with a 3D-printed ABS mount, the camera mounts were fused to it with acetone. In software, the webcam plane was duplicated for the second webcam, but each plane was rendered only for its corresponding eye in order to produce the illusion of perspective. After shifting one of the planes slightly down to compensate for the mounting misalignment, the brain immediately began to see in “3D” with the field of view and quality of a webcam. Judging depth was still difficult until some trial and error of scaling and placing the planes at different distances from the viewer. Using my hand as a reference, iteratively adjusting the scale eventually led to a near-perfect 1-to-1 match of hand size as the brain expects it. Picking up and interacting with objects became intuitive again; the brain accepted this new view of the world.
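The trial-and-error scaling has a closed form worth noting: for the world to appear life-size, the plane must subtend exactly the camera's field of view from where the eye sits, so its width follows from the camera FOV and the plane's distance. A minimal sketch, assuming a hypothetical 60-degree horizontal FOV (the C210's actual FOV wasn't measured here):

```python
import math

def plane_width_for_1to1(camera_hfov_deg: float, distance_m: float) -> float:
    """Width the webcam plane must have at a given distance so objects in the
    feed subtend the same angle they would to the naked eye (1-to-1 scale)."""
    return 2 * distance_m * math.tan(math.radians(camera_hfov_deg) / 2)

# Hypothetical 60-degree horizontal FOV, plane placed 0.6 m from the eye:
print(round(plane_width_for_1to1(60.0, 0.6), 3))  # → 0.693
```

Moving the plane closer or farther then only requires rescaling it proportionally, which matches the intuition that distance and scale had to be tuned together rather than independently.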
From here the next step would be adding fisheye lenses to increase the field of view; the cameras' narrow view felt like looking down a telescope and required a lot of head movement. The resolution was good enough to write with pen and paper, but the auto-exposure was too slow and inaccurate. If anything, this experiment has made me really appreciate the specifications of the human eye. Here's a slide taken from the Oculus Connect 3 keynote about the future of VR: