VR User Interface
Bret Victor wrote a fascinating blog post about the future of user interfaces. He argues that most of the interfaces between us and technology are incredibly limited because they are designed for two-dimensional screens - what he calls “pictures under glass”. Our bodies are capable of so much more than moving a thumb or finger a few inches, and technology is increasingly capable of much more immersive experiences. Victor argues that the user interfaces of the future will be more immersive and intuitive because they will take advantage of more of the human body and better emulate interactions that come naturally. I couldn’t agree more.
To me, virtual reality is the most exciting recent breakthrough to change the way we think about interaction between human and computer. If you’ve ever tried it, you’ll know what I’m talking about. But that immersion remains a one-way experience. VR is very good at presenting stunning, intuitive imagery, but not yet good at letting you give input to the computer in a natural way. You are stuck with the traditional “pictures under glass” tools: mice, keyboards, and game controllers.
A user interacting with the Meta augmented reality headset. She has no real tactile feedback from the virtual vase she is holding.
Oculus and HTC both have controllers which track hand position in virtual space, but these are still just simple extensions of traditional game controllers and can’t provide a realistic simulation of the virtual environment. Some researchers have also begun working on interesting new tactile user interfaces. The Tangible Media Group at MIT is leading the way and publishing some fascinating research. Here’s one gem among their work:
The inFORM by the Tangible Media Group
How does it work?
This project, which we call Freehand, is a new user interface for VR. It started out as my undergraduate thesis, in collaboration with Nathan Chau, but we hope to take it much further.
Illustration demonstrating the concept of Freehand. The user is convinced that they are touching the virtual red cube while in reality they are only touching the robotic arm.
Finished prototype.
It works by tracking the location of your hands and actuating a robotic arm to meet your hands at the real-world location where virtual objects would be. It can simulate the weight and inertia of objects, convincing you, for example, that you are holding a virtual cube that isn’t really there.
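To make this concrete, here is a rough sketch, in Python, of what one step of such a haptic loop could look like. The `HandTracker`-style and `RobotArm`-style interfaces below are assumed placeholders for illustration, not the prototype's actual code; the real system is more involved.

```python
import numpy as np

# Minimal sketch of one iteration of a Freehand-style haptic loop.
# Assumes a hand tracker that reports fingertip position/acceleration and a
# robot arm that accepts position and force commands. These interfaces are
# hypothetical placeholders, not the project's real implementation.

CONTACT_THRESHOLD = 0.01  # metres: distance at which the hand "touches" the cube


class VirtualCube:
    """An axis-aligned virtual cube with a simulated mass."""

    def __init__(self, center, size, mass):
        self.center = np.asarray(center, dtype=float)
        self.size = size  # edge length in metres
        self.mass = mass  # simulated mass in kilograms

    def closest_point(self, point):
        """Return the point of the cube's volume nearest to `point`
        (the point itself if it already lies inside the cube)."""
        half = self.size / 2.0
        return self.center + np.clip(point - self.center, -half, half)


def haptic_step(tracker, arm, cube):
    """One iteration: move the arm to where the hand would meet the virtual
    cube, and push back with a force derived from the cube's mass."""
    hand = tracker.fingertip_position()      # 3D position in the world frame
    contact = cube.closest_point(hand)

    # Keep the end effector at the expected contact point so it can intercept
    # the hand before it passes through the virtual surface.
    arm.move_to(contact)

    if np.linalg.norm(hand - contact) < CONTACT_THRESHOLD:
        # The hand is touching the virtual surface: resist its motion with a
        # force proportional to the cube's simulated mass (F = m * a), which
        # is what makes the cube feel like it has weight and inertia.
        accel = tracker.fingertip_acceleration()
        arm.apply_force(-cube.mass * accel)
```

In practice this loop has to run at a high rate and anticipate the hand's trajectory, since a real arm cannot teleport to the contact point; the sketch only shows the basic idea of intercepting the hand and pushing back.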
This technology could be extended to allow for rich, intuitive, and efficient interactions with computers and virtual spaces. We are not sure exactly what the future of UI in VR will look like, but we are convinced that bringing senses beyond just sight and sound into the experience will go a long way.
Show me some videos!
Video of me interacting with a virtual cube.
This video is synced to the previous one. This is my point of view.
Point of view again. This time, the interface is in AR instead of VR.
Simulation of interaction with more complex shapes.