VR Lessons so far

This is a loosely organized list of things I’ve noticed while using and developing virtual reality applications for the smartphone-in-headset form factor. It is specific to my experience and may not reflect anyone else’s preferences, as VR apparently depends quite heavily on personal preference. But I think steadfast rules of design are necessary for the impending VR bubble, so that applications share a unified aesthetic and users can rely on certain common idioms and adapt to VR software quickly. Thus, this list is a roadmap of my current aesthetic for VR. It is a living document, in the sense that future experiences may invalidate assumptions I have made and force me to recognize more universal truths. Proceed with caution.
Presence is the ability to feel like you are in the scene, not just viewing a special screen. You’ll hear a lot of people talk about it, and it is important, but ultimately I believe it to be a descriptor of an end result, a combination of elements done well. There is no one thing that makes “presence”, just as there is no one thing that makes an application “intuitive”, “user friendly”, “elegant”, or “beautiful”. They either are or they are not, and it’s up to the individual experiences of the users to determine it.
Presence is a double-edged sword. I’ve found that, once I feel “present” in the application, I also feel alone, almost a “ghost town” feeling. Even if the app has a single-user purpose, it seems like it would be better in an “arcade” sort of setting. Being able to see other people may help with presence.
The hardware is not yet ready for the mass market. That’s good, actually, because the software and design side of things are a lot worse off. Now is the time to get into VR development. I’ll say nothing more about the hardware issues from a performance side. They are well known, and being worked on fervently by people with far more resources than I.
Mixing 2D and 3D elements is a no-go. Others have talked about not placing fixed-screen-space 2D heads-up-display elements in the view for video game applications, but it extends much further than that. The problem is two-fold: we currently have to take off the display to do simple things involving any sort of user input, and there is no way to manage separate application windows. We’re a long way off from getting this one right. For now, we’ll have to settle for being consistent in a single app on its own. A good start would be to build a form API that uses three-space objects to represent its controls.
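To make the form-API idea concrete, here is a minimal sketch in Python (for illustration only; all of the class and field names are my own invention, not any existing library): controls are plain objects with a world-space position instead of screen coordinates, so they can be drawn and hit-tested like any other object in the scene.

```python
from dataclasses import dataclass, field

@dataclass
class Control3D:
    """A hypothetical form control that lives in world space,
    not screen space."""
    position: tuple  # (x, y, z) in meters, world coordinates
    label: str

@dataclass
class Button3D(Control3D):
    pressed: bool = False

    def activate(self):
        # Triggered by gaze-and-click, gesture, or gamepad input.
        self.pressed = True

@dataclass
class Form3D:
    """Groups controls, e.g. laid out on a panel floating in the scene."""
    controls: list = field(default_factory=list)

    def add(self, control: Control3D):
        self.controls.append(control)
        return control
```

The point of the design is that nothing about a control is tied to the 2D framebuffer; layout, focus, and activation all happen in three-space.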
Give the user an avatar. This may be a personal preference, but when I look down, I want to see a body. It doesn’t have to be my body, it just needs something there. Floating in the air gives me no sense of how tall I stand, which in turn gives me no sense of how far away everything is.
Match the avatar to the UI, and vice versa. If your application involves a character running around, then encourage the user to stand and design around gamepads. If you must have a user sit at a keyboard, then create a didactic explanation for the restriction of their movement: put them in a vehicle.
Gesture control may finally be useful. I’m still researching this issue, but the experiments I’ve done so far have indicated that the ability to move the view freely and see depth make gestures significantly easier to execute than they have been with 2D displays. I am anxious to finish soldering together a device for performing arm gestures and test this more thoroughly. This demo makes it clear that this is at least an extremely lucrative path of study.
Use all of the depth cues. Binocular vision is not the only one. Place familiar objects with well-known sizes in the scene. Use fog/haze and a hue shift towards blue at further distances. But most importantly, do not give the user long view distances. Restrict it with blind corners instead. Binocular vision is only good for a few feet before the other depth cues become more important, and we are not yet capable of making a convincing experience without the binocular cue.
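The fog-and-hue-shift cue above (often called aerial perspective) can be sketched as a simple per-fragment blend. This is an assumed formulation in Python for illustration, using the classic exponential fog factor; the `falloff` density is a made-up tuning value:

```python
import math

def aerial_perspective(color, distance, haze=(0.6, 0.7, 1.0), falloff=0.02):
    """Blend a surface color toward a bluish haze color with distance.

    color    -- (r, g, b) components in 0..1
    distance -- meters from the eye
    falloff  -- assumed fog density; higher values fog sooner
    """
    # Exponential fog factor: 1.0 at the eye, approaching 0 far away.
    f = math.exp(-falloff * distance)
    return tuple(f * c + (1.0 - f) * h for c, h in zip(color, haze))
```

In a real renderer this would live in a fragment shader, but the math is the same: nearby surfaces keep their own color, and distant ones wash out toward blue, reinforcing the sense of distance where binocular disparity has stopped helping.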
Object believability has more to do with textures and shading than polygon count. Save on polygon count in favor of more detailed textures and smooth shading.
Frame rate is important. I remember being perfectly happy with 30FPS on games 10 years ago. That’s not going to cut it anymore. You have to hit 60FPS, at least. Oculus Rift is targeting 75FPS. I’m sure that is a good goal. Make sure you’re designing your content and algorithms to maintain this benchmark.
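It helps to think of those frame rates as time budgets. A trivial sketch of the arithmetic:

```python
def frame_budget_ms(fps):
    """Milliseconds available to simulate and render one frame."""
    return 1000.0 / fps

# At 60 FPS you have about 16.7 ms per frame;
# at 75 FPS, only about 13.3 ms.
```

Every system in the application, i.e. input, simulation, and rendering for both eyes, has to fit inside that budget, every frame, or the user feels the stutter immediately.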
Use lots of non-repetitive textures. Flat colors give your eyes nothing to “catch” on to form the stereo image. The design of these viewer devices is such that the eyes must fight their natural focus angle to see things in the display correctly, so make it as difficult as possible for the eyes to settle anywhere but on object surfaces. Repetitive textures are only slightly better than flat colors, as they give the eyes a chance to fuse at the wrong angle and still find a match, producing what is known as the “wallpaper effect”. For the same reason, do not place smaller objects in any sort of pattern with regular spacing.
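One cheap way to get a non-repeating surface is procedural noise. As a sketch (standard-library Python only, for illustration; a production texture would use Perlin or simplex noise on the GPU), here is basic value noise: random values on a coarse lattice, bilinearly interpolated, so the result varies smoothly but never tiles:

```python
import random

def value_noise(width, height, cell=8, seed=0):
    """Generate a grayscale noise texture as a flat list of floats in 0..1.

    Random values on a coarse lattice are bilinearly interpolated,
    giving smooth variation with no repeating pattern for the eyes
    to fuse at the wrong disparity.
    """
    rng = random.Random(seed)
    gw, gh = width // cell + 2, height // cell + 2
    lattice = [[rng.random() for _ in range(gw)] for _ in range(gh)]

    def lerp(a, b, t):
        return a + (b - a) * t

    pixels = []
    for y in range(height):
        gy, ty = divmod(y, cell)
        for x in range(width):
            gx, tx = divmod(x, cell)
            fx, fy = tx / cell, ty / cell
            top = lerp(lattice[gy][gx], lattice[gy][gx + 1], fx)
            bot = lerp(lattice[gy + 1][gx], lattice[gy + 1][gx + 1], fx)
            pixels.append(lerp(top, bot, fy))
    return pixels
```

Layering a texture like this over a base color breaks up flat regions and gives the stereo fusion process unambiguous features to lock onto.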
Support as many different application interactions as possible. If the user has a keyboard hooked up, let them use the keyboard. If they have a gamepad, let them use the gamepad. If the user wants to use the app on their desktop with a regular 2D display, let them. Do not presume to know how the user will interact with the application. This early in development, not everyone will have all of the same hardware. Even into the future, it will be unlikely that an app will be successfully monetizable with a user base solely centered on those who have all of the requisite hardware to have a full VR experience. Be maximally accessible.
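The way I think about this is to decouple devices from actions behind one routing layer, so adding a new input source never touches application logic. A minimal sketch (all names are hypothetical, not a real input library):

```python
class InputRouter:
    """Routes whichever input sources are actually present
    to a single set of application actions."""

    def __init__(self):
        self.bindings = {}  # (source, control) -> action name

    def bind(self, source, control, action):
        self.bindings[(source, control)] = action

    def dispatch(self, source, control):
        # Unknown devices and unbound controls simply do nothing,
        # rather than raising an error.
        return self.bindings.get((source, control))
```

A keyboard key, a gamepad button, and a gesture can then all map to the same action, and the application works with whatever hardware the user happens to have.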
Make the application useful. This seems like it shouldn’t be said, but ask yourself what would happen if you were to rip out the “VR” aspect of the application and have people use it with traditional IO elements. Treat the VR aspect of it as tertiary. Presence by its very definition means forgetting about the artifice of the experience. If the experience is defined by its VR nature, then it is actively destroying presence by reveling in artifice.
Much research needs to be done on user input, especially for large amounts of text. Typing on a keyboard is still the gold standard of text entry, but tying the user to the keyboard does not make for the best experience, and reacquiring a spatial reference to the keyboard after putting the headset on and moving away from it is nearly impossible. Too often, I find myself reaching behind me in completely the wrong direction.
3D audio is essential. We could mostly get away without audio in 2D application development, but in VR it is a significant component of sensing orientation and achieving presence. I believe it works by giving us a reference to fixed points in space that can always be sensed, even if they are not in view. Because you always hear the audio, you never lose the frame of reference.
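At its simplest, that fixed-point-in-space effect comes from panning and attenuating each source relative to where the head is facing. Here is a rough sketch in Python of one common approach, constant-power panning with inverse-distance rolloff (real spatial audio engines use HRTFs, which do much more than this):

```python
import math

def stereo_gains(azimuth_deg, distance):
    """Approximate left/right ear gains for a fixed sound source.

    azimuth_deg -- angle of the source relative to where the head
                   faces (0 = dead ahead, +90 = fully to the right)
    distance    -- meters; inverse-distance rolloff, clamped at 1 m
    """
    # Constant-power panning keeps perceived loudness steady
    # as the listener turns their head.
    pan = math.radians((azimuth_deg + 90.0) / 2.0)
    attenuation = 1.0 / max(distance, 1.0)
    return attenuation * math.cos(pan), attenuation * math.sin(pan)
```

Recomputing the azimuth from head orientation every frame is what anchors the sound in the world: turn your head and the source stays put, which is exactly the fixed reference point described above.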

I may add to this later.
Culled from GameDev.net
