Hello everyone. This post is going to be pretty long (and heavy). I feel that a common beginner mistake is caused largely by the more seasoned developers on here using the wrong words to describe how to pre-visualize. That led to me making a large number of mistakes in software development and scrapping a great deal of code. So, without further ado, I present to you: The Importance Of Pre-Visualization
Many a beginner on gamedev.net (including me) has trouble with their software early on. I had no idea how to plan projects or even what a project included, so I would look for posts about how to plan projects out. Generally, when these questions get asked, the seasoned developers on here (also known as people who have worked on many finished projects) answer with responses like:
“I have a really iterative design.”
“I don’t really pre-visualize; I try to sort out more details at implementation.”
Now, that’s not to say these developers are in the wrong at all. For a beginner, however, these terms can be very daunting and confusing. I thought I shouldn’t really pre-visualize at all and that everything would sort itself out eventually (boy, was I wrong). In this article I plan to explain how I pre-visualize my projects, how much you should really be pre-visualizing, and why it’s important. So let’s jump right in with the third one: Why Is Pre-Visualizing Important?
Pre-visualizing allows you to plan how your classes interact. Imagine this: in many projects, your Collision and Physics systems interact, and almost all of your classes have to access a single Map for the level you’re on. How they will interact and how the map will be handled must be thought out so that the code you write at the beginning is prepared for how the other classes use the Map. This must be sorted out in pre-visualization because you write different classes at different times, which means that if you don’t think about it, you’ll end up rewriting enormous amounts of code.
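As a minimal sketch of the idea (the class and method names here are hypothetical, not from any particular engine): several systems hold a reference to one shared Map, so the Map’s interface has to be decided before either system is written.

```python
# Hypothetical sketch: Physics and Collision both depend on one shared Map,
# so the Map's query interface must be planned up front.

class Map:
    """Owns the level's tile data; other systems only query it."""
    def __init__(self, tiles):
        self.tiles = tiles  # dict of (x, y) -> tile type

    def is_solid(self, x, y):
        return self.tiles.get((x, y)) == "wall"

class Physics:
    def __init__(self, level_map):
        self.level_map = level_map  # shared, not owned

    def step(self, entity):
        # Move, then ask the shared Map whether the new cell is blocked.
        nx = entity["x"] + entity["vx"]
        ny = entity["y"] + entity["vy"]
        if not self.level_map.is_solid(nx, ny):
            entity["x"], entity["y"] = nx, ny

level = Map({(1, 0): "wall"})
physics = Physics(level)
player = {"x": 0, "y": 0, "vx": 1, "vy": 0}
physics.step(player)  # blocked by the wall at (1, 0), so the player stays put
```

If `is_solid` had been an afterthought, adding Collision later would mean rewriting Physics; deciding the shared interface in the notebook first is exactly the kind of planning this section is about.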
Pre-visualization also defines project scope. Knowing what you plan to accomplish, and what accomplishing that includes, helps with development (for one thing, you will be able to gauge your progress and define what needs to be done next). When making a side-scroller, understanding the scope of the enemy A.I. is important so you’ll know the work involved. If you make simple A.I., you can compensate by adding bows to a side-scroller that was originally only going to have swords. Now that I’ve made that analogy, let us move on to another reason pre-visualizing is important: understanding the mechanics of your game.
This ties into project scope. The mechanics are part of the scope because the more complex they are, the more time it will take to implement them. Imagine this: having a bow involves a lot more coding (handling the projectiles shot, their collision, how fast they move, their animations, etc.). So when defining scope, you decide whether you’ll have a bow or only swords; choosing swords alone lets you plan only for swords. The first part of planning should always be defining your scope.
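The bow-versus-sword comparison can be made concrete. In this toy sketch (all names are made up for illustration), a sword hit resolves instantly, while an arrow is a live object that must be updated and collision-checked every frame until it lands, which is where the extra work hides:

```python
# Toy illustration of scope: a sword attack is one function call,
# while a bow drags in a per-frame projectile simulation.

class Arrow:
    def __init__(self, x, speed):
        self.x = x
        self.speed = speed
        self.alive = True

    def update(self, target_x):
        self.x += self.speed
        if self.x >= target_x:   # crude collision check
            self.alive = False

def sword_hit(attacker_x, target_x, reach=1):
    # Instant: no per-frame state to track, no animation to schedule.
    return abs(attacker_x - target_x) <= reach

arrow = Arrow(x=0, speed=2)
frames = 0
while arrow.alive:
    arrow.update(target_x=10)
    frames += 1
# The arrow needed several frames of simulation; the sword needed none.
```

Even this stripped-down arrow needs update logic, a collision test, and lifetime tracking, and a real one would also need animation, which is the complexity you sign up for when scope includes a bow.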
Now on to the second part: how much you should be pre-visualizing. My general rule is to figure out your hierarchy and how your classes will interact, while leaving out the actual coding details. I know how to code, and a large part of actual software design is figuring out how to solve problems, or thinking about the best way to solve one. Figuring out what those problems are and how you’ll solve them is pre-visualization. Actually planning out my code, what my functions will take as parameters, etc. shouldn’t be defined in pre-visualization (except for small, single-task programs, like converting one form of a linear equation to another). Solving these problems before you start coding makes sure that all the code you write already has that problem in mind (so when a problem turns up, or while you are implementing something, you don’t have to scrap existing code).
Some problems are bound to be encountered while coding, and trying to write down and fix every minute detail of your program is an example of bad pre-visualization. You can’t anticipate everything, but anticipating what you can (i.e., the bigger problems and ideas) will help enormously.
Now, what you’ve all been waiting for: how do I pre-visualize? It’s simple, really. I get a notebook and write down the name of my project. I define the scope and the mechanics, then take one or two pages in the notebook that I label “Classes”. I figure out the basic classes and write down each one’s responsibility (defining responsibilities makes sure you understand what all of your classes are actually supposed to be doing). Then I take maybe a page for each class or important mechanic and think about it hard: how it will handle its responsibilities and how it will interact with other classes. The key word here is interaction. Interaction is a huge part of software design (especially video game software design). This allows me to anticipate the basic structure of my code and the problems I’ll run into. Then, for a day or two, I read over what I have and reflect. After that, I take my notebook to the computer and start coding. The whole process takes one to two weeks.
The main point of this article was to stress how important pre-visualization is for beginners. Even if your project is just tic-tac-toe, get in the habit of pre-visualizing. It’ll pay off in the long run.
If you enjoyed this article, please post down below. If you have any recommendations about how you plan, or any corrections, feel free to share them with everyone. Cheers :)!
Culled from GameDev.net
This is a loosely organized list of things I’ve noticed while using and developing virtual reality applications for the smartphone-in-headset form factor. It is specific to my experience and may not reflect anyone else’s personal preference; VR is apparently quite dependent on preference. But I think that steadfast rules of design are necessary for the impending VR bubble, to convey an aesthetic and unified design so that users may come to expect certain common idioms and adapt to VR software quickly. Thus, this list is a roadmap of my current aesthetic for VR. It is a living document, in the sense that future experiences may invalidate assumptions I have made and force me to recognize more universal truths. Proceed with caution.
•Presence is the ability to feel like you are in the scene, not just viewing a special screen. You’ll hear a lot of people talk about it, and it is important, but ultimately I believe it to be a descriptor of an end result, a combination of elements done well. There is no one thing that makes “presence”, just as there is no one thing that makes an application “intuitive”, “user friendly”, “elegant”, or “beautiful”. They either are or they are not, and it’s up to the individual experiences of the users to determine it.
•Presence is a double-edged sword. I’ve found that, once I feel “present” in the application, I also feel alone, almost a “ghost town” feeling. Even if the app has a single-user purpose, it seems like it would be better in an “arcade” sort of setting. To be able to see other people may help with presence.
•The hardware is not yet ready for the mass market. That’s good, actually, because the software and design side of things are a lot worse off. Now is the time to get into VR development. I’ll say nothing more about the hardware issues from a performance side. They are well known, and being worked on fervently by people with far more resources than I.
•Mixing 2D and 3D elements is a no-go. Others have talked about not placing fixed-screen-space 2D heads-up-display elements in the view for video game applications, but it extends much further than that. The problem is two-fold: we currently have to take off the display to do simple things involving any sort of user input, and there is no way to manage separate application windows. We’re a long way off from getting this one right. For now, we’ll have to settle for being consistent in a single app on its own. A good start would be to build a form API that uses three-space objects to represent its controls.
•Give the user an avatar. This may be a personal preference, but when I look down, I want to see a body. It doesn’t have to be my body, it just needs something there. Floating in the air gives me no sense of how tall I stand, which in turn gives me no sense of how far away everything is.
•Match the avatar to the UI, and vice versa. If your application involves a character running around, then encourage the user to stand and design around gamepads. If you must have a user sit at a keyboard, then create a didactic explanation for the restriction of their movement: put them in a vehicle.
•Gesture control may finally be useful. I’m still researching this issue, but the experiments I’ve done so far have indicated that the ability to move the view freely and see depth make gestures significantly easier to execute than they have been with 2D displays. I am anxious to finish soldering together a device for performing arm gestures and test this more thoroughly. This demo makes it clear that this is at least an extremely lucrative path of study.
•Use all of the depth cues. Binocular vision is not the only one. Place familiar objects with well-known sizes in the scene. Use fog/haze and a hue shift towards blue at further distances. But most importantly, do not give the user long view distances. Restrict it with blind corners instead. Binocular vision is only good for a few feet before the other depth cues become more important, and we are not yet capable of making a convincing experience without the binocular cue.
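One of the cheapest of those cues to add is aerial perspective. This is an illustrative sketch only (the haze color and density values are assumptions, not a real renderer’s defaults): blend each surface color toward a bluish haze as distance grows, using simple exponential fog.

```python
# Illustrative aerial-perspective sketch: exponential fog that blends
# colors toward a bluish haze with distance, a depth cue beyond stereo.

import math

HAZE = (0.6, 0.7, 1.0)  # bluish fog color (assumed value)

def apply_fog(color, distance, density=0.05):
    f = math.exp(-density * distance)  # 1.0 up close, -> 0.0 far away
    return tuple(f * c + (1.0 - f) * h for c, h in zip(color, HAZE))

red = (1.0, 0.2, 0.2)
near = apply_fog(red, distance=1.0)   # still clearly red
far = apply_fog(red, distance=60.0)   # drifted toward the haze blue
```

The same shift could be done per-fragment in a shader; the point is that hue and contrast falling off with distance tells the eye "far away" even where binocular disparity has stopped helping.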
•Object believability has more to do with textures and shading than polygon count. Save on polygon count in favor of more detailed textures and smooth shading.
•Frame rate is important. I remember being perfectly happy with 30FPS on games 10 years ago. That’s not going to cut it anymore. You have to hit 60FPS, at least. Oculus Rift is targeting 75FPS. I’m sure that is a good goal. Make sure you’re designing your content and algorithms to maintain this benchmark.
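The arithmetic behind those targets is worth spelling out, because the per-frame time budget shrinks quickly as the target rises:

```python
# Per-frame time budget at various frame-rate targets.

def frame_budget_ms(fps):
    return 1000.0 / fps

for fps in (30, 60, 75):
    print(f"{fps} FPS -> {frame_budget_ms(fps):.1f} ms per frame")
# 30 FPS -> 33.3 ms, 60 FPS -> 16.7 ms, 75 FPS -> 13.3 ms
```

Moving from 30 to 75 FPS cuts your budget for simulation, rendering, and everything else from about 33 ms to about 13 ms per frame, which is why content and algorithms have to be designed around the benchmark from the start.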
•Use lots of non-repetitive textures. Flat colors give your eyes nothing to “catch” on to form the stereo image. The design of these viewer devices is such that the eyes must fight their natural focus angle to see things in the display correctly, so make it as easy as possible for the user to focus on object surfaces. Repetitive textures are only slightly better than flat colors: they offer a chance to focus at the wrong angle while still appearing fused, which is known as the “wallpaper effect”. For the same reason, do not place smaller objects in any sort of pattern with regular spacing.
•Support as many different application interactions as possible. If the user has a keyboard hooked up, let them use the keyboard. If they have a gamepad, let them use the gamepad. If the user wants to use the app on their desktop with a regular 2D display, let them. Do not presume to know how the user will interact with the application. This early in development, not everyone will have all of the same hardware. Even into the future, it will be unlikely that an app will be successfully monetizable with a user base solely centered on those who have all of the requisite hardware to have a full VR experience. Be maximally accessible.
•Make the application useful. This seems like it shouldn’t be said, but ask yourself what would happen if you were to rip out the “VR” aspect of the application and have people use it with traditional IO elements. Treat the VR aspect of it as tertiary. Presence by its very definition means forgetting about the artifice of the experience. If the experience is defined by its VR nature, then it is actively destroying presence by reveling in artifice.
•Much research needs to be done on user input, especially for large amounts of text. Typing on a keyboard is still the gold standard of text entry, but tying the user to the keyboard does not make for the best experience, and reacquiring a spatial reference to the keyboard after putting the headset on and moving away from it is nearly impossible. Too often, I find myself reaching out in completely the wrong direction.
•3D Audio is essential. We could mostly get away without audio in 2D application development, but in VR it is a significant component to sensing orientation and achieving presence. I believe it works by giving us a reference to fixed points in space that can always be sensed, even if they are not in view. Because you always hear the audio, you never lose the frame of reference.
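A toy sketch of that fixed-reference idea (this is equal-power stereo panning, not a real HRTF spatializer, and all names here are made up): keep the sound source at a fixed world angle and derive the left/right gains from the listener’s heading, so the source stays put in the world as the head turns.

```python
# Toy equal-power panner: a world-fixed source pans between the ears
# as the listener's heading changes, giving a stable audio landmark.

import math

def stereo_gains(source_angle_deg, listener_heading_deg):
    # Angle of the source relative to where the listener is facing.
    rel = math.radians(source_angle_deg - listener_heading_deg)
    pan = math.sin(rel)  # -1 = fully left, +1 = fully right
    left = math.sqrt((1.0 - pan) / 2.0)
    right = math.sqrt((1.0 + pan) / 2.0)
    return left, right

# Source due east (90 degrees). Facing it: sound is centered.
centered = stereo_gains(90, 90)
# Facing north instead: the same source now sits off to the right.
turned = stereo_gains(90, 0)
```

Because the gains are recomputed from head orientation every frame, the source behaves like a fixed point in space that can always be sensed, even when it is out of view, which is the frame-of-reference effect described above.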
I may add to this later.
Culled from GameDev.net