While the iPhone world is busy figuring out whether or not the new iPhone OS 4.0 Terms of Service allow Unity content to be sold on the App Store, I would rather focus on writing about the development of 3D content for research studies.
So how does it work? I have posted a multitude of learning resources for Unity, but all of them describe the process of making games. How about non-gaming applications? What's different for the respective development processes?
Non-gaming applications span across a plethora of disciplines. These include serious games (where acquiring meaningful skills/information is the primary concern), cognitive experiments, art installations, marketing presentations, simulations (e.g. medical, architectural), trainings, information displays and many more. The use of Unity content which I am interested in usually involves the collection of data for academic purposes. I conduct studies within virtual environments to test hypotheses about human behavior and cognitive functions. By saying that, I'd like to stress that my experiences don't necessarily apply to all of these areas or even to psychological research in general; it's just my point of view, for what it's worth.
- So what is the key point of my approach to developing virtual environments? Data collection. In the end, my applications are nothing but fancy data collection tools.
That's nothing unheard of in the game industry. It is common practice to build so-called hooks into a game while it's being developed. These hooks act as a window onto what's going on under the hood of your game: they extract data and tell the developers what is happening, and where, within their game world. In a first-person shooter, such data might give you insight into where and how often certain weapons are used, or where your players die. I'd rather find out where my users' attention is, how much they can remember about their environment, or how fast they react to critical stimuli. The principle is exactly the same, though, just without the guns and bad guys.

- After figuring out what exactly I want to achieve by exposing the user to a virtual environment (e.g. training wayfinding ability), I create the environment. Now here's an important difference: the environment doesn't have to be pretty, and realism is not my top priority. Games often have a distinct art style or aim for high levels of realism. When I create a virtual environment, I try to keep the texturing close to the real world, and the dimensions and scaling of the world are of utmost importance. Realistic lighting? Post-processing effects (motion blur, depth of field, etc.)? Fancy animations? No, no, and no! My "art style" is simple: fairly accurate textures to provide semantic information to the user, and geometric precision as the foundation for getting accurate data out of the program. If it needs to be quick and dirty, Google SketchUp is my tool of choice for speedy creation of virtual environments. By importing existing models from the Google 3D Warehouse I can get a complete house or research environment up and running in under an hour. For anything more complicated, there are plenty of tools out there to model from scratch: 3ds Max, Maya, Blender, Modo, LightWave 3D, Cinema 4D, …
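The hooks mentioned above can be sketched as a small logger that timestamps events of interest and dumps them in one go at the end of a session. This is plain JavaScript for illustration only; the EventLog name and its methods are my own invention, not a Unity API.

```javascript
// Minimal event "hook": call record() wherever something of interest
// happens (weapon fired, landmark looked at, response given), then
// flush the buffer to disk or a server when the session ends.
function EventLog(clock) {
  // clock is injectable so tests and replays can control time;
  // by default, use wall-clock seconds.
  this.clock = clock || function () { return Date.now() / 1000; };
  this.events = [];
}

EventLog.prototype.record = function (name, data) {
  // Store a timestamp with every event so all analyses can be time-based.
  this.events.push({ t: this.clock(), name: name, data: data });
};

EventLog.prototype.toTSV = function () {
  // One tab-separated line per event; easy to import into stats software.
  return this.events.map(function (e) {
    return [e.t.toFixed(3), e.name, JSON.stringify(e.data)].join("\t");
  }).join("\n");
};
```

In a Unity script the same idea would live in a component whose record method you call from your gameplay code, writing the buffer out with the .NET file APIs when the session ends.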
Sound can be an issue if the research question demands it; otherwise there are only instructions and silence.
All of this effort (or lack of effort) is done for a reason: the less fluff I add, the more resources I have for data collection and the mechanics that matter.

- Performance. Every game needs to run at a decent frame rate (frames per second, fps) to ensure that our eyes perceive fluid motion on the screen. If I want to rely on my data and compare it to experiments by other researchers who use more sophisticated technology and software/hardware worth many thousands of dollars, I need to be as accurate as possible and keep latency as low as I can. The frame rate at which my program runs tells me how many times per second the application's code is executed. If I have code that writes critical data to my hard drive once every millisecond, my frame rate needs to keep up and execute that code at least once every millisecond (1000 fps!). It is also crucial to keep in mind that frame rate is variable (it varies with the content on screen and the scripts running at any given moment), so I need to base my data-recording code not on frame rate but on time. To display the frame rate within your Unity program you can use a simple fps-counter script; to make your application frame-rate independent (i.e. dependent on time instead), you can use Time.deltaTime.
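A minimal sketch of that idea, in plain JavaScript for illustration (the Sampler name is mine; in a Unity script you would feed Time.deltaTime into it from your update loop):

```javascript
// Record a sample every `interval` units of accumulated time, no matter
// how irregular the frame times are. If a single frame took longer than
// the interval, emit as many samples as are due, so the recording stays
// tied to time rather than to frame rate.
function Sampler(interval) {
  this.interval = interval; // same time unit as the deltas you pass in
  this.elapsed = 0;
  this.samples = [];
}

Sampler.prototype.update = function (deltaTime, value) {
  this.elapsed += deltaTime;
  while (this.elapsed >= this.interval) {
    this.elapsed -= this.interval;
    this.samples.push(value); // in Unity: position, gaze target, etc.
  }
};
```

For example, with a 10 ms interval, a single 50 ms frame hitch still yields the five samples that are due, instead of one sample per frame.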
In addition to a high frame rate, it is important to take all your hardware into consideration. If you know relevant facts such as the resolution and latency of your mouse (USB vs. PS/2?), the latency of your keyboard (or any other input device you use), the refresh rate of your monitor, or the specs of your sound setup, you can improve the accuracy of your experiment or research application considerably. If your goal is to demonstrate that your application improves user reaction time by 20 milliseconds, your hardware and software have to support that level of accuracy.

- A huge advantage of creating a research application over a game is that I don't have to care about other people's computer systems. Most applications I create need to run on exactly ONE machine; possibly two, if I work from home and conduct my studies on the development machine at our research lab. Both are pretty powerful computers, so I don't have to worry that some customer with a low-end machine won't be able to run my application. Sometimes that's a cheap excuse for not optimizing my applications, but mostly it's a huge time saver. If you happen to do research with the potential to be commercialized later on, kudos to you. I leave optimizing for a wide range of users until the moment I know the initial prototype actually works as intended and collects high-quality data.
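To make the earlier point about hardware latency concrete, here is a back-of-the-envelope sketch in plain JavaScript. The numbers are my own illustrative assumptions (a standard 125 Hz USB mouse polls every 8 ms; a 60 Hz monitor refreshes every ~16.7 ms), not measurements of any particular setup:

```javascript
// Worst-case timing uncertainty of a reaction-time measurement is
// roughly the sum of the worst-case delays of each pipeline stage.
function worstCaseUncertainty(stagesMs) {
  return stagesMs.reduce(function (sum, ms) { return sum + ms; }, 0);
}

// Illustrative budget: 125 Hz USB mouse (8 ms poll), 60 Hz display
// (16.7 ms refresh), one frame of application latency at 60 fps (16.7 ms).
var budget = [8, 16.7, 16.7];
```

The worst case here is over 40 ms of uncertainty on a single trial: larger than the 20 ms effect you want to demonstrate, which is why averaging over many trials and knowing your hardware's specs matter.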
- The software's content largely depends on the purpose of your application. For developing games it probably won't hurt to have a background in game design. I am creating assessment and training programs for cognitive functions, so I focus on implementing psychological theory, rehabilitation principles, and so forth. Much of this comes down to clear instructions, giving feedback, and motivating and empowering the user to learn. For longer assessments, and especially for long-term trainings and interventions, it becomes more and more important to keep participants' motivation up. That's where serious games come into play (literally!). By implementing game-like mechanics, characters, or a story line, user engagement goes up and participant drop-out rates go down (at least in theory!).
To conclude, developing the mechanics of my non-gaming applications is not fundamentally different from developing those of a computer game; it's a matter of perspective. After attending GDC 2010, it seemed to me that quite a few game developers are using theories from social psychology in their games. (No, I'm not talking about Sid Meier's keynote, in which he called anything counter-intuitive "It's Psychology!" *sigh*)

- Compatibility with external hardware. Let's face it, Unity is not specifically made for research. On a daily basis I use motion/eye/face trackers, webcam tracking, head-mounted displays, mind-boggling joysticks, WiiMotes, 3D mice, multi-display setups, or anything else our engineers can come up with. Not all of this is supported by Unity out of the box (only devices that can emulate mouse/keyboard/joystick input are). For everything else, a customized solution is necessary: socket communication is available in the free version of Unity, while plugin support requires Unity Pro.
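In practice, socket communication means agreeing on a wire format with the external device's software. A minimal sketch in plain JavaScript (the newline-delimited format and the function names are my own choice, not a Unity or vendor API): encode each tracker sample as one line, and reassemble complete lines from the raw TCP byte stream on the receiving side.

```javascript
// Encode one tracker sample as a single newline-terminated JSON line,
// so the receiver can split the TCP byte stream back into messages.
function encodeSample(sample) {
  return JSON.stringify(sample) + "\n";
}

// Decode a received chunk of text into complete samples. TCP gives no
// message boundaries, so any trailing partial line is returned and must
// be prepended to the next chunk.
function decodeChunk(buffer, chunk) {
  var text = buffer + chunk;
  var lines = text.split("\n");
  var rest = lines.pop(); // last element is "" or a partial line
  return {
    samples: lines.map(function (line) { return JSON.parse(line); }),
    rest: rest
  };
}
```

The same framing logic carries over directly to a Unity script reading from a .NET socket.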
With Sony's PlayStation Move and Microsoft's Project Natal on the horizon, and nifty tools like the WiiMote and iPhone/iPad already available, "alternative" input devices are already widespread in the game industry. For research, though, it's often the question of getting that $50k head-mounted display or the extra-fancy markerless tracking solution to work that makes or breaks your project. That's usually of no concern in game development: a standard gamepad, mouse/keyboard, or console controller should suffice, unless you work for Microsoft with a motion-sensing camera aimed at your face.