
Virtual and Augmented Reality - Introduction, Desktop VR, Augmented Reality, Immersive VR, VR Graphics Rendering and Modeling, OpenGL


Xiaojun Shen and Shervin Shirmohammadi
University of Ottawa, Canada

Definition: Virtual Reality is the technology that provides almost real and/or believable experiences in a synthetic or virtual way, while Augmented Reality enhances the real world by superimposing computer-generated information on top of it.

Introduction

The term Virtual Reality (VR) was initially introduced by Jaron Lanier, founder of VPL Research, in 1989. The term is a contradiction in itself, for nothing can be both real and virtual at the same time. Perhaps Real Virtuality would be a better term, because that is what these new technologies have been delivering. Other related terms include Artificial Reality (Myron Krueger, 1970s), Cyberspace (William Gibson, 1984), and, more recently, Virtual Worlds and Virtual Environments (1990s). Virtual Reality may be considered to have been born in the mid-1960s, based on the work of Ivan Sutherland from the University of Utah. A paper published in 1972 by D.L. Vickers, one of Sutherland's colleagues, describes an interactive computer graphics system utilizing a head-mounted display and wand. The display, worn like a pair of eyeglasses, gives the observer the illusion of being surrounded by three-dimensional, computer-generated objects. The challenge of VR is to make those objects appear convincingly real in many respects, such as appearance, behavior, and the quality of interaction between the objects and the user or environment. VR is the technology that provides almost real and/or believable experiences in a virtual way. To achieve this, it uses the entire spectrum of current multimedia technologies, such as image, video, sound, and text, as well as newer and upcoming media such as e-touch, e-taste, and e-smell. To define the characteristics of VR, Heim used the three I's: immersion, interactivity, and information intensity.

  • Immersion comes from devices that isolate the senses sufficiently to make a person feel transported to another place.
  • Interactivity comes from the computer's lightning ability to change the scene's point of view as fast as the human organism can alter its physical position and perspective.
  • Information intensity is the notion that a virtual world can offer special qualities such as telepresence and artificial entities that exhibit a certain degree of intelligent behavior.

Immersion depends primarily on visual aspects implemented through 3D Computer Graphics (CG), as users need to feel that they are located in a world similar to the real one. So far, users can see, hear, and manipulate objects through VR. But as technology develops, users will in the near future also be able to smell, taste, touch, and feel objects through VR.

In the infancy of VR, interaction amounted to a simple walk-through: users could change the view not through the system but by their own movement. In the last few years, however, interaction has become powerful enough not only for browsing a virtual world but also for manipulating objects and communicating with other people in real time. Interaction is the process between the user and the system, or among different users, and can be implemented by various input and output devices.

Information intensity depends on the quantity and quality of information provided through VR for users, who may have different demands. No single VR system can be suitable for all users. What can be offered to different users is accuracy of objects, real-time communication, realistic simulation, interaction, or a combination of these.

The applications of VR are becoming very popular in multidisciplinary fields, and the development of CG makes it more appealing by creating not only physical objects, such as buildings and hair, but also natural phenomena such as storms. Ever since the military and the entertainment industry developed advanced VR technology, the impact of VR has been powerful enough to attract people. Scientists use VR to visualize scientific information, and architects use VR to review architectural plans to prevent fatal mistakes. Educators use VR to provide an interactive learning environment, whereas historians use VR to reconstruct historical buildings. By doing so, people can share their own ideas about the same visual information and collaborate with each other through VR.

VR systems can be categorized by intensity of immersion as Desktop VR, Augmented Reality (AR), and Immersive VR.

Desktop VR

Desktop VR uses a computer monitor as the display to provide a graphical interface for users. It is cost-effective compared to immersive VR, as it does not require expensive hardware or software, and it is also relatively easy to develop. Although desktop systems lack the immersion quality, they consist of computer-generated environments that exist in three dimensions (even if they are shown on a 2D display). Figure 1a shows an example of a Desktop VR system used for industrial training. In Figure 1b, the same desktop application is used, but this time the user sees a stereoscopic 3D view on a regular monitor through the use of special software and 3D goggles. Because the worlds exist in three dimensions, users can navigate freely around them. Flight simulators are examples, where participants "fly" through models of real or fantasy worlds, watching the world on a 2D screen.

Augmented Reality

Augmented Reality (AR) can be thought of as a variation of VR. In the original publication which coined the term, (Computer-) Augmented Reality was introduced as the opposite of VR: instead of immersing the user in a purely synthesized informational environment, the goal of AR is to augment the real world with information-handling capabilities. AR describes a system which enhances the real world by superimposing computer-generated information on top of it. VR technologies completely immerse a user inside a synthetic environment; while immersed, the user cannot see the real world around him/her. In contrast, AR allows the user to see the real world, but superimposes computer-generated information upon, or composites it with, the real world. Therefore, AR supplements reality, rather than completely replacing it.

Combining 3D graphics with the real world in a 3D space is useful in that it enhances a user's perception of, and interaction with, the real world. In addition, the augmented information, such as annotations, speech instructions, images, videos, and 3D models, helps the user perform real-world tasks. Figure 2a shows a neurosurgery planning application at Harvard Medical School, where an internal organ of the patient is synthetically superimposed in 3D on top of the actual patient, allowing the surgeon to "see" inside the patient's head. Figure 2b shows a wearable computer used for the implementation of an AR industrial training application.
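
One common way to realize such superimposition is video see-through: the system draws the captured camera frame as a background and then renders the registered virtual geometry over it. The following is a minimal per-frame sketch in C with fixed-function OpenGL; grab_camera_frame(), get_camera_pose(), and the commented-out helpers are hypothetical placeholders for a real capture/tracking library, not any specific product's API.

```c
/* Video see-through AR compositing: a minimal per-frame sketch. */
#include <GL/gl.h>

extern const unsigned char *grab_camera_frame(int *w, int *h); /* hypothetical */
extern void get_camera_pose(float modelview[16]);              /* hypothetical */

void render_ar_frame(void)
{
    int w, h;
    const unsigned char *frame = grab_camera_frame(&w, &h);

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* 1. Draw the live camera image as a flat 2D background. */
    glMatrixMode(GL_PROJECTION); glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);  glLoadIdentity();
    glDisable(GL_DEPTH_TEST);
    glRasterPos2f(-1.0f, -1.0f);               /* lower-left corner */
    glDrawPixels(w, h, GL_RGB, GL_UNSIGNED_BYTE, frame);

    /* 2. Draw the virtual objects with the tracked camera pose so
     *    they stay registered with the real scene. */
    glEnable(GL_DEPTH_TEST);
    /* set_camera_projection();  -- hypothetical: projection matching
     *                              the camera's intrinsics           */
    float mv[16];
    get_camera_pose(mv);
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(mv);
    /* draw_virtual_annotations();  -- application-specific overlay   */
}
```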

AR systems have been proposed as solutions in many domains, including medicine, entertainment, military training, engineering design, robotics, and tele-operation. For example, doctors can see virtual ultrasound images overlaid on a patient's body, giving them the equivalent of X-ray vision during a needle biopsy, while a car driver can see the infrared imagery of night vision overlaid on the road ahead. Another major domain is the assembly, maintenance, and repair of complex machinery, in which computer-generated graphics and text prompts would be developed to train and assist plant personnel during complex manipulation or equipment maintenance and repair tasks.

Although AR technology has been in development since 1993, only a few commercial products using AR appear in the market today, such as the instructional system for the Boeing Company and the telecommunications services product for Bell Canada field service technicians.

Immersive VR

Immersive VR, which completely immerses the user inside the computer-generated world, can be achieved by using either Head-Mounted Display (HMD) technology or multiple projections. Immersive VR with an HMD presents the virtual world directly in front of the eyes, allowing users to focus on the display without distraction. A magnetic sensor inside the HMD detects the user's head motion and feeds that information to the attached processor. Consequently, as the user turns his or her head, the displayed graphics reflect the changing viewpoint. The virtual world appears to respond to head movement in a familiar way, and in a way which differentiates self from world: the user moves, and the virtual world appears to stay still. The sense of inclusion within a virtual world which this technology creates has a powerful personal impact. Figure 3a shows an HMD in action.
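
The per-frame viewpoint update just described can be sketched in a few lines of C with fixed-function OpenGL. Here read_head_tracker() is a hypothetical stand-in for the HMD's magnetic-sensor driver, assumed to report orientation in degrees and position in world units.

```c
/* Head-tracked viewpoint update: a minimal per-frame sketch. */
#include <GL/gl.h>

extern void read_head_tracker(float *yaw, float *pitch, float *roll,
                              float *x, float *y, float *z); /* hypothetical */

void apply_head_pose(void)
{
    float yaw, pitch, roll, x, y, z;
    read_head_tracker(&yaw, &pitch, &roll, &x, &y, &z);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    /* Invert the head pose: the world is moved opposite to the head
     * so the virtual scene appears to stay still as the user turns. */
    glRotatef(-roll,  0.0f, 0.0f, 1.0f);
    glRotatef(-pitch, 1.0f, 0.0f, 0.0f);
    glRotatef(-yaw,   0.0f, 1.0f, 0.0f);
    glTranslatef(-x, -y, -z);
    /* ... draw the scene for this eye ... */
}
```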

Immersive VR with multiple projections uses multiple projectors to create the virtual world on large screens, which might form a hemispherical surface, in a room where users may wear polarized glasses to maximize the feeling of being present at the scene. The best-known form of this immersive graphical display is the CAVE (Cave Automatic Virtual Environment), where the immersion occurs by surrounding the body on all sides with images, rather than just the eyes. Early versions of these technologies were demonstrated at SIGGRAPH '92 in Chicago by Sun Microsystems and the University of Illinois. The CAVE is essentially a five-sided cube. The participant stands in the middle of the cube, and images are projected onto the walls in front, above, below, and on either side of the participant, utilizing full 270-degree peripheral vision. As the user travels through the virtual environment, updated images are projected onto the CAVE's walls to give the sensation of smooth motion. Figure 3b shows a CAVE at the University of Ottawa's DISCOVER Lab.
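
Because the viewer generally stands off-center, each CAVE wall needs an off-axis projection computed from the tracked head position so the imagery stays geometrically correct as the user moves. The C sketch below handles a single wall under simplifying assumptions (the front wall is the plane z = -d, spanning ±w by ±h in tracker coordinates); the geometry and names are illustrative, not from any particular CAVE library.

```c
/* Off-axis projection for one CAVE wall: a simplified sketch.
 * Head at (hx, hy, hz); each wall gets its own frustum per frame. */
#include <GL/gl.h>

void set_front_wall_frustum(float hx, float hy, float hz,
                            float w, float h, float d,
                            float znear, float zfar)
{
    float dist = d + hz;        /* head-to-wall distance */
    float s = znear / dist;     /* scale wall extents to the near plane */

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum((-w - hx) * s, (w - hx) * s,   /* left, right */
              (-h - hy) * s, (h - hy) * s,   /* bottom, top */
              znear, zfar);

    /* The modelview must then translate by the negated head position
     * so the eye sits at the apex of this frustum. */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(-hx, -hy, -hz);
}
```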

At present, immersive VR stretches the limits of computational power, I/O design and understanding of human perception. The 3-D graphic VR worlds are usually made up of polygons. Some systems allow texture mapping of different patterns onto the polygon surfaces. The polygons may be shaded using different algorithms which create more or less realistic shadows and reflections. Displays are “laggy,” with responses to motion being delayed, particularly for complicated worlds.
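
As a concrete illustration of the polygon-based rendering just described, the following C sketch draws a single texture-mapped, lit quadrilateral with fixed-function OpenGL; tex_id is assumed to name a texture loaded elsewhere.

```c
/* A shaded, texture-mapped polygon: the basic building block of the
 * 3D worlds described above. */
#include <GL/gl.h>

void draw_textured_quad(GLuint tex_id)
{
    /* One light plus per-vertex normals gives shaded polygons. */
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex_id);

    glBegin(GL_QUADS);
    glNormal3f(0.0f, 0.0f, 1.0f);            /* facing the viewer */
    glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, 0.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, 0.0f);
    glEnd();
}
```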

VR Graphics Rendering and Modeling

Graphics engines and displays are the cornerstone of the VR user interface. The display provides the user with a three-dimensional window into the virtual environment, and the engine generates the images for display. Traditionally, these graphics capabilities were available only on high-end graphics workstations. In recent years, however, sufficient graphics capabilities have become available on standard PCs. Add-on high-speed graphics processors are inexpensive and give PCs rendering horsepower that rivals low-to-mid-range graphics workstations. Moreover, the standard graphics API enables the development of portable, graphics-intensive applications.

OpenGL

OpenGL, initiated by Silicon Graphics in 1992, is an open specification for an application programming interface for defining 2D and 3D objects. With OpenGL, an application can create the same effects in any operating system using any OpenGL-adhering graphics adapter. Since its inception, OpenGL has been controlled by an Architecture Review Board whose representatives come from the following companies: 3DLabs, Compaq, Evans & Sutherland (Accelgraphics), Hewlett-Packard, IBM, Intel, Intergraph, NVIDIA, Microsoft, and Silicon Graphics.
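
This portability can be sketched with a complete, minimal program using the common GLUT windowing toolkit: the same source compiles and runs unchanged on any system with an OpenGL-adhering driver. The window title and colors are illustrative.

```c
/* A minimal, portable OpenGL program using GLUT. */
#include <GL/glut.h>

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);              /* a single colored triangle */
    glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
    glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
    glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
    glEnd();
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("OpenGL example");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```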

Most CG research (and implementation) broadly uses OpenGL, which has become a de facto standard, and Virtual Reality is no exception to this rule. There are many options available for hardware acceleration of OpenGL-based applications. The idea is that some complex operations may be performed by dedicated hardware (an OpenGL-accelerated video card, such as those based on 3DLabs' Permedia series or Mitsubishi's 3Dpro chipset, for instance) instead of the CPU, which is not optimized for such operations. Such acceleration allows low-end workstations to perform quite well at low cost.

Virtual Reality Modeling Language (VRML)

VRML 2.0, the latest version of the well-known VRML format, is an ISO standard (ISO/IEC 14772-1:1997). Having a huge installed base, VRML 2.0 has been designed to support easy authoring, extensibility, and implementability on a wide range of systems. It defines rules and semantics for the presentation of a 3D scene. Using any VRML 2.0-compliant browser, a user can simply use a mouse to navigate through a virtual world displayed on the screen. In addition, VRML provides nodes for interaction and behavior. These nodes, such as TouchSensor and TimeSensor, can be used to intercept certain user interactions or other events, which can then be ROUTEd to corresponding objects to perform certain operations. Moreover, more complex actions can take place using Script nodes, which are used to write programs that run inside the VRML world. In addition to the Script node, VRML 2.0 specifies an External Authoring Interface (EAI) which can be used by external applications to monitor and control the VRML environment. These advanced features enable a developer to create an interactive 3D environment and bring the VRML world to life.
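
A small VRML 2.0 scene illustrates the nodes and ROUTE mechanism just described: clicking the box (TouchSensor) starts a TimeSensor, whose output fraction drives an interpolator that rotates the box.

```
#VRML V2.0 utf8
# Clicking the box starts a two-second spin: touchTime starts the
# TimeSensor, whose fraction drives an OrientationInterpolator
# that is routed back to the Transform's rotation.
DEF BOX Transform {
  children [
    Shape {
      appearance Appearance { material Material { diffuseColor 1 0 0 } }
      geometry Box { size 1 1 1 }
    }
    DEF TOUCH TouchSensor { }
  ]
}
DEF TIMER TimeSensor { cycleInterval 2 }
DEF SPINNER OrientationInterpolator {
  key      [ 0, 0.5, 1 ]
  keyValue [ 0 1 0 0, 0 1 0 3.14159, 0 1 0 6.28318 ]
}
ROUTE TOUCH.touchTime        TO TIMER.set_startTime
ROUTE TIMER.fraction_changed TO SPINNER.set_fraction
ROUTE SPINNER.value_changed  TO BOX.set_rotation
```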

X3D

X3D, the successor to VRML, is the ISO standard for representing 3D objects and scenes, combining aspects of the VRML specification with the XML standard. X3D is a scalable and open software standard for defining and communicating real-time, interactive 3D content for visual effects and behavioral modeling. It can be used across hardware devices and in a broad range of applications including CAD, visual simulation, medical visualization, GIS, entertainment, education, and multimedia presentations. X3D provides both the XML encoding and the Scene Access Interface (SAI) to enable both web and non-web applications to incorporate real-time 3D data, presentations, and controls into non-3D content. It improves upon VRML with new features, advanced APIs, additional data encoding formats, stricter conformance, and a componentized architecture using profiles, which allows for a modular approach to supporting the standard and permits backward compatibility with legacy VRML data. Additional features of X3D include the following (an example of the XML encoding appears after the list):

  • Open source, so no licensing issues.
  • Has been officially incorporated within the MPEG-4 multimedia standard.
  • XML support makes it easy to expose 3D data to Web Services and distributed applications.
  • Compatible with the next generation of graphics files – e.g. Scalable Vector Graphics.
  • 3D objects can be manipulated in C or C++, as well as Java.
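
To make the XML encoding concrete, here is the spinning-box scene from the VRML section re-expressed as X3D; this is a sketch using the Interactive profile, with the same node and field names carried over.

```xml
<!-- The VRML spinning-box example, in X3D's XML encoding. -->
<X3D profile='Interactive' version='3.0'>
  <Scene>
    <Transform DEF='BOX'>
      <Shape>
        <Appearance><Material diffuseColor='1 0 0'/></Appearance>
        <Box size='1 1 1'/>
      </Shape>
      <TouchSensor DEF='TOUCH'/>
    </Transform>
    <TimeSensor DEF='TIMER' cycleInterval='2'/>
    <OrientationInterpolator DEF='SPINNER'
        key='0 0.5 1'
        keyValue='0 1 0 0, 0 1 0 3.14159, 0 1 0 6.28318'/>
    <ROUTE fromNode='TOUCH'   fromField='touchTime'
           toNode='TIMER'     toField='set_startTime'/>
    <ROUTE fromNode='TIMER'   fromField='fraction_changed'
           toNode='SPINNER'   toField='set_fraction'/>
    <ROUTE fromNode='SPINNER' fromField='value_changed'
           toNode='BOX'       toField='set_rotation'/>
  </Scene>
</X3D>
```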

VR Haptic Interface

In an effort to bring more and more realism to the virtual world, VR developers have become increasingly creative. Devices have been invented that simulate tactile feedback and force feedback; together these are called haptic devices. The most notable commercially available force-feedback product is the Phantom, while touch-feedback devices are usually some permutation of virtual gloves, such as the DataGlove. If you are holding a virtual ball in your hand, for example, a tactile device will let you feel how smooth or rough its surface is, whereas a Phantom will let you feel how heavy it is. Gloves, by the way, are also tracking devices, because they let you feel the virtual objects you are touching (by simulating pressure and a tingling sensation on your hand) while at the same time feeding information about the position of your fingers back to the computer. Readers are referred to the Tele-Haptics chapter in this book for further information.
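
Force feedback is commonly computed by penalty methods: when the haptic cursor penetrates a virtual surface, a spring force proportional to the penetration depth pushes it back out, replayed by the device roughly a thousand times per second. The C sketch below illustrates the idea for a sphere; it is a generic illustration of the technique, not the API of the Phantom or any other device.

```c
/* Penalty-based haptic force for a virtual sphere: a sketch. */
#include <math.h>

typedef struct { double x, y, z; } Vec3;

/* Cursor at p, sphere of radius r centered at c, stiffness k (N/m). */
Vec3 sphere_contact_force(Vec3 p, Vec3 c, double r, double k)
{
    Vec3 f = { 0.0, 0.0, 0.0 };
    double dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
    double dist = sqrt(dx * dx + dy * dy + dz * dz);

    if (dist < r && dist > 1e-9) {      /* cursor is inside the sphere */
        double depth = r - dist;        /* penetration depth */
        f.x = k * depth * dx / dist;    /* push outward along normal */
        f.y = k * depth * dy / dist;
        f.z = k * depth * dz / dist;
    }
    return f;                           /* sent to the device each tick */
}
```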

Collaborative Virtual Environments

Collaborative Virtual Environments (CVEs) are currently one of the most challenging VR research areas. A CVE is a shared virtual world that could radically alter the way multiple people work, learn, consume, and entertain. It adds new dimensions to the needs of human factors, networking, synchronization, middleware, and object model acquisition and representation. Take human-factors research in VR, for example: it has traditionally focused on the development of natural interfaces for manipulating virtual objects and traversing virtual landscapes. Collaborative manipulation, on the other hand, requires consideration of how participants should interact with each other in a shared space, in addition to how co-manipulated objects should behave and work together.

The main issue in a CVE, in addition to the other issues in VR, is how distributed entities share and maintain the same resources. This problem has to be solved across many hardware and software platforms, in other words, in a totally heterogeneous environment. Five important problems in implementing a CVE are discussed below.

Consistency: The fundamental model presented by a CVE platform is a shared 3D space. Since all clients accessing or updating the data share the 3D graphics database, the issue of distributed consistency must be solved by any CVE to ensure that the same view is presented to all participants. Since the number of participants is not fixed, and users may enter the environment at run time after it has been changed from its initial state, a CVE needs to provide support for latecomers and early leavers. Network lag adds a further challenge to consistency, leading to solutions based on communication protocols that are both fast and reliably timely.
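
One common way to address consistency and latecomer support, sketched below in C, is to have a server stamp every update with a monotonically increasing sequence number and hand each joining client a full snapshot; stale packets are then simply dropped. The structures shown are illustrative, not from any particular CVE platform.

```c
/* Sequence-numbered state with latecomer snapshots: a sketch. */
#include <stdint.h>
#include <string.h>

#define MAX_OBJECTS 1024

typedef struct {
    uint32_t seq;                        /* last update applied */
    float    transform[MAX_OBJECTS][16]; /* per-object matrices */
} WorldState;

typedef struct {
    uint32_t seq;        /* server-assigned, monotonically increasing */
    uint32_t object_id;
    float    transform[16];
} Update;

/* A latecomer calls this once with the server's current snapshot. */
void apply_snapshot(WorldState *w, const WorldState *snapshot)
{
    memcpy(w, snapshot, sizeof *w);
}

/* Everyone applies subsequent updates in order; out-of-date or
 * duplicated packets are ignored, so all clients converge on the
 * same view of the shared 3D database. */
int apply_update(WorldState *w, const Update *u)
{
    if (u->seq <= w->seq || u->object_id >= MAX_OBJECTS)
        return 0;                        /* stale or invalid: drop */
    memcpy(w->transform[u->object_id], u->transform,
           sizeof u->transform);
    w->seq = u->seq;
    return 1;
}
```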

Scalability: The number of possible interactions between n simultaneous users in a multiuser system is of order O(n²) at any moment. Ideally, network traffic should be almost constant or grow near-linearly with the number of users. Usually, not all the data in the CVE would be relevant to a particular user at a given time. This suggests the idea of partitioning the environment into regions (or zones, locales, auras) that may either be fixed or bound to moving avatars. Events and actions in remote zones need not be distributed, and remote objects need not be visualized, or might be visualized at a coarse-grain level. Most of the traffic is thus isolated within zones.
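
The zone idea can be sketched in a few lines of C: the server relays an event only to users whose avatars share the sender's zone, so remote zones never see it. The send_to() function is a hypothetical network primitive.

```c
/* Zone-based interest management: a sketch. */
#define MAX_USERS 256

typedef struct {
    int zone;           /* which region the user's avatar is in */
    int connected;
} User;

extern void send_to(int user_id, const void *event, int len); /* hypothetical */

void relay_event(const User users[MAX_USERS],
                 int sender, const void *event, int len)
{
    int zone = users[sender].zone;
    for (int i = 0; i < MAX_USERS; i++) {
        if (i != sender && users[i].connected && users[i].zone == zone)
            send_to(i, event, len);  /* remote zones never see it */
    }
}
```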

Ownership: A multi-user CVE world is subject to conflicts. Conflicts occur when collaborating users perform opposing actions; for example, a conflict may arise if one user tries to open a window while another user is trying to close it. These conflicts must be avoided or resolved. One method of controlling which attributes of objects may be modified by which users is to assign temporary ownership of objects. Manipulation of objects may include changes to the object's coordinate system and changes in the scene graph: users may grasp objects and take them to a different world. Operations like this are essential for applications like virtual shopping.
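
Temporary ownership reduces to a lock, typically arbitrated by the server so requests are serialized; a minimal C sketch with illustrative names follows.

```c
/* Temporary object ownership as a lock: a sketch of conflict
 * avoidance. A second user's request is refused until the first
 * releases the object, so opposing actions cannot interleave. */
#define NO_OWNER (-1)

typedef struct {
    int owner;           /* user id holding the object, or NO_OWNER */
} SharedObject;

int try_acquire(SharedObject *obj, int user)
{
    if (obj->owner == NO_OWNER || obj->owner == user) {
        obj->owner = user;   /* grant (or re-grant) ownership */
        return 1;
    }
    return 0;                /* conflict: someone else owns it */
}

void release(SharedObject *obj, int user)
{
    if (obj->owner == user)
        obj->owner = NO_OWNER;
}
```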

Persistence: Applications that involve a large number of users need a large-scale, persistent collaborative virtual environment (PCVE) system that is "never-ending" or "always on". This is either because its users require that it always be running, or because it is so large or distributed that stopping the entire simulation to make changes is simply not possible. A number of issues must be addressed to support a PCVE. Persistence is the first feature that characterizes a PCVE system: it describes the extent to which the virtual environment continues to exist after all participants have left it.

Dynamic Configuration: This property allows a PCVE system to configure itself dynamically without user interaction, enabling applications to take on new functionality while they are executing. CVEs should be modifiable at run time, accepting contributions of new objects and new behaviors.

Standards for Collaborative Virtual Environments

Two international standards are very likely to make a major impact on CVE technology: the Distributed Interactive Simulation (DIS) protocol (IEEE Standard 1278.1) and the High Level Architecture (HLA) (IEEE Standard 1516). DIS is a set of communication protocols that allow the interoperability of heterogeneous and geographically dispersed simulators. Development of the DIS protocol began in 1989, jointly sponsored by the United States Army Simulation, Training and Instrumentation Command (STRICOM), ARPA, and the Defense Modeling and Simulation Office (DMSO). DIS was based on SIMNET and designed as a man-in-the-loop simulation in which participants interact in a shared environment from geographically dispersed sites.

The successor of DIS, the High Level Architecture, defines a standard architecture for large-scale distributed simulations. It is a general architecture for simulation reuse and interoperability developed by the US Department of Defense. The conceptualization of the HLA led to the development of the Run-Time Infrastructure (RTI), software that implements an interface specification representing one of the tangible products of the HLA. The HLA is now both an IEEE standard (No. 1516) and an OMG (Object Management Group) standard for distributed simulations and modeling.
