
Photographic Virtual Reality - History, Photographic Virtual Reality Panoramas, Equipment, Software, Stitching, Source image, Output, Objects, Photographic Virtual Reality Scenes


Emerson College

Photographic virtual reality is a type of digital media that allows a user to interact with photographic panoramas, objects, or scenes on a computer display using software called a viewer or player. Sometimes referred to as immersive photography, photographic virtual reality is designed to give the user the sensation of being there.


Photographic virtual reality has its roots in the work of the Irish painter Robert Barker, who in 1787 created a cylindrical painting of Edinburgh, viewed from the center of a cylindrical surface. Barker patented the invention and coined the term panorama from the Greek pan , meaning all, and horama , meaning a view (from horan , to see). This intent to immerse the viewer in a scene, and the striving to invent a form of experiential representation, continued throughout the 19th and 20th centuries with panoramic and stereo-optic photography. One of the earliest examples of true photographic virtual reality was the Aspen Moviemap project, developed in 1978 by a team of researchers from MIT working with Andrew Lippman, with funding from DARPA. The research team, which included Peter Clay, Bob Mohl, and Michael Naimark, coined the term Moviemap to describe the work they were producing. A gyroscopic stabilizer with 16 mm stop-frame cameras was mounted on top of a car, and a fifth wheel with an encoder triggered the cameras every 10 feet. The car was carefully driven down the center of every street in town. The playback system required several early laser disc players, a computer, and a touch-screen display, and allowed the user to navigate throughout the virtual space.

Although there were many other experiments, it was not until Apple Computer Inc. released QuickTime VR capabilities for its popular QuickTime Player in 1995 that photographic virtual reality became an established medium. For the first few years, authoring photographic virtual reality was cumbersome and quite technical: photographers created work by executing a series of pre-written, modifiable programming scripts in an application called Macintosh Programmer's Workshop. In August 1997 Apple launched the QuickTime VR Authoring Studio application, which greatly automated the process. In the late 1990s iPix began developing software and equipment to support the creation of full 360° horizontal by 180° vertical panoramic imagery, which allowed users to look in all directions, including straight up and straight down. Since that time, developers have expanded the possibilities of this emerging form by creating new authoring and viewer software as well as specialized camera equipment and techniques.

Photographic Virtual Reality Panoramas

A photographic virtual reality panorama is a panoramic image that the user can pan left and right, tilt up and down, and zoom in and out, creating the sensation of looking around from a point of view located inside the scene.


Generally, the process of generating photographic virtual reality media begins with the creation of source material, which is typically a series of photographic still images, although panoramic imaging devices are available that can create the entire source for a panorama in one image. Source material can be created with a digital camera, a video camera, PhotoCD, or a film camera, where the film is subsequently scanned on a flatbed or film scanner. In some cases photographic virtual reality source materials can be created in a computer graphics software application.

A wide variety of equipment is available for the creation of source material. This can include traditional camera equipment, which has been augmented with specialized tripod heads, ultra-wide fisheye lenses, or parabolic mirrors, as well as, dedicated rotational image capture systems. Most approaches to photographic virtual reality require that multiple images be made of any single scene. It is imperative that all of the images in the sequence match in point of view, exposure, color, and focus. Therefore it is recommended that a tripod with a special panoramic head be used and that all camera controls be operated in manual mode.

Perspective continuity between images is maintained by placing the nodal point of the camera lens directly above the rotation point of the tripod head. The nodal point of the lens is where light rays converge within the barrel of the lens, not at the film plane or the front lens element. This ensures that the point of view of each image remains at the exact same point in space as the camera is rotated. Panoramic tripod heads, or camera rigs, are designed with adjustable brackets for this purpose. The bracket usually also allows the camera to be rotated into portrait orientation for greater vertical coverage. Tripod heads often have a graduated scale and clutch for accurate rotation of the camera, as well as spirit levels to ensure that the rotation plane is level.

The number of shots required for a panorama depends on the field of view of the lens, whether the camera is mounted in portrait or landscape orientation, and the amount of overlap between adjacent images. It is a good idea to overlap the images by at least 50 percent.
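The arithmetic behind this rule of thumb can be sketched in a few lines of Python. This is an illustrative calculation, not part of any particular authoring package; the field-of-view figure in the example is a rough value for a wide lens in portrait orientation.

```python
import math

def shots_needed(h_fov_deg: float, overlap: float = 0.5) -> int:
    """Minimum number of shots for a full 360-degree panorama, given
    the lens's horizontal field of view and the fractional overlap
    between adjacent frames."""
    effective = h_fov_deg * (1.0 - overlap)  # new coverage per shot
    return math.ceil(360.0 / effective)

# e.g. a lens covering roughly 65 degrees horizontally, with the
# recommended 50 percent overlap:
print(shots_needed(65.0))  # -> 12 shots
```

With a narrower field of view or more overlap, the count rises quickly, which is why dedicated rotational capture systems are attractive for high-resolution work.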

Exposure and contrast range can be a serious concern in virtual reality photography, as the lighting conditions can vary greatly within a single scene. Therefore, it is important that the lighting be metered across the entire scene and that an average exposure be set manually and maintained for all subsequent shots. If the camera is set to automatic exposure, the exposure will change as the camera is rotated toward lighter or darker areas of the scene; image brightness will then vary from shot to shot and match poorly when the images are combined in the stitching process. It is also recommended that automatic white balance not be used, for the same reason.

Film cameras were designed to record light so that it could be reproduced on photographic paper. Digital cameras were developed to display images on a computer screen. Both photographic paper and computer screens fall far short of the dynamic range, or ratio between light and dark, which the eye perceives. By taking a series of pictures with different exposure settings the range can be captured and the images can be combined into a single high-dynamic range image called a radiance map.
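The idea of combining bracketed exposures into a radiance map can be sketched as follows. This is a simplified illustration assuming a linear sensor response; real HDR pipelines (e.g. the Debevec-Malik method) first recover the camera's response curve. The hat-shaped weighting simply trusts mid-tone pixels and discounts pixels that are nearly under- or over-exposed.

```python
import numpy as np

def merge_radiance(images, exposure_times):
    """Combine bracketed exposures (float arrays scaled 0..1) into a
    single radiance map. Each pixel's radiance is estimated as
    value / exposure_time, averaged with a hat-function weight that
    peaks at mid-gray and falls to zero at the extremes."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight, peak at 0.5
        num += w * (img / t)               # estimated scene radiance
        den += w
    return num / np.maximum(den, 1e-8)
```

A pixel that reads 0.5 at one second and 0.25 at half a second implies the same scene radiance, and the merge recovers that single value from both frames.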

In most cases it is recommended that the focus be set manually to the hyperfocal distance at smaller apertures to achieve acceptable sharpness for the entire scene.
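The standard hyperfocal distance formula behind this recommendation is H = f²/(N·c) + f, where f is the focal length, N the f-number, and c the circle of confusion. A small illustrative calculation (the 0.03 mm circle of confusion is a common value for the 35 mm format, not a figure from this article):

```python
def hyperfocal_mm(focal_mm: float, f_number: float,
                  coc_mm: float = 0.03) -> float:
    """Hyperfocal distance in millimetres: H = f^2 / (N * c) + f.
    Focusing at H keeps everything from H/2 to infinity acceptably
    sharp. coc_mm is the circle of confusion (0.03 mm is a common
    value for 35 mm film)."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# A 15 mm fisheye stopped down to f/8:
print(hyperfocal_mm(15.0, 8.0))  # -> 952.5 (mm)
```

Focusing such a lens just under a metre away holds roughly half a metre to infinity in acceptable focus, which is why manual focus at the hyperfocal distance suits full scenes so well.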


Once the source material has been collected and digitized if necessary, it must be digitally processed. A variety of software applications exist for processing photographic virtual reality source material, which range from free downloadable applications on the Internet to expensive full-feature packages.


The first step in processing the sequence of source images is to combine them into a single image in a process called stitching. Stitching prepares the perspective of the image for one of three projection methods and blends the edges of adjacent images to form a single contiguous image. When the right side of a 360° panoramic source image leaves off where the left side begins, the image is considered wrapped; this creates a seamless edge when viewed in a player. A panorama does not need a full 360° of yaw, or horizontal rotation, to be viewed in a player, but a partial panorama is not considered wrapped. The resulting stitched image is referred to as the source image.
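The edge-blending part of stitching can be illustrated with a simple linear cross-fade over the overlap region. This is only a sketch: real stitching software also warps each frame into the target projection and matches features before blending, none of which is shown here.

```python
import numpy as np

def blend_overlap(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Cross-fade two equally sized overlapping strips: the result is
    entirely the left image at the left edge and entirely the right
    image at the right edge, hiding the seam between frames."""
    width = left.shape[1]
    alpha = np.linspace(0.0, 1.0, width)   # 0 -> left, 1 -> right
    return left * (1.0 - alpha) + right * alpha
```

A consistent exposure from shot to shot, as recommended above, is what makes a simple fade like this invisible; mismatched brightness shows up as banding in the overlap.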

Source image

The type of projection method used in a source image depends on the type of source material and what form the final output will take. The three most common projection methods used in photographic virtual reality include cylindrical projection, spherical projection, and cubic projection.

A cylindrical projection image appears with correct perspective when it is mapped onto the inside surface of a cylinder and viewed from the center point of the cylinder. This type of projection is what slit cameras make, and it is the most common projection method used in photographic virtual reality. Cylindrical projection allows for a full 360° yaw, but the vertical tilt, or pitch, is limited to about 110° before perspective distortions will begin to occur.
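The geometry of cylindrical projection, and the reason its vertical coverage is limited, can be sketched directly. In this illustrative mapping, horizontal position is proportional to the yaw angle while vertical position follows tan(pitch), which grows without bound as the view approaches straight up or straight down.

```python
import math

def cyl_project(yaw_rad: float, pitch_rad: float, radius: float = 1.0):
    """Map a view direction onto an unrolled cylinder of the given
    radius. x is the arc length around the cylinder; y stretches as
    tan(pitch), which is why cylindrical panoramas distort badly
    beyond roughly +/-55 degrees of tilt."""
    x = radius * yaw_rad
    y = radius * math.tan(pitch_rad)
    return x, y
```

At a pitch of 45° the vertical coordinate is already equal to the radius; by 80° it is nearly six times the radius, so the image would need to be impractically tall, which matches the roughly 110° total pitch limit cited above.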

For pitch angles greater than 110°, a spherical projection method is recommended. A spherical projection image appears with correct perspective when it is mapped onto the inside surface of a sphere and viewed from the center point of the sphere. It can display a full 360° yaw and 180° pitch from nadir to zenith. This kind of an image can only be created if the camera lens has a 180° field of view or greater, or if the source material is collected and stitched in rows as well as columns. Spherical projection can also be referred to as an equirectangular projection. The nadir and zenith points appear stretched into lines at the bottom and top of a flattened spherical projection image so that when it is mapped onto a sphere, these lines converge at the poles of the geometry.
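The equirectangular layout has a particularly simple pixel-to-angle relationship, which a short sketch can make concrete: in a 2:1 image, yaw maps linearly across the width and pitch linearly over the height. The coordinate conventions here (yaw −180° to 180°, pitch −90° at the nadir to +90° at the zenith) are one common choice, not a standard fixed by this article.

```python
def equirect_to_angles(u: float, v: float, width: int, height: int):
    """Convert a pixel position in an equirectangular image to view
    angles: yaw spans -180..180 degrees across the width, pitch spans
    -90 (nadir) to +90 (zenith) degrees over the height, with v
    measured downward from the top of the image."""
    yaw = (u / width - 0.5) * 360.0
    pitch = (0.5 - v / height) * 180.0
    return yaw, pitch

# The centre pixel of a 4096x2048 image looks straight ahead:
print(equirect_to_angles(2048, 1024, 4096, 2048))  # -> (0.0, 0.0)
```

Because every row of pixels covers the full 360° of yaw regardless of latitude, the single points at nadir and zenith are stretched across the entire bottom and top rows, exactly as described above.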

Another solution to full 360° yaw and 180° pitch photographic virtual reality is cubic projection. A spherical panorama can be converted into the six faces of a cube. The cube faces are actually 90° × 90° rectilinear images. Cubic projection is sometimes referred to as cube strip projection or hemicube projection.
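Converting a spherical panorama to cube faces amounts to deciding, for each view direction, which face the direction pierces. A minimal sketch of that face-selection step, using the dominant axis of the unit view vector (the face names are an illustrative convention):

```python
import math

def cube_face(yaw_deg: float, pitch_deg: float) -> str:
    """Return which of the six cube faces a view direction lands on,
    by taking the largest component of the unit view vector."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = math.cos(pitch) * math.sin(yaw)   # right
    y = math.sin(pitch)                   # up
    z = math.cos(pitch) * math.cos(yaw)   # forward
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "right" if x > 0 else "left"
    if ay >= ax and ay >= az:
        return "top" if y > 0 else "bottom"
    return "front" if z > 0 else "back"
```

A full converter would then project the direction onto the chosen face plane and resample the spherical image; since each face is a 90° × 90° rectilinear view, straight lines in the scene stay straight within a face, which is one practical advantage of cubic projection.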


Once a source image is complete it must be compressed into an interactive movie file or mapped onto geometry and assigned a user interface so that it can be navigated in a player. The image is assigned a default pan, which is the horizontal direction that the image will face when it is initially opened in a player. The pan range is determined by the yaw of the source image. In most cases this is 360°, but it can be less if the image is not wrapped and the interface will stop the pan when the edge is reached.

The default tilt is the vertical direction that the image will face when it is initially opened in a player. The tilt range can be set from 0° (no tilt at all) to 180° (complete tilt from nadir to zenith) and is determined by the pitch of the source image.

The default zoom is the magnification of the image when it is initially opened in a player. The zoom range is determined by the resolution of the source image. Low-resolution images will appear unacceptably soft, or pixelated, if the zoom range is not limited.
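The pan, tilt, and zoom constraints described above are what a player enforces on every user interaction. A hedged sketch of that logic (the range values are illustrative defaults, not taken from any particular player): pan wraps around when the panorama covers a full 360°, and otherwise stops at the edges, while tilt and zoom are simply clamped.

```python
def clamp_view(pan, tilt, zoom,
               pan_range=(0.0, 360.0),
               tilt_range=(-90.0, 90.0),
               zoom_range=(1.0, 4.0)):
    """Keep the current view inside the ranges set at authoring time.
    A wrapped panorama lets pan cycle freely through 360 degrees; an
    unwrapped one stops at its edges. Tilt and zoom are clamped."""
    lo, hi = pan_range
    if (hi - lo) >= 360.0:
        pan = lo + (pan - lo) % 360.0       # wrapped: cycle around
    else:
        pan = max(lo, min(hi, pan))         # unwrapped: stop at edge
    tilt = max(tilt_range[0], min(tilt_range[1], tilt))
    zoom = max(zoom_range[0], min(zoom_range[1], zoom))
    return pan, tilt, zoom
```

Limiting the zoom range relative to the source-image resolution, as the paragraph above notes, is just a matter of choosing `zoom_range` so the most magnified view still maps at least one source pixel to each screen pixel.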

Photographic Virtual Reality Objects

A photographic virtual reality object is an image of an object that can be rotated and tumbled, and whose view can be dollied in and out, creating the sensation of holding the object and examining it. It is created by photographing an object every few degrees as it is rotated, or by rotating the camera around the object’s y-axis as it is photographed. The images are then reassembled and compressed into an interactive movie file. Special tabletop rigs are available for capturing object files; object rigs are designed to keep the object and camera in registration during shooting to assure smooth playback. Tumble can be added by capturing rows of images as the camera or object is pitched on the object’s x-axis, allowing the user to turn the object over. In some cases the object can include animation by adding multiple frames to a single view of the object; this technique is referred to as view states. An example would be an image of a car that the user can rotate as well as open and close the doors.
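Playback of such an object movie reduces to picking the stored frame nearest the requested rotation and tumble. A minimal sketch, assuming one shot every 10° (36 columns per row) and frames stored row by row; the grid layout and function name are illustrative, not from any particular file format:

```python
def object_frame(yaw_deg: float, row: int = 0, cols: int = 36) -> int:
    """Index into the stored sequence of object images: one column
    per rotation step (36 columns = one shot every 10 degrees), one
    row per tumble angle, frames stored row by row."""
    step = 360.0 / cols
    col = round(yaw_deg / step) % cols
    return row * cols + col
```

As the user drags, the player repeatedly calls a lookup like this and swaps in the matching photograph, which is why keeping the object and camera in precise registration during shooting matters so much for smooth playback.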

Photographic Virtual Reality Scenes

A photographic virtual reality scene can be made up of multiple panoramic nodes and objects, as well as audio and links to other digital media and Internet addresses. Additionally, a photographic virtual reality scene can be made from three-dimensional models rendered in a computer graphics application. By combining these media, a virtual world can be created that is capable of telling a complex story in a unique and dynamic way. Photographic virtual reality scenes are also referred to as hypermedia. This differs from multimedia, which is generally organized in a linear structure. Hypermedia can be entered, or accessed, from multiple points, navigated in a non-linear, exploratory experience, and is often open ended. It has its foundation in hypertext, the linking of digital documents, which could accommodate true interactive narrative and which Tim Berners-Lee built into the World Wide Web in 1990 as the hyperlink, a way to organize and access digital documents and other resources. Hypermedia takes this to the next level by creating a multi-sensory experience. Through these technologies it is now possible to enter and inhabit an image, which is what Robert Barker was trying to achieve over 200 years ago.

