QuickTime VR 学习5


5. APPLICATIONS

The panoramic viewing technology can be applied to applications which require the exploration of real or imaginary scenes. Some example applications include virtual travel, real estate property inspection, architecture visualizations, virtual museums, virtual shopping and virtual reality games.

An example of a panoramic movie application is the commercial CD-ROM title Star Trek/The Next Generation–Interactive Technical Manual. This title lets the user navigate in the Starship Enterprise using panoramic movies. Several thousand still photographs were shot to create more than two hundred panoramic images, which cover most areas in the starship. In addition, many object movies were created from the props in the set.

The object movie can be applied to visualize a scientific or engineering simulation. Most simulations require lengthy computations on sophisticated computers. The simulation results can be computed for all the possible view orientations and stored as an object movie, which can then be inspected by anyone with a personal computer.
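To make the frame lookup concrete, the sketch below (my illustration, not code from the paper; the angular step sizes, names and storage layout are assumptions) treats a precomputed object movie as a two-dimensional array of frames indexed by pan and tilt, so inspecting the simulation reduces to selecting the nearest stored frame.

    # Minimal sketch: an object movie viewed as a 2D array of precomputed
    # frames.  The 10-degree steps and the dictionary layout are
    # illustrative assumptions, not values from the paper.
    PAN_STEP = 10                     # degrees between adjacent pan frames
    TILT_STEP = 10                    # degrees between adjacent tilt frames
    TILT_MIN, TILT_MAX = -90, 90

    def frame_index(pan_deg, tilt_deg):
        """Map a requested orientation to the nearest stored frame index."""
        tilt_deg = max(TILT_MIN, min(TILT_MAX, tilt_deg))
        pan_i = round((pan_deg % 360) / PAN_STEP) % (360 // PAN_STEP)
        tilt_i = round((tilt_deg - TILT_MIN) / TILT_STEP)
        return pan_i, tilt_i

    # frames[(pan_i, tilt_i)] would hold the simulation image computed for
    # that orientation; playback only ever looks up and displays a frame.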

Time-varying environment maps may be used to include motions in a scene. An example of time-varying environment maps has been generated using time-lapse photography. A camera was fixed at the same location and took a panoramic picture every 30 minutes during a whole day. The resulting movie shows the passage of time while the user is freely looking around.
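A hedged sketch of how such a time-varying map could be driven (the 30-minute interval comes from the example above; the function and variable names are mine): the clock selects which panorama to use, and the viewing direction is handled separately by the usual panorama warp.

    # Time-lapse panoramas: one panorama per 30-minute interval, 48 per day.
    INTERVAL_MIN = 30
    FRAMES_PER_DAY = 24 * 60 // INTERVAL_MIN   # 48

    def panorama_for_time(minutes_since_midnight):
        """Index of the panorama captured closest to the given time of day."""
        return int(minutes_since_midnight // INTERVAL_MIN) % FRAMES_PER_DAY

    # Display step (illustrative): pick the panorama by time, then warp it
    # to the user's pan/tilt exactly as for a static environment map, e.g.
    # view = warp_to_view(panoramas[panorama_for_time(t)], pan, tilt)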

Another use of the orientation-independent movie is in interactive TV. A movie can be broadcast in a 360-degree format, perhaps using multiple channels. Each TV viewer can freely control the camera angle locally while watching the movie. A similar idea called "electronic panning camera" has been demonstrated for video conferencing applications [29].

Although most applications generated with the image-based approach are likely to be CD-ROM based in the near future because of CD-ROM's large storage capacity, the variable resolution files make the approach practical for network transmission. A low-resolution panoramic movie takes up less than 100 KB per node and provides 360-degree panning in a 320x240-pixel window with reasonable quality. As network speeds improve and better compression technologies become available, on-line navigation of panoramic spaces may become more common in the near future. One can use the same spatial navigation metaphor to browse an informational space. The ability to attach information to some spatial representations may make it easier to become familiar with an intricate information space.
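As a rough illustration of that claim (the link speeds below are assumptions chosen only for the arithmetic, not figures from the paper), the transfer time for one sub-100 KB node can be estimated directly:

    # Back-of-envelope transfer time for one low-resolution panoramic node.
    # The 100 KB bound is quoted in the text; the link speeds are assumed.
    NODE_SIZE_BYTES = 100 * 1024

    for label, bits_per_second in [("28.8 kbps modem", 28_800),
                                   ("1.5 Mbps link", 1_500_000)]:
        seconds = NODE_SIZE_BYTES * 8 / bits_per_second
        print(f"{label}: about {seconds:.1f} s per node")

Under these assumed speeds, even a dial-up connection moves a node in well under half a minute, which is why faster networks and better compression make on-line navigation increasingly plausible.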

6. CONCLUSIONS AND FUTURE DIRECTIONS

The image-based method makes use of environment maps, in particular cylindrical panoramic images, to compose a scene. The environment maps are orientation-independent images, which allow the user to look around in arbitrary view directions through the use of real-time image processing. Multiple environment maps can be linked together to define a scene. The user may move in the scene by jumping through the maps. The method may be extended to include motions with time-varying environment maps. In addition, the method makes use of a two-dimensional array of frames to view an object from different directions.
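A minimal sketch of such a linked scene (the data layout, names and hot-spot mechanism here are illustrative assumptions, not the QuickTime VR file format): each node stores one panoramic map plus the links that take the user to neighboring nodes.

    # Sketch of a scene as linked panoramic nodes.  Clicking a hot spot in
    # the current panorama jumps to the node it names.
    from dataclasses import dataclass, field

    @dataclass
    class PanoNode:
        name: str
        panorama_file: str                          # cylindrical environment map
        links: dict = field(default_factory=dict)   # hot-spot id -> target node

    scene = {
        "lobby":   PanoNode("lobby", "lobby.pano", {"door_to_hall": "hallway"}),
        "hallway": PanoNode("hallway", "hallway.pano", {"door_to_lobby": "lobby"}),
    }

    def jump(current_node, hot_spot_id):
        """Return the node reached from the clicked hot spot (or stay put)."""
        return scene[current_node].links.get(hot_spot_id, current_node)

    # jump("lobby", "door_to_hall") -> "hallway"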

The image-based method also provides a solution to the levels-of-detail problem in most 3D virtual reality display systems. Ideally, an object should be displayed in less detail when it is farther away and in more detail when it is close to the observer. However, automatically changing the level of detail is very difficult for most polygon-based objects. In practice, the same object is usually modeled at different detail levels and the appropriate one is chosen for display based on some viewing criteria and system performance [30], [31]. This approach is costly, as multiple versions of the objects need to be created and stored. Since one cannot predict in advance how an object will be displayed, it is difficult to store enough levels to include all possible viewing conditions.

The image-based method automatically provides the appropriate level of detail. The images are views of a scene from a range of locations. As the viewpoint moves from one location to another within the range, the image associated with the new location is retrieved. In this way, the scene is always displayed at the appropriate level of detail.

This method is the underlying technology for QuickTime VR, a system for creating and interacting with virtual environments. The system meets most of the objectives that we described in the introduction. The playback environment supports most computers and does not require special hardware. It uses images as a common representation and can therefore accommodate both real and imaginary scenes. The display speed is independent of scene complexity and rendering quality. The making of the Star Trek title in a rather short time frame (less than 2 months for generating all the panoramic movies of the Enterprise) has demonstrated the system's relative ease in creating a complex environment.

The method's chief limitations are the requirements that the scene be static and that movement be confined to particular points. The first limitation may be eased somewhat with the use of time-varying environment maps. The environment maps may have motions in some local regions, such as opening a door. The motion may be triggered by an event or continuously looping. Because the motions are mostly confined to some local regions, the motion frames can be compressed efficiently with inter-frame compression.
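A sketch of how a locally animated region might be handled (the rectangle coordinates, array shapes and use of NumPy are illustrative assumptions): only the pixels inside the moving region are replaced each frame, which is also why inter-frame compression of those frames is effective.

    # Static panorama plus a small animated region (e.g. an opening door).
    import numpy as np

    def apply_local_motion(base_map, region_frames, frame_index, x0, y0):
        """Overwrite one rectangle of the static map with the current motion
        frame; every pixel outside the rectangle is left untouched."""
        updated = base_map.copy()
        patch = region_frames[frame_index]
        h, w = patch.shape[:2]
        updated[y0:y0 + h, x0:x0 + w] = patch
        return updated

    # Illustrative sizes: a 256x1024 panorama with a 64x32 animated region.
    base = np.zeros((256, 1024, 3), dtype=np.uint8)
    door_frames = [np.full((64, 32, 3), i * 30, dtype=np.uint8) for i in range(8)]
    frame3 = apply_local_motion(base, door_frames, 3, x0=500, y0=100)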

Another solution to the static environment constraint is the combination of image warping and 3D rendering. Since most backgrounds are static, they can be generated efficiently from environment maps. The objects which are time-varying or event-driven can be rendered on the fly using 3D rendering. The rendered objects are composited onto the map-generated background in real time using layering, alpha masking or z-buffering. Usually, the number of interactive objects which need to be rendered in real time is small; therefore, even a software-based 3D renderer may be enough for the task.
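The compositing step itself is standard; here is a small sketch of the alpha-masking variant (array shapes and the straight-alpha convention are assumptions of mine): the background comes from the environment-map warp and the foreground plus mask from the real-time 3D renderer.

    # 'Over' compositing of a rendered object onto the map-generated background.
    import numpy as np

    def composite_over(background, foreground, alpha):
        """alpha is 1.0 where the rendered object covers a pixel, 0.0 where
        the warped background should show through."""
        a = alpha[..., None].astype(np.float32)        # broadcast mask to RGB
        out = foreground.astype(np.float32) * a + background.astype(np.float32) * (1.0 - a)
        return out.astype(np.uint8)

    # Illustrative 320x240 frame: background from the panorama warp,
    # foreground and mask from the software 3D renderer.
    bg = np.zeros((240, 320, 3), dtype=np.uint8)
    fg = np.full((240, 320, 3), 200, dtype=np.uint8)
    mask = np.zeros((240, 320), dtype=np.float32)
    mask[80:160, 120:200] = 1.0
    result = composite_over(bg, fg, mask)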

Being able to move freely in a photographic scene is more difficult. For computer-rendered scenes, the view interpolation method may be a solution. The method requires depth and camera information for automatic image registration. This information is not easily obtainable from photographic scenes.

Another constraint with the current panoramic player is its inability to look straight up or down, due to the use of cylindrical panoramic images. This limitation can be removed if other types of environment maps, such as cubic or spherical maps, are used. However, capturing a cubic or a spherical map photographically may be more difficult than capturing a cylindrical one.
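The vertical limit follows directly from the cylindrical projection. In a standard formulation (the notation is mine, not the paper's), a view direction with azimuth \(\theta\) and elevation \(\varphi\) maps onto a cylinder of radius \(r\) as

\[
u = r\,\theta, \qquad v = r\tan\varphi,
\]

so storing elevations up to \(\pm\varphi_{\max}\) requires an image of height \(2r\tan\varphi_{\max}\), which grows without bound as \(\varphi_{\max}\to 90^{\circ}\). Cubic and spherical maps have no such singularity, which is why they can represent the straight-up and straight-down views.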

The current player does not require any input or output devices other than those commonly available on personal computers. However, input devices with more than two degrees of freedom may be useful, since the navigation is more than two-dimensional. Similarly, immersive stereo displays combined with 3D sound may enhance the experience of navigation.

One of the ultimate goals of virtual reality will be achieved when one cannot discern what is real from what is virtual. With the ability to use photographs of real scenes for virtual navigation, we may be one step closer.

7. ACKNOWLEDGMENTS

The author is grateful to the entire QuickTime VR team for their tremendous efforts on which this paper is based. Specifically, the author would like to acknowledge the following individuals: Eric Zarakov, for his managerial support and making QuickTime VR a reality; Ian Small, for his contributions to the engineering of the QuickTime VR product; Ken Doyle, for his QuickTime integration work; Michael Chen, for his work on user interface, the object maker and the object player; Ken Turkowski, for code optimization and PowerPC porting help; Richard Mander, for user interface design and study; and Ted Casey, for content and production support. The assistance from the QuickTime team, especially Jim Nitchal's help on code optimization, is also appreciated. Dan O'Sullivan and Mitch Yawitz's early work on navigable movies contributed to the development of the object movie.

Most of the work reported on in this paper began in the Computer Graphics program of the Advanced Technology Group at Apple Computer, Inc. The panoramic player was inspired by work from Gavin Miller. Ned Greene and Lance Williams contributed ideas related to environment mapping and view interpolation. Frank Crow's encouragement and support throughout were critical in keeping the research going. The author's interns, Lili Cheng, Chase Garfinkle and Patrick Teo, helped in shaping up the project into its current state.

The images in figure 6 are extracted from the "Apple Company Store in QuickTime VR" CD. The Great Wall photographs in figure 9 were taken with the assistance of Helen Tahn, Zen Jing and Professor En-Hua Wu. Thanks go to Vicki de Mey for proofreading the paper.

REFERENCES

[1] Lippman, A. Movie Maps: An Application of the Optical Videodisc to Computer Graphics. Computer Graphics (Proc. SIGGRAPH '80), 32-43.
[2] Ripley, D. G. DVI–a Digital Multimedia Technology. Communications of the ACM, 32(7):811-822, 1989.
[3] Miller, G., E. Hoffert, S. E. Chen, E. Patterson, D. Blackketter, S. Rubin, S. A. Applin, D. Yim and J. Hanan. The Virtual Museum: Interactive 3D Navigation of a Multimedia Database. The Journal of Visualization and Computer Animation, (3):183-197, 1992.
[4] Mohl, R. Cognitive Space in the Interactive Movie Map: an Investigation of Spatial Learning in Virtual Environments. MIT Doctoral Thesis, 1981.
[5] Apple Computer, Inc. QuickTime, Version 1.5 for Developers CD. 1992.
[6] Blinn, J. F. and M. E. Newell. Texture and Reflection in Computer Generated Images. Communications of the ACM, 19(10):542-547, October 1976.
[7] Hall, R. Hybrid Techniques for Rapid Image Synthesis. In Whitted, T. and R. Cook, eds., Image Rendering Tricks, Course Notes 16 for SIGGRAPH '86, August 1986.
[8] Greene, N. Environment Mapping and Other Applications of World Projections. IEEE Computer Graphics and Applications, 6(11):21-29, November 1986.
[9] Yelick, S. Anamorphic Image Processing. B.S. Thesis, Department of Electrical Engineering and Computer Science, May 1980.
[10] Hodges, M. and R. Sasnett. Multimedia Computing: Case Studies from MIT Project Athena. 89-102. Addison-Wesley, 1993.
[11] Miller, G. and S. E. Chen. Real-Time Display of Surroundings Using Environment Maps. Apple Computer, Inc., Technical Report No. 44, 1993.
[12] Greene, N. and M. Kass. Approximating Visibility with Environment Maps. Apple Computer, Inc., Technical Report No. 41.
[13] Regan, M. and R. Pose. Priority Rendering with a Virtual Reality Address Recalculation Pipeline. Computer Graphics (Proc. SIGGRAPH '94), 155-162.
[14] Greene, N. Creating Raster Omnimax Images from Multiple Perspective Views Using the Elliptical Weighted Average Filter. IEEE Computer Graphics and Applications, 6(6):21-27, June 1986.
[15] Irani, M. and S. Peleg. Improving Resolution by Image Registration. Graphical Models and Image Processing, (3), May 1991.
[16] Szeliski, R. Image Mosaicing for Tele-Reality Applications. DEC Cambridge Research Lab Technical Report CRL 94/2, May 1994.
[17] Mann, S. and R. W. Picard. Virtual Bellows: Constructing High Quality Stills from Video. Proceedings of ICIP-94, 363-367, November 1994.
[18] Chen, S. E. and L. Williams. View Interpolation for Image Synthesis. Computer Graphics (Proc. SIGGRAPH '93), 279-288.
[19] Cheng, N. L. View Reconstruction from Uncalibrated Cameras for Three-Dimensional Scenes. Master's Thesis, Department of Electrical Engineering and Computer Sciences, U. C. Berkeley, 1995.
[20] Laveau, S. and O. Faugeras. 3-D Scene Representation as a Collection of Images and Fundamental Matrices. INRIA Technical Report No. 2205, February 1994.
[21] Williams, L. Pyramidal Parametrics. Computer Graphics (Proc. SIGGRAPH '83), 1-11.
[22] Berman, D. R., J. T. Bartell and D. H. Salesin. Multiresolution Painting and Compositing. Computer Graphics (Proc. SIGGRAPH '94), 85-90.
[23] Perlin, K. and D. Fox. Pad: An Alternative Approach to the Computer Interface. Computer Graphics (Proc. SIGGRAPH '93), 57-72.
[24] Hoffert, E., L. Mighdoll, M. Kreuger, M. Mills, J. Cohen, et al. QuickTime: an Extensible Standard for Digital Multimedia. Proceedings of the IEEE Computer Conference (CompCon '92), February 1992.
[25] Apple Computer, Inc. Inside Macintosh: QuickTime. Addison-Wesley, 1993.
[26] Chen, S. E. and G. S. P. Miller. Cylindrical to Planar Image Mapping Using Scanline Coherence. United States Patent No. 5,396,583, Mar. 7, 1995.
[27] Chen, M. A Study in Interactive 3-D Rotation Using 2-D Control Devices. Computer Graphics (Proc. SIGGRAPH '88), 121-130.
[28] Weghorst, H., G. Hooper and D. Greenberg. Improved Computational Methods for Ray Tracing. ACM Transactions on Graphics, 3(1):52-69, 1984.
[29] 'Electronic Panning' Device Opens Viewing Range. Digital Media: A Seybold Report, 2(3):13-14, August 1992.
[30] Clark, J. H. Hierarchical Geometric Models for Visible Surface Algorithms. Communications of the ACM, 19(10):547-554, October 1976.
[31] Funkhouser, T. A. and C. H. Séquin. Adaptive Display Algorithm for Interactive Frame Rates During Visualization of Complex Virtual Environments. Computer Graphics (Proc. SIGGRAPH '93), 247-254.

Figure 5. A perspective view created from warping a region enclosed by the yellow box in the panoramic image.

Figure 6. A walkthrough sequence created from a set of panoramas spaced 5 feet apart.

Figure 9. A stitched panoramic image and some of the photographs from which it was stitched.
