3D Videocommunication
Algorithms, Concepts and Real-time Systems in Human Centred Communication
Hardcover, English, 2005
By Oliver Schreer (Heinrich-Hertz-Institut, Berlin, Germany), Peter Kauff (Heinrich-Hertz-Institut, Berlin, Germany) and Thomas Sikora (Heinrich-Hertz-Institute (HHI), Berlin, Germany)
1 939 kr
Product information
- Publication date: 2005-07-28
- Dimensions: 174 x 250 x 26 mm
- Weight: 822 g
- Language: English
- Number of pages: 368
- Publisher: John Wiley & Sons Inc
- EAN: 9780470022719
You might also be interested in
Fraunhofer Technologies for Heritage Protection in Times of Climate Change and Digitization.
Alexandra Schieweck, Jakob Barz, Maris Bauer, Florian Gruber, Tobias Hellmund, Timo Hevesi-Toth, Kristina Holl, Andreas Holländer, Johannes Hügle, Wiktoria Humanicka, Erich Jelen, Vanessa Jelito, Yvonne Kasimir, Ralf Kilian, Martin Kilo, Katharina Klein, Martin Krus, Erell Le Drezen, Johanna Leissner, Johanna Moßgraber, Uta Pollmer, Thomas Rauch, Sylvain Renault, Stefan Bichlmair, Lisa Rentschler, Jürgen Reuter, Frank-Holm Rögner, Sabrina Rota, Magdalena Roth, Rainer Richter, Pedro Santos, Erik Schmidt, Gerhard Schottner, Cansel Erdogmus, Oliver Schreer, Peter Weber, Olaf Zywitzki, Volker Franke, Fabian Friederich, Theobald Fuchs, Constanze Fuhrmann, Wulf Grählert
1 129 kr
Media Production, Delivery and Interaction for Platform Independent Systems
Oliver Schreer (Fraunhofer Heinrich Hertz Institute, Technical University Berlin, Germany), Jean-François Macq (Alcatel-Lucent Bell Labs, Belgium), Omar Aziz Niamut (TNO, The Netherlands), Javier Ruiz-Hidalgo (Universitat Politecnica de Catalunya, Spain), Ben Shirley (MediaCityUK, University of Salford, UK), Georg Thallinger (DIGITAL - Institute for Information and Communication Technologies, JOANNEUM RESEARCH, Austria), Graham Thomas (BBC Research & Development, UK)
1 489 kr
Dr Oliver Schreer, Heinrich-Hertz-Institute and TU Berlin, Germany. Oliver Schreer is Adjunct Professor at the Faculty of Electrical Engineering and Computer Science, Technical University Berlin. He lectures on image processing in videocommunications and is a regular guest editor for the IEEE Transactions on Circuits and Systems for Video Technology.

Dr Peter Kauff, Heinrich-Hertz-Institute, Berlin, Germany. Peter Kauff is head of the “Immersive Media & 3D Video” group at the Heinrich-Hertz-Institute (HHI), Fraunhofer Gesellschaft, Berlin. He has been involved in numerous German and European projects related to digital HDTV signal processing and coding, interactive MPEG-4-based services, and advanced 3D video processing for immersive telepresence and immersive media.

Professor Dr Thomas Sikora, Head of the Communication Systems Group, Technical University of Berlin, Germany. As chairman of the ISO-MPEG (Moving Picture Experts Group) video group, Dr Sikora was responsible for the development and standardization of the MPEG video coding algorithms. He frequently works as an industry consultant on issues related to interactive digital video, and he is an appointed member of the supervisory boards of a number of German companies and international research organizations. He is an Associate Editor of the IEEE Signal Processing Magazine and the EURASIP journal Signal Processing: Image Communication, and currently serves as Editor-in-Chief of the IEEE Transactions on Circuits and Systems for Video Technology.
Table of contents

- List of Contributors xiii
- Symbols xix
- Abbreviations xxi
- Introduction (Oliver Schreer, Peter Kauff and Thomas Sikora) 1

Section I: Applications of 3D Videocommunication 5

1 History of Telepresence (Wijnand A. IJsselsteijn) 7
- 1.1 Introduction 7
- 1.2 The Art of Immersion: Barker’s Panoramas 10
- 1.3 Cinerama and Sensorama 11
- 1.4 Virtual Environments 14
- 1.5 Teleoperation and Telerobotics 16
- 1.6 Telecommunications 18
- 1.7 Conclusion 19
- References 20

2 3D TV Broadcasting (Christoph Fehn) 23
- 2.1 Introduction 23
- 2.2 History of 3D TV Research 24
- 2.3 A Modern Approach to 3D TV 26
- 2.3.1 A Comparison with a Stereoscopic Video Chain 28
- 2.4 Stereoscopic View Synthesis 29
- 2.4.1 3D Image Warping 29
- 2.4.2 A ‘Virtual’ Stereo Camera 30
- 2.4.3 The Disocclusion Problem 32
- 2.5 Coding of 3D Imagery 34
- 2.5.1 Human Factor Experiments 35
- 2.6 Conclusions 36
- Acknowledgements 37
- References 37

3 3D in Content Creation and Post-production (Oliver Grau) 39
- 3.1 Introduction 39
- 3.2 Current Techniques for Integrating Real and Virtual Scene Content 41
- 3.3 Generation of 3D Models of Dynamic Scenes 44
- 3.4 Implementation of a Bidirectional Interface Between Real and Virtual Scenes 46
- 3.4.1 Head Tracking 49
- 3.4.2 View-dependent Rendering 50
- 3.4.3 Mask Generation 50
- 3.4.4 Texturing 51
- 3.4.5 Collision Detection 52
- 3.5 Conclusions 52
- References 52

4 Free Viewpoint Systems (Masayuki Tanimoto) 55
- 4.1 General Overview of Free Viewpoint Systems 55
- 4.2 Image Domain System 57
- 4.2.1 EyeVision 57
- 4.2.2 3D-TV 58
- 4.2.3 Free Viewpoint Play 59
- 4.3 Ray-space System 59
- 4.3.1 FTV (Free Viewpoint TV) 59
- 4.3.2 Bird’s-eye View System 60
- 4.3.3 Light Field Video Camera System 62
- 4.4 Surface Light Field System 64
- 4.5 Model-based System 65
- 4.5.1 3D Room 65
- 4.5.2 3D Video 66
- 4.5.3 Multi-texturing 67
- 4.6 Integral Photography System 68
- 4.6.1 NHK System 68
- 4.6.2 1D-II 3D Display System 70
- 4.7 Summary 70
- References 71

5 Immersive Videoconferencing (Peter Kauff and Oliver Schreer) 75
- 5.1 Introduction 75
- 5.2 The Meaning of Telepresence in Videoconferencing 76
- 5.3 Multi-party Communication Using the Shared Table Concept 79
- 5.4 Experimental Systems for Immersive Videoconferencing 83
- 5.5 Perspective and Trends 87
- Acknowledgements 88
- References 88

Section II: 3D Data Representation and Processing 91

6 Fundamentals of Multiple-view Geometry (Spela Ivekovic, Andrea Fusiello and Emanuele Trucco) 93
- 6.1 Introduction 93
- 6.2 Pinhole Camera Geometry 94
- 6.3 Two-view Geometry 96
- 6.3.1 Introduction 96
- 6.3.2 Epipolar Geometry 97
- 6.3.3 Rectification 102
- 6.3.4 3D Reconstruction 104
- 6.4 N-view Geometry 106
- 6.4.1 Trifocal Geometry 106
- 6.4.2 The Trifocal Tensor 108
- 6.4.3 Multiple-view Constraints 109
- 6.4.4 Uncalibrated Reconstruction from N views 110
- 6.4.5 Autocalibration 111
- 6.5 Summary 112
- References 112

7 Stereo Analysis (Nicole Atzpadin and Jane Mulligan) 115
- 7.1 Stereo Analysis Using Two Cameras 115
- 7.1.1 Standard Area-based Stereo Analysis 117
- 7.1.2 Fast Real-time Approaches 120
- 7.1.3 Post-processing 123
- 7.2 Disparity From Three or More Cameras 125
- 7.2.1 Two-camera versus Three-camera Disparity 127
- 7.2.2 Correspondence Search with Three Views 128
- 7.2.3 Post-processing 129
- 7.3 Conclusion 130
- References 130

8 Reconstruction of Volumetric 3D Models (Peter Eisert) 133
- 8.1 Introduction 133
- 8.2 Shape-from-Silhouette 135
- 8.2.1 Rendering of Volumetric Models 136
- 8.2.2 Octree Representation of Voxel Volumes 137
- 8.2.3 Camera Calibration from Silhouettes 139
- 8.3 Space-carving 140
- 8.4 Epipolar Image Analysis 143
- 8.4.1 Horizontal Camera Motion 143
- 8.4.2 Image Cube Trajectory Analysis 145
- 8.5 Conclusions 148
- References 148

9 View Synthesis and Rendering Methods (Reinhard Koch and Jan-Friso Evers-Senne) 151
- 9.1 The Plenoptic Function 152
- 9.1.1 Sampling the Plenoptic Function 152
- 9.1.2 Recording of the Plenoptic Samples 153
- 9.2 Categorization of Image-based View Synthesis Methods 154
- 9.2.1 Parallax Effects in View Rendering 154
- 9.2.2 Taxonomy of IBR Systems 156
- 9.3 Rendering Without Geometry 158
- 9.3.1 The Aspen Movie-Map 158
- 9.3.2 Quicktime VR 158
- 9.3.3 Central Perspective Panoramas 159
- 9.3.4 Manifold Mosaicing 159
- 9.3.5 Concentric Mosaics 161
- 9.3.6 Cross-slit Panoramas 162
- 9.3.7 Light Field Rendering 162
- 9.3.8 Lumigraph 163
- 9.3.9 Ray Space 164
- 9.3.10 Related Techniques 164
- 9.4 Rendering with Geometry Compensation 165
- 9.4.1 Disparity-based Interpolation 165
- 9.4.2 Image Transfer Methods 166
- 9.4.3 Depth-based Extrapolation 167
- 9.4.4 Layered Depth Images 168
- 9.5 Rendering from Approximate Geometry 169
- 9.5.1 Planar Scene Approximation 169
- 9.5.2 View-dependent Geometry and Texture 169
- 9.6 Recent Trends in Dynamic IBR 170
- References 172

10 3D Audio Capture and Analysis (Markus Schwab and Peter Noll) 175
- 10.1 Introduction 175
- 10.2 Acoustic Echo Control 176
- 10.2.1 Single-channel Echo Control 177
- 10.2.2 Multi-channel Echo Control 179
- 10.3 Sensor Placement 181
- 10.4 Acoustic Source Localization 182
- 10.4.1 Introduction 182
- 10.4.2 Real-time System and Results 183
- 10.5 Speech Enhancement 185
- 10.5.1 Multi-channel Speech Enhancement 186
- 10.5.2 Single-channel Noise Reduction 187
- 10.6 Conclusions 190
- References 191

11 Coding and Standardization (Aljoscha Smolic and Thomas Sikora) 193
- 11.1 Introduction 193
- 11.2 Basic Strategies for Coding Images and Video 194
- 11.2.1 Predictive Coding of Images 194
- 11.2.2 Transform Domain Coding of Images and Video 195
- 11.2.3 Predictive Coding of Video 198
- 11.2.4 Hybrid MC/DCT Coding for Video Sequences 199
- 11.2.5 Content-based Video Coding 201
- 11.3 Coding Standards 202
- 11.3.1 JPEG and JPEG 2000 202
- 11.3.2 Video Coding Standards 202
- 11.4 MPEG-4 – an Overview 204
- 11.4.1 MPEG-4 Systems 205
- 11.4.2 BIFS 205
- 11.4.3 Natural Video 206
- 11.4.4 Natural Audio 207
- 11.4.5 SNHC 208
- 11.4.6 AFX 209
- 11.5 The MPEG 3DAV Activity 210
- 11.5.1 Omnidirectional Video 210
- 11.5.2 Free-viewpoint Video 212
- 11.6 Conclusion 214
- References 214

Section III: 3D Reproduction 217

12 Human Factors of 3D Displays (Wijnand A. IJsselsteijn, Pieter J.H. Seuntiëns and Lydia M.J. Meesters) 219
- 12.1 Introduction 219
- 12.2 Human Depth Perception 220
- 12.2.1 Binocular Disparity and Stereopsis 220
- 12.2.2 Accommodation and Vergence 222
- 12.2.3 Asymmetrical Binocular Combination 223
- 12.2.4 Individual Differences 224
- 12.3 Principles of Stereoscopic Image Production and Display 225
- 12.4 Sources of Visual Discomfort in Viewing Stereoscopic Displays 226
- 12.4.1 Keystone Distortion and Depth Plane Curvature 227
- 12.4.2 Magnification and Miniaturization Effects 228
- 12.4.3 Shear Distortion 229
- 12.4.4 Cross-talk 229
- 12.4.5 Picket Fence Effect and Image Flipping 230
- 12.5 Understanding Stereoscopic Image Quality 230
- References 231

13 3D Displays (Siegmund Pastoor) 235
- 13.1 Introduction 235
- 13.2 Spatial Vision 236
- 13.3 Taxonomy of 3D Displays 237
- 13.4 Aided-viewing 3D Display Technologies 238
- 13.4.1 Colour-multiplexed (Anaglyph) Displays 238
- 13.4.2 Polarization-multiplexed Displays 239
- 13.4.3 Time-multiplexed Displays 239
- 13.4.4 Location-multiplexed Displays 240
- 13.5 Free-viewing 3D Display Technologies 242
- 13.5.1 Electroholography 242
- 13.5.2 Volumetric Displays 243
- 13.5.3 Direction-multiplexed Displays 244
- 13.6 Conclusions 258
- References 258

14 Mixed Reality Displays (Siegmund Pastoor and Christos Conomis) 261
- 14.1 Introduction 261
- 14.2 Challenges for MR Technologies 263
- 14.3 Human Spatial Vision and MR Displays 264
- 14.4 Visual Integration of Natural and Synthetic Worlds 265
- 14.4.1 Free-form Surface-prism HMD 265
- 14.4.2 Waveguide Holographic HMD 266
- 14.4.3 Virtual Retinal Display 267
- 14.4.4 Variable-accommodation HMD 267
- 14.4.5 Occlusion Handling HMD 268
- 14.4.6 Video See-through HMD 269
- 14.4.7 Head-mounted Projective Display 269
- 14.4.8 Towards Free-viewing MR Displays 270
- 14.5 Examples of Desktop and Hand-held MR Systems 273
- 14.5.1 Hybrid 2D/3D Desktop MR System with Multimodal Interaction 273
- 14.5.2 Mobile MR Display with Markerless Video-based Tracking 275
- 14.6 Conclusions 278
- References 279

15 Spatialized Audio and 3D Audio Rendering (Thomas Sporer and Sandra Brix) 281
- 15.1 Introduction 281
- 15.2 Basics of Spatial Audio Perception 281
- 15.2.1 Perception of Direction 282
- 15.2.2 Perception of Distance 283
- 15.2.3 The Cocktail Party Effect 283
- 15.2.4 Final Remarks 284
- 15.3 Spatial Sound Reproduction 284
- 15.3.1 Discrete Multi-channel Loudspeaker Reproduction 284
- 15.3.2 Binaural Reproduction 287
- 15.3.3 Multi-object Audio Reproduction 287
- 15.4 Audiovisual Coherence 291
- 15.5 Applications 293
- 15.6 Summary and Outlook 293
- References 293

Section IV: 3D Data Sensors 297

16 Sensor-based Depth Capturing (João G.M. Gonçalves and Vítor Sequeira) 299
- 16.1 Introduction 299
- 16.2 Triangulation-based Sensors 301
- 16.3 Time-of-flight-based Sensors 303
- 16.3.1 Pulsed Wave 304
- 16.3.2 Continuous-wave-based Sensors 304
- 16.3.3 Summary 308
- 16.4 Focal Plane Arrays 308
- 16.5 Other Methods 309
- 16.6 Application Examples 309
- 16.7 The Way Ahead 311
- 16.8 Summary 311
- References 312

17 Tracking and User Interface for Mixed Reality (Yousri Abdeljaoued, David Marimon i Sanjuan and Touradj Ebrahimi) 315
- 17.1 Introduction 315
- 17.2 Tracking 316
- 17.2.1 Mechanical Tracking 317
- 17.2.2 Acoustic Tracking 317
- 17.2.3 Inertial Tracking 318
- 17.2.4 Magnetic Tracking 318
- 17.2.5 Optical Tracking 320
- 17.2.6 Video-based Tracking 320
- 17.2.7 Hybrid Tracking 323
- 17.3 User Interface 324
- 17.3.1 Tangible User Interfaces 324
- 17.3.2 Gesture-based Interfaces 325
- 17.4 Applications 328
- 17.4.1 Mobile Applications 328
- 17.4.2 Collaborative Applications 329
- 17.4.3 Industrial Applications 329
- 17.5 Conclusions 331
- References 331

Index 335