Interactive Displays
Natural Human-Interface Technologies
Hardcover, English, 2014
1 719 kr
Product information
- Publication date: 2014-09-26
- Dimensions: 175 x 250 x 27 mm
- Weight: 803 g
- Format: Hardcover
- Language: English
- Series: Wiley Series in Display Technology
- Number of pages: 408
- Publisher: John Wiley & Sons Inc
- ISBN: 9781118631379
Achintya K. Bhowmik, Intel Corporation, USA. Dr. Achintya K. Bhowmik is the director of perceptual computing technology and solutions at Intel Corporation, where his group focuses on developing next-generation computing solutions based on natural human-computer interaction and visual computing technologies and applications. He is a senior member of the IEEE and a program committee member of SID and IMID. He is an associate editor of the Journal of the Society for Information Display, and was guest editor for two special volumes, "Advances in OLED Displays" and "Interactive Displays". Dr. Bhowmik is an adjunct professor at Kyung-Hee University, Seoul, Korea, where he teaches courses on digital imaging and display, digital image processing, and the optics of liquid crystal displays. He is on the board of directors of OpenCV, the organization behind the open-source computer vision library.
Table of Contents
About the Author xiii
List of Contributors xv
Series Editor's Foreword xvii
Preface xix
List of Acronyms xxi

1 Senses, Perception, and Natural Human-Interfaces for Interactive Displays 1
Achintya K. Bhowmik
1.1 Introduction 1
1.2 Human Senses and Perception 4
1.3 Human Interface Technologies 9
1.3.1 Legacy Input Devices 9
1.3.2 Touch-based Interactions 11
1.3.3 Voice-based Interactions 13
1.3.4 Vision-based Interactions 15
1.3.5 Multimodal Interactions 18
1.4 Towards "True" 3D Interactive Displays 20
1.5 Summary 23
References 24

2 Touch Sensing 27
Geoff Walker
2.1 Introduction 27
2.2 Introduction to Touch Technologies 28
2.2.1 Touchscreens 30
2.2.2 Classifying Touch Technologies by Size and Application 30
2.2.3 Classifying Touch Technologies by Materials and Structure 32
2.2.4 Classifying Touch Technologies by the Physical Quantity Being Measured 33
2.2.5 Classifying Touch Technologies by Their Sensing Capabilities 33
2.2.6 The Future of Touch Technologies 34
2.3 History of Touch Technologies 35
2.4 Capacitive Touch Technologies 35
2.4.1 Projected Capacitive (P-Cap) 35
2.4.2 Surface Capacitive 47
2.5 Resistive Touch Technologies 51
2.5.1 Analog Resistive 51
2.5.2 Digital Multi-touch Resistive (DMR) 57
2.5.3 Analog Multi-touch Resistive (AMR) 59
2.6 Acoustic Touch Technologies 61
2.6.1 Surface Acoustic Wave (SAW) 61
2.6.2 Acoustic Pulse Recognition (APR) 64
2.6.3 Dispersive Signal Technology (DST) 67
2.7 Optical Touch Technologies 68
2.7.1 Traditional Infrared 68
2.7.2 Multi-touch Infrared 73
2.7.3 Camera-based Optical 76
2.7.4 In-glass Optical (Planar Scatter Detection – PSD) 81
2.7.5 Vision-based Optical 82
2.8 Embedded Touch Technologies 86
2.8.1 On-cell Mutual-capacitive 89
2.8.2 Hybrid In-cell/On-cell Mutual-capacitive 90
2.8.3 In-cell Mutual-capacitive 91
2.8.4 In-cell Light Sensing 93
2.9 Other Touch Technologies 96
2.9.1 Force-sensing 96
2.9.2 Combinations of Touch Technologies 98
2.10 Summary 98
2.11 Appendix 100
References 101

3 Voice in the User Interface 107
Andrew Breen, Hung H. Bui, Richard Crouch, Kevin Farrell, Friedrich Faubel, Roberto Gemello, William F. Ganong III, Tim Haulick, Ronald M. Kaplan, Charles L. Ortiz, Peter F. Patel-Schneider, Holger Quast, Adwait Ratnaparkhi, Vlad Sejnoha, Jiaying Shen, Peter Stubley and Paul van Mulbregt
3.1 Introduction 107
3.2 Voice Recognition 110
3.2.1 Nature of Speech 110
3.2.2 Acoustic Model and Front-end 112
3.2.3 Aligning Speech to HMMs 113
3.2.4 Language Model 114
3.2.5 Search: Solving Crosswords at 1000 Words a Second 115
3.2.6 Training Acoustic and Language Models 116
3.2.7 Adapting Acoustic and Language Models for Speaker Dependent Recognition 116
3.2.8 Alternatives to the "Canonical" System 117
3.2.9 Performance 117
3.3 Deep Neural Networks for Voice Recognition 119
3.4 Hardware Optimization 122
3.4.1 Lower Power Wake-up Computation 122
3.4.2 Hardware Optimization for Specific Computations 123
3.5 Signal Enhancement Techniques for Robust Voice Recognition 123
3.5.1 Robust Voice Recognition 124
3.5.2 Single-channel Noise Suppression 124
3.5.3 Multi-channel Noise Suppression 125
3.5.4 Noise Cancellation 125
3.5.5 Acoustic Echo Cancellation 127
3.5.6 Beamforming 127
3.6 Voice Biometrics 128
3.6.1 Introduction 128
3.6.2 Existing Challenges to Voice Biometrics 129
3.6.3 New Areas of Research in Voice Biometrics 130
3.7 Speech Synthesis 130
3.8 Natural Language Understanding 134
3.8.1 Mixed Initiative Conversations 135
3.8.2 Limitations of Slot and Filler Technology 137
3.9 Multi-turn Dialog Management 141
3.10 Planning and Reasoning 144
3.10.1 Technical Challenges 144
3.10.2 Semantic Analysis and Discourse Representation 146
3.10.3 Pragmatics 147
3.10.4 Dialog Management as Collaboration 148
3.10.5 Planning and Re-planning 149
3.10.6 Knowledge Representation and Reasoning 149
3.10.7 Monitoring 150
3.10.8 Suggested Readings 151
3.11 Question Answering 151
3.11.1 Question Analysis 152
3.11.2 Find Relevant Information 152
3.11.3 Answers and Evidence 153
3.11.4 Presenting the Answer 153
3.12 Distributed Voice Interface Architecture 154
3.12.1 Distributed User Interfaces 154
3.12.2 Distributed Speech and Language Technology 155
3.13 Conclusion 157
Acknowledgements 158
References 158

4 Visual Sensing and Gesture Interactions 165
Achintya K. Bhowmik
4.1 Introduction 165
4.2 Imaging Technologies: 2D and 3D 167
4.3 Interacting with Gestures 170
4.4 Summary 177
References 178

5 Real-Time 3D Sensing With Structured Light Techniques 181
Tyler Bell, Nikolaus Karpinsky and Song Zhang
5.1 Introduction 181
5.2 Structured Pattern Codifications 183
5.2.1 2D Pseudo-random Codifications 183
5.2.2 Binary Structured Codifications 184
5.2.3 N-ary Codifications 187
5.2.4 Continuous Sinusoidal Phase Codifications 187
5.3 Structured Light System Calibration 191
5.4 Examples of 3D Sensing with DFP Techniques 193
5.5 Real-Time 3D Sensing Techniques 195
5.5.1 Fundamentals of Digital-light-processing (DLP) Technology 196
5.5.2 Real-Time 3D Data Acquisition 198
5.5.3 Real-Time 3D Data Processing and Visualization 199
5.5.4 Example of Real-Time 3D Sensing 200
5.6 Real-Time 3D Sensing for Human Computer Interaction Applications 201
5.6.1 Real-Time 3D Facial Expression Capture and its HCI Implications 201
5.6.2 Real-Time 3D Body Part Gesture Capture and its HCI Implications 202
5.6.3 Concluding Human Computer Interaction Implications 204
5.7 Some Recent Advancements 204
5.7.1 Real-Time 3D Sensing and Natural 2D Color Texture Capture 204
5.7.2 Superfast 3D Sensing 206
5.8 Summary 208
Acknowledgements 209
References 209

6 Real-Time Stereo 3D Imaging Techniques 215
Lazaros Nalpantidis
6.1 Introduction 215
6.2 Background 216
6.3 Structure of Stereo Correspondence Algorithms 219
6.3.1 Matching Cost Computation 220
6.3.2 Matching Cost Aggregation 221
6.4 Categorization of Characteristics 222
6.4.1 Depth Estimation Density 222
6.4.2 Optimization Strategy 224
6.5 Categorization of Implementation Platform 225
6.5.1 CPU-only Methods 225
6.5.2 GPU-accelerated Methods 226
6.5.3 Hardware Implementations (FPGAs, ASICs) 227
6.6 Conclusion 229
References 229

7 Time-of-Flight 3D-Imaging Techniques 233
Daniël Van Nieuwenhove
7.1 Introduction 233
7.2 Time-of-Flight 3D Sensing 233
7.3 Pulsed Time-of-Flight Method 235
7.4 Continuous Time-of-Flight Method 236
7.5 Calculations 236
7.6 Accuracy 239
7.7 Limitations and Improvements 240
7.7.1 TOF Challenges 240
7.7.2 Theoretical Limits 241
7.7.3 Distance Aliasing 242
7.7.4 Multi-path and Scattering 243
7.7.5 Power Budget and Optimization 243
7.8 Time-of-Flight Camera Components 244
7.9 Typical Values 244
7.9.1 Light Power Range 244
7.9.2 Background Light 245
7.10 Current State of the Art 247
7.11 Conclusion 247
References 248

8 Eye Gaze Tracking 251
Heiko Drewes
8.1 Introduction and Motivation 251
8.2 The Eyes 253
8.3 Eye Trackers 256
8.3.1 Types of Eye Trackers 256
8.3.2 Corneal Reflection Method 257
8.4 Objections and Obstacles 260
8.4.1 Human Aspects 260
8.4.2 Outdoor Use 261
8.4.3 Calibration 261
8.4.4 Accuracy 261
8.4.5 Midas Touch Problem 262
8.5 Eye Gaze Interaction Research 263
8.6 Gaze Pointing 264
8.6.1 Solving the Midas Touch Problem 264
8.6.2 Solving the Accuracy Issue 265
8.6.3 Comparison of Mouse and Gaze Pointing 266
8.6.4 Mouse and Gaze Coordination 267
8.6.5 Gaze Pointing Feedback 269
8.7 Gaze Gestures 270
8.7.1 The Concept of Gaze Gestures 270
8.7.2 Gesture Detection Algorithm 270
8.7.3 Human Ability to Perform Gaze Gestures 271
8.7.4 Gaze Gesture Alphabets 272
8.7.5 Gesture Separation from Natural Eye Movement 273
8.7.6 Applications for Gaze Gestures 274
8.8 Gaze as Context 275
8.8.1 Activity Recognition 275
8.8.2 Reading Detection 277
8.8.3 Attention Detection 279
8.8.4 Using Gaze Context 280
8.9 Outlook 280
References 281

9 Multimodal Input for Perceptual User Interfaces 285
Joseph J. LaViola Jr., Sarah Buchanan and Corey Pittman
9.1 Introduction 285
9.2 Multimodal Interaction Types 286
9.3 Multimodal Interfaces 287
9.3.1 Touch Input 287
9.3.2 3D Gesture 294
9.3.3 Eye Tracking and Gaze 299
9.3.4 Facial Expressions 300
9.3.5 Brain-computer Input 301
9.4 Multimodal Integration Strategies 303
9.4.1 Frame-based Integration 304
9.4.2 Unification-based Integration 304
9.4.3 Procedural Integration 305
9.4.4 Symbolic/Statistical Integration 305
9.5 Usability Issues with Multimodal Interaction 305
9.6 Conclusion 307
References 308

10 Multimodal Interaction in Biometrics: Technological and Usability Challenges 313
Norman Poh, Phillip A. Tresadern and Rita Wong
10.1 Introduction 313
10.1.1 Motivations for Identity Assurance 314
10.1.2 Biometrics
10.1.3 Application Characteristics of Multimodal Biometrics 314
10.1.4 2D and 3D Face Recognition 316
10.1.5 A Multimodal Case Study 317
10.1.6 Adaptation to Blind Subjects 318
10.1.7 Chapter Organization 320
10.2 Anatomy of the Mobile Biometry Platform 320
10.2.1 Face Analysis 320
10.2.2 Voice Analysis 323
10.2.3 Model Adaptation 325
10.2.4 Data Fusion 326
10.2.5 Mobile Platform Implementation 326
10.2.6 MoBio Database and Protocol 327
10.3 Case Study: Usability Study for the Visually Impaired 328
10.3.1 Impact of Head Pose Variations on Performance 329
10.3.2 User Interaction Module: Head Pose Quality Assessment 329
10.3.3 User-Interaction Module: Audio Feedback Mechanism 333
10.3.4 Usability Testing with the Visually Impaired 336
10.4 Discussions and Conclusions 338
Acknowledgements 339
References 339

11 Towards "True" 3D Interactive Displays 343
Jim Larimer, Philip J. Bos and Achintya K. Bhowmik
11.1 Introduction 343
11.2 The Origins of Biological Vision 346
11.3 Light Field Imaging 352
11.4 Towards "True" 3D Visual Displays 359
11.5 Interacting with Visual Content on a 3D Display 368
11.6 Summary 371
References 371

Index 375
You may also be interested in
Photoalignment of Liquid Crystalline Materials
Vladimir G. Chigrinov (Hong Kong University of Science and Technology), Vladimir M. Kozenkov (Moscow State University), Hoi-Sing Kwok (Hong Kong University of Science and Technology)
1 859 kr
Flexible Flat Panel Displays
Darran R. Cairns (University of Missouri, Kansas City, USA), Dirk J. Broer (Eindhoven Technical University, Netherlands), Gregory P. Crawford (Miami University, Florida, USA)
1 829 kr