Adaptive Blind Signal and Image Processing
Learning Algorithms and Applications
Hardcover, English, 2002
By Andrzej Cichocki and Shun-ichi Amari
2 459 kr
Product information
- Publication date: 2002-04-26
- Dimensions: 177 x 252 x 37 mm
- Weight: 1,157 g
- Format: Hardcover
- Language: English
- Number of pages: 586
- Publisher: John Wiley & Sons Inc
- ISBN: 9780471607915
Andrzej Cichocki received the M.Sc. (with honors), Ph.D., and Dr.Sc. (Habilitation) degrees, all in electrical engineering, from the Warsaw University of Technology in Poland. Since 1972, he has been with the Institute of Theory of Electrical Engineering, Measurement and Information Systems, Faculty of Electrical Engineering, at the Warsaw University of Technology, where he obtained the title of full Professor in 1995. He spent several years at the University of Erlangen-Nuremberg in Germany, at the Chair of Applied and Theoretical Electrical Engineering directed by Professor Rolf Unbehauen, as an Alexander von Humboldt Research Fellow and Guest Professor. From 1995 to 1997 he was a team leader of the Laboratory for Artificial Brain Systems in the Brain Information Processing Group of the Frontier Research Program at RIKEN (Japan).
Table of contents
- Preface
- 1 Introduction to Blind Signal Processing: Problems and Applications
- 1.1 Problem Formulations – An Overview
- 1.1.1 Generalized Blind Signal Processing Problem
- 1.1.2 Instantaneous Blind Source Separation and Independent Component Analysis
- 1.1.3 Independent Component Analysis for Noisy Data
- 1.1.4 Multichannel Blind Deconvolution and Separation
- 1.1.5 Blind Extraction of Signals
- 1.1.6 Generalized Multichannel Blind Deconvolution – State Space Models
- 1.1.7 Nonlinear State Space Models – Semi-Blind Signal Processing
- 1.1.8 Why State Space Demixing Models?
- 1.2 Potential Applications of Blind and Semi-Blind Signal Processing
- 1.2.1 Biomedical Signal Processing
- 1.2.2 Blind Separation of Electrocardiographic Signals of Fetus and Mother
- 1.2.3 Enhancement and Decomposition of EMG Signals
- 1.2.4 EEG and MEG Data Processing
- 1.2.5 Application of ICA/BSS for Noise and Interference Cancellation in Multi-sensory Biomedical Signals
- 1.2.6 Cocktail Party Problem
- 1.2.7 Digital Communication Systems
- 1.2.8 Image Restoration and Understanding
- 2 Solving a System of Algebraic Equations and Related Problems
- 2.1 Formulation of the Problem for Systems of Linear Equations
- 2.2 Least-Squares Problems
- 2.2.1 Basic Features of the Least-Squares Solution
- 2.2.2 Weighted Least-Squares and Best Linear Unbiased Estimation
- 2.2.3 Basic Network Structure – Least-Squares Criteria
- 2.2.4 Iterative Parallel Algorithms for Large and Sparse Systems
- 2.2.5 Iterative Algorithms with Non-negativity Constraints
- 2.2.6 Robust Criteria and Iteratively Reweighted Least-Squares Algorithm
- 2.2.7 Tikhonov Regularization and SVD
- 2.3 Least Absolute Deviation (1-norm) Solution of Systems of Linear Equations
- 2.3.1 Neural Network Architectures Using a Smooth Approximation and Regularization
- 2.3.2 Neural Network Model for LAD Problem Exploiting Inhibition Principles
- 2.4 Total Least-Squares and Data Least-Squares Problems
- 2.4.1 Problem Formulation
- 2.4.2 Total Least-Squares Estimation
- 2.4.3 Adaptive Generalized Total Least-Squares
- 2.4.4 Extended TLS for Correlated Noise Statistics
- 2.4.5 An Illustrative Example – Fitting a Straight Line to a Set of Points
- 2.5 Sparse Signal Representation and Minimum 1-norm Solution
- 2.5.1 Approximate Solution of Minimum p-norm Problem Using Iterative LS Approach
- 2.5.2 Uniqueness and Optimal Solution for Sparse Representation
- 2.5.3 FOCUSS Algorithms
- 3 Principal/Minor Component Analysis and Related Problems
- 3.1 Introduction
- 3.2 Basic Properties of PCA
- 3.2.1 Eigenvalue Decomposition
- 3.2.2 Estimation of Sample Covariance Matrices
- 3.2.3 Signal and Noise Subspaces – Automatic Choice of Dimensionality for PCA
- 3.2.4 Basic Properties of PCA
- 3.3 Extraction of Principal Components
- 3.4 Basic Cost Functions and Adaptive Algorithms for PCA
- 3.4.1 The Rayleigh Quotient – Basic Properties
- 3.4.2 Basic Cost Functions for Computing Principal and Minor Components
- 3.4.3 Fast PCA Algorithm Based on the Power Method
- 3.4.4 Inverse Power Iteration Method
- 3.5 Robust PCA
- 3.6 Adaptive Learning Algorithms for MCA
- 3.7 Unified Parallel Algorithms for PCA/MCA and PSA/MSA
- 3.7.1 Cost Function for Parallel Processing
- 3.7.2 Gradient of J(W)
- 3.7.3 Stability Analysis
- 3.7.4 Unified Stable Algorithms
- 3.8 SVD in Relation to PCA and Matrix Subspaces
- 3.9 Multistage PCA for BSS
- 4 Blind Decorrelation and SOS for Robust Blind Identification
- 4.1 Spatial Decorrelation – Whitening Transforms
- 4.1.1 Batch Approach
- 4.1.2 Optimization Criteria for Adaptive Blind Spatial Decorrelation
- 4.1.3 Derivation of Equivariant Adaptive Algorithms for Blind Spatial Decorrelation
- 4.1.4 Simple Local Learning Rule
- 4.1.5 Gram-Schmidt Orthogonalization
- 4.1.6 Blind Separation of Decorrelated Sources Versus Spatial Decorrelation
- 4.1.7 Bias Removal for Noisy Data
- 4.1.8 Robust Prewhitening – Batch Algorithm
- 4.2 SOS Blind Identification Based on EVD
- 4.2.1 Mixing Model
- 4.2.2 Basic Principles: SD and EVD
- 4.3 Improved Blind Identification Algorithms Based on EVD/SVD
- 4.3.1 Robust Orthogonalization of Mixing Matrices for Colored Sources
- 4.3.2 An Improved Algorithm Based on GEVD
- 4.3.3 An Improved Two-stage Symmetric EVD/SVD Algorithm
- 4.3.4 BSS and Identification Using Bandpass Filters
- 4.4 Joint Diagonalization – Robust SOBI Algorithms
- 4.4.1 The Modified SOBI Algorithm for Nonstationary Sources: SONS Algorithm
- 4.4.2 Computer Simulation Experiments
- 4.4.3 Extensions of Joint Approximate Diagonalization Technique
- 4.4.4 Comparison of the JAD and Symmetric EVD
- 4.5 Cancellation of Correlation
- 4.5.1 Standard Estimation of Mixing Matrix and Noise Covariance Matrix
- 4.5.2 Blind Identification of Mixing Matrix Using the Concept of Cancellation of Correlation
- 5 Statistical Signal Processing Approach to Blind Signal Extraction
- 5.1 Introduction and Problem Formulation
- 5.2 Learning Algorithms Using Kurtosis as a Cost Function
- 5.2.1 A Cascade Neural Network for Blind Extraction of Non-Gaussian Sources with Learning Rule Based on Normalized Kurtosis
- 5.2.2 Algorithms Based on Optimization of Generalized Kurtosis
- 5.2.3 KuicNet Learning Algorithm
- 5.2.4 Fixed-point Algorithms
- 5.2.5 Sequential Extraction and Deflation Procedure
- 5.3 On-Line Algorithms for Blind Signal Extraction of Temporally Correlated Sources
- 5.3.1 On-Line Algorithms for Blind Extraction Using a Linear Predictor
- 5.3.2 Neural Network for Multi-unit Blind Extraction
- 5.4 Batch Algorithms for Blind Extraction of Temporally Correlated Sources
- 5.4.1 Blind Extraction Using a First Order Linear Predictor
- 5.4.2 Blind Extraction of Sources Using a Bank of Adaptive Bandpass Filters
- 5.4.3 Blind Extraction of Desired Sources Correlated with Reference Signals
- 5.5 A Statistical Approach to Sequential Extraction of Independent Sources
- 5.5.1 Log Likelihood and Cost Function
- 5.5.2 Learning Dynamics
- 5.5.3 Equilibrium of Dynamics
- 5.5.4 Stability of Learning Dynamics and Newton's Method
- 5.6 A Statistical Approach to Temporally Correlated Sources
- 5.7 On-line Sequential Extraction of Convolved and Mixed Sources
- 5.7.1 Formulation of the Problem
- 5.7.2 Extraction of Single i.i.d. Source Signal
- 5.7.3 Extraction of Multiple i.i.d. Sources
- 5.7.4 Extraction of Colored Sources from Convolutive Mixture
- 5.8 Computer Simulations: Illustrative Examples
- 5.8.1 Extraction of Colored Gaussian Signals
- 5.8.2 Extraction of Natural Speech Signals from Colored Gaussian Signals
- 5.8.3 Extraction of Colored and White Sources
- 5.8.4 Extraction of Natural Image Signal from Interferences
- 5.9 Concluding Remarks
- 6 Natural Gradient Approach to Independent Component Analysis
- 6.1 Basic Natural Gradient Algorithms
- 6.1.1 Kullback–Leibler Divergence – Relative Entropy as a Measure of Stochastic Independence
- 6.1.2 Derivation of Natural Gradient Basic Learning Rules
- 6.2 Generalizations of the Basic Natural Gradient Algorithm
- 6.2.1 Nonholonomic Learning Rules
- 6.2.2 Natural Riemannian Gradient in Orthogonality Constraint
- 6.3 NG Algorithms for Blind Extraction
- 6.3.1 Stiefel and Grassmann-Stiefel Manifolds Approaches
- 6.4 Generalized Gaussian Distribution Model
- 6.4.1 Moments of the Generalized Gaussian Distribution
- 6.4.2 Kurtosis and Gaussian Exponent
- 6.4.3 The Flexible ICA Algorithm
- 6.4.4 Pearson System
- 6.5 Natural Gradient Algorithms for Non-stationary Sources
- 6.5.1 Model Assumptions
- 6.5.2 Second Order Statistics Cost Function
- 6.5.3 Derivation of Natural Gradient Learning Algorithms
- 7 Locally Adaptive Algorithms for ICA and their Implementations
- 7.1 Modified Jutten-Hérault Algorithms for Blind Separation of Sources
- 7.1.1 Recurrent Neural Network
- 7.1.2 Statistical Independence
- 7.1.3 Self-normalization
- 7.1.4 Feed-forward Neural Network and Associated Learning Algorithms
- 7.1.5 Multilayer Neural Networks
- 7.2 Iterative Matrix Inversion Approach to the Derivation of a Family of Robust ICA Algorithms
- 7.2.1 Derivation of Robust ICA Algorithm Using Generalized Natural Gradient Approach
- 7.2.2 Practical Implementation of the Algorithms
- 7.2.3 Special Forms of the Flexible Robust Algorithm
- 7.2.4 Decorrelation Algorithm
- 7.2.5 Natural Gradient Algorithms
- 7.2.6 Generalized EASI Algorithm
- 7.2.7 Non-linear PCA Algorithm
- 7.2.8 Flexible ICA Algorithm for Unknown Number of Sources and their Statistics
- 7.3 Blind Source Separation with Non-negativity Constraints
- 7.4 Computer Simulations
- 8 Robust Techniques for BSS and ICA with Noisy Data
- 8.1 Introduction
- 8.2 Bias Removal Techniques for Prewhitening and ICA Algorithms
- 8.2.1 Bias Removal for Whitening Algorithms
- 8.2.2 Bias Removal for Adaptive ICA Algorithms
- 8.3 Blind Separation of Signals Buried in Additive Convolutive Reference Noise
- 8.3.1 Learning Algorithms for Noise Cancellation
- 8.4 Cumulant-Based Adaptive ICA Algorithms
- 8.4.1 Cumulant-Based Cost Functions
- 8.4.2 Family of Equivariant Algorithms Employing Higher Order Cumulants
- 8.4.3 Possible Extensions
- 8.4.4 Cumulants for Complex Valued Signals
- 8.4.5 Blind Separation with More Sensors than Sources
- 8.5 Robust Extraction of an Arbitrary Group of Source Signals
- 8.5.1 Blind Extraction of Sparse Sources with Largest Positive Kurtosis Using Prewhitening and Semi-Orthogonality Constraint
- 8.5.2 Blind Extraction of an Arbitrary Group of Sources without Prewhitening
- 8.6 Recurrent Neural Network Approach for Noise Cancellation
- 8.6.1 Basic Concept and Algorithm Derivation
- 8.6.2 Simultaneous Estimation of a Mixing Matrix and Noise Reduction
- 8.6.2.1 Regularization
- 8.6.3 Robust Prewhitening and Principal Component Analysis (PCA)
- 8.6.4 Computer Simulation Experiments for the Amari-Hopfield Network
- 9 Multichannel Blind Deconvolution: Natural Gradient Approach
- 9.1 SIMO Convolutive Models and Learning Algorithms for Estimation of a Source Signal
- 9.1.1 Equalization Criteria for SIMO Systems
- 9.1.2 SIMO Blind Identification and Equalization via Robust ICA/BSS
- 9.1.3 Feed-forward Deconvolution Model and Natural Gradient Learning Algorithm
- 9.1.4 Recurrent Neural Network Model and Hebbian Learning Algorithm
- 9.2 Multichannel Blind Deconvolution with Constraints Imposed on FIR Filters
- 9.3 General Models for Multiple-Input Multiple-Output Blind Deconvolution
- 9.3.1 Fundamental Models and Assumptions
- 9.3.2 Separation-Deconvolution Criteria
- 9.4 Relationships Between BSS/ICA and MBD
- 9.4.1 Multichannel Blind Deconvolution in the Frequency Domain
- 9.4.2 Algebraic Equivalence of Various Approaches
- 9.4.3 Convolution as a Multiplicative Operator
- 9.4.4 Natural Gradient Learning Rules for Multichannel Blind Deconvolution (MBD)
- 9.4.5 NG Algorithms for Double Infinite Filters
- 9.4.6 Implementation of Algorithms for a Minimum Phase Non-causal System
- 9.5 Natural Gradient Algorithms with Nonholonomic Constraints
- 9.5.1 Equivariant Learning Algorithm for Causal FIR Filters in the Lie Group Sense
- 9.5.2 Natural Gradient Algorithm for a Fully Recurrent Network
- 9.6 MBD of Non-minimum Phase System Using Filter Decomposition Approach
- 9.6.1 Information Back-propagation
- 9.6.2 Batch Natural Gradient Learning Algorithm
- 9.7 Computer Simulation Experiments
- 9.7.1 The Natural Gradient Algorithm vs. the Ordinary Gradient Algorithm
- 9.7.2 Information Back-propagation Example
- 10 Estimating Functions and Superefficiency for ICA and Deconvolution
- 10.1 Estimating Functions for Standard ICA
- 10.1.1 What is an Estimating Function?
- 10.1.2 Semiparametric Statistical Model
- 10.1.3 Admissible Class of Estimating Functions
- 10.1.4 Stability of Estimating Functions
- 10.1.5 Standardized Estimating Function and Adaptive Newton Method
- 10.1.6 Analysis of Estimation Error and Superefficiency
- 10.1.7 Adaptive Choice of φ Function
- 10.2 Estimating Functions in Noisy Cases
- 10.3 Estimating Functions for Temporally Correlated Source Signals
- 10.3.1 Source Model
- 10.3.2 Likelihood and Score Functions
- 10.3.3 Estimating Functions
- 10.3.4 Simultaneous and Joint Diagonalization of Covariance Matrices and Estimating Functions
- 10.3.5 Standardized Estimating Function and Newton Method
- 10.3.6 Asymptotic Errors
- 10.4 Semiparametric Models for Multichannel Blind Deconvolution
- 10.4.1 Notation and Problem Statement
- 10.4.2 Geometrical Structures on FIR Manifold
- 10.4.3 Lie Group
- 10.4.4 Natural Gradient Approach for Multichannel Blind Deconvolution
- 10.4.5 Efficient Score Matrix Function and its Representation
- 10.5 Estimating Functions for MBD
- 10.5.1 Superefficiency of Batch Estimator
- 11 Blind Filtering and Separation Using a State-Space Approach
- 11.1 Problem Formulation and Basic Models
- 11.1.1 Invertibility by State Space Model
- 11.1.2 Controller Canonical Form
- 11.2 Derivation of Basic Learning Algorithms
- 11.2.1 Gradient Descent Algorithms for Estimation of Output Matrices W = [C; D]
- 11.2.2 Special Case – Multichannel Blind Deconvolution with Causal FIR Filters
- 11.2.3 Derivation of the Natural Gradient Algorithm for the State Space Model
- 11.3 Estimation of Matrices [A; B] by Information Back-propagation
- 11.4 State Estimator – The Kalman Filter
- 11.4.1 Kalman Filter
- 11.5 Two-stage Separation Algorithm
- 12 Nonlinear State Space Models – Semi-Blind Signal Processing
- 12.1 General Formulation of the Problem
- 12.1.1 Invertibility by State Space Model
- 12.1.2 Internal Representation
- 12.2 Supervised-Unsupervised Learning Approach
- 12.2.1 Nonlinear Autoregressive Moving Average Model
- 12.2.2 Hyper Radial Basis Function Neural Network Model (HRBFN)
- 12.2.3 Estimation of Parameters of HRBF Networks Using Gradient Approach
- References
- 13 Appendix – Mathematical Preliminaries
- 13.1 Matrix Analysis
- 13.1.1 Matrix inverse update rules
- 13.1.2 Some properties of determinant
- 13.1.3 Some properties of the Moore-Penrose pseudo-inverse
- 13.1.4 Matrix Expectations
- 13.1.5 Differentiation of a scalar function with respect to a vector
- 13.1.6 Matrix differentiation
- 13.1.7 Trace
- 13.1.8 Matrix differentiation of trace of matrices
- 13.1.9 Important Inequalities
- 13.1.10 Inequalities in Information Theory
- 13.2 Distance measures
- 13.2.1 Geometric distance measures
- 13.2.2 Distances between sets
- 13.2.3 Discrimination measures
- 14 Glossary of Symbols and Abbreviations
- Index