Identification of Physical Systems
Applications to Condition Monitoring, Fault Diagnosis, Soft Sensor and Controller Design
Hardcover, English, 2014
By Rajamani Doraiswami, Maryhelen Stevenson, and Chris Diduch (University of New Brunswick)
1 679 kr
Product Information
- Publication date: 2014-05-13
- Dimensions: 178 x 252 x 29 mm
- Weight: 953 g
- Format: Hardcover
- Language: English
- Number of pages: 544
- Publisher: John Wiley & Sons Inc
- ISBN: 9781119990123
Rajamani Doraiswami is Professor Emeritus in the Department of Electrical and Computer Engineering at the University of New Brunswick, Canada. Dr. Doraiswami is known internationally as an excellent researcher, has held an NSERC operating grant continuously since 1981, and has published more than 60 papers in refereed journals and 90 conference papers. His research interests focus on control, signal processing, pattern classification, and algorithms. One of his most successful collaborations has been the development of laboratories for teaching the analysis and design of control and signal processing systems in real time.

Chris Diduch is a Professor in the Department of Electrical and Computer Engineering at the University of New Brunswick. His research is in the fields of control systems and digital systems.

Maryhelen Stevenson is a Professor in the Department of Electrical and Computer Engineering at the University of New Brunswick. Her research is in the fields of pattern classification, speech and signal processing, adaptive systems, and time-frequency representations.
Preface xv
Nomenclature xxi
1 Modeling of Signals and Systems 1
1.1 Introduction 1
1.2 Classification of Signals 2
1.2.1 Deterministic and Random Signals 3
1.2.2 Bounded and Unbounded Signal 3
1.2.3 Energy and Power Signals 3
1.2.4 Causal, Non-causal, and Anti-causal Signals 4
1.2.5 Causal, Non-causal, and Anti-causal Systems 4
1.3 Model of Systems and Signals 5
1.3.1 Time-Domain Model 5
1.3.2 Frequency-Domain Model 8
1.4 Equivalence of Input–Output and State-Space Models 8
1.4.1 State-Space and Transfer Function Model 8
1.4.2 Time-Domain Expression for the Output Response 8
1.4.3 State-Space and the Difference Equation Model 9
1.4.4 Observer Canonical Form 9
1.4.5 Characterization of the Model 10
1.4.6 Stability of (Discrete-Time) Systems 10
1.4.7 Minimum Phase System 11
1.4.8 Pole-Zero Locations and the Output Response 11
1.5 Deterministic Signals 11
1.5.1 Transfer Function Model 12
1.5.2 Difference Equation Model 12
1.5.3 State-Space Model 14
1.5.4 Expression for an Impulse Response 14
1.5.5 Periodic Signal 14
1.5.6 Periodic Impulse Train 15
1.5.7 A Finite Duration Signal 16
1.5.8 Model of a Class of All Signals 17
1.5.9 Examples of Deterministic Signals 18
1.6 Introduction to Random Signals 23
1.6.1 Stationary Random Signal 23
1.6.2 Joint PDF and Statistics of Random Signals 24
1.6.3 Ergodic Process 27
1.7 Model of Random Signals 28
1.7.1 White Noise Process 29
1.7.2 Colored Noise 30
1.7.3 Model of a Random Waveform 30
1.7.4 Classification of the Random Waveform 31
1.7.5 Frequency Response and Pole-Zero Locations 31
1.7.6 Illustrative Examples of Filters 36
1.7.7 Illustrative Examples of Random Signals 36
1.7.8 Pseudo Random Binary Sequence (PRBS) 38
1.8 Model of a System with Disturbance and Measurement Noise 41
1.8.1 Input–Output Model of the System 41
1.8.2 State-Space Model of the System 44
1.8.3 Illustrative Examples in Integrated System Model 47
1.9 Summary 50
References 54
Further Readings 54
2 Characterization of Signals: Correlation and Spectral Density 57
2.1 Introduction 57
2.2 Definitions of Auto- and Cross-Correlation (and Covariance) 58
2.2.1 Properties of Correlation 61
2.2.2 Normalized Correlation and Correlation Coefficient 66
2.3 Spectral Density: Correlation in the Frequency Domain 67
2.3.1 Z-transform of the Correlation Function 69
2.3.2 Expressions for Energy and Power Spectral Densities 71
2.4 Coherence Spectrum 74
2.5 Illustrative Examples in Correlation and Spectral Density 76
2.5.1 Deterministic Signals: Correlation and Spectral Density 76
2.5.2 Random Signals: Correlation and Spectral Density 87
2.6 Input–Output Correlation and Spectral Density 91
2.6.1 Generation of Random Signal from White Noise 92
2.6.2 Identification of Non-Parametric Model of a System 93
2.6.3 Identification of a Parametric Model of a Random Signal 94
2.7 Illustrative Examples: Modeling and Identification 98
2.8 Summary 109
2.9 Appendix 112
References 116
3 Estimation Theory 117
3.1 Overview 117
3.2 Map Relating Measurement and the Parameter 119
3.2.1 Mathematical Model 119
3.2.2 Probabilistic Model 120
3.2.3 Likelihood Function 122
3.3 Properties of Estimators 123
3.3.1 Indirect Approach to Estimation 123
3.3.2 Unbiasedness of the Estimator 124
3.3.3 Variance of the Estimator: Scalar Case 125
3.3.4 Median of the Data Samples 125
3.3.5 Small and Large Sample Properties 126
3.3.6 Large Sample Properties 126
3.4 Cramér–Rao Inequality 127
3.4.1 Scalar Case: θ and θ̂ Scalars while y is a N × 1 Vector 128
3.4.2 Vector Case: θ is a M × 1 Vector 129
3.4.3 Illustrative Examples: Cramér–Rao Inequality 130
3.4.4 Fisher Information 138
3.5 Maximum Likelihood Estimation 139
3.5.1 Formulation of Maximum Likelihood Estimation 139
3.5.2 Illustrative Examples: Maximum Likelihood Estimation of Mean or Median 141
3.5.3 Illustrative Examples: Maximum Likelihood Estimation of Mean and Variance 148
3.5.4 Properties of Maximum Likelihood Estimator 154
3.6 Summary 154
3.7 Appendix: Cauchy–Schwarz Inequality 157
3.8 Appendix: Cramér–Rao Lower Bound 157
3.8.1 Scalar Case 158
3.8.2 Vector Case 160
3.9 Appendix: Fisher Information: Cauchy PDF 161
3.10 Appendix: Fisher Information for i.i.d. PDF 161
3.11 Appendix: Projection Operator 162
3.12 Appendix: Fisher Information: Part Gauss-Part Laplace 164
Problem 165
References 165
Further Readings 165
4 Estimation of Random Parameter 167
4.1 Overview 167
4.2 Minimum Mean-Squares Estimator (MMSE): Scalar Case 167
4.2.1 Conditional Mean: Optimal Estimator 168
4.3 MMSE Estimator: Vector Case 169
4.3.1 Covariance of the Estimation Error 171
4.3.2 Conditional Expectation and Its Properties 172
4.4 Expression for Conditional Mean 172
4.4.1 MMSE Estimator: Gaussian Random Variables 173
4.4.2 MMSE Estimator: Unknown is Gaussian and Measurement Non-Gaussian 174
4.4.3 The MMSE Estimator for Gaussian PDF 176
4.4.4 Illustrative Examples 178
4.5 Summary 183
4.6 Appendix: Non-Gaussian Measurement PDF 184
4.6.1 Expression for Conditional Expectation 184
4.6.2 Conditional Expectation for Gaussian x and Non-Gaussian y 185
References 188
Further Readings 188
5 Linear Least-Squares Estimation 189
5.1 Overview 189
5.2 Linear Least-Squares Approach 189
5.2.1 Linear Algebraic Model 190
5.2.2 Least-Squares Method 190
5.2.3 Objective Function 191
5.2.4 Optimal Least-Squares Estimate: Normal Equation 193
5.2.5 Geometric Interpretation of Least-Squares Estimate: Orthogonality Principle 194
5.3 Performance of the Least-Squares Estimator 195
5.3.1 Unbiasedness of the Least-Squares Estimate 195
5.3.2 Covariance of the Estimation Error 197
5.3.3 Properties of the Residual 198
5.3.4 Model and Systemic Errors: Bias and the Variance Errors 201
5.4 Illustrative Examples 205
5.4.1 Non-Zero-Mean Measurement Noise 209
5.5 Cramér–Rao Lower Bound 209
5.6 Maximum Likelihood Estimation 210
5.6.1 Illustrative Examples 210
5.7 Least-Squares Solution of Under-Determined System 212
5.8 Singular Value Decomposition 213
5.8.1 Illustrative Example: Singular and Eigenvalues of Square Matrices 215
5.8.2 Computation of Least-Squares Estimate Using the SVD 216
5.9 Summary 218
5.10 Appendix: Properties of the Pseudo-Inverse and the Projection Operator 221
5.10.1 Over-Determined System 221
5.10.2 Under-Determined System 222
5.11 Appendix: Positive Definite Matrices 222
5.12 Appendix: Singular Value Decomposition of a Matrix 223
5.12.1 SVD and Eigendecompositions 225
5.12.2 Matrix Norms 226
5.12.3 Least Squares Estimate for Any Arbitrary Data Matrix H 226
5.12.4 Pseudo-Inverse of Any Arbitrary Matrix 228
5.12.5 Bounds on the Residual and the Covariance of the Estimation Error 228
5.13 Appendix: Least-Squares Solution for Under-Determined System 228
5.14 Appendix: Computation of Least-Squares Estimate Using the SVD 229
References 229
Further Readings 230
6 Kalman Filter 231
6.1 Overview 231
6.2 Mathematical Model of the System 233
6.2.1 Model of the Plant 233
6.2.2 Model of the Disturbance and Measurement Noise 233
6.2.3 Integrated Model of the System 234
6.2.4 Expression for the Output of the Integrated System 235
6.2.5 Linear Regression Model 235
6.2.6 Observability 236
6.3 Internal Model Principle 236
6.3.1 Controller Design Using the Internal Model Principle 237
6.3.2 Internal Model (IM) of a Signal 237
6.3.3 Controller Design 238
6.3.4 Illustrative Example: Controller Design 241
6.4 Duality Between Controller and an Estimator Design 244
6.4.1 Estimation Problem 244
6.4.2 Estimator Design 244
6.5 Observer: Estimator for the States of a System 246
6.5.1 Problem Formulation 246
6.5.2 The Internal Model of the Output 246
6.5.3 Illustrative Example: Observer with Internal Model Structure 247
6.6 Kalman Filter: Estimator of the States of a Stochastic System 250
6.6.1 Objectives of the Kalman Filter 251
6.6.2 Necessary Structure of the Kalman Filter 252
6.6.3 Internal Model of a Random Process 252
6.6.4 Illustrative Example: Role of an Internal Model 254
6.6.5 Model of the Kalman Filter 255
6.6.6 Optimal Kalman Filter 256
6.6.7 Optimal Scalar Kalman Filter 256
6.6.8 Optimal Kalman Gain 260
6.6.9 Comparison of the Kalman Filters: Integrated and Plant Models 260
6.6.10 Steady-State Kalman Filter 261
6.6.11 Internal Model and Statistical Approaches 261
6.6.12 Optimal Information Fusion 262
6.6.13 Role of the Ratio of Variances 262
6.6.14 Fusion of Information from the Model and the Measurement 263
6.6.15 Illustrative Example: Fusion of Information 264
6.6.16 Orthogonal Properties of the Kalman Filter 266
6.6.17 Ensemble and Time Averages 266
6.6.18 Illustrative Example: Orthogonality Properties of the Kalman Filter 267
6.7 The Residual of the Kalman Filter with Model Mismatch and Non-Optimal Gain 267
6.7.1 State Estimation Error with Model Mismatch 268
6.7.2 Illustrative Example: Residual with Model Mismatch and Non-Optimal Gain 271
6.8 Summary 274
6.9 Appendix: Estimation Error Covariance and the Kalman Gain 277
6.10 Appendix: The Role of the Ratio of Plant and the Measurement Noise Variances 279
6.11 Appendix: Orthogonal Properties of the Kalman Filter 279
6.11.1 Span of a Matrix 284
6.11.2 Transfer Function Formulae 284
6.12 Appendix: Kalman Filter Residual with Model Mismatch 285
References 287
7 System Identification 289
7.1 Overview 289
7.2 System Model 291
7.2.1 State-Space Model 291
7.2.2 Assumptions 292
7.2.3 Frequency-Domain Model 292
7.2.4 Input Signal for System Identification 293
7.3 Kalman Filter-Based Identification Model Structure 297
7.3.1 Expression for the Kalman Filter Residual 298
7.3.2 Direct Form or Colored Noise Form 300
7.3.3 Illustrative Examples: Process, Predictor, and Innovation Forms 302
7.3.4 Models for System Identification 304
7.3.5 Identification Methods 305
7.4 Least-Squares Method 307
7.4.1 Linear Matrix Model: Batch Processing 308
7.4.2 The Least-Squares Estimate 308
7.4.3 Quality of the Least-Squares Estimate 312
7.4.4 Illustrative Example of the Least-Squares Identification 313
7.4.5 Computation of the Estimates Using Singular Value Decomposition 315
7.4.6 Recursive Least-Squares Identification 316
7.5 High-Order Least-Squares Method 318
7.5.1 Justification for a High-Order Model 318
7.5.2 Derivation of a Reduced-Order Model 323
7.5.3 Formulation of Model Reduction 324
7.5.4 Model Order Selection 324
7.5.5 Illustrative Example of High-Order Least-Squares Method 325
7.5.6 Performance of the High-Order Least-Squares Scheme 326
7.6 The Prediction Error Method 327
7.6.1 Residual Model 327
7.6.2 Objective Function 327
7.6.3 Iterative Prediction Algorithm 328
7.6.4 Family of Prediction Error Algorithms 330
7.7 Comparison of High-Order Least-Squares and the Prediction Error Methods 330
7.7.1 Illustrative Example: LS, High Order LS, and PEM 331
7.8 Subspace Identification Method 334
7.8.1 Identification Model: Predictor Form of the Kalman Filter 334
7.9 Summary 340
7.10 Appendix: Performance of the Least-Squares Approach 347
7.10.1 Correlated Error 347
7.10.2 Uncorrelated Error 347
7.10.3 Correlation of the Error and the Data Matrix 348
7.10.4 Residual Analysis 350
7.11 Appendix: Frequency-Weighted Model Order Reduction 352
7.11.1 Implementation of the Frequency-Weighted Estimator 354
7.11.2 Selection of the Frequencies 354
References 354
8 Closed Loop Identification 357
8.1 Overview 357
8.1.1 Kalman Filter-Based Identification Model 358
8.1.2 Closed-Loop Identification Approaches 358
8.2 Closed-Loop System 359
8.2.1 Two-Stage and Direct Approaches 359
8.3 Model of the Single Input Multi-Output System 360
8.3.1 State-Space Model of the Subsystem 360
8.3.2 State-Space Model of the Overall System 361
8.3.3 Transfer Function Model 361
8.3.4 Illustrative Example: Closed-Loop Sensor Network 362
8.4 Kalman Filter-Based Identification Model 364
8.4.1 State-Space Model of the Kalman Filter 364
8.4.2 Residual Model 365
8.4.3 The Identification Model 366
8.5 Closed-Loop Identification Schemes 366
8.5.1 The High-Order Least-Squares Method 366
8.6 Second Stage of the Two-Stage Identification 372
8.7 Evaluation on a Simulated Closed-Loop Sensor Net 372
8.7.1 The Performance of the Stage I Identification Scheme 372
8.7.2 The Performance of the Stage II Identification Scheme 373
8.8 Summary 374
References 377
9 Fault Diagnosis 379
9.1 Overview 379
9.1.1 Identification for Fault Diagnosis 380
9.1.2 Residual Generation 380
9.1.3 Fault Detection 380
9.1.4 Fault Isolation 381
9.2 Mathematical Model of the System 381
9.2.1 Linear Regression Model: Nominal System 382
9.3 Model of the Kalman Filter 382
9.4 Modeling of Faults 383
9.4.1 Linear Regression Model 383
9.5 Diagnostic Parameters and the Feature Vector 384
9.6 Illustrative Example 386
9.6.1 Mathematical Model 386
9.6.2 Feature Vector and the Influence Vectors 387
9.7 Residual of the Kalman Filter 388
9.7.1 Diagnostic Model 389
9.7.2 Key Properties of the Residual 389
9.7.3 The Role of the Kalman Filter in Fault Diagnosis 389
9.8 Fault Diagnosis 390
9.9 Fault Detection: Bayes Decision Strategy 390
9.9.1 Pattern Classification Problem: Fault Detection 391
9.9.2 Generalized Likelihood Ratio Test 392
9.9.3 Maximum Likelihood Estimate 392
9.9.4 Decision Strategy 394
9.9.5 Other Test Statistics 395
9.10 Evaluation of Detection Strategy on Simulated System 396
9.11 Formulation of Fault Isolation Problem 396
9.11.1 Pattern Classification Problem: Fault Isolation 397
9.11.2 Formulation of the Fault Isolation Scheme 398
9.11.3 Fault Isolation Tasks 399
9.12 Estimation of the Influence Vectors and Additive Fault 399
9.12.1 Parameter-Perturbed Experiment 400
9.12.2 Least-Squares Estimates 401
9.13 Fault Isolation Scheme 401
9.13.1 Sequential Fault Isolation Scheme 402
9.13.2 Isolation of the Fault 403
9.14 Isolation of a Single Fault 403
9.14.1 Fault Discriminant Function 403
9.14.2 Performance of Fault Isolation Scheme 404
9.14.3 Performance Issues and Guidelines 405
9.15 Emulators for Offline Identification 406
9.15.1 Examples of Emulators 407
9.15.2 Emulators for Multiple Input-Multiple-Output System 407
9.15.3 Role of an Emulator 408
9.15.4 Criteria for Selection 409
9.16 Illustrative Example 409
9.16.1 Mathematical Model 409
9.16.2 Selection of Emulators 410
9.16.3 Transfer Function Model 410
9.16.4 Role of the Static Emulators 411
9.16.5 Role of the Dynamic Emulator 412
9.17 Overview of Fault Diagnosis Scheme 414
9.18 Evaluation on a Simulated Example 414
9.18.1 The Kalman Filter 414
9.18.2 The Kalman Filter Residual and Its Auto-correlation 414
9.18.3 Estimation of the Influence Vectors 416
9.18.4 Fault Size Estimation 416
9.18.5 Fault Isolation 417
9.19 Summary 418
9.20 Appendix: Bayesian Multiple Composite Hypotheses Testing Problem 422
9.21 Appendix: Discriminant Function for Fault Isolation 423
9.22 Appendix: Log-Likelihood Ratio for a Sinusoid and a Constant 424
9.22.1 Determination of af, bf, and cf 424
9.22.2 Determination of the Optimal Cost 425
References 426
10 Modeling and Identification of Physical Systems 427
10.1 Overview 427
10.2 Magnetic Levitation System 427
10.2.1 Mathematical Model of a Magnetic Levitation System 427
10.2.2 Linearized Model 429
10.2.3 Discrete-Time Equivalent of Continuous-Time Models 430
10.2.4 Identification Approach 432
10.2.5 Identification of the Magnetic Levitation System 433
10.3 Two-Tank Process Control System 436
10.3.1 Model of the Two-Tank System 436
10.3.2 Identification of the Closed-Loop Two-Tank System 438
10.4 Position Control System 442
10.4.1 Experimental Setup 442
10.4.2 Mathematical Model of the Position Control System 442
10.5 Summary 444
References 446
11 Fault Diagnosis of Physical Systems 447
11.1 Overview 447
11.2 Two-Tank Physical Process Control System 448
11.2.1 Objective 448
11.2.2 Identification of the Physical System 448
11.2.3 Fault Detection 449
11.2.4 Fault Isolation 451
11.3 Position Control System 452
11.3.1 The Objective 452
11.3.2 Identification of the Physical System 452
11.3.3 Detection of Fault 455
11.3.4 Fault Isolation 455
11.3.5 Fault Isolability 455
11.4 Summary 457
References 457
12 Fault Diagnosis of a Sensor Network 459
12.1 Overview 459
12.2 Problem Formulation 461
12.3 Fault Diagnosis Using a Bank of Kalman Filters 461
12.4 Kalman Filter for Pairs of Measurements 462
12.5 Kalman Filter for the Reference Input-Measurement Pair 463
12.6 Kalman Filter Residual: A Model Mismatch Indicator 463
12.6.1 Residual for a Pair of Measurements 463
12.7 Bayes Decision Strategy 464
12.8 Truth Table of Binary Decisions 465
12.9 Illustrative Example 467
12.10 Evaluation on a Physical Process Control System 469
12.11 Fault Detection and Isolation 470
12.11.1 Comparison with Other Approaches 473
12.12 Summary 474
12.13 Appendix 475
12.13.1 Map Relating yi(z) to yj(z) 475
12.13.2 Map Relating r(z) to yj(z) 476
References 477
13 Soft Sensor 479
13.1 Review 479
13.1.1 Benefits of a Soft Sensor 479
13.1.2 Kalman Filter 479
13.1.3 Reliable Identification of the System 480
13.1.4 Robust Controller Design 480
13.1.5 Fault Tolerant System 481
13.2 Mathematical Formulation 481
13.2.1 Transfer Function Model 482
13.2.2 Uncertainty Model 482
13.3 Identification of the System 483
13.3.1 Perturbed Parameter Experiment 484
13.3.2 Least-Squares Estimation 484
13.3.3 Selection of the Model Order 485
13.3.4 Identified Nominal Model 485
13.3.5 Illustrative Example 486
13.4 Model of the Kalman Filter 488
13.4.1 Role of the Kalman Filter 488
13.4.2 Model of the Kalman Filter 489
13.4.3 Augmented Model of the Plant and the Kalman Filter 489
13.5 Robust Controller Design 489
13.5.1 Objective 489
13.5.2 Augmented Model 490
13.5.3 Closed-Loop Performance and Stability 490
13.5.4 Uncertainty Model 491
13.5.5 Mixed-Sensitivity Optimization Problem 492
13.5.6 State-Space Model of the Robust Control System 493
13.6 High Performance and Fault Tolerant Control System 494
13.6.1 Residual and Model-Mismatch 494
13.6.2 Bayes Decision Strategy 495
13.6.3 High Performance Control System 495
13.6.4 Fault-Tolerant Control System 496
13.7 Evaluation on a Simulated System: Soft Sensor 496
13.7.1 Offline Identification 497
13.7.2 Identified Model of the Plant 497
13.7.3 Mixed-Sensitivity Optimization Problem 498
13.7.4 Performance and Robustness 499
13.7.5 Status Monitoring 499
13.8 Evaluation on a Physical Velocity Control System 500
13.9 Conclusions 502
13.10 Summary 503
References 507
Index 509