- New
Model Predictive Control
Fundamentals and Practice
2 019 kr
Forthcoming
Product information
- Publication date: 2026-09-09
- Format: Hardcover
- Language: English
- Number of pages: 592
- Edition: 26001
- Publisher: John Wiley & Sons Inc
- ISBN: 9781394333295
Jay H. Lee, PhD, is the Choon Hoon Cho Chair and Professor of Chemical and Materials Science, Aerospace and Mechanical Engineering, Electrical and Computer Engineering, and Industrial and Systems Engineering at the University of Southern California. He is a leading researcher in model predictive control, optimization, and AI applications.

Niket S. Kaisare, PhD, is a Professor in the Department of Chemical Engineering at the Indian Institute of Technology - Madras. He specializes in advanced process control, catalytic micro-reactors, and energy systems, and is an expert in model-based advanced process control.

Carlos E. García, PhD, retired as the Global Discipline Head for Process Control at Shell Oil Company following a 36-year career. He is widely recognized as one of the pioneers of model predictive control and is a member of the Control Process Automation Hall of Fame.
- Contents

Acknowledgments
Preface
1 Introduction
1.1 What's MPC?
1.2 Why MPC?
1.2.1 Economic Drivers of APC/MPC
1.2.2 Economic Advantages of MPC vs. Other Tools
1.3 Historical Overview
1.3.1 Early Computer Control
1.3.2 The Pioneers
1.3.3 Adoption Growth
1.4 Impact of MPC on Control Research
1.4.1 Early Theoretical Developments
1.4.2 State-Space Model Formulation and Stability Results
1.4.3 Other Theoretical Developments
1.4.4 Lessons Learned Along the MPC Journey
1.5 A Typical Industrial Control Problem
1.6 Organization of This Book
Exercises
2 Step Response Modeling and Identification
2.1 Linear Time Invariant Systems
2.2 Impulse / Step Response Models
2.2.1 Impulse Response Models
2.2.2 Step Response Models
2.3 Multi-Step Prediction
2.3.1 Recursive Multi-Step Prediction Based on FIR Model
2.3.2 Recursive Multi-Step Prediction Based on Step-Response Model
2.3.3 Multivariable Generalization
2.4 Examples
2.5 Identification
2.5.1 Settling Time
2.5.2 Sampling Time
2.5.3 Choice of the Input Signal for Experimental Identification
2.5.4 The Linear Least Squares Problem
2.5.5 Linear Least Squares Identification
Exercises
3 Dynamic Matrix Control – The Basic Algorithm
3.1 The Concept of Moving Horizon Control
3.2 Multi-Step Prediction
3.3 Objective Function
3.4 Constraints
3.4.1 Manipulated Variable Constraints
3.4.2 Manipulated Variable Rate Constraints
3.4.3 Output Variable Constraints
3.4.4 Combined Constraints
3.5 Quadratic Programming Solution of the Control Problem
3.5.1 Quadratic Programs
3.5.2 Formulation as a Quadratic Program
3.6 Implementation
3.6.1 Moving Horizon Algorithm
3.6.2 DMC Examples
3.6.3 Efficient Solutions to the QP
3.6.4 Proper Constraint Formulation
3.6.5 Choice of Horizon Length
3.6.6 Input Blocking
3.6.7 Filtering of the Feedback Signal
3.7 Examples: Analysis and Guidelines
3.7.1 Unconstrained SISO Systems
3.7.2 Constrained SISO Systems
3.7.3 MIMO System with Strong Gain Directionality
3.7.4 Constrained MIMO Systems
3.7.5 Conclusions and General Tuning Guidelines
3.8 Case Study: Control of "Shell Heavy Oil Fractionator" using Dynamic Matrix Control
3.8.1 Heavy Oil Fractionator: Background
3.8.2 Control Structure Description
Exercises
4 Dynamic Matrix Control – Extensions and Variations
4.1 Features Found in Other Industrial Algorithms
4.1.1 Reference Trajectories
4.1.2 Coincidence Points
4.1.3 The Funnel Approach
4.1.4 Use of Other Norms
4.1.5 Input Parameterization
4.1.6 Model Conditioning
4.1.7 Prioritization of CVs and MVs
4.2 Connection with Internal Model Control
4.3 Some Possible Enhancements to DMC
4.3.1 Closed-Loop Update of Model State
4.3.2 Integrating Dynamics
4.3.3 Noise Filter
4.3.4 Bi-Level Optimization
4.3.5 Product Property Estimation
Exercises
5 Linear Time Invariant System Models
5.1 Sampling and Reconstruction
5.1.1 Introduction to Digital Control
5.1.2 Sampling
5.1.3 Aliasing
5.1.4 Reconstruction
5.2 Introduction to z-transform
5.3 Transfer Function Models
5.3.1 Continuous Time
5.3.2 Discrete Time
5.3.3 Transfer Matrix
5.3.4 Converting Continuous Transfer Function to Discrete Transfer Function
5.3.5 Stability and Implications of Poles
5.3.6 Gain, Frequency Response
5.4 State-Space Model
5.4.1 Continuous Time
5.4.2 Discrete Time
5.4.3 Converting Continuous- to Discrete-Time System
5.5 Conversion Between Discrete-Time Models
5.5.1 Representing State-Space System as Transfer Function
5.5.2 Realization of Transfer Function as State-Space System
5.5.3 Impulse and Step Responses of State-Space System
5.5.4 Derivation of Transfer Matrix from Impulse Response
5.5.5 From Impulse / Step Response to State-Space Model
Exercises
6 Discrete-Time State Space Models
6.1 State-Coordinate Transformation
6.2 Stability
6.2.1 System Poles and Characteristic Equation
6.2.2 Stability
6.2.3 Lyapunov Equation
6.3 Controllability, Reachability, and Stabilizability
6.3.1 Definitions
6.3.2 Conditions for Reachability
6.3.3 Coordinate Transformation
6.4 Observability, Reconstructability, and Detectability
6.4.1 Definitions
6.4.2 Conditions for Observability
6.4.3 Coordinate Transformation
6.5 Kalman's Decomposition and Minimal Realization
6.5.1 Kalman's Decomposition
6.5.2 Minimal Realization
6.6 Disturbance Modeling
6.6.1 Linear Stochastic System Model for Stationary Processes
6.6.2 Stochastic System Models for Processes with Nonstationary Behavior
6.6.3 Models for Estimation and Control
Exercises
7 State Estimation
7.1 Linear Estimator Structure
7.2 Observer Pole Placement
7.3 Kalman Filter
7.3.1 Derivation of the Optimal Filter Gain Matrix
7.3.2 Correlated Noise Case
7.3.3 Stability of Kalman Filter
7.4 Extensions
7.4.1 Inferential Estimation
7.4.2 Non-stationary (Integrating) Noise
7.4.3 Time-Varying System
7.4.4 Periodically Time-Varying System
7.4.5 Measurement Delays
7.5 Least Squares Formulation of State Estimation
7.5.1 Batch Least Squares Formulation
7.5.2 Recursive Solution and Equivalence with Kalman Filter
7.5.3 Use of Moving Estimation Window
Exercises
8 Unconstrained Quadratic Optimal Control
8.1 Linear State Feedback Controller Design
8.2 Finite Horizon Quadratic Optimal Control
8.2.1 Open-Loop Optimal Solution via Least Squares
8.2.2 State Feedback Solution via Dynamic Programming
8.2.3 Comparison of the Two Approaches
8.3 Infinite Horizon Quadratic Optimal Control
8.3.1 Optimal State Feedback Law: Asymptotic Solution of the Finite Horizon Problem
8.3.2 Receding Horizon Implementation of the Finite Horizon Solution
8.3.3 Equivalence Between Finite and Infinite Horizon Problems
8.4 Analysis
8.4.1 State Feedback Case
8.4.2 Output Feedback Case
8.4.3 Setpoint Tracking and Disturbance Rejection
8.5 Stochastic LQ Control
8.5.1 Finite Horizon Problem
8.5.2 Output Feedback LQ Control
Exercises
9 Constrained Quadratic Optimal Control
9.1 Finite Horizon Problem
9.2 Infinite Horizon Problem
9.2.1 Options for Re-formulation as an Equivalent Finite-Horizon Problem
9.2.2 Comparison of Various Options
9.3 Constraint Softening
9.4 Derivation of an Explicit Form of the Optimal Control Law via Multi-Parametric Programming
9.5 Analysis
9.5.1 Stability Concepts and Lyapunov's Direct Method
9.5.2 State Feedback Case
9.5.3 Output Feedback Case
9.6 Stochastic Case (*)
Exercises
10 System Identification
10.1 Problem Overview
10.2 Model Structures
10.2.1 Finite Impulse Response Model
10.2.2 Structures for Parametric Identification
10.2.3 Key Issues in Parametric Models
10.3 Parametric Identification Methods
10.3.1 Prediction Error Method
10.3.2 Properties of Linear Least Squares Identification
10.3.3 Persistency of Excitation
10.3.4 Frequency-Domain Bias Distribution Under PEM
10.3.5 Parameter Estimation via Statistical Methods (*)
10.3.6 Other Methods (*)
10.4 Nonparametric Identification
10.4.1 Impulse Response Identification
10.4.2 Frequency Response Identification (*)
10.5 Subspace Identification
10.5.1 The Basic Method
10.5.2 Analysis and Discussion
10.6 Practice of System Identification: A User's Perspective
10.6.1 Experiment Design
10.6.2 PRBS Signals
10.6.3 Data Pre-Processing
10.6.4 Model Fitting and Validation
10.6.5 Model Quality Assessment and an Integrated Framework
Exercises
11 Linear MPC: State Space Formulation
11.1 Motivation
11.2 Model Construction
11.2.1 Model Structure for State-Space MPC
11.2.2 Stochastic System Model with Output Disturbance Only
11.2.3 Stochastic System Model with State and Output Disturbances
11.2.4 Summary
11.3 Deterministic State Space MPC
11.3.1 State Regulation Problem
11.3.2 Constraints
11.3.3 Offset-Free Output Tracking and Regulation Problem
11.4 MPC with State Estimation
11.4.1 State Estimation Using Kalman Filter
11.4.2 Control Calculation Using State Estimate
11.4.3 MPC with Output Disturbance Only
11.4.4 MPC with State Disturbance Model
11.4.5 MPC with Full Disturbance Model
11.4.6 Tracking a Setpoint Trajectory
11.4.7 Constraint Softening
11.5 Inferential Control
11.5.1 Problem Formulation
11.5.2 Infrequent Primary Measurements
11.5.3 Handling Measurement Delays in Primary Measurements
11.6 Sequential Linearization-Based MPC (for Nonlinear Systems)
11.6.1 Model Construction
11.6.2 Extended Kalman Filter
11.6.3 Multi-Step Prediction
11.6.4 Objective Function and Constraints
11.6.5 Implementation of Sequential Linearization-Based MPC
Exercises
12 Nonlinear MPC
12.1 Introduction
12.2 NMPC Formulation
12.3 Solution via Nonlinear Programming
12.3.1 Elements of NLP Formulations
12.3.2 Nonlinear Programming Solvers
12.4 Stability and Other Properties
12.4.1 Invariant Set and Output Admissible Set
12.4.2 Cost-To-Go and Terminal Penalty
12.4.3 Establishing Closed-Loop Stability
12.4.4 Implementation: Quasi-Infinite Horizon MPC
12.5 Nonlinear State Estimation
12.5.1 Extended Kalman Filter
12.5.2 Moving Horizon Estimation for Nonlinear State Estimation
12.6 Case Study
12.7 Conclusions and Future Directions
Exercises
13 Repetitive MPC for Batch and Periodic Systems
13.1 Introduction
13.1.1 Historical Background
13.2 General Framework
13.2.1 Problem Formulation
13.2.2 Limitations of Conventional Feedback Control for Periodic Processes
13.3 Iterative Learning Model Predictive Control for Batch Systems
13.3.1 Notations
13.3.2 "Run-To-Run" IL-MPC Method for an Unconstrained System
13.3.3 "Run-To-Run" IL-MPC Method for a Constrained System
13.3.4 Real-Time-Feedback IL-MPC Method for an Unconstrained System
13.3.5 Real-Time-Feedback IL-MPC Method for the Constrained System
13.4 Repetitive Model Predictive Control for Continuous Systems with Periodic Operations
13.4.1 Notations
13.4.2 "Run-To-Run" R-MPC Method for an Unconstrained System
13.4.3 "Run-To-Run" R-MPC Methods for the Constrained System
13.4.4 Real-Time-Feedback-Based R-MPC Methods for the Unconstrained System
13.4.5 Real-Time-Feedback-Based R-MPC Methods for the Constrained System
13.5 Future Outlook
Exercises
Appendix A: Review of Linear Transformation
A.1 Vector Space
A.1.1 Definition
A.1.2 Dimension of a Vector Space
A.1.3 Linear Independence of Vectors
A.1.4 Basis
A.1.5 Subspace
A.1.6 Union, Intersection, Independence, and Internal Sum
A.1.7 Change of Basis
A.2 Linear Operator
A.2.1 Definition
A.2.2 Matrix Representation
A.2.3 Change of Basis for Linear Operators
A.2.4 Null Space and Image Space
A.2.5 Inverse Operator
A.2.6 Injection, Surjection, Bijection
A.2.7 Inner Product Space
A.2.8 Orthogonal Vectors and Orthonormal Basis
A.2.9 Change of Basis to Orthonormal Basis
A.2.10 Orthogonal Matrix
A.2.11 Projection, Orthogonal Projection
A.3 Matrix Algebra
A.3.1 Eigenvalues, Eigenvectors
A.3.2 Computing Eigenvalues and Eigenvectors
A.3.3 Jordan Decomposition and Its Applications
A.3.4 Singular Value Decomposition
A.3.5 Cayley-Hamilton Theorem
A.3.6 Matrix Function
A.3.7 Vector Norms
A.3.8 Matrix Norm
A.3.9 Positive (Negative) Definiteness and Semi-Definiteness
A.4 Exercises
B.1 Random Variables
B.1.1 Introduction
B.1.2 Basic Probability Concepts
B.1.3 Statistics
B.2 Stochastic Process
B.2.1 Basic Probability Concepts
C Model Reduction
C.1 Model Reduction Problem
C.2 Hankel Matrix and Hankel Singular Values
C.3 Balanced Realization and Truncation
C.4 Application to FIR Models
D.1 Kalman Filter as the Bayesian Estimator for Gaussian Systems
D.2 Moving Horizon Estimation: Recursive Solution to the Unconstrained Linear Problem
D.2.1 Dynamic Programming and Arrival Cost
D.2.2 Recursive Calculation of the Arrival Cost and One-Step-Ahead Prediction
D.2.3 Equivalence with the Kalman Filter
D.3 Stochastic State Feedback Problems
D.3.1 Open-Loop Optimal Solution via Least Squares
D.3.2 Optimal Feedback Policy via Dynamic Programming
D.3.3 Open-Loop Optimal Feedback Control vs. Optimal Feedback Control
D.4 Stochastic Output Feedback Problems
D.4.1 Optimal Output Feedback Controller
D.4.2 Derivation via Dynamic Programming
D.4.3 Extension to the Infinite Horizon Case: LQG Controller and Separation Principle
D.4.4 Analysis
E.1 Discrete Time Systems
E.2 The IMC Loop Structure and Properties
E.2.1 Stability
E.2.2 The Perfect Controller
E.2.3 Zero Offset
E.2.4 Robustness
E.2.5 IMC Feedforward Compensator Design
E.2.6 Saturation Constraints
E.2.7 IMC Tuning Method for PID Controllers
E.2.8 Analysis Tools
Exercises
F MPC Toolbox Tutorial: Shell Oil Fractionator
F.1 Problem Description
F.1.1 Background
F.1.2 Model Definition
F.1.3 Overview of Our Approach
F.2 Tutorial: Using the MPC GUI
F.2.1 Simplified Problem Definition
F.2.2 Using the MPC GUI
F.3 Solving the Shell Oil Control Problem Using the MPC Toolbox
F.3.1 Comparison of Control Structures
F.3.2 Specifying Target for MV
G A Brief Tutorial on Simulink
G.1 A MIMO System Example
G.2 Simulink Model for a CSTR
You may also be interested in
Computational Techniques for Process Simulation and Analysis Using MATLAB®
Niket S. Kaisare
2 499 kr
Computational Techniques for Process Simulation and Analysis Using MATLAB®
Niket S. Kaisare
1 449 kr
Hjärnans akilleshälar : hur din hjärna lurar dig, och vad du kan göra åt det
Anders Hansen
279 kr (329 kr)