Cooperative Control of Distributed Multi-Agent Systems
2 689 kr
Ships. Free shipping for members on orders of at least 249 kr.
Product information
- Publication date: 2007-12-07
- Dimensions: 174 x 250 x 31 mm
- Weight: 948 g
- Format: Hardcover
- Language: English
- Number of pages: 464
- Publisher: John Wiley & Sons Inc
- ISBN: 9780470060315
About the author
Jeff Shamma's research interests are feedback control and systems theory. He received a Ph.D. in Systems Science and Engineering in 1988 from the Massachusetts Institute of Technology, Department of Mechanical Engineering. His previous faculty positions include the University of Minnesota, Minneapolis, and the University of Texas, Austin. Since 1999 he has been with UCLA, where he is currently a Professor of Mechanical and Aerospace Engineering; he served as the MAE Department Vice Chair for Graduate Affairs from 2000 to 2002. He is a recipient of the NSF Young Investigator Award (1992) and the American Automatic Control Council Donald P. Eckman Award (1996), was a Plenary Speaker at the American Control Conference (1998), and is a Fellow of the IEEE (2006). He has served on the editorial boards of the IEEE Transactions on Automatic Control and Systems & Control Letters.
Table of contents
- List of Contributors
- Preface
- Part I: Introduction
- 1 Dimensions of cooperative control (Jeff S. Shamma and Gurdal Arslan)
  - 1.1 Why cooperative control?
    - 1.1.1 Motivation
    - 1.1.2 Illustrative example: command and control of networked vehicles
  - 1.2 Dimensions of cooperative control
    - 1.2.1 Distributed control and computation
    - 1.2.2 Adversarial interactions
    - 1.2.3 Uncertain evolution
    - 1.2.4 Complexity management
  - 1.3 Future directions
  - Acknowledgements
  - References
- Part II: Distributed Control and Computation
- 2 Design of behavior of swarms: From flocking to data fusion using microfilter networks (Reza Olfati-Saber)
  - 2.1 Introduction
  - 2.2 Consensus problems
  - 2.3 Flocking behavior for distributed coverage
    - 2.3.1 Collective potential of flocks
    - 2.3.2 Distributed flocking algorithms
    - 2.3.3 Stability analysis for flocking motion
    - 2.3.4 Simulations of flocking
  - 2.4 Microfilter networks for cooperative data fusion
  - Acknowledgements
  - References
- 3 Connectivity and convergence of formations (Sonja Glavaški, Anca Williams and Tariq Samad)
  - 3.1 Introduction
  - 3.2 Problem formulation
  - 3.3 Algebraic graph theory
  - 3.4 Stability of vehicle formations in the case of time-invariant communication
    - 3.4.1 Formation hierarchy
  - 3.5 Stability of vehicle formations in the case of time-variant communication
  - 3.6 Stabilizing feedback for the time-variant communication case
  - 3.7 Graph connectivity and stability of vehicle formations
  - 3.8 Conclusion
  - Acknowledgements
  - References
- 4 Distributed receding horizon control: stability via move suppression (William B. Dunbar)
  - 4.1 Introduction
  - 4.2 System description and objective
  - 4.3 Distributed receding horizon control
  - 4.4 Feasibility and stability analysis
  - 4.5 Conclusion
  - Acknowledgement
  - References
- 5 Distributed predictive control: synthesis, stability and feasibility (Tamás Keviczky, Francesco Borrelli and Gary J. Balas)
  - 5.1 Introduction
  - 5.2 Problem formulation
  - 5.3 Distributed MPC scheme
  - 5.4 DMPC stability analysis
    - 5.4.1 Individual value functions as Lyapunov functions
    - 5.4.2 Generalization to arbitrary number of nodes and graph
    - 5.4.3 Exchange of information
    - 5.4.4 Stability analysis for heterogeneous unconstrained LTI subsystems
  - 5.5 Distributed design for identical unconstrained LTI subsystems
    - 5.5.1 LQR properties for dynamically decoupled systems
    - 5.5.2 Distributed LQR design
  - 5.6 Ensuring feasibility
    - 5.6.1 Robust constraint fulfillment
    - 5.6.2 Review of methodologies
  - 5.7 Conclusion
  - References
- 6 Task assignment for mobile agents (Brandon J. Moore and Kevin M. Passino)
  - 6.1 Introduction
  - 6.2 Background
    - 6.2.1 Primal and dual problems
    - 6.2.2 Auction algorithm
  - 6.3 Problem statement
    - 6.3.1 Feasible and optimal vehicle trajectories
    - 6.3.2 Benefit functions
  - 6.4 Assignment algorithm and results
    - 6.4.1 Assumptions
    - 6.4.2 Motion control for a distributed auction
    - 6.4.3 Assignment algorithm termination
    - 6.4.4 Optimality bounds
    - 6.4.5 Early task completion
  - 6.5 Simulations
    - 6.5.1 Effects of delays
    - 6.5.2 Effects of bidding increment
    - 6.5.3 Early task completions
    - 6.5.4 Distributed vs. centralized computation
  - 6.6 Conclusions
  - Acknowledgements
  - References
- 7 On the value of information in dynamic multiple-vehicle routing problems (Alessandro Arsie, John J. Enright and Emilio Frazzoli)
  - 7.1 Introduction
  - 7.2 Problem formulation
  - 7.3 Control policy description
    - 7.3.1 A control policy requiring no explicit communication: the unlimited sensing capabilities case
    - 7.3.2 A control policy requiring communication among closest neighbors: the limited sensing capabilities case
    - 7.3.3 A sensor-based control policy
  - 7.4 Performance analysis in light load
    - 7.4.1 Overview of the system behavior in the light load regime
    - 7.4.2 Convergence of reference points
    - 7.4.3 Convergence to the generalized median
    - 7.4.4 Fairness and efficiency
    - 7.4.5 A comparison with algorithms for vector quantization and centroidal Voronoi tessellations
  - 7.5 A performance analysis for sTP, mTP/FG and mTP policies
    - 7.5.1 The case of sTP policy
    - 7.5.2 The case of mTP/FG and mTP policies
  - 7.6 Some numerical results
    - 7.6.1 Uniform distribution, light load
    - 7.6.2 Non-uniform distribution, light load
    - 7.6.3 Uniform distribution, dependency on the target generation rate
    - 7.6.4 The sTP policy
  - 7.7 Conclusions
  - References
- 8 Optimal agent cooperation with local information (Eric Feron and Jan DeMot)
  - 8.1 Introduction
  - 8.2 Notation and problem formulation
  - 8.3 Mathematical problem formulation
    - 8.3.1 DP formulation
    - 8.3.2 LP formulation
  - 8.4 Algorithm overview and LP decomposition
    - 8.4.1 Intuition and algorithm overview
    - 8.4.2 LP decomposition
  - 8.5 Fixed point computation
    - 8.5.1 Single agent problem
    - 8.5.2 Mixed forward-backward recursion
    - 8.5.3 Forward recursion
    - 8.5.4 LTI system
    - 8.5.5 Computation of the optimal value function at small separations
  - 8.6 Discussion and examples
  - 8.7 Conclusion
  - Acknowledgements
  - References
- 9 Multiagent cooperation through egocentric modeling (Vincent Pei-wen Seah and Jeff S. Shamma)
  - 9.1 Introduction
  - 9.2 Centralized and decentralized optimization
    - 9.2.1 Markov model
    - 9.2.2 Fully centralized optimization
    - 9.2.3 Fully decentralized optimization
  - 9.3 Evolutionary cooperation
  - 9.4 Analysis of convergence
    - 9.4.1 Idealized iterations and main result
    - 9.4.2 Proof of Theorem 9.4.2
  - 9.5 Conclusion
  - Acknowledgements
  - References
- Part III: Adversarial Interactions
- 10 Multi-vehicle cooperative control using mixed integer linear programming (Matthew G. Earl and Raffaello D'Andrea)
  - 10.1 Introduction
  - 10.2 Vehicle dynamics
  - 10.3 Obstacle avoidance
  - 10.4 RoboFlag problems
    - 10.4.1 Defensive Drill 1: one-on-one case
    - 10.4.2 Defensive Drill 2: one-on-one case
    - 10.4.3 ND-on-NA case
  - 10.5 Average case complexity
  - 10.6 Discussion
  - 10.7 Appendix: Converting logic into inequalities
    - 10.7.1 Equation (10.24)
    - 10.7.2 Equation (10.33)
  - Acknowledgements
  - References
- 11 LP-based multi-vehicle path planning with adversaries (Georgios C. Chasparis and Jeff S. Shamma)
  - 11.1 Introduction
  - 11.2 Problem formulation
    - 11.2.1 State-space model
    - 11.2.2 Single resource models
    - 11.2.3 Adversarial environment
    - 11.2.4 Model simplifications
    - 11.2.5 Enemy modeling
  - 11.3 Optimization set-up
    - 11.3.1 Objective function
    - 11.3.2 Constraints
    - 11.3.3 Mixed-integer linear optimization
  - 11.4 LP-based path planning
    - 11.4.1 Linear programming relaxation
    - 11.4.2 Suboptimal solution
    - 11.4.3 Receding horizon implementation
  - 11.5 Implementation
    - 11.5.1 Defense path planning
    - 11.5.2 Attack path planning
    - 11.5.3 Simulations and discussion
  - 11.6 Conclusion
  - Acknowledgements
  - References
- 12 Characterization of LQG differential games with different information patterns (Ashitosh Swarup and Jason L. Speyer)
  - 12.1 Introduction
  - 12.2 Formulation of the discrete-time LQG game
  - 12.3 Solution of the LQG game as the limit to the LEG game
    - 12.3.1 Problem formulation of the LEG game
    - 12.3.2 Solution to the LEG game problem
    - 12.3.3 Filter properties for small values of θ
    - 12.3.4 Construction of the LEG equilibrium cost function
  - 12.4 LQG game as the limit of the LEG game
    - 12.4.1 Behavior of filter in the limit
    - 12.4.2 Limiting value of the cost
    - 12.4.3 Convexity conditions
    - 12.4.4 Results
  - 12.5 Correlation properties of the LQG game filter in the limit
    - 12.5.1 Characteristics of the matrix P_i^{-1} P_i
    - 12.5.2 Transformed filter equations
    - 12.5.3 Correlation properties of ε_i^2
    - 12.5.4 Correlation properties of ε_i^1
  - 12.6 Cost function properties: effect of a perturbation in u_p
  - 12.7 Performance of the Kalman filtering algorithm
  - 12.8 Comparison with the Willman algorithm
  - 12.9 Equilibrium properties of the cost function: the saddle interval
  - 12.10 Conclusion
  - Acknowledgements
  - References
- Part IV: Uncertain Evolution
- 13 Modal estimation of jump linear systems: an information theoretic viewpoint (Nuno C. Martins and Munther A. Dahleh)
  - 13.1 Estimation of a class of hidden Markov models
    - 13.1.1 Notation
  - 13.2 Problem statement
    - 13.2.1 Main results
    - 13.2.2 Posing the problem statement as a coding paradigm
    - 13.2.3 Comparative analysis with previous work
  - 13.3 Encoding and decoding
    - 13.3.1 Description of the estimator (decoder)
  - 13.4 Performance analysis
    - 13.4.1 An efficient decoding algorithm
    - 13.4.2 Numerical results
  - 13.5 Auxiliary results leading to the proof of Theorem 13.4.3
  - Acknowledgements
  - References
- 14 Conditionally-linear filtering for mode estimation in jump-linear systems (Daniel Choukroun and Jason L. Speyer)
  - 14.1 Introduction
  - 14.2 Conditionally-linear filtering
    - 14.2.1 Short review of the standard linear filtering problem
    - 14.2.2 The conditionally-linear filtering problem
    - 14.2.3 Discussion
  - 14.3 Mode estimation for jump-linear systems
    - 14.3.1 Statement of the problem
    - 14.3.2 State-space model for y_k
    - 14.3.3 Development of the conditionally-linear filter
    - 14.3.4 Discussion
    - 14.3.5 Reduced-order filter
    - 14.3.6 Comparison with the Wonham filter
    - 14.3.7 Case of noisy observations of x_k
  - 14.4 Numerical example
    - 14.4.1 Gyro failure detection from accurate spacecraft attitude measurements
  - 14.5 Conclusion
  - 14.6 Appendix A: Inner product of equation (14.14)
  - 14.7 Appendix B: Development of the filter equations (14.36) to (14.37)
  - Acknowledgements
  - References
- 15 Cohesion of languages in grammar networks (Y. Lee, T.C. Collier, C.E. Taylor and E.P. Stabler)
  - 15.1 Introduction
  - 15.2 Evolutionary dynamics of languages
  - 15.3 Topologies of language populations
  - 15.4 Language structure
  - 15.5 Networks induced by structural similarity
    - 15.5.1 Three equilibrium states
    - 15.5.2 Density of grammar networks and language convergence
    - 15.5.3 Rate of language convergence in grammar networks
  - 15.6 Conclusion
  - Acknowledgements
  - References
- Part V: Complexity Management
- 16 Complexity management in the state estimation of multi-agent systems (Domitilla Del Vecchio and Richard M. Murray)
  - 16.1 Introduction
  - 16.2 Motivating example
  - 16.3 Basic concepts
    - 16.3.1 Partial order theory
    - 16.3.2 Deterministic transition systems
  - 16.4 Problem formulation
  - 16.5 Problem solution
  - 16.6 Example: the RoboFlag Drill
    - 16.6.1 RoboFlag Drill estimator
    - 16.6.2 Complexity of the RoboFlag Drill estimator
    - 16.6.3 Simulation results
  - 16.7 Existence of discrete state estimators on a lattice
  - 16.8 Extensions to the estimation of discrete and continuous variables
    - 16.8.1 RoboFlag Drill with continuous dynamics
  - 16.9 Conclusion
  - Acknowledgement
  - References
- 17 Abstraction-based command and control with patch models (V. G. Rao, S. Goldfarb and R. D'Andrea)
  - 17.1 Introduction
  - 17.2 Overview of patch models
  - 17.3 Realization and verification
  - 17.4 Human and artificial decision-making
    - 17.4.1 Example: the surround behavior
  - 17.5 Hierarchical control
    - 17.5.1 Information content and situation awareness
  - 17.6 Conclusion
  - References
- Index
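To give a flavor of the material in Part II: the consensus problems treated in Chapter 2 concern networked agents that reach agreement on a common value by repeatedly averaging with their neighbors. A minimal sketch of such an averaging iteration (this is not code from the book; the ring graph, step size and initial states below are hypothetical choices for illustration only):

```python
# Discrete-time average consensus on a fixed undirected graph.
# Hypothetical example: 4 agents on a ring, each agent only sees its
# immediate neighbors' states.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

def consensus_step(x, eps=0.25):
    """One synchronous update: each agent nudges its state toward
    its neighbors (eps must be below 1/max-degree for stability)."""
    return [
        xi + eps * sum(x[j] - xi for j in neighbors[i])
        for i, xi in enumerate(x)
    ]

x = [1.0, 5.0, 3.0, 7.0]   # initial local states
for _ in range(100):
    x = consensus_step(x)
# All states converge to the average of the initial values (4.0 here).
```

With only local exchanges, every agent ends up at the global average, which is the basic mechanism behind the distributed data-fusion and flocking results the chapter builds on.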