Details

Advanced Kalman Filtering, Least-Squares and Modeling


A Practical Handbook
1st edition

By: Bruce P. Gibbs

159,99 €

Publisher: Wiley
Format: PDF
Published: 06.01.2011
ISBN/EAN: 9780470890035
Language: English
Number of pages: 632

DRM-protected eBook; to read it you need, for example, Adobe Digital Editions and an Adobe ID.

Description

This book is intended primarily as a handbook for engineers who must design practical systems. <p>Its primary goal is to discuss model development in sufficient detail so that the reader may design an estimator that meets all application requirements and is robust to modeling assumptions.  Since it is sometimes difficult to determine the best model structure <i>a priori</i>, use of <i>exploratory data analysis</i> to define model structure is discussed.  Methods for deciding on the “best” model are also presented. </p> <p>A second goal is to present little-known extensions of least-squares estimation or Kalman filtering that provide guidance on model structure and parameters, or make the estimator more robust to changes in real-world behavior.</p> <p>A third goal is discussion of implementation issues that make the estimator more accurate or efficient, or that make it flexible so that model alternatives can be easily compared.</p> <p>The fourth goal is to provide the designer/analyst with guidance in evaluating estimator performance and in determining/correcting problems.</p> <p>The final goal is to provide a subroutine library that simplifies implementation, and flexible general-purpose high-level drivers that allow both easy analysis of alternative models and access to extensions of the basic filtering.</p> <p>Supplemental materials and up-to-date errata are downloadable at <a href="http://booksupport.wiley.com/">http://booksupport.wiley.com</a>.</p>
<p>PREFACE xv</p> <p><b>1 INTRODUCTION 1</b></p> <p>1.1 The Forward and Inverse Modeling Problem 2</p> <p>1.2 A Brief History of Estimation 4</p> <p>1.3 Filtering, Smoothing, and Prediction 8</p> <p>1.4 Prerequisites 9</p> <p>1.5 Notation 9</p> <p>1.6 Summary 11</p> <p><b>2 SYSTEM DYNAMICS AND MODELS 13</b></p> <p>2.1 Discrete-Time Models 14</p> <p>2.2 Continuous-Time Dynamic Models 17</p> <p>2.2.1 State Transition and Process Noise Covariance Matrices 19</p> <p>2.2.2 Dynamic Models Using Basic Function Expansions 22</p> <p>2.2.3 Dynamic Models Derived from First Principles 25</p> <p>2.2.4 Stochastic (Random) Process Models 31</p> <p>2.2.5 Linear Regression Models 42</p> <p>2.2.6 Reduced-Order Modeling 44</p> <p>2.3 Computation of State Transition and Process Noise Matrices 45</p> <p>2.3.1 Numeric Computation of Φ 45</p> <p>2.3.2 Numeric Computation of QD 57</p> <p>2.4 Measurement Models 58</p> <p>2.5 Simulating Stochastic Systems 60</p> <p>2.6 Common Modeling Errors and System Biases 62</p> <p>2.7 Summary 65</p> <p><b>3 MODELING EXAMPLES 67</b></p> <p>3.1 Angle-Only Tracking of Linear Target Motion 67</p> <p>3.2 Maneuvering Vehicle Tracking 69</p> <p>3.2.1 Maneuvering Tank Tracking Using Multiple Models 69</p> <p>3.2.2 Aircraft Tracking 73</p> <p>3.3 Strapdown Inertial Navigation System (INS) Error Model 74</p> <p>3.4 Spacecraft Orbit Determination (OD) 80</p> <p>3.4.1 Geopotential Forces 83</p> <p>3.4.2 Other Gravitational Attractions 86</p> <p>3.4.3 Solar Radiation Pressure 87</p> <p>3.4.4 Aerodynamic Drag 88</p> <p>3.4.5 Thrust Forces 89</p> <p>3.4.6 Earth Motion 89</p> <p>3.4.7 Numerical Integration and Computation of Φ 90</p> <p>3.4.8 Measurements 92</p> <p>3.4.9 GOES I-P Satellites 96</p> <p>3.4.10 Global Positioning System (GPS) 97</p> <p>3.5 Fossil-Fueled Power Plant 99</p> <p>3.6 Summary 99</p> <p><b>4 LINEAR LEAST-SQUARES ESTIMATION: FUNDAMENTALS 101</b></p> <p>4.1 Least-Squares Data Fitting 101</p> <p>4.2 Weighted Least Squares 108</p> <p>4.3 Bayesian 
Estimation 115</p> <p>4.3.1 Bayesian Least Squares 115</p> <p>4.3.2 Bayes’ Theorem 117</p> <p>4.3.3 Minimum Variance or Minimum Mean-Squared Error (MMSE) 121</p> <p>4.3.4 Orthogonal Projections 124</p> <p>4.4 Probabilistic Approaches—Maximum Likelihood and Maximum A Posteriori 125</p> <p>4.4.1 Gaussian Random Variables 126</p> <p>4.4.2 Maximum Likelihood Estimation 128</p> <p>4.4.3 Maximum A Posteriori 133</p> <p>4.5 Summary of Linear Estimation Approaches 137</p> <p><b>5 LINEAR LEAST-SQUARES ESTIMATION: SOLUTION TECHNIQUES 139</b></p> <p>5.1 Matrix Norms, Condition Number, Observability, and the Pseudo-Inverse 139</p> <p>5.1.1 Vector-Matrix Norms 139</p> <p>5.1.2 Matrix Pseudo-Inverse 141</p> <p>5.1.3 Condition Number 141</p> <p>5.1.4 Observability 145</p> <p>5.2 Normal Equation Formation and Solution 145</p> <p>5.2.1 Computation of the Normal Equations 145</p> <p>5.2.2 Cholesky Decomposition of the Normal Equations 149</p> <p>5.3 Orthogonal Transformations and the QR Method 156</p> <p>5.3.1 Givens Rotations 158</p> <p>5.3.2 Householder Transformations 159</p> <p>5.3.3 Modified Gram-Schmidt (MGS) Orthogonalization 162</p> <p>5.3.4 QR Numerical Accuracy 165</p> <p>5.4 Least-Squares Solution Using the SVD 165</p> <p>5.5 Iterative Techniques 167</p> <p>5.5.1 Sparse Array Storage 167</p> <p>5.5.2 Linear Iteration 168</p> <p>5.5.3 Least-Squares Solution for Large Sparse Problems Using Krylov Space Methods 169</p> <p>5.6 Comparison of Methods 175</p> <p>5.6.1 Solution Accuracy for Polynomial Problem 175</p> <p>5.6.2 Algorithm Timing 181</p> <p>5.7 Solution Uniqueness, Observability, and Condition Number 183</p> <p>5.8 Pseudo-Inverses and the Singular Value Transformation (SVD) 185</p> <p>5.9 Summary 190</p> <p><b>6 LEAST-SQUARES ESTIMATION: MODEL ERRORS AND MODEL ORDER 193</b></p> <p>6.1 Assessing the Validity of the Solution 194</p> <p>6.1.1 Residual Sum-of-Squares (SOS) 194</p> <p>6.1.2 Residual Patterns 195</p> <p>6.1.3 Subsets of Residuals 196</p> <p>6.1.4 
Measurement Prediction 196</p> <p>6.1.5 Estimate Comparison 197</p> <p>6.2 Solution Error Analysis 208</p> <p>6.2.1 State Error Covariance and Confidence Bounds 208</p> <p>6.2.2 Model Error Analysis 212</p> <p>6.3 Regression Analysis for Weighted Least Squares 237</p> <p>6.3.1 Analysis of Variance 238</p> <p>6.3.2 Stepwise Regression 239</p> <p>6.3.3 Prediction and Optimal Data Span 244</p> <p>6.4 Summary 245</p> <p><b>7 LEAST-SQUARES ESTIMATION: CONSTRAINTS, NONLINEAR MODELS, AND ROBUST TECHNIQUES 249</b></p> <p>7.1 Constrained Estimates 249</p> <p>7.1.1 Least-Squares with Linear Equality Constraints (Problem LSE) 249</p> <p>7.1.2 Least-Squares with Linear Inequality Constraints (Problem LSI) 256</p> <p>7.2 Recursive Least Squares 257</p> <p>7.3 Nonlinear Least Squares 259</p> <p>7.3.1 1-D Nonlinear Least-Squares Solutions 263</p> <p>7.3.2 Optimization for Multidimensional Unconstrained Nonlinear Least Squares 264</p> <p>7.3.3 Stopping Criteria and Convergence Tests 269</p> <p>7.4 Robust Estimation 282</p> <p>7.4.1 De-Weighting Large Residuals 282</p> <p>7.4.2 Data Editing 283</p> <p>7.5 Measurement Preprocessing 285</p> <p>7.6 Summary 286</p> <p><b>8 KALMAN FILTERING 289</b></p> <p>8.1 Discrete-Time Kalman Filter 290</p> <p>8.1.1 Truth Model 290</p> <p>8.1.2 Discrete-Time Kalman Filter Algorithm 291</p> <p>8.2 Extensions of the Discrete Filter 303</p> <p>8.2.1 Correlation between Measurement and Process Noise 303</p> <p>8.2.2 Time-Correlated (Colored) Measurement Noise 305</p> <p>8.2.3 Innovations, Model Validation, and Editing 311</p> <p>8.3 Continuous-Time Kalman-Bucy Filter 314</p> <p>8.4 Modifications of the Discrete Kalman Filter 321</p> <p>8.4.1 Friedland Bias-Free/Bias-Restoring Filter 321</p> <p>8.4.2 Kalman-Schmidt Consider Filter 325</p> <p>8.5 Steady-State Solution 328</p> <p>8.6 Wiener Filter 332</p> <p>8.6.1 Wiener-Hopf Equation 333</p> <p>8.6.2 Solution for the Optimal Weighting Function 335</p> <p>8.6.3 Filter Input Covariances 336</p> <p>8.6.4 
Equivalence of Wiener and Steady-State Kalman-Bucy Filters 337</p> <p>8.7 Summary 341</p> <p><b>9 FILTERING FOR NONLINEAR SYSTEMS, SMOOTHING, ERROR ANALYSIS, MODEL DESIGN, AND</b><br /><b>MEASUREMENT PREPROCESSING 343</b></p> <p>9.1 Nonlinear Filtering 344</p> <p>9.1.1 Linearized and Extended Kalman Filters 344</p> <p>9.1.2 Iterated Extended Kalman Filter 349</p> <p>9.2 Smoothing 352</p> <p>9.2.1 Fixed-Point Smoother 353</p> <p>9.2.2 Fixed-Lag Smoother 356</p> <p>9.2.3 Fixed-Interval Smoother 357</p> <p>9.3 Filter Error Analysis and Reduced-Order Modeling 370</p> <p>9.3.1 Linear Analysis of Independent Error Sources 372</p> <p>9.3.2 Error Analysis for ROM Defined as a Transformed Detailed Model 380</p> <p>9.3.3 Error Analysis for Different Truth and Filter Models 382</p> <p>9.4 Measurement Preprocessing 385</p> <p>9.5 Summary 385</p> <p><b>10 FACTORED (SQUARE-ROOT) FILTERING 389</b></p> <p>10.1 Filter Numerical Accuracy 390</p> <p>10.2 U-D Filter 392</p> <p>10.2.1 U-D Filter Measurement Update 394</p> <p>10.2.2 U-D Filter Time Update 396</p> <p>10.2.3 RTS Smoother for U-D Filter 401</p> <p>10.2.4 U-D Error Analysis 403</p> <p>10.3 Square Root Information Filter (SRIF) 404</p> <p>10.3.1 SRIF Time Update 405</p> <p>10.3.2 SRIF Measurement Update 407</p> <p>10.3.3 Square Root Information Smoother (SRIS) 408</p> <p>10.3.4 Dyer-McReynolds Covariance Smoother (DMCS) 410</p> <p>10.3.5 SRIF Error Analysis 410</p> <p>10.4 Inertial Navigation System (INS) Example Using Factored Filters 412</p> <p>10.5 Large Sparse Systems and the SRIF 417</p> <p>10.6 Spatial Continuity Constraints and the SRIF Data Equation 419</p> <p>10.6.1 Flow Model 421</p> <p>10.6.2 Log Conductivity Spatial Continuity Model 422</p> <p>10.6.3 Measurement Models 424</p> <p>10.6.4 SRIF Processing 424</p> <p>10.6.5 Steady-State Flow Constrained Iterative Solution 425</p> <p>10.7 Summary 427</p> <p><b>11 ADVANCED FILTERING TOPICS 431</b></p> <p>11.1 Maximum Likelihood Parameter Estimation 432</p> <p>11.1.1 
Calculation of the State Transition Partial Derivatives 434</p> <p>11.1.2 Derivatives of the Filter Time Update 438</p> <p>11.1.3 Derivatives of the Filter Measurement Update 439</p> <p>11.1.4 Partial Derivatives for Initial Condition Errors 440</p> <p>11.1.5 Computation of the Log Likelihood and Scoring Step 441</p> <p>11.2 Adaptive Filtering 449</p> <p>11.3 Jump Detection and Estimation 450</p> <p>11.3.1 Jump-Free Filter Equations 452</p> <p>11.3.2 Stepwise Regression 454</p> <p>11.3.3 Correction of Jump-Free Filter State 455</p> <p>11.3.4 Real-Time Jump Detection Using Stepwise Regression 456</p> <p>11.4 Adaptive Target Tracking Using Multiple Model Hypotheses 461</p> <p>11.4.1 Weighted Sum of Filter Estimates 462</p> <p>11.4.2 Maximum Likelihood Filter Selection 463</p> <p>11.4.3 Dynamic and Interactive Multiple Models 464</p> <p>11.5 Constrained Estimation 471</p> <p>11.6 Robust Estimation: H-Infinity Filters 471</p> <p>11.7 Unscented Kalman Filter (UKF) 474</p> <p>11.7.1 Unscented Transform 475</p> <p>11.7.2 UKF Algorithm 478</p> <p>11.8 Particle Filters 485</p> <p>11.9 Summary 490</p> <p><b>12 EMPIRICAL MODELING 493</b></p> <p>12.1 Exploratory Time Series Analysis and System Identification 494</p> <p>12.2 Spectral Analysis Based on the Fourier Transform 495</p> <p>12.2.1 Fourier Series for Periodic Functions 497</p> <p>12.2.2 Fourier Transform of Continuous Energy Signals 498</p> <p>12.2.3 Fourier Transform of Power Signals 502</p> <p>12.2.4 Power Spectrum of Stochastic Signals 504</p> <p>12.2.5 Time-Limiting Window Functions 506</p> <p>12.2.6 Discrete Fourier Transform 509</p> <p>12.2.7 Periodogram Computation of Power Spectra 512</p> <p>12.2.8 Blackman-Tukey (Correlogram) Computation of Power Spectra 514</p> <p>12.3 Autoregressive Modeling 522</p> <p>12.3.1 Maximum Entropy Method (MEM) 524</p> <p>12.3.2 Burg MEM 525</p> <p>12.3.3 Final Prediction Error (FPE) and Akaike Information Criteria (AIC) 526</p> <p>12.3.4 Marple AR Spectral Analysis 528</p> 
<p>12.3.5 Summary of MEM Modeling Approaches 529</p> <p>12.4 ARMA Modeling 531</p> <p>12.4.1 ARMA Parameter Estimation 532</p> <p>12.5 Canonical Variate Analysis 534</p> <p>12.5.1 CVA Derivation and Overview 536</p> <p>12.5.2 Summary of CVA Steps 539</p> <p>12.5.3 Sample Correlation Matrices 540</p> <p>12.5.4 Order Selection Using the AIC 541</p> <p>12.5.5 State-Space Model 543</p> <p>12.5.6 Measurement Power Spectrum Using the State-Space Model 544</p> <p>12.6 Conversion from Discrete to Continuous Models 548</p> <p>12.7 Summary 551</p> <p><b>APPENDIX A SUMMARY OF VECTOR-MATRIX OPERATIONS 555</b></p> <p>A.1 Definition 555</p> <p>A.1.1 Vectors 555</p> <p>A.1.2 Matrices 555</p> <p>A.2 Elementary Vector-Matrix Operations 557</p> <p>A.2.1 Transpose 557</p> <p>A.2.2 Addition 557</p> <p>A.2.3 Inner (Dot) Product of Vectors 557</p> <p>A.2.4 Outer Product of Vectors 558</p> <p>A.2.5 Multiplication 558</p> <p>A.3 Matrix Functions 558</p> <p>A.3.1 Matrix Inverse 558</p> <p>A.3.2 Partitioned Matrix Inversion 559</p> <p>A.3.3 Matrix Inversion Identity 560</p> <p>A.3.4 Determinant 561</p> <p>A.3.5 Matrix Trace 562</p> <p>A.3.6 Derivatives of Matrix Functions 563</p> <p>A.3.7 Norms 564</p> <p>A.4 Matrix Transformations and Factorization 565</p> <p>A.4.1 LU Decomposition 565</p> <p>A.4.2 Cholesky Factorization 565</p> <p>A.4.3 Similarity Transformation 566</p> <p>A.4.4 Eigen Decomposition 566</p> <p>A.4.5 Singular Value Decomposition (SVD) 566</p> <p>A.4.6 Pseudo-Inverse 567</p> <p>A.4.7 Condition Number 568</p> <p><b>APPENDIX B PROBABILITY AND RANDOM VARIABLES 569</b></p> <p>B.1 Probability 569</p> <p>B.1.1 Definitions 569</p> <p>B.1.2 Joint and Conditional Probability, and Independence 570</p> <p>B.2 Random Variable 571</p> <p>B.2.1 Distribution and Density Functions 571</p> <p>B.2.2 Bayes’ Theorem for Density Functions 572</p> <p>B.2.3 Moments of Random Variables 573</p> <p>B.2.4 Gaussian Distribution 574</p> <p>B.2.5 Chi-Squared Distribution 574</p> <p>B.3 Stochastic 
Processes 575</p> <p>B.3.1 Wiener or Brownian Motion Process 576</p> <p>B.3.2 Markov Process 576</p> <p>B.3.3 Differential and Integral Equations with White Noise Inputs 577</p> <p>BIBLIOGRAPHY 579</p> <p>INDEX 599</p>
<b>BRUCE P. GIBBS</b> has forty-one years of experience applying estimation and control theory to problems for NASA, the Department of Defense, the Department of Energy, the National Science Foundation, and private industry. He is currently a consulting scientist at Carr Astronautics, where he designs image navigation software for the GOES-R geosynchronous weather satellite. Gibbs previously developed similar systems for the GOES-NOP weather satellites and GPS.
<b>The only book to cover least-squares estimation, Kalman filtering, and model development</b> <p>This book provides a complete explanation of estimation theory and application, modeling approaches, and model evaluation. Each topic starts with a clear explanation of the theory (often including historical context), followed by application issues that should be considered in the design. Different implementations designed to address specific problems are presented, and numerous examples of varying complexity are used to demonstrate the concepts.</p> <p>It focuses on practical methods for developing and implementing least-squares estimators, Kalman filters, and newer filtering techniques. Since model development is critical to a successful implementation, the book discusses first-principle approaches, basis function expansions, stochastic models, and ARMA-type structures. Computation of empirical models and determination of "best" model structures and order are also discussed. The text is written to help the reader design an estimator that meets all application requirements.</p> <p>Specifically addressed are methods for developing models that meet estimation goals, procedures for making the estimator robust to modeling and numerical errors, extensions of the basic methods for handling non-ideal systems, and techniques for evaluating performance and analyzing accuracy problems. 
Including many real-world examples, the book:</p> <ul> <li>Presents little-known extensions of least-squares estimation and Kalman filtering that provide guidance on model structure and parameters</li> <li>Explains numerical accuracy, computational burden, and modeling tradeoffs for real-world applications</li> <li>Discusses implementation issues that make the estimator more accurate or efficient, or that make it flexible so that model alternatives can be easily compared</li> <li>Offers guidance in evaluating estimator performance and in determining/correcting problems</li> <li>A related Web site provides a subroutine library that simplifies implementation, as well as general purpose high-level drivers that allow for the easy analysis of alternative models and access to extensions of the basic Kalman filtering</li> </ul> <p>Drawing from four decades of the author's experience with the material, <i>Advanced Kalman Filtering, Least-Squares and Modeling</i> is a comprehensive and detailed explanation of these topics. Practicing engineers, designers, analysts, and students using estimation theory to develop practical systems will find this a very useful reference.</p>
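To make the subject of the description concrete, here is a minimal NumPy sketch of the discrete Kalman filter predict/update cycle that the book develops in Chapter 8. This is an illustration only, not code from the book's subroutine library; the constant-velocity model and all noise values below are invented for the example:

```python
import numpy as np

def kf_predict(x, P, Phi, Q):
    """Time update: propagate the state estimate and its covariance."""
    x = Phi @ x
    P = Phi @ P @ Phi.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    """Measurement update: blend the prediction with measurement z."""
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)          # corrected state estimate
    P = (np.eye(len(x)) - K @ H) @ P # corrected covariance
    return x, P

# Hypothetical example: constant-velocity target, noisy position measurements
dt = 1.0
Phi = np.array([[1.0, dt], [0.0, 1.0]])  # state transition matrix
Q = 0.01 * np.eye(2)                     # process noise covariance
H = np.array([[1.0, 0.0]])               # we observe position only
R = np.array([[0.5]])                    # measurement noise covariance

x = np.zeros(2)          # initial state: [position, velocity]
P = 10.0 * np.eye(2)     # large initial uncertainty
for z in [1.1, 2.0, 2.9, 4.2, 5.0]:
    x, P = kf_predict(x, P, Phi, Q)
    x, P = kf_update(x, P, np.array([z]), H, R)
print(x)  # estimated [position, velocity]
```

After processing the five measurements, the position estimate settles near the last measurement and the velocity estimate near the measurements' average slope of roughly one unit per step.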

You may also be interested in these products:

Bandwidth Efficient Coding
By: John B. Anderson
EPUB ebook
114,99 €
Digital Communications with Emphasis on Data Modems
By: Richard W. Middlestead
PDF ebook
171,99 €
Bandwidth Efficient Coding
By: John B. Anderson
PDF ebook
114,99 €