Details

Matrix Algebra for Linear Models

1st edition

by: Marvin H. J. Gruber

91,99 €

Publisher: Wiley
Format: PDF
Published: December 2, 2013
ISBN/EAN: 9781118800416
Language: English
Number of pages: 392

DRM-protected eBook. To read it you need, for example, Adobe Digital Editions and an Adobe ID.

Description

A self-contained introduction to matrix analysis theory and applications in the field of statistics

Comprehensive in scope, Matrix Algebra for Linear Models offers a succinct summary of matrix theory and its related applications to statistics, especially linear models. The book provides a unified presentation of the mathematical properties and statistical applications of matrices in order to define and manipulate data.

Written for theoretical and applied statisticians, the book utilizes multiple numerical examples to illustrate key ideas, methods, and techniques crucial to understanding matrix algebra's application in linear models. Matrix Algebra for Linear Models expertly balances concepts and methods, allowing for a side-by-side presentation of matrix theory and its linear model applications. Including concise summaries on each topic, the book also features:

- Methods of deriving results from the properties of eigenvalues and the singular value decomposition
- Solutions to matrix optimization problems for obtaining more efficient biased estimators for parameters in linear regression models
- A section on the generalized singular value decomposition
- Multiple chapter exercises with selected answers to enhance understanding of the presented material

Matrix Algebra for Linear Models is an ideal textbook for advanced undergraduate and graduate-level courses on statistics, matrices, and linear algebra. The book is also an excellent reference for statisticians, engineers, economists, and readers interested in the linear statistical model.
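The book's recurring worked example is the least square estimator (Sections 3.6, 4.8, 16.3, and 21.3) and its ridge-type refinements (Section 24). In the standard notation for the full-rank linear model $y = X\beta + \varepsilon$ (the book's own symbols may differ), these are

$$
\hat{\beta} = (X^{\top}X)^{-1}X^{\top}y,
\qquad
\hat{\beta}_k = (X^{\top}X + kI)^{-1}X^{\top}y \quad (k > 0),
$$

where the ridge form trades a small bias for reduced variance, the efficiency theme noted in the bullet list above.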
Contents

Preface xiii
Acknowledgments xv

Part I Basic Ideas about Matrices and Systems of Linear Equations 1

Section 1 What Matrices are and Some Basic Operations with Them 3
1.1 Introduction 3
1.2 What are Matrices and why are they Interesting to a Statistician? 3
1.3 Matrix Notation, Addition, and Multiplication 6
1.4 Summary 10
Exercises 10

Section 2 Determinants and Solving a System of Equations 14
2.1 Introduction 14
2.2 Definition of and Formulae for Expanding Determinants 14
2.3 Some Computational Tricks for the Evaluation of Determinants 16
2.4 Solution to Linear Equations Using Determinants 18
2.5 Gauss Elimination 22
2.6 Summary 27
Exercises 27

Section 3 The Inverse of a Matrix 30
3.1 Introduction 30
3.2 The Adjoint Method of Finding the Inverse of a Matrix 30
3.3 Using Elementary Row Operations 31
3.4 Using the Matrix Inverse to Solve a System of Equations 33
3.5 Partitioned Matrices and Their Inverses 34
3.6 Finding the Least Square Estimator 38
3.7 Summary 44
Exercises 44

Section 4 Special Matrices and Facts about Matrices that will be used in the Sequel 47
4.1 Introduction 47
4.2 Matrices of the Form aI_n + bJ_n 47
4.3 Orthogonal Matrices 49
4.4 Direct Product of Matrices 52
4.5 An Important Property of Determinants 53
4.6 The Trace of a Matrix 56
4.7 Matrix Differentiation 57
4.8 The Least Square Estimator Again 62
4.9 Summary 62
Exercises 63

Section 5 Vector Spaces 66
5.1 Introduction 66
5.2 What is a Vector Space? 66
5.3 The Dimension of a Vector Space 68
5.4 Inner Product Spaces 70
5.5 Linear Transformations 73
5.6 Summary 76
Exercises 76

Section 6 The Rank of a Matrix and Solutions to Systems of Equations 79
6.1 Introduction 79
6.2 The Rank of a Matrix 79
6.3 Solving Systems of Equations with Coefficient Matrix of Less than Full Rank 84
6.4 Summary 87
Exercises 87

Part II Eigenvalues, the Singular Value Decomposition, and Principal Components 91

Section 7 Finding the Eigenvalues of a Matrix 93
7.1 Introduction 93
7.2 Eigenvalues and Eigenvectors of a Matrix 93
7.3 Nonnegative Definite Matrices 101
7.4 Summary 104
Exercises 105

Section 8 The Eigenvalues and Eigenvectors of Special Matrices 108
8.1 Introduction 108
8.2 Orthogonal, Nonsingular, and Idempotent Matrices 109
8.3 The Cayley–Hamilton Theorem 112
8.4 The Relationship between the Trace, the Determinant, and the Eigenvalues of a Matrix 114
8.5 The Eigenvalues and Eigenvectors of the Kronecker Product of Two Matrices 116
8.6 The Eigenvalues and the Eigenvectors of a Matrix of the Form aI + bJ 117
8.7 The Loewner Ordering 119
8.8 Summary 121
Exercises 122

Section 9 The Singular Value Decomposition (SVD) 124
9.1 Introduction 124
9.2 The Existence of the SVD 125
9.3 Uses and Examples of the SVD 127
9.4 Summary 134
Exercises 134

Section 10 Applications of the Singular Value Decomposition 137
10.1 Introduction 137
10.2 Reparameterization of a Non-full-Rank Model to a Full-Rank Model 137
10.3 Principal Components 141
10.4 The Multicollinearity Problem 143
10.5 Summary 144
Exercises 145

Section 11 Relative Eigenvalues and Generalizations of the Singular Value Decomposition 146
11.1 Introduction 146
11.2 Relative Eigenvalues and Eigenvectors 146
11.3 Generalizations of the Singular Value Decomposition: Overview 151
11.4 The First Generalization 152
11.5 The Second Generalization 157
11.6 Summary 160
Exercises 160

Part III Generalized Inverses 163

Section 12 Basic Ideas about Generalized Inverses 165
12.1 Introduction 165
12.2 What is a Generalized Inverse and how is One Obtained? 165
12.3 The Moore–Penrose Inverse 170
12.4 Summary 173
Exercises 173

Section 13 Characterizations of Generalized Inverses Using the Singular Value Decomposition 175
13.1 Introduction 175
13.2 Characterization of the Moore–Penrose Inverse 175
13.3 Generalized Inverses in Terms of the Moore–Penrose Inverse 177
13.4 Summary 185
Exercises 186

Section 14 Least Square and Minimum Norm Generalized Inverses 188
14.1 Introduction 188
14.2 Minimum Norm Generalized Inverses 189
14.3 Least Square Generalized Inverses 193
14.4 An Extension of Theorem 7.3 to Positive Semidefinite Matrices 196
14.5 Summary 197
Exercises 197

Section 15 More Representations of Generalized Inverses 200
15.1 Introduction 200
15.2 Another Characterization of the Moore–Penrose Inverse 200
15.3 Still another Representation of the Generalized Inverse 204
15.4 The Generalized Inverse of a Partitioned Matrix 207
15.5 Summary 211
Exercises 211

Section 16 Least Square Estimators for Less than Full-Rank Models 213
16.1 Introduction 213
16.2 Some Preliminaries 213
16.3 Obtaining the LS Estimator 214
16.4 Summary 221
Exercises 221

Part IV Quadratic Forms and the Analysis of Variance 223

Section 17 Quadratic Forms and their Probability Distributions 225
17.1 Introduction 225
17.2 Examples of Quadratic Forms 225
17.3 The Chi-Square Distribution 228
17.4 When does the Quadratic Form of a Random Variable have a Chi-Square Distribution? 230
17.5 When are Two Quadratic Forms with the Chi-Square Distribution Independent? 231
17.6 Summary 234
Exercises 235

Section 18 Analysis of Variance: Regression Models and the One- and Two-Way Classification 237
18.1 Introduction 237
18.2 The Full-Rank General Linear Regression Model 237
18.3 Analysis of Variance: One-Way Classification 241
18.4 Analysis of Variance: Two-Way Classification 244
18.5 Summary 249
Exercises 249

Section 19 More ANOVA 253
19.1 Introduction 253
19.2 The Two-Way Classification with Interaction 254
19.3 The Two-Way Classification with One Factor Nested 258
19.4 Summary 262
Exercises 262

Section 20 The General Linear Hypothesis 264
20.1 Introduction 264
20.2 The Full-Rank Case 264
20.3 The Non-full-Rank Case 267
20.4 Contrasts 270
20.5 Summary 273
Exercises 273

Part V Matrix Optimization Problems 275

Section 21 Unconstrained Optimization Problems 277
21.1 Introduction 277
21.2 Unconstrained Optimization Problems 277
21.3 The Least Square Estimator Again 281
21.4 Summary 283
Exercises 283

Section 22 Constrained Minimization Problems with Linear Constraints 287
22.1 Introduction 287
22.2 An Overview of Lagrange Multipliers 287
22.3 Minimizing a Second-Degree Form with Respect to a Linear Constraint 293
22.4 The Constrained Least Square Estimator 295
22.5 Canonical Correlation 299
22.6 Summary 302
Exercises 302

Section 23 The Gauss–Markov Theorem 304
23.1 Introduction 304
23.2 The Gauss–Markov Theorem and the Least Square Estimator 304
23.3 The Modified Gauss–Markov Theorem and the Linear Bayes Estimator 306
23.4 Summary 311
Exercises 311

Section 24 Ridge Regression-Type Estimators 314
24.1 Introduction 314
24.2 Minimizing a Second-Degree Form with Respect to a Quadratic Constraint 314
24.3 The Generalized Ridge Regression Estimators 315
24.4 The Mean Square Error of the Generalized Ridge Estimator without Averaging over the Prior Distribution 317
24.5 The Mean Square Error Averaging over the Prior Distribution 321
24.6 Summary 321
Exercises 321

Answers to Selected Exercises 324
References 366
Index 368
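For orientation, the two tools around which Parts II and III are organized are, in standard notation, the singular value decomposition of a matrix $A$ and the Moore–Penrose inverse built from it:

$$
A = U\Sigma V^{\top}, \qquad A^{+} = V\Sigma^{+}U^{\top},
$$

where $\Sigma^{+}$ replaces each nonzero singular value in $\Sigma$ by its reciprocal and transposes the result (again, the book's own notation may differ).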
Reviews

“This book seems suitable for an advanced undergraduate and/or introductory master's level course . . . Four appealing features of this book are its inclusion of an overview, a summary, exercises (with answers provided), and numerical examples for all sections.” (American Mathematical Society, 1 November 2015)

“The book is suitable for graduate and postgraduate students and researchers. This book is highly recommended.” (Zentralblatt, 1 April 2015)

“This is an excellent and comprehensive presentation of the use of matrices for linear models. The writing is very clear, and the layout is excellent. It would serve well either as a class text or as the foundation for individual personal study.” (International Statistical Review, 18 March 2014)
About the Author

MARVIN H. J. GRUBER, PhD, is Professor Emeritus in the School of Mathematical Sciences at Rochester Institute of Technology. He has authored several books and journal articles in his areas of research interest, which include improving the efficiency of regression estimators. Dr. Gruber is a member of the American Mathematical Society and the American Statistical Association.

You might also be interested in these products:

Statistics for Microarrays
by: Ernst Wit, John McClure
PDF ebook
90,99 €