Details

Linear Models


Wiley Series in Probability and Statistics, 2nd edition

by: Shayle R. Searle, Marvin H. J. Gruber

120,99 €

Publisher: Wiley
Format: PDF
Published: September 23, 2016
ISBN/EAN: 9781118952849
Language: English
Number of pages: 696

DRM-protected eBook; to read it you will need, for example, Adobe Digital Editions and an Adobe ID.

Description

Provides an easy-to-understand guide to statistical linear models and their uses in data analysis

This book defines a broad spectrum of statistical linear models that is useful in the analysis of data. Considerable rewriting was done to make the book more reader-friendly than the first edition. Linear Models, Second Edition is written to be self-contained for a reader with a background in basic statistics, calculus, and linear algebra. The text includes numerous applied illustrations, numerical examples, and exercises, now augmented with computer outputs in SAS and R. Also new to this edition are:

• A greatly improved internal design and format
• A short introductory chapter to ease understanding of the order in which topics are taken up
• Discussion of additional topics, including multiple comparisons and shrinkage estimators
• Enhanced discussions of generalized inverses, the MINQUE, and Bayes and maximum likelihood estimators for estimating variance components

Furthermore, in this edition the second author adds many pedagogical elements throughout the book. These include numbered examples, end-of-example and end-of-proof symbols, selected hints and solutions to exercises available on the book's website, and references to "big data" in everyday life. Featuring a thorough update, Linear Models, Second Edition includes:

• A new internal format, additional instructional pedagogy, selected hints and solutions to exercises, and several more real-life applications
• Many examples using SAS and R with timely data sets
• Over 400 examples and exercises throughout the book to reinforce understanding

Linear Models, Second Edition is a textbook for upper-level undergraduate and beginning graduate-level courses on linear models, and a reference for statisticians, engineers, and scientists who use multiple regression or analysis of variance in their work.
Preface xvii
Preface to First Edition xxi
About the Companion Website xxv
Introduction and Overview 1

1. Generalized Inverse Matrices 7
  1. Introduction, 7
    a. Definition and Existence of a Generalized Inverse, 8
    b. An Algorithm for Obtaining a Generalized Inverse, 11
    c. Obtaining Generalized Inverses Using the Singular Value Decomposition (SVD), 14
  2. Solving Linear Equations, 17
    a. Consistent Equations, 17
    b. Obtaining Solutions, 18
    c. Properties of Solutions, 20
  3. The Penrose Inverse, 26
  4. Other Definitions, 30
  5. Symmetric Matrices, 32
    a. Properties of a Generalized Inverse, 32
    b. Two More Generalized Inverses of X'X, 35
  6. Arbitrariness in a Generalized Inverse, 37
  7. Other Results, 42
  8. Exercises, 44

2. Distributions and Quadratic Forms 49
  1. Introduction, 49
  2. Symmetric Matrices, 52
  3. Positive Definiteness, 53
  4. Distributions, 58
    a. Multivariate Density Functions, 58
    b. Moments, 59
    c. Linear Transformations, 60
    d. Moment and Cumulative Generating Functions, 62
    e. Univariate Normal, 64
    f. Multivariate Normal, 64
    g. Central χ2, F, and t, 69
    h. Non-central χ2, 71
    i. Non-central F, 73
    j. The Non-central t Distribution, 73
  5. Distribution of Quadratic Forms, 74
    a. Cumulants, 75
    b. Distributions, 78
    c. Independence, 80
  6. Bilinear Forms, 87
  7. Exercises, 89

3. Regression for the Full-Rank Model 95
  1. Introduction, 95
    a. The Model, 95
    b. Observations, 97
    c. Estimation, 98
    d. The General Case of k x Variables, 100
    e. Intercept and No-Intercept Models, 104
  2. Deviations From Means, 105
  3. Some Methods of Estimation, 109
    a. Ordinary Least Squares, 109
    b. Generalized Least Squares, 109
    c. Maximum Likelihood, 110
    d. The Best Linear Unbiased Estimator (b.l.u.e.) (Gauss–Markov Theorem), 110
    e. Least-squares Theory When the Parameters are Random Variables, 112
  4. Consequences of Estimation, 115
    a. Unbiasedness, 115
    b. Variances, 115
    c. Estimating E(y), 116
    d. Residual Error Sum of Squares, 119
    e. Estimating the Residual Error Variance, 120
    f. Partitioning the Total Sum of Squares, 121
    g. Multiple Correlation, 122
  5. Distributional Properties, 126
    a. The Vector of Observations y is Normal, 126
    b. The Least-squares Estimator b is Normal, 127
    c. The Least-squares Estimator b and the Estimator of the Variance σ̂2 are Independent, 127
    d. The Distribution of SSE/σ2 is a χ2 Distribution, 128
    e. Non-central χ2's, 128
    f. F-distributions, 129
    g. Analyses of Variance, 129
    h. Tests of Hypotheses, 131
    i. Confidence Intervals, 133
    j. More Examples, 136
    k. Pure Error, 139
  6. The General Linear Hypothesis, 141
    a. Testing Linear Hypotheses, 141
    b. Estimation Under the Null Hypothesis, 143
    c. Four Common Hypotheses, 145
    d. Reduced Models, 148
    e. Stochastic Constraints, 158
    f. Exact Quadratic Constraints (Ridge Regression), 160
  7. Related Topics, 162
    a. The Likelihood Ratio Test, 163
    b. Type I and Type II Errors, 164
    c. The Power of a Test, 165
    d. Estimating Residuals, 166
  8. Summary of Regression Calculations, 168
  9. Exercises, 169

4. Introducing Linear Models: Regression on Dummy Variables 175
  1. Regression on Allocated Codes, 175
    a. Allocated Codes, 175
    b. Difficulties and Criticism, 176
    c. Grouped Variables, 177
    d. Unbalanced Data, 178
  2. Regression on Dummy (0, 1) Variables, 180
    a. Factors and Levels, 180
    b. The Regression, 181
  3. Describing Linear Models, 184
    a. A One-Way Classification, 184
    b. A Two-Way Classification, 186
    c. A Three-Way Classification, 188
    d. Main Effects and Interactions, 188
    e. Nested and Crossed Classifications, 194
  4. The Normal Equations, 198
  5. Exercises, 201

5. Models Not of Full Rank 205
  1. The Normal Equations, 205
    a. The Normal Equations, 206
    b. Solutions to the Normal Equations, 209
  2. Consequences of a Solution, 210
    a. Expected Value of b°, 210
    b. Variance-Covariance Matrices of b°, 211
    c. Estimating E(y), 212
    d. Residual Error Sum of Squares, 212
    e. Estimating the Residual Error Variance, 213
    f. Partitioning the Total Sum of Squares, 214
    g. Coefficient of Determination, 215
  3. Distributional Properties, 217
    a. The Observation Vector y is Normal, 217
    b. The Solution to the Normal Equations b° is Normally Distributed, 217
    c. The Solution to the Normal Equations b° and the Estimator of the Residual Error Variance σ̂2 are Independent, 217
    d. The Error Sum of Squares Divided by the Population Variance SSE/σ2 is Chi-square χ2, 217
    e. Non-central χ2's, 218
    f. Non-central F-distributions, 219
    g. Analyses of Variance, 220
    h. Tests of Hypotheses, 221
  4. Estimable Functions, 223
    a. Definition, 223
    b. Properties of Estimable Functions, 224
    c. Confidence Intervals, 227
    d. What Functions Are Estimable?, 228
    e. Linearly Independent Estimable Functions, 229
    f. Testing for Estimability, 229
    g. General Expressions, 233
  5. The General Linear Hypothesis, 236
    a. Testable Hypotheses, 236
    b. Testing Testable Hypotheses, 237
    c. The Hypothesis K'b = 0, 240
    d. Non-testable Hypotheses, 241
    e. Checking for Testability, 243
    f. Some Examples of Testing Hypotheses, 245
    g. Independent and Orthogonal Contrasts, 248
    h. Examples of Orthogonal Contrasts, 250
  6. Restricted Models, 255
    a. Restrictions Involving Estimable Functions, 257
    b. Restrictions Involving Non-estimable Functions, 259
    c. Stochastic Constraints, 260
  7. The "Usual Constraints", 264
    a. Limitations on Constraints, 266
    b. Constraints of the Form b°i = 0, 266
    c. Procedure for Deriving b° and G, 269
    d. Restrictions on the Model, 270
    e. Illustrative Examples of Results in Subsections a–d, 272
  8. Generalizations, 276
    a. Non-singular V, 277
    b. Singular V, 277
  9. An Example, 280
  10. Summary, 283
  11. Exercises, 283

6. Two Elementary Models 287
  1. Summary of the General Results, 288
  2. The One-Way Classification, 291
    a. The Model, 291
    b. The Normal Equations, 294
    c. Solving the Normal Equations, 294
    d. Analysis of Variance, 296
    e. Estimable Functions, 299
    f. Tests of Linear Hypotheses, 304
    g. Independent and Orthogonal Contrasts, 308
    h. Models that Include Restrictions, 310
    i. Balanced Data, 312
  3. Reductions in Sums of Squares, 313
    a. The R( ) Notation, 313
    b. Analyses of Variance, 314
    c. Tests of Hypotheses, 315
  4. Multiple Comparisons, 316
  5. Robustness of Analysis of Variance to Assumptions, 321
    a. Non-normality of the Error, 321
    b. Unequal Variances, 325
    c. Non-independent Observations, 330
  6. The Two-Way Nested Classification, 331
    a. Model, 332
    b. Normal Equations, 332
    c. Solving the Normal Equations, 333
    d. Analysis of Variance, 334
    e. Estimable Functions, 336
    f. Tests of Hypotheses, 337
    g. Models that Include Restrictions, 339
    h. Balanced Data, 339
  7. Normal Equations for Design Models, 340
  8. A Few Computer Outputs, 341
  9. Exercises, 343

7. The Two-Way Crossed Classification 347
  1. The Two-Way Classification Without Interaction, 347
    a. Model, 348
    b. Normal Equations, 349
    c. Solving the Normal Equations, 350
    d. Absorbing Equations, 352
    e. Analyses of Variance, 356
    f. Estimable Functions, 368
    g. Tests of Hypotheses, 370
    h. Models that Include Restrictions, 373
    i. Balanced Data, 374
  2. The Two-Way Classification with Interaction, 380
    a. Model, 381
    b. Normal Equations, 383
    c. Solving the Normal Equations, 384
    d. Analysis of Variance, 385
      (i) Basic Calculations, 385
      (ii) Fitting Different Models, 389
      (iii) Computational Alternatives, 395
      (iv) Interpretation of Results, 397
      (v) Fitting Main Effects Before Interaction, 397
    e. Estimable Functions, 398
    f. Tests of Hypotheses, 403
    g. Models that Include Restrictions, 413
    h. All Cells Filled, 414
    i. Balanced Data, 415
  3. Interpretation of Hypotheses, 420
  4. Connectedness, 422
  5. The μij Models, 427
  6. Exercises, 429

8. Some Other Analyses 437
  1. Large-Scale Survey-Type Data, 437
    a. Example, 438
    b. Fitting a Linear Model, 438
    c. Main-Effects-Only Models, 440
    d. Stepwise Fitting, 442
    e. Connectedness, 442
    f. The μij-models, 443
  2. Covariance, 445
    a. A General Formulation, 446
    b. The One-Way Classification, 454
    c. The Two-Way Classification (With Interaction), 470
  3. Data Having All Cells Filled, 474
    a. Estimating Missing Observations, 475
    b. Setting Data Aside, 478
    c. Analysis of Means, 479
    d. Separate Analyses, 487
  4. Exercises, 487

9. Introduction to Variance Components 493
  1. Fixed and Random Models, 493
    a. A Fixed-Effects Model, 494
    b. A Random-Effects Model, 494
    c. Other Examples, 496
  2. Mixed Models, 497
  3. Fixed or Random, 499
  4. Finite Populations, 500
  5. Introduction to Estimation, 500
    a. Variance Matrix Structures, 501
    b. Analyses of Variance, 502
    c. Estimation, 504
  6. Rules for Balanced Data, 507
    a. Establishing Analysis of Variance Tables, 507
    b. Calculating Sums of Squares, 510
    c. Expected Values of Mean Squares, E(MS), 510
  7. The Two-Way Classification, 512
    a. The Fixed-Effects Model, 515
    b. Random-Effects Model, 518
    c. The Mixed Model, 521
  8. Estimating Variance Components from Balanced Data, 526
    a. Unbiasedness and Minimum Variance, 527
    b. Negative Estimates, 528
  9. Normality Assumptions, 530
    a. Distribution of Mean Squares, 530
    b. Distribution of Estimators, 532
    c. Tests of Hypotheses, 533
    d. Confidence Intervals, 536
    e. Probability of Negative Estimates, 538
    f. Sampling Variances of Estimators, 539
  10. Other Ways to Estimate Variance Components, 542
    a. Maximum Likelihood Methods, 542
    b. The MINQUE, 545
    c. Bayes Estimation, 554
  11. Exercises, 557

10. Methods of Estimating Variance Components from Unbalanced Data 563
  1. Expectations of Quadratic Forms, 563
    a. Fixed-Effects Models, 564
    b. Mixed Models, 565
    c. Random-Effects Models, 566
    d. Applications, 566
  2. Analysis of Variance Method (Henderson's Method 1), 567
    a. Model and Notation, 567
    b. Analogous Sums of Squares, 568
    c. Expectations, 569
    d. Sampling Variances of Estimators, 577
  3. Adjusting for Bias in Mixed Models, 588
    a. General Method, 588
    b. A Simplification, 588
    c. A Special Case: Henderson's Method 2, 589
  4. Fitting Constants Method (Henderson's Method 3), 590
    a. General Properties, 590
    b. The Two-Way Classification, 592
    c. Too Many Equations, 595
    d. Mixed Models, 597
    e. Sampling Variances of Estimators, 597
  5. Analysis of Means Methods, 598
  6. Symmetric Sums Methods, 599
  7. Infinitely Many Quadratics, 602
  8. Maximum Likelihood for Mixed Models, 605
    a. Estimating Fixed Effects, 606
    b. Fixed Effects and Variance Components, 611
    c. Large Sample Variances, 613
  9. Mixed Models Having One Random Factor, 614
  10. Best Quadratic Unbiased Estimation, 620
    a. The Method of Townsend and Searle (1971) for a Zero Mean, 620
    b. The Method of Swallow and Searle (1978) for a Non-Zero Mean, 622
  11. Shrinkage Estimation of Regression Parameters and Variance Components, 626
    a. Shrinkage Estimators, 626
    b. The James–Stein Estimator, 627
    c. Stein's Estimator of the Variance, 627
    d. A Shrinkage Estimator of Variance Components, 628
  12. Exercises, 630

References 633
Author Index 645
Subject Index 649
The late SHAYLE R. SEARLE, PhD, was Professor Emeritus of Biometry at Cornell University. He was the author of the first edition of Linear Models, Linear Models for Unbalanced Data, and Generalized, Linear, and Mixed Models (with Charles E. McCulloch), all from Wiley. The first edition of Linear Models appears in the Wiley Classics Library.

MARVIN H. J. GRUBER, PhD, is Professor Emeritus at the Rochester Institute of Technology, School of Mathematical Sciences. Dr. Gruber has written a number of papers and has given numerous presentations at professional meetings during his tenure as a professor at RIT. His fields of interest include regression estimators and the improvement of their efficiency using shrinkage estimators. He has written and published two books on this topic. Another of his books, Matrix Algebra for Linear Models, also published by Wiley, provides good preparation for studying Linear Models. He is a member of the American Mathematical Society, the Institute of Mathematical Statistics, and the American Statistical Association.

You might also be interested in these products:

Statistics for Microarrays
by: Ernst Wit, John McClure
PDF ebook
90,99 €