Fundamental Statistical Inference

A Computational Approach
Wiley Series in Probability and Statistics, Volume 216, 1st edition

by: Marc S. Paolella

100,99 €

Publisher: Wiley
Format: PDF
Published: 19 June 2018
ISBN/EAN: 9781119417873
Language: English
Number of pages: 584

DRM-protected eBook; to read it you need, e.g., Adobe Digital Editions and an Adobe ID.

Description

<p><b>A hands-on approach to statistical inference that addresses the latest developments in this ever-growing field</b></p> <p>This clear and accessible book for beginning graduate students offers a practical and detailed approach to the field of statistical inference, providing complete derivations of results, discussions, and MATLAB programs for computation. It emphasizes the relevance of the material and intuition, with discussions oriented towards modern statistical inference. In addition to classic subjects associated with mathematical statistics, topics include an intuitive presentation of the (single and double) bootstrap for confidence interval calculations, shrinkage estimation, tail (maximal moment) estimation, and a variety of methods of point estimation besides maximum likelihood, including the use of characteristic functions and indirect inference. Practical examples of all methods are given. Estimation issues associated with the discrete mixture of normal distributions, and their solutions, are developed in detail. Much emphasis throughout is on non-Gaussian distributions, including details on working with the stable Paretian distribution and fast calculation of the noncentral Student's <i>t</i>. An entire chapter is dedicated to optimization, including the development of Hessian-based methods, as well as heuristic/genetic algorithms that do not require continuity, with MATLAB codes provided.</p> <p>The book includes both theory and nontechnical discussions, along with substantial references to the literature, with an emphasis on alternative, more modern approaches. The recent literature on the misuse of hypothesis testing and <i>p</i>-values for model selection is discussed, and emphasis is given to alternative model selection methods, though hypothesis testing of distributional assumptions is covered in detail, notably for the normal distribution. 
</p> <p>Presented in three parts—Essential Concepts in Statistics; Further Fundamental Concepts in Statistics; and Additional Topics—Fundamental Statistical Inference: A Computational Approach offers comprehensive chapters on: Introducing Point and Interval Estimation; Goodness of Fit and Hypothesis Testing; Likelihood; Numerical Optimization; Methods of Point Estimation; Q-Q Plots and Distribution Testing; Unbiased Point Estimation and Bias Reduction; Analytic Interval Estimation; Inference in a Heavy-Tailed Context; The Method of Indirect Inference; and, as an appendix, A Review of Fundamental Concepts in Probability Theory, the latter to keep the book self-contained, and giving material on some advanced subjects such as saddlepoint approximations, expected shortfall in finance, calculation with the stable Paretian distribution, and convergence theorems and proofs. </p>
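<p>The nonparametric bootstrap for confidence intervals highlighted above (Chapter 1) can be sketched briefly. The book's programs are in MATLAB; the following Python version is an illustrative sketch, not taken from the book, and the function name and sample data are hypothetical.</p>

```python
# Illustrative sketch (not from the book): a nonparametric percentile
# bootstrap confidence interval, as discussed in Chapter 1.
import random
import statistics

def percentile_bootstrap_ci(data, stat=statistics.mean, B=2000, alpha=0.05, seed=1):
    """Percentile bootstrap (1 - alpha) confidence interval for stat(data)."""
    rng = random.Random(seed)
    n = len(data)
    # Resample the data with replacement B times and collect the statistic.
    reps = sorted(stat([rng.choice(data) for _ in range(n)]) for _ in range(B))
    # Read off the empirical alpha/2 and 1 - alpha/2 quantiles.
    return reps[int(B * alpha / 2)], reps[int(B * (1 - alpha / 2)) - 1]

# Hypothetical small sample; the interval brackets the sample mean.
sample = [2.1, 3.4, 1.9, 2.8, 3.0, 2.5, 4.1, 2.2, 3.6, 2.9]
low, high = percentile_bootstrap_ci(sample)
```

<p>The double bootstrap treated in the book nests a second resampling loop inside each replication to calibrate the interval's coverage; the sketch above shows only the single-level percentile method.</p>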
<p>Preface xi</p> <p><b>PART I ESSENTIAL CONCEPTS IN STATISTICS</b></p> <p><b>1 Introducing Point and Interval Estimation 3</b></p> <p>1.1 Point Estimation / 4</p> <p>1.1.1 Bernoulli Model / 4</p> <p>1.1.2 Geometric Model / 6</p> <p>1.1.3 Some Remarks on Bias and Consistency / 11</p> <p>1.2 Interval Estimation via Simulation / 12</p> <p>1.3 Interval Estimation via the Bootstrap / 18</p> <p>1.3.1 Computation and Comparison with Parametric Bootstrap / 18</p> <p>1.3.2 Application to Bernoulli Model and Modification / 20</p> <p>1.3.3 Double Bootstrap / 24</p> <p>1.3.4 Double Bootstrap with Analytic Inner Loop / 26</p> <p>1.4 Bootstrap Confidence Intervals in the Geometric Model / 31</p> <p>1.5 Problems / 35</p> <p><b>2 Goodness of Fit and Hypothesis Testing 37</b></p> <p>2.1 Empirical Cumulative Distribution Function / 38</p> <p>2.1.1 The Glivenko–Cantelli Theorem / 38</p> <p>2.1.2 Proofs of the Glivenko–Cantelli Theorem / 41</p> <p>2.1.3 Example with Continuous Data and Approximate Confidence Intervals / 45</p> <p>2.1.4 Example with Discrete Data and Approximate Confidence Intervals / 49</p> <p>2.2 Comparing Parametric and Nonparametric Methods / 52</p> <p>2.3 Kolmogorov–Smirnov Distance and Hypothesis Testing / 57</p> <p>2.3.1 The Kolmogorov–Smirnov and Anderson–Darling Statistics / 57</p> <p>2.3.2 Significance and Hypothesis Testing / 59</p> <p>2.3.3 Small-Sample Correction / 63</p> <p>2.4 Testing Normality with KD and AD / 65</p> <p>2.5 Testing Normality with <i>W<sup>2</sup></i> and <i>U<sup>2</sup></i> / 68</p> <p>2.6 Testing the Stable Paretian Distributional Assumption: First Attempt / 69</p> <p>2.7 Two-Sample Kolmogorov Test / 73</p> <p>2.8 More on (Moron?) 
Hypothesis Testing / 74</p> <p>2.8.1 Explanation / 75</p> <p>2.8.2 Misuse of Hypothesis Testing / 77</p> <p>2.8.3 Use and Misuse of <i>p</i>-Values / 79</p> <p>2.9 Problems / 82</p> <p><b>3 Likelihood 85</b></p> <p>3.1 Introduction / 85</p> <p>3.1.1 Scalar Parameter Case / 87</p> <p>3.1.2 Vector Parameter Case / 92</p> <p>3.1.3 Robustness and the MCD Estimator / 100</p> <p>3.1.4 Asymptotic Properties of the Maximum Likelihood Estimator / 102</p> <p>3.2 Cramér–Rao Lower Bound / 107</p> <p>3.2.1 Univariate Case / 108</p> <p>3.2.2 Multivariate Case / 111</p> <p>3.3 Model Selection / 114</p> <p>3.3.1 Model Misspecification / 114</p> <p>3.3.2 The Likelihood Ratio Statistic / 117</p> <p>3.3.3 Use of Information Criteria / 119</p> <p>3.4 Problems / 120</p> <p><b>4 Numerical Optimization 123</b></p> <p>4.1 Root Finding / 123</p> <p>4.1.1 One Parameter / 124</p> <p>4.1.2 Several Parameters / 131</p> <p>4.2 Approximating the Distribution of the Maximum Likelihood Estimator / 135</p> <p>4.3 General Numerical Likelihood Maximization / 136</p> <p>4.3.1 Newton–Raphson and Quasi-Newton Methods / 137</p> <p>4.3.2 Imposing Parameter Restrictions / 140</p> <p>4.4 Evolutionary Algorithms / 145</p> <p>4.4.1 Differential Evolution / 146</p> <p>4.4.2 Covariance Matrix Adaptation Evolutionary Strategy / 149</p> <p>4.5 Problems / 155</p> <p><b>5 Methods of Point Estimation 157</b></p> <p>5.1 Univariate Mixed Normal Distribution / 157</p> <p>5.1.1 Introduction / 157</p> <p>5.1.2 Simulation of Univariate Mixtures / 160</p> <p>5.1.3 Direct Likelihood Maximization / 161</p> <p>5.1.4 Use of the EM Algorithm / 169</p> <p>5.1.5 Shrinkage-Type Estimation / 174</p> <p>5.1.6 Quasi-Bayesian Estimation / 176</p> <p>5.1.7 Confidence Intervals / 178</p> <p>5.2 Alternative Point Estimation Methodologies / 184</p> <p>5.2.1 Method of Moments Estimator / 185</p> <p>5.2.2 Use of Goodness-of-Fit Measures / 190</p> <p>5.2.3 Quantile Least Squares / 191</p> <p>5.2.4 Pearson Minimum Chi-Square / 193</p> <p>5.2.5 
Empirical Moment Generating Function Estimator / 195</p> <p>5.2.6 Empirical Characteristic Function Estimator / 198</p> <p>5.3 Comparison of Methods / 199</p> <p>5.4 A Primer on Shrinkage Estimation / 200</p> <p>5.5 Problems / 202</p> <p><b>PART II FURTHER FUNDAMENTAL CONCEPTS IN STATISTICS</b></p> <p><b>6 Q-Q Plots and Distribution Testing 209</b></p> <p>6.1 P-P Plots and Q-Q Plots / 209</p> <p>6.2 Null Bands / 211</p> <p>6.2.1 Definition and Motivation / 211</p> <p>6.2.2 Pointwise Null Bands via Simulation / 212</p> <p>6.2.3 Asymptotic Approximation of Pointwise Null Bands / 213</p> <p>6.2.4 Mapping Pointwise and Simultaneous Significance Levels / 215</p> <p>6.3 Q-Q Test / 217</p> <p>6.4 Further P-P and Q-Q Type Plots / 219</p> <p>6.4.1 (Horizontal) Stabilized P-P Plots / 219</p> <p>6.4.2 Modified S-P Plots / 220</p> <p>6.4.3 MSP Test for Normality / 224</p> <p>6.4.4 Modified Percentile (Fowlkes-MP) Plots / 228</p> <p>6.5 Further Tests for Composite Normality / 231</p> <p>6.5.1 Motivation / 232</p> <p>6.5.2 Jarque–Bera Test / 234</p> <p>6.5.3 Three Powerful (and More Recent) Normality Tests / 237</p> <p>6.5.4 Testing Goodness of Fit via Binning: Pearson’s <i>X<sub> P</sub><sup>2</sup></i> Test / 240</p> <p>6.6 Combining Tests and Power Envelopes / 247</p> <p>6.6.1 Combining Tests / 248</p> <p>6.6.2 Power Comparisons for Testing Composite Normality / 252</p> <p>6.6.3 Most Powerful Tests and Power Envelopes / 252</p> <p>6.7 Details of a Failed Attempt / 255</p> <p>6.8 Problems / 260</p> <p><b>7 Unbiased Point Estimation and Bias Reduction 269</b></p> <p>7.1 Sufficiency / 269</p> <p>7.1.1 Introduction / 269</p> <p>7.1.2 Factorization / 272</p> <p>7.1.3 Minimal Sufficiency / 276</p> <p>7.1.4 The Rao–Blackwell Theorem / 283</p> <p>7.2 Completeness and the Uniformly Minimum Variance Unbiased Estimator / 286</p> <p>7.3 An Example with i.i.d. 
Geometric Data / 289</p> <p>7.4 Methods of Bias Reduction / 293</p> <p>7.4.1 The Bias-Function Approach / 293</p> <p>7.4.2 Median-Unbiased Estimation / 296</p> <p>7.4.3 Mode-Adjusted Estimator / 297</p> <p>7.4.4 The Jackknife / 302</p> <p>7.5 Problems / 305</p> <p><b>8 Analytic Interval Estimation 313</b></p> <p>8.1 Definitions / 313</p> <p>8.2 Pivotal Method / 315</p> <p>8.2.1 Exact Pivots / 315</p> <p>8.2.2 Asymptotic Pivots / 318</p> <p>8.3 Intervals Associated with Normal Samples / 319</p> <p>8.3.1 Single Sample / 319</p> <p>8.3.2 Paired Sample / 320</p> <p>8.3.3 Two Independent Samples / 322</p> <p>8.3.4 Welch’s Method for 𝜇<sub>1</sub> − 𝜇<sub>2</sub> when 𝜎<sub>1</sub><sup>2</sup> ≠ 𝜎<sub>2</sub><sup>2 </sup>/ 323</p> <p>8.3.5 Satterthwaite’s Approximation / 324</p> <p>8.4 Cumulative Distribution Function Inversion / 326</p> <p>8.4.1 Continuous Case / 326</p> <p>8.4.2 Discrete Case / 330</p> <p>8.5 Application of the Nonparametric Bootstrap / 334</p> <p>8.6 Problems / 337</p> <p><b>PART III ADDITIONAL TOPICS</b></p> <p><b>9 Inference in a Heavy-Tailed Context 341</b></p> <p>9.1 Estimating the Maximally Existing Moment / 342</p> <p>9.2 A Primer on Tail Estimation / 346</p> <p>9.2.1 Introduction / 346</p> <p>9.2.2 The Hill Estimator / 346</p> <p>9.2.3 Use with Stable Paretian Data / 349</p> <p>9.3 Noncentral Student’s <i>t</i> Estimation / 351</p> <p>9.3.1 Introduction / 351</p> <p>9.3.2 Direct Density Approximation / 352</p> <p>9.3.3 Quantile-Based Table Lookup Estimation / 353</p> <p>9.3.4 Comparison of NCT Estimators / 354</p> <p>9.4 Asymmetric Stable Paretian Estimation / 358</p> <p>9.4.1 Introduction / 358</p> <p>9.4.2 The Hint Estimator / 359</p> <p>9.4.3 Maximum Likelihood Estimation / 360</p> <p>9.4.4 The McCulloch Estimator / 361</p> <p>9.4.5 The Empirical Characteristic Function Estimator / 364</p> <p>9.4.6 Testing for Symmetry in the Stable Model / 366</p> <p>9.5 Testing the Stable Paretian Distribution / 368</p> <p>9.5.1 Test Based on the Empirical 
Characteristic Function / 368</p> <p>9.5.2 Summability Test and Modification / 371</p> <p>9.5.3 ALHADI: The 𝛼-Hat Discrepancy Test / 375</p> <p>9.5.4 Joint Test Procedure / 383</p> <p>9.5.5 Likelihood Ratio Tests / 384</p> <p>9.5.6 Size and Power of the Symmetric Stable Tests / 385</p> <p>9.5.7 Extension to Testing the Asymmetric Stable Paretian Case / 395</p> <p><b>10 The Method of Indirect Inference 401</b></p> <p>10.1 Introduction / 401</p> <p>10.2 Application to the Laplace Distribution / 403</p> <p>10.3 Application to Randomized Response / 403</p> <p>10.3.1 Introduction / 403</p> <p>10.3.2 Estimation via Indirect Inference / 406</p> <p>10.4 Application to the Stable Paretian Distribution / 409</p> <p>10.5 Problems / 416</p> <p><b>A Review of Fundamental Concepts in Probability Theory 419</b></p> <p>A.1 Combinatorics and Special Functions / 420</p> <p>A.2 Basic Probability and Conditioning / 423</p> <p>A.3 Univariate Random Variables / 424</p> <p>A.4 Multivariate Random Variables / 427</p> <p>A.5 Continuous Univariate Random Variables / 430</p> <p>A.6 Conditional Random Variables / 432</p> <p>A.7 Generating Functions and Inversion Formulas / 434</p> <p>A.8 Value at Risk and Expected Shortfall / 437</p> <p>A.9 Jacobian Transformations / 451</p> <p>A.10 Sums and Other Functions / 453</p> <p>A.11 Saddlepoint Approximations / 456</p> <p>A.12 Order Statistics / 460</p> <p>A.13 The Multivariate Normal Distribution / 462</p> <p>A.14 Noncentral Distributions / 465</p> <p>A.15 Inequalities and Convergence / 467</p> <p>A.15.1 Inequalities for Random Variables / 467</p> <p>A.15.2 Convergence of Sequences of Sets / 469</p> <p>A.15.3 Convergence of Sequences of Random Variables / 473</p> <p>A.16 The Stable Paretian Distribution / 483</p> <p>A.17 Problems / 492</p> <p>A.18 Solutions / 509</p> <p>References 537</p> <p>Index 561</p>
<p><b>Marc S. Paolella, PhD,</b> is a Professor at the Department of Banking and Finance, University of Zurich. He is also the Editor of <i>Econometrics</i> and an Associate Editor of the <i>Journal of the Royal Statistical Society, Series A</i>.</p>
