Details

Advanced Statistics with Applications in R


Wiley Series in Probability and Statistics, Volume 392, 1st edition

by: Eugene Demidenko

103,99 €

Publisher: Wiley
Format: EPUB
Published: 26 Nov 2019
ISBN/EAN: 9781118594612
Language: English
Number of pages: 880

DRM-protected eBook; to read it you need, for example, Adobe Digital Editions and an Adobe ID.

Description

Advanced Statistics with Applications in R fills the gap between several excellent theoretical statistics textbooks and the many applied statistics books in which teaching reduces to using existing packages. This book looks at what is under the hood. Many problems in statistics, including the recent crisis around the p-value, are caused by misunderstandings of statistical concepts that stem from the weak theoretical background of practitioners and applied statisticians. This book is the product of forty years of experience in teaching probability and statistics and applying them to real-life problems.

There are more than 442 examples in the book: essentially every probability or statistics concept is illustrated with an example accompanied by R code. Many of the examples, such as Who said π?, What team is better?, The fall of the Roman Empire, the James Bond chase problem, Black Friday shopping, and Free fall equation: Aristotle or Galilei, are intriguing. These examples cover biostatistics, finance, physics and engineering, text and image analysis, epidemiology, spatial statistics, sociology, and more.

Advanced Statistics with Applications in R teaches students to use theory for solving real-life problems through computation: there are about 500 R programs and 100 datasets. The data can be freely downloaded from the author's website, dartmouth.edu/~eugened.

This book is suitable as a text for senior undergraduate students majoring in statistics or data science and for graduate students. Researchers who apply statistics on a regular basis will find explanations of many fundamental concepts from a theoretical perspective, illustrated by concrete real-world applications.
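To give a flavor of this example-driven, compute-it-yourself style, here is a minimal R sketch in the spirit of the book's sections on the law of large numbers and the central limit theorem. It is illustrative only and is not taken from the book:

  # Illustrative sketch (not from the book): the central limit theorem by simulation.
  # Standardized sample means of exponential data approach the standard normal.
  set.seed(2019)                                   # make the simulation reproducible
  n <- 50                                          # size of each sample
  nsim <- 10000                                    # number of simulated samples
  x <- matrix(rexp(n * nsim, rate = 1), nrow = n)  # each column is one sample
  z <- (colMeans(x) - 1) / (1 / sqrt(n))           # standardize: mu = sigma = 1
  hist(z, breaks = 50, freq = FALSE, main = "CLT for exponential sample means")
  curve(dnorm(x), add = TRUE, lwd = 2)             # overlay the limiting N(0,1) density

Note the vectorized computation (a matrix plus colMeans rather than an explicit loop), which mirrors the R style covered in the book's opening chapter.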
Table of Contents

Why I Wrote This Book

1 Discrete random variables 1
1.1 Motivating example 1
1.2 Bernoulli random variable 2
1.3 General discrete random variable 4
1.4 Mean and variance 6
1.4.1 Mechanical interpretation of the mean 7
1.4.2 Variance 12
1.5 R basics 15
1.5.1 Scripts/functions 16
1.5.2 Text editing in R 17
1.5.3 Saving your R code 18
1.5.4 for loop 18
1.5.5 Vectorized computations 19
1.5.6 Graphics 23
1.5.7 Coding and help in R 25
1.6 Binomial distribution 26
1.7 Poisson distribution 32
1.8 Random number generation using sample 38
1.8.1 Generation of a discrete random variable 38
1.8.2 Random Sudoku 39

2 Continuous random variables 43
2.1 Distribution and density functions 43
2.1.1 Cumulative distribution function 43
2.1.2 Empirical cdf 45
2.1.3 Density function 46
2.2 Mean, variance, and other moments 48
2.2.1 Quantiles, quartiles, and the median 54
2.2.2 The tight confidence range 55
2.3 Uniform distribution 59
2.4 Exponential distribution 63
2.4.1 Laplace or double-exponential distribution 67
2.4.2 R functions 67
2.5 Moment generating function 69
2.5.1 Fourier transform and characteristic function 72
2.6 Gamma distribution 75
2.6.1 Relationship to Poisson distribution 77
2.6.2 Computing the gamma distribution in R 79
2.6.3 The tight confidence range 79
2.7 Normal distribution 82
2.8 Chebyshev's inequality 91
2.9 The law of large numbers 93
2.9.1 Four types of stochastic convergence 94
2.9.2 Integral approximation using simulations 99
2.10 The central limit theorem 104
2.10.1 Why the normal distribution is the most natural symmetric distribution 112
2.10.2 CLT on the relative scale 113
2.11 Lognormal distribution 116
2.11.1 Computation of the tight confidence range 118
2.12 Transformations and the delta method 120
2.12.1 The delta method 124
2.13 Random number generation 126
2.13.1 Cauchy distribution 130
2.14 Beta distribution 132
2.15 Entropy 134
2.16 Benford's law: the distribution of the first digit 138
2.16.1 Distributions that almost obey Benford's law 142
2.17 The Pearson family of distributions 145
2.18 Major univariate continuous distributions 147

3 Multivariate random variables 149
3.1 Joint cdf and density 149
3.1.1 Expectation 154
3.1.2 Bivariate discrete distribution 154
3.2 Independence 156
3.2.1 Convolution 159
3.3 Conditional density 168
3.3.1 Conditional mean and variance 171
3.3.2 Mixture distribution and Bayesian statistics 179
3.3.3 Random sum 182
3.3.4 Cancer tumors grow exponentially 184
3.4 Correlation and linear regression 189
3.5 Bivariate normal distribution 198
3.5.1 Regression as conditional mean 206
3.5.2 Variance decomposition and coefficient of determination 208
3.5.3 Generation of dependent normal observations 209
3.5.4 Copula 214
3.6 Joint density upon transformation 218
3.7 Geometric probability 223
3.7.1 Meeting problem 224
3.7.2 Random objects on the square 225
3.8 Optimal portfolio allocation 230
3.8.1 Stocks do not correlate 231
3.8.2 Correlated stocks 232
3.8.3 Markowitz bullet 233
3.8.4 Probability bullet 234
3.9 Distribution of order statistics 236
3.10 Multidimensional random vectors 239
3.10.1 Multivariate conditional distribution 245
3.10.2 Multivariate MGF 247
3.10.3 Multivariate delta method 248
3.10.4 Multinomial distribution 251

4 Four important distributions in statistics 255
4.1 Multivariate normal distribution 255
4.1.1 Generation of multivariate normal variables 259
4.1.2 Conditional distribution 261
4.1.3 Multivariate CLT 268
4.2 Chi-square distribution 270
4.2.1 Noncentral chi-square distribution 276
4.2.2 Expectations and variances of quadratic forms 277
4.2.3 Kronecker product and covariance matrix 277
4.3 t-distribution 280
4.3.1 Noncentral t-distribution 284
4.4 F-distribution 286

5 Preliminary data analysis and visualization 291
5.1 Comparison of random variables using the cdf 291
5.1.1 ROC curve 294
5.1.2 Survival probability 305
5.2 Histogram 312
5.3 Q-Q plot 315
5.3.1 The q-q confidence bands 319
5.4 Box plot 324
5.5 Kernel density estimation 325
5.5.1 Density movie 331
5.5.2 3D scatterplots 333
5.6 Bivariate normal kernel density 335
5.6.1 Bivariate kernel smoother for images 339
5.6.2 Smoothed scatterplot 341
5.6.3 Spatial statistics for disease mapping 342

6 Parameter estimation 347
6.1 Statistics as inverse probability 349
6.2 Method of moments 350
6.2.1 Generalized method of moments 353
6.3 Method of quantiles 357
6.4 Statistical properties of an estimator 358
6.4.1 Unbiasedness 359
6.4.2 Mean Square Error 365
6.4.3 Multidimensional MSE 371
6.4.4 Consistency of estimators 373
6.5 Linear estimation 378
6.5.1 Estimation of the mean using linear estimator 379
6.5.2 Vector representation 383
6.6 Estimation of variance and correlation coefficient 385
6.6.1 Quadratic estimation of the variance 386
6.6.2 Estimation of the covariance and correlation coefficient 389
6.7 Least squares for simple linear regression 398
6.7.1 Gauss–Markov theorem 402
6.7.2 Statistical properties of the OLS estimator under the normal assumption 404
6.7.3 The lm function and prediction by linear regression 406
6.7.4 Misinterpretation of the coefficient of determination 410
6.8 Sufficient statistics and the exponential family of distributions 415
6.8.1 Uniformly minimum-variance unbiased estimator 419
6.8.2 Exponential family of distributions 422
6.9 Fisher information and the Cramér–Rao bound 433
6.9.1 One parameter 434
6.9.2 Multiple parameters 440
6.10 Maximum likelihood 453
6.10.1 Basic definitions and examples 453
6.10.2 Circular statistics and the von Mises distribution 471
6.10.3 Maximum likelihood, sufficient statistics and the exponential family 475
6.10.4 Asymptotic properties of ML 477
6.10.5 When maximum likelihood breaks down 485
6.10.6 Algorithms for log-likelihood function maximization 498
6.11 Estimating equations and the M-estimator 510
6.11.1 Robust statistics 516

7 Hypothesis testing and confidence intervals 523
7.1 Fundamentals of statistical testing 523
7.1.1 The p-value and its interpretation 525
7.1.2 Ad hoc statistical testing 528
7.2 Simple hypothesis 531
7.3 The power function of the Z-test 536
7.3.1 Type II error and the power function 536
7.3.2 Optimal significance level and the ROC curve 542
7.3.3 One-sided hypothesis 545
7.4 The t-test for the means 549
7.4.1 One-sample t-test 549
7.4.2 Two-sample t-test 552
7.4.3 One-sided t-test 557
7.4.4 Paired versus unpaired t-test 558
7.4.5 Parametric versus nonparametric tests 560
7.5 Variance test 562
7.5.1 Two-sided variance test 562
7.5.2 One-sided variance test 565
7.6 Inverse-cdf test 566
7.6.1 General formulation 567
7.6.2 The F-test for variances 569
7.6.3 Binomial proportion 573
7.6.4 Poisson rate 577
7.7 Testing for correlation coefficient 580
7.8 Confidence interval 583
7.8.1 Unbiased CI and its connection to hypothesis testing 588
7.8.2 Inverse cdf CI 589
7.8.3 CI for the normal variance and SD 591
7.8.4 CI for other major statistical parameters 592
7.8.5 Confidence region 594
7.9 Three asymptotic tests and confidence intervals 597
7.9.1 Pearson chi-square test 605
7.9.2 Handwritten digit recognition 608
7.10 Limitations of classical hypothesis testing and the d-value 612
7.10.1 What the p-value means? 613
7.10.2 Why α = 0.05? 614
7.10.3 The null hypothesis is always rejected with a large enough sample size 616
7.10.4 Parameter-based inference 618
7.10.5 The d-value for individual inference 619

8 Linear model and its extensions 627
8.1 Basic definitions and linear least squares 627
8.1.1 Linear model with the intercept term 632
8.1.2 The vector-space geometry of least squares 633
8.1.3 Coefficient of determination 636
8.2 The Gauss–Markov theorem 639
8.2.1 Estimation of regression variance 641
8.3 Properties of OLS estimators under the normal assumption 643
8.3.1 The sensitivity of statistical inference to violation of the normal assumption 646
8.4 Statistical inference with linear models 650
8.4.1 Confidence interval and region 650
8.4.2 Linear hypothesis testing and the F-test 653
8.4.3 Prediction by linear regression and simultaneous confidence band 661
8.4.4 Testing the null hypothesis and the coefficient of determination 664
8.4.5 Is X fixed or random? 665
8.5 The one-sided p- and d-value for regression coefficients 671
8.5.1 The one-sided p-value for interpretation on the population level 672
8.5.2 The d-value for interpretation on the individual level 673
8.6 Examples and pitfalls 676
8.6.1 Kids drinking and alcohol movie watching 676
8.6.2 My first false discovery 680
8.6.3 Height, foot, and nose regression 681
8.6.4 A geometric interpretation of adding a new predictor 684
8.6.5 Contrast coefficient of determination against spurious regression 687
8.7 Dummy variable approach and ANOVA 696
8.7.1 Dummy variables for categories 696
8.7.2 Unpaired and paired t-test 705
8.7.3 Modeling longitudinal data 708
8.7.4 One-way ANOVA model 712
8.7.5 Two-way ANOVA 720
8.8 Generalized linear model 723
8.8.1 MLE estimation of GLM 727
8.8.2 Logistic and probit regressions for binary outcome 728
8.8.3 Poisson regression 736

9 Nonlinear regression 741
9.1 Definition and motivating examples 741
9.2 Nonlinear least squares 750
9.3 Gauss–Newton algorithm 753
9.4 Statistical properties of the NLS estimator 757
9.4.1 Large sample properties 757
9.4.2 Small sample properties 762
9.4.3 Asymptotic confidence intervals and hypothesis testing 763
9.4.4 Three methods of statistical inference in large sample 768
9.5 The nls function and examples 770
9.5.1 NLS-cdf estimator 782
9.6 Studying small sample properties through simulations 786
9.6.1 Normal distribution approximation 787
9.6.2 Statistical tests 789
9.6.3 Confidence region 791
9.6.4 Confidence intervals 792
9.7 Numerical complications of the nonlinear least squares 794
9.7.1 Criteria for existence 795
9.7.2 Criteria for uniqueness 796
9.8 Optimal design of experiments with nonlinear regression 799
9.8.1 Motivating examples 799
9.8.2 Optimal designs with nonlinear regression 802
9.9 The Michaelis–Menten model 805
9.9.1 The NLS solution 806
9.9.2 The exact solution 807

10 Appendix 811
10.1 Notation 811
10.2 Basics of matrix algebra 811
10.2.1 Preliminaries and matrix inverse 812
10.2.2 Determinant 815
10.2.3 Partition matrices 816
10.3 Eigenvalues and eigenvectors 818
10.3.1 Jordan spectral matrix decomposition 819
10.3.2 SVD: Singular value decomposition of a rectangular matrix 820
10.4 Quadratic forms and positive definite matrices 822
10.4.1 Quadratic forms 822
10.4.2 Positive and nonnegative definite matrices 823
10.5 Vector and matrix calculus 826
10.5.1 Differentiation of a scalar-valued function with respect to a vector 826
10.5.2 Differentiation of a vector-valued function with respect to a vector 827
10.5.3 Kronecker product 828
10.5.4 vec operator 828
10.6 Optimization 829
10.6.1 Convex and concave functions 830
10.6.2 Criteria for unconstrained minimization 831
10.6.3 Gradient algorithms 835
10.6.4 Constrained optimization: Lagrange multiplier technique 838

Bibliography 843
Index 851
PROFESSOR EUGENE DEMIDENKO works at Dartmouth College in the Department of Biomedical Science. He teaches statistics to undergraduate students in the Mathematics Department and to graduate students in Quantitative Biomedical Sciences at the Geisel School of Medicine. He has broad experience in theoretical and applied statistics, including epidemiology and biostatistics, statistical analysis of images, tumor regrowth, ill-posed inverse problems in engineering and technology, and optimal portfolio allocation. His first book with Wiley, Mixed Models: Theory and Applications with R, gained much popularity among researchers and graduate/PhD students. Prof. Demidenko is the author of the controversial paper "The p-Value You Can't Buy," published in 2016 in The American Statistician.
"This book is superior to the current available books on market in many aspects."
—Yi Zhao, Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, and Yizhen Xu, Department of Biostatistics, Brown University

"This text is an excellent book suitable for a wide variety of audiences trying to learn probability, statistics, and R programming language."
—Fenghai Duan, Department of Biostatistics and Center for Statistical Sciences, Brown University School of Public Health, Providence, Rhode Island

These products may also interest you:

Statistics for Microarrays
by: Ernst Wit, John McClure
PDF ebook
90,99 €