Predictive Analytics

Parametric Models for Regression and Classification Using R
Wiley Series in Probability and Statistics, 1st edition

By: Ajit C. Tamhane

Price: €103.99

Publisher: Wiley
Format: PDF
Published: September 24, 2020
ISBN/EAN: 9781118948910
Language: English
Number of pages: 384

DRM-protected eBook; you will need, for example, Adobe Digital Editions and an Adobe ID to read it.

Description

Provides a foundation in classical parametric methods of regression and classification essential for pursuing advanced topics in predictive analytics and statistical learning.

This book covers a broad range of topics in parametric regression and classification, including multiple regression, logistic regression (binary and multinomial), discriminant analysis, Bayesian classification, generalized linear models, and Cox regression for survival data. The book also gives brief introductions to some modern computer-intensive methods such as classification and regression trees (CART), neural networks, and support vector machines.

The book is organized so that it can be used both by advanced undergraduate or master's students with applied interests and by doctoral students who also want to learn the underlying theory. The main body of each chapter is devoted to basic statistical methodology illustrated by real data examples; derivations, proofs, and extensions are relegated to the Technical Notes section at the end of each chapter. Exercises are likewise divided into theoretical and applied. Answers to selected exercises are provided, and a solution manual is available to instructors who adopt the text.

Data sets of moderate to large sizes are used in the examples and exercises. They come from a variety of disciplines, including business (finance, marketing, and sales), economics, education, engineering, and the sciences (biological, health, physical, and social). All data sets are available at the book's web site. The open-source software R is used for all data analyses; R code and output are provided for most examples, and the code is also available at the book's web site.

Predictive Analytics: Parametric Models for Regression and Classification Using R is ideal for a one-semester upper-level undergraduate or beginning graduate course in regression for students in business, economics, finance, marketing, engineering, and computer science. It is also an excellent resource for practitioners in these fields.
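The description above notes that R code accompanies most of the book's examples. As a purely illustrative sketch, not taken from the book, a least-squares analysis of the kind covered in Chapter 2 might look as follows in base R; the built-in mtcars data set is a stand-in assumption, not one of the book's data sets:

# Minimal sketch of simple linear regression by least squares (cf. Chapter 2);
# mtcars is a stand-in data set, not one of the book's examples.
fit <- lm(mpg ~ wt, data = mtcars)          # LS fit of mpg on weight
summary(fit)                                # LS estimates, t-tests, R-squared
confint(fit)                                # confidence intervals for beta_0 and beta_1
predict(fit, newdata = data.frame(wt = 3),  # prediction of a future observation
        interval = "prediction")            # with a 95% prediction interval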
Contents

Preface xiii
Acknowledgments xv
Abbreviations xvii
About the companion website xxi

1 Introduction 1
1.1 Supervised versus unsupervised learning 2
1.2 Parametric versus nonparametric models 3
1.3 Types of data 4
1.4 Overview of parametric predictive analytics 5

2 Simple linear regression and correlation 7
2.1 Fitting a straight line 9
2.1.1 Least squares (LS) method 9
2.1.2 Linearizing transformations 11
2.1.3 Fitted values and residuals 13
2.1.4 Assessing goodness of fit 14
2.2 Statistical inferences for simple linear regression 17
2.2.1 Simple linear regression model 17
2.2.2 Inferences on β0 and β1 18
2.2.3 Analysis of variance for simple linear regression 19
2.2.4 Pure error versus model error 20
2.2.5 Prediction of future observations 21
2.3 Correlation analysis 24
2.3.1 Bivariate normal distribution 26
2.3.2 Inferences on correlation coefficient 27
2.4 Modern extensions 28
2.5 Technical notes 29
2.5.1 Derivation of the LS estimators 29
2.5.2 Sums of squares 30
2.5.3 Distribution of the LS estimators 30
2.5.4 Prediction interval 32
Exercises 32

3 Multiple linear regression: basics 37
3.1 Multiple linear regression model 39
3.1.1 Model in scalar notation 39
3.1.2 Model in matrix notation 40
3.2 Fitting a multiple regression model 41
3.2.1 Least squares (LS) method 41
3.2.2 Interpretation of regression coefficients 45
3.2.3 Fitted values and residuals 45
3.2.4 Measures of goodness of fit 47
3.2.5 Linearizing transformations 48
3.3 Statistical inferences for multiple regression 49
3.3.1 Analysis of variance for multiple regression 49
3.3.2 Inferences on regression coefficients 51
3.3.3 Confidence ellipsoid for the β vector 52
3.3.4 Extra sum of squares method 54
3.3.5 Prediction of future observations 59
3.4 Weighted and generalized least squares 60
3.4.1 Weighted least squares 60
3.4.2 Generalized least squares 62
3.4.3 Statistical inference on GLS estimator 63
3.5 Partial correlation coefficients 63
3.5.1 Test of significance of partial correlation coefficient 65
3.6 Special topics 66
3.6.1 Dummy variables 66
3.6.2 Interactions 69
3.6.3 Standardized regression 74
3.7 Modern extensions 75
3.7.1 Regression trees 76
3.7.2 Neural nets 78
3.8 Technical notes 81
3.8.1 Derivation of the LS estimators 81
3.8.2 Distribution of the LS estimators 81
3.8.3 Gauss–Markov theorem 82
3.8.4 Properties of fitted values and residuals 83
3.8.5 Geometric interpretation of least squares 83
3.8.6 Confidence ellipsoid for β 85
3.8.7 Population partial correlation coefficient 85
Exercises 86

4 Multiple linear regression: model diagnostics 95
4.1 Model assumptions and distribution of residuals 95
4.2 Checking normality 96
4.3 Checking homoscedasticity 98
4.3.1 Variance stabilizing transformations 99
4.3.2 Box–Cox transformation 100
4.4 Detecting outliers 103
4.5 Checking model misspecification 106
4.6 Checking independence 108
4.6.1 Runs test 109
4.6.2 Durbin–Watson test 109
4.7 Checking influential observations 110
4.7.1 Leverage 111
4.7.2 Cook's distance 111
4.8 Checking multicollinearity 114
4.8.1 Multicollinearity: causes and consequences 114
4.8.2 Multicollinearity diagnostics 115
Exercises 119

5 Multiple linear regression: shrinkage and dimension reduction methods 127
5.1 Ridge regression 128
5.1.1 Ridge problem 128
5.1.2 Choice of λ 129
5.2 Lasso regression 132
5.2.1 Lasso problem 132
5.3 Principal components analysis and regression 135
5.3.1 Principal components analysis (PCA) 135
5.3.2 Principal components regression (PCR) 142
5.4 Partial least squares (PLS) 146
5.4.1 PLS1 algorithm 147
5.5 Technical notes 154
5.5.1 Properties of ridge estimator 154
5.5.2 Derivation of principal components 155
Exercises 156

6 Multiple linear regression: variable selection and model building 159
6.1 Best subset selection 160
6.1.1 Model selection criteria 160
6.2 Stepwise regression 165
6.3 Model building 174
6.4 Technical notes 175
6.4.1 Derivation of the Cp statistic 175
Exercises 177

7 Logistic regression and classification 181
7.1 Simple logistic regression 183
7.1.1 Model 183
7.1.2 Parameter estimation 185
7.1.3 Inferences on parameters 189
7.2 Multiple logistic regression 190
7.2.1 Model and inference 190
7.3 Likelihood ratio (LR) test 194
7.3.1 Deviance 195
7.3.2 Akaike information criterion (AIC) 197
7.3.3 Model selection and diagnostics 197
7.4 Binary classification using logistic regression 201
7.4.1 Measures of correct classification 201
7.4.2 Receiver operating characteristic (ROC) curve 204
7.5 Polytomous logistic regression 207
7.5.1 Nominal logistic regression 208
7.5.2 Ordinal logistic regression 212
7.6 Modern extensions 215
7.6.1 Classification trees 215
7.6.2 Support vector machines 218
7.7 Technical notes 222
Exercises 224

8 Discriminant analysis 233
8.1 Linear discriminant analysis based on Mahalanobis distance 234
8.1.1 Mahalanobis distance 234
8.1.2 Bayesian classification 235
8.2 Fisher's linear discriminant function 239
8.2.1 Two groups 239
8.2.2 Multiple groups 241
8.3 Naive Bayes 243
8.4 Technical notes 244
8.4.1 Calculation of pooled sample covariance matrix 244
8.4.2 Derivation of Fisher's linear discriminant functions 245
8.4.3 Bayes rule 247
Exercises 247

9 Generalized linear models 251
9.1 Exponential family and link function 251
9.1.1 Exponential family 251
9.1.2 Link function 254
9.2 Estimation of parameters of GLM 255
9.2.1 Maximum likelihood estimation 255
9.2.2 Iteratively reweighted least squares (IRWLS) algorithm 256
9.3 Deviance and AIC 258
9.4 Poisson regression 263
9.4.1 Poisson regression for rates 266
9.5 Gamma regression 269
9.6 Technical notes 273
9.6.1 Mean and variance of the exponential family of distributions 273
9.6.2 MLE of β and its evaluation using the IRWLS algorithm 274
Exercises 277

10 Survival analysis 281
10.1 Hazard rate and survival distribution 282
10.2 Kaplan–Meier estimator 283
10.3 Logrank test 286
10.4 Cox's proportional hazards model 289
10.4.1 Estimation 290
10.4.2 Examples 291
10.4.3 Time-dependent covariates 295
10.5 Technical notes 300
10.5.1 ML estimation of the Cox proportional hazards model 300
Exercises 301

Appendix A Primer on matrix algebra and multivariate distributions 305
A.1 Review of matrix algebra 305
A.2 Review of multivariate distributions 307
A.3 Multivariate normal distribution 309

Appendix B Primer on maximum likelihood estimation 311
B.1 Maximum likelihood estimation 311
B.2 Large sample inference on MLEs 313
B.3 Newton–Raphson and Fisher scoring algorithms 315
B.4 Technical notes 317

Appendix C Projects 319
C.1 Project 1 321
C.2 Project 2 322
C.3 Project 3 324

Appendix D Statistical tables 327

References 339
Answers to selected exercises 343
Index 355
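Similarly, a hypothetical sketch of binary classification via logistic regression, the subject of Chapter 7, might look like this in base R; again mtcars is a stand-in data set and the 0.5 cutoff is an arbitrary assumption, not the book's example:

# Minimal sketch of binary classification by logistic regression (cf. Chapter 7);
# mtcars and the 0.5 cutoff are stand-in assumptions, not from the book.
fit <- glm(am ~ wt + hp, data = mtcars, family = binomial)  # logit link by default
summary(fit)                                  # Wald tests, deviance, AIC
phat <- predict(fit, type = "response")       # estimated P(am = 1 | wt, hp)
table(predicted = as.integer(phat > 0.5),     # confusion matrix at cutoff 0.5,
      actual = mtcars$am)                     # cf. measures of correct classification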
About the author

Ajit C. Tamhane, PhD, is Professor of Industrial Engineering & Management Sciences with a courtesy appointment in Statistics at Northwestern University. He is a fellow of the American Statistical Association, the Institute of Mathematical Statistics, and the American Association for the Advancement of Science, and an elected member of the International Statistical Institute.

You may also be interested in these products:

Statistics for Microarrays
By: Ernst Wit, John McClure
PDF ebook
€90.99