Bayesian Statistics

An Introduction
4th edition

by: Peter M. Lee

42,99 €

Publisher: Wiley
Format: PDF
Published: 12.06.2012
ISBN/EAN: 9781118359754
Language: English
Number of pages: 496

DRM-protected eBook. To read it you need, for example, Adobe Digital Editions and an Adobe ID.

Description

<p>Bayesian Statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee’s book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques.</p> <p>This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and Reversible Jump Markov Chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as well as how it contrasts with the conventional approach. The theory is built up step by step, and important notions such as sufficiency are brought out of a discussion of the salient features of specific examples.</p> <p><i>This edition:</i></p> <ul> <li>Includes expanded coverage of Gibbs sampling, including more numerical examples and treatments of OpenBUGS, R2WinBUGS and R2OpenBUGS.</li> <li>Presents significant new material on recent techniques such as Bayesian importance sampling, variational Bayes, Approximate Bayesian Computation (ABC) and Reversible Jump Markov Chain Monte Carlo (RJMCMC).</li> <li>Provides extensive examples throughout the book to complement the theory presented.</li> <li>Accompanied by a supporting website featuring new material and solutions.</li> </ul> <p>More and more students are realizing that they need to learn Bayesian statistics to meet their academic and professional goals. This book is best suited for use as a main text in courses on Bayesian statistics for third and fourth year undergraduates and postgraduate students.</p>
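<p>The opening idea — that posterior beliefs come from combining a prior with the likelihood of the data — can be made concrete with a minimal sketch. This example is not taken from the book; the Beta(2, 2) prior and the coin-flip data are invented purely for illustration, using the standard conjugate Beta–Binomial update:</p>

```python
# Illustrative sketch (not from the book): a conjugate Beta-Binomial
# update showing "posterior is prior times likelihood".
# Prior: Beta(a, b) on the success probability; data: k successes in n trials.
a, b = 2.0, 2.0        # assumed prior pseudo-counts, chosen for illustration
k, n = 7, 10           # observed successes out of n Bernoulli trials

# Conjugacy: a Beta(a, b) prior combined with a Binomial likelihood
# yields a Beta(a + k, b + n - k) posterior in closed form.
post_a, post_b = a + k, b + (n - k)

# Posterior mean shifts from the prior mean 0.5 toward the data's 0.7.
posterior_mean = post_a / (post_a + post_b)
print(post_a, post_b, round(posterior_mean, 3))  # 9.0 5.0 0.643
```

<p>The same prior-times-likelihood mechanism, applied to normal, Poisson and other distributions, is what Chapters 2 and 3 of the book develop in detail.</p>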
<p>Preface xix</p> <p>Preface to the First Edition xxi</p> <p><b>1 Preliminaries 1</b></p> <p>1.1 Probability and Bayes’ Theorem 1</p> <p>1.1.1 Notation 1</p> <p>1.1.2 Axioms for probability 2</p> <p>1.1.3 ‘Unconditional’ probability 5</p> <p>1.1.4 Odds 6</p> <p>1.1.5 Independence 7</p> <p>1.1.6 Some simple consequences of the axioms; Bayes’ Theorem 7</p> <p>1.2 Examples on Bayes’ Theorem 9</p> <p>1.2.1 The Biology of Twins 9</p> <p>1.2.2 A political example 10</p> <p>1.2.3 A warning 10</p> <p>1.3 Random variables 12</p> <p>1.3.1 Discrete random variables 12</p> <p>1.3.2 The binomial distribution 13</p> <p>1.3.3 Continuous random variables 14</p> <p>1.3.4 The normal distribution 16</p> <p>1.3.5 Mixed random variables 17</p> <p>1.4 Several random variables 17</p> <p>1.4.1 Two discrete random variables 17</p> <p>1.4.2 Two continuous random variables 18</p> <p>1.4.3 Bayes’ Theorem for random variables 20</p> <p>1.4.4 Example 21</p> <p>1.4.5 One discrete variable and one continuous variable 21</p> <p>1.4.6 Independent random variables 22</p> <p>1.5 Means and variances 23</p> <p>1.5.1 Expectations 23</p> <p>1.5.2 The expectation of a sum and of a product 24</p> <p>1.5.3 Variance, precision and standard deviation 25</p> <p>1.5.4 Examples 25</p> <p>1.5.5 Variance of a sum; covariance and correlation 27</p> <p>1.5.6 Approximations to the mean and variance of a function of a random variable 28</p> <p>1.5.7 Conditional expectations and variances 29</p> <p>1.5.8 Medians and modes 31</p> <p>1.6 Exercises on Chapter 1 31</p> <p><b>2 Bayesian inference for the normal distribution 36</b></p> <p>2.1 Nature of Bayesian inference 36</p> <p>2.1.1 Preliminary remarks 36</p> <p>2.1.2 Post is prior times likelihood 36</p> <p>2.1.3 Likelihood can be multiplied by any constant 38</p> <p>2.1.4 Sequential use of Bayes’ Theorem 38</p> <p>2.1.5 The predictive distribution 39</p> <p>2.1.6 A warning 39</p> <p>2.2 Normal prior and likelihood 40</p> <p>2.2.1 Posterior from a normal prior and 
likelihood 40</p> <p>2.2.2 Example 42</p> <p>2.2.3 Predictive distribution 43</p> <p>2.2.4 The nature of the assumptions made 44</p> <p>2.3 Several normal observations with a normal prior 44</p> <p>2.3.1 Posterior distribution 44</p> <p>2.3.2 Example 46</p> <p>2.3.3 Predictive distribution 47</p> <p>2.3.4 Robustness 47</p> <p>2.4 Dominant likelihoods 48</p> <p>2.4.1 Improper priors 48</p> <p>2.4.2 Approximation of proper priors by improper priors 49</p> <p>2.5 Locally uniform priors 50</p> <p>2.5.1 Bayes’ postulate 50</p> <p>2.5.2 Data translated likelihoods 52</p> <p>2.5.3 Transformation of unknown parameters 52</p> <p>2.6 Highest density regions 54</p> <p>2.6.1 Need for summaries of posterior information 54</p> <p>2.6.2 Relation to classical statistics 55</p> <p>2.7 Normal variance 55</p> <p>2.7.1 A suitable prior for the normal variance 55</p> <p>2.7.2 Reference prior for the normal variance 58</p> <p>2.8 HDRs for the normal variance 59</p> <p>2.8.1 What distribution should we be considering? 59</p> <p>2.8.2 Example 59</p> <p>2.9 The role of sufficiency 60</p> <p>2.9.1 Definition of sufficiency 60</p> <p>2.9.2 Neyman’s factorization theorem 61</p> <p>2.9.3 Sufficiency principle 63</p> <p>2.9.4 Examples 63</p> <p>2.9.5 Order statistics and minimal sufficient statistics 65</p> <p>2.9.6 Examples on minimal sufficiency 66</p> <p>2.10 Conjugate prior distributions 67</p> <p>2.10.1 Definition and difficulties 67</p> <p>2.10.2 Examples 68</p> <p>2.10.3 Mixtures of conjugate densities 69</p> <p>2.10.4 Is your prior really conjugate? 
71</p> <p>2.11 The exponential family 71</p> <p>2.11.1 Definition 71</p> <p>2.11.2 Examples 72</p> <p>2.11.3 Conjugate densities 72</p> <p>2.11.4 Two-parameter exponential family 73</p> <p>2.12 Normal mean and variance both unknown 73</p> <p>2.12.1 Formulation of the problem 73</p> <p>2.12.2 Marginal distribution of the mean 75</p> <p>2.12.3 Example of the posterior density for the mean 76</p> <p>2.12.4 Marginal distribution of the variance 77</p> <p>2.12.5 Example of the posterior density of the variance 77</p> <p>2.12.6 Conditional density of the mean for given variance 77</p> <p>2.13 Conjugate joint prior for the normal distribution 78</p> <p>2.13.1 The form of the conjugate prior 78</p> <p>2.13.2 Derivation of the posterior 80</p> <p>2.13.3 Example 81</p> <p>2.13.4 Concluding remarks 82</p> <p>2.14 Exercises on Chapter 2 82</p> <p><b>3 Some other common distributions 85</b></p> <p>3.1 The binomial distribution 85</p> <p>3.1.1 Conjugate prior 85</p> <p>3.1.2 Odds and log-odds 88</p> <p>3.1.3 Highest density regions 90</p> <p>3.1.4 Example 91</p> <p>3.1.5 Predictive distribution 92</p> <p>3.2 Reference prior for the binomial likelihood 92</p> <p>3.2.1 Bayes’ postulate 92</p> <p>3.2.2 Haldane’s prior 93</p> <p>3.2.3 The arc-sine distribution 94</p> <p>3.2.4 Conclusion 95</p> <p>3.3 Jeffreys’ rule 96</p> <p>3.3.1 Fisher’s information 96</p> <p>3.3.2 The information from several observations 97</p> <p>3.3.3 Jeffreys’ prior 98</p> <p>3.3.4 Examples 98</p> <p>3.3.5 Warning 100</p> <p>3.3.6 Several unknown parameters 100</p> <p>3.3.7 Example 101</p> <p>3.4 The Poisson distribution 102</p> <p>3.4.1 Conjugate prior 102</p> <p>3.4.2 Reference prior 103</p> <p>3.4.3 Example 104</p> <p>3.4.4 Predictive distribution 104</p> <p>3.5 The uniform distribution 106</p> <p>3.5.1 Preliminary definitions 106</p> <p>3.5.2 Uniform distribution with a fixed lower endpoint 107</p> <p>3.5.3 The general uniform distribution 108</p> <p>3.5.4 Examples 110</p> <p>3.6 Reference prior for the 
uniform distribution 110</p> <p>3.6.1 Lower limit of the interval fixed 110</p> <p>3.6.2 Example 111</p> <p>3.6.3 Both limits unknown 111</p> <p>3.7 The tramcar problem 113</p> <p>3.7.1 The discrete uniform distribution 113</p> <p>3.8 The first digit problem; invariant priors 114</p> <p>3.8.1 A prior in search of an explanation 114</p> <p>3.8.2 The problem 114</p> <p>3.8.3 A solution 115</p> <p>3.8.4 Haar priors 117</p> <p>3.9 The circular normal distribution 117</p> <p>3.9.1 Distributions on the circle 117</p> <p>3.9.2 Example 119</p> <p>3.9.3 Construction of an HDR by numerical integration 120</p> <p>3.9.4 Remarks 122</p> <p>3.10 Approximations based on the likelihood 122</p> <p>3.10.1 Maximum likelihood 122</p> <p>3.10.2 Iterative methods 123</p> <p>3.10.3 Approximation to the posterior density 123</p> <p>3.10.4 Examples 124</p> <p>3.10.5 Extension to more than one parameter 126</p> <p>3.10.6 Example 127</p> <p>3.11 Reference posterior distributions 128</p> <p>3.11.1 The information provided by an experiment 128</p> <p>3.11.2 Reference priors under asymptotic normality 130</p> <p>3.11.3 Uniform distribution of unit length 131</p> <p>3.11.4 Normal mean and variance 132</p> <p>3.11.5 Technical complications 134</p> <p>3.12 Exercises on Chapter 3 134</p> <p><b>4 Hypothesis testing 138</b></p> <p>4.1 Hypothesis testing 138</p> <p>4.1.1 Introduction 138</p> <p>4.1.2 Classical hypothesis testing 138</p> <p>4.1.3 Difficulties with the classical approach 139</p> <p>4.1.4 The Bayesian approach 140</p> <p>4.1.5 Example 142</p> <p>4.1.6 Comment 143</p> <p>4.2 One-sided hypothesis tests 143</p> <p>4.2.1 Definition 143</p> <p>4.2.2 P-values 144</p> <p>4.3 Lindley’s method 145</p> <p>4.3.1 A compromise with classical statistics 145</p> <p>4.3.2 Example 145</p> <p>4.3.3 Discussion 146</p> <p>4.4 Point (or sharp) null hypotheses with prior information 146</p> <p>4.4.1 When are point null hypotheses reasonable? 
146</p> <p>4.4.2 A case of nearly constant likelihood 147</p> <p>4.4.3 The Bayesian method for point null hypotheses 148</p> <p>4.4.4 Sufficient statistics 149</p> <p>4.5 Point null hypotheses for the normal distribution 150</p> <p>4.5.1 Calculation of the Bayes’ factor 150</p> <p>4.5.2 Numerical examples 151</p> <p>4.5.3 Lindley’s paradox 152</p> <p>4.5.4 A bound which does not depend on the prior distribution 154</p> <p>4.5.5 The case of an unknown variance 155</p> <p>4.6 The Doogian philosophy 157</p> <p>4.6.1 Description of the method 157</p> <p>4.6.2 Numerical example 157</p> <p>4.7 Exercises on Chapter 4 158</p> <p><b>5 Two-sample problems 162</b></p> <p>5.1 Two-sample problems – both variances unknown 162</p> <p>5.1.1 The problem of two normal samples 162</p> <p>5.1.2 Paired comparisons 162</p> <p>5.1.3 Example of a paired comparison problem 163</p> <p>5.1.4 The case where both variances are known 163</p> <p>5.1.5 Example 164</p> <p>5.1.6 Non-trivial prior information 165</p> <p>5.2 Variances unknown but equal 165</p> <p>5.2.1 Solution using reference priors 165</p> <p>5.2.2 Example 167</p> <p>5.2.3 Non-trivial prior information 167</p> <p>5.3 Variances unknown and unequal (Behrens–Fisher problem) 168</p> <p>5.3.1 Formulation of the problem 168</p> <p>5.3.2 Patil’s approximation 169</p> <p>5.3.3 Example 170</p> <p>5.3.4 Substantial prior information 170</p> <p>5.4 The Behrens–Fisher controversy 171</p> <p>5.4.1 The Behrens–Fisher problem from a classical standpoint 171</p> <p>5.4.2 Example 172</p> <p>5.4.3 The controversy 173</p> <p>5.5 Inferences concerning a variance ratio 173</p> <p>5.5.1 Statement of the problem 173</p> <p>5.5.2 Derivation of the F distribution 174</p> <p>5.5.3 Example 175</p> <p>5.6 Comparison of two proportions; the 2 × 2 table 176</p> <p>5.6.1 Methods based on the log-odds ratio 176</p> <p>5.6.2 Example 177</p> <p>5.6.3 The inverse root-sine transformation 178</p> <p>5.6.4 Other methods 178</p> <p>5.7 Exercises on Chapter 5 179</p> 
<p><b>6 Correlation, regression and the analysis of variance 182</b></p> <p>6.1 Theory of the correlation coefficient 182</p> <p>6.1.1 Definitions 182</p> <p>6.1.2 Approximate posterior distribution of the correlation coefficient 184</p> <p>6.1.3 The hyperbolic tangent substitution 186</p> <p>6.1.4 Reference prior 188</p> <p>6.1.5 Incorporation of prior information 189</p> <p>6.2 Examples on the use of the correlation coefficient 189</p> <p>6.2.1 Use of the hyperbolic tangent transformation 189</p> <p>6.2.2 Combination of several correlation coefficients 189</p> <p>6.2.3 The squared correlation coefficient 190</p> <p>6.3 Regression and the bivariate normal model 190</p> <p>6.3.1 The model 190</p> <p>6.3.2 Bivariate linear regression 191</p> <p>6.3.3 Example 193</p> <p>6.3.4 Case of known variance 194</p> <p>6.3.5 The mean value at a given value of the explanatory variable 194</p> <p>6.3.6 Prediction of observations at a given value of the explanatory variable 195</p> <p>6.3.7 Continuation of the example 195</p> <p>6.3.8 Multiple regression 196</p> <p>6.3.9 Polynomial regression 196</p> <p>6.4 Conjugate prior for the bivariate regression model 197</p> <p>6.4.1 The problem of updating a regression line 197</p> <p>6.4.2 Formulae for recursive construction of a regression line 197</p> <p>6.4.3 Finding an appropriate prior 199</p> <p>6.5 Comparison of several means – the one way model 200</p> <p>6.5.1 Description of the one way layout 200</p> <p>6.5.2 Integration over the nuisance parameters 201</p> <p>6.5.3 Derivation of the F distribution 203</p> <p>6.5.4 Relationship to the analysis of variance 203</p> <p>6.5.5 Example 204</p> <p>6.5.6 Relationship to a simple linear regression model 206</p> <p>6.5.7 Investigation of contrasts 207</p> <p>6.6 The two way layout 209</p> <p>6.6.1 Notation 209</p> <p>6.6.2 Marginal posterior distributions 210</p> <p>6.6.3 Analysis of variance 212</p> <p>6.7 The general linear model 212</p> <p>6.7.1 Formulation of the general linear model 
212</p> <p>6.7.2 Derivation of the posterior 214</p> <p>6.7.3 Inference for a subset of the parameters 215</p> <p>6.7.4 Application to bivariate linear regression 216</p> <p>6.8 Exercises on Chapter 6 217</p> <p><b>7 Other topics 221</b></p> <p>7.1 The likelihood principle 221</p> <p>7.1.1 Introduction 221</p> <p>7.1.2 The conditionality principle 222</p> <p>7.1.3 The sufficiency principle 223</p> <p>7.1.4 The likelihood principle 223</p> <p>7.1.5 Discussion 225</p> <p>7.2 The stopping rule principle 226</p> <p>7.2.1 Definitions 226</p> <p>7.2.2 Examples 226</p> <p>7.2.3 The stopping rule principle 227</p> <p>7.2.4 Discussion 228</p> <p>7.3 Informative stopping rules 229</p> <p>7.3.1 An example on capture and recapture of fish 229</p> <p>7.3.2 Choice of prior and derivation of posterior 230</p> <p>7.3.3 The maximum likelihood estimator 231</p> <p>7.3.4 Numerical example 231</p> <p>7.4 The likelihood principle and reference priors 232</p> <p>7.4.1 The case of Bernoulli trials and its general implications 232</p> <p>7.4.2 Conclusion 233</p> <p>7.5 Bayesian decision theory 234</p> <p>7.5.1 The elements of game theory 234</p> <p>7.5.2 Point estimators resulting from quadratic loss 236</p> <p>7.5.3 Particular cases of quadratic loss 237</p> <p>7.5.4 Weighted quadratic loss 238</p> <p>7.5.5 Absolute error loss 238</p> <p>7.5.6 Zero-one loss 239</p> <p>7.5.7 General discussion of point estimation 240</p> <p>7.6 Bayes linear methods 240</p> <p>7.6.1 Methodology 240</p> <p>7.6.2 Some simple examples 241</p> <p>7.6.3 Extensions 243</p> <p>7.7 Decision theory and hypothesis testing 243</p> <p>7.7.1 Relationship between decision theory and classical hypothesis testing 243</p> <p>7.7.2 Composite hypotheses 245</p> <p>7.8 Empirical Bayes methods 245</p> <p>7.8.1 Von Mises’ example 245</p> <p>7.8.2 The Poisson case 246</p> <p>7.9 Exercises on Chapter 7 247</p> <p><b>8 Hierarchical models 253</b></p> <p>8.1 The idea of a hierarchical model 253</p> <p>8.1.1 Definition 253</p> 
<p>8.1.2 Examples 254</p> <p>8.1.3 Objectives of a hierarchical analysis 257</p> <p>8.1.4 More on empirical Bayes methods 257</p> <p>8.2 The hierarchical normal model 258</p> <p>8.2.1 The model 258</p> <p>8.2.2 The Bayesian analysis for known overall mean 259</p> <p>8.2.3 The empirical Bayes approach 261</p> <p>8.3 The baseball example 262</p> <p>8.4 The Stein estimator 264</p> <p>8.4.1 Evaluation of the risk of the James–Stein estimator 267</p> <p>8.5 Bayesian analysis for an unknown overall mean 268</p> <p>8.5.1 Derivation of the posterior 270</p> <p>8.6 The general linear model revisited 272</p> <p>8.6.1 An informative prior for the general linear model 272</p> <p>8.6.2 Ridge regression 274</p> <p>8.6.3 A further stage to the general linear model 275</p> <p>8.6.4 The one way model 276</p> <p>8.6.5 Posterior variances of the estimators 277</p> <p>8.7 Exercises on Chapter 8 277</p> <p><b>9 The Gibbs sampler and other numerical methods 281</b></p> <p>9.1 Introduction to numerical methods 281</p> <p>9.1.1 Monte Carlo methods 281</p> <p>9.1.2 Markov chains 282</p> <p>9.2 The EM algorithm 283</p> <p>9.2.1 The idea of the EM algorithm 283</p> <p>9.2.2 Why the EM algorithm works 285</p> <p>9.2.3 Semi-conjugate prior with a normal likelihood 287</p> <p>9.2.4 The EM algorithm for the hierarchical normal model 288</p> <p>9.2.5 A particular case of the hierarchical normal model 290</p> <p>9.3 Data augmentation by Monte Carlo 291</p> <p>9.3.1 The genetic linkage example revisited 291</p> <p>9.3.2 Use of R 291</p> <p>9.3.3 The genetic linkage example in R 292</p> <p>9.3.4 Other possible uses for data augmentation 293</p> <p>9.4 The Gibbs sampler 294</p> <p>9.4.1 Chained data augmentation 294</p> <p>9.4.2 An example with observed data 296</p> <p>9.4.3 More on the semi-conjugate prior with a normal likelihood 299</p> <p>9.4.4 The Gibbs sampler as an extension of chained data augmentation 301</p> <p>9.4.5 An application to change-point analysis 302</p> <p>9.4.6 Other uses of the 
Gibbs sampler 306</p> <p>9.4.7 More about convergence 309</p> <p>9.5 Rejection sampling 311</p> <p>9.5.1 Description 311</p> <p>9.5.2 Example 311</p> <p>9.5.3 Rejection sampling for log-concave distributions 311</p> <p>9.5.4 A practical example 313</p> <p>9.6 The Metropolis–Hastings algorithm 317</p> <p>9.6.1 Finding an invariant distribution 317</p> <p>9.6.2 The Metropolis–Hastings algorithm 318</p> <p>9.6.3 Choice of a candidate density 320</p> <p>9.6.4 Example 321</p> <p>9.6.5 More realistic examples 322</p> <p>9.6.6 Gibbs as a special case of Metropolis–Hastings 322</p> <p>9.6.7 Metropolis within Gibbs 323</p> <p>9.7 Introduction to WinBUGS and OpenBUGS 323</p> <p>9.7.1 Information about WinBUGS and OpenBUGS 323</p> <p>9.7.2 Distributions in WinBUGS and OpenBUGS 324</p> <p>9.7.3 A simple example using WinBUGS 324</p> <p>9.7.4 The pump failure example revisited 327</p> <p>9.7.5 DoodleBUGS 327</p> <p>9.7.6 coda 329</p> <p>9.7.7 R2WinBUGS and R2OpenBUGS 329</p> <p>9.8 Generalized linear models 332</p> <p>9.8.1 Logistic regression 332</p> <p>9.8.2 A general framework 334</p> <p>9.9 Exercises on Chapter 9 335</p> <p><b>10 Some approximate methods 340</b></p> <p>10.1 Bayesian importance sampling 340</p> <p>10.1.1 Importance sampling to find HDRs 343</p> <p>10.1.2 Sampling importance re-sampling 344</p> <p>10.1.3 Multidimensional applications 344</p> <p>10.2 Variational Bayesian methods: simple case 345</p> <p>10.2.1 Independent parameters 347</p> <p>10.2.2 Application to the normal distribution 349</p> <p>10.2.3 Updating the mean 350</p> <p>10.2.4 Updating the variance 351</p> <p>10.2.5 Iteration 352</p> <p>10.2.6 Numerical example 352</p> <p>10.3 Variational Bayesian methods: general case 353</p> <p>10.3.1 A mixture of multivariate normals 353</p> <p>10.4 ABC: Approximate Bayesian Computation 356</p> <p>10.4.1 The ABC rejection algorithm 356</p> <p>10.4.2 The genetic linkage example 358</p> <p>10.4.3 The ABC Markov Chain Monte Carlo algorithm 360</p> <p>10.4.4 The 
ABC Sequential Monte Carlo algorithm 362</p> <p>10.4.5 The ABC local linear regression algorithm 365</p> <p>10.4.6 Other variants of ABC 366</p> <p>10.5 Reversible jump Markov chain Monte Carlo 367</p> <p>10.5.1 RJMCMC algorithm 367</p> <p>10.6 Exercises on Chapter 10 369</p> <p><b>Appendix A Common statistical distributions 373</b></p> <p>A.1 Normal distribution 374</p> <p>A.2 Chi-squared distribution 375</p> <p>A.3 Normal approximation to chi-squared 376</p> <p>A.4 Gamma distribution 376</p> <p>A.5 Inverse chi-squared distribution 377</p> <p>A.6 Inverse chi distribution 378</p> <p>A.7 Log chi-squared distribution 379</p> <p>A.8 Student’s t distribution 380</p> <p>A.9 Normal/chi-squared distribution 381</p> <p>A.10 Beta distribution 382</p> <p>A.11 Binomial distribution 383</p> <p>A.12 Poisson distribution 384</p> <p>A.13 Negative binomial distribution 385</p> <p>A.14 Hypergeometric distribution 386</p> <p>A.15 Uniform distribution 387</p> <p>A.16 Pareto distribution 388</p> <p>A.17 Circular normal distribution 389</p> <p>A.18 Behrens’ distribution 391</p> <p>A.19 Snedecor’s F distribution 393</p> <p>A.20 Fisher’s z distribution 393</p> <p>A.21 Cauchy distribution 394</p> <p>A.22 The probability that one beta variable is greater than another 395</p> <p>A.23 Bivariate normal distribution 395</p> <p>A.24 Multivariate normal distribution 396</p> <p>A.25 Distribution of the correlation coefficient 397</p> <p><b>Appendix B Tables 399</b></p> <p>B.1 Percentage points of the Behrens–Fisher distribution 399</p> <p>B.2 Highest density regions for the chi-squared distribution 402</p> <p>B.3 HDRs for the inverse chi-squared distribution 404</p> <p>B.4 Chi-squared corresponding to HDRs for log chi-squared 406</p> <p>B.5 Values of F corresponding to HDRs for log F 408</p> <p>Appendix C R programs 430</p> <p><b>Appendix D Further reading 436</b></p> <p>D.1 Robustness 436</p> <p>D.2 Nonparametric methods 436</p> <p>D.3 Multivariate estimation 436</p> <p>D.4 Time series and 
forecasting 437</p> <p>D.5 Sequential methods 437</p> <p>D.6 Numerical methods 437</p> <p>D.7 Bayesian networks 437</p> <p>D.8 General reading 438</p> <p>References 439</p> <p><b>Index 455</b></p>
<p>“As a lifelong non-statistician and sporadic ‘user’ of statistics, I have not come across another advanced statistics book (as I would characterize this one) that offers so much to the non-expert and, I’ll bet, to the expert as well. The book has my highest recommendation.”  (<i>Computing Reviews</i>, 7 January 2013)</p>
<b>Peter Lee</b>, Department of Mathematics, University of York, and formerly Provost of Wentworth College.
