
Error Estimation for Pattern Recognition




IEEE Press Series on Biomedical Engineering, 1st edition

By: Ulisses M. Braga Neto, Edward R. Dougherty

€118.99

Publisher: Wiley
Format: EPUB
Published: June 22, 2015
ISBN/EAN: 9781119079378
Language: English
Number of pages: 336

DRM-protected eBook; reading it requires software such as Adobe Digital Editions and an Adobe ID.

Description

<p>This book is the first of its kind to discuss error estimation with a model-based approach. From the basics of classifiers and error estimators to distributional and Bayesian theory, it covers important topics and essential issues pertaining to the scientific validity of pattern classification.</p> <p>Error Estimation for Pattern Recognition focuses on error estimation, which is a broad and poorly understood topic that reaches all research areas using pattern classification. It includes model-based approaches and discussions of newer error estimators such as bolstered and Bayesian estimators. This book was motivated by the application of pattern recognition to high-throughput data with limited replicates, which is a basic problem now appearing in many areas. The first two chapters cover basic issues in classification error estimation, such as definitions, test-set error estimation, and training-set error estimation. The remaining chapters in this book cover results on the performance and representation of training-set error estimators for various pattern classifiers.</p> <p>Additional features of the book include:</p> <p>• The latest results on the accuracy of error estimation<br />• Performance analysis of resubstitution, cross-validation, and bootstrap error estimators using analytical and simulation approaches<br />• Highly interactive computer-based exercises and end-of-chapter problems</p> <p>This is the first book exclusively about error estimation for pattern recognition.</p> <p><b>Ulisses M. Braga Neto</b> is an Associate Professor in the Department of Electrical and Computer Engineering at Texas A&M University, USA. He received his PhD in Electrical and Computer Engineering from The Johns Hopkins University. Dr. Braga Neto received an NSF CAREER Award for his work on error estimation for pattern recognition with applications in genomic signal processing. He is an IEEE Senior Member.</p> <p><b>Edward R. 
Dougherty</b> is a Distinguished Professor, Robert F. Kennedy ’26 Chair, and Scientific Director at the Center for Bioinformatics and Genomic Systems Engineering at Texas A&M University, USA. He is a fellow of both the IEEE and SPIE, and he has received the SPIE President's Award. Dr. Dougherty has authored several books including Epistemology of the Cell: A Systems Perspective on Biological Knowledge and Random Processes for Image and Signal Processing (Wiley-IEEE Press).</p>
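<p>The contrast at the heart of the book's performance analysis — the optimistic bias of resubstitution versus the near-unbiasedness of leave-one-out cross-validation, especially at small sample sizes — can be illustrated with a minimal sketch. This is not code from the book; the nearest-mean classifier and the synthetic two-class Gaussian data are illustrative assumptions.</p>

```python
import numpy as np

def nearest_mean_classify(train_X, train_y, X):
    # Nearest-mean (minimum-distance) classifier: assign each point in X
    # to the class whose training-sample mean is closest.
    m0 = train_X[train_y == 0].mean(axis=0)
    m1 = train_X[train_y == 1].mean(axis=0)
    d0 = np.linalg.norm(X - m0, axis=1)
    d1 = np.linalg.norm(X - m1, axis=1)
    return (d1 < d0).astype(int)

def resubstitution_error(X, y):
    # Train and test on the same sample: typically optimistically biased.
    return float(np.mean(nearest_mean_classify(X, y, X) != y))

def leave_one_out_error(X, y):
    # Leave-one-out cross-validation: nearly unbiased, but with
    # higher variance than resubstitution at small sample sizes.
    errors = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        pred = nearest_mean_classify(X[mask], y[mask], X[i:i + 1])
        errors += int(pred[0] != y[i])
    return errors / len(y)

rng = np.random.default_rng(0)
n = 20  # small per-class sample, where the estimators differ most
X = np.vstack([rng.normal(0.0, 1.0, (n, 2)), rng.normal(1.0, 1.0, (n, 2))])
y = np.array([0] * n + [1] * n)
print(f"resubstitution: {resubstitution_error(X, y):.3f}")
print(f"leave-one-out:  {leave_one_out_error(X, y):.3f}")
```

<p>Repeating this over many synthetic samples would reproduce, in miniature, the kind of deviation-distribution comparison the book carries out analytically.</p>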
<p>Preface xiii</p> <p>Acknowledgments xix</p> <p>List of Symbols xxi</p> <p><b>1 Classification 1</b></p> <p>1.1 Classifiers 1</p> <p>1.2 Population-Based Discriminants 3</p> <p>1.3 Classification Rules 8</p> <p>1.4 Sample-Based Discriminants 13</p> <p>1.4.1 Quadratic Discriminants 14</p> <p>1.4.2 Linear Discriminants 15</p> <p>1.4.3 Kernel Discriminants 16</p> <p>1.5 Histogram Rule 16</p> <p>1.6 Other Classification Rules 20</p> <p>1.6.1 <i>k</i>-Nearest-Neighbor Rules 20</p> <p>1.6.2 Support Vector Machines 21</p> <p>1.6.3 Neural Networks 22</p> <p>1.6.4 Classification Trees 23</p> <p>1.6.5 Rank-Based Rules 24</p> <p>1.7 Feature Selection 25</p> <p>Exercises 28</p> <p><b>2 Error Estimation 35</b></p> <p>2.1 Error Estimation Rules 35</p> <p>2.2 Performance Metrics 38</p> <p>2.2.1 Deviation Distribution 39</p> <p>2.2.2 Consistency 41</p> <p>2.2.3 Conditional Expectation 41</p> <p>2.2.4 Linear Regression 42</p> <p>2.2.5 Confidence Intervals 42</p> <p>2.3 Test-Set Error Estimation 43</p> <p>2.4 Resubstitution 46</p> <p>2.5 Cross-Validation 48</p> <p>2.6 Bootstrap 55</p> <p>2.7 Convex Error Estimation 57</p> <p>2.8 Smoothed Error Estimation 61</p> <p>2.9 Bolstered Error Estimation 63</p> <p>2.9.1 Gaussian-Bolstered Error Estimation 67</p> <p>2.9.2 Choosing the Amount of Bolstering 68</p> <p>2.9.3 Calibrating the Amount of Bolstering 71</p> <p>Exercises 73</p> <p><b>3 Performance Analysis 77</b></p> <p>3.1 Empirical Deviation Distribution 77</p> <p>3.2 Regression 79</p> <p>3.3 Impact on Feature Selection 82</p> <p>3.4 Multiple-Data-Set Reporting Bias 84</p> <p>3.5 Multiple-Rule Bias 86</p> <p>3.6 Performance Reproducibility 92</p> <p>Exercises 94</p> <p><b>4 Error Estimation for Discrete Classification 97</b></p> <p>4.1 Error Estimators 98</p> <p>4.1.1 Resubstitution Error 98</p> <p>4.1.2 Leave-One-Out Error 98</p> <p>4.1.3 Cross-Validation Error 99</p> <p>4.1.4 Bootstrap Error 99</p> <p>4.2 Small-Sample Performance 101</p> <p>4.2.1 Bias 101</p> <p>4.2.2 Variance 
103</p> <p>4.2.3 Deviation Variance, RMS, and Correlation 105</p> <p>4.2.4 Numerical Example 106</p> <p>4.2.5 Complete Enumeration Approach 108</p> <p>4.3 Large-Sample Performance 110</p> <p>Exercises 114</p> <p><b>5 Distribution Theory 115</b></p> <p>5.1 Mixture Sampling Versus Separate Sampling 115</p> <p>5.2 Sample-Based Discriminants Revisited 119</p> <p>5.3 True Error 120</p> <p>5.4 Error Estimators 121</p> <p>5.4.1 Resubstitution Error 121</p> <p>5.4.2 Leave-One-Out Error 122</p> <p>5.4.3 Cross-Validation Error 122</p> <p>5.4.4 Bootstrap Error 124</p> <p>5.5 Expected Error Rates 125</p> <p>5.5.1 True Error 125</p> <p>5.5.2 Resubstitution Error 128</p> <p>5.5.3 Leave-One-Out Error 130</p> <p>5.5.4 Cross-Validation Error 132</p> <p>5.5.5 Bootstrap Error 133</p> <p>5.6 Higher-Order Moments of Error Rates 136</p> <p>5.6.1 True Error 136</p> <p>5.6.2 Resubstitution Error 137</p> <p>5.6.3 Leave-One-Out Error 139</p> <p>5.7 Sampling Distribution of Error Rates 140</p> <p>5.7.1 Resubstitution Error 140</p> <p>5.7.2 Leave-One-Out Error 141</p> <p>Exercises 142</p> <p><b>6 Gaussian Distribution Theory: Univariate Case 145</b></p> <p>6.1 Historical Remarks 146</p> <p>6.2 Univariate Discriminant 147</p> <p>6.3 Expected Error Rates 148</p> <p>6.3.1 True Error 148</p> <p>6.3.2 Resubstitution Error 151</p> <p>6.3.3 Leave-One-Out Error 152</p> <p>6.3.4 Bootstrap Error 152</p> <p>6.4 Higher-Order Moments of Error Rates 154</p> <p>6.4.1 True Error 154</p> <p>6.4.2 Resubstitution Error 157</p> <p>6.4.3 Leave-One-Out Error 160</p> <p>6.4.4 Numerical Example 165</p> <p>6.5 Sampling Distributions of Error Rates 166</p> <p>6.5.1 Marginal Distribution of Resubstitution Error 166</p> <p>6.5.2 Marginal Distribution of Leave-One-Out Error 169</p> <p>6.5.3 Joint Distribution of Estimated and True Errors 174</p> <p>Exercises 176</p> <p><b>7 Gaussian Distribution Theory: Multivariate Case 179</b></p> <p>7.1 Multivariate Discriminants 179</p> <p>7.2 Small-Sample Methods 180</p> <p>7.2.1 
Statistical Representations 181</p> <p>7.2.2 Computational Methods 194</p> <p>7.3 Large-Sample Methods 199</p> <p>7.3.1 Expected Error Rates 200</p> <p>7.3.2 Second-Order Moments of Error Rates 207</p> <p>Exercises 218</p> <p><b>8 Bayesian MMSE Error Estimation 221</b></p> <p>8.1 The Bayesian MMSE Error Estimator 222</p> <p>8.2 Sample-Conditioned MSE 226</p> <p>8.3 Discrete Classification 227</p> <p>8.4 Linear Classification of Gaussian Distributions 238</p> <p>8.5 Consistency 246</p> <p>8.6 Calibration 253</p> <p>8.7 Concluding Remarks 255</p> <p>Exercises 257</p> <p><b>A Basic Probability Review 259</b></p> <p>A.1 Sample Spaces and Events 259</p> <p>A.2 Definition of Probability 260</p> <p>A.3 Borel-Cantelli Lemmas 261</p> <p>A.4 Conditional Probability 262</p> <p>A.5 Random Variables 263</p> <p>A.6 Discrete Random Variables 265</p> <p>A.7 Expectation 266</p> <p>A.8 Conditional Expectation 268</p> <p>A.9 Variance 269</p> <p>A.10 Vector Random Variables 270</p> <p>A.11 The Multivariate Gaussian 271</p> <p>A.12 Convergence of Random Sequences 273</p> <p>A.13 Limiting Theorems 275</p> <p><b>B Vapnik–Chervonenkis Theory 277</b></p> <p>B.1 Shatter Coefficients 277</p> <p>B.2 The VC Dimension 278</p> <p>B.3 VC Theory of Classification 279</p> <p>B.3.1 Linear Classification Rules 279</p> <p>B.3.2 <i>k</i>NN Classification Rule 280</p> <p>B.3.3 Classification Trees 280</p> <p>B.3.4 Nonlinear SVMs 281</p> <p>B.3.5 Neural Networks 281</p> <p>B.3.6 Histogram Rules 281</p> <p>B.4 Vapnik–Chervonenkis Theorem 282</p> <p><b>C Double Asymptotics 285</b></p> <p>Bibliography 291</p> <p>Author index 301</p> <p>Subject index 305</p>
