Multiblock Data Fusion in Statistics and Machine Learning


Applications in the Natural and Life Sciences
1st Edition

by: Age K. Smilde, Tormod Næs, Kristian Hovde Liland

134,99 €

Publisher: Wiley
Format: PDF
Published: 30.03.2022
ISBN/EAN: 9781119600985
Language: English
Number of pages: 416

DRM-protected eBook; you will need e.g. Adobe Digital Editions and an Adobe ID to read it.

Description

<b>Multiblock Data Fusion in Statistics and Machine Learning</b> <p><b>Explore the advantages and shortcomings of various forms of multiblock analysis, and the relationships between them, with this expert guide </b> <p>Arising out of fusion problems that exist in a variety of fields in the natural and life sciences, the methods available to fuse multiple data sets have expanded dramatically in recent years. Older methods, rooted in psychometrics and chemometrics, also exist. <p><i>Multiblock Data Fusion in Statistics and Machine Learning: Applications in the Natural and Life Sciences</i> is a detailed overview of all relevant multiblock data analysis methods for fusing multiple data sets. It focuses on methods based on components and latent variables, including both well-known and lesser-known methods with potential applications in different types of problems. <p> Many of the included methods are illustrated by practical examples and are accompanied by a freely available R-package. The distinguished authors have created an accessible and useful guide to help readers fuse data, develop new data fusion models, discover how the involved algorithms and models work, and understand the advantages and shortcomings of various approaches. 
<p> This book includes: <ul><li>A thorough introduction to the different options available for the fusion of multiple data sets, including methods originating in psychometrics and chemometrics</li> <li>Practical discussions of well-known and lesser-known methods with applications in a wide variety of data problems</li> <li>Functional R code for applying many of the discussed methods</li></ul> <p> Perfect for graduate students studying data analysis in the context of the natural and life sciences, including bioinformatics, sensometrics, and chemometrics, <i>Multiblock Data Fusion in Statistics and Machine Learning: Applications in the Natural and Life Sciences</i> is also an indispensable resource for developers and users of the results of multiblock methods.
<p>Foreword xiii</p> <p>Preface xv</p> <p>List of Figures xvii</p> <p>List of Tables xxxi</p> <p><b>Part I Introductory Concepts and Theory 1</b></p> <p><b>1 Introduction 3</b></p> <p>1.1 Scope of the Book 3</p> <p>1.2 Potential Audience 4</p> <p>1.3 Types of Data and Analyses 5</p> <p>1.3.1 Supervised and Unsupervised Analyses 5</p> <p>1.3.2 High-, Mid- and Low-level Fusion 5</p> <p>1.3.3 Dimension Reduction 7</p> <p>1.3.4 Indirect Versus Direct Data 8</p> <p>1.3.5 Heterogeneous Fusion 8</p> <p>1.4 Examples 8</p> <p>1.4.1 Metabolomics 8</p> <p>1.4.2 Genomics 11</p> <p>1.4.3 Systems Biology 13</p> <p>1.4.4 Chemistry 13</p> <p>1.4.5 Sensory Science 15</p> <p>1.5 Goals of Analyses 16</p> <p>1.6 Some History 17</p> <p>1.7 Fundamental Choices 17</p> <p>1.8 Common and Distinct Components 19</p> <p>1.9 Overview and Links 20</p> <p>1.10 Notation and Terminology 21</p> <p>1.11 Abbreviations 22</p> <p><b>2 Basic Theory and Concepts 25</b></p> <p>2.i General Introduction 25</p> <p>2.1 Component Models 25</p> <p>2.1.1 General Idea of Component Models 25</p> <p>2.1.2 Principal Component Analysis 26</p> <p>2.1.3 Sparse PCA 30</p> <p>2.1.4 Principal Component Regression 31</p> <p>2.1.5 Partial Least Squares 32</p> <p>2.1.6 Sparse PLS 36</p> <p>2.1.7 Principal Covariates Regression 37</p> <p>2.1.8 Redundancy Analysis 38</p> <p>2.1.9 Comparing PLS, PCovR and RDA 38</p> <p>2.1.10 Generalised Canonical Correlation Analysis 38</p> <p>2.1.11 Simultaneous Component Analysis 39</p> <p>2.2 Properties of Data 39</p> <p>2.2.1 Data Theory 39</p> <p>2.2.2 Scale-types 42</p> <p>2.3 Estimation Methods 44</p> <p>2.3.1 Least-squares Estimation 44</p> <p>2.3.2 Maximum-likelihood Estimation 45</p> <p>2.3.3 Eigenvalue Decomposition-based Methods 47</p> <p>2.3.4 Covariance or Correlation-based Estimation Methods 47</p> <p>2.3.5 Sequential Versus Simultaneous Methods 48</p> <p>2.3.6 Homogeneous Versus Heterogeneous Fusion 50</p> <p>2.4 Within- and Between-block Variation 52</p> <p>2.4.1 Definition 
and Example 52</p> <p>2.4.2 MAXBET Solution 54</p> <p>2.4.3 MAXNEAR Solution 54</p> <p>2.4.4 PLS2 Solution 55</p> <p>2.4.5 CCA Solution 55</p> <p>2.4.6 Comparing the Solutions 56</p> <p>2.4.7 PLS, RDA and CCA Revisited 56</p> <p>2.5 Framework for Common and Distinct Components 60</p> <p>2.6 Preprocessing 63</p> <p>2.7 Validation 64</p> <p>2.7.1 Outliers 64</p> <p>2.7.1.1 Residuals 64</p> <p>2.7.1.2 Leverage 66</p> <p>2.7.2 Model Fit 67</p> <p>2.7.3 Bias-variance Trade-off 69</p> <p>2.7.4 Test Set Validation 70</p> <p>2.7.5 Cross-validation 72</p> <p>2.7.6 Permutation Testing 75</p> <p>2.7.7 Jackknife and Bootstrap 76</p> <p>2.7.8 Hyper-parameters and Penalties 77</p> <p>2.8 Appendix 78</p> <p><b>3 Structure of Multiblock Data 87</b></p> <p>3.i General Introduction 87</p> <p>3.1 Taxonomy 87</p> <p>3.2 Skeleton of a Multiblock Data Set 87</p> <p>3.2.1 Shared Sample Mode 88</p> <p>3.2.2 Shared Variable Mode 88</p> <p>3.2.3 Shared Variable or Sample Mode 88</p> <p>3.2.4 Shared Variable and Sample Mode 89</p> <p>3.3 Topology of a Multiblock Data Set 90</p> <p>3.3.1 Unsupervised Analysis 90</p> <p>3.3.2 Supervised Analysis 93</p> <p>3.4 Linking Structures 95</p> <p>3.4.1 Linking Structure for Unsupervised Analysis 95</p> <p>3.4.2 Linking Structures for Supervised Analysis 96</p> <p>3.5 Summary 98</p> <p><b>4 Matrix Correlations 99</b></p> <p>4.i General Introduction 99</p> <p>4.1 Definition 99</p> <p>4.2 Most Used Matrix Correlations 101</p> <p>4.2.1 Inner Product Correlation 101</p> <p>4.2.2 GCD coefficient 101</p> <p>4.2.3 RV-coefficient 102</p> <p>4.2.4 SMI-coefficient 102</p> <p>4.3 Generic Framework of Matrix Correlations 104</p> <p>4.4 Generalised Matrix Correlations 105</p> <p>4.4.1 Generalised RV-coefficient 105</p> <p>4.4.2 Generalised Association Coefficient 106</p> <p>4.5 Partial Matrix Correlations 108</p> <p>4.6 Conclusions and Recommendations 110</p> <p>4.7 Open Issues 111</p> <p><b>Part II Selected Methods for Unsupervised and Supervised Topologies 
113</b></p> <p><b>5 Unsupervised Methods 115</b></p> <p>5.i General Introduction 115</p> <p>5.ii Relations to the General Framework 115</p> <p>5.1 Shared Variable Mode 117</p> <p>5.1.1 Only Common Variation 117</p> <p>5.1.1.1 Simultaneous Component Analysis 117</p> <p>5.1.1.2 Clustering and SCA 123</p> <p>5.1.1.3 Multigroup Data Analysis 125</p> <p>5.1.2 Common, Local, and Distinct Variation 126</p> <p>5.1.2.1 Distinct and Common Components 127</p> <p>5.1.2.2 Multivariate Curve Resolution 130</p> <p>5.2 Shared Sample Mode 133</p> <p>5.2.1 Only Common Variation 133</p> <p>5.2.1.1 SUM-PCA 133</p> <p>5.2.1.2 Multiple Factor Analysis and STATIS 135</p> <p>5.2.1.3 Generalised Canonical Analysis 136</p> <p>5.2.1.4 Regularised Generalised Canonical Correlation Analysis 139</p> <p>5.2.1.5 Exponential Family SCA 140</p> <p>5.2.1.6 Optimal-scaling 143</p> <p>5.2.2 Common, Local, and Distinct Variation 146</p> <p>5.2.2.1 Joint and Individual Variation Explained 146</p> <p>5.2.2.2 Distinct and Common Components 147</p> <p>5.2.2.3 PCA-GCA 148</p> <p>5.2.2.4 Advanced Coupled Matrix and Tensor Factorisation 153</p> <p>5.2.2.5 Penalised-ESCA 156</p> <p>5.2.2.6 Multivariate Curve Resolution 158</p> <p>5.3 Generic Framework 159</p> <p>5.3.1 Framework for Simultaneous Unsupervised Methods 159</p> <p>5.3.1.1 Description of the Framework 159</p> <p>5.3.1.2 Framework Applied to Simultaneous Unsupervised Data Analysis Methods 161</p> <p>5.3.1.3 Framework of Common/Distinct Applied to Simultaneous Unsupervised Multiblock Data Analysis Methods 161</p> <p>5.4 Conclusions and Recommendations 162</p> <p>5.5 Open Issues 164</p> <p><b>6 ASCA and Extensions 167</b></p> <p>6.i General Introduction 167</p> <p>6.ii Relations to the General Framework 167</p> <p>6.1 ANOVA-Simultaneous Component Analysis 168</p> <p>6.1.1 The ASCA Method 168</p> <p>6.1.2 Validation of ASCA 176</p> <p>6.1.2.1 Permutation Testing 176</p> <p>6.1.2.2 Back-projection 178</p> <p>6.1.2.3 Confidence Ellipsoids 178</p> <p>6.1.3 
The ASCA+ and LiMM-PCA Methods 181</p> <p>6.2 Multilevel-SCA 182</p> <p>6.3 Penalised-ASCA 183</p> <p>6.4 Conclusions and Recommendations 185</p> <p>6.5 Open Issues 186</p> <p><b>7 Supervised Methods 187</b></p> <p>7.i General Introduction 187</p> <p>7.ii Relations to the General Framework 187</p> <p>7.1 Multiblock Regression: General Perspectives 188</p> <p>7.1.1 Model and Assumptions 188</p> <p>7.1.2 Different Challenges and Aims 188</p> <p>7.2 Multiblock PLS Regression 190</p> <p>7.2.1 Standard Multiblock PLS Regression 190</p> <p>7.2.2 MB-PLS Used for Classification 194</p> <p>7.2.3 Sparse Multiblock PLS Regression (sMB-PLS) 196</p> <p>7.3 The Family of SO-PLS Regression Methods (Sequential and Orthogonalised PLS Regression) 199</p> <p>7.3.1 The SO-PLS Method 199</p> <p>7.3.2 Order of Blocks 202</p> <p>7.3.3 Interpretation Tools 202</p> <p>7.3.4 Restricted PLS Components and their Application in SO-PLS 203</p> <p>7.3.5 Validation and Component Selection 204</p> <p>7.3.6 Relations to ANOVA 205</p> <p>7.3.7 Extensions of SO-PLS to Handle Interactions Between Blocks 212</p> <p>7.3.8 Further Applications of SO-PLS 215</p> <p>7.3.9 Relations Between SO-PLS and ASCA 215</p> <p>7.4 Parallel and Orthogonalised PLS (PO-PLS) Regression 217</p> <p>7.5 Response Oriented Sequential Alternation 222</p> <p>7.5.1 The ROSA Method 222</p> <p>7.5.2 Validation 225</p> <p>7.5.3 Interpretation 225</p> <p>7.6 Conclusions and Recommendations 228</p> <p>7.7 Open Issues 229</p> <p><b>Part III Methods for Complex Multiblock Structures 231</b></p> <p><b>8 Complex Block Structures; with Focus on L-Shape Relations 233</b></p> <p>8.i General Introduction 233</p> <p>8.ii Relations to the General Framework 234</p> <p>8.1 Analysis of L-shape Data: General Perspectives 235</p> <p>8.2 Sequential Procedures for L-shape Data Based on PLS/PCR and ANOVA 236</p> <p>8.2.1 Interpretation of X1, Quantitative X2-data, Horizontal Axis First 236</p> <p>8.2.2 Interpretation of X1, Categorical X2-data, 
Horizontal Axis First 238</p> <p>8.2.3 Analysis of Segments/Clusters of X1 Data 240</p> <p>8.3 The L-PLS Method for Joint Estimation of Blocks in L-shape Data 246</p> <p>8.3.1 The Original L-PLS Method, Endo-L-PLS 247</p> <p>8.3.2 Exo- Versus Endo-L-PLS 250</p> <p>8.4 Modifications of the Original L-PLS Idea 252</p> <p>8.4.1 Weighting Information from X3 and X1 in L-PLS Using a Parameter <i>α</i> 252</p> <p>8.4.2 Three-blocks Bifocal PLS 253</p> <p>8.5 Alternative L-shape Data Analysis Methods 254</p> <p>8.5.1 Principal Component Analysis with External Information 254</p> <p>8.5.2 A Simple PCA Based Procedure for Using Unlabelled Data in Calibration 255</p> <p>8.5.3 Multivariate Curve Resolution for Incomplete Data 256</p> <p>8.5.4 An Alternative Approach in Consumer Science Based on Correlations Between X3 and X1 257</p> <p>8.6 Domino PLS and More Complex Data Structures 258</p> <p>8.7 Conclusions and Recommendations 258</p> <p>8.8 Open Issues 260</p> <p><b>Part IV Alternative Methods for Unsupervised and Supervised Topologies 261</b></p> <p><b>9 Alternative Unsupervised Methods 263</b></p> <p>9.i General Introduction 263</p> <p>9.ii Relationship to the General Framework 263</p> <p>9.1 Shared Variable Mode 263</p> <p>9.2 Shared Sample Mode 265</p> <p>9.2.1 Only Common Variation 265</p> <p>9.2.1.1 DIABLO 265</p> <p>9.2.1.2 Generalised Coupled Tensor Factorisation 266</p> <p>9.2.1.3 Representation Matrices 267</p> <p>9.2.1.4 Extended PCA 272</p> <p>9.2.2 Common, Local, and Distinct Variation 273</p> <p>9.2.2.1 Generalised SVD 273</p> <p>9.2.2.2 Structural Learning and Integrative Decomposition 273</p> <p>9.2.2.3 Bayesian Inter-battery Factor Analysis 275</p> <p>9.2.2.4 Group Factor Analysis 276</p> <p>9.2.2.5 OnPLS 277</p> <p>9.2.2.6 Generalised Association Study 278</p> <p>9.2.2.7 Multi-Omics Factor Analysis 278</p> <p>9.3 Two Shared Modes and Only Common Variation 281</p> <p>9.3.1 Generalised Procrustes Analysis 282</p> <p>9.3.2 Three-way Methods 282</p> <p>9.4 
Conclusions and Recommendations 283</p> <p>9.4.1 Open Issues 284</p> <p><b>10 Alternative Supervised Methods 287</b></p> <p>10.i General Introduction 287</p> <p>10.ii Relations to the General Framework 287</p> <p>10.1 Model and Focus 288</p> <p>10.2 Extension of PCovR 288</p> <p>10.2.1 Sparse Multiblock Principal Covariates Regression, Sparse PCovR 288</p> <p>10.2.2 Multiway Multiblock Covariates Regression 289</p> <p>10.3 Multiblock Redundancy Analysis 292</p> <p>10.3.1 Standard Multiblock Redundancy Analysis 292</p> <p>10.3.2 Sparse Multiblock Redundancy Analysis 294</p> <p>10.4 Miscellaneous Multiblock Regression Methods 295</p> <p>10.4.1 Multiblock Variance Partitioning 296</p> <p>10.4.2 Network Induced Supervised Learning 296</p> <p>10.4.3 Common Dimensions for Multiblock Regression 298</p> <p>10.5 Modifications and Extensions of the SO-PLS Method 298</p> <p>10.5.1 Extensions of SO-PLS to Three-Way Data 298</p> <p>10.5.2 Variable Selection for SO-PLS 299</p> <p>10.5.3 More Complicated Error Structure for SO-PLS 299</p> <p>10.5.4 SO-PLS Used for Path Modelling 300</p> <p>10.6 Methods for Data Sets Split Along the Sample Mode, Multigroup Methods 304</p> <p>10.6.1 Multigroup PLS Regression 304</p> <p>10.6.2 Clustering of Observations in Multiblock Regression 306</p> <p>10.6.3 Domain-Invariant PLS, DI-PLS 307</p> <p>10.7 Conclusions and Recommendations 308</p> <p>10.8 Open Issues 309</p> <p><b>Part V Software 311</b></p> <p><b>11 Algorithms and Software 313</b></p> <p>11.1 Multiblock Software 313</p> <p>11.2 R package multiblock 313</p> <p>11.3 Installing and Starting the Package 314</p> <p>11.4 Data Handling 314</p> <p>11.4.1 Read From File 314</p> <p>11.4.2 Data Pre-processing 315</p> <p>11.4.3 Re-coding Categorical Data 316</p> <p>11.4.4 Data Structures for Multiblock Analysis 317</p> <p>11.4.4.1 Create List of Blocks 317</p> <p>11.4.4.2 Create data.frame of Blocks 317</p> <p>11.5 Basic Methods 318</p> <p>11.5.1 Prepare Data 319</p> <p>11.5.2 Modelling 319</p> 
<p>11.5.3 Common Output Elements Across Methods 319</p> <p>11.5.4 Scores and Loadings 320</p> <p>11.6 Unsupervised Methods 321</p> <p>11.6.1 Formatting Data for Unsupervised Data Analysis 321</p> <p>11.6.2 Method Interfaces 322</p> <p>11.6.3 Shared Sample Mode Analyses 322</p> <p>11.6.4 Shared Variable Mode 322</p> <p>11.6.5 Common Output Elements Across Methods 323</p> <p>11.6.6 Scores and Loadings 324</p> <p>11.6.7 Plot From Imported Package 325</p> <p>11.7 ANOVA Simultaneous Component Analysis 325</p> <p>11.7.1 Formula Interface 325</p> <p>11.7.2 Simulated Data 325</p> <p>11.7.3 ASCA Modelling 325</p> <p>11.7.4 ASCA Scores 326</p> <p>11.7.5 ASCA Loadings 326</p> <p>11.8 Supervised Methods 327</p> <p>11.8.1 Formatting Data for Supervised Analyses 327</p> <p>11.8.2 Multiblock Partial Least Squares 328</p> <p>11.8.2.1 MB-PLS Modelling 328</p> <p>11.8.2.2 MB-PLS Summaries and Plotting 328</p> <p>11.8.3 Sparse Multiblock Partial Least Squares 328</p> <p>11.8.3.1 Sparse MB-PLS Modelling 328</p> <p>11.8.3.2 Sparse MB-PLS Plotting 329</p> <p>11.8.4 Sequential and Orthogonalised Partial Least Squares 330</p> <p>11.8.4.1 SO-PLS Modelling 330</p> <p>11.8.4.2 Måge Plot 331</p> <p>11.8.4.3 SO-PLS Loadings 332</p> <p>11.8.4.4 SO-PLS Scores 333</p> <p>11.8.4.5 SO-PLS Prediction 334</p> <p>11.8.4.6 SO-PLS Validation 334</p> <p>11.8.4.7 Principal Components of Predictions 336</p> <p>11.8.4.8 CVANOVA 336</p> <p>11.8.5 Parallel and Orthogonalised Partial Least Squares 337</p> <p>11.8.5.1 PO-PLS Modelling 337</p> <p>11.8.5.2 PO-PLS Scores and Loadings 338</p> <p>11.8.6 Response Optimal Sequential Alternation 339</p> <p>11.8.6.1 ROSA Modelling 339</p> <p>11.8.6.2 ROSA Loadings 340</p> <p>11.8.6.3 ROSA Scores 340</p> <p>11.8.6.4 ROSA Prediction 340</p> <p>11.8.6.5 ROSA Validation 341</p> <p>11.8.6.6 ROSA Image Plots 342</p> <p>11.8.7 Multiblock Redundancy Analysis 343</p> <p>11.8.7.1 MB-RDA Modelling 343</p> <p>11.8.7.2 MB-RDA Loadings and Scores 343</p> <p>11.9 Complex Data 
Structures 344</p> <p>11.9.1 L-PLS 344</p> <p>11.9.1.1 Simulated L-shaped Data 344</p> <p>11.9.1.2 Exo-L-PLS 344</p> <p>11.9.1.3 Endo-L-PLS 344</p> <p>11.9.1.4 L-PLS Cross-validation 345</p> <p>11.9.2 SO-PLS-PM 345</p> <p>11.9.2.1 Single SO-PLS-PM Model 346</p> <p>11.9.2.2 Multiple Paths in an SO-PLS-PM Model 346</p> <p>11.10 Software Packages 347</p> <p>11.10.1 R Packages 347</p> <p>11.10.2 MATLAB Toolboxes 348</p> <p>11.10.3 Python 349</p> <p>11.10.4 Commercial Software 349</p> <p>References 351</p> <p>Index 373</p>
<p><b>Age K. Smilde</b> is a Professor of Biosystems Data Analysis at the Swammerdam Institute for Life Sciences at the University of Amsterdam. He also holds a part-time position at the Department of Machine Intelligence of Simula Metropolitan Center for Digital Engineering in Oslo, Norway. His research interest is multiblock data analysis and its implementation in different fields of the life sciences. He is currently the Editor-in-Chief of the <i>Journal of Chemometrics</i>.</p> <p><b>Tormod Næs</b> is a Senior Scientist at Nofima, a food research institute in Norway. He is also currently employed as an adjunct professor at the Department of Food Science, University of Copenhagen, Denmark, and as an extraordinary professor at the University of Stellenbosch, South Africa. His main research interest is multivariate analysis, with special emphasis on applications in sensory science and spectroscopy.</p> <p><b>Kristian Hovde Liland</b> is an Associate Professor with a top scientist scholarship in Data Science at the Norwegian University of Life Sciences and works in the areas of chemometrics, data analysis, and machine learning. His main research is in linear prediction modelling, spectroscopy, and the transition between chemometrics and machine learning.</p>

You might also be interested in these products:

Hot-Melt Extrusion
by: Dennis Douroumis
PDF ebook
136,99 €

Hot-Melt Extrusion
by: Dennis Douroumis
EPUB ebook
136,99 €

Kunststoffe
by: Wilhelm Keim
PDF ebook
99,99 €