Co-Clustering

Models, Algorithms and Applications
1st edition

by: Gérard Govaert, Mohamed Nadif

139.99 €

Publisher: Wiley
Format: PDF
Published: 11.12.2013
ISBN/EAN: 9781118649497
Language: English
Number of pages: 256

DRM-protected eBook; to read it you need, for example, Adobe Digital Editions and an Adobe ID.

Description

Cluster or co-cluster analyses are important tools in a variety of scientific areas. The introduction of this book presents a state of the art of already well-established, as well as more recent, methods of co-clustering. The authors mainly deal with two-mode partitioning under different approaches, but pay particular attention to a probabilistic approach.

Chapter 1 concerns clustering in general and model-based clustering in particular. The authors briefly review the classical clustering methods and focus on the mixture model. They present and discuss the use of different mixtures adapted to different types of data. The algorithms used are described, and related works with different classical methods are presented and commented upon. This chapter is useful in tackling the problem of co-clustering under the mixture approach. Chapter 2 is devoted to the latent block model proposed in the mixture approach context. The authors discuss this model in detail and present its relevance to co-clustering. Various algorithms are presented in a general context. Chapter 3 focuses on binary and categorical data. It presents, in detail, the appropriate latent block mixture models. Variants of these models and algorithms are presented and illustrated using examples. Chapter 4 focuses on contingency data. Mutual information, phi-squared and model-based co-clustering are studied. Models, algorithms and connections among different approaches are described and illustrated. Chapter 5 presents the case of continuous data. In the same way, the different approaches used in the previous chapters are extended to this situation.

Contents

1. Cluster Analysis.
2. Model-Based Co-Clustering.
3. Co-Clustering of Binary and Categorical Data.
4. Co-Clustering of Contingency Tables.
5. Co-Clustering of Continuous Data.

About the Authors

Gérard Govaert is Professor at the University of Technology of Compiègne, France. He is also a member of the CNRS laboratory Heudiasyc (Heuristics and Diagnosis of Complex Systems). His research interests include latent structure modeling, model selection, model-based cluster analysis, block clustering and statistical pattern recognition. He is one of the authors of the MIXMOD (MIXture MODelling) software.

Mohamed Nadif is Professor at the University of Paris-Descartes, France, where he is a member of LIPADE (Paris Descartes computer science laboratory) in the Mathematics and Computer Science department. His research interests include machine learning, data mining, model-based cluster analysis, co-clustering, factorization and data analysis.

Cluster analysis is an important tool in a variety of scientific areas. Chapter 1 briefly presents a state of the art of already well-established as well as more recent methods; the hierarchical, partitioning and fuzzy approaches are discussed, among others. The authors review the difficulty these classical methods have in tackling high dimensionality, sparsity and scalability. Chapter 2 discusses the interest of co-clustering, presenting different approaches and defining a co-cluster. The authors focus on co-clustering as simultaneous clustering and discuss the cases of binary, continuous and co-occurrence data. The criteria and algorithms are described and illustrated on simulated and real data. Chapter 3 treats co-clustering as model-based co-clustering. A latent block model is defined for different kinds of data. The estimation of parameters and co-clustering are tackled under two approaches: maximum likelihood and classification maximum likelihood. Hard and soft algorithms are described and applied to simulated and real data. Chapter 4 treats co-clustering as matrix approximation. The tri-factorization approach is considered and algorithms based on update rules are described. Links with numerical and probabilistic approaches are established. A combination of algorithms is proposed and evaluated on simulated and real data. Chapter 5 treats co-clustering, or biclustering, as the search for coherent co-clusters in biological terms, or the extraction of co-clusters under conditions. Classical algorithms are described and evaluated on simulated and real data. Different indices for evaluating the quality of co-clusters are noted and used in numerical experiments.
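To make the two-mode partitioning idea described above concrete, here is a minimal sketch of an alternating scheme that reassigns rows and columns to the classes whose block means best fit them. This is only an illustrative toy in Python, not the book's own algorithms or code; the function name, the squared-error criterion and the random initialization are assumptions for the example (the model-based methods in the book replace the squared-error criterion with a latent block likelihood).

```python
import numpy as np

def co_cluster(X, g, m, n_iter=20, seed=0):
    """Toy alternating two-mode partitioning (illustrative sketch).

    Fits a block-constant approximation of X with g row classes and
    m column classes by alternately reassigning rows and columns to
    the class whose block-mean profile gives the smallest squared error.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    z = rng.integers(0, g, size=n)  # row class labels
    w = rng.integers(0, m, size=d)  # column class labels
    for _ in range(n_iter):
        # Block means mu[k, l] over the current (row, column) partition.
        mu = np.zeros((g, m))
        for k in range(g):
            for l in range(m):
                block = X[np.ix_(z == k, w == l)]
                mu[k, l] = block.mean() if block.size else 0.0
        # Reassign each row to the row class minimizing squared error.
        row_cost = ((X[:, None, :] - mu[:, w][None]) ** 2).sum(axis=2)
        z = row_cost.argmin(axis=1)
        # Reassign each column symmetrically.
        col_cost = ((X.T[:, None, :] - mu[z, :].T[None]) ** 2).sum(axis=2)
        w = col_cost.argmin(axis=1)
    return z, w
```

On a matrix with a clear block structure, such a scheme typically recovers the row and column partitions simultaneously, which is the point of co-clustering as opposed to clustering each mode separately.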
Acknowledgment xi

Introduction xiii
I.1. Types and representation of data xiii
I.1.1. Binary data xiv
I.1.2. Categorical data xiv
I.1.3. Continuous data xv
I.1.4. Contingency table xvii
I.1.5. Data representations xix
I.2. Simultaneous analysis xx
I.2.1. Data analysis xx
I.2.2. Co-clustering xxii
I.2.3. Applications xxiii
I.3. Notation xxvii
I.4. Different approaches xxviii
I.4.1. Two-mode partitioning xxviii
I.4.2. Two-mode hierarchical clustering xxxvii
I.4.3. Direct or block clustering xxxix
I.4.4. Biclustering xxxix
I.4.5. Other structures and other aims xliv
I.5. Model-based co-clustering xlvi
I.6. Outline xlix

Chapter 1. Cluster Analysis 1
1.1. Introduction 1
1.2. Miscellaneous clustering methods 4
1.2.1. Hierarchical approach 4
1.2.2. The k-means algorithm 5
1.2.3. Other approaches 7
1.3. Model-based clustering and the mixture model 11
1.4. EM algorithm 15
1.4.1. Complete data and complete-data likelihood 16
1.4.2. Principle 17
1.4.3. Application to mixture models 18
1.4.4. Properties 19
1.4.5. EM: an alternating optimization algorithm 19
1.5. Clustering and the mixture model 20
1.5.1. The two approaches 20
1.5.2. Classification likelihood 21
1.5.3. The CEM algorithm 22
1.5.4. Comparison of the two approaches 22
1.5.5. Fuzzy clustering 24
1.6. Gaussian mixture model 26
1.6.1. The model 26
1.6.2. CEM algorithm 28
1.6.3. Spherical form, identical proportions and volumes 29
1.6.4. Spherical form, identical proportions but differing volumes 30
1.6.5. Identical covariance matrices and proportions 31
1.7. Binary data 32
1.7.1. Binary mixture model 32
1.7.2. Parsimonious model 33
1.7.3. Examples of application 35
1.8. Categorical variables 36
1.8.1. Multinomial mixture model 36
1.8.2. Parsimonious model 38
1.9. Contingency tables 41
1.9.1. MNDKI2 algorithm 41
1.9.2. Model-based approach 43
1.9.3. Illustration 47
1.10. Implementation 49
1.10.1. Choice of model and of the number of classes 51
1.10.2. Strategies for use 51
1.10.3. Extension to particular situations 52
1.11. Conclusion 53

Chapter 2. Model-Based Co-Clustering 55
2.1. Metric approach 55
2.2. Probabilistic models 57
2.3. Latent block model 59
2.3.1. Definition 59
2.3.2. Link with the mixture model 61
2.3.3. Log-likelihoods 62
2.3.4. A complex model 63
2.4. Maximum likelihood estimation and algorithms 67
2.4.1. Variational EM approach 69
2.4.2. Classification EM approach 72
2.4.3. Stochastic EM-Gibbs approach 73
2.5. Bayesian approach 75
2.6. Conclusion and miscellaneous developments 76

Chapter 3. Co-Clustering of Binary and Categorical Data 79
3.1. Example and notation 80
3.2. Metric approach 82
3.3. Bernoulli latent block model and algorithms 84
3.3.1. The model 84
3.3.2. Model identifiability 85
3.3.3. Binary LBVEM and LBCEM algorithms 86
3.4. Parsimonious Bernoulli LBMs 90
3.5. Categorical data 91
3.6. Bayesian inference 93
3.7. Model selection 96
3.7.1. The integrated completed log-likelihood (ICL) 96
3.7.2. Penalized information criteria 97
3.8. Illustrative experiments 98
3.8.1. Townships 98
3.8.2. Mero 101
3.9. Conclusion 105

Chapter 4. Co-Clustering of Contingency Tables 107
4.1. Measures of association 108
4.1.1. Phi-squared coefficient 109
4.1.2. Mutual information 111
4.2. Contingency table associated with a couple of partitions 113
4.2.1. Associated distributions 113
4.2.2. Associated measures of association 116
4.3. Co-clustering of contingency table 119
4.3.1. Two equivalent approaches 119
4.3.2. Parameter modification of criteria 121
4.3.3. Co-clustering with the phi-squared coefficient 124
4.3.4. Co-clustering with the mutual information 129
4.4. Model-based co-clustering 131
4.4.1. Block model for contingency tables 133
4.4.2. Poisson latent block model 137
4.4.3. Poisson LBVEM and LBCEM algorithms 138
4.5. Comparison of all algorithms 140
4.5.1. CROKI2 versus CROINFO 142
4.5.2. CROINFO versus Poisson LBCEM 142
4.5.3. Poisson LBVEM versus Poisson LBCEM 144
4.5.4. Behavior of CROKI2, CROINFO, LBCEM and LBVEM 147
4.6. Conclusion 149

Chapter 5. Co-Clustering of Continuous Data 151
5.1. Metric approach 152
5.1.1. Measure of information 153
5.1.2. Summarized data associated with partitions 153
5.1.3. Objective function 156
5.1.4. CROEUC algorithm 157
5.2. Gaussian latent block model 159
5.2.1. The model 159
5.2.2. Gaussian LBVEM and LBCEM algorithms 160
5.2.3. Parsimonious Gaussian latent block models 161
5.3. Illustrative example 163
5.4. Gaussian block mixture model 168
5.4.1. The model 169
5.4.2. GBEM algorithm 170
5.5. Numerical experiments 173
5.5.1. GBEM versus CROEUC and EM 174
5.5.2. Effect of the size of data 175
5.6. Conclusion 175

Bibliography 177
Index 199

You may also be interested in these products:

Bandwidth Efficient Coding
by: John B. Anderson
EPUB ebook
114.99 €

Digital Communications with Emphasis on Data Modems
by: Richard W. Middlestead
PDF ebook
171.99 €

Bandwidth Efficient Coding
by: John B. Anderson
PDF ebook
114.99 €