
Advances in Data Science


Symbolic, Complex, and Network Data
1st edition

by: Edwin Diday, Rong Guan, Gilbert Saporta, Huiwen Wang

139,99 €

Publisher: Wiley
Format: PDF
Published: 23.01.2020
ISBN/EAN: 9781119695103
Language: English
Number of pages: 258

DRM-protected eBook; to read it you will need e.g. Adobe Digital Editions and an Adobe ID.

Description

Data science unifies statistics, data analysis and machine learning to achieve a better understanding of the masses of data produced today, and to improve prediction. Special kinds of data (symbolic, network, complex, compositional) are increasingly frequent in data science. These data require specific methodologies, but reference works in this field are lacking.

Advances in Data Science fills this gap. It presents a collection of up-to-date contributions by eminent scholars following two international workshops held in Beijing and Paris. The 10 chapters are organized into four parts: Symbolic Data, Complex Data, Network Data and Clustering. They include fundamental contributions, as well as applications to several domains, including business and the social sciences.
Preface xi

Part 1. Symbolic Data 1

Chapter 1. Explanatory Tools for Machine Learning in the Symbolic Data Analysis Framework 3
Edwin DIDAY
1.1. Introduction 4
1.2. Introduction to Symbolic Data Analysis 6
1.2.1. What are complex data? 6
1.2.2. What are "classes" and "class of complex data"? 7
1.2.3. Which kind of class variability? 7
1.2.4. What are "symbolic variables" and "symbolic data tables"? 7
1.2.5. Symbolic Data Analysis (SDA) 9
1.3. Symbolic data tables from Dynamic Clustering Method and EM 10
1.3.1. The "dynamical clustering method" (DCM) 10
1.3.2. Examples of DCM applications 10
1.3.3. Clustering methods by mixture decomposition 12
1.3.4. Symbolic data tables from clustering 13
1.3.5. A general way to compare results of clustering methods by the "explanatory power" of their associated symbolic data table 15
1.3.6. Quality criteria of classes and variables based on the cells of the symbolic data table containing intervals or inferred distributions 15
1.4. Criteria for ranking individuals, classes and their bar chart descriptive symbolic variables 16
1.4.1. A theoretical framework for SDA 16
1.4.2. Characterization of a category and a class by a measure of discordance 18
1.4.3. Link between a characterization by the criteria W and the standard Tf-Idf 19
1.4.4. Ranking the individuals, the symbolic variables and the classes of a bar chart symbolic data table 21
1.5. Two directions of research 23
1.5.1. Parametrization of concordance and discordance criteria 23
1.5.2. Improving the explanatory power of any machine learning tool by a filtering process 25
1.6. Conclusion 27
1.7. References 28

Chapter 2. Likelihood in the Symbolic Context 31
Richard EMILION and Edwin DIDAY
2.1. Introduction 31
2.2. Probabilistic setting 32
2.2.1. Description variable and class variable 32
2.2.2. Conditional distributions 33
2.2.3. Symbolic variables 33
2.2.4. Examples 35
2.2.5. Probability measures on (ℂ, C), likelihood 37
2.3. Parametric models for p = 1 38
2.3.1. LDA model 38
2.3.2. BLS method 41
2.3.3. Interval-valued variables 42
2.3.4. Probability vectors and histogram-valued variables 42
2.4. Nonparametric estimation for p = 1 45
2.4.1. Multihistograms and multivariate polygons 45
2.4.2. Dirichlet kernel mixtures 45
2.4.3. Dirichlet Process Mixture (DPM) 45
2.5. Density models for p ≥ 2 46
2.6. Conclusion 46
2.7. References 47

Chapter 3. Dimension Reduction and Visualization of Symbolic Interval-Valued Data Using Sliced Inverse Regression 49
Han-Ming WU, Chiun-How KAO and Chun-houh CHEN
3.1. Introduction 49
3.2. PCA for interval-valued data and the sliced inverse regression 51
3.2.1. PCA for interval-valued data 51
3.2.2. Classic SIR 52
3.3. SIR for interval-valued data 53
3.3.1. Quantification approaches 54
3.3.2. Distributional approaches 56
3.4. Projections and visualization in DR subspace 58
3.4.1. Linear combinations of intervals 58
3.4.2. The graphical representation of the projected intervals in the 2D DR subspace 59
3.5. Some computational issues 61
3.5.1. Standardization of interval-valued data 61
3.5.2. The slicing schemes for iSIR 62
3.5.3. The evaluation of DR components 62
3.6. Simulation studies 63
3.6.1. Scenario 1: aggregated data 63
3.6.2. Scenario 2: data based on interval arithmetic 63
3.6.3. Results 64
3.7. A real data example: face recognition data 65
3.8. Conclusion and discussion 73
3.9. References 74

Chapter 4. On the "Complexity" of Social Reality. Some Reflections About the Use of Symbolic Data Analysis in Social Sciences 79
Frédéric LEBARON
4.1. Introduction 79
4.2. Social sciences facing "complexity" 80
4.2.1. The total social fact, a designation of "complexity" in social sciences 80
4.2.2. Two families of answers 80
4.2.3. The contemporary deepening of the two approaches, "reductionist" and "encompassing" 81
4.2.4. Issues of scale and heterogeneity 82
4.3. Symbolic data analysis in the social sciences: an example 83
4.3.1. Symbolic data analysis 83
4.3.2. An exploratory case study on European data 83
4.3.3. A sociological interpretation 94
4.4. Conclusion 95
4.5. References 96

Part 2. Complex Data 99

Chapter 5. A Spatial Dependence Measure and Prediction of Georeferenced Data Streams Summarized by Histograms 101
Rosanna VERDE and Antonio BALZANELLA
5.1. Introduction 101
5.2. Processing setup 103
5.3. Main definitions 104
5.4. Online summarization of a data stream through CluStream for Histogram data 106
5.5. Spatial dependence monitoring: a variogram for histogram data 107
5.6. Ordinary kriging for histogram data 110
5.7. Experimental results on real data 112
5.8. Conclusion 116
5.9. References 116

Chapter 6. Incremental Calculation Framework for Complex Data 119
Huiwen WANG, Yuan WEI and Siyang WANG
6.1. Introduction 119
6.2. Basic data 122
6.2.1. The basic data space 122
6.2.2. Sample covariance matrix 123
6.3. Incremental calculation of complex data 124
6.3.1. Transformation of complex data 124
6.3.2. Online decomposition of covariance matrix 125
6.3.3. Adopted algorithms 128
6.4. Simulation studies 131
6.4.1. Functional linear regression 131
6.4.2. Compositional PCA 133
6.5. Conclusion 135
6.6. Acknowledgment 135
6.7. References 135

Part 3. Network Data 139

Chapter 7. Recommender Systems and Attributed Networks 141
Françoise FOGELMAN-SOULIÉ, Lanxiang MEI, Jianyu ZHANG, Yiming LI, Wen GE, Yinglan LI and Qiaofei YE
7.1. Introduction 141
7.2. Recommender systems 142
7.2.1. Data used 143
7.2.2. Model-based collaborative filtering 145
7.2.3. Neighborhood-based collaborative filtering 145
7.2.4. Hybrid models 148
7.3. Social networks 150
7.3.1. Non-independence 150
7.3.2. Definition of a social network 150
7.3.3. Properties of social networks 151
7.3.4. Bipartite networks 152
7.3.5. Multilayer networks 153
7.4. Using social networks for recommendation 154
7.4.1. Social filtering 154
7.4.2. Extension to use attributes 155
7.4.3. Remarks 156
7.5. Experiments 156
7.5.1. Performance evaluation 156
7.5.2. Datasets 157
7.5.3. Analysis of one-mode projected networks 158
7.5.4. Models evaluated 160
7.5.5. Results 160
7.6. Perspectives 163
7.7. References 163

Chapter 8. Attributed Networks Partitioning Based on Modularity Optimization 169
David COMBE, Christine LARGERON, Baptiste JEUDY, Françoise FOGELMAN-SOULIÉ and Jing WANG
8.1. Introduction 169
8.2. Related work 171
8.3. Inertia based modularity 172
8.4. I-Louvain 174
8.5. Incremental computation of the modularity gain 176
8.6. Evaluation of I-Louvain method 179
8.6.1. Performance of I-Louvain on artificial datasets 179
8.6.2. Run-time of I-Louvain 180
8.7. Conclusion 181
8.8. References 182

Part 4. Clustering 187

Chapter 9. A Novel Clustering Method with Automatic Weighting of Tables and Variables 189
Rodrigo C. DE ARAÚJO, Francisco DE ASSIS TENORIO DE CARVALHO and Yves LECHEVALLIER
9.1. Introduction 189
9.2. Related work 190
9.3. Definitions, notations and objective 191
9.3.1. Choice of distances 192
9.3.2. Criterion W measures the homogeneity of the partition P on the set of tables 193
9.3.3. Optimization of the criterion W 195
9.4. Hard clustering with automated weighting of tables and variables 196
9.4.1. Clustering algorithms MND–W and MND–WT 196
9.5. Applications: UCI data sets 201
9.5.1. Application I: Iris plant 201
9.5.2. Application II: multi-features dataset 204
9.6. Conclusion 206
9.7. References 206

Chapter 10. Clustering and Generalized ANOVA for Symbolic Data Constructed from Open Data 209
Simona KORENJAK-ČERNE, Nataša KEJŽAR and Vladimir BATAGELJ
10.1. Introduction 209
10.2. Data description based on discrete (membership) distributions 210
10.3. Clustering 212
10.3.1. TIMSS – study of teaching approaches 215
10.3.2. Clustering countries based on age–sex distributions of their populations 217
10.4. Generalized ANOVA 221
10.5. Conclusion 225
10.6. References 226

List of Authors 229
Index 233
Edwin Diday is Emeritus Professor at Paris-Dauphine University-PSL. He helped to introduce the symbolic data analysis paradigm and the dynamic clustering method (opening the path to local models), as well as pyramidal clustering for the spatial representation of overlapping clusters.

Rong Guan is Associate Professor at the School of Statistics and Mathematics, Central University of Finance and Economics, Beijing. Her research covers complex and symbolic data analysis and financial distress diagnosis.

Gilbert Saporta is Emeritus Professor at the Conservatoire National des Arts et Métiers, France. His current research focuses on functional data analysis and clusterwise and sparse methods. He is Honorary President of the French Statistical Society.

Huiwen Wang is Professor at the School of Economics and Management, Beihang University, Beijing. Her research covers dimension reduction, PLS regression, symbolic data analysis, compositional data analysis, functional data analysis and statistical modeling methods for mixed data.

You may also be interested in these products:

MDX Solutions
by: George Spofford, Sivakumar Harinath, Christopher Webb, Dylan Hai Huang, Francesco Civardi
PDF ebook
53,99 €
Concept Data Analysis
by: Claudio Carpineto, Giovanni Romano
PDF ebook
107,99 €
Handbook of Virtual Humans
by: Nadia Magnenat-Thalmann, Daniel Thalmann
PDF ebook
150,99 €