Details

Data Science in Theory and Practice

Techniques for Big Data Analytics and Complex Data Sets
1st edition

By: Maria Cristina Mariani, Osei Kofi Tweneboah, Maria Pia Beccar-Varela

109,99 €

Publisher: Wiley
Format: EPUB
Published: 30 September 2021
ISBN/EAN: 9781119674733
Language: English
Number of pages: 400

DRM-protected eBook. To read it you need, for example, Adobe Digital Editions and an Adobe ID.

Description

DATA SCIENCE IN THEORY AND PRACTICE

EXPLORE THE FOUNDATIONS OF DATA SCIENCE WITH THIS INSIGHTFUL NEW RESOURCE

Data Science in Theory and Practice delivers a comprehensive treatment of the mathematical and statistical models useful for analyzing data sets arising in disciplines such as banking, finance, health care, bioinformatics, security, education, and social services. Written in five parts, the book examines some of the most commonly used and fundamental mathematical and statistical concepts that form the basis of data science. The authors go on to analyze various data transformation techniques useful for extracting information from raw data, long memory behavior, and predictive modeling.

The book offers readers a multitude of topics, all relevant to the analysis of complex data sets. Along with a robust exploration of the theory underpinning data science, it contains numerous applications to specific and practical problems. It also provides examples of code algorithms in R and Python, together with pseudo-algorithms for porting the code to any other language.

Ideal for students and practitioners without a strong background in data science, the book also covers topics such as:

- Analyses of foundational theoretical subjects, including the history of data science, matrix algebra and random vectors, and multivariate analysis
- A comprehensive examination of time series forecasting, including the different components of time series and transformations to achieve stationarity
- Introductions to both the R and Python programming languages, including basic data types and sample manipulations for both languages
- An exploration of algorithms, including how to write one and how to perform an asymptotic analysis
- A comprehensive discussion of several techniques for analyzing and predicting complex data sets

Perfect for advanced undergraduate and graduate students in Data Science, Business Analytics, and Statistics programs, Data Science in Theory and Practice will also earn a place in the libraries of practicing data scientists, data and business analysts, and statisticians in the private sector, government, and academia.
Table of Contents

List of Figures
List of Tables
Preface

1 Background of Data Science
1.1 Introduction
1.2 Origin of Data Science
1.3 Who is a Data Scientist?
1.4 Big Data
1.4.1 Characteristics of Big Data
1.4.2 Big Data Architectures

2 Matrix Algebra and Random Vectors
2.1 Introduction
2.2 Some Basics of Matrix Algebra
2.2.1 Vectors
2.2.2 Matrices
2.3 Random Variables and Distribution Functions
2.3.1 The Dirichlet Distribution
2.3.2 Multinomial Distribution
2.3.3 Multivariate Normal Distribution
2.4 Problems

3 Multivariate Analysis
3.1 Introduction
3.2 Multivariate Analysis: Overview
3.3 Mean Vectors
3.4 Variance–Covariance Matrices
3.5 Correlation Matrices
3.6 Linear Combinations of Variables
3.6.1 Linear Combinations of Sample Means
3.6.2 Linear Combinations of Sample Variance and Covariance
3.6.3 Linear Combinations of Sample Correlation
3.7 Problems

4 Time Series Forecasting
4.1 Introduction
4.2 Terminologies
4.3 Components of Time Series
4.3.1 Seasonal
4.3.2 Trend
4.3.3 Cyclical
4.3.4 Random
4.4 Transformations to Achieve Stationarity
4.5 Elimination of Seasonality via Differencing
4.6 Additive and Multiplicative Models
4.7 Measuring Accuracy of Different Time Series Techniques
4.7.1 Mean Absolute Deviation
4.7.2 Mean Absolute Percent Error
4.7.3 Mean Square Error
4.7.4 Root Mean Square Error
4.8 Averaging and Exponential Smoothing Forecasting Methods
4.8.1 Averaging Methods
4.8.1.1 Simple Moving Averages
4.8.1.2 Weighted Moving Averages
4.8.2 Exponential Smoothing Methods
4.8.2.1 Simple Exponential Smoothing
4.8.2.2 Adjusted Exponential Smoothing
4.9 Problems

5 Introduction to R
5.1 Introduction
5.2 Basic Data Types
5.2.1 Numeric Data Type
5.2.2 Integer Data Type
5.2.3 Character
5.2.4 Complex Data Types
5.2.5 Logical Data Types
5.3 Simple Manipulations – Numbers and Vectors
5.3.1 Vectors and Assignment
5.3.2 Vector Arithmetic
5.3.3 Vector Index
5.3.4 Logical Vectors
5.3.5 Missing Values
5.3.6 Index Vectors
5.3.6.1 Indexing with Logicals
5.3.6.2 A Vector of Positive Integral Quantities
5.3.6.3 A Vector of Negative Integral Quantities
5.3.6.4 Named Indexing
5.3.7 Other Types of Objects
5.3.7.1 Matrices
5.3.7.2 List
5.3.7.3 Factor
5.3.7.4 Data Frames
5.3.8 Data Import
5.3.8.1 Excel File
5.3.8.2 CSV File
5.3.8.3 Table File
5.3.8.4 Minitab File
5.3.8.5 SPSS File
5.4 Problems

6 Introduction to Python
6.1 Introduction
6.2 Basic Data Types
6.2.1 Number Data Type
6.2.1.1 Integer
6.2.1.2 Floating-Point Numbers
6.2.1.3 Complex Numbers
6.2.2 Strings
6.2.3 Lists
6.2.4 Tuples
6.2.5 Dictionaries
6.3 Number Type Conversion
6.4 Python Conditions
6.4.1 If Statements
6.4.2 The Else and Elif Clauses
6.4.3 The While Loop
6.4.3.1 The Break Statement
6.4.3.2 The Continue Statement
6.4.4 For Loops
6.4.4.1 Nested Loops
6.5 Python File Handling: Open, Read, and Close
6.6 Python Functions
6.6.1 Calling a Function in Python
6.6.2 Scope and Lifetime of Variables
6.7 Problems

7 Algorithms
7.1 Introduction
7.2 Algorithm – Definition
7.3 How to Write an Algorithm
7.3.1 Algorithm Analysis
7.3.2 Algorithm Complexity
7.3.3 Space Complexity
7.3.4 Time Complexity
7.4 Asymptotic Analysis of an Algorithm
7.4.1 Asymptotic Notations
7.4.1.1 Big O Notation
7.4.1.2 The Omega Notation, Ω
7.4.1.3 The Θ Notation
7.5 Examples of Algorithms
7.6 Flowchart
7.7 Problems

8 Data Preprocessing and Data Validations
8.1 Introduction
8.2 Definition – Data Preprocessing
8.3 Data Cleaning
8.3.1 Handling Missing Data
8.3.2 Types of Missing Data
8.3.2.1 Missing Completely at Random
8.3.2.2 Missing at Random
8.3.2.3 Missing Not at Random
8.3.3 Techniques for Handling the Missing Data
8.3.3.1 Listwise Deletion
8.3.3.2 Pairwise Deletion
8.3.3.3 Mean Substitution
8.3.3.4 Regression Imputation
8.3.3.5 Multiple Imputation
8.3.4 Identifying Outliers and Noisy Data
8.3.4.1 Binning
8.3.4.2 Box and Whisker Plot
8.4 Data Transformations
8.4.1 Min–Max Normalization
8.4.2 Z-score Normalization
8.5 Data Reduction
8.6 Data Validations
8.6.1 Methods for Data Validation
8.6.1.1 Simple Statistical Criterion
8.6.1.2 Fourier Series Modeling and SSC
8.6.1.3 Principal Component Analysis and SSC
8.7 Problems

9 Data Visualizations
9.1 Introduction
9.2 Definition – Data Visualization
9.2.1 Scientific Visualization
9.2.2 Information Visualization
9.2.3 Visual Analytics
9.3 Data Visualization Techniques
9.3.1 Time Series Data
9.3.2 Statistical Distributions
9.3.2.1 Stem-and-Leaf Plots
9.3.2.2 Q–Q Plots
9.4 Data Visualization Tools
9.4.1 Tableau
9.4.2 Infogram
9.4.3 Google Charts
9.5 Problems

10 Binomial and Trinomial Trees
10.1 Introduction
10.2 The Binomial Tree Method
10.2.1 One Step Binomial Tree
10.2.2 Using the Tree to Price a European Option
10.2.3 Using the Tree to Price an American Option
10.2.4 Using the Tree to Price Any Path Dependent Option
10.3 Binomial Discrete Model
10.3.1 One-Step Method
10.3.2 Multi-step Method
10.3.2.1 Example: European Call Option
10.4 Trinomial Tree Method
10.4.1 What is the Meaning of Little o and Big O?
10.5 Problems

11 Principal Component Analysis
11.1 Introduction
11.2 Background of Principal Component Analysis
11.3 Motivation
11.3.1 Correlation and Redundancy
11.3.2 Visualization
11.4 The Mathematics of PCA
11.4.1 The Eigenvalues and Eigenvectors
11.5 How PCA Works
11.5.1 Algorithm
11.6 Application
11.7 Problems

12 Discriminant and Cluster Analysis
12.1 Introduction
12.2 Distance
12.3 Discriminant Analysis
12.3.1 Kullback–Leibler Divergence
12.3.2 Chernoff Distance
12.3.3 Application – Seismic Time Series
12.3.4 Application – Financial Time Series
12.4 Cluster Analysis
12.4.1 Partitioning Algorithms
12.4.2 k-Means Algorithm
12.4.3 k-Medoids Algorithm
12.4.4 Application – Seismic Time Series
12.4.5 Application – Financial Time Series
12.5 Problems

13 Multidimensional Scaling
13.1 Introduction
13.2 Motivation
13.3 Number of Dimensions and Goodness of Fit
13.4 Proximity Measures
13.5 Metric Multidimensional Scaling
13.5.1 The Classical Solution
13.6 Nonmetric Multidimensional Scaling
13.6.1 Shepard–Kruskal Algorithm
13.7 Problems

14 Classification and Tree-Based Methods
14.1 Introduction
14.2 An Overview of Classification
14.2.1 The Classification Problem
14.2.2 Logistic Regression Model
14.2.2.1 l1 Regularization
14.2.2.2 l2 Regularization
14.3 Linear Discriminant Analysis
14.3.1 Optimal Classification and Estimation of Gaussian Distribution
14.4 Tree-Based Methods
14.4.1 One Single Decision Tree
14.4.2 Random Forest
14.5 Applications
14.6 Problems

15 Association Rules
15.1 Introduction
15.2 Market Basket Analysis
15.3 Terminologies
15.3.1 Itemset and Support Count
15.3.2 Frequent Itemset
15.3.3 Closed Frequent Itemset
15.3.4 Maximal Frequent Itemset
15.3.5 Association Rule
15.3.6 Rule Evaluation Metrics
15.4 The Apriori Algorithm
15.4.1 An Example of the Apriori Algorithm
15.5 Applications
15.5.1 Confidence
15.5.2 Lift
15.5.3 Conviction
15.6 Problems

16 Support Vector Machines
16.1 Introduction
16.2 The Maximal Margin Classifier
16.3 Classification Using a Separating Hyperplane
16.4 Kernel Functions
16.5 Applications
16.6 Problems

17 Neural Networks
17.1 Introduction
17.2 Perceptrons
17.3 Feed Forward Neural Network
17.4 Recurrent Neural Networks
17.5 Long Short-Term Memory
17.5.1 Residual Connections
17.5.2 Loss Functions
17.5.3 Stochastic Gradient Descent
17.5.4 Regularization – Ensemble Learning
17.6 Application
17.6.1 Emergent and Developed Market
17.6.2 The Lehman Brothers Collapse
17.6.3 Methodology
17.6.4 Analyses of Data
17.6.4.1 Results of the Emergent Market Index
17.6.4.2 Results of the Developed Market Index
17.7 Significance of Study
17.8 Problems

18 Fourier Analysis
18.1 Introduction
18.2 Definition
18.3 Discrete Fourier Transform
18.4 The Fast Fourier Transform (FFT) Method
18.5 Dynamic Fourier Analysis
18.5.1 Tapering
18.5.2 Daniell Kernel Estimation
18.6 Applications of the Fourier Transform
18.6.1 Modeling Power Spectrum of Financial Returns Using Fourier Transforms
18.6.2 Image Compression
18.7 Problems

19 Wavelets Analysis
19.1 Introduction
19.1.1 Wavelets Transform
19.2 Discrete Wavelets Transforms
19.2.1 Haar Wavelets
19.2.1.1 Haar Functions
19.2.1.2 Haar Transform Matrix
19.2.2 Daubechies Wavelets
19.3 Applications of the Wavelets Transform
19.3.1 Discriminating Between Mining Explosions and Cluster of Earthquakes
19.3.1.1 Background of Data
19.3.1.2 Results
19.3.2 Finance
19.3.3 Damage Detection in Frame Structures
19.3.4 Image Compression
19.3.5 Seismic Signals
19.4 Problems

20 Stochastic Analysis
20.1 Introduction
20.2 Necessary Definitions from Probability Theory
20.3 Stochastic Processes
20.3.1 The Index Set
20.3.2 The State Space
20.3.3 Stationary and Independent Components
20.3.4 Stationary and Independent Increments
20.3.5 Filtration and Standard Filtration
20.4 Examples of Stochastic Processes
20.4.1 Markov Chains
20.4.1.1 Examples of Markov Processes
20.4.1.2 The Chapman–Kolmogorov Equation
20.4.1.3 Classification of States
20.4.1.4 Limiting Probabilities
20.4.1.5 Branching Processes
20.4.1.6 Time Homogeneous Chains
20.4.2 Martingales
20.4.3 Simple Random Walk
20.4.4 The Brownian Motion (Wiener Process)
20.5 Measurable Functions and Expectations
20.5.1 Radon–Nikodym Theorem and Conditional Expectation
20.6 Problems

21 Fractal Analysis – Lévy, Hurst, DFA, DEA
21.1 Introduction and Definitions
21.2 Lévy Processes
21.2.1 Examples of Lévy Processes
21.2.1.1 The Poisson Process (Jumps)
21.2.1.2 The Compound Poisson Process
21.2.1.3 Inverse Gaussian (IG) Process
21.2.1.4 The Gamma Process
21.2.2 Exponential Lévy Models
21.2.3 Subordination of Lévy Processes
21.2.4 Stable Distributions
21.3 Lévy Flight Models
21.4 Rescaled Range Analysis (Hurst Analysis)
21.5 Detrended Fluctuation Analysis (DFA)
21.6 Diffusion Entropy Analysis (DEA)
21.6.1 Estimation Procedure
21.6.1.1 The Shannon Entropy
21.6.2 The H–α Relationship for the Truncated Lévy Flight
21.7 Application – Characterization of Volcanic Time Series
21.7.1 Background of Volcanic Data
21.7.2 Results
21.8 Problems

22 Stochastic Differential Equations
22.1 Introduction
22.2 Stochastic Differential Equations
22.2.1 Solution Methods of SDEs
22.3 Examples
22.3.1 Modeling Asset Prices
22.3.2 Modeling Magnitude of Earthquake Series
22.4 Multidimensional Stochastic Differential Equations
22.4.1 The Multidimensional Ornstein–Uhlenbeck Processes
22.4.2 Solution of the Ornstein–Uhlenbeck Process
22.5 Simulation of Stochastic Differential Equations
22.5.1 Euler–Maruyama Scheme for Approximating Stochastic Differential Equations
22.5.2 Euler–Milstein Scheme for Approximating Stochastic Differential Equations
22.6 Problems

23 Ethics: With Great Power Comes Great Responsibility
23.1 Introduction
23.2 Data Science Ethical Principles
23.2.1 Enhance Value in Society
23.2.2 Avoiding Harm
23.2.3 Professional Competence
23.2.4 Increasing Trustworthiness
23.2.5 Maintaining Accountability and Oversight
23.3 Data Science Code of Professional Conduct
23.4 Application
23.4.1 Project Planning
23.4.2 Data Preprocessing
23.4.3 Data Management
23.4.4 Analysis and Development
23.5 Problems

Bibliography
Index
About the Authors

MARIA CRISTINA MARIANI, PhD, is Shigeko K. Chan Distinguished Professor and Chair in the Department of Mathematical Sciences at The University of Texas at El Paso. She currently focuses her research on Stochastic Analysis, Differential Equations, and Machine Learning with applications to Big Data and complex data sets arising in Public Health, Geophysics, Finance, and other fields. Dr. Mariani is co-author of other Wiley books, including Quantitative Finance.

OSEI KOFI TWENEBOAH, PhD, is Assistant Professor of Data Science at Ramapo College of New Jersey. His main research is in Stochastic Analysis, Machine Learning, and Scientific Computing with applications to Finance, Health Sciences, and Geophysics.

MARIA PIA BECCAR-VARELA, PhD, is Associate Professor of Instruction in the Department of Mathematical Sciences at The University of Texas at El Paso. Her research interests include Differential Equations, Stochastic Differential Equations, Wavelet Analysis, and Discriminant Analysis applied to Finance, Health Sciences, and Earthquake Studies.

You might also be interested in these products:

Statistics for Microarrays
By: Ernst Wit, John McClure
PDF ebook
90,99 €