Details



Machine Learning for Business Analytics

Concepts, Techniques, and Applications with JMP Pro
2nd ed.

By: Galit Shmueli, Peter C. Bruce, Mia L. Stephens, Muralidhara Anandamurthy, Nitin R. Patel

108,99 €

Publisher: Wiley
Format: PDF
Published: April 18, 2023
ISBN/EAN: 9781119903840
Language: English
Number of pages: 608

DRM-protected eBook; to read it you need, e.g., Adobe Digital Editions and an Adobe ID.

Description

<b>MACHINE LEARNING FOR BUSINESS ANALYTICS</b> <p><b>An up-to-date introduction to a market-leading platform for data analysis and machine learning</b> <p><i>Machine Learning for Business Analytics: Concepts, Techniques, and Applications with JMP Pro<sup>®</sup>, 2<sup>nd</sup> ed.</i> offers an accessible and engaging introduction to machine learning. It provides concrete examples and case studies to educate new users and deepen existing users’ understanding of their data and their business. Fully updated to incorporate new topics and instructional material, this remains the only comprehensive introduction to this crucial set of analytical tools specifically tailored to the needs of businesses. <p><i>Machine Learning for Business Analytics: Concepts, Techniques, and Applications with JMP Pro<sup>®</sup>, 2<sup>nd</sup> ed.</i> readers will also find: <ul><li>Updated material which improves the book’s usefulness as a reference for professionals beyond the classroom</li> <li>Four new chapters, covering topics including Text Mining and Responsible Data Science</li> <li>An updated companion website with data sets and other instructor resources: www.jmp.com/dataminingbook</li> <li>A guide to JMP Pro<sup>®</sup>’s new features and enhanced functionality</li></ul> <p><i>Machine Learning for Business Analytics: Concepts, Techniques, and Applications with JMP Pro<sup>®</sup>, 2<sup>nd</sup> ed.</i> is ideal for students and instructors of business analytics and data mining classes, as well as data science practitioners and professionals in data-driven industries.
<p>Foreword xix</p> <p>Preface xx</p> <p>Acknowledgments xxiii</p> <p><b>Part I Preliminaries</b></p> <p><b>1 Introduction 3</b></p> <p>1.1 What Is Business Analytics? 3</p> <p>1.2 What Is Machine Learning? 5</p> <p>1.3 Machine Learning, AI, and Related Terms 5</p> <p>Statistical Modeling vs. Machine Learning 6</p> <p>1.4 Big Data 6</p> <p>1.5 Data Science 7</p> <p>1.6 Why Are There So Many Different Methods? 8</p> <p>1.7 Terminology and Notation 8</p> <p>1.8 Road Maps to This Book 10</p> <p>Order of Topics 12</p> <p><b>2 Overview of the Machine Learning Process 17</b></p> <p>2.1 Introduction 17</p> <p>2.2 Core Ideas in Machine Learning 18</p> <p>Classification 18</p> <p>Prediction 18</p> <p>Association Rules and Recommendation Systems 18</p> <p>Predictive Analytics 19</p> <p>Data Reduction and Dimension Reduction 19</p> <p>Data Exploration and Visualization 19</p> <p>Supervised and Unsupervised Learning 19</p> <p>2.3 The Steps in a Machine Learning Project 21</p> <p>2.4 Preliminary Steps 22</p> <p>Organization of Data 22</p> <p>Sampling from a Database 22</p> <p>Oversampling Rare Events in Classification Tasks 23</p> <p>Preprocessing and Cleaning the Data 23</p> <p>2.5 Predictive Power and Overfitting 29</p> <p>Overfitting 29</p> <p>Creation and Use of Data Partitions 31</p> <p>2.6 Building a Predictive Model with JMP Pro 34</p> <p>Predicting Home Values in a Boston Neighborhood 34</p> <p>Modeling Process 36</p> <p>2.7 Using JMP Pro for Machine Learning 42</p> <p>2.8 Automating Machine Learning Solutions 43</p> <p>Predicting Power Generator Failure 44</p> <p>Uber’s Michelangelo 45</p> <p>2.9 Ethical Practice in Machine Learning 47</p> <p>Machine Learning Software: The State of the Market by Herb Edelstein 47</p> <p>Problems 52</p> <p><b>Part II Data Exploration and Dimension Reduction</b></p> <p><b>3 Data Visualization 59</b></p> <p>3.1 Introduction 59</p> <p>3.2 Data Examples 61</p> <p>Example 1: Boston Housing Data 61</p> <p>Example 2: Ridership on Amtrak 
Trains 62</p> <p>3.3 Basic Charts: Bar Charts, Line Graphs, and Scatter Plots 62</p> <p>Distribution Plots: Boxplots and Histograms 64</p> <p>Heatmaps 67</p> <p>3.4 Multidimensional Visualization 70</p> <p>Adding Variables: Color, Hue, Size, Shape, Multiple Panels, Animation 70</p> <p>Manipulations: Rescaling, Aggregation and Hierarchies, Zooming, Filtering 73</p> <p>Reference: Trend Line and Labels 77</p> <p>Scaling Up: Large Datasets 79</p> <p>Multivariate Plot: Parallel Coordinates Plot 80</p> <p>Interactive Visualization 80</p> <p>3.5 Specialized Visualizations 82</p> <p>Visualizing Networked Data 82</p> <p>Visualizing Hierarchical Data: More on Treemaps 83</p> <p>Visualizing Geographical Data: Maps 84</p> <p>3.6 Summary: Major Visualizations and Operations, According to Machine Learning Goal 87</p> <p>Prediction 87</p> <p>Classification 87</p> <p>Time Series Forecasting 87</p> <p>Unsupervised Learning 88</p> <p>Problems 89</p> <p><b>4 Dimension Reduction 91</b></p> <p>4.1 Introduction 91</p> <p>4.2 Curse of Dimensionality 92</p> <p>4.3 Practical Considerations 92</p> <p>Problems 112</p> <p><b>Part III Performance Evaluation</b></p> <p><b>5 Evaluating Predictive Performance 117</b></p> <p>5.1 Introduction 118</p> <p>5.2 Evaluating Predictive Performance 118</p> <p>Problems 142</p> <p><b>Part IV Prediction and Classification Methods</b></p> <p><b>6 Multiple Linear Regression 147</b></p> <p>6.1 Introduction 147</p> <p>6.2 Explanatory vs. 
Predictive Modeling 148</p> <p>6.3 Estimating the Regression Equation and Prediction 149</p> <p>Example: Predicting the Price of Used Toyota Corolla Automobiles 150</p> <p>6.4 Variable Selection in Linear Regression 155</p> <p>Reducing the Number of Predictors 155</p> <p>How to Reduce the Number of Predictors 156</p> <p>Manual Variable Selection 156</p> <p>Automated Variable Selection 157</p> <p>Regularization (Shrinkage Models) 164</p> <p>Problems 170</p> <p><b>7 k-Nearest Neighbors (k-NN) 175</b></p> <p>7.1 The 𝑘-NN Classifier (Categorical Outcome) 175</p> <p>Problems 186</p> <p><b>8 The Naive Bayes Classifier 189</b></p> <p>8.1 Introduction 189</p> <p>Threshold Probability Method 190</p> <p>Conditional Probability 190</p> <p>Problems 203</p> <p><b>9 Classification and Regression Trees 205</b></p> <p>9.1 Introduction 206</p> <p>Tree Structure 206</p> <p>Decision Rules 207</p> <p>Classifying a New Record 207</p> <p>9.2 Classification Trees 207</p> <p>Recursive Partitioning 207</p> <p>Example 1: Riding Mowers 208</p> <p>Categorical Predictors 210</p> <p>Standardization 210</p> <p>9.3 Growing a Tree for Riding Mowers Example 210</p> <p>Choice of First Split 211</p> <p>Choice of Second Split 212</p> <p>Final Tree 212</p> <p>Using a Tree to Classify New Records 213</p> <p>9.4 Evaluating the Performance of a Classification Tree 215</p> <p>Example 2: Acceptance of Personal Loan 215</p> <p>9.5 Avoiding Overfitting 219</p> <p>Stopping Tree Growth: CHAID 220</p> <p>Growing a Full Tree and Pruning It Back 220</p> <p>How JMP Pro Limits Tree Size 221</p> <p>9.6 Classification Rules from Trees 222</p> <p>9.7 Classification Trees for More Than Two Classes 224</p> <p>9.8 Regression Trees 224</p> <p>Prediction 224</p> <p>Evaluating Performance 225</p> <p>9.9 Advantages and Weaknesses of a Single Tree 227</p> <p>9.10 Improving Prediction: Random Forests and Boosted Trees 229</p> <p>Random Forests 229</p> <p>Boosted Trees 230</p> <p>Problems 233</p> <p><b>10 Logistic 
Regression 237</b></p> <p>10.1 Introduction 237</p> <p>10.2 The Logistic Regression Model 239</p> <p>10.3 Example: Acceptance of Personal Loan 240</p> <p>Model with a Single Predictor 241</p> <p>Estimating the Logistic Model from Data: Multiple Predictors 243</p> <p>Interpreting Results in Terms of Odds (for a Profiling Goal) 246</p> <p>10.4 Evaluating Classification Performance 247</p> <p>10.5 Variable Selection 249</p> <p>10.6 Logistic Regression for Multi-class Classification 250</p> <p>Logistic Regression for Nominal Classes 250</p> <p>Logistic Regression for Ordinal Classes 251</p> <p>Example: Accident Data 252</p> <p>10.7 Example of Complete Analysis: Predicting Delayed Flights 253</p> <p>Data Preprocessing 255</p> <p>Model Fitting, Estimation, and Interpretation – A Simple Model 256</p> <p>Model Fitting, Estimation and Interpretation – The Full Model 257</p> <p>Model Performance 257</p> <p>Problems 264</p> <p><b>11 Neural Nets 267</b></p> <p>11.1 Introduction 267</p> <p>11.2 Concept and Structure of a Neural Network 268</p> <p>11.3 Fitting a Network to Data 269</p> <p>Example 1: Tiny Dataset 269</p> <p>Computing Output of Nodes 269</p> <p>Preprocessing the Data 272</p> <p>Training the Model 273</p> <p>Using the Output for Prediction and Classification 279</p> <p>Example 2: Classifying Accident Severity 279</p> <p>Avoiding Overfitting 281</p> <p>11.4 User Input in JMP Pro 282</p> <p>11.5 Exploring the Relationship Between Predictors and Outcome 284</p> <p>11.6 Deep Learning 285</p> <p>Convolutional Neural Networks (CNNs) 285</p> <p>Local Feature Map 287</p> <p>A Hierarchy of Features 287</p> <p>The Learning Process 287</p> <p>Unsupervised Learning 288</p> <p>Conclusion 289</p> <p>11.7 Advantages and Weaknesses of Neural Networks 289</p> <p>Problems 290</p> <p><b>12 Discriminant Analysis 293</b></p> <p>12.1 Introduction 293</p> <p>Example 1: Riding Mowers 294</p> <p>Example 2: Personal Loan Acceptance 294</p> <p>12.2 Distance of an Observation from a Class 
295</p> <p>12.3 From Distances to Propensities and Classifications 297</p> <p>12.4 Classification Performance of Discriminant Analysis 300</p> <p>12.5 Prior Probabilities 301</p> <p>12.6 Classifying More Than Two Classes 303</p> <p>Example 3: Medical Dispatch to Accident Scenes 303</p> <p>12.7 Advantages and Weaknesses 306</p> <p>Problems 307</p> <p><b>13 Generating, Comparing, and Combining Multiple Models 311</b></p> <p>13.1 Ensembles 311</p> <p>Why Ensembles Can Improve Predictive Power 312</p> <p>Simple Averaging or Voting 313</p> <p>Bagging 314</p> <p>Boosting 315</p> <p>Stacking 316</p> <p>Advantages and Weaknesses of Ensembles 317</p> <p>13.2 Automated Machine Learning (AutoML) 317</p> <p>AutoML: Explore and Clean Data 317</p> <p>AutoML: Determine Machine Learning Task 318</p> <p>AutoML: Choose Features and Machine Learning Methods 318</p> <p>AutoML: Evaluate Model Performance 320</p> <p>AutoML: Model Deployment 321</p> <p>Advantages and Weaknesses of Automated Machine Learning 322</p> <p>13.3 Summary 322</p> <p>Problems 323</p> <p><b>Part V Intervention and User Feedback</b></p> <p><b>14 Interventions: Experiments, Uplift Models, and Reinforcement Learning 327</b></p> <p>14.1 Introduction 327</p> <p>14.2 A/B Testing 328</p> <p>Example: Testing a New Feature in a Photo Sharing App 329</p> <p>The Statistical Test for Comparing Two Groups (T-Test) 329</p> <p>Multiple Treatment Groups: A/B/n Tests 333</p> <p>Multiple A/B Tests and the Danger of Multiple Testing 333</p> <p>14.3 Uplift (Persuasion) Modeling 333</p> <p>Getting the Data 334</p> <p>A Simple Model 336</p> <p>Modeling Individual Uplift 336</p> <p>Creating Uplift Models in JMP Pro 337</p> <p>Using the Results of an Uplift Model 338</p> <p>14.4 Reinforcement Learning 340</p> <p>Explore-Exploit: Multi-armed Bandits 340</p> <p>Markov Decision Process (MDP) 341</p> <p>14.5 Summary 344</p> <p>Problems 345</p> <p><b>Part VI Mining Relationships Among Records</b></p> <p><b>15 Association Rules and 
Collaborative Filtering 349</b></p> <p>15.1 Association Rules 349</p> <p>Discovering Association Rules in Transaction Databases 350</p> <p>Example 1: Synthetic Data on Purchases of Phone Faceplates 350</p> <p>Data Format 350</p> <p>Generating Candidate Rules 352</p> <p>The Apriori Algorithm 353</p> <p>Selecting Strong Rules 353</p> <p>The Process of Rule Selection 356</p> <p>Interpreting the Results 358</p> <p>Rules and Chance 359</p> <p>Example 2: Rules for Similar Book Purchases 361</p> <p>15.2 Collaborative Filtering 362</p> <p>Data Type and Format 363</p> <p>Example 3: Netflix Prize Contest 363</p> <p>User-Based Collaborative Filtering: “People Like You” 365</p> <p>Item-Based Collaborative Filtering 366</p> <p>Evaluating Performance 367</p> <p>Advantages and Weaknesses of Collaborative Filtering 368</p> <p>Collaborative Filtering vs. Association Rules 369</p> <p>15.3 Summary 370</p> <p>Problems 372</p> <p><b>16 Cluster Analysis 375</b></p> <p>16.1 Introduction 375</p> <p>Example: Public Utilities 377</p> <p>16.2 Measuring Distance Between Two Records 378</p> <p>Euclidean Distance 379</p> <p>Standardizing Numerical Measurements 379</p> <p>Other Distance Measures for Numerical Data 379</p> <p>Distance Measures for Categorical Data 382</p> <p>Distance Measures for Mixed Data 382</p> <p>16.3 Measuring Distance Between Two Clusters 383</p> <p>Minimum Distance 383</p> <p>Maximum Distance 383</p> <p>Average Distance 383</p> <p>Centroid Distance 383</p> <p>16.4 Hierarchical (Agglomerative) Clustering 385</p> <p>Single Linkage 385</p> <p>Complete Linkage 386</p> <p>Average Linkage 386</p> <p>Centroid Linkage 386</p> <p>Ward’s Method 387</p> <p>Dendrograms: Displaying Clustering Process and Results 387</p> <p>Validating Clusters 391</p> <p>Two-Way Clustering 393</p> <p>Limitations of Hierarchical Clustering 393</p> <p>16.5 Nonhierarchical Clustering: The 𝐾-Means Algorithm 394</p> <p>Choosing the Number of Clusters (𝑘) 396</p> <p>Problems 403</p> <p><b>Part VII 
Forecasting Time Series</b></p> <p><b>17 Handling Time Series 409</b></p> <p>17.1 Introduction 409</p> <p>17.2 Descriptive vs. Predictive Modeling 410</p> <p>17.3 Popular Forecasting Methods in Business 411</p> <p>Combining Methods 411</p> <p>17.4 Time Series Components 411</p> <p>Example: Ridership on Amtrak Trains 412</p> <p>17.5 Data Partitioning and Performance Evaluation 415</p> <p>Benchmark Performance: Naive Forecasts 417</p> <p>Generating Future Forecasts 417</p> <p>Problems 419</p> <p><b>18 Regression-Based Forecasting 423</b></p> <p>18.1 A Model with Trend 424</p> <p>Linear Trend 424</p> <p>Exponential Trend 427</p> <p>Polynomial Trend 429</p> <p>18.2 A Model with Seasonality 430</p> <p>Additive vs. Multiplicative Seasonality 432</p> <p>18.3 A Model with Trend and Seasonality 433</p> <p>18.4 Autocorrelation and ARIMA Models 433</p> <p>Computing Autocorrelation 433</p> <p>Improving Forecasts by Integrating Autocorrelation Information 437</p> <p>Fitting AR Models to Residuals 439</p> <p>Evaluating Predictability 441</p> <p>Problems 444</p> <p><b>19 Smoothing and Deep Learning Methods for Forecasting 455</b></p> <p>19.1 Introduction 455</p> <p>19.2 Moving Average 456</p> <p>Centered Moving Average for Visualization 456</p> <p>Trailing Moving Average for Forecasting 457</p> <p>Choosing Window Width (𝑤) 460</p> <p>19.3 Simple Exponential Smoothing 461</p> <p>Choosing Smoothing Parameter 𝛼 462</p> <p>Relation Between Moving Average and Simple Exponential Smoothing 465</p> <p>19.4 Advanced Exponential Smoothing 465</p> <p>Series With a Trend 465</p> <p>Series With a Trend and Seasonality 466</p> <p>19.5 Deep Learning for Forecasting 470</p> <p>Problems 472</p> <p><b>Part VIII Data Analytics</b></p> <p><b>20 Text Mining 483</b></p> <p>20.1 Introduction 483</p> <p>20.2 The Tabular Representation of Text: Document–Term Matrix and “Bag-of-Words” 484</p> <p>20.3 Bag-of-Words vs. 
Meaning Extraction at Document Level 486</p> <p>20.4 Preprocessing the Text 486</p> <p>Tokenization 487</p> <p>Text Reduction 488</p> <p>Presence/Absence vs. Frequency (Occurrences) 489</p> <p>Term Frequency-Inverse Document Frequency (TF-IDF) 489</p> <p>From Terms to Topics: Latent Semantic Analysis and Topic Analysis 490</p> <p>Extracting Meaning 491</p> <p>From Terms to High Dimensional Word Vectors: Word2Vec 491</p> <p>20.5 Implementing Machine Learning Methods 492</p> <p>20.6 Example: Online Discussions on Autos and Electronics 492</p> <p>Importing the Records 493</p> <p>Text Preprocessing in JMP 494</p> <p>Using Latent Semantic Analysis and Topic Analysis 496</p> <p>Fitting a Predictive Model 499</p> <p>Prediction 499</p> <p>20.7 Example: Sentiment Analysis of Movie Reviews 500</p> <p>Data Preparation 500</p> <p>Latent Semantic Analysis and Fitting a Predictive Model 500</p> <p>20.8 Summary 502</p> <p>Problems 503</p> <p><b>21 Responsible Data Science 505</b></p> <p>21.1 Introduction 505</p> <p>Example: Predicting Recidivism 506</p> <p>21.2 Unintentional Harm 506</p> <p>21.3 Legal Considerations 508</p> <p>The General Data Protection Regulation (GDPR) 508</p> <p>Protected Groups 508</p> <p>21.4 Principles of Responsible Data Science 508</p> <p>Non-maleficence 509</p> <p>Fairness 509</p> <p>Transparency 510</p> <p>Accountability 511</p> <p>Data Privacy and Security 511</p> <p>21.5 A Responsible Data Science Framework 511</p> <p>Justification 511</p> <p>Assembly 512</p> <p>Data Preparation 513</p> <p>Modeling 513</p> <p>Auditing 513</p> <p>21.6 Documentation Tools 514</p> <p>Impact Statements 514</p> <p>Model Cards 515</p> <p>Datasheets 516</p> <p>Audit Reports 516</p> <p>21.7 Example: Applying the RDS Framework to the COMPAS Example 517</p> <p>Unanticipated Uses 518</p> <p>Ethical Concerns 518</p> <p>Protected Groups 518</p> <p>Data Issues 518</p> <p>Fitting the Model 519</p> <p>Auditing the Model 520</p> <p>Bias Mitigation 526</p> <p>21.8 Summary 
526</p> <p>Problems 528</p> <p><b>Part IX Cases</b></p> <p><b>22 Cases 533</b></p> <p>22.1 Charles Book Club 533</p> <p>The Book Industry 533</p> <p>Database Marketing at Charles 534</p> <p>Machine Learning Techniques 535</p> <p>Assignment 537</p> <p>22.2 German Credit 541</p> <p>Background 541</p> <p>Data 541</p> <p>Assignment 544</p> <p>22.3 Tayko Software Cataloger 545</p> <p>Background 545</p> <p>The Mailing Experiment 545</p> <p>Data 545</p> <p>Assignment 546</p> <p>22.4 Political Persuasion 548</p> <p>Background 548</p> <p>Predictive Analytics Arrives in US Politics 548</p> <p>Political Targeting 548</p> <p>Uplift 549</p> <p>Data 549</p> <p>Assignment 550</p> <p>22.5 Taxi Cancellations 552</p> <p>Business Situation 552</p> <p>Assignment 552</p> <p>22.6 Segmenting Consumers of Bath Soap 554</p> <p>Business Situation 554</p> <p>Key Problems 554</p> <p>Data 555</p> <p>Measuring Brand Loyalty 556</p> <p>Assignment 556</p> <p>22.7 Catalog Cross-Selling 557</p> <p>Background 557</p> <p>Assignment 557</p> <p>22.8 Direct-Mail Fundraising 559</p> <p>Background 559</p> <p>Data 559</p> <p>Assignment 559</p> <p>22.9 Time Series Case: Forecasting Public Transportation Demand 562</p> <p>Background 562</p> <p>Problem Description 562</p> <p>Available Data 562</p> <p>Assignment Goal 562</p> <p>Assignment 563</p> <p>Tips and Suggested Steps 563</p> <p>22.10 Loan Approval 564</p> <p>Background 564</p> <p>Regulatory Requirements 564</p> <p>Getting Started 564</p> <p>Assignment 564</p> <p>References 567</p> <p>Data Files Used in the Book 571</p> <p>Index 573</p>
<p><b>Galit Shmueli, PhD</b> is Distinguished Professor at National Tsing Hua University’s Institute of Service Science. She has designed and taught business analytics courses since 2004 at the University of Maryland, Statistics.com, the Indian School of Business, and National Tsing Hua University, Taiwan. <p><b>Peter C. Bruce</b> is Founder of the Institute for Statistics Education at Statistics.com, and Chief Learning Officer at Elder Research, Inc. <p><b>Mia L. Stephens, M.S.</b> is an Advisory Product Manager with JMP, driving the product vision and roadmaps for JMP<sup>®</sup> and JMP Pro<sup>®</sup>. <p><b>Muralidhara Anandamurthy, PhD</b> is an Academic Ambassador with JMP, overseeing technical support for academic users of JMP Pro<sup>®</sup>. <p><b>Nitin R. Patel, PhD</b> is cofounder and lead researcher at Cytel Inc. He is also a Fellow of the American Statistical Association and has served as a visiting professor at the Massachusetts Institute of Technology and Harvard University, among others.
