Details

Statistics and Probability with Applications for Engineers and Scientists Using MINITAB, R and JMP




2nd edition

by: Bhisham C. Gupta, Irwin Guttman, Kalanka P. Jayalath

125,99 €

Publisher: Wiley
Format: PDF
Published: December 24, 2019
ISBN/EAN: 9781119516644
Language: English
Number of pages: 1040

DRM-protected eBook; to read it you need, for example, Adobe Digital Editions and an Adobe ID.

Description

<p><b>Introduces basic concepts in probability and statistics to data science students, as well as engineers and scientists</b></p> <p>Aimed at undergraduate/graduate-level engineering and natural science students, this timely, fully updated edition of a popular book on statistics and probability shows how real-world problems can be solved using statistical concepts. It removes Excel exhibits and replaces them with R software throughout, and updates both MINITAB and JMP software instructions and content. A new chapter discussing data mining—including big data, classification, machine learning, and visualization—is featured. Another new chapter covers cluster analysis methodologies in hierarchical, nonhierarchical, and model-based clustering. The book also offers a chapter on Response Surfaces that previously appeared on the book’s companion website.</p> <p><i>Statistics and Probability with Applications for Engineers and Scientists using MINITAB, R and JMP, Second Edition</i> is broken into two parts. Part I covers topics such as: describing data graphically and numerically, elements of probability, discrete and continuous random variables and their probability distributions, distribution functions of random variables, sampling distributions, estimation of population parameters, and hypothesis testing. Part II covers: elements of reliability theory, data mining, cluster analysis, analysis of categorical data, nonparametric tests, simple and multiple linear regression analysis, analysis of variance, 2<sup>k</sup> factorial designs, response surfaces, and statistical quality control (SQC), including phase I and phase II control charts. The appendices contain statistical tables and charts and answers to selected problems.
</p> <ul> <li>Features two new chapters—one on Data Mining and another on Cluster Analysis</li> <li>Now contains R exhibits including code, graphical displays, and some results</li> <li>MINITAB and JMP have been updated to their latest versions</li> <li>Emphasizes the p-value approach and includes related practical interpretations</li> <li>Offers a more applied statistical focus, and features modified examples to better exhibit statistical concepts</li> <li>Supplemented with an Instructor-only companion website featuring a complete solutions manual, PowerPoint slides, certain proofs and derivations, data sets, Chapters 20 and 21, some statistical tables, JMP files, and all R exhibits</li> <li>Supplemented with a Student companion website featuring a partial solutions manual, certain proofs and derivations, data sets, Chapters 20 and 21, some statistical tables, JMP files, and all R exhibits</li> </ul> <p><i>Statistics and Probability with Applications for Engineers and Scientists using MINITAB, R and JMP</i> is an excellent text for graduate-level data science students, and engineers and scientists. It is also an ideal introduction to applied statistics and probability for undergraduate students in engineering and the natural sciences.</p>
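As a small taste of the p-value approach the book emphasizes, the sketch below computes a right-tailed large-sample test of a population mean. It is written in Python rather than the book's R, MINITAB, or JMP workflows, and the sample data are invented for illustration:

```python
from statistics import NormalDist, mean, stdev
from math import sqrt

# Hypothetical sample: is the population mean greater than 50?
sample = [52.1, 49.8, 53.4, 51.2, 50.9, 52.7, 48.9, 51.5]
mu0 = 50.0  # null-hypothesis mean

n = len(sample)
# Large-sample z statistic (used here purely for illustration;
# with n = 8 the book would use a t-test instead).
z = (mean(sample) - mu0) / (stdev(sample) / sqrt(n))
p_value = 1 - NormalDist().cdf(z)  # right-tailed p-value

print(f"z = {z:.3f}, p-value = {p_value:.4f}")
```

A small p-value (conventionally below 0.05) would lead to rejecting the null hypothesis that the mean is 50; the book's emphasis is on interpreting this value directly rather than comparing the statistic to a critical value.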
<p>Preface xvii</p> <p>Acknowledgments xxi</p> <p>About The Companion Site xxiii</p> <p><b>1 Introduction 1</b></p> <p>1.1 Designed Experiment 2</p> <p>1.1.1 Motivation for the Study 2</p> <p>1.1.2 Investigation 3</p> <p>1.1.3 Changing Criteria 3</p> <p>1.1.4 A Summary of the Various Phases of the Investigation 5</p> <p>1.2 A Survey 6</p> <p>1.3 An Observational Study 6</p> <p>1.4 A Set of Historical Data 7</p> <p>1.5 A Brief Description of What is Covered in this Book 7</p> <p><b>Part I Fundamentals of Probability and Statistics</b></p> <p><b>2 Describing Data Graphically and Numerically 13</b></p> <p>2.1 Getting Started with Statistics 14</p> <p>2.1.1 What is Statistics? 14</p> <p>2.1.2 Population and Sample in a Statistical Study 14</p> <p>2.2 Classification of Various Types of Data 18</p> <p>2.2.1 Nominal Data 18</p> <p>2.2.2 Ordinal Data 19</p> <p>2.2.3 Interval Data 19</p> <p>2.2.4 Ratio Data 19</p> <p>2.3 Frequency Distribution Tables for Qualitative and Quantitative Data 20</p> <p>2.3.1 Qualitative Data 21</p> <p>2.3.2 Quantitative Data 24</p> <p>2.4 Graphical Description of Qualitative and Quantitative Data 30</p> <p>2.4.1 Dot Plot 30</p> <p>2.4.2 Pie Chart 31</p> <p>2.4.3 Bar Chart 33</p> <p>2.4.4 Histograms 37</p> <p>2.4.5 Line Graph 44</p> <p>2.4.6 Stem-and-Leaf Plot 45</p> <p>2.5 Numerical Measures of Quantitative Data 50</p> <p>2.5.1 Measures of Centrality 51</p> <p>2.5.2 Measures of Dispersion 56</p> <p>2.6 Numerical Measures of Grouped Data 67</p> <p>2.6.1 Mean of a Grouped Data 67</p> <p>2.6.2 Median of a Grouped Data 68</p> <p>2.6.3 Mode of a Grouped Data 69</p> <p>2.6.4 Variance of a Grouped Data 69</p> <p>2.7 Measures of Relative Position 70</p> <p>2.7.1 Percentiles 71</p> <p>2.7.2 Quartiles 72</p> <p>2.7.3 Interquartile Range (IQR) 72</p> <p>2.7.4 Coefficient of Variation 73</p> <p>2.8 Box-Whisker Plot 75</p> <p>2.8.1 Construction of a Box Plot 75</p> <p>2.8.2 How to Use the Box Plot 76</p> <p>2.9 Measures of Association 80</p> <p>2.10 Case 
Studies 84</p> <p>2.10.1 About St. Luke’s Hospital 85</p> <p>2.11 Using JMP 86</p> <p>Review Practice Problems 87</p> <p><b>3 Elements of Probability 97</b></p> <p>3.1 Introduction 97</p> <p>3.2 Random Experiments, Sample Spaces, and Events 98</p> <p>3.2.1 Random Experiments and Sample Spaces 98</p> <p>3.2.2 Events 99</p> <p>3.3 Concepts of Probability 103</p> <p>3.4 Techniques of Counting Sample Points 108</p> <p>3.4.1 Tree Diagram 108</p> <p>3.4.2 Permutations 110</p> <p>3.4.3 Combinations 110</p> <p>3.4.4 Arrangements of <i>n </i>Objects Involving Several Kinds of Objects 111</p> <p>3.5 Conditional Probability 113</p> <p>3.6 Bayes’s Theorem 116</p> <p>3.7 Introducing Random Variables 120</p> <p>Review Practice Problems 122</p> <p><b>4 Discrete Random Variables and Some Important Discrete Probability Distributions 128</b></p> <p>4.1 Graphical Descriptions of Discrete Distributions 129</p> <p>4.2 Mean and Variance of a Discrete Random Variable 130</p> <p>4.2.1 Expected Value of Discrete Random Variables and Their Functions 130</p> <p>4.2.2 The Moment-Generating Function-Expected Value of a Special Function of <i>X </i>133</p> <p>4.3 The Discrete Uniform Distribution 136</p> <p>4.4 The Hypergeometric Distribution 137</p> <p>4.5 The Bernoulli Distribution 141</p> <p>4.6 The Binomial Distribution 142</p> <p>4.7 The Multinomial Distribution 146</p> <p>4.8 The Poisson Distribution 147</p> <p>4.8.1 Definition and Properties of the Poisson Distribution 147</p> <p>4.8.2 Poisson Process 148</p> <p>4.8.3 Poisson Distribution as a Limiting Form of the Binomial 148</p> <p>4.9 The Negative Binomial Distribution 153</p> <p>4.10 Some Derivations and Proofs (Optional) 156</p> <p>4.11 A Case Study 156</p> <p>4.12 Using JMP 157</p> <p>Review Practice Problems 157</p> <p><b>5 Continuous Random Variables and Some Important Continuous Probability Distributions 164</b></p> <p>5.1 Continuous Random Variables 165</p> <p>5.2 Mean and Variance of Continuous Random Variables 168</p> 
<p>5.2.1 Expected Value of Continuous Random Variables and Their Functions 168</p> <p>5.2.2 The Moment-Generating Function and Expected Value of a Special Function of <i>X </i>171</p> <p>5.3 Chebyshev’s Inequality 173</p> <p>5.4 The Uniform Distribution 175</p> <p>5.4.1 Definition and Properties 175</p> <p>5.4.2 Mean and Standard Deviation of the Uniform Distribution 178</p> <p>5.5 The Normal Distribution 180</p> <p>5.5.1 Definition and Properties 180</p> <p>5.5.2 The Standard Normal Distribution 182</p> <p>5.5.3 The Moment-Generating Function of the Normal Distribution 187</p> <p>5.6 Distribution of Linear Combination of Independent Normal Variables 189</p> <p>5.7 Approximation of the Binomial and Poisson Distributions by the Normal Distribution 193</p> <p>5.7.1 Approximation of the Binomial Distribution by the Normal Distribution 193</p> <p>5.7.2 Approximation of the Poisson Distribution by the Normal Distribution 196</p> <p>5.8 A Test of Normality 196</p> <p>5.9 Probability Models Commonly used in Reliability Theory 201</p> <p>5.9.1 The Lognormal Distribution 202</p> <p>5.9.2 The Exponential Distribution 206</p> <p>5.9.3 The Gamma Distribution 211</p> <p>5.9.4 The Weibull Distribution 214</p> <p>5.10 A Case Study 218</p> <p>5.11 Using JMP 219</p> <p>Review Practice Problems 220</p> <p><b>6 Distribution of Functions of Random Variables 228</b></p> <p>6.1 Introduction 229</p> <p>6.2 Distribution Functions of Two Random Variables 229</p> <p>6.2.1 Case of Two Discrete Random Variables 229</p> <p>6.2.2 Case of Two Continuous Random Variables 232</p> <p>6.2.3 The Mean Value and Variance of Functions of Two Random Variables 233</p> <p>6.2.4 Conditional Distributions 235</p> <p>6.2.5 Correlation between Two Random Variables 238</p> <p>6.2.6 Bivariate Normal Distribution 241</p> <p>6.3 Extension to Several Random Variables 244</p> <p>6.4 The Moment-Generating Function Revisited 245</p> <p>Review Practice Problems 249</p> <p><b>7 Sampling Distributions 253</b></p> <p>7.1 
Random Sampling 253</p> <p>7.1.1 Random Sampling from an Infinite Population 254</p> <p>7.1.2 Random Sampling from a Finite Population 256</p> <p>7.2 The Sampling Distribution of the Sample Mean 258</p> <p>7.2.1 Normal Sampled Population 258</p> <p>7.2.2 Nonnormal Sampled Population 258</p> <p>7.2.3 The Central Limit Theorem 259</p> <p>7.3 Sampling from a Normal Population 264</p> <p>7.3.1 The Chi-Square Distribution 264</p> <p>7.3.2 The Student <i>t</i>-Distribution 271</p> <p>7.3.3 Snedecor’s <i>F</i>-Distribution 276</p> <p>7.4 Order Statistics 279</p> <p>7.4.1 Distribution of the Largest Element in a Sample 280</p> <p>7.4.2 Distribution of the Smallest Element in a Sample 281</p> <p>7.4.3 Distribution of the Median of a Sample and of the <i>k<sup>th </sup></i>Order Statistic 282</p> <p>7.4.4 Other Uses of Order Statistics 284</p> <p>7.5 Using JMP 286</p> <p>Review Practice Problems 286</p> <p><b>8 Estimation of Population Parameters 289</b></p> <p>8.1 Introduction 290</p> <p>8.2 Point Estimators for the Population Mean and Variance 290</p> <p>8.2.1 Properties of Point Estimators 292</p> <p>8.2.2 Methods of Finding Point Estimators 295</p> <p>8.3 Interval Estimators for the Mean <i>μ </i>of a Normal Population 301</p> <p>8.3.1 <i>σ</i><sup>2</sup> Known 301</p> <p>8.3.2 <i>σ</i><sup>2</sup> Unknown 304</p> <p>8.3.3 Sample Size is Large 306</p> <p>8.4 Interval Estimators for The Difference of Means of Two Normal Populations 313</p> <p>8.4.1 Variances are Known 313</p> <p>8.4.2 Variances are Unknown 314</p> <p>8.5 Interval Estimators for the Variance of a Normal Population 322</p> <p>8.6 Interval Estimator for the Ratio of Variances of Two Normal Populations 327</p> <p>8.7 Point and Interval Estimators for the Parameters of Binomial Populations 331</p> <p>8.7.1 One Binomial Population 331</p> <p>8.7.2 Two Binomial Populations 334</p> <p>8.8 Determination of Sample Size 338</p> <p>8.8.1 One Population Mean 339</p> <p>8.8.2 Difference of Two Population Means 339</p> 
<p>8.8.3 One Population Proportion 340</p> <p>8.8.4 Difference of Two Population Proportions 341</p> <p>8.9 Some Supplemental Information 343</p> <p>8.10 A Case Study 343</p> <p>8.11 Using JMP 343</p> <p>Review Practice Problems 344</p> <p><b>9 Hypothesis Testing 352</b></p> <p>9.1 Introduction 353</p> <p>9.2 Basic Concepts of Testing a Statistical Hypothesis 353</p> <p>9.2.1 Hypothesis Formulation 353</p> <p>9.2.2 Risk Assessment 355</p> <p>9.3 Tests Concerning the Mean of a Normal Population Having Known Variance 358</p> <p>9.3.1 Case of a One-Tail (Left-Sided) Test 358</p> <p>9.3.2 Case of a One-Tail (Right-Sided) Test 362</p> <p>9.3.3 Case of a Two-Tail Test 363</p> <p>9.4 Tests Concerning the Mean of a Normal Population Having Unknown Variance 372</p> <p>9.4.1 Case of a Left-Tail Test 372</p> <p>9.4.2 Case of a Right-Tail Test 373</p> <p>9.4.3 The Two-Tail Case 374</p> <p>9.5 Large Sample Theory 378</p> <p>9.6 Tests Concerning the Difference of Means of Two Populations Having Distributions with Known Variances 380</p> <p>9.6.1 The Left-Tail Test 380</p> <p>9.6.2 The Right-Tail Test 381</p> <p>9.6.3 The Two-Tail Test 383</p> <p>9.7 Tests Concerning the Difference of Means of Two Populations Having Normal Distributions with Unknown Variances 388</p> <p>9.7.1 Two Population Variances are Equal 388</p> <p>9.7.2 Two Population Variances are Unequal 392</p> <p>9.7.3 The Paired <i>t</i>-Test 395</p> <p>9.8 Testing Population Proportions 401</p> <p>9.8.1 Test Concerning One Population Proportion 401</p> <p>9.8.2 Test Concerning the Difference Between Two Population Proportions 405</p> <p>9.9 Tests Concerning the Variance of a Normal Population 410</p> <p>9.10 Tests Concerning the Ratio of Variances of Two Normal Populations 414</p> <p>9.11 Testing of Statistical Hypotheses using Confidence Intervals 418</p> <p>9.12 Sequential Tests of Hypotheses 422</p> <p>9.12.1 A One-Tail Sequential Testing Procedure 422</p> <p>9.12.2 A Two-Tail Sequential Testing Procedure 427</p> 
<p>9.13 Case Studies 430</p> <p>9.14 Using JMP 431</p> <p>Review Practice Problems 431</p> <p><b>Part II Statistics in Actions</b></p> <p><b>10 Elements of Reliability Theory 445</b></p> <p>10.1 The Reliability Function 446</p> <p>10.1.1 The Hazard Rate Function 446</p> <p>10.1.2 Employing the Hazard Function 455</p> <p>10.2 Estimation: Exponential Distribution 457</p> <p>10.3 Hypothesis Testing: Exponential Distribution 465</p> <p>10.4 Estimation: Weibull Distribution 467</p> <p>10.5 Case Studies 472</p> <p>10.6 Using JMP 474</p> <p>Review Practice Problems 474</p> <p><b>11 On Data Mining 476</b></p> <p>11.1 Introduction 476</p> <p>11.2 What is Data Mining? 477</p> <p>11.2.1 Big Data 477</p> <p>11.3 Data Reduction 478</p> <p>11.4 Data Visualization 481</p> <p>11.5 Data Preparation 490</p> <p>11.5.1 Missing Data 490</p> <p>11.5.2 Outlier Detection and Remedial Measures 491</p> <p>11.6 Classification 492</p> <p>11.6.1 Evaluating a Classification Model 493</p> <p>11.7 Decision Trees 499</p> <p>11.7.1 Classification and Regression Trees (CART) 500</p> <p>11.7.2 Further Reading 511</p> <p>11.8 Case Studies 511</p> <p>11.9 Using JMP 512</p> <p>Review Practice Problems 512</p> <p><b>12 Cluster Analysis 518</b></p> <p>12.1 Introduction 518</p> <p>12.2 Similarity Measures 519</p> <p>12.2.1 Common Similarity Coefficients 524</p> <p>12.3 Hierarchical Clustering Methods 525</p> <p>12.3.1 Single Linkage 526</p> <p>12.3.2 Complete Linkage 531</p> <p>12.3.3 Average Linkage 534</p> <p>12.3.4 Ward’s Hierarchical Clustering 536</p> <p>12.4 Nonhierarchical Clustering Methods 538</p> <p>12.4.1 <i>K</i>-Means Method 538</p> <p>12.5 Density-Based Clustering 544</p> <p>12.6 Model-Based Clustering 547</p> <p>12.7 A Case Study 552</p> <p>12.8 Using JMP 553</p> <p>Review Practice Problems 553</p> <p><b>13 Analysis of Categorical Data 558</b></p> <p>13.1 Introduction 558</p> <p>13.2 The Chi-Square Goodness-of-Fit Test 559</p> <p>13.3 Contingency Tables 568</p> <p>13.3.1 The 2 <i>× </i>2 
Case with Known Parameters 568</p> <p>13.3.2 The 2 <i>× </i>2 Case with Unknown Parameters 570</p> <p>13.3.3 The <i>r × s </i>Contingency Table 572</p> <p>13.4 Chi-Square Test for Homogeneity 577</p> <p>13.5 Comments on the Distribution of the Lack-of-Fit Statistics 581</p> <p>13.6 Case Studies 583</p> <p>13.7 Using JMP 584</p> <p>Review Practice Problems 585</p> <p><b>14 Nonparametric Tests 591</b></p> <p>14.1 Introduction 591</p> <p>14.2 The Sign Test 592</p> <p>14.2.1 One-Sample Test 592</p> <p>14.2.2 The Wilcoxon Signed-Rank Test 595</p> <p>14.2.3 Two-Sample Test 598</p> <p>14.3 Mann–Whitney (Wilcoxon) <i>W </i>Test for Two Samples 604</p> <p>14.4 Runs Test 608</p> <p>14.4.1 Runs above and below the Median 608</p> <p>14.4.2 The Wald–Wolfowitz Run Test 611</p> <p>14.5 Spearman Rank Correlation 614</p> <p>14.6 Using JMP 618</p> <p>Review Practice Problems 618</p> <p><b>15 Simple Linear Regression Analysis 622</b></p> <p>15.1 Introduction 623</p> <p>15.2 Fitting the Simple Linear Regression Model 624</p> <p>15.2.1 Simple Linear Regression Model 624</p> <p>15.2.2 Fitting a Straight Line by Least Squares 627</p> <p>15.2.3 Sampling Distribution of the Estimators of Regression Coefficients 631</p> <p>15.3 Unbiased Estimator of <i>σ</i><sup>2</sup> 637</p> <p>15.4 Further Inferences Concerning Regression Coefficients (<i>β</i><sub>0</sub>, <i>β</i><sub>1</sub>), <i>E</i>(<i>Y </i>), and <i>Y </i>639</p> <p>15.4.1 Confidence Interval for <i>β</i><sub>1</sub> with Confidence Coefficient (1 <i>− α</i>) 639</p> <p>15.4.2 Confidence Interval for <i>β</i><sub>0</sub> with Confidence Coefficient (1 <i>− α</i>) 640</p> <p>15.4.3 Confidence Interval for <i>E</i>(<i>Y |X</i>) with Confidence Coefficient (1 <i>− α</i>) 642</p> <p>15.4.4 Prediction Interval for a Future Observation <i>Y </i>with Confidence Coefficient (1 <i>− α</i>) 645</p> <p>15.5 Tests of Hypotheses for <i>β</i><sub>0</sub> and <i>β</i><sub>1</sub> 652</p> <p>15.5.1 Test of Hypotheses for <i>β</i><sub>1</sub> 
652</p> <p>15.5.2 Test of Hypotheses for <i>β</i><sub>0 </sub>652</p> <p>15.6 Analysis of Variance Approach to Simple Linear Regression Analysis 659</p> <p>15.7 Residual Analysis 665</p> <p>15.8 Transformations 674</p> <p>15.9 Inference About <i>ρ </i>681</p> <p>15.10 A Case Study 683</p> <p>15.11 Using JMP 684</p> <p>Review Practice Problems 684</p> <p><b>16 Multiple Linear Regression Analysis 693</b></p> <p>16.1 Introduction 694</p> <p>16.2 Multiple Linear Regression Models 694</p> <p>16.3 Estimation of Regression Coefficients 699</p> <p>16.3.1 Estimation of Regression Coefficients Using Matrix Notation 701</p> <p>16.3.2 Properties of the Least-Squares Estimators 703</p> <p>16.3.3 The Analysis of Variance Table 704</p> <p>16.3.4 More Inferences about Regression Coefficients 706</p> <p>16.4 Multiple Linear Regression Model Using Quantitative and Qualitative Predictor Variables 714</p> <p>16.4.1 Single Qualitative Variable with Two Categories 714</p> <p>16.4.2 Single Qualitative Variable with Three or More Categories 716</p> <p>16.5 Standardized Regression Coefficients 726</p> <p>16.5.1 Multicollinearity 728</p> <p>16.5.2 Consequences of Multicollinearity 729</p> <p>16.6 Building Regression Type Prediction Models 730</p> <p>16.6.1 First Variable to Enter into the Model 730</p> <p>16.7 Residual Analysis and Certain Criteria for Model Selection 734</p> <p>16.7.1 Residual Analysis 734</p> <p>16.7.2 Certain Criteria for Model Selection 735</p> <p>16.8 Logistic Regression 740</p> <p>16.9 Case Studies 745</p> <p>16.10 Using JMP 748</p> <p>Review Practice Problems 748</p> <p><b>17 Analysis of Variance 757</b></p> <p>17.1 Introduction 758</p> <p>17.2 The Design Models 758</p> <p>17.2.1 Estimable Parameters 758</p> <p>17.2.2 Estimable Functions 760</p> <p>17.3 One-Way Experimental Layouts 761</p> <p>17.3.1 The Model and Its Analysis 761</p> <p>17.3.2 Confidence Intervals for Treatment Means 767</p> <p>17.3.3 Multiple Comparisons 773</p> <p>17.3.4 Determination of Sample Size 
780</p> <p>17.3.5 The Kruskal–Wallis Test for One-Way Layouts (Nonparametric Method) 781</p> <p>17.4 Randomized Complete Block (RCB) Designs 785</p> <p>17.4.1 The Friedman <i>F<sub>r</sub></i>-Test for Randomized Complete Block Design (Nonparametric Method) 792</p> <p>17.4.2 Experiments with One Missing Observation in an RCB-Design Experiment 794</p> <p>17.4.3 Experiments with Several Missing Observations in an RCB-Design Experiment 795</p> <p>17.5 Two-Way Experimental Layouts 798</p> <p>17.5.1 Two-Way Experimental Layouts with One Observation per Cell 800</p> <p>17.5.2 Two-Way Experimental Layouts with <i>r > </i>1 Observations per Cell 801</p> <p>17.5.3 Blocking in Two-Way Experimental Layouts 810</p> <p>17.5.4 Extending Two-Way Experimental Designs to <i>n</i>-Way Experimental Layouts 811</p> <p>17.6 Latin Square Designs 813</p> <p>17.7 Random-Effects and Mixed-Effects Models 820</p> <p>17.7.1 Random-Effects Model 820</p> <p>17.7.2 Mixed-Effects Model 822</p> <p>17.7.3 Nested (Hierarchical) Designs 824</p> <p>17.8 A Case Study 831</p> <p>17.9 Using JMP 832</p> <p>Review Practice Problems 832</p> <p><b>18 The 2<sup>k</sup> Factorial Designs 847</b></p> <p>18.1 Introduction 848</p> <p>18.2 The Factorial Designs 848</p> <p>18.3 The 2<i><sup>k</sup> </i>Factorial Designs 850</p> <p>18.4 Unreplicated 2<i><sup>k</sup> </i>Factorial Designs 859</p> <p>18.5 Blocking in the 2<i><sup>k</sup> </i>Factorial Design 867</p> <p>18.5.1 Confounding in the 2<i><sup>k</sup> </i>Factorial Design 867</p> <p>18.5.2 Yates’s Algorithm for the 2<i><sup>k</sup> </i>Factorial Designs 875</p> <p>18.6 The 2<i><sup>k</sup> </i>Fractional Factorial Designs 877</p> <p>18.6.1 One-half Replicate of a 2<i><sup>k</sup> </i>Factorial Design 877</p> <p>18.6.2 One-quarter Replicate of a 2<i><sup>k</sup> </i>Factorial Design 882</p> <p>18.7 Case Studies 887</p> <p>18.8 Using JMP 889</p> <p>Review Practice Problems 889</p> <p><b>19 Response Surfaces 897</b></p> <p>19.1 Introduction 897</p> <p>19.1.1 Basic 
Concepts of Response Surface Methodology 898</p> <p>19.2 First-Order Designs 903</p> <p>19.3 Second-Order Designs 917</p> <p>19.3.1 Central Composite Designs (CCDs) 918</p> <p>19.3.2 Some Other First-Order and Second-Order Designs 928</p> <p>19.4 Determination of Optimum or Near-Optimum Point 936</p> <p>19.4.1 The Method of Steepest Ascent 937</p> <p>19.4.2 Analysis of a Fitted Second-Order Response Surface 941</p> <p>19.5 ANOVA Table for a Second-Order Model 946</p> <p>19.6 Case Studies 948</p> <p>19.7 Using JMP 950</p> <p>Review Practice Problems 950</p> <p><b>20 Statistical Quality Control—Phase I Control Charts 958</b></p> <p><b>21 Statistical Quality Control—Phase II Control Charts 960</b></p> <p><b>Appendices 961</b></p> <p>Appendix A Statistical Tables 962</p> <p>Appendix B Answers to Selected Problems 969</p> <p>Appendix C Bibliography 992</p> <p>Index 1003</p>
<p><b>BHISHAM C. GUPTA, P<small>H</small>D,</b> is Professor Emeritus of Statistics in the Department of Mathematics and Statistics at the University of Southern Maine, and the co-author of <i>Statistics and Probability with Applications for Engineers and Scientists.</i> <p><b>IRWIN GUTTMAN, P<small>H</small>D,</b> is Professor Emeritus of Statistics in the Department of Mathematics at the State University of New York at Buffalo and Department of Statistics at the University of Toronto, Canada. He is the co-author of <i>Statistics and Probability with Applications for Engineers and Scientists.</i> <p><b>KALANKA P. JAYALATH, P<small>H</small>D,</b> is Assistant Professor in the Department of Mathematics and Statistics at the University of Houston.

You might also be interested in these products:

Statistics for Microarrays
by: Ernst Wit, John McClure
PDF eBook
90,99 €