
Planning and Executing Credible Experiments



A Guidebook for Engineering, Science, Industrial Processes, Agriculture, and Business
1st edition

by: Robert J. Moffat, Roy W. Henk

110,99 €

Publisher: Wiley
Format: EPUB
Published: 02.02.2021
ISBN/EAN: 9781119532866
Language: English
Number of pages: 352

DRM-protected eBook; to read it you will need, for example, Adobe Digital Editions and an Adobe ID.

Descriptions

<p><b>Covers experiment planning, execution, analysis, and reporting</b></p> <p>This single-source resource guides readers in planning and conducting credible experiments for engineering, science, industrial processes, agriculture, and business. The text takes experimenters all the way through conducting a high-impact experiment, from initial conception, through execution of the experiment, to a defensible final report. It prepares the reader to anticipate the choices faced during each stage.</p> <p>Filled with real-world examples from engineering science and industry, <i>Planning and Executing Credible Experiments: A Guidebook for Engineering, Science, Industrial Processes, Agriculture, and Business</i> offers chapters that challenge experimenters at each stage of planning and execution and emphasizes uncertainty analysis as a design tool in addition to its role for reporting results. Tested over decades at Stanford University and internationally, the text employs two powerful, free, open-source software tools: GOSSET to optimize experiment design, and R for statistical computing and graphics. 
A website accompanies the text, providing additional resources and software downloads.</p> <ul> <li>A comprehensive guide to experiment planning, execution, and analysis</li> <li>Leads from initial conception, through the experiment’s launch, to final report</li> <li>Prepares the reader to anticipate the choices faced throughout an experiment</li> <li>Hones the motivating question</li> <li>Employs principles and techniques from Design of Experiments (DoE)</li> <li>Selects experiment designs to obtain the most information from fewer experimental runs</li> <li>Offers chapters that propose questions that an experimenter will need to ask and answer during each stage of planning and execution</li> <li>Demonstrates how uncertainty analysis guides and strengthens each stage</li> <li>Includes examples from real-life industrial experiments</li> <li>Accompanied by a website hosting open-source software</li> </ul> <p><i>Planning and Executing Credible Experiments</i> is an excellent resource for graduates and senior undergraduates—as well as professionals—across a wide variety of engineering disciplines.</p>
<p>About the Authors xxi</p> <p>Preface xxiii</p> <p>Acknowledgments xxvii</p> <p>About the Companion Website xxix</p> <p><b>1 Choosing Credibility 1</b></p> <p>1.1 The Responsibility of an Experimentalist 2</p> <p>1.2 Losses of Credibility 2</p> <p>1.3 Recovering Credibility 3</p> <p>1.4 Starting with a Sharp Axe 3</p> <p>1.5 A Systems View of Experimental Work 4</p> <p>1.6 In Defense of Being a Generalist 5</p> <p>Panel 1.1 The Bundt Cake Story 6</p> <p>References 6</p> <p>Homework 6</p> <p><b>2 The Nature of Experimental Work 7</b></p> <p>2.1 Tested Guide of Strategy and Tactics 7</p> <p>2.2 What Can Be Measured and What Cannot? 8</p> <p>2.2.1 Examples Not Measurable 8</p> <p>2.2.2 Shapes 9</p> <p>2.2.3 Measurable by the Human Sensory System 10</p> <p>2.2.4 Identifying and Selecting Measurable Factors 11</p> <p>2.2.5 Intrusive Measurements 11</p> <p>2.3 Beware Measuring Without Understanding: Warnings from History 12</p> <p>2.4 How Does Experimental Work Differ from Theory and Analysis? 
13</p> <p>2.4.1 Logical Mode 13</p> <p>2.4.2 Persistence 13</p> <p>2.4.3 Resolution 13</p> <p>2.4.4 Dimensionality 15</p> <p>2.4.5 Similarity and Dimensional Analysis 15</p> <p>2.4.6 Listening to Our Theoretician Compatriots 16</p> <p>Panel 2.1 Positive Consequences of the Reproducibility Crisis 17</p> <p>Panel 2.2 Selected Invitations to Experimental Research, Insights from Theoreticians 18</p> <p>Panel 2.3 Prepublishing Your Experiment Plan 21</p> <p>2.4.7 Surveys and Polls 22</p> <p>2.5 Uncertainty 23</p> <p>2.6 Uncertainty Analysis 23</p> <p>References 24</p> <p>Homework 25</p> <p><b>3 An Overview of Experiment Planning 27</b></p> <p>3.1 Steps in an Experimental Plan 27</p> <p>3.2 Iteration and Refinement 28</p> <p>3.3 Risk Assessment/Risk Abatement 28</p> <p>3.4 Questions to Guide Planning of an Experiment 29</p> <p>Homework 30</p> <p><b>4 Identifying the Motivating Question 31</b></p> <p>4.1 The Prime Need 31</p> <p>Panel 4.1 There’s a Hole in My Bucket 32</p> <p>4.2 An Anchor and a Sieve 33</p> <p>4.3 Identifying the Motivating Question Clarifies Thinking 33</p> <p>4.3.1 Getting Started 33</p> <p>4.3.2 Probe and Focus 34</p> <p>4.4 Three Levels of Questions 35</p> <p>4.5 Strong Inference 36</p> <p>4.6 Agree on the Form of an Acceptable Answer 36</p> <p>4.7 Specify the Allowable Uncertainty 37</p> <p>4.8 Final Closure 37</p> <p>Reference 38</p> <p>Homework 38</p> <p><b>5 Choosing the Approach 39</b></p> <p>5.1 Laying Groundwork 39</p> <p>5.2 Experiment Classifications 40</p> <p>5.2.1 Exploratory 40</p> <p>5.2.2 Identifying the Important Variables 40</p> <p>5.2.3 Demonstration of System Performance 41</p> <p>5.2.4 Testing a Hypothesis 41</p> <p>5.2.5 Developing Constants for Predetermined Models 41</p> <p>5.2.6 Custody Transfer and System Performance Certification Tests 42</p> <p>5.2.7 Quality-Assurance Tests 42</p> <p>5.2.8 Summary 43</p> <p>5.3 Real or Simplified Conditions? 43</p> <p>5.4 Single-Sample or Multiple-Sample? 
43</p> <p>Panel 5.1 A Brief Summary of “Dissertation upon Roast Pig” 44</p> <p>Panel 5.2 Consider a Spherical Cow 44</p> <p>5.5 Statistical or Parametric Experiment Design? 45</p> <p>5.6 Supportive or Refutative? 47</p> <p>5.7 The Bottom Line 47</p> <p>References 48</p> <p>Homework 48</p> <p><b>6 Mapping for Safety, Operation, and Results 51</b></p> <p>6.1 Construct Multiple Maps to Illustrate and Guide Experiment Plan 51</p> <p>6.2 Mapping Prior Work and Proposed Work 51</p> <p>6.3 Mapping the Operable Domain of an Apparatus 53</p> <p>6.4 Mapping in Operator’s Coordinates 57</p> <p>6.5 Mapping the Response Surface 59</p> <p>6.5.1 Options for Organizing a Table 59</p> <p>6.5.2 Options for Presenting the Response on a Scatter-Plot-Type Graph 61</p> <p>Homework 64</p> <p><b>7 Refreshing Statistics 65</b></p> <p>7.1 Reviving Key Terms to Quantify Uncertainty 65</p> <p>7.1.1 Population 65</p> <p>7.1.2 Sample 66</p> <p>7.1.3 Central Value 67</p> <p>7.1.4 Mean, μ or <i>Ȳ </i>67</p> <p>7.1.5 Residual 67</p> <p>7.1.6 Variance, σ<sup>2</sup> or <i>S</i><sup>2</sup> 68</p> <p>7.1.7 Degrees of Freedom, <i>Df </i>68</p> <p>7.1.8 Standard Deviation, σ<i><sub>Y</sub> </i>or <i>S<sub>Y</sub> </i>68</p> <p>7.1.9 Uncertainty of the Mean, δμ 69</p> <p>7.1.10 Chi‐Squared, <i>χ</i><sup>2</sup> 69</p> <p>7.1.11 p‐Value 70</p> <p>7.1.12 Null Hypothesis 70</p> <p>7.1.13 F‐value of Fisher Statistic 71</p> <p>7.2 The Data Distribution Most Commonly Encountered: The Normal Distribution for Samples of Infinite Size 71</p> <p>7.3 Account for Small Samples: The t‐Distribution 72</p> <p>7.4 Construct Simple Models by Computer to Explain the Data 73</p> <p>7.4.1 Basic Statistical Analysis of Quantitative Data 73</p> <p>7.4.2 Model Data Containing Categorical and Quantitative Factors 75</p> <p>7.4.3 Display Data Fit to One Categorical Factor and One Quantitative Factor 76</p> <p>7.4.4 Quantify How Each Factor Accounts for Variation in the Data 76</p> <p>7.5 Gain Confidence and Skill at Statistical 
Modeling Via the R Language 77</p> <p>7.5.1 Model and Plot Results of a Single Variable Using the Example Data diceshoe.csv 77</p> <p>7.5.2 Evaluate Alternative Models of the Example Data hiloy.csv 78</p> <p>7.5.2.1 Inspect the Data 78</p> <p>7.5.3 Grand Mean 78</p> <p>7.5.4 Model by Groups: Group‐Wise Mean 78</p> <p>7.5.5 Model by a Quantitative Factor 78</p> <p>7.5.6 Model by Multiple Quantitative Factors 78</p> <p>7.5.7 Allow Factors to Interact (So Each Group Gets Its Own Slope) 79</p> <p>7.5.8 Include Polynomial Factors (a Statistical Linear Model Can Be Curved) 80</p> <p>7.6 Report Uncertainty 80</p> <p>7.7 Decrease Uncertainty (Improve Credibility) by Isolating Distinct Groups 81</p> <p>7.8 Original Data, Summary, and R 82</p> <p>References 83</p> <p>Homework 83</p> <p><b>8 Exploring Statistical Design of Experiments 87</b></p> <p>8.1 Always Seeking Wiser Strategies 87</p> <p>8.2 Evolving from Novice Experiment Design 87</p> <p>8.3 Two‐Level and Three‐Level Factorial Experiment Plans 88</p> <p>8.4 A Three‐Level, Three‐Factor Design 89</p> <p>8.5 The Plackett–Burman 12‐Run Screening Design 93</p> <p>8.6 Details About Analysis of Statistically Designed Experiments 95</p> <p>8.6.1 Model Main Factors to Original Raw Data 95</p> <p>8.6.2 Model Main Factors to Original Data Around Center of Each Factor 96</p> <p>8.6.3 Model Including All Interaction Terms 97</p> <p>8.6.4 Model Including Only Dominant Interaction Terms 97</p> <p>8.6.5 Model Including Dominant Interaction Term Plus Quadratic Term 98</p> <p>8.6.6 Model All Factors of Example 2, Centering Each Quantitative Factor 99</p> <p>8.6.7 Refine Model of Example 2 Including Only Dominant Terms 100</p> <p>8.7 Retrospect of Statistical Design Examples 101</p> <p>8.8 Philosophy of Statistical Design 101</p> <p>8.9 Statistical Design for Conditions That Challenge Factorial Designs 102</p> <p>8.10 A Highly Recommended Tool for Statistical Design of Experiments 103</p> <p>8.11 More Tools for Statistical Design of 
Experiments 103</p> <p>8.12 Conclusion 103</p> <p>Further Reading 104</p> <p>Homework 104</p> <p><b>9 Selecting the Data Points 107</b></p> <p>9.1 The Three Categories of Data 107</p> <p>9.1.1 The Output Data 107</p> <p>9.1.2 Peripheral Data 108</p> <p>9.1.3 Backup Data 108</p> <p>9.1.4 Other Data You May Wish to Acquire 108</p> <p>9.2 Populating the Operating Volume 109</p> <p>9.2.1 Locating the Data Points Within the Operating Volume 109</p> <p>9.2.2 Estimating the Topography of the Response Surface 109</p> <p>9.3 Example from Velocimetry 109</p> <p>9.3.1 Sharpen Our Approach 110</p> <p>9.3.2 Lessons Learned from Velocimetry Example 111</p> <p>9.4 Organize the Data 112</p> <p>9.4.1 Keep a Laboratory Notebook 112</p> <p>9.4.2 Plan for Data Security 112</p> <p>9.4.3 Decide Data Format 112</p> <p>9.4.4 Overview Data Guidelines 112</p> <p>9.4.5 Reasoning Through Data Guidelines 113</p> <p>9.5 Strategies to Select Next Data Points 114</p> <p>9.5.1 Overview of Option 1: Default Strategy with Intensive Experimenter Involvement 115</p> <p>9.5.1.1 Choosing the Data Trajectory 115</p> <p>9.5.1.2 The Default Strategy: Be Bold 115</p> <p>9.5.1.3 Anticipate, Check, Course Correct 116</p> <p>9.5.1.4 Other Aspects to Keep in Mind 116</p> <p>9.5.1.5 Endpoints 117</p> <p>9.5.2 Reintroducing Gosset 118</p> <p>9.5.3 Practice Gosset Examples (from Gosset User Manual) 119</p> <p>9.6 Demonstrate Gosset for Selecting Data 120</p> <p>9.6.1 Status Quo of Experiment Planning and Execution (Prior to Selecting More Samples) 120</p> <p>9.6.1.1 Specified Motivating Question 120</p> <p>9.6.1.2 Identified Pertinent Candidate Factors 121</p> <p>9.6.1.3 Selected Initial Sample Points Using Plackett–Burman 121</p> <p>9.6.1.4 Executed the First 12 Runs at the PB Sample Conditions 122</p> <p>9.6.1.5 Analyzed Results. Identified Dominant First-Order Factors. 
Estimated First-Order Uncertainties of Factors 123</p> <p>9.6.1.6 Generated Draft Predictive Equation 124</p> <p>9.6.2 Use Gosset to Select Additional Data Samples 125</p> <p>9.6.2.1 Example Gosset Session: User Input to Select Next Points 125</p> <p>9.6.2.2 Example Gosset Session: How We Chose User Input 126</p> <p>9.6.2.3 Example Gosset Session: User Input Along with Gosset Output 128</p> <p>9.6.2.4 Example Gosset Session: Convert the Gosset Design to Operator Values 131</p> <p>9.6.2.5 Results of Example Gosset Session: Operator Plots of Total Experiment Plan 132</p> <p>9.6.2.6 Execute Stage Two of the Experiment Plan: User Plus Gosset Sample Points 132</p> <p>9.7 Use Gosset to Analyze Results 133</p> <p>9.8 Other Options and Features of Gosset 133</p> <p>9.9 Using Gosset to Find Local Extrema in a Function of Several Variables 134</p> <p>9.10 Summary 137</p> <p>Further Reading 137</p> <p>Homework 137</p> <p><b>10 Analyzing Measurement Uncertainty 143</b></p> <p>10.1 Clarifying Uncertainty Analysis 143</p> <p>10.1.1 Distinguish Error and Uncertainty 144</p> <p>10.1.1.1 Single-Sample vs. Multiple-Sample 145</p> <p>10.1.2 Uncertainty as a Diagnostic Tool 146</p> <p>10.1.2.1 What Can Uncertainty Analysis Tell You? 146</p> <p>10.1.2.2 What is Uncertainty Analysis Good For? 
148</p> <p>10.1.2.3 Uncertainty Analysis Can Redirect a Poorly Conceived Experiment 148</p> <p>10.1.2.4 Uncertainty Analysis Improves the Quality of Your Work 148</p> <p>10.1.2.5 Slow Sampling and “Randomness” 149</p> <p>10.1.2.6 Uncertainty Analysis Makes Results Believable 150</p> <p>10.1.3 Uncertainty Analysis Aids Management Decision-Making 150</p> <p>10.1.3.1 Management’s Task: Dealing with Warranty Issues 150</p> <p>10.1.4 The Design Group’s Task: Setting Tolerances on Performance Test Repeatability 152</p> <p>10.1.5 The Performance Test Group’s Task: Setting the Tolerances on Measurements 152</p> <p>10.2 Definitions 153</p> <p>10.2.1 True Value 153</p> <p>10.2.2 Corrected Value 153</p> <p>10.2.3 Data Reduction Program 153</p> <p>10.2.4 Accuracy 153</p> <p>10.2.5 Error 154</p> <p>10.2.6 XXXX Error 154</p> <p>10.2.7 Fixed Error 154</p> <p>10.2.8 Residual Fixed Error 154</p> <p>10.2.9 Random Error 154</p> <p>10.2.10 Variable (but Deterministic) Error 155</p> <p>10.2.11 Uncertainty 155</p> <p>10.2.12 Odds 155</p> <p>10.2.13 Absolute Uncertainty 155</p> <p>10.2.14 Relative Uncertainty 155</p> <p>10.3 The Sources and Types of Errors 156</p> <p>10.3.1 Types of Errors: Fixed, Random, and Variable 156</p> <p>10.3.2 Sources of Errors: The Measurement Chain 156</p> <p>10.3.2.1 The Undisturbed Value 158</p> <p>10.3.2.2 The Available Value 158</p> <p>10.3.2.3 The Achieved Value 158</p> <p>10.3.2.4 The Observed Value 159</p> <p>10.3.2.5 The Corrected Value 159</p> <p>10.3.3 Specifying the True Value 160</p> <p>10.3.3.1 If the Achieved Value is Taken as the True Value 160</p> <p>10.3.3.2 If the Available Value is Taken as the True Value 163</p> <p>10.3.3.3 If the Undisturbed Value is Taken as the True Value 166</p> <p>10.3.3.4 If the Mixed Mean Gas Temperature is Taken as the True Value 167</p> <p>10.3.4 The Role of the End User 167</p> <p>10.3.4.1 The End-Use Equations Implicitly Define the True Value 167</p> <p>10.3.5 Calibration 168</p> <p>10.4 The Basic Mathematics 
170</p> <p>10.4.1 The Root-Sum-Squared (RSS) Combination 170</p> <p>10.4.2 The Fixed Error in a Measurement 171</p> <p>10.4.3 The Random Error in a Measurement 172</p> <p>10.4.4 The Uncertainty in a Measurement 173</p> <p>10.4.5 The Uncertainty in a Calculated Result 174</p> <p>10.4.5.1 The Relative Uncertainty in a Result 176</p> <p>10.5 Automating the Uncertainty Analysis 178</p> <p>10.5.1 The Mathematical Basis 178</p> <p>10.5.2 Example of Uncertainty Analysis by Spreadsheet 179</p> <p>10.6 Single-Sample Uncertainty Analysis 181</p> <p>10.6.1 Assembling the Necessary Inputs 184</p> <p>10.6.2 Calculating the Uncertainty in the Result 185</p> <p>10.6.3 The Three Levels of Uncertainty: Zero<sup>th</sup>-, First-, and N<sup>th</sup>-Order 185</p> <p>10.6.3.1 Zero<sup>th</sup>-Order Replication 186</p> <p>10.6.3.2 First-Order Replication 187</p> <p>10.6.3.3 N<sup>th</sup>-Order Replication 188</p> <p>10.6.4 Fractional-Order Replication for Special Cases 188</p> <p>10.6.5 Summary of Single-Sample Uncertainty Levels 189</p> <p>10.6.5.1 Zero<sup>th</sup>-Order 189</p> <p>10.6.5.2 First-Order 190</p> <p>10.6.5.3 N<sup>th</sup>-Order 190</p> <p>References 190</p> <p>Further Reading 191</p> <p>Homework 191</p> <p><b>11 Using Uncertainty Analysis in Planning and Execution 197</b></p> <p>11.1 Using Uncertainty Analysis in Planning 197</p> <p>11.1.1 The Physical Situation and Energy Analysis 198</p> <p>11.1.2 The Steady‐State Method 199</p> <p>11.1.3 The Transient Method 200</p> <p>11.1.4 Reflecting on Assumptions Made During DRE Derivations 201</p> <p>11.2 Perform Uncertainty Analysis on the DREs 202</p> <p>11.2.1 Uncertainty Analysis: General Form 202</p> <p>11.2.2 Uncertainty Analysis of the Steady‐State Method 203</p> <p>11.2.3 Uncertainty Analysis – Transient Method 204</p> <p>11.2.4 Compare the Results of Uncertainty Analysis of the Methods 205</p> <p>11.2.5 What Does the Calculated Uncertainty Interval Mean? 
206</p> <p>11.2.6 Cross‐Checking the Experiment 207</p> <p>11.2.7 Conclusions 207</p> <p>11.3 Using Uncertainty Analysis in Selecting Instruments 208</p> <p>11.4 Using Uncertainty Analysis in Debugging an Experiment 209</p> <p>11.4.1 Handling Overall Scatter 209</p> <p>11.4.2 Sources of Scatter 210</p> <p>11.4.3 Advancing Toward Calibration 211</p> <p>11.4.4 Selecting Thresholds 212</p> <p>11.4.5 Iterating Analysis 212</p> <p>11.4.6 Rechecking Situational Uncertainty 212</p> <p>11.5 Reporting the Uncertainties in an Experiment 213</p> <p>11.5.1 Progress in Uncertainty Reporting 214</p> <p>11.6 Multiple‐Sample Uncertainty Analysis 214</p> <p>11.6.1 Revisiting Single‐Sample and Multiple‐Sample Uncertainty Analysis 214</p> <p>11.6.2 Examples of Multiple‐Sample Uncertainty Analysis 215</p> <p>11.6.3 Fixed Error and Random Error 216</p> <p>11.7 Coordinate with Uncertainty Analysis Standards 216</p> <p>11.7.1 Describing Fixed and Random Errors in a Measurement 217</p> <p>11.7.2 The Bias Limit 217</p> <p>11.7.2.1 Fossilization 218</p> <p>11.7.2.2 Bias Limits 218</p> <p>11.7.3 The Precision Index 219</p> <p>11.7.4 The Number of Degrees of Freedom 220</p> <p>11.8 Describing the Overall Uncertainty in a Single Measurement 220</p> <p>11.8.1 Adjusting for a Single Measurement 220</p> <p>11.8.2 Describing the Overall Uncertainty in a Result 221</p> <p>11.8.3 Adding the Overall Uncertainty to Predictive Models 222</p> <p>11.9 Additional Statistical Tools and Elements 222</p> <p>11.9.1 Pooled Variance 222</p> <p>11.9.1.1 Student’s t‐Distribution – Pooled Examples 223</p> <p>11.9.2 Estimating the Standard Deviation of a Population from the Standard Deviation of a Small Sample: The Chi‐Squared χ2 Distribution 223</p> <p>References 225</p> <p>Homework 226</p> <p><b>12 Debugging an Experiment, Shakedown, and Validation 231</b></p> <p>12.1 Introduction 231</p> <p>12.2 Classes of Error 231</p> <p>12.3 Using Time-Series Analysis in Debugging 232</p> <p>12.4 Examples 232</p> <p>12.4.1 
Gas Temperature Measurement 232</p> <p>12.4.2 Calibration of a Strain Gauge 233</p> <p>12.4.3 Lessons Learned from Examples 234</p> <p>12.5 Process Unsteadiness 234</p> <p>12.6 The Effect of Time-Constant Mismatching 235</p> <p>12.7 Using Uncertainty Analysis in Debugging an Experiment 236</p> <p>12.7.1 Calibration and Repeatability 236</p> <p>12.7.2 Stability and Baselining 238</p> <p>12.8 Debugging the Experiment via the Data Interpretation Program 239</p> <p>12.8.1 Debug the Experiment via the DIP 239</p> <p>12.8.2 Debug the Interface of the DIP 239</p> <p>12.8.3 Debug Routines in the DIP 240</p> <p>12.9 Situational Uncertainty 241</p> <p><b>13 Trimming Uncertainty 243</b></p> <p>13.1 Focusing on the Goal 243</p> <p>13.2 A Motivating Question for Industrial Production 243</p> <p>13.2.1 Agreed Motivating Questions for Industrial Example 244</p> <p>13.2.2 Quick Answers to Motivating Questions 244</p> <p>13.2.3 Challenge: Precheck Analysis and Answers 245</p> <p>13.3 Plackett–Burman 12-Run Results and Motivating Question #3 245</p> <p>13.4 PB 12-Run Results and Motivating Question #1 247</p> <p>13.4.1 Building a Predictive Model Equation from R-Language Linear Model 248</p> <p>13.4.2 Parsing the Dual Predictive Model Equation 249</p> <p>13.4.3 Uncertainty of the Intercept in the Dual Predictive Model Equation 250</p> <p>13.4.4 Mapping an Answer to Motivating Question #1 251</p> <p>13.4.5 Tentative Answers to Motivating Question #1 252</p> <p>13.5 Uncertainty Analysis of Dual Predictive Model and Motivating Question #2 252</p> <p>13.5.1 Uncertainty of the Constant in the Dual Predictive Model Equation 252</p> <p>13.5.2 Uncertainty of Other Factors in the Dual Predictive Model Equation 253</p> <p>13.5.3 Include All Coefficient Uncertainties in the Dual Predictive Model Equation 254</p> <p>13.5.4 Overall Uncertainty from All Factors in the Predictive Model Equation 254</p> <p>13.5.5 Improved Tentative Answers to Motivating Questions, Including Uncertainties 256</p> 
<p>13.5.6 Search for Improved Predictive Models 256</p> <p>13.6 The PB 12-Run Results and Individual Machine Models 256</p> <p>13.6.1 Individual Machine Predictive Model Equations 258</p> <p>13.6.2 Uncertainty of the Intercept in the Individual Predictive Model Equations 258</p> <p>13.6.3 Uncertainty of the Constant in the Individual Predictive Model Equations 259</p> <p>13.6.4 Uncertainty of Other Factors in the Individual Predictive Model Equation 259</p> <p>13.6.4.1 Uncertainties of Machine 1 259</p> <p>13.6.4.2 Uncertainties of Machine 2 260</p> <p>13.6.4.3 Including Instrument and Measurement Uncertainties 260</p> <p>13.6.5 Include All Coefficient Uncertainties in the Individual Predictive Model Equations 260</p> <p>13.6.6 Overall Uncertainty from All Factors in the Individual Predictive Model Equations 261</p> <p>13.6.7 Quick Overview of Individual Machine Performance Over the Operating Map 262</p> <p>13.7 Final Answers to All Motivating Questions for the PB Example Experiment 263</p> <p>13.7.1 Answers to Motivating Question #1 263</p> <p>13.7.2 Answers to Motivating Question #2 263</p> <p>13.7.3 Answers to Motivating Question #3 (Expanded from Section 13.3) 263</p> <p>13.7.4 Answers to Motivating Question #4 264</p> <p>13.7.5 Other Recommendations (to Our Client) 264</p> <p>13.8 Conclusions 265</p> <p>Homework 266</p> <p><b>14 Documenting the Experiment: Report Writing 269</b></p> <p>14.1 The Logbook 269</p> <p>14.2 Report Writing 269</p> <p>14.2.1 Organization of the Reports 270</p> <p>14.2.2 Who Reads What? 270</p> <p>14.2.3 Picking a Viewpoint 271</p> <p>14.2.4 What Goes Where? 271</p> <p>14.2.4.1 What Goes in the Abstract? 272</p> <p>14.2.4.2 What Goes in the Foreword? 272</p> <p>14.2.4.3 What Goes in the Objective? 273</p> <p>14.2.4.4 What Goes in the Results and Conclusions? 273</p> <p>14.2.4.5 What Goes in the Discussion? 
274</p> <p>14.2.4.6 References 274</p> <p>14.2.4.7 Figures 275</p> <p>14.2.4.8 Tables 276</p> <p>14.2.4.9 Appendices 276</p> <p>14.2.5 The Mechanics of Report Writing 276</p> <p>14.2.6 Clear Language Versus “JARGON” 277</p> <p>Panel 14.1 The Turbo-Encabulator 278</p> <p>14.2.7 “Gobbledygook”: Structural Jargon 279</p> <p>Panel 14.2 U.S. Code, Title 18, No. 793 279</p> <p>14.2.8 Quantitative Writing 281</p> <p>14.2.8.1 Substantive Versus Descriptive Writing 281</p> <p>Panel 14.3 The Descriptive Bank Statement 281</p> <p>14.2.8.2 Zero-Information Statements 281</p> <p>14.2.8.3 Change 282</p> <p>14.3 International Organization for Standardization, ISO 9000 and other Standards 282</p> <p>14.4 Never Forget. Always Remember 282</p> <p><b>Appendix A: Distributing Variation and Pooled Variance 283</b></p> <p>A.1 Inescapable Distributions 283</p> <p>A.1.1 The Normal Distribution for Samples of Infinite Size 283</p> <p>A.1.2 Adjust Normal Distributions with Few Data: The Student’s t-Distribution 283</p> <p>A.2 Other Common Distributions 286</p> <p>A.3 Pooled Variance (Advanced Topic) 286</p> <p><b>Appendix B: Illustrative Tables for Statistical Design 289</b></p> <p>B.1 Useful Tables for Statistical Design of Experiments 289</p> <p>B.1.1 Ready-made Ordering for Randomized Trials 289</p> <p>B.1.2 Exhausting Sets of Two-Level Factorial Designs (≤ Five Factors) 289</p> <p>B.2 The Plackett–Burman (PB) Screening Designs 289</p> <p><b>Appendix C: Hand Analysis of a Two-Level Factorial Design 293</b></p> <p>C.1 The General Two-Level Factorial Design 293</p> <p>C.2 Estimating the Significance of the Apparent Factor Effects 298</p> <p>C.3 Hand Analysis of a Plackett–Burman (PB) 12-Run Design 299</p> <p>C.4 Illustrative Practice Example for the PB 12-Run Pattern 302</p> <p>C.4.1 Assignment: Find Factor Effects and the Linear Coefficients Absent Noise 302</p> <p>C.4.2 Assignment: Find Factor Effects and the Linear Coefficients with Noise 303</p> <p>C.5 Answer Key: Compare Your Hand 
Calculations 303</p> <p>C.5.1 Expected Results Absent Noise (compare C.4.1) 303</p> <p>C.5.2 Expected Results with Random Gaussian Noise (cf. C.4.2) 304</p> <p>C.6 Equations for Hand Calculations 305</p> <p><b>Appendix D: Free Recommended Software 307</b></p> <p>D.1 Instructions to Obtain the R Language for Statistics 307</p> <p>D.2 Instructions to Obtain LibreOffice 308</p> <p>D.3 Instructions to Obtain Gosset 308</p> <p>D.4 Possible Use of RStudio 309</p> <p>Index 311</p>
<p><b>ROBERT J. MOFFAT, P<small>H</small>D,</b> is a Professor Emeritus of Mechanical Engineering at Stanford University. He proved engines for General Motors and is the former President of Moffat Thermosciences, Inc. His main areas of research include convective heat transfer in engineering systems, experimental methods in heat transfer and fluid mechanics, and biomedical thermal issues.</p> <p><b>ROY W. HENK, P<small>H</small>D,</b> designed aerospace engine components and has conducted experimental tests in industry, in a government lab, and internationally. He has been a Professor in the USA, South Korea, and most notably at Kyoto University in Japan. His research includes experiment design, energy conversion, turbomachinery, and thermal fluid physics.</p>
<p><i>Planning and Executing Credible Experiments</i> is a comprehensive presentation of what needs to be done in the development, execution, and interpretation of experiments. It includes a detailed discussion of uncertainty quantification that is often overlooked in the presentation of experimental data. It is an essential reference for anyone who uses experiments for answering key questions and making data-based decisions in engineering, science, medicine, and business.--Professor C.T. Bowman, Mechanical Engineering Department, Stanford University</p> <p><i>Planning and Executing Credible Experiments</i> is a must-read for any engineer who either is a test engineer or supervises one. I believe nothing of substance relative to experiments has been left out of this text. I'm giving a copy to my nephew, an undergraduate engineer; I can't think of a better gift...it keeps on giving. It's a great reference.--James Callas, Test Chief at Caterpillar, P.E. (semi-retired)</p> <p>Finally available, the long-awaited book <i>Planning and Executing Credible Experiments</i> by Moffat & Henk! I have eagerly anticipated its publication. While this statement may seem enthusiastic, I assure you that those of us who had the privilege of attending Professor Moffat's classes at Stanford University will understand this feeling. His graduate courses in experimental methods in the thermosciences were worth our intense effort. He packed his classes with weighty content: each thought, each reflection. After each class, the same litany among us students ... “were you able to record everything? … Can we share our notes?” We dreamed of having this book to accompany the lessons of the great master!</p> <p>In my opinion, <i>Planning and Executing Credible Experiments</i> is the bible of the experimentalist. A complete book. 
By conceptualizing the credibility of the experiment and a systemic view of experimental work, it elevates the responsibility of the experimentalist, enabling him to validate his experiments critically. The book discusses strategies and tactics of experimental work, differentiating it from theoretical analysis. Written with the authority of one of the respected founders of measurement uncertainty analysis, this book is dense with concepts, practical examples, and statistical tools. It demonstrates with clarity and propriety what uncertainty analysis can tell the experimentalist.</p> <p>Having this book in hand empowers experimentalists in every department at my university.</p> <p>To universities in other countries: I highly recommend this book to improve the impact and credibility of your research, science, and engineering schools, just as it benefits Stanford-trained researchers.</p> <p>Any manufacturer, business, or industry aiming to improve its operations will find the strategies in this book invaluable, because its tools boost the Deming cycle.</p> <p>In sum: <i>Planning and Executing Credible Experiments</i> guides experimentalists to perform experiments with results they can justify and defend.--Prof. M. N. Frota, Pontifical University of Rio de Janeiro / BRAZIL</p>

You may also be interested in these products:

Turbulent Drag Reduction by Surfactant Additives
by: Feng-Chen Li, Bo Yu, Jin-Jia Wei, Yasuo Kawaguchi
PDF ebook
142,99 €
Turbulent Drag Reduction by Surfactant Additives
by: Feng-Chen Li, Bo Yu, Jin-Jia Wei, Yasuo Kawaguchi
EPUB ebook
142,99 €
Wear
by: Gwidon W. Stachowiak
PDF ebook
144,99 €