Details

Total Survey Error in Practice


Wiley Series in Survey Methodology, 1st edition

By: Paul P. Biemer, Edith D. de Leeuw, Stephanie Eckman, Brad Edwards, Frauke Kreuter, Lars E. Lyberg, N. Clyde Tucker, Brady T. West

103,99 €

Publisher: Wiley
Format: PDF
Published: February 6, 2017
ISBN/EAN: 9781119041689
Language: English
Number of pages: 624

DRM-protected eBook. To read it you will need, for example, Adobe Digital Editions and an Adobe ID.

Description

Featuring a timely presentation of total survey error (TSE), this edited volume introduces valuable tools for understanding and improving survey data quality in the context of evolving large-scale data sets.

This book provides an overview of the TSE framework and current TSE research as related to survey design, data collection, estimation, and analysis. It recognizes that survey data affect many public policy and business decisions and thus focuses on the framework for understanding and improving survey data quality. The book also addresses issues with data quality in official statistics and in social, opinion, and market research as these fields continue to evolve, leading to larger and messier data sets. This perspective challenges survey organizations to find ways to collect and process data more efficiently without sacrificing quality. The volume consists of the most up-to-date research and reporting from over 70 contributors representing the best academics and researchers from a range of fields. The chapters are broken out into five main sections: The Concept of TSE and the TSE Paradigm, Implications for Survey Design, Data Collection and Data Processing Applications, Evaluation and Improvement, and Estimation and Analysis. Each chapter introduces and examines multiple error sources, such as sampling error, measurement error, and nonresponse error, which often pose the greatest risks to data quality, while also encouraging readers not to lose sight of the less commonly studied error sources, such as coverage error, processing error, and specification error. The book also notes the relationships between errors and the ways in which efforts to reduce one type can increase another, resulting in an estimate with larger total error.

This book:

• Features various error sources, and the complex relationships between them, in 25 high-quality chapters on the most up-to-date research in the field of TSE
• Provides comprehensive reviews of the literature on error sources as well as data collection approaches and estimation methods to reduce their effects
• Presents examples of recent international events that demonstrate the effects of data error, the importance of survey data quality, and the real-world issues that arise from these errors
• Spans the four pillars of the total survey error paradigm (design, data collection, evaluation, and analysis) to address key data quality issues in official statistics and survey research

Total Survey Error in Practice is a reference for survey researchers and data scientists in research areas that include social science, public opinion, public policy, and business. It can also be used as a textbook or supplementary material for a graduate-level course in survey research methods.
Notes on Contributors xix
Preface xxv

Section 1 The Concept of TSE and the TSE Paradigm 1

1 The Roots and Evolution of the Total Survey Error Concept 3
by Lars E. Lyberg and Diana Maria Stukel
1.1 Introduction and Historical Backdrop 3
1.2 Specific Error Sources and Their Control or Evaluation 5
1.3 Survey Models and Total Survey Design 10
1.4 The Advent of More Systematic Approaches Toward Survey Quality 12
1.5 What the Future Will Bring 16
References 18

2 Total Twitter Error: Decomposing Public Opinion Measurement on Twitter from a Total Survey Error Perspective 23
by Yuli Patrick Hsieh and Joe Murphy
2.1 Introduction 23
2.2 Social Media: An Evolving Online Public Sphere 25
2.3 Components of Twitter Error 27
2.4 Studying Public Opinion on the Twittersphere and the Potential Error Sources of Twitter Data: Two Case Studies 31
2.5 Discussion 40
2.6 Conclusion 42
References 43

3 Big Data: A Survey Research Perspective 47
by Reg Baker
3.1 Introduction 47
3.2 Definitions 48
3.3 The Analytic Challenge: From Database Marketing to Big Data and Data Science 56
3.4 Assessing Data Quality 58
3.5 Applications in Market, Opinion, and Social Research 59
3.6 The Ethics of Research Using Big Data 62
3.7 The Future of Surveys in a Data-Rich Environment 62
References 65

4 The Role of Statistical Disclosure Limitation in Total Survey Error 71
by Alan F. Karr
4.1 Introduction 71
4.2 Primer on SDL 72
4.3 TSE-Aware SDL 75
4.4 Edit-Respecting SDL 79
4.5 SDL-Aware TSE 83
4.6 Full Unification of Edit, Imputation, and SDL 84
4.7 “Big Data” Issues 87
4.8 Conclusion 89
Acknowledgments 91
References 92

Section 2 Implications for Survey Design 95

5 The Undercoverage–Nonresponse Tradeoff 97
by Stephanie Eckman and Frauke Kreuter
5.1 Introduction 97
5.2 Examples of the Tradeoff 98
5.3 Simple Demonstration of the Tradeoff 99
5.4 Coverage and Response Propensities and Bias 100
5.5 Simulation Study of Rates and Bias 102
5.6 Costs 110
5.7 Lessons for Survey Practice 111
References 112

6 Mixing Modes: Tradeoffs Among Coverage, Nonresponse, and Measurement Error 115
by Roger Tourangeau
6.1 Introduction 115
6.2 The Effect of Offering a Choice of Modes 118
6.3 Getting People to Respond Online 119
6.4 Sequencing Different Modes of Data Collection 120
6.5 Separating the Effects of Mode on Selection and Reporting 122
6.6 Maximizing Comparability Versus Minimizing Error 127
6.7 Conclusions 129
References 130

7 Mobile Web Surveys: A Total Survey Error Perspective 133
by Mick P. Couper, Christopher Antoun, and Aigul Mavletova
7.1 Introduction 133
7.2 Coverage 135
7.3 Nonresponse 137
7.4 Measurement Error 142
7.5 Links Between Different Error Sources 148
7.6 The Future of Mobile Web Surveys 149
References 150

8 The Effects of a Mid-Data Collection Change in Financial Incentives on Total Survey Error in the National Survey of Family Growth: Results from a Randomized Experiment 155
by James Wagner, Brady T. West, Heidi Guyer, Paul Burton, Jennifer Kelley, Mick P. Couper, and William D. Mosher
8.1 Introduction 155
8.2 Literature Review: Incentives in Face-to-Face Surveys 156
8.3 Data and Methods 159
8.4 Results 163
8.5 Conclusion 173
References 175

9 A Total Survey Error Perspective on Surveys in Multinational, Multiregional, and Multicultural Contexts 179
by Beth-Ellen Pennell, Kristen Cibelli Hibben, Lars E. Lyberg, Peter Ph. Mohler, and Gelaye Worku
9.1 Introduction 179
9.2 TSE in Multinational, Multiregional, and Multicultural Surveys 180
9.3 Challenges Related to Representation and Measurement Error Components in Comparative Surveys 184
9.4 QA and QC in 3MC Surveys 192
References 196

10 Smartphone Participation in Web Surveys: Choosing Between the Potential for Coverage, Nonresponse, and Measurement Error 203
by Gregg Peterson, Jamie Griffin, John LaFrance, and JiaoJiao Li
10.1 Introduction 203
10.2 Prevalence of Smartphone Participation in Web Surveys 206
10.3 Smartphone Participation Choices 209
10.4 Instrument Design Choices 212
10.5 Device and Design Treatment Choices 216
10.6 Conclusion 218
10.7 Future Challenges and Research Needs 219
Appendix 10.A: Data Sources 220
Appendix 10.B: Smartphone Prevalence in Web Surveys 221
Appendix 10.C: Screen Captures from Peterson et al. (2013) Experiment 225
Appendix 10.D: Survey Questions Used in the Analysis of the Peterson et al. (2013) Experiment 229
References 231

11 Survey Research and the Quality of Survey Data Among Ethnic Minorities 235
by Joost Kappelhof
11.1 Introduction 235
11.2 On the Use of the Terms Ethnicity and Ethnic Minorities 236
11.3 On the Representation of Ethnic Minorities in Surveys 237
11.4 Measurement Issues 242
11.5 Comparability, Timeliness, and Cost Concerns 244
11.6 Conclusion 247
References 248

Section 3 Data Collection and Data Processing Applications 253

12 Measurement Error in Survey Operations Management: Detection, Quantification, Visualization, and Reduction 255
by Brad Edwards, Aaron Maitland, and Sue Connor
12.1 TSE Background on Survey Operations 256
12.2 Better and Better: Using Behavior Coding (CARIcode) and Paradata to Evaluate and Improve Question (Specification) Error and Interviewer Error 257
12.3 Field-Centered Design: Mobile App for Rapid Reporting and Management 261
12.4 Faster and Cheaper: Detecting Falsification With GIS Tools 265
12.5 Putting It All Together: Field Supervisor Dashboards 268
12.6 Discussion 273
References 275

13 Total Survey Error for Longitudinal Surveys 279
by Peter Lynn and Peter J. Lugtig
13.1 Introduction 279
13.2 Distinctive Aspects of Longitudinal Surveys 280
13.3 TSE Components in Longitudinal Surveys 281
13.4 Design of Longitudinal Surveys from a TSE Perspective 285
13.5 Examples of Tradeoffs in Three Longitudinal Surveys 290
13.6 Discussion 294
References 295

14 Text Interviews on Mobile Devices 299
by Frederick G. Conrad, Michael F. Schober, Christopher Antoun, Andrew L. Hupp, and H. Yanna Yan
14.1 Texting as a Way of Interacting 300
14.2 Contacting and Inviting Potential Respondents through Text 303
14.3 Texting as an Interview Mode 303
14.4 Costs and Efficiency of Text Interviewing 312
14.5 Discussion 314
References 315

15 Quantifying Measurement Errors in Partially Edited Business Survey Data 319
by Thomas Laitila, Karin Lindgren, Anders Norberg, and Can Tongur
15.1 Introduction 319
15.2 Selective Editing 320
15.3 Effects of Errors Remaining After SE 325
15.4 Case Study: Foreign Trade in Goods Within the European Union 328
15.5 Editing Big Data 334
15.6 Conclusions 335
References 335

Section 4 Evaluation and Improvement 339

16 Estimating Error Rates in an Administrative Register and Survey Questions Using a Latent Class Model 341
by Daniel L. Oberski
16.1 Introduction 341
16.2 Administrative and Survey Measures of Neighborhood 342
16.3 A Latent Class Model for Neighborhood of Residence 345
16.4 Results 348
Appendix 16.A: Program Input and Data 355
Acknowledgments 357
References 357

17 ASPIRE: An Approach for Evaluating and Reducing the Total Error in Statistical Products with Application to Registers and the National Accounts 359
by Paul P. Biemer, Dennis Trewin, Heather Bergdahl, and Yingfu Xie
17.1 Introduction and Background 359
17.2 Overview of ASPIRE 360
17.3 The ASPIRE Model 362
17.4 Evaluation of Registers 367
17.5 National Accounts 371
17.6 A Sensitivity Analysis of GDP Error Sources 376
17.7 Concluding Remarks 379
Appendix 17.A: Accuracy Dimension Checklist 381
References 384

18 Classification Error in Crime Victimization Surveys: A Markov Latent Class Analysis 387
by Marcus E. Berzofsky and Paul P. Biemer
18.1 Introduction 387
18.2 Background 389
18.3 Analytic Approach 392
18.4 Model Selection 396
18.5 Results 399
18.6 Discussion and Summary of Findings 404
18.7 Conclusions 407
Appendix 18.A: Derivation of the Composite False-Negative Rate 407
Appendix 18.B: Derivation of the Lower Bound for False-Negative Rates from a Composite Measure 408
Appendix 18.C: Examples of Latent GOLD Syntax 408
References 410

19 Using Doorstep Concerns Data to Evaluate and Correct for Nonresponse Error in a Longitudinal Survey 413
by Ting Yan
19.1 Introduction 413
19.2 Data and Methods 416
19.3 Results 418
19.4 Discussion 428
Acknowledgment 430
References 430

20 Total Survey Error Assessment for Sociodemographic Subgroups in the 2012 U.S. National Immunization Survey 433
by Kirk M. Wolter, Vicki J. Pineau, Benjamin Skalland, Wei Zeng, James A. Singleton, Meena Khare, Zhen Zhao, David Yankey, and Philip J. Smith
20.1 Introduction 433
20.2 TSE Model Framework 434
20.3 Overview of the National Immunization Survey 437
20.4 National Immunization Survey: Inputs for TSE Model 440
20.5 National Immunization Survey TSE Analysis 445
20.6 Summary 452
References 453

21 Establishing Infrastructure for the Use of Big Data to Understand Total Survey Error: Examples from Four Survey Research Organizations 457
Overview by Brady T. West
Part 1 Big Data Infrastructure at the Institute for Employment Research (IAB) 458
by Antje Kirchner, Daniela Hochfellner, Stefan Bender
Acknowledgments 464
References 464
Part 2 Using Administrative Records Data at the U.S. Census Bureau: Lessons Learned from Two Research Projects Evaluating Survey Data 467
by Elizabeth M. Nichols, Mary H. Mulry, and Jennifer Hunter Childs
Acknowledgments and Disclaimers 472
References 472
Part 3 Statistics New Zealand’s Approach to Making Use of Alternative Data Sources in a New Era of Integrated Data 474
by Anders Holmberg and Christine Bycroft
References 478
Part 4 Big Data Serving Survey Research: Experiences at the University of Michigan Survey Research Center 478
by Grant Benson and Frost Hubbard
Acknowledgments and Disclaimers 484
References 484

Section 5 Estimation and Analysis 487

22 Analytic Error as an Important Component of Total Survey Error: Results from a Meta-Analysis 489
by Brady T. West, Joseph W. Sakshaug, and Yumi Kim
22.1 Overview 489
22.2 Analytic Error as a Component of TSE 490
22.3 Appropriate Analytic Methods for Survey Data 492
22.4 Methods 495
22.5 Results 497
22.6 Discussion 505
Acknowledgments 508
References 508

23 Mixed-Mode Research: Issues in Design and Analysis 511
by Joop Hox, Edith de Leeuw, and Thomas Klausch
23.1 Introduction 511
23.2 Designing Mixed-Mode Surveys 512
23.3 Literature Overview 514
23.4 Diagnosing Sources of Error in Mixed-Mode Surveys 516
23.5 Adjusting for Mode Measurement Effects 523
23.6 Conclusion 527
References 528

24 The Effect of Nonresponse and Measurement Error on Wage Regression across Survey Modes: A Validation Study 531
by Antje Kirchner and Barbara Felderer
24.1 Introduction 531
24.2 Nonresponse and Response Bias in Survey Statistics 532
24.3 Data and Methods 534
24.4 Results 541
24.5 Summary and Conclusion 546
Acknowledgments 547
Appendix 24.A 548
Appendix 24.B 549
References 554

25 Errors in Linking Survey and Administrative Data 557
by Joseph W. Sakshaug and Manfred Antoni
25.1 Introduction 557
25.2 Conceptual Framework of Linkage and Error Sources 559
25.3 Errors Due to Linkage Consent 561
25.4 Erroneous Linkage with Unique Identifiers 565
25.5 Erroneous Linkage with Nonunique Identifiers 567
25.6 Applications and Practical Guidance 568
25.7 Conclusions and Take-Home Points 571
References 571

Index 575
Paul P. Biemer, PhD, is distinguished fellow at RTI International and associate director of Survey Research and Development at the Odum Institute, University of North Carolina, USA.

Edith de Leeuw, PhD, is professor of survey methodology in the Department of Methodology and Statistics at Utrecht University, the Netherlands.

Stephanie Eckman, PhD, is fellow at RTI International, USA.

Brad Edwards is vice president, director of Field Services, and deputy area director at Westat, USA.

Frauke Kreuter, PhD, is professor and director of the Joint Program in Survey Methodology, University of Maryland, USA; professor of statistics and methodology at the University of Mannheim, Germany; and head of the Statistical Methods Research Department at the Institute for Employment Research, Germany.

Lars E. Lyberg, PhD, is senior advisor at Inizio, Sweden.

N. Clyde Tucker, PhD, is principal survey methodologist at the American Institutes for Research, USA.

Brady T. West, PhD, is research associate professor in the Survey Research Center, located within the Institute for Social Research at the University of Michigan (U-M), and also serves as statistical consultant on the Consulting for Statistics, Computing and Analytics Research (CSCAR) team at U-M, USA.

You might also be interested in these products:

Statistics for Microarrays
By: Ernst Wit, John McClure
PDF ebook
90,99 €