
Experimental Methods in Survey Research



Techniques that Combine Random Sampling with Random Assignment
Wiley Series in Survey Methodology, 1st edition

by: Paul J. Lavrakas, Michael W. Traugott, Courtney Kennedy, Allyson L. Holbrook, Edith D. de Leeuw, Brady T. West

103,99 €

Publisher: Wiley
Format: PDF
Published: 01.10.2019
ISBN/EAN: 9781119083757
Language: English
Number of pages: 544

DRM-protected eBook. To read it you will need, for example, Adobe Digital Editions and an Adobe ID.

Description

<p><b>A thorough and comprehensive guide to the theoretical, practical, and methodological approaches used in survey experiments across disciplines such as political science, health sciences, sociology, economics, psychology, and marketing</b></p> <p>This book explores and explains the broad range of experimental designs embedded in surveys that use both probability and non-probability samples. It approaches the use of survey-based experiments from a Total Survey Error (TSE) perspective, which provides insight into the strengths and weaknesses of the techniques used.</p> <p><i>Experimental Methods in Survey Research: Techniques that Combine Random Sampling with Random Assignment</i> addresses experiments on within-unit coverage, reducing nonresponse, question and questionnaire design, minimizing interviewer measurement bias, using adaptive design, trend data, vignettes, the analysis of data from survey experiments, and other topics, across social, behavioral, and marketing science domains.</p> <p>Each chapter begins with a description of the experimental method or application and its importance, followed by references to the relevant literature. At least one detailed original experimental case study then follows to illustrate the method’s deployment, implementation, and analysis from a TSE perspective. Each chapter concludes with the theoretical and practical implications of using the experimental method addressed. 
In summary, this book:</p> <ul> <li>Fills a gap in the current literature by combining the subjects of survey methodology and experimental methodology in an effort to maximize both internal validity and external validity</li> <li>Offers a wide range of types of experimentation in survey research, with in-depth attention to their various methodologies and applications</li> <li>Is edited by internationally recognized experts in survey research and methodology and in the use of survey-based experimentation, featuring contributions from across a variety of disciplines in the social and behavioral sciences</li> <li>Presents advances in the field of survey experiments, as well as relevant references in each chapter for further study</li> <li>Includes more than 20 types of original experiments carried out within probability sample surveys</li> <li>Addresses the many practical and operational aspects of designing, implementing, and analyzing survey-based experiments, using a Total Survey Error perspective to assess the strengths and weaknesses of each experimental technique and method</li> </ul> <p><i>Experimental Methods in Survey Research: Techniques that Combine Random Sampling with Random Assignment</i> is an ideal reference for survey researchers and practitioners in areas such as political science, health sciences, sociology, economics, psychology, public policy, data collection, data science, and marketing. It is also a very useful textbook for graduate-level courses on survey experiments and survey methodology.</p>
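<p>The technique named in the book's subtitle can be sketched in a few lines: first draw a probability sample from a frame (so every unit has a known chance of selection, supporting external validity), then randomly assign the sampled units to experimental conditions (supporting internal validity). The frame size, sample size, and condition names below are purely illustrative and are not taken from the book:</p>

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical sampling frame of 10,000 addresses (illustrative only)
frame = [f"address_{i}" for i in range(10_000)]

# Step 1: probability sampling -- a simple random sample of n = 500,
# giving every frame unit a known, equal chance of selection
sample = random.sample(frame, k=500)

# Step 2: random assignment -- each sampled unit is independently
# assigned to one of two questionnaire versions (the "treatments")
assignment = {unit: random.choice(["version_A", "version_B"])
              for unit in sample}

n_a = sum(1 for v in assignment.values() if v == "version_A")
print(f"Sampled {len(sample)} units; {n_a} assigned to version_A")
```

<p>In practice the sampling step is rarely this simple (stratification, clustering, and weighting all complicate the design, which is exactly what the TSE perspective in this book addresses), but the two-step logic is the same.</p>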
<p>List of Contributors xix</p> <p>Preface by <i>Dr. Judith Tanur</i> xxv</p> <p>About the Companion Website xxix</p> <p><b>1 Probability Survey-Based Experimentation and the Balancing of Internal and External Validity Concerns 1<br /></b><i>Paul J. Lavrakas, Courtney Kennedy, Edith D. de Leeuw, Brady T. West, Allyson L. Holbrook, </i><i>and Michael W. Traugott</i></p> <p>1.1 Validity Concerns in Survey Research 3</p> <p>1.2 Survey Validity and Survey Error 5</p> <p>1.3 Internal Validity 6</p> <p>1.4 Threats to Internal Validity 8</p> <p>1.5 External Validity 11</p> <p>1.6 Pairing Experimental Designs with Probability Sampling 12</p> <p>1.7 Some Thoughts on Conducting Experiments with Online Convenience Samples 12</p> <p>1.8 The Contents of this Book 15</p> <p>References 15</p> <p><b>Part I Introduction to Section on Within-Unit Coverage 19<br /></b><i>Paul J. Lavrakas and Edith D. de Leeuw</i></p> <p><b>2 Within-Household Selection Methods: A Critical Review and Experimental Examination 23<br /></b><i>Jolene D. 
Smyth, Kristen Olson, and Mathew Stange</i></p> <p>2.1 Introduction 23</p> <p>2.2 Within-Household Selection and Total Survey Error 24</p> <p>2.3 Types of Within-Household Selection Techniques 24</p> <p>2.4 Within-Household Selection in Telephone Surveys 25</p> <p>2.5 Within-Household Selection in Self-Administered Surveys 26</p> <p>2.6 Methodological Requirements of Experimentally Studying Within-Household Selection Methods 27</p> <p>2.7 Empirical Example 30</p> <p>2.8 Data and Methods 31</p> <p>2.9 Analysis Plan 34</p> <p>2.10 Results 35</p> <p>2.11 Discussion and Conclusions 40</p> <p>References 42</p> <p><b>3 Measuring Within-Household Contamination: The Challenge of Interviewing More Than One Member of a Household 47<br /></b><i>Colm O’Muircheartaigh, Stephen Smith, and Jaclyn S. Wong</i></p> <p>3.1 Literature Review 47</p> <p>3.2 Data and Methods 50</p> <p>Investigators 53</p> <p>Field/Project Directors 53</p> <p>3.3 The Sequence of Analyses 55</p> <p>3.4 Results 55</p> <p>3.5 Effect on Standard Errors of the Estimates 57</p> <p>3.6 Effect on Response Rates 58</p> <p>3.7 Effect on Responses 61</p> <p>3.8 Substantive Results 64</p> <p>References 64</p> <p><b>Part II Survey Experiments with Techniques to Reduce Nonresponse 67<br /></b><i>Edith D. de Leeuw and Paul J. Lavrakas</i></p> <p><b>4 Survey Experiments on Interactions and Nonresponse: A Case Study of Incentives and Modes 69<br /></b><i>A. Bianchi and S. Biffignandi</i></p> <p>4.1 Introduction 69</p> <p>4.2 Literature Overview 70</p> <p>4.3 Case Study: Examining the Interaction between Incentives and Mode 73</p> <p>4.4 Concluding Remarks 83</p> <p>Acknowledgments 85</p> <p>References 86</p> <p><b>5 Experiments on the Effects of Advance Letters in Surveys 89<br /></b><i>Susanne Vogl, Jennifer A. Parsons, Linda K. Owens, and Paul J. 
Lavrakas</i></p> <p>5.1 Introduction 89</p> <p>5.2 State of the Art on Experimentation on the Effect of Advance Letters 93</p> <p>5.3 Case Studies: Experimental Research on the Effect of Advance Letters 95</p> <p>5.4 Case Study I: Violence against Men in Intimate Relationships 96</p> <p>5.5 Case Study II: The Neighborhood Crime and Justice Study 100</p> <p>5.6 Discussion 106</p> <p>5.7 Research Agenda for the Future 107</p> <p>References 108</p> <p><b>Part III Overview of the Section on the Questionnaire 111<br /></b><i>Allyson Holbrook and Michael W. Traugott</i></p> <p><b>6 Experiments on the Design and Evaluation of Complex Survey Questions 113<br /></b><i>Paul Beatty, Carol Cosenza, and Floyd J. Fowler Jr.</i></p> <p>6.1 Question Construction: Dangling Qualifiers 115</p> <p>6.2 Overall Meanings of Question Can Be Obscured by Detailed Words 117</p> <p>6.3 Are Two Questions Better than One? 119</p> <p>6.4 The Use of Multiple Questions to Simplify Response Judgments 121</p> <p>6.5 The Effect of Context or Framing on Answers 122</p> <p>6.6 Do Questionnaire Effects Vary Across Sub-groups of Respondents? 124</p> <p>6.7 Discussion 126</p> <p>References 128</p> <p><b>7 Impact of Response Scale Features on Survey Responses to Behavioral Questions 131<br /></b><i>Florian Keusch and Ting Yan</i></p> <p>7.1 Introduction 131</p> <p>7.2 Previous Work on Scale Design Features 132</p> <p>7.3 Methods 134</p> <p>7.4 Results 136</p> <p>7.5 Discussion 141</p> <p>Acknowledgment 143</p> <p>7.A Question Wording 143</p> <p>7.A.1 Experimental Questions (One Question Per Screen) 143</p> <p>7.A.2 Validation Questions (One Per Screen) 144</p> <p>7.A.3 GfK Profile Questions (Not Part of the Questionnaire) 145</p> <p>7.B Test of Interaction Effects 145</p> <p>References 146</p> <p><b>8 Mode Effects Versus Question Format Effects: An Experimental Investigation of Measurement Error Implemented in a Probability-Based Online Panel 151<br /></b><i>Edith D. 
de Leeuw, Joop Hox, and Annette Scherpenzeel</i></p> <p>8.1 Introduction 151</p> <p>8.2 Experiments and Probability-Based Online Panels 153</p> <p>8.3 Mixed-Mode Question Format Experiments 154</p> <p>8.4 Summary and Discussion 161</p> <p>Acknowledgments 162</p> <p>References 162</p> <p><b>9 Conflicting Cues: Item Nonresponse and Experimental Mortality 167<br /></b><i>David J. Ciuk and Berwood A. Yost</i></p> <p>9.1 Introduction 167</p> <p>9.2 Survey Experiments and Item Nonresponse 167</p> <p>9.3 Case Study: Conflicting Cues and Item Nonresponse 170</p> <p>9.4 Methods 170</p> <p>9.5 Issue Selection 171</p> <p>9.6 Experimental Conditions and Measures 172</p> <p>9.7 Results 173</p> <p>9.8 Addressing Item Nonresponse in Survey Experiments 174</p> <p>9.9 Summary 178</p> <p>References 179</p> <p><b>10 Application of a List Experiment at the Population Level: The Case of Opposition to Immigration in the Netherlands 181<br /></b><i>Mathew J. Creighton, Philip S. Brenner, Peter Schmidt, and Diana Zavala-Rojas</i></p> <p>10.1 Fielding the Item Count Technique (ICT) 183</p> <p>10.2 Analyzing the Item Count Technique (ICT) 185</p> <p>10.3 An Application of ICT: Attitudes toward Immigrants in the Netherlands 186</p> <p>10.4 Limitations of ICT 190</p> <p>References 192</p> <p><b>Part IV Introduction to Section on Interviewers 195<br /></b><i>Brady T. West and Edith D. de Leeuw</i></p> <p><b>11 Race- and Ethnicity-of-Interviewer Effects 197<br /></b><i>Allyson L. Holbrook, Timothy P. Johnson, and Maria Krysan</i></p> <p>11.1 Introduction 197</p> <p>11.2 The Current Research 205</p> <p>11.3 Respondents and Procedures 207</p> <p>11.4 Measures 207</p> <p>11.5 Analysis 210</p> <p>11.6 Results 211</p> <p>11.7 Discussion and Conclusion 219</p> <p>References 221</p> <p><b>12 Investigating Interviewer Effects and Confounds in Survey-Based Experimentation 225<br /></b><i>Paul J. 
Lavrakas, Jenny Kelly, and Colleen McClain</i></p> <p>12.1 Studying Interviewer Effects Using a <i>Post Hoc</i> Experimental Design 226</p> <p>12.2 Studying Interviewer Effects Using <i>A Priori</i> Experimental Designs 230</p> <p>12.3 An Original Experiment on the Effects of Interviewers Administering Only One Treatment vs. Interviewers Administering Multiple Treatments 232</p> <p>12.4 Discussion 239</p> <p>References 242</p> <p><b>Part V Introduction to Section on Adaptive Design 245<br /></b><i>Courtney Kennedy and Brady T. West</i></p> <p><b>13 Using Experiments to Assess Interactive Feedback That Improves Response Quality in Web Surveys 247<br /></b><i>Tanja Kunz and Marek Fuchs</i></p> <p>13.1 Introduction 247</p> <p>13.2 Case Studies – Interactive Feedback in Web Surveys 251</p> <p>13.3 Methodological Issues in Experimental Visual Design Studies 258</p> <p>References 269</p> <p><b>14 Randomized Experiments for Web-Mail Surveys Conducted Using Address-Based Samples of the General Population 275<br /></b><i>Z. Tuba Suzer-Gurtekin, Mahmoud Elkasabi, James M. Lepkowski, Mingnan Liu, and Richard Curtin</i></p> <p>14.1 Introduction 275</p> <p>14.2 Study Design and Methods 278</p> <p>14.3 Results 281</p> <p>14.4 Discussion 285</p> <p>References 287</p> <p><b>Part VI Introduction to Section on Special Surveys 291<br /></b><i>Michael W. Traugott and Edith D. 
de Leeuw</i></p> <p><b>15 Mounting Multiple Experiments on Longitudinal Social Surveys: Design and Implementation Considerations 293<br /></b><i>Peter Lynn and Annette Jäckle</i></p> <p>15.1 Introduction and Overview 293</p> <p>15.2 Types of Experiments that Can Be Mounted in a Longitudinal Survey 294</p> <p>15.3 Longitudinal Experiments and Experiments in Longitudinal Surveys 295</p> <p>15.4 Longitudinal Surveys that Serve as Platforms for Experimentation 296</p> <p>15.5 The Understanding Society Innovation Panel 298</p> <p>15.6 Avoiding Confounding of Experiments 299</p> <p>15.7 Allocation Procedures 301</p> <p>15.8 Refreshment Samples 304</p> <p>15.9 Discussion 305</p> <p>15.A Appendix: Stata Syntax to Produce Table 15.3 Treatment Allocations 306</p> <p>References 306</p> <p><b>16 Obstacles and Opportunities for Experiments in Establishment Surveys Supporting Official Statistics 309<br /></b><i>Diane K. Willimack and Jaki S. McCarthy</i></p> <p>16.1 Introduction 309</p> <p>16.2 Some Key Differences between Household and Establishment Surveys 310</p> <p>16.3 Existing Literature Featuring Establishment Survey Experiments 312</p> <p>16.4 Key Considerations for Experimentation in Establishment Surveys 314</p> <p>16.5 Examples of Experimentation in Establishment Surveys 318</p> <p>16.6 Discussion and Concluding Remarks 323</p> <p>Acknowledgments 324</p> <p>References 324</p> <p><b>Part VII Introduction to Section on Trend Data 327<br /></b><i>Michael W. Traugott and Paul J. Lavrakas</i></p> <p><b>17 Tracking Question-Wording Experiments across Time in the General Social Survey, 1984–2014 329<br /></b><i>Tom W. 
Smith and Jaesok Son</i></p> <p>17.1 Introduction 329</p> <p>17.2 GSS Question-Wording Experiment on Spending Priorities 330</p> <p>17.3 Experimental Analysis 330</p> <p>17.4 Summary and Conclusion 338</p> <p>17.A National Spending Priority Items 339</p> <p>References 340</p> <p><b>18 Survey Experiments and Changes in Question Wording in Repeated Cross-Sectional Surveys 343<br /></b><i>Allyson L. Holbrook, David Sterrett, Andrew W. Crosby, Marina Stavrakantonaki, Xiaoheng Wang, Tianshu Zhao, and Timothy P. Johnson</i></p> <p>18.1 Introduction 343</p> <p>18.2 Background 344</p> <p>18.3 Two Case Studies 347</p> <p>18.4 Implications and Conclusions 362</p> <p>Acknowledgments 364</p> <p>References 364</p> <p><b>Part VIII Vignette Experiments in Surveys 369<br /></b><i>Allyson Holbrook and Paul J. Lavrakas</i></p> <p><b>19 Are Factorial Survey Experiments Prone to Survey Mode Effects? 371<br /></b><i>Katrin Auspurg, Thomas Hinz, and Sandra Walzenbach</i></p> <p>19.1 Introduction 371</p> <p>19.2 Idea and Scope of Factorial Survey Experiments 372</p> <p>19.3 Mode Effects 373</p> <p>19.4 Case Study 378</p> <p>19.5 Conclusion 388</p> <p>References 390</p> <p><b>20 Validity Aspects of Vignette Experiments: Expected “What-If” Differences between Reports of Behavioral Intentions and Actual Behavior 393<br /></b><i>Stefanie Eifler and Knut Petzold</i></p> <p>20.1 Outline of the Problem 393</p> <p>20.2 Research Findings from Our Experimental Work 399</p> <p>20.3 Discussion 411</p> <p>References 413</p> <p><b>Part IX Introduction to Section on Analysis 417<br /></b><i>Brady T. West and Courtney Kennedy</i></p> <p><b>21 Identities and Intersectionality: A Case for Purposive Sampling in Survey-Experimental Research 419<br /></b><i>Samara Klar and Thomas J. Leeper</i></p> <p>21.1 Introduction 419</p> <p>21.2 Common Techniques for Survey Experiments on Identity 420</p> <p>21.3 How Limited are Representative Samples for Intersectionality Research? 
426</p> <p>21.4 Conclusions and Discussion 430</p> <p>Author Biographies 431</p> <p>References 431</p> <p><b>22 Designing Probability Samples to Study Treatment Effect Heterogeneity 435<br /></b><i>Elizabeth Tipton, David S. Yeager, Ronaldo Iachan, and Barbara Schneider</i></p> <p>22.1 Introduction 435</p> <p>22.2 Nesting a Randomized Treatment in a National Probability Sample: The NSLM 446</p> <p>22.3 Discussion and Conclusions 451</p> <p>Acknowledgments 453</p> <p>References 453</p> <p><b>23 Design-Based Analysis of Experiments Embedded in Probability Samples 457<br /></b><i>Jan A. van den Brakel</i></p> <p>23.1 Introduction 457</p> <p>23.2 Design of Embedded Experiments 458</p> <p>23.3 Design-Based Inference for Embedded Experiments with One Treatment Factor 460</p> <p>23.4 Analysis of Experiments with Clusters of Sampling Units as Experimental Units 466</p> <p>23.5 Factorial Designs 468</p> <p>23.6 A Mixed-Mode Experiment in the Dutch Crime Victimization Survey 472</p> <p>23.7 Discussion 477</p> <p>Acknowledgments 478</p> <p>References 478</p> <p><b>24 Extending the Within-Persons Experimental Design: The Multitrait-Multierror (MTME) Approach 481<br /></b><i>Alexandru Cernat and Daniel L. Oberski</i></p> <p>24.1 Introduction 481</p> <p>24.2 The Multitrait-Multierror (MTME) Framework 482</p> <p>24.3 Designing the MTME Experiment 487</p> <p>24.4 Statistical Estimation for the MTME Approach 489</p> <p>24.5 Measurement Error in Attitudes toward Migrants in the UK 491</p> <p>24.6 Results 494</p> <p>24.7 Conclusions and Future Research Directions 497</p> <p>Acknowledgments 498</p> <p>References 498</p> <p>Index 501</p>
<p><b>Paul J. Lavrakas, PhD,</b> is a Senior Fellow at NORC at the University of Chicago, Adjunct Professor at the University of Illinois-Chicago, and Senior Methodologist at the Social Research Centre of the Australian National University and at the Office for Survey Research at Michigan State University.</p> <p><b>Michael W. Traugott, PhD,</b> is Research Professor in the Institute for Social Research at the University of Michigan.</p> <p><b>Courtney Kennedy, PhD,</b> is Director of Survey Research at Pew Research Center in Washington, DC.</p> <p><b>Allyson L. Holbrook, PhD,</b> is Professor of Public Administration and Psychology at the University of Illinois-Chicago.</p> <p><b>Edith D. de Leeuw, PhD,</b> is Professor of Survey Methodology in the Department of Methodology and Statistics at Utrecht University.</p> <p><b>Brady T. West, PhD,</b> is Research Associate Professor in the Survey Research Center at the University of Michigan-Ann Arbor.</p>

You might also be interested in these products:

Statistics for Microarrays
by: Ernst Wit, John McClure
PDF eBook
90,99 €