Details



Online Panel Research

A Data Quality Perspective
Wiley Series in Survey Methodology, 1st edition

Edited by: Mario Callegaro, Reginald P. Baker, Jelke Bethlehem, Anja S. Göritz, Jon A. Krosnick, Paul J. Lavrakas

68,99 €

Publisher: Wiley
Format: EPUB
Published: April 14, 2014
ISBN/EAN: 9781118763513
Language: English
Number of pages: 512

DRM-protected eBook; you need a reader such as Adobe Digital Editions and an Adobe ID to read it.

Description

Provides new insights into the accuracy and value of online panels for completing surveys

Over the last decade, there has been a major global shift in survey and market research towards data collection using samples selected from online panels. Yet despite their widespread use, remarkably little is known about the quality of the resulting data.

This edited volume is one of the first attempts to carefully examine the quality of the survey data being generated by online samples. It describes some of the best empirically based research on what has become a very important yet controversial method of collecting data. Online Panel Research presents 19 chapters of previously unpublished work addressing a wide range of topics, including coverage bias, nonresponse, measurement error, adjustment techniques, the relationship between nonresponse and measurement error, the impact of smartphone adoption on data collection, Internet ratings panels, and operational issues.

The datasets used to prepare the analyses reported in the chapters are available on the accompanying website: www.wiley.com/go/online_panel

- Covers controversial topics such as professional respondents, speeders, and respondent validation.
- Addresses cutting-edge topics such as the challenge of smartphone survey completion, software to manage online panels, and Internet and mobile ratings panels.
- Discusses and provides examples of comparison studies between online panels and other surveys or benchmarks.
- Describes adjustment techniques to improve sample representativeness.
- Addresses coverage, nonresponse, attrition, and the relationship between nonresponse and measurement error, with examples using data from the United States and Europe.
- Addresses practical questions such as motivations for joining an online panel and best practices for managing communications with panelists.
- Presents a meta-analysis of determinants of response quantity.
- Features contributions from 50 international authors with a wide variety of backgrounds and expertise.

This book will be an invaluable resource for opinion and market researchers, academic researchers relying on web-based data collection, governmental researchers, statisticians, psychologists, sociologists, and other research practitioners.

Table of Contents

Preface xv
Acknowledgments xvii
About the Editors xix
About the Contributors xxiii

1 Online panel research: History, concepts, applications and a look at the future 1
Mario Callegaro, Reg Baker, Jelke Bethlehem, Anja S. Göritz, Jon A. Krosnick, and Paul J. Lavrakas
1.1 Introduction 1
1.2 Internet penetration and online panels 2
1.3 Definitions and terminology 2
1.4 A brief history of online panels 4
1.5 Development and maintenance of online panels 6
1.6 Types of studies for which online panels are used 15
1.7 Industry standards, professional associations’ guidelines, and advisory groups 15
1.8 Data quality issues 17
1.9 Looking ahead to the future of online panels 17

2 A critical review of studies investigating the quality of data obtained with online panels based on probability and nonprobability samples 23
Mario Callegaro, Ana Villar, David Yeager, and Jon A. Krosnick
2.1 Introduction 23
2.2 Taxonomy of comparison studies 24
2.3 Accuracy metrics 27
2.4 Large-scale experiments on point estimates 28
2.5 Weighting adjustments 35
2.6 Predictive relationship studies 36
2.7 Experiment replicability studies 38
2.8 The special case of pre-election polls 42
2.9 Completion rates and accuracy 43
2.10 Multiple panel membership 43
2.11 Online panel studies when the offline population is less of a concern 46
2.12 Life of an online panel member 47
2.13 Summary and conclusion 48

Part I COVERAGE 55

Introduction to Part I 56
Mario Callegaro and Jon A. Krosnick

3 Assessing representativeness of a probability-based online panel in Germany 61
Bella Struminskaya, Lars Kaczmirek, Ines Schaurer, and Wolfgang Bandilla
3.1 Probability-based online panels 61
3.2 Description of the GESIS Online Panel Pilot 62
3.3 Assessing recruitment of the Online Panel Pilot 66
3.4 Assessing data quality: Comparison with external data 68
3.5 Results 74
3.6 Discussion and conclusion 80

4 Online panels and validity: Representativeness and attrition in the Finnish eOpinion panel 86
Kimmo Grönlund and Kim Strandberg
4.1 Introduction 86
4.2 Online panels: Overview of methodological considerations 87
4.3 Design and research questions 88
4.4 Data and methods 90
4.5 Findings 92
4.6 Conclusion 100

5 The untold story of multi-mode (online and mail) consumer panels: From optimal recruitment to retention and attrition 104
Allan L. McCutcheon, Kumar Rao, and Olena Kaminska
5.1 Introduction 104
5.2 Literature review 107
5.3 Methods 108
5.4 Results 115
5.5 Discussion and conclusion 124

Part II NONRESPONSE 127

Introduction to Part II 128
Jelke Bethlehem and Paul J. Lavrakas

6 Nonresponse and attrition in a probability-based online panel for the general population 135
Peter Lugtig, Marcel Das, and Annette Scherpenzeel
6.1 Introduction 135
6.2 Attrition in online panels versus offline panels 137
6.3 The LISS panel 139
6.4 Attrition modeling and results 142
6.5 Comparison of attrition and nonresponse bias 148
6.6 Discussion and conclusion 150

7 Determinants of the starting rate and the completion rate in online panel studies 154
Anja S. Göritz
7.1 Introduction 154
7.2 Dependent variables 155
7.3 Independent variables 156
7.4 Hypotheses 156
7.5 Method 163
7.6 Results 164
7.7 Discussion and conclusion 166

8 Motives for joining nonprobability online panels and their association with survey participation behavior 171
Florian Keusch, Bernad Batinic, and Wolfgang Mayerhofer
8.1 Introduction 171
8.2 Motives for survey participation and panel enrollment 173
8.3 Present study 176
8.4 Results 179
8.5 Conclusion 185

9 Informing panel members about study results: Effects of traditional and innovative forms of feedback on participation 192
Annette Scherpenzeel and Vera Toepoel
9.1 Introduction 192
9.2 Background 193
9.3 Method 196
9.4 Results 199
9.5 Discussion and conclusion 207

Part III MEASUREMENT ERROR 215

Introduction to Part III 216
Reg Baker and Mario Callegaro

10 Professional respondents in nonprobability online panels 219
D. Sunshine Hillygus, Natalie Jackson, and McKenzie Young
10.1 Introduction 219
10.2 Background 220
10.3 Professional respondents and data quality 221
10.4 Approaches to handling professional respondents 223
10.5 Research hypotheses 224
10.6 Data and methods 225
10.7 Results 226
10.8 Satisficing behavior 229
10.9 Discussion 232

11 The impact of speeding on data quality in nonprobability and freshly recruited probability-based online panels 238
Robert Greszki, Marco Meyer, and Harald Schoen
11.1 Introduction 238
11.2 Theoretical framework 239
11.3 Data and methodology 242
11.4 Response time as indicator of data quality 243
11.5 How to measure "speeding"? 246
11.6 Does speeding matter? 251
11.7 Conclusion 257

Part IV WEIGHTING ADJUSTMENTS 263

Introduction to Part IV 264
Jelke Bethlehem and Mario Callegaro

12 Improving web survey quality: Potentials and constraints of propensity score adjustments 273
Stephanie Steinmetz, Annamaria Bianchi, Kea Tijdens, and Silvia Biffignandi
12.1 Introduction 273
12.2 Survey quality and sources of error in nonprobability web surveys 274
12.3 Data, bias description, and PSA 277
12.4 Results 284
12.5 Potentials and constraints of PSA to improve nonprobability web survey quality: Conclusion 286

13 Estimating the effects of nonresponses in online panels through imputation 299
Weiyu Zhang
13.1 Introduction 299
13.2 Method 302
13.3 Measurements 303
13.4 Findings 303
13.5 Discussion and conclusion 308

Part V NONRESPONSE AND MEASUREMENT ERROR 311

Introduction to Part V 312
Anja S. Göritz and Jon A. Krosnick

14 The relationship between nonresponse strategies and measurement error: Comparing online panel surveys to traditional surveys 313
Neil Malhotra, Joanne M. Miller, and Justin Wedeking
14.1 Introduction 313
14.2 Previous research and theoretical overview 314
14.3 Does interview mode moderate the relationship between nonresponse strategies and data quality? 317
14.4 Data 318
14.5 Measures 320
14.6 Results 324
14.7 Discussion and conclusion 332

15 Nonresponse and measurement error in an online panel: Does additional effort to recruit reluctant respondents result in poorer quality data? 337
Caroline Roberts, Nick Allum, and Patrick Sturgis
15.1 Introduction 337
15.2 Understanding the relation between nonresponse and measurement error 338
15.3 Response propensity and measurement error in panel surveys 341
15.4 The present study 342
15.5 Data 343
15.6 Analytical strategy 344
15.7 Results 350
15.8 Discussion and conclusion 357

Part VI SPECIAL DOMAINS 363

Introduction to Part VI 364
Reg Baker and Anja S. Göritz

16 An empirical test of the impact of smartphones on panel-based online data collection 367
Frank Drewes
16.1 Introduction 367
16.2 Method 369
16.3 Results 371
16.4 Discussion and conclusion 385

17 Internet and mobile ratings panels 387
Philip M. Napoli, Paul J. Lavrakas, and Mario Callegaro
17.1 Introduction 387
17.2 History and development of Internet ratings panels 388
17.3 Recruitment and panel cooperation 390
17.4 Compliance and panel attrition 394
17.5 Measurement issues 396
17.6 Long tail and panel size 398
17.7 Accuracy and validation studies 400
17.8 Statistical adjustment and modeling 401
17.9 Representative research 402
17.10 The future of Internet audience measurement 403

Part VII OPERATIONAL ISSUES IN ONLINE PANELS 409

Introduction to Part VII 410
Paul J. Lavrakas and Anja S. Göritz

18 Online panel software 413
Tim Macer
18.1 Introduction 413
18.2 What does online panel software do? 414
18.3 Survey of software providers 415
18.4 A typology of panel research software 416
18.5 Support for the different panel software typologies 417
18.6 The panel database 418
18.7 Panel recruitment and profile data 421
18.8 Panel administration 423
18.9 Member portal 425
18.10 Sample administration 428
18.11 Data capture, data linkage and interoperability 430
18.12 Diagnostics and active panel management 433
18.13 Conclusion and further work 436

19 Validating respondents’ identity in online samples: The impact of efforts to eliminate fraudulent respondents 441
Reg Baker, Chuck Miller, Dinaz Kachhi, Keith Lange, Lisa Wilding-Brown, and Jacob Tucker
19.1 Introduction 441
19.2 The 2011 study 443
19.3 The 2012 study 444
19.4 Results 446
19.5 Discussion 449
19.6 Conclusion 450

References 451
Appendix 19.A 452
Index 457

About the Editors

Mario Callegaro, Survey Research Scientist, Quantitative Marketing, Google Inc., UK
Reg Baker, President & Chief Operating Officer, Market Strategies International, USA
Paul J. Lavrakas, Research Psychologist/Research Methodologist, Nielsen Media Research, USA
Jon A. Krosnick, Professor of Political Science, Communication, and Psychology, Stanford University, USA
Jelke Bethlehem, Department of Quantitative Economics, University of Amsterdam, The Netherlands
Anja Göritz, Department of Economics and Social Psychology, University of Erlangen-Nuremberg, Germany

You might also be interested in these products:

Statistics for Microarrays
By: Ernst Wit, John McClure
PDF ebook
90,99 €