Experimental Methods in Survey Research

Techniques that Combine Random Sampling with Random Assignment

Edited by

Paul J. Lavrakas

NORC, IL, US

 

Michael W. Traugott

University of Michigan, MI, US

 

Courtney Kennedy

Pew Research Center, DC, US

 

Allyson L. Holbrook

University of Illinois-Chicago, IL, US

 

Edith D. de Leeuw

University of Utrecht, NL

 

Brady T. West

University of Michigan, MI, US

 


List of Contributors

 

Katrin Auspurg

Ludwig Maximilian University of Munich

Munich, Germany

 

Paul Beatty

Center for Behavioral Science Methods

U.S. Census Bureau

Washington, DC, USA

 

A. Bianchi

Department of Management, Economics and Quantitative Methods

University of Bergamo

via dei Caniana 2, 24127 Bergamo, Italy

 

S. Biffignandi

Department of Management, Economics and Quantitative Methods

University of Bergamo

via dei Caniana 2, 24127 Bergamo, Italy

 

Philip S. Brenner

Department of Sociology and Center for Survey Research

University of Massachusetts

100 Morrissey Blvd., Boston, MA 02125, USA

 

Alexandru Cernat

Social Statistics Department

University of Manchester

Manchester, M13 9PL, UK

 

David J. Ciuk

Department of Government

Franklin & Marshall College

Lancaster, PA, USA

 

Carol Cosenza

Center for Survey Research

University of Massachusetts

Boston, MA, USA

 

Mathew J. Creighton

School of Sociology, Geary Institute for Public Policy

University College Dublin

Stillorgan Road, Dublin 4, Ireland

 

Andrew W. Crosby

Department of Public Administration

Pace University

New York, NY 10038, USA

 

Richard Curtin

Institute for Social Research

University of Michigan

Ann Arbor, MI 48106, USA

 

Edith D. de Leeuw

Department of Methodology & Statistics

Utrecht University

Utrecht, the Netherlands

 

Stefanie Eifler

Department of Sociology

Catholic University of Eichstätt‐Ingolstadt

85072 Eichstätt, Germany

 

Mahmoud Elkasabi

ICF, The Demographic and Health Surveys Program

Rockville, MD 20850, USA

 

Floyd J. Fowler Jr

Center for Survey Research

University of Massachusetts

Boston, MA, USA

 

Marek Fuchs

Darmstadt University of Technology

Institute of Sociology

Karolinenplatz 5, 64283 Darmstadt, Germany

 

Thomas Hinz

University of Konstanz

Konstanz, Germany

 

Allyson L. Holbrook

Departments of Public Administration and Psychology and the Survey Research Laboratory

University of Illinois at Chicago

Chicago, IL 60607, USA

 

Joop Hox

Department of Methodology and Statistics

University of Utrecht

Utrecht, the Netherlands

 

Ronaldo Iachan

ICF International

Fairfax, VA 22031, USA

 

Annette Jäckle

University of Essex

Institute for Social and Economic Research

Colchester CO4 3SQ, UK

 

Timothy P. Johnson

Department of Public Administration and the Survey Research Laboratory

University of Illinois at Chicago

Chicago, IL 60607, USA

 

Jenny Kelly

NORC, University of Chicago

55 East Monroe Street, Chicago, IL 60603, USA

 

Courtney Kennedy

Pew Research Center

Washington, DC, USA

 

Florian Keusch

Department of Sociology, School of Social Sciences

University of Mannheim

Mannheim, Germany

 

Samara Klar

School of Government & Public Policy

University of Arizona

Tucson, AZ 85721, USA

 

Maria Krysan

Department of Sociology and the Institute of Government & Public Affairs

University of Illinois at Chicago

Chicago, IL, USA

 

Tanja Kunz

Leibniz‐Institute for the Social Sciences

P.O. Box 12 21 55, 68072 Mannheim, Germany

 

Paul J. Lavrakas

NORC, University of Chicago

55 East Monroe Street, Chicago, IL 60603, USA

 

Thomas J. Leeper

Department of Methodology

London School of Economics and Political Science

London WC2A 2AE, UK

 

James M. Lepkowski

Institute for Social Research

University of Michigan

Ann Arbor, MI 48106, USA

 

Mingnan Liu

Facebook, Inc.

Menlo Park, CA, USA

 

Peter Lynn

University of Essex

Institute for Social and Economic Research

Colchester CO4 3SQ, UK

 

Jaki S. McCarthy

United States Department of Agriculture

National Agricultural Statistics Service (USDA/NASS)

1400 Independence Avenue, SW, Washington, DC 20250‐2054, USA

 

Colleen McClain

Program in Survey Methodology at the University of Michigan, Institute for Social Research

426 Thompson Street, Ann Arbor, MI 48104, USA

 

Daniel L. Oberski

Department of Methodology & Statistics

Utrecht University

Utrecht, 3584 CH, the Netherlands

 

Kristen Olson

Department of Sociology

University of Nebraska‐Lincoln

Lincoln, NE, USA

 

Colm O'Muircheartaigh

Harris School of Public Policy

University of Chicago and NORC at the University of Chicago

Chicago, IL, USA

 

Linda K. Owens

Stephens Family Clinical Research Institute

Carle Foundation Hospital, 611 W. Park Street

Urbana, IL 61801, USA

 

Jennifer A. Parsons

Survey Research Laboratory

University of Illinois at Chicago

Chicago, IL, USA

 

Knut Petzold

Sociology Section

Ruhr‐Universität Bochum

44801 Bochum, Germany

 

Annette Scherpenzeel

Chair for Economics of Aging, School of Management

Technical University of Munich

Munich, Germany

 

Peter Schmidt

Department of Political Science and Centre for Environment and Development (ZEU)

University of Giessen

Karl‐Glöcknerstrasse 21 E, 35394 Giessen, Germany

 

Barbara Schneider

College of Education and Sociology Department

Michigan State University

East Lansing, MI 48824, USA

 

Stephen Smith

NORC at the University of Chicago

Chicago, IL, USA

 

Tom W. Smith

Center for the Study of Politics and Society

NORC at the University of Chicago

1155 East 60th Street, Chicago, IL 60637, USA

 

Jolene D. Smyth

Department of Sociology

University of Nebraska‐Lincoln

Lincoln, NE, USA

 

Jaesok Son

Center for the Study of Politics and Society

NORC at the University of Chicago

1155 East 60th Street, Chicago, IL 60637, USA

 

Mathew Stange

Mathematica Policy Research

Ann Arbor, MI, USA

 

Marina Stavrakantonaki

Department of Public Administration

University of Illinois at Chicago

Chicago, IL 60607, USA

 

David Sterrett

NORC at the University of Chicago

Chicago, IL 60603, USA

 

Z. Tuba Suzer‐Gurtekin

Institute for Social Research

University of Michigan

Ann Arbor, MI 48106, USA

 

Elizabeth Tipton

Human Development Department, Teachers College

Columbia University

New York, NY 10027, USA

and

Statistics Department

Northwestern University

Evanston, IL 60201, USA

 

Michael W. Traugott

Center for Political Studies

Institute for Social Research, University of Michigan

Ann Arbor, MI, USA

 

Jan A. van den Brakel

Department of Statistical Methods

Statistics Netherlands

Heerlen, the Netherlands

and

Department of Quantitative Economics

Maastricht University School of Business and Economics

Maastricht, the Netherlands

 

Susanne Vogl

Department of Education

Department of Sociology, University of Vienna

Vienna, Austria

 

Sandra Walzenbach

University of Konstanz,

Konstanz, Germany

and

ISER/University of Essex

Colchester, UK

 

Xiaoheng Wang

Department of Public Administration

University of Illinois at Chicago

Chicago, IL 60607, USA

 

Brady T. West

Survey Research Center

Institute for Social Research, University of Michigan

Ann Arbor, MI, USA

 

Diane K. Willimack

United States Department of Commerce

U.S. Census Bureau

4600 Silver Hill Road, Washington, DC 20233–0001, USA

 

Jaclyn S. Wong

Department of Sociology

University of South Carolina

Columbia, SC, USA

 

Ting Yan

Westat

Rockville, MD, USA

 

David S. Yeager

Psychology Department

University of Texas at Austin

Austin, TX 78712, USA

 

Berwood A. Yost

Franklin & Marshall College

Floyd Institute for Public Policy and Center for Opinion Research

Lancaster, PA, USA

 

Diana Zavala‐Rojas

Research and Expertise Centre for Survey Methodology and European Social Survey ERIC, Department of Political and Social Sciences

Universitat Pompeu Fabra

C/de Ramon Trias Fargas, 25‐27, 08005 Barcelona, Spain

 

Tianshu Zhao

Department of Public Administration

University of Illinois at Chicago

Chicago, IL 60607, USA

Preface by Dr. Judith Tanur

I am enormously flattered to be asked to supply a preface for this path‐breaking volume of essays about experiments embedded in surveys. I had hoped to contribute a chapter to the volume with my friend and long‐term collaborator, Stephen Fienberg, but Steve, active to the end, lost his long battle with cancer before he was able to make the time to work on the chapter we had planned to write. So, I would like to use this opportunity to write something about the work we had done, and had hoped to continue, on the parallels between experimental and survey methodology, on experiments embedded in surveys (and vice‐versa), and some of the considerations for analysis occasioned by such embedding.

We had long noted (e.g. Fienberg and Tanur 1987, 1988, 1989, 1996) that there are a great many parallels between elements of survey design and experimental design. Although in fact surveys and experiments had developed very long and independent traditions by the start of the twentieth century, it was only with the rise of ideas associated with mathematical statistics in the 1920s that the tools for major progress in these areas became available. The key intellectual idea was the role of randomization or random selection, both in experimentation and in sampling, and both R.A. Fisher and Jerzy Neyman utilized that idea, although in different ways. The richness of the two separate literatures continues to offer new opportunities for cross‐fertilization of theory and tools for survey practice in particular.

Although the ideas are parallel, often the purpose these ideas serve is different in the two domains. For example, experimental randomization is used in experimental design in order to justify the assumption that the experimental and control groups are equivalent a priori, but it finds its analog in probability sampling in surveys in order to assure the “representativeness” of the sample vis‐à‐vis the population to which generalizations are to be made. On the other hand, similar processes to create homogeneous groups occur in both experimental and sampling designs and serve similar purposes. In experimentation, blocking creates homogeneous groups, exerting control of experimental error by segregating the effects of extraneous sources of variation. In sampling, stratification serves the same purpose by drawing sample members from each of the homogeneous groups into which the population can be divided. The list could be lengthened (as indeed, Steve and I did in the papers cited above) to include such parallels in design as those between Latin and Graeco‐Latin squares on the one hand and lattice sampling or “deep stratification” on the other, and between split plot designs and cluster sampling. In the analysis stage, we pointed to the parallel between covariance adjustment in experiments and poststratification in surveys. We embarked on a project to find and describe more modern parallels and to encourage researchers in each of these fields to look in the literature of the other for ideas about design and analysis.
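The blocking/stratification parallel can be illustrated with a small simulation. This is a minimal sketch using made‐up numbers, not data from any study discussed here: a population is split into two homogeneous groups, and a stratified sample of 50 estimates the population mean far more precisely than a simple random sample of the same size, because the large between‐group variation is removed from the sampling error.

```python
import random
import statistics

random.seed(42)

# A toy population: two homogeneous groups ("strata") with different
# baseline outcomes, plus unit-level noise.
population = ([("A", 10 + random.gauss(0, 1)) for _ in range(500)]
              + [("B", 20 + random.gauss(0, 1)) for _ in range(500)])

def srs_mean(pop, n):
    """Estimate the population mean from a simple random sample of size n."""
    return statistics.mean(y for _, y in random.sample(pop, n))

def stratified_mean(pop, n):
    """Estimate the population mean, drawing n/2 from each (equal-sized) group."""
    a = [y for g, y in pop if g == "A"]
    b = [y for g, y in pop if g == "B"]
    return (statistics.mean(random.sample(a, n // 2))
            + statistics.mean(random.sample(b, n // 2))) / 2

# Repeat each design many times and compare the spread of the estimates.
srs = [srs_mean(population, 50) for _ in range(2000)]
strat = [stratified_mean(population, 50) for _ in range(2000)]

print(statistics.stdev(strat) < statistics.stdev(srs))
```

Blocking in an experiment plays the same role: assigning treatments within homogeneous blocks removes the block‐to‐block variation from the comparison of treatment means.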

We wrote a series of papers and envisioned a book, Reaching Conclusions: The Role of Randomized Experiments and Sample Surveys, that will now, unfortunately, remain a draft. The book explored the ramifications of these parallels, pointed out newer and less obvious parallels between experiments and surveys, and noted the frequent embedding of experiments in surveys and vice versa. In particular, we urged – sometimes by example – that when such embedding took place, the analyst should take advantage of the embedded structure in planning and carrying out the analysis.

In our 1989 Science paper, we addressed these issues of analysis of embedded experiments, reiterating the point that although there are formal parallels between the structures of surveys and experiments there are fundamental inferential differences as described above. We pointed out three inferential stances that could be used. In the context of this volume, it is perhaps worth quoting that discussion at some length (p. 243).

(1)  One can use the standard experiment paradigm, which relies largely on internal validity based on randomization and local control (for example, the device of blocking) and on the assumption that the unique effects of experimental units and the treatment effects can be expressed in a simple additive form, without interaction (Fisher 1935, Ch. 4; Kempthorne 1952, Ch. 9). Then inference focuses on within‐experiment treatment differences.

(2)  One can use the standard sampling paradigm, which, for a two‐treatment experiment embedded in a survey relies largely on external validity and generalizes the observations for each of the treatments to separate but paired populations of values. Each unit or individual in the original population from which the sample was drawn is conceived to have a pair of values, one for each treatment. But only one of these is observable, depending on which treatment is given. Then inferences focus on the mean difference or the difference in the means of the two populations.

(3)  One can conceptualize a population of experiments, of which the present embedded experiment is a unit or sample of units, and thus capitalize on the internal validity created by the design of the present embedded experiment as well as the external validity created by the generalization from the present experiment to the conceptual population of experiments. Then inferences focus on treatment differences in a broader context than simply the present embedded experiment.

Then we took advantage of the generosity of Roger Tourangeau and Kenneth Rasinski to reanalyze data they had collected in a probability sample survey on context effects in attitude surveys (1989). They had crossed four issues at differing levels of familiarity with four orders of presentation (balanced in the form of a Latin square), two versions of the context questions, and two methods of structuring the context question. Thus, there were 16 versions of the questionnaire plus two extra versions with neutral context questions. These 18 versions of the questionnaire were crossed with four interviewers. We considered the interviewers as blocks, and so within each block we had five replications of an 18‐treatment experiment, where 16 of the treatments represent a 4 × 2 × 2 factorial design. Focusing separately on two of the issues (abortion and welfare), we had four treatment combinations (positive vs. negative contexts by scattered vs. massed structure of the context‐setting questions). The context‐setting questions for the abortion question dealt with women's rights or traditional values, while those for the welfare question concerned fraud and waste in government programs or government responsibility to provide services. Using logit models to predict a favorable response on the target question and contrasting the four basic treatment combinations with the neutral context, we carried out detailed analyses and found very complicated results. I shall try to sketch only a fraction of the results of those analyses – I urge the reader with a serious interest in these issues to refer to the original paper (Fienberg and Tanur 1989). In short, we found that context matters – but its effect depends on how often the respondent agrees with the context‐setting questions. And we found that all three of the inferential stances detailed above gave similar results for the abortion question, while the first and third gave similar results for the welfare question.

This volume concentrates on the embedding of experiments within surveys, often to answer questions about survey methodology as did the Tourangeau/Rasinski experiment discussed above. I would guess that such procedures are the most common uses of embedding. But perhaps it is worth bearing in mind that there are many good examples of the reverse – the embedding of surveys in experiments – and such embedded surveys often serve a more substantive purpose. Particularly good examples were the massive social experiments of the mid‐twentieth century, such as the Negative Income Tax experiments and the Health Insurance Study. Many of the measurements of the outcome variables in those randomized experiments were necessarily carried out via surveys of the participants.

I find the contents of this volume fascinating, both for the broad sweep of the topics examined and for the variety of disciplines represented by the contributing authors and what I know of their expertise. I look forward to reading many of the articles.

I wish Steve's work could have been included in a more specific and current way than I have been able to achieve in this preface – he had plans for updating that I am not able to carry out. In that context, I am especially pleased to note the inclusion of a chapter by Jan van den Brakel, whose work I know Steve admired and whose chapter I expect will contain much of the new material that would have informed our chapter had Steve lived to complete it.

Prolific as Steve was, he left so much good work unfinished and so much more not even yet contemplated. I miss him.

Judith Tanur

Montauk, New York

December 2017

References

  1. Fienberg, S.E. and Tanur, J.M. (1987). Experimental and sampling structures: parallels diverging and meeting. International Statistical Review 55 (1): 75–96.
  2. Fienberg, S.E. and Tanur, J.M. (1988). From the inside out and the outside in: combining experimental and sampling structures. Canadian Journal of Statistics 16: 135–151.
  3. Fienberg, S.E. and Tanur, J.M. (1989). Combining cognitive and statistical approaches to survey design. Science 243: 1017–1022.
  4. Fienberg, S.E. and Tanur, J.M. (1996). Reconsidering the fundamental contributions of Fisher and Neyman on experimentation and sampling. International Statistical Review 64: 237–253.
  5. Fisher, R.A. (1935). The Design of Experiments. Edinburgh: Oliver and Boyd.
  6. Kempthorne, O. (1952). The Design and Analysis of Experiments. New York: Wiley.
  7. Tourangeau, R. and Rasinski, K.A. (1989). Carryover effects in attitude surveys. Public Opinion Quarterly 53: 495–524.

About the Companion Website

This book is accompanied by a companion website:

www.wiley.com/go/Lavrakas/survey-research


The website includes:

  1. Letters
  2. Brochures
  3. Appendices