Cover Page

Improvement Science in Evaluation: Methods and Uses



Christina A. Christie
Moira Inkelas
Sebastian Lemire

Editors

New Directions for Evaluation

Sponsored by the American Evaluation Association

EDITOR-IN-CHIEF

Paul R. Brandon
University of Hawai‘i at Mānoa

Associate Editors

J. Bradley Cousins
University of Ottawa
Lois-ellin Datta
Datta Analysis

Editorial Advisory Board

Anna Ah Sam
University of Hawai‘i at Mānoa
Michael Bamberger
Independent consultant
Gail Barrington
Barrington Research Group, Inc.
Fred Carden
International Development Research Centre
Thomas Chapel
Centers for Disease Control and Prevention
Leslie Cooksy
Sierra Health Foundation
Fiona Cram
Katoa Ltd.
Peter Dahler-Larsen
University of Southern Denmark
E. Jane Davidson
Real Evaluation Ltd.
Stewart Donaldson
Claremont Graduate University
Jody Fitzpatrick
University of Colorado Denver
Deborah M. Fournier
Boston University
Jennifer Greene
University of Illinois at Urbana-Champaign
Melvin Hall
Northern Arizona University
George M. Harrison
University of Hawai‘i at Mānoa
Gary Henry
Vanderbilt University
Rodney Hopson
George Mason University
George Julnes
University of Baltimore
Jean King
University of Minnesota
Saville Kushner
University of Auckland
Robert Lahey
REL Solutions Inc.
Miri Levin-Rozalis
Ben Gurion University of the Negev and Davidson Institute at the Weizmann Institute of Science
Laura Leviton
Robert Wood Johnson Foundation
Melvin Mark
Pennsylvania State University
Sandra Mathison
University of British Columbia
Robin Lin Miller
Michigan State University
Michael Morris
University of New Haven
Debra Rog
Westat and the Rockville Institute
Patricia Rogers
Royal Melbourne Institute of Technology
Mary Ann Scheirer
Scheirer Consulting
Robert Schwarz
University of Toronto
Lyn Shulha
Queen's University
Nick L. Smith
Syracuse University
Sanjeev Sridharan
University of Toronto
Monica Stitt-Bergh
University of Hawai‘i at Mānoa

Editorial Policy and Procedures

New Directions for Evaluation, a quarterly sourcebook, is an official publication of the American Evaluation Association. The journal publishes works on all aspects of evaluation, with an emphasis on presenting timely and thoughtful reflections on leading‐edge issues of evaluation theory, practice, methods, the profession, and the organizational, cultural, and societal context within which evaluation occurs. Each issue of the journal is devoted to a single topic, with contributions solicited, organized, reviewed, and edited by one or more guest editors.

The editor‐in‐chief is seeking proposals for journal issues from around the globe about topics new to the journal (although topics discussed in the past can be revisited). A diversity of perspectives and creative bridges between evaluation and other disciplines, as well as chapters reporting original empirical research on evaluation, are encouraged. A wide range of topics and substantive domains are appropriate for publication, including evaluative endeavors other than program evaluation; however, the proposed topic must be of interest to a broad evaluation audience.

Journal issues may take any of several forms. Typically they are presented as a series of related chapters, but they might also be presented as a debate; an account, with critique and commentary, of an exemplary evaluation; a feature‐length article followed by brief critical commentaries; or perhaps another form proposed by guest editors.

Submitted proposals must follow the format found via the Association's website at http://www.eval.org/Publications/NDE.asp. Proposals are sent to members of the journal's Editorial Advisory Board and to relevant substantive experts for single‐blind peer review. The process may result in acceptance, a recommendation to revise and resubmit, or rejection. The journal does not consider or publish unsolicited single manuscripts.

Before submitting proposals, all parties are asked to contact the editor‐in‐chief, who is committed to working constructively with potential guest editors to help them develop acceptable proposals. For additional information about the journal, see the “Statement of the Editor‐in‐Chief” in the Spring 2013 issue (No. 137).

Paul R. Brandon, Editor‐in‐Chief
University of Hawai‘i at Mānoa
College of Education
1776 University Avenue
Castle Memorial Hall, Rm. 118
Honolulu, HI 96822-2463
e‐mail: nde@eval.org

Editors’ Notes

Evaluation is about change. As Carol Weiss reminds us, “Evaluation is a practical craft, designed to help make programs work better and to allocate resources to better programs. Evaluators expect people in authority to use evaluation results to take wise action. They take satisfaction from the chance to contribute to social betterment” (1998, p. 5). Speaking to this central purpose, the evaluation literature has explored a broad range of topics related to utilization, including the conceptualization of evaluation use (Leviton & Hughes, 1981), concepts of research use (Cousins & Shulha, 2006; Hofstetter & Alkin, 2003), and empirical research on utilization (Cousins & Leithwood, 1986; Cousins & Shulha, 2006; Hofstetter & Alkin, 2003). At root, all of these contributions are about how evaluation brings about change, that is, how evaluation contributes to social betterment.

Sharing this overarching purpose, improvement science is an approach to increasing knowledge that leads to an improvement of a product, process, or system (Moen, Nolan, & Provost, 2012). Improvement science has experienced a surge of interest over the past 30 years—especially in the health sciences. Despite the rapidly expanding reach of improvement science in education, criminal justice, and social care, among other fields, published applications of improvement science are close to nonexistent in the evaluation literature. Indeed, many evaluators know little about improvement science. What is improvement science? What does improvement science look like in real-world applications? And what might we, as evaluators, learn from the theory and practice of improvement science? These and other questions are considered in this issue of New Directions for Evaluation.

The primary motivation for the issue is to promote increased cross-talk and perhaps even cross-fertilization of ideas, techniques, and tools between evaluation and improvement science. Speaking directly to this aim, there are at least four areas where this cross-fertilization is particularly relevant: learning from error, examining variation, appreciating context, and focusing on systems change.

Error is both friend and foe in evaluation. To be sure, the idea of trial and error can be traced back to early ideas of social engineering (e.g., Campbell's notion of the “experimenting society”), and the distinction between theory failure and implementation failure is a staple of theory-based evaluation. We learn from error, and evaluation is no exception. That being said, in many contract-funded evaluations of public programs, the heavy focus on outcomes and the fervent pursuit of “what works” have narrowed the room for error and, in effect, the learning that error affords. Error as foe is also evident in the designs and methods often employed in evaluation, which intentionally seek to “control,” “rule out,” or at least “adjust” for error. A great deal of potential learning is lost. From this perspective, improvement science offers a much-welcomed framework for carving out a learning space for error. The stepwise, piecemeal experimentation central to improvement science reduces the adverse consequences of error and allows for progressive, trial-and-error learning.

The importance of variation has not been lost on evaluators. Most evaluators agree that programs rarely simply work or simply fail. Even though a program may fail on average to produce positive outcomes across many contexts, there are some contexts in which it actually delivers value. Programs work for some people, under certain circumstances, and in constant interaction with local conditions. The sustained interest in what works, for whom, and under what conditions speaks to this awareness. Addressing this interest directly, improvement science offers operational guidance and concrete techniques for examining variation in outcomes and connecting it with program changes.
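To make this concrete, one widely used improvement-science technique for examining variation is the Shewhart individuals (XmR) control chart, which distinguishes routine common-cause variation from special-cause signals in a measure tracked over time. The sketch below, in Python, is purely illustrative and is not drawn from the chapters in this volume; the data and names are hypothetical, and the control limits use the conventional 2.66 multiplier on the average moving range.

    # Minimal sketch of an individuals (XmR) control chart calculation,
    # used in improvement science to separate common-cause from
    # special-cause variation. Data and names are illustrative only.
    from statistics import mean

    def xmr_limits(values):
        """Return (center line, lower limit, upper limit) for an individuals chart."""
        moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
        center = mean(values)
        avg_mr = mean(moving_ranges)
        # 2.66 = 3 / d2, where d2 = 1.128 for moving ranges of size 2
        return center, center - 2.66 * avg_mr, center + 2.66 * avg_mr

    # Hypothetical weekly outcome measure (e.g., percent of visits meeting a standard)
    weekly_rates = [62, 65, 61, 64, 66, 63, 67, 70, 72, 74, 73, 76]

    center, lcl, ucl = xmr_limits(weekly_rates)
    signals = [(week, v) for week, v in enumerate(weekly_rates, start=1)
               if v < lcl or v > ucl]
    print(f"center = {center:.1f}, limits = ({lcl:.1f}, {ucl:.1f})")
    print("points outside limits:", signals)

Points that fall outside the computed limits, or that form sustained runs on one side of the center line, suggest that a program change (rather than chance) has shifted the outcome, which is the kind of evidence improvement teams use to link changes to results.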

On a related point, and often as part of what explains variation in program outcomes, evaluators are frequently interested in the complexity of the contexts in which programs are delivered. Evaluators work in the very contexts in which problems must be understood, and so they routinely encounter complex contextual issues that largely determine the success of initiatives. Evaluations that examine program implementation and differences in success lead to better programs because variability is better understood. Grounded in decades of real-world application, improvement science offers key insights into, and practical guidelines for, addressing the complexity of context.

Systems thinking has also attracted growing interest among evaluators. The recognition that programs, and the problems they seek to address, function within broader systems is difficult to dispute. It is therefore necessary to understand the component processes of a system and how they work together, so as to understand the roots of a problem and generate innovative solutions. Sometimes quality can be improved by merely tweaking the system, that is, by making small changes that enable the system to function in context the way it was designed to function. At other times the system must be redesigned from the ground up or major components must be changed. Motivated by systemic change, improvement science is grounded in a framework for improving systems that has been highly successful in fields as diverse as the automotive industry and health care (Kenney, 2008; Rother, 2009).

With these observations as our backdrop, the chapters in this volume address issues that are critical to both improvement science and evaluation.

Chapter 1 sets the stage by considering some of the conceptual similarities and distinctions between improvement science and evaluation. Chapter 2 provides a general introduction to the intellectual foundations, methods, and tools that collectively comprise improvement science. Chapter 3 offers the purest example of improvement science in practice, showcasing how iterative cycles of development and testing can address family- and system-level barriers to primary care. The remaining chapters illustrate improvement science in a variety of contexts and the benefits and challenges of implementing it for evaluative purposes. Chapter 4 shows how a network of diverse organizations can use iterative learning cycles to generate promising ideas, test and prototype them, and spread and sustain what is found to work for a community population. Chapter 5 describes the implementation of rapid cycles of evaluation (Plan–Do–Study–Act cycles) to adapt interventions to local school contexts. Chapter 6 considers the potential value of combining improvement science and online learning. Chapter 7 concludes the volume with a set of reflections on the major benefits and implications of integrating improvement science more firmly into evaluation.

Collectively, the case chapters in this volume offer an inspiring review of state-of-the-art applications of improvement science, presenting a broad range of analytical strategies, data visualization techniques, and data collection strategies that could be applied in future evaluation contexts. Although the cases do not draw explicit connections to evaluation, several themes that cut across them speak directly to core themes in evaluation: a persistent focus on systems thinking, a determination to capture and better understand variation and contextual complexity, and a sustained commitment to generative learning about projects and programs, all issues of great concern to evaluators. The final chapter connects these themes, among others, with current trends in evaluation.

It is our hope that the volume will promote cross-talk between evaluation and improvement science, a field that continues to gain traction in an increasing range of public policy areas. From this perspective, the issue comes at just the right time to help both producers and users of evaluations see the potential benefits of a closer engagement with improvement science.

References

  1. Cousins, J. B., & Leithwood, K. A. (1986). Current empirical research on evaluation utilization. Review of Educational Research, 56, 331–364.
  2. Cousins, J. B., & Shulha, L. M. (2006). A comparative analysis of evaluation utilization and its cognate fields of inquiry: Current issues and trends. In I. Shaw, J. Greene, & M. Mark (Eds.), Handbook of evaluation: Program, policy, and practice (pp. 266–291). Thousand Oaks, CA: Sage.
  3. Hofstetter, C. H., & Alkin, M. (2003). Evaluation use revisited. In T. Kellaghan, D. L. Stufflebeam, & L. Wingate (Eds.), International handbook of educational evaluation (pp. 189–196). Boston, MA: Kluwer.
  4. Kenney, C. C. (2008). The best practice: How the new quality movement is transforming medicine. New York, NY: Public Affairs.
  5. Leviton, L. C., & Hughes, E. F. X. (1981). Research on the utilization of evaluations: A review and synthesis. Evaluation Review, 5, 525–549.
  6. Moen, R. D., Nolan, T. W., & Provost, L. P. (2012). Quality improvement through planned experimentation. New York, NY: McGraw-Hill.
  7. Rother, M. (2009). Toyota kata: Managing people for improvement, adaptiveness and superior results. New York, NY: McGraw-Hill Professional.
  8. Weiss, C. H. (1998). Evaluation (2nd ed.). Upper Saddle River, NJ: Prentice-Hall.


Christina A. Christie
Moira Inkelas
Sebastian Lemire
Editors