PROGRAM EVALUATION IN PRACTICE

Core Concepts and Examples for Discussion and Analysis

Second Edition

DEAN T. SPAULDING

For Mr. Mugs, my laptop-lapdog

List of Tables, Figures, Exhibits, and Boxes

Tables

Table 1.1 Evaluation Matrix for the Summer Camp Project
Table 1.2 Stakeholder Perceptions of Strengths of and Barriers to Camp
Table 1.3 Status of Prior Recommendations Made for the Summer Camp Follow-Up Sessions
Table 3.1 Overview of the Scope and Sequence of Evaluation Objectives
Table 4.1 Thomas’s Evaluation Matrix Template for the Math Project
Table 5.1 Evaluation Matrix for the Mentor Program
Table 7.1 The District’s Technology Benchmarks
Table 7.2 Overview of the Logic Model Guiding the Project Evaluation

Figures

Figure 1.1 Formative and Summative Evaluation
Figure 2.1 Determining a Program’s Worth or Merit
Figure 2.2 Overview of Evaluation Approaches
Figure 4.1 The RFP Process
Figure 4.2 Overview of Project Activities
Figure 10.1 Structure of an After-School Program for Higher Collaboration of Services
Figure 11.1 Model of the Top-Down Approach to Professional Development
Figure 11.2 Model of Professional Development with Action Research
Figure 11.3 Overview of the Action Research Model

Exhibits

Exhibit 1.1 Parent or Guardian Perception Survey—Summer Camp
Exhibit 1.2 Interview Protocol for the Summer Camp Project
Exhibit 1.3 Overview of an Evaluation Objective and Findings
Exhibit 1.4 Example of an Evaluation Objective and Finding Focused on Program Modifications
Exhibit 6.1 Technology Use and Integration Checklist for Portfolio Analysis

Boxes

Box 1.1 Overview of the Framework Guiding Each Case Study
Box 1.2 Evaluation Objectives for the Summer Camp Project
Box 1.3 General Categories of Evaluation Objectives (Example for an After-School Program)
Box 2.1 Example of an Evaluation Objective and Benchmark
Box 4.1 The RFP Process
Box 4.2 Thomas’s Evaluation Objectives
Box 5.1 Program Goals
Box 5.2 What Is Evaluation Capacity?
Box 6.1 Evaluation Objectives
Box 6.2 Overview of Portfolios in Education and Teacher Training
Box 7.1 Overview of Logic Models
Box 8.1 Overview of Jennifer and Ed’s Annual Evaluation Plan
Box 9.1 Evaluation Questions for the Reading Right Program
Box 10.1 Overview of Broad Categories for After-School Program Activities
Box 11.1 Overview of Action Research
Box 12.1 Sampling of Community Activities

Preface

In this second edition you will find new chapters and new cases. Most significantly, there is a chapter on the basic theories of and approaches to program evaluation, as well as a chapter dedicated to objectives-based evaluation, the approach most professional evaluators use today. There is also a new section on ethics in program evaluation and on the Joint Committee on Standards for Educational Evaluation's standards for evaluating educational programs. Case studies from the first edition have been updated, as have the readings, discussion questions, and class activities.

For over twenty years, research and literature in the area of teaching program evaluation have noted that real-world opportunities and the skills gained and honed from such experiences are critical to the development of highly trained, highly skilled practitioners in the field of program evaluation (Brown, 1985; Chelimsky, 1997; Trevisan, 2002; Weeks, 1982). According to Trevisan and others, traditional courses in program evaluation have been designed to provide students with authentic experiences through in-course or out-of-course projects. Although both approaches have notable benefits, they are not without their share of limitations.

Didactic learning environments that use in-course projects have often been criticized for being too structured in their delivery. Trevisan (2004) and others note that these activities typically do not require students to leave campus or to collect any "real" data that clients will use in a meaningful way to make decisions or effect change. In such cases, the activities may consist of presenting students with a fictitious evaluation project to be designed around a given set of goals, objectives, or variables for a fictitious agency, group, or company. Such involvement typically offers no more than a cookie-cutter experience, with little room for student exploration, questioning, or growth in any political or social context.

In an attempt to shift this paradigm, Trevisan (2002) describes a popular model employed by many institutions of higher education, whereby an evaluation center is established to provide a more coordinated effort toward providing in-depth learning opportunities for evaluators-in-training. Typically such a center acts as a sort of agency or consultancy, contracting with outside agencies, schools, or groups and serving as an external evaluator. According to Trevisan, this approach incorporates long-term evaluation projects of a year or more, to be conducted by full-time graduate students under the direct supervision of a full-time faculty member. Trevisan notes one of the benefits of such an approach: it provides graduate students interested in the field of evaluation with long-term, realistic projects that tend to reflect much of the work, dilemmas, issues, and ethical considerations that they will encounter on a daily basis as professional evaluators.

Although this approach certainly produces a more realistic experience, it also presents numerous challenges. One barrier many instructors face when attempting to implement a hands-on, real-world project to teach program evaluation is the infrastructure of the academic setting itself. That infrastructure is not only daunting to faculty but often counterproductive, intruding on students' overall learning experience. Most institutions of higher education operate on fifteen-week semesters within an academic year that runs from September to May. Although some real-world evaluation projects can be conducted from start to finish in such a short time (Spaulding & Lodico, 2003), the majority, especially those funded at the state or federal level, have timelines that stretch across multiple years and require annual reporting. In addition, many state and federal evaluation projects follow a July 1 to June 30 or August 1 to July 31 funding cycle, so the bulk of data analysis and report writing necessarily occurs during the summer, when many faculty (and students) are not on campus.

Another barrier to teaching program evaluation through a real-world project is the variability in the quality of the experience from project to project. Real-world projects are largely out of the instructor's hands: projects stall after a good start, partners change, or a host of other unexpected things happen. And when a student is placed in an agency or group to work as an internal evaluator, the experience may well prove less rich than expected.

To address some of these issues, instructors have used case studies in their classes to assist in teaching program evaluation. Although I am not suggesting that case studies by themselves will rectify the difficulties just noted or serve as a replacement for real-world experiences, they do allow evaluators-in-training to vicariously experience an evaluation. Further, case studies place these evaluators in decision-making situations that they otherwise might not be able to experience. Case studies also provide opportunities for rich discussion and learning while ensuring that certain learning objectives desired by the instructor are achieved.

Until now, the effort to use case studies when teaching program evaluation has been mainly a grassroots initiative, with instructors bringing into class examples of evaluation projects that they themselves have worked on and contextualizing them for their students. Although the use of case studies and case study books is evident in certain disciplines (such as child and adolescent development), “the absence of readily available teaching cases has been a significant gap in the field of evaluation” (Patton & Patrizi, 2005, p. 1).

The purpose of this book is to provide a variety of evaluation projects to be discussed, analyzed, and reflected on. The case studies are intended to foster rich discussion of evaluation practice, and the book's broad scope should prompt conversations about the real issues that arise in conducting an evaluation project.

For the instructor, this book is not meant to stand alone as a text for teaching and learning about program evaluation; it is intended as a supplement to any course, introductory or advanced, in program evaluation. In addition, these cases should not be viewed as examples of exemplary program evaluations. Although the methods and tools featured in the cases closely reflect those used in real-world evaluations, classroom discussions and activities could certainly focus on expanding and improving those tools and overall methodologies.

I hope you enjoy reading and discussing these cases as much as I have enjoyed revisiting them.

An instructor’s supplement is available at www.josseybass.com/go/spaulding2e. Additional materials, such as videos, podcasts, and readings, can be found at www.josseybasspublichealth.com. Comments about this book are invited and can be sent to publichealth@wiley.com.

Acknowledgments

In writing this book I reviewed and reflected on more than one hundred program evaluations that I have conducted over the last ten years. Doing so brought back the faces of many people I have worked with and reminded me of the incredible opportunities I have had to work alongside so many talented evaluators and project directors in the field. I thank all of them for their dedication and hard work in delivering quality programming.

Proposal reviewers Kathryn Anderson Alvestad, Brenda E. Friday, Sheldon Gen, Kristin Koskey, Leslie McCallister, Jennifer Ann Morrow, Patrick S. O’Donnell, Linda A. Sidoti, and Amy M. Williamson provided valuable feedback on the first edition and revision plan. Tomika Greer, Kristin Koskey, Joy Phillips, and Iveta Silova offered thoughtful and constructive comments on the completed draft manuscript.

The Author

Dean T. Spaulding is an associate professor at the College of Saint Rose in the Department of Educational Psychology. He is the former chair of the American Evaluation Association's topical interest group on teaching program evaluation. He is also one of the authors of Methods in Educational Research: From Theory to Practice (Jossey-Bass, 2006). Dr. Spaulding has served as a professional evaluator for more than a decade and has worked extensively in K–12 and higher education settings. Although his work has focused primarily on after-school, enrichment, and mentor programs, he has also evaluated programs in public health, mental health, and special education settings at both the state and federal levels.

Part 1
Introduction