Editor’s Notes

Chapter 1: Sociocultural Theory: Providing More Structure to Culturally Responsive Evaluation

Chapter 2: Making the Case for the Humanities in Evaluation Training

A Humanities-Informed Evaluation Course

Linking Evaluation Skills and Humanities Themes

Texts and Curriculum Materials


Chapter 3: Political Psychology in Evaluation: A Theoretical Framework

Illustrating a Political Psychology Framework for Evaluation

Chapter 4: A Bridge Between Basic Social Science and Evaluation

The Relationship Between Theory and Practice

Rethinking the Meaning of Rigorous Design

The Political Significance of Bridging Social Science and Evaluation

Chapter 5: Using Nonequivalent Dependent Variables to Reduce Internal Validity Threats in Quasi-Experiments: Rationale, History, and Examples From Practice

Types and Sources of Potential Internal Validity Threats

Nonequivalent Dependent Variables

Rationale for Using Nonequivalent Dependent Variables to Reduce Internal Validity Threats

History and Examples of Using Nonequivalent Dependent Variables to Reduce Internal Validity Threats


Chapter 6: Eval + Comm

Visual Processing Theory and Evaluation

Chapter 7: Focus Groups in the Virtual World: Implications for the Future of Evaluation

An Illustration of Virtual Focus Groups in Evaluation

Some Concerns Arising From the Virtual Focus Groups

New Technologies for Virtual Focus Groups

Chapter 8: En“gendering” Evaluation: Feminist Evaluation but “I Am NOT a Feminist!”

Barriers to Using the Term Feminist in Evaluation

Losses and Gains to Not Using Feminist in Feminist Evaluation

Next Steps: Is It Just a Word? Does It Matter?

Chapter 9: New Evaluators Addressing Health Disparities Through Community-Based Evaluation

Health Disparities in the United States

Community-Based Evaluation

Challenges and Solutions for Evaluators


Chapter 10: Inside, Outside, Upside Down: Challenges and Opportunities That Frame the Future of a Novice Evaluator

Inside, Outside, and Upside Down

Going In


Inside a Box

Inside a Box Upside Down

Going Out


Going to Town on a Truck Outside Inside a Box Upside Down

Falling Off

Coming Out

Right Side Up!

Mama! Mama! I Went to Town. Inside, Outside, Upside Down!

Chapter 11: Sailing Through Relationships? On Discovering the Compass for Navigating 21st-Century Evaluation in the Pacific

Departure Points: Two Emerging Evaluators in Aotearoa New Zealand

Turning Tides: Broad Influences on the Direction of Our Craft

Finding Our Compass: Staying On Course Through Relationships

Making New Waves: Toward Evaluation as Social Navigation

Chapter 12: Integrating a New Evaluation Unit With an Old Institution: See No Evil; Hear No Evil; Speak No Evil

Role Clarity

Push and Pull

Final Thoughts

Chapter 13: Building the Value of Evaluation: Engaging With Reflective Practitioners

The Current Environment

Building the Value of Evaluation

Conclusion and Suggestions for Future Research

Chapter 14: Evaluation of Multinational Programs: Value and Challenges

Value of Building Robust Multinational Evaluations

Ethical Clearances

Addressing Project Variation

Building Collaborations

Decreasing Unnecessary Noise in the Data Set


Chapter 15: Using Organizational Memory Directories to Analyze Networks

Organizational Memory

Metamemory and Directories

An Illustration of the Value of Understanding Organizational Memory

Using Directories to Access Organizational Memory


Chapter 16: The Evolution of Understanding: Positioning Evaluation Within a Comprehensive Performance Management System

Promise and Practice of Performance Management

Complementary Knowledge: Monitoring and Evaluation

Evaluation: Improving Learning and Data Use in Performance Management

Moving Forward

Chapter 17: Effectiveness Engineering: Vistas of Opportunity Beyond Merit, Worth, and Significance


Vistas of Opportunity

The Sensitizing Concept

New Toolbox and Versatile Perspective

Methodology Related to Systems and Complexity

Chapter 18: Harnessing the Power of the Electronic Health Record Data for Use in Program Evaluation

Accessing Data From the Electronic Health Record

Meaningful Use

Transforming the Behavioral Health Organization

Access to Data in the Future

Chapter 19: Utilizing Emerging Technology in Program Evaluation

Knowledge Production

Knowledge Dissemination


Chapter 20: Online Learning Programs: Evaluation’s Challenging Future

What Is an Online Learning Program?

Implementation Context






New Directions for Evaluation

Sponsored by the American Evaluation Association


Editor-in-Chief

Sandra Mathison University of British Columbia

Associate Editors

Saville Kushner University of the West of England
Patrick McKnight George Mason University
Patricia Rogers Royal Melbourne Institute of Technology

Editorial Advisory Board

Michael Bamberger Independent consultant
Gail Barrington Barrington Research Group Inc.
Nicole Bowman Bowman Consulting
Huey Chen University of Alabama at Birmingham
Lois-ellin Datta Datta Analysis
Stewart I. Donaldson Claremont Graduate University
Michael Duttweiler Cornell University
Jody Fitzpatrick University of Colorado at Denver
Gary Henry University of North Carolina, Chapel Hill
Stafford Hood Arizona State University
George Julnes Utah State University
Jean King University of Minnesota
Nancy Kingsbury US Government Accountability Office
Henry M. Levin Teachers College, Columbia University
Laura Leviton Robert Wood Johnson Foundation
Richard Light Harvard University
Linda Mabry Washington State University, Vancouver
Cheryl MacNeil Sage College
Anna Madison University of Massachusetts, Boston
Melvin M. Mark The Pennsylvania State University
Donna Mertens Gallaudet University
Rakesh Mohan Idaho State Legislature
Michael Morris University of New Haven
Rosalie T. Torres Torres Consulting Group
Elizabeth Whitmore Carleton University
Maria Defino Whitsett Austin Independent School District
Bob Williams Independent consultant
David B. Wilson University of Maryland, College Park
Nancy C. Zajano Learning Point Associates

Editorial Policy and Procedures

New Directions for Evaluation, a quarterly sourcebook, is an official publication of the American Evaluation Association. The journal publishes empirical, methodological, and theoretical works on all aspects of evaluation. A reflective approach to evaluation is an essential strand to be woven through every issue. The editors encourage issues that have one of three foci: (1) craft issues that present approaches, methods, or techniques that can be applied in evaluation practice, such as the use of templates, case studies, or survey research; (2) professional issues that present topics of import for the field of evaluation, such as utilization of evaluation or locus of evaluation capacity; (3) societal issues that draw out the implications of intellectual, social, or cultural developments for the field of evaluation, such as the women’s movement, communitarianism, or multiculturalism. A wide range of substantive domains is appropriate for New Directions for Evaluation; however, the domains must be of interest to a large audience within the field of evaluation. We encourage a diversity of perspectives and experiences within each issue, as well as creative bridges between evaluation and other sectors of our collective lives.

The editors do not consider or publish unsolicited single manuscripts. Each issue of the journal is devoted to a single topic, with contributions solicited, organized, reviewed, and edited by a guest editor. Issues may take any of several forms, such as a series of related chapters, a debate, or a long article followed by brief critical commentaries. In all cases, the proposals must follow a specific format, which can be obtained from the editor-in-chief. These proposals are sent to members of the editorial board and to relevant substantive experts for peer review. The process may result in acceptance, a recommendation to revise and resubmit, or rejection. However, the editors are committed to working constructively with potential guest editors to help them develop acceptable proposals.

Sandra Mathison, Editor-in-Chief

University of British Columbia

2125 Main Mall

Vancouver, BC V6T 1Z4



Editor’s Notes

This issue of New Directions for Evaluation (NDE) marks a milestone—the 25th anniversary of the American Evaluation Association (AEA). NDE is an official publication of AEA and has been a crucial means for the Association to foster and promote the professionalization of evaluation through thematic discussions of theory and practice in evaluation. NDE was first published in 1978 under the name New Directions for Program Evaluation, although the title became New Directions for Evaluation in 1995 in acknowledgement of the broader scope of evaluation. During the early years, NDE was affiliated with one of AEA’s predecessor organizations, the Evaluation Research Society. Over the years, NDE has been stewarded by a number of editors-in-chief, including Scarvia Anderson, Ronald Wooldridge, Ernest House, Mark Lipsey, Nick Smith, William Shadish, Lois-ellin Datta, Jennifer Greene, Gary Henry, Jean King, and myself.

In the first issue of 1978, then editor-in-chief Scarvia Anderson wrote, “Program evaluation is not a new enterprise” (Anderson, 1978, p. vii). In her introduction, Anderson points to the development of evaluation as a “distinct field of activity.” She falls short of characterizing evaluation as a discipline or profession, but acknowledges the meaningfulness of the creation of evaluation associations, publications, and the availability of government funding to investigate the success of the Great Society initiatives. There can be little doubt that in the years since NDE was launched evaluation has become a discipline, or, more accurately, a transdiscipline (Scriven, 1991) that permeates virtually every aspect of human endeavor. In every discipline, in every sector of society, in every institution, and in every organization evaluation is being conducted, sometimes by people who claim evaluation as their professional work and just as often by other professionals for whom evaluation is simply embedded in their work. AEA and NDE speak most directly to the former and face a future challenge of finding ways to speak to the latter.

Looking Back, Looking Ahead

Anniversaries are memorable moments, key elements of history. They are backward glances, ones that make us think about high and low points, but also provide glimpses into a future, ones that extend past successes, remedy shortcomings, and blaze new pathways. Taking the opportunity to pause at anniversaries is an opportunity for edification. In 2007, I edited an issue of NDE that looked back over the past 20 years of the journal to highlight important moments and enduring ideas in evaluation theory and practice (Mathison, 2007). That issue of NDE was devoted to what might be called the journal’s “greatest hits”—those articles to which evaluators return time and again in their scholarship and evaluation practice. The chapters included in that retrospective were Egon G. Guba and Yvonna S. Lincoln’s 1986 chapter, “But Is It Rigorous? Trustworthiness and Authenticity in Naturalistic Evaluation”; Mark Lipsey’s 1993 chapter, “Theory as Method: Small Theories of Treatments”; Carol Weiss’s 1997 “Theory-Based Evaluation: Past, Present, and Future”; and J. Bradley Cousins and Elizabeth Whitmore’s 1998 “Framing Participatory Evaluation.” These oft-referred-to chapters are about theoretical and conceptual ideas in evaluation, about foundational formulations that inform evaluators’ practice. The 2007 NDE issue looks back at the ideas that have endured, the ideas that have captured the imagination of NDE readers.

The current issue of NDE, on the 25th anniversary of AEA, looks not back but ahead. Because NDE is a thematic, guest-edited journal, it tends to favor more mature, self-assured voices in evaluation. Guest editors are usually senior members of AEA and have been doing and thinking about evaluation for many years, even when they are writing about new directions in evaluation. The journal format does not lend itself easily to showcasing the voices of novice evaluators, those just entering the field and who will be the next generation of evaluation practitioners and theoreticians. As such, NDE has chosen on this anniversary to highlight those voices. In a call for proposals, young evaluators (those in the field for fewer than 5 years) were invited to share what matters to them, theoretically, conceptually, and practically, as they begin their professional lives as evaluators. From this call we received 139 proposals that were reviewed by the Editorial Board, and from which 20 were chosen for inclusion. The overwhelming response to the call for proposals is surely meaningful, although without further investigation one must speculate on that meaningfulness. It would seem at least that young evaluators want to talk about their evaluation practice, to explore the ideas they encounter in their education, and to contribute to the evolving transdiscipline of evaluation. This seems a positive sign for the future of the profession of evaluation.

What’s on Young Evaluators’ Minds?

The call for proposals was purposefully nonspecific and encouraged young evaluators to write about what matters most to them. The most frequent foci of the proposals were issues in evaluating particular evaluands (like youth programs, professional development, international programs, and so on); elaborations on or examples of using extant evaluation models (especially developmental evaluation, culturally responsive evaluation, evaluation capacity building, but others as well); and descriptions of good teaching and learning of evaluation. (See Figure 1.) Other topics included evaluation methods and techniques, evaluators’ roles, evaluation use and the more contemporary notion of evaluation influence, conceptual ideas in evaluation (for example, validity, criteria), evaluation within organizational contexts (including internal evaluation), how evaluation can benefit from other disciplines, both the use of technology in evaluation and the evaluation of technology, and topics in research on evaluation. This describes the content of the proposals, but this analysis was not part of the selection of proposals for inclusion in this issue. Reviews by the NDE editorial board focused on a number of criteria, including clarity of the proposal, the “newness” of the topic, and its appeal to a broad audience of evaluators. As such, the proposals accepted and the chapters in this issue do not proportionally represent the topics in Figure 1. Figure 2 gives an overall impression of the topics covered in this NDE issue.

Figure 1. Foci of All Proposals in Response to the NDE Young Evaluators’ Issue Call for Proposals


Figure 2. Foci of Chapters in the Young Evaluators’ Perspectives Issue of New Directions for Evaluation


Many of the proposals submitted focused on issues and dilemmas in evaluating particular evaluands or provided examples of evaluations using extant evaluation models. These are surely worthy topics, and we respect the place these topics have in the minds of young evaluators, although we have not included these as chapters in this NDE issue. Instead we have selected chapters that reflect what may be glimpses into the future discourse in evaluation. Included are a number of chapters that build on what evaluation has already learned from other disciplines by introducing us to new possibilities from political psychology, the humanities, sociocultural theory, and the notion of basic social science in the chapters by McBride, Smith, Perry, and Blagg. We are also challenged in the chapters by Coryn and Hobson, Evergreen, and Galloway to think about techniques or methods we use, both at a practical and conceptual level. Chapters by Bheda, Schlueter, Robinson, and White and Boulton raise questions about who evaluators are, how they interact with others, and the roles they assume in their practice. And chapters by Baxter, Derrick-Mills, Hoffman, and Jansen van Rensburg confront, in various ways, conundrums in thinking about and doing evaluation within organizations, either from an external or internal perspective. Price and Wilson deal with concepts that affect much evaluation—performance management and effectiveness engineering. And the last three chapters by Cohen, Galen and Grodzicki, and Nord focus on using technology in evaluation or challenges in evaluating technology.

Time will tell if the topics in this issue are foundational. For now, we celebrate AEA’s 25th anniversary by shining a spotlight on what some of the next generation of evaluators is thinking about now.


References

Anderson, S. (1978). Editor’s notes: The expanding role of program evaluation. New Directions for Program Evaluation, 1, vii–xii.

Mathison, S. (Ed.). (2007). Enduring issues in evaluation: The 20th anniversary of the collaboration between NDE and AEA. New Directions for Evaluation, 114.

Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage.

Sandra Mathison


Sandra Mathison is a professor of education at the University of British Columbia, editor-in-chief of New Directions for Evaluation, and coeditor of Critical Education.

Chapter 1

Sociocultural Theory: Providing More Structure to Culturally Responsive Evaluation

Dominica F. McBride


Evaluation’s “ancestors” have formed a strong foundation on which experienced, nascent, and future evaluators can build. An area for growth and cultivation is culturally responsive evaluation. The author describes sociocultural theory (ST), a comprehensive theory explaining how culture influences human development, and its potential for program evaluation. Although ST concretizes culture and provides guidelines for culturally responsive research, it has never been applied to program evaluation. © Wiley Periodicals, Inc., and the American Evaluation Association.

Culture is the fabric of life, the theme that runs through humanity and its expressions and behaviors. Culture encompasses, but is not limited to, the beliefs, values, norms, language, food, and clothing that a group shares. It often guides behaviors, cognitions, decisions, institutions, and governance. Because of this pervasiveness, the consideration and/or study of culture is essential in program and policy development and evaluation. In development work, the study of culture can help identify what problems exist, why they exist, and how to solve them. In evaluation, the inclusion of culture is conducive to the full comprehension of the evaluand. Thus, the field of program evaluation cannot ignore the undercurrent and force of culture and, with this recognition, has begun to emphasize and prioritize culture in its standards, principles, and work.

Culturally responsive evaluation (CRE) provides a set of guiding principles for evaluations (Frazier-Anderson, Hood, & Hopson, in press). In conducting CRE, the evaluator must manifest cultural competence (see Ridley, Mendoza, Kanitz, Angermeier, & Zenk, 1994) in every decision, action, tool, and step. The evaluation should be infused with and/or respond to the target group’s cultural values, sensibilities, principles, feedback, and guidelines. If the evaluation or evaluator lacks this cultural responsiveness or sensitivity, the validity of the findings could be compromised (Kirkhart, 2010). Although CRE is a valuable guide for evaluators, some details of the framework, and of how to actualize CRE in practice, can be enhanced. This chapter describes how sociocultural theory (ST) can be applied to evaluation to bolster and supplement the current literature on CRE. It further provides examples of how this theory has been and could be used in both program development and evaluation.

The Russian psychologist Lev Vygotsky first developed sociocultural theory in the early 20th century, an era of confusion and political conflict (Rosa & Montero, 1990). During Vygotsky’s professional life, a debate existed between psychological theories focused on “heredity and environment” and those focused on intangible ideals with an emphasis on consciousness. By integrating ideas from various disciplines and connecting the dichotomized materialism and consciousness, Vygotsky developed an inclusive theory focused on analysis on multiple levels (Rosa & Montero, 1990), culture, and human development (Rogoff, 2003). With this foundation of philosophical integration, ST holds that culture is mutually constituted by individuals, society, biology, ecology, and history (Rogoff, 2003). Thus, it asserts that the influence of one aspect of a situation cannot be seen as separate from another. Further, the true understanding of an individual is encompassed in the understanding of the contexts and history of that individual (Rogoff, 2003). Because of this imperative, a sociocultural approach has the potential to add comprehensiveness and cohesion as well as bolster human connectivity and compassion. Sociocultural research involves examining one’s own cultural influences and potential biases; attempting to abate ethnocentrism; maintaining an openness and direct desire to learn about another’s culture, history, and the dynamics that mold them; moving beyond a deficit-oriented mentality; “separating value judgments from explanations”; and spending significant time with, and learning from, those one is researching or evaluating (Rogoff, 2003). Openness, comprehensiveness, and unity are the core of ST, and these principles can lead to a more human way of seeing humans. The following sections present concrete ways to apply ST to bolster evaluations and cohesive human connections within evaluations.

Rogoff and Chavajay (1995) and Rogoff (2003) outline basic assumptions in sociocultural research (see Table 1.1). McBride (2009) conducted a study guided by ST, conceptualizing a culturally responsive family health program for an African American community. The first step in the research process was to conduct a literature review on the larger African American community, ascertaining cultural–historical themes (i.e., religion/spirituality, extended family, and racial socialization). The applicability of these greater factors was assessed with the target Black community. The second step was to construct a community advisory board that advised on the actions and direction of the study as it related to the local community. Given the importance of history and context, eight interviews on local Black history and policy were completed with local community leaders. Relevant historical and current documentation was reviewed to supplement interviews. A total of 10 focus groups were conducted with parents/guardians (N = 54) and family health workers (N = 17) in the community, gaining their input on the structure and content of the program and how to integrate the cultural–historical factors. Finally, after data analysis, a final focus group was conducted with a sample of the previous focus group members to ascertain the validity of the findings.

Table 1.1. Connecting Sociocultural Theory (ST) to Evaluation Practice

Assumption 1: The unit of analysis is the sociocultural activity.
Explanation: The activity must be examined with culture considered and in its natural environment.
Example from McBride (2009): Use of focus groups as a data-collection tool, attending not only to the content of the group but also observing and notating the group process. Both the group content and process gave insight into the target culture. Although the naturally occurring activity of group process was tainted by external facilitation, the cultural dynamics were still apparent and notable.

Assumption 2: Understanding humanity requires study of both the development (individual, community, and species) and the interpersonal and group processes of people.
Explanation: Study local history and how it affects present living, and how culture (including biology) influences the changes and development of a community over time.
Example from McBride (2009): Data focused on the contextual and cultural history of the target community, collected from historic documents and studies, documentation from local leaders, and interviews with local leaders.

Assumption 3: Human dynamics are affected by individual, interpersonal, and community dynamics, which are inseparable.
Explanation: Individual, social, and cultural processes are not perceived and studied in isolation; although one aspect may be the particular focus of a given study, the influence of the other two facets is always considered.
Example from McBride (2009): Emphasis on the interpersonal and cultural–institutional aspects of families’ lives and the interconnection of individual, social, and cultural processes. Focus group and interview questions targeted each of the processes and inquired how they could be included and/or addressed by a program.

Assumption 4: Variation and similarity in a community are equally pertinent and considered simultaneously.
Explanation: There is more variation within groups than between groups; methodology should highlight this fact and/or allow these similarities and differences to be examined.
Example from McBride (2009): (a) Using a mixed-methods design, (b) conducting research on the greater culture and history, (c) inquiring whether themes applied to the local community, (d) allowing room for varied opinions and interpretations, and (e) avoiding automatically generalizing the study’s findings to other communities, even of the same ethnicity.

Assumption 5: The research question drives the methods; the methods do not drive the question.
Explanation: Use the method (psychological, ethnographic, and so on) or collection of methods best suited to answer the question and understand the relevant cultural and human dynamics.
Example from McBride (2009): Use of both qualitative and quantitative methods from various social sciences. Given the goal of the study, community input and understanding were essential and led to using qualitative methods. Quantitative methods were included to ascertain the degree of cultural–historical congruence between the greater and local ethnic group and the variation within the local group.

Assumption 6: Researchers must be self-reflective and aware of their own cultural influences and the institutions that affect them and their work.
Explanation: Researchers recognize the influence of their own culture and institutions on their perspectives and scholarly work; this requires self-reflexivity beyond conscious imperatives and into underlying assumptions.
Example from McBride (2009): Addressed by the researcher exploring her own cultural processes and assumptions, and the various institutions within the larger academic institution that implicitly and explicitly influenced the research.

As can be seen in Table 1.1, these assumptions are designed to engender a robust understanding of cultural processes through research. The first assumption concerns the unit of analysis of the research, asserting that culturally and naturally occurring activity should be the focus. Regarding program evaluation, the data-collection method of observation can be used and complemented with the ethnographic method of thick description (Geertz, 1973). The thick description should elucidate the activity, all of those involved, and the context. The evaluator should attempt to include these methods in the assessment of the community prior to the implementation of the evaluation and in the evaluation itself.

The focus of the second assumption is on change. Program evaluators can recognize that an evaluand may be different over time. The program and stakeholders assessed at one time may look different at another, with the change of context and people. One way an evaluator could apply this assumption is with the study of history and context prior to and during the implementation of an evaluation.

References

Frazier-Anderson, P., Hood, S., & Hopson, R. (in press). In Qualitative research: An introduction to methods and designs.

Geertz, C. (1973). The interpretation of cultures: Selected essays. New York, NY: Basic Books.

Kirkhart, K. E. (2010). Eyes on the prize: Multicultural validity and evaluation theory. American Journal of Evaluation, 31, 400–413.

McBride, D. F. (2009). Moving towards holistic equity: A process of developing a culturally responsive family health program (Doctoral dissertation). Retrieved from Dissertations & Theses: Full Text. (Publication No. AAT 3391851)

Ridley, C. R., Mendoza, D. W., Kanitz, B. E., Angermeier, L., & Zenk, R. (1994). Cultural sensitivity in multicultural counseling: A perceptual schema model. Journal of Counseling Psychology, 41, 125–136.

Rogoff, B. (2003). The cultural nature of human development. New York, NY: Oxford University Press.

Rogoff, B., & Chavajay, P. (1995). What’s become of research on the cultural basis of cognitive development? American Psychologist, 50, 859–877.

Rosa, A., & Montero, I. (1990). The historical context of Vygotsky’s work: A sociohistorical approach. In L. C. Moll (Ed.), Vygotsky and education: Instructional implications and applications of sociohistorical psychology (pp. 59–88). New York, NY: Cambridge University Press.

Dominica F. McBride is co-founder/co-president of The HELP Institute, Inc., and head of the research division of the Community Mental Health Council, Inc.

McBride, D. F. (2011). Sociocultural theory: Providing more structure to culturally responsive evaluation. In S. Mathison (Ed.), Really new directions in evaluation: Young evaluators’ perspectives. New Directions for Evaluation, 131, 7–13.