
Contents

Editors’ Notes

Chapter 1: From Consilium to Advice: A Review of the Evaluation and Related Literature on Advisory Structures and Processes

Orienting Questions

Why Ask for Advice?

On Advice

Definitions of Evaluation Advisory Group

Toward Theorizing Evaluation Consultation Advisory Groups

Research on Advisory Groups

Conclusion

Chapter 2: Advisory Committees in Contract and Grant-Funded Evaluation Projects

What Is an Advisory Committee?

Key Functions of Advisory Committees

Typical Features of Advisory Committees

An Advisory Committee for Every Evaluation Project?

Conclusion

Chapter 3: Advisory Groups for Evaluations in Diverse Cultural Groups, Communities, and Contexts

Advisory Group Roles

Guidelines for Recruiting and Working With an Evaluation Advisory Group

Advisory Group Takeaways

Chapter 4: The Evolution of a Philosophy and Practice of Evaluation Advice

Evaluation Frames

Our Professional Biography: Structuring an Evolving Philosophy of Voice

Real-World Problems

A Modest Proposal: Evaluation Facilitator and Best Practices

Conclusion

Chapter 5: Advice Giving in Contested Space

Evaluation in Contested Spaces

Conflict: The Irish at War

Discussion

Conclusion

Chapter 6: Empowering the Voice of Youth: The Role of Youth Advisory Councils in Grant Making Focused on Youth

Background and History

Infusing Evaluation Into Youth Grant-Making Efforts

Empowering the Voice of Youth in Grant Making and Advising Community Issues

Implications and Best Practices

Conclusion

Chapter 7: Congratulations on the New Initiative! Is It Time for a New Committee?

Evaluation Challenges in Past Community-Change Initiatives

A New Movement, and a Missing Piece

Chapter 8: Decolonizing Evaluation: The Necessity of Evaluation Advisory Groups in Indigenous Evaluation

A Brief History of Research in Indigenous Communities

Why Evaluation Advisory Groups?

Centrality of Indigenous Knowledge

Participatory Inquiry and Evaluation

Relevance and Service to Community

Summary

Chapter 9: A Model for Evaluation Advisory Groups: Ethos, Professional Craft Knowledge, Practices, and Skills

Conceptualizing EAG/EAC Practice

Ethos, Professional Craft Orientation and Knowledge, and Skills

Professional Craft Orientation, Knowledge, and Expertise

Training

Effective Advice

Index


New Directions for Evaluation

Sponsored by the American Evaluation Association

Editor-in-Chief

Sandra Mathison, University of British Columbia

Associate Editors

Saville Kushner, University of the West of England
Patrick McKnight, George Mason University
Patricia Rogers, Royal Melbourne Institute of Technology

Editorial Advisory Board

Michael Bamberger, Independent consultant
Gail Barrington, Barrington Research Group Inc.
Nicole Bowman, Bowman Consulting
Huey Chen, University of Alabama at Birmingham
Lois-ellin Datta, Datta Analysis
Stewart I. Donaldson, Claremont Graduate University
Michael Duttweiler, Cornell University
Jody Fitzpatrick, University of Colorado at Denver
Gary Henry, University of North Carolina, Chapel Hill
Stafford Hood, Arizona State University
George Julnes, Utah State University
Jean King, University of Minnesota
Nancy Kingsbury, U.S. Government Accountability Office
Henry M. Levin, Teachers College, Columbia University
Laura Leviton, Robert Wood Johnson Foundation
Richard Light, Harvard University
Linda Mabry, Washington State University, Vancouver
Cheryl MacNeil, Sage College
Anna Madison, University of Massachusetts, Boston
Melvin M. Mark, The Pennsylvania State University
Donna Mertens, Gallaudet University
Rakesh Mohan, Idaho State Legislature
Michael Morris, University of New Haven
Rosalie T. Torres, Torres Consulting Group
Elizabeth Whitmore, Carleton University
Maria Defino Whitsett, Austin Independent School District
Bob Williams, Independent consultant
David B. Wilson, University of Maryland, College Park
Nancy C. Zajano, Learning Point Associates

Editorial Policy and Procedures

New Directions for Evaluation, a quarterly sourcebook, is an official publication of the American Evaluation Association. The journal publishes empirical, methodological, and theoretical works on all aspects of evaluation. A reflective approach to evaluation is an essential strand to be woven through every issue. The editors encourage issues that have one of three foci: (1) craft issues that present approaches, methods, or techniques that can be applied in evaluation practice, such as the use of templates, case studies, or survey research; (2) professional issues that present topics of import for the field of evaluation, such as utilization of evaluation or locus of evaluation capacity; (3) societal issues that draw out the implications of intellectual, social, or cultural developments for the field of evaluation, such as the women’s movement, communitarianism, or multiculturalism. A wide range of substantive domains is appropriate for New Directions for Evaluation; however, the domains must be of interest to a large audience within the field of evaluation. We encourage a diversity of perspectives and experiences within each issue, as well as creative bridges between evaluation and other sectors of our collective lives.

The editors do not consider or publish unsolicited single manuscripts. Each issue of the journal is devoted to a single topic, with contributions solicited, organized, reviewed, and edited by a guest editor. Issues may take any of several forms, such as a series of related chapters, a debate, or a long article followed by brief critical commentaries. In all cases, the proposals must follow a specific format, which can be obtained from the editor-in-chief. These proposals are sent to members of the editorial board and to relevant substantive experts for peer review. The process may result in acceptance, a recommendation to revise and resubmit, or rejection. However, the editors are committed to working constructively with potential guest editors to help them develop acceptable proposals.

Sandra Mathison, Editor-in-Chief

University of British Columbia

2125 Main Mall

Vancouver, BC V6T 1Z4

CANADA

e-mail: nde@eval.org

Editors’ Notes

In professional life, advice is often technical (an answer to a question) or perspectival (another way of seeing a problem or of working toward a solution). From the everyday to the technical, there are multiple advice systems and cultures, each with its own social and cultural rules and protocols, from the informal and mundane, such as where to get the best pizza around here, to the formal and exotic, such as how best to draw a sample of cancer patients for an evaluation of posttreatment recovery. Asking for and giving advice binds and bonds us socially: to thoughts, perspectives, ideas, opinions, beliefs, and expertise, and to others. In religious work and in the university, the advice system of asking for and proffering thoughts, opinions, preferences, and other types of advice serves to join asker and answerer to larger ideas, worlds, histories, and cultures.

In evaluation work, advice solicitation and response are both mundane practices and formal structures: both dropping in on a colleague to get his or her opinion and organizing and using a formal evaluation advisory group (EAG) or evaluation consultation group (ECG); the latter is the preferred term in U.S. federal service because of statute. It is the formal structure, and how it can be used to enhance the quality of an evaluation, that is the focus of this issue. But why devote a full issue to advisory groups if asking for and receiving advice is so ordinary a process? Is it not also a common professional evaluation practice?

It may well be an everyday evaluation practice (there are no known surveys), and although the literature increasingly refers to the value of using outside advice to enhance evaluation quality and the use of findings, the practice is neither fully analyzed nor deeply theorized, with Patton (2008) an exception. This issue aims both to locate what the editors call the advice system in historical, social, and cultural contexts and to use several perspectives to explicate formal advice structures and practices, bringing in consultation as a cousin of advice. Case studies ground these analytic and theoretical overviews, with stories about everyday, intentional practice in organizing and using formal and informal advisory structures and advisory practices. Examples include Indigenous community members as technical and cultural advisors (Johnston-Goodstar), a variety of community-based local residents (Cohen), advisory groups in everyday evaluation practice (Mattessich), a variety of advisory group protocols (Compton and Baizerman), the use of an advisory group for a contested Northern Ireland museum exhibit (VeLure Roholt), a youth advisory group within a community foundation (Richards-Schuster), and the concerns and questions about why to have and how to work with an EAG/ECG. Taken together, the range, richness, and utility of an advisory structure strategy and practice become clear.

Evaluation is part of an advice system when the evaluator, contractor, and/or others intend a study to be used for program improvement, accountability, policy, decision making, and the like; that is, when it is intended to be used. It is in this use-purpose that evaluation can be clearly contrasted with the social and behavioral sciences (Mathison, 2008). In their applied forms, these sciences too make a claim to practicality, and so they too can be part of the advice system. How? By providing data and derivative ideas, suggestions, and recommendations for action, that is, for program improvement and the rest. The metaphor "what do the data say?" is one basis for advice; a second is "what do the data mean for how to improve our program?" That is, data are read and given meaning for advice giving. It is in this hermeneutic, this reading of data for use, that one sees how both evaluation and the applied sociobehavioral and interpretive sciences can be used for advice giving that is practical, helpful, and suggestive.

This is why our introduction includes literature from the applied social and behavioral sciences, the policy sciences, and management science.

As is clear throughout the issue, advisory/consultation groups (EAGs/ECGs) have clear technical utility for evaluation credibility, legitimacy, and the use of findings. They also have political utility with constituencies (stakeholders) who want to influence how the evaluation is done and how it can (or should) be used, by themselves and by others. There is economic utility too, in that members of evaluation advisory groups typically (we think) receive no salary (while receiving other benefits, as the text and case studies show), thus providing an evaluator with no- to low-cost technical input. It is in the mix of these that the worth of an EAG/ECG shows itself. Yet for all of these and other benefits, the use of EAGs/ECGs as such is not (we suspect) common practice. If so, why not?

It is the editors' hunch that the amount and type of effort necessary to organize and sustain a working, effective advisory/consultation group is beyond what some evaluators have or want to invest. It can be hard work, especially for evaluators not trained in this practice, and until now, although there have been hortatory calls for using EAGs/ECGs, few evaluation examples have been easy to find and fewer guidelines have been readily available to evaluators. Never mind the care and feeding of evaluation, program, community, and other experts! Some of this is taken up in the text. Another plausible source of resistance to, or at least failure to use, formal EAGs/ECGs is the simple fact of limited time: The groups do take time. Is the time and effort to do this work worth it?

In the abstract, the answer is unequivocally yes, especially when the evaluator is working in a new field, in a difficult space, on a complex study, on a politically contested evaluation, or the like. In a moral sense, the answer is yes again. Good input should contribute to a better study, that is, one more valid, useful, credible, and legitimate than if that advice were not given, or were given and not used. The editors believe that it is at least moral, if not immoral not to, ask for advice from insiders and outsiders, from a variety of stakeholders, if, at minimum, doing so could make a study better. Differently important, it is a truism that those engaged with or participating in an evaluation tend to have a greater stake in its being done well and in using it for accountability, policy, program improvement, and other decision making, thus potentially enhancing program/service effectiveness and value. Directly related is that those affected by an evaluation have a right, as a general (moral) rule, to be involved in that study. When those interested and affected can bring value to the work, it is foolish, the editors imply, not to invite their contribution. How best to do this is suggested by the case studies. Compton and Baizerman propose an EAG/ECG facilitator role, removing from the evaluator the responsibility and effort needed to solicit, assess, implement, and evaluate expert input effectively.

It is in this and similar ways that this issue engages important concerns about technical, social, political, and cultural expertise in advice giving, and about the use of evaluation advisory groups to enhance an evaluation study and the practice of evaluation. Advice asking and giving may be ordinary, everyday practices. In evaluation practice, good advice is anything but ordinary; its value is sometimes priceless.

References

Mathison, S. (2008). What is the difference between evaluation and research, and why do we care? In N. L. Smith & P. R. Brandon (Eds.), Fundamental issues in evaluation. New York, NY: The Guilford Press.

Patton, M. (2008). Utilization-focused evaluation. Thousand Oaks, CA: Sage.

Ross VeLure Roholt

Michael L. Baizerman

Editors

Ross VeLure Roholt is an assistant professor in the School of Social Work, University of Minnesota.

Michael L. Baizerman is a professor in the School of Social Work, University of Minnesota.

Chapter 1

From Consilium to Advice: A Review of the Evaluation and Related Literature on Advisory Structures and Processes

Michael L. Baizerman, Alexander Fink, Ross VeLure Roholt

Baizerman, M. L., Fink, A., & VeLure Roholt, R. (2012). From consilium to advice: A review of the evaluation and related literature on advisory structures and processes. In R. VeLure Roholt & M. L. Baizerman (Eds.), Evaluation advisory groups. New Directions for Evaluation, 136, 5–29.

Abstract

The literature in evaluation and related disciplines on advice and advisory structures and processes is described and analyzed. The purposes of evaluation advisory groups and evaluation consultation groups are discussed, and working and formal definitions of each are provided. © Wiley Periodicals, Inc., and the American Evaluation Association.

In everyday evaluation practice, working evaluators seek advice from colleagues, from potential and actual contractors, and from others with an interest in their specific project, including intended users, as well as from articles and books, even friends and family. Some evaluators formalize their advice seeking and giving in a group they consult more or less often, over a shorter or longer term, whereas other evaluators seek counsel informally, more or less regularly.

Seeking advice from others and from texts is, presumably, normative practice in all professions, although it may be formalized only in some (e.g., medicine). Professional texts in many fields exhort the use of external advice. Evaluation texts also recommend the use of advice from others for conceptualizing, conducting, and completing an evaluation, and especially for enhancing the use of the evaluation and its findings for policy and program improvement (e.g., Daponte, 2008; Fitzpatrick, Sanders, & Worthen, 2011; Pankaj, Welsh, & Ostenso, 2011). Indeed, well-known models of evaluation practice advocate for the involvement of others in an evaluation so as to make the effective use of the evaluation and its findings more likely (e.g., Patton, 2008; Ryan & DeStefano, 2000).

Given the presence of the topic of advice giving in evaluation texts, articles, and reports, there is surprisingly little practical advice published on when and how to organize, manage, and utilize formal and informal evaluation consultation. This issue fills that gap. It also contributes to a beginning theorizing of advisory/consultative structure and practice. Our strategy is to introduce advice giving and formal structures for it, present eight case studies of formal and informal structures for advice giving, and then conceptualize and theorize this practice in the categories of ethos, craft orientation, skills, and practices, in this way adding to our earlier work on evaluation capacity building (Compton, Baizerman, & Stockdill, 2002) and managing evaluation (Compton & Baizerman, 2009).

Orienting Questions

By the end of this New Directions for Evaluation issue, the following questions will have been addressed, and the reader should have a deeper appreciation for the subject and a firmer grasp of several approaches to organizing, managing, and utilizing an evaluation advisory group (EAG)/evaluation consultation group (ECG). Suggestions for a training curriculum and for research on EAGs/ECGs complete the final chapter.

Practical Questions

Conceptual and Theoretical Questions

Why Ask for Advice?

Simply put, advice is another person's point of view, their take on you and your situation (and on you in your situation), and guidance intended (we presume) to help you think about and act in a specific situation, or more broadly and over the longer term. Advice can guide, it can help one get unstuck, it can teach, and it can make one feel better. The Latin root of advice joins ad (to, toward) to videre (to see); hence, a viewpoint, a point of view. In personal relations we often seek advice from family, friends, and experts; at work we often seek out an expert first, and may also include friends and family. Why the latter two? Because they know us, and hence may know what we fail to see or think about, and, importantly, because we trust them to look out for our (best) interests. All of this is quite ordinary, however interpersonally complicated it may get, especially with family and friends.

When the subject or problem at hand is technical in nature, as in evaluation, it may be far more reasonable, efficient, useful, and politically and interpersonally comfortable and safe to seek counsel from experts, typically in one's own or in a nearby professional field. This can be done informally or on an ad hoc or more formal basis; once, more often, or regularly; in the short or longer term. Or one can consult and ask advice within a formal process. This formal process, along with a formal, longer-term advice-giving structure, is our focus: the evaluation advisory group/evaluation consultation group.

Whether informal or formal, advice seeking, advice giving, and advice using are practical, if at times contentious and contested, ways of "getting outside" and beyond oneself to get another perspective on one's practical situation, or on a broader issue. It is also a way to gain political legitimacy and a political base for one's plan or practice, a way to find allies for proposed and ongoing actions. Such agency politics are often necessary to ensure that an evaluation can be conducted accurately, on time, and with a good chance of being used for accountability, policy, decision making, or program improvement. Politics can also take the form of perceived or actual resistance to an evaluator or to the technical aspects of a proposed or ongoing study. Agency politics are surely crucial when a major goal of an evaluation is to propose how to improve a program or an agency. It is important to remember that asking for advice is an interpersonal process, even when the advice process is formalized.

Advice is an interpersonal process, with its own politics of sex, race/ethnicity, ideas, feelings, and status. In our society it is cultural and social to seek advice, and it is also sociocultural to wonder about one's need and desire to do so, and about how one who seeks counsel will be perceived by others: as weak, unsure, not expert, perhaps as the wrong evaluator for the job. In some fields, formalization of the advice process may work to marginalize such private and social concerns, yet the same formalization could also exacerbate these concerns and feelings. Whether to ask for advice, and whether to use it, shows that advice can be a contested space.

Advice as such can be a contested space (VeLure Roholt & Baizerman, 2012), a place of disagreement and tension. Such disagreement can be about the substance of ideas, about style, about preference—about alternative views and ways, or about more. When advice is taken as embodied, such differences can be about more than alternative ideas and practices; advice can become personal.

Formal advice structures can make more or less prominent the political, interpersonal, and contested aspects of the advice process. How the structure is contested; how it is given legitimacy (and of what types); its size, member recruitment, screening, training, and representativeness; the type and frequency of its meetings; and its own ways of working are some of the practical, everyday subjects of interest when deciding whether and how to develop and use a formal advisory structure, topics we take up in the case studies and in the final chapter. Whether or not to seek advice is a relatively simple decision; whether to use a formal advice structure is more complicated; whether to seek and use advice and counsel regularly from the same set of individuals working together in a group is an even more complex decision. By the end of this text, you should be better prepared to decide for yourself, based on your context and situation.

On Advice

Advice giving and advice taking are everyday practices in personal and social life: Should I wear these shoes to match my outfit? Which MD should I see? Who has the best pizza? Which horse should I bet on? Should I go out with him? What statistical test would work best for these data? How would you go about getting management on board to use the findings from our evaluation? Which groups should be represented on my evaluation advisory committee? These ordinary, mundane, everyday questions in the advice domain show that advice is a close sibling of opinion, suggestion, and recommendation in everyday speech. This blurring of meaning in everyday use among advice, opinion, suggestion, and recommendation is challenged in technical language games (Wittgenstein, 1953), where an opinion is different from a suggestion and from a recommendation. Each of these terms has technical meanings in different technical worlds, such as social science research and evaluation, and may have yet other, different technical meanings within the social sciences and between these (psychology, anthropology, sociology, economics, political science) and evaluation.

In everyday English usage, native speakers distinguish among advise, suggest, and recommend. In technical and professional fields, there are clear practical and often legal differences among the three. In everyday native English, grammatical differences also obtain, with advise carrying a weaker action requirement than suggest, and suggest a weaker one than recommend. Each of these three terms has a different Latin root, whereas suggestion and recommendation both share advise in a thesaurus. Advice itself in its Latin origin joins ad (to, toward) to videre (to see). Advice: to see, inform, counsel, tell, notify. It is as if advice, in its foundational meaning, means "to see as I do." Consilium is a close relative in meaning (counsel), and is the name of the earliest formal Roman advisory structure, the Consilium Principis, a group offering counsel to the first Roman emperor, Augustus (Cook, 1955). Our use of the terms advice, advisory, and consultation will be conventional and follow everyday U.S. English meanings, except when we explicitly change to a technical meaning.

We use both the conventional advisory committee and consultation (consultative) committee in recognition of and deference to U.S. federal governmental usage, which tightly restricts the use of “advisory groups” (Croley & Funk, 1997; Smith, 2007).

In current evaluation texts, it is common to find advice, suggestions, and recommendations that the evaluator solicit, assess, and use informal and formal input (i.e., advice, suggestions, and recommendations) from intended users, from a variety of constituencies of a particular study, from colleagues, and from others (e.g., Fitzpatrick, Sanders, & Worthen, 2011). Such advice can be informal, formal, or some of each, and can be given by one or more persons individually and/or by persons in or as a group. This input can be more or less formally organized, as an ad hoc group, a formal ongoing advisory group, or the like. The group can be called a user group, a consultation group, or an advisory group.

Advice is a common word and a common, often (almost) invisible social process; it is a complex interpersonal process that can implicate one's self-conception, expertise, and vulnerability, as well as one's positional authority and one's very job. In our increasingly complex, global, fast-moving world, one often needs help from others: their perspective, insight, reflections, and thoughts (what they see and think) and what they suggest (their advice). All of this ordinary advice asking and advice giving is the subject of social science research (Brown, 1955; de Leon, 1988; Maynard-Moody, 1983; Moore, 1971), as will be shown. But it is of interest to us here only when it is contextualized in a formal advice-giving structure for evaluation studies (and, somewhat, for evaluation policy and the managing of evaluators and an evaluation unit; Compton & Baizerman, 2009). Yet we must remember that everyday practices very often are the same as, or closely similar to, formal, professional practices. As Schon (1983) long ago pointed out, it is useful to distinguish "espoused theories" (or practices and skills) from "theories in use" (or practices and skills): How we talk about what we do does not necessarily map onto what we do and how we do it, as Dreyfus (2001) also shows. This means two things for us: (a) informal, everyday advice practices can infuse, underlie, or even be the same as formal advice practices, in part or wholly, and (b) how we talk about informal and formal advice practices may differ, whereas actual in-use practices may be similar or the same. The practical task here, then, is to be on the lookout for whether, to what degree, where, and how everyday advice practices turn up in formal advisory structures and practices. The literature review below illuminates some of these similarities in informal and formal advice giving, advice assessing, advice taking, and advice using.

To give this a different turn, in part we will be after “the embodied knowledge that comes only from engaging in practices in concerted co-presence with others” (Rawls, 2005, p. 5)—practices that are “things done, said, heard, felt—those recognizable” (Rawls, 2005). Put differently, how do formal evaluation advisory/consultation groups work and how does this map onto the working of everyday advice practices?

Advice practices are primordial in everyday human life, especially conjugal and group life: We ask others to help us live our lives, to make us wiser, to make those asked feel and think differently about themselves. Advice is a communicative transaction and as such is socioculturally bound to place and time. Who can be asked for advice, given who the asker is, and who can give advice of what type to which asker are socially regulated, everyday practices, important to us only because they remind us that constructing a formal evaluation advisory structure means selecting members and orienting, training, and working with them, and that how this is done with a committee may well (indeed, is highly likely to) be based on practices in the larger society and culture. It is the evaluator's responsibility to attend to this.

Advice-seeking behavior can be directed at family, friends, texts, and others. Among these others are professionals chosen for their expertise, their specialized knowledge (Ericsson, 2009; Higgs & Titchen, 2001). This is simply said, but quite complicated in practice. How do we as laypersons know what relevant expertise we need or want? (By referral.) How do we know whether a particular other has it? (By credential.) How do we assess whether the suggested expert is right for us? (By experience.) All of that is fairly easy. To make it more difficult, in an evaluation context, do we want to know what school of thought the consulting evaluator subscribes to and uses? Do we want to know the exact expertise of the evaluator expert? For example, is the evaluator more qualitative than quantitative in approach? Has he or she evaluated chronic disease programs before? How well does the consultant work with local physicians? Does he or she have training and/or knowledge in medical terminology?

The obvious point here is that there is expertise and there is "expertise," and it is not always easy to know or discern which is which and what one needs or wants (Briggle, 2008). This is the relevance problem. Nor is it always easy to assess the appropriateness of a particular expertise to one's purpose at hand, for example, the construction and use of a formal evaluation advisory/consultation group. Here too the evaluator must take note of these distinctions in expertise. In the last chapter, we show the practical relevance of expertise assessments and decisions.

Dreyfus's five-stage model of expertise (see Compton & Baizerman, 2009) names the highest stage of expertise with Aristotle's term phronesis: wisdom, the joining of the moral and the technical. Benner, Tanner, and Chesla (1996), in nursing, make the same point: the joining of the technically correct with the morally right. The evaluator who is constructing a formal evaluation consultative structure such as a group or council should attend to this distinction, especially if the evaluator intends evaluation findings to be used for policy, decision making, and program improvement, all of which are normative (and frequently moral) choices.