Contents

Introduction: Selections from Trudy Banta’s “Editor’s Notes”

Section One: Some Things Never Change

On the Crest of the Wave

Weaving Assessment into the Fabric of Higher Education

Revealing the Results of Assessment

This One’s for Students

The Power of a Matrix

Welcome News About the Impact of Assessment

Section Two: You Won’t Find This in Books

Are We Making A Difference?

Toward a Scholarship of Assessment

How Do We Know Whether We Are Making a Difference?

Demonstrating the Impact of Changes Based on Assessment Findings

Section Three: One Bird’s-Eye View of the U.S. Landscape for Assessment

Take Part in the National Goals Debate

Do Faculty Sense the Tightening of the Accountability Noose?

Can We Combine Peer Review and National Assessment?

Trying to Clothe the Emperor

Are There Measures for Improvement and Accountability?

Section Four: Tripping Lightly Around the Globe

Assessment: A Global Phenomenon

Evaluation of Teaching and Learning in Germany

A Global Perspective—At Last

The Most Compelling International Dialogue to Date

Two Conferences, Two Systems—Similar Perspectives?

Sour Notes from Europe


INTRODUCTION

Selections from Trudy Banta’s “Editor’s Notes”

1989–2010

What a privilege it is to have the opportunity to look back over more than 20 years of contributions to Assessment Update and share some of my favorite columns with you! As I contemplated this pleasant task, I thought it might be difficult to identify a few distinct categories for my more than 80 “Editor’s Notes” columns. But once I dove into the pile of issues, I discovered that it wasn’t hard at all. Soon I was able to sort each issue into one of four stacks. Then I went through each stack and selected the half-dozen or so columns that seemed to tell the most important stories. Winnowing that collection to a set manageable within these covers was the most challenging part of the assignment.

The Four Stacks

A persistent theme in my work in outcomes assessment has been a search for characteristics, or principles, of good practice. I’ve put columns related to this theme in the section “Some Things Never Change.”

After finishing a book or another significant writing project, I have usually developed an essay for Assessment Update summarizing the contents or an aspect of my thinking that emerged from, but was not necessarily covered in, the published work. It thus seemed appropriate to call the second section “You Won’t Find This in Books.”

Throughout my career in assessment I have been drawn into several national initiatives focused on accountability, and of course I’ve described them in columns. Since mine is just one perspective on these initiatives, I’ve called the third section “One Bird’s-Eye View of the U.S. Landscape for Assessment.”

Finally, I have attempted to sample global dimensions of quality assurance/assessment by attending at least a couple of conferences in other countries each year since 1990, then writing about my observations. But I have not delved deeply into the issues in any particular country. Thus the fourth section is entitled “Tripping Lightly Around the Globe.”

Some Things Never Change

The first issue of Assessment Update appeared in Spring 1989, and in my column in that issue I painted, in a series of broad strokes, a picture of the outcomes assessment landscape of the day. In the second issue my essay described assessment at the three institutions considered pioneers in this arena, each of which had been profiled in Peter Ewell’s The Self-Regarding Institution (1984): Alverno College, Northeast Missouri (now Truman) State University, and the University of Tennessee, Knoxville. In these earliest contributions and in three others spanning the decade of the 1990s, it seems to me that I described many of the characteristics of effective assessment practice that we espouse today. For better or for worse, some things haven’t changed. The principles we discovered and enunciated 20 years ago are as relevant as ever.

The principles described in the selected columns include:

More recent columns foreshadow what many of us hope for the future of outcomes assessment: that more scholars will contribute to the growing literature of assessment and thus push the field forward. In 2006 I described the first large-scale study to demonstrate the power of assessment to increase student learning when it is applied purposefully in the process of curriculum change.

You Won’t Find This in Books

Since 1989 I have participated in a book project almost every year, sometimes as a coauthor but more often as the editor of a collection of works contributed by others. (My own scholarship clearly fits in the category Ernest Boyer [1990] called integration.) From time to time I couldn’t resist taking advantage of the bully pulpit I’ve enjoyed as the editor of Assessment Update to share some perspectives I’d developed, or some examples of good practice I’d encountered, as a result of my experiences in preparing a new book manuscript.

Three questions guided the development of my first major book, Making a Difference: Outcomes of a Decade of Assessment in Higher Education (Banta & Associates, 1993). I wanted to know if faculty were teaching more effectively, if students were learning more, and if the quality of colleges and universities was improving as a result of widespread use of outcomes assessment. Each of three “Editor’s Notes” columns addressed one of those questions, and the column focused on institutional quality is included here.

In 2002 almost two dozen colleagues contributed chapters to the book Building a Scholarship of Assessment (Banta & Associates, 2002). I remember arguing with the editors at Jossey-Bass about the title because I wanted to convey the idea that this branch of scholarship was just beginning to emerge and that this would not be the (my?) last, and certainly not the definitive, book on this subject. I insisted that we not call the work simply “Scholarship of Assessment,” and at length we agreed that adding the word “building” to the title would accommodate my desire to signal that we were, and still are, at an early stage in the development of this literature. In my 2002 column in this section I identify Alverno College and James Madison University as two institutions where “systematic inquiry designed to deepen and extend the foundation of knowledge underlying our field” (my words from that column) is under way.

In 2005 I conducted telephone interviews with eleven top administrators at a national sample of colleges and universities where distinctive assessment programs had helped to transform those institutions. My 2006 column in this section contains descriptions of some of the assessment methods these leaders had found particularly effective.

Designing Effective Assessment: Principles and Profiles of Good Practice (Banta, Jones, and Black, 2009) was based on a national search for examples of effective practice in outcomes assessment. In my 2009 column herein I lament the fact that only 9 (6%) of the 146 institutional profiles we received for this book contain evidence that student learning has improved over time as a result of assessing learning and using the findings to improve instruction, curriculum, or academic support services such as advising.

One Bird’s-Eye View of the U.S. Landscape for Assessment

My involvement in outcomes assessment, which stems from my preparation and experience in measurement and program evaluation, began in 1980 when I was asked to assist in developing the response of the University of Tennessee, Knoxville (UTK) to the performance funding initiative of the Tennessee Higher Education Commission. With assistance provided by multiyear grants obtained in 1985, 1988, and 1989, I established the Center for Assessment Research and Development at UTK. And of course I have served as editor of Assessment Update since 1989. Accordingly, I have been invited to participate in a number of national projects focused, broadly speaking, on outcomes assessment.

In 1990 President George H. W. Bush, his secretary of education, Lamar Alexander, and the nation’s governors agreed on six national goals for education as the bases for America 2000, a program aimed at educational improvement and achievement of the six goals. Goal 5, Objective 5 stated that by 2000 “the proportion of college graduates who demonstrate an advanced ability to think critically, communicate effectively, and solve problems will increase substantially.” Those last two words implied that the named skills would be measured somehow. That implication led to an intense effort on the part of the National Center for Education Statistics (NCES) between 1990 and 1995 to plan the development of a standardized test of the three skills that could be administered to college seniors. Early in that period I was one of fifteen individuals invited to prepare position papers to provide direction for the development of the national test. My 1992, 1993, and 1996 columns in this section contain my recollections of the attempt to build support for the national test.

The Congress elected in 1994 was swept into office with a presumed mandate to carry out the “Contract with America,” and funding the construction of a national test for college students was not in that contract. Sal Corallo, the chief architect of the NCES effort to build the national test, retired, and for nearly a decade the notion of improving America’s colleges and universities by testing their students lay dormant.

The press to assess with a test sprang to life again with the issuance in 2006 of the report of Secretary of Education Margaret Spellings’s Commission on the Future of Higher Education. That report called for measures of the value added by colleges and universities and for publicly reported scores on standardized tests that would permit comparison of institutional quality. Two large national associations, the American Association of State Colleges and Universities (AASCU) and the National Association of State Universities and Land-Grant Colleges (NASULGC), came together to head off any efforts by the U.S. Department of Education to impose national measures of institutional quality, including a national test.

When my colleague Gary Pike and I were conducting research on the technical qualities of standardized tests of generic skills at UTK, little did we know that ours would remain the only large-scale studies of their kind for the next 25 years. So we were somewhat surprised to find ourselves appointed to two of the working groups that AASCU and NASULGC administrators charged with developing what came to be called the Voluntary System of Accountability (VSA). In my 2008 column herein I describe the concerns about currently available standardized tests of generic skills that we raised as a result of our studies in Tennessee. I also reveal that my objections to including a measure of value added based on test scores as a non-negotiable component of the VSA were overruled by other members of my working group.

In my 2010 column herein I raise a new concern: Assessment designed to address accountability demands—that is, using standardized test scores and value-added measures to compare the quality of institutions—may actually damage our years of work to make outcomes assessment serve to guide institutional improvement efforts. I describe promising new work with authentic measures such as electronic portfolios and rubrics, but I raise measurement-related cautions about these measures as well.

Tripping Lightly Around the Globe

I must begin this section by emphasizing that I have not developed deep knowledge of any national system of assessment, or quality assurance as it is called elsewhere, other than that of the United States. I have been invited to speak and/or consult in a number of countries, and I have tried to study quality assurance/assessment (QA) in those countries in preparing for my assignments. Then I have been able to observe their QA initiatives firsthand during my brief visits. But most of my written observations about assessment in other countries are based on conversing with colleagues and listening to presentations while attending an average of two overseas professional meetings each year since the late 1980s. Between 1989 and 2003, on behalf of my university, I co-sponsored fifteen international conferences on assessing quality in higher education with academic partners in England and Scotland. And almost every year since 1990 I have participated in the annual meeting of the European Association for Institutional Research (EAIR).

At the first international conference co-sponsored by my university (UTK), which was held at Cambridge University in England, the 45 participants from 9 countries were so often confused by terms like course (meaning course of study in England and a single component of a degree program in the U.S.) that they recommended developing a glossary of such terms. Aided by the glossary we had prepared in the interim, the 80 participants from 20 countries who attended the second conference in the series at St. Andrews University in Scotland were ready to engage in spirited discussions of government QA policies and faculty reactions to them. My 1990 column provides some highlights of the presentations by colleagues from Germany, England, the Netherlands, and Mexico.


I conclude that 2009 column on a positive note, as I will here. The best session I attended in Lithuania was one in which four undergraduate psychology majors from the University of Freiburg in Germany described their use of community-organizing techniques to attract faculty attention to their concerns about the effects of the Bologna Process on students. One of the groups they organized wrote student learning outcomes, and Freiburg faculty are adopting them! Perhaps it is our students who ultimately will be successful in engaging more of our faculty colleagues in using outcomes assessment to improve teaching and learning.

References

Banta, T. W., & Associates. (1993). Making a Difference: Outcomes of a Decade of Assessment in Higher Education. San Francisco, CA: Jossey-Bass.

Banta, T. W., & Associates. (2002). Building a Scholarship of Assessment. San Francisco, CA: Jossey-Bass.

Banta, T. W., Jones, E. A., & Black, K. E. (2009). Designing Effective Assessment: Principles and Profiles of Good Practice. San Francisco, CA: Jossey-Bass.

Boyer, E. L. (1990). Scholarship Reconsidered: Priorities of the Professoriate. Princeton, NJ: Carnegie Foundation for the Advancement of Teaching.

Ewell, P. T. (1984). The Self-Regarding Institution. Boulder, CO: National Center for Higher Education Management Systems.