List of Tables

  1. Table 1.1 Traditional Versus Contemporary Ways of Thinking About Assessment
  2. Table 2.1 Relevant Chapters of This Book for Each Learning Setting
  3. Table 4.1 Examples of Effectively Expressed Learning Goals
  4. Table 7.1 Common Assessment Tools
  5. Table 12.1 Error Margins of Various Sample Sizes
  6. Table 12.2 Sample Sizes Needed from Small Groups for a 5% Error Margin
  7. Table 14.1 Common Root Causes of Foot-Dragging on Assessment
  8. Table 16.1 Examples of Learning Assessment Techniques
  9. Table 20.1 Student Definitions of Leadership Before and After Participating in a Leadership Development Program
  10. Table 20.2 Examples of Rating Scales
  11. Table 22.1 Perspectives for Comparing Evidence of Student Learning
  12. Table 23.1 A Tally of Assessments of Students by Their Peers
  13. Table 23.2 A Tally of Assessments of Students by Their Peers Presented with Percentages
  14. Table 23.3 Biology Test Outcomes Mapped Back to the Test Blueprint
  15. Table 24.1 How to Document Evidence of Key Traits of Effective Assessment Practices
  16. Table 24.2 Examples of Item Discrimination Results
  17. Table 24.3 Selected Outcomes from the National Survey of Student Engagement for Rodney College Seniors

List of Lists

  1. List 1.1 The Four-Step Teaching-Learning-Assessment Process
  2. List 3.1 Statements of Principles of Good Assessment Practice
  3. List 3.2 Steps to Ensure That Evidence of Student Learning Is Useful and Used
  4. List 3.3 Examples of Direct Evidence of Student Learning
  5. List 3.4 Examples of Indirect Evidence of Student Learning
  6. List 3.5 Examples of Assessment Errors and Biases
  7. List 3.6 Strategies to Minimize Assessment Errors and Biases
  8. List 4.1 Habits of Mind
  9. List 4.2 Transferrable Skills Valued by Employers
  10. List 4.3 Resources for Identifying Potential Learning Goals
  11. List 4.4 Taxonomies for Learning Goals
  12. List 4.5 Examples of Discussion Topics Regarding Learning Goals
  13. List 5.1 Information That Can Be Used to Help Ensure That a Curriculum Is Responsive
  14. List 5.2 Strategies to Add More Intensive Study of a Key Learning Goal
  15. List 5.3 Strategies to Improve a Program or General Education Curriculum
  16. List 7.1 Questions to Address in an Assessment Plan
  17. List 7.2 Questions to Ask About Assessment Efforts
  18. List 8.1 Suggestions for Assessing General Education Learning Goals
  19. List 9.1 Examples of the Benefits of Assessment
  20. List 10.1 Seminal Books on Assessing Student Learning
  21. List 10.2 Journals and Other Publications That Address Assessing Student Learning in Higher Education
  22. List 11.1 Assessment Tasks That Technologies Can Help With
  23. List 11.2 Questions to Ask Vendor References About Assessment Technologies
  24. List 11.3 Potential Assessment Resource Needs
  25. List 12.1 Where to Look for Assessment Ideas
  26. List 12.2 Examples of Evidence of Student Learning That May Already Be on Hand
  27. List 12.3 Is It Worth Taking Extra Steps to Minimize Assessment Errors and Biases and Increase Consistency?
  28. List 12.4 Four Situations Where Samples May Make Sense
  29. List 13.1 Strategies to Involve Part-Time Adjunct Faculty in Assessment
  30. List 13.2 Examples of How Students Can Engage in Assessment
  31. List 13.3 The Delphi Method for Achieving Consensus on Key Learning Goals
  32. List 14.1 Strategies to Value, Respect, and Reward Efforts to Improve Teaching
  33. List 14.2 Incentives and Rewards That Recognize and Honor Assessment Efforts
  34. List 14.3 How College Leaders Can Foster a Culture of Assessment
  35. List 15.1 Benefits of Well-Crafted Rubrics
  36. List 15.2 Questions to Help Identify What You're Looking for in Student Work
  37. List 15.3 Examples of Continuums for Rubric Performance Levels
  38. List 16.1 Examples of Assignments Beyond Essays, Term Papers, and Research Reports
  39. List 16.2 Questions to Address in a Prompt for an Assignment
  40. List 16.3 Strategies to Counter Plagiarism
  41. List 17.1 Tips for Writing Challenging Rather Than Trick Questions
  42. List 17.2 Tips for Writing Good Multiple-Choice Questions
  43. List 17.3 Tips for Writing Good Interpretive Exercises
  44. List 17.4 Tips for Writing Good Matching Items
  45. List 17.5 Tips for Writing Good True-False Items
  46. List 17.6 Tips for Writing Good Completion or Fill-in-the-Blank Items
  47. List 17.7 Information to Include in Test Directions
  48. List 18.1 Why Use Portfolios?
  49. List 18.2 Questions to Consider as You Plan a Portfolio Assessment
  50. List 18.3 Examples of Items That Might Be Included in a Portfolio
  51. List 18.4 Suggestions to Keep Portfolio Assessments Manageable
  52. List 18.5 Questions to Address in Portfolio Guidelines to Students
  53. List 18.6 Examples of Prompts for Student Reflection on a Portfolio
  54. List 19.1 Examples of Published Instruments for Student Learning Assessment in Higher Education
  55. List 19.2 Resources for Identifying Potential Published Instruments
  56. List 19.3 Useful Information on Potential Published Instruments
  57. List 19.4 Questions to Ask About Instruments with Little Published Information
  58. List 19.5 Deciding If a Published Instrument Is Right for You
  59. List 20.1 Examples of Prompts for Reflection on a Learning Experience
  60. List 20.2 Tips for Focus Groups and Interviews
  61. List 21.1 Factors Affecting Participation in Add-On Assessments
  62. List 22.1 Resources for Identifying Potential Peer Colleges
  63. List 22.2 Sources of External Insight on Potential Standards
  64. List 23.1 Models for Calculating Overall Rubric Scores
  65. List 23.2 A Summary of Qualitative Participant Feedback on an Assessment Workshop
  66. List 24.1 How to Determine the Discrimination of Test Items
  67. List 25.1 Formats for Sharing Summaries and Analyses of Student Learning Evidence
  68. List 25.2 Venues for Announcing and Sharing Summaries and Analyses of Student Learning Evidence
  69. List 25.3 Tips to Engage Audiences in Face-to-Face Discussions of Student Learning Evidence
  70. List 26.1 Strategies That Help College Students Learn
  71. List 26.2 Using Student Learning Evidence Fairly, Ethically, and Responsibly

List of Figures

  1. Figure 1.1 Teaching, Learning, and Assessment as a Continuous Four-Step Cycle

List of Exhibits

  1. Exhibit 5.1 Template for a Three-Column Curriculum Map for a Course Syllabus
  2. Exhibit 5.2 Template for a Four-Column Curriculum Map for a Course Syllabus
  3. Exhibit 5.3 Curriculum Map for a Hypothetical Certificate Program
  4. Exhibit 7.1 An Example of a Completed Chart for Monitoring Assessment Progress Across a College
  5. Exhibit 7.2 A Rating Scale Rubric for Evaluating College-Wide Student Learning Assessment Processes
  6. Exhibit 10.1 A Rubric for Providing Feedback on Assessment Plans and Reports
  7. Exhibit 10.2 A Template for an Annual Program Assessment Report
  8. Exhibit 15.1 A Checklist for Safe Culinary Practices
  9. Exhibit 15.2 A Structured Observation Guide for a One-Act Play
  10. Exhibit 15.3 A Student Essay on Making Community Service a Graduation Requirement
  11. Exhibit 17.1 A Test Blueprint for an Exam in an Educational Research Methods Course
  12. Exhibit 17.2 Multiple-Choice Questions on Assessment Concepts
  13. Exhibit 17.3 An Example of an Interpretive Exercise
  14. Exhibit 17.4 Matching Items from a Nursing Research Methods Test
  15. Exhibit 18.1 A Reflection Sheet for Individual Portfolio Items from a Graduate Course on Assessment Methods
  16. Exhibit 18.2 A Rubric for Assessing Portfolios from a Graduate Course on Assessment Methods
  17. Exhibit 20.1 A Prompt for a Reflective Paper on an Internship
  18. Exhibit 20.2 An Exit Survey for Students Completing a BS in Computer Information Systems
  19. Exhibit 20.3 A Self-Assessment of Library Skills, with a (Fictitious) Student's Responses
  20. Exhibit 21.1 A Rating Scale Rubric for Assessing Fellow Group Members
  21. Exhibit 22.1 Rubric Results for a Hypothetical Assessment of Written Communication Skills
  22. Exhibit 23.1 A Scored Rubric for a Research Report in Speech-Language Pathology/Audiology
  23. Exhibit 25.1 A Poorly Designed Table
  24. Exhibit 25.2 An Improved Version of the Table in Exhibit 25.1
  25. Exhibit 25.3 Results of a Rubric Assessing Writing
  26. Exhibit 25.4 A Pie Graph of the Qualitative Feedback in List 23.2


Assessing Student Learning: A Common Sense Guide




Linda Suskie









To my husband Steve, for his unflagging support for everything I've done, including this book

To our children, Melissa and Michael

And to everyone in higher education who believes, as I do, that one of the answers to today's problems is for everyone to get the best possible education

Preface to the Third Edition

When Jim Anker, the publisher of the first edition of this book, approached me about writing a second edition in 2008, I figured that I'd update the references and a few chapters and be done. The first edition was based, after all, on an enduring common sense approach to assessment that hadn't changed materially since the first edition was published in 2004. Ha! I ended up doing a complete reorganization and rewrite.

Fast-forward to 2017. With the second edition now eight years old, it was clearly time for an update. But the second edition had been very successful, so I again figured I'd update the references and a few chapters and be done. Ha! Once again, this is a complete reorganization and rewrite of the previous edition.

Why the rewrite? As I started work on this edition, I was immediately struck by how outdated the second edition had become in just a few short years. When I wrote the second edition, AAC&U's VALUE rubrics were largely untested, the National Institute for Learning Outcomes Assessment was just getting started, and the Degree Qualifications Profile didn't exist in the United States. Learning management systems and assessment information management systems were nowhere near as prevalent or sophisticated as they are today.

More broadly, the higher education community has largely moved from getting started with assessment to doing assessment. But a lot of assessment to date hasn't been done very well, so now we're starting to move from doing assessment to doing it meaningfully. Truly meaningful assessment remains a challenge, and this third edition aims to address that challenge in the following ways:

An increased emphasis on useful assessment. In earlier editions, I placed a chapter on using assessment results at the end, which made chronological sense. But many faculty and administrators still struggle to grasp that assessment is all about improving how we help students learn, not an end in itself, and that assessments should be planned with likely uses in mind. So I have added a second chapter on using assessment results to the beginning of the book. And throughout the book I talk not about “assessment results” but about “evidence of student learning,” which is what this is really all about.

Greater attention to building a culture in which assessment is useful and used. Getting colleagues on board remains a stubborn issue. Two chapters on this in the second edition have been expanded to six, including new chapters on guiding and coordinating assessment, helping everyone learn what to do, keeping assessment cost-effective, and making assessment collaborative.

An enhanced focus on the many settings of assessment, especially general education and co-curricula. Faculty and administrators are looking for more guidance on how to assess student learning in specific settings such as the classroom, general education curricula, undergraduate and graduate programs, and co-curricular experiences. The second edition provided little of this guidance and, indeed, did not draw many distinctions in assessment across these settings. A thorough treatment of assessment in each setting is beyond the scope of this book, of course. But this edition features a new chapter on the many settings of assessment, and several chapters now include discussions of how to apply their concepts in specific settings.

Call-out boxes to introduce assessment vocabulary. The jargon of assessment continues to put off faculty and staff as well as graduate students who use this as a textbook. I opened the second edition with a chapter that was essentially a glossary, but it overwhelmed graduate students. In my 2014 book, Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability, I introduced higher education vocabulary with call-out boxes called Jargon Alerts. That feature was well received, so in this edition I've eliminated the glossary chapter and instead sprinkled Jargon Alert boxes throughout.

A new focus on synthesizing evidence of student learning into an overall picture of an integrated learning experience. Assessment committees and administrators today are often inundated with assessment reports from programs and units, and they struggle to integrate them into an overall picture of student learning, in part because learning itself is not yet an integrated experience. The idea that assessment should be part of an integrated learning experience is now a theme addressed throughout the book.

More immediate attention to learning goals. The second edition discussed learning goals about a third of the way through the book. Because learning goals are the drivers of meaningful assessment, the chapter on them now appears earlier in the book.

A new chapter on curriculum design. One of the major barriers to effective assessment and the use of student learning evidence is poor curriculum design, so I've added a whole new chapter on this.

More thorough information on planning assessment processes. There are now two chapters instead of one. The first one, on planning assessments of program learning goals, provides a framework, and the second one applies that framework to planning assessments in general education, co-curricula, and other settings.

New frameworks for rubric design and setting standards and targets. In 2016 I researched and wrote a chapter, “Rubric Development,” for the second edition of the Handbook on Measurement, Assessment, and Evaluation in Higher Education (Secolsky & Denison, 2017). My research changed my thinking on what an effective rubric looks like, how it should be developed, and how standards and targets should be set.

A new chapter on assessing the hard-to-assess. The former chapter on assessing attitudes and values is now two chapters – one on miscellaneous assessment tools and one on how to assess the hard-to-assess.

New resources. Many new assessment resources have emerged since the second edition was published, including books, models, published instruments, technologies, and research. Perhaps the most important new resources are the widely used VALUE rubrics published by the Association of American Colleges & Universities and the many white papers published by the National Institute for Learning Outcomes Assessment. This edition introduces readers to these and other valuable new resources. And, yes, I did update the references as I originally envisioned!


Interest in assessing student learning at colleges and universities – and the need to learn how to do it – skyrocketed in the last two decades of the twentieth century and continues to grow in the twenty-first century. The higher education community is increasingly committed to creating learning-centered environments in which faculty and staff work actively to help students learn, and the assessment of student learning is essential to understanding and gauging the success of these efforts. In the United States and elsewhere, accreditors and other quality assurance agencies require colleges and academic programs to assess how well students are achieving key learning goals. These trends have created a need for straightforward, sensible guidance on how to assess student learning.

Purpose and intended audience

Many years ago, someone commented on the value of my workshops to the “But how do we do it?” crowd. That phrase has stayed with me, and it is the root of this book. Yes, we in higher education are theorists and scholars, with an inherent interest in whys and wherefores, but there are times when all we need and want is simple, practical advice on how to do our jobs. Providing that advice is the purpose of this book.

Assessing Student Learning: A Common Sense Guide is designed to summarize current thinking on the practice of assessment in a comprehensive, accessible, and useful fashion for those without formal experience in assessing student learning. Short on background and theory and long on practical advice, this is a plainspoken, informally written book designed to provide sensible guidance on virtually all aspects of assessment to four audiences: Assessment newcomers, experienced assessment practitioners, faculty and administrators involved in student learning, and students in graduate courses on higher education assessment.

Scope and treatment: A common sense approach to assessment

This book is called A Common Sense Guide because its premise is that effective assessment is based on simple, common sense principles. Because each college and learning experience is unique and therefore requires a somewhat unique approach to assessment, this book presents readers not with a prescriptive cookbook approach but with well-informed principles and options that they can select and adapt to their own circumstances.

This book is also based on common sense in that it recognizes that most faculty do not want to spend an excessive amount of time on assessment and are not interested in generating scholarly research from their assessment activities. The book therefore sets realistic rather than scholarly standards for good practice. It does not expect faculty to conduct extensive validation studies of the tests they write, for example, but it does expect faculty to take reasonable steps to ensure that their tests are of sufficient quality to generate fair and useful evidence of student learning, and it provides very practical suggestions on how to do that.

This book also minimizes the use of educational and psychometric jargon. For instance, while it discusses reliability and validity, it avoids using those terms as much as possible. Jargon Alert boxes are sprinkled throughout to help readers understand the vocabulary of higher education assessment.

This book is also unique in its comprehensive scope, although it is not (as my husband reminded me when I was in despair over ever finishing the first edition) an encyclopedia. If you'd like to learn more, every chapter cites additional resources to explore.

Using this book

Assessment in higher education is still a nascent discipline. The science of educational testing and measurement is little more than a century old, and many of the ideas and concepts presented here have been developed only within the last few decades. Assessment scholars and practitioners still lack a common vocabulary or a widely accepted definition of what constitutes good or best assessment practices. As a result, a few may disagree with some of the ideas expressed here. As you hear conflicting ideas, use your own best judgment – your common sense, if you will – to decide what's best for your situation.

Assessment newcomers who want to gain a general understanding of all aspects of assessment will find that the book's five parts take them sequentially through the assessment process: Understanding assessment, planning the assessment process, getting everyone on board, choosing and developing appropriate tools, and understanding and using student learning evidence.

More experienced assessment practitioners will find the book a helpful reference guide. Plenty of headings, lists, tables, and cross-references, along with a thorough index, will help them find answers quickly to whatever questions they have.

Anyone involved in student learning, including faculty who simply want to improve assessments within their classes or student development staff who want to improve assessments in their co-curricular experiences, will find much of the book of interest. Table 2.1 in Chapter 2 suggests especially relevant chapters for various assessment settings.

Faculty and staff teaching professional development workshops and graduate courses in assessment will find this book a useful textbook or resource. Each chapter concludes with questions and exercises for thought, discussion, and practice. No answer key is provided, because these are mostly complex questions with no simple answers! Often the conversation leading to the answers will reinforce learning more than the answers themselves.


Acknowledgments

Some of the material in this book is adapted from my earlier book, Questionnaire Survey Research: What Works (Suskie, 1996), published by the Association for Institutional Research. I am grateful to the Association for permission to adapt this material. Some material in Chapter 15 is adapted from my book chapter “Rubric Development” in the Handbook on Measurement, Assessment, and Evaluation in Higher Education (Secolsky & Denison, 2017), by permission of Taylor and Francis Group, LLC, a division of Informa plc. I am grateful to Taylor and Francis for permission to adapt this material. I also thank my daughter Melissa for writing the deliberately less-than-sterling essay in Exhibit 15.3.

This book would not be in your hands without the work, support, and input of many people, including assessment practitioners and scholars, faculty and staff. Over the last 15 years, I have worked with literally thousands of faculty and staff at colleges and universities across the United States and throughout the world. Their questions, comments, thoughts, and ideas have pushed me to research and reflect on issues beyond those in the second edition, and this led to much of the new material in this edition. When I was contemplating this third edition, I emailed several hundred colleagues for their thoughts and ideas on a new edition, and dozens responded with extraordinarily thoughtful input. Assessment people are the nicest, friendliest, and most supportive people in the world! I particularly want to acknowledge, with deep gratitude, Elizabeth Barkley, Cynthia Howell, Claire Major, and Susan Wood, who reviewed drafts of the entire manuscript and offered wise counsel and suggestions.

Altogether I am incredibly grateful to these unsung colleagues for their willingness to share so much with me.

About the author

Linda Suskie is an internationally recognized consultant, writer, speaker, and educator on a broad variety of higher education assessment and accreditation topics. Her most recent book is Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability (2014). Her experience includes serving as a Vice President at the Middle States Commission on Higher Education, Associate Vice President for Assessment & Institutional Research at Towson University, and Director of the American Association for Higher Education's Assessment Forum. Her more than 40 years of experience in higher education administration include work in accreditation, assessment, institutional research, strategic planning, and quality management, and she has been active in numerous professional organizations.

Linda has taught graduate courses in assessment and educational research methods, as well as undergraduate courses in writing, statistics, and developmental mathematics. She holds a bachelor's degree in quantitative studies from Johns Hopkins University and a master's in educational measurement and statistics from the University of Iowa.

Part 1
Understanding Assessment

What Is Assessment?

While the term assessment can be used broadly – we can assess the achievement of any goal or outcome – in this book, the term generally refers to the assessment of student learning. Many assessment practitioners have put forth definitions of student learning assessment, but the best one I've heard is in the Jargon Alert box. It's from Dr. Jane Wolfson, a professor of biological sciences at Towson University (personal communication, n.d.). It suggests that student learning assessment has three fundamental traits.

  1. We have evidence of how well our students are achieving our key learning goals.
  2. The quality of that evidence is good enough that we can use it to inform important decisions, especially regarding helping students learn.
  3. We use that evidence not only to assess the achievement of individual students but also to reflect on what we are doing and, if warranted, change what we're doing.

Assessment is part of teaching and learning

Assessment is part of a four-step process of helping students learn (List 1.1). These four steps do not represent a one-and-done process but a continuous four-step cycle (Figure 1.1). In the fourth step, evidence of student learning is used to review and possibly revise approaches to the other three steps (see Jargon Alert on closing the loop), and the cycle begins anew.


Figure 1.1: Teaching, Learning, and Assessment as a Continuous Four-Step Cycle

If the cycle in Figure 1.1 looks familiar to you, it's the Plan-Do-Check-Act cycle of business quality improvement popularized by Deming (2000): Plan a process, do or carry out the process, check how well the process is working, and act on the information obtained during the Check step to decide on improvements to the process, as appropriate.

Comparing traditional and current approaches to assessment

Faculty have been assessing student learning for centuries, often through written and oral examinations. How do today's approaches to assessment differ from traditional approaches? Table 1.1 summarizes some key differences between traditional and contemporary ways of thinking about assessment.

Table 1.1: Traditional Versus Contemporary Ways of Thinking About Assessment

Traditional: Assessment is planned and implemented without consideration of learning goals, if any even exist.
Contemporary: Assessment is carefully aligned with learning goals: The most important things we want students to learn (Chapter 4).

Traditional: Assessment is often focused on memorized knowledge.
Contemporary: Assessment is focused on thinking and performance skills (Chapter 4).

Traditional: Assessment is often poor quality, simply because faculty and staff have had few formal opportunities to learn how to design and use effective assessment strategies and tools.
Contemporary: Assessment is developed from research and best practices on teaching and assessment methodologies (Chapters 3 and 26).

Traditional: Assessment is used only to assess and grade individual students, with decisions about changes to curricula and pedagogies often based on hunches and anecdotes rather than solid evidence.
Contemporary: Assessment is used to improve teaching, learning, and student success as well as to assign grades and otherwise assess individual students (Chapters 6 and 26).

Traditional: Assessment is used only in individual course sections; not connected to anything else.
Contemporary: Assessment is viewed as part of an integrated, collaborative learning experience (Chapter 2).

Traditional: Assessment is not used to tell the story of our successes; stories are told through anecdotes about star students rather than broader evidence from representative students.
Contemporary: Assessment is used to tell our story: What makes our college or program distinctive and how successful we are in meeting societal and student needs (Chapter 25).

Comparing assessment and grading

Obviously there is a great deal of overlap between the tasks of grading and assessment, as both aim to identify what students have learned. There are two key differences, however. The first is that the grading process is usually isolated, involving only an individual faculty member and an individual student. Assessment, in contrast, focuses on entire cohorts of students, and it often considers how effectively many people, not just an individual faculty member, are collectively helping them learn.

The second difference between grading and assessment is that they have different purposes. The main purpose of grades is to give feedback to individual students, while assessment has three broader purposes discussed in Chapter 6: Ensuring and improving educational quality, stewardship, and accountability. Grades alone are usually insufficient to achieve these purposes for several reasons.

Grades alone do not usually provide meaningful information on exactly what students have and haven't learned.    We can conclude from a grade of B in an organic chemistry course, for example, that the student has probably learned a good deal about organic chemistry. But that grade alone cannot tell us exactly what aspects of organic chemistry she has and has not mastered.

Grading and assessment criteria may differ.    Some faculty base grades not only on evidence of what students have learned, such as tests, papers, presentations, and projects, but also on student behaviors that may or may not be related to course learning goals. Some faculty, for example, count class attendance toward a final course grade, even though students with poor attendance might nonetheless master course learning goals. Others count class participation toward the final grade, even though oral communication skills aren't a course learning goal. Some faculty downgrade assignments that are turned in late. Under these grading practices, students who do not achieve major learning goals might nonetheless earn a fairly high grade by playing by the rules and fulfilling other less-important grading criteria. Conversely, students who achieve a course's major learning goals might nonetheless earn a poor grade if they fail to do the other things expected of them. To better sync grading and assessment criteria, add a professionalism learning goal (Chapter 4) or develop a competency-based curriculum (Chapter 5).

Grading standards may be vague or inconsistent.    While many faculty base assignment and course grades on carefully conceived learning goals and standards, others may base grades on inconsistent, imprecise, and idiosyncratic criteria. Faculty may say they want students to learn how to think critically, for example, but base grades largely on tests emphasizing factual recall. Faculty teaching sections of the same course may not agree on common standards and may therefore award different grades to similar student performance. Sometimes individual grading standards are so vague that a faculty member might, in theory, award an A to a student's work one day and a B to identical work a week later.

Grades do not reflect all learning experiences.    Grades give us information on student performance in individual courses or course assignments (Association of American Colleges & Universities, 2002), but they do not provide information on how well students have learned key competencies, such as critical thinking or writing skills, over an entire program. Grades also do not tell us what students have learned from ungraded co-curricular experiences.

Do grades have a place in an assessment effort?    Of course they do! Grades can be useful, albeit indirect (Chapter 3), and therefore insufficient evidence of student learning. Although grades are often too holistic to yield useful information on strengths and weaknesses in student learning, they can be a good starting point for identifying potential areas of concern. DFIW rates – the proportions of students earning a D, F, Incomplete, or Withdrawal in a course – can identify potential barriers to student success.
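A DFIW rate is simply the share of enrollments in a course that end in one of those four outcomes. As a rough illustration only (the course data and the helper function below are invented, not from this book), the calculation can be sketched in a few lines of Python:

```python
from collections import Counter

def dfiw_rate(grades):
    """Proportion of enrollments ending in D, F, Incomplete (I), or Withdrawal (W)."""
    if not grades:
        raise ValueError("no grades supplied")
    counts = Counter(grades)
    dfiw = sum(counts[g] for g in ("D", "F", "I", "W"))
    return dfiw / len(grades)

# A hypothetical 20-student section: 14 earn A-C, 6 earn D, F, I, or W.
section = ["A"] * 5 + ["B"] * 6 + ["C"] * 3 + ["D"] * 2 + ["F"] * 2 + ["I"] + ["W"]
print(f"DFIW rate: {dfiw_rate(section):.0%}")  # prints "DFIW rate: 30%"
```

A section with a markedly higher rate than comparable courses would be a candidate for a closer look, though the rate alone says nothing about why students are struggling.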

Grades can be especially useful if courses, assignments, and learning activities are purposefully designed to help students achieve key learning goals (Chapters 5 and 16) by using tools such as test blueprints (Chapter 17) or rubrics (Chapter 15).

Comparing assessment and scholarly research

Assessment, while a cousin of scholarly research, differs in its purpose and therefore in its nature (Upcraft & Schuh, 2002). Traditional scholarly research is commonly conducted to test theories, while assessment is a form of action research (see Jargon Alert) conducted to inform one's own practice – a craft-based rather than scientific approach (Ewell, 2002). The four-step teaching-learning-assessment cycle of establishing learning goals, providing learning opportunities, assessing student learning, and using evidence of student learning mirrors the four steps of action research: Plan, act, observe, and reflect.

Assessment, like any other form of action research, is disciplined and systematic and uses many of the methodologies of traditional research. But most faculty and staff lack the time and resources to design and conduct rigorous, replicable empirical research studies of student learning. They instead aim to keep the benefits of assessment in proportion to the time and resources devoted to them (Chapter 12). If you design your assessments reasonably well and collect corroborating evidence (Chapter 21), your evidence of student learning may be imperfect but will nonetheless give you information that you will be able to use with confidence to make decisions about teaching and learning.

Comparing assessment and evaluation

Is assessment a synonym for evaluation? It depends on the definition of evaluation that is used.

Evaluation may be defined as using assessment information to make an informed judgment    on matters such as whether students have achieved the learning goals we've established for them, the relative strengths and weaknesses of teaching strategies, or what changes in learning goals and teaching strategies are appropriate. Under this definition, evaluation is the last two steps of the teaching-learning-assessment process: Interpreting student learning evidence (part of Step 3) and using it (Step 4). This definition points out that student learning evidence alone only guides us; it does not dictate decisions to us. We use our best professional judgment to make appropriate decisions. This definition of evaluation thus reinforces the ownership that faculty and staff have over the assessment process.

Evaluation may be defined as determining the match between intended and actual outcomes.    Under this definition, evaluation is virtually synonymous with the third step of the teaching-learning-assessment cycle.

Evaluation may be defined as investigating and judging the quality or worth of a program, project, or other endeavor.    This defines evaluation more broadly than assessment. We might evaluate an employee safety program, an alumni program, or a civic project designed to reduce criminal recidivism. While assessment focuses on how well student learning goals are achieved, evaluation addresses how well all the major goals of a program are achieved. An anthropology program, for example, might have goals not only for student learning but also to conduct anthropological research, provide anthropological services to local museums, and conduct its affairs in a cost-effective manner. An evaluation of the program would consider not only student learning but also research activities, community service, and cost-effectiveness.

Comparing assessment and measurement

Just as assessment and evaluation of student learning are sometimes considered synonymous, so are assessment and measurement of student learning. But many people have a relatively narrow conception of measurement, thinking of it as placing something on a quantitative scale akin to a yardstick. This book avoids the term measurement, because assessment is much broader than this conception. Assessment may generate qualitative as well as quantitative evidence of student learning (Chapter 20); it may generate categorical evidence as well as evidence that can be placed on a scale (Chapter 23); and it does not have the precision that images like a yardstick imply (Chapter 24).

Time to think, discuss, and practice

  1. Compare the traits of traditional and contemporary assessment practices in Table 1.1 with those you're aware of at your college. Do people at your college largely practice traditional or contemporary approaches to assessment? Can you think of any ways to make contemporary approaches more pervasive?
  2. If anyone in your group has already conducted an assessment of student learning, ask them to share what was done. Then discuss how, if at all, the assessment would have been done differently if it had been approached as scholarly research.