
Assessing Organizational Performance in Higher Education

BARBARA A. MILLER


TABLES, FIGURES, EXHIBITS, AND WORKSHEETS

TABLES

  4.1 Gap Analysis: Cycle Time for the Registrar’s Posting of End-of-Term Grades (Number of Days After Last Scheduled Final Exam)

FIGURES

  2.1 The Organization as a System
  2.2 Internal Elements of an Organizational System
  2.3 Leadership Systems
  2.4 Inputs
  2.5 Key Work Processes
  2.6 Outputs
  2.7 Outcomes
  3.1 External Elements of an Organizational System
  3.2 Upstream Systems
  3.3 Customers
  3.4 Students as Inputs, Customers, and Stakeholders
  3.5 Stakeholders
  5.1 The Seven Areas of Organizational Performance
  5.2 Effectiveness
  5.3 Productivity
  5.4 Quality
  5.5 Customer and Stakeholder Satisfaction
  5.6 Efficiency
  5.7 Innovation
  5.8 Financial Durability

EXHIBITS

  1.1 Performance Indicators and Reference Points for the Strategic Goal “Increase Enrollment”
  1.2 Performance Indicators and Reference Points for the Strategic Subgoal “Increase Retention”
  2.1 Excerpts from the Mission Statement, Vision Statement, Guiding Principles, Strategic Goals, and Organizational Structure for an Academic Department
  2.2 Excerpts from the Mission Statement, Vision Statement, Guiding Principles, Strategic Goals, and Organizational Structure for Information Services
  2.3 Examples of Inputs for an Academic Department
  2.4 Examples of Inputs for Information Services
  2.5 Examples of Key Work Processes for an Academic Department
  2.6 Examples of Key Work Processes for Information Services
  2.7 Examples of Outputs for an Academic Department
  2.8 Examples of Outputs for Information Services
  2.9 Examples of Intended Outcomes for an Academic Department
  2.10 Examples of Intended Outcomes for Information Services
  3.1 Examples of Upstream Supplier Systems for an Academic Department
  3.2 Examples of Upstream Supplier Systems for Information Services
  3.3 Examples of Upstream Constraining Systems for an Academic Department
  3.4 Examples of Upstream Constraining Systems for Information Services
  3.5 Examples of Upstream Service Partner Systems for an Academic Department
  3.6 Examples of Upstream Service Partner Systems for Information Services
  3.7 Examples of Internal and External Customers for an Academic Department
  3.8 Examples of Internal and External Customers for Information Services
  3.9 Examples of Internal and External Stakeholders for an Academic Department
  3.10 Examples of Internal and External Stakeholders for Information Services
  4.1 Examples of Critical Success Factors for an Academic Department
  4.2 Examples of Critical Success Factors for Information Services
  4.3 Examples of Reference Points
  5.1 Examples of Performance Indicators for Effectiveness in an Academic Department
  5.2 Examples of Performance Indicators for Effectiveness in Information Services
  5.3 Examples of Performance Indicators for Productivity in an Academic Department
  5.4 Examples of Performance Indicators for Productivity in Information Services
  5.5 Examples of Performance Indicators for Q1: Quality of Upstream Systems in an Academic Department
  5.6 Examples of Performance Indicators for Q1: Quality of Upstream Systems in Information Services
  5.7 Examples of Performance Indicators for Q2: Quality of Inputs in an Academic Department
  5.8 Examples of Performance Indicators for Q2: Quality of Inputs in Information Services
  5.9 Examples of Performance Indicators for Q3: Quality of Key Work Processes in an Academic Department
  5.10 Examples of Performance Indicators for Q3: Quality of Key Work Processes in Information Services
  5.11 Examples of Performance Indicators for Q4: Quality of Outputs in an Academic Department
  5.12 Examples of Performance Indicators for Q4: Quality of Outputs in Information Services
  5.13 Examples of Performance Indicators for Q5: Quality of Leadership Systems: Follower and Stakeholder Perceptions and External Relations
  5.14 Examples of Performance Indicators for Q5: Quality of Leadership Systems: Mission
  5.15 Examples of Performance Indicators for Q5: Quality of Leadership Systems: Vision
  5.16 Examples of Performance Indicators for Q5: Quality of Leadership Systems: Guiding Principles
  5.17 Examples of Performance Indicators for Q5: Quality of Leadership Systems: Strategic Goals
  5.18 Examples of Performance Indicators for Q5: Quality of Leadership Systems: Organizational Structure: Design and Governance
  5.19 Examples of Performance Indicators for Q5: Quality of Leadership Systems: Resource Acquisition and Allocation
  5.20 Examples of Performance Indicators for Q5: Quality of Leadership Systems: Costs and Benefits
  5.21 Examples of Performance Indicators for Q6: Quality of Worklife
  5.22 Examples of Performance Indicators for Customer and Stakeholder Satisfaction in an Academic Department
  5.23 Examples of Performance Indicators for Customer and Stakeholder Satisfaction in Information Services
  5.24 Examples of Performance Indicators for Efficiency in an Academic Department
  5.25 Examples of Performance Indicators for Efficiency in Information Services
  5.26 Examples of Creative Changes Supporting Innovation
  5.27 Examples of Performance Indicators for Financial Durability in an Academic Department
  5.28 Examples of Performance Indicators for Financial Durability in Information Services
  6.1 Examples of Mission, Vision, Guiding Principles, Strategic Goals, and Organizational Structure for an Assessment Program
  6.2 Examples of Direct and Indirect Assessment Costs
  6.3 Internal and External Assessment Program Elements
  6.4 Examples of Critical Success Factors for an Institutional Assessment Program
  6.5 Examples of Performance Indicators for Measuring Assessment Program Performance

WORKSHEETS

  1.1 Assessment User Group Analysis
  2.1 Mission Analysis
  2.2 Vision Analysis
  2.3 Guiding Principles Analysis
  2.4 Strategic Goals Analysis
  2.5 Organizational Design Analysis
  2.6 Organizational Governance Analysis
  2.7 Inputs Analysis
  2.8 Key Work Processes, Outputs, and Outcomes Analysis
  3.1 Upstream Systems Analysis
  3.2 Customers Analysis
  3.3 Stakeholders Analysis
  4.1 Critical Success Factor Analysis
  4.2 Assessment Report Schedule
  5.1 Assessing Effectiveness
  5.2 Assessing Productivity
  5.3 Assessing Q1: Quality of Upstream Systems
  5.4 Assessing Q2: Quality of Inputs
  5.5 Assessing Q3: Quality of Key Work Processes
  5.6 Assessing Q4: Quality of Outputs
  5.7 Assessing Q5: Quality of Leadership Systems: Follower Satisfaction and External Relations
  5.8 Assessing Q5: Quality of Leadership Systems: Direction and Support
  5.9 Assessing Q6: Quality of Worklife
  5.10 Assessing Customer Satisfaction
  5.11 Assessing Stakeholder Satisfaction
  5.12 Assessing Efficiency
  5.13 Assessing Innovation
  5.14 Assessing Financial Durability
  5.15 Assessing Critical Success Factors
  5.16 Organizational Performance Areas Important to Assessment Users
  6.1 Communication Planning

FOREWORD

Anyone interested in the survival of higher education realizes that the industry is going through a profound change. Just like manufacturing and health care before it, higher education must face the reality that costs, new technologies, and changing customer expectations create pressures on the industry. Anyone who works in colleges or has a stake in their success will find this book of great interest. Quality education in all its manifestations is crucial to the survival of democracy, as well as to the industry itself.

Peter Drucker, a longtime authority in management, proposed in Management Challenges for the 21st Century (1999) that we may need to stop thinking in terms of managing the work of people and begin managing for performance. To be effective, we must define customers’ values and understand how they make decisions about allocating their income. Management must organize and evaluate the entire operational process, focusing on results and performance.

No one knows these principles better as they relate to higher education than Barbara A. Miller (formerly Lembcke). She has served as an administrative leader, teacher, researcher, and consultant in private and public universities. Her breadth of perspective and knowledge about systems—how they are defined, measured, evaluated, and changed—are extensive. Miller’s broadly based higher education background, combined with her teaching and administrative experience, makes her insights and analysis extremely valuable for those of us serving a variety of roles in the institution as well as those in evaluation positions as stakeholders outside the organization.

Assessing Organizational Performance in Higher Education embraces assessment at the organizational, program, and process levels and evaluates the work from a perspective rooted in systems thinking. Readers will be able to identify major work processes, the significance of these processes in producing quality outcomes, and the strategies necessary for continuous improvement. The book complements the body of literature on assessment, providing both an in-depth theoretical framework and techniques useful for implementation. The information in it is pertinent to everyone from the boardroom to the individual faculty or staff member and will serve as a set of tools to improve the work of the institution. Readers who fully understand the message Miller presents, and who work through the exercises as they apply to the institution or program they are assessing, will have done a great service to their constituencies: to the students whom they serve and to others, both staff and faculty, who care about the quality of their work and the important role they play in this society.

Suzanne Swope, Ed.D.

Vice President for Enrollment and Student Affairs
Emerson College, Boston

PREFACE

I wrote this book to meet the needs of two important groups associated with assessment in higher education: assessors and assessment users. The first group, assessors, consists of persons engaged in day-to-day assessment work. They are faculty, staff, and administrators with part-time or full-time, temporary or permanent responsibilities for assessment. The second group, assessment users, consists of persons who evaluate or judge performance results measured and conveyed by assessors. I see assessment users as the end users or customers of assessment programs.

Assessors seek avenues for measuring performance required by assessment users; assessment users seek appropriate contexts for evaluating assessment findings measured and conveyed by assessors. Often assessors and assessment users are actually the same persons. However, I differentiate the roles for purposes of discussion: assessor refers exclusively to persons exploring matters of measurement, and assessment user refers exclusively to persons engaged in evaluation. I describe various groups of external and internal assessment users and explain how each group uses assessment findings to support a wide range of decisions that have a potential impact on an organization’s capacity to perform.

My purpose in writing this book is to strengthen the knowledge, skills, and abilities of assessors and assessment users in higher education, whether they are novices or experts. I define assessment as the measurement of organizational performance that assessment users evaluate against reference points in order to support their requirements and expectations.

The premise of this book is that assessors in higher education must go beyond assessment of student learning outcomes and institutional effectiveness and into assessment of performance of whole organizations, programs, and processes. This raises two questions: why? and how?

Why assess performance at the organization, program, and process levels? For a variety of reasons:

How is performance assessed at the organizational level?

The book’s focus on performance at the organization, program, and process levels complements and advances the many published works available today on assessment of student learning outcomes and institutional effectiveness. This focus helps readers understand the interdependence of organizations in higher education and complexities inherent in organizational performance. I believe that this understanding is fundamental to the practice and scholarship of assessment.

For assessors, the book offers a conceptual framework to guide the measurement of organizational performance in all seven areas of organizational performance. The conceptual framework applies to both academic and administrative units of analysis at any level within the hierarchical structure of educational institutions; it also applies to important programs and key work processes that operate within single organizations or across several organizations or functions within an institution.

What is most exciting about this book is its examination of assessment in several new and different areas of organizational performance—areas that include but go beyond institutional effectiveness, student learning outcomes, and input quality. The following are some of the new areas of performance that assessors can measure:

For external assessment users such as governing boards, governmental agencies, and organizations that affirm accreditation, classification, rank, and eligibility, the book is designed to expand knowledge of the nature and complexity of organizational performance in higher education—knowledge that will, ideally, enhance the ability to frame appropriate accountability questions of educational leaders.

For internal assessment users, such as senior leaders, administrators, and faculty and staff, the book is designed to expand knowledge of the internal workings and interdependence of organizations both inside and outside the institution, complexities inherent in organizational performance, and important links among organizational system elements, areas of organizational performance, and assessment. This knowledge will enhance their ability, as assessment users, to frame better performance questions that lead to better assessments of organizational performance.

Finally, the book offers educational leaders specific recommendations on how to build, deploy, and evaluate assessment programs in ways that provide the right information, at the right time, in the right format to meet ever-changing needs of important external and internal assessment users. The book presents many examples and worksheets to help assessors describe their unit’s organizational system elements and measure complex and interdependent areas of organizational performance using performance indicators and reference points appropriate to the organization’s mission, vision, strategic goals, and critical success factors.

Organization of the Book

The book is organized into six chapters. Chapter One describes external and internal assessment user groups in higher education. It explains what types of organizational performance results assessment users want to know, how they typically use assessment findings in their decision-making processes, and what is at stake for organizations whose performance is under review. A worksheet is provided to help assessors identify the assessment information required by important external and internal assessment user groups.

Chapter Two introduces systems thinking and explains the benefits of viewing organizations as open, living, unique systems with a purpose. It begins with a discussion of interdependent system elements that make up organizations, programs, and processes in higher education and explains how each system element presents opportunities for assessment. Chapter Two describes five internal system elements: leadership systems, inputs, key work processes, outputs, and outcomes. Many examples are provided for academic and administrative organizations. Worksheets are also provided to help assessors identify and describe internal system elements of units whose performance they intend to measure.

Chapter Three continues the discussion of system elements and their link to assessment. It describes three external system elements: upstream systems, customers, and stakeholders. Again, many examples are provided for academic and administrative organizations. Worksheets are also provided to help assessors identify and describe external system elements of units whose performance they intend to measure.

Chapter Four is a discussion of how to assess organizational performance. It summarizes assessment methods and terminology. The chapter begins by differentiating the work of measurement from evaluation in assessment. It explains how to clarify units of analysis and the proper ways to use time frames, critical success factors, performance indicators, and reference points. It describes methods for collecting assessment data and disseminating performance results. Worksheets are provided to help assessors identify critical success factors and build an assessment report schedule for units whose performance they intend to measure.

Chapter Five is a discussion of what to assess in organizational performance. It covers the seven operational definitions of organizational performance noted earlier in this Preface: effectiveness, productivity, quality, customer and stakeholder satisfaction, efficiency, innovation, and financial durability. Many examples of performance indicators in each area are provided for academic and administrative organizations. Worksheets are provided to help assessors identify and describe performance indicators and reference points in all seven areas (including critical success factors) and link performance areas to specific assessment user needs and preferences.

Finally, Chapter Six is about how to build, deploy, and assess new or more formalized campuswide assessment programs. It offers suggestions about the importance of clarifying purpose, identifying important assessment user groups, and ensuring two-way, ongoing communication about assessment. It explains how to create and sustain a supportive organizational culture for assessment and how to build a leadership structure that ensures program success. It describes direct and indirect costs of assessment. It presents external and internal system elements of an assessment program as well as examples of indicators for measuring performance in areas deemed critical to program success. A worksheet is provided to help assessment leaders build an assessment communication plan.

Acknowledgments

This book reflects many years of work with friends and colleagues who helped me frame and apply this conceptual model for assessing performance of organizations in higher education. In particular, I would like to thank my husband and longtime friend and colleague, Louie Miller III, who not only served as my sounding board throughout the development of this book but also provided patient guidance and expertise resulting from his long and successful professional career as a tenured professor in sociology and senior executive in information services. I would also like to thank my friend and colleague Suzanne Swope, currently vice president for enrollment and student affairs at Emerson College, for her advice and collaboration over the many years we worked together at George Mason University. I would also like to thank my longtime friend Sandra Everett at Lorain County Community College for sharing her expertise in the area of quality management and helping me understand and apply those principles in the context of organizations in higher education. Finally, I would like to thank Scott Sink, Tom Tuttle, and Carl Thor, whose early works inspired the formation of this conceptual framework for assessing performance of organizations in higher education.

Greencastle, Indiana
June 2006

Barbara A. Miller

ABOUT THE AUTHOR

Barbara A. Miller (formerly Lembcke) is an experienced administrator in higher education and has served as a director of institutional planning and research, a senior planning and policy analyst, and an internal management consultant specializing in organizational development and continuous quality improvement. She is also an experienced faculty member who has taught courses in management, leadership theory, organizational development, and communication. Her expertise in assessment results from thirty years of experience in large public research institutions, large and medium-sized two-year comprehensive community colleges, and small liberal arts institutions. She served for two years as an examiner for the Malcolm Baldrige National Quality Award Program and one year as an evaluator in the Baldrige pilot program in education, where documentation of performance results is critical.

Miller earned her bachelor of arts degree in sociology at the University of California, Berkeley; her master of arts degree in higher education and student personnel administration at Syracuse University; and her doctorate in higher education administration at the University of Florida in Gainesville. She has also taken M.B.A. courses at the University of North Florida, Jacksonville.

Miller lives in Greencastle, Indiana, where she serves as guest scholar at DePauw University and coordinates her consulting service.