EVERY ERA BRINGS CHALLENGES. Even so, by all accounts, this second decade of the twenty-first century has swept in a steady stream of disruptive developments that threaten some of the most basic assumptions on which the higher education enterprise rests—including how and by whom its core academic functions are delivered.
More than 18 million undergraduate students are currently enrolled at thousands of academic institutions—some quite large, others small, some public, others private, some for-profit, and still others virtual. Movement of students and faculty across these sectors has grown. On many campuses, a large portion of undergraduate teaching is provided by other-than-tenure-track faculty members: part-time adjunct faculty members and graduate teaching assistants. Soaring college costs, unacceptably low degree-completion rates, new technologies, and competitive new providers have become defining features of what some call higher education’s “new normal.” Further disruption comes from the uneasy sense that the quality of student learning may be falling well short of what the twenty-first century demands of our graduates, the economy, and our democracy. In this complex context, understanding student performance and optimizing student success are important not only for maintaining public confidence but, even more, for guiding and informing academic decisions and policies.
But with challenge comes opportunity. By every relevant measure, higher education adds value to individuals and to society (McMahon, 2009). What today’s students know and are able to do will shape their lives and determine their future prospects more than at any time in history. In addition to the numerous lifelong benefits college graduates enjoy, the performance of our colleges and universities has profound implications for the nation’s economy, our quality of life, and America’s place in the world. It is this profound relevance and worth of higher education that adds a palpable sense of urgency to the need to document how college affects students and to use this information effectively to enhance student attainment and institutional effectiveness.
The big question is this: How will colleges and universities in the United States broaden access to higher learning and enhance accomplishment and success for all students while containing and reducing costs? This is higher education’s signal challenge in this century. Any meaningful response requires accurate, reliable data about what students know and are able to do as a result of their collegiate experience. In the parlance of the academy, this systematic stock-taking—the gathering and use of evidence of student learning in decision making and in strengthening institutional performance and public accountability—is known as student learning outcomes assessment. Gathering evidence and understanding what students know and can do as a result of their college experience is not easy, but harnessing that evidence and using it to improve student success and institutional functioning is even more demanding. This second challenge is the subject of this volume.
Assessment should be intentional and purposive, relevant to deliberately posed questions important to both institutions and their stakeholders, and based on multiple sources of evidence, according to the guidelines for evidence of the Western Association of Schools and Colleges (WASC, 2014). Evidence does not “speak for itself.” Instead, it requires interpretation, integration, and reflection in the search for holistic understanding and implications for action. Echoing assessment pioneers at Alverno College many years ago, Larry Braskamp and Mark Engberg (2014) describe this work as “sitting beside” in an effort to assist and collaborate with members of the academy in ways that engender trust, involvement, and high-quality performance.
Whatever the preferred formula or approach—and there are many—we are convinced that if campus leaders, faculty and staff, and assessment professionals change the way they think about and undertake their work, they can multiply the contributions of learning outcomes assessment to American higher education. The good news is that the capacity of the vast majority of American colleges and universities to assess student learning has expanded considerably during the past two decades, albeit largely in response to external pressures. Accreditors of academic institutions and programs have been the primary force leading to the material increase in assessment work, as these groups have consistently demanded more and better evidence of student learning to inform and exercise their quality assurance responsibilities (Kuh & Ikenberry, 2009; Kuh, Jankowski, Ikenberry, & Kinzie, 2014). Prior to the mid-1990s, accrediting groups tended to focus primarily on judgments about whether an institution’s resources—credentials of the faculty, adequacy of facilities, coherence of the curriculum, number of library holdings, and fiscal soundness—were sufficient to deliver its academic programs. Over the past 15 years, however, both institutional and program accreditors have slowly shifted their focus and now expect colleges and universities to obtain and use evidence of student accomplishment (Gaston, 2014). In other words, the question has become “What have students learned, not just in a single course, but as a result of their overall college experience?” Still more recently, in addition to collecting evidence of student performance, accreditors are beginning to press institutions to direct more attention to the consequential use of assessment results for modifying campus policies and practices in ways that lead to improved learning outcomes.
The push from accrediting bodies for institutions to gather and use information about student learning has been reinforced by demands from policymakers at both the federal and state levels. As college costs continue to escalate and public investment in aid to students and institutions has grown, governmental entities have become more interested in how and to what extent students actually benefit, sometimes referred to as the “value added” of attending college. This, in turn, has brought even more attention to the processes and evidence accrediting groups use to make their decisions. Employers also have an obvious interest in knowing what students know and can do, prompting them to join the call for more transparent evidence of student accomplishment.
Taken together, this cacophony of calls for more attention to documenting student learning has not gone unheard by colleges and universities. Thought leaders in the field of assessment have developed tools and conceptual frameworks to guide assessment practice (Banta & Palomba, 2014; Suskie, 2009). In fact, the number of assessment approaches and related instruments jumped almost ten-fold between 2000 and 2009 (Borden & Kernel, 2013), both reflecting and driving increased assessment activity on campuses. Perhaps the best marker of the growth in the capacity and commitment of colleges and universities to assess student learning comes from two national surveys of provosts at accredited two- and four-year institutions conducted by the National Institute for Learning Outcomes Assessment (NILOA) (Kuh & Ikenberry, 2009; Kuh et al., 2014). The most recent of these studies found that 84% of all accredited colleges and universities now have stated learning goals for their undergraduate students, up from three-quarters just five years ago. Most institutions have organizational structures and policies in place to support learning outcomes assessment, including a faculty or professional staff member who coordinates institution-wide assessment and facilitates the assessment efforts of faculty in various academic units. While the majority of institutions use student surveys to collect information about the student experience, increasingly, classroom-based assessments such as portfolios and rubrics are employed. Taken together, this activity strongly suggests that many U.S. institutions of higher education are working to understand and document what students know and can do.
At the same time, all this effort to assess student learning seems, at best, to have had only a modest influence on academic decisions, policies, and practices. Make no mistake: the growth in assessment capacity is noteworthy and encouraging. But what matters to the long-term health and vitality of American higher education, and to the students and society we serve, is harnessing evidence of student learning and making it consequential in improving student success and strengthening institutional performance. Moreover, consequential use of evidence of student learning to solve problems and improve performance will also raise the public’s confidence in its academic institutions and give accreditors empirical grounds on which to make high-stakes decisions.
What is needed to make student learning outcomes assessment more consequential? Answering that question first requires a deeper, more nuanced understanding of the motivations of different groups who conduct this work and their sometimes conflicting effects on faculty members—who are and must continue to be the primary arbiters of educational quality. That is the conundrum we take up in this volume.
A Culture of Compliance
To make evidence of student learning consequential, we must first address the culture of compliance that now tends to dominate the assessment of student learning outcomes at most colleges and universities. While external forces fueled the sharp growth of assessment activity in higher education over the past two decades, these same influences unintentionally cast student learning outcomes assessment as an act of compliance rather than a volitional faculty and institutional responsibility. In the absence of strong internal incentives, a plethora of external pressures to collect and use student learning outcomes data quickly filled the vacuum, creating the dominant narrative for why and how institutions should set assessment priorities and design assessment programs. That is, instead of faculty members and institutional leaders declaring that improvement of student success and institutional performance was the guiding purpose for documenting student performance—and being encouraged and rewarded for doing so—the interests of others outside the institution with no direct role in the local process held sway. Thus, from the outset of the assessment movement circa 1985, complying with the expectations of those beyond the campus has tended to trump the internal academic needs of colleges and universities. Compounding the effects of what is sometimes called initiative fatigue (a syndrome, discussed in Chapter 9, that commonly develops when campuses are swamped by the competing demands of multiple initiatives), assessment for compliance has meant second-guessing the interests and demands of external bodies with no clear vision of how the results can or will be used to help students and strengthen institutional performance.
So it is that by defaulting to the demands and expectations of others, the purposes and approaches of learning outcomes assessment morphed over time into a compliance culture that has effectively separated the work of assessment from those individuals and groups on campus who most need evidence of student learning and who are strategically positioned to apply assessment results productively. The assessment function—determining how well students are learning what institutions say they should know and be able to do—inadvertently became lodged at arm’s length from its natural allies, partners, and end users—including the faculty, but others as well. Ironically, it is the faculty who are responsible for setting and upholding academic standards and who are in the best position to judge student accomplishment. Yet because the externally driven compliance culture has defined and framed assessment, the work of assessment is frequently off-putting, misguided, inadequately conceptualized, and poorly implemented.
Thus, rather than being embraced by the faculty and academic leadership as a useful tool for preparing students well for their lives after college and for enabling continuous improvement in teaching and learning, on too many campuses student learning outcomes assessment remains separate from the academic mainstream. This separation severely limits assessment’s contribution to the very student learning and institutional performance it is designed to enhance. As a result, the purposes and processes of assessment—collecting and reporting data to external audiences—continue to take primacy over the institution’s consequential use of the results of outcomes assessment.
Peter Ewell (2009) offers a cogent analysis of the implications of these conditions by describing two distinct, competing assessment paradigms, one that serves an accountability function and the other that addresses continuous quality improvement of both student learning and institutional effectiveness. In practice, the urgent necessity of accountability has tended to overwhelm the need and opportunity for improvement. It is these two worlds that must be joined.
Without question, providing data about student and institution performance to external entities for the purpose of accountability is both necessary and legitimate. Still, we believe that the two—the interest of faculty and staff to improve teaching and learning and the proper interest of external bodies for accountab...