Using Evidence of Student Learning to Improve Higher Education

George D. Kuh, Stanley O. Ikenberry, Natasha A. Jankowski, Timothy Reese Cain, Peter T. Ewell, Pat Hutchings, Jillian Kinzie


About This Book

American higher education needs a major reframing of student learning outcomes assessment

Dynamic changes are underway in American higher education. New providers, emerging technologies, cost concerns, student debt, and nagging doubts about quality all call out the need for institutions to show evidence of student learning. From scholars at the National Institute for Learning Outcomes Assessment (NILOA), Using Evidence of Student Learning to Improve Higher Education presents a reframed conception and approach to student learning outcomes assessment. The authors explain why it is counterproductive to view collecting and using evidence of student accomplishment as primarily a compliance activity.

Today's circumstances demand a fresh and more strategic approach to the processes by which evidence about student learning is obtained and used to inform efforts to improve teaching, learning, and decision-making. Whether you're in the classroom, an administrative office, or on an assessment committee, data about what students know and are able to do are critical for guiding changes that are needed in institutional policies and practices to improve student learning and success.

Use this book to:

  • Understand how and why student learning outcomes assessment can enhance student accomplishment and increase institutional effectiveness
  • Shift the view of assessment from being externally driven to internally motivated
  • Learn how assessment results can help inform decision-making
  • Use assessment data to manage change and improve student success

Gauging student learning is necessary if institutions are to prepare students to meet the 21st-century needs of employers and to live economically independent, civically responsible lives. For assessment professionals and educational leaders, Using Evidence of Student Learning to Improve Higher Education offers both a compelling rationale and practical advice for making student learning outcomes assessment more effective and efficient.


Information

Publisher: Jossey-Bass
Year: 2014
ISBN: 9781118903667

1
FROM COMPLIANCE TO OWNERSHIP
WHY AND HOW COLLEGES AND UNIVERSITIES ASSESS STUDENT LEARNING

Stanley O. Ikenberry and George D. Kuh
Control leads to compliance; autonomy leads to engagement.
—Daniel H. Pink
EVERY ERA BRINGS CHALLENGES. Even so, by all accounts, this second decade of the twenty-first century has swept in a steady stream of disruptive developments that threaten some of the most basic assumptions on which the higher education enterprise rests—including how and by whom its core academic functions are delivered.
More than 18 million undergraduate students are currently enrolled at thousands of academic institutions—some quite large, others small, some public, others private, some for-profit, and still others virtual. Movement of students and faculty across these sectors has grown. On many campuses, a large portion of undergraduate teaching is provided by other-than-tenure-track faculty members: part-time adjunct faculty members and graduate teaching assistants. Soaring college costs, unacceptably low degree-completion rates, new technologies, and competitive new providers have become defining features of what some call higher education’s “new normal.” Further disruption comes from the uneasy sense that the quality of student learning may be falling well short of what the twenty-first century demands of our graduates, the economy, and our democracy. It is in this complex context that understanding student performance and optimizing student success are not just important for maintaining public confidence; they are even more necessary to guide and inform academic decisions and policies.
But with challenge comes opportunity. By every relevant measure, higher education adds value to individuals and to society (McMahon, 2009). What today’s students know and are able to do will shape their lives and determine their future prospects more than at any time in history. In addition to the numerous lifelong benefits college graduates enjoy, the performance of our colleges and universities has profound implications for the nation’s economy, our quality of life, and America’s place in the world. It is this profound relevance and worth of higher education that adds a palpable sense of urgency to the need to document how college affects students and to use this information effectively to enhance student attainment and institutional effectiveness.
The big question is this: How will colleges and universities in the United States both broaden access to higher learning and also enhance student accomplishment and success for all students while at the same time containing and reducing costs? This is higher education’s signal challenge in this century. Any meaningful response requires accurate, reliable data about what students know and are able to do as a result of their collegiate experience. In the parlance of the academy, this systematic stock-taking—the gathering and use of evidence of student learning in decision making and in strengthening institutional performance and public accountability—is known as student learning outcomes assessment. Gathering evidence and understanding what students know and can do as a result of their college experience is not easy, but harnessing that evidence and using it to improve student success and institutional functioning is even more demanding. This second challenge is the subject of this volume.
According to the guidelines for evidence of the Western Association of Schools and Colleges (WASC, 2014), assessment should be intentional and purposive, relevant to deliberately posed questions important to both institutions and their stakeholders, and based on multiple sources of information. Evidence does not “speak for itself.” Instead, it requires interpretation, integration, and reflection in the search for holistic understanding and implications for action. As did assessment pioneers at Alverno College many years ago, Larry Braskamp and Mark Engberg (2014) describe this work as “sitting beside” in an effort to assist and collaborate with members of the academy in ways that engender trust, involvement, and high-quality performance.
Whatever the preferred formula or approach—and there are many—we are convinced that if campus leaders, faculty and staff, and assessment professionals change the way they think about and undertake their work, they can multiply the contributions of learning outcomes assessment to American higher education. The good news is that the capacity of the vast majority of American colleges and universities to assess student learning has expanded considerably during the past two decades, albeit largely in response to external pressures. Accreditors of academic institutions and programs have been the primary force leading to the material increase in assessment work, as these groups have consistently demanded more and better evidence of student learning to inform and exercise their quality assurance responsibilities (Kuh & Ikenberry, 2009; Kuh, Jankowski, Ikenberry, & Kinzie, 2014). Prior to the mid-1990s, accrediting groups tended to focus primarily on judgments about whether an institution’s resources—credentials of the faculty, adequacy of facilities, coherence of the curriculum, number of library holdings, and fiscal soundness—were sufficient to deliver its academic programs. Over the past 15 years, however, both institutional and program accreditors have slowly shifted their focus and now expect colleges and universities to obtain and use evidence of student accomplishment (Gaston, 2014). In other words, the question has become “What have students learned, not just in a single course, but as a result of their overall college experience?” Still more recently, in addition to collecting evidence of student performance, accreditors are beginning to press institutions to direct more attention to the consequential use of assessment results for modifying campus policies and practices in ways that lead to improved learning outcomes.
The push from accrediting bodies for institutions to gather and use information about student learning has been reinforced by demands from policymakers at both the federal and state levels. As college costs continue to escalate and public investment in aid to students and institutions has grown, governmental entities have become more interested in how and to what extent students actually benefit, sometimes referred to as the “value added” of attending college. This, in turn, has brought even more attention to the processes and evidence accrediting groups use to make their decisions. Employers also have an obvious interest in knowing what students know and can do, prompting them to join the call for more transparent evidence of student accomplishment.
Taken together, this cacophony of calls for more attention to documenting student learning has not gone unheard by colleges and universities. Thought leaders in the field of assessment have developed tools and conceptual frameworks to guide assessment practice (Banta & Palomba, 2014; Suskie, 2009). In fact, the number of assessment approaches and related instruments jumped almost ten-fold between 2000 and 2009 (Borden & Kernel, 2013), both reflecting and driving increased assessment activity on campuses. Perhaps the best marker of the growth in the capacity and commitment of colleges and universities to assess student learning comes from two national surveys of provosts at accredited two- and four-year institutions conducted by the National Institute for Learning Outcomes Assessment (NILOA) (Kuh & Ikenberry, 2009; Kuh et al., 2014). The most recent of these studies found that 84% of all accredited colleges and universities now have stated learning goals for their undergraduate students, up from three-quarters just five years ago. Most institutions have organizational structures and policies in place to support learning outcomes assessment, including a faculty or professional staff member who coordinates institution-wide assessment and facilitates the assessment efforts of faculty in various academic units. While the majority of institutions use student surveys to collect information about the student experience, increasingly, classroom-based assessments such as portfolios and rubrics are employed. Taken together, this activity strongly suggests that many U.S. institutions of higher education are working to understand and document what students know and can do.
At the same time, all this effort to assess student learning, at best, seems to have had only a modest influence on academic decisions, policies, and practices. Make no mistake: the growth in assessment capacity is noteworthy and encouraging. But harnessing evidence of student learning and making it consequential in improving student success and strengthening institutional performance is what matters to the long-term health and vitality of American higher education and the students and society we serve. Moreover, consequential use of evidence of student learning to solve problems and improve performance will also raise the public’s confidence in its academic institutions and give accreditors empirical grounds on which to make high-stakes decisions.
What is needed to make student learning outcomes assessment more consequential? Answering that question first requires a deeper, more nuanced understanding of the motivations of different groups who conduct this work and their sometimes conflicting effects on faculty members—who are and must continue to be the primary arbiters of educational quality. That is the conundrum we take up in this volume.

A Culture of Compliance

To make evidence of student learning consequential, we must first address the culture of compliance that now tends to dominate the assessment of student learning outcomes at most colleges and universities. While external forces fueled the sharp growth of assessment activity in higher education over the past two decades, these same influences unintentionally nurtured the unfortunate side effect of casting student learning outcomes assessment as an act of compliance rather than a volitional faculty and institutional responsibility. As a result, a plethora of external pressures to collect and use student learning outcomes assessment data quickly filled the incentive vacuum, creating the dominant narrative for why and how institutions should set assessment priorities and design assessment programs. That is, instead of faculty members and institutional leaders declaring that improvement of student success and institutional performance was the guiding purpose for documenting student performance—and being encouraged and rewarded for doing so—the interests of others outside the institution with no direct role in the local process held sway. Thus, from the outset of the assessment movement circa 1985, complying with the expectations of those beyond the campus has tended to trump the internal academic needs of colleges and universities. Compounding the effects of what is sometimes called initiative fatigue (a syndrome, discussed in Chapter 9, that commonly develops when campuses are swamped by the competing demands of multiple initiatives), assessment for compliance has meant second-guessing the interests and demands of external bodies with no clear vision of how the results can or will be used to help students and strengthen institutional performance.
So it is that by defaulting to the demands and expectations of others, the purposes and approaches of learning outcomes assessment morphed over time into a compliance culture that has effectively separated the work of assessment from those individuals and groups on campus who most need evidence of student learning and who are strategically positioned to apply assessment results productively. The assessment function—determining how well students are learning what institutions say they should know and be able to do—inadvertently became lodged at arm’s length from its natural allies, partners, and end users—including the faculty, but others as well. Ironically, it is the faculty who are responsible for setting and upholding academic standards and who are in the best position to judge student accomplishment. Yet because the externally driven compliance culture has defined and framed assessment, the work of assessment is frequently off-putting, misguided, inadequately conceptualized, and poorly implemented.
Thus, rather than student learning outcomes assessment being embraced by the faculty and academic leadership as a useful tool focused on the core institutional functions of preparing students well for their lives after college and enabling continuous improvement in teaching and learning, on too many campuses this work remains separate from the academic mainstream, severely limiting its contribution to the very student learning and institutional performance it is designed to enhance. As a result, the purposes and processes of assessment—collecting and reporting data to external audiences—continue to take primacy over the institution’s consequential use of the results of outcomes assessment.
Peter Ewell (2009) offers a cogent analysis of the implications of these conditions by describing two distinct, competing assessment paradigms, one that serves an accountability function and the other that addresses continuous quality improvement of both student learning and institutional effectiveness. In practice, the urgent necessity of accountability has tended to overwhelm the need and opportunity for improvement. It is these two worlds that must be joined.
Without question, providing data about student and institutional performance to external entities for the purpose of accountability is both necessary and legitimate. Still, we believe that the two—the interest of faculty and staff in improving teaching and learning and the proper interest of external bodies in accountab...