Assessment Essentials

Planning, Implementing, and Improving Assessment in Higher Education

Trudy W. Banta, Catherine A. Palomba


About This Book

A comprehensive expansion to the essential higher education assessment text

This second edition of Assessment Essentials updates the bestselling first edition, the go-to resource on outcomes assessment in higher education. In this thoroughly revised edition, you will find, in a familiar framework, nearly all new material, examples from more than 100 campuses, and indispensable descriptions of direct and indirect assessment methods that have helped to educate faculty, staff, and students about assessment.

Outcomes assessment is of increasing importance in higher education, especially as new technologies and policy proposals spotlight performance-based success measures. Leading authorities Trudy Banta and Catherine Palomba draw on research, standards, and best practices to address the timeless and timeliest issues in higher education accountability. New topics include:

  • Using electronic portfolios in assessment
  • Rubrics and course-embedded assessment
  • Assessment in student affairs
  • Assessing institutional effectiveness

As always, the step-by-step approach of Assessment Essentials will guide you through the process of developing an assessment program, from the research and planning phase to implementation and beyond, with more than 100 examples along the way. Assessment data are increasingly being used to guide everything from funding to hiring to curriculum decisions, and all faculty and staff will need to know how to use them effectively. Perfect for anyone new to the assessment process, as well as for the growing number of assessment professionals, this expanded edition of Assessment Essentials will be an essential resource on every college campus.


Information

Publisher
Jossey-Bass
Year
2014
ISBN
9781118903650

CHAPTER 1
DEFINING ASSESSMENT

The concept of assessment resides in the eye of the beholder. It has many definitions, so it is essential that anyone who writes or speaks about assessment defines it at the outset.

Some Definitions

In common parlance, assessment as applied in education describes the measurement of what an individual knows and can do. Over the past three decades, the term outcomes assessment in higher education has come to imply aggregating individual measures for the purpose of discovering group strengths and weaknesses that can guide improvement actions.
Some higher education scholars have focused their attention on the assessment of student learning. Linda Suskie, for instance, in the second edition of her book Assessing Student Learning: A Common Sense Guide (2009), tells us that for her the term assessment “refers to the assessment of student learning.” In the first edition of this book, we also adopted the focus on student learning:
Assessment is the systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development. (Palomba and Banta, 1999, p. 4)
The term assessment in higher education has also come to encompass the entire process of evaluating institutional effectiveness. Reflecting her career in applying her background in educational psychology in program evaluation, the first author of this book uses this definition:
Assessment is the process of providing credible evidence of
  • resources
  • implementation actions, and
  • outcomes
undertaken for the purpose of improving the effectiveness of
  • instruction,
  • programs, and
  • services
in higher education.
In this book, the term assessment will certainly apply to student learning. But we also use it to describe the evaluation of academic programs, student support services such as advising, and even administrative services as we look at overall institutional effectiveness.
We will describe the assessment of student learning as well as of instructional and curricular effectiveness in general education and major fields of study. We will consider methods for assessing student learning and program effectiveness in student services areas. We also will present approaches to assessing student learning and program and process effectiveness at the institutional level. In fact, the most meaningful assessment is related to institutional mission.
Disciplinary accreditation is a form of assessing program effectiveness in a major field. Regional accreditation is a form of assessing institutional effectiveness. Both are powerful influences in motivating and guiding campus approaches to assessment. Federal, state, and trustee mandates for measures that demonstrate accountability may determine levels of performance funding and also shape campus assessment responses. We will discuss the many external factors that impel college faculty and administrators to undertake assessment activities.
Our guiding principle in this book, however, will be to present approaches to assessment that are designed to help faculty and staff improve instruction, programs, and services, and thus student learning, continuously. Assessment for improvement can also be used to demonstrate accountability. Unfortunately, assessment undertaken primarily to comply with accountability mandates often does not result in campus improvements.

Pioneering in Assessment

In his book The Self-Regarding Institution (1984), Peter Ewell portrays the first work in outcomes assessment of three institutions. In the early 1970s, Sister Joel Reed, president of Alverno College, and Charles McClain, president of Northeast Missouri State University, determined that the assessment of student learning outcomes could be a powerful force in improving the effectiveness of their respective institutions. Alverno faculty surveyed their alumnae to find out what their graduates valued most in terms of their learning at Alverno (Loacker and Mentkowski, 1993). Survey findings shaped faculty development of eight abilities, including communication, analysis, and aesthetic responsiveness, that would become the foundation for curriculum and instruction at Alverno. In addition to work in their own discipline, Alverno faculty were asked to join cross-disciplinary faculty specializing in one of the eight core abilities. Alverno’s (2011) “assessment as learning” approach has transformed that college, increasing its reputation among students and parents, its enrollment, and its visibility in the United States and abroad as a leader in conducting conscientious and mission-centric assessment.
At Northeast Missouri State University, President McClain and his chief academic officer, Darrell Krueger, became early advocates of value-added assessment, giving tests of generic skills to their freshmen and seniors and tracking the gain scores. In addition, department faculty were strongly encouraged to give their seniors an appropriate nationally normed test in their major field if one existed. McClain famously asked his department chairs one persistent question: “Are we making a difference?” meaning, “How are our students doing on those tests we’re giving?” (Krueger, 1993). The early emphasis on test scores had the effect of raising the ability profile of Northeast Missouri’s entering students. Subsequently the faculty and administration decided to pursue and gain approval from the state as Missouri’s public liberal arts institution, with the new name of Truman State University.
The third pioneering institution profiled in Ewell’s book was the University of Tennessee, Knoxville (UTK). Whereas Alverno’s and Northeast Missouri’s assessment initiatives were internal in their origins and aimed at improving institutional effectiveness in accordance with institutional mission, UTK was confronted with the need to address an external mandate—a performance funding program instituted in 1979 by the Tennessee Higher Education Commission and the Tennessee state legislature. Initially UTK’s chancellor, Jack Reese, called the requirements to test freshmen and seniors in general education and seniors in their major field, conduct annual surveys of graduates, and accredit all accreditable programs “an abridgement of academic freedom.” His administrative intern at the time, Trudy Banta, thought the performance funding components looked like elements of her chosen field, program evaluation. She took advantage of a timely opportunity to write a proposal for a grant that the Kellogg Foundation would subsequently fund: “Increasing the Use of Student Information in Decision-Making.” For the first three years of addressing the external accountability mandate, faculty and administrators charted their own course on the performance funding measures on the basis of their Kellogg Project. While the amount of the Kellogg funding was tiny—just ten thousand dollars—for research-oriented faculty, the “Kellogg grant” gave them the opportunity to begin testing of students and questioning of graduates in their own way. Within five years, UTK was recognized by the National Council for Measurement in Education for outstanding practice in “using measurement technology” (Banta, 1984).
By 1985 three additional states joined Tennessee in establishing performance funding programs for their public colleges and universities. Colorado, New Jersey, and Virginia issued far less prescriptive guidelines than Tennessee, however. The state higher education organizations and legislatures in the three new entries provided examples, but left it to their public institutions to select or design tests and other measures to demonstrate their accountability.
In his 2009 paper for the newly formed National Institute for Learning Outcomes Assessment (NILOA), Ewell notes that “two decades ago, the principal actors external to colleges and universities requiring attention to assessment were state governments.” However, by the 1990s, mandates in several states were no longer being enforced because of budget constraints, and so attention turned to other goals, such as higher degree completion rates. Tennessee remained an exception in continuing to employ several learning outcomes measures in its long-established performance funding program.
In 1988, Secretary of Education William Bennett issued an executive order requiring all federally approved accreditation organizations to include in their criteria for accreditation evidence of institutional outcomes (US Department of Education, 1988). During the next several years, the primary external stimulus for assessment moved from states to regional associations as they began to issue specific outcomes assessment directives for institutional accreditation, and discipline-specific bodies created such guidelines for program accreditation. The 1992 Amendments to the federal Higher Education Act (HEA) codified assessment obligations for accrediting agencies, and subsequent renewals of the HEA have continued to require accreditors to include standards specifying that student achievement and program outcomes be assessed. It has taken some accreditors longer than others to comply, however. Accreditors of health professions were in the vanguard, followed by social science professions like education, social work, and business. Engineering accreditors initiated “ABET 2000” standards in 1997 (ABET, 2013). The first trial balloon for standards related to student learning outcomes in law was launched in 2013, for approval within three years (American Bar Association, 2013).
By the time NILOA’s first survey of chief academic officers was undertaken in 2009, accreditation—either disciplinary or regional, or both—was being cited as the most important reason for undertaking assessment. According to Ewell (2009), the shift in stimulus from state governments to regional accreditors had the important effect of increasing the emphasis on assessment to guide improvement in addition to demonstrating accountability. Advocating congruence of assessment and campus mission is another hallmark of the influence of accrediting agencies on outcomes assessment. A July 19, 2013, statement of Principles of Effective Assessment of Student Achievement endorsed by leaders of the six regional accrediting commissions and six national higher education associations begins, “[This] statement is intended to emphasize the need to assess effectively student achievement, and the importance of conducting such assessments in ways that are congruent with the institution’s mission” (American Association of Community Colleges et al., 2013).
The pendulum is swinging once again with respect to state interest in assessment. In spring 2010, the National Center for Higher Education Management Systems surveyed state higher education executive offices concerning policies, mandates, and requirements regarding student outcomes assessment (Zis, Boeke, and Ewell, 2010). According to study results, eight states, including Minnesota, Georgia, Tennessee, and West Virginia, were unusually active in assessment, some requiring common testing. Some states have systemwide requirements rather than state requirements. For many years, students at the campuses of the City University of New York were required to obtain a minimum score on a locally developed standardized examination in order to earn their degrees.
More recently, declining global rankings, rising tuition and student debt, and poor prospects for employment of college graduates have alarmed state and federal decision makers (Miller, 2013). This has prompted an emphasis on productivity and efficiency in higher education, which is now seen as an engine of the economy and of the nation’s competitiveness (Hazelkorn, 2013). Many state reporting systems are focusing more on graduation rates, job placement, and debt-to-earnings ratios than on measures of student learning. The Voluntary Framework of Accountability for community colleges contains measures not only of how many students obtain degrees, but of how many pass remedial courses, earn academic credit, transfer to another institution, and get a job (American Association of Community Colleges, 2013). President Barack Obama’s administration has proposed a College Scorecard (White House, 2013). The emphasis on producing numbers of degrees and job-ready employees has alarmed educators. They fear that educational quality will suffer if too much weight in funding regimes is placed on simply graduating more students, or on turning out majors prepared for today’s jobs rather than equipped with the abilities to adapt to ever-changing workplace demands. As Margaret Miller puts it, “The completion goal is downright pernicious if it entails the minting of an increasingly worthless currency” (2013, p. 4). In addition, emphasizing college completion in a shorter time frame could encourage institutions to raise their entrance requirements to be sure they enroll students who are best prepared for college work, which could make a college education unattainable for those who need it most.
As a result of all these external influences, as well as internal interests in obtaining guidance for continuous improvement of student learning and institutional effectiveness, increasing numbers of faculty have been called on to participate in assessment. Some assume leadership roles, serving on campuswide committees charged with planning the institution’s overall approach to assessment or designing a program to assess general education. A greater number are involved at the department level, helping to design and carry out assessment of programs or courses for majors. Attendance at national, regional, state, and discipline-specific assessment conferences attests to continued interest in sharing assessment information. In fact, for more than a decade, the number of participants at the annual Assessment Institute in Indianapolis, the oldest and largest assessment conference in the United States, has approached or exceeded one thousand. This book is designed to fill some of the continuing need for information about assessment.

Quality Assurance: An International Perspective

Interest in obtaining evidence of accountability from postsecondary institutions emerged as a worldwide phenomenon in the mid-1980s. In Europe, China, Australia, South Africa, and other countries, as in the United States, stakeholders in higher education have become increasingly concerned about the value received for resources invested, accommodating increasing numbers and diversity of st...