
Guide to College Writing Assessment
About this book
While most English professionals feel comfortable with language and literacy theories, assessment theories seem more alien. English professionals often don't have a clear understanding of the key concepts in educational measurement, such as validity and reliability, nor do they understand the statistical formulas associated with psychometrics. Yet for those who are not psychometricians, understanding assessment theory, and applying it, is critical to developing useful, ethical assessments in college writing programs and to interpreting and using assessment results.
A Guide to College Writing Assessment is designed as an introduction and source book for WPAs, department chairs, teachers, and administrators. Always cognizant of the critical components of particular teaching contexts, O'Neill, Moore, and Huot have written sophisticated but accessible chapters on the history, theory, application and background of writing assessment, and they offer a dozen appendices of practical samples and models for a range of common assessment needs.
Because there are numerous resources available to assist faculty in assessing the writing of individual students in particular classrooms, A Guide to College Writing Assessment focuses on approaches to the kinds of assessment that typically happen outside of individual classrooms: placement evaluation, exit examination, programmatic assessment, and faculty evaluation. Most of all, the argument of this book is that creating the conditions for meaningful college writing assessment hinges not only on understanding the history and theories informing assessment practice, but also on composition programs availing themselves of the full range of available assessment practices.
1
INTRODUCTION
Embracing the Power of Assessment
Can we have not simply writing-across-the-curriculum but also writing-assessment-across-the-curriculum? If the Department of Writing could model this for the rest of us, that would be great.
This question, asked in an e-mail from a dean at a liberal arts college to the composition director, illustrates just how central writing and writing assessment have become to discussions about institutional assessment goals and practices that are occurring at colleges and universities across the country (and around the globe). When considered within a historical context, the contemporary embrace of writing as a means for evaluating learning outside of the composition classroom is not surprising. Writing, after all, has been linked to large-scale assessment ever since college entrance examinations evolved from oral tests of literacy abilities to written ones (Brereton 1995; Elliot 2005; Trachsel 1992) and is still a component of entrance evaluations at most institutions of higher education. Writing frequently plays a role in campus-wide assessments of individual student achievement as well, through rising-junior exams, graduation tests, and other competency certifications (Haswell 2001a; Murphy, Carlson, and Rooney 1993).
That a composition director would be included in discussions about institutional assessment is not surprising either, given that more and more program-level administrators are being asked to provide information for campus-wide self-studies and accreditation reviews. Colleges and universities are under such pressure these days to demonstrate the quality of their programs that it is rare for any administrator to be excluded from calls for assessment data of one kind or another. This is especially true for writing program administrators, who typically participate in cross-curricular general education initiatives by way of coordinating introductory composition courses and supporting the instructors who teach them.
What is, perhaps, most compelling about the e-mail query is the implicit message, conveyed by the second sentence, about the potential role of the composition director in the broad-based assessment this dean is beginning to imagine. The dean seems not to be ordering or cajoling the writing program administrator (WPA) to fall in line with an assessment regimen that has already been envisioned (as higher-ed administrative lore might encourage us to expect) but rather inviting the WPA to take an active part in designing and facilitating what promises to become a significant campus-wide initiative.
The proposition embedded within this e-mail is an important one indeed. As research shows, writing assessments do much more than simply allow administrators to demonstrate that their institutions, departments, and programs are successful; they have the power to influence curriculum and pedagogy, to categorize teachers and writers, and, ultimately, to define "good writing" (e.g., Hillocks 2002; O'Neill, Murphy, Huot, and Williamson 2005). In fact, specific writing assessments, especially those perceived to have high stakes for students and teachers, function as what Deborah Brandt (1998) calls "literacy sponsors" because they encourage and support the development of certain types of writing and writing abilities over others. In short, a department-level administrator who embraces assessment, especially the kind of assessment that extends beyond the boundaries of her specific program, is in a position not only to help set the agenda for campus-wide assessment initiatives, but to affect, even "transform," teaching and learning across the university community (Bean, Carrithers, and Earenfight 2005).
Unfortunately, while the particular WPA in this real-life scenario understood the positive aspects of involvement and was willing to help her dean think through how a college-wide writing initiative might be used, simultaneously, to evaluate learning across campus, many writing program administrators are not inclined to assume an active role in assessment, even when department chairs or deans show confidence in their doing so. A key reason for the reluctance is that while the negative aspects of program-level assessment are well known (and well publicized through listservs, conference presentations, and articles), the positive potential remains, to a large degree, unrealized, both by individual writing specialists and by composition and rhetoric at large.
This guide is intended to help address what we see as both a serious problem and an overlooked opportunity: just as writing program administrators (and writing faculty, in general) are being asked to assume more responsibility for large-scale assessment, many are uninspired, or unprepared, to do so. Some resist the very idea of assessment efforts that seem externally motivated and, thus, ultimately unconcerned with improving student learning. Others struggle to justify the time and effort needed for an activity that often appears extraneous to the work they were hired to do (e.g., coordinate courses, supervise instructors, teach, conduct research, advise students, and so on). Still others understand the potential importance and relevance of large-scale assessments but have trouble making them work for their programs, faculty, and students.
We seek to meet the needs of a wide range of colleagues: those who direct (or help direct) writing programs and those who teach within them, those who are resistant to assessment generally and those whose prior experience with poorly conceived or inappropriate assessments has made them suspicious or cynical, and those who want to participate in, or even lead, large-scale assessment efforts but don't possess the knowledge to do so confidently or well. Our aim is not to minimize the challenges associated with assessment (there are many) but to help readers confront and contextualize these challenges so they will feel able to design and facilitate assessments that support the educational goals of their institutions and, in the process, enhance teaching and learning within their departments and programs. Because assessment is central to teaching and learning in general (Johnston 1989; Moss 1992; Shepard 2000) and to writing in particular (Huot 2002; White 1994), and because the stakes are so high for faculty and students, WPAs and their composition and rhetoric colleagues must find ways to help promote meaningful assessments and participate in the powerful acts of analyzing and using results. This guide's key contention is that creating the conditions that support meaningful assessment hinges not only on appreciating the range of available assessment practices but also on understanding the history and theories informing those practices, as well as the critical components of our particular teaching contexts.
CONFRONTING THE CHALLENGES
As writing program administrators and faculty understand, far too often assessment initiatives are imposed from the top down rather than invited or encouraged. When assessment is imposed (or perceived to be imposed), its relevance may not be apparent. This is especially the case when people outside of a program (a dean, provost, or institutional effectiveness director) dictate the parameters of the assessment (e.g., the purpose(s), guiding question(s), and methods for data collection, analysis, reporting, and eventual use). An assessment that is not framed by questions important to the program administrators and faculty gathering the data, and whose results therefore may not seem meaningful, will likely be viewed as pointless busywork, completed simply to help others fill in the blanks of reports that, if they are read at all by decision-makers, will never be acted upon. Worse yet, if the purposes, audiences, and implications of externally initiated assessments are not made clear, program administrators and faculty may assume that results will be used in undesirable ways, for example, to exclude students, monitor faculty, and control curriculum, as has too often been the case at higher-ed institutions (e.g., Greenberg 1998; Gleason 2000; Agnew and McLaughlin 2001).
Negative feelings about assessment can be further exacerbated when program administrators are unfamiliar with possibilities for approaching large-scale assessment, as well as the key concepts, documented history, and recorded beliefs associated with various approaches. This unfamiliarity is reflected in multiple ways: through urgent postings on disciplinary listservs asking for the "best way" to assess student work for course placement or curricular review, through assessment workshops in which program directors clamor for practical advice on how to confront administrative assessment mandates, and through the now-ubiquitous short articles in the Chronicle of Higher Education and elsewhere about tensions between various constituencies (e.g., faculty, university administrators, legislators) over the presumed "validity" and/or "reliability" of particular assessment methods.
Unfortunately, even the most informed responses to public pleas for assistance or reassurance do not magically solve the crises because, as assessment scholars know, good assessments are those that are designed locally, for the needs of specific institutions, faculty, and students. As a result, well-intentioned pleas often lead to poor assessments, which, in a circular way, can reinforce bad feelings about assessment generally. As Ed White (1994) and others have suggested, when writing program administrators are not knowledgeable or confident enough about assessment, they become vulnerable to individuals and agencies whose beliefs, goals, and agendas may not support writing curricula, pedagogy, and faculty, and may in fact conflict with what we define as best practices. Core disciplinary activities and values can be undermined by writing assessments that are at odds with what our scholarship supports. In short, when policymakers, university administrators, and testing companiesâinstead of knowledgeable WPAs and faculty membersâmake decisions about writing assessment, we risk losing the ability to define our own field as well as make decisions about our programs and students.
Unfamiliarity with approaches to large-scale writing assessment is understandable, given that many people charged with administering writing programs and facilitating program assessments do not have degrees in composition and rhetoric. A survey of composition placement practices conducted in the early 1990s indicated that while 97 percent of writing programs were administered by full-time faculty, only 14 percent of these administrators had a degree in composition and rhetoric or were pursuing scholarship in writing assessment (Huot 1994, 57-58). Similarly, research conducted later in the decade on employment prospects for composition and rhetoric specialists indicated that there were more jobs in the field than specialists available to fill them (Stygall 2000). Given the relative stability of composition requirements over the past ten years and the concurrent reduction of tenure-track professorial lines nationwide, it is reasonable to expect that the number of non-specialists directing writing programs has increased (and will account for a large portion of the readership for this guide).
Yet, even a degree in composition and rhetoric does not guarantee familiarity with key aspects of writing program assessment. Though many writing administrators and faculty matriculated through composition and rhetoric programs that grounded them in composition theory and pedagogy, most are not familiar with the literature on large-scale assessment, nor did they take part in this type of assessment during graduate school. Sometimes the opportunities simply do not exist for gaining expertise and experience. Graduate courses that focus on assessment are relatively rare, for instance, and while teaching assistants may take part in large-scale assessments by reading placement portfolios or submitting sample first-year composition papers to the WPA, they aren't often asked to help design such assessments. When opportunities to learn about or participate more fully in assessment are provided, students do not always take advantage of them; despite evidence to the contrary, students do not believe they will ever need to know more than the assessment "basics" to succeed in their future academic roles.
As most experienced composition and rhetoric professionals know, however, many (if not most) positions in the field, whether tenure-line or not, include an administrative component, either on a permanent or rotating basis. In addition to highlighting general employment trends in the field, Gail Stygall (2000) notes that 33 percent of the composition and rhetoric positions advertised in 1998 included some form of administration, nearly a 10 percent increase since 1994 (386). Our more recent analysis of job ads suggests that the current percentage of positions requiring administration is more than 50 percent. Given that writing program administration of any kind necessarily involves assessment of curricula, student achievement, and/or faculty performance, it is reasonable to assume that a majority of composition and rhetoric specialists will not only end up administering programs but assessing them, whether or not they are sufficiently prepared to do so.
Without a background in large-scale assessment, WPAs and their composition and rhetoric colleagues may find concepts typically associated with such assessment strange and intimidating. Having developed their professional identities within the humanities, for the most part, they may cringe at references to "measuring" or "validating," which reflect a traditional social-science perspective. Though scholarship on writing assessment offers ways of negotiating liberal-arts values with those from the sciences, and though publication of such scholarship has increased significantly over recent years, it often goes unread. Until recently, much of the most useful literature was difficult to find, appearing in a seemingly scattered way in essay collections and journals focused on topics other than large-scale assessment. The more accessible literature, though not irrelevant, has often been of the "tool-box" type, focusing on methods used by a particular department or program with scant discussion of supporting research and theory. As a result, many writing specialists are confronted with terms, definitions, and interpretations imported from other disciplines with little knowledge about how they should be applied to situations that require evaluation of writing abilities, development, and instruction. Thus, many are left feeling unprepared to argue for the kinds of assessments that make sense on an intuitive level or, more likely, argue against those that appear inappropriate.
AN ILLUSTRATION
Cindy's early-career narrative provides a good illustration of how frustrating it can be to possess a basic understanding of current writing assessment practice, without having a real familiarity with assessment history and theory. Like many composition and rhetoric specialists, Cindy was hired right out of graduate school to direct a substantial writing program at a midsized university. The three years of practical administrative experience she obtained as a PhD student, along with the courses she took in composition theory and pedagogy, prepared her well to take on many of the challenges of her first position, including hiring, course scheduling, and faculty development. Unfortunately, and largely due to her own decisions (e.g., electing not to take her program's course in assessment), her graduate-school apprenticeship did not fully equip her for what became one of the most important aspects of her position: designing, arguing for, and facilitating meaningful large-scale assessments.
During her first semester (fall 1998), Cindy was confronted with several assessment issues that needed to be addressed. Among these was a writing-course placement process that relied on a computerized multiple-choice exam taken by students during summer orientation. Many faculty and students complained about the exam, which seemed inappropriate in many ways. Among other problems, the exam rested on the assumption that students' ability to write well in college courses correlated with their ability to correctly answer questions about grammar and usage. However, because student placement was a university issue, affecting faculty, staff, and administrators outside of the English Department, Cindy and her colleagues could not just make changes unilaterally. In addition to speaking to other faculty within their department, they would need to consult staff in the testing office, the VP of student affairs, and other departments, such as mathematics, that relied on a similar placement test. They would need to convince others that the test was problematic and that there were viable alternatives.
Having participated in a program assessment as a graduate student and taken pedagogy courses that addressed classroom assessment practices, Cindy understood that direct methods for assessing student writing (i.e., methods that require students to actually write) are preferred in the composition and rhetoric community to indirect methods like multiple-choice exams. This preference seemed consistent with classroom assessment methods promoted by prominent scholars at the time, methods such as portfolio evaluation and holistic scoring. The problem was that others outside her department, those who were familiar with standardized testing but not with writing assessment, pointed to validity and reliability data offered by the exam manufacturers as reason enough to keep the test as it was.
Though she was suspicious of data provided by the very agency that profited from her school's use of the exam and uncomfortable with the company's context-deficient definitions of validity and reliability, Cindy could not, with any confidence, argue against the data. Because she was unfamiliar with the history of writing assessment, she did not know that tests are most often chosen not for their "ability to describe the promise and limitations of a writer working within a particular rhetorical and linguistic context" (Huot 2002, 107), but because they are a cheap, efficient means of sorting people into convenient categories. Further, because she did not know that there were alternative, context-oriented definitions of validity and that reliability statistics alone say nothing about a test's appropriateness, she could not confidently question the test manufacturer's use of these terms or her university's belief in their persuasive power. Though she was, in the end, able to convince the testing office to add some background questions to the test (questions aimed at gathering information about students' actual writing experience), the test itself remained (and ten years later still remains) essentially unchanged.
Fortunately, as Cindy was struggling with these issues, Brian and Peggy were working steadily to help administrators like her become more aware of the options available for assessment as well as the historical and theoretical assumptions informing them. Through Conference on College Composition and Communication workshops, edited collections, and articles, they and other scholars were developing a disciplinary literature that would allow administrators to both make informed assessment decisions and discuss their merits and drawbacks with others. The fact that Cindy was able to achieve some success with the placement process and to facilitate other important assessment initiatives during her first years as a WPA was due to her willingness to read some of the more accessible assessment scholarship being published (such as Brian's 1996 CCC article "Toward a New Theory of Writing Assessment") and to ask Peggy for details about the many panels and workshops that she and Brian were organizing to help WPAs understand connections among assessment practice, research, and theory. Still, it was not until very recently (during the summer of 2006) that Cindy took the time to sit down and read the assessment literature, both within composition and rhetoric and in other, ancillary fields, and begin to fully appreciate how much better her assessment practice could have been over the years if she had truly understood the assumptions informing it.
What has bewildered all of us, and inspired this current volume, is that Cindy's experience is both all-too-typical and, in terms of her efforts to educate herself about history and theory, problematically atypical. We have discovered that even those faculty and administrators who do recognize the importance of assessment, are willing to do it, and know the basics of large-scale assessment often have trouble translating their understanding and knowledge into assessments that work. As is true with classroom teaching, it is one thing to possess a general sense of the assumptions supporting particular methods; it is another thing to be able to enact beliefs and values in consistently productive ways and convince others of their success (or the potential for it).
CONTEXTUALIZING THE CHALLENGES
As we've thought over the last few years about what kind of resource would be most useful to writing administrators and faculty who are poised to design and conduct large-scale assessments, we have often returned to our own experiences and to the assessment stories that appear throughout the composition and rhetoric literature (many of which inform l...
Table of contents
- Cover Page
- Title Page
- Copyright Page
- Contents
- Acknowledgments
- Chapter 1 - Introduction: Embracing the Power of Assessment
- Chapter 2 - Historicizing Writing Assessment
- Chapter 3 - Considering Theory
- Chapter 4 - Attending to Context
- Chapter 5 - Assessing Student Writers: Placement
- Chapter 6 - Assessing Student Writers: Proficiency
- Chapter 7 - Conducting Writing Program Assessments
- Chapter 8 - Evaluating Writing Faculty and Instruction
- Appendix A: Timeline: Contextualizing Key Events in the History of Writing Assessment
- Appendix B: Writing Assessment: A Position Statement, the Conference on College Composition and Communication Committee on Assessment
- Appendix C: Sample Scoring Rubrics
- Appendix D: Sample Classroom Observation Form
- Appendix E: Sample Outcome-Based Student Survey
- Appendix F: Sample Teaching Portfolio Table of Contents
- Appendix G: Sample Course Portfolio Directions
- Appendix H: Sample Course Portfolio Reading Guidelines
- Appendix I: Getting Started Guide for Program Assessment
- Appendix J: Sample Program Assessment Surveys
- Appendix K: Sample Student Focus Group Outline
- Appendix L: Selective Annotated Bibliography of Additional Readings
- Glossary
- References
- Index
- About the Authors