Research into second language writing has developed in depth and scope over the past few decades, and researchers have shown a growing interest in new approaches to the teaching and assessment of writing. Over the past two decades, a healthy surge of research studies has tackled issues relating to the process of L2 writing and writing assessment, as well as important related elements such as the use of rubrics, written corrective feedback and rater reliability, both in general (e.g., Knoch, 2009a, b; Knoch, Rouhshad, & Storch, 2014; Rakedzon & Baram-Tsabari, 2017; Wang, Engelhard, Raczynski, Song, & Wolfe, 2017) and in the MENA context in particular (e.g., Aryadoust & Fox, 2016; Assalahi, 2013; Coombe, 2010; Coombe, Jendli, & Davidson, 2008; Ezza, 2017; Ghalib & Al-Hattami, 2015; Hamouda, 2011; Hidri, 2019; Mohammad & Hazarika, 2016; Obaid, 2017; Reynolds, 2018).
At the heart of this surge in the MENA context lies the ever-increasing need to communicate in English. The status, demand and use of English across the region have continued to grow, and the language takes on an increasingly important role in professional contexts. In light of this growth, there has been a subsequent need to ensure that citizens in the MENA region have a recognisable and often professionally accredited level of English language proficiency, and in particular a level of written English proficiency that allows them to communicate in writing at local, regional and international levels. This need for written proficiency in English is also being viewed through a more critical lens, as the ability to write well in English affects the academic and professional success of MENA region citizens. The assessment of written English proficiency therefore has a considerable role to play in determining current and future levels of success in education and the opportunities it brings to citizens both regionally and internationally.
The assessment of written English proficiency across the MENA region may, on the surface, appear homogeneous, as the countries that make up the region by and large share Arabic as a common first language and have historically similar backgrounds. However, there remain several nuanced differences between the countries of the region in terms of population, wealth and resource distribution, and cultural beliefs, all of which shape the practices we see in the teaching and assessment of English writing. Indeed, the primary motivation for this book was to uncover the why and how of these practices. It is our belief that this book uncovers and critiques a number of observations about how writing is currently assessed and, most importantly, provides a platform on which the assessment of writing across the region can be established theoretically and empirically. In providing such a platform, it is hoped that the book will add weight to the understandings of writing assessment already presented in other book-length works (e.g., Ahmed & Abouabdelkader, 2018; Ahmed, Troudi, & Riley, 2020; Hidri, 2019). We also believe this contribution is a timely one given the current interest in writing in the region.
Arguably, at the time of bringing this synthesis together, interest in Language Assessment Literacy (e.g., see Davidson & Coombe, 2019) and the complete assessment cycle (e.g., see Coombe, 2010) has never been greater, and our synthesis joins a range of assessment initiatives that characterise the region. Examples of these initiatives include teaching and testing organisations and committees (e.g., TESOL Arabia, 2020), scholarly journals which provide a platform for the discussion of assessment throughout the MENA region (e.g., Arab Journal of Applied Linguistics (AJAL, 2020)), and collaborations with international testing committees (e.g., the hosting of the Language Testing Research Colloquium (LTRC) via the International Language Testing Association (ILTA) in Tunisia, planned for 2021).
The fifteen chapters in this volume investigate several important issues in the assessment of second language writing skills and discuss the implications of their findings for teaching and assessment. The studies attempt to shed light on long-standing questions in the field and offer suggestions for future research. Chapter authors based in seven MENA countries have situated their research on writing assessment in varied contexts while drawing on theories of language, assessment literacy and psychometrics, as well as other interpretive traditions and paradigms. It is the intention of this book to highlight areas in which research into writing in a second language in MENA contexts can and does inform classroom practice. Chapter authors have focused on the complexity of the writing assessment process and the interplay between the various issues that must be addressed by teachers and students who engage in writing and writing assessment activities in second language classrooms.
Part I, Test Design and Administration: Connections to Curriculum and Teacher Understandings of Assessment, brings together issues of test design and administration with a focus on how teachers understand the assessment process and how it relates to their wider ELT curriculum and notions of writing proficiency. In Chapter 2, Rauf and McCallum look at the importance of language assessment literacy and carry out an analysis of how well teacher-designed assessment tasks match the goals of the course learning outcomes in three Saudi universities. They find that teachers' task design needs to be sharpened to better meet course outcomes. In Chapter 3, El Rahal and Dimashkie describe how they redesigned a local placement test in a UAE university and how they sought to better enforce their department's testing policy and procedures by creating test banks of appropriate essay prompts and improving the rubric scoring process. In the last chapter of Part I, Babaii takes a broad view of these traditional assessment issues by reminding us of the need to consider and implement an understanding of World Englishes in the assessment process. In this chapter, Babaii sets out the key considerations and challenges that those involved in writing assessment in the MENA region need to be constantly aware of.
Part II, Grading and Feedback Connections: Exploring Grading Criteria, Practices and the Provision of Feedback, explores grading criteria, grading practices and the provision of feedback across different tasks and writing instructional techniques. In a move towards considering the construct of writing proficiency and how teachers understand and describe it, Shahmirzadi's chapter revisits the key constructs of Complexity, Accuracy and Fluency (CAF) in judging learners' writing proficiency in an Iranian university. She notes the need for this linguistic description to also take into account contextual and task factors via the use of Cognitive Diagnostic Assessment (CDA). She advocates that this combination of CAF considerations of language and the use of CDA can help us understand how learner performance is judged and acted upon. In Chapter 6, Bustamante and Yilmaz compare and contrast the grading practices of EAP teachers in Turkey and Kurdistan. They find that writing/learning context and teacher experience have a notable influence on grading practices. In another study of grading practices, Ben Hedia looks at how Tunisian EFL instructors at a university held similar and differing beliefs about grading writing that did not always match their actual evidenced grading practices. The last two chapters of Part II consider how rubrics are best designed for different assessment stakeholders. In covering rubric design, Mohammadi and Kamali's chapter reports on the issue of tailoring different scoring rubrics to different tasks. They report on the design and implementation of a rubric that assessed resume writing on a language...