Assessing Speaking in Context: Expanding the Construct and the Applications
M. Rafael Salaberry and Alfred Rue Burch
This edited volume, based upon the Rice University Center for Languages and Intercultural Communication's conference of the same name held in May 2018, draws together research that takes a critical eye towards assessing speaking in second/foreign languages, with special attention given to reconceptualizing or adapting existing approaches and frameworks. The main contribution of this volume is the explicit focus on and analysis of the effect of an expanded sociolinguistic definition of speaking that incorporates recent research findings on interactional dynamics of conversations, interviews, etc.
1 Background
Over the last three decades, the field of second language acquisition (SLA) has witnessed two major turning points that have influenced the way we define a second language: the social turn (e.g. Block, 2003; Duff, 2015; Eskildsen & Majlesi, 2018; Firth & Wagner, 1997, 2007; Lantolf, 2011; Norton, 2000); and the multilingual turn (e.g. Cook, 1992; Douglas Fir Group, 2016; García & Flores, 2014; May, 2011, 2014; Norton, 2014; Ortega, 2014). A common concern of both the social and the multilingual turn has been the expansion of an outdated definition of language (confined until rather recently to the realm of individual cognition) to incorporate, among other new constructs, the notion of interactional competence and the concept of the multilingual learner and speaker.
This critical reconceptualization of language needs to be reflected in the redesign of testing instruments that properly assess this broad definition of second language proficiency (e.g. Chalhoub-Deville, 2003; Lantolf & Poehner, 2011; McNamara & Roever, 2006; Roever & Kasper, 2018; Shohamy, 2011; Valdés & Figueroa, 1994). The potential opportunities to redesign testing instruments aligned with the new conceptualizations of language are limited, however, by the prevailing institutional infrastructure of testing instruments, testing standards and testing policies that were developed around narrow definitions of language. In effect, despite their avowed focus on the interactional dynamics of speaking in context, current assessment models used in institutional settings (e.g. ACTFL in the USA, CEFR in Europe) largely eschew the complexity brought about by actual interactional competence in speaking tasks (e.g. Brown, 2003; Fulcher, 2004; Galaczi, 2008, 2013; Johnson, 2001; Kormos, 1999; Patharakorn, 2018; Plough, 2018; Roever & Kasper, 2018; Weir, 2005; Youn, 2015; Young, 2011).
For instance, paired speaking test formats introduced to the assessment profession in the 1990s (Stansfield & Kenyon, 1992; McNamara, 1996) have been deemed problematic for assessing the L2 proficiency of individual students due to the effect of interlocutors' differences in proficiency, familiarity, gender and other factors (e.g. Brown, 2003; Davis, 2009; Ducasse & Brown, 2009; Lazaraton, 1996). Similarly, in contrast with scripted interview questionnaires, unguided informal conversations have been regarded as detrimental to the collection of relevant language samples for a fair assessment of proficiency across large numbers of students, given the lack of standardization in the procedures used to collect those samples (e.g. Bachman, 2007; Young, 2011). Finally, there have been few attempts to expand the realm of assessment of language ability to include paralinguistic and nonlinguistic facets of interactional settings of communication (Plough et al., 2018; Roever & Kasper, 2018; Ross, 2018).
Overall, there is a gap between the most recent research studies (e.g. Galaczi, 2008; Roever & Kasper, 2018; Youn, 2015) and the current structure of major institutional testing models of speaking. Considering the significant challenges to assessing speaking in context as described above, it is not surprising that such a gap would exist. To identify the major points of disparity between the type of testing instrument necessary to assess a socially contextualized definition of language and the features of the current models of language assessment embodied in the traditional testing instruments, we review the main features of the theoretical foundation of both assessment frameworks.1
2 Previous Theoretical Construct: Communicative Competence
The starting point for describing the concept of language proficiency enshrined in the established models of assessment is the notion of communicative competence. The latter concept, described by Hymes (1972), focused on the type of linguistic knowledge that is not only possible in a language, but that is also actually used in socially contextualized situations (Hymes also discussed the type of knowledge associated with what is feasible and what is appropriate). Notable in Hymes' position is his early reference to a key component of the concept of interactional competence, making explicit reference to the 'evidence for linguistic competence [that] co-varies with interlocutor' (Hymes, 1972: 276). The theoretical framework developed by Canale and Swain (1980) almost a decade later introduced a tripartite division of the concept of communicative competence: grammatical competence, sociolinguistic competence and strategic competence (a separate fourth component of discourse competence was added later by Canale, 1983). The objective of Canale and Swain's article was to identify a series of principles that would be useful to develop a foundation (and guidelines) for second language teaching and testing, including 'more valid and reliable measurement of second language communication skills' (Canale & Swain, 1980: 1). For the purpose of our discussion, Canale and Swain explicitly distinguished between communicative competence and communicative performance, leading them to advocate for the inclusion of testing procedures in the context of 'realistic communicative situations', during which there would be 'little time to reflect on and monitor language input and output' (Canale & Swain, 1980: 34). Canale and Swain noted that this direct type of assessment (at the time exemplified, as stated by the authors, by the FSI Oral Proficiency Interview and communicative tests) increases the face validity of the test.
To address this newly developing focus on L2 communicative competence, and in particular the spoken components of this competence, the American Council for the Teaching of Foreign Languages-Oral Proficiency Interview (ACTFL-OPI) became an important institutional instrument for that purpose. Even though the ACTFL Proficiency Guidelines claimed to be atheoretical, their professed focus on proficiency seems to be aligned with an implied communicative approach model that had been evolving during the 1970s and 1980s as described above: the ACTFL Proficiency Guidelines 'are descriptions of what individuals can do with language … in real-world situations in a spontaneous and non-rehearsed context' (Swender et al., 2012: 3). Furthermore, the test is described as a conversation: 'it mirrors a typical spontaneous conversation inasmuch as the interviewer's line of questioning and posing of tasks are determined by the way in which the interviewee responds' (Glisan et al., 2013: 267). Kormos (1999: 165), nevertheless, points out that a conversation is 'an unplanned face-to-face interaction with unpredictable sequence and outcome … and in which speakers' turns are reactively or mutually contingent' (see also van Lier, 1989). Johnson (2001: 142) explained further that 'the OPI tests speaking ability in the context of an interview, and, to be more precise, in the context of two types of interviews, sociolinguistic and survey research'. Johnson noted that:
in a survey research interview, questions and answers are regarded as stimuli and responses. All 'extraneous material' is suppressed in order that the finding may be generalized to a larger population. … The context is not viewed as an important factor influencing participants' interaction. (Johnson, 2001: 59)
Thus, there are two obvious problems with the use of the ACTFL-OPI to measure general language competence: first, an interview is of limited value for extrapolating to other interactional contexts; and second, the lack of attention to contextual factors further constrains the definition of conversational interaction.2
3 Interactional Competence
The view of communicative competence described above differs in significant ways from the concept of interactional competence (IC) described in detail by He and Young (1998): interactional competence is 'co-constructed by all participants in an interactive practice and is specific to that practice' (He & Young, 1998: 7). Apart from linguistic resources identified in previous models of communicative competence, the construct of IC highlights the role played by identity resources such as participation frameworks and interactional resources such as turn-taking, sequence and preference organization, and repair (Young, 2011). Furthermore, the concept of linguistic resources is expanded to include not just verbal, but also embodied means of communication such as gaze, facial expressions and gestures (e.g. Burch & Kasper, 2016). Overall, interactional resources enable speakers both to design their turns for a particular recipient in a particular context to accomplish social actions (e.g. invitations, requests, rejections of invitations, etc.), and also to react appropriately to the actions produced by other participants (Pekarek Doehler & Pochon-Berger, 2015).
3.1 Two problems: Local context and co-construction of meaning
By its very definition, IC is inherently characterized by a dynamic and emergent understanding of the interaction as it unfolds in real time as part of a locally co-constructed communicative act. For the purpose of assessment, however, Bachman (2007) points out that the definition of IC brings up two problems: it is determined by the local context (thus, not generalizable) and it is based on the co-construct...