
Gesture in Multiparty Interaction
About this book
Gesture in Multiparty Interaction confronts the competing views that exist regarding gesture's relationship to language. In this work, Emily Shaw examines embodied discourses in American Sign Language and spoken English and seeks to establish connections between sign language and co-speech gesture. By bringing the two modalities together, Shaw illuminates the similarities between certain phenomena and presents a unified analysis of embodied discourse that more clearly captures gesture's connection to language as a whole.
Shaw filmed Deaf and hearing participants playing a gesture-based game as part of a social game night. Their interactions were then studied using discourse analysis to see whether and how Deaf and hearing people craft discourses through the use of their bodies. This volume examines gesture, not just for its iconic, imagistic qualities, but also as an interactive resource in signed and spoken discourse. In addition, Shaw addresses the key theoretical barriers that prevent a full accounting of gesture's interface with signed and spoken language. Her study pushes further the notion that language is fundamentally embodied.
Chapter 1
Introduction
The study of gesture is a study in contrasts where seemingly disparate symbolic phenomena mix and mingle, furnishing visual representations of meaning that range from the highly iconic to the highly abstract. People have misconceptions about gesture. There is no simpler way to put it. It is nebulous, it is difficult to define, and it is everywhere. Gesture has been a subject of scrutiny for centuries (e.g., De Jorio, 2000/1832; Kendon, 2004, for a review). It has been characterized as transient and fixed, iconic and arbitrary, language and not language. Nowhere are these contrasts more germane than in the study of sign languages where analysts have no choice but to account for how the body (through gesture) becomes a conspicuously communicative medium capable of producing language.
Gesture's relationship to sign language is half the issue. The other side of the linguistic coin is its function in relation to spoken language. Here, too, scholars have struggled to make sense of how the body contributes meaning without being linguistic. It seems intuitive that sign languages are related to gesture and yet different from the gestures hearing people use when they speak (Armstrong & Wilcox, 2007). Wrestling through this intuition, accounting for gesture's form and function in sign and speech, has proven much more complicated.
As a hearing person who joined the Deaf community through personal (though not familial) connections and (importantly) American Sign Language (ASL) classes,1 I have observed the ideological contrast where what it means to be "Deaf" is described, at least partially, in contrast to cultural conceptions of what it means to be "hearing" (cf. Ladd's DEAF-COMMUNITY HEARING-WORLD dichotomy, 2003, p. 41). Hearing novitiates are colloquially positioned as body language amateurs, perhaps in part because the gestures that co-occur with speech do not make sense without sound and not all hearing people are quick to learn sign with native-like fluency. Kemp (1998), describing his experience teaching ASL to hearing students, said:
I find it a sometimes tedious task when I try to teach the use of nonmanual signals in my ASL classes. For example, if I mention that they show blank faces while signing, my students will make either exaggerated or nonsynchronized facial movements when signing specific sentence types such as questions, assertions, negations, topic-comment, and so on. (p. 218)
In my own experience working with interpreting students, controlling the manual and nonmanual articulators in creatively depictive ways, such as through constructed action or other types of depiction, proves especially challenging to teach.
And yet, gesture researchers have shown that hearing people systematically use their bodies to communicate incessantly; they gesture from an early age, they acquire more complicated gestures and gesture phrases as they develop language, they gesture even when no one is looking at them (such as on the phone), and they attune their gestures to their addressees depending on context. Stated differently, hearing people cannot communicate without gesturing; they are expert gesturers, masters of the craft.
The notion that hearing people are incompetent gesturers more likely comes from the operative use of the word gesture, which is "gesture as performance" or mime. This particular use of the body, where subjects are constrained from using speech, has the potential to look more like (sign) language (e.g., Goldin-Meadow, McNeill, & Singleton, 1996). When the performative use of gesture is set next to sign language, the two resemble each other, but the "hearing version" indeed looks sloppier. These language-like utterances are newly born; they have not stood the test of time, endured the shifting of positions and filing off of excess movements that refine signs and signed utterances over generations. These forms may resemble sign language, but they are not nearly as complex and sophisticated.
Why belabor the point, then? Perhaps it is sufficient to say that sign language and gesture are not the same. The problem persists because these perceptible differences between co-speech gesture and sign language have influenced how scholars treat visual imagery in sign languages. This, in turn, has indirectly impacted co-speech gesture scholars' accounts of where sign language fits in their analytical frameworks. As it stands, there are competing views of gesture: one that affiliates it with sign language and one that expels it from sign language. This messy contrast is reflected in contemporary attempts to (re)situate gesture in theoretical accounts of sign language.
Researchers of gesture in spoken and signed languages have made inroads, especially in the last forty years, accounting for the means by which the body creates and expresses meaning. We now know that gesture is part of a communication system (Kendon, 2004), that it co-occurs with speech (McNeill, 1992, 2005), that it has the potential to become more like language when it takes on the full burden of communication (Goldin-Meadow et al., 1996), and that it constitutes at least a limited portion of sign language (Liddell, 1995, 1996, 2000, 2003). And yet, we still cannot fully explain how the gestures hearing and Deaf people use are related, if at all. In this book, I address what I see to be three key theoretical barriers preventing us from fully accounting for gesture's interface with both spoken and signed languages. These barriers have led analysts to either overlook or underestimate gesture's contribution to discourse coherence and interaction. While much of the progress scholars have made in characterizing gesture as it operates in speech has been fruitful, we have reached an impasse where the murkiness of gesture's relationship to language, regardless of modality, must be tackled head-on.
The first theoretical barrier derives from discernible differences between sign language and what is commonly referred to as "co-speech gesture" (McNeill's gesticulation). Researchers examining co-speech gesture emphasize its close integration with spoken utterances as one system where both modalities work in tandem to convey different aspects of thought: speech represents the static dimension while gesture represents the dynamic dimension (McNeill, 2005, p. 18). The binary characterization of speech and gesture as two distinct modes discounts the level of gradience spoken utterances exhibit (nonce words and phonation, for example) and the level of systematicity exhibited by gesture (the use of eyebrow raises with Yes/No questions and referential deixis, for example). Scholars interested in multimodal interaction (e.g., Enfield, 2009; Goodwin, 2011; Streeck, 2011) have pointed out the inconsistency in such absolute categories. However, a unified account of gesture's interface with language that includes sign language has yet to be reached. While the boundaries between speech and gesture are easy to draw in theory, they are difficult to uphold in situated discourse and even more challenging in situated signed discourse.
In this study, the focus is on social events where hearing people are told to use gesture without speech and where Deaf people are told to use it without sign in the context of the gesture-centric game Guesstures. Participants were asked to play the game, not in a controlled, laboratory environment, but as part of a game night among four friends. By situating this particular communicative use of the body in two actual interactions, participants in both groups were inclined to transfer expressive burden among articulators as they navigated through speech events, some of which required them not to speak or sign. In the coming chapters, three of these distinct speech events are highlighted. Both Deaf and hearing participants similarly constructed embodied, composite utterances (Enfield, 2009) uniquely suited to their respective addressees and interactive goals.
The second theoretical barrier that prevents the integration of gesture with language comes from the perspective that hearing and Deaf people must necessarily gesture in different ways because of modality. As was already mentioned, it is obvious that co-speech gesture (alone) and sign language are not the same. Researchers (e.g., Emmorey, 1999; Liddell & Metzger, 1998; Schembri, Jones, & Burnham, 2005) characterize this difference largely by relying on a definition of gesture as a range of (primarily) manual forms on a continuum (McNeill, 1992) or set of continua (McNeill, 2005) where sign language is positioned as the exemplar of linguistic systematization of gesture. At first glance, this conceptualization appears entirely apropos. Studies have shown that when hearing people produce gesture without speech, the linguistic potential of communication through the body becomes enhanced (Brentari, Di Renzo, Keane, & Volterra, 2015; Goldin-Meadow, 2005; Goldin-Meadow et al., 1996; Singleton, Goldin-Meadow, & McNeill, 1995). That is, hearing gesturers begin to structure gestures the way Deaf people use signs.
The consequence of viewing gesture and language through this lens, though, is that only a small set of discourse features, mainly depicting constructions, constructed action, and referential use of space (Liddell, 2003), are eligible instantiations of gesture in sign language. The other ways Deaf people structure their discourses through their bodies (to regulate turns or deictically refer with eye gaze, for instance) or signal pragmatic moves (like marking stances) are not considered to fall under the gesture domain, although these same behaviors in spoken discourses are attributed to gesture. So, while typologies of gesture have been used as a starting point for reassessing a certain class of signs, in general, the typologies are viewed (and rightly so) as insufficient for fully explaining gesture as it is used in sign language (e.g., Cormier, Quinto-Pozos, Sevcikova, & Schembri, 2012).
Gesture can assume different forms, which is the motivation behind schematizing it on a continuum, but conceiving of it as immune from linguistic treatment in this way prevents us from characterizing the much broader system of embodied discourse. We need to account for gesture's relationship to language, but to successfully make the claim that the two are related, we have to shift how we view and define both gesture and language. Language is not purely static or digital, and gesture is not purely dynamic or analog. Recent works on multimodal interaction (e.g., Enfield, 2009, 2011; Goodwin, 2007, 2011; Kockelman, 2005) capture this notion by furthering Charles S. Peirce's (1955/1893) theory of semiotics in the analysis of language in interaction. These scholars argue that examining gesture and language in binary terms precludes us from understanding the rich and expansive instantiations gesture takes throughout the course of an interaction. In this study, gesture is assessed as situated in interaction by also incorporating a model of discourse that accounts for the layers of interactional work people conduct in face-to-face encounters (Schiffrin, 1987). By examining gesture as a product of interaction, the array of forms and functions it exhibits in situ can be explained.
The final theoretical barrier to fully accounting for gesture in both spoken and signed languages is the assumption that abstract forms typically associated with gesticulation, whose meanings are not transparent, either are not used by Deaf people or have been incorporated into their linguistic code. For example, Schembri et al. (2005) turn to co-speech gesture theory (McNeill, 1992) as a starting point for analyzing sign language constructions; however, the forms these authors target as gestural are depicting constructions, the most iconic or "mimetic gestures" in sign language (p. 273). The value of co-speech gesture theory to the analysis of sign language is unequivocal. But there has yet to be an assessment of more abstract forms in sign language (gestures that do not depict imagery) akin to co-speech gestures. This has consequences for the way spoken discourse is analyzed as well. The embodied gestures hearing people use are more easily relegated to paralinguistic status because they emerge in a distinct modality from speech (Kendon, 2008; Sicoli, 2007). Depiction is the first conceptual step toward linking the existence of transient forms (gesture) with conventionalized ones (signs/words). The next step is assessing the range of strategies that spans modalities (accounting for the more entrenched, conventionalized forms and the more transient, unconventional instantiations of sign/speech) which both groups use to structure discourse. Ultimately, I further the examination here of embodied discourses by juxtaposing traditional definitions of gesture with situated instances in spoken and signed interactions.
APPROACH
Deaf people continue to use gestural forms, even in developed sign languages. But the connection between gesture (and the related terms gestural and gesture-like) and sign language is murky. One of the first treatments of gesture-like forms in sign language was Nancy Frishberg's (1975) theory of historical change from highly iconic gestures to arbitrary signs. Frishberg's theory that signs not only lose but also abandon iconicity over time only partially explains how iconicity operates in ASL (cf. Taub, 2001). Deaf people become more efficient as they make repeated use of signs, and this efficiency is manifest through a diminished iconicity. Frishberg's theory does not explain how iconicity remains a productive and ubiquitous feature of signed discourse, though. Deaf people are capable of conveying highly abstract forms as part of their discourse, highly iconic depictions (like when performing a narrative or playing a game), and a range in between as they see fit. What is typically perceived as a one-way movement, like an evolution on a continuum, is best explained as a two-way movement, both away from and toward iconicity, based on the demands discourse imposes on signers.
Several decades of comparing spoken and signed languages have produced enough empirical data to prove signed languages are just as systematized as spoken languages (Frishberg, 1975; Klima & Bellugi, 1979; Liddell & Johnson, 1989); they are true linguistic systems through and through. However, when sign language scholars imported spoken language theories (based on transcribed spoken discourse that excluded gesture) into their preliminary assessments of ASL, they also imported the assumption that gesture (and its associated feature, iconicity) was not a part of language (Kendon, 2008). Now that co-speech gesture theory is gaining favor among some scholars' treatments of visual imagery in sign language (e.g., Cormier et al., 2012; Liddell, 2003; Quinto-Pozos & Mehta, 2010; Schembri et al., 2005), there remains an entrenched ideology that positions gesture as paralinguistic, even though a great many embodied utterances display systematicity.
The preliminary comparisons between co-speech gesture and sign language constructions mentioned in the previous paragraph have illuminated some important inconsistencies and gaps that can only be addressed by returning to the definition of gesture and where it is placed in language. The key to comparing the two is analyzing spoken language as it is almost always produced, which is with gesture. Additionally, by incorporating a semiotic analysis with an understanding of language as embodied, we can begin to explain how these resources work together to create meaning in each modality.
The analysis presented in this book favors the incorporation of meaningful body behaviors as part of language (cf. Sicoli, 2007). By examining these data side by side, it becomes clear that analyses of signed language and spoken language have both been limited by their modalities in different ways, ways that ultimately impacted respective representations of how gesture operates within them. Analysts of signed language suffer from the difficulty of parsing the two; analysts of spoken language suffer from the ease of doing so. In that vein, I further arguments others have already made that language can include a range of forms from the static to the dynamic and that the body is a locus for meaningful units not subordinate to but fully integrated with the speech/sign stream (e.g., Armstrong & Wilcox, 2007; Goodwin, 2007; Kendon, 2008; Sicoli, 2007; Yerian, 2000). In the end, I reach the conclusion that spoken language is best described as a verbal-visual-gestural language just as signed language is described as a visual-gestural language.
ASPECTS OF GESTURE IN THE CONTEXT OF PLAYING A GAME
Gesture in the study of spoken language occupies a tenuous place; the different modalities present obstacles for those linguists who have long been married to the spoken form. In signed contexts, the reverse is true: the modality that carries the primary burden for communication is the same channel through which gesture is executed. In a very real sense, defining what gesture is for the purposes of linguistic analysis has led to the practice of segmenting gestural forms into artificial categories to which situated language use does not necessarily conform. This study brings to the fore the integrated moves participants produce through their bodies and challenges assumptions that position spoken and signed languages in diametric opposition. I depart from focusing on one manual type as a sort of exemplar of gesture in sign and instead adopt Enfield's call for starting with a unit of analysis called the composite utterance, which is defined as:
a whole utterance, a complete unit of social action which always has multiple components, which is always embedded in a sequential context (simultaneously an effect of something prior and a cause of something next), and whose interpretation always draws on both conventional and non-conventional signs, joined indexically as wholes. (2009, p. 223)
Composite utterances, multimodally expressed in situated contexts, are the substance of the analysis presented here. I integrate the notion that gesture is "too coarse" a term (Enfield, 2011, p. 62) to describe the variety of ways people create meaning with the understanding of social interaction as "a vociferous process, always hungry for stuff out of which signs, symbols, and scenic arrangements can be made" (Streeck, 2011, p. 67). For this text, all meaningful uses of the body, including all visible articulators (eyes, eyebrows, torso, and even the legs and feet2), are examined as "sources of composite meaning" (Enfield, 2009, p. 15). Interactants shift through these articulators depending on both local and global interactional demands (cf. Goodwin, 2000, 2007). Much as spoken words weave in and out of a discourse, sometimes dropping off, sometimes continuing for strings at a time, gesture, too, is woven into the same fabric. When analyzed from a Peircean semiotic perspective, gesture, speech, and sign can be accounted for as products of interaction, each representing an array of meaning-making tools that both hearing and Deaf people manipulate to construct discourses and signal connections to their environments and each other. In sum, rather than looking for gesture and then describing what it does, these data are approached by identifying moves of the articulators for what they contribute and what they accomplish as a layer (or layers) of interactional meaning.
The benefit of comparing ASL to spoken English in this book is that sign language pushes the analyst t...
Table of contents
- Cover
- Half Title Page
- Title Page
- Copyright Page
- Contents
- Acknowledgments
- Editorial Advisory Board
- CHAPTER 1 Introduction
- CHAPTER 2 A Theoretical Framework for Analyzing Gesture, Sign, and Interaction
- CHAPTER 3 Collecting and Analyzing Embodied Discourses
- CHAPTER 4 When Gesture Shifts: Gesture During Turns-At-Play
- CHAPTER 5 Gesture as Action: Contextualizing Gesture in Task-Based Interactions
- CHAPTER 6 Mirroring and the Open Hand Palm Up Form
- CHAPTER 7 Conclusion
- APPENDIX 1 Transcription Conventions
- APPENDIX 2 Handshape Typology
- References
- Index