The Bloomsbury Companion to Discourse Analysis

Ken Hyland, Brian Paltridge
About This Book

Originally published as The Continuum Companion to Discourse Analysis, this book is designed to be the essential one-volume resource for advanced students and academics.
This companion offers a comprehensive and accessible reference resource to research in contemporary discourse studies. In 21 chapters written by leading figures in the field, the volume provides readers with an authoritative overview of key terms, methods and current research topics and directions. It offers both a survey of current research and gives more practical guidance for advanced study in the area.
The volume covers all the most important issues, concepts, movements and approaches in the field and features a glossary of key terms in the area of discourse analysis. It is the complete resource for postgraduate students and researchers working within discourse studies, applied linguistics, TESOL and the social sciences.


Information

Year: 2013
ISBN: 9781441160126
Part I

Methods of Analysis in Discourse Research
1
Data Collection and Transcription in Discourse Analysis
Rodney H. Jones
Chapter Overview
Data Collection as Mediated Action
Five Processes of Entextualization
Data in the Audio Age
Video Killed the Discourse Analyst?
Data Collection and Transcription in the Digital Age
Conclusion
Key Readings
Data Collection as Mediated Action
The topic of this chapter is data collection and transcription, and in it I will limit myself to the collection and transcription of data from real-time social interactions rather than considering issues around the collection of written texts, which has its own set of complications.
Since the publication of Elinor Ochs’s groundbreaking 1979 article ‘Transcription as Theory’, it has become axiomatic to acknowledge that data collection and transcription are affected by the theoretical interests of the analyst, which inevitably determine which aspects of an interaction will be attended to and how they will be represented. In fact, this argument has been so thoroughly rehearsed by so many (see, for example, Bloom 1993, Edwards 1993, Mishler 1991) that there is little need to repeat it here.
Neither do I intend to engage in debates about the ‘best system’ for transcribing spoken discourse (see, for example, Du Bois et al. 1993, Gumperz and Berenz 1993, Psathas and Anderson 1990) or ‘multimodal interaction’ (Baldry and Thibault 2006, Norris 2004), or about the need for standardization in transcription conventions (Bucholtz 2007, Lapadat and Lindsay 1999) since, to my mind, the acknowledgement that ‘transcription is theory’ basically pre-empts the need for such debates: if ‘transcription is theory’, one ought to be able to choose whatever system of representation best promotes one’s theory.
I would like instead to focus on data collection and transcription as cultural practices of discourse analysts (Jaffe 2007), and examine how these cultural practices have changed over the years as different cultural tools (tape recorders, video cameras and computers) have become available to analysts, making new kinds of knowledge and new kinds of disciplinary identities possible.
The theoretical framework through which I will be approaching these issues is mediated discourse analysis, a perspective which seeks to understand discourse through analysing the real time social actions it is used to take and the kinds of social identities and social relationships these actions make possible (Norris and Jones 2005). Central to this perspective is the concept of mediation, the idea that all actions, including the action of thought itself, are mediated through cultural tools (which include technological tools like tape recorders as well as semiotic tools like languages and transcription systems), and that the affordances and constraints of these tools help to determine what kinds of actions are possible in a given circumstance. This focus on mediation invites us to look at data collection and transcription not just as matters of theoretical debate, but as matters of physical actions that take place within a material world governed by a whole host of technological, semiotic and sociological affordances and constraints on what can be done and what can be thought, affordances and constraints that change as new cultural tools are introduced.
Mediated discourse analysis, then, allows us to consider data collection and transcription as both situated practices, tied to particular times, places and material configurations of cultural tools, and community practices, tied to particular ‘kinds of people’ within particular disciplinary narratives.
Five Processes of Entextualization
The primary cultural practice discourse analysts engage in is ‘entextualization’. We spend nearly all of our time transforming actions into texts and texts into actions. We turn ideas into research proposals, proposals into practices of interviewing, observation and recording, recordings into transcripts, transcripts into analyses, analyses into academic papers and academic papers into promotions. Ashmore and Reed (2000) argue that the business of an analyst consists of creating a series of artefacts – such as transcripts and articles – that are endowed with ‘analytic utility’.
Bauman and Briggs (1990) define ‘entextualization’ as the process whereby language becomes detachable from its original context of production and is thus reified as a ‘text’, a portable linguistic object. In the case of discourse analysts, this process usually involves two discrete instances of transformation, one in which discourse is ‘collected’ with the aid of some kind of recording device, and the other in which the recording is transformed into some kind of written or multimodal artefact suitable for analysis.
Practices of entextualization have historically defined elite communities in society, who, through the ‘authority’ of their entextualizations are able to exercise power over others: scribes and story tellers, social workers and police officers, academics and lawmakers. To be engaged in creating texts about reality is to be engaged in creating reality.
Whether we are talking about discourse analysts making transcripts or police officers issuing reports, entextualization normally involves at least five processes: (1) framing, in which borders are drawn around the phenomenon to be entextualized, (2) selecting, in which particular features of the phenomenon are selected to represent the phenomenon, (3) summarizing, in which we determine the level of detail with which to represent these features, (4) resemiotizing, in which we translate the phenomena from one set of semiotic materialities into another, and (5) positioning, in which we claim and impute social identities based on how we have performed the first four processes.
These processes are themselves mediated through various ‘technologies of entextualization’ (Jones 2009), tools like tape recorders, video cameras, transcription systems and computer programs, each with its own set of affordances and constraints as to how much and what aspects of a phenomenon can be entextualized and what kinds of identities are implicated in this act. Changes in these ‘technologies of entextualization’ result in changes in the practice of entextualization itself: what it means, what can be done with it, what kinds of authority adhere to it, and what kinds of identities are made possible by it.
Data in the Audio Age
The act of writing down what people say was probably pioneered as a research practice at the turn of the twentieth century by anthropologists and linguists working to document the phonological and grammatical patterns of ‘native’ languages. Up until 40 or so years ago, however, what people actually said was treated quite casually by the majority of social scientists, mostly because they lacked the technology to conveniently and accurately record it. On-the-spot transcriptions and field notes composed after the fact failed to offer the degree of detail necessary to analyse the moment-by-moment rhetorical unfolding of interaction. The ‘technologies of entextualization’ necessary to make what we now know as ‘discourse analysis’ possible were not yet available.
This all changed in the 1960s when audiotaping technology became portable enough to enable the recording of interactions in the field. According to Erickson (2004), the first known instance of recording ‘naturally occurring talk’ was reported by Soskin and John in 1963 and involved a tape recorder with a battery the size of an automobile battery placed into a rowboat occupied by two arguing newlyweds. By the end of the decade, the problem of battery size had been solved and small portable audio recorders became ubiquitous, as did studies of what came to be known as ‘naturally occurring talk’, a class of data which, ironically, did not exist before tape recorders were invented to capture it (Speer 2002).
The development of portable audio-recording technology, along with the IBM Selectric typewriter, made the inception of fields like conversation analysis, interactional sociolinguistics and discursive psychology possible by making accessible to scrutiny the very features of interaction that would become the analytical objects of these fields. The transcription conventions analysts developed for these disciplines basically arose from what audiotapes allowed them to hear, and these affordances eventually became standardized as practices of ‘professional hearing’ (Ashmore et al. 2004) among certain communities of analysts.
The introduction of these new technologies of entextualization brought a whole host of new affordances and constraints to how phenomena could be framed, what features could be selected for analysis, how these features could be represented and summarized, the ways meanings could be translated across modes, and the kinds of positions analysts could take up vis-à-vis others.
Framing refers to the process through which a segment of interaction is selected for collection and/or transcription. Scollon and Scollon (2004) would doubtless prefer the term ‘circumferencing’ to ‘framing’. All data collection, they argue, involves the analyst drawing a ‘circumference’ around phenomena, which, in effect, requires making a decision about the widest and narrowest ‘timescales’ upon which the action or interaction under consideration depends. All interactions are parts of longer timescale activities (e.g. relationships, life histories), and are made up of shorter scale activities (e.g. turns, thought units). The act of ‘circumferencing’, then, is one of determining which processes on which timescales are relevant to understanding what is ‘going on’.
Among the most important ways audio recording transformed the process of framing for discourse analysts was that it enabled, and in some respects compelled, them to focus on processes occurring on shorter timescales at the expense of those occurring on longer ones. One reason for this was that tapes themselves had a finite duration; another was that audio recordings permitted the analyst to attend to smaller and smaller units of talk.
This narrowing of the circumference of analysis brought on by audio-recording technology had a similar effect on the processes of selecting and summarizing that went into creating textual artefacts from recordings. Selecting and summarizing have to do with how we choose to represent the portion of a phenomenon around which we have drawn our boundaries. Selecting is the process of choosing what to include in our representation, and summarizing is the process of representing what we have selected in greater or lesser detail.
The most obvious effect of audio-recording technology on the processes of selecting and summarizing was that, since audiotape captured only the auditory channel of the interaction, that was the only one available to the analyst for selection. Even though for many researchers the practice of tape recording was accompanied by the making of detailed observational notes regarding non-verbal behaviour, these notes could hardly compete with the richness, the accuracy and the ‘authority’ of the recorded voice. As a result, speech came to be regarded as the ‘text’ – and all the other aspects of the interaction became the ‘context’.
It is important to remember that this privileging of speech in our study of social interaction was not entirely the result of a considered theoretical debate, but also a matter of contingency. Analysts privileged that to which they had access. Sacks himself (1984: 26) admitted that the ‘single virtue’ of tape recordings is that they gave him something he could analyse. ‘The tape-recorded materials constituted a “good enough” record of what had happened,’ he wrote. ‘Other things, to be sure, happened, but at least what was on the tape had happened.’
Beyond limiting what could be selected to the audible, the technology of audio recording hardly simplified the selection process. Because tapes could be played over and over again and divided into smaller and smaller segmen...
