Languages & Linguistics

Multimodal Texts

Multimodal texts are texts that communicate through multiple modes of expression, such as written language, images, sound, and gesture. By combining these elements, they convey meaning in richer and more nuanced ways than any single mode could alone. Multimodal texts are common across media, including advertisements, websites, and multimedia presentations.

Written by Perlego with AI-assistance

12 Key excerpts on "Multimodal Texts"

  • Contemporary Stylistics

    Language, Cognition, Interpretation

    As the prefix multi- indicates, multimodality is the coexistence of multiple modes within any given context. Everyday conversations are an obvious example of multimodal interaction: when we talk, we rely on the modes of spoken language, intonation, and gesture (amongst others). Strictly speaking, there is no such thing as a monomodal text (a text that uses only one mode). Even a textbook like this one, which looks predominantly monomodal, exploits various semiotic resources of the visual system, including written language, numerical signs, graphological emphasis such as bold and italics, diagrams, and the conventions of textual layout. Nevertheless, analysts generally reserve the term ‘multimodality’ for texts that more noticeably use multiple modes. A stylistic approach to multimodality considers how multimodal texts are composed and how the various modes interact to produce meaning and influence interpretation.

    19.2 Analysing multimodal literature

    Digital printing technologies reduced the publication costs of books with visual and coloured elements, and this has spurred a steady rise in the number of works of multimodal printed literature. Building on Gibbons (2012a: 2), some possible features of multimodal printed fictions include:
    • unusual textual layouts and page designs
    • concrete realisation of text to create images, as in concrete poetry
    • varied typography
    • images, such as photographs or drawings
    • use of colour in type and/or imagistic content
    • flipbook sections
    • textual deictic devices, drawing attention to the text’s materiality
    • footnotes and self-interrogative critical voices
    • genre-mixing, both in literary terms (such as horror) or in terms of visual effect (such as the inclusion of newspaper clippings or play dialogue).
    Note that the presence of illustrations does not necessarily make a literary work multimodal.
  • New Studies in Multimodality

    Conceptual and Methodological Elaborations

    Visual texts are defined as discourses that are constructed using only images or that have a combination of image(s) and written or oral language. Examples include advertisements and posters (print and electronic). This chapter describes the translational process from theories in the field of multimodality to a set of instructional strategies for the teaching of visual texts.

    2 Multimodal literacy

    Information, particularly in the digital age, is represented not just with language alone. Instead, language is often nestled among other semiotic resources in a multimodal text. Halliday (1985: 4) explains that linguistics is at the same time a “kind of semiotics” because language is viewed as “one among a number of systems of meaning that, taken all together, constitute human culture.” In particular, technology has accentuated the multimodal nature of text, by facilitating the production and consumption of visual texts. Visual texts such as webpages have images, both static and dynamic, that work together with language to convey meaning multimodally. In addition, webpages may also include various audio and sound effects that, together with the interactive links, offer an intensely multimodal viewing experience not available from reading a printed book. The epistemological implication of multimodality is that meanings in a text can no longer be assumed to be the result of a single semiotic resource. Meanings are the result of the collective semiotic resources co-deployed within the same text. The multimodal approach takes into account how language and image (as well as other) choices fulfill the purposes of the text, the audience and context, and how those choices work together in the organization and development of information and ideas.
  • Learning to Teach in the Primary School
    To be literate in the 21st century demands a repertoire of literacy practices that permits texts to be read and produced for multiple contexts, in multiple modes, in multiple media, using multiple forms of technology; that is, multiliteracies (Cope & Kalantzis, 2000). Kress (2010) describes a mode as a semiotic resource for making meaning. Semiotics refers to the study of systems for making meaning – signs and symbols and their signification (that is, their meanings). His examples include images, alphabetic and/or pictorial writing, layout, music, gesture, speech, moving images, a soundtrack and 3D objects (Kress, 2010). Each mode offers a different potential for making meaning and adheres to different conventions and codes by drawing upon the various semiotic systems:
    • Linguistic: realised in oral and written language
    • Visual: realised in still and moving images
    • Gestural: realised in facial expression, body language and movement
    • Aural: realised in music and sound
    • Spatial: realised through layout and organisation of objects in space (Anstey & Bull, 2010, p. 10)
    When a text is multimodal, the various modes work together to produce a text that is understandable to its intended audience because it draws upon the appropriate combination of the conventions and codes of the five semiotic systems. Thus, a filmmaker producing a text is using the modes of moving image, colour, word, space and sound in such a way that meaning is being constructed by the audience. Being literate with multimodal texts demands an understanding of the possibilities and limitations of all modes, as well as familiarity with the conventions and codes of the semiotic systems upon which the text draws. The conventions and codes of each semiotic system are the socio-culturally accepted rules and patterns by which they work; that is, their grammar(s). Each mode can utilise different media. A medium is the means by which the text is realised.
  • Transforming Literacies and Language

    Multimodality and Literacy in the New Media Age

    • Caroline M. L. Ho, Kate T. Anderson, Alvin P. Leong (Authors)
    • 2010 (Publication Date)
    • Continuum (Publisher)
    Part II Multimodality and Digital Narratives

    Multimodality offers a holistic perspective on communication by bringing into high relief the various modes used to make meaning, including visual, aural, gestural, spatial, temporal, and linguistic among others. Malinowski and Nelson illuminate the contentious questions of whether and how to prioritize the linguistic mode given the evanescent and increasingly multimodal textual landscape in education. Engaging the foundational linguistic concepts of “value” and “arbitrariness,” the authors illustrate how language is in a dialogic relationship with other modes, in which the meaning making potentials of different modes may be curtailed, expanded, redistributed, and transformed. Guo, Amasha, and Tan also consider shifting the focus from centralizing on language to considering other modes of meaning making as they discuss another dialogic relationship, that between formal and informal learning. The authors argue that teachers may need to expand their notion of learning to take into account the informal and the multimodal. The two chapters in this strand highlight the significance of multimodal communication as a lens for understanding meaning-making and learning beyond traditional, print-based literacy through the transformation of the role of language in literacy research and practice.

    Chapter 3 What Now for Language in a Multimedial World?
    David Malinowski, University of California, Berkeley
    Mark Evan Nelson, National Institute of Education, Nanyang Technological University

    Introduction

    In an era in which communication, within and without school settings, is suffused with image-intensive books, icon-laden screens, and streaming videos, the ground that underlies the role of language in education would seem to be shifting.
  • Beyond Early Writing

    Teaching Writing in Primary Schools

    • David Waugh, Adam Bushnell, Sally Neaum (Authors)
    • 2015 (Publication Date)
    Multimodal literacies can motivate boys to write

    … cinema but also through presentations on computers and tablets, the ever growing possibilities of the internet, and not forgetting mobile phones and the world of gaming and console machines. Communication and meaning-making is becoming ever more mixed and re-mixed. Thus literacy is ever evolving. Many texts that children enjoy outside the classroom are multimodal, combining the modes of sound, word, image and movement (SWIM). Multimodality can be understood by recognising the mixing or combining of modes. As Bearne and Wolstencroft (2007, p 2) argue, multimodality involves the complex interweaving of word, image, gesture and movement, and sound, including speech. These can be combined in different ways and presented through a range of media.

    FOCUS ON RESEARCH: The visual mode

    The visual mode can be particularly appealing to boys and can contribute important meanings to a text. Kress and Van Leeuwen’s (1996/2006) research, Reading Images: The Grammar of Visual Design, draws on a vast range of examples, including children’s drawings and textbook illustrations, to examine the ways in which images communicate meaning.

    [Figure 10.1 SWIM: the four main modes of a multimodal text – sound, writing, image and movement]

    They highlight structures which they identify as a grammar to identify the position and role of visual design such as framing and use of colour. Their research can be used to analyse children’s work as they identify a systematic account and construction of a langue (a tool for describing the sign-making practices). They highlight the key principles of text production as follows.
    • Texts can be read in more than one way.
    • The design of the text by the author and/or publisher facilitates this process.
  • Computer-Assisted Language Learning

    Diversity in Research and Practice

    The polysemy and semantic heterogeneity of this list make the word modality unusable as an operational concept. In the field of multimodality research, a neighboring discipline to CALL, a consistent definition is available. Multimodality research seeks to understand how we make meaning through the diversity of communicative forms – language, image, music, sound, gesture, touch, and smell – that surround us. With the exception of smell, these can all be found within the experience of learning and teaching online, which is why I propose to draw some definitions from the work of multimodality research’s main theorists, Kress and van Leeuwen, who developed a theory for understanding the meanings communicated to us by objects such as adverts and posters, which use language as one – but not as the main – resource for meaning-making. Let us start with the notion of “mode.” “Word processors,” these authors tell us, “must systematize such things as the thickness and positioning of the lines that separate sections of text, and develop a metalanguage, whether visual or verbal, for making these choices explicit” (2001, p. 79). Anyone who has had the opportunity to compare a page from a magazine in the English-speaking press with one from the French press knows that choice of typeface and positioning of headers, annotations, captions, and so forth are systematized differently in these cultures. The culture-dependent systematization of graphic resources amounts to a specific “grammar” of visual communication, which Kress and van Leeuwen call a “mode.” The physical tool (a computer) changes the physical media (paper and ink) into a mode (a culturally intelligible page layout). Other, much more immediately obvious, culturally intelligible systems include language (written, spoken), the visual (figurative and non-figurative or coded, such as icons), sound (figurative and non-figurative such as music, or coded such as signals), and body-language.
  • Multimodal Semiotics

    Functional Analysis in Contexts of Education

    • Len Unsworth (Author)
    • 2008 (Publication Date)
    • Continuum (Publisher)
    Kress has argued that it is now impossible to make sense of texts, even their linguistic parts alone, without having a clear idea of what these other features might be contributing to the meaning of a text. (Kress, 2000: 337) Writing about Books for Youth in a Digital Age, Dresang (1999) noted that [i]n the graphically oriented, digital, multimedia world, the distinction between pictures and words has become less and less certain; (1999: 21) and that [i]n order to understand the role of print in the digital age, it is essential to have a solid grasp of the growing integrative relationship of print and graphics. (1999: 22) In both electronic and paper media environments then, [a]lthough the fundamental principles of reading and writing have not changed, the process has shifted from the serial cognitive processing of linear print text to parallel processing of multimodal text–image information. (Luke, 2003: 399) Andrews (2004) has explicitly noted the importance of the visual–verbal interface in both computer and hard copy texts: [I]t is the visual/verbal interface that is at the heart of literacy learning and development for both computer-users and those without access to computers. (Andrews, 2004: 63) And the New London Group argued that what is required is an educationally accessible functional grammar, that is, a metalanguage that describes meaning in various realms. These include the textual, the visual, as well as the multimodal relations between different meaning-making processes that are now so critical in media texts and the texts of electronic multimedia.
  • Discourse in Context: Contemporary Applied Linguistics Volume 3
    12 A multimodal approach to discourse, context and culture
    Kay L. O’Halloran, Sabine Tan and Marissa K. L. E

    “Originally, the context meant the accompanying text, the wording that came before and after whatever was under attention. In the nineteenth century it was extended to things other than language, both concrete and abstract: the context of the building, the moral context of the day; but if you were talking about language, then it still referred to the surrounding words, and it was only in modern linguistics that it came to refer to the non-verbal environment in which language was used” (Halliday 2007 [1991]: 271).

    1 Introduction

    This chapter explores how our understanding of context moves beyond ‘the non-verbal environment in which language [is] used’ (Halliday 2007 [1991]: 271) when language is considered as one of many semiotic resources (e.g. visual, audio, embodied action and so forth) which combine to create meaning in discourse. The paradigm, variously called ‘multimodal analysis’, ‘multimodality’ and ‘multimodal studies’ (e.g. Jewitt 2009), shifts the focus from language to the study of the interaction of language with other semiotic choices in multimodal discourse which is embedded in situational and cultural contexts which are themselves multimodal in nature. In this chapter, the implications of the multimodal approach to discourse, context and culture are explored through the investigation of the identities and social relationships constructed in news videos mediated on the internet through the multiplicative interplay of verbiage, graphic imagery, audio and video streams. The multimodal analysis reveals that discourse analysis based on language alone is insufficient for interpreting how meaning is created and negotiated today. As a result, the relationship between discourse, context and culture has to necessarily be redefined in multimodal terms.
  • The World Told and the World Shown
    • Eija Ventola, Arsenio Jesús Moya Guijarro (Authors)
    • 2009 (Publication Date)
    Developing Multimodal Texture

    Bateman, J., J. Delin, and R. Henschel (2004) ‘Multimodality and empiricism’, in E. Ventola, C. Charles and M. Kaltenbacher (eds) Perspectives on Multimodality (Amsterdam: John Benjamins), pp. 65–87.
    Bernhardt, S. A. (1985) ‘Text structure and graphic design: The visible design’, in J. D. Benson and W. S. Greaves (eds) Systemic Perspectives on Discourse, Vol. 2 (Norwood, NJ: Ablex), pp. 18–38.
    Bringhurst, R. (2002) The Elements of Typographic Style, 2.5 edn (Point Roberts: Hartley and Marks).
    Delin, J., J. Bateman and P. Allen (2002) ‘A model of genre in document layout’, Information Design Journal, 11(1): pp. 54–66.
    Halliday, M. A. K. (1973) Explorations in the Functions of Language (London: Arnold).
    —— (1978) Language as Social Semiotic (London: Arnold).
    Halliday, M. A. K. and C. M. I. M. Matthiessen (2004) An Introduction to Functional Grammar, 3rd edn (London: Arnold).
    Halliday, M. A. K. and R. Hasan (1976) Cohesion in English (London: Longman).
    Kress, G. and T. van Leeuwen (1996) Reading Images: The Grammar of Visual Design (London: Routledge).
    —— (2006) Reading Images: The Grammar of Visual Design, 2nd edn (London: Routledge).
    Lemke, J. (1998) ‘Multiplying meaning: Visual and verbal semiotics in scientific text’, in J. R. Martin and R. Veel (eds) Reading Science: Critical and Functional Perspectives on Discourses of Science (London: Routledge), pp. 87–113.
    Li, E. S. (2007) A Systemic Functional Grammar of Chinese: A Text-Based Analysis (London: Continuum).
    O’Toole, M. (1994) The Language of Displayed Art (London: Leicester University Press).
    Thibault, P. J. (2000) ‘The multimodal transcription of a television advertisement: Theory and practice’, in A. Baldry (ed.) Multimodality and Multimediality in the Distance Learning Age (Campobasso: Palladino), pp.
  • The SAGE Handbook of Writing Development
    • Roger Beard, Debra Myhill, Jeni Riley, Martin Nystrand (Authors)
    • 2009 (Publication Date)
    When writing was the dominant mode, these media would vary in their writing to construct different audiences. In the contemporary textbook, the whole range of lexico-grammatical and graphic resources is used to do so. On the web, such differentiation works differently. Often ‘educational’ websites have separate entries for students, teachers, and children. Not only do such sites allow learners to choose themselves the text that they think is apt for their learning, but also allow learners to access all other texts – not only those for other year groups, but also those for teachers or for ‘experts’. ‘The Poetry Archive’, a website which ‘exists to help make poetry accessible, relevant, and enjoyable to a wide audience’, may serve as an example (see www.poetryarchive.org.uk; retrieved 1 August 2007). The point we want to make repeats the point about the social generating semiotic forms, which we have made several times. In this case, we have a text-entity, which addresses a very different audience to that of the textbook, with significant effects in all aspects of the multimodal text, writing included.

    Outlook: writing in a multimodal communicational world

    What are the implications of multimodality for a pedagogy of writing and for writing itself? The future uses, shapes, potentials of writing as well as conceptions of writing pedagogies need to be considered within a clear sense of social environments. Pedagogy is a specific instance of a larger-level social practice with its relations, processes, and structures, characterized by a focus on particular selections and shaping of ‘knowledge’ (as ‘curriculum’) and learning (as engagement with and transformation of that ‘curriculum’ in relation to the learner’s interest), in or out of institutions such as schools, university, and the like.
  • Non-discursive Rhetoric

    Image and Affect in Multimodal Composition

    • Joddy Murray (Author)
    • 2009 (Publication Date)
    • SUNY Press (Publisher)
    The Myth of Methodical Multimodality

    Just as Sharon Crowley and others have worked to dissuade scholars that the “methodical memory” reflected the “quality of authorial minds” – the more logical the writing, the more logical the mind that produced it – so too is there a myth of methodical multimodality. Multimodality (or monomodality, for that matter) does not reflect the “quality of authorial minds”: there is no legitimacy to the notion that some of us are “more visual” or “more aural” than others when it comes time to create rhetorically appropriate texts for an audience – only, perhaps, that some of us are more practiced at it. By dispelling this myth, teachers and students cannot claim to “be less visual” or “be more visual” than others (and therefore more or less inclined toward composing multimodal texts). In fact, multimodality is a compositional form that comes from processes based in images which, coincidentally, happens to be closer to the way humans think than the chaining together of concepts as demanded by discursive text. Part of the difficulty both students and teachers have who are unfamiliar with incorporating multimedia into their rhetorical texts stems from their inexperience in reading such texts. Just as any writing course stresses close reading as a way to improve writing, so must multimodal reading become a method of improving multimodal writing. As teachers of beginning film courses know, it takes some time to get students used to thinking about the intentionality of these texts. This requires practice in what Lanham calls “looking THROUGH” or “looking AT” text: We are always looking first AT [the text] and then THROUGH it, and this oscillation creates a different implied ideal of decorum, both stylistic and behavioral.
  • Semiotic Margins

    Meaning in Multimodalities

    • Shoshana Dreyfus, Susan Hood, Maree Stenglin (Authors)
    • 2010 (Publication Date)
    • Continuum (Publisher)
    Integrating Visual and Verbal Meaning

    … among the elements. It is intended that such a model will contribute to a richer understanding of students’ reading of multimodal texts, while offering a systematic approach to describing inter-semiotic relations in a way that is both useful and accessible to teachers and test-writers. To test the efficacy of the model, the framework has been applied to the analysis of data from a project investigating multimodal reading comprehension in group literacy tests administered by a state government education authority (Unsworth et al. 2006–2008). The questions explored in this research relate to how image and verbiage interact in the test stimulus materials and how students interpret meanings involving image-text relations. One of the goals of the project was to develop an account of the kinds of image-text relations students are likely to encounter in curriculum materials, tested in the first instance with the data from this study. The modelling of these relations, while initially derived from theory and research on multimodal analysis from a social-semiotic perspective, is also very much data-driven and draws on 3 sets of data gathered for this project:
    1. Stimulus texts from the reading comprehension section of the Basic Skills Tests (BST) for students in primary Years 3 and 5 in 2005 and 2007, and the English Language and Literacy Assessment (ELLA) for students in Year 7 in 2007 (NSW DET 2005a, 2005b, 2007a, 2007b, 2007c);
    2. student results on questions involving images from the literacy (Reading) component of the BST and ELLA for the state test populations, and post-test performance on the same subset of items for individual student participants in the study; and,
    3. participants’ verbalizations of their understandings of the images and texts in the test stimulus materials, and their strategies for responding to test items related to these texts – these were audio recorded in post-test interviews.
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.