Languages & Linguistics

Analytical Techniques

Analytical techniques in the field of languages and linguistics refer to the methods and tools used to analyze and interpret language data. These techniques can include statistical analysis, computational modeling, and qualitative analysis methods such as discourse analysis and content analysis. They are essential for gaining insights into language structure, usage, and meaning.

Written by Perlego with AI-assistance

4 Key excerpts on "Analytical Techniques"

  • Cross-Linguistic Variation in System and Text

    A Methodology for the Investigation of Translations and Comparable Texts

    2.4. Tools and techniques

    The goal of this section is to discuss some state-of-the-art tools for corpus analysis and compile a suite of tools that meets the present analysis needs, with special consideration of multilingual and translation data. The analysis tasks at hand place certain requirements on the tools and techniques to be used for corpus analysis. These requirements arise from the fact that we deal with a multilingual corpus that includes translations and that we carry out monolingual, multilingual as well as translation analysis. Also, the linguistic phenomena we are interested in are grammatical phenomena (rather than, say, lexical or phonological phenomena). There is no ready-made tool kit for this kind of analysis; but there are a number of techniques that can be used in combination to suit the present corpus analysis purposes.

    The basis of any corpus analysis are the data obtained from the corpus. Once the relevant data have been extracted, the actual linguistic work starts, e.g., exploring a concordance in order to detect grammatical patterns, or statistically interpreting the frequency of a particular word or linguistic feature as rejecting or confirming a prior hypothesis. To obtain the data, the corpus has to contain the relevant types of information, both extralinguistic and linguistic.

    2.4.1. Extra-linguistic information

    It is standard practice to encode a corpus in terms of its document type, using a document description language, such as SGML (Standard Generalized Markup Language) or XML (Extensible Markup Language) (Goldfarb 1990; Sperberg-McQueen and Burnard 1994a; De Rose 1997; DuCharme 1999). Recently, XML has become the most commonly used language of this kind.
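The step described above — interpreting a feature's corpus frequency as confirming or rejecting a prior hypothesis — is commonly operationalized with a keyness statistic such as log-likelihood (G²). Here is a minimal sketch in Python; the feature counts and corpus sizes below are invented for illustration, not taken from the English-German corpus discussed in the excerpt:

```python
import math

def log_likelihood(freq_a, total_a, freq_b, total_b):
    """Log-likelihood (G2) for comparing a feature's frequency in two
    (sub)corpora; with 1 degree of freedom, G2 > 3.84 corresponds
    roughly to p < 0.05."""
    # Expected frequencies under the null hypothesis that the feature
    # is equally frequent (relative to corpus size) in both subcorpora
    total = total_a + total_b
    expected_a = total_a * (freq_a + freq_b) / total
    expected_b = total_b * (freq_a + freq_b) / total
    g2 = 0.0
    for observed, expected in ((freq_a, expected_a), (freq_b, expected_b)):
        if observed > 0:
            g2 += 2 * observed * math.log(observed / expected)
    return g2

# Hypothetical counts: a grammatical feature occurring 120 times in a
# 50,000-token subcorpus vs. 45 times in a 40,000-token subcorpus.
print(round(log_likelihood(120, 50_000, 45, 40_000), 2))
```

A G² well above the 3.84 critical value, as in this invented example, would count as evidence against the null hypothesis of equal relative frequency; a value below it would leave the hypothesis of no difference standing.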
  • Words, Worlds, and Contexts
    • Hans J. Eikmeyer, Hannes Rieser(Authors)
    • 2015(Publication Date)
    • De Gruyter
      (Publisher)
      THOMAS T. BALLMER, WALTRAUD BRENNENSTUHL

      Lexical Analysis and Language Theory

      1. Introduction

      In this paper we shall investigate the role which lexical analysis could play in a comprehensive theory of language. A brief historical sketch will remind the reader that modern linguistics treated words, especially simple, noncomposed words, somewhat slightingly. The lexicon appeared to be an appendix to grammar in which the irregular phenomena of language are kept track of by an immense list with little internal structure. The thesaurus and its internal organization has not really been accepted as an object of thorough linguistic investigation. It was rather put aside as a topic for linguistic application if not for mere commercial use and exploitation. Here we shall try to revitalize the role of word-level linguistics and point out some of its relations to a more comprehensive theory of language, as we have just mentioned.¹ For this reason we shall present what we consider to be the optimal way to tackle lexical analysis. This includes arguments why the chosen way is favorable and what the status of the results are if this way is followed carefully enough. Although we shall illustrate this presentation with examples, the major issue of this paper is a methodological one. The linguistic results are published elsewhere in fuller detail (cf. Ballmer/Brennenstuhl, this volume and 1979).

      2. A Historical Remark

      Mainstream linguistics of the sixties and the seventies has been dominated by a syntax-oriented view. The three major linguistic strands of that period, transformational grammar (cf. Chomsky, 1965, 1969, 1975), logical grammar (Montague 1970, Keenan 1978) and pragmatics (Searle 1969, Lakoff 1972, Wunderlich 1976) focussed essentially upon single sentences: proper syntax, semantics and pragmatics were based on the syntactic analysis of more or less

      1. Recently there seems to be a growing interest in lexical questions.
  • Data and Methods in Corpus Linguistics
    Part 4: Applications of Classification-based Approaches

    The final part takes the examination of data and methods to regions where corpus linguistics meets computational linguistics and machine learning, two overlapping disciplines that have potential for supplementing and advancing linguistic research. The content of linguistic corpora can be processed using different categories of annotation (e.g. raw word forms, lemmas, part-of-speech tags and automatically inferred syntactic dependencies) and different levels of granularity (e.g. n-grams of different lengths). Rather than testing linguistic hypotheses directly, input generated in this way can be used to classify linguistic structures (or entire texts) based on the frequency profiles of categories at the respective level. In the case of preconceived divisions (e.g. into earlier and later corpus texts), an exhaustive computational analysis can output a virtually complete list of skewed unit frequencies. Alternatively, if linguists are looking for unknown correspondences (e.g. translation equivalents between parallel corpora), such an analysis can turn up a list of potential reflexes of source text structures in target texts. Since innovative approaches like those presented in these contributions do not by themselves ensure interpretability, it is indispensable to evaluate results in a linguistically informed perspective. The procedures showcased in this final part are thus exemplary in that they take their methodological inspiration from computational linguistics and discuss applications to corpus-linguistic tasks.

    Focusing on grammatical changes in Late Modern and Present-Day English, Gerold Schneider applies a corpus-driven method to texts from two frequently used corpora for diachronic research, the Representative Corpus of Historical English Registers (ARCHER) and the Corpus of Historical American English (COHA).
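The frequency-profile classification this passage describes can be sketched in a few lines. The toy "earlier"/"later" reference texts below are invented stand-ins for real period subcorpora, and cosine similarity over word bigrams is an assumed, deliberately simple choice of features and comparison measure:

```python
from collections import Counter
import math

def ngram_profile(text, n=2):
    """Frequency profile of word n-grams (default: bigrams)."""
    tokens = text.lower().split()
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def cosine(p, q):
    """Cosine similarity between two frequency profiles."""
    dot = sum(p[k] * q[k] for k in set(p) & set(q))
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

def classify(text, labelled_profiles):
    """Assign the label whose reference profile is most similar."""
    profile = ngram_profile(text)
    return max(labelled_profiles,
               key=lambda label: cosine(profile, labelled_profiles[label]))

# Hypothetical toy reference profiles standing in for period subcorpora
reference = {
    "earlier": ngram_profile("thou hast said that thou hast seen it"),
    "later":   ngram_profile("you have said that you have seen it"),
}
print(classify("I know you have seen it", reference))  # prints: later
```

In practice the reference profiles would be built from whole subcorpora, and the units might be lemmas, part-of-speech tags or character n-grams rather than word bigrams; the pipeline — extract frequency profiles, compare, assign the nearest label — stays the same.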
  • A Discourse Analysis of the Letter to the Hebrews

    The Relationship between Form and Meaning

    • Cynthia Long Westfall(Author)
    • 2006(Publication Date)
    • T&T Clark
      (Publisher)
    A. van Dijk and W. Kintsch, Strategies of Discourse Comprehension (New York: Academic, 1985); A. Georgakopoulou and D. Goutsos, Discourse Analysis: An Introduction (Edinburgh: Edinburgh University Press, 1997); J. E. Grimes, The Thread of Discourse (Janua linguarum: Series minor, 207; The Hague: Mouton, 1975); M. Hoey, Patterns of Lexis in Text (Describing English Language; Oxford: Oxford University Press, 1991); idem, Textual Interaction: An Introduction to Written Discourse Analysis (London: Routledge, 2001); R. E. Longacre, The Grammar of Discourse (Topics in Language and Linguistics; New York: Plenum Press, 2nd edn, 1996); D. Nunan, Introducing Discourse Analysis (Penguin English Applied Linguistics; London: Penguin, 1993); D. Schiffrin, D. Tannen and H. E. Hamilton (eds), The Handbook of Discourse Analysis (Blackwell Handbooks in Linguistics; Malden, MA: Blackwell, 2001); M. Stubbs, Discourse Analysis: The Sociolinguistic Analysis of Natural Language (Language in Society, 4; Oxford: Blackwell, 1983).

    2. Introduction to Discourse Analysis Theory

    … words or clauses, but looks at the smaller units in their linguistic context. However, it also looks at the text in its social environment, regarding discourse as involving the speaker/writer and the hearer(s)/listener(s), and their attempt to communicate in situations where language is one of many means of exchange.2 In simplest terms, discourse analysis is the linguistic analysis of texts above the sentence level.3 J. T. Reed has identified four guiding tenets of discourse analysis:4

    (1) Analysis of the production and interpretation of discourse
    (2) Analysis beyond the sentence
    (3) Analysis of social functions of language use
    (4) Analysis of cohesiveness

    Therefore, discourse analysis emphasizes language as it is used. However, in practice, it is an interdisciplinary approach which is still in the early stages of development, demonstrating a bewildering and often contradictory range of terminology and methodology.
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.