The Future of Translation Technology

Towards a World without Babel

About this book

Technology has revolutionized the field of translation, bringing drastic changes to the way translation is studied and done. To an average user, technology is simply about clicking buttons and storing data. What we need to do is to look beyond a system's interface to see what is at work and what should be done to make it work more efficiently. This book is both macroscopic and microscopic in approach: macroscopic as it adopts a holistic orientation when outlining the development of translation technology in the last forty years, organizing concepts in a coherent and logical way with a theoretical framework, and predicting what is to come in the years ahead; microscopic as it examines in detail the five stages of technology-oriented translation procedure and the strengths and weaknesses of the free and paid systems available to users. The Future of Translation Technology studies, among other issues:



  • The Development of Translation Technology
  • Major Concepts in Computer-aided Translation
  • Functions in Computer-aided Translation Systems
  • A Theoretical Framework for Computer-aided Translation Studies
  • The Future of Translation Technology

This book is an essential read for scholars and researchers of translation studies and computational linguistics, and a guide for system users and professionals.


1
The development of translation technology

1967–2014

Introduction

The history of translation technology, or more specifically computer-aided translation (CAT), is short, but its development has been fast (Chan 2015: 3). It is generally recognized that the failure of machine translation in the 1960s as a result of the infamous ALPAC report (1966) led to the emergence of computer-aided translation. The development of computer-aided translation in the course of the last forty-seven years, from its beginning in 1967 to 2014, can be divided into four periods. The first period, which goes from 1967 to 1983, is a period of germination. The second period, covering the years between 1984 and 1992, is a period of steady growth. The third period, from 1993 to 2002, is a decade of rapid growth. The last period, which extends from 2003 to 2014, is a period of global development.

1967–1983: A period of germination

Computer-aided translation, as mentioned above, originated from machine translation, which, in turn, resulted from the invention of the computer. By 1966, when the ALPAC report was published, machine translation had made considerable progress in a number of countries since the invention of the first computer, ENIAC, in 1946. Several events that took place over these two decades are worth noting. In 1947, merely one year after the advent of the computer, Warren Weaver of the Rockefeller Foundation and Andrew D. Booth of Birkbeck College, London University, proposed to make use of the newly invented computer to translate natural languages, becoming the first two scholars to discuss the possibility of incorporating computers into the translation process (Chan 2004: 290–291). In 1949, Warren Weaver wrote a memorandum, circulated to his colleagues, outlining the prospects of machine translation; it went down in history as ‘Weaver’s Memorandum’. In 1952, Yehoshua Bar-Hillel held the first conference on machine translation at the Massachusetts Institute of Technology. Some of the papers presented at the conference were compiled by William N. Locke and Andrew D. Booth into an anthology entitled Machine Translation of Languages: Fourteen Essays, the first book on machine translation (Locke and Booth 1955). In 1954, Leon Dostert of Georgetown University and Peter Sheridan of IBM used the IBM 701 machine to give a public demonstration of the translation of Russian sentences into English, which marked a milestone in machine translation (Chan 2004: 125–126; Hutchins 1999). Later that year, the inaugural issue of Mechanical Translation, the first journal in the field of machine translation, was published by the Massachusetts Institute of Technology (Yngve 2000: 50–51). In 1962, the Association for Computational Linguistics was founded in the United States, and the journal of the association, Computational Linguistics, began to be published. It was roughly estimated that by 1965 there were research institutions in sixteen countries engaged in studies on machine translation, including the United States, the former Soviet Union, the United Kingdom, Japan, France, West Germany, Italy, the former Czechoslovakia, the former Yugoslavia, East Germany, Mexico, Hungary, Canada, Holland, Romania, and Belgium (Zhang 2006: 30–34).
The development of machine translation in the United States since the late 1940s, however, fell short of expectations. In 1963, the Georgetown machine translation project was terminated, signifying the end of the largest machine translation project in the United States (Chan 2004: 303). In 1964, the government of the United States set up the Automatic Language Processing Advisory Committee (ALPAC), comprising seven experts in the field, to enquire into the state of machine translation (ALPAC 1966; Warwick 1987: 22–37). In 1966, the Committee’s report, entitled Languages and Machines: Computers in Translation and Linguistics, pointed out that ‘there is no immediate or predictable prospect of useful machine translation’ (ALPAC 1966: 32) and that machine translation, being twice as expensive as human translation, had failed to meet people’s expectations. The Committee thus recommended that support for machine translation be terminated. The report also noted that ‘as it became increasingly evident that fully automatic high-quality machine translation was not going to be realized for a long time, interest began to be shown in machine-aided translation’ (ALPAC 1966: 25). The focus therefore shifted from machine translation to machine-aided translation, which was ‘aimed at improving human translation, with an appropriate use of machine aids’ (ALPAC 1966: iii), and the Committee concluded that ‘machine-aided translation may be an important avenue toward better, quicker, and cheaper translation’ (ALPAC 1966: 32). The ALPAC report dealt a serious blow to machine translation in the United States, which was to remain stagnant for more than a decade, and it also had a negative impact on machine translation research in Europe and Russia. At the same time, however, it provided an opportunity for machine-aided translation to come into being. All this shows that the birth of machine-aided translation was closely related to the development of machine translation.
Computer-aided translation, nevertheless, would not have been possible without the support of related concepts and software. It was no mere coincidence that the idea of translation memory, one of the major concepts and functions of computer-aided translation, emerged during this period. According to John Hutchins, the concept of translation memory can be traced back to the period between the 1960s and the 1980s (Hutchins 1998). In 1978, when Alan Melby of the Translation Research Group at Brigham Young University was conducting research on machine translation and developing the interactive translation system ALPS (Automated Language Processing Systems), he incorporated the idea of translation memory into a tool known as ‘Repetitions Processing’, which aimed at finding matched strings (Kingscott 1984: 27–29; Melby 1978; Melby and Warner 1995: 187). In the following year, in a paper presented at a conference organized by the European Commission on the issue of whether machine translation should be used, Peter Arthern proposed the method of ‘translation by text-retrieval’ (Arthern 1979: 93). According to Arthern:
This information would have to be stored in such a way that any given portion of text in any of the languages involved can be located immediately … together with its translation into any or all of the other languages which the organization employs.
(Arthern 1979: 95)
In October 1980, Martin Kay, then at the Xerox Palo Alto Research Center, published the article ‘The Proper Place of Men and Machines in Language Translation’. He proposed a machine translation system in which the display on the screen was divided into two windows: the text to be translated would appear in the upper window, while the translation would be composed in the bottom one, allowing the translator to edit the translation with the help of simple facilities peculiar to translation, such as aids for word selection and dictionary consultation. Kay called such a system a ‘translator’s amanuensis’ (Kay 1980: 9–18). Given the word-processing capabilities of the time, his proposal was inspiring for the development of computer-aided translation and exerted a huge influence on later research. Kay is generally considered a pioneer in proposing an interactive translation system.
It can be seen that the idea of translation memory was established in the late 1970s and early 1980s (Bruderer 1975: 258–261, 1977: 529–556). Hutchins believes that Arthern was the first to propose the concept of translation memory. However, as Melby and Arthern put forward the idea at almost the same time, both can be considered forerunners. In addition, it should be acknowledged that Arthern, Melby, and Kay all made great contributions to the growth of computer-aided translation in its early days.
The first attempt to deploy the idea of translation memory in a machine translation system was made by Alan Melby and his co-researchers at Brigham Young University, who jointly developed the Automated Language Processing System, or ALPS for short. The system provided access to previously translated segments that were identical to the segment being translated (Hutchins 1998: 291). Some scholars classify this type of full match as a function of first-generation translation memory systems (Elita and Gavrila 2006; Gotti, Langlais, Macklovitch, Bourigault, Robichaud, and Coulombe 2005; Kavak 2009). One of the major shortcomings of this generation of computer-aided translation systems was that sentences with full matches were very small in number, which limited the reusability of the translation memory and reduced the role of the translation memory database (Wang 2011: 141).
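To make this limitation concrete, the following minimal sketch (in Python; the class, its names, and its normalization rule are illustrative assumptions, not the design of ALPS or any historical system) shows how a full-match-only translation memory behaves: a stored translation is reused only when the new sentence is effectively identical to a stored one.

    # Minimal sketch of a first-generation (full-match-only) translation memory.
    # Names and normalization rules are assumptions for illustration only.

    class ExactMatchTM:
        def __init__(self):
            self._store = {}  # normalized source segment -> target segment

        @staticmethod
        def _normalize(segment: str) -> str:
            # Collapse whitespace and ignore case; anything beyond this
            # (e.g. fuzzy matching) lies outside a first-generation system.
            return " ".join(segment.split()).lower()

        def add(self, source: str, target: str) -> None:
            self._store[self._normalize(source)] = target

        def lookup(self, source: str):
            # A translation is returned only on a full match; otherwise None,
            # which is why reuse rates were so low in practice.
            return self._store.get(self._normalize(source))

    tm = ExactMatchTM()
    tm.add("Press the red button.", "Appuyez sur le bouton rouge.")
    print(tm.lookup("Press the  red button."))   # full match: reused
    print(tm.lookup("Press the green button."))  # near miss: no reuse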
Around 1980, some researchers began to collect and store translation samples with the intention of redeploying and sharing their translation resources. Constrained by the limitations of computer hardware (such as scarce storage space), the cost of building a bilingual database was high, and the algorithms for bilingual data alignment were still immature, so translation memory technology remained at an exploratory stage. As a result, no truly commercial computer-aided translation system emerged during the sixteen years of this period, and translation technology therefore had no real impact on translation practice or the translation industry (Zachary 1979: 13–28).

1984–1992: A period of steady growth

The eight-year period between 1984 and 1992 is characterized by the steady growth of computer-aided translation and by three developments: the start of corporate operation in 1984, system commercialization in 1988, and regional expansion in 1992 (Marčuk 1989: 682–688).

Company operation

It was during this period that the first computer-aided translation companies, Trados in Germany and Star Group in Switzerland, were founded. These two companies later had a great impact on the development of computer-aided translation.
The German company was founded by Jochen Hummel and Iko Knyphausen in Stuttgart, Germany, in 1984. The name Trados GmbH stood for ‘TRAnslation and DOcumentation Software’. The company was initially set up as a language service provider (LSP) to work on a translation project it had received from IBM. As the company later developed computer-aided translation tools to help complete the project, the establishment of Trados GmbH is regarded as the starting point of the period of steady growth in computer-aided translation (Garcia and Stevenson 2005: 18–31; http://www.lspzone.com).
Of equal significance was the founding of the Swiss company STAR AG in the same year. STAR, an acronym for ‘Software, Translation, Artwork, and Recording’, provided manual technical editing and translation services supported by information technology and automation. Two years later, STAR opened its first foreign office in Germany in order to serve the increasingly important software localization market, and started developing two software products, namely GRIPS and Transit, for information management and translation memory, respectively. At the same time, client demand and growing export markets led to the establishment of additional overseas locations in Japan and China. The STAR Group still plays an important role in the translation technology industry (http://www.star-group.net).
It can be observed that during this early period of computer-aided translation, all the companies in the field were established and operated in Europe. This Eurocentric phenomenon was bound to change in the following period.

System commercialization

The commercialization of computer-aided translation systems began in 1988, when Eiichiro Sumita and Yutaka Tsutsumi of the Japanese branch of IBM released ETOC (‘Easy TO Consult’), which was essentially an upgraded electronic dictionary. Traditional electronic dictionaries could only be consulted word by word; it was impossible to search for phrases or sentences of more than two words. ETOC, however, offered a more flexible solution. When a sentence was entered, the system first tried to find it in its dictionary. If no match was found, it carried out a grammatical analysis of the sentence, removing some of the content words while keeping the function words and adjectives, which together formed a sentence pattern. This pattern was then compared with the bilingual sentences in the dictionary database, and those with a similar pattern were displayed for the translator to select from. The translator could then copy and paste a sentence into the editor and revise it to complete the translation. Although the system did not use the term translation memory, and the translation database was still regarded as a ‘dictionary’, it had essentially the basic features of today’s translation memory tools. Its main shortcoming was that, because it needed to perform grammatical analysis, it was difficult to program and hard to scale: if a new language were to be added, a grammatical analysis module would have to be written for that language. Furthermore, as the system supported only perfect matches and not fuzzy matches, the reusability of previous translations was drastically reduced (Sumita and Tsutsumi 1988: 2).
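The retrieval procedure described above can be sketched roughly as follows (Python; the word classes, example data, and matching rules are simplified assumptions for illustration, not the actual ETOC implementation):

    # Rough sketch of pattern-based example retrieval in the spirit of ETOC:
    # content words are masked to form a sentence pattern, which is compared
    # with the patterns of stored bilingual examples. All data are invented.

    FUNCTION_WORDS = {"the", "a", "an", "to", "of", "in", "on", "please"}

    # A toy bilingual "dictionary" of example sentences (English -> French).
    EXAMPLES = [
        ("Please send the file to the office.", "Veuillez envoyer le fichier au bureau."),
        ("The manual is in the box.", "Le manuel est dans la boîte."),
    ]

    def pattern(sentence):
        # Keep function words, replace other (content) words with a placeholder.
        # ETOC also kept adjectives; that refinement is omitted here.
        words = sentence.lower().rstrip(".").split()
        return tuple(w if w in FUNCTION_WORDS else "*" for w in words)

    def retrieve(query):
        # 1. Try a full match first.
        for src, tgt in EXAMPLES:
            if src.lower() == query.lower():
                return src, tgt
        # 2. Fall back to comparing sentence patterns.
        q = pattern(query)
        for src, tgt in EXAMPLES:
            if pattern(src) == q:
                return src, tgt  # displayed for the translator to revise
        return None

    print(retrieve("Please send the letter to the manager."))
    # Pattern-matches the first example; the translator then edits its translation.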
Around the time that ETOC was released in Japan, Trados developed TED, a plug-in for text processors that would later become, in expanded form, the editor of the first Translator’s Workbench; at the time, it was developed by just two people and their secretary (Brace 1992a; Garcia and Stevenson 2005). It was also around this time that Trados decided to split the company, passing the translation services side of the business to INK in the Netherlands so that it could concentrate on developing translation software (http://www.translationzone.com).
Two years later, the company also released the first version of MultiTerm as a memory-resident multilingual terminology management tool for DOS, taking the innovative approach of storing all data in a single, freely structured database with entries classified by user-defined attributes (Eurolux Computers 1992: 8; http://www.translationzone.com; Wassmer 2011).
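A freely structured, attribute-based terminology entry of the kind just described can be pictured as in the short sketch below (Python; the field names and lookup function are invented for illustration and do not reflect MultiTerm’s actual data model):

    # Sketch of a freely structured terminology store: each entry carries
    # whatever user-defined attributes are useful, rather than fixed columns.
    # All field names and values here are invented for illustration.

    terms = [
        {
            "en": "translation memory",
            "de": "Übersetzungsspeicher",
            "subject": "translation technology",  # user-defined attribute
            "status": "approved",                 # user-defined attribute
        },
        {
            "en": "fuzzy match",
            "de": "Fuzzy-Match",
            "subject": "translation technology",
            "note": "match below 100% similarity",
        },
    ]

    def find(attribute, value):
        # Filter entries by any attribute the user happened to define.
        return [t for t in terms if t.get(attribute) == value]

    print(find("subject", "translation technology"))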
Three years later, in 1991, STAR AG released Transit 1.0 worldwide, a 32-bit DOS version that had been under development since 1987 and had until then been used exclusively for in-house production. Transit, whose name derives from the phrase ‘translate it’, featured modules that are now standard features of computer-aided translation systems, such as a proprietary translation editor with separate but synchronized windows for source and target languages, tag protection, a translation memory engine, a terminology management component, and project management features. In the context of system development, the ideas of terminolo...

Table of contents

  1. Cover
  2. Title
  3. Copyright
  4. Contents
  5. List of illustrations
  6. Preface
  7. Acknowledgements
  8. 1 The development of translation technology: 1967–2014
  9. 2 Major concepts in computer-aided translation
  10. 3 Functions in computer-aided translation systems
  11. 4 Computer-aided translation: Free and paid systems
  12. 5 A theoretical framework for computer-aided translation studies
  13. 6 The future of translation technology
  14. Index