Spreading Activation, Lexical Priming and the Semantic Web

Early Psycholinguistic Theories, Corpus Linguistics and AI Applications

About this book

This book explores the interconnections between linguistics and Artificial Intelligence (AI) research, their mutually influential theories and developments, and the areas where these two groups can still learn from each other. It begins with a brief history of artificial intelligence theories focusing on figures including Alan Turing and M. Ross Quillian and the key concepts of priming, spreading activation and the semantic web. The author details the origins of the theory of lexical priming in early AI research and how it can be used to explain structures of language that corpus linguists have uncovered. He explores how the idea of mirroring the mind's language processing has been adopted to create machines that can be taught to listen to and understand human speech in a way that goes beyond a fixed set of commands. In doing so, he reveals how the latest research into the semantic web and Natural Language Processing has developed from its early roots. The book moves on to describe how the technology has evolved with the adoption of inference concepts, probabilistic grammar models, and deep neural networks in order to fine-tune the latest language-processing and translation tools. This engaging book offers thought-provoking insights to corpus linguists, computational linguists and those working in AI and NLP.

© The Author(s) 2018
Michael Pace-Sigge, Spreading Activation, Lexical Priming and the Semantic Web, https://doi.org/10.1007/978-3-319-90719-2_1

1. Introduction

Michael Pace-Sigge
Department of English Language and Culture, University of Eastern Finland, Joensuu, Finland

Abstract

The aim of the book is to provide an overview of the interconnection of linguistics and artificial intelligence (AI). By the late 1950s, researchers were seriously considering tools to teach machines to comprehend human language. Thus, engineers in the computing sciences started working together with linguists. Today, trillions of words from different sources can be collated and used for computer-based calculations. This allows for a better-informed (because fully empirical) vision of language. As a result, it can be seen that linguistic knowledge underpins the ability of a computational device to process human language. Conversely, such electronic devices are getting closer to creating a mirror image of how language is processed, thus providing support for theories of the underlying structure of language.

Keywords

Turing · Quillian · Norvig · Spreading activation · AI
It was in the early 1960s, on a wave of progress and optimistic faith in technological solutions, that everything seemed to come together. The first automated computing machines were not much over a decade old when researchers seriously considered tools to teach these machines to comprehend human language. Thus, engineers in the computing sciences started working together with linguists. Ideas, even good ideas, need incubation time, however. Other people have to work on these ideas, coming up with new techniques; new people might think of new applications for the same idea but in very different fields. It is a bit like the moment humans first left the ground they were standing on. Apart from the vantage offered by special geographical features (like hills and mountains), all people could see was their immediate surroundings. The moment a balloon took people up into the air, however, a completely new view of the familiar surroundings was possible. Similarly, modern technology allows us to delve into subatomic geographies. The same experience is true when going beyond the use of single books as a basis to understand and manipulate language. With the second decade of the twenty-first century drawing to a close, trillions of words from different sources can be collated and used for computer-based calculations—with tools that are available to most people who have nothing more than a simple PC or mobile device with the appropriate application. This then allows a new, different, better-informed (because fully empirical) vision of language. Yet it is much more than that—this knowledge can now be harvested to enable machine-mediated understanding of spoken utterances; algorithms can now be designed to mimic natural human speech. As a result, we are witnessing a whole new dimension in communication.1
Consequently, a parallel set of conclusions can be drawn: it can be seen that linguistic knowledge underpins the ability of a computational device to process human language in written or spoken form. Conversely, such electronic devices are getting closer and closer to creating a mirror image of how language is produced, processed and understood, thus providing support for theories of the underlying structure of language while undermining rival claims: if a form of AI works, this can be seen as a result of successfully turning one theory into practice.
The genesis of the book is a story of coincidental discoveries which, over time, have built up to draw connections that changed the intended outcome several times. While I was preparing my first book, Lexical Priming in Spoken English (Pace-Sigge 2013), I happened to read Steven Levy's 2011 book about the search engine company, Google. As an aside, Levy encourages his readers to have a look at the personal page of Google's first head of research, Peter Norvig (2017)—to see something devoid of gimmicks: a proper engineer's web page. Curious, I went and had a look, only to find that Norvig himself had made, in a number of his published articles, reference to M. Ross Quillian—the very man that I had identified in my book as a key figure in the development of the concept of priming. Looking at the processes to retrieve the best possible search results for any given Google search, as described by Levy, the connection to the concept of lexical priming became quite obvious. This connection has subsequently been described, albeit not in too much detail, in my earlier book.
A few years down the line, my partner suggested I could write a primer on the concept of lexical priming as part of the Palgrave Pivot series. It took, however, another year or two before I had time to think about that project. Yet, as I started to investigate the matter, it became clear that a far more interesting project offered itself: the link between the psycholinguistic theory of lexical priming developed by Michael Hoey (2005) and the current developments in speech recognition and speech production technology which are born out of current advances in AI. For both appear to have a common root in the concepts of the semantic web and spreading activation, first developed by Quillian. The task for Quillian (1969) was to create a theoretical framework explaining how to programme a machine to understand natural human speech—the Teachable Language Comprehender (TLC), as he called it. The core of this task was, for Quillian, to create a form of Word-Sense Disambiguation (WSD). Tellingly, research into what is now referred to as WSD is at the heart of many AI and computational linguistics projects in the twenty-first century.2 As a consequence, it seems to make sense to write a book that shows the development of the theory, then outlines the two strands of research which developed out of it, and finally sees what these two communities of researchers can learn from each other.
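To give a rough sense of what spreading activation in a semantic network involves, the sketch below is a deliberately minimal, invented illustration rather than a reconstruction of Quillian's TLC: a tiny hand-built network of word associations, a naive activation pass that fans out from a set of source nodes with decaying energy, and a toy sense choice ("bank" as riverbank versus financial institution) scored by how much the activation around each sense overlaps with the activation around the context words. All node names, the decay factor and the scoring rule are assumptions made purely for illustration.

```python
# Minimal, illustrative sketch of spreading activation for word-sense choice.
# The network, decay factor and scoring rule are invented for this example.
from collections import defaultdict

# Toy semantic network: node -> associated nodes
NETWORK = {
    "bank_river": ["water", "shore", "fishing"],
    "bank_money": ["money", "loan", "account"],
    "water": ["river", "fishing"],
    "money": ["loan", "account"],
    "fishing": ["river", "water"],
    "loan": ["account", "money"],
}

def spread(sources, steps=2, decay=0.5):
    """Spread activation outward from the source nodes for a few steps."""
    activation = defaultdict(float)
    frontier = {node: 1.0 for node in sources}
    for _ in range(steps):
        next_frontier = defaultdict(float)
        for node, energy in frontier.items():
            activation[node] += energy
            for neighbour in NETWORK.get(node, []):
                next_frontier[neighbour] += energy * decay
        frontier = next_frontier
    return activation

def disambiguate(senses, context_words):
    """Pick the sense whose activated neighbourhood overlaps most with the context's."""
    context_activation = spread(context_words)
    scores = {}
    for sense in senses:
        sense_activation = spread([sense])
        # Score = activation shared between the sense's and the context's neighbourhoods.
        scores[sense] = sum(
            min(sense_activation[n], context_activation[n]) for n in sense_activation
        )
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    best, scores = disambiguate(["bank_river", "bank_money"], ["fishing", "water"])
    print(best, scores)  # "bank_river" scores highest in this toy example
```

The intuition the sketch tries to capture is the intersection-finding idea usually credited to Quillian: two words, or a word and its context, are related to the degree that activation spreading out from each meets in the network, and the sense whose neighbourhood intersects most strongly with the context is the one selected.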
There are, of course, a number of hugely important books available that cover the concepts in this book in far greater detail. First and foremost, the magisterial bible on AI, Stuart Russell and Peter Norvig's Artificial Intelligence: A Modern Approach. Originally published in late 1995, the latest updated edition came out in 2016. The book more focussed on the area discussed here—language—is the equally impressive Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics and Speech Recognition by Daniel Jurafsky and James H. Martin. Its first edition was published in 2000, with the second edition announced for 2018. One might want to take a shortcut—as the publisher, Prentice Hall (now Pearson) must have thought—and go for the 2014 book Speech and Language Processing by Jurafsky, Martin, Norvig and Russell. All three bo...

Table of contents

  1. Cover
  2. Front Matter
  3. 1. Introduction
  4. 2. M. Ross Quillian, Priming, Spreading-Activation and the Semantic Web
  5. 3. Where Corpus Linguistics and Artificial Intelligence (AI) Meet
  6. 4. Take Home Messages for Linguists and Artificial Intelligence Designers
  7. 5. Conclusions
  8. Back Matter