Languages & Linguistics

Chatbots

Chatbots are computer programs designed to simulate conversation with human users, typically through text-based interfaces. They use natural language processing and artificial intelligence to understand and respond to user queries. Chatbots are used in various applications, including customer service, virtual assistants, and language learning platforms.

Written by Perlego with AI-assistance

7 Key excerpts on "Chatbots"

Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.
  • Bots
    eBook - ePub
    • Nick Monaco, Samuel Woolley (Authors)
    • 2022 (Publication Date)
    • Polity (Publisher)

    ...The bots that interact with humans using natural language are chatbots, which we have discussed in detail in previous chapters. Under the hood, chatbots can take different approaches to engage in conversation with humans and, as we have seen in previous chapters, not all of them use AI techniques. The techniques used by non-AI bots tend to either be rule-based, which rely on carefully stipulated rules for how to process conversational inputs, or corpus-based, which use vast amounts of data (known as “linguistic corpora”) as a reservoir of examples that craft how a bot responds to an utterance (Jurafsky & Martin, 2018). Two bots we have met already, ELIZA and Eugene Goostman, both used the former approach of pre-defined, rule-based heuristics to simulate human conversation. Non-AI chatbots: most non-AI chatbots use rule-based techniques. The most basic chatbots use pattern matching, keywords, and canned phrases to simulate conversation with humans. Bots like these are typically used in extremely narrow and constrained contexts, such as customer service – and environments where the bot is likely to encounter only a small set of utterances. At the most basic level, a chatbot simply spits out preformulated sentences regardless of user input. Bots like these include the @everyword bot, which tweets every word of the English language from a dictionary but does not interact with other users on Twitter (Dubbin, 2013). The next step up from this most primitive type is a chatbot that has basic input processing: it searches for keywords and patterns in user input and responds to each keyword with preformulated responses. Think of a customer service bot that asks what it can help you with, but simply waits for you to mention the name of a product it provides support for so it can direct you to a webpage about that product...
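The rule-based, keyword-and-canned-phrase approach described in the excerpt above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not any particular product's implementation; the keyword patterns, replies, and URLs are invented.

```python
import re

# Invented keyword -> canned-reply rules, in the spirit of the narrow
# customer-service bots described above.
RULES = [
    (re.compile(r"\b(printer|scanner)\b", re.I),
     "See our printing support page: /support/printing"),
    (re.compile(r"\b(refund|return)\b", re.I),
     "Our returns policy is here: /support/returns"),
]
FALLBACK = "Sorry, I didn't catch that. Which product do you need help with?"

def reply(utterance: str) -> str:
    """Return the first canned reply whose keyword pattern matches the input."""
    for pattern, canned in RULES:
        if pattern.search(utterance):
            return canned
    return FALLBACK

print(reply("My printer keeps jamming"))   # printing support page
print(reply("How do I get a refund?"))     # returns policy
print(reply("Hello there"))                # fallback question
```

ELIZA-style bots work the same way at heart, only with richer pattern-to-template rules and some pronoun swapping when echoing the user's words back.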

  • AI in Marketing, Sales and Service
    eBook - ePub

    How Marketers without a Data Science Degree can use AI, Big Data and Bots

    ...Developer intervention is only necessary for maintenance purposes. Current breakthroughs in NLP, the sub-area of AI concerned with human-machine communication, make bot development even more dynamic. Back in 2014, chatbots were developed that successfully convinced a third of human users that they were human. Spoken-language systems can meanwhile place around 90% of the spoken word in context; written communication in this field is, however, much further developed and therefore more widespread. Bob and Alice, two AI-based chatbots in Facebook’s artificial intelligence research laboratory, FAIR, invented a language that their human creators did not understand. The original idea was to teach the chatbots how to negotiate. In the process, the systems developed their own language between each other that not even their creators could understand. It looked something like this: Bob: I can can I I everything else. Alice: Balls have 0 to me to me to me to me to me to me to me to me to. This independence and apparent loss of control was discussed in the press almost with panic, up to and including apocalyptic end-of-the-world scenarios. Some saw the development turning into Skynet; others saw the end of our civilisation in the spirit of superintelligence or the singularity. Yet it is not nearly as dramatic as that. Bob and Alice were to negotiate over various items, with certain items being more important to each bot than others. The AI was meant to find out in dialogue what the other bot’s preferences were. This would in principle have worked well, had the developers not forgotten to reward the bots for following the conventions and rules of the English language. So Alice and Bob began to use a kind of computerised stenography...

  • Chatbots and the Domestication of AI
    eBook - ePub

    ...This time, as the next two chapters will elaborate, both the way of detecting the input and the way of computing an appropriate output were fundamentally different. With the overall goal of increasing user engagement and handling specific tasks set by the user, the construction goal of chatbots changed as well, from an experimental, boundary-pushing approach to a search for applications. The current approach, based on machine learning (ML), allows the massive amounts of data produced by the rise of social media to be put to use. Instead of providing paths and decision trees for an algorithm to work through to reach an evaluative choice, machine-learned algorithms are trained on data sets without explicit guidance. Through these, an algorithm detects patterns and heuristics that may be foreign to humans but yield reliable results. This way, a primitive chatbot does not explicitly understand the sentences it is presented with, but instead picks up specific words or phrases and guesses an appropriate response based on a probability function learned from the data. The more sophisticated a chatbot is, the more precise its probabilities will be in evaluating sentence meaning, including the previous conversational context, pragmatics, and possible speaker-related idiosyncrasies (mistakes, slang). This development has led to chatbots that not only can fool people into thinking they are speaking to another human being, but that manage to hold a conversation as bots (i.e. without pretending to be human)...
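As a rough sketch of "picking up specific words or phrases and guessing an appropriate response based on a probability function learned from the data", here is a tiny Naive Bayes-style intent classifier. The training utterances, intent labels, and replies are invented for illustration; a real ML chatbot would learn from far larger corpora and richer models.

```python
import math
from collections import Counter, defaultdict

# Invented toy training data: (utterance, intent) pairs.
TRAIN = [
    ("where is my order", "shipping"),
    ("my package has not arrived", "shipping"),
    ("i want my money back", "refund"),
    ("how do i return this item", "refund"),
]
REPLIES = {
    "shipping": "You can track your parcel on the tracking page.",
    "refund": "I can start a return for you.",
}

# Bag-of-words counts per intent.
word_counts = defaultdict(Counter)
intent_counts = Counter()
for text, intent in TRAIN:
    intent_counts[intent] += 1
    word_counts[intent].update(text.split())
vocab = {w for counts in word_counts.values() for w in counts}

def classify(utterance: str) -> str:
    """Pick the intent with the highest smoothed log-probability for the words seen."""
    best, best_score = None, float("-inf")
    for intent, n in intent_counts.items():
        score = math.log(n / len(TRAIN))  # prior probability of the intent
        total = sum(word_counts[intent].values())
        for w in utterance.lower().split():
            # Laplace-smoothed word likelihood, so unseen words don't zero out the score.
            score += math.log((word_counts[intent][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = intent, score
    return best

print(REPLIES[classify("my order still has not arrived")])  # -> shipping reply
```

Nothing here "understands" the sentence: the bot only accumulates evidence from individual words, which is exactly the probabilistic guessing the excerpt describes.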

  • Artificial Psychology
    eBook - ePub

    The Quest for What It Means to Be Human

    ...People engage in this type of language activity more than any other. Turing’s famous test is, of course, based on this capacity. There is something magical about carrying out a conversation with a machine. The experience causes us to attribute to it characteristics like thought, intelligence, and consciousness. We mention computer dialogue programs at various points throughout this book. ALICE was an example of a computer chatterbot that won the Loebner prize and is described in the chapter on thinking. The SHRDLU program from the intelligence chapter receives typed commands through a computer keyboard and responds with questions of its own. ELIZA is a computerized conversational therapist we will discuss later. A conversational agent, then, is a computer program that communicates with users using natural language. Most conversational agents in commercial use now are designed to perform some specific task like booking airline flights, reserving tickets to a film, or checking a bank or credit card balance. (Table 6.1: Selected dialogue act tags in the DAMSL program. Forward-looking functions identify the type of statement made by a conversational partner; backward-looking functions identify the relationship of an utterance to previous utterances by the other speaker.) They are capable of understanding spoken user input, responding appropriately to questions and asking questions of their own. In this section we describe the different types of architectures that underlie conversational agents. The dialogue manager of a conversational agent is the “higher-level” part that guides the agent’s side of the dialogue. It controls the flow of dialogue, determining what statements to make or questions to ask and when to do so (Jurafsky & Martin, 2000). The simplest dialogue managers follow a flow chart specifying what responses and questions need to be made based on user utterances...
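The "flow chart" style of dialogue manager described above can be sketched as a small finite-state machine. The states, prompts, and the flight-booking slots below are invented for illustration and are not taken from the book or from any real system.

```python
# A minimal flow-chart-style dialogue manager: each state fixes the next
# prompt, and the user's reply moves the dialogue to the next state.
PROMPTS = {
    "ask_destination": "Where would you like to fly?",
    "ask_date": "What date do you want to travel?",
    "confirm": "Book a flight to {destination} on {date}? (yes/no)",
    "done": "Your booking request has been recorded.",
}

def run_dialogue(user_turns):
    """Walk a scripted list of user replies through the state machine."""
    state, slots, transcript = "ask_destination", {}, []
    for user_reply in user_turns:
        transcript.append(("bot", PROMPTS[state].format(**slots)))
        transcript.append(("user", user_reply))
        if state == "ask_destination":
            slots["destination"] = user_reply
            state = "ask_date"
        elif state == "ask_date":
            slots["date"] = user_reply
            state = "confirm"
        elif state == "confirm":
            state = "done" if user_reply.lower().startswith("y") else "ask_destination"
        if state == "done":
            break
    transcript.append(("bot", PROMPTS[state].format(**slots)))
    return transcript

for speaker, line in run_dialogue(["Lisbon", "12 May", "yes"]):
    print(f"{speaker}: {line}")
```

Each state determines what the agent says next, which is all the "control of dialogue flow" the simplest dialogue managers do.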

  • Linguistics
    eBook - ePub

    Why It Matters

    • Geoffrey K. Pullum (Author)
    • 2018 (Publication Date)
    • Polity (Publisher)

    ...They are, in short, astonishingly willing to believe that computers are engaged in thinking and understanding. But they aren’t. In 2017, Facebook did some research, widely reported in the press, to see if two chatbots (computer programs intended to simulate conversation) could learn how to negotiate pricing of some imaginary commodities – balls, hats, and books. On one run, as the pseudo-conversation proceeded, it started looking like this: Bob: i can i i everything else … Alice: balls have zero to me to me to me to me to me to me to me to me to Bob: you i everything else … Alice: balls have a ball to me to me to me to me to me to me to me And journalists wrote stories about how the experiment had to be terminated because the bots were evolving a new language that the scientists couldn’t understand. Newsweek went so far as to say that it was beginning to look as if a negotiation bot could turn into ‘a potential monster: a bot that can cut deals with no empathy for people, says whatever it takes to get what it wants, hacks language so no one is sure what it’s communicating and can’t be distinguished from a human being.’ It went on: ‘If we’re not careful, a bot like that could rule the world.’ Are journalists gullible enough to truly believe this stuff? Or do they write such nonsense because they know we’ll lap it up like kittens at a saucer of milk? The lesson from the two bots’ flailing, illustrated above, is that when extremely complex computer programs are trained to become familiar with the patterns found in huge bodies of complex data, and are programmed to feed results about their own performance back into their own further learning, they will produce strange and apparently random effects whenever they are put in a situation where no sensible output is determined by their training regimes...

  • Artificial Intelligence
    eBook - ePub

    Research Directions in Cognitive Science: European Perspectives Vol. 5

    • D. Sleeman, N. O. Bernsen (Authors)
    • 2019 (Publication Date)
    • Routledge (Publisher)

    ...CHAPTER 2. LANGUAGE UNDERSTANDING BY COMPUTER: DEVELOPMENTS ON THE THEORETICAL SIDE. Harry Bunt, ITK, Institute for Language Technology and AI, Tilburg, The Netherlands. 1. INTRODUCTION. This paper consists of three parts. In the first part I discuss the notion of language understanding and how it relates to Artificial Intelligence. In the second part I review some of the more important recent work on the theoretical side in the design of computer systems intended to understand natural language. In the third part I present a view on directions in the computational modelling of language understanding that seem most important for the near future. 2. UNDERSTANDING LANGUAGE. 2.1 Human and Artificial Language Understanding. Until two decades ago, the only type of language understander was the human understander; language understanding was synonymous with human language understanding, and the study of language understanding was part of cognitive psychology and psycholinguistics. In the sixties, Chomsky pointed out the theoretical importance of the fact that humans are able to understand infinite varieties of natural-language expressions in spite of finite information-processing resources; the implication being that meaning is encoded in natural language in systematic ways, describable by finite sets of grammatical rules and principles in combination with lexical knowledge. Since computers are able to store and effectively apply lexicons and large sets of rules in complex tasks, the human understander is no longer the only conceivable kind of language understander. When undertaking the design of a language understanding system, we have to face the question of what it is exactly that has to happen inside the machine in order to speak of "understanding". In other words, what exactly should be the result of an understanding process...
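Chomsky's point quoted above, that finitely many rules can cover an unbounded variety of expressions, can be illustrated with a toy recursive phrase-structure grammar. The grammar and lexicon below are invented for illustration, not taken from the chapter.

```python
import random

# A tiny recursive phrase-structure grammar: finitely many rules,
# but the recursion in NP -> NP PP licenses unboundedly many sentences.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["NP", "PP"]],
    "VP":  [["V", "NP"]],
    "PP":  [["P", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["linguist"], ["machine"], ["sentence"]],
    "V":   [["parses"], ["understands"]],
    "P":   [["with"], ["about"]],
}

def generate(symbol="S", depth=0, max_depth=5):
    """Expand a symbol into words, capping recursion depth so generation terminates."""
    if symbol not in GRAMMAR:          # terminal word
        return [symbol]
    options = GRAMMAR[symbol]
    if depth >= max_depth:             # past the cap, prefer non-recursive expansions
        options = [o for o in options if symbol not in o] or options
    words = []
    for part in random.choice(options):
        words.extend(generate(part, depth + 1, max_depth))
    return words

print(" ".join(generate()))  # e.g. "the machine parses a sentence with the linguist"
```

Eight rule schemas and a dozen words suffice to describe infinitely many distinct sentences, which is exactly the finite-rules, unbounded-understanding property the excerpt attributes to Chomsky.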

  • Linguistic Relativity Today
    eBook - ePub

    Language, Mind, Society, and the Foundations of Linguistic Anthropology

    • Marcel Danesi (Author)
    • 2021 (Publication Date)
    • Routledge (Publisher)

    ...The project was the idea of Joseph Weizenbaum in 1966. He designed his computer program to mimic the speech that a psychotherapist would use. Eliza’s questions, such as “Why do you say your head hurts?” in response to “My head hurts”, were perceived by subjects as being so realistic that many believed that the machine was actually alive. But, as Weizenbaum wrote a decade later, Eliza was a parodic imitation of psychoanalytic therapy speech; it had no consciousness of what it was saying. Weizenbaum’s project gave momentum to natural language processing (NLP), with the goal of producing human speech that verged on verisimilitude. Today, NLP uses sophisticated logical, probabilistic, and neural network systems that can produce and comprehend conversations (whatever that means in natural intelligence terms). The probabilistic aspect of NLP is a central one, given that many aspects of human communication involve uncertainty. In this framework, a prior hypothesis is updated in the light of new relevant observations or evidence, and this is done via a set of algorithmic procedures. But the problem of understanding comes up: What would a communication between a human and a machine truly mean? Again, this is an ongoing area of research which is beginning to provide interesting insights, but not any practical results as of yet (to the best of my knowledge). NLP programs are quite sophisticated; for instance, they can determine the sense of, say, an ambiguous word on the basis of word collocations in a text. A collocation is a sequence of words that typically co-occur in speech more often than would be anticipated by random chance. Collocations are not idioms, which have fixed phraseology. Phrases such as crystal clear, cosmetic surgery, and clean bill of health are all collocations. Whether the collocation is derived from some syntactic (make choices) or lexical (clearcut) criterion, the principle underlying collocations, frequency of usage of words in tandem, always applies...
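The collocation principle in this excerpt, word pairs co-occurring more often than chance would predict, is commonly quantified with scores such as pointwise mutual information (PMI). Here is a minimal sketch over an invented toy corpus; real collocation work uses large corpora and significance testing rather than a few sentences.

```python
import math
from collections import Counter

# Invented toy corpus containing a couple of the collocations named above.
corpus = (
    "the report was crystal clear and the surgeon recommended cosmetic surgery "
    "the water was crystal clear after cosmetic surgery the patient got a clean bill of health"
).split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
N = len(corpus)

def pmi(w1: str, w2: str) -> float:
    """PMI: how much more often w1 w2 co-occur than chance (call only for observed pairs)."""
    p_joint = bigrams[(w1, w2)] / (N - 1)
    p_w1, p_w2 = unigrams[w1] / N, unigrams[w2] / N
    return math.log2(p_joint / (p_w1 * p_w2))

print(round(pmi("crystal", "clear"), 2))   # higher score: a genuine collocation
print(round(pmi("surgery", "the"), 2))     # lower score: mostly chance co-occurrence
```

The same frequency-of-co-occurrence idea underlies the word-sense disambiguation the excerpt mentions: the sense of an ambiguous word is guessed from the collocates that appear around it.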