Exploring GPT-3

Steve Tingiris, Bret Kinsella


About the book

Get started with GPT-3 and the OpenAI API for natural language processing using JavaScript and Python.

Key Features
  • Understand the power and potential of GPT-3 language models and the risks involved
  • Explore core GPT-3 use cases such as text generation, classification, and semantic search using engaging examples
  • Plan and prepare a GPT-3 application for the OpenAI review process required for publishing a live application

Book Description
Generative Pre-trained Transformer 3 (GPT-3) is a highly advanced language model from OpenAI that can generate written text that is virtually indistinguishable from text written by humans. Whether you have a technical or non-technical background, this book will help you understand and start working with GPT-3 and the OpenAI API. If you want to get hands-on with leveraging artificial intelligence for natural language processing (NLP) tasks, this easy-to-follow book will help you get started.
Beginning with a high-level introduction to NLP and GPT-3, the book takes you through practical examples that show how to leverage the OpenAI API and GPT-3 for text generation, classification, and semantic search. You'll explore the capabilities of the OpenAI API and GPT-3 and find out which NLP use cases GPT-3 is best suited for. You'll also learn how to use the API and optimize requests for the best possible results. With examples focusing on the OpenAI Playground and easy-to-follow JavaScript and Python code samples, the book illustrates the possible applications of GPT-3 in production.
By the end of this book, you'll understand the best use cases for GPT-3 and how to integrate the OpenAI API in your applications for a wide array of NLP tasks.

What you will learn
  • Understand what GPT-3 is and how it can be used for various NLP tasks
  • Get a high-level introduction to GPT-3 and the OpenAI API
  • Implement JavaScript and Python code examples that call the OpenAI API
  • Structure GPT-3 prompts and options to get the best possible results
  • Select the right GPT-3 engine or model to optimize for speed and cost-efficiency
  • Find out which use cases would not be suitable for GPT-3
  • Create a GPT-3-powered knowledge base application that follows OpenAI guidelines

Who this book is for
Exploring GPT-3 is for anyone interested in natural language processing or learning GPT-3, with or without a technical background. Developers, product managers, entrepreneurs, and hobbyists looking to get to grips with NLP, AI, and GPT-3 will find this book useful. Basic computer skills are all you need to get the most out of this book.


Information

Year: 2021
ISBN: 9781800565494
Edition: 1
Subject: Computer Science
Pages: 296
Language: English

Section 1: Understanding GPT-3 and the OpenAI API

The objective of this section is to provide you with a high-level introduction to GPT-3 and the OpenAI API and to show how easy it is to get started with them. The goal is to engage you with fun examples that are quick and simple to implement.
This section comprises the following chapters:
  • Chapter 1, Introducing GPT-3 and the OpenAI API
  • Chapter 2, GPT-3 Applications and Use Cases

Chapter 1: Introducing GPT-3 and the OpenAI API

The buzz about Generative Pre-trained Transformer Version 3 (GPT-3) started with a blog post from a leading Artificial Intelligence (AI) research lab, OpenAI, on June 11, 2020. The post began as follows:
We're releasing an API for accessing new AI models developed by OpenAI. Unlike most AI systems which are designed for one use-case, the API today provides a general-purpose "text in, text out" interface, allowing users to try it on virtually any English language task.
Online demos from early beta testers soon followed—some seemed too good to be true. GPT-3 was writing articles, penning poetry, answering questions, chatting with lifelike responses, translating text from one language to another, summarizing complex documents, and even writing code. The demos were incredibly impressive—things we hadn't seen a general-purpose AI system do before—but equally impressive was that many of the demos were created by people with limited or no formal background in AI and Machine Learning (ML). GPT-3 had raised the bar, not just in terms of the technology, but also in terms of AI accessibility.
GPT-3 is a general-purpose language processing AI model that practically anybody can understand and start using in a matter of minutes. You don't need a Doctor of Philosophy (PhD) in computer science—you don't even need to know how to write code. In fact, everything you'll need to get started is right here in this book. We'll begin in this chapter with the following topics:
  • Introduction to GPT-3
  • Democratizing NLP
  • Understanding prompts, completions, and tokens
  • Introducing Davinci, Babbage, Curie, and Ada
  • Understanding GPT-3 risks

Technical requirements

This chapter requires you to have access to the OpenAI Application Programming Interface (API). You can register for API access by visiting https://openai.com/.
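Once you have registered and obtained an API key, the following is a minimal sketch of a first request using the openai Python package as it existed around the time of writing (pre-1.0 versions, where completions are created with openai.Completion.create). The engine name, the prompt, and the OPENAI_API_KEY environment variable are illustrative choices for this sketch, not requirements.

import os
import openai  # pip install openai (pre-1.0 versions expose openai.Completion)

# Read the key from an environment variable rather than hard-coding it.
openai.api_key = os.environ["OPENAI_API_KEY"]

# A first "text in, text out" request; the engine and prompt are illustrative.
response = openai.Completion.create(
    engine="davinci",
    prompt="Once upon a time",
    max_tokens=15,
)

print(response.choices[0].text)

If the call prints a short continuation of the prompt, your API access is working and you are ready for the examples in the rest of the book.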

Introduction to GPT-3

In short, GPT-3 is a language model: a statistical model that calculates the probability distribution over a sequence of words. In other words, GPT-3 is a system for guessing which text comes next when text is given as an input.
Now, before we delve further into what GPT-3 is, let's cover a brief introduction (or refresher) on Natural Language Processing (NLP).

Simplifying NLP

NLP is a branch of AI that focuses on the use of natural human language for various computing applications. NLP is a broad category that encompasses many different types of language processing tasks, including sentiment analysis, speech recognition, machine translation, text generation, and text summarization, to name but a few.
In NLP, language models are used to calculate the probability distribution over a sequence of words. Language models are essential because of the extremely complex and nuanced nature of human languages. For example, pay in full and painful or tee time and teatime sound alike but have very different meanings. A phrase such as she's on fire could be literal or figurative, and words such as big and large can be used interchangeably in some cases but not in others—for example, using the word big to refer to an older sibling wouldn't have the same meaning as using the word large. Thus, language models are used to deal with this complexity, but that's easier said than done.
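To make "calculating the probability distribution over a sequence of words" concrete, here is a toy sketch of a bigram model in Python. It is purely illustrative and is not how GPT-3 works internally; the tiny corpus and the function name are made up for the example.

from collections import Counter, defaultdict

# Toy corpus; a real language model is trained on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
next_word_counts = defaultdict(Counter)
for current_word, following_word in zip(corpus, corpus[1:]):
    next_word_counts[current_word][following_word] += 1

def next_word_distribution(word):
    """Return the estimated probability of each word that can follow `word`."""
    counts = next_word_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# "Guessing which text comes next": in this corpus, "cat" is the
# likeliest word to follow "the".
print(next_word_distribution("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}

A model such as GPT-3 plays the same guessing game, but over entire sequences of text rather than single word pairs, which is what lets it cope with the ambiguity described above.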
While understanding things such as word meanings and their appropriate usage seems trivial to humans, NLP tasks can be challenging for machines. This is especially true for more complex language processing tasks such as recognizing irony or sarcasm—tasks that even challenge humans at times.
Today, the best technical approach to a given NLP task depends on the task. So, most of the best-performing, state-of-the-art (SOTA) NLP systems are specialized systems that have been fine-tuned for a single purpose or a narrow range of tasks. Ideally, however, a single system could successfully handle any NLP task. That's the goal of GPT-3: to provide a general-purpose AI system for NLP. So, even though the best-performing NLP systems today tend to be specialized, purpose-built systems, GPT-3 achieves SOTA performance on a number of common NLP tasks, showing the potential for a future general-purpose NLP system that could provide SOTA performance for any NLP task.

What exactly is GPT-3?

Although GPT-3 is a general-purpose NLP system, it really just does one thing: it predicts what comes next based on the text that is provided as input. But it turns out that, with the right architecture and enough data, this one thing can handle a stunning array of language processing tasks.
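To illustrate that "one thing" in practice, here is a hedged sketch showing how very different tasks can all be framed as predict-what-comes-next prompts sent through the same completion call. It again assumes the pre-1.0 openai Python package; the prompts, engine choice, and parameter values are illustrative, and prompt design is covered in detail later in the book.

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes your key is set in the environment

# Three different NLP tasks, each expressed as plain "text in, text out".
prompts = {
    "translation": "Translate English to French:\nThe weather is nice today. =>",
    "question answering": "Q: Who wrote Romeo and Juliet?\nA:",
    "summarization": ("Summarize in one sentence:\n"
                      "GPT-3 is a large language model that predicts the next "
                      "word in a sequence of text.\nSummary:"),
}

for task, prompt in prompts.items():
    response = openai.Completion.create(
        engine="davinci",   # engines are introduced later in this chapter
        prompt=prompt,
        max_tokens=30,
        temperature=0,      # keep the output stable for comparison
        stop=["\n"],        # stop at the end of the first generated line
    )
    print(f"{task}: {response.choices[0].text.strip()}")

The model isn't switching between a translator, an answer engine, and a summarizer; in every case it is simply continuing the text it was given.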
GPT-3 is the third version of the GPT language model from OpenAI. So, although it started to become popular in the summer of 2020, the first version of GPT was announced 2 years earlier, and the following version, GPT-2, was announced in February 2019. But even though GPT-3 is the third version, the general system design and architecture haven't changed much from GPT-2. There is one big difference, however, and that's the size of the dataset that was used for training.
GPT-3 was trained on a massive dataset comprising text from the internet, books, and other sources, containing roughly 57 billion words, and the resulting model has 175 billion parameters. That's more than 100 times the number of parameters in GPT-2 and roughly 10 times more than the next-largest language model at the time. To put the training data into perspective, the average human might read, write, speak, and hear upward of a billion words in an entire lifetime. So, GPT-3 has been trained on an estimated 57 times the number of words most humans will ever process.
The GPT-3 language model is massive, so it isn't something you'll be downloading and dabbling with on your laptop. But even if you could (which you can't, because it's not available to download), it would cost millions of dollars in computing resources each time you wanted to train the model. This would put GPT-3 out of reach for most small companies and virtually all individuals if you had to rely on your own computing resources to use it. Thankfully...
