Learn AWS Serverless Computing
eBook - ePub

Learn AWS Serverless Computing

A beginner's guide to using AWS Lambda, Amazon API Gateway, and services from Amazon Web Services

Scott Patterson

  1. 382 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS and Android


About this book

Build, deploy, test, and run cloud-native serverless applications using AWS Lambda and other popular AWS services

Key Features

  • Learn how to write, run, and deploy serverless applications in Amazon Web Services
  • Make the most of AWS Lambda functions to build scalable and cost-efficient systems
  • Build and deploy serverless applications with Amazon API Gateway and AWS Lambda functions

Book Description

Serverless computing is a way to run your code without having to provision or manage servers. Amazon Web Services provides serverless services that you can use to build and deploy cloud-native applications. Starting with the basics of AWS Lambda, this book takes you through combining Lambda with other services from AWS, such as Amazon API Gateway, Amazon DynamoDB, and AWS Step Functions.

You'll learn how to write, run, and test Lambda functions using examples in Node.js, Java, Python, and C# before you move on to developing and deploying serverless APIs efficiently using the Serverless Framework. In the concluding chapters, you'll discover tips and best practices for leveraging Serverless Framework to increase your development productivity.

By the end of this book, you'll have become well-versed in building, securing, and running serverless applications using Amazon API Gateway and AWS Lambda without having to manage any servers.

What you will learn

  • Understand the core concepts of serverless computing in AWS
  • Create your own AWS Lambda functions and build serverless APIs using Amazon API Gateway
  • Explore best practices for developing serverless applications at scale using Serverless Framework
  • Discover the DevOps patterns in a modern CI/CD pipeline with AWS CodePipeline
  • Build serverless data processing jobs to extract, transform, and load data
  • Enforce resource tagging policies with continuous compliance and AWS Config
  • Create chatbots with natural language understanding to perform automated tasks

Who this book is for

This AWS book is for cloud architects and developers who want to build and deploy serverless applications using AWS Lambda. A basic understanding of AWS is required to get the most out of this book.

Information

Year
2019
ISBN
9781789959956

Section 1: Why We're Here

The objective of Section 1 is to give context to where Functions as a Service (FaaS) resides in the compute abstraction spectrum and introduce AWS Lambda.
This section comprises the following chapters:
  • Chapter 1, The Evolution of Compute
  • Chapter 2, Event-Driven Applications

The Evolution of Compute

You're here to learn a new skill, to expand your understanding of new approaches to compute, and to follow along with the examples in this book to gain practical experience. Before we begin, it would be helpful to know a bit of the history and context behind why serverless exists. This chapter will explain the progression of thinking from the humble data center through to the new goalposts of serverless.
We'll learn how the placement of physical hardware has evolved from the perspective of the infrastructure engineer and developer, and how the different stages have allowed us to achieve new layers of compute abstraction. The evolution of these factors has also driven new ways of structuring software, and we will explore some of the reasons behind this.
The following topics will be covered in this chapter:
  • Understanding enterprise data centers
  • Exploring the units of compute
  • Understanding software architectures
  • Predicting what comes next
We will delve into these topics one by one and learn how each aspect works.

Understanding enterprise data centers

How we view a data center has changed with the introduction of new technologies and software development methods. We will begin with a recap of the characteristics of a fully self-managed data center and go on to explain how our views have changed over time:
Evolutionary changes in hardware, compute, and software architectures
It's worth noting that, in all the examples in this book, there is still an underlying data center that has to be managed; what changes is who carries the responsibility for managing it. In the preceding diagram, the streams of evolution are not tightly linked; for example, monoliths can still run on private or public clouds. In this section, we will cover the evolution of hardware over time and focus on the following topics:
  • The physical data center
  • Colocating our gear
  • Cloud born

The physical data center

Your typical data center consists of a room with some metal racks arranged into rows with corridors between the rows. Each rack is filled with servers, networking gear, and maybe storage. The room must be kept at a consistent temperature and humidity to maintain the efficiency and reliability of the hardware components within the servers and other pieces of kit. The machines and equipment in the room also need power to run—lots of power.
Often, three-phase power is needed to keep the batteries in an uninterruptible power supply charged to ride out brief power interruptions, while one or more backup generators take over in the event of sustained mains power loss.
All of the components that make up a data center require specialist engineers to install and maintain them, and all of them must be in place before the application that runs the business code can be deployed.
Once the servers are racked and plumbed into power and networking, you still need to install and configure them, as well as the operating system and the latest patches. The administration and maintenance of these things doesn't stop once the application is deployed either, so this would require dedicated operations staff.
Wouldn't it be good if we didn't have to be concerned about challenges such as finding available thermal space or adding redundant power sources? The drive to gain efficiencies in the way we do business has led to the emergence of new models, such as the next one.

Colocating our gear

Thankfully, if we already have our own servers that we need to run, we can use what's called a colocated space. This is when an organization running a data center has spare space (space meaning rack space, thermal space, power space, and networking space) and will rent it to you for a fee. You still maintain total control over your own server hardware.
The good thing about renting space in a data center is that we can reduce the number of specialist engineers that are needed to manage the hosting. We still need hardware and storage engineers, but we don't have to worry about making sure the air conditioning is operational, keeping track of the leasing and maintenance contracts, or depreciating the non-server assets over a given period of time.
Colocation can also go a step further: instead of providing your own servers, you can rent the actual bare metal as well. This relieves the finance team of the burden of tracking IT assets.

Cloud born

Most consumers of server resources are not in the business of building data centers and aren't looking to scale the business or create this capability. They are concerned about building their application into a product that has business value. This group of builders needs a system that abstracts the details of the physical hardware and insulates the consumer from the failure of that hardware. They can't wait around to procure more hardware to scale if their product suddenly becomes popular.
A lot of these drivers (and there are plenty of others) are the reason that the cloud as we know it today was conceptualized.

Exploring the units of compute

In the previous section, we were reminded of the progression from building data centers to consuming cloud resources. We can relate this shift to how we provision, deploy, and scale our compute resources. Let's look at how our thinking has moved from deploying physical servers, to scaling our applications, to scaling at the level of an individual application function:
  • Physical servers: Scale at the physical server layer
  • Virtual machines: Density efficiencies achieved by virtualizing the hardware
  • Containers: Better density with faster start times
  • Functions: Best density by abstracting the runtime
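To make the "function" unit of compute concrete, here is a minimal sketch of what an AWS Lambda handler looks like in the Python runtime. Lambda only requires a handler that accepts an event and a context object; the `name` field in the event and the function name itself are illustrative assumptions, not Lambda requirements.

```python
import json

def handler(event, context):
    """Minimal Lambda-style handler: greet the caller named in the event.

    The 'name' event field is illustrative; the returned statusCode/body
    shape matches what an API Gateway proxy integration expects.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Locally, you can exercise such a handler by calling it with a plain dictionary for the event and `None` for the context, which is handy for quick tests before deploying.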

Physical servers – scale at the physical server layer

When designing for scale in a physical server world, we needed to predict when our peak load might exceed our current capacity. This is the point at which one or more of our servers are fully utilized and become a bottleneck for the application or service. If we had enough foresight, we could order new server kit, get it racked, and bring it into service before it was needed. Unfortunately, this isn't how internet scale works.
The lead time and work behind getting that new gear into service could include the following processes:
  • Securing a capital expenditure budget from finance or the Project Management Office (PMO)
  • Procuring hardware from a vendor
  • Capacity planning to confirm that there is space for the new hardware
  • Updates to the colocation agreement with the hosting provider
  • Scheduling engineers to travel to the site
  • Licensing considerations for new software being installed
The unit of scale here is an entire server and, as you can see from the preceding list, there is a considerable amount of work ahead of you once you make the decision to scale. Compounding this problem is the fact that, once you have scaled up, there's no scaling back. The new hardware will live in service for years, and your baseline operational costs have now increased for the duration.

Virtual machines – density efficiencies achieved by virtualizing the hardware

The drawback of scaling by physical nodes is that we always have to spec out the gear for a peak load. This means that, during low load times, the server isn't doing a lot of work and we have spare capacity.
If we could run more workloads on this single server, we could achieve more efficient use of the hardware. This is what we mean when we talk about density – the number of workloads that we can cram into a server, virtual machine, or operating system. Here we introduce the hypervisor. A hypervisor is a layer of software that abstracts the server's operating system and applications from the underlying hardware.
By running a hypervisor on the host server, we can share hardware resources between more than one virtual operating system running simultaneously on the host. We call these guest machines or, more commonly, virtual machines (VMs). Each VM can operate independently from the other, allowing multiple tenants to use the same physical host.
The following is a diagram showing how the layers can be visualized. A hypervisor sits between the host operating system and the virtual machines and allows a layer of translation so that the guest operating systems can communicate with the underlying hardware:
Multiple virtual machines running on an abstracted hardware layer (IaaS)
Now, our unit of scale is the virtual machine. Virtual machines are configured and deployed in minutes using the hypervisor management software or through the APIs that are provided. The life cycle of a VM is typically weeks or months, though it can sometimes be years.
When we add virtual mach...

Table of Contents

  1. Title Page
  2. Copyright and Credits
  3. About Packt
  4. Contributors
  5. Preface
  6. Section 1: Why We're Here
  7. The Evolution of Compute
  8. Event-Driven Applications
  9. Section 2: Getting Started with AWS Lambda Functions
  10. The Foundations of a Function in AWS
  11. Adding Amazon API Gateway
  12. Leveraging AWS Services
  13. Going Deeper with Lambda
  14. Section 3: Development Patterns
  15. Serverless Framework
  16. CI/CD with the Serverless Framework
  17. Section 4: Architectures and Use Cases
  18. Data Processing
  19. AWS Automation
  20. Creating Chatbots
  21. Hosting Single-Page Web Applications
  22. GraphQL APIs
  23. Assessment
  24. Other Books You May Enjoy