Learn AWS Serverless Computing
eBook - ePub

Learn AWS Serverless Computing

A beginner's guide to using AWS Lambda, Amazon API Gateway, and services from Amazon Web Services

Scott Patterson

  1. 382 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android

Book Information

Build, deploy, test, and run cloud-native serverless applications using AWS Lambda and other popular AWS services

Key Features

  • Learn how to write, run, and deploy serverless applications in Amazon Web Services
  • Make the most of AWS Lambda functions to build scalable and cost-efficient systems
  • Build and deploy serverless applications with Amazon API Gateway and AWS Lambda functions

Book Description

Serverless computing is a way to run your code without having to provision or manage servers. Amazon Web Services provides serverless services that you can use to build and deploy cloud-native applications. Starting with the basics of AWS Lambda, this book takes you through combining Lambda with other services from AWS, such as Amazon API Gateway, Amazon DynamoDB, and AWS Step Functions.

You'll learn how to write, run, and test Lambda functions using examples in Node.js, Java, Python, and C# before you move on to developing and deploying serverless APIs efficiently using the Serverless Framework. In the concluding chapters, you'll discover tips and best practices for leveraging Serverless Framework to increase your development productivity.

By the end of this book, you'll have become well-versed in building, securing, and running serverless applications using Amazon API Gateway and AWS Lambda without having to manage any servers.

What you will learn

  • Understand the core concepts of serverless computing in AWS
  • Create your own AWS Lambda functions and build serverless APIs using Amazon API Gateway
  • Explore best practices for developing serverless applications at scale using Serverless Framework
  • Discover the DevOps patterns in a modern CI/CD pipeline with AWS CodePipeline
  • Build serverless data processing jobs to extract, transform, and load data
  • Enforce resource tagging policies with continuous compliance and AWS Config
  • Create chatbots with natural language understanding to perform automated tasks

Who this book is for

This AWS book is for cloud architects and developers who want to build and deploy serverless applications using AWS Lambda. A basic understanding of AWS is required to get the most out of this book.




Section 1: Why We're Here

The objective of Section 1 is to give context to where Functions as a Service (FaaS) resides in the compute abstraction spectrum and introduce AWS Lambda.
This section comprises the following chapters:
  • Chapter 1, The Evolution of Compute
  • Chapter 2, Event-Driven Applications

The Evolution of Compute

You're here to learn a new skill, to expand your understanding of new ways of compute, and to follow along with the examples in this book to gain practical experience. Before we begin, it would be helpful to know a bit of the history and context behind why serverless exists. This chapter will explain the progression of thinking from the humble data center through to the new goalposts of serverless.
We'll learn how the placement of physical hardware has evolved from the perspective of the infrastructure engineer and developer, and how the different stages have allowed us to achieve new layers of compute abstraction. The evolution of these factors has also driven new ways of structuring software, and we will explore some of the reasons behind this.
The following topics will be covered in this chapter:
  • Understanding enterprise data centers
  • Exploring the units of compute
  • Understanding software architectures
  • Predicting what comes next
We will delve into these topics one by one and learn how each aspect works.

Understanding enterprise data centers

How we view a data center has changed with the introduction of new technologies and software development methods. We will begin with a recap of the characteristics of a fully self-managed data center and go on to explain how our views have changed over time:
Evolutionary changes in hardware, compute, and software architectures
It's worth noting that, in all the examples in this book, there is still an underlying data center that has to be managed; what changes is who carries that management responsibility. In the preceding diagram, the streams of evolution are not tightly linked. For example, monoliths can still run on private or public clouds. In this section, we will cover the evolution of hardware over time and focus on the following topics:
  • The physical data center
  • Colocating our gear
  • Cloud born

The physical data center

Your typical data center consists of a room with some metal racks arranged into rows with corridors between the rows. Each rack is filled with servers, networking gear, and maybe storage. The room must be kept at a consistent temperature and humidity to maintain the efficiency and reliability of the hardware components within the servers and other pieces of kit. The machines and equipment in the room also need power to run—lots of power.
Three-phase power is often required, charging the batteries of an uninterruptible power supply (UPS) to ride out brief power interruptions, with one or more backup generators taking over in the event of sustained mains power loss.
All of these components that make up a data center require specialist engineers to install and maintain them, and all of them are prerequisites for running the application that holds the business code.
Once the servers are racked and plumbed into power and networking, you still need to install and configure the operating system and apply the latest patches. The administration and maintenance doesn't stop once the application is deployed either, so dedicated operations staff are required.
Wouldn't it be good if we didn't have to be concerned about challenges such as finding available thermal space or adding redundant power sources? The drive to gain efficiencies in the way we do business has led to the emergence of new models, such as the next one.

Colocating our gear

Thankfully, if we already have our own servers that we need to run, we can use what's called a colocated space. This is when an organization running a data center has spare space (space meaning rack space, thermal space, power space, and networking space) and will rent it to you for a fee. You still maintain total control over your own server hardware.
The good thing about renting space in a data center is that we can reduce the number of specialist engineers that are needed to manage the hosting. We still need hardware and storage engineers, but we don't have to worry about making sure the air conditioning is operational, keeping track of the leasing and maintenance contracts, or depreciating the non-server assets over a given period of time.
Colocating can also go a step further where, instead of providing your own servers, you can rent the actual bare metal as well. This can save the burden on the finance team that's related to tracking IT assets.

Cloud born

Most consumers of server resources are not in the business of building data centers and aren't looking to scale the business or create this capability. They are concerned about building their application into a product that has business value. This group of builders needs a system that abstracts the details of the physical hardware and insulates the consumer from the failure of that hardware. They can't wait around to procure more hardware to scale if their product suddenly becomes popular.
A lot of these drivers (and there are plenty of others) are the reason that the cloud as we know it today was conceptualized.

Exploring the units of compute

In the previous section, we were reminded of the progression of building data centers to consuming cloud resources. We can also relate this shift to how we provision, deploy, and scale our compute resources. Let's have a look at how our thinking has moved from deploying physical servers to scaling our applications and to scaling at an application function level:
  • Physical servers: Scale at the physical server layer
  • Virtual machines: Density efficiencies achieved by virtualizing the hardware
  • Containers: Better density with faster start times
  • Functions: Best density by abstracting the runtime
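To make the functions level of the spectrum concrete, the following is a minimal sketch of the kind of handler AWS Lambda invokes. The function name, event shape, and response format here are illustrative assumptions (modeled on an API Gateway-style invocation), not something prescribed by the book at this point:

```python
# A minimal AWS Lambda-style handler in Python. The cloud provider
# supplies and manages the runtime; you deploy only this function,
# so the unit of scale becomes a single invocation rather than a
# whole server, VM, or container.
import json


def handler(event, context):
    # 'event' carries the trigger payload (for example, an API
    # Gateway request); 'context' exposes runtime metadata. Both
    # are passed in by the Lambda service on each invocation.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Locally, such a handler can be exercised by calling it directly with a dictionary event and a `None` context, which is one reason functions are so quick to iterate on compared with lower layers of the stack.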

Physical servers – scale at the physical server layer

When designing for scale in a physical server world, we needed to predict when our peak load might exceed our current capacity. This is the point at which one or more of our servers become fully utilized and turn into a bottleneck for the application or service. With enough foresight, we could order new server kit, get it racked, and bring it into service before it was needed. Unfortunately, this isn't how internet-scale works.
The lead time and work behind getting that new gear into service could include the following processes:
  • Securing a capital expenditure budget from finance or the Project Management Office (PMO)
  • Procuring hardware from a vendor
  • Capacity planning to confirm that there is space for the new hardware
  • Updates to the colocation agreement with the hosting provider
  • Scheduling engineers to travel to the site
  • Licensing considerations for new software being installed
The unit of scale here is an entire server and, as the preceding list shows, there is considerable work ahead of you once you make the decision to scale. Compounding this problem is the fact that once you have scaled, there's no scaling back. The new hardware will live in service for years, and your baseline operational costs have now increased for the duration.

Virtual machines – density efficiencies achieved by virtualizing the hardware

The drawback of scaling by physical nodes is that we always have to spec out the gear for a peak load. This means that, during low load times, the server isn't doing a lot of work and we have spare capacity.
If we could run more workloads on this single server, we could make more efficient use of the hardware. This is what we mean when we talk about density: the number of workloads that we can fit into a server, virtual machine, or operating system. Here we introduce the hypervisor. A hypervisor is a layer of software that abstracts the server's operating system and applications from the underlying hardware.
By running a hypervisor on the host server, we can share hardware resources between more than one virtual operating system running simultaneously on the host. We call these guest machines or, more commonly, virtual machines (VMs). Each VM can operate independently from the other, allowing multiple tenants to use the same physical host.
The following is a diagram showing how the layers can be visualized. A hypervisor sits between the host operating system and the virtual machines and allows a layer of translation so that the guest operating systems can communicate with the underlying hardware:
Multiple virtual machines running on an abstracted hardware layer (IaaS)
Now, our unit of scale is the virtual machine. Virtual machines are configured and deployed in minutes using the hypervisor management software or through the APIs that are provided. The life cycle of a VM is typically weeks or months, though it can sometimes be years.
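As an illustration of provisioning compute through an API rather than racking hardware, here is a hedged sketch using the AWS SDK for Python (boto3) to launch an EC2 virtual machine. The AMI ID and instance type are placeholders, and the live API call is kept behind a separate function so that nothing contacts AWS without credentials:

```python
# Sketch: deploying a VM via an API call instead of racking hardware.
# Assumes boto3 is installed and AWS credentials are configured for
# the launch step; the AMI ID must be replaced with a real image ID
# for your region.
def build_run_instances_request(ami_id, instance_type="t3.micro"):
    # Assemble the parameters for EC2's RunInstances API.
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
    }


def launch_vm(ami_id):
    # Requires AWS credentials; RunInstances returns metadata about
    # the newly created instance within seconds -- minutes from
    # request to a running VM, versus weeks for physical hardware.
    import boto3  # third-party SDK: pip install boto3

    ec2 = boto3.client("ec2")
    return ec2.run_instances(**build_run_instances_request(ami_id))
```

Separating request construction from the API call keeps the parameter logic testable offline, which mirrors how infrastructure-as-code tools treat the desired state as data.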
When we add virtual mach...