Kubernetes – An Enterprise Guide
eBook - ePub

Kubernetes – An Enterprise Guide

Marc Boorshtein, Scott Surovich

  1. 578 pages
  2. English

About This Book

Master core Kubernetes concepts important to enterprises from a security, policy, and management point of view. Learn to deploy a service mesh using Istio, build a CI/CD platform, and provide enterprise security to your clusters.

Key Features
  • Extensively revised edition covering the latest updates and new releases, along with two new chapters introducing Istio
  • Get a firm command of Kubernetes from the dual perspective of an admin and a developer
  • Understand advanced topics including load balancing, externalDNS, global load balancing, authentication integration, policy, security, auditing, backup, Istio, and CI/CD

Book Description
Kubernetes has taken the world by storm, becoming the standard infrastructure for DevOps teams to develop, test, and run applications. With significant updates in each chapter, this revised edition will help you acquire the knowledge and tools required to integrate Kubernetes clusters in an enterprise environment.

The book introduces you to Docker and Kubernetes fundamentals, including a review of basic Kubernetes objects. You'll get to grips with containerization and understand its core functionalities, such as creating ephemeral multinode clusters using KinD. This edition replaces PodSecurityPolicies (PSPs) with OPA/Gatekeeper for PSP-like enforcement. You'll integrate your cluster with cloud platforms and tools including MetalLB, externalDNS, OpenID Connect (OIDC), Open Policy Agent (OPA), Falco, and Velero. After learning to deploy your core cluster, you'll learn how to deploy Istio and how to deploy both monolithic applications and microservices into your service mesh. Finally, you will discover how to deploy an entire GitOps platform to Kubernetes using continuous integration and continuous delivery (CI/CD).

What you will learn
  • Create a multinode Kubernetes cluster using KinD
  • Implement Ingress, MetalLB, ExternalDNS, and the new sandbox project, K8GB
  • Configure a cluster for OIDC and impersonation
  • Deploy a monolithic application in an Istio service mesh
  • Map enterprise authorization to Kubernetes
  • Secure clusters using OPA and Gatekeeper
  • Enhance auditing using Falco and ECK
  • Back up your workload for disaster recovery and cluster migration
  • Deploy to a GitOps platform using Tekton, GitLab, and ArgoCD

Who this book is for
This book is for anyone interested in DevOps, containerization, and going beyond basic Kubernetes cluster deployments. DevOps engineers, developers, and system administrators looking to enhance their IT career paths will also find this book helpful. Although some prior experience with Docker and Kubernetes is recommended, this book includes a Kubernetes bootcamp that describes Kubernetes objects to help you if you are new to the topic or need a refresher.


Information

Year: 2021
ISBN: 9781803236094

4

Services, Load Balancing, ExternalDNS, and Global Balancing

Before systems like Kubernetes were available, scaling an application was often a manual process that, in many larger organizations, involved multiple teams and multiple processes. To scale out a common web application, you would have to add additional servers and then update the frontend load balancer to include them. We will discuss load balancers in this chapter, but as a quick introduction for anyone who may be new to the term: a load balancer provides a single point of entry to an application. Incoming requests are handled by the load balancer, which routes traffic to any backend server that hosts the application. This is a very high-level explanation, and most load balancers offer powerful features well beyond simply routing traffic, but for the purposes of this chapter, we are only concerned with the routing features.
When you deploy an application to a Kubernetes cluster, your pods are assigned ephemeral IP addresses. Since these addresses are likely to change as pods are restarted, you should never target a workload using a pod IP address; instead, you should use a service object, which maps a stable service IP address to backend pods based on labels. If you need to offer access to external requests, you can deploy an Ingress controller, which will expose your service to external traffic on a per-URL basis. For more advanced workloads, you can deploy a load balancer, which provides your service with an external IP address, allowing you to expose any IP-based service to external requests.
We will explain how to implement each of these by deploying them on our KinD cluster. To help us understand how Ingress works, we will deploy an NGINX Ingress controller to the cluster and expose a web server. Since Ingress rules are based on the incoming URL name, we need to be able to provide stable DNS names. In an enterprise environment, this would be accomplished using standard DNS; since we are using a development environment without a DNS server, we will use a popular service from nip.io.
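As a minimal sketch of how an Ingress rule might pair with a nip.io hostname, consider the following manifest. This is not taken from the book; the hostname, service name, and port are illustrative assumptions:

```yaml
# Hypothetical Ingress rule routing requests for a nip.io hostname to a
# backend Service. nip.io resolves any name with an embedded IP address
# (here, webserver.10.2.1.121.nip.io) to that IP, so host-based Ingress
# rules work without running a DNS server.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-frontend-ingress
spec:
  rules:
  - host: webserver.10.2.1.121.nip.io   # resolves to 10.2.1.121 via nip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-frontend          # assumed Service name
            port:
              number: 8080              # assumed Service port
```

Because the rule matches on the host header, multiple applications can share the same Ingress controller IP, each reachable under its own nip.io name.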
The chapter will end with two advanced topics. In the first, we will explain how you can dynamically register service names using an ETCD-integrated DNS zone with the Kubernetes incubator project, external-dns. The second advanced topic will explain how to set up an integrated Kubernetes global balancer to offer highly available services that can span multiple clusters, using a new CNCF project called K8GB.
As you may imagine, these topics can get very involved, and fully understanding them requires examples and detailed explanations. Due to the complexity of the subjects covered, we have formatted this chapter into "mini chapters."
In this chapter, we will cover the following topics:
  • Exposing workloads to requests
    • Understanding Kubernetes service options
  • Using Kubernetes load balancers
    • Layer 7 load balancers
    • Layer 4 load balancers
  • Enhancing basic load balancers for the enterprise
  • Making service names available externally
  • Load balancing between multiple clusters
By the end of the chapter, you will have a strong understanding of the multiple options for exposing your workloads in a single Kubernetes cluster. You will also learn how to leverage an open source global load balancer to provide access to workloads that run on multiple clusters.

Technical requirements

This chapter has the following technical requirements:
  • An Ubuntu 18.04 or 20.04 server with a minimum of 4 GB of RAM.
  • A KinD cluster configured using the configuration from Chapter 2, Deploying Kubernetes Using KinD.
You can access the code for this chapter by going to this book's GitHub repository: https://github.com/PacktPublishing/Kubernetes---An-Enterprise-Guide-2E/tree/main/chapter4.

Exposing workloads to requests

Over the years, we have discovered that the three most commonly misunderstood concepts in Kubernetes are services, Ingress controllers, and load balancers. In order to expose your workloads, you need to understand how each object works and the options that are available to you. Let's look at these in detail.

Understanding how services work

As we mentioned in the introduction, any pod that is running a workload is assigned an IP address at pod startup. Many events will cause a deployment to restart a pod, and when the pod is restarted, it will likely receive a new IP address. Since the addresses that are assigned to pods may change, you should never target a pod's workload directly.
One of the most powerful features that Kubernetes offers is the ability to scale your deployments. When a deployment is scaled, Kubernetes will create additional pods to handle any additional resource requirements. Each pod will have an IP address, and as you may know, most applications only target a single IP address or name. If your application were to scale from a single pod to 10 pods, how would you utilize the additional pods?
Services use Kubernetes labels to create a dynamic mapping between the service itself and the pods running the workload. The pods that are running the workload are labeled when they start up. Each pod has the same label that is defined in the deployment. For example, if we were using an NGINX web server in our deployment, we would create a deployment with the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx-frontend
  name: nginx-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      run: nginx-frontend
  template:
    metadata:
      labels:
        run: nginx-frontend
    spec:
      containers:
      - image: bitnami/nginx
        name: nginx-frontend
This deployment will create three NGINX servers, and each pod will be labeled with run=nginx-frontend. We can verify that the pods are labeled correctly by listing them with kubectl and adding the --show-labels option: kubectl get pods --show-labels.
This will list each pod and any associated labels:
nginx-frontend-6c4dbf86d4-72cbc 1/...
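To complete the picture, a Service that selects these pods might look like the following sketch. The Service name and port numbers are assumptions for illustration; the selector matches the run=nginx-frontend label defined in the Deployment above:

```yaml
# Hypothetical Service that load balances across all pods labeled
# run=nginx-frontend. The name and port values are illustrative; the
# selector is the label from the Deployment manifest shown earlier.
apiVersion: v1
kind: Service
metadata:
  name: nginx-frontend
spec:
  selector:
    run: nginx-frontend     # dynamic mapping: any pod with this label
  ports:                    # becomes an endpoint of this Service
  - port: 80                # port exposed on the Service's cluster IP
    targetPort: 8080        # assumed port the container listens on
```

Because the mapping is driven by labels, pods created during a scale-up (or replaced after a restart) are added as endpoints automatically, which is how a single stable address can front any number of backend pods.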
