Mastering Service Mesh

Enhance, secure, and observe cloud-native applications with Istio, Linkerd, and Consul

Anjali Khatri, Vikram Khatri

  1. 626 pages
  2. English
  3. ePUB (mobile friendly)

About This Book

Understand how to use service mesh architecture to efficiently manage and safeguard microservices-based applications with the help of examples

Key Features

  • Manage your cloud-native applications easily using service mesh architecture
  • Learn about Istio, Linkerd, and Consul – the three primary open source service mesh providers
  • Explore tips, techniques, and best practices for building secure, high-performance microservices

Book Description

Although microservices-based applications support DevOps and continuous delivery, they can also add to the complexity of testing and observability. The implementation of a service mesh architecture, however, allows you to secure, manage, and scale your microservices more efficiently. With the help of practical examples, this book demonstrates how to install, configure, and deploy an efficient service mesh for microservices in a Kubernetes environment.

You'll get started with a hands-on introduction to the concepts of cloud-native application management and service mesh architecture, before learning how to build your own Kubernetes environment. While exploring later chapters, you'll get to grips with the three major service mesh providers: Istio, Linkerd, and Consul. You'll be able to identify their specific functionalities, from traffic management, security, and certificate authority through to sidecar injections and observability.

By the end of this book, you will have developed the skills you need to effectively manage modern microservices-based applications.

What you will learn

  • Compare the functionalities of Istio, Linkerd, and Consul
  • Become well-versed with service mesh control and data plane concepts
  • Understand service mesh architecture with the help of hands-on examples
  • Work through hands-on exercises in traffic management, security, policy, and observability
  • Set up secure communication for microservices using a service mesh
  • Explore service mesh features such as traffic management, service discovery, and resiliency

Who this book is for

This book is for solution architects and network administrators, as well as DevOps and site reliability engineers who are new to the cloud-native framework. You will also find this book useful if you're looking to build a career in DevOps, particularly in operations. Working knowledge of Kubernetes and of building cloud-native microservices is necessary to get the most out of this book.


Information

Year
2020
ISBN
9781789611946

Section 1: Cloud-Native Application Management

In this section, you will look at the high-level artifacts of cloud-native applications in order to understand service mesh architecture.
This section contains the following chapters:
  • Chapter 1, Monolithic Versus Microservices
  • Chapter 2, Cloud-Native Applications

Monolithic Versus Microservices

The purpose of this book is to walk you through the service mesh architecture. We will cover the three main open source service mesh providers: Istio, Linkerd, and Consul. First, we will discuss how the evolution of technology led to the service mesh. In this chapter, we will cover the application development journey from monolithic to microservices.
The monolithic framework grew out of the technology stack that became available more than 20 years ago. As hardware and software virtualization improved significantly, a new wave of innovation began with the adoption of microservices in 2011 by Netflix, Amazon, and other companies, who redesigned their monolithic applications into small, independent microservices.
Before we compare monolithic and microservices architectures, let's take a step back and review what led us to where we are today. This chapter briefly traces the evolution of early computer machines, hardware virtualization, and software virtualization, and the transition from monolithic to microservices-based applications.
In this chapter, we will cover the following topics:
  • Early computer machines
  • Monolithic applications
  • Microservices applications

Early computer machines

IBM launched its first commercial computer (https://ibm.biz/Bd294n), the IBM 701, in 1953; it was the most powerful high-speed electronic calculator of its time. Further progress produced mainframes, a revolution that began in the mid-1950s (https://ibm.biz/Bd294p).
Even before co-founding Intel with Robert Noyce in 1968, Gordon Moore had formulated Moore's Law (https://intel.ly/2IY5qLU) in 1965, which observes that the number of transistors incorporated in a chip approximately doubles every 24 months. This exponential growth continues to this day, though the trend may not last much longer.
IBM created its first official VM product called VM/370 in 1972 (http://www.vm.ibm.com/history), followed by hardware virtualization on the Intel/AMD platform in 2005 and 2006. Monolithic applications were the only choice on early computing machines.
Early machines ran only one operating system. As time passed and machines grew in size, the need to run multiple operating systems on the same machine, by slicing it into smaller virtual machines, led to the virtualization of hardware.

Hardware virtualization

Hardware virtualization led to the proliferation of virtual machines in data centers. Greg Kalinsky, EVP and CIO of Geico, mentioned the use of 70,000 virtual machines in his keynote address at the IBM Think 2019 conference. Managing virtual machines required a different set of tools. In this area, VMware was very successful in the Intel market, whereas IBM's Hardware Management Console (HMC) was widely used on POWER systems to create Logical Partitions (LPARs) through PowerVM. Hardware virtualization has its own overheads, but it has been very popular for running multiple operating systems on the same physical machine.
Different monolithic applications often have different operating system and language runtime requirements, and virtual machines made it possible to run them on the same hardware. During this period of hardware virtualization, work on enterprise applications using the Service-Oriented Architecture (SOA) and the Enterprise Service Bus (ESB) started to evolve, which led to large monolithic applications.

Software virtualization

The next wave of innovation started with software virtualization, through the use of containerization technology. Though not new, software virtualization began to gain serious traction once tooling made it easier to adopt. Docker was an early pioneer in this space, making software virtualization accessible to general IT professionals.
Solomon Hykes started dotCloud in 2010 and renamed it Docker in 2013. Software virtualization became possible due to advances in technology that provide namespace, filesystem, and process isolation while still using the same kernel, whether running on bare metal or in a virtual machine.
Software virtualization using containers provides better resource utilization than running multiple virtual machines, often cited as a 30% to 40% improvement. A virtual machine usually takes seconds to minutes to initialize, whereas containers share the same kernel space, so their startup time is much quicker.
In fact, Google had used software virtualization at a very large scale, relying on containerization for close to 10 years through its internal project known as Borg. When Google published a research paper about its approach to managing data centers with containerization at the EuroSys 2015 conference (https://goo.gl/Ez99hu), it piqued the interest of many technologists. At the same time, Docker exploded in popularity during 2014 and 2015, which made software virtualization simple enough to use.
One of the main benefits of software virtualization (also known as containerization) is that it eliminates dependency problems for a particular piece of software. For example, on Linux, glibc is a fundamental building-block library, and hundreds of libraries depend on a particular version of it. We can build a Docker container that has a particular version of glibc and run it on a machine that has a later version. Normally, maintaining two software stacks built against different versions of glibc is very complex, but containers make it simple. Docker is credited with providing a simple user interface that made software packaging easy and accessible to developers.
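As a minimal sketch of this idea (the image tag and binary name here are illustrative, not from the book), a Dockerfile pins the exact library stack an application was built against:

```dockerfile
# The base image fixes the glibc version the application sees,
# regardless of which glibc version is installed on the host.
FROM ubuntu:18.04

# Hypothetical application binary built against this image's glibc
COPY myapp /usr/local/bin/myapp

CMD ["/usr/local/bin/myapp"]
```

A second application built against a newer glibc can ship in its own image (for example, from an ubuntu:22.04 base) and run side by side with the first on the same host.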
Software virtualization made it possible to run different monolithic applications on the same hardware (bare metal) or within the same virtual machine. It also led to the birth of smaller services, each a complete business function, packaged as independent software units. This is when the era of microservices started.

Container orchestration

It is easy to manage a few containers and their deployment. As the number of containers increases, a container orchestration platform makes deployment and management simpler through declarative specifications. As containerization proliferated in 2015, orchestration platforms for containers also evolved. Docker came with its own open source container orchestration platform known as Docker Swarm, a clustering and scheduling tool for Docker containers.
Apache Mesos, though not directly comparable to Docker Swarm, was built using the same principles as the Linux kernel: it is an abstraction layer between applications and the Linux kernel. It was designed for distributed computing and acts as a cluster manager with an API for resource management and scheduling.
Kubernetes was the open source evolution of Google's Borg project, and its first version was released in 2015 through the Cloud Native Computing Foundation (https://cncf.io) as its first incubator project.
Major companies such as Google, Red Hat, Huawei, ZTE, VMware, Cisco, Docker, AWS, IBM, and Microsoft contribute to the Kubernetes open source platform, and it has become a modern cluster manager and container orchestration platform. It is no surprise that Kubernetes has become the de facto platform, used by all major cloud providers, with more than 125 companies and 2,800 contributors working on it (https://www.stackalytics.com/cncf?module=kubernetes).
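As a small illustration of the declarative model (the names and image below are illustrative, not from the book), a minimal Kubernetes Deployment states the desired number of replicas, and the platform continuously reconciles the cluster toward that state:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # hypothetical service name
spec:
  replicas: 3               # desired state; Kubernetes keeps 3 pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # illustrative container image
        ports:
        - containerPort: 80
```

If a node fails and a pod is lost, the orchestrator notices the drift from the declared state and schedules a replacement automatically.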
As container orchestration began to simplify cluster management, it became easy to run microservices in a distributed environment, which made microservices-based applications loosely coupled systems with horizontal scale-out possibilities.
Horizontal scale-out distributed computing is not new, with IBM's shared-nothing architecture for the Db2 ...
