Containers in OpenStack
eBook - ePub

Containers in OpenStack

Pradeep Kumar Singh, Madhuri Kumari, Vinoth Kumar Selvaraj, Felipe Monteiro, Venkatesh Loganathan

176 pages | English
About the book

A practical book which will help the readers understand how the container ecosystem and OpenStack work together.

About This Book
  • Gets you acquainted with containerization in the private cloud
  • Learn to effectively manage and secure your containers in OpenStack
  • Practical use cases on container deployment and management using OpenStack components

Who This Book Is For
This book is targeted towards cloud engineers, system administrators, or anyone from the production team who works on OpenStack cloud. It acts as an end-to-end guide for anyone who wants to start using containerization on a private cloud. Some basic knowledge of Docker and Kubernetes will help.

What You Will Learn
  • Understand the role of containers in the OpenStack ecosystem
  • Learn about containers and the different types of container runtime tools
  • Understand containerization in OpenStack with respect to the deployment framework, platform services, application deployment, and security
  • Get skilled in using OpenStack to run your applications inside containers
  • Explore the best practices of using containers in OpenStack

In Detail
Containers are one of the most talked about technologies of recent times. They have become increasingly popular as they are changing the way we develop, deploy, and run software applications. OpenStack has gained tremendous traction, as it is used by many organizations across the globe. As containers grow in popularity and complexity, it's necessary for OpenStack to provide various infrastructure resources for containers, such as compute, network, and storage. Containers in OpenStack answers the question: how can OpenStack keep ahead of the increasing challenges of container technology? You will start by getting familiar with container and OpenStack basics, so that you understand how the container ecosystem and OpenStack work together. To cover networking, application service management, and deployment tools, the book has dedicated chapters on different OpenStack projects: Magnum, Zun, Kuryr, Murano, and Kolla. Towards the end, you will be introduced to some best practices for securing your containers and COEs on OpenStack, with an overview of using each OpenStack project for different use cases.

Style and approach
An end-to-end guide for anyone who wants to start using containerization on a private cloud.


Information

Year: 2017
ISBN: 9781788391924

Working with Container Orchestration Engines

In this chapter, we will be looking at the Container Orchestration Engine (COE). Container Orchestration Engines are tools which help in managing many containers running on multiple hosts.
In this chapter, we will be covering the following topics:
  • Introduction to COE
  • Docker Swarm
  • Apache Mesos
  • Kubernetes
  • Kubernetes installation
  • Kubernetes hands-on

Introduction to COE

Containers provide users with an easy way to package and run their applications. Packaging involves defining the library and tools that are necessary for a user's application to run. These packages, once converted to images, can be used to create and run containers. These containers can be run anywhere, whether it's on developer laptops, QA systems, or production machines, without any change in environment. Docker and other container runtime tools provide the facility to manage the life cycle of such containers.
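As a minimal sketch of that single-host life cycle (the image and container names here are arbitrary examples), the plain Docker command line looks like this:
$ docker build -t myapp:1.0 .            # build an image from the Dockerfile in the current directory
$ docker run -d --name myapp myapp:1.0   # create and start a container from that image
$ docker ps                              # list the containers running on this host
$ docker stop myapp && docker rm myapp   # stop the container and delete it
$ docker rmi myapp:1.0                   # remove the image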
Using these tools, users can build and manage images, run containers, delete containers, and perform other container life cycle operations. But these tools only manage containers on a single host. When we deploy our application across multiple containers and multiple hosts, we need some kind of automation tool. This type of automation is generally called orchestration. Orchestration tools provide a number of features, including:
  • Provisioning and managing hosts on which containers will run
  • Pulling the images from the repository and instantiating the containers
  • Managing the life cycle of containers
  • Scheduling containers on hosts based on the host's resource availability
  • Starting a new container when one dies
  • Scaling the containers to match the application's demand
  • Providing networking between containers so that they can access each other on different hosts
  • Exposing these containers as services so that they can be accessed from outside
  • Health monitoring of the containers
  • Upgrading the containers
Generally, these kinds of orchestration tools provide declarative configuration in YAML or JSON format. These definitions carry all of the information related to the containers, including image, networking, storage, scaling, and other things. Orchestration tools use these definitions to apply the same settings and provide the same environment every time.
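As a rough sketch of such a declarative definition, the following Compose-format file (the service names and images are arbitrary examples) describes two containers and a published port, and Docker Compose can recreate that environment from it:
$ cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine        # image to pull and run
    ports:
      - "8080:80"              # port published outside the container
  cache:
    image: redis:alpine
EOF
$ docker-compose up -d         # create and start every container in the definition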
There are many container orchestration tools available, such as Docker Machine, Docker Compose, Kubernetes, Docker Swarm, and Apache Mesos, but this chapter focuses only on Docker Swarm, Apache Mesos, and Kubernetes.

Docker Swarm

Docker Swarm is a native orchestration tool from Docker itself. It manages a pool of Docker hosts and turns them into a single virtual Docker host. Docker Swarm provides a standard Docker API to manage containers on the cluster. It's easy for users to move to Docker Swarm if they are already using Docker to manage their containers.
Docker Swarm follows a swap, plug, and play principle. This provides pluggable scheduling algorithms and broad registry and discovery backend support in the cluster. Users can use various scheduling algorithms and discovery backends as per their needs. The following diagram represents the Docker Swarm architecture:

Docker Swarm components

The following sections explain the various components in Docker Swarm.

Node

A node is an instance of a Docker host participating in the Swarm cluster. There can be one or multiple nodes in a single Swarm cluster deployment. Nodes are categorized into manager and worker nodes based on their roles in the system.

Manager node

The Swarm manager node manages the nodes in the cluster. It provides the API to manage the nodes and containers across the cluster. Manager nodes distribute units of work, also known as tasks, to the worker nodes. If there are multiple manager nodes, they elect a single leader to perform the orchestration tasks.
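In Swarm mode, for instance, cluster membership and roles can be inspected and changed from a manager with the following commands (a sketch; worker1 is an assumed node name):
$ docker node ls                 # list all nodes, their roles, and which manager is the leader
$ docker node promote worker1    # turn the node worker1 into a manager
$ docker node demote worker1     # turn it back into a worker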

Worker node

The worker node receives and executes tasks distributed by the manager nodes. By default, every manager node is also a worker node, but managers can be configured to run manager tasks exclusively. Worker nodes run an agent that keeps track of the tasks running on them and notifies the manager node about the current state of the assigned tasks.
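For example, a Swarm mode manager can be drained so that it performs manager duties exclusively and no longer accepts tasks (a sketch; manager1 is an assumed node name):
$ docker node update --availability drain manager1    # stop scheduling tasks on manager1 and reschedule its existing ones
$ docker node update --availability active manager1   # allow it to receive tasks again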

Tasks

A task is an individual Docker container together with the command to run inside it. The manager assigns tasks to worker nodes. Tasks are the smallest unit of scheduling in the cluster.
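In Swarm mode, the tasks that make up a service (services are covered in the next section) can be listed as follows; web is an assumed service name:
$ docker service ps web    # list the service's tasks, the node each one runs on, and its current state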

Services

A service is the interface to a set of Docker containers or tasks running across the Swarm cluster.
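As an illustration, the following Swarm mode commands create a service backed by three container tasks and then scale it; the service name and image are arbitrary examples:
$ docker service create --name web --replicas 3 -p 80:80 nginx:alpine   # run three tasks of the service behind port 80
$ docker service ls                                                     # list services and their replica counts
$ docker service scale web=5                                            # change the desired number of tasks to five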

Discovery service

The discovery service stores cluster state and provides node and service discoverability. Swarm has a pluggable backend architecture that supports etcd, Consul, ZooKeeper, static files, lists of IPs, and so on as discovery services.
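As a rough sketch of how this works in standalone (pre-Swarm mode) Swarm, the manager and each node are pointed at the same discovery backend URL; the Consul address and host IPs below are assumed examples:
$ docker run -d -p 4000:2375 swarm manage consul://192.168.1.10:8500                  # manager reads cluster state from Consul
$ docker run -d swarm join --advertise=192.168.1.21:2375 consul://192.168.1.10:8500   # node registers itself in the same backend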

Scheduler

The Swarm scheduler schedules tasks on the different nodes in the system. Docker Swarm comes with several built-in scheduling strategies that give users the ability to guide container placement on nodes, in order to maximize or minimize task distribution across the cluster. The random strategy, which places a task on a randomly chosen node, is also supported.
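In standalone Swarm the strategy is picked when the manager is started; spread (the default), binpack, and random are the built-in strategies. A sketch, reusing the assumed Consul backend from the previous section:
$ docker run -d -p 4000:2375 swarm manage --strategy binpack consul://192.168.1.10:8500   # pack tasks onto as few nodes as possible
# --strategy spread balances tasks across nodes; --strategy random picks a node at random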

Swarm mode

In version 1.12, Docker introduced Swarm mode, built into the Docker Engine itself. To run a cluster, the user needs to execute two commands on the Docker hosts:
To enter Swarm mode:
$ docker swarm init
To add a node to the cluster (using the join token and manager address printed by docker swarm init):
$ docker swarm join --token <worker-token> <manager-ip>:2377
Unlike standalone Swarm, Swarm mode comes with service discovery, load balancing, security, rolling updates, scaling, and so on built into the Docker Engine itself. Swarm mode makes management of the cluster easy, since it does not require any additional orchestration tool to create and manage the cluster.
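To illustrate the rolling updates and scaling mentioned above, the following Swarm mode commands (the service name and image tag are assumed examples) change a running service; the engine replaces its tasks one at a time by default:
$ docker service update --image nginx:1.25-alpine web   # rolling update to a new image
$ docker service update --replicas 5 web                # scale by changing the desired task count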

Apache Mesos

Apache Mesos is an open source, fault-tolerant cluster manager. It manages a set of nodes called slaves and offers their available computing resources to frameworks. Frameworks accept the resource offers from the master and launch tasks on the slaves. Marathon is one such framework, which runs containerized ap...
