Containers in OpenStack

eBook - ePub

Pradeep Kumar Singh, Madhuri Kumari, Vinoth Kumar Selvaraj, Felipe Monteiro, Venkatesh Loganathan

  1. 176 pages
  2. English
  3. ePUB (mobile-friendly)
  4. Available on iOS and Android

About this book

A practical book that helps readers understand how the container ecosystem and OpenStack work together.
  • Get acquainted with containerization in the private cloud
  • Learn to effectively manage and secure your containers in OpenStack
  • Practical use cases on container deployment and management using OpenStack components

Who This Book Is For
This book is targeted at cloud engineers, system administrators, or anyone on a production team who works on an OpenStack cloud. It acts as an end-to-end guide for anyone who wants to start using containerization on a private cloud. Some basic knowledge of Docker and Kubernetes will help.

What You Will Learn
  • Understand the role of containers in the OpenStack ecosystem
  • Learn about containers and the different types of container runtime tools
  • Understand containerization in OpenStack with respect to the deployment framework, platform services, application deployment, and security
  • Get skilled in using OpenStack to run your applications inside containers
  • Explore the best practices of using containers in OpenStack

In Detail
Containers are one of the most talked-about technologies of recent times. They have become increasingly popular because they are changing the way we develop, deploy, and run software applications. OpenStack has gained tremendous traction, being used by many organizations across the globe, and as containers gain popularity and become more complex, OpenStack needs to provide various infrastructure resources for containers, such as compute, network, and storage.

Containers in OpenStack answers the question: how can OpenStack keep up with the increasing challenges of container technology? You will start by getting familiar with container and OpenStack basics, so that you understand how the container ecosystem and OpenStack work together. To cover networking, managing application services, and deployment tools, the book has dedicated chapters on different OpenStack projects: Magnum, Zun, Kuryr, Murano, and Kolla.

Towards the end, you will be introduced to best practices for securing your containers and COEs on OpenStack, with an overview of using each OpenStack project for different use cases.

Style and Approach
An end-to-end guide for anyone who wants to start using containerization on a private cloud.

Information

Year: 2017
ISBN: 9781788391924

Working with Container Orchestration Engines

In this chapter, we will look at Container Orchestration Engines (COEs). Container Orchestration Engines are tools that help manage many containers running across multiple hosts.
In this chapter, we will be covering the following topics:
  • Introduction to COE
  • Docker Swarm
  • Apache Mesos
  • Kubernetes
  • Kubernetes installation
  • Kubernetes hands-on

Introduction to COE

Containers provide users with an easy way to package and run their applications. Packaging involves defining the libraries and tools that are necessary for a user's application to run. These packages, once converted to images, can be used to create and run containers. These containers can run anywhere, whether on a developer's laptop, a QA system, or a production machine, without any change in environment. Docker and other container runtime tools provide the facility to manage the life cycle of such containers.
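As a minimal illustration of this single-host life cycle, the Docker CLI commands below pull an image, start a container, and remove it again (the image and container names are only examples):
$ docker pull nginx                   # fetch the image from the registry
$ docker run -d --name web nginx      # create and start a container from the image
$ docker ps                           # list the containers running on this host
$ docker stop web                     # stop the container
$ docker rm web                       # delete the stopped container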
Using these tools, users can build and manage images, run containers, delete containers, and perform other container life cycle operations. However, these tools only manage containers on a single host. When we deploy our application across multiple containers and multiple hosts, we need some kind of automation tool. This type of automation is generally called orchestration. Orchestration tools provide a number of features, including:
  • Provisioning and managing hosts on which containers will run
  • Pulling the images from the repository and instantiating the containers
  • Managing the life cycle of containers
  • Scheduling containers on hosts based on the host's resource availability
  • Starting a new container when one dies
  • Scaling the containers to match the application's demand
  • Providing networking between containers so that they can access each other on different hosts
  • Exposing these containers as services so that they can be accessed from outside
  • Health monitoring of the containers
  • Upgrading the containers
Generally, these orchestration tools accept declarative configuration in YAML or JSON format. These definitions carry all of the information related to the containers, including images, networking, storage, and scaling. Orchestration tools use these definitions to apply the same settings and provide the same environment every time.
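For example, the sketch below writes a small Compose-format definition and asks Docker to apply it with the stack command (the file name, service name, image, and replica count are placeholders chosen for this example, and the command assumes Swarm mode, which is covered later in this chapter):
$ cat > demo-stack.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx
    ports:
      - "8080:80"      # expose the service on port 8080 of the hosts
    deploy:
      replicas: 3      # desired number of identical containers
EOF
$ docker stack deploy -c demo-stack.yml demo    # apply the declarative definition as a stack named demo
Because the definition is declarative, re-running the deploy command with the same file converges the cluster back to the same state.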
There are many container orchestration tools available, such as Docker Machine, Docker Compose, Kubernetes, Docker Swarm, and Apache Mesos, but this chapter focuses only on Docker Swarm, Apache Mesos, and Kubernetes.

Docker Swarm

Docker Swarm is a native orchestration tool from Docker itself. It manages a pool of Docker hosts and turns them into a single virtual Docker host. Docker Swarm provides a standard Docker API to manage containers on the cluster. It's easy for users to move to Docker Swarm if they are already using Docker to manage their containers.
Docker Swarm follows the swap, plug, and play principle. It provides pluggable scheduling algorithms and broad registry and discovery backend support in the cluster. Users can choose scheduling algorithms and discovery backends as per their needs. The following diagram represents the Docker Swarm architecture:

Docker Swarm components

The following sections explain the various components in Docker Swarm.

Node

A node is an instance of a Docker host participating in the Swarm cluster. There can be one or more nodes in a single Swarm cluster deployment. Nodes are categorized into manager and worker nodes based on their roles in the system.

Manager node

The Swarm manager node manages the nodes in the cluster. It provides the API to manage nodes and containers across the cluster. Manager nodes distribute units of work, also known as tasks, to worker nodes. If there are multiple manager nodes, they elect a single leader to perform orchestration tasks.

Worker node

Worker nodes receive and execute the tasks distributed by manager nodes. By default, every manager node is also a worker node, but managers can be configured to run manager tasks exclusively. Worker nodes run agents that keep track of the tasks running on them and report the current state of the assigned tasks back to the manager node.
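In a running Swarm-mode cluster, node roles can be inspected from a manager with the node listing command; the output below is only an indicative sketch:
$ docker node ls        # list the nodes in the cluster and their roles
ID            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
abc123 *      manager1   Ready    Active         Leader
def456        worker1    Ready    Active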

Tasks

A task is an individual Docker container together with the command to run inside it. The manager assigns tasks to worker nodes. Tasks are the smallest unit of scheduling in the cluster.

Services

A service is the interface to a set of Docker containers or tasks running across the Swarm cluster.
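For instance, a service with a desired number of replicas can be created, and its individual tasks listed, with the following commands (the service name, image, and ports are placeholders):
$ docker service create --name web --replicas 3 -p 8080:80 nginx   # create a service backed by 3 tasks
$ docker service ps web                                            # list the tasks (containers) of the service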

Discovery service

The discovery service stores cluster state and provides node and service discoverability. Swarm supports a pluggable backend architecture that can use etcd, Consul, ZooKeeper, static files, lists of IPs, and so on, as the discovery service.
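As a rough sketch of how a discovery backend is plugged into legacy standalone Swarm (the Consul backend, ports, and addresses here are assumptions for illustration only):
$ docker run -d -p 4000:4000 swarm manage -H :4000 --advertise <manager-ip>:4000 consul://<consul-ip>:8500   # manager registers itself in Consul
$ docker run -d swarm join --advertise=<node-ip>:2375 consul://<consul-ip>:8500                              # node advertises itself to Consul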

Scheduler

The Swarm scheduler schedules tasks onto the different nodes in the system. Docker Swarm comes with several built-in scheduling strategies that give users the ability to guide container placement on nodes so as to maximize or minimize task distribution across the cluster. Swarm also supports a random strategy, which chooses a random node to place the task on.
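In Swarm mode, placement can also be guided explicitly with scheduling constraints on a service; the sketch below restricts tasks to worker nodes (the service name and image are placeholders):
$ docker service create --name backend --constraint 'node.role==worker' nginx   # only schedule tasks on worker nodes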

Swarm mode

In version 1.12, Docker introduced Swarm mode, built into the Docker Engine itself. To set up a cluster, the user only needs two commands: one to initialize Swarm mode on the first node, and one to join each additional host to the cluster:
To enter Swarm mode:
$ docker swarm init
To add a node to the cluster:
$ docker swarm join --token <worker-token> <manager-ip>:2377
Unlike standalone Swarm, Swarm mode comes with service discovery, load balancing, security, rolling updates, scaling, and so on built into the Docker Engine itself. Swarm mode makes management of the cluster easy, since it does not require any additional orchestration tools to create and manage the cluster.
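The exact join command for additional nodes, including its token, can be printed from an existing manager; the commands below are an indicative sketch:
$ docker swarm join-token worker     # print the 'docker swarm join' command for new worker nodes
$ docker swarm join-token manager    # print the join command for additional manager nodes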

Apache Mesos

Apache Mesos is an open source, fault-tolerant cluster manager. It manages a set of nodes called slaves and offers their available computing resources to frameworks. Frameworks accept the resource offers from the master and launch tasks on the slaves. Marathon is one such framework, which runs containerized applications ...
