Docker High Performance
eBook - ePub


Complete your Docker journey by optimizing your application's workflows and performance, 2nd Edition

Allan Espinosa, Russ McKendrick

  • 174 pages
  • English
  • ePUB (mobile friendly)
  • Available on iOS & Android

About This Book

Leverage Docker to unlock efficient and rapid container deployments and improve your development workflow

Key Features

  • Reconfigure Docker hosts to create a logging system with the Elasticsearch-Logstash-Kibana (ELK) stack
  • Tackle the challenges of large-scale container deployment with this fast-paced guide
  • Benchmark the performance of your Docker containers using Apache JMeter

Book Description

Docker is an enterprise-grade container platform that allows you to build and deploy your apps. Its portable format lets you run your code anywhere, from your desktop workstation to popular cloud computing providers. This comprehensive guide will improve your Docker workflows and ensure your application's production environment runs smoothly.

This book starts with a refresher on setting up and running Docker and details the basic setup for creating a Docker Swarm cluster. You will then learn how to automate this cluster using a Chef server and cookbooks. After that, you will set up Docker monitoring with Prometheus and Grafana, and deploy the ELK stack. You will also learn some tips for optimizing Docker images.

After deploying containers with the help of Jenkins, you will then move on to a tutorial on using Apache JMeter to analyze your application's performance. You will learn how to use Docker Swarm and NGINX to load-balance your application and how common debugging tools in Linux can be used to troubleshoot Docker containers.

By the end of this book, you will be able to integrate all the optimizations that you have learned and put everything into practice in your applications.

What you will learn

  • Automate provisioning and setting up nodes in a Docker Swarm cluster
  • Configure a monitoring system with Prometheus and Grafana
  • Use Apache JMeter to create workloads for benchmarking the performance of Docker containers
  • Understand how to load-balance an application with Docker Swarm and Nginx
  • Deploy strace, tcpdump, blktrace, and other Linux debugging tools to troubleshoot containers
  • Integrate Docker optimizations for DevOps, Site Reliability Engineering, CI, and CD

Who this book is for

If you are a software developer with a good understanding of managing Docker services and the Linux file system and are looking for ways to optimize working with Docker containers, then this is the book for you. Developers fascinated by containers and workflow automation will also benefit from this book.


Information

Year: 2019
ISBN: 9781789804409
Edition: 2

Optimizing Docker Images

Docker images provide a standard package format that lets developers and system administrators work together to simplify the management of an application's code. Docker's container format allows us to rapidly iterate the versions of our application and share them with the rest of our organization. Our development, testing, and deployment times are shorter than they would otherwise be because of the lightweight nature and speed of Docker containers. The portability of Docker containers allows us to scale our applications from physical servers to virtual machines in the cloud.
However, we will start noticing that the same reasons we adopted Docker in the first place are losing their beneficial effects. Development time increases because we always have to download the newest version of our application's Docker image and its runtime libraries. Deployment takes much longer because Docker Hub is slow; at worst, Docker Hub may be down, and we won't be able to deploy at all. Our Docker images are now so big, on the order of gigabytes, that simple single-line updates take the whole day.
This chapter will cover the following scenarios in which Docker images get out of hand and suggest steps to remediate the problems mentioned earlier:
  • Reducing image deployment time
  • Improving image build time
  • Reducing Docker image size
  • Guide to optimization
After this chapter, we will have a more streamlined build and deployment workflow for our Docker images.

Reducing deployment time

As time goes by and we keep building our Docker image, its size gets bigger and bigger. Updating running containers on our existing Docker hosts is not a problem: Docker takes advantage of the image layers that we have built over time as our application grows. However, consider the case where we want to scale out our application. This requires deploying more Docker containers to additional Docker hosts, and each new Docker host will have to download all of the large image layers that we built over time. This section will show you how a large Docker application affects deployment time on new Docker hosts. First, let's build this problematic Docker application by carrying out the following steps:
  1. Write the following Dockerfile to create our large Docker image:
FROM debian:stretch
RUN dd if=/dev/urandom of=/largefile bs=1024 count=524288
  2. Next, build the Dockerfile as hubuser/largeapp using the following command:
dockerhost$ docker build -t hubuser/largeapp .
  3. Take note of how large the created Docker image is. In the following example output, the size is 637 MB:
dockerhost$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE

hubuser/largeapp latest 0e519ec6e31f 2 minutes ago 637MB
debian stretch d508d16c64cd 2 days ago 101MB
  4. Using the time command, record how long it takes to push the image to Docker Hub, as follows:
dockerhost$ time docker push hubuser/largeapp
The push refers to repository [docker.io/hubuser/largeapp]
eb6ed1590bf8: Pushed
13d5529fd232: Layer already exists
latest: digest: sha256:0ff29cc728fc8c274bc0ac43ca2815986ced89fbff35399b1c0e7d4bf31c
size: 742

real 11m34.133s
user 0m0.164s
sys 0m0.104s
  5. Let's also do the same timing test when pulling the Docker image, but let's now delete the image from our Docker daemon first, as shown in the following code:
dockerhost$ docker rmi hubuser/largeapp
dockerhost$ time docker pull hubuser/largeapp
Using default tag: latest
latest: Pulling from hubuser/largeapp
741437d97401: Pull complete
d7d6d7c9ffc5: Pull complete
Digest: sha256:0ff29cc728fc8c274bc0ac43ca2815986ced89fbff35399b1c0e7d4bf31c
Status: Downloaded newer image for hubuser/largeapp:latest
real 2m19.909s
user 0m0.707s
sys 0m0.645s
As we can see in the timing results, it takes a lot of time to use docker push to upload an image to Docker Hub. Upon deployment, docker pull also takes a significant amount of time to propagate our newly created Docker image to new production Docker hosts. These upload and download times depend heavily on the network connection between Docker Hub and our Docker hosts. Ultimately, when Docker Hub goes down, we lose the ability to deploy new Docker containers or scale out to additional Docker hosts on demand.
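As a rough sanity check, the numbers above are consistent with one another: the dd command in the Dockerfile writes 524,288 blocks of 1,024 bytes, or roughly 537 MB of incompressible random data, which together with the roughly 101 MB debian:stretch base layer accounts for the 637 MB reported by docker images. Dividing the image size by the elapsed times gives an effective throughput of roughly 0.9 MB/s for the push (637 MB in about 694 seconds) and roughly 4.5 MB/s for the pull (637 MB in about 140 seconds); your own figures will vary with the link between your Docker host and Docker Hub. To see where an image's bulk lives on a per-layer basis, you can inspect it with the following command (the exact output columns depend on your Docker version):
dockerhost$ docker history hubuser/largeapp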
In order to take advantage of Docker's fast delivery of applications and ease of deployment and scaling, it is important that our method of pushing and pulling Docker images is reliable and fast. Fortunately, we can run our own Docker registry so that it can host and distribute our Docker images without relying on the public Docker Hub. The next few steps describe how to set this up to confirm the improvement in performance:
  1. Let's run our own Docker registry by typing the following command. This starts a local registry at tcp://dockerhost:5000 (an alternative invocation that avoids host networking is shown below, after these steps):
dockerhost$ docker run --network=host -d registry:2
  2. Next, let's confirm how our Docker image deployments have improved. First, create a tag for the image we created earlier in order to push it to the local Docker registry using the following:
dockerhost$ docker tag hubuser/largeapp dockerhost:5000/largeapp
  3. Observe how much faster it is to push the same Docker image to our newly running Docker registry. The following test shows that pushing Docker images is now at least 10 times faster:
dockerhost$ time docker push dockerhost:5000/largeapp
The push refers to repository [dockerhost:5000/largeapp]
latest: digest: sha256:0ff29cc728fc8c274bc0ac43ca2815986ced89fbff35399b1c0e7d4bf... size: 742
real 0m46.816s
user 0m0.090s
sys 0m0.064s
  4. Now, confirm the new performance when we pull our Docker images from our local Docker registry. The following test shows that downloading Docker images is now almost an order of magnitude faster:
# make sure we removed the image we built
dockerhost$ docker rmi dockerhost:5000/largeapp hubuser/largeapp

dockerhost$ time docker pull dockerhost:5000/largeapp
Using default tag: latest
latest: Pulling from largeapp
Digest: sha256:0ff29cc728fc8c274bc0ac43ca2815986ced89fbff35399b1c0e7d4bf31c38e1
Status: Downloaded newer image for dockerhost:5000/largeapp:latest
real 0m16.328s
user 0m0.039s
sys 0m0.100s
The main cause of these improvements is that we uploaded and downloaded the same images within our local network. We saved on the bandwidth of our Docker hosts, and our deployment time became shorter. Best of all, we no longer have to rely on the availability of Docker Hub in order to deploy.
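If host networking is not available in your environment (for example, when using Docker Desktop on macOS or Windows), a roughly equivalent setup, sketched here under that assumption, publishes port 5000 explicitly and keeps the registry's storage in a named volume so that pushed images survive container restarts; the volume name registry-data is only an example:
dockerhost$ docker run -d -p 5000:5000 --restart=always --name registry \
    -v registry-data:/var/lib/registry registry:2
Either way, the registry listens on port 5000, so the dockerhost:5000/largeapp image name used in the preceding steps works unchanged.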
In order to deploy our Docker images to other Docker hosts, we need to set up security for our Docker registry. The specifics are outside the scope of this book; more information on deploying and securing a Docker registry is available at https://docs.docker.com/registry/deploying.
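As a minimal sketch of what that guide covers, assuming you already have a TLS certificate and key for your registry's hostname saved in a local certs/ directory as domain.crt and domain.key, the registry can be started with TLS enabled through its environment-variable configuration:
dockerhost$ docker run -d --restart=always --name registry \
    -v "$(pwd)"/certs:/certs \
    -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
    -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
    -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
    -p 443:443 \
    registry:2
For quick experiments on a trusted network, a registry can instead be listed under insecure-registries in each client daemon's /etc/docker/daemon.json, but that trades away the transport security the guide recommends.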

Improving image build time

Docker images are the main artifacts that developers spend most of their time working on. The simplicity of Dockerfiles and the speed of container technology al...
