Distributed Computing with Go

V.N. Nikhil Anurag

  1. 246 pages
  2. English
  3. ePUB (mobile friendly)

About This Book

A tutorial leading the aspiring Go developer to full mastery of Golang's distributed features.
  • This book provides enough concurrency theory to give you a contextual understanding of Go concurrency
  • It gives weight to synchronous and asynchronous data streams in Golang web applications
  • It makes Goroutines and Channels completely familiar and natural to Go developers

Who This Book Is For

This book is for developers who are familiar with the Golang syntax and have a good idea of how basic Go development works. It would be advantageous if you have been through a web application product cycle, although it's not necessary.

What You Will Learn

  • Gain proficiency with concurrency and parallelism in Go
  • Learn how to test your application using Go's standard library
  • Learn industry best practices with technologies such as REST, OpenAPI, Docker, and so on
  • Design and build a distributed search engine
  • Learn strategies on how to design a system for web scale

In Detail

Distributed Computing with Go gives developers who already have a good idea of how basic Go development works the tools to fulfill the true potential of Golang development in a world of concurrent web and cloud applications. Nikhil starts out by setting up a professional Go development environment. Then you'll learn the basic concepts and practices of Golang concurrent and parallel development. In the next few chapters, you'll find out how to balance resources and data with REST and standard web approaches while keeping concurrency in mind. Most Go applications these days will run in a data center or on the cloud, an assumption the following chapters build on. There, you'll expand your skills considerably by writing a distributed document indexing system over the course of two chapters. This system has to balance a large corpus of documents with considerable analytical demands. Another use case is the way in which a web application written in Go can be consciously redesigned to take distributed features into account. That chapter is particularly interesting for Go developers who have to migrate existing Go applications to computationally and memory-intensive environments. The final chapter covers the rather onerous task of testing parallel and distributed applications, something that is not usually taught in standard computer science curricula.

Style and approach

Distributed Computing with Go takes you through a series of carefully graded tutorials, building ever more sophisticated applications.

Frequently asked questions

How do I cancel my subscription?
Simply head over to the account section in settings and click on “Cancel Subscription” - it’s as simple as that. After you cancel, your membership will stay active for the remainder of the time you’ve paid for. Learn more here.
Can/how do I download books?
At the moment all of our mobile-responsive ePub books are available to download via the app. Most of our PDFs are also available to download and we're working on making the final remaining ones downloadable now. Learn more here.
What is the difference between the pricing plans?
Both plans give you full access to the library and all of Perlego’s features. The only differences are the price and subscription period: With the annual plan you’ll save around 30% compared to 12 months on the monthly plan.
What is Perlego?
We are an online textbook subscription service, where you can get access to an entire online library for less than the price of a single book per month. With over 1 million books across 1000+ topics, we’ve got you covered! Learn more here.
Do you support text-to-speech?
Look out for the read-aloud symbol on your next book to see if you can listen to it. The read-aloud tool reads text aloud for you, highlighting the text as it is being read. You can pause it, speed it up and slow it down. Learn more here.
Is Distributed Computing with Go an online PDF/ePUB?
Yes, you can access Distributed Computing with Go by V.N. Nikhil Anurag in PDF and/or ePUB format, as well as other popular books in Ciencia de la computación & Programación. We have over one million books available in our catalogue for you to explore.

Information

Year: 2018
ISBN: 9781787127708

Foundations of Web Scale Architecture

Chapter 5, Introducing Goophr, Chapter 6, Goophr Concierge, and Chapter 7, Goophr Librarian were about the design and implementation of a distributed search index system, starting from basic concepts to running individual components and verifying that they work as expected. In Chapter 8, Deploying Goophr, we connected the various components with the help of docker-compose so that we could launch and connect all the components in an easy and reliable manner. We have achieved quite a lot in the past four chapters, but you may have noticed that we ran everything on a single machine, most likely our laptop or desktop.
Ideally, we should next try to prepare our distributed system to work reliably under a heavy user load and expose it over the web for general use. However, the reality is that we will have to make a lot of upgrades to our current system before it is reliable and resilient enough to handle real-world traffic.
In this chapter, we are going to look at various factors we should keep in mind while we try to design for the web. We will be looking at:
  • Scaling a web application
  • Monolith app versus microservices
  • Deployment options

Scaling a web application

In this chapter, we will not be discussing Goophr but instead a simple web application for blogging so that we can concentrate on scaling it for the web. Such an application may consist of a single server instance running the database and the blog server.
Scaling a web application is an intricate topic, and we will devote a lot of time to this very subject. As we shall see throughout this section, there are multiple ways to scale a system:
  • Scaling the system as a whole
  • Splitting up the system and scaling individual components
  • Choosing specific solutions to better scale the system
Let's start with the most basic setup, a single server instance.

The single server instance

A single server setup will generally consist of:
  • A web server to serve web pages and handle server-side logic
  • A database to save all user data (blog posts, user login details, and so on) related to the blog
The following figure shows what such a server would look like:
The figure shows a simple setup where the user interacts with the blog server, which will be interacting with a database internally. This setup of a database and blog server on the same instance will be efficient and responsive only up to a certain number of users.
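To make the setup concrete, here is a minimal sketch of such a single-instance blog server in Go. The handler, the in-memory map standing in for the database, and the port are illustrative assumptions rather than the book's actual code; the point is simply that the web layer and the data live on the same machine.

package main

import (
    "fmt"
    "log"
    "net/http"
    "sync"
)

// posts stands in for the database; in the single-instance setup the
// "database" and the blog server share the same machine (here, even the
// same process).
var (
    mu    sync.RWMutex
    posts = map[string]string{"hello-world": "Our very first blog post."}
)

// listPosts writes every stored post back to the client.
func listPosts(w http.ResponseWriter, r *http.Request) {
    mu.RLock()
    defer mu.RUnlock()
    for slug, body := range posts {
        fmt.Fprintf(w, "%s: %s\n", slug, body)
    }
}

func main() {
    http.HandleFunc("/posts", listPosts)
    log.Println("blog server (and its data) listening on :8080")
    log.Fatal(http.ListenAndServe(":8080", nil))
}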
As the system starts to slow down or storage starts to fill up, we can redeploy our application (database and blog server) onto a different server instance with more storage, RAM, and CPU power; this is known as vertical scaling. As you may suspect, this can be a time-consuming and inconvenient way of upgrading your server. Wouldn't it be better if we could stave off this upgrade for as long as possible?
An important point to think about is that the issue might be due to any combination of the following factors:
  • Out of memory due to the database or blog server
  • Performance degradation due to the web server or database requiring more CPU cycles
  • Out of storage space due to the database
Scaling the complete application for any of the preceding factors isn't an optimal way to deal with the issue because we are spending a lot of money where we could have solved the issue with far fewer resources! So how should we fashion our system so that we can solve the right problem in the right manner?

Separate layers for the web and database

If we take the three issues stated earlier, we can solve each of them in one or two ways. Let's look at them first:
Issue #1: Out of memory
Solution:
  • Due to the database: Increase RAM for the database
  • Due to the blog server: Increase RAM for the blog server
Issue #2: Performance degradation
Solution:
  • Due to the database: Increase the CPU power for the database
  • Due to the blog server: Increase the CPU power for the blog server
Issue #3: Out of storage space
Solution:
  • Due to the database: Increase the storage space for the database
Using this listing, we can upgrade our system as and when required for a particular problem we are facing. However, we first need to correctly identify the component that is causing the issue. For this reason, even before we start scaling our application vertically, we should separate our database from our web server as shown in this figure:
This new setup, with the database and the blog server on separate server instances, enables us to monitor which component is having an issue and to vertically scale only that particular component. We should be able to serve a larger volume of user traffic with this new setup.
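With the database on its own instance, the blog server only needs to know the database's address. A rough sketch of what that might look like in Go follows; the BLOG_DB_DSN environment variable, the choice of the lib/pq Postgres driver, and the /healthz endpoint are assumptions made for illustration, not the book's implementation.

package main

import (
    "database/sql"
    "log"
    "net/http"
    "os"

    _ "github.com/lib/pq" // assumed driver; any database/sql driver would do
)

func main() {
    // The database now lives on a separate instance, so its address becomes
    // configuration, e.g. BLOG_DB_DSN=postgres://blog:secret@db.internal:5432/blog
    dsn := os.Getenv("BLOG_DB_DSN")
    db, err := sql.Open("postgres", dsn)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // A health endpoint that pings the database makes it obvious which
    // component (web server or database) is the one in trouble.
    http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
        if err := db.Ping(); err != nil {
            http.Error(w, "database unreachable: "+err.Error(), http.StatusServiceUnavailable)
            return
        }
        w.Write([]byte("ok"))
    })

    log.Fatal(http.ListenAndServe(":8080", nil))
}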
However, as the load on the server increases, we might have other issues on our hands. For example, what would happen if our blog server were to become unresponsive? We would no longer be able to serve blog posts and no one would be able to post comments on said blog posts. This is a situation no one wants to face. Wouldn't it be nice if we could keep serving traffic even if the blog server were down?

Multiple server instances

Serving a large volume of user traffic with a single instance of our blog server, or of any application (business logic) server, is dangerous because we are essentially creating a single point of failure. The most logical and simplest way to avoid such a situation is to run duplicate instances of our blog server to handle incoming user traffic. This approach of scaling a single server out to multiple instances is known as horizontal scaling. However, this raises the question: how can we reliably distribute the traffic between the various instances of our blog server? For this we use a load balancer.
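As a sketch of what such duplicate instances could look like, each copy of the blog server below is identical and differs only in the port it listens on, taken from a hypothetical PORT environment variable (an assumption for illustration, not something the book prescribes).

package main

import (
    "fmt"
    "log"
    "net/http"
    "os"
)

func main() {
    // Every instance runs the same binary; only the listening address
    // differs, so several copies can run side by side.
    port := os.Getenv("PORT")
    if port == "" {
        port = "9091"
    }
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "served by blog server instance on port %s\n", port)
    })
    log.Fatal(http.ListenAndServe(":"+port, nil))
}

Running the (hypothetically named) binary twice, once with PORT=9091 and once with PORT=9092, gives us two instances ready to sit behind a load balancer.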

The load balancer

A load balancer is a type of HTTP server responsible for distributing traffic (routing) to various web servers based on the rules defined by the devel...
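The idea can be sketched with Go's standard library alone: a reverse proxy that forwards each request to the next backend in round-robin order. The backend addresses and the round-robin rule below are illustrative assumptions, not the book's implementation; production load balancers offer far richer routing rules.

package main

import (
    "log"
    "net/http"
    "net/http/httputil"
    "net/url"
    "sync/atomic"
)

// backends lists the blog server instances behind the balancer; the
// addresses are placeholders for wherever the instances actually run.
var backends = []string{
    "http://localhost:9091",
    "http://localhost:9092",
}

func main() {
    proxies := make([]*httputil.ReverseProxy, len(backends))
    for i, b := range backends {
        u, err := url.Parse(b)
        if err != nil {
            log.Fatal(err)
        }
        proxies[i] = httputil.NewSingleHostReverseProxy(u)
    }

    var next uint64
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        // Route each incoming request to the next backend in round-robin order.
        i := atomic.AddUint64(&next, 1) % uint64(len(proxies))
        proxies[i].ServeHTTP(w, r)
    })

    log.Println("load balancer listening on :8080")
    log.Fatal(http.ListenAndServe(":8080", nil))
}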
