Section 1: Getting Familiar with Deno
In this section, you will get to know what Deno is, why it was created, and how it was created. This section will help you set up the environment and get familiar with the ecosystem and available tooling.
This section contains the following chapters:
- Chapter 1, What Is Deno?
- Chapter 2, The Toolchain
- Chapter 3, The Runtime and Standard Library
Chapter 1: What Is Deno?
Deno is a secure runtime for JavaScript and TypeScript. You are probably feeling that excitement of experimenting with a new tool. You have worked with JavaScript or TypeScript and have at least heard of Node.js. Deno will feel like it has just the right amount of novelty while, at the same time, offering plenty that will sound familiar to anyone working in the ecosystem.
Before we start getting our hands dirty, we'll look at how Deno was created and at the motivations behind it. Doing so will help us learn and understand it better.
We'll be focusing on practical examples throughout this book. We'll be writing code and then rationalizing and explaining the underlying decisions we've made. If you come from a Node.js background, some of the concepts might sound familiar to you. We will also explain Deno and compare it with its ancestor, Node.js.
Once the fundamentals are in place, we'll dive into Deno and explore its runtime features by building small utilities and real-world applications.
Without Node, there would be no Deno. To understand the latter well, we can't ignore its 10+ year-old ancestor, which is what we'll look at in this chapter. We'll explain the reasons for its creation back in 2009 and the pain points that were detected after a decade of usage.
After that, we'll present Deno and the fundamental differences and challenges it proposes to solve. We'll have a look at its architecture, some principles and influences of the runtime, and the use cases where it shines.
After understanding how Deno came to life, we will explore its ecosystem, its standard library, and some use cases where Deno is instrumental.
Once you've read this chapter, you'll be aware of what Deno is and what it is not, why it is not the next version of Node.js, and what to think about when you're considering Deno for your next project.
In this chapter, we'll cover the following topics:
- A little history
- Why Deno?
- Architecture and technologies that support Deno
- Grasping Deno's limitations
- Exploring Deno's use cases
Let's get started!
A little history
Deno's first stable version, v1.0.0, was launched on May 13, 2020.
The first time Ryan Dahl – Node.js' creator – mentioned it was in his famous talk, 10 Things I Regret About Node.js (https://youtu.be/M3BM9TB-8yA). Apart from presenting the very first alpha version of Deno, it is a talk worth watching as a lesson on how software ages. It is an excellent reflection on how decisions evolve, even when they're made by some of the smartest people in the open source community, and how they can end up in a different place than initially planned.
Since its launch in May 2020, and due to its historical background, its core team, and its appeal to the JavaScript community, Deno has been getting lots of attention. That's probably one of the ways you've heard about it, be it via blog posts, tweets, or conference talks.
This enthusiasm is having positive consequences for the runtime, with lots of people wanting to contribute to it and use it. The community is growing, as its Discord channel (https://discord.gg/deno) and the number of pull requests on Deno's repositories (https://github.com/denoland) show. Deno is currently evolving at a cadence of one minor version per month, with lots of bug fixes and improvements being shipped. The roadmap shows a vision for a future that is no less exciting than the present. With a well-defined path and set of principles, Deno has everything it takes to become more significant by the day.
Let's rewind a little and go back to 2009 and the creation of Node.js.
At the time, Ryan started by questioning how most backend languages and frameworks dealt with I/O (input/output). Most tools treated I/O as a synchronous operation, blocking the process until the operation finished and only then continuing to execute code.
Fundamentally, it was this synchronous blocking operation that Ryan questioned.
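To make the contrast concrete, here is a small sketch in TypeScript. The file names and read functions are made up for illustration and are not real runtime APIs; the point is only the difference between waiting on each operation and letting the runtime resume our code when results arrive:

```typescript
// Blocking style: each call occupies the thread until the I/O finishes,
// so the second read cannot even start until the first one completes.
function readBothBlocking(read: (path: string) => string): string[] {
  const a = read("a.txt"); // the thread sits idle while waiting here
  const b = read("b.txt"); // starts only after a.txt is fully read
  return [a, b];
}

// Non-blocking style: both reads are started immediately; the runtime
// resumes our code when results arrive, leaving the thread free meanwhile.
async function readBothNonBlocking(
  read: (path: string) => Promise<string>,
): Promise<string[]> {
  return Promise.all([read("a.txt"), read("b.txt")]); // both in flight at once
}

// Simulated I/O (hypothetical, so the sketch is self-contained).
const files: Record<string, string> = { "a.txt": "A", "b.txt": "B" };
const syncRead = (path: string): string => files[path];
const asyncRead = (path: string): Promise<string> => Promise.resolve(files[path]);

const blockingResult = readBothBlocking(syncRead); // ["A", "B"]
readBothNonBlocking(asyncRead).then((nonBlockingResult) => {
  // nonBlockingResult is also ["A", "B"], but the thread was never blocked
  console.log(blockingResult, nonBlockingResult);
});
```

Both versions produce the same data; the difference is what the thread is allowed to do while the I/O is in flight.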
Handling I/O
When you are writing servers that must deal with thousands of requests per second, resource consumption and speed are two significant factors.
For such resource-critical projects, it is important that the base tools – the primitives – have an architecture that is accounting for this. When the time to scale arises, it helps that the fundamental decisions you made at the beginning support that.
Web servers are one of those cases. The web is a significant platform in today's world. It never stops growing, with more devices and new tools accessing the internet daily, making it accessible to more people. The web is the common, democratized, decentralized ground for people around the world. With this in mind, the servers behind those applications and websites need to handle giant loads. Web applications such as Twitter, Facebook, and Reddit, among many others, deal with thousands of requests per minute. So, scale is essential.
To kickstart a conversation about performance and resource efficiency, let's look at the following graph, which compares two of the most used open source web servers, Apache and Nginx:
Figure 1.1 – Requests per second versus concurrent connections – Nginx versus Apache
At first glance, this tells us that Nginx comes out on top pretty much every time. We can also see that, as the number of concurrent connections increases, Apache's number of requests per second decreases. Comparatively, Nginx keeps its requests per second fairly stable, despite also showing an expected drop as the number of connections grows. After reaching a thousand concurrent connections, Nginx serves close to double the number of requests per second that Apache does.
Let's look at a comparison of their memory (RAM) consumption:
Figure 1.2 – Memory consumption versus concurrent connections – Nginx versus Apache
Apache's memory consumption grows linearly with the number of concurrent connections, while Nginx's memory footprint is constant.
You might already be wondering why this happens.
This happens because Apache and Nginx have very different ways of dealing with concurrent connections. Apache spawns a new thread per request, while Nginx uses an event loop.
In a thread-per-request architecture, the server creates a thread every time a new request comes in. That thread is responsible for handling the request until it finishes. If another request comes in while the previous one is still being handled, yet another thread is created.
On top of this, handling networking in threaded environments is not known for being particularly easy. You can run into file and resource locking, thread communication issues, and common problems such as deadlocks. Adding to the difficulties presented to the developer, using threads does not come for free, as threads themselves carry a resource overhead.
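The event loop model that Nginx (and later Node.js) adopted can be sketched in a few lines of TypeScript. The `EventLoop` class and event names below are purely illustrative, not an actual runtime implementation; the point is that a single thread drains a queue of completed events, so no locks or thread coordination are needed:

```typescript
// Each queued event pairs a name with the callback to run when it is processed.
type LoopEvent = { name: string; callback: (name: string) => void };

// A minimal, illustrative event loop: one thread repeatedly pulls
// events off a queue and runs their callbacks, one at a time.
class EventLoop {
  private queue: LoopEvent[] = [];

  // Completed I/O is enqueued rather than handled on a freshly spawned thread.
  enqueue(name: string, callback: (name: string) => void): void {
    this.queue.push({ name, callback });
  }

  // A single thread drains the queue; callbacks never run concurrently,
  // so there are no deadlocks or cross-thread locking issues to reason about.
  run(): void {
    while (this.queue.length > 0) {
      const event = this.queue.shift()!;
      event.callback(event.name);
    }
  }
}

const handled: string[] = [];
const loop = new EventLoop();
loop.enqueue("request-1", (name) => handled.push(name));
loop.enqueue("request-2", (name) => handled.push(name));
loop.run();
// handled is now ["request-1", "request-2"]
```

Because everything runs on one thread, memory usage does not grow with each connection the way it does when a thread is spawned per request, which matches the Nginx behavior seen in Figure 1.2.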
In contrast, in an event loop architecture, everything happens on a single thread. This decision dramatically simplifies the lives of developers. You do not have to account for the factors mentioned previously, which means more time to de...