Network Programming with Rust

Abhishek Chanda
About This Book

Learn to write servers and network clients using Rust's low-level socket classes with this guide.

About This Book
  • Build a solid foundation in Rust while also mastering important network programming details
  • Leverage the power of a number of available libraries to perform network operations in Rust
  • Develop a fully functional web server to gain the skills you need, fast

Who This Book Is For
This book is for software developers who want to write networking software with Rust. A basic familiarity with networking concepts is assumed. Beginner-level knowledge of Rust will help but is not necessary.

What You Will Learn
  • Appreciate why networking is important in implementing distributed systems
  • Write a non-asynchronous echo server over TCP that talks to a client over a network
  • Parse JSON and binary data using parser combinators such as nom
  • Write an HTTP client that talks to the server using reqwest
  • Modify an existing Rust HTTP server and add SSL to it
  • Master asynchronous programming support in Rust
  • Use external packages in a Rust project

In Detail
Rust is low-level enough to provide fine-grained control over memory while providing safety through compile-time validation. This makes it uniquely suitable for writing low-level networking applications.

This book is divided into three main parts that will take you on an exciting journey of building a fully functional web server. The book starts with a solid introduction to Rust and essential networking concepts, which lays the foundation for, and sets the tone of, the entire book. In the second part, we will take an in-depth look at using Rust for networking software, from client-server networking using sockets to IPv4/v6, DNS, TCP, and UDP. You will also learn about serializing and deserializing data using serde, and the book shows how to communicate with REST servers over HTTP. The final part of the book discusses asynchronous network programming using the Tokio stack. Given the importance of security for modern systems, you will see how Rust supports common primitives such as TLS and public-key cryptography.

After reading this book, you will be more than confident enough to use Rust to build effective networking software.

Style and approach
This book will get you started with building networking software in Rust by taking you through all the essential concepts.

Information

Year: 2018
ISBN: 9781788621717
Edition: 1

Asynchronous Network Programming Using Tokio

In a sequential programming model, code is always executed in the order dictated by the semantics of the programming language. Thus, if one operation blocks for some reason (waiting for a resource, and so forth), the whole execution blocks and can only move forward once that operation has completed. This often leads to poor utilization of resources, because the main thread will be busy waiting on one operation. In GUI apps, this also leads to poor user interactivity, because the main thread, which is responsible for managing the GUI, is busy waiting for something else. This is a major problem in our specific case of network programming, as we often need to wait for data to be available on a socket. In the past, we worked around these issues using multiple threads. In that model, we delegated a costly operation to a background thread, making the main thread free for user interaction, or some other task. In contrast, an asynchronous model of programming dictates that no operation should ever block. Instead, there should be a mechanism to check whether they have completed from the main thread. But how do we achieve this? A simple way would be to run each operation in its own thread, and then to join on all of those threads. In practice, this is troublesome owing to the large number of potential threads and coordination between them.
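To make that contrast concrete, a thread-per-operation workaround might look roughly like the following sketch, which uses only the standard library and a made-up placeholder for the blocking work:

use std::thread;

// A made-up stand-in for a blocking operation such as a socket read.
fn expensive_operation(id: u32) -> u32 {
    id * 2
}

fn main() {
    // Run each operation in its own thread...
    let handles: Vec<_> = (0..4)
        .map(|id| thread::spawn(move || expensive_operation(id)))
        .collect();

    // ...and then join on all of them to collect the results.
    for handle in handles {
        println!("Result: {}", handle.join().unwrap());
    }
}

Each blocked operation now ties up a whole OS thread, which is exactly the cost the asynchronous model tries to avoid.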
Rust provides a few crates that support asynchronous programming using a futures-based, event-loop-driven model. We will study that model in detail in this chapter. Here are the topics we will cover:
  • Futures abstraction in Rust
  • Asynchronous programming using the tokio stack

Looking into the Future

The backbone of Rust's asynchronous programming story is the futures crate. This crate provides a construct called a future. This is essentially a placeholder for the result of an operation. As you would expect, the result of an operation can be in one of two states—either the operation is still in progress and the result is not available yet, or the operation has finished and the result is available. Note that in the second case, there might have been an error, making the result immaterial.
The library provides a trait called Future (among other things), which any type can implement in order to act as a future. This is how the trait looks:
trait Future {
    type Item;
    type Error;
    fn poll(&mut self) -> Poll<Self::Item, Self::Error>;
    ...
}
Here, Item refers to the type of the returned result on successful completion of the operation, and Error is the type that is returned if the operation fails. An implementation must specify those and also implement the poll method that gets the current state of the computation. If it has already finished, the result will be returned. If not, the future will register that the current task is interested in the outcome of the given operation. This function returns a Poll, which looks like this:
type Poll<T, E> = Result<Async<T>, E>;
Poll is an alias for a Result over another type called Async (paired with the given error type), which is defined as follows:
pub enum Async<T> {
    Ready(T),
    NotReady,
}
Async, in turn, is an enum that is either Ready(T) or NotReady; these two variants correspond to the state of the underlying operation. Thus, the poll function can return three possible states:
  • Ok(Async::Ready(result)) when the operation has completed successfully and the result is in the inner variable called result.
  • Ok(Async::NotReady) when the operation has not completed yet and a result is not available. Note that this does not indicate an error condition.
  • Err(e) when the operation ran into an error. No result is available in this case.
It is easy to see that a Future is essentially a Result that might still be in the process of being produced. If we remove the possibility that the Result is not ready yet, the only two options left are the Ok and Err cases, which correspond exactly to a plain Result.
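To see how a caller would handle these three outcomes, here is a small sketch against the futures 0.1 API. An executor normally drives poll for us; we only call it by hand here to make the states visible:

extern crate futures;

use futures::{Async, Future};

fn main() {
    // future::ok produces a future that is immediately ready with a value.
    let mut fut = futures::future::ok::<u32, ()>(42);

    // Calling poll by hand (instead of letting an executor do it) exposes
    // the three possible outcomes described above.
    match fut.poll() {
        Ok(Async::Ready(value)) => println!("Ready with {}", value),
        Ok(Async::NotReady) => println!("Not ready yet"),
        Err(_) => println!("The operation failed"),
    }
}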
Thus, a Future can represent anything that takes a non-trivial amount of time to complete. This can be a networking event, a disk read, and so on. Now, the most common question at this point is: how do we return a future from a given function? There are a few ways of doing that. Let us look at an example here. The project setup is the same as it always is.
$ cargo new --bin futures-example
We will need to add some libraries to our Cargo.toml, which will look like this:
[package]
name = "futures-example"
version = "0.1.0"
authors = ["Foo<[email protected]>"]

[dependencies]
futures = "0.1.17"
futures-cpupool = "0.1.7"
In our main file, we set up everything as usual. We are interested in finding out whether a given integer is prime, and this will represent the part of our operation that takes some time to complete. We have two functions that do exactly that, using two different styles of returning futures, as we will see later. In practice, the naive way of primality testing did not turn out to be slow enough to make a good example, so we had to sleep for a random amount of time to simulate slowness.
// ch7/futures-example/src/main.rs

#![feature(conservative_impl_trait)]
extern crate futures;
extern crate futures_cpupool;

use std::io;
use futures::Future;
use futures_cpupool::CpuPool;

// This implementation returns a boxed future
fn check_prime_boxed(n: u64) -> Box<Future<Item = bool, Error = io::Error>> {
    for i in 2..n {
        if n % i == 0 {
            return Box::new(futures::future::ok(false));
        }
    }
    Box::new(futures::future::ok(true))
}

// This returns a future using impl trait
fn check_prime_impl_trait(n: u64) -> impl Future<Item = bool, Error = io::Error> {
    for i in 2..n {
        if n % i == 0 {
            return futures::future::ok(false);
        }
    }
    futures::future::ok(true)
}

// This does not return a future
fn check_prime(n: u64) -> bool {
    for i in 2..n {
        if n % i == 0 {
            return false;
        }
    }
    true
}

fn main() {
    let input: u64 = 58466453;
    println!("Right before first call");
    let res_one = check_prime_boxed(input);
    println!("Called check_prime_boxed");
    let res_two = check_prime_impl_trait(input);
    println!("Called check_prime_impl_trait");
    println!(
        "Results are {} and {}",
        res_one.wait().unwrap(),
        res_two.wait().unwrap()
    );

    let thread_pool = CpuPool::new(4);
    let res_three = thread_pool.spawn_fn(move || {
        let temp = check_prime(input);
        let result: Result<bool, ()> = Ok(temp);
        result
    });
    println!("Called check_prime in another thread");
    println!("Result from the last call: {}", res_three.wait().unwrap());
}
There are a few major ways of returning futures. The first one is to use a trait object, as done in check_prime_boxed. Box is a pointer type pointing to an object on the heap; it is a managed pointer in the sense that the object will be automatically cleaned up when it goes out of scope. The return type of the function is a trait object, which can represent any future that has its Item set to bool and its Error set to io::Error. This therefore involves dynamic dispatch.

The second way of returning a future is to use the impl trait feature, which is what we do in check_prime_impl_trait. We say that the function returns some type that implements Future<Item=bool, Error=io::Error>, and since any type that implements the Future trait is a future, our function is returning a future. Note that in this case we do not need to box the result before returning it, so an advantage of this approach is that no allocation is necessary for returning the future. Both of our functions use the future::ok function to signal that the computation has finished successfully with the given result.

Another option is to not return a future at all and to let the futures-based thread pool crate do the heavy lifting of creating and managing the future. This is the case with check_prime, which just returns a bool. In our main function, we set up a thread pool using the futures-cpupool crate and run the last function in that pool. We get back a future on which we can call wait to get the result. A totally different option for achieving the same goal is to return a custom type that implements the Future trait...
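As a rough illustration of that last option, a hand-written future built against the futures 0.1 API might look something like the following sketch (PrimeFuture and its logic here are hypothetical, not the book's implementation):

extern crate futures;

use std::io;
use futures::{Async, Future, Poll};

// A hypothetical hand-written future that wraps an already computed
// primality result.
struct PrimeFuture {
    result: Option<bool>,
}

impl Future for PrimeFuture {
    type Item = bool;
    type Error = io::Error;

    fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
        match self.result.take() {
            // The answer is available, so hand it to the caller.
            Some(is_prime) => Ok(Async::Ready(is_prime)),
            // Nothing to report yet; a real implementation would arrange
            // to be notified when the result becomes available.
            None => Ok(Async::NotReady),
        }
    }
}

fn check_prime_custom(n: u64) -> PrimeFuture {
    let is_prime = (2..n).all(|i| n % i != 0);
    PrimeFuture { result: Some(is_prime) }
}

fn main() {
    let fut = check_prime_custom(13);
    println!("Is 13 prime? {}", fut.wait().unwrap());
}

Calling wait drives poll until the future resolves, just as it does for the futures returned by the other approaches.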
