Network Programming with Rust

Abhishek Chanda

About the Book

Learn to write servers and network clients using Rust's low-level socket classes with this guide.

About This Book
  • Build a solid foundation in Rust while also mastering important network programming details
  • Leverage the power of a number of available libraries to perform network operations in Rust
  • Develop a fully functional web server to gain the skills you need, fast

Who This Book Is For
This book is for software developers who want to write networking software with Rust. A basic familiarity with networking concepts is assumed. Beginner-level knowledge of Rust will help but is not necessary.

What You Will Learn
  • Appreciate why networking is important in implementing distributed systems
  • Write a non-asynchronous echo server over TCP that talks to a client over a network
  • Parse JSON and binary data using parser combinators such as nom
  • Write an HTTP client that talks to the server using reqwest
  • Modify an existing Rust HTTP server and add SSL to it
  • Master asynchronous programming support in Rust
  • Use external packages in a Rust project

In Detail
Rust is low-level enough to provide fine-grained control over memory while providing safety through compile-time validation. This makes it uniquely suitable for writing low-level networking applications.

This book is divided into three main parts that will take you on an exciting journey of building a fully functional web server. The book starts with a solid introduction to Rust and essential networking concepts. This will lay a foundation for, and set the tone of, the entire book. In the second part, we will take an in-depth look at using Rust for networking software. You will move from client-server networking using sockets to IPv4/v6, DNS, TCP, and UDP, and you will also learn about serializing and deserializing data using serde. The book shows how to communicate with REST servers over HTTP. The final part of the book discusses asynchronous network programming using the Tokio stack. Given the importance of security for modern systems, you will see how Rust supports common primitives such as TLS and public-key cryptography.

After reading this book, you will be more than confident enough to use Rust to build effective networking software.

Style and Approach
This book will get you started with building networking software in Rust by taking you through all the essential concepts.

Book Details

Year: 2018
ISBN: 9781788621717
Edition: 1
Category: Programming

Asynchronous Network Programming Using Tokio

In a sequential programming model, code is always executed in the order dictated by the semantics of the programming language. Thus, if one operation blocks for some reason (waiting for a resource, and so forth), the whole execution blocks and can only move forward once that operation has completed. This often leads to poor utilization of resources, because the main thread is busy waiting on one operation. In GUI apps, this also leads to poor user interactivity, because the main thread, which is responsible for managing the GUI, is busy waiting for something else. This is a major problem in our specific case of network programming, as we often need to wait for data to be available on a socket. In the past, we worked around these issues using multiple threads: we delegated a costly operation to a background thread, leaving the main thread free for user interaction or some other task. In contrast, an asynchronous model of programming dictates that no operation should ever block. Instead, there should be a mechanism to check from the main thread whether an operation has completed. But how do we achieve this? A simple way would be to run each operation in its own thread and then join on all of those threads. In practice, this is troublesome owing to the large number of potential threads and the coordination required between them.
Rust provides a few crates that support asynchronous programming using a futures-based, event loop-driven model. We will study that in detail in this chapter. These are the topics we will cover:
  • Futures abstraction in Rust
  • Asynchronous programming using the tokio stack

Looking into the Future

The backbone of Rust's asynchronous programming story is the futures crate. This crate provides a construct called a future. This is essentially a placeholder for the result of an operation. As you would expect, the result of an operation can be in one of two states—either the operation is still in progress and the result is not available yet, or the operation has finished and the result is available. Note that in the second case, there might have been an error, making the result immaterial.
The library provides a trait called Future (among other things), which any type can implement in order to act like a future. This is how the trait looks:
trait Future {
    type Item;
    type Error;
    fn poll(&mut self) -> Poll<Self::Item, Self::Error>;
    ...
}
Here, Item refers to the type of the result returned on successful completion of the operation, and Error is the type that is returned if the operation fails. An implementation must specify those two associated types and also implement the poll method, which gets the current state of the computation. If the computation has already finished, the result will be returned. If not, the future will register that the current task is interested in the outcome of the given operation. This function returns a Poll, which looks like this:
type Poll<T, E> = Result<Async<T>, E>;
A Poll is a type alias for a Result whose Ok type is another type called Async (with the given error type as the Err type). Async is defined as follows:
pub enum Async<T> {
    Ready(T),
    NotReady,
}
Async, in turn, is an enum that can be either Ready(T) or NotReady. These two variants correspond to the state of the operation. Thus, the poll function can return three possible states:
  • Ok(Async::Ready(result)) when the operation has completed successfully and the result is in the inner variable called result.
  • Ok(Async::NotReady) when the operation has not completed yet and a result is not available. Note that this does not indicate an error condition.
  • Err(e) when the operation ran into an error. No result is available in this case.
It is easy to see that a Future is essentially a Result that may still be in the process of being produced. If we remove the possibility that the Result is not ready yet, the only two options left are the Ok and the Err cases, which correspond exactly to a Result.
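To make these three outcomes concrete, here is a minimal sketch of a hand-written future against the futures 0.1 API shown above; the CountdownFuture type and its countdown logic are hypothetical and exist purely for illustration. It reports NotReady a few times, notifying the current task so that it will be polled again, and eventually completes with Ready:
// A hypothetical future that becomes ready only after being polled a few times
extern crate futures;

use futures::{task, Async, Future, Poll};

struct CountdownFuture {
    remaining: u32,
}

impl Future for CountdownFuture {
    type Item = u32;
    type Error = ();

    fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
        if self.remaining == 0 {
            // The operation has finished; hand back the result
            Ok(Async::Ready(42))
        } else {
            // Not done yet: ask the current task to poll us again,
            // then report that the result is not available
            self.remaining -= 1;
            task::current().notify();
            Ok(Async::NotReady)
        }
    }
}

fn main() {
    let fut = CountdownFuture { remaining: 3 };
    // wait blocks the current thread until the future resolves
    println!("Result: {}", fut.wait().unwrap());
}
Until the countdown reaches zero, poll returns Ok(Async::NotReady) after notifying the current task, so it gets polled again; once it is done, wait returns the Ready value.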
Thus, a Future can represent anything that takes a non-trivial amount of time to complete. This can be a networking event, a disk read, and so on. Now, the most common question at this point is: how do we return a future from a given function? There are a few ways of doing that. Let us look at an example here. The project setup is the same as it always is.
$ cargo new --bin futures-example
We will need to add some dependencies to our Cargo.toml, which will look like this:
[package]
name = "futures-example"
version = "0.1.0"
authors = ["Foo<[email protected]>"]

[dependencies]
futures = "0.1.17"
futures-cpupool = "0.1.7"
In our main file, we set up everything as usual. We are interested in finding out whether a given integer is prime, and this will represent the part of our operation that takes some time to complete. We have two functions that do exactly that, using two different styles of returning futures, as we will see later. In practice, the naive primality test did not turn out to be slow enough to be a good example, so we had to sleep for a random amount of time to simulate slowness.
// ch7/futures-example/src/main.rs

#![feature(conservative_impl_trait)]
extern crate futures;
extern crate futures_cpupool;

use std::io;
use futures::Future;
use futures_cpupool::CpuPool;

// This implementation returns a boxed future
fn check_prime_boxed(n: u64) -> Box<Future<Item = bool, Error = io::Error>> {
    for i in 2..n {
        if n % i == 0 {
            return Box::new(futures::future::ok(false));
        }
    }
    Box::new(futures::future::ok(true))
}

// This returns a future using impl trait
fn check_prime_impl_trait(n: u64) -> impl Future<Item = bool, Error = io::Error> {
    for i in 2..n {
        if n % i == 0 {
            return futures::future::ok(false);
        }
    }
    futures::future::ok(true)
}

// This does not return a future
fn check_prime(n: u64) -> bool {
    for i in 2..n {
        if n % i == 0 {
            return false;
        }
    }
    true
}

fn main() {
    let input: u64 = 58466453;
    println!("Right before first call");
    let res_one = check_prime_boxed(input);
    println!("Called check_prime_boxed");
    let res_two = check_prime_impl_trait(input);
    println!("Called check_prime_impl_trait");
    println!("Results are {} and {}", res_one.wait().unwrap(),
             res_two.wait().unwrap());

    let thread_pool = CpuPool::new(4);
    let res_three = thread_pool.spawn_fn(move || {
        let temp = check_prime(input);
        let result: Result<bool, ()> = Ok(temp);
        result
    });
    println!("Called check_prime in another thread");
    println!("Result from the last call: {}", res_three.wait().unwrap());
}
There are a few major ways of returning futures. The first one is using trait objects, as done in check_prime_boxed. Box is a pointer type pointing to an object on the heap; it is a managed pointer in the sense that the object will be automatically cleaned up when it goes out of scope. The return type of the function is a trait object, which can represent any future that has its Item set to bool and Error set to io::Error. Thus, this represents dynamic dispatch.
The second way of returning a future is using the impl trait feature, which is what we do in check_prime_impl_trait. We say that the function returns a type that implements Future<Item=bool, Error=io::Error>, and as any type that implements the Future trait is a future, our function is returning a future. Note that in this case, we do not need to box the result before returning it, so an advantage of this approach is that no allocation is necessary for returning the future. Both of our functions use the future::ok function to signal that our computation has finished successfully with the given result.
Another option is to not return a future at all and to use the futures-based thread pool crate to do the heavy lifting of creating a future and managing it. This is the case with check_prime, which just returns a bool. In our main function, we set up a thread pool using the futures-cpupool crate, and we run the last function in that pool. We get back a future on which we can call wait to get the result. A totally different option for achieving the same goal is to return a custom type that implements the Future trait...
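As a rough sketch of that last option (not taken from the book's own code; the Prime type and check_prime_custom are hypothetical names), a custom type could hold the number and perform the check inside poll. It reuses the imports already present in main.rs, plus futures::{Async, Poll}:
// Hypothetical custom future type; additionally needs
// use futures::{Async, Poll}; alongside the existing imports
struct Prime {
    n: u64,
}

impl Future for Prime {
    type Item = bool;
    type Error = io::Error;

    fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
        // The whole check runs in a single poll, so the future is
        // immediately ready with its result. A real implementation
        // would avoid doing this much blocking work inside poll.
        for i in 2..self.n {
            if self.n % i == 0 {
                return Ok(Async::Ready(false));
            }
        }
        Ok(Async::Ready(true))
    }
}

// Returns our custom future by value, with no boxing and no allocation
fn check_prime_custom(n: u64) -> Prime {
    Prime { n: n }
}
Calling check_prime_custom(input).wait().unwrap() then behaves just like the other variants.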
