Building Big Data Pipelines with Apache Beam

Jan Lukavsky

  1. 342 pages
  2. English
  3. ePUB (mobile-friendly)

About this book

Implement, run, operate, and test data processing pipelines using Apache Beam.

Key Features
  ‱ Understand how to improve usability and productivity when implementing Beam pipelines
  ‱ Learn how to use stateful processing to implement complex use cases using Apache Beam
  ‱ Implement, test, and run Apache Beam pipelines with the help of expert tips and techniques

Book Description
Apache Beam is an open source, unified programming model for implementing and executing data processing pipelines, including Extract, Transform, and Load (ETL), batch, and stream processing. This book will help you to confidently build data processing pipelines with Apache Beam. You'll start with an overview of Apache Beam and understand how to use it to implement basic pipelines. You'll also learn how to test and run the pipelines efficiently. As you progress, you'll explore how to structure your code for reusability and also use various Domain Specific Languages (DSLs). Later chapters will show you how to use schemas and query your data using (streaming) SQL. Finally, you'll understand advanced Apache Beam concepts, such as implementing your own I/O connectors. By the end of this book, you'll have gained a deep understanding of the Apache Beam model and be able to apply it to solve problems.

What you will learn
  ‱ Understand the core concepts and architecture of Apache Beam
  ‱ Implement stateless and stateful data processing pipelines
  ‱ Use state and timers to process real-time events
  ‱ Structure your code for reusability
  ‱ Use streaming SQL to process real-time data, increasing productivity and data accessibility
  ‱ Run a pipeline using a portable runner and implement data processing using the Apache Beam Python SDK
  ‱ Implement Apache Beam I/O connectors using the Splittable DoFn API

Who this book is for
This book is for data engineers, data scientists, and data analysts who want to learn how Apache Beam works. Intermediate-level knowledge of the Java programming language is assumed.

Information

Year
2022
ISBN
9781800566569

Section 1 Apache Beam: Essentials

This section is a general introduction to how most streaming data processing systems work, the general properties of data streams, and the problems that need to be solved for computational correctness and for balancing throughput and latency in the context of Apache Beam. This section also covers how pipelines are implemented, tested, and run.
This section comprises the following chapters:
  • Chapter 1, Introduction to Data Processing with Apache Beam
  • Chapter 2, Implementing, Testing, and Deploying Basic Pipelines
  • Chapter 3, Implementing Pipelines Using Stateful Processing

Chapter 1: Introduction to Data Processing with Apache Beam

Data. Big data. Real-time data. Data streams. Many buzzwords to describe many things, and yet they have many common properties. Mind-blowing applications can be developed from the successful application of (theoretically) simple logic – take data and produce knowledge. However, a simple-sounding task can turn out to be difficult when the amount of data needed to produce knowledge is huge (and still growing). Given the vast volumes of data produced by humanity every day, which tools should we choose to turn our simple logic into scalable solutions? That is, solutions that protect our investment in creating the data extraction logic, even in the presence of new requirements arising or changing on a daily basis, and new data processing technologies being created? This book focuses on why Apache Beam might be a good solution to these challenges, and it will guide you through the Beam learning process.
In this chapter, we will cover the following topics:
  • Why Apache Beam?
  • Writing your first pipeline
  • Running a pipeline against streaming data
  ‱ Exploring the key properties of unbounded data
  • Measuring the event time progress inside data streams
  • Assigning data to windows
  • Unifying batch and streaming data processing

Technical requirements

In this chapter, we will introduce some elementary pipelines written using Beam's Java Software Development Kit (SDK).
We will use the code located in the GitHub repository for this book: https://github.com/PacktPublishing/Building-Big-Data-Pipelines-with-Apache-Beam.
We will also need the following tools to be installed:
  • Java Development Kit (JDK) 11 (possibly OpenJDK 11), with JAVA_HOME set appropriately
  • Git
  • Bash
    Important note
    Although it is possible to run many tools in this book using the Windows shell, we will focus on using Bash scripting only. We hope Windows users will be able to run Bash using virtualization or Windows Subsystem for Linux (or any similar technology).
First of all, we need to clone the repository:
  1. To do this, we create a suitable directory, and then we run the following command:
    $ git clone https://github.com/PacktPublishing/Building-Big-Data-Pipelines-with-Apache-Beam.git
  2. This will result in a directory, Building-Big-Data-Pipelines-with-Apache-Beam, being created in the working directory. We then run the following command in this newly created directory:
    $ ./mvnw clean install
Throughout this book, the $ character will denote a Bash shell prompt. Therefore, $ ./mvnw clean install means running the ./mvnw command in the top-level directory of the git clone (that is, Building-Big-Data-Pipelines-with-Apache-Beam). Similarly, chapter1$ ../mvnw clean install means running the specified command in the subdirectory called chapter1.

Why Apache Beam?

There are two basic questions we might ask when considering a new technology to learn and apply in practice:
  • What problem am I struggling with that the new technology can help me solve?
  • What would the costs associated with the technology be?
Every sound technology has a well-defined selling point – that is, something that justifies its existence in the presence of competing technologies. In the case of Beam, this selling point could be reduced to a single word: portability. Beam is portable on several layers:
  • Beam's pipelines are portable between multiple runners (that is, a technology that executes the distributed computation described by a pipeline's author).
  • Beam's data processing model is portable between various programming languages.
  • Beam's data processing logic is portable between bounded and unbounded data.
Each of these points deserves a few words of explanation. By runner portability, we mean the ability to run existing pipelines written in one of the supported programming languages (for instance, Java, Python, Go, Scala, or even SQL) against a data processing engine that can be chosen at runtime. Typical examples of runners are Apache Flink, Apache Spark, and Google Cloud Dataflow. However, Beam is by no means limited to these; new runners are created as new technologies arise, and it's very likely that many more will be developed.
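To make runner portability concrete, the following is a minimal sketch of a Java pipeline that leaves the choice of runner to a command-line option; the class name and the transforms are illustrative, not taken from the book's repository:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.MapElements;
    import org.apache.beam.sdk.values.TypeDescriptors;

    public class RunnerPortabilityExample {
      public static void main(String[] args) {
        // The runner is picked at runtime from the arguments
        // (for example, --runner=FlinkRunner or --runner=DataflowRunner);
        // nothing in the pipeline code refers to a concrete engine.
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
        Pipeline pipeline = Pipeline.create(options);

        pipeline
            .apply(Create.of("hello", "beam"))
            .apply(MapElements.into(TypeDescriptors.strings())
                .via((String s) -> s.toUpperCase()));

        pipeline.run().waitUntilFinish();
      }
    }

With no --runner option, Beam typically falls back to the local Direct Runner; adding a runner's dependency to the classpath and passing the matching option is usually all that is needed to execute the same pipeline on Flink, Spark, or Dataflow.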
When we say Beam's data processing model is portable between various programming languages, we mean it can support multiple SDKs, regardless of the language or technology used by the runner. This way, we can write Beam pipelines in the Go language and then run them against the Apache Flink Runner, which is itself written in Java.
Last but not least, the core of Apache Beam's model is designed so that it is portable between bounded and unbounded data. Bounded data is what was historically called batch processing, while unbounded data refers to real-time processing (that is, an application crunching live data as it arrives in the system and producing a low-latency output).
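As a sketch of what this unification looks like in code (the helper below is a hypothetical example, not from the book's repository), the same counting logic can be applied to a bounded or an unbounded PCollection; for unbounded input, the windowing strategy determines when results are emitted:

    import org.apache.beam.sdk.transforms.Count;
    import org.apache.beam.sdk.transforms.windowing.FixedWindows;
    import org.apache.beam.sdk.transforms.windowing.Window;
    import org.apache.beam.sdk.values.KV;
    import org.apache.beam.sdk.values.PCollection;
    import org.joda.time.Duration;

    class WordCounting {
      // Works unchanged for batch (bounded) and streaming (unbounded) inputs:
      // with unbounded data, each fixed one-minute window of event time
      // produces its own set of per-word counts.
      static PCollection<KV<String, Long>> countWords(PCollection<String> words) {
        return words
            .apply(Window.<String>into(FixedWindows.of(Duration.standardMinutes(1))))
            .apply(Count.perElement());
      }
    }

A bounded source (such as a set of files) and an unbounded one (such as a Kafka topic) can both feed this function; only the input connector differs, not the processing logic.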
Putting these pieces together, we can describe Beam as a tool that lets you deal with your big data architecture with the following vision:
Choose your preferred language, write your data processing pipeline, run this pipeline using a runner of your choice, and do all of this for both batch and real-time data at the same time.
Because everything comes at a price, you should expect to pay for flexibility like this: the price is a somewhat bigger overhead in terms of CPU and/or memory usage. The Beam community works...
