Distributed Data Systems with Azure Databricks

Create, deploy, and manage enterprise data pipelines

Alan Bernardo Palacio

  • 414 pages
  • English

About this book

Quickly build and deploy massive data pipelines and improve productivity using Azure Databricks

Key Features

  • Get to grips with the distributed training and deployment of machine learning and deep learning models
  • Learn how ETLs are integrated with Azure Data Factory and Delta Lake
  • Explore deep learning and machine learning models in a distributed computing infrastructure

Book Description

Microsoft Azure Databricks helps you to harness the power of distributed computing and apply it to create robust data pipelines, along with training and deploying machine learning and deep learning models. Databricks' advanced features enable developers to process, transform, and explore data. Distributed Data Systems with Azure Databricks will help you to put your knowledge of Databricks to work to create big data pipelines.

The book provides a hands-on approach to implementing Azure Databricks and its associated methodologies that will make you productive in no time. Complete with detailed explanations of essential concepts, practical examples, and self-assessment questions, you'll begin with a quick introduction to Databricks core functionalities, before performing distributed model training and inference using TensorFlow and Spark MLlib. As you advance, you'll explore MLflow Model Serving on Azure Databricks and implement distributed training pipelines using HorovodRunner in Databricks.

Finally, you'll discover how to transform, use, and obtain insights from massive amounts of data to train predictive models and create fully working data pipelines. By the end of this Azure book, you'll have gained a solid understanding of how to work with Databricks to create and manage an entire big data pipeline.

What you will learn

  • Create ETLs for big data in Azure Databricks
  • Train, manage, and deploy machine learning and deep learning models
  • Integrate Databricks with Azure Data Factory for extract, transform, load (ETL) pipeline creation
  • Discover how to use Horovod for distributed deep learning
  • Find out how to use Delta Engine to query and process data from Delta Lake
  • Understand how to use Data Factory in combination with Databricks
  • Use Structured Streaming in a production-like environment

Who this book is for

This book is for software engineers, machine learning engineers, data scientists, and data engineers who are new to Azure Databricks and want to build high-quality data pipelines without worrying about infrastructure. Knowledge of Azure Databricks basics is required to learn the concepts covered in this book more effectively. A basic understanding of machine learning concepts and beginner-level Python programming knowledge is also recommended.

Information

Year: 2021
ISBN: 9781838642693

Section 1: Introducing Databricks

This section introduces Databricks for new users and discusses its functionalities as well as the advantages it offers when dealing with massive amounts of data.
This section contains the following chapters:
  • Chapter 1, Introduction to Azure Databricks
  • Chapter 2, Creating an Azure Databricks Workspace

Chapter 1: Introduction to Azure Databricks

Modern information systems work with massive amounts of data, a constant flow that grows every day at an exponential rate. This flow comes from different sources, including sales information, transactional data, social media, and more. Organizations have to transform and aggregate this information to build applications that extract value from the data.
Apache Spark was developed to process this massive amount of data. Azure Databricks is built on top of Apache Spark, abstracting away most of the complexity of implementing it and adding the benefits of integration with other Azure services. This book aims to introduce Azure Databricks and explore its applications in modern data pipelines that transform, visualize, and extract insights from large amounts of data in a distributed computation environment.
In this introductory chapter, we will explore these topics:
  • Introducing Apache Spark
  • Introducing Azure Databricks
  • Discovering core concepts and terminology
  • Interacting with the Azure Databricks workspace
  • Using Azure Databricks notebooks
  • Exploring data management
  • Exploring computation management
  • Exploring authentication and authorization
These concepts will help us later to understand every aspect of how our jobs execute in Azure Databricks and to move easily between all of its assets.

Technical requirements

To understand the topics presented in this book, you must be familiar with data science and data engineering terms, and have a good understanding of Python, which is the main programming language used in this book, although we will also use SQL to make queries on views and tables.
In terms of the resources required, to execute the steps in this section and those presented in this book, you will need an Azure account with an active subscription. Bear in mind that this is a paid service, so you will have to provide your credit card details to create an account. When you create a new account, you will receive a certain amount of free credit, but some options are limited to premium users. Always remember to stop any services you are not using.
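Since Python and SQL are used side by side throughout the book, the following minimal sketch shows the pattern in a Databricks notebook. The table name and sample data are hypothetical placeholders; on Azure Databricks the spark session is already provided in every notebook, so the builder line only matters when running with plain PySpark.

    # A minimal sketch of mixing Python and SQL on a Spark view.
    # On Azure Databricks the `spark` session already exists in every notebook;
    # the builder line below is only needed when running with plain PySpark.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical sales data used only for illustration.
    df = spark.createDataFrame(
        [("2021-01-01", 125.0), ("2021-01-02", 98.5)],
        ["sale_date", "amount"],
    )

    # Register the DataFrame as a temporary view so it can be queried with SQL.
    df.createOrReplaceTempView("sales")

    # Query the view with SQL and bring the result back into Python.
    daily_totals = spark.sql(
        "SELECT sale_date, SUM(amount) AS total FROM sales GROUP BY sale_date"
    )
    daily_totals.show()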

Introducing Apache Spark

Apache Spark was created to work with the huge amounts of information available to modern consumers. It is a distributed, cluster-based computing system and a highly popular big data framework that provides speed and ease of use, and it includes APIs that support the following use cases:
  • Easy cluster management
  • Data integration and ETL procedures
  • Interactive advanced analytics
  • ML and deep learning
  • Real-time data processing
It can run very quickly on large datasets thanks to its in-memory processing design, which allows it to operate with very few disk read/write operations. It has a SQL-like interface, its object-oriented design makes code easy to understand and write, and it has a large support community.
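To make the in-memory design and the two interfaces concrete, here is a minimal sketch, assuming a local PySpark installation and a small hypothetical events dataset; the same code runs unchanged on a cluster, where the work is distributed across executors.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("spark-intro").getOrCreate()

    # Hypothetical event data; in practice this would be read from storage.
    events = spark.createDataFrame(
        [("click", "2021-01-01"), ("view", "2021-01-01"), ("click", "2021-01-02")],
        ["event_type", "event_date"],
    )

    # cache() keeps the dataset in memory across the cluster, so repeated
    # queries avoid extra disk reads - the in-memory design described above.
    events.cache()

    # The same aggregation through the object-oriented DataFrame API...
    by_type = events.groupBy("event_type").agg(F.count("*").alias("events"))

    # ...and through the SQL-like interface.
    events.createOrReplaceTempView("events")
    by_type_sql = spark.sql(
        "SELECT event_type, COUNT(*) AS events FROM events GROUP BY event_type"
    )

    by_type.show()
    by_type_sql.show()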
Despite its numerous benefits, Apache Spark has its limitations. These limitations include the following:
  • Users need to provide their own database infrastructure to store the information they work with.
  • The in-memory processing feature makes it fast to run, but also implies that it has high memory requirements.
  • It isn't well suited for true real-time analytics, since its streaming engine processes data in micro-batches.
  • It has an inherent complexity with a significant learning curve.
  • Because of its open source nature, it lacks dedicated training and customer support.
Let's look at the solution to these issues: Azure Databricks.

Introducing Azure Databricks

With these and other limitations in mind, Databricks was designed. It is a cloud-based platform that uses Apache Spark as its backend and builds on top of it to add features including the following:
  • Highly reliable data pipelines
  • Data science at scale
  • ...
