Azure Synapse Analytics Cookbook
eBook - ePub

Gaurav Agarwal, Meenakshi Muralidharan, Rohini Srivathsa

  1. 238 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available via iOS and Android

About This Book

Whether you're an Azure veteran or just getting started, get the most out of your data with effective recipes for Azure Synapse.

Key Features
  • Discover new techniques for using Azure Synapse, regardless of your level of expertise
  • Integrate Azure Synapse with other data sources to create a unified experience for your analytical needs using Microsoft Azure
  • Learn how to embed data governance and classification with Synapse Analytics by integrating Azure Purview

Book Description
As data warehouse management becomes increasingly integral to successful organizations, choosing and running the right solution is more important than ever. Microsoft Azure Synapse is an enterprise-grade, cloud-based data warehousing platform, and this book holds the key to using Synapse to its full potential. If you want the skills and confidence to create a robust enterprise analytical platform, this cookbook is a great place to start.
You'll learn and execute enterprise-level deployments on medium-to-large data platforms. Using the step-by-step recipes and accompanying theory covered in this book, you'll understand how to integrate various services with Synapse to make it a robust solution for all your data needs. Whether you're new to Azure Synapse or just getting started, you'll find the instructions you need to solve any problem you may face, including using Azure services for data visualization as well as for artificial intelligence (AI) and machine learning (ML) solutions.
By the end of this Azure book, you'll have the skills you need to implement an enterprise-grade analytical platform, enabling your organization to explore and manage heterogeneous data workloads and employ various data integration services to solve real-time industry problems.

What you will learn
  • Discover the optimal approach for loading and managing data
  • Work with notebooks for various tasks, including ML
  • Run real-time analytics using Azure Synapse Link for Cosmos DB
  • Perform exploratory data analytics using Apache Spark
  • Read and write DataFrames into Parquet files using PySpark
  • Create reports on various metrics for monitoring key KPIs
  • Combine Power BI and Serverless for distributed analysis
  • Enhance your Synapse analysis with data visualizations

Who this book is for
This book is for data architects, data engineers, and developers who want to learn and understand the main concepts of Azure Synapse Analytics and implement them in real-world scenarios.


Information

Year: 2022
ISBN: 9781803245577

Chapter 1: Choosing the Optimal Method for Loading Data to Synapse

In this chapter, we will cover how to enrich and load data into Azure Synapse using the most appropriate method. We will look, in detail, at different techniques for loading data, considering the variety of data source options, and we will learn the best practices to follow for the different data loading options, along with unsupported scenarios.
We will cover the following recipes:
  • Choosing a data loading option
  • Achieving parallelism in data loading using PolyBase
  • Moving and transforming using a data flow
  • Adding a trigger to a data flow pipeline
  • Unsupported data loading scenarios
  • Data loading best practices

Choosing a data loading option

Data loading is one of the most important aspects of data orchestration in Azure Synapse Analytics. Loading data into Synapse requires handling a variety of data sources of different formats, sizes, and frequencies.
There are multiple options available for loading data into Synapse. To enrich and load the data in the most appropriate manner, it is important to understand which option best suits a given loading scenario.
Here are some of the most well-known data loading techniques:
  • Loading data using the COPY command
  • Loading data using PolyBase
  • Loading data into Azure Synapse using Azure Data Factory (ADF)
We'll look at each of them in this recipe.
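To give a sense of how the first two options differ before we walk through them, here is a minimal, hypothetical sketch contrasting the single-statement COPY approach with the external objects that PolyBase requires. The storage account, container, and object names are placeholder assumptions rather than values from the book, and secured storage would additionally need a credential:
    -- COPY command: a single statement pointed at the source files
    -- (placeholder storage URL; the target table is created later in this recipe)
    COPY INTO [NYCTaxi].[TripsStg]
    FROM 'https://<storageaccount>.blob.core.windows.net/taxistagingdata/'
    WITH (
        FILE_TYPE = 'CSV',
        FIRSTROW = 2
    );
    -- PolyBase: external objects are defined first, then data is loaded
    -- with CREATE TABLE AS SELECT or INSERT...SELECT from an external table
    CREATE EXTERNAL DATA SOURCE TaxiSource
    WITH (
        TYPE = HADOOP,
        LOCATION = 'abfss://taxistagingdata@<storageaccount>.dfs.core.windows.net'
    );
    CREATE EXTERNAL FILE FORMAT TaxiCsvFormat
    WITH (
        FORMAT_TYPE = DELIMITEDTEXT,
        FORMAT_OPTIONS (FIELD_TERMINATOR = ',', FIRST_ROW = 2)
    );
    -- ...followed by CREATE EXTERNAL TABLE over the files and a CTAS load
ADF, the third option, orchestrates a load from a pipeline rather than from T-SQL; its copy activity can use either PolyBase or the COPY command under the hood.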

Getting ready

We will be using a public dataset for our scenario. This dataset consists of New York yellow taxi trip data, which includes attributes such as trip distances, itemized fares, rate types, payment types, pick-up and drop-off dates and times, driver-reported passenger counts, and pick-up and drop-off locations. We will be using this dataset throughout this recipe to demonstrate various use cases:
  • To get the dataset, you can go to the following URL: https://www.kaggle.com/microize/newyork-yellow-taxi-trip-data-2020-2019.
  • The code for this recipe can be downloaded from the GitHub repository: https://github.com/PacktPublishing/Analytics-in-Azure-Synapse-Simplified.
  • For a quick-start guide on how to create a Synapse workspace, you can refer to https://docs.microsoft.com/en-us/azure/synapse-analytics/quickstart-create-workspace.
  • For a quick-start guide on how to create a dedicated SQL pool, check out https://docs.microsoft.com/en-us/azure/synapse-analytics/quickstart-create-sql-pool-studio.
Let's get started.

How to do it…

Let's look at each of the three methods in turn and see when to use each of them and which is the most suitable.

Loading data using the COPY command

We will be using the new COPY command to load the dataset from external storage:
  1. Before we get started, let's upload the data from the Kaggle New York yellow taxi trip dataset to the Azure Data Lake Storage Gen2 (ADLS2) storage container named taxistagingdata. You can download the dataset to your local machine and upload it to the Azure storage container, as shown in Figure 1.1:
Figure 1.1 – The New York taxi dataset
  2. Next, let's create a staging table into which to load the data from the data lake storage. You can use SQL Server Management Studio (SSMS) to run the following queries against the SQL pool that you have created:
    -- Create a schema to hold the taxi tables
    CREATE SCHEMA [NYCTaxi];
    GO
    -- Create the staging table as a round-robin heap; all columns are
    -- loaded as text first and can be converted to proper types later
    IF NOT EXISTS (SELECT * FROM sys.objects WHERE NAME = 'TripsStg' AND TYPE = 'U')
    CREATE TABLE [NYCTaxi].[TripsStg]
    (
    VendorID nvarchar(30),
    tpep_pickup_datetime nvarchar(30),
    tpep_dropoff_datetime nvarchar(30),
    passenger_count nvarchar(30),
    trip_distance nvarchar(30),
    RatecodeID nvarchar(30),
    store_and_fwd_flag nvarchar(30),
    PULocationID nvarchar(30),
    DOLocationID nvarchar(30),
    payment_type nvarchar(30),
    fare_amount nvarchar(10),
    extra nvarchar(10),
    mta_tax nvarchar(10),
    tip_amount nvarchar(10),
    tolls_amount nvarchar(10),
    improvement_surcharge nvarchar(10),
    total_amount nvarchar(10)
    )
    WITH
    (
    DISTRIBUTION = ROUND_ROBIN,
    HEAP
    );
  3. Use the COPY INTO command to load the data from ADLS2. This reduces the number of steps in the data loading process, as well as its complexity:
    COPY INTO [NYCTaxi].[TripsStg]
    FROM 'https://mystorageaccount.blob.core.windows.net/myblobc...
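A complete COPY INTO statement for a CSV load might look like the following sketch; the storage account, container path, and the Managed Identity authentication shown here are placeholder assumptions, not the book's actual values:
    COPY INTO [NYCTaxi].[TripsStg]
    FROM 'https://<storageaccount>.blob.core.windows.net/taxistagingdata/*.csv'
    WITH (
        FILE_TYPE = 'CSV',
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '0x0A',
        FIRSTROW = 2,  -- skip the CSV header row
        CREDENTIAL = (IDENTITY = 'Managed Identity')  -- or a SAS token / storage account key
    );
A quick row count confirms that the load succeeded:
    SELECT COUNT(*) AS loaded_rows
    FROM [NYCTaxi].[TripsStg];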
