Azure Synapse Analytics Cookbook
eBook - ePub

Gaurav Agarwal, Meenakshi Muralidharan, Rohini Srivathsa

  1. 238 pages
  2. English

About the book

Whether you're an Azure veteran or just getting started, get the most out of your data with effective recipes for Azure Synapse.

Key Features
  • Discover new techniques for using Azure Synapse, regardless of your level of expertise
  • Integrate Azure Synapse with other data sources to create a unified experience for your analytical needs using Microsoft Azure
  • Learn how to embed data governance and classification with Synapse Analytics by integrating Azure Purview

Book Description
As data warehouse management becomes increasingly integral to successful organizations, choosing and running the right solution is more important than ever. Microsoft Azure Synapse is an enterprise-grade, cloud-based data warehousing platform, and this book holds the key to using Synapse to its full potential. If you want the skills and confidence to create a robust enterprise analytical platform, this cookbook is a great place to start.
You'll learn and execute enterprise-level deployments on medium-to-large data platforms. Using the step-by-step recipes and accompanying theory covered in this book, you'll understand how to integrate various services with Synapse to make it a robust solution for all your data needs. Whether you're new to Azure Synapse or just getting started, you'll find the instructions you need to solve any problem you may face, including using Azure services for data visualization as well as for artificial intelligence (AI) and machine learning (ML) solutions.
By the end of this Azure book, you'll have the skills you need to implement an enterprise-grade analytical platform, enabling your organization to explore and manage heterogeneous data workloads and employ various data integration services to solve real-time industry problems.

What you will learn
  • Discover the optimal approach for loading and managing data
  • Work with notebooks for various tasks, including ML
  • Run real-time analytics using Azure Synapse Link for Cosmos DB
  • Perform exploratory data analytics using Apache Spark
  • Read and write DataFrames into Parquet files using PySpark
  • Create reports on various metrics for monitoring key KPIs
  • Combine Power BI and Serverless for distributed analysis
  • Enhance your Synapse analysis with data visualizations

Who this book is for
This book is for data architects, data engineers, and developers who want to learn and understand the main concepts of Azure Synapse Analytics and implement them in real-world scenarios.

Information

Year
2022
ISBN
9781803245577
Edition
1

Chapter 1: Choosing the Optimal Method for Loading Data to Synapse

In this chapter, we will cover how to enrich and load data into Azure Synapse using the optimal method. We will look in detail at the different techniques for loading data from a variety of data sources, along with the best practices to follow for each loading option and the scenarios that are not supported.
We will cover the following recipes:
  • Choosing a data loading option
  • Achieving parallelism in data loading using PolyBase
  • Moving and transforming using a data flow
  • Adding a trigger to a data flow pipeline
  • Unsupported data loading scenarios
  • Data loading best practices

Choosing a data loading option

Data loading is one of the most important aspects of data orchestration in Azure Synapse Analytics. Loading data into Synapse requires handling a variety of data sources of different formats, sizes, and frequencies.
There are multiple options available for loading data into Synapse, so to enrich and load the data in the most appropriate manner, it is important to understand which option best fits a given scenario before the actual load begins.
Here are some of the most well-known data loading techniques:
  • Loading data using the COPY command
  • Loading data using PolyBase
  • Loading data into Azure Synapse using Azure Data Factory (ADF)
We'll look at each of them in this recipe.
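As a quick point of reference, here is a minimal, hypothetical sketch of what a PolyBase load looks like; it is not part of the original recipe, the storage URL and object names are placeholders, and the column list is abbreviated. PolyBase exposes the files through an external table and then loads them into an internal table with a CREATE TABLE AS SELECT (CTAS) statement, which is what gives it its parallelism; the dedicated PolyBase recipe later in this chapter covers the full procedure:
    -- External data source over the ADLS2 container (add CREDENTIAL = ... for private storage).
    CREATE EXTERNAL DATA SOURCE TaxiStagingSource
    WITH (TYPE = HADOOP,
          LOCATION = 'abfss://taxistagingdata@<storageaccount>.dfs.core.windows.net');

    -- File format describing the comma-separated source files; FIRST_ROW = 2 skips the header.
    CREATE EXTERNAL FILE FORMAT CsvFileFormat
    WITH (FORMAT_TYPE = DELIMITEDTEXT,
          FORMAT_OPTIONS (FIELD_TERMINATOR = ',', FIRST_ROW = 2));

    -- External table over the files in the container.
    CREATE EXTERNAL TABLE [NYCTaxi].[TripsExternal]
    (
        VendorID nvarchar(30),
        tpep_pickup_datetime nvarchar(30),
        tpep_dropoff_datetime nvarchar(30)
        -- ...remaining taxi trip columns omitted for brevity
    )
    WITH (LOCATION = '/', DATA_SOURCE = TaxiStagingSource, FILE_FORMAT = CsvFileFormat);

    -- CTAS reads through the external table and loads in parallel across all distributions.
    CREATE TABLE [NYCTaxi].[TripsPolyBase]
    WITH (DISTRIBUTION = ROUND_ROBIN, HEAP)
    AS SELECT * FROM [NYCTaxi].[TripsExternal];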

Getting ready

We will be using a public dataset for our scenario. This dataset consists of New York yellow taxi trip data, which includes attributes such as trip distances, itemized fares, rate types, payment types, pick-up and drop-off dates and times, driver-reported passenger counts, and pick-up and drop-off locations. We will be using this dataset throughout this recipe to demonstrate various use cases:
  • To get the dataset, you can go to the following URL: https://www.kaggle.com/microize/newyork-yellow-taxi-trip-data-2020-2019.
  • The code for this recipe can be downloaded from the GitHub repository: https://github.com/PacktPublishing/Analytics-in-Azure-Synapse-Simplified.
  • For a quick-start guide on how to create a Synapse workspace, you can refer to https://docs.microsoft.com/en-us/azure/synapse-analytics/quickstart-create-workspace.
  • For a quick-start guide on how to create a dedicated SQL pool, check out https://docs.microsoft.com/en-us/azure/synapse-analytics/quickstart-create-sql-pool-studio.
Let's get started.

How to do it…

Let's look at each of the three methods in turn and see when each of them is the best choice.

Loading data using the COPY command

We will be using the new COPY command to load the dataset from external storage:
  1. Before we get started, let's upload the New York yellow taxi trip dataset from Kaggle to the Azure Data Lake Storage Gen2 (ADLS2) storage container named taxistagingdata. You can download the dataset to your local machine and then upload it to the Azure storage container, as shown in Figure 1.1:
Figure 1.1 – The New York taxi dataset
  2. Let's create a table to load the data from the data lake storage. You can use SQL Server Management Studio (SSMS) to run the following queries against the SQL pool that you have created:
    -- CREATE SCHEMA must run in its own batch, hence the GO separator.
    CREATE SCHEMA [NYCTaxi];
    GO
    -- Staging table: a round-robin distributed heap keeps the initial load fast and simple.
    IF NOT EXISTS (SELECT * FROM sys.objects WHERE NAME = 'TripsStg' AND TYPE = 'U')
    CREATE TABLE [NYCTaxi].[TripsStg]
    (
        VendorID nvarchar(30),
        tpep_pickup_datetime nvarchar(30),
        tpep_dropoff_datetime nvarchar(30),
        passenger_count nvarchar(30),
        trip_distance nvarchar(30),
        RatecodeID nvarchar(30),
        store_and_fwd_flag nvarchar(30),
        PULocationID nvarchar(30),
        DOLocationID nvarchar(30),
        payment_type nvarchar(30),
        fare_amount nvarchar(10),
        extra nvarchar(10),
        mta_tax nvarchar(10),
        tip_amount nvarchar(10),
        tolls_amount nvarchar(10),
        improvement_surcharge nvarchar(10),
        total_amount nvarchar(10)
    )
    WITH
    (
        DISTRIBUTION = ROUND_ROBIN,
        HEAP
    );
  3. Use the COPY INTO command to load the data from ADLS2. This reduces the number of steps in the data loading process and its overall complexity:
    COPY INTO [NYCTaxi].[TripsStg]
    FROM 'https://mystorageaccount.blob.core.windows.net/myblobc...
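The statement is truncated at this point in the preview. For reference, a complete COPY INTO statement for this staging table generally takes the following shape; the storage account name, path, and SAS token are hypothetical placeholders, not values from the original recipe:
    COPY INTO [NYCTaxi].[TripsStg]
    FROM 'https://<storageaccount>.blob.core.windows.net/taxistagingdata/*.csv'
    WITH
    (
        FILE_TYPE = 'CSV',
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '0x0A',
        FIRSTROW = 2,  -- skip the CSV header row
        CREDENTIAL = (IDENTITY = 'Shared Access Signature', SECRET = '<sas-token>')
    );
If the workspace's managed identity has been granted access to the storage account, CREDENTIAL = (IDENTITY = 'Managed Identity') can be used instead of a SAS token. A quick row count confirms the load:
    SELECT COUNT(*) AS loaded_rows FROM [NYCTaxi].[TripsStg];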
