Azure Synapse Analytics Cookbook
Gaurav Agarwal, Meenakshi Muralidharan, Rohini Srivathsa
- 238 pages
- English
About This Book
Whether you're an Azure veteran or just getting started, get the most out of your data with effective recipes for Azure Synapse.

Key Features
- Discover new techniques for using Azure Synapse, regardless of your level of expertise
- Integrate Azure Synapse with other data sources to create a unified experience for your analytical needs using Microsoft Azure
- Learn how to embed data governance and classification in Synapse Analytics by integrating Azure Purview

Book Description
As data warehouse management becomes increasingly integral to successful organizations, choosing and running the right solution is more important than ever. Microsoft Azure Synapse is an enterprise-grade, cloud-based data warehousing platform, and this book holds the key to using it to its full potential. If you want the skills and confidence to create a robust enterprise analytical platform, this cookbook is a great place to start. You'll learn and execute enterprise-level deployments on medium-to-large data platforms. Using the step-by-step recipes and accompanying theory covered in this book, you'll understand how to integrate various services with Synapse to make it a robust solution for all your data needs. Whether you're new to Azure Synapse or an experienced user, you'll find the instructions you need to solve any problem you may face, including using Azure services for data visualization as well as for artificial intelligence (AI) and machine learning (ML) solutions.
By the end of this Azure book, you'll have the skills you need to implement an enterprise-grade analytical platform, enabling your organization to explore and manage heterogeneous data workloads and employ various data integration services to solve real-time industry problems.

What you will learn
- Discover the optimal approach for loading and managing data
- Work with notebooks for various tasks, including ML
- Run real-time analytics using Azure Synapse Link for Cosmos DB
- Perform exploratory data analytics using Apache Spark
- Read and write DataFrames to and from Parquet files using PySpark
- Create reports on various metrics for monitoring KPIs
- Combine Power BI and serverless SQL for distributed analysis
- Enhance your Synapse analysis with data visualizations

Who this book is for
This book is for data architects, data engineers, and developers who want to learn and understand the main concepts of Azure Synapse Analytics and implement them in real-world scenarios.
Chapter 1: Choosing the Optimal Method for Loading Data to Synapse
- Choosing a data loading option
- Achieving parallelism in data loading using PolyBase
- Moving and transforming using a data flow
- Adding a trigger to a data flow pipeline
- Unsupported data loading scenarios
- Data loading best practices
Choosing a data loading option
- Loading data using the COPY command
- Loading data using PolyBase
- Loading data into Azure Synapse using Azure Data Factory (ADF)
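For orientation, the PolyBase route listed above typically chains four objects: a database-scoped credential, an external data source, an external file format, and an external table, followed by a CTAS into the pool. The following is only a hedged T-SQL sketch of that pattern; every object name, storage account, and column shown is a placeholder and not taken from the book's recipe:

```sql
-- Sketch only: all names below (AdlsCredential, TaxiStaging, CsvFormat,
-- ext.TripsExternal, the storage account URL) are illustrative placeholders.
CREATE DATABASE SCOPED CREDENTIAL AdlsCredential
WITH IDENTITY = 'Managed Service Identity';

CREATE EXTERNAL DATA SOURCE TaxiStaging
WITH (
    LOCATION = 'abfss://taxistagingdata@mystorageaccount.dfs.core.windows.net',
    CREDENTIAL = AdlsCredential
);

CREATE EXTERNAL FILE FORMAT CsvFormat
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ',', FIRST_ROW = 2)
);

-- External table over the files, then CTAS into a distributed table.
CREATE EXTERNAL TABLE ext.TripsExternal
(
    VendorID nvarchar(30),
    tpep_pickup_datetime nvarchar(30)
    -- remaining columns mirror the staging table
)
WITH (LOCATION = '/trips/', DATA_SOURCE = TaxiStaging, FILE_FORMAT = CsvFormat);

CREATE TABLE NYCTaxi.Trips
WITH (DISTRIBUTION = ROUND_ROBIN)
AS SELECT * FROM ext.TripsExternal;
```

The key design point of PolyBase loading is that the CTAS reads the external files in parallel across the pool's compute nodes, which is where the parallelism recipe later in this chapter comes in.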
Getting ready
- To get the dataset, go to https://www.kaggle.com/microize/newyork-yellow-taxi-trip-data-2020-2019.
- The code for this recipe can be downloaded from the GitHub repository at https://github.com/PacktPublishing/Analytics-in-Azure-Synapse-Simplified.
- For a quick-start guide on creating a Synapse workspace, refer to https://docs.microsoft.com/en-us/azure/synapse-analytics/quickstart-create-workspace.
- For a quick-start guide on creating a dedicated SQL pool, check out https://docs.microsoft.com/en-us/azure/synapse-analytics/quickstart-create-sql-pool-studio.
How to do it…
Loading data using the COPY command
- Before we get started, let's upload the New York yellow taxi trip data from Kaggle to the Azure Data Lake Storage Gen2 (ADLS2) storage container named taxistagingdata. You can download the dataset to your local machine and then upload it to the Azure storage container, as shown in Figure 1.1:
- Let's create a table to load the data from the data lake storage. You can use SQL Server Management Studio (SSMS) to run the following queries against the SQL pool that you have created:

  CREATE SCHEMA [NYCTaxi];

  IF NOT EXISTS (SELECT * FROM sys.objects WHERE NAME = 'TripsStg' AND TYPE = 'U')
  CREATE TABLE [NYCTaxi].[TripsStg]
  (
      VendorID nvarchar(30),
      tpep_pickup_datetime nvarchar(30),
      tpep_dropoff_datetime nvarchar(30),
      passenger_count nvarchar(30),
      trip_distance nvarchar(30),
      RatecodeID nvarchar(30),
      store_and_fwd_flag nvarchar(30),
      PULocationID nvarchar(30),
      DOLocationID nvarchar(30),
      payment_type nvarchar(30),
      fare_amount nvarchar(10),
      extra nvarchar(10),
      mta_tax nvarchar(10),
      tip_amount nvarchar(10),
      tolls_amount nvarchar(10),
      improvement_surcharge nvarchar(10),
      total_amount nvarchar(10)
  )
  WITH
  (
      DISTRIBUTION = ROUND_ROBIN,
      HEAP
  )
- Use the COPY INTO command to load the data from ADLS2. This reduces both the number of steps in the data loading process and its complexity:

  COPY INTO [NYCTaxi].[TripsStg]
  FROM 'https://mystorageaccount.blob.core.windows.net/myblobc...
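The statement above is cut off in this extract. For illustration only, a hedged sketch of what a complete COPY INTO statement generally looks like follows; the storage account, container path, and credential values are placeholders and should not be read as the book's actual values:

```sql
-- Sketch only: mystorageaccount, taxistagingdata, and the SAS token
-- placeholder are assumptions, not values from the recipe.
COPY INTO [NYCTaxi].[TripsStg]
FROM 'https://mystorageaccount.blob.core.windows.net/taxistagingdata/*.csv'
WITH (
    FILE_TYPE = 'CSV',
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '0x0A',
    FIRSTROW = 2,  -- skip the header row
    CREDENTIAL = (IDENTITY = 'Shared Access Signature', SECRET = '<sas-token>')
);
```

Because the staging table declares every column as nvarchar, a load like this typically succeeds even on messy source data, deferring type conversion to a later transform step.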