Azure Databricks & Spark Core For Data Engineers (Python/SQL)

Udemy
Paid Course
English
Certificate Available
15 hours' worth of material
Self-paced

Overview

Real World Project on Formula1 Racing for Data Engineers using Azure Databricks, Delta Lake, Azure Data Factory [DP-203]

What you'll learn:
  • You will learn how to build a real-world data project using Azure Databricks and Spark Core. The course is taught using real-world data from Formula1 motor racing
  • You will acquire professional-level data engineering skills in Azure Databricks, Delta Lake, Spark Core, Azure Data Lake Storage Gen2 and Azure Data Factory (ADF)
  • You will learn how to create notebooks, dashboards, clusters, cluster pools and jobs in Azure Databricks
  • You will learn how to ingest and transform data using PySpark in Azure Databricks
  • You will learn how to transform and analyse data using Spark SQL in Azure Databricks
  • You will learn about Data Lake architecture and Lakehouse architecture. Also, you will learn how to implement a solution for Lakehouse architecture using Delta Lake.
  • You will learn how to create Azure Data Factory pipelines to execute Databricks notebooks
  • You will learn how to create Azure Data Factory triggers to schedule pipelines as well as monitor them.
  • You will gain the skills around Azure Databricks and Data Factory required to pass the Azure Data Engineer Associate certification exam DP-203, although passing the exam is not the primary objective of the course
  • You will learn how to connect to Azure Databricks from Power BI to create reports

Welcome!

I am looking forward to helping you learn one of the most in-demand data engineering tools in the cloud: Azure Databricks! The course is taught by implementing a data engineering solution using Azure Databricks and Spark Core for a real-world project: analysing and reporting on Formula1 motor racing data.

This is like no other Azure Databricks course on Udemy. Once you have completed the course, including all the assignments, I strongly believe that you will be in a position to start a real-world data engineering project on your own and be proficient in Azure Databricks. I have also included lessons on Azure Data Lake Storage Gen2, Azure Data Factory and Power BI. The primary focus of the course is Azure Databricks and Spark Core, but it also covers the relevant concepts and connectivity to the other technologies mentioned. Please note that the course doesn't cover other aspects of Spark such as Spark Streaming and Spark ML. Also, the course is taught using PySpark as well as Spark SQL; it doesn't cover Scala or Java.

The course follows the logical progression of a real-world project implementation, with technical concepts explained and the Databricks notebooks built at the same time. Even though this course is not specifically designed to teach you the skills required for passing the Azure Data Engineer Associate Certification Exam DP-203, it can help you gain most of the skills required for the exam.

I value your time as much as I do mine, so I have designed this course to be fast-paced and to the point. The course is taught in simple English, without jargon. I start the course from the basics, and by the end of it you will be proficient in the technologies used.

Currently, the course teaches you the following:

Azure Databricks

  • Building a solution architecture for a data engineering solution using Azure Databricks, Azure Data Lake Gen2, Azure Data Factory and Power BI

  • Creating and using the Azure Databricks service, and the architecture of Databricks within Azure

  • Working with Databricks notebooks as well as Databricks utilities, magic commands, etc.

  • Passing parameters between notebooks as well as creating notebook workflows

  • Creating, configuring and monitoring Databricks clusters, cluster pools and jobs

  • Mounting Azure Storage in Databricks using secrets stored in Azure Key Vault (see the sketch after this list)

  • Working with Databricks Tables, the Databricks File System (DBFS), etc.

  • Using Delta Lake to implement a solution using Lakehouse architecture

  • Creating dashboards to visualise the outputs

  • Connecting to Azure Databricks tables from Power BI
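
To give a flavour of the mounting approach referenced above, here is a minimal sketch of mounting an ADLS Gen2 container using a Key Vault-backed secret scope. The scope name, secret names, storage account and container are illustrative placeholders, not values from the course:

  # Secrets are read from a Databricks secret scope backed by Azure Key Vault.
  client_id = dbutils.secrets.get(scope="formula1-scope", key="client-id")
  tenant_id = dbutils.secrets.get(scope="formula1-scope", key="tenant-id")
  client_secret = dbutils.secrets.get(scope="formula1-scope", key="client-secret")

  # Standard OAuth configuration for a service principal on ADLS Gen2.
  configs = {
      "fs.azure.account.auth.type": "OAuth",
      "fs.azure.account.oauth.provider.type":
          "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
      "fs.azure.account.oauth2.client.id": client_id,
      "fs.azure.account.oauth2.client.secret": client_secret,
      "fs.azure.account.oauth2.client.endpoint":
          f"https://login.microsoftonline.com/{tenant_id}/oauth2/token",
  }

  # Mount the 'raw' container so notebooks can use a simple /mnt path.
  dbutils.fs.mount(
      source="abfss://raw@formula1dl.dfs.core.windows.net/",
      mount_point="/mnt/formula1dl/raw",
      extra_configs=configs,
  )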

Spark (Only PySpark and SQL)

  • Spark architecture, Data Sources API and DataFrame API

  • PySpark - Ingestion of CSV, simple and complex JSON files into the data lake as parquet files/tables (see the sketch after this list)

  • PySpark - Transformations such as Filter, Join, Simple Aggregations, GroupBy, Window functions etc.

  • PySpark - Creating local and global temporary views

  • Spark SQL - Creating databases, tables and views

  • Spark SQL - Transformations such as Filter, Join, Simple Aggregations, GroupBy, Window functions etc.

  • Spark SQL - Creating local and global temporary views

  • Implementing full refresh and incremental load patterns using partitions
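
As a taste of the ingestion and transformation work above, here is a minimal PySpark sketch; the paths, schema and column names are assumptions for illustration, not the course's exact ones:

  from pyspark.sql import SparkSession
  from pyspark.sql.functions import col, current_timestamp
  from pyspark.sql.types import (DateType, IntegerType, StringType,
                                 StructField, StructType)

  # In a Databricks notebook 'spark' already exists; getOrCreate() reuses it.
  spark = SparkSession.builder.getOrCreate()

  # Hypothetical schema for a Formula1 races CSV file.
  races_schema = StructType([
      StructField("raceId", IntegerType(), False),
      StructField("year", IntegerType(), True),
      StructField("name", StringType(), True),
      StructField("date", DateType(), True),
  ])

  # Ingest the CSV from the raw layer with an explicit schema.
  races_df = (spark.read
              .option("header", True)
              .schema(races_schema)
              .csv("/mnt/formula1dl/raw/races.csv"))

  # A simple transformation: filter, rename and add an audit column.
  transformed_df = (races_df
                    .filter(col("year") == 2020)
                    .withColumnRenamed("name", "race_name")
                    .withColumn("ingestion_date", current_timestamp()))

  # Persist to the processed layer as parquet.
  transformed_df.write.mode("overwrite").parquet("/mnt/formula1dl/processed/races")

  # Expose the DataFrame to Spark SQL as a session-scoped temporary view.
  transformed_df.createOrReplaceTempView("v_races")
  spark.sql("SELECT race_name, date FROM v_races ORDER BY date").show()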

Delta Lake

  • Emergence of the Data Lakehouse architecture and the role of Delta Lake.

  • Read, write, update, delete and merge to Delta Lake using both PySpark and SQL (a merge sketch follows this list)

  • History, Time Travel and Vacuum

  • Converting Parquet files to Delta files

  • Implementing the incremental load pattern using Delta Lake
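
To illustrate the merge and time-travel capabilities above, here is a minimal Delta Lake sketch; the table paths and the join key result_id are assumptions for illustration:

  from delta.tables import DeltaTable
  from pyspark.sql import SparkSession

  spark = SparkSession.builder.getOrCreate()

  # Hypothetical incremental batch of race results, produced upstream.
  updates_df = spark.read.parquet("/mnt/formula1dl/processed/results_increment")

  # Upsert the batch into the target Delta table (incremental load pattern).
  target = DeltaTable.forPath(spark, "/mnt/formula1dl/presentation/results")
  (target.alias("tgt")
   .merge(updates_df.alias("src"), "tgt.result_id = src.result_id")
   .whenMatchedUpdateAll()
   .whenNotMatchedInsertAll()
   .execute())

  # Inspect the table history, then time travel back to the first version.
  target.history().show()
  first_version_df = (spark.read
                      .format("delta")
                      .option("versionAsOf", 0)
                      .load("/mnt/formula1dl/presentation/results"))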

Azure Data Factory

  • Creating pipelines to execute Databricks notebooks

  • Designing robust pipelines to deal with unexpected scenarios such as missing files (see the notebook-side sketch after this list)

  • Creating dependencies between activities as well as pipelines

  • Scheduling the pipelines using data factory triggers to execute at regular intervals

  • Monitoring the triggers and pipelines to check for errors and outputs.
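
ADF pipelines themselves are built in the Data Factory UI, but the notebook run by a Databricks Notebook activity can read the parameters ADF passes in and return a status for downstream activities. Here is a minimal sketch of that notebook side, with a hypothetical widget name and paths:

  # ADF base parameters arrive as notebook widgets; 'p_file_date' is hypothetical.
  dbutils.widgets.text("p_file_date", "")
  file_date = dbutils.widgets.get("p_file_date")

  source_path = f"/mnt/formula1dl/raw/{file_date}/results.json"

  # Guard against missing files so the pipeline can branch instead of failing hard.
  try:
      dbutils.fs.ls(source_path)
  except Exception:
      dbutils.notebook.exit("file-not-found")

  # Ingest and persist, then report success back to the ADF activity output.
  df = spark.read.json(source_path)
  df.write.mode("overwrite").parquet(f"/mnt/formula1dl/processed/results/{file_date}")
  dbutils.notebook.exit("success")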


Taught by

Ramesh Retnasamy