Implementing a Lakehouse with Microsoft Fabric (DP-601T00)
Description
This course builds your foundational data engineering skills on Microsoft Fabric, focusing on the lakehouse concept. You will explore the capabilities of Apache Spark for distributed data processing, along with the techniques Delta Lake tables provide for efficient data management, versioning, and reliability. The course also covers data ingestion and orchestration using Dataflows Gen2 and Data Factory pipelines. A combination of lectures and hands-on exercises prepares you to work with lakehouses in Microsoft Fabric.
Please note that the contents of this course are also embedded in DP-600. If you are taking the DP-600 certification course, for example, you will already have covered everything in this course as well.
The primary audience for this course is data professionals who are familiar with data modeling, extraction, and analytics. It is designed for professionals who are interested in gaining knowledge about Lakehouse architecture, the Microsoft Fabric platform, and how to enable end-to-end analytics using these technologies.
In this training, you'll learn to:
- Describe end-to-end analytics in Microsoft Fabric
- Create a lakehouse
- Ingest data into files and tables in a lakehouse
- Query lakehouse tables with SQL
- Configure Spark in a Microsoft Fabric workspace
- Identify suitable scenarios for Spark notebooks and Spark jobs
- Use Spark dataframes to analyze and transform data
- Use Spark SQL to query data in tables and views
- Visualize data in a Spark notebook
- Understand Delta Lake and delta tables in Microsoft Fabric
- Create and manage delta tables using Spark
- Use Spark to query and transform data in delta tables
- Use delta tables with Spark structured streaming
- Describe Dataflow (Gen2) capabilities in Microsoft Fabric
- Create Dataflow (Gen2) solutions to ingest and transform data
- Include a Dataflow (Gen2) in a pipeline
- Describe pipeline capabilities in Microsoft Fabric
- Use the Copy Data activity in a pipeline
- Create pipelines based on predefined templates
- Run and monitor pipelines
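To give a flavor of the objectives above around Delta tables and SQL querying, the sketch below shows the kind of Spark SQL you might run in a Fabric notebook. The table and column names (`sales`, `region`, `amount`) are purely illustrative, not part of the course material.

```sql
-- Create a managed Delta table in the lakehouse
-- (names here are hypothetical examples)
CREATE TABLE IF NOT EXISTS sales (
    id INT,
    region STRING,
    amount DECIMAL(10, 2)
) USING DELTA;

-- Query the Delta table like any other SQL table
SELECT region, SUM(amount) AS total_amount
FROM sales
GROUP BY region
ORDER BY total_amount DESC;
```

In a Fabric lakehouse, tables created this way also become queryable through the SQL analytics endpoint, which is the workflow the course exercises walk through.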
Prerequisites to follow the Implementing a Lakehouse with Microsoft Fabric training
To follow this training, you should already have some familiarity with Fabric, as well as with basic data concepts and terminology. You should also have experience with data platforms and, in any case, be able to read SQL well.
Course outline
- Introduction to end-to-end analytics using Microsoft Fabric
- Get started with lakehouses in Microsoft Fabric
- Use Apache Spark in Microsoft Fabric
- Work with Delta Lake tables in Microsoft Fabric
- Ingest Data with Dataflows Gen2 in Microsoft Fabric
- Use Data Factory pipelines in Microsoft Fabric
Course material
In the course "DP-601T00: Implementing a Lakehouse with Microsoft Fabric" we use Microsoft Official Courseware. We will ensure that you receive this (digital) material at the start of the course.
Available dates
Title | Date
---|---
DP-601 |