
DP-600 Study Guide

Domain 1: Maintain a Data Analytics Solution

  • Workspace Access Controls
  • Row-Level & Object-Level Security
  • Sensitivity Labels & Endorsement
  • Git Version Control in Fabric
  • Deployment Pipelines: Dev → Test → Prod
  • Impact Analysis & Dependencies
  • XMLA Endpoint & Reusable Assets

Domain 2: Prepare Data

  • Microsoft Fabric: The Big Picture Free
  • Lakehouses: Your Data Foundation Free
  • Warehouses in Fabric Free
  • Choosing the Right Data Store Free
  • Data Connections & OneLake Catalog
  • Shortcuts & OneLake Integration
  • Ingesting Data: Dataflows Gen2 & Pipelines
  • Star Schema Design Free
  • SQL Objects: Views, Functions & Stored Procedures
  • Transforming Data: Reshape & Enrich
  • Data Quality & Cleansing
  • Querying with SQL
  • Querying with KQL
  • Querying with DAX

Domain 3: Implement and Manage Semantic Models

  • Semantic Models: Storage Modes
  • Relationships & Advanced Modeling
  • DAX Essentials: Variables & Functions
  • Calculation Groups & Field Parameters
  • Large Models & Composite Models
  • Direct Lake Mode
  • DAX Performance Optimization
  • Incremental Refresh

Domain 1: Maintain a Data Analytics Solution (Premium) · ~13 min read

Deployment Pipelines: Dev → Test → Prod

Promote Fabric items through environments with confidence: deployment pipelines, stage configuration, and deployment rules.

What are deployment pipelines?

☕ Simple explanation

Think of deployment pipelines like a quality control line in a factory.

A new product (report, model) starts in the development workshop. After testing, it moves to the test floor for quality checks. Once approved, it goes to the production warehouse where customers (business users) access it.

Fabric deployment pipelines automate this: Dev workspace → Test workspace → Production workspace. Each promotion copies items and can swap data connections (dev data → prod data).

Fabric deployment pipelines manage the promotion of items across workspace stages (Development, Test, Production). They compare items between stages, show what has changed, and deploy selected items forward. Deployment rules can change parameters (data source connections, connection strings) per stage — so dev uses test data while production uses real data.

Deployment pipelines are NOT the same as Azure DevOps pipelines or GitHub Actions. They are a Fabric-native UI for workspace-to-workspace promotion.

Pipeline stages

| Stage | Purpose | Audience |
| --- | --- | --- |
| Development | Build and iterate on new features | Data engineers, report developers |
| Test | Validate with production-like data | QA team, business stakeholders |
| Production | Live environment for business users | All report consumers |

Creating a deployment pipeline

  1. Go to Fabric portal → Deployment pipelines
  2. Create pipeline and name it
  3. Assign workspaces to each stage (Dev, Test, Prod)
  4. Compare — the pipeline shows differences between stages
  5. Deploy — promote items from one stage to the next
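
The same promotion can be scripted against the deployment-pipelines REST API (`POST .../pipelines/{pipelineId}/deployAll`). The sketch below only builds the request; the pipeline ID and token are placeholders, and option names should be verified against the current Microsoft REST API reference.

```python
# Sketch: build a deploy-all request for the deployment-pipelines
# REST API. The pipeline ID and token below are placeholders --
# verify the endpoint and option names against the current
# Microsoft REST API reference before relying on this.
import json
from urllib.request import Request

API_BASE = "https://api.powerbi.com/v1.0/myorg"

def build_deploy_request(pipeline_id: str, token: str, source_stage: int) -> Request:
    """Build a request promoting all items out of `source_stage`
    (0 = Development, 1 = Test)."""
    body = {
        "sourceStageOrder": source_stage,
        "options": {
            "allowCreateArtifact": True,     # create items missing in the target stage
            "allowOverwriteArtifact": True,  # update items that already exist there
        },
    }
    return Request(
        url=f"{API_BASE}/pipelines/{pipeline_id}/deployAll",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Promote everything from Dev (stage 0) to Test:
req = build_deploy_request("00000000-0000-0000-0000-000000000000", "<token>", 0)
print(req.full_url)  # prints .../pipelines/00000000-.../deployAll
```

Sending the request (e.g. with `urllib.request.urlopen`) requires a valid Microsoft Entra access token; in practice this is where a CI/CD job would plug in.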

What gets deployed

Most Fabric item types can be deployed: semantic models, reports, dashboards, lakehouses (metadata), warehouses (metadata), dataflows, pipelines, notebooks.

Deployment rules

Deployment rules change item parameters per stage — crucial for data source separation:

| Rule Type | What It Changes | Example |
| --- | --- | --- |
| Data source | Connection string or server name | Dev connects to sql-dev.fabric.com; Prod connects to sql-prod.fabric.com |
| Parameter | Power Query M parameter values | ServerName = "dev-server" → "prod-server" |
| Lakehouse | Which lakehouse an item points to | Dev lakehouse → Prod lakehouse |

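
Conceptually, a deployment rule is a per-stage lookup applied to an item's parameters at deployment time. The sketch below is an illustration of that idea, not Fabric's implementation; the stage names, parameter name, and server names are invented examples.

```python
# Illustrative model of a deployment rule: substitute per-stage
# parameter values when an item is promoted. Stage names, the
# "ServerName" parameter, and the server addresses are invented
# examples, not Fabric's actual implementation.
RULES = {
    # target stage -> {parameter name -> value to apply in that stage}
    "Test": {"ServerName": "sql-test.fabric.com"},
    "Production": {"ServerName": "sql-prod.fabric.com"},
}

def apply_rules(item_params: dict, target_stage: str) -> dict:
    """Return the item's parameters with the target stage's rules applied."""
    updated = dict(item_params)
    updated.update(RULES.get(target_stage, {}))
    return updated

dev_params = {"ServerName": "sql-dev.fabric.com", "Database": "sales"}
print(apply_rules(dev_params, "Production"))
# {'ServerName': 'sql-prod.fabric.com', 'Database': 'sales'}
```

The key point the sketch captures: rules are configured once per stage, then applied automatically on every deployment, so no one edits connections by hand.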
💡 Scenario: James promotes a client report

James at Summit Consulting builds a new revenue dashboard in the Dev workspace using test data. After review:

  1. He deploys from Dev → Test (the pipeline copies the report and model)
  2. Deployment rules swap the data connection from lakehouse-dev to lakehouse-test
  3. The QA team validates with production-like data
  4. James deploys from Test → Prod (data connection swaps to lakehouse-prod)

Business users see the new dashboard in the Prod workspace — connected to real data — without any manual configuration.

💡 Exam tip: Deployment pipelines vs Azure DevOps

The exam may ask you to differentiate:

  • Fabric deployment pipelines = Fabric-native, workspace-to-workspace promotion, UI-based
  • Azure DevOps / GitHub Actions = CI/CD automation, code-based, can call Fabric REST APIs
  • Git integration = version control, tracks changes, enables branching

These tools complement each other: Git tracks changes, deployment pipelines promote items, and CI/CD automates the process.

Question

What do Fabric deployment pipelines do?

Answer

Promote items across workspace stages (Dev → Test → Prod). They compare stages, show changes, and deploy selected items. Deployment rules swap data connections per stage.

Question

What are deployment rules?

Answer

Rules that change item parameters per stage. Example: data source connection swaps from dev-server to prod-server during deployment. This ensures each stage uses appropriate data without manual reconfiguration.

Knowledge Check

James deploys a semantic model from Dev to Prod. The model connects to lakehouse-dev. What ensures it connects to lakehouse-prod after deployment?

Question

What can deployment rules change per stage?

Answer

Deployment rules change three types of parameters when items move between stages: (1) Data source connection strings — swap dev to prod server, (2) Power Query M parameter values — change server names or file paths, (3) Lakehouse references — point items to a different lakehouse per stage. Rules are configured once and applied automatically on every deployment.

Knowledge Check

James deploys a report from Dev to Test. The report's semantic model uses lakehouse-dev. In Test, the data should come from lakehouse-test. What enables this automatic swap?



© 2026 Sutheesh. All rights reserved.

Guided is an independent study resource and is not affiliated with, endorsed by, or officially connected to Microsoft. Microsoft, Azure, and related trademarks are property of Microsoft Corporation. Always verify information against Microsoft Learn.