Deployment Pipelines: Dev → Test → Prod
Promote Fabric items through environments with confidence. This section covers deployment pipelines, stage configuration, and deployment rules.
What are deployment pipelines?
Think of deployment pipelines like a quality control line in a factory.
A new product (report, model) starts in the development workshop. After testing, it moves to the test floor for quality checks. Once approved, it goes to the production warehouse where customers (business users) access it.
Fabric deployment pipelines automate this: Dev workspace → Test workspace → Production workspace. Each promotion copies items and can swap data connections (dev data → prod data).
Pipeline stages
| Stage | Purpose | Audience |
|---|---|---|
| Development | Build and iterate on new features | Data engineers, report developers |
| Test | Validate with production-like data | QA team, business stakeholders |
| Production | Live environment for business users | All report consumers |
Creating a deployment pipeline
- Go to Fabric portal → Deployment pipelines
- Create pipeline and name it
- Assign workspaces to each stage (Dev, Test, Prod)
- Compare — the pipeline shows differences between stages
- Deploy — promote items from one stage to the next
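The same promotion can be scripted against the deployment-pipelines REST API. The sketch below only builds the request URL and body; the endpoint path, base URL, and payload fields are assumptions modeled on the public deployment-pipelines API shape, so verify them against the current Fabric REST API reference before use.

```python
# Sketch: building a "deploy" request that promotes all items one stage
# forward. Endpoint path and payload fields are assumptions -- check the
# current Fabric REST API docs before calling this for real.

BASE_URL = "https://api.fabric.microsoft.com/v1"  # assumed base URL


def build_deploy_request(pipeline_id: str, source_stage_order: int, note: str = ""):
    """Build the URL and JSON body for promoting items one stage forward.

    source_stage_order: 0 = Dev, 1 = Test (stage N deploys into stage N+1).
    """
    url = f"{BASE_URL}/deploymentPipelines/{pipeline_id}/deploy"
    body = {
        "sourceStageOrder": source_stage_order,  # promote FROM this stage
        "note": note,                            # appears in deployment history
    }
    return url, body


# Example: promote everything from Dev (stage 0) into Test (stage 1).
url, body = build_deploy_request("pipeline-1234", 0, note="Revenue dashboard v2")
```

A CI job would send this body as JSON with a bearer token; the pipeline ID used here is hypothetical.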
What gets deployed
Deployment pipelines support most Fabric item types: semantic models, reports, dashboards, lakehouses (metadata only — data is not copied), warehouses (metadata only), dataflows, pipelines, notebooks.
Deployment rules
Deployment rules change item parameters per stage — crucial for data source separation:
| Rule Type | What It Changes | Example |
|---|---|---|
| Data source | Connection string or server name | Dev connects to sql-dev.fabric.com, Prod connects to sql-prod.fabric.com |
| Parameter | Power Query M parameter values | ServerName = "dev-server" → "prod-server" |
| Lakehouse | Which lakehouse an item points to | Dev lakehouse → Prod lakehouse |
Scenario: James promotes a client report
James at Summit Consulting builds a new revenue dashboard in the Dev workspace using test data. After review:
- He deploys from Dev → Test (the pipeline copies the report and model)
- Deployment rules swap the data connection from lakehouse-dev to lakehouse-test
- The QA team validates with production-like data
- James deploys from Test → Prod (the data connection swaps to lakehouse-prod)
Business users see the new dashboard in the Prod workspace — connected to real data — without any manual configuration.
Exam tip: Deployment pipelines vs Azure DevOps
The exam may ask you to differentiate:
- Fabric deployment pipelines = Fabric-native, workspace-to-workspace promotion, UI-based
- Azure DevOps / GitHub Actions = CI/CD automation, code-based, can call Fabric REST APIs
- Git integration = version control, tracks changes, enables branching
These tools complement each other: Git tracks changes, deployment pipelines promote items, and CI/CD automates the process.
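One common way the three fit together: Git tracks the change, a CI job (Azure DevOps or GitHub Actions) watches for merges, maps the merged branch to a pipeline stage, and then calls the Fabric REST deploy endpoint. The branch names and stage mapping below are assumptions for illustration:

```python
# Sketch: a CI job mapping the merged Git branch to the deployment-pipeline
# stage it should promote from. Branch names and stage numbers are
# illustrative assumptions, not a Fabric convention.

BRANCH_TO_SOURCE_STAGE = {
    "develop": 0,  # merge to develop -> promote Dev (0) into Test (1)
    "main": 1,     # merge to main    -> promote Test (1) into Prod (2)
}


def stage_for_branch(branch: str):
    """Return the source stage a CI run should deploy from, or None."""
    return BRANCH_TO_SOURCE_STAGE.get(branch)


# The CI step would then POST to the deployment-pipelines deploy endpoint
# with this value as sourceStageOrder (see the Fabric REST API docs).
```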
Practice questions
- James deploys a semantic model from Dev to Prod. The model connects to lakehouse-dev. What ensures it connects to lakehouse-prod after deployment? Answer: a deployment rule (data source or lakehouse rule) configured on the Prod stage.
- James deploys a report from Dev to Test. The report's semantic model uses lakehouse-dev. In Test, the data should come from lakehouse-test. What enables this automatic swap? Answer: a deployment rule configured on the Test stage.
Next up: Impact Analysis & Dependencies