Deployment Pipelines: Dev to Production
Create and configure deployment pipelines to promote Fabric content safely across development, test, and production workspaces.
What are deployment pipelines?
Think of a restaurant kitchen with three stations.
Station 1 (Dev) is where the chef experiments with new recipes. Station 2 (Test) is where a taster checks the dish. Station 3 (Production) is where it goes to the customer's table.
A deployment pipeline in Fabric is that three-station system for your analytics content. You build in Dev, validate in Test, and promote to Production. The pipeline handles the transfer; you don't manually copy anything. If a pipeline breaks in Test, Production stays untouched.
How deployment pipelines work
The stage model
┌───────────┐   Deploy    ┌───────────┐   Deploy    ┌──────────────┐
│    Dev    │ ──────────▶ │   Test    │ ──────────▶ │  Production  │
│ Workspace │             │ Workspace │             │  Workspace   │
└───────────┘             └───────────┘             └──────────────┘
- Each stage is linked to exactly one Fabric workspace
- Deploy forward promotes content from one stage to the next
- Deploy backward is also supported (e.g., resetting Test from Production)
- Up to 10 stages are supported; most teams use 3 (Dev/Test/Prod)
What happens during deployment
| Step | Action |
|---|---|
| 1. Compare | Fabric compares items in source and target stages |
| 2. Diff | Shows you what's new, modified, deleted, or unchanged |
| 3. Review | You decide which items to include in this deployment |
| 4. Deploy | Selected items are copied to the target workspace |
| 5. Rules applied | Deployment rules swap parameters, data sources, connections |
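Steps 3 and 4 can be sketched as building the request body for a selective deployment. This is an illustrative Python sketch: the field names (`sourceStageId`, `targetStageId`, `items`, `note`) follow the public Fabric REST API for deployment pipelines, but verify them against the current API reference before relying on them.

```python
# Sketch: build the body for a selective stage-to-stage deployment.
# Field names are assumptions based on the Fabric REST API docs.

def build_deploy_body(source_stage_id, target_stage_id, items, note=""):
    """Return the JSON body for a deployment.

    `items` is a list of (item_id, item_type) tuples chosen after
    reviewing the comparison diff (step 3); an empty list means
    deploy all changed content.
    """
    body = {
        "sourceStageId": source_stage_id,
        "targetStageId": target_stage_id,
        "note": note,
    }
    if items:  # omit "items" entirely to deploy everything that changed
        body["items"] = [
            {"itemId": item_id, "itemType": item_type}
            for item_id, item_type in items
        ]
    return body

# Example: promote only the reviewed notebook and data pipeline
body = build_deploy_body(
    "dev-stage-guid", "test-stage-guid",
    items=[("nb-guid", "Notebook"), ("pl-guid", "DataPipeline")],
    note="Friday release",
)
```

The GUIDs here are placeholders; in practice you would read stage and item IDs from the pipeline's list endpoints.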
Deployment rules
Deployment rules are the key to making the same content work across environments. Without them, your production pipeline would try to read from your dev database.
| Rule Type | Example |
|---|---|
| Data source rules | Dev reads from dev-sqlserver.database.windows.net; Prod reads from prod-sqlserver.database.windows.net |
| Parameter rules | environment parameter = "dev" in Dev, "prod" in Production |
| Lakehouse rules | Dev pipeline loads to dev-lakehouse; Prod loads to prod-lakehouse |
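Conceptually, a deployment rule is a per-stage lookup: the same item carries a logical setting, and the rule substitutes the environment-specific value at deploy time. The sketch below uses the server and lakehouse names from the table above; the mapping structure itself is illustrative, not Fabric's internal representation.

```python
# Conceptual sketch of deployment rules: one mapping per stage,
# substituted into the deployed item so identical content works
# against different environments.

STAGE_RULES = {
    "Dev": {
        "sql_server": "dev-sqlserver.database.windows.net",
        "environment": "dev",
        "lakehouse": "dev-lakehouse",
    },
    "Production": {
        "sql_server": "prod-sqlserver.database.windows.net",
        "environment": "prod",
        "lakehouse": "prod-lakehouse",
    },
}

def resolve(stage, setting):
    """Return the value a deployment rule would substitute in `stage`."""
    return STAGE_RULES[stage][setting]
```

Without the Production entry, a deploy would carry the Dev values forward unchanged, which is exactly the failure mode described above.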
Scenario: Ibrahim's three-stage pipeline
Ibrahim configures a deployment pipeline for Nexus Financial's risk analytics:
- Dev workspace: Engineers iterate on notebooks and pipelines. Data source: a sample of 10,000 trades.
- Test workspace: QA runs the full pipeline against a copy of production data. Deployment rules swap the data source to the test database.
- Production workspace: Serves the risk dashboard to traders. Deployment rules point to the live trading database.
Every Friday, the lead engineer reviews changes in Dev, deploys to Test, runs validation overnight, and if tests pass, deploys to Production on Monday morning.
Git integration + deployment pipelines
These two features solve different problems and work best together.
| Aspect | Git Integration | Deployment Pipelines |
|---|---|---|
| Purpose | Version control: track who changed what and when | Release management: promote content between environments |
| Main action | Commit/update (sync workspace with repo) | Deploy (copy items between stages) |
| Rollback method | Revert to a previous Git commit | Deploy backward from a known-good stage |
| Environment config | Branch per environment (dev, main) | Deployment rules swap data sources and parameters |
| Review process | Pull requests with code review | Deployment comparison shows diff between stages |
| Best for | Collaboration, audit trail, branching | Controlled releases, environment-specific config |
Exam tip: When to use which
Exam questions often describe a scenario and ask which tool solves it:
- "An engineer needs to see what changed last week" → Git integration (commit history)
- "The team needs to promote tested changes to production" → Deployment pipeline
- "Two engineers changed the same notebook" → Git integration (branch + merge via PR)
- "Production pipeline needs to read from a different server than Dev" → Deployment rules
- "The team wants to automate deployment after a PR is merged" → Git integration triggers a deployment pipeline (CI/CD)
Automating deployments
Deployment pipelines have a REST API, which means you can trigger deployments from external CI/CD tools:
- Azure DevOps Pipelines: a merged PR triggers a run that calls the Fabric deployment pipeline API, promoting content to Test
- GitHub Actions: same pattern, different tool
- Scheduled releases: call the same API on a timer (e.g., every Monday at 6 AM) from either tool
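A scheduled CI job can trigger the deployment with a single authenticated POST. The sketch below uses only the Python standard library; the endpoint path and body shape follow the public Fabric REST API for deployment pipelines but should be verified against the current reference, and `FABRIC_TOKEN` is a placeholder for a real Microsoft Entra access token supplied by your CI system.

```python
# Sketch: trigger a Fabric stage-to-stage deployment from a CI job.
# Endpoint path and body are assumptions to verify against the
# Fabric REST API reference; FABRIC_TOKEN is a placeholder secret.

import json
import os
import urllib.request

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def deploy_request(pipeline_id, source_stage_id, target_stage_id, note):
    """Build the (url, body) pair for a deploy call; no network I/O."""
    url = f"{FABRIC_API}/deploymentPipelines/{pipeline_id}/deploy"
    body = {
        "sourceStageId": source_stage_id,
        "targetStageId": target_stage_id,
        "note": note,
    }
    return url, body

def deploy(pipeline_id, source_stage_id, target_stage_id, note=""):
    """POST the deployment; returns the HTTP status code."""
    url, body = deploy_request(
        pipeline_id, source_stage_id, target_stage_id, note
    )
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        method="POST",
        headers={
            "Authorization": f"Bearer {os.environ['FABRIC_TOKEN']}",
            "Content-Type": "application/json",
        },
    )
    # Deployment is a long-running operation; in practice, poll the
    # operation URL returned in the response headers until it completes.
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

In Azure DevOps or GitHub Actions, the schedule (cron) lives in the pipeline YAML and the job simply runs this script.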
Scenario: Carlos automates Friday deployments
Carlos sets up an Azure DevOps pipeline for Precision Manufacturing's Fabric workspace:
- Engineers commit changes to the dev branch during the week
- Friday at 3 PM, a scheduled Azure DevOps pipeline runs
- It calls the Fabric deployment pipeline API to promote Dev → Test
- Overnight, automated tests validate the ETL outputs
- Monday at 7 AM, if tests pass, a second pipeline promotes Test → Production
No manual clicks. The entire release cycle is automated and auditable.
A data engineer deploys a pipeline from Dev to Production without configuring deployment rules. The production pipeline starts pulling data from the development database. What should the engineer have done?
Ibrahim wants to automate the promotion of Fabric content from Test to Production every Monday at 7 AM, but only if weekend tests pass. Which approach is most appropriate?
Next up: Access Controls: Who Gets In, where you'll configure workspace and item-level permissions to control access.