
DP-800 Study Guide

Domain 1: Design and Develop Database Solutions

  • Table Design: Columns, Types, and Indexes Free
  • Constraints: Protecting Your Data Free
  • Specialised Tables and Graph Queries Free
  • JSON in SQL: Store, Query, and Index
  • Programmability: Views, Functions, Procedures, and Triggers
  • Advanced T-SQL: CTEs, Windows, and Correlated Queries
  • Pattern Power: Regular Expressions in T-SQL
  • Fuzzy String Matching: Finding Similar Text
  • AI-Assisted SQL with GitHub Copilot Free
  • MCP: Connecting AI to Your Database

Domain 2: Secure, Optimize, and Deploy Database Solutions

  • Encryption: Always Encrypted and Column-Level
  • Dynamic Data Masking and Row-Level Security
  • Permissions, Auditing, and Passwordless Access
  • Transaction Isolation and Concurrency
  • Query Performance: Plans, DMVs, and Query Store
  • SQL Database Projects: Build and Validate
  • CI/CD Pipelines for SQL Databases
  • Data API Builder: REST and GraphQL from SQL
  • Securing AI and API Endpoints
  • Change Detection: CES, CDC, and Change Tracking
  • Azure Integration and Monitoring

Domain 3: Implement AI Capabilities in Database Solutions

  • External AI Models: Choose, Create, Manage
  • Embeddings: Design, Chunk, and Generate
  • Choose Your Search Strategy
  • Vector Data: Types, Indexes, and Storage
  • Vector Search: Distance, ANN, and ENN
  • Hybrid Search and Reciprocal Rank Fusion
  • RAG with SQL: Prompt, Process, Respond

Domain 2: Secure, Optimize, and Deploy Database Solutions (~14 min read)

CI/CD Pipelines for SQL Databases

Automate database deployments with CI/CD pipelines — branching strategies, schema drift detection, secrets management, and deployment controls for SQL Database Projects.

From manual scripts to automated deployments

☕ Simple explanation

Think of a restaurant kitchen with a head chef.

Without CI/CD, deploying a database change is like each cook walking up to the stove and adding ingredients whenever they feel like it. Nobody checks if the dish still tastes good before serving it to customers.

A CI/CD pipeline is the head chef. Every ingredient (code change) goes through tasting (build and test), gets approved (pull request review), and only then reaches the customer (production). If the dish is wrong, the head chef stops it before it leaves the kitchen.

CI (Continuous Integration) = taste-testing every change. CD (Continuous Deployment) = serving approved dishes automatically.

CI/CD for SQL databases extends the DevOps practices already standard for application code. The pipeline builds your SQL Database Project (.dacpac), runs schema validation and tests, detects drift between the project and target databases, and deploys changes using SqlPackage. Key differences from application CI/CD: database changes are stateful (you cannot just replace the old version), deployments generate ALTER scripts (not clean installs), and rollback requires reverse migration scripts rather than redeploying a previous build.
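The build-test-deploy flow described above can be sketched as a minimal GitHub Actions CI workflow. This is an illustrative sketch, not the course's official pipeline: the workflow name, job layout, and the VaultBankDB project path are assumptions based on the examples later in this lesson.

```yaml
# Sketch: CI workflow that builds the SQL Database Project on every pull request.
name: sql-ci
on:
  pull_request:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build SQL Database Project into a .dacpac
        run: dotnet build VaultBankDB/VaultBankDB.sqlproj -c Release
      - name: Publish the .dacpac as a pipeline artifact
        uses: actions/upload-artifact@v4
        with:
          name: dacpac
          path: VaultBankDB/bin/Release/*.dacpac
```

A CD workflow would then download this artifact and hand it to SqlPackage, as shown in the deployment sections below.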

Branching strategies for databases

Not every branching model works well for databases. Application code is stateless — you can switch branches freely. Database schemas are stateful — merging conflicting column changes is painful.

| Strategy | How It Works | Best For |
|---|---|---|
| Trunk-based | Everyone commits to main with short-lived feature branches (hours, not weeks) | Small teams, frequent releases |
| GitFlow | Long-lived develop and release branches with feature branches | Large teams, scheduled releases |
| Environment branches | Separate branches per environment (dev, staging, prod) | Strict change promotion workflows |

For database projects, trunk-based development with short-lived feature branches is safest. Why? Long-lived branches accumulate schema conflicts. If two developers both add columns to the same table on different branches for weeks, the merge is messy.

Pull request workflow for SQL changes

1. Developer creates feature branch from main
2. Makes schema changes (ALTER TABLE, new stored procedure, etc.)
3. Builds locally: dotnet build VaultBankDB.sqlproj
4. Opens pull request → triggers CI pipeline
5. CI pipeline: build → test → schema compare → report
6. Code review by database owner (CODEOWNERS)
7. Approval gate passed → merge to main
8. CD pipeline deploys .dacpac to target environment
💡 Exam tip: Conflict resolution in SQL projects

When two branches modify the same database object, Git shows a merge conflict in the .sql file. Resolution follows the same process as application code — edit the conflicting file to include both changes (or pick one), then build the project to validate the combined schema. The key exam point: always build after resolving conflicts to catch subtle issues like duplicate column names or broken foreign key references.
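To make the exam tip concrete, here is a runnable sketch of a schema merge conflict and its resolution in a throwaway Git repository. The table, columns, and branch names are invented for illustration:

```shell
# Throwaway repo: two branches add different columns to the same table.
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main
git config user.email dev@example.com
git config user.name Dev

printf 'CREATE TABLE Accounts (\n  Id INT\n);\n' > Accounts.sql
git add . && git commit -qm "baseline schema"

git switch -qc feature/email
printf 'CREATE TABLE Accounts (\n  Id INT,\n  Email NVARCHAR(256)\n);\n' > Accounts.sql
git commit -qam "add Email column"

git switch -q main
printf 'CREATE TABLE Accounts (\n  Id INT,\n  Phone NVARCHAR(32)\n);\n' > Accounts.sql
git commit -qam "add Phone column"

# Both branches edited the same lines, so this merge conflicts.
git merge feature/email || echo "conflict, resolving manually"

# Resolve by keeping BOTH columns, then commit. In a real project you would
# now rebuild (dotnet build) to validate the combined schema.
printf 'CREATE TABLE Accounts (\n  Id INT,\n  Phone NVARCHAR(32),\n  Email NVARCHAR(256)\n);\n' > Accounts.sql
git add Accounts.sql && git commit -qm "merge: keep Phone and Email"
```

The final commit concludes the merge; the rebuild step is what catches subtle issues such as duplicate column names that a textual merge alone would miss.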

Schema drift detection

Schema drift happens when someone makes a change directly on the production database without going through source control. Your SQL Database Project says the table has 10 columns, but production has 11.

SqlPackage detects drift by comparing the .dacpac (what should be deployed) against the live database:

# Generate a drift report (no changes applied)
SqlPackage /a:DeployReport \
  /sf:bin/Release/VaultBankDB.dacpac \
  /tcs:"Server=vaultbank.database.windows.net;Database=VaultBankDB;Authentication=Active Directory Default" \
  /OutputPath:drift-report.xml

The report shows every difference — added columns, modified procedures, missing indexes. Your pipeline can fail the build if drift is detected, forcing the team to reconcile before deploying.

# GitHub Actions step to detect drift
- name: Check for schema drift
  run: |
    SqlPackage /a:DeployReport \
      /sf:VaultBankDB/bin/Release/VaultBankDB.dacpac \
      /tcs:"$CONN_STRING" \
      /OutputPath:drift-report.xml
    # Fail if report contains unexpected changes
    if grep -q "<Operation Name" drift-report.xml; then
      echo "Schema drift detected! Review drift-report.xml"
      exit 1
    fi
💡 Scenario: Priya catches a rogue index at Vault Bank

A DBA at Vault Bank created an index directly on the production database to fix a slow query over the weekend. On Monday, the CI/CD pipeline flags schema drift — the index exists in production but not in the SQL Database Project. Priya adds the index to the project, submits a pull request, and the team reviews whether the index is appropriate for all environments. The pipeline passes and the drift is resolved.

This is exactly why drift detection matters — it catches well-intentioned but untracked changes before they cause deployment surprises.

Secrets management

Database connection strings contain sensitive information. Hard-coding them in pipeline files is a security risk.

| Secret Store | Integration | Best For |
|---|---|---|
| GitHub Secrets | Native in GitHub Actions, referenced as environment variables | GitHub-hosted pipelines |
| Azure DevOps Variable Groups | Linked to pipelines, supports Key Vault integration | Azure DevOps pipelines |
| Azure Key Vault | Central secret store, accessed via managed identity or service principal | Enterprise environments, shared secrets across pipelines |

GitHub Actions with Key Vault

- name: Get connection string from Key Vault
  uses: Azure/get-keyvault-secrets@v1
  with:
    keyvault: "vaultbank-kv"
    secrets: "SqlConnectionString"
  id: kv-secrets

- name: Deploy database
  run: |
    SqlPackage /a:Publish \
      /sf:VaultBankDB/bin/Release/VaultBankDB.dacpac \
      /tcs:"$SQLCONNECTIONSTRING"
  env:
    SQLCONNECTIONSTRING: ${{ steps.kv-secrets.outputs.SqlConnectionString }}

Never store connection strings, passwords, or API keys in:

  • Pipeline YAML files committed to the repository
  • Application settings files checked into source control
  • Build logs (mask variables in pipeline output)
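On the last point: repository secrets are masked automatically in GitHub Actions logs, but values fetched at runtime (for example, from Key Vault) may not be. The `::add-mask::` workflow command redacts such a value from all subsequent log output. A sketch, reusing the `kv-secrets` step from the example above:

```yaml
- name: Mask the runtime-fetched connection string in logs
  run: echo "::add-mask::$DB_CONN"
  env:
    DB_CONN: ${{ steps.kv-secrets.outputs.SqlConnectionString }}
```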

Pipeline controls

The exam expects you to know the gates and guardrails that protect production deployments.

Branch protection rules

Branch: main
├── Require pull request before merging
├── Require at least 2 approvals
├── Require status checks to pass (build, test, drift check)
├── Require conversation resolution
└── Restrict who can push (no direct commits)
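These rules can also be applied from the command line through the GitHub REST branch-protection endpoint. A hedged sketch using the gh CLI; the repository name and status-check contexts are placeholders, and your organisation's required checks will differ:

```shell
# Sketch: apply the branch protection rules above via the GitHub REST API.
gh api -X PUT repos/vaultbank/vaultbank-db/branches/main/protection \
  --input - <<'JSON'
{
  "required_status_checks": { "strict": true, "contexts": ["build", "test", "drift-check"] },
  "enforce_admins": true,
  "required_pull_request_reviews": { "required_approving_review_count": 2 },
  "restrictions": null
}
JSON
```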

CODEOWNERS

A CODEOWNERS file assigns automatic reviewers based on file paths:

# .github/CODEOWNERS
# Database schema changes require DBA team review
/VaultBankDB/Tables/     @vaultbank/dba-team
/VaultBankDB/Security/   @vaultbank/dba-team @vaultbank/security

# Stored procedures can be reviewed by backend devs
/VaultBankDB/StoredProcedures/  @vaultbank/backend-team

# Pipeline changes require DevOps review
/.github/workflows/      @vaultbank/devops-team

Approval gates and triggers

Pipeline controls for database deployments
| Control | What It Does | Where to Configure |
|---|---|---|
| Branch policy | Requires PR, reviewers, and passing checks before merge | Repository settings or Azure DevOps branch policies |
| CODEOWNERS | Auto-assigns reviewers based on changed file paths | CODEOWNERS file in repository root |
| Environment protection | Requires manual approval before deploying to specific environments | GitHub Environments or Azure DevOps Environments |
| Deployment gates | Automated checks (health check, query performance baseline) before proceeding | Azure DevOps release gates or GitHub Actions environment rules |
| Scheduled triggers | Pipelines run at specific times (e.g., deploy to prod only during maintenance windows) | Cron schedule in pipeline YAML |
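The last two controls combine naturally in one workflow: deploy to production only on a schedule, into an environment that requires manual approval. An illustrative sketch; the cron expression, environment name, and secret name are assumptions:

```yaml
# Sketch: production deployment gated by schedule AND environment approval.
name: sql-cd-prod
on:
  schedule:
    - cron: '0 2 * * 0'        # Sundays 02:00 UTC maintenance window
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production     # pauses here if the environment requires approval
    steps:
      - uses: actions/checkout@v4
      - name: Build the project
        run: dotnet build VaultBankDB/VaultBankDB.sqlproj -c Release
      - name: Deploy with SqlPackage
        run: |
          SqlPackage /a:Publish \
            /sf:VaultBankDB/bin/Release/VaultBankDB.dacpac \
            /tcs:"$CONN_STRING" \
            /p:BlockOnPossibleDataLoss=true
        env:
          CONN_STRING: ${{ secrets.SqlConnectionString }}
```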

Deploying with SqlPackage

SqlPackage is the engine that applies .dacpac changes to a target database. It compares the model to the target, generates ALTER scripts, and executes them.

# Publish (deploy) the .dacpac to a target database
SqlPackage /a:Publish \
  /sf:VaultBankDB/bin/Release/VaultBankDB.dacpac \
  /tcs:"Server=vaultbank.database.windows.net;Database=VaultBankDB;Authentication=Active Directory Default" \
  /p:BlockOnPossibleDataLoss=true \
  /p:DropObjectsNotInSource=false

Key parameters:

  • BlockOnPossibleDataLoss=true — stops deployment if a change would drop a column with data
  • DropObjectsNotInSource=false — keeps objects that exist in the target but not in the project (safer for incremental adoption)
💡 Exam tip: SqlPackage actions

Know the four main SqlPackage actions:

  • Publish — deploys .dacpac to a target database (generates and runs ALTER scripts)
  • DeployReport — generates an XML report of what WOULD change without applying anything
  • Script — generates the deployment SQL script without executing it (for manual review)
  • Extract — creates a .dacpac from an existing database (reverse-engineer the schema)

The exam may describe a scenario where a team wants to review changes before applying them. The answer is Script or DeployReport, not Publish.
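The two review-oriented actions can be sketched as commands. Server, database, and output file names follow the Vault Bank examples above and are illustrative:

```shell
# Script: write the deployment T-SQL to a file for manual review; nothing is executed.
SqlPackage /a:Script \
  /sf:VaultBankDB/bin/Release/VaultBankDB.dacpac \
  /tcs:"Server=vaultbank.database.windows.net;Database=VaultBankDB;Authentication=Active Directory Default" \
  /OutputPath:deploy-preview.sql

# Extract: reverse-engineer an existing database into a .dacpac.
SqlPackage /a:Extract \
  /scs:"Server=vaultbank.database.windows.net;Database=VaultBankDB;Authentication=Active Directory Default" \
  /tf:VaultBankDB-extracted.dacpac
```

Note the direction of the flags: Script reads a source file (/sf) and targets a connection (/tcs), while Extract reads a source connection (/scs) and writes a target file (/tf).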

Question

Why is trunk-based development preferred for database projects?

Answer

Long-lived branches accumulate schema conflicts that are difficult to merge. If two developers modify the same table on separate branches for weeks, reconciling the changes is error-prone. Trunk-based development with short-lived branches (merged within hours or days) keeps conflicts small and manageable.

Question

What is schema drift and how do you detect it?

Answer

Schema drift occurs when the live database differs from the SQL Database Project definition — usually because someone made a change directly on the server. Detect it by running SqlPackage with the DeployReport action, which compares the .dacpac to the target and lists every difference. Pipelines can automate this check on every build.

Question

What is a CODEOWNERS file?

Answer

A CODEOWNERS file maps file paths to required reviewers. When a pull request changes files matching a pattern, the specified teams are automatically assigned as reviewers. For database projects, this ensures DBA team review for schema changes and security team review for permission changes.

Question

What does BlockOnPossibleDataLoss do in SqlPackage?

Answer

When set to true, SqlPackage stops the deployment if any change would cause data loss — such as dropping a column that contains data, reducing a column width, or changing a data type. This is a critical safety net for production deployments.

Knowledge Check

Priya's team at Vault Bank wants to ensure that database schema changes cannot reach production without DBA review, passing unit tests, and no schema drift detected. Which combination of controls should she implement?

Knowledge Check

A deployment pipeline runs SqlPackage /a:Publish against the staging database. The command fails with 'Rows were detected. The schema update is terminating because data loss might occur.' What caused this error and how should the team proceed?

Next up: Data API Builder: REST and GraphQL from SQL — expose your database as REST and GraphQL APIs without writing backend code.

Guided

I learn, I simplify, I share.

© 2026 Sutheesh. All rights reserved.

Guided is an independent study resource and is not affiliated with, endorsed by, or officially connected to Microsoft. Microsoft, Azure, and related trademarks are property of Microsoft Corporation. Always verify information against Microsoft Learn.