
DP-800 Study Guide

Domain 1: Design and Develop Database Solutions

  • Table Design: Columns, Types, and Indexes Free
  • Constraints: Protecting Your Data Free
  • Specialised Tables and Graph Queries Free
  • JSON in SQL: Store, Query, and Index
  • Programmability: Views, Functions, Procedures, and Triggers
  • Advanced T-SQL: CTEs, Windows, and Correlated Queries
  • Pattern Power: Regular Expressions in T-SQL
  • Fuzzy String Matching: Finding Similar Text
  • AI-Assisted SQL with GitHub Copilot Free
  • MCP: Connecting AI to Your Database

Domain 2: Secure, Optimize, and Deploy Database Solutions

  • Encryption: Always Encrypted and Column-Level
  • Dynamic Data Masking and Row-Level Security
  • Permissions, Auditing, and Passwordless Access
  • Transaction Isolation and Concurrency
  • Query Performance: Plans, DMVs, and Query Store
  • SQL Database Projects: Build and Validate
  • CI/CD Pipelines for SQL Databases
  • Data API Builder: REST and GraphQL from SQL
  • Securing AI and API Endpoints
  • Change Detection: CES, CDC, and Change Tracking
  • Azure Integration and Monitoring

Domain 3: Implement AI Capabilities in Database Solutions

  • External AI Models: Choose, Create, Manage
  • Embeddings: Design, Chunk, and Generate
  • Choose Your Search Strategy
  • Vector Data: Types, Indexes, and Storage
  • Vector Search: Distance, ANN, and ENN
  • Hybrid Search and Reciprocal Rank Fusion
  • RAG with SQL: Prompt, Process, Respond

Domain 2: Secure, Optimize, and Deploy Database Solutions

SQL Database Projects: Build and Validate

Manage your database schema as code with SQL Database Projects — create SDK-style models, validate builds, manage reference data, and implement testing strategies.

Your database deserves source control too

☕ Simple explanation

Think of a construction blueprint.

Application developers keep their code in Git — every change tracked, every version recoverable. But database teams often make changes directly on the live server with no record of what changed or why. That is like building a skyscraper without blueprints.

SQL Database Projects are your database blueprint in code. Every table, view, procedure, and index is defined in SQL files stored in Git. When you change something, you build the project (like compiling code) to check for errors BEFORE deploying to the real database.

SQL Database Projects (the successor to SSDT database projects) represent your database schema as a set of SQL files in a project structure. The SDK-style format (in preview at the time of writing) modernises this with simplified project files, cross-platform support, and NuGet-based tooling. The project can be built to validate schema correctness, compared against a target database to detect drift, and deployed to apply changes.

SDK-style SQL Database Projects

The new SDK-style format uses a streamlined .sqlproj file; the DSP property selects the target platform (here, Azure SQL Database):

<Project Sdk="Microsoft.Build.Sql/0.2.0-preview">
  <PropertyGroup>
    <Name>VaultBankDB</Name>
    <DSP>Microsoft.Data.Tools.Schema.Sql.SqlAzureV12DatabaseSchemaProvider</DSP>
  </PropertyGroup>
</Project>

Project structure

VaultBankDB/
├── VaultBankDB.sqlproj          # Project file
├── Tables/
│   ├── Customers.sql            # CREATE TABLE Customers (...)
│   ├── Orders.sql               # CREATE TABLE Orders (...)
│   └── Transactions.sql
├── Views/
│   └── vw_ActiveCustomers.sql
├── StoredProcedures/
│   ├── usp_GetCustomer.sql
│   └── usp_ProcessOrder.sql
├── Security/
│   ├── Roles.sql
│   └── Permissions.sql
├── Data/                        # Reference/static data
│   ├── Countries.sql            # INSERT INTO Countries VALUES (...)
│   └── TransactionTypes.sql
└── Tests/
    ├── test_GetCustomer.sql
    └── test_ProcessOrder.sql
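
Each file in the tree declares exactly one object with a plain CREATE statement; the build resolves references across files, so no ordering is needed. As a minimal sketch of Tables/Customers.sql (the column list mirrors the tSQLt example later in this lesson; the constraint names are illustrative):

```sql
-- Tables/Customers.sql: one CREATE statement per file.
-- The build validates types and references across all files.
CREATE TABLE dbo.Customers
(
    CustomerId INT           NOT NULL CONSTRAINT PK_Customers PRIMARY KEY,
    FullName   NVARCHAR(200) NOT NULL,
    Email      NVARCHAR(320) NOT NULL CONSTRAINT UQ_Customers_Email UNIQUE
);
```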

Building and validating

# Build the project (validates schema correctness)
dotnet build VaultBankDB.sqlproj

# Build output: a .dacpac file (the compiled database model), typically under bin/Debug/

A successful build means: all references resolve, data types match across foreign keys, no syntax errors, and no circular dependencies.
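
Once the build produces a .dacpac, SqlPackage turns it into a deployment. A sketch of the two common actions (server, database, and file paths below are placeholders, not values from this project):

```shell
# Preview what would change, without touching the target (drift/deploy report)
sqlpackage /Action:DeployReport \
    /SourceFile:bin/Debug/VaultBankDB.dacpac \
    /TargetServerName:myserver.database.windows.net \
    /TargetDatabaseName:VaultBankDB \
    /OutputPath:deploy-report.xml

# Apply the changes: SqlPackage generates and runs the ALTER scripts
sqlpackage /Action:Publish \
    /SourceFile:bin/Debug/VaultBankDB.dacpac \
    /TargetServerName:myserver.database.windows.net \
    /TargetDatabaseName:VaultBankDB
```

Running DeployReport in CI before Publish is a common safety gate: the report surfaces destructive changes (dropped columns, data loss warnings) before anything is applied.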

Reference data in source control

Reference data (lookup tables, configuration values) should be in source control alongside the schema:

-- Data/TransactionTypes.sql
MERGE INTO dbo.TransactionTypes AS target
USING (VALUES
    (1, 'Deposit', 'Credit'),
    (2, 'Withdrawal', 'Debit'),
    (3, 'Transfer', 'Both'),
    (4, 'Fee', 'Debit')
) AS source (TypeId, TypeName, Direction)
ON target.TypeId = source.TypeId
WHEN MATCHED THEN UPDATE SET TypeName = source.TypeName, Direction = source.Direction
WHEN NOT MATCHED THEN INSERT (TypeId, TypeName, Direction) VALUES (source.TypeId, source.TypeName, source.Direction);

Use MERGE for idempotent data scripts — they can be run repeatedly without duplicating data.
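
Data scripts like the one above do not run on their own — in a SQL Database Project they are typically wired into a post-deployment script, which runs after every schema deployment. A sketch using SQLCMD include syntax (file names assume the Data/ folder shown earlier):

```sql
-- Script.PostDeployment.sql: runs after each deployment.
-- :r is SQLCMD syntax that includes each idempotent MERGE script;
-- paths are relative to this file.
:r ./Data/Countries.sql
:r ./Data/TransactionTypes.sql
```

Because every included script is a MERGE, the post-deployment step is safe to run on every deployment.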

Testing strategies

Testing pyramid for SQL Database Projects

| Test Type         | What It Tests                                           | Tools                                       | When to Run          |
|-------------------|---------------------------------------------------------|---------------------------------------------|----------------------|
| Schema validation | All objects compile, references resolve                 | dotnet build / MSBuild                      | Every commit (CI)    |
| Unit tests        | Individual procedures/functions return expected results | tSQLt framework, .NET test projects         | Every commit (CI)    |
| Integration tests | End-to-end workflows across multiple objects            | Deploy to test DB, run scenarios, validate  | Before merge to main |
| Schema comparison | Project matches target database (no drift)              | SqlPackage /a:DeployReport, Schema Compare  | Before deployment    |

-- tSQLt unit test example
EXEC tSQLt.NewTestClass 'CustomerTests';
GO

CREATE PROCEDURE CustomerTests.[test that usp_GetCustomer returns correct customer]
AS
BEGIN
    -- Arrange: insert test data
    EXEC tSQLt.FakeTable 'dbo.Customers';
    INSERT INTO dbo.Customers (CustomerId, FullName, Email)
    VALUES (1, 'Test User', 'test@example.com');

    -- Act: call the procedure and capture its result set
    CREATE TABLE #Result (CustomerId INT, FullName NVARCHAR(200), Email NVARCHAR(320));
    INSERT INTO #Result EXEC dbo.usp_GetCustomer @CustomerId = 1;

    -- Assert: AssertEqualsTable compares two tables by name,
    -- so stage the expected row in its own temp table
    CREATE TABLE #Expected (CustomerId INT, FullName NVARCHAR(200), Email NVARCHAR(320));
    INSERT INTO #Expected VALUES (1, 'Test User', 'test@example.com');
    EXEC tSQLt.AssertEqualsTable '#Expected', '#Result';
END;
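
With the test class in place, tSQLt runs tests on demand — each test executes inside a transaction that is rolled back, so the database is left untouched:

```sql
-- Run every test class registered in the database
EXEC tSQLt.RunAll;

-- Or run a single class while iterating on one procedure
EXEC tSQLt.Run 'CustomerTests';
```
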
Question

What is a .dacpac file?

Answer

A .dacpac (Data-tier Application Package) is the compiled output of a SQL Database Project build. It contains the complete database model (schema, procedures, permissions) in a portable format. It is used for deployment — SqlPackage compares the .dacpac to a target database and generates the ALTER scripts needed.

Question

What is the advantage of SDK-style SQL projects over legacy SSDT?

Answer

SDK-style projects use a simplified .sqlproj format, support cross-platform builds (dotnet build on Windows, Mac, Linux), use NuGet for dependencies, and integrate naturally with modern CI/CD pipelines. Legacy SSDT requires Visual Studio on Windows.

Question

Why use MERGE for reference data scripts?

Answer

MERGE is idempotent — it inserts missing rows, updates changed rows, and optionally deletes removed rows. Running the script multiple times produces the same result. This makes reference data scripts safe for repeated deployment in CI/CD pipelines.

Knowledge Check

Dev at PixelForge runs 'dotnet build' on the SQL Database Project and it fails with 'unresolved reference to dbo.Categories.' The Categories table exists in a separate shared database project. What should Dev do?

Next up: CI/CD Pipelines for SQL Databases — automate deployments with branching, pull requests, and pipeline controls.

© 2026 Sutheesh. All rights reserved.

Guided is an independent study resource and is not affiliated with, endorsed by, or officially connected to Microsoft. Microsoft, Azure, and related trademarks are property of Microsoft Corporation. Always verify information against Microsoft Learn.