
DP-600 Study Guide

Domain 1: Maintain a Data Analytics Solution

  • Workspace Access Controls
  • Row-Level & Object-Level Security
  • Sensitivity Labels & Endorsement
  • Git Version Control in Fabric
  • Deployment Pipelines: Dev β†’ Test β†’ Prod
  • Impact Analysis & Dependencies
  • XMLA Endpoint & Reusable Assets

Domain 2: Prepare Data

  • Microsoft Fabric: The Big Picture Free
  • Lakehouses: Your Data Foundation Free
  • Warehouses in Fabric Free
  • Choosing the Right Data Store Free
  • Data Connections & OneLake Catalog
  • Shortcuts & OneLake Integration
  • Ingesting Data: Dataflows Gen2 & Pipelines
  • Star Schema Design Free
  • SQL Objects: Views, Functions & Stored Procedures
  • Transforming Data: Reshape & Enrich
  • Data Quality & Cleansing
  • Querying with SQL
  • Querying with KQL
  • Querying with DAX

Domain 3: Implement and Manage Semantic Models

  • Semantic Models: Storage Modes
  • Relationships & Advanced Modeling
  • DAX Essentials: Variables & Functions
  • Calculation Groups & Field Parameters
  • Large Models & Composite Models
  • Direct Lake Mode
  • DAX Performance Optimization
  • Incremental Refresh

Domain 2: Prepare Data · Free · ⏱ ~13 min read

Warehouses in Fabric

When your team thinks in SQL and needs full DML support, the Fabric Warehouse delivers. T-SQL, stored procedures, views, and cross-database queries β€” all on OneLake.

What is a Fabric Warehouse?

β˜• Simple explanation

Think of a warehouse as a traditional restaurant kitchen.

A lakehouse is like a food truck β€” fast, flexible, you can cook almost anything, but the setup changes daily. A warehouse is a proper kitchen with fixed stations, labelled shelves, and recipes (stored procedures) that anyone can follow.

If your team already thinks in SQL and needs to INSERT, UPDATE, DELETE, and MERGE data using familiar T-SQL syntax, the Fabric Warehouse is your home. It reads from and writes to OneLake β€” just like the lakehouse β€” but you work exclusively in SQL.

The Fabric Warehouse is a fully managed, enterprise-grade SQL data warehouse built on OneLake. It supports full T-SQL DML (INSERT, UPDATE, DELETE, MERGE), DDL (CREATE TABLE, ALTER, DROP), stored procedures, views, functions, and cross-database queries. Data is stored as Delta Parquet files in OneLake β€” the same format lakehouses use.

Unlike the lakehouse’s SQL analytics endpoint (which is read-only), the warehouse provides full read-write SQL access. Unlike traditional on-premises SQL Server data warehouses or Azure Synapse dedicated SQL pools, the Fabric Warehouse is serverless β€” you do not manage infrastructure, and it scales automatically within your Fabric capacity.

The warehouse automatically creates a default semantic model that Power BI can use for reporting. It also exposes a SQL connection string for tools like SSMS, Azure Data Studio, and third-party BI tools.
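A minimal sketch of that read-write access. All table and column names here are illustrative assumptions, not objects from the lesson:

```sql
-- Full T-SQL DDL: create a table directly in the warehouse (illustrative schema)
CREATE TABLE dbo.trades
(
    trade_id     INT            NOT NULL,
    portfolio_id INT            NOT NULL,
    trade_date   DATE           NOT NULL,
    market_value DECIMAL(18, 2) NOT NULL
);

-- Full DML works against the OneLake-backed storage
INSERT INTO dbo.trades (trade_id, portfolio_id, trade_date, market_value)
VALUES (1, 101, '2026-01-15', 25000.00);

UPDATE dbo.trades
SET market_value = 26500.00
WHERE trade_id = 1;

DELETE FROM dbo.trades
WHERE trade_date < '2020-01-01';
```

None of the INSERT/UPDATE/DELETE statements above would run against a lakehouse SQL analytics endpoint, which is the read-only counterpart described below.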

Warehouse architecture

Storage

Warehouse data lives in OneLake as Delta Parquet files β€” the same format as lakehouse data. This means:

  • Other Fabric items can read warehouse data via cross-database queries or shortcuts
  • Data is not locked into a proprietary format
  • You benefit from Delta features like ACID transactions and schema enforcement

Compute

The warehouse uses a distributed SQL engine that scales automatically within your Fabric capacity. There are no dedicated SQL pools to manage, no pause/resume decisions, and no manual scaling.

Automatic features

Feature | What It Does
Auto statistics | Automatically generates query statistics for the optimiser
Auto caching | Frequently accessed data is cached in memory
Default semantic model | A Power BI semantic model is auto-generated from warehouse tables
SQL connection string | Connect with SSMS, Azure Data Studio, or any ODBC/JDBC tool

When to use a warehouse vs a lakehouse

This is one of the most exam-tested decisions in DP-600.

Both store data as Delta Parquet in OneLake β€” the difference is how you interact with the data.

Factor | Lakehouse | Warehouse
Primary language | PySpark / Scala / SQL (read-only via endpoint) | T-SQL (full DML)
Write access | Spark writes; SQL endpoint is read-only | Full SQL DML β€” INSERT, UPDATE, DELETE, MERGE
Stored procedures | Not available in lakehouse | Full T-SQL stored procedures
File formats ingested | Any β€” CSV, Parquet, JSON, Avro, ORC, etc. | Structured data via SQL INSERT or COPY INTO
Best for | Data engineers, Spark users, unstructured/semi-structured data | SQL-first teams, BI professionals, structured data
Power BI connection | Via SQL analytics endpoint (read-only) | Via default semantic model or SQL connection
Cross-database queries | Can query warehouse tables from lakehouse SQL endpoint | Can query lakehouse tables from warehouse

Decision framework

Ask these questions to choose:

  1. Does your team primarily use SQL? β†’ Warehouse
  2. Do you need stored procedures and views with DML? β†’ Warehouse
  3. Do you need to process unstructured data (JSON, images, logs)? β†’ Lakehouse
  4. Do you need PySpark transformations? β†’ Lakehouse
  5. Do you need both SQL DML and Spark? β†’ Use both β€” they can cross-query each other
πŸ’‘ Scenario: Raj builds the financial warehouse

Raj at Atlas Capital builds a Fabric Warehouse for the finance team. They need:

  • Stored procedures to calculate daily P&L (profit and loss)
  • MERGE statements to update position data from a trading platform
  • Views that hide raw data complexity from report builders
  • Cross-database queries to join warehouse data with a lakehouse containing external market data

The warehouse handles all of this with familiar T-SQL syntax. Raj does not need to learn Spark β€” he works entirely in SQL.

Key SQL features in the Fabric Warehouse

COPY INTO

The fastest way to bulk-load data into a warehouse table:

COPY INTO dbo.daily_positions
FROM 'https://onelake.dfs.fabric.microsoft.com/...'
WITH (FILE_TYPE = 'PARQUET')

Cross-database queries

Query across lakehouses and warehouses in the same workspace:

-- From a warehouse, query a lakehouse's SQL endpoint
SELECT p.product_name, s.total_sales
FROM lakehouse_freshcart.dbo.products AS p
JOIN dbo.sales_summary AS s ON p.product_id = s.product_id

Stored procedures

Full T-SQL stored procedures with parameters, variables, and control flow:

CREATE PROCEDURE dbo.UpdateDailyPnL
    @trade_date DATE
AS
BEGIN
    MERGE dbo.daily_pnl AS target
    USING (
        SELECT portfolio_id, SUM(market_value - cost_basis) AS pnl
        FROM dbo.positions
        WHERE trade_date = @trade_date
        GROUP BY portfolio_id
    ) AS source
    ON target.portfolio_id = source.portfolio_id
       AND target.trade_date = @trade_date
    WHEN MATCHED THEN UPDATE SET pnl = source.pnl
    WHEN NOT MATCHED THEN INSERT (portfolio_id, trade_date, pnl)
         VALUES (source.portfolio_id, @trade_date, source.pnl);
END
πŸ’‘ Exam tip: What SQL features are NOT supported?

The Fabric Warehouse supports most T-SQL but not everything from SQL Server. Notable gaps include:

  • Triggers β€” not supported
  • Cursors β€” not supported (use set-based logic instead)
  • Temporary tables (#temp) β€” use CTEs or table variables instead
  • Full-text search β€” not available
  • SQL Agent jobs β€” use Fabric Pipelines for scheduling

The exam may test whether you can identify unsupported features. The general pattern: if the feature relies on row-by-row processing or server-level configuration, it probably is not supported.

The default semantic model

Every warehouse automatically creates a Power BI semantic model containing:

  • All warehouse tables and views
  • Basic relationships (if defined via foreign keys)
  • A DirectQuery connection to the warehouse

This means Power BI report builders can start creating reports immediately without manually setting up a data connection or importing data. The semantic model updates automatically as you add tables or views.

πŸ’‘ Scenario: James sets up a client reporting layer

James at Summit Consulting creates a warehouse for Client B with three views: vw_revenue, vw_expenses, and vw_headcount. The warehouse’s default semantic model immediately exposes these views to the client’s Power BI developers.

James does not need to create a separate semantic model or configure a data connection. The client’s report builders open Power BI, connect to the workspace, and see the views ready to use.

For more complex modelling (DAX measures, calculation groups), James creates a custom semantic model later β€” but the default model gets the team started in minutes.

Question

What is the primary difference between a Fabric Warehouse and a lakehouse SQL analytics endpoint?


Answer

The Fabric Warehouse supports full SQL DML (INSERT, UPDATE, DELETE, MERGE) and stored procedures. The lakehouse SQL analytics endpoint is read-only β€” you can query but not write via SQL. Both store data as Delta Parquet in OneLake.


Question

What happens automatically when you create a Fabric Warehouse?


Answer

Fabric automatically creates: (1) A default semantic model for Power BI reporting, (2) A SQL connection string for SSMS/Azure Data Studio, (3) Auto-statistics for query optimisation, and (4) Auto-caching for frequently accessed data.


Question

Can a warehouse query data from a lakehouse in the same workspace?


Answer

Yes. Cross-database queries let a warehouse query lakehouse tables (via the SQL analytics endpoint) and vice versa. The syntax uses three-part names: lakehouse_name.schema.table.


Knowledge Check

Raj at Atlas Capital needs to run a nightly stored procedure that uses MERGE to update 500,000 financial position records. The data source is structured (CSV from a trading platform). Which Fabric item should he use?

Knowledge Check

James at Summit Consulting needs a single SQL query to join revenue data from Client A's warehouse with external market benchmarks stored in a lakehouse. Both are in the same Fabric workspace. Is this possible, and how?

🎬 Video coming soon


Next up: Choosing the Right Data Store β€” lakehouse vs warehouse vs Eventhouse: a decision framework for every scenario.



© 2026 Sutheesh. All rights reserved.

Guided is an independent study resource and is not affiliated with, endorsed by, or officially connected to Microsoft. Microsoft, Azure, and related trademarks are property of Microsoft Corporation. Always verify information against Microsoft Learn.