
DP-600 Study Guide

Domain 1: Maintain a Data Analytics Solution

  • Workspace Access Controls
  • Row-Level & Object-Level Security
  • Sensitivity Labels & Endorsement
  • Git Version Control in Fabric
  • Deployment Pipelines: Dev β†’ Test β†’ Prod
  • Impact Analysis & Dependencies
  • XMLA Endpoint & Reusable Assets

Domain 2: Prepare Data

  • Microsoft Fabric: The Big Picture Free
  • Lakehouses: Your Data Foundation Free
  • Warehouses in Fabric Free
  • Choosing the Right Data Store Free
  • Data Connections & OneLake Catalog
  • Shortcuts & OneLake Integration
  • Ingesting Data: Dataflows Gen2 & Pipelines
  • Star Schema Design Free
  • SQL Objects: Views, Functions & Stored Procedures
  • Transforming Data: Reshape & Enrich
  • Data Quality & Cleansing
  • Querying with SQL
  • Querying with KQL
  • Querying with DAX

Domain 3: Implement and Manage Semantic Models

  • Semantic Models: Storage Modes
  • Relationships & Advanced Modeling
  • DAX Essentials: Variables & Functions
  • Calculation Groups & Field Parameters
  • Large Models & Composite Models
  • Direct Lake Mode
  • DAX Performance Optimization
  • Incremental Refresh

Domain 3: Implement and Manage Semantic Models

Direct Lake Mode

Fabric's recommended storage mode in depth. Configuration, fallback behavior, OneLake vs SQL endpoints, and performance best practices.

Direct Lake in depth

β˜• Simple explanation

Think of Direct Lake as reading a book from a shelf right next to you.

Import copies the book to your desk (fast to read, but your copy gets outdated). DirectQuery reads from the library across town (always the latest edition, but slow because of travel). Direct Lake puts the shelf right next to your desk β€” you read the latest version instantly without copying it.

This module explains the mechanics: how the engine reads Delta files, what happens when it cannot (fallback), and how to choose between reading from the lakehouse or the warehouse SQL endpoint.

Direct Lake is a Fabric-exclusive storage mode where the Analysis Services engine reads columns directly from Delta Parquet files in OneLake. When a query needs a column, the engine locates the relevant Parquet file segments, loads them into memory (VertiPaq format), and caches them. Subsequent queries use the cached data until the underlying Delta table is updated.

Direct Lake eliminates two bottlenecks: (1) scheduled refreshes (data is read on demand) and (2) DirectQuery translation (queries run against in-memory VertiPaq, not against the source SQL engine).

How Direct Lake reads data

The read cycle

  1. A DAX query arrives (from a visual, API, or DAX query tool)
  2. The engine checks if the required column is already in memory (cache hit)
  3. If not cached, the engine reads the column from the Delta Parquet file in OneLake
  4. The data is loaded into VertiPaq format (compressed columnar) in memory
  5. The query executes at in-memory speed
  6. The engine watches the Delta transaction log β€” when a new version appears, it invalidates the cache
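
The read cycle above can be expressed as a toy column cache. This is a minimal, illustrative simulation only: the class and field names (`DirectLakeColumnCache`, `delta_table`) are invented for this sketch and are not a real Fabric or Analysis Services API.

```python
# Illustrative sketch only: a toy model of the Direct Lake read cycle.
# Names are invented for illustration, not a real Fabric API.

class DirectLakeColumnCache:
    def __init__(self, delta_table):
        self.delta_table = delta_table   # {"version": int, "columns": {name: data}}
        self.cached_version = None
        self.cache = {}                  # column name -> in-memory ("VertiPaq") copy

    def read_column(self, name):
        # Step 6: watch the Delta log; a new version invalidates the cache.
        if self.cached_version != self.delta_table["version"]:
            self.cache.clear()
            self.cached_version = self.delta_table["version"]
        # Step 2: a cache hit serves the query at in-memory speed.
        if name in self.cache:
            return self.cache[name], "cache hit"
        # Steps 3-4: a cache miss reads the column from Parquet and loads it.
        self.cache[name] = list(self.delta_table["columns"][name])
        return self.cache[name], "loaded from OneLake"


table = {"version": 1, "columns": {"sales": [10, 20, 30]}}
cache = DirectLakeColumnCache(table)
_, first = cache.read_column("sales")    # cold read
_, second = cache.read_column("sales")   # served from memory
table["version"] = 2                     # source table updated
_, third = cache.read_column("sales")    # cache invalidated, re-read
print(first, second, third)
```

Note how the update at version 2 does not trigger any work by itself; the reload happens lazily on the next query, which is exactly the on-demand behavior described above.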

Framing

The process of loading data from Parquet into VertiPaq is called framing. A β€œframe” represents a snapshot of the Delta table at a specific version. When the table is updated (new data appended, rows modified), the engine creates a new frame.

You can trigger a manual frame update by calling refresh on the semantic model. Unlike an Import refresh, this is fast because it reads only the new Delta log entries, not the entire dataset.
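
Framing can be sketched as a version pointer into the Delta log. This is an illustrative simulation only, assuming an invented log shape; the point it demonstrates is that a refresh reads only entries newer than the current frame.

```python
# Illustrative sketch only: a "frame" as a snapshot of the Delta log at a
# specific version. Refresh reads only log entries past the frame version,
# which is why it is cheap compared with an Import-mode refresh.

delta_log = [
    {"version": 1, "rows_added": 1_000_000},
    {"version": 2, "rows_added": 500},
]

class Frame:
    def __init__(self):
        self.version = 0
        self.entries_read = 0

    def refresh(self, log):
        # Only entries newer than the current frame version are read.
        new_entries = [e for e in log if e["version"] > self.version]
        self.entries_read = len(new_entries)
        if new_entries:
            self.version = new_entries[-1]["version"]
        return self.version

frame = Frame()
frame.refresh(delta_log)                 # initial frame: reads both entries
delta_log.append({"version": 3, "rows_added": 42})
frame.refresh(delta_log)                 # next frame: reads only one new entry
print(frame.version, frame.entries_read)
```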

Direct Lake fallback

When Direct Lake cannot serve a query in its normal mode, it falls back to DirectQuery:

What triggers fallback?

  • Column exceeds memory: a single column is too large to load into the available capacity memory
  • Row count exceeds guardrails: the table exceeds the maximum row count for the capacity SKU
  • Unsupported data type: the Delta table contains a type not supported by VertiPaq
  • Too many columns: the model references more columns than the SKU allows
  • Parquet row group size: Parquet files with very large row groups may cause timeouts

Fallback behavior settings

  • Automatic (default): seamlessly falls back to DirectQuery; queries continue but are slower
  • Disabled: queries FAIL instead of falling back; guarantees Import-speed or nothing
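
The triggers and settings above can be combined into one routing sketch. This is illustrative only: the guardrail values are invented placeholders (real limits vary by capacity SKU; check the Fabric documentation), and the function names are made up for this example.

```python
# Illustrative sketch only: guardrail checks plus fallback behavior.
# The limit values are invented placeholders, NOT real SKU guardrails.

GUARDRAILS = {"max_rows": 300_000_000, "max_column_bytes": 10 * 2**30}

def run_query(table_stats, fallback="Automatic"):
    """Serve a query in Direct Lake mode, falling back (or failing) on limits."""
    triggers = []
    if table_stats["rows"] > GUARDRAILS["max_rows"]:
        triggers.append("row count exceeds guardrails")
    if table_stats["largest_column_bytes"] > GUARDRAILS["max_column_bytes"]:
        triggers.append("column exceeds memory")
    if not triggers:
        return "direct-lake"
    if fallback == "Automatic":
        return "directquery"   # slower, but the query still runs
    # Fallback disabled: fail loudly instead of degrading silently.
    raise RuntimeError("query failed (fallback disabled): " + ", ".join(triggers))

small = {"rows": 5_000_000, "largest_column_bytes": 2**30}
huge = {"rows": 50_000_000_000, "largest_column_bytes": 2**40}
print(run_query(small))   # stays in Direct Lake
print(run_query(huge))    # silently falls back under the default setting
```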

When to disable fallback

  • When you need to guarantee consistent dashboard performance (no slow surprises)
  • When you want immediate alerts that the model exceeds capacity limits
  • In production dashboards where query speed is critical
πŸ’‘ Exam tip: Fallback questions

The exam may describe a scenario where a Direct Lake model suddenly becomes slower. The expected analysis:

  1. Check if fallback was triggered (look at the DirectQuery counter in usage metrics)
  2. Identify the cause (column too large, row count exceeded guardrails)
  3. Fix: OPTIMIZE Delta tables (reduce file count), increase capacity SKU, or reduce model scope

If the question asks β€œhow to ensure queries never fall back?”, the answer is: disable fallback and ensure the model fits within capacity limits.
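
The three-step analysis above can be written as an ordered checklist. This is a study aid only: the metric names (`directquery_query_count`, `parquet_file_count`) and the guardrail threshold are invented for illustration and do not correspond to real Fabric usage-metric fields.

```python
# Illustrative sketch only: the fallback diagnosis as an ordered checklist.
# Metric names and thresholds are invented for illustration.

def diagnose_slowdown(metrics, guardrail_rows=300_000_000):
    # Step 1: was fallback actually triggered?
    if metrics["directquery_query_count"] == 0:
        return "no fallback detected; look elsewhere (DAX, capacity load)"
    # Step 2: identify the cause.
    if metrics["rows"] > guardrail_rows:
        return "fallback: row count exceeds guardrails; aggregate or raise the SKU"
    if metrics["parquet_file_count"] > 1_000:
        return "fallback risk: fragmented Parquet files; run OPTIMIZE on the Delta table"
    # Step 3: remaining suspects.
    return "fallback detected; check column sizes against capacity memory limits"

result = diagnose_slowdown({"directquery_query_count": 120,
                            "rows": 50_000_000_000,
                            "parquet_file_count": 40_000})
print(result)
```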

Direct Lake on OneLake vs Direct Lake on SQL endpoints

The exam specifically tests this choice:

OneLake is faster; SQL endpoints offer more schema flexibility
Direct Lake on OneLake

  • Reads from: Delta Parquet files directly in OneLake (lakehouse Files/Tables)
  • Data source: lakehouse Delta tables
  • Performance: optimal; direct Parquet read
  • Schema: based on the Delta table schema
  • Best for: maximum performance on lakehouse data

Direct Lake on SQL endpoints

  • Reads from: the SQL analytics endpoint (auto-generated from a lakehouse or warehouse)
  • Data source: lakehouse SQL endpoint or warehouse tables/views
  • Performance: slightly more overhead; goes through the SQL endpoint layer
  • Schema: based on the SQL endpoint schema (can include custom views)
  • Best for: when you need views or computed columns from the SQL layer

When to choose each

Choose OneLake when…

  • Data is in a lakehouse and performance is the top priority
  • The Delta table schema matches what the model needs
  • You want the simplest, most direct path

Choose the SQL endpoint when…

  • You need SQL views or computed columns exposed to the model
  • The warehouse has business logic in views that the model should use
  • You need cross-database queries reflected in the model
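
The choice can be condensed into a small decision helper. This is a mnemonic sketch only; the parameter names are invented and the function simply encodes the rule of thumb above (any SQL-layer requirement pushes you to the SQL endpoint, otherwise take the direct path).

```python
# Illustrative sketch only: the OneLake-vs-SQL-endpoint decision as a
# function. Parameter names are invented for this example.

def choose_direct_lake_source(needs_sql_views=False,
                              needs_cross_database=False,
                              warehouse_logic_in_views=False):
    """Pick a Direct Lake source based on the requirements above."""
    if needs_sql_views or needs_cross_database or warehouse_logic_in_views:
        return "Direct Lake on SQL endpoints"
    # Default: the simplest, best-performing path.
    return "Direct Lake on OneLake"

print(choose_direct_lake_source())
print(choose_direct_lake_source(needs_sql_views=True))
```
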
πŸ’‘ Scenario: Raj optimises Direct Lake performance

Raj at Atlas Capital notices that his Direct Lake model occasionally falls back to DirectQuery during month-end reporting (when 200 analysts hit dashboards simultaneously).

His investigation reveals: the fact_trades table has 50 billion rows and the position_value column exceeds the F64 per-column memory limit.

Raj’s fixes:

  1. OPTIMIZE the Delta table to reduce Parquet file fragmentation
  2. Create an aggregate table (agg_daily_trades) that reduces 50B rows to 5M rows
  3. Point the semantic model at the aggregate table for high-level dashboards
  4. Keep the detail table for drill-through scenarios only
  5. Disable fallback on the production dashboard to guarantee performance
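
Fix 2, the aggregate table, is the heart of the scenario. The sketch below shows the shape of that aggregation on a tiny in-memory sample, standing in for the real 50-billion-row table; the row layout and the `build_daily_aggregate` helper are invented for illustration.

```python
# Illustrative sketch only: collapsing trade-level rows into a daily
# aggregate, the same shape as Raj's agg_daily_trades fix.

from collections import defaultdict

trades = [
    {"date": "2026-01-02", "symbol": "ABC", "value": 100.0},
    {"date": "2026-01-02", "symbol": "ABC", "value": 250.0},
    {"date": "2026-01-03", "symbol": "ABC", "value": 75.0},
]

def build_daily_aggregate(rows):
    """Sum trade values per (date, symbol), shrinking the row count."""
    totals = defaultdict(float)
    for row in rows:
        totals[(row["date"], row["symbol"])] += row["value"]
    return [{"date": d, "symbol": s, "total_value": v}
            for (d, s), v in sorted(totals.items())]

agg_daily_trades = build_daily_aggregate(trades)
print(agg_daily_trades)
```

High-level dashboards query the small aggregate and stay comfortably inside the Direct Lake guardrails, while drill-through reports keep the detail table.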

Direct Lake refresh behavior

Unlike Import mode, Direct Lake does not require traditional scheduled refreshes:

Import mode

  • Load data into the model: full or incremental refresh (minutes to hours)
  • Detect source changes: only at refresh time
  • Manual refresh: re-imports all data
  • Cost: high (full data processing)

Direct Lake

  • Load data into the model: automatic framing from the Delta log (seconds)
  • Detect source changes: continuous; reads the Delta transaction log
  • Manual refresh: updates the frame (reads new Delta entries)
  • Cost: low (only reads changes)

You CAN still call refresh on a Direct Lake model β€” it forces a frame update, which is useful when you want the model to immediately reflect a large data load.

Question

What is 'framing' in Direct Lake?

Answer

Framing is the process of loading data from Delta Parquet files into VertiPaq memory. A frame is a snapshot at a specific Delta version. When the table is updated, a new frame is created. Framing is fast because it reads only changed Delta log entries.

Question

What happens when Direct Lake falls back to DirectQuery?

Answer

Queries are translated to SQL and sent to the source (lakehouse SQL endpoint or warehouse). They still work but are slower β€” Direct Lake speed is lost. Fallback is triggered by memory limits, row count guardrails, unsupported types, or column count limits.

Question

When should you disable Direct Lake fallback?

Answer

Disable fallback when you need guaranteed Import-speed performance. Queries will fail instead of silently degrading. Use this for production dashboards where slowness is worse than a clear error. Ensure the model fits within capacity limits before disabling.

Knowledge Check

Anita at FreshCart has a Direct Lake semantic model that performs well most of the time but slows down significantly during peak hours. Investigation shows the model is falling back to DirectQuery. What is the most likely cause?

Knowledge Check

Raj at Atlas Capital needs to connect a Direct Lake semantic model to a warehouse view that joins three tables and includes computed columns. Should he use Direct Lake on OneLake or Direct Lake on SQL endpoints?

Next up: DAX Performance Optimization β€” make your measures and queries faster with proven optimization techniques.


Guided

I learn, I simplify, I share.

© 2026 Sutheesh. All rights reserved.

Guided is an independent study resource and is not affiliated with, endorsed by, or officially connected to Microsoft. Microsoft, Azure, and related trademarks are property of Microsoft Corporation. Always verify information against Microsoft Learn.