
DP-700 Study Guide

Domain 1: Implement and Manage an Analytics Solution

  • Workspace Settings: Your Fabric Foundation
  • Version Control: Git in Fabric
  • Deployment Pipelines: Dev to Production
  • Access Controls: Who Gets In
  • Data Security: Control Who Sees What
  • Governance: Labels, Endorsement & Audit
  • Orchestration: Pick the Right Tool
  • Pipeline Patterns: Parameters & Expressions

Domain 2: Ingest and Transform Data

  • Delta Lake: The Heart of Fabric
  • Loading Patterns: Full, Incremental & Streaming
  • Dimensional Modeling: Prep for Analytics
  • Data Stores & Tools: Make the Right Choice
  • OneLake Shortcuts: Data Without Duplication
  • Mirroring: Real-Time Database Replication
  • PySpark Transformations: Code Your Pipeline
  • Transform Data with SQL & KQL
  • Eventstreams & Spark Streaming: Real-Time Ingestion
  • Real-Time Intelligence: KQL & Windowing

Domain 3: Monitor and Optimize an Analytics Solution

  • Monitoring & Alerts: Catch Problems Early
  • Troubleshoot Pipelines & Dataflows
  • Troubleshoot Notebooks & SQL
  • Troubleshoot Streaming & Shortcuts
  • Optimize Lakehouse Tables: Delta Tuning
  • Optimize Spark: Speed Up Your Code
  • Optimize Pipelines & Warehouses
  • Optimize Streaming: Real-Time Performance

Domain 2: Ingest and Transform Data · ~12 min read

OneLake Shortcuts: Data Without Duplication

Create OneLake shortcuts to access data in ADLS Gen2, Amazon S3, Google Cloud Storage, and other Fabric items — without copying a single byte.

What are OneLake shortcuts?

☕ Simple explanation

Think of a shortcut on your desktop.

The shortcut icon points to a file somewhere else on your computer. Double-click the shortcut and it opens the real file. The file itself doesn’t move — you just have a convenient pointer to it.

OneLake shortcuts work the same way for data. You create a pointer in your lakehouse that points to data stored elsewhere — another lakehouse, Azure Data Lake Storage, Amazon S3, or Google Cloud Storage. Your PySpark notebooks and SQL queries read the data as if it’s local, but no data is copied.

This means zero duplication, zero extra storage cost, and always-fresh data.

OneLake shortcuts are virtualised references that allow Fabric items to access data stored in external or internal locations without physical data movement. They appear as folders within a lakehouse’s Tables/ or Files/ section and support read operations through the same Spark and SQL analytics endpoint interfaces used for local data.

Supported sources: other Fabric lakehouses (same or different workspace), Azure Data Lake Storage Gen2, Amazon S3, Google Cloud Storage, Dataverse, and S3-compatible storage. Shortcuts use delegated authentication for internal sources and stored credentials for external sources.
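Because a shortcut resolves at an ordinary OneLake path, code reading it cannot tell it apart from locally stored data. The helper below sketches the OneLake ABFS URI format; the workspace and lakehouse names are hypothetical, and the exact URI syntax should be verified against Microsoft Learn:

```python
def onelake_path(workspace: str, lakehouse: str, section: str, name: str) -> str:
    """Build an ABFS URI for an item in a Fabric lakehouse.

    A shortcut under Tables/ or Files/ resolves at this same kind of
    path, so callers cannot distinguish it from local data.
    """
    return (
        f"abfss://{workspace}@onelake.dfs.fabric.microsoft.com/"
        f"{lakehouse}.Lakehouse/{section}/{name}"
    )

# Hypothetical names, for illustration only
path = onelake_path("SalesWorkspace", "SalesLakehouse", "Tables", "ProductCatalog")
print(path)
```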

Shortcut types

Shortcuts work across clouds and across Fabric workspaces
| Source | Authentication | Typical use case |
| --- | --- | --- |
| Another Fabric lakehouse | Delegated (user identity or workspace identity) | Cross-workspace data sharing without duplication |
| Azure Data Lake Storage Gen2 | Service principal or org identity | Existing data lake → Fabric without migration |
| Amazon S3 | Access key + secret key | Multi-cloud — read AWS data from Fabric |
| Google Cloud Storage | Service account key | Multi-cloud — read GCS data from Fabric |
| Dataverse | Org identity | Power Platform data accessible in Fabric analytics |
| S3-compatible (MinIO, etc.) | Access key + secret key | On-premises or custom S3-API storage |

Where shortcuts can point

  • Tables/ section → shortcut appears as a Delta table (queryable via Spark and SQL endpoint)
  • Files/ section → shortcut appears as a folder of files (any format)

Creating a shortcut

  1. Open a lakehouse in Fabric
  2. Right-click on Tables/ or Files/ → New shortcut
  3. Choose the source type (Fabric, ADLS, S3, etc.)
  4. Provide connection details and credentials
  5. Select the target folder or table
  6. The shortcut appears instantly — no data copy, no waiting
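Shortcuts can also be created programmatically via the Fabric REST API's Create Shortcut endpoint. The sketch below only assembles the request body; the field names follow that API as best understood (verify against the official reference), and every ID and URL is a hypothetical placeholder:

```python
import json

# Sketch of a Create Shortcut request body. Field names follow the
# Fabric REST API as best understood; verify against Microsoft Learn.
# All IDs and URLs are hypothetical placeholders.
shortcut_request = {
    "path": "Tables",          # where the shortcut appears in the lakehouse
    "name": "ProductCatalog",  # shortcut name shown in the UI
    "target": {
        "adlsGen2": {
            "location": "https://contosolake.dfs.core.windows.net",
            "subpath": "/catalog/products",
            "connectionId": "00000000-0000-0000-0000-000000000000",
        }
    },
}

# The body would be POSTed to:
#   /v1/workspaces/{workspaceId}/items/{lakehouseId}/shortcuts
print(json.dumps(shortcut_request, indent=2))
```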
💡 Scenario: Anika's multi-cloud shortcuts

ShopStream’s data lives in three places:

  • Product catalog — existing Azure Data Lake Gen2 (legacy system)
  • Payment data — Amazon S3 (payment provider stores data there)
  • Marketing events — another Fabric lakehouse in the marketing workspace

Instead of copying data into her lakehouse, Anika creates three shortcuts:

  • /Tables/ProductCatalog → ADLS Gen2 shortcut
  • /Files/PaymentRaw/ → S3 shortcut
  • /Tables/MarketingEvents → Fabric lakehouse shortcut

Her PySpark notebooks join across all three as if they’re local tables. Zero data duplication. Always fresh.
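Because the table shortcuts behave like local tables, the join itself is plain Spark SQL. The sketch below only assembles the query string (all table and column names are hypothetical); in a Fabric notebook you would pass it to `spark.sql(query)`, after first reading the raw files under the S3 shortcut in Files/ and registering them as a temp view (here called PaymentEvents):

```python
# Hypothetical table and column names. ProductCatalog and MarketingEvents
# are shortcuts under Tables/; PaymentEvents would be a temp view created
# from the raw files under Files/PaymentRaw/ (e.g. via spark.read).
query = """
SELECT p.product_id,
       p.product_name,
       SUM(m.clicks)   AS clicks,
       SUM(pay.amount) AS revenue
FROM   ProductCatalog  AS p
JOIN   MarketingEvents AS m   ON m.product_id   = p.product_id
JOIN   PaymentEvents   AS pay ON pay.product_id = p.product_id
GROUP BY p.product_id, p.product_name
"""

# In a Fabric notebook: df = spark.sql(query)
print(query.strip())
```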

Shortcuts vs mirroring

This comparison appears frequently on the exam.

Shortcuts point to data; mirroring copies data — choose based on source type and resilience needs
| Feature | Shortcuts | Mirroring |
| --- | --- | --- |
| Data copied? | No — reads from source at query time | Yes — continuous replication into OneLake as Delta tables |
| Latency | Real-time (reads source directly) | Near real-time (minutes — CDC-based replication) |
| Storage cost | None (no duplication) | OneLake storage for the replicated copy |
| Source types | ADLS, S3, GCS, Dataverse, Fabric items | Azure SQL, Cosmos DB, Snowflake, PostgreSQL, MySQL, Spark catalog |
| Write to source? | No (read-only) | No (read-only replica) |
| Offline access? | No — if source is down, shortcut fails | Yes — replicated data in OneLake survives source outages |
| Best for | Accessing file-based or lake-based data without duplication | Replicating operational databases for analytics without ETL code |
💡 Exam tip: Shortcut vs mirror decision

Use a shortcut when:

  • The source is a file store (ADLS, S3, GCS) or another Fabric item
  • You want zero data duplication
  • The source is always available (no offline access needed)

Use mirroring when:

  • The source is a relational database (SQL, Cosmos DB, Snowflake)
  • You need a local replica that survives source outages
  • You want automatic CDC-based replication without building ETL pipelines

Key exam pattern: “Access data without copying” → Shortcut. “Replicate a database” → Mirroring.
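That exam heuristic can be captured as a tiny lookup, handy for drilling the pattern. This is a study sketch only; real architecture decisions also weigh latency, cost, and egress:

```python
# Study-drill sketch of the exam heuristic, not an architecture tool.
FILE_STORES = {"adls", "s3", "gcs", "fabric"}
DATABASES = {"azure_sql", "cosmos_db", "snowflake", "postgresql", "mysql"}

def shortcut_or_mirror(source: str, need_offline_access: bool) -> str:
    """Shortcuts for file/lake sources; mirroring for operational
    databases, or whenever a local replica must survive source outages."""
    if need_offline_access:
        return "mirroring"
    if source in FILE_STORES:
        return "shortcut"
    if source in DATABASES:
        return "mirroring"
    raise ValueError(f"unknown source: {source}")

print(shortcut_or_mirror("s3", need_offline_access=False))         # shortcut
print(shortcut_or_mirror("azure_sql", need_offline_access=False))  # mirroring
```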

Shortcut considerations

| Consideration | Detail |
| --- | --- |
| Performance | Reading from external shortcuts (S3, GCS) may be slower than local OneLake data due to network latency |
| Security | Shortcuts inherit lakehouse permissions — but the user must also have access to the source |
| Cost | No OneLake storage cost, but egress charges may apply from AWS or GCP |
| Schema | Shortcut to a Delta table inherits its schema; shortcut to files doesn't enforce schema |
| Write | Shortcuts are read-only — you cannot write data through a shortcut |

Question

What is a OneLake shortcut?


Answer

A virtualised reference that lets a lakehouse access data stored elsewhere (ADLS, S3, GCS, another Fabric lakehouse) without copying the data. Queries read the source directly. Zero duplication, zero extra storage.


Question

Can you write data through a OneLake shortcut?


Answer

No. Shortcuts are read-only. You can query the data via Spark or SQL endpoint, but writes must go directly to the source storage.


Question

What happens if the external source behind a shortcut goes offline?


Answer

Queries against the shortcut fail — there's no local copy to fall back on. If you need offline access, use mirroring instead (which creates a replicated copy in OneLake).



Knowledge Check

Anika needs to query data stored in Amazon S3 from a Fabric lakehouse. She does not want to copy the data or incur OneLake storage costs. Which feature should she use?

Knowledge Check

The source behind a lakehouse shortcut (pointing to ADLS Gen2) experiences a 2-hour outage. What impact does this have on Fabric queries that use the shortcut?


Next up: Mirroring: Real-Time Database Replication — bring operational databases into Fabric without building a single pipeline.



© 2026 Sutheesh. All rights reserved.

Guided is an independent study resource and is not affiliated with, endorsed by, or officially connected to Microsoft. Microsoft, Azure, and related trademarks are property of Microsoft Corporation. Always verify information against Microsoft Learn.