Shortcuts & OneLake Integration
Access external data without copying it: OneLake shortcuts, Eventhouse integration, and semantic model connections, the building blocks of Fabric's zero-copy architecture.
What are OneLake shortcuts?
Think of shortcuts like symbolic links on your computer.
When you create a shortcut on your desktop to a file on another drive, you can open it as if it were right there, but the actual file stays where it is. No duplication, no extra storage used.
OneLake shortcuts do the same thing for data in Fabric. You can create a shortcut in your lakehouse that points to data in Azure Data Lake, Amazon S3, Google Cloud Storage, or even another Fabric item. Your Spark notebooks and SQL queries see the data as if it were local, but the data never moves.
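Because everything in OneLake surfaces at one uniform endpoint, a notebook reads a shortcut exactly like a local table. A minimal sketch, with hypothetical workspace, lakehouse, and shortcut names (the `abfss` URI pattern follows OneLake's documented addressing scheme):

```python
# OneLake exposes every item at a single endpoint:
#   abfss://<workspace>@onelake.dfs.fabric.microsoft.com/<item>.<ItemType>/<path>
def onelake_path(workspace: str, lakehouse: str, table: str) -> str:
    """Return the abfss URI for a table (or shortcut) in a lakehouse."""
    return (
        f"abfss://{workspace}@onelake.dfs.fabric.microsoft.com/"
        f"{lakehouse}.Lakehouse/Tables/{table}"
    )

# 'supplier_a' is a shortcut here, but the path looks identical to a
# native Delta table -- Spark cannot tell the difference:
path = onelake_path("FreshCart", "Sales", "supplier_a")
# df = spark.read.format("delta").load(path)   # inside a Fabric notebook
print(path)
```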
Types of shortcuts
| Shortcut Target | Use Case | Authentication |
|---|---|---|
| Internal (Fabric item) | Reference another lakehouse, warehouse, or KQL database in the same tenant | Fabric identity (Entra ID), automatic |
| ADLS Gen2 | Access data in Azure Data Lake Storage without copying to OneLake | Storage account key, SAS token, or service principal |
| Amazon S3 | Access data in AWS S3 buckets; cross-cloud without data movement | S3 access key and secret key |
| Google Cloud Storage | Access data in GCS buckets; multi-cloud data federation | HMAC key |
| Dataverse | Access Dynamics 365 / Power Platform data directly | Entra ID (organisational account) |
Creating a shortcut
In a lakehouse, shortcuts appear in the Tables or Files section:
- Open your lakehouse, then click New shortcut
- Choose the target type (Internal, ADLS Gen2, S3, GCS)
- Provide the connection details and path
- Select the sub-folder or table to reference
- The shortcut appears in your lakehouse, queryable immediately via Spark and SQL
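The same steps can be scripted: the Fabric REST API exposes a shortcuts endpoint on lakehouse items. The sketch below only builds the request body for an ADLS Gen2 shortcut; the field names, endpoint path, and connection ID shown are assumptions to verify against the current API reference:

```python
import json

def adls_shortcut_payload(name: str, storage_url: str, subpath: str,
                          connection_id: str) -> dict:
    """Request body for creating an ADLS Gen2 shortcut under Tables/."""
    return {
        "path": "Tables",          # where the shortcut lives in the lakehouse
        "name": name,              # how it appears in the Tables section
        "target": {
            "adlsGen2": {
                "location": storage_url,        # https://<account>.dfs.core.windows.net
                "subpath": subpath,             # /<container>/<folder>
                "connectionId": connection_id,  # a saved Fabric connection (GUID)
            }
        },
    }

payload = adls_shortcut_payload(
    "supplier_a",
    "https://supplierdata.dfs.core.windows.net",
    "/catalogues/supplier_a",
    "00000000-0000-0000-0000-000000000000",
)
# POST this to .../v1/workspaces/{workspaceId}/items/{lakehouseId}/shortcuts
print(json.dumps(payload, indent=2))
```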
Key rules
- Shortcuts do not support write-through for most operations: you cannot INSERT or UPDATE through a shortcut. However, if you have permissions on the target, deleting a file or folder within a shortcut WILL delete it at the source.
- No storage cost for the shortcut itself; you only pay for the data at its original location
- Permissions are evaluated at query time: the user must have access to both the shortcut and the target
- Delta tables via shortcuts support Direct Lake mode in semantic models
Scenario: Anita federates supplier data
Anita at FreshCart receives supplier product catalogues from three suppliers, each storing data in different cloud locations:
- Supplier A: Azure Data Lake Gen2 (Anita creates an ADLS shortcut)
- Supplier B: Amazon S3 bucket (Anita creates an S3 shortcut)
- Supplier C: Another Fabric workspace in the same tenant (Anita creates an internal shortcut)
All three appear as tables in her lakehouse. Her PySpark notebook joins them with FreshCart's sales data in a single query, as if all the data were in one place. No data was copied.
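Anita's federation can be sketched as one Spark SQL statement over the three shortcut tables. Table and column names here are hypothetical; in a Fabric notebook the string would be handed to `spark.sql`:

```python
# Shortcut tables behave like local tables, so one query can span
# ADLS Gen2, S3, and another Fabric workspace without moving any data.
federated_query = """
SELECT s.order_id, s.product_id, s.quantity,
       c.supplier_name, c.unit_cost
FROM   sales AS s
JOIN   (SELECT * FROM supplier_a_catalogue          -- ADLS Gen2 shortcut
        UNION ALL SELECT * FROM supplier_b_catalogue -- S3 shortcut
        UNION ALL SELECT * FROM supplier_c_catalogue -- internal shortcut
       ) AS c
  ON   s.product_id = c.product_id
"""
# df = spark.sql(federated_query)   # runs inside the FreshCart lakehouse
```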
OneLake integration for Eventhouse
The exam specifically tests "Implement OneLake integration for Eventhouse and semantic models". Here is how Eventhouse integrates with OneLake:
OneLake availability
When you enable OneLake availability on an Eventhouse database or table, the data becomes accessible via OneLake, meaning:
- Other Fabric items can create shortcuts to the Eventhouse data (the shortcut is created in a lakehouse or KQL database; warehouses then reach it through cross-database SQL)
- The data appears in OneLake in Delta Parquet format
- Power BI semantic models can read the data via Direct Lake mode
How to enable it
- Open your Eventhouse database
- Go to Database settings, then OneLake availability
- Toggle it On for the database or individual tables
- The data is now available in OneLake as Delta Parquet
What this means
| Without OneLake Availability | With OneLake Availability |
|---|---|
| Eventhouse data is only accessible via KQL | Eventhouse data is also accessible via OneLake shortcuts |
| Other Fabric items cannot read it directly | Lakehouses can shortcut to it and query it; warehouse SQL can reach it via cross-database queries |
| Semantic models use DirectQuery to KQL | Semantic models can use Direct Lake on the OneLake copy |
Scenario: Raj bridges real-time and batch
Raj at Atlas Capital has trade monitoring data in an Eventhouse (real-time KQL queries) and financial reporting in a warehouse (batch SQL). His compliance team needs a single report that combines both.
Raj enables OneLake availability on the Eventhouse trade data. He then creates a shortcut in a lakehouse that references the Eventhouse data, and the warehouse reaches that table with a cross-database query. Now a single SQL query joins batch financial data with near-real-time trade data: no data copies, no ETL pipeline.
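Once the Eventhouse data is visible through a shortcut, the compliance query is plain T-SQL. A sketch with hypothetical names, shown here as a query string (the three-part name assumes the shortcut lives in a lakehouse called `TradesLakehouse` reachable from the warehouse via cross-database query):

```python
# Warehouse T-SQL joining batch financials with near-real-time trades.
# TradesLakehouse.dbo.trades is the Eventhouse data surfaced through a
# lakehouse shortcut; dbo.financial_reports is a native warehouse table.
compliance_query = """
SELECT r.account_id, r.period, r.reported_value,
       t.trade_id, t.executed_at, t.notional
FROM   dbo.financial_reports AS r
JOIN   TradesLakehouse.dbo.trades AS t
  ON   r.account_id = t.account_id
WHERE  t.executed_at >= DATEADD(day, -1, SYSUTCDATETIME())
"""
```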
OneLake integration for semantic models
Semantic models integrate with OneLake primarily through Direct Lake mode, a storage mode covered in detail in Domain 3 (Module 26). The key concept for now:
- Direct Lake semantic models read Delta tables directly from OneLake
- This includes Delta tables in lakehouses, warehouses, and Eventhouse (via OneLake availability)
- No import step and no DirectQuery overhead: Direct Lake loads data straight from the Parquet files
Check your understanding
Anita at FreshCart needs to join her sales data (in a Fabric lakehouse) with supplier catalogues stored in an Amazon S3 bucket. She wants to avoid copying the supplier data into OneLake. What should she do?
Raj at Atlas Capital has trade data in an Eventhouse. His compliance team needs to join this data with financial reports in a warehouse using SQL. The compliance analyst does not know KQL. What must Raj enable first?
🎬 Video coming soon
Next up: Ingesting Data: Dataflows Gen2 & Pipelines, where you move data into Fabric with no-code and code-first tools.