
DP-700 Study Guide

Domain 1: Implement and Manage an Analytics Solution

  • Workspace Settings: Your Fabric Foundation
  • Version Control: Git in Fabric
  • Deployment Pipelines: Dev to Production
  • Access Controls: Who Gets In
  • Data Security: Control Who Sees What
  • Governance: Labels, Endorsement & Audit
  • Orchestration: Pick the Right Tool
  • Pipeline Patterns: Parameters & Expressions

Domain 2: Ingest and Transform Data

  • Delta Lake: The Heart of Fabric
  • Loading Patterns: Full, Incremental & Streaming
  • Dimensional Modeling: Prep for Analytics
  • Data Stores & Tools: Make the Right Choice
  • OneLake Shortcuts: Data Without Duplication
  • Mirroring: Real-Time Database Replication
  • PySpark Transformations: Code Your Pipeline
  • Transform Data with SQL & KQL
  • Eventstreams & Spark Streaming: Real-Time Ingestion
  • Real-Time Intelligence: KQL & Windowing

Domain 3: Monitor and Optimize an Analytics Solution

  • Monitoring & Alerts: Catch Problems Early
  • Troubleshoot Pipelines & Dataflows
  • Troubleshoot Notebooks & SQL
  • Troubleshoot Streaming & Shortcuts
  • Optimize Lakehouse Tables: Delta Tuning
  • Optimize Spark: Speed Up Your Code
  • Optimize Pipelines & Warehouses
  • Optimize Streaming: Real-Time Performance

Domain 1: Implement and Manage an Analytics Solution (~13 min read)

Pipeline Patterns: Parameters & Expressions

Build reusable, dynamic pipelines with parameters, expressions, variables, and orchestration patterns like master-child and conditional branching.

Why parameterise pipelines?

☕ Simple explanation

Think of a recipe template.

Instead of writing separate recipes for chicken curry, beef curry, and vegetable curry, you write one recipe with a blank: “Cook [PROTEIN] with curry sauce.” Fill in the blank at cooking time.

Pipeline parameters are those blanks. Instead of building separate pipelines for each data source, each date range, or each environment, you build ONE pipeline with parameters. At runtime, you fill in the values: “Load data for [DATE], from [SOURCE], into [DESTINATION].”

This turns one pipeline into many — without duplicating anything.

Parameters in Fabric pipelines are runtime input values passed at the start of a run. They make pipelines reusable across different sources, destinations, date ranges, and configurations. Combined with dynamic expressions (a formula language similar to Azure Data Factory expressions), parameters enable conditional logic, dynamic file paths, and runtime configuration.

Common patterns include: master-child pipelines (a parent pipeline calls child pipelines with different parameters), conditional branching (If/Switch activities route execution based on expressions), and ForEach loops (iterate over arrays of items).

Parameters vs variables

Parameters are inputs; variables are working memory
Feature             | Parameters                                 | Variables
Set when?           | Before the run starts (input values)       | During the run (computed/updated by activities)
Changed during run? | No; immutable once the run starts          | Yes; a Set Variable activity updates them
Scope               | Pipeline-wide (passed to child pipelines)  | Pipeline-wide (not passed to children)
Types               | String, Int, Float, Bool, Array, Object    | String, Bool, Array
Typical use         | Date range, source name, environment flag  | Loop counters, accumulated results, flags set by conditions

Dynamic expressions

Expressions let you compute values at runtime using system variables, parameters, and functions.

Common expression patterns

Pattern               | Expression                                                                                      | Result
Today's date          | @formatDateTime(utcNow(), 'yyyy-MM-dd')                                                         | 2026-04-21
Yesterday             | @formatDateTime(addDays(utcNow(), -1), 'yyyy-MM-dd')                                            | 2026-04-20
Dynamic file path     | @concat('raw/', pipeline().parameters.source, '/', formatDateTime(utcNow(), 'yyyy/MM/dd'), '/') | raw/orders/2026/04/21/
Conditional value     | @if(equals(pipeline().parameters.env, 'prod'), 'prod-server', 'dev-server')                     | prod-server or dev-server
Parameter from parent | @pipeline().parameters.startDate                                                                | Whatever the parent pipeline passed
💡 Scenario: Carlos's dynamic file paths

Carlos’s daily ETL pipeline at Precision Manufacturing loads files from Azure Blob Storage. The files are organised by date: /raw/production/2026/04/21/output.csv.

Instead of hardcoding the path, Carlos uses a dynamic expression:

@concat('raw/production/', formatDateTime(pipeline().parameters.loadDate, 'yyyy/MM/dd'), '/output.csv')

The loadDate parameter defaults to yesterday’s date. For backfill runs, Carlos passes a specific date. One pipeline handles both daily and historical loads.
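The expression above can be pictured in plain Python. This is only an illustrative sketch of what the pipeline expression computes (the function name `build_output_path` is invented for the example; Fabric evaluates the expression itself, not Python):

```python
from datetime import datetime, timedelta, timezone

def build_output_path(load_date=None):
    """Mirror of the pipeline expression:
    @concat('raw/production/', formatDateTime(pipeline().parameters.loadDate,
            'yyyy/MM/dd'), '/output.csv')
    Defaults to yesterday (UTC), matching the loadDate parameter default.
    """
    if load_date is None:
        load_date = datetime.now(timezone.utc) - timedelta(days=1)
    return f"raw/production/{load_date:%Y/%m/%d}/output.csv"

# Daily run: no argument, defaults to yesterday.
# Backfill run: pass an explicit date.
print(build_output_path(datetime(2026, 4, 21)))  # raw/production/2026/04/21/output.csv
```

The same one-function shape is why a single parameterised pipeline can serve both the daily schedule and ad-hoc backfills.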

Orchestration patterns

Pattern 1: Master-child pipelines

A master pipeline calls multiple child pipelines, each with different parameters. This keeps individual pipelines small and testable.

Master Pipeline
├── Invoke: Load-Customers (source="customers", date="2026-04-21")
├── Invoke: Load-Orders (source="orders", date="2026-04-21")
├── Invoke: Load-Products (source="products", date="2026-04-21")
└── Transform-Notebook (depends on all three above)

Benefits: each child pipeline can be tested independently, reused across multiple master pipelines, and has its own retry logic.
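The coordination logic can be sketched as ordinary function calls, where `invoke_pipeline` is an illustrative stand-in for the Invoke Pipeline activity (not a real Fabric API):

```python
def invoke_pipeline(name, **params):
    # Stand-in for the Invoke Pipeline activity: run a child with its parameters.
    return {"pipeline": name, "params": params, "status": "Succeeded"}

run_date = "2026-04-21"

# The three loads run first, each receiving its own parameters.
loads = [
    invoke_pipeline("Load-Customers", source="customers", date=run_date),
    invoke_pipeline("Load-Orders", source="orders", date=run_date),
    invoke_pipeline("Load-Products", source="products", date=run_date),
]

# The transform depends on all three loads succeeding.
if all(r["status"] == "Succeeded" for r in loads):
    transform = invoke_pipeline("Transform-Notebook", date=run_date)
```

The master holds only sequencing and dependency logic; everything source-specific lives in the children, which is what makes them independently testable.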

Pattern 2: ForEach loop

Process a list of items in parallel or sequentially.

Setting      | Sequential                                | Parallel
Behaviour    | Process items one at a time               | Process multiple items simultaneously
Use when     | Order matters, or items share a resource  | Items are independent, speed matters
Max parallel | 1                                         | Up to 50 (configurable)
ℹ️ Scenario: Carlos processes 12 factories

Precision Manufacturing has 12 factories, each producing a daily CSV. Carlos builds a ForEach pipeline:

  1. Master pipeline gets a list of factory codes: ["F01", "F02", ..., "F12"]
  2. ForEach activity iterates over the list in parallel (batch size: 4)
  3. Inside the loop: Copy activity downloads the factory’s CSV, Notebook activity transforms and loads

All 12 factories are processed in 3 parallel batches instead of sequentially. Total time drops from 2 hours to 35 minutes.
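The batching behaviour maps closely onto a thread pool. In this sketch, `process_factory` is a hypothetical stand-in for the Copy + Notebook activities inside the loop, and `max_workers=4` mirrors the ForEach batch size:

```python
from concurrent.futures import ThreadPoolExecutor

def process_factory(code):
    # Stand-in for the loop body: download the factory's CSV, transform, load.
    return f"loaded {code}"

factory_codes = [f"F{i:02d}" for i in range(1, 13)]  # F01 .. F12

# At most 4 items in flight at once, like a ForEach batch count of 4.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_factory, factory_codes))

print(results[0], results[-1])  # loaded F01 loaded F12
```

As with ForEach, results come back in input order even though items run concurrently; the parallelism only pays off because the factories' data sets are independent.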

Pattern 3: Conditional branching

If Condition and Switch activities route execution based on expressions.

Activity     | Use case
If Condition | Binary: if file exists, load it; else send alert
Switch       | Multi-way: based on source type, route to different transformation logic
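A Switch activity behaves like a key-based dispatch with a default branch. This Python analogy is purely conceptual (the handler names are invented for illustration):

```python
def transform_csv(name): return f"csv:{name}"
def transform_json(name): return f"json:{name}"
def transform_default(name): return f"raw:{name}"

# Switch on something like @pipeline().parameters.sourceType: one case per key.
handlers = {"csv": transform_csv, "json": transform_json}

def run_switch(source_type, name):
    # Unmatched values fall through to the default branch, as in Switch.
    handler = handlers.get(source_type, transform_default)
    return handler(name)

print(run_switch("csv", "orders"))  # csv:orders
print(run_switch("xml", "legacy"))  # raw:legacy
```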

Pattern 4: Notebook parameters

Notebooks accept parameters from pipelines via base parameters. The pipeline passes key-value pairs, and the notebook reads them as variables.

In the notebook:

# Parameters cell (tagged as parameters in notebook UI)
start_date = "2026-04-21"
source_name = "orders"

The pipeline overrides these values at runtime. The notebook code uses them as regular Python variables.
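One way to picture the override is a dictionary merge where the pipeline's values win. This is purely illustrative; Fabric injects the values into the parameters cell itself, and there is no merge call in your notebook code:

```python
# Defaults from the notebook's tagged parameters cell.
defaults = {"start_date": "2026-04-21", "source_name": "orders"}

# Key-value pairs the pipeline sends as base parameters on the Notebook activity.
base_parameters = {"start_date": "2026-03-01"}

# Effective values: defaults, overridden by whatever the pipeline passed.
effective = {**defaults, **base_parameters}
print(effective)  # {'start_date': '2026-03-01', 'source_name': 'orders'}
```

Parameters the pipeline does not pass keep their notebook defaults, which is why the parameters cell doubles as documentation of what the notebook expects.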

💡 Exam tip: How notebooks receive parameters

In the exam, look for the pattern: “A pipeline invokes a notebook with dynamic values.” The mechanism is base parameters on the Notebook activity in the pipeline. Values are passed as key-value pairs and override the defaults in the notebook’s parameters cell.

Common trap: a question mentions “setting notebook variables from a pipeline.” The answer is NOT environment variables or Spark config — it’s base parameters.


Question

What is the difference between a pipeline parameter and a variable?

Answer

Parameters are set BEFORE the run and are immutable during execution. Variables are set DURING the run by Set Variable activities and can change. Parameters are passed to child pipelines; variables are not.

Question

What is a master-child pipeline pattern?

Answer

A master pipeline calls multiple child pipelines, each with different parameters. Benefits: child pipelines are small, testable, reusable, and have their own retry logic. The master coordinates sequencing and dependencies.

Question

How does a pipeline pass parameters to a notebook?

Answer

Through base parameters on the Notebook activity. The pipeline sends key-value pairs that override the default values in the notebook's parameters cell. The notebook reads them as regular Python/Scala variables.

Question

What expression returns yesterday's date in a pipeline?

Answer

@formatDateTime(addDays(utcNow(), -1), 'yyyy-MM-dd') — subtracts one day from the current UTC time and formats it.


Knowledge Check

Carlos needs to load data from 12 factories. Each factory's data is independent. He wants maximum speed. Which pipeline pattern should he use?

Knowledge Check

A pipeline needs to use different database connection strings based on an `environment` parameter ('dev' or 'prod'). Which expression correctly returns the right connection string?

Next up: Delta Lake: The Heart of Fabric — understand the storage foundation that powers every lakehouse in Fabric.



© 2026 Sutheesh. All rights reserved.

Guided is an independent study resource and is not affiliated with, endorsed by, or officially connected to Microsoft. Microsoft, Azure, and related trademarks are property of Microsoft Corporation. Always verify information against Microsoft Learn.