Pipeline Patterns: Parameters & Expressions
Build reusable, dynamic pipelines with parameters, expressions, variables, and orchestration patterns like master-child and conditional branching.
Why parameterise pipelines?
Think of a recipe template.
Instead of writing separate recipes for chicken curry, beef curry, and vegetable curry, you write one recipe with a blank: “Cook [PROTEIN] with curry sauce.” Fill in the blank at cooking time.
Pipeline parameters are those blanks. Instead of building separate pipelines for each data source, each date range, or each environment, you build ONE pipeline with parameters. At runtime, you fill in the values: “Load data for [DATE], from [SOURCE], into [DESTINATION].”
This turns one pipeline into many — without duplicating anything.
Parameters vs variables
| Feature | Parameters | Variables |
|---|---|---|
| Set when? | Before the run starts (input values) | During the run (computed/updated by activities) |
| Changed during run? | No — immutable once the run starts | Yes — Set Variable activity updates them |
| Scope | Pipeline-wide; can be explicitly passed to child pipelines via the Invoke Pipeline activity | Pipeline-wide; cannot be passed to child pipelines |
| Types | String, Int, Float, Bool, Array, Object | String, Bool, Array |
| Typical use | Date range, source name, environment flag | Loop counters, accumulated results, flags set by conditions |
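To make the distinction concrete, here is a sketch of how parameters and variables might be declared in a pipeline's JSON definition. The names and exact field layout are illustrative, not copied from a real pipeline:

```json
{
  "name": "Load-Daily",
  "properties": {
    "parameters": {
      "loadDate": { "type": "string", "defaultValue": "" },
      "source":   { "type": "string", "defaultValue": "orders" }
    },
    "variables": {
      "processedFiles": { "type": "Array" }
    }
  }
}
```

Note that the parameter block defines inputs fixed at run start, while the variable block defines mutable state that a Set Variable activity can update mid-run.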
Dynamic expressions
Expressions let you compute values at runtime using system variables, parameters, and functions.
Common expression patterns
| Pattern | Expression | Result |
|---|---|---|
| Today’s date | @formatDateTime(utcNow(), 'yyyy-MM-dd') | 2026-04-21 |
| Yesterday | @formatDateTime(addDays(utcNow(), -1), 'yyyy-MM-dd') | 2026-04-20 |
| Dynamic file path | @concat('raw/', pipeline().parameters.source, '/', formatDateTime(utcNow(), 'yyyy/MM/dd'), '/') | raw/orders/2026/04/21/ |
| Conditional value | @if(equals(pipeline().parameters.env, 'prod'), 'prod-server', 'dev-server') | prod-server or dev-server |
| Parameter from parent | @pipeline().parameters.startDate | Whatever the parent pipeline passed |
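For intuition, the expressions above can be mimicked in plain Python. This is purely an analogy — pipelines evaluate the real expressions natively at runtime, and the `source`/`env` values below stand in for pipeline parameters:

```python
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)

today = now.strftime("%Y-%m-%d")                            # formatDateTime(utcNow(), 'yyyy-MM-dd')
yesterday = (now - timedelta(days=1)).strftime("%Y-%m-%d")  # addDays(utcNow(), -1)

source = "orders"                                           # stands in for pipeline().parameters.source
path = f"raw/{source}/{now.strftime('%Y/%m/%d')}/"          # the concat(...) file-path pattern

env = "prod"                                                # stands in for pipeline().parameters.env
server = "prod-server" if env == "prod" else "dev-server"   # the if(equals(...)) pattern
```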
Scenario: Carlos's dynamic file paths
Carlos’s daily ETL pipeline at Precision Manufacturing loads files from Azure Blob Storage. The files are organised by date: /raw/production/2026/04/21/output.csv.
Instead of hardcoding the path, Carlos uses a dynamic expression:
@concat('raw/production/', formatDateTime(pipeline().parameters.loadDate, 'yyyy/MM/dd'), '/output.csv')
The loadDate parameter defaults to yesterday’s date. For backfill runs, Carlos passes a specific date. One pipeline handles both daily and historical loads.
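The same default-or-override logic can be sketched in Python to show why one pipeline covers both cases. The function name is hypothetical; the path format matches Carlos's expression:

```python
from datetime import date, timedelta

def build_path(load_date=None):
    """Default to yesterday (the daily run); accept an explicit date for backfills."""
    if load_date is None:
        load_date = date.today() - timedelta(days=1)
    return f"raw/production/{load_date:%Y/%m/%d}/output.csv"

daily_path = build_path()                       # yesterday's file, for the scheduled run
backfill_path = build_path(date(2026, 4, 21))   # 'raw/production/2026/04/21/output.csv'
```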
Orchestration patterns
Pattern 1: Master-child pipelines
A master pipeline calls multiple child pipelines, each with different parameters. This keeps individual pipelines small and testable.
Master Pipeline
├── Invoke: Load-Customers (source="customers", date="2026-04-21")
├── Invoke: Load-Orders (source="orders", date="2026-04-21")
├── Invoke: Load-Products (source="products", date="2026-04-21")
└── Transform-Notebook (depends on all three above)
Benefits: each child pipeline can be tested independently, reused across multiple master pipelines, and has its own retry logic.
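A single invocation in the master pipeline might look roughly like the JSON fragment below. Field names vary between Data Factory flavours, so treat this as an illustrative shape rather than an exact schema:

```json
{
  "name": "Invoke Load-Orders",
  "type": "InvokePipeline",
  "typeProperties": {
    "pipeline": "Load-Orders",
    "parameters": {
      "source": "orders",
      "date": "@formatDateTime(addDays(utcNow(), -1), 'yyyy-MM-dd')"
    },
    "waitOnCompletion": true
  }
}
```

Because the `date` value is an expression, the master pipeline can pass a freshly computed date to every child on every run.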
Pattern 2: ForEach loop
Process a list of items in parallel or sequentially.
| Setting | Sequential | Parallel |
|---|---|---|
| Behaviour | Process items one at a time | Process multiple items simultaneously |
| Use when | Order matters, or items share a resource | Items are independent, speed matters |
| Max parallel | 1 | Up to 50 (configurable) |
Scenario: Carlos processes 12 factories
Precision Manufacturing has 12 factories, each producing a daily CSV. Carlos builds a ForEach pipeline:
- Master pipeline gets a list of factory codes: ["F01", "F02", ..., "F12"]
- ForEach activity iterates over the list in parallel (batch size: 4)
- Inside the loop: a Copy activity downloads the factory's CSV, then a Notebook activity transforms and loads it
All 12 factories are processed in 3 parallel batches instead of sequentially. Total time drops from 2 hours to 35 minutes.
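The batching behaviour can be modelled with a thread pool: `max_workers=4` plays the role of the ForEach batch size, so at most four factories are in flight at once. The `load_factory` function is a placeholder for the copy-and-transform work inside the loop:

```python
from concurrent.futures import ThreadPoolExecutor

factories = [f"F{n:02d}" for n in range(1, 13)]  # ["F01", "F02", ..., "F12"]

def load_factory(code):
    # placeholder for: Copy activity downloads the CSV, Notebook activity transforms/loads
    return f"loaded {code}"

# max_workers=4 ~ ForEach batch size: at most 4 items run concurrently
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(load_factory, factories))
```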
Pattern 3: Conditional branching
If Condition and Switch activities route execution based on expressions.
| Activity | Use Case |
|---|---|
| If Condition | Binary: if file exists → load it; else → send alert |
| Switch | Multi-way: based on source type → different transformation logic |
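The Switch pattern maps directly onto a multi-way branch. This Python analogue (with illustrative case values) shows the shape — one case per expected value of the Switch expression, plus a default:

```python
def route(source_type):
    # Analogue of a Switch activity: source_type plays the role of the Switch expression
    if source_type == "csv":
        return "run CSV transform notebook"
    elif source_type == "json":
        return "run JSON transform notebook"
    else:
        return "send alert"  # the Switch activity's default case
```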
Pattern 4: Notebook parameters
Notebooks accept parameters from pipelines via base parameters. The pipeline passes key-value pairs, and the notebook reads them as variables.
In the notebook:
# Parameters cell (tagged as parameters in notebook UI)
start_date = "2026-04-21"
source_name = "orders"
The pipeline overrides these values at runtime. The notebook code uses them as regular Python variables.
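On the pipeline side, the Notebook activity's base parameters might look like the fragment below. The activity `type` and field names are illustrative approximations, but the key idea is exact: each key matches a variable name in the notebook's parameters cell, and each value can be a dynamic expression:

```json
{
  "name": "Run Transform",
  "type": "Notebook",
  "typeProperties": {
    "baseParameters": {
      "start_date":  "@formatDateTime(addDays(utcNow(), -1), 'yyyy-MM-dd')",
      "source_name": "@pipeline().parameters.source"
    }
  }
}
```

At runtime these values replace the defaults (`start_date = "2026-04-21"`, `source_name = "orders"`) shown in the notebook above.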
Exam tip: How notebooks receive parameters
In the exam, look for the pattern: “A pipeline invokes a notebook with dynamic values.” The mechanism is base parameters on the Notebook activity in the pipeline. Values are passed as key-value pairs and override the defaults in the notebook’s parameters cell.
Common trap: a question mentions “setting notebook variables from a pipeline.” The answer is NOT environment variables or Spark config — it’s base parameters.
Check your understanding
- Carlos needs to load data from 12 factories. Each factory's data is independent. He wants maximum speed. Which pipeline pattern should he use?
- A pipeline needs to use different database connection strings based on an `environment` parameter ('dev' or 'prod'). Which expression correctly returns the right connection string?
Next up: Delta Lake: The Heart of Fabric — understand the storage foundation that powers every lakehouse in Fabric.