
AB-100 Study Guide

Domain 1: Plan AI-Powered Business Solutions

  • Agent Requirements & Data Readiness
  • AI Strategy & the Cloud Adoption Framework
  • Multi-Agent Solution Design
  • Build, Buy, or Extend
  • Generative AI, Knowledge Sources & Prompt Engineering
  • Small Language Models & Model Selection
  • ROI, TCO & Business Case Analysis

Domain 2: Design AI-Powered Business Solutions

  • Copilot in D365 Customer Experience & Service
  • Agent Types: Task, Autonomous & Prompt/Response
  • Foundry Tools & Code-First Solutions
  • Copilot Studio: Topics, Flows & Prompt Actions
  • Power Apps, WAF & Data Processing
  • Extensibility: Custom Models, M365 Agents & Copilot Studio
  • MCP, Computer Use & Agent Behaviours
  • M365 Agents: Teams, SharePoint & Sales/Service in M365 Copilot
  • D365 AI Orchestration: Finance, SCM & Customer Experience

Domain 3: Deploy AI-Powered Business Solutions

  • Agent Monitoring: Tools, Metrics, and Processes
  • Telemetry Interpretation and Agent Tuning
  • Testing Strategy for AI Agents
  • Custom Model Validation and Prompt Best Practices
  • End-to-End Testing for Multi-App AI Solutions
  • ALM Foundations & Data Lifecycle for AI
  • ALM for Copilot Studio Agents
  • ALM for Microsoft Foundry Agents
  • ALM for D365 AI Features
  • Agent Security (free)
  • Governance for AI Agents (free)
  • Prompt Security & AI Vulnerabilities (free)
  • Responsible AI & Audit Trails (free)

Domain 3: Deploy AI-Powered Business Solutions (Premium, ~14 min read)

ALM for Microsoft Foundry Agents

Design code-first ALM for Foundry agents and custom AI models: Git-based version control, CI/CD pipelines, model registries, and automated evaluation.

Foundry ALM is code-first

☕ Simple explanation

If Copilot Studio ALM is like shipping sealed containers, Foundry ALM is like managing a software factory. Everything lives in Git: your agent code, your prompt flows, your model training scripts. Deployments happen through CI/CD pipelines, just like traditional software.

The big addition: models are deployable artefacts with their own lifecycle. You version them, test them, stage them, and promote them, just as you would application code.

Microsoft Foundry (formerly Azure AI Studio) follows a code-first ALM model where all artefacts (agent definitions, prompt flows, model configurations, evaluation scripts, and infrastructure) are stored as code in Git repositories. Deployment automation uses standard CI/CD tools (GitHub Actions, Azure DevOps Pipelines). Model lifecycle management is handled through a model registry that tracks versions, lineage, and deployment status.

This contrasts sharply with Copilot Studio's solution-based ALM. The exam tests whether architects can choose the right ALM approach based on the platform.

Copilot Studio vs Foundry ALM

Choose the right ALM approach based on the platform
| Feature | Copilot Studio ALM | Foundry ALM | When to Use |
|---|---|---|---|
| Artefact storage | Power Platform solutions in Dataverse | Code in Git repositories | Copilot Studio for low-code agents; Foundry for code-first agents and custom models. |
| Version control | Solution versioning (major.minor.build.revision) | Git commits, branches, tags | Copilot Studio versions solutions as packages; Foundry versions everything as code. |
| Deployment tool | Power Platform Pipelines or Azure DevOps with solution tasks | GitHub Actions, Azure DevOps Pipelines, or Azure CLI | Copilot Studio uses solution import; Foundry uses standard deployment tooling. |
| Environment config | Environment variables and connection references | Infrastructure as Code parameters, environment files, Key Vault references | Same concept, different mechanisms. |
| Model management | Not applicable (Microsoft manages the models) | Model registry with versioning, staging, and promotion | Foundry gives you full control over the model lifecycle. |
| Testing approach | Manual testing plus solution checker | Automated evaluation pipelines with quality gates | Foundry supports automated quality gates in CI/CD. |

Model registry and lifecycle

The model registry is central to Foundry ALM. It tracks every model version and its metadata:

| Stage | What Happens | Key Artefacts |
|---|---|---|
| Training | Model trained on prepared data using training scripts | Training script, hyperparameters, training data version |
| Evaluation | Model tested against evaluation dataset | Evaluation metrics (accuracy, precision, recall, F1), evaluation dataset version |
| Registration | Model registered in the registry with version and metadata | Model artefact, model card (description, intended use, limitations) |
| Staging | Model deployed to a staging endpoint for integration testing | Staging endpoint URL, integration test results |
| Production | Model promoted to production endpoint | Production endpoint URL, traffic routing configuration |
| Monitoring | Model performance tracked in production | Performance metrics, data drift alerts, feedback data |
| Retraining | Model retrained when performance degrades | New training data, updated training script, retraining trigger |
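
The registry behaviour described above can be sketched as a minimal in-memory registry. This is an illustrative Python sketch, not the Foundry SDK; the `ModelRegistry` and `ModelVersion` names are hypothetical:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ModelVersion:
    """An immutable record of one registered model version."""
    name: str
    version: int
    metrics: dict          # e.g. evaluation accuracy, F1
    stage: str = "registered"

class ModelRegistry:
    """Tracks versions per model name and their lifecycle stage."""
    def __init__(self):
        self._records = {}  # (name, version) -> ModelVersion

    def register(self, name: str, metrics: dict) -> ModelVersion:
        # Each registration creates a new, immutable version number.
        next_version = 1 + max(
            (v for (n, v) in self._records if n == name), default=0
        )
        record = ModelVersion(name, next_version, metrics)
        self._records[(name, next_version)] = record
        return record

    def promote(self, name: str, version: int, stage: str) -> ModelVersion:
        # The artefact itself never changes; only its lifecycle stage moves.
        promoted = replace(self._records[(name, version)], stage=stage)
        self._records[(name, version)] = promoted
        return promoted
```

Note that registering the same model name twice yields version 2 rather than overwriting version 1, which mirrors the rule that registered versions are immutable.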

Prompt flow versioning

Prompt flows in Foundry (classic) are stored as YAML and Python files, making them fully version-controllable. Note that prompt flow is associated with the classic Foundry experience; current Foundry capabilities are evolving, but the ALM principles remain the same:

  • Flow definition (YAML): defines the steps, inputs, outputs, and connections
  • Node implementations (Python): custom logic for each step in the flow
  • Environment parameters: connection strings, model endpoints, API keys stored in environment-specific config
  • Evaluation flows: separate flows that test the quality of the main flow's outputs

All of these live in Git. Every change creates a commit. Every deployment references a specific commit SHA.
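
Pinning a deployment to a specific commit can be sketched as follows. This is a hypothetical Python sketch of the traceability idea, not a Foundry API; `FlowDeployment` and `make_deployment` are invented names:

```python
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowDeployment:
    """One deployment of a prompt flow, pinned to an exact Git commit."""
    flow_path: str    # e.g. "flows/credit-assessment"
    commit_sha: str   # the full commit SHA the deployment was built from
    environment: str  # "dev", "staging", or "production"

def make_deployment(flow_path: str, commit_sha: str,
                    environment: str) -> FlowDeployment:
    # Require a full 40-character SHA so the reference is unambiguous
    # and the deployed flow can always be traced back or rolled back.
    if not re.fullmatch(r"[0-9a-f]{40}", commit_sha):
        raise ValueError("commit_sha must be a full 40-character Git SHA")
    return FlowDeployment(flow_path, commit_sha, environment)
```

Requiring the full SHA (rather than a branch name or short SHA) is what makes the deployment record immutable and auditable.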

💡 Scenario: Ravi builds a CI/CD pipeline for Vanguard's credit risk model

Ravi Krishnan at Cloudbridge Partners sets up automated ALM for Vanguard's credit risk model:

Git repository structure:

  • /models/credit-risk/: training scripts, evaluation scripts, model configuration
  • /flows/credit-assessment/: prompt flow YAML and Python nodes
  • /infra/: Bicep templates for model endpoints and compute
  • /tests/: integration tests and evaluation datasets

GitHub Actions pipeline (runs monthly):

  1. Data preparation: pull the latest financial data, apply transformations, version the dataset
  2. Training: run the training script on GPU compute with the new data
  3. Evaluation: run the evaluation flow against a held-out test set
  4. Quality gate: if accuracy is below 90% or fairness metrics fail, the pipeline stops and alerts the team
  5. Registration: register the new model version in the Foundry model registry
  6. Canary deployment: deploy to staging, route 10% of traffic to the new model
  7. A/B comparison: compare new model performance against the production baseline for 48 hours
  8. Promotion or rollback: if A/B results pass thresholds, promote to 100%. Otherwise, roll back to the baseline.
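
The two decision points in the pipeline above (the quality gate at step 4 and the promote-or-rollback choice at step 8) can be sketched as plain functions. The thresholds are the scenario's; the function names are illustrative:

```python
def quality_gate(metrics: dict, min_accuracy: float = 0.90) -> bool:
    # Step 4: the pipeline proceeds only if accuracy meets the floor
    # and the fairness checks have passed.
    return metrics["accuracy"] >= min_accuracy and metrics["fairness_pass"]

def canary_decision(candidate_accuracy: float, baseline_accuracy: float,
                    min_uplift: float = 0.0) -> str:
    # Step 8: promote the canary only if it at least matches the baseline
    # over the comparison window; otherwise roll back.
    if candidate_accuracy - baseline_accuracy >= min_uplift:
        return "promote"
    return "rollback"
```

Encoding the gates as code (rather than as a manual review step) is what lets the pipeline stop itself before a degraded model reaches production.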

Key design decision: Ravi parameterises the pipeline so it works across environments. Dev uses a smaller dataset and cheaper compute. Production uses the full dataset and production-grade compute. Same pipeline code, different parameters.
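
That parameterisation pattern can be sketched as a simple lookup. The parameter names and values here are illustrative, not the scenario's actual configuration:

```python
# Same pipeline code everywhere; only the parameters differ per environment.
PIPELINE_PARAMS = {
    "dev":        {"dataset": "sampled-10pct", "compute": "cpu-small",   "canary_pct": 0},
    "production": {"dataset": "full",          "compute": "gpu-cluster", "canary_pct": 10},
}

def resolve_params(environment: str) -> dict:
    # Fail fast on unknown environments rather than deploying with defaults.
    try:
        return PIPELINE_PARAMS[environment]
    except KeyError:
        raise ValueError(f"unknown environment: {environment}") from None
```

In a real pipeline the same idea is usually expressed as CI/CD variables or parameter files, but the principle is identical: one pipeline definition, environment-specific inputs.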

💡 Exam tip: Foundry treats models as first-class deployable artefacts

The exam expects you to understand that in Foundry:

  • Models have their own CI/CD, separate from application code. Model training, evaluation, and deployment form a pipeline, not a manual process.
  • Model versions are immutable: once registered, a model version cannot be modified. You create a new version instead.
  • A/B testing is expected: canary deployments that compare new models against baselines are a standard pattern, not an advanced technique.
  • Prompt flows are code: they live in Git, have commit history, and deploy through pipelines. Do not confuse them with Copilot Studio topics (which are solution components).
  • Infrastructure as Code: model endpoints, compute resources, and networking are provisioned through Bicep or Terraform, not manual portal configuration.

Flashcards

Question

How does Foundry ALM differ from Copilot Studio ALM?


Answer

Foundry is code-first: artefacts stored in Git, deployed via CI/CD pipelines, with model registry for version management. Copilot Studio is solution-based: artefacts packaged in Power Platform solutions, deployed via Pipelines or solution import.


Question

What is a model registry and why is it important?


Answer

A model registry tracks model versions with metadata (training data version, evaluation metrics, model card). It enables promotion from staging to production, rollback to previous versions, and audit trails for regulated industries.


Question

What is canary deployment for AI models?


Answer

A deployment pattern where a new model version receives a small percentage of production traffic (e.g. 10%) while the baseline model handles the rest. Performance is compared over a set period. If the new model meets thresholds, it is promoted to 100%. Otherwise, it is rolled back.


Question

How are prompt flows version-controlled in Foundry?


Answer

Prompt flows are stored as YAML definitions and Python node implementations in Git. Every change creates a commit. Deployments reference a specific commit SHA, enabling full traceability and rollback.


Knowledge check

Dev Patel needs to deploy a retrained credit risk model to production. The model was trained on new data and shows improved accuracy in evaluation. What is the recommended deployment approach?


An architect proposes storing Foundry prompt flows in a SharePoint document library for version control. What is wrong with this approach?


Next up: ALM for D365 AI Features, covering AI feature rollouts in Dynamics 365 Finance, Supply Chain, Customer Service, and Sales.



© 2026 Sutheesh. All rights reserved.

Guided is an independent study resource and is not affiliated with, endorsed by, or officially connected to Microsoft. Microsoft, Azure, and related trademarks are property of Microsoft Corporation. Always verify information against Microsoft Learn.