
AB-100 Study Guide

Domain 1: Plan AI-Powered Business Solutions

  • Agent Requirements & Data Readiness
  • AI Strategy & the Cloud Adoption Framework
  • Multi-Agent Solution Design
  • Build, Buy, or Extend
  • Generative AI, Knowledge Sources & Prompt Engineering
  • Small Language Models & Model Selection
  • ROI, TCO & Business Case Analysis

Domain 2: Design AI-Powered Business Solutions

  • Copilot in D365 Customer Experience & Service
  • Agent Types: Task, Autonomous & Prompt/Response
  • Foundry Tools & Code-First Solutions
  • Copilot Studio: Topics, Flows & Prompt Actions
  • Power Apps, WAF & Data Processing
  • Extensibility: Custom Models, M365 Agents & Copilot Studio
  • MCP, Computer Use & Agent Behaviours
  • M365 Agents: Teams, SharePoint & Sales/Service in M365 Copilot
  • D365 AI Orchestration: Finance, SCM & Customer Experience

Domain 3: Deploy AI-Powered Business Solutions

  • Agent Monitoring: Tools, Metrics, and Processes
  • Telemetry Interpretation and Agent Tuning
  • Testing Strategy for AI Agents
  • Custom Model Validation and Prompt Best Practices
  • End-to-End Testing for Multi-App AI Solutions
  • ALM Foundations & Data Lifecycle for AI
  • ALM for Copilot Studio Agents
  • ALM for Microsoft Foundry Agents
  • ALM for D365 AI Features
  • Agent Security Free
  • Governance for AI Agents Free
  • Prompt Security & AI Vulnerabilities Free
  • Responsible AI & Audit Trails Free

Domain 1: Plan AI-Powered Business Solutions ⏱ ~15 min read

Build, Buy, or Extend

Determine when to use prebuilt agents, extend M365 Copilot, build in Copilot Studio or Foundry, or create custom AI models β€” using a structured decision hierarchy.

The cheapest agent is the one you do not build

β˜• Simple explanation

Imagine you need a shelf. You could buy one from a furniture store, customise a flat-pack kit, hire a carpenter, or grow the tree yourself.

Each successive option costs more time and effort. The furniture store shelf (prebuilt agent) works if it fits your space. The flat-pack kit (extending Copilot) lets you adjust the dimensions. The carpenter (Copilot Studio or Foundry) builds exactly what you want. Growing the tree (custom model) only makes sense if no wood on earth matches your requirements.

The exam rewards choosing the simplest option that meets the requirement. Start from β€œbuy” and only escalate when you hit a wall.

The AB-100 exam tests a five-step decision hierarchy for AI component selection. Each step increases cost, complexity, and maintenance burden:

  1. Use prebuilt β€” D365 embedded AI, M365 Copilot agents, Copilot Studio templates, ISV agents from marketplace
  2. Extend M365 Copilot β€” declarative agents, Graph connectors, API plugins that add capabilities to the existing Copilot experience
  3. Build in Copilot Studio β€” custom agents with low-code authoring, Dataverse integration, channel support
  4. Build in Foundry β€” code-first agents with full model control, custom orchestration, Azure-scale infrastructure
  5. Create custom models β€” fine-tune or train from scratch when no general-purpose model fits the domain

The exam penalises over-engineering. If a prebuilt agent or Copilot extension solves the problem, selecting Foundry or a custom model is the wrong answer.

Prebuilt agent sources

Before building anything, check what already exists:

| Source | What You Get | Examples |
| --- | --- | --- |
| D365 prebuilt AI | Ready-to-use AI features embedded in Dynamics 365 apps | Sales forecasting in D365 Sales, sentiment analysis in D365 Customer Service, cash flow predictions in D365 Finance |
| M365 Copilot agents | Built-in agents available to all Copilot-licensed users | Researcher (deep web research), Analyst (advanced data analysis), Facilitator (meeting follow-up) |
| Copilot Studio templates | Pre-configured agent templates you can customise | IT helpdesk, HR FAQ, employee onboarding, customer support |
| ISV and partner agents | Third-party agents from AppSource and partner marketplace | Industry-specific agents (healthcare scheduling, legal document review, financial compliance) |

The decision matrix

Build vs buy decision matrix

| Feature | Use Prebuilt | Extend M365 Copilot | Build in Copilot Studio | Build in Foundry |
| --- | --- | --- | --- | --- |
| Time to value | Hours to days | Days to weeks | Weeks to months | Months |
| Cost | Included in licence | Licence + minor dev effort | Licence + moderate dev effort | Licence + significant dev + compute |
| Customisation | None to minimal | Moderate β€” plugins, connectors, prompts | High β€” full agent design within low-code | Full β€” model, tools, orchestration, UX |
| Maintenance | Microsoft handles everything | Shared β€” Microsoft platform, your extensions | Shared β€” platform updates, your agent logic | You own model ops, drift, retraining |
| Technical skill | Admin or end user | Power user or developer | Citizen dev with pro-code escape | AI engineer, data scientist, developer |
| Data access | Native D365/M365 data only | M365 Graph + custom connectors | Dataverse + connectors + custom APIs | Any Azure data source, Fabric, external |
| Best for | Standard features that fit as-is | M365 productivity needs + org-specific data | Customer-facing agents, process automation | Complex reasoning, custom models, multi-agent |

The five-step decision hierarchy

Work through these steps in order. Stop as soon as a step meets the requirement:

Step 1: Does a prebuilt agent or feature already exist? Check D365 embedded AI, M365 Copilot agents, Copilot Studio templates, and ISV marketplace. If the prebuilt feature covers 80%+ of the requirement, use it.

Step 2: Can you extend M365 Copilot? If users need AI assistance within their M365 workflow and the gap is data access or a specific action, extend Copilot with a declarative agent, Graph connector, or API plugin.

Step 3: Should you build in Copilot Studio? If you need a standalone agent with channel support (web, Teams, phone), process automation, or customer-facing interaction β€” and the logic does not require custom models β€” build in Copilot Studio.

Step 4: Should you build in Foundry? If you need code-first orchestration, custom model hosting, multi-agent coordination, or access to Azure-scale compute and data β€” build in Foundry.

Step 5: Do you need a custom AI model? Only if general-purpose models cannot handle your domain after prompt engineering and RAG have been tried. This is the most expensive and maintenance-heavy option.
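The five steps above can be sketched as a single ordered walk. This is an illustrative sketch, not exam material: the `Requirement` fields, their names, and the 80% coverage threshold are assumptions standing in for the analysis each step actually implies.

```python
# Hypothetical sketch of the five-step decision hierarchy.
# Field names are illustrative, not from any Microsoft API or tool.
from dataclasses import dataclass


@dataclass
class Requirement:
    prebuilt_coverage: float   # fraction of the requirement a prebuilt agent covers
    fits_m365_workflow: bool   # users work inside the M365 Copilot experience
    needs_custom_model: bool   # general-purpose models insufficient after prompts + RAG
    needs_code_first: bool     # custom orchestration, model hosting, multi-agent


def choose_approach(req: Requirement) -> str:
    """Walk the hierarchy in order; stop at the first step that fits."""
    if req.prebuilt_coverage >= 0.8 and not req.needs_custom_model:
        return "use prebuilt"                 # Step 1
    if req.fits_m365_workflow and not req.needs_code_first:
        return "extend M365 Copilot"          # Step 2
    if not req.needs_code_first and not req.needs_custom_model:
        return "build in Copilot Studio"      # Step 3
    if not req.needs_custom_model:
        return "build in Foundry"             # Step 4
    return "create custom model"              # Step 5
```

The point of the ordering is that each `if` is only reached after every cheaper option has been ruled out, which mirrors the "stop at the first step that meets the requirement" rule.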

Platform constraints to know

| Platform | Key Constraints |
| --- | --- |
| Copilot Studio | Message size limits (variable by channel). Canvas topics limited to ~200 nodes. No native model hosting β€” calls external models via HTTP. Data must be in Dataverse or accessible via connector. Environment-level DLP policies apply. |
| Microsoft Foundry | Compute quotas per subscription (TPM limits by model and region). Model deployment requires approval for certain models. Network isolation requires private endpoints for production. Cost scales with usage β€” no flat-rate pricing for compute. |
| M365 Copilot (extensions) | All data access goes through Microsoft Graph β€” no direct database queries. Declarative agents limited to instructions, knowledge, and actions. API plugins must follow the OpenAPI spec. Tenant admin must approve third-party plugins. |
| D365 Embedded AI | Features are per-app β€” no cross-app AI orchestration natively. Configuration only, no custom logic. Availability varies by D365 licence tier. Model retraining is not user-controlled. |

When custom AI models are justified

Do not jump to custom models. Follow this escalation path:

| Approach | When to Use | Example |
| --- | --- | --- |
| Prompt engineering | Standard text generation, summarisation, classification with a general model | Summarise customer service cases using GPT-4o with a well-crafted system prompt |
| RAG (Retrieval-Augmented Generation) | Domain knowledge needed but the model’s reasoning is sufficient | Agent that answers questions about internal policies by retrieving from a SharePoint index |
| Fine-tuning | Domain terminology, tone, or patterns that prompts and RAG cannot capture | Medical coding agent that must use precise ICD-10 codes consistently |
| Train from scratch | Truly novel domain with no transfer learning benefit, or edge/embedded inference | Custom defect detection model for a specific manufacturing process using proprietary sensor data |
πŸ’‘ The RAG-before-fine-tune rule

This is a critical exam concept. Before investing in model customisation, exhaust the cheaper options in order:

  1. Prompt engineering β€” Refine the system prompt, add examples (few-shot), structure the output format. Cost: near zero. Time: hours.
  2. RAG β€” Ground the model with retrieved documents from Azure AI Search, Fabric, or Dataverse. The model reasons over your data without being retrained. Cost: search index + retrieval compute. Time: days to weeks.
  3. Fine-tuning β€” Train the model on your domain-specific data to adjust its weights. Needed when the model consistently misuses terminology, misses domain patterns, or requires a specific output style that prompts cannot enforce. Cost: training compute + ongoing retraining. Time: weeks.
  4. Train from scratch β€” Only when no pretrained model transfers to your domain. Extremely rare for business solutions. Cost: massive compute + large labelled dataset. Time: months.

The exam will present scenarios where a candidate might jump to fine-tuning. The correct answer is almost always β€œtry RAG first.” Fine-tuning is justified only when RAG is demonstrably insufficient.
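The escalation order can be captured in a few lines. A minimal sketch, assuming each level has genuinely been tried before the next is considered; the boolean inputs are placeholders standing in for the outcome of real evaluation work, not values any tool reports.

```python
# Illustrative sketch of the RAG-before-fine-tune escalation path.
# Inputs represent evaluation results from trying each cheaper option first.
def escalate(prompt_ok: bool, rag_ok: bool, pretrained_transfers: bool) -> str:
    """Return the cheapest customisation level that meets the requirement."""
    if prompt_ok:
        return "prompt engineering"   # near-zero cost, hours
    if rag_ok:
        return "RAG"                  # search index + retrieval compute, days to weeks
    if pretrained_transfers:
        return "fine-tuning"          # training compute + ongoing retraining, weeks
    return "train from scratch"       # massive compute + labelled data, months
```

Adrienne’s scenario later in this module maps onto `escalate(False, False, True)`: prompts and RAG fell short, but a pretrained model still transferred, so fine-tuning was the right level to stop at.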

πŸ›οΈ Adrienne’s credit risk agent β€” a hybrid architecture

Adrienne Cole (VP Enterprise Tech, Vanguard Financial Group) needs an agent that assesses credit risk for commercial loan applications.

Step 1: Prebuilt? D365 Finance has credit management features but they are rule-based, not AI-powered. Not sufficient for the nuanced risk assessment Vanguard needs.

Step 2: Extend M365 Copilot? The agent needs to access financial models and external credit bureau data β€” not M365 data. Copilot extensions do not fit.

Step 3: Copilot Studio? The agent needs custom model inference (risk scoring) that Copilot Studio cannot host natively. However, the user interface β€” where loan officers interact with the agent β€” is a great fit for Copilot Studio.

Step 4: Foundry? Yes. The risk scoring model lives in Foundry, with access to Azure SQL (historical loan performance), external credit bureau APIs, and a Fabric lakehouse of market indicators.

Step 5: Custom model? Adrienne’s team tries RAG first β€” retrieve historical loan decisions and let GPT-4o reason over them. It works for 80% of cases but struggles with Vanguard’s proprietary risk weighting methodology. Dev Patel (AI platform engineer) fine-tunes a model on 100,000 historical risk assessments. The fine-tuned model achieves 94% agreement with senior analysts.

Final architecture:

  • Front-end: Copilot Studio agent in Teams β€” loan officers submit applications and receive risk assessments
  • Back-end: Foundry-hosted fine-tuned model performs risk scoring, orchestrated by a Foundry agent that retrieves data from multiple sources
  • Integration: Copilot Studio calls Foundry via HTTP connector. Results written back to D365 Finance for the loan record.

Marcus Webb (CISO) adds guardrails: all risk scores above β€œhigh” require human analyst review before the recommendation reaches the loan officer. Yuki Tanaka (compliance) ensures every assessment is logged with full explainability for regulatory audit.
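Marcus's human-review gate is simple to express in code. A minimal sketch with illustrative band names and thresholds; the real cut-offs would come from Vanguard's risk policy, not from these numbers.

```python
# Illustrative guardrail: scores in the "high" band are routed to a human
# analyst queue instead of going straight to the loan officer.
# Thresholds are made-up placeholders.
BANDS = [(0.75, "high"), (0.5, "medium"), (0.0, "low")]


def route_assessment(risk_score: float) -> str:
    """Return who sees the recommendation next."""
    band = next(name for threshold, name in BANDS if risk_score >= threshold)
    return "analyst review" if band == "high" else "loan officer"
```

Keeping the routing rule outside the model means the guardrail holds even if the model drifts or is retrained.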

Key terms

Question

What is the five-step decision hierarchy for AI component selection?


Answer

1. Use prebuilt (D365/M365/ISV). 2. Extend M365 Copilot (declarative agents, plugins, connectors). 3. Build in Copilot Studio (low-code custom agents). 4. Build in Foundry (code-first, custom models). 5. Create custom models (fine-tune or train from scratch). Stop at the first step that meets the requirement.


Question

What is the RAG-before-fine-tune rule?


Answer

Before investing in model customisation, exhaust cheaper options in order: prompt engineering, then RAG (retrieval-augmented generation), then fine-tuning, then training from scratch. Fine-tuning is only justified when RAG demonstrably cannot handle domain-specific terminology, patterns, or output requirements.


Question

When is a custom AI model justified over using a general-purpose model with RAG?


Answer

Custom models are justified when: the domain has proprietary patterns that general models cannot learn from prompts or retrieved documents, precise terminology is critical and RAG retrieval introduces inconsistency, edge or embedded inference requires a small specialised model, or the task has no transfer learning benefit from pretrained models.


Question

What are the key constraints of building agents in Copilot Studio vs Microsoft Foundry?


Answer

Copilot Studio: no native model hosting, data must be in Dataverse or via connector, topic node limits, environment DLP policies. Foundry: compute quotas per subscription (TPM limits), model deployment approvals, cost scales with usage, requires private endpoints for production. Copilot Studio is configuration-first; Foundry is code-first.


Knowledge check

  1. A D365 Customer Service team wants an AI agent that summarises case history and suggests next best actions using data already in Dataverse. No custom model is needed. Which approach is correct?

  2. Adrienne’s team tried RAG for the credit risk agent, but it struggles with Vanguard’s proprietary risk weighting methodology. What should they do next?

  3. A company wants to add a β€œquery our product catalogue” capability to M365 Copilot so employees can ask questions about products in natural language. The catalogue lives in a third-party PIM system. What is the simplest sufficient approach?



Next up: Generative AI, Knowledge Sources & Prompt Engineering β€” design effective prompts and understand how generative AI fits into your business solution architecture.



© 2026 Sutheesh. All rights reserved.

Guided is an independent study resource and is not affiliated with, endorsed by, or officially connected to Microsoft. Microsoft, Azure, and related trademarks are property of Microsoft Corporation. Always verify information against Microsoft Learn.