
AB-100 Study Guide

Domain 1: Plan AI-Powered Business Solutions

  • Agent Requirements & Data Readiness
  • AI Strategy & the Cloud Adoption Framework
  • Multi-Agent Solution Design
  • Build, Buy, or Extend
  • Generative AI, Knowledge Sources & Prompt Engineering
  • Small Language Models & Model Selection
  • ROI, TCO & Business Case Analysis

Domain 2: Design AI-Powered Business Solutions

  • Copilot in D365 Customer Experience & Service
  • Agent Types: Task, Autonomous & Prompt/Response
  • Foundry Tools & Code-First Solutions
  • Copilot Studio: Topics, Flows & Prompt Actions
  • Power Apps, WAF & Data Processing
  • Extensibility: Custom Models, M365 Agents & Copilot Studio
  • MCP, Computer Use & Agent Behaviours
  • M365 Agents: Teams, SharePoint & Sales/Service in M365 Copilot
  • D365 AI Orchestration: Finance, SCM & Customer Experience

Domain 3: Deploy AI-Powered Business Solutions

  • Agent Monitoring: Tools, Metrics, and Processes
  • Telemetry Interpretation and Agent Tuning
  • Testing Strategy for AI Agents
  • Custom Model Validation and Prompt Best Practices
  • End-to-End Testing for Multi-App AI Solutions
  • ALM Foundations & Data Lifecycle for AI
  • ALM for Copilot Studio Agents
  • ALM for Microsoft Foundry Agents
  • ALM for D365 AI Features
  • Agent Security Free
  • Governance for AI Agents Free
  • Prompt Security & AI Vulnerabilities Free
  • Responsible AI & Audit Trails Free

Domain 1: Plan AI-Powered Business Solutions ⏱ ~14 min read

Agent Requirements & Data Readiness

Assess where agents add value across task automation, analytics, and decision-making — and evaluate whether your data is ready to ground them.

Where agents actually help

☕ Simple explanation

Think of agents like specialist contractors. You would not hire an electrician to paint walls. Agents are the same — each type of agent solves a specific category of problem.

Task automation agents handle repetitive work — like a robotic arm on an assembly line that never gets tired. They follow rules, process queues, and execute steps.

Analytics agents are like data detectives. They sift through mountains of information and surface patterns humans would miss.

Decision-making agents are advisors. They weigh options, apply business logic, and recommend (or take) action when speed matters more than deliberation.

Agents in Dynamics 365 and Power Platform fall into three functional categories, each with distinct architecture and data requirements:

  1. Task automation: Process-oriented agents that execute multi-step workflows involving structured data, external APIs, and conditional logic. They replace or augment Power Automate flows when tasks require reasoning over unstructured inputs or handling exceptions.
  2. Data analytics: Agents that query, correlate, and interpret data across multiple sources. They surface insights from Dataverse, Azure Synapse, Fabric lakehouses, or external systems — going beyond static dashboards to answer ad-hoc questions.
  3. Decision-making: Agents that evaluate options against business rules, historical patterns, and real-time signals to recommend or autonomously execute decisions. They operate in the gap between “we have the data” and “someone needs to act on it.”

The exam expects you to match agent type to business scenario — not every process needs an agent, and choosing the wrong category leads to over-engineering or under-delivering.
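To make the category-matching habit concrete, here is a minimal sketch. The trait names (`needs_to_act`, `interprets_data`, `runs_workflow`) are my own shorthand, not exam or product terminology; the function is illustrative only.

```python
# Illustrative sketch: map scenario traits to one of the three agent
# categories described above. Trait names are hypothetical shorthand.

def match_category(needs_to_act: bool, interprets_data: bool,
                   runs_workflow: bool) -> str:
    """Pick the most demanding category the scenario requires."""
    if needs_to_act:          # must weigh options and act on them
        return "decision-making"
    if interprets_data:       # must query and explain data
        return "data analytics"
    if runs_workflow:         # must execute a multi-step process
        return "task automation"
    return "no agent needed"

# Auto-routing cases is a workflow task; recommending reorder quantities
# and triggering POs requires acting on a decision:
print(match_category(False, False, True))  # task automation
print(match_category(True, True, False))   # decision-making
```

The ordering encodes the escalation in risk: a scenario that must act is decision-making even if it also queries data and runs workflows.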

Agent categories compared

Three categories of agent use
| Feature | Task Automation | Data Analytics | Decision-Making |
|---|---|---|---|
| Primary function | Execute multi-step processes | Query and interpret data | Evaluate options and act |
| Data needs | Structured inputs, APIs, queues | Broad access across data sources | Real-time signals + historical patterns |
| Human involvement | Exception handling only | Ask questions, review insights | Approval gates on high-impact actions |
| D365 example | Auto-route cases by sentiment + priority | Forecast demand from sales + supply chain data | Recommend reorder quantities and trigger POs |
| Risk if wrong | Process delay, rework | Misleading insights, poor strategy | Financial loss, compliance violation |
| Autonomy level | High for routine, gated for exceptions | Advisory — presents findings | Semi-autonomous with escalation rules |
💡 Should this be an agent? A pre-flight checklist

Not every process benefits from an agent. Before designing one, run through these five signals:

  1. Volume — Is this task performed hundreds or thousands of times per month? Low-volume tasks rarely justify agent investment.
  2. Data availability — Can the agent access the data it needs? If critical data lives in someone’s head or a spreadsheet on their desktop, the agent cannot function.
  3. Rules with exceptions — Does the process follow general rules but have edge cases that need reasoning? Pure rule-based processes fit Power Automate flows. Agents shine when they need to interpret context.
  4. Time sensitivity — Does a delay in execution cost money, customers, or compliance? Agents add value when speed of response matters.
  5. Structured outcome — Can you define what “done” looks like? Agents need measurable success criteria.

If fewer than three of these are true, a standard automation or dashboard may be the better fit.
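The five-signal check above can be sketched as a simple scoring function. The signal names and the three-of-five threshold come from the checklist; the function itself is hypothetical, not part of any Microsoft SDK.

```python
# Minimal sketch of the "should this be an agent?" pre-flight check.
# Signals and threshold mirror the checklist above; names are illustrative.

SIGNALS = ("volume", "data_availability", "rules_with_exceptions",
           "time_sensitivity", "structured_outcome")

def preflight(answers: dict) -> str:
    """Count true signals; three or more suggests an agent is worth building."""
    score = sum(bool(answers.get(s)) for s in SIGNALS)
    if score >= 3:
        return f"agent candidate ({score}/5 signals)"
    return f"standard automation or dashboard ({score}/5 signals)"

print(preflight({"volume": True, "data_availability": True,
                 "rules_with_exceptions": True}))
# agent candidate (3/5 signals)
```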

The five pillars of data readiness

Agents are only as good as the data that grounds them. Before building any agent, assess your data against these five pillars:

| Pillar | Question to Ask | Red Flag | Green Flag |
|---|---|---|---|
| Accuracy | Is the data factually correct? | Duplicate customer records, stale pricing | Master data management in place, validation rules enforced |
| Relevance | Does the data relate to the agent’s task? | Feeding an invoice agent with marketing data | Scoped data sources aligned to agent purpose |
| Timeliness | Is the data fresh enough for the use case? | Inventory counts updated weekly for a real-time reorder agent | Near-real-time sync from ERP to Dataverse |
| Cleanliness | Is the data free of errors, gaps, and inconsistencies? | 40% of address fields blank, mixed date formats | Data quality rules, automated cleansing pipelines |
| Availability | Can the agent actually access the data at runtime? | Data locked in on-prem SQL with no API exposure | Dataverse, Azure SQL, or Fabric lakehouse with proper auth |
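A verdict rule follows naturally from the pillars: any failed pillar blocks the agent, warnings allow a conditional go. This hypothetical helper encodes that rule; the pillar names match the table, while the pass/warn/fail coding is my own convention.

```python
# Hypothetical five-pillar verdict helper. Pillar names come from the
# table above; the pass/warn/fail convention is illustrative only.

PILLARS = ("accuracy", "relevance", "timeliness", "cleanliness", "availability")

def assess(results: dict) -> str:
    """results maps each pillar to 'pass', 'warn', or 'fail'."""
    fails = [p for p in PILLARS if results.get(p) == "fail"]
    warns = [p for p in PILLARS if results.get(p) == "warn"]
    if fails:
        return "not ready: fix " + ", ".join(fails)
    if warns:
        return "go, with remediation for " + ", ".join(warns)
    return "go"

# One warning pillar yields a conditional go:
print(assess({"accuracy": "pass", "relevance": "pass", "timeliness": "pass",
              "cleanliness": "warn", "availability": "pass"}))
# go, with remediation for cleanliness
```

Jordan’s three assessments below follow exactly this shape: all-pass is a go, a single warning is a go with a remediation sprint, and multiple failures mean the agent is not viable yet.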

🤖 Jordan’s data readiness assessment at CareFirst

Jordan Reeves is evaluating three agent candidates at CareFirst Health. She walks each through the five pillars:

Agent 1: Patient appointment scheduling

  • Accuracy: ✅ Patient records synced from Epic via FHIR API
  • Relevance: ✅ Scheduling data is directly applicable
  • Timeliness: ✅ Real-time availability from the booking system
  • Cleanliness: ⚠️ 15% of patient phone numbers are outdated
  • Availability: ✅ Data exposed via Dataverse connector

Jordan’s verdict: Go, with a data cleansing sprint for contact info. The agent can launch while the team cleans phone numbers in parallel.

Agent 2: Clinical supply demand forecasting

  • Accuracy: ❌ Supply counts rely on manual entry with known discrepancies
  • Relevance: ✅ Consumption data from 8 hospitals is the right input
  • Timeliness: ❌ Inventory updated every 48 hours — too slow for forecasting
  • Cleanliness: ❌ Three hospitals use different unit-of-measure conventions
  • Availability: ⚠️ Two hospital systems lack API access

Jordan’s verdict: Not ready. Three pillars fail. The team needs to standardise data entry, increase sync frequency, and expose APIs before this agent is viable.

Agent 3: Patient feedback sentiment analysis

  • Accuracy: ✅ Feedback collected via validated survey platform
  • Relevance: ✅ Direct patient voice data
  • Timeliness: ✅ Surveys processed within 4 hours of submission
  • Cleanliness: ✅ Structured fields plus free-text — both usable
  • Availability: ✅ Survey data flows to Azure SQL with API access

Jordan’s verdict: Go. All five pillars pass. This is the strongest agent candidate.

Organising data for AI consumption

Once data passes readiness assessment, it needs to be structured so agents — and other AI systems — can consume it. Three architecture patterns dominate:

| Pattern | How It Works | Best For | Watch Out For |
|---|---|---|---|
| Centralised | All data lands in a single store (Fabric lakehouse, Azure SQL) | Small to mid orgs with unified data teams | Single point of failure, bottleneck for updates |
| Federated | Data stays where it lives; agents query across sources at runtime | Large orgs with autonomous business units | Latency, inconsistent schemas, auth complexity |
| Hybrid | Core reference data centralised; domain data federated with virtual views | Most enterprise D365 deployments | Governance overhead, requires clear ownership boundaries |
💡 Exam tip: Agents vs Power Automate flows

The exam tests whether you can distinguish agent-appropriate tasks from flow-appropriate tasks:

  • Power Automate flow: Predictable sequence, structured inputs, no reasoning required. Example: “When a new order arrives, create an invoice and email the customer.”
  • Agent: Requires reasoning over unstructured or ambiguous inputs, handling exceptions, or making judgement calls. Example: “Read the customer complaint email, determine severity, check order history, and draft a personalised response.”

The dividing line is reasoning. If the task needs the system to interpret, evaluate, or adapt — it is an agent. If it follows a deterministic path every time — it is a flow.
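That dividing line can be sketched as a tiny decision helper. The parameter names are my own; the rule is the one stated above: any need to interpret, evaluate, or adapt points to an agent, while a fully deterministic path stays a flow.

```python
# Illustrative flow-vs-agent helper. Parameter names are hypothetical;
# the rule mirrors the "dividing line is reasoning" test above.

def flow_or_agent(unstructured_inputs: bool, needs_judgement: bool,
                  exception_heavy: bool) -> str:
    if unstructured_inputs or needs_judgement or exception_heavy:
        return "agent"
    return "Power Automate flow"

# "When a new order arrives, create an invoice and email the customer":
print(flow_or_agent(False, False, False))  # Power Automate flow
# "Read the complaint email, determine severity, draft a response":
print(flow_or_agent(True, True, False))    # agent
```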

Key terms

Question

What are the five pillars of data readiness for AI agents?


Answer

Accuracy (data is factually correct), Relevance (data relates to the agent's task), Timeliness (data is fresh enough), Cleanliness (data is free of errors and gaps), and Availability (the agent can access the data at runtime). All five must pass before an agent is viable.


Question

What distinguishes a decision-making agent from a task automation agent?


Answer

A task automation agent executes multi-step processes following rules and handling exceptions. A decision-making agent evaluates options against business rules, historical patterns, and real-time signals to recommend or take action. Decision-making agents carry higher risk and typically require semiautonomous operation with approval gates.


Question

When should you use a hybrid data architecture for AI agents?


Answer

When you need core reference data (customers, products, pricing) centralised for consistency, but domain-specific data (clinical records, manufacturing telemetry) stays federated near its source. This is the most common pattern for enterprise D365 deployments because it balances governance with business unit autonomy.


Knowledge check

Jordan is assessing a supply chain agent at CareFirst. Inventory data is updated every 48 hours, three hospitals use different units of measure, and two systems lack API access. What should she recommend?


A D365 Customer Service team receives emails with complaints that vary widely in tone, urgency, and topic. They want to auto-classify and route these complaints. What is the best approach?



Next up: AI Strategy and the Cloud Adoption Framework — map your AI ambitions to a structured adoption roadmap using Microsoft’s CAF.


© 2026 Sutheesh. All rights reserved.

Guided is an independent study resource and is not affiliated with, endorsed by, or officially connected to Microsoft. Microsoft, Azure, and related trademarks are property of Microsoft Corporation. Always verify information against Microsoft Learn.