
AB-100 Study Guide

Domain 1: Plan AI-Powered Business Solutions

  • Agent Requirements & Data Readiness
  • AI Strategy & the Cloud Adoption Framework
  • Multi-Agent Solution Design
  • Build, Buy, or Extend
  • Generative AI, Knowledge Sources & Prompt Engineering
  • Small Language Models & Model Selection
  • ROI, TCO & Business Case Analysis

Domain 2: Design AI-Powered Business Solutions

  • Copilot in D365 Customer Experience & Service
  • Agent Types: Task, Autonomous & Prompt/Response
  • Foundry Tools & Code-First Solutions
  • Copilot Studio: Topics, Flows & Prompt Actions
  • Power Apps, WAF & Data Processing
  • Extensibility: Custom Models, M365 Agents & Copilot Studio
  • MCP, Computer Use & Agent Behaviours
  • M365 Agents: Teams, SharePoint & Sales/Service in M365 Copilot
  • D365 AI Orchestration: Finance, SCM & Customer Experience

Domain 3: Deploy AI-Powered Business Solutions

  • Agent Monitoring: Tools, Metrics, and Processes
  • Telemetry Interpretation and Agent Tuning
  • Testing Strategy for AI Agents
  • Custom Model Validation and Prompt Best Practices
  • End-to-End Testing for Multi-App AI Solutions
  • ALM Foundations & Data Lifecycle for AI
  • ALM for Copilot Studio Agents
  • ALM for Microsoft Foundry Agents
  • ALM for D365 AI Features
  • Agent Security
  • Governance for AI Agents
  • Prompt Security & AI Vulnerabilities
  • Responsible AI & Audit Trails

Domain 2: Design AI-Powered Business Solutions

Agent Types: Task, Autonomous & Prompt/Response

Not all agents are created equal. Learn the three fundamental agent types in the Microsoft ecosystem — task agents, autonomous agents, and prompt/response agents — including when to use each, how they differ in governance, and the architectural patterns behind them.

Three ways agents work

☕ Simple explanation

Think of three types of colleagues:

A task agent is like a meticulous assistant who follows a checklist. “When a new hire starts, send the welcome email, create their accounts, schedule their orientation, and notify their manager.” It does tasks in order, step by step.

An autonomous agent is like a proactive team member who monitors situations and acts without being asked. “I noticed inventory is running low on brake pads — I’ve already placed a reorder with the supplier based on our rules.” It works independently, often in the background.

A prompt/response agent is like a knowledgeable colleague you chat with. “What’s our return policy for electronics?” → “Electronics can be returned within 30 days with receipt…” It answers questions using its knowledge base.

The AB-100 exam identifies three distinct agent architectures, each with different design patterns, governance requirements, and use cases:

Task agents execute predefined workflows with sequential or branching logic. They automate business processes with deterministic steps, escalation points, and completion criteria.

Autonomous agents operate independently — triggered by events, schedules, or conditions rather than user prompts. They monitor, decide, and act without human intervention, making them powerful but requiring careful governance guardrails.

Prompt/response agents operate in a conversational pattern — they receive user questions, reason over their knowledge sources, and generate contextual responses. They’re the most common agent type for customer-facing and employee-facing scenarios.

The three agent types compared

Three agent types in the Microsoft AI ecosystem
| Feature | Trigger | Behaviour | Human Involvement | Governance Level |
| --- | --- | --- | --- | --- |
| Task agent | User action or system event starts the workflow | Follows a defined sequence of steps — collects data, calls APIs, creates records | Human may provide input at steps; reviews output | Medium — predefined flow limits what it can do |
| Autonomous agent | Schedule, event, or condition triggers it automatically | Monitors, reasons, decides, and acts independently — no user prompt needed | Minimal — may notify humans of actions taken | High — can act without oversight, needs strict guardrails |
| Prompt/response agent | User asks a question or makes a request | Searches knowledge, reasons over context, generates a natural language response | User drives the conversation; agent responds | Low to medium — responses are advisory, user decides what to do |

Designing task agents

Task agents are workflow executors. They’re ideal for repeatable business processes with defined steps.

Design pattern:

  1. Trigger — user clicks a button, form is submitted, record is created
  2. Collect — gather required information (from user input or data sources)
  3. Process — execute steps (create records, call APIs, send notifications)
  4. Decide — branch based on conditions (approval needed? escalation required?)
  5. Complete — mark the task as done, notify stakeholders
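The five steps above can be sketched as a minimal pipeline. Everything here is illustrative: the step functions stand in for the connectors, Power Automate actions, and approval rules a real task agent would wire up.

```python
# Minimal sketch of the task-agent pattern: trigger -> collect -> process ->
# decide -> complete. All step functions are hypothetical placeholders.

def collect(trigger_event):
    # Gather the inputs the workflow needs (form fields, record data, ...).
    return {"request_id": trigger_event["id"], "amount": trigger_event["amount"]}

def process(data):
    # Execute the deterministic steps: create records, call APIs, send notifications.
    data["record_created"] = True
    return data

def decide(data, approval_threshold=1000):
    # Branch on a condition; here, whether the amount needs human approval.
    return "needs_approval" if data["amount"] > approval_threshold else "auto_approved"

def run_task_agent(trigger_event):
    data = collect(trigger_event)
    data = process(data)
    outcome = decide(data)
    # Complete: record the outcome and (in a real agent) notify stakeholders.
    return {"request_id": data["request_id"], "outcome": outcome}

print(run_task_agent({"id": "REQ-001", "amount": 250}))
# → {'request_id': 'REQ-001', 'outcome': 'auto_approved'}
```

The key property to notice is determinism: given the same trigger, the agent always walks the same steps in the same order.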

Examples in business solutions:

| Process | Steps | Platform |
| --- | --- | --- |
| Employee onboarding | Create accounts → assign licences → schedule training → notify manager | Copilot Studio with Power Automate actions |
| Purchase order approval | Validate budget → route to approver → update D365 SCM → notify supplier | Copilot Studio with D365 connectors |
| Customer complaint handling | Log case → classify severity → assign to team → track SLA → escalate if overdue | D365 Customer Service + Copilot Studio |
💡 Scenario: Natalie designs a task agent for invoice processing

Natalie’s client (a logistics company) processes 500 supplier invoices per week. Currently, a team of 3 people manually checks each invoice against the purchase order, flags discrepancies, and routes for approval.

Task agent design:

  1. Trigger: New invoice arrives in email inbox
  2. Extract: AI reads the invoice (using document intelligence) and extracts key fields
  3. Match: Agent compares invoice data against the D365 purchase order
  4. Decide: If amounts match within 2% tolerance → auto-approve. If discrepancy > 2% → flag for human review
  5. Process: Create payment record in D365 Finance, update PO status
  6. Notify: Send confirmation to supplier, alert AP team of flagged invoices

Platform choice: Copilot Studio for the conversational interface + Power Automate for the workflow steps + D365 Finance connector for data
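Natalie's decide step (step 4) reduces to a tolerance check. A minimal sketch, using the 2% threshold from the scenario and illustrative field names:

```python
# Sketch of the invoice-matching decision: auto-approve when the invoice and
# purchase order amounts agree within a tolerance, otherwise flag for review.

def match_invoice(invoice_amount: float, po_amount: float, tolerance: float = 0.02) -> str:
    """Return 'auto_approve' if the discrepancy is within tolerance, else 'flag_for_review'."""
    if po_amount == 0:
        return "flag_for_review"  # cannot compute a ratio; always escalate
    discrepancy = abs(invoice_amount - po_amount) / po_amount
    return "auto_approve" if discrepancy <= tolerance else "flag_for_review"

print(match_invoice(1015.0, 1000.0))  # 1.5% off → 'auto_approve'
print(match_invoice(1050.0, 1000.0))  # 5% off → 'flag_for_review'
```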

Designing autonomous agents

Autonomous agents are the most powerful — and the most dangerous. They act without user prompts, making governance critical.

Design pattern:

  1. Monitor — continuously watch for events, conditions, or scheduled triggers
  2. Reason — analyse the situation using AI models and business rules
  3. Decide — determine the appropriate action
  4. Act — execute the action (create records, send communications, update systems)
  5. Report — log what was done and notify relevant stakeholders
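The monitor, reason, decide, act, report loop can be sketched as follows. The sensing and acting functions are hypothetical placeholders; a real agent would query supplier APIs and D365 SCM. Note the spending guardrail built into the decide step:

```python
# Sketch of one autonomous cycle: monitor -> reason -> decide -> act -> report.
# All data sources and limits are illustrative assumptions.

AUTO_APPROVAL_LIMIT = 10_000
audit_log = []

def monitor():
    # Poll data sources on a schedule; here, a hard-coded low-stock reading.
    return {"sku": "BRAKE-PAD-01", "on_hand": 40, "reorder_point": 100, "reorder_cost": 2_500}

def reason(observation):
    # Analyse the situation against business rules.
    return observation["on_hand"] < observation["reorder_point"]

def decide(observation):
    # Guardrail: act autonomously only within the spending limit.
    return "reorder" if observation["reorder_cost"] <= AUTO_APPROVAL_LIMIT else "escalate_to_human"

def act(action, observation):
    # Execute (or escalate), then report by writing an audit entry.
    audit_log.append({
        "action": action,
        "sku": observation["sku"],
        "reason": f"on_hand {observation['on_hand']} < reorder_point {observation['reorder_point']}",
    })
    return action

def run_cycle():
    observation = monitor()
    if reason(observation):
        return act(decide(observation), observation)
    return "no_action"

print(run_cycle())  # → 'reorder'
```

Unlike the task-agent sketch, nothing here waits for a user prompt: the cycle runs on its own trigger and the audit log is the only trace a human sees by default.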

Critical design considerations:

| Consideration | Why It Matters | Design Decision |
| --- | --- | --- |
| Action boundaries | What can the agent do without approval? | Define maximum authority — e.g., “can reorder up to $10,000; above that, needs human approval” |
| Rollback capability | What if the agent makes a wrong decision? | Design undo mechanisms for every automated action |
| Monitoring and alerting | How do you know what the agent is doing? | Real-time dashboard of actions taken, anomaly detection |
| Kill switch | How do you stop the agent immediately? | Admin override that pauses all autonomous actions |
| Audit trail | Who’s accountable for the agent’s actions? | Log every decision with reasoning, input data, and timestamp |
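Three of these controls, the action boundary, the kill switch, and the audit trail, can be sketched together. All class and method names are illustrative:

```python
# Sketch of governance controls wrapped around every autonomous action:
# an action boundary (spend limit), a kill switch, and an audit trail.

from datetime import datetime, timezone

class GovernedAgent:
    def __init__(self, spend_limit: float):
        self.spend_limit = spend_limit   # action boundary
        self.paused = False              # kill-switch state
        self.audit_trail = []            # every decision, with reasoning

    def pause(self):
        """Kill switch: immediately stop all autonomous actions."""
        self.paused = True

    def try_action(self, description: str, cost: float) -> str:
        if self.paused:
            outcome = "blocked_by_kill_switch"
        elif cost > self.spend_limit:
            outcome = "escalated_to_human"  # above the action boundary
        else:
            outcome = "executed"
        # Audit trail: log every decision with its inputs and timestamp.
        self.audit_trail.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "action": description, "cost": cost, "outcome": outcome,
        })
        return outcome

agent = GovernedAgent(spend_limit=10_000)
print(agent.try_action("expedite order PO-123", 2_500))   # → 'executed'
print(agent.try_action("expedite order PO-124", 25_000))  # → 'escalated_to_human'
agent.pause()
print(agent.try_action("expedite order PO-125", 100))     # → 'blocked_by_kill_switch'
```

Note that blocked and escalated actions are logged too: the audit trail records what the agent *wanted* to do, not just what it did.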
💡 Scenario: Kai designs an autonomous supply chain agent

Kai designs an autonomous agent for Apex Industries that monitors supply chain risk:

Trigger: Runs every 4 hours, scanning supplier data, weather APIs, and D365 SCM inventory levels

Behaviour:

  • Detects that a key supplier’s shipping port has been closed due to a storm
  • Identifies 12 purchase orders that will be delayed
  • Automatically contacts alternative suppliers from the approved vendor list
  • Places expedited orders for critical components (within the $10,000 auto-approval limit)
  • Escalates to procurement manager for orders above the limit
  • Sends a summary to Lin (CTO) with the impact assessment

Guardrails:

  • Cannot exceed $10,000 per order without human approval
  • Can only order from pre-approved suppliers
  • Must log every action with reasoning
  • Kill switch available to procurement team in D365 SCM

Designing prompt/response agents

Prompt/response agents are conversational — they answer questions, provide recommendations, and guide users through decisions.

Design pattern:

  1. Receive — user sends a natural language prompt
  2. Understand — agent classifies intent and extracts entities
  3. Retrieve — search knowledge sources for relevant information
  4. Reason — generate a response using the model and retrieved context
  5. Respond — deliver a natural language answer, possibly with follow-up suggestions
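A minimal sketch of the five-step pattern, with keyword matching standing in for real intent classification and retrieval, and a canned template standing in for model-generated text. The knowledge entries are illustrative:

```python
# Sketch of receive -> understand -> retrieve -> reason -> respond.
# Keyword lookup stands in for knowledge search; a template stands in for the model.

KNOWLEDGE = {
    "returns": "Electronics can be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def understand(prompt: str):
    # Intent classification, reduced to keyword matching for the sketch.
    for intent in KNOWLEDGE:
        if intent.rstrip("s") in prompt.lower():
            return intent
    return None

def respond(prompt: str) -> str:
    intent = understand(prompt)                 # understand
    if intent is None:
        # Escalation trigger: hand off when nothing relevant is retrieved.
        return "I'm not sure - let me connect you with a colleague."
    retrieved = KNOWLEDGE[intent]               # retrieve
    return f"{retrieved} Anything else I can help with?"  # reason + respond

print(respond("What's our return policy for electronics?"))
```

Even in this toy form, two of the design decisions above are visible: the knowledge scope is whatever is in `KNOWLEDGE`, and the escalation trigger fires whenever retrieval comes back empty.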

Key design decisions:

  • Knowledge scope — what should the agent know about? (Too broad = hallucination risk; too narrow = “I don’t know” frustration)
  • Tone and persona — professional? casual? clinical? Match the audience
  • Follow-up behaviour — should the agent ask clarifying questions or give its best guess?
  • Escalation trigger — when should it hand off to a human?
💡 Exam tip: choosing the right agent type

The exam frequently presents a scenario and asks which agent type to recommend:

  • “Process runs on a schedule without user interaction” → Autonomous agent
  • “Users need to complete a multi-step workflow” → Task agent
  • “Employees need to ask questions about company policies” → Prompt/response agent
  • “System should automatically reorder when inventory drops” → Autonomous agent
  • “Customer submits a form and the system processes it step by step” → Task agent
  • “Sales reps want AI-generated insights about their deals” → Prompt/response agent

Key differentiator: Task agents follow YOUR defined steps. Autonomous agents make THEIR OWN decisions. Prompt/response agents answer QUESTIONS.
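The heuristics above can be expressed as a tiny rule-based classifier. The keyword rules are illustrative simplifications for study purposes, not actual exam logic:

```python
# Rule-of-thumb classifier for the exam-tip scenarios above.
# Autonomous cues are checked first because they are the strongest signal.

def recommend_agent_type(scenario: str) -> str:
    s = scenario.lower()
    if any(k in s for k in ("schedule", "automatically", "monitors", "without user")):
        return "autonomous"          # acts on its own triggers
    if any(k in s for k in ("workflow", "step by step", "multi-step")):
        return "task"                # follows YOUR defined steps
    return "prompt/response"         # answers QUESTIONS conversationally

print(recommend_agent_type("System should automatically reorder when inventory drops"))
# → 'autonomous'
print(recommend_agent_type("Users need to complete a multi-step workflow"))
# → 'task'
print(recommend_agent_type("Employees need to ask questions about company policies"))
# → 'prompt/response'
```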

Flashcards

Question

What is the key difference between a task agent and an autonomous agent?


Answer

A task agent follows a predefined workflow with defined steps (deterministic). An autonomous agent monitors conditions and makes its own decisions about what actions to take (non-deterministic). Task agents need user or event triggers; autonomous agents can act proactively.


Question

What governance controls are essential for autonomous agents?


Answer

Action boundaries (maximum authority limits), rollback capability, real-time monitoring and alerting, a kill switch for immediate shutdown, and comprehensive audit trails logging every decision with reasoning.


Question

When should you recommend a prompt/response agent over a task agent?


Answer

When users need to ask open-ended questions and receive natural language answers (Q&A scenarios). Task agents are better when users need to complete a structured, multi-step workflow. Prompt/response agents are conversational; task agents are procedural.


Question

What is the most critical design element for autonomous agents that the exam emphasises?


Answer

Guardrails — action boundaries that limit what the agent can do without human approval. Without guardrails, autonomous agents can make costly mistakes at scale with no human oversight.


Knowledge check


CareFirst Health needs an AI solution that monitors patient appointment no-shows. When a patient misses an appointment, the system should automatically reschedule within 48 hours, send a reminder, and flag chronic no-show patients for outreach by the patient experience team. Which agent type should Jordan design?


Natalie's client needs an agent that guides loan officers through a 12-step mortgage application review process, collecting documents, running credit checks, and routing for approval at each stage. Which agent type is most appropriate?


Next up: Foundry Tools & Code-First Solutions — proposing the right Foundry tools for each requirement, designing code-first generative pages, and using agent feeds in apps.



© 2026 Sutheesh. All rights reserved.

Guided is an independent study resource and is not affiliated with, endorsed by, or officially connected to Microsoft. Microsoft, Azure, and related trademarks are property of Microsoft Corporation. Always verify information against Microsoft Learn.