
AB-100 Study Guide

Domain 1: Plan AI-Powered Business Solutions

  • Agent Requirements & Data Readiness
  • AI Strategy & the Cloud Adoption Framework
  • Multi-Agent Solution Design
  • Build, Buy, or Extend
  • Generative AI, Knowledge Sources & Prompt Engineering
  • Small Language Models & Model Selection
  • ROI, TCO & Business Case Analysis

Domain 2: Design AI-Powered Business Solutions

  • Copilot in D365 Customer Experience & Service
  • Agent Types: Task, Autonomous & Prompt/Response
  • Foundry Tools & Code-First Solutions
  • Copilot Studio: Topics, Flows & Prompt Actions
  • Power Apps, WAF & Data Processing
  • Extensibility: Custom Models, M365 Agents & Copilot Studio
  • MCP, Computer Use & Agent Behaviours
  • M365 Agents: Teams, SharePoint & Sales/Service in M365 Copilot
  • D365 AI Orchestration: Finance, SCM & Customer Experience

Domain 3: Deploy AI-Powered Business Solutions

  • Agent Monitoring: Tools, Metrics, and Processes
  • Telemetry Interpretation and Agent Tuning
  • Testing Strategy for AI Agents
  • Custom Model Validation and Prompt Best Practices
  • End-to-End Testing for Multi-App AI Solutions
  • ALM Foundations & Data Lifecycle for AI
  • ALM for Copilot Studio Agents
  • ALM for Microsoft Foundry Agents
  • ALM for D365 AI Features
  • Agent Security Free
  • Governance for AI Agents Free
  • Prompt Security & AI Vulnerabilities Free
  • Responsible AI & Audit Trails Free

Domain 3: Deploy AI-Powered Business Solutions Β· Free Β· ⏱ ~15 min read

Agent Security

Design multi-layered security for AI agents β€” covering identity, data access, network isolation, model protection, runtime hardening, and content safety.

Security is not a single layer

β˜• Simple explanation

Securing an agent is like securing a building. You need locks on the front door (authentication), security badges for each floor (data access controls), guards monitoring behaviour (runtime security), CCTV for the vault (model protection), and a screening process for everything entering or leaving (content safety).

No single control is enough. If someone gets past the front door, the floor badges stop them. If they forge a badge, the guards catch them. Defence in depth.

Agent security requires a defence-in-depth approach spanning five layers: identity and authentication (who can use the agent and what can the agent access), data access security (least-privilege access to grounding data), network security (isolation and traffic control), runtime security (sandboxing, rate limiting, monitoring), and content safety (input/output filtering, prompt shields). Each layer operates independently β€” a failure in one should be contained by the others.

The AB-100 exam tests architects on designing security architectures that cover all five layers, with special emphasis on the principle of least privilege for agent data access and content safety controls.

The five security layers

Defence in depth β€” every layer operates independently
Security Layer | What It Protects | Key Controls
Identity and authentication | Who can use the agent; what the agent can access | Entra ID authentication, OAuth for API access, managed identities for service-to-service, conditional access policies
Data access | The data sources the agent reads from and writes to | Least-privilege permissions, scoped API access, sensitivity labels, row-level security, data loss prevention policies
Network | Traffic between the agent, users, data sources, and model endpoints | Private endpoints, virtual network integration, network security groups, firewall rules, API Management gateway
Runtime | The agent execution environment itself | Sandboxed execution, rate limiting, DDoS protection, request throttling, timeout enforcement, anomaly detection
Content safety | What goes into and comes out of the agent | Input validation, prompt shields, jailbreak detection, output filtering, PII redaction, content moderation

Identity and authentication

Agents interact with users AND with backend services. Both directions need authentication:

Direction | Authentication Method | Design Consideration
User to agent | Entra ID SSO, multi-factor authentication | Users authenticate through existing identity. Conditional access can restrict agent access by location, device, or risk level.
Agent to data sources | Managed identity, OAuth client credentials | Use managed identities β€” no stored credentials. The agent authenticates as itself with scoped permissions.
Agent to model endpoints | API key rotation, managed identity, network restrictions | Rotate API keys automatically. Prefer managed identity where supported. Restrict endpoint access to specific virtual networks.
Agent to external APIs | OAuth with connection references, API key via Key Vault | Store secrets in Key Vault. Use connection references for per-environment credential management.
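
As an illustration of the conditional access row above, the decision logic can be sketched as a simple policy check. This is a hypothetical sketch in Python: the context fields and the "low risk only" rule are illustrative assumptions, not a real Entra ID API.

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    """Signals a conditional access policy might evaluate (illustrative)."""
    mfa_completed: bool
    device_compliant: bool
    location_approved: bool
    risk_level: str  # "low", "medium", "high"

def evaluate_conditional_access(ctx: AccessContext) -> bool:
    """Allow agent access only when every required signal passes.

    Mirrors the 'approved devices and locations' style of policy described
    in the table: MFA, device compliance, location, and sign-in risk.
    """
    if not ctx.mfa_completed:
        return False
    if not ctx.device_compliant or not ctx.location_approved:
        return False
    return ctx.risk_level == "low"

# A compliant, low-risk sign-in from an approved location is allowed:
allowed = evaluate_conditional_access(
    AccessContext(mfa_completed=True, device_compliant=True,
                  location_approved=True, risk_level="low"))
# A high-risk sign-in is blocked even with MFA and a compliant device:
blocked = evaluate_conditional_access(
    AccessContext(mfa_completed=True, device_compliant=True,
                  location_approved=True, risk_level="high"))
```

The point of the sketch is that each signal is evaluated independently: failing any one of them denies access, which is the same "all checks must pass" shape a real conditional access policy enforces.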

Data access security

The principle of least privilege is critical for agents. An agent should access only the data it needs β€” nothing more.

  • Scoped permissions β€” if an agent needs to read customer order history, it should not have access to all customers. Scope to the authenticated user’s data.
  • Sensitivity labels β€” Microsoft Purview sensitivity labels inform agent data access policies. They help classify content, but labels alone don’t automatically suppress content from agent responses. Enforce with permissions, DLP policies, grounding scope controls, and audit logging.
  • Row-level security β€” for Dataverse-backed agents, security roles control which records the agent can access on behalf of the user.
  • Data loss prevention β€” DLP policies can block agents from accessing or transmitting sensitive data types (credit card numbers, national IDs).

Model security

Protecting the model itself β€” not just the data it accesses:

  • Endpoint protection β€” model endpoints should not be publicly accessible. Use private endpoints within a virtual network.
  • Model artefact security β€” model files stored in the registry should have access controls. Not everyone should be able to download or copy production models.
  • Inference logging β€” log all requests to model endpoints for audit purposes. Monitor for unusual patterns (bulk extraction attempts).
  • Model theft prevention β€” rate limiting and output perturbation can mitigate model extraction attacks (where an attacker queries the model systematically to reconstruct it).

Runtime security

The execution environment needs its own protections:

  • Sandboxing β€” agent code runs in isolated environments. A compromised agent cannot access other agents or system resources.
  • Rate limiting β€” cap the number of requests per user, per session, and per time window. Prevents abuse and contains blast radius.
  • Timeout enforcement β€” set maximum execution time for agent responses. Prevents runaway processes from consuming resources.
  • Anomaly detection β€” monitor for unusual patterns: sudden spikes in usage, unusual query patterns, attempts to access out-of-scope data.

πŸ’‘ Scenario: Marcus designs security for Vanguard's financial advisory agent

Marcus Webb (CISO at Vanguard Financial Group) designs the security architecture for a Copilot Studio agent that provides financial advisory information to wealth management clients.

Identity and authentication:

  • Clients authenticate via Entra ID with mandatory MFA
  • Conditional access: agent accessible only from approved devices and locations
  • Agent authenticates to D365 Finance using a managed identity with read-only access to the client’s own portfolio data

Data access:

  • Row-level security ensures the agent can only access the authenticated client’s records
  • Sensitivity labels: β€œHighly Confidential” labels on portfolio valuations prevent the agent from including exact figures in unencrypted channels
  • DLP policy blocks the agent from transmitting account numbers or tax IDs in responses

Network:

  • Agent backend runs in a virtual network with private endpoints to D365 and the Foundry model endpoint
  • API Management gateway handles external-facing traffic with WAF protection
  • No direct internet access from the agent’s compute environment

Runtime:

  • Rate limited to 20 requests per user per minute
  • Session timeout after 15 minutes of inactivity
  • All interactions logged to immutable audit storage for regulatory compliance

Content safety:

  • Prompt shields enabled to detect manipulation attempts
  • Output filter prevents the agent from providing specific investment recommendations (regulatory requirement)
  • PII redaction on any logged conversation data

πŸ’‘ Exam tip: security covers more than just the agent

The exam asks about end-to-end security, not just agent-level controls:

  • The data the agent accesses β€” who has permission? What sensitivity labels apply? Is row-level security enforced?
  • The models the agent calls β€” are endpoints protected? Are model artefacts secured? Is inference logged?
  • The channels the agent communicates through β€” Teams, web chat, email? Each channel has its own security considerations.
  • The people who manage the agent β€” who can modify topics, update knowledge sources, change configuration? Admin access needs the same rigour as user access.

If the exam presents a security scenario, look for the answer that addresses the MOST layers β€” not just one.

Flashcards

Question

What are the five security layers for AI agents?

Answer

1) Identity and authentication β€” who can use the agent and what the agent can access. 2) Data access β€” least-privilege access to grounding data. 3) Network β€” isolation and traffic control. 4) Runtime β€” sandboxing, rate limiting, monitoring. 5) Content safety β€” input/output filtering and prompt shields.

Question

Why should agents use managed identities instead of stored credentials?

Answer

Managed identities eliminate the need to store and rotate secrets. The identity is managed by Entra ID, automatically rotated, and scoped to specific permissions. Stored credentials risk exposure, require manual rotation, and are a common attack vector.

Question

What is model theft and how do you mitigate it?

Answer

Model theft (or model extraction) occurs when an attacker queries a model systematically to reconstruct it. Mitigations include: rate limiting on model endpoints, output perturbation (adding slight randomness), inference logging with anomaly detection, and restricting endpoint access to authorised virtual networks.

Question

How do sensitivity labels protect agent data access?

Answer

Microsoft Purview sensitivity labels classify grounding content and inform agent data access policies. Labels alone do not automatically suppress labelled content from responses β€” they must be enforced through permissions, DLP policies, and grounding scope controls, so that content from a Confidential document is excluded from responses or restricted to users authorised to receive it.

Knowledge check

1. Marcus discovers that Vanguard's financial advisory agent can return portfolio valuations for ANY client, not just the authenticated user's portfolio. Which security control is missing?

2. An architect proposes storing the model API key in the agent's configuration file for simplicity. What is the correct security approach?

3. Which combination of controls provides the strongest defence-in-depth for an agent that accesses sensitive financial data?

🎬 Video coming soon

Next up: Governance β€” designing governance frameworks for agent registration, approval workflows, data residency, and access controls on grounding data.


© 2026 Sutheesh. All rights reserved.

Guided is an independent study resource and is not affiliated with, endorsed by, or officially connected to Microsoft. Microsoft, Azure, and related trademarks are property of Microsoft Corporation. Always verify information against Microsoft Learn.