
AB-620 Study Guide

Domain 1: Plan and Configure Agent Solutions

  • Getting Started: Copilot Studio for Developers Free
  • Planning Enterprise Integration and Reusable Components Free
  • Identity Strategy for Agents Free
  • Channels, Deployment and Audience Design Free
  • Responsible AI and Security Governance Free
  • Agent Flows: Build, Monitor and Handle Errors Free
  • Human-in-the-Loop Agent Flows Free
  • Topics, Tools and Variables Free
  • Advanced Responses: Custom Prompts and Generative Answers Free
  • API Calls, HTTP Requests and Adaptive Cards Free

Domain 2: Integrate and Extend Agents in Copilot Studio

  • Enterprise Knowledge Sources: The Big Picture
  • Copilot Connectors and Power Platform Connectors
  • Azure AI Search as a Knowledge Source
  • Adding Tools: Custom Connectors and REST APIs
  • MCP Tools: Model Context Protocol in Action
  • Computer Use: Agent-Driven UI Automation
  • Multi-Agent Solutions: Design and Agent Reuse
  • Integrating Foundry Agents
  • Fabric Data Agents: Analytics Meets AI
  • A2A Protocol: Cross-Platform Agent Collaboration
  • Grounded Answers: Azure AI Search with Foundry
  • Foundry Model Catalog and Application Insights

Domain 3: Test and Manage Agents

  • Test Sets & Evaluation Methods
  • Reviewing Results & Tuning Performance
  • Solutions & Environment Variables
  • Power Platform Pipelines for Agent ALM
  • Agent Lifecycle: From Dev to Production
  • Exam Prep: Diagnostic Review

Domain 1: Plan and Configure Agent Solutions (Free, ⏱ ~14 min read)

Responsible AI and Security Governance

Plan a responsible AI strategy for Copilot Studio agents — Microsoft's six RAI principles, DLP connector classification, content moderation, and environment-level security governance.

Why responsible AI and governance are planning decisions

☕ Simple explanation

Think of your agent as a new hire who represents your company in every customer conversation.

You would not let a new employee talk to customers without training, guidelines, and supervision. Responsible AI is the training manual — it tells the agent what is appropriate to say and what is off-limits. Security governance is the building access policy — it controls which systems the agent can touch, which data it can move, and who can build agents in the first place.

Get this wrong and you end up in the news for the wrong reasons: an agent that gives medical advice, leaks confidential data, or generates biased responses. These are planning-phase decisions because retrofitting governance after deployment is painful and expensive.

Responsible AI (RAI) in Copilot Studio maps to Microsoft’s six AI principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. For developers, this translates into concrete controls: content moderation settings, generative AI safety configurations, topic-level guardrails, and monitoring for harmful outputs.

Security governance covers the Power Platform admin controls that determine who can build agents, which connectors they can use (DLP policies), how environments are segmented, and what audit trails exist. DLP policies classify connectors into Business, Non-Business, and Blocked groups — and they are enforced at the environment level. The exam tests both your understanding of RAI principles and your ability to apply governance controls in enterprise scenarios.

Microsoft’s six Responsible AI principles

These six principles form the foundation of every RAI question on the exam.

| Principle | What it means | Copilot Studio feature |
| --- | --- | --- |
| Fairness | Treats all users equitably | Test diverse inputs, monitor for bias |
| Reliability and safety | Behaves predictably, no harm | Content moderation, guardrails, fallback topics |
| Privacy and security | Data protected, access scoped | Auth, DLP, environment segmentation, audit |
| Inclusiveness | Accessible to all users | Multi-language, accessible cards, plain language |
| Transparency | Users know it is AI | Disclosure messages, citations, confidence indicators |
| Accountability | Clear ownership and oversight | Admin roles, audit trails, human escalation |
💡 Exam tip: transparency is not optional

Microsoft requires that agents identify themselves as AI. Copilot Studio includes a default system message at the start of conversations. Removing or hiding this disclosure violates Microsoft’s RAI guidelines. If the exam asks about transparency, the answer always involves making the AI nature clear to users.

Security controls in Copilot Studio

Security governance in Copilot Studio operates at multiple levels. The exam tests your understanding of each layer.

Security controls operate at different scopes — from tenant-wide DLP to per-agent moderation
| Feature | What it controls | Configured by | Scope |
| --- | --- | --- | --- |
| DLP policies | Which connectors agents and flows can use — classified as Business, Non-Business, or Blocked | Power Platform admin (or tenant admin) | Environment or tenant level |
| Environment security roles | Who can create, edit, share, and delete agents within an environment | Environment admin | Per environment |
| Connector classification | Groups connectors into categories that cannot be mixed in the same flow/agent | DLP policy definition | Per DLP policy |
| Authentication settings | How users authenticate and what identity the agent uses for backend calls | Agent developer + admin approval | Per agent |
| Generative AI moderation | Content safety filters for generative answers — blocks harmful, violent, or inappropriate content | Agent developer (toggle in Copilot Studio) | Per agent |
| Audit logs | Track who created, modified, published, and deleted agents | Microsoft 365 compliance center | Tenant level |

DLP connector classification

DLP (Data Loss Prevention) policies are the primary governance mechanism for controlling what agents and flows can connect to. This is heavily tested on the exam.

How DLP works:

  • Connectors classified into three groups: Business, Non-Business, and Blocked
  • A flow or agent cannot mix Business and Non-Business connectors — prevents data flowing between trusted and untrusted systems
  • Blocked connectors cannot be used at all
  • DLP policies are environment-scoped — production policy does not affect dev
DLP Policy: "Production - Insurance"
├── Business: SharePoint, Dataverse, ServiceNow, Azure AI Search
├── Non-Business: Twitter, Gmail, personal OneDrive
└── Blocked: Anonymous HTTP webhook, custom SMTP
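The mixing rule can be modeled as a small check. This is an illustrative Python sketch of the logic, not Power Platform code — real DLP policies are defined and enforced in the Power Platform admin center, and the policy mapping below is hypothetical:

```python
# Hypothetical connector-to-group mapping mirroring the example policy above.
POLICY = {
    "SharePoint": "Business",
    "Dataverse": "Business",
    "ServiceNow": "Business",
    "Twitter": "Non-Business",
    "Gmail": "Non-Business",
    "Anonymous HTTP webhook": "Blocked",
}

def check_dlp(connectors, policy=POLICY):
    """Return 'ok', or the reason a flow/agent would be suspended.

    Assumption for this sketch: unclassified connectors fall into
    Non-Business (in a real tenant, the admin picks the default group).
    """
    groups = {policy.get(c, "Non-Business") for c in connectors}
    if "Blocked" in groups:
        return "violation: blocked connector in use"
    if {"Business", "Non-Business"} <= groups:
        return "violation: Business and Non-Business connectors mixed"
    return "ok"
```

For example, `check_dlp(["SharePoint", "Dataverse"])` passes, while `check_dlp(["SharePoint", "Twitter"])` reports the mixing violation — the same outcome the exam scenarios test.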
💡 Exam tip: DLP is environment-scoped

A common exam trap: DLP policies apply to environments, not to individual agents. If you block a connector in the production environment’s DLP policy, ALL agents in that environment lose access — not just the one that was misbehaving. To give one agent an exception, you would need to move it to a different environment with a different DLP policy. Remember: environment-scoped, not agent-scoped.

ℹ️ What happens when DLP is violated?

The agent or flow is suspended — not deleted, but disabled. The maker is notified and the admin sees the violation in the Power Platform admin center. The agent cannot run until the violation is resolved (remove the offending connector or update the DLP policy).

Content moderation and generative AI safety

Copilot Studio provides built-in controls for generative answers:

  • Content moderation toggle: High/medium/low filtering aggressiveness. High blocks more but may over-filter legitimate responses.
  • Topic-level instructions: System prompts on generative nodes — e.g., “Never provide medical advice.”
  • Blocked phrases: Words or phrases the agent must never output.
  • Citation requirements: Force the agent to cite source documents (supports transparency).
  • Human escalation triggers: Hand off when the user expresses frustration, asks legal questions, or AI confidence is low.
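The escalation triggers in the last bullet amount to a simple "any trigger fires" decision. The sketch below is a hypothetical illustration of that logic — the trigger words, topic label, and confidence threshold are invented for the example and are not Copilot Studio settings:

```python
# Illustrative escalation logic: hand off to a human if ANY trigger fires.
FRUSTRATION_WORDS = {"angry", "ridiculous", "useless"}  # hypothetical list

def should_escalate(message: str, topic: str, confidence: float) -> bool:
    """True when frustration, a legal question, or low AI confidence is detected."""
    frustrated = any(w in message.lower() for w in FRUSTRATION_WORDS)
    legal_question = topic == "legal"          # hypothetical topic label
    low_confidence = confidence < 0.5          # hypothetical threshold
    return frustrated or legal_question or low_confidence
```

So `should_escalate("This is useless!", "claims", 0.9)` hands off on frustration, while a routine, high-confidence claims question does not.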
Scenario: Kai builds governance for Pacific Mutual

Kai is setting up governance for Pacific Mutual’s Copilot Studio deployment:

DLP Policy (Production): Business: SharePoint, Dataverse, ServiceNow, Claims API, Azure AI Search. Non-Business: social media, personal email. Blocked: anonymous HTTP webhooks, custom SMTP.

Environment Security: Only 12 IT staff can create production agents. Security review required before solution promotion. Human escalation mandatory for claims above $50,000.

Content Moderation: High moderation on all generative answers. System instruction: “Never provide legal advice. Never guarantee claim outcomes.” Blocked phrases: competitor brand names.

Audit: Conversations logged to Application Insights. Monthly generative answer review. Quarterly RAI assessment.
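Kai's rules can be written down as plain data, which makes the $50,000 escalation threshold easy to reason about. This encoding is purely illustrative (the specific Non-Business connector names are assumed, and this is not a Power Platform artifact):

```python
# Kai's governance decisions expressed as data -- an illustrative encoding only.
GOVERNANCE = {
    "dlp": {
        "Business": {"SharePoint", "Dataverse", "ServiceNow", "Claims API", "Azure AI Search"},
        "Non-Business": {"Twitter", "Gmail"},          # assumed examples of social/personal
        "Blocked": {"Anonymous HTTP webhook", "Custom SMTP"},
    },
    "escalation_threshold_usd": 50_000,
    "moderation": "high",
}

def requires_human_review(claim_amount_usd: float) -> bool:
    """Human escalation is mandatory for claims above the configured threshold."""
    return claim_amount_usd > GOVERNANCE["escalation_threshold_usd"]
```

A $75,000 claim routes to a human; a $10,000 claim stays with the agent.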

Question

Name Microsoft's six Responsible AI principles.


Answer

(1) Fairness, (2) Reliability and Safety, (3) Privacy and Security, (4) Inclusiveness, (5) Transparency, (6) Accountability. Every Copilot Studio RAI decision maps back to one or more of these.


Question

What are the three DLP connector classification groups?


Answer

Business, Non-Business, and Blocked. Connectors from Business and Non-Business groups cannot be used together in the same flow or agent. Blocked connectors cannot be used at all.


Question

At what scope are DLP policies enforced in Power Platform?


Answer

Environment level (or tenant level). DLP policies are NOT agent-scoped — they apply to ALL agents and flows within the targeted environment. To give an agent different DLP rules, move it to a different environment.


Question

What happens when an agent violates a DLP policy?


Answer

The agent is suspended (disabled, not deleted). The maker is notified, and the admin can see the violation in the Power Platform admin center. The agent cannot run until the violation is resolved.


Question

Which RAI principle requires agents to identify themselves as AI?


Answer

Transparency. Copilot Studio includes a default system message disclosing the AI nature. Microsoft's guidelines require this disclosure — removing it violates RAI policy.


Knowledge Check

Kai's production DLP policy classifies SharePoint as Business and Twitter as Non-Business. A developer builds an agent that reads SharePoint documents and posts summaries to Twitter. What happens?

Knowledge Check

Lena's healthcare agent uses generative answers grounded in medical literature. Which combination of controls best supports responsible AI?

Knowledge Check

A Power Platform admin wants to prevent a specific agent from using the Twitter connector, but allow other agents in the same environment to use it. What should they do?




© 2026 Sutheesh. All rights reserved.

Guided is an independent study resource and is not affiliated with, endorsed by, or officially connected to Microsoft. Microsoft, Azure, and related trademarks are property of Microsoft Corporation. Always verify information against Microsoft Learn.