AB-900 Study Guide

Domain 1: M365 Core Features & Objects

  • Welcome to Microsoft 365
  • Exchange Online: Mailboxes & Distribution
  • SharePoint: Sites, Libraries & Permissions
  • Microsoft Teams: Teams, Channels & Policies
  • Users, Groups & Licensing
  • Zero Trust: Never Trust, Always Verify
  • Authentication: Passwords, MFA & Beyond
  • Microsoft Defender XDR
  • Microsoft Entra: Your Identity Hub
  • PIM, Audit Logs & Identity Governance

Domain 2: Data Protection & Governance

  • Microsoft Purview: The Big Picture
  • Sensitivity Labels & Data Classification
  • Data Loss Prevention (DLP)
  • Insider Risk & Communication Compliance
  • DSPM for AI & Data Lifecycle
  • How Copilot Accesses Your Data
  • Responsible AI Principles
  • Compliance Manager & eDiscovery
  • Activity Explorer & Data Monitoring
  • Oversharing in SharePoint

Domain 3: Copilot & Agent Admin

  • What is Microsoft 365 Copilot? (Free)
  • What Are Agents? (Free)
  • Copilot vs Agents: When to Use Which (Free)
  • Copilot Licensing: Monthly vs Pay-as-You-Go (Free)
  • Researcher, Analyst & Real-World Use Cases (Free)
  • Managing Copilot: Billing, Monitoring & Prompts (Free)
  • Building Agents: Create, Test & Publish (Free)
  • Agent Lifecycle: Access, Approval & Monitoring (Free)

Domain 2: Data Protection & Governance (Premium · ~10 min read)

Responsible AI Principles

Microsoft's responsible AI framework isn't just marketing — the exam tests it. Six principles guide how Copilot and agents are designed, deployed, and governed.

What are responsible AI principles?

☕ Simple explanation

Responsible AI = the rules that keep AI helpful, not harmful.

Think of them like traffic laws for AI. Without them, AI might: give biased answers, make decisions nobody can explain, compromise your privacy, or be used for harmful purposes. The principles say: “Build AI that’s fair, transparent, safe, private, inclusive, and accountable.”

Microsoft applies these to everything — Copilot, agents, Azure AI services. The exam tests whether you know what each principle means and how it applies to M365.

Microsoft’s Responsible AI Standard defines six principles that guide AI development, deployment, and governance. These principles are embedded in the design of Microsoft 365 Copilot and agents, and admins are expected to understand how they apply when deploying AI tools.

The six principles

Microsoft's six responsible AI principles
| Principle | What it means | How it applies to Copilot |
| --- | --- | --- |
| 🎯 Fairness | AI should treat all people equitably | Copilot shouldn't produce biased recommendations based on gender, race, or other protected characteristics |
| 🛡️ Reliability & Safety | AI should work correctly and not cause harm | Copilot should produce accurate responses; hallucinations are monitored and mitigated |
| 🔒 Privacy & Security | AI should protect data and respect privacy | Copilot respects M365 permissions; data isn't used to train models; customer data stays in the tenant boundary |
| 🔍 Transparency | AI should be understandable and explainable | Users know when they're interacting with AI; Copilot shows which sources it used |
| ♿ Inclusiveness | AI should be accessible and work for everyone | Copilot supports accessibility features, multiple languages, and diverse user needs |
| 📋 Accountability | People should be answerable for AI systems | Microsoft publishes impact assessments; organisations should have AI governance policies |

Exam tip: The exam usually presents a scenario and asks “which responsible AI principle is being demonstrated?” Focus on the keywords: bias → Fairness, accuracy → Reliability, data handling → Privacy, explainability → Transparency, accessibility → Inclusiveness, oversight → Accountability.
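For self-testing, that keyword mapping can be sketched as a small lookup table. This is purely a study aid, not exam material; the function name and the exact keyword strings are made up for illustration:

```python
# Study aid: map scenario keywords to the responsible AI principle
# they usually signal (per the exam tip above). Naive substring match.
KEYWORD_TO_PRINCIPLE = {
    "bias": "Fairness",
    "accuracy": "Reliability & Safety",
    "data handling": "Privacy & Security",
    "explainability": "Transparency",
    "accessibility": "Inclusiveness",
    "oversight": "Accountability",
}

def likely_principle(scenario):
    """Return the first principle whose keyword appears in the scenario text."""
    text = scenario.lower()
    for keyword, principle in KEYWORD_TO_PRINCIPLE.items():
        if keyword in text:
            return principle
    return None  # no keyword matched — re-read the scenario

print(likely_principle("Users report bias in Copilot's hiring suggestions"))
# → Fairness
```

Real exam questions paraphrase rather than quote these keywords, so treat this as a memory hook, not a classifier.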

Key Copilot design decisions driven by responsible AI

| Decision | Principle |
| --- | --- |
| Copilot doesn't use your data to train AI models | Privacy & Security |
| Copilot shows citations (sources it used) | Transparency |
| Copilot responses can be reviewed in audit logs | Accountability |
| Content filters prevent harmful/toxic outputs | Reliability & Safety |
| Copilot respects existing M365 permissions | Privacy & Security |
| Human review is recommended for important decisions | Accountability |
💡 Scenario: Northwave's AI governance policy

After deploying Copilot, Alex (CEO) asks Maya to create an AI governance policy. Here’s what they develop, mapped to responsible AI:

| Policy rule | Principle |
| --- | --- |
| "Copilot outputs must be reviewed by a human before sending to customers" | Accountability |
| "We will audit Copilot usage quarterly for signs of bias" | Fairness |
| "Copilot web grounding is disabled — only internal data" | Privacy & Security |
| "All teams must be trained on Copilot's limitations" | Transparency |
| "Copilot must be usable by employees with disabilities" | Inclusiveness |
| "We test Copilot responses in critical workflows before relying on them" | Reliability & Safety |
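A governance policy like Northwave's can be treated as data and checked for coverage: every one of the six principles should have at least one rule behind it. A minimal sketch (the variable names and the data-as-code representation are illustrative, not an official tool):

```python
# Hypothetical sketch: Northwave's policy rules, each tagged with the
# responsible AI principle it supports.
POLICY = [
    ("Copilot outputs reviewed by a human before sending to customers", "Accountability"),
    ("Audit Copilot usage quarterly for signs of bias", "Fairness"),
    ("Copilot web grounding disabled, internal data only", "Privacy & Security"),
    ("All teams trained on Copilot's limitations", "Transparency"),
    ("Copilot usable by employees with disabilities", "Inclusiveness"),
    ("Test Copilot responses in critical workflows before relying on them", "Reliability & Safety"),
]

SIX_PRINCIPLES = {
    "Fairness", "Reliability & Safety", "Privacy & Security",
    "Transparency", "Inclusiveness", "Accountability",
}

# Coverage check: which principles have no policy rule backing them?
covered = {principle for _, principle in POLICY}
missing = SIX_PRINCIPLES - covered
print("All six principles covered" if not missing else f"Gaps: {sorted(missing)}")
# → All six principles covered
```

The same gap-check idea scales to real governance reviews: if a principle has no concrete rule mapped to it, that is where the policy needs work.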

What the exam specifically tests

The exam focuses on practical implications, not just definitions:

  1. Copilot doesn’t train on your data — your organisational data stays private and is not used to improve the AI model
  2. Copilot can hallucinate — it may produce inaccurate information, especially when data is incomplete. Human review is essential.
  3. Copilot shows its work — citations link back to source documents so users can verify
  4. Admin oversight exists — audit logs, usage reports, and DLP policies allow governance
  5. Content safety filters — Copilot blocks harmful, toxic, or inappropriate outputs

🎬 Video walkthrough

🎬 Video coming soon

Responsible AI Principles — AB-900 Module 17 (~8 min)
Flashcards

Question

What are Microsoft's six responsible AI principles?


Answer

1) Fairness — equitable treatment. 2) Reliability & Safety — accurate, no harm. 3) Privacy & Security — protect data. 4) Transparency — explainable and understandable. 5) Inclusiveness — accessible to all. 6) Accountability — humans are answerable.


Question

Does Microsoft use your organisational data to train Copilot's AI models?


Answer

No. Your data is NOT used to train, retrain, or improve the large language models. Copilot reads your data at query time through Microsoft Graph but doesn't learn from it. Prompts and responses are subject to your organisation's Microsoft 365 compliance, audit, and eDiscovery controls.


Question

Why does Copilot show citations in its responses?


Answer

Transparency — users can verify where the information came from by clicking the citation links. This allows them to check accuracy and confirm the sources, reducing blind trust in AI-generated content.


Knowledge Check

Copilot generates a summary of a customer meeting that contains an inaccurate detail about pricing. Maya wants to prevent this from happening again. Which responsible AI principle is most relevant?

(Answer: Reliability & Safety, since the issue is accuracy of the output.)


Next up: Compliance Manager & eDiscovery — measuring your compliance posture and searching for content during investigations.


Guided

I learn, I simplify, I share.


© 2026 Sutheesh. All rights reserved.

Guided is an independent study resource and is not affiliated with, endorsed by, or officially connected to Microsoft. Microsoft, Azure, and related trademarks are property of Microsoft Corporation. Always verify information against Microsoft Learn.