Responsible AI Principles
Microsoft's responsible AI framework isn't just marketing — the exam tests it. Six principles that guide how Copilot and agents are designed, deployed, and governed.
What are responsible AI principles?
Responsible AI = the rules that keep AI helpful, not harmful.
Think of them like traffic laws for AI. Without them, AI might: give biased answers, make decisions nobody can explain, compromise your privacy, or be used for harmful purposes. The principles say: “Build AI that’s fair, transparent, safe, private, inclusive, and accountable.”
Microsoft applies these to everything — Copilot, agents, Azure AI services. The exam tests whether you know what each principle means and how it applies to M365.
The six principles
| Principle | What It Means | How It Applies to Copilot |
|---|---|---|
| 🎯 Fairness | AI should treat all people equitably | Copilot shouldn't produce biased recommendations based on gender, race, or other protected characteristics |
| 🛡️ Reliability & Safety | AI should work correctly and not cause harm | Copilot should produce accurate responses; hallucinations are monitored and mitigated |
| 🔒 Privacy & Security | AI should protect data and respect privacy | Copilot respects M365 permissions; data isn't used to train models; customer data stays in the tenant boundary |
| 🔍 Transparency | AI should be understandable and explainable | Users know when they're interacting with AI; Copilot shows which sources it used |
| ♿ Inclusiveness | AI should be accessible and work for everyone | Copilot supports accessibility features, multiple languages, and diverse user needs |
| 📋 Accountability | People should be answerable for AI systems | Microsoft publishes impact assessments; organisations should have AI governance policies |
Exam tip: The exam usually presents a scenario and asks “which responsible AI principle is being demonstrated?” Focus on the keywords: bias → Fairness, accuracy → Reliability, data handling → Privacy, explainability → Transparency, accessibility → Inclusiveness, oversight → Accountability.
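The keyword-to-principle mapping in the tip above can be drilled as a tiny lookup table. This is purely a study aid, not anything from Microsoft's tooling; the keywords and principle names come straight from the tip, everything else is invented for illustration:

```python
# Keyword → principle map, taken directly from the exam tip above.
KEYWORD_TO_PRINCIPLE = {
    "bias": "Fairness",
    "accuracy": "Reliability & Safety",
    "data handling": "Privacy & Security",
    "explainability": "Transparency",
    "accessibility": "Inclusiveness",
    "oversight": "Accountability",
}

def principle_for(scenario: str) -> str:
    """Return the first principle whose trigger keyword appears in the scenario text."""
    lowered = scenario.lower()
    for keyword, principle in KEYWORD_TO_PRINCIPLE.items():
        if keyword in lowered:
            return principle
    return "No keyword matched (re-read the scenario)"

print(principle_for("Users report the model shows bias against one group"))
# Fairness
```

When an exam scenario mixes several ideas, look for the one the question stem emphasises; the mapping is a memory hook, not a substitute for reading the scenario.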
Key Copilot design decisions driven by responsible AI
| Decision | Principle |
|---|---|
| Copilot doesn’t use your data to train AI models | Privacy & Security |
| Copilot shows citations (sources it used) | Transparency |
| Copilot responses can be reviewed in audit logs | Accountability |
| Content filters prevent harmful/toxic outputs | Reliability & Safety |
| Copilot respects existing M365 permissions | Privacy & Security |
| Human review is recommended for important decisions | Accountability |
Scenario: Northwave's AI governance policy
After deploying Copilot, Alex (CEO) asks Maya to create an AI governance policy. Here’s what they develop, mapped to responsible AI:
| Policy Rule | Principle |
|---|---|
| “Copilot outputs must be reviewed by a human before sending to customers” | Accountability |
| “We will audit Copilot usage quarterly for signs of bias” | Fairness |
| “Copilot web grounding is disabled — only internal data” | Privacy & Security |
| “All teams must be trained on Copilot’s limitations” | Transparency |
| “Copilot must be usable by employees with disabilities” | Inclusiveness |
| “We test Copilot responses in critical workflows before relying on them” | Reliability & Safety |
What the exam specifically tests
The exam focuses on practical implications, not just definitions:
- Copilot doesn’t train on your data — your organisational data stays private and is not used to improve the AI model
- Copilot can hallucinate — it may produce inaccurate information, especially when data is incomplete. Human review is essential.
- Copilot shows its work — citations link back to source documents so users can verify
- Admin oversight exists — audit logs, usage reports, and DLP policies allow governance
- Content safety filters — Copilot blocks harmful, toxic, or inappropriate outputs
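The "human review" and "content safety filter" points above can be pictured as a gate in front of any outbound AI draft. The sketch below is hypothetical: the function name, the blocklist, and the workflow are invented for illustration, and this is not how Copilot's filters actually work internally:

```python
# Stand-in for a real content-safety filter (hypothetical blocklist).
BLOCKED_TERMS = {"toxic", "harmful"}

def safe_to_send(ai_draft: str, human_approved: bool) -> bool:
    """Gate an AI-generated draft: it must pass the content filter
    AND carry explicit human sign-off before going to a customer."""
    lowered = ai_draft.lower()
    passes_filter = not any(term in lowered for term in BLOCKED_TERMS)
    return passes_filter and human_approved

# A reviewed draft goes out; an unreviewed one does not.
print(safe_to_send("Quote attached, pricing confirmed by sales.", human_approved=True))   # True
print(safe_to_send("Quote attached, pricing confirmed by sales.", human_approved=False))  # False
```

The point for the exam is the pattern, not the code: automated filters (Reliability & Safety) and mandatory human sign-off (Accountability) are layered, and neither alone is sufficient.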
🎬 Video walkthrough
🎬 Video coming soon
Responsible AI Principles — AB-900 Module 17 (~8 min)
Knowledge Check
Copilot generates a summary of a customer meeting that contains an inaccurate detail about pricing. Maya wants to prevent this from happening again. Which responsible AI principle is most relevant?
Next up: Compliance Manager & eDiscovery — measuring your compliance posture and searching for content during investigations.