
AI-901 Study Guide

Domain 1: AI Concepts and Capabilities

  • What is AI? Your First 10 Minutes Free
  • Responsible AI: The Six Principles Free
  • How Generative AI Actually Works Free
  • Choosing the Right AI Model Free
  • Deploying AI Models: Options & Settings
  • AI Workloads at a Glance
  • Text Analysis: Keywords, Entities & Sentiment
  • Speech: Recognition & Synthesis
  • Computer Vision: Seeing the World
  • Image Generation: Creating with AI
  • Information Extraction: From Chaos to Structure

Domain 2: Implement AI Solutions Using Foundry

  • Prompting Fundamentals: System & User Prompts
  • Microsoft Foundry: Your AI Command Center Free
  • Building a Chat App with the Foundry SDK
  • Agents in Foundry: Create & Test
  • Building an Agent Client App
  • Building a Text Analysis App
  • Multimodal: Responding to Speech
  • Azure Speech in Foundry Tools
  • Visual Prompts: Images as Input
  • Generating Images with AI
  • Building a Vision App
  • Content Understanding: Documents & Forms
  • Multimodal Extraction: Images, Audio & Video
  • Building an Extraction App
  • Exam Prep: Putting It All Together

Domain 1: AI Concepts and Capabilities (Free · ~12 min read)

Responsible AI: The Six Principles

Microsoft's responsible AI framework isn't just corporate policy — the exam tests all six principles. Learn what each one means, how they apply to Azure AI, and how to spot them in exam scenarios.

What are responsible AI principles?

☕ Simple explanation

Responsible AI = the safety rails that keep AI helpful, not harmful.

Imagine you’re teaching a new employee. You wouldn’t just say “go do stuff.” You’d say: “Be fair to everyone. Don’t make dangerous decisions alone. Respect people’s privacy. Make your work accessible. Explain your reasoning. And if something goes wrong, someone is responsible.”

That’s exactly what Microsoft’s six principles do for AI. Every Azure AI service, every Foundry model, every Copilot response — they’re all built with these principles baked in.

Microsoft’s Responsible AI Standard defines six principles that guide the development, deployment, and governance of all AI systems. These principles are not aspirational — they’re enforced through technical controls, review processes, and governance policies across Azure AI services.

For the AI-901 exam, you need to understand what each principle means, recognise examples of each in practice, and identify which principle applies in a given scenario.

The six principles at a glance

Microsoft's six responsible AI principles:

  • 🎯 Fairness: AI should treat all people equitably and avoid bias. Azure AI example: a hiring model should not favour candidates based on gender or ethnicity.
  • 🛡️ Reliability & Safety: AI should work correctly and safely under expected conditions. Azure AI example: a medical AI must be tested rigorously before making diagnostic suggestions.
  • 🔒 Privacy & Security: AI should protect data and operate within security boundaries. Azure AI example: Azure AI models process data within your tenant boundary, and your data isn't used to train models.
  • 🔍 Transparency: AI behaviour should be understandable and explainable. Azure AI example: AI responses should cite sources, and users should know they're interacting with AI, not a human.
  • ♿ Inclusiveness: AI should be accessible and useful for people with diverse abilities and backgrounds. Azure AI example: speech services support multiple languages; vision services include accessibility features.
  • 📋 Accountability: People should be answerable for the AI systems they deploy. Azure AI example: organisations need AI governance policies; Microsoft publishes AI impact assessments.

Fairness: treating everyone equitably

The principle: AI systems should not discriminate. They should produce equitable results for different groups of people.

Why it matters: AI models learn from training data. If that data reflects historical biases (e.g., more resumes from men in tech roles), the model will inherit those biases.

MediSpark scenario: MediSpark’s diagnostic AI was trained mostly on data from younger patients. When it analyses symptoms from elderly patients, it’s less accurate. To address fairness, MediSpark needs to:

  • Audit the training data for demographic balance
  • Test the model across different age groups
  • Monitor outcomes for disparities

💡 Exam tip: Fairness keywords

Look for these trigger words in exam questions:

  • Bias, discrimination, equitable, demographic groups, protected characteristics
  • If a scenario mentions an AI treating one group differently → the answer is Fairness
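
The fairness audit described for MediSpark boils down to comparing model performance across groups. Here is a minimal sketch in plain Python (the data, group names, and function are all hypothetical, not an Azure API):

```python
# Minimal fairness audit sketch: compare a model's accuracy across
# demographic groups. Records and group labels are hypothetical.
from collections import defaultdict

def accuracy_by_group(records):
    """records: list of (group, predicted, actual) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

results = [
    ("under_40", "flu", "flu"), ("under_40", "flu", "flu"),
    ("under_40", "cold", "flu"), ("over_65", "cold", "flu"),
    ("over_65", "cold", "flu"), ("over_65", "flu", "flu"),
]
print(accuracy_by_group(results))
```

A large accuracy gap between groups (here roughly 0.67 vs 0.33) is exactly the kind of disparity a fairness audit is meant to surface.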

Reliability & Safety: working correctly under pressure

The principle: AI systems should perform reliably and safely. They should handle errors gracefully and not cause harm.

Why it matters: An AI that’s 95% accurate sounds great — until you realise the 5% failure rate in a medical or safety context could be dangerous.

DataFlow Corp scenario: DataFlow deploys a customer support agent that handles 10,000 queries per day. To ensure reliability and safety, they:

  • Test the agent with edge cases and adversarial inputs
  • Set up fallback to human agents when confidence is low
  • Monitor response quality continuously
  • Define failure modes and escalation paths
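
DataFlow's "fallback to human agents when confidence is low" item can be sketched as a simple threshold check (the threshold value and function names are illustrative assumptions, not part of any Azure SDK):

```python
# Sketch of a low-confidence fallback: return the model's answer only
# when confidence clears a threshold; otherwise escalate to a human.
CONFIDENCE_THRESHOLD = 0.75  # illustrative value; tune per workload

def route_response(answer: str, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    return "Let me connect you with a human agent for this one."

print(route_response("Your order ships Tuesday.", 0.92))
print(route_response("Maybe try restarting?", 0.40))
```

The design point is that the failure mode is defined in advance: below the threshold, the system degrades to a safe, known behaviour rather than guessing.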

Privacy & Security: protecting your data

The principle: AI systems should respect privacy laws and protect data through strong security measures.

Key Azure AI facts:

  • Your data is not used to train Azure AI models
  • Data stays within your tenant boundary and chosen Azure region
  • Azure AI services support a wide range of compliance standards including GDPR, SOC 2, and ISO 27001 (specific compliance varies by service and region — always check the Azure compliance documentation for your scenario)
  • Encryption at rest and in transit by default

GreenLeaf scenario: GreenLeaf processes photos of farmers’ fields through Azure AI vision services. Their farmers want to know: “Will Microsoft see our crop data?” The answer is no — Azure AI processes data within the tenant and doesn’t retain it for model training.

Transparency: making AI explainable

The principle: People should understand how AI systems work and how decisions are made.

In practice:

  • AI-generated content should be labelled as AI-generated
  • AI responses should cite their sources when possible
  • Users should know when they’re talking to an AI, not a human
  • Documentation about model capabilities and limitations should be available

Priya scenario: Priya builds a chatbot using Foundry. She enables transparency by:

  • Adding a disclaimer: “This response was generated by AI”
  • Showing source citations in the response
  • Publishing model documentation (what it can and can’t do)
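
Priya's first two transparency measures can be sketched as a small presentation wrapper (a hypothetical helper, not a Foundry API; the answer and source names are made up):

```python
# Sketch: wrap a generated answer with its source citations and an
# AI disclaimer before showing it to the user. Illustrative only.
def present_response(answer: str, sources: list[str]) -> str:
    lines = [answer, "", "Sources:"]
    lines += [f"  [{i}] {s}" for i, s in enumerate(sources, start=1)]
    lines += ["", "This response was generated by AI."]
    return "\n".join(lines)

print(present_response(
    "Our returns window is 30 days from delivery.",
    ["Returns policy page", "Customer service handbook"],
))
```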

Inclusiveness: AI for everyone

The principle: AI should be designed to be accessible and useful for people with diverse abilities, backgrounds, and experiences.

In practice:

  • Speech services should support multiple languages and accents
  • Vision services should work across different skin tones and lighting conditions
  • AI interfaces should be keyboard navigable and screen-reader compatible
  • Content generation should avoid cultural assumptions

Accountability: someone is responsible

The principle: People and organisations should be accountable for the AI systems they design and deploy.

In practice:

  • Microsoft publishes AI impact assessments for its services
  • Organisations deploying AI should have AI governance policies
  • There should be a clear escalation process when AI causes harm
  • Audit logs should track AI decisions for review

ℹ️ How Microsoft enforces accountability

Microsoft has an internal Office of Responsible AI and a Responsible AI Standard that every product team must follow. This includes:

  • Mandatory impact assessments before deploying AI features
  • Sensitivity reviews for high-risk scenarios (medical, legal, financial)
  • Content safety systems that filter harmful outputs
  • Regular red-teaming exercises to find vulnerabilities
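
One accountability item above, audit logging of AI decisions, can be sketched in a few lines (the field names and values are illustrative assumptions, not a real logging schema):

```python
# Sketch of an audit log for AI decisions: record enough context that
# a reviewer can later trace what the system decided, and when.
import datetime
import json

audit_log = []

def log_decision(user_id: str, model: str, decision: str, confidence: float):
    audit_log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user_id": user_id,
        "model": model,
        "decision": decision,
        "confidence": confidence,
    })

log_decision("u-1042", "support-agent-v2", "refund_approved", 0.88)
print(json.dumps(audit_log[-1], indent=2))
```

In practice this record would go to durable, access-controlled storage, but the principle is the same: every AI decision leaves a trail someone can review.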

Quick reference: matching scenarios to principles

  • A loan approval AI rejects more applications from one ethnic group → Fairness
  • An AI medical assistant gives wrong dosage information → Reliability & Safety
  • An AI service stores customer data in a region without consent → Privacy & Security
  • Users can't tell if they're chatting with a human or AI → Transparency
  • A voice assistant only works accurately in English → Inclusiveness
  • No one reviews the AI's decisions or takes responsibility for errors → Accountability
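
The quick-reference mapping above can double as a self-quiz. A tiny Python sketch (the failure descriptions are paraphrased for brevity):

```python
# Self-quiz sketch: map the kind of failure described in a scenario
# to the responsible AI principle it violates. Wording is illustrative.
PRINCIPLE_BY_FAILURE = {
    "treats one demographic group worse": "Fairness",
    "gives wrong or unsafe output": "Reliability & Safety",
    "mishandles or leaks customer data": "Privacy & Security",
    "hides that the user is talking to AI": "Transparency",
    "only works for one language or ability": "Inclusiveness",
    "no one owns or reviews its decisions": "Accountability",
}

for failure, principle in PRINCIPLE_BY_FAILURE.items():
    print(f"{principle}: {failure}")
```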

🎬 Video walkthrough

Video coming soon: Responsible AI Principles — AI-901 Module 2 (~12 min)

Flashcards

Question

What are Microsoft's six responsible AI principles?

Answer

Fairness, Reliability & Safety, Privacy & Security, Transparency, Inclusiveness, and Accountability.

Question

Which responsible AI principle addresses bias in AI systems?

Answer

Fairness — AI should treat all people equitably and not discriminate based on demographic characteristics.

Question

Does Azure AI use your data to train its models?

Answer

No. Your data stays within your tenant boundary. Azure AI services do not use customer data to train or improve their foundational models. This supports the Privacy & Security principle.

Question

What does the Transparency principle require?

Answer

AI systems should be understandable and explainable. Users should know when they're interacting with AI, responses should cite sources, and model capabilities/limitations should be documented.

Question

Which principle says someone must be answerable when AI causes harm?

Answer

Accountability — people and organisations should be accountable for the AI systems they deploy, with governance policies, audit trails, and escalation processes.

Knowledge Check

  • MediSpark's diagnostic AI performs well on test data from urban hospitals but poorly on data from rural clinics. A review reveals the training data was 90% urban. Which responsible AI principle is being violated?
  • DataFlow Corp deploys a customer support agent. A user asks: "Am I talking to a real person or a bot?" The agent responds as if it's human. Which responsible AI principle is this failing?
  • GreenLeaf stores farmer field images in Azure AI Vision for crop analysis. Farmers worry their data might be used by Microsoft. Which statement is correct?


Next up: How Generative AI Actually Works — tokens, transformers, and why AI sometimes makes things up.


Guided

I learn, I simplify, I share.


© 2026 Sutheesh. All rights reserved.

Guided is an independent study resource and is not affiliated with, endorsed by, or officially connected to Microsoft. Microsoft, Azure, and related trademarks are property of Microsoft Corporation. Always verify information against Microsoft Learn.