
AB-100 Study Guide

Domain 1: Plan AI-Powered Business Solutions

  • Agent Requirements & Data Readiness
  • AI Strategy & the Cloud Adoption Framework
  • Multi-Agent Solution Design
  • Build, Buy, or Extend
  • Generative AI, Knowledge Sources & Prompt Engineering
  • Small Language Models & Model Selection
  • ROI, TCO & Business Case Analysis

Domain 2: Design AI-Powered Business Solutions

  • Copilot in D365 Customer Experience & Service
  • Agent Types: Task, Autonomous & Prompt/Response
  • Foundry Tools & Code-First Solutions
  • Copilot Studio: Topics, Flows & Prompt Actions
  • Power Apps, WAF & Data Processing
  • Extensibility: Custom Models, M365 Agents & Copilot Studio
  • MCP, Computer Use & Agent Behaviours
  • M365 Agents: Teams, SharePoint & Sales/Service in M365 Copilot
  • D365 AI Orchestration: Finance, SCM & Customer Experience

Domain 3: Deploy AI-Powered Business Solutions

  • Agent Monitoring: Tools, Metrics, and Processes
  • Telemetry Interpretation and Agent Tuning
  • Testing Strategy for AI Agents
  • Custom Model Validation and Prompt Best Practices
  • End-to-End Testing for Multi-App AI Solutions
  • ALM Foundations & Data Lifecycle for AI
  • ALM for Copilot Studio Agents
  • ALM for Microsoft Foundry Agents
  • ALM for D365 AI Features
  • Agent Security
  • Governance for AI Agents
  • Prompt Security & AI Vulnerabilities
  • Responsible AI & Audit Trails

Domain 3: Deploy AI-Powered Business Solutions (~15 min read)

Responsible AI & Audit Trails

Review AI solutions against Microsoft's six responsible AI principles and design comprehensive audit trails for model versions, data changes, and deployment events.

Responsible AI is a continuous practice, not a checkbox

☕ Simple explanation

Imagine building a bridge. You do not just check for safety once during construction and call it done. You inspect it regularly. You monitor for stress fractures. You redesign parts when you discover problems.

Responsible AI works the same way. Before you deploy an agent, you assess its impact — could it be unfair? Could it make harmful mistakes? Is it transparent about what it can and cannot do? After deployment, you keep monitoring. You audit every change. You fix bias when you find it.

Microsoft’s six responsible AI principles give you the framework. Audit trails give you the evidence.

Responsible AI governance requires two complementary capabilities: a review framework (ensuring solutions adhere to responsible AI principles before and after deployment) and audit infrastructure (comprehensive, tamper-evident logging of all changes to models, data, prompts, and configurations). The AB-100 exam tests architects on both — designing the review process AND the technical audit trail implementation.

This module covers Microsoft’s six responsible AI principles with practical application for business solutions, the responsible AI impact assessment process, and design patterns for immutable audit trails.

Microsoft’s six responsible AI principles

  • Fairness: AI systems should treat all people fairly and not discriminate. In practice: test for bias across demographic groups; monitor for disparate impact in decisions; review training data for historical bias.
  • Reliability and Safety: AI systems should perform reliably and safely under expected conditions. In practice: define performance thresholds; test edge cases; implement fallback behaviour when confidence is low; design for graceful degradation.
  • Privacy and Security: AI systems should be secure and respect privacy. In practice: minimise data collection; apply data protection controls; secure model endpoints; respect sensitivity labels and DLP policies.
  • Inclusiveness: AI systems should empower everyone and engage people. In practice: test with diverse user groups; support accessibility standards; avoid language or assumptions that exclude users.
  • Transparency: AI systems should be understandable. In practice: disclose that users are interacting with AI; explain how the system makes decisions; provide confidence scores where appropriate; cite sources.
  • Accountability: People should be accountable for AI systems. In practice: designate an owner for every agent; define escalation paths; maintain audit trails; enable human oversight and override.

Responsible AI review checklist

Every review area requires documented evidence, not just a verbal agreement:

  • Impact assessment. Key questions: What could go wrong? Who could be harmed? What is the blast radius? Evidence: a documented impact assessment signed by the agent owner and reviewed by the governance board.
  • Bias testing. Key questions: Does the system produce different outcomes for different demographic groups? Does the training data reflect historical bias? Evidence: bias test results across protected attributes (gender, age, ethnicity, disability); statistical parity metrics.
  • Fairness audit. Key questions: Are outcomes equitable? Do certain groups consistently receive worse service or less accurate results? Evidence: fairness metrics per group; disparate impact analysis; a remediation plan if gaps are found.
  • Transparency documentation. Key questions: Does the user know they are interacting with AI? Can the user understand why the agent gave a particular answer? Evidence: user-facing disclosure; explanation capability (citations, confidence scores); model card documentation.
  • Human oversight design. Key questions: Can a human review, override, or stop the agent? Are escalation paths defined? Evidence: override mechanism documentation; escalation workflow; kill switch availability.

Responsible AI impact assessment

An impact assessment should be completed before any new agent deployment. It answers three fundamental questions:

  1. What could go wrong? — enumerate failure modes, harmful outputs, and misuse scenarios
  2. Who could be affected? — identify stakeholders, vulnerable populations, and downstream consumers
  3. What controls are in place? — map each risk to a mitigation (technical control, process control, or human oversight)

When to perform an impact assessment:

  • Before initial deployment of any new agent
  • Before major updates to an agent’s capabilities or data sources
  • After an incident involving an agent
  • On a regular schedule (annually at minimum) for deployed agents
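
The three questions above can be captured as a reviewable artefact rather than a verbal agreement. Here is a minimal Python sketch of one possible structure; the field names, example risks, and the `is_complete` gate are illustrative assumptions, not a prescribed Microsoft format:

```python
# Hypothetical impact-assessment record: each failure mode maps to the
# stakeholders it could affect and the control that mitigates it.
impact_assessment = {
    "agent": "patient-scheduling-agent",
    "failure_modes": {
        "urgent case assigned a long wait time": {
            "affected": ["patients with time-sensitive conditions"],
            "mitigation": "triage rule routes urgent cases to a human scheduler",
            "control_type": "human oversight",
        },
        "recommendations skewed by historical bias": {
            "affected": ["patients", "doctors"],
            "mitigation": "monthly fairness report with tolerance alerting",
            "control_type": "process control",
        },
    },
}

def is_complete(assessment):
    """Sign-off gate: every failure mode must name at least one
    affected stakeholder and a concrete mitigation."""
    return all(
        risk["affected"] and risk["mitigation"]
        for risk in assessment["failure_modes"].values()
    )
```

A governance board could run a completeness check like `is_complete` before accepting the assessment for sign-off.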

Audit trails for models

Every change to a model must be logged with immutable, tamper-evident records:

  • Model version change. Log: previous version, new version, who deployed, when, deployment method. Why: traceability — link any model output to the specific model version that produced it.
  • Training data change. Log: dataset version, what changed (added records, removed records, modified labels), who approved. Why: reproducibility — recreate any model version with the exact data that trained it.
  • Prompt update. Log: previous prompt text, new prompt text, who changed, reason for change. Why: accountability — understand why agent behaviour changed at a specific point in time.
  • Configuration change. Log: setting name, old value, new value, who changed, approval reference. Why: governance — every configuration change is authorised and traceable.
  • Deployment event. Log: environment, version deployed, deployment method, success/failure, rollback status. Why: operations — know exactly what is running in each environment at any point in time.
  • Evaluation result. Log: model version, evaluation dataset version, metrics (accuracy, fairness, safety), pass/fail. Why: quality — prove the model met quality standards before deployment.
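
As a sketch of what one such record might look like, here is a hypothetical audit-event structure in Python. The field names and values are illustrative assumptions; the point is that every event carries the who, what, when, why, and an approval reference:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: an entry cannot be altered once created
class AuditEvent:
    event_type: str    # e.g. "model_version_change"
    actor: str         # who made the change
    details: dict      # event-specific payload (old/new version, etc.)
    approval_ref: str  # link to the authorising change approval
    reason: str        # why the change was made
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

event = AuditEvent(
    event_type="model_version_change",
    actor="dev.patel",
    details={"previous": "v2.2", "new": "v2.3", "method": "pipeline"},
    approval_ref="CHG-4821",  # hypothetical change ticket
    reason="quarterly retrain passed evaluation gates",
)
record = asdict(event)  # serialisable dict, ready to write to WORM storage
```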

Audit trails for data

Data changes need the same rigour as model changes:

  • Data source added or removed. Log: source name, type, owner, approval reference. Why: know what data feeds the agent at any point in time.
  • Data quality event. Log: quality metric, threshold, actual value, action taken. Why: track data quality over time and correlate it with model performance.
  • Access event. Log: who accessed which data, when, from where, for what purpose. Why: regulatory compliance — prove data was accessed only by authorised parties.
  • Deletion event. Log: what was deleted, why, who approved, retention policy reference. Why: data lifecycle — prove compliance with retention and deletion policies.
  • Knowledge source update. Log: document added/removed/modified, who, when, approval reference. Why: link agent behaviour changes to knowledge source changes.

Design patterns for audit infrastructure

  • Immutable storage — use write-once storage (Azure Immutable Blob Storage) so logs cannot be modified or deleted after creation
  • Tamper-evident hashing — each log entry includes a hash of the previous entry, creating a chain that detects any modification
  • Automated compliance reporting — scheduled reports that aggregate audit data for regulatory submissions
  • Retention policies — define how long audit logs are retained based on regulatory requirements (7 years is common for financial services)
  • Separation of duties — the people who manage models should not be the same people who manage audit logs
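
To make the tamper-evident hashing pattern concrete, here is a minimal Python sketch of a hash chain: each entry stores the SHA-256 hash of the previous entry, so editing any historical record invalidates everything after it. This is a simplified illustration, not a production audit service:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def append_entry(chain, event):
    """Append an audit event, linked to the previous entry's hash."""
    prev_hash = chain[-1]["entry_hash"] if chain else GENESIS
    body = {"event": event, "prev_hash": prev_hash}
    # Canonical JSON (sorted keys) so the hash is reproducible.
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "entry_hash": digest})

def verify_chain(chain):
    """Recompute every hash; any modified entry breaks the chain."""
    prev_hash = GENESIS
    for entry in chain:
        body = {"event": entry["event"], "prev_hash": entry["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["entry_hash"] != digest:
            return False
        prev_hash = entry["entry_hash"]
    return True

chain = []
append_entry(chain, {"type": "model_version_change", "new_version": "v2.3"})
append_entry(chain, {"type": "prompt_update", "actor": "dev.patel"})
assert verify_chain(chain)                 # untouched chain verifies
chain[0]["event"]["new_version"] = "v9.9"  # tamper with history...
assert not verify_chain(chain)             # ...and verification fails
```

Combined with write-once storage, this gives both properties the table above calls for: entries cannot be deleted, and any in-place edit is detectable.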
💡 Scenario: Dr. Amara reviews CareFirst's scheduling agent for responsible AI

Dr. Amara Obi conducts a responsible AI review of CareFirst Health’s patient scheduling agent. The agent recommends appointment times and matches patients with available doctors.

Impact assessment findings:

  • Failure mode: The agent could recommend inappropriate wait times for urgent cases
  • Affected stakeholders: Patients (especially those with time-sensitive conditions), doctors (workload imbalance), administration (liability)

Bias testing discovery: During testing, Dr. Amara analyses 10,000 historical scheduling recommendations. She finds the agent recommends male doctors for 73% of surgical consultation referrals and female doctors for only 27% — even though the surgical team is 55% male and 45% female.

Root cause: The training data reflects 5 years of historical scheduling. In the past, the hospital had a higher proportion of male surgeons, and referral patterns from that era are baked into the model’s learned preferences.

Remediation plan:

  1. Rebalance the training data to reflect current staff composition, not historical patterns
  2. Add a fairness constraint to the model: recommendation distribution should approximate the actual staff gender ratio within a statistical tolerance
  3. Implement ongoing monitoring: monthly fairness reports comparing recommendation distributions against staff demographics
  4. Human oversight: all surgical referrals reviewed by the scheduling coordinator before confirmation

Transparency implementation:

  • Patients see a disclosure: “This appointment was suggested by an AI scheduling assistant. Your doctor or care team can adjust this recommendation.”
  • Doctors see the AI recommendation alongside the reasoning: “Recommended based on availability, specialisation match, and patient preference history.”

Key lesson: The bias was not intentional — it was inherited from historical data. Without a responsible AI review, it would have perpetuated inequity at scale.

💡 Scenario: Yuki implements audit trails for Vanguard

Yuki Tanaka implements the audit infrastructure for all AI systems at Vanguard Financial Group:

Architecture:

  • All audit events flow through an Azure Event Hub to a central audit service
  • Audit logs written to Azure Immutable Blob Storage (WORM — Write Once Read Many)
  • Each log entry includes a SHA-256 hash of the previous entry (tamper-evident chain)
  • Retention: 7 years for all AI-related audit events (APRA requirement)

What gets logged:

  • Every model version change: “Credit risk model v2.3 deployed to production by Dev Patel, approved by Marcus Webb, at 2025-09-15T14:30:00Z”
  • Every training data update: “152 new training records added to credit risk dataset v4.1, approved by data steward Lin Chen”
  • Every prompt change: “System message for financial advisory agent updated by Dev Patel — changed risk disclosure wording”
  • Every knowledge source update: “3 regulatory guidance PDFs added to advisory agent knowledge base, approved by compliance team”

Automated compliance reporting:

  • Monthly report: all model changes, data changes, and deployment events
  • Quarterly report: responsible AI review status for all registered agents
  • On-demand: full audit trail for any specific agent or model version (for regulatory inquiries)

Separation of duties:

  • AI Platform team manages models and deployments (Dev Patel)
  • Compliance team manages audit infrastructure and reviews (Yuki Tanaka)
  • Neither team can modify the other’s logs

💡 Exam tip: responsible AI is built into the ALM pipeline

The exam expects responsible AI to be a continuous process, not a one-time review:

  • Before deployment: Impact assessment, bias testing, fairness audit, transparency documentation
  • During deployment: Audit trail for every change (model, data, prompt, config)
  • After deployment: Ongoing monitoring for bias drift, regular re-assessment, incident response
  • Key distinction: Responsible AI is not a separate process — it is integrated into the ALM pipeline as quality gates. A model that fails fairness testing should not pass the deployment pipeline.

If the exam asks “when should responsible AI be reviewed?” — the answer is ALWAYS “continuously, not just at deployment.”

Flashcards

Question: What are Microsoft's six responsible AI principles?
Answer: Fairness, Reliability and Safety, Privacy and Security, Inclusiveness, Transparency, and Accountability. Each principle requires specific implementation actions — they are practical design requirements, not aspirational goals.

Question: What is a responsible AI impact assessment and when should it be done?
Answer: An impact assessment enumerates failure modes, identifies affected stakeholders, and maps risks to mitigations. It should be done before initial deployment, before major updates, after incidents, and on a regular schedule (at least annually) for deployed agents.

Question: What six model events must be captured in an audit trail?
Answer: 1) Model version changes. 2) Training data changes. 3) Prompt updates. 4) Configuration changes. 5) Deployment events. 6) Evaluation results. Each event logs who, what, when, why, and the approval reference.

Question: Why must audit logs use immutable, tamper-evident storage?
Answer: Immutable storage (WORM) prevents logs from being modified or deleted after creation. Tamper-evident hashing (each entry hashes the previous) detects any modification to the chain. Together, they ensure the audit trail is trustworthy for regulatory compliance and incident investigation.

Question: How can historical bias in training data affect an AI system?
Answer: If training data reflects past inequities (e.g. historical hiring bias, demographic imbalances in service delivery), the model learns and reproduces those patterns at scale. The bias is not intentional but is inherited from the data. Regular bias testing and fairness audits are required to detect and correct it.

Knowledge check

  1. Dr. Amara discovers that CareFirst's scheduling agent recommends male doctors for 73% of surgical referrals despite the surgical team being 55% male. Which responsible AI principle is violated, and what is the correct remediation?
  2. A regulator asks Vanguard to prove exactly which model version and training data were used when the credit risk model denied a specific customer's application 8 months ago. What audit trail capability enables this?
  3. An architect proposes conducting a responsible AI impact assessment only at initial deployment and then annually. The agent processes patient health data and influences clinical scheduling. Is this sufficient?

🎉 Congratulations — you have completed all 29 modules!

You have worked through the entire AB-100: Agentic AI Business Solutions Architect study guide. Here is what you covered:

Domain 1 — Plan AI-Powered Business Solutions (Modules 1–7): AI landscape, solution design methodology, agent architecture patterns, integration with D365 and Power Platform, and data strategy for AI.

Domain 2 — Design AI-Powered Business Solutions (Modules 8–16): Building agents in Copilot Studio and Microsoft Foundry, implementing AI in D365 Finance, Supply Chain, Customer Service, and Sales, prompt engineering, evaluation frameworks, and end-to-end testing.

Domain 3 — Deploy AI-Powered Business Solutions (Modules 17–29): ALM foundations, platform-specific ALM for Copilot Studio, Foundry, and D365, agent security, governance, prompt security, and responsible AI with audit trails.

What to do next:

  1. Review your flashcards — revisit the modules where you scored lowest on quizzes
  2. Practice scenarios — the exam is heavily scenario-based. Re-read the character scenarios and think through the architectural decisions
  3. Focus on trade-offs — the exam rarely asks for a single right answer. It asks for the BEST answer given specific constraints
  4. Take a practice exam — test yourself under timed conditions to build exam-day confidence

Good luck with AB-100. You've got this. 🚀


© 2026 Sutheesh. All rights reserved.

Guided is an independent study resource and is not affiliated with, endorsed by, or officially connected to Microsoft. Microsoft, Azure, and related trademarks are property of Microsoft Corporation. Always verify information against Microsoft Learn.