
AB-100 Study Guide

Domain 1: Plan AI-Powered Business Solutions

  • Agent Requirements & Data Readiness
  • AI Strategy & the Cloud Adoption Framework
  • Multi-Agent Solution Design
  • Build, Buy, or Extend
  • Generative AI, Knowledge Sources & Prompt Engineering
  • Small Language Models & Model Selection
  • ROI, TCO & Business Case Analysis

Domain 2: Design AI-Powered Business Solutions

  • Copilot in D365 Customer Experience & Service
  • Agent Types: Task, Autonomous & Prompt/Response
  • Foundry Tools & Code-First Solutions
  • Copilot Studio: Topics, Flows & Prompt Actions
  • Power Apps, WAF & Data Processing
  • Extensibility: Custom Models, M365 Agents & Copilot Studio
  • MCP, Computer Use & Agent Behaviours
  • M365 Agents: Teams, SharePoint & Sales/Service in M365 Copilot
  • D365 AI Orchestration: Finance, SCM & Customer Experience

Domain 3: Deploy AI-Powered Business Solutions

  • Agent Monitoring: Tools, Metrics, and Processes
  • Telemetry Interpretation and Agent Tuning
  • Testing Strategy for AI Agents
  • Custom Model Validation and Prompt Best Practices
  • End-to-End Testing for Multi-App AI Solutions
  • ALM Foundations & Data Lifecycle for AI
  • ALM for Copilot Studio Agents
  • ALM for Microsoft Foundry Agents
  • ALM for D365 AI Features
  • Agent Security Free
  • Governance for AI Agents Free
  • Prompt Security & AI Vulnerabilities Free
  • Responsible AI & Audit Trails Free

Domain 2: Design AI-Powered Business Solutions (~14 min read)

Power Apps, WAF & Data Processing

Design AI-powered business processes in Power Apps canvas apps, apply the Power Platform Well-Architected Framework to intelligent workloads, and design data processing pipelines for AI grounding.

AI in Power Apps: beyond buttons and forms

☕ Simple explanation

A canvas app without AI is like a toolbox — useful, but the worker does all the thinking.

Adding AI to a Power App is like giving the worker an experienced colleague who watches over their shoulder and whispers advice. “That part looks defective — flag it.” “Based on the last 500 orders, this customer usually orders 200 units.” “This form entry doesn’t match the standard format — did you mean…?”

The Well-Architected Framework is a checklist to make sure your AI-powered app is reliable, secure, fast, easy to run, and pleasant to use — because AI adds new failure modes that regular apps don’t have.

The AB-100 exam expects you to design AI integration patterns in Power Apps canvas apps — selecting the right AI component (AI Builder, custom Foundry models via connectors, or the Copilot control), positioning it correctly in the business process, and handling edge cases like model unavailability.

The Power Platform Well-Architected Framework adds AI-specific guidance across five pillars: reliability (model failover, degradation), security (data in prompts, output filtering), operational excellence (model monitoring, drift detection), performance (latency budgets, caching), and experience (user trust, explainability).

Data processing for grounding is the pipeline that prepares your enterprise data so AI models can use it accurately — collection, cleaning, transformation, indexing, and continuous refresh.

AI components in canvas apps

Three ways to bring AI into a Power Apps canvas app, each suited to different scenarios:

Three approaches to embedding AI in Power Apps canvas apps
| Feature | How It Works | Best For | Skill Level |
| --- | --- | --- | --- |
| AI Builder | Prebuilt and custom AI models accessible directly in Power Apps via the AI Builder control | Document processing, object detection, text classification, sentiment analysis — where prebuilt models fit the use case | Low — no code, drag-and-drop in the app designer |
| Custom Foundry models | Custom connectors that call deployed Foundry models via REST APIs | Complex, domain-specific AI tasks where prebuilt models are insufficient — custom classification, specialised generation, multi-step reasoning | High — requires Foundry deployment, custom connector setup, and API design |
| Copilot control | Embedded Copilot chat experience inside the canvas app — users ask questions in natural language | Conversational interfaces where users explore data or get recommendations through chat | Low — add the control and configure its knowledge sources |

Design pattern for AI in a business process:

  1. Capture — user enters data or uploads a document
  2. Analyse — AI model processes the input (image analysis, text extraction, classification)
  3. Present — show the AI result with a confidence indicator
  4. Confirm — user reviews and approves or corrects the AI output
  5. Act — proceed with the business process using the confirmed result

The confirm step is essential. Never design a process where AI output bypasses human review in safety-critical or high-value scenarios.
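
The five steps above can be sketched as a small pipeline with a human gate before the act step. This is illustrative logic only, assuming a stand-in `analyse` function in place of a real AI Builder or connector call; the field names are hypothetical.

```python
# Sketch of the capture -> analyse -> present -> confirm -> act pattern.
# `analyse` is a hypothetical stand-in for an AI model call, not a real API.

def analyse(image_bytes: bytes) -> tuple[str, float]:
    """Stand-in for an AI model call; returns (label, confidence)."""
    return "Defect detected", 0.87  # fixed result for illustration

def run_inspection(image_bytes: bytes, user_confirms) -> dict:
    label, confidence = analyse(image_bytes)            # 2. Analyse
    print(f"AI result: {label} ({confidence:.0%})")     # 3. Present with confidence
    if not user_confirms(label, confidence):            # 4. Confirm - human gate
        label = "Manual review required"                # user overrode the AI
    return {"result": label, "confidence": confidence}  # 5. Act on confirmed value

record = run_inspection(b"...", user_confirms=lambda label, conf: conf >= 0.8)
```

The key design point is that the AI output never flows directly into the business record: the confirm callback sits between analyse and act.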

Well-Architected Framework for AI workloads

The Power Platform WAF has five pillars. Each applies differently when AI is involved:

| Pillar | Standard Concern | AI-Specific Concern |
| --- | --- | --- |
| Reliability | App uptime, data availability | Model availability, fallback when AI service is down, graceful degradation |
| Security | Data access, authentication | Data in prompts (PII leakage), prompt injection defence, output filtering for harmful content |
| Operational Excellence | Deployment, monitoring | Model drift detection, response quality monitoring, feedback loops for continuous improvement |
| Performance Efficiency | Load times, query speed | AI call latency budgets (models can take 2-10 seconds), caching strategies for repeated queries, token consumption management |
| Experience Optimisation | Usability, accessibility | User trust (confidence scores, citations), explainability (why did AI recommend this?), managing expectations when AI is uncertain |

💡 Exam tip: WAF pillars applied to AI

The exam tests whether you can apply WAF pillars to AI scenarios:

  • “The AI model is occasionally unavailable” → Reliability — design fallback behaviour (queue the request, show cached result, or allow manual process)
  • “Users don’t trust the AI recommendations” → Experience — show confidence scores, provide citations, let users give feedback
  • “The AI costs are higher than expected” → Performance Efficiency — implement caching, reduce token usage, use smaller models for simple tasks
  • “The AI occasionally returns inappropriate content” → Security — implement content safety filters on inputs and outputs

A good AI architecture addresses all five pillars. The exam rewards holistic thinking.
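
As a sketch of two pillars working together, the fragment below caches results (Performance Efficiency) and degrades gracefully instead of freezing when the model call fails (Reliability). The `classify_text` stub, the outage simulation, and the fallback message are all hypothetical stand-ins for a real connector call.

```python
import hashlib

_cache: dict[str, str] = {}   # Performance: reuse results for repeated inputs
calls = {"n": 0}

def classify_text(text: str) -> str:
    """Hypothetical stand-in for a remote model call via a custom connector."""
    calls["n"] += 1
    if calls["n"] > 2:
        raise ConnectionError("AI service unavailable")  # simulate an outage
    return "urgent" if "late" in text else "routine"

def classify_with_fallback(text: str) -> str:
    key = hashlib.sha256(text.encode()).hexdigest()
    if key in _cache:                                    # cache hit: no call, no tokens
        return _cache[key]
    try:
        result = classify_text(text)
    except ConnectionError:
        return "UNAVAILABLE: queue for manual review"    # Reliability: degrade, don't freeze
    _cache[key] = result
    return result

print(classify_with_fallback("order #123 is late"))   # model call succeeds
print(classify_with_fallback("order #123 is late"))   # served from cache, no second call
print(classify_with_fallback("new enquiry"))          # second model call
print(classify_with_fallback("another enquiry"))      # outage -> graceful fallback
```

The same idea applies regardless of how the model is hosted: check the cache first, wrap the call in error handling, and return a result the rest of the process can act on either way.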

Data processing for AI grounding

Grounding is only as good as the data behind it. A data processing pipeline ensures your AI models have clean, current, relevant data to reason over.

The five-stage pipeline:

| Stage | What Happens | Example |
| --- | --- | --- |
| Collection | Gather data from source systems — D365, SharePoint, databases, APIs | Pull product specifications from D365 SCM, safety manuals from SharePoint |
| Cleaning | Remove duplicates, fix formatting, handle missing values, strip irrelevant content | Remove boilerplate headers/footers, deduplicate versioned documents |
| Transformation | Convert data into a format the AI model can consume — chunking, structuring, embedding | Split long documents into semantic chunks of 500-1000 tokens |
| Indexing | Store processed data in a searchable index — vector indexes for semantic search, keyword indexes for exact match | Create vector embeddings in Foundry and index in AI Search |
| Serving | Deliver relevant data to the model at inference time — retrieval pipeline with ranking | RAG retrieves top-5 chunks, re-ranks by relevance, passes to the model as context |

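
A toy end-to-end sketch of the last three stages follows, assuming word-based chunks and keyword overlap in place of real token-aware chunking, vector embeddings, and AI Search; every function name here is illustrative.

```python
def chunk_text(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Transformation: split cleaned text into overlapping word-based chunks.
    Real pipelines chunk by tokens (e.g. 500-1000) along semantic boundaries."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size]) for i in range(0, len(words), step)]

def build_index(docs: list[str]) -> list[dict]:
    """Indexing: keyword sets as a crude stand-in for vector embeddings."""
    entries = []
    for doc in docs:
        for piece in chunk_text(doc):
            entries.append({"text": piece, "terms": set(piece.lower().split())})
    return entries

def serve(index: list[dict], query: str, top_k: int = 5) -> list[str]:
    """Serving: rank chunks by term overlap with the query, return top-k as context."""
    query_terms = set(query.lower().split())
    ranked = sorted(index, key=lambda e: len(query_terms & e["terms"]), reverse=True)
    return [e["text"] for e in ranked[:top_k]]

index = build_index([
    "Safety manual: wear gloves when handling solvent.",
    "Pricing sheet: standard orders ship in 200 unit batches.",
])
print(serve(index, "solvent handling gloves", top_k=1))
```

Swapping the keyword sets for embeddings and the overlap score for cosine similarity turns this into the standard RAG retrieval shape the Serving row describes.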
💡 Scenario: Kai builds a quality inspection app for Apex's shop floor

Kai Mercer designs a canvas app for shop floor workers at Apex Industries. Workers photograph manufactured parts, and AI analyses each image for defects.

AI component: AI Builder custom model trained on 5,000 images of good and defective parts.

Business process:

  1. Worker opens the app and photographs the part
  2. AI Builder model analyses the image → returns “Pass” or “Defect detected” with confidence score
  3. If confidence is above 95%, result auto-populates the quality record in D365 SCM
  4. If confidence is 80-95%, the result is shown with a yellow indicator — worker confirms or overrides
  5. If confidence is below 80%, the app flags it for a quality engineer to review manually
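
Kai's routing rules can be expressed as one small function. The thresholds and outcomes come from the scenario above; treating exactly 95% as part of the confirm band is an assumption, since the prose only says "above 95%".

```python
def route(confidence: float) -> str:
    """Route an AI defect result by confidence, per the scenario's thresholds."""
    if confidence > 0.95:
        return "auto-populate"     # write straight to the D365 SCM quality record
    if confidence >= 0.80:
        return "worker-confirm"    # yellow indicator: worker confirms or overrides
    return "engineer-review"       # flag for manual review by a quality engineer

for conf in (0.97, 0.88, 0.62):
    print(conf, route(conf))
```

Keeping the bands in one function makes the thresholds easy to tune later as the model is retrained.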

WAF applied:

  • Reliability: If the AI model is unavailable (network issue on the shop floor), the app falls back to manual inspection mode with a checklist
  • Performance: Images are compressed before upload. Results are cached for duplicate scans of the same batch
  • Experience: Confidence score is shown as a colour indicator (green/yellow/red) — workers trust it more because they can see how confident the AI is
  • Security: Images are processed in Apex’s own Foundry instance — no factory images leave the tenant

Priya Sharma (data engineer) builds the continuous retraining pipeline — new defect images from the shop floor are labelled by quality engineers and added to the training dataset monthly.

Flashcards

Question

What are the three ways to embed AI in a Power Apps canvas app?

Answer

1) AI Builder — prebuilt and custom models accessible via the AI Builder control. 2) Custom Foundry models — called via custom connectors and REST APIs. 3) Copilot control — embedded conversational AI for natural language interaction. Choose based on complexity and whether prebuilt models meet the requirement.

Question

Why is data processing for AI grounding NOT a one-time activity?

Answer

Enterprise data changes constantly — new documents, updated policies, revised pricing, evolving products. A grounding pipeline must include continuous refresh (scheduled or event-triggered) to keep the AI's knowledge current. Stale grounding data causes the AI to give outdated or incorrect answers.

Question

Name the five stages of a data processing pipeline for AI grounding.

Answer

Collection (gather from sources), Cleaning (deduplicate, fix formatting), Transformation (chunk, structure, embed), Indexing (store in searchable indexes), and Serving (retrieve relevant data at inference time). Each stage adds quality — skip one and the AI output degrades.

Knowledge check

Kai's quality inspection app uses an AI Builder model to detect defects. The shop floor has unreliable Wi-Fi, and workers report that the app sometimes freezes when the AI service is unavailable. Which WAF pillar should Kai address, and what's the recommended design?

Knowledge Check

A data engineer sets up a grounding pipeline for a customer service agent. The pipeline processes 10,000 support documents, creates vector embeddings, and indexes them. Six months later, users report that the agent gives outdated answers about product features that have been updated. What went wrong?

Next up: Extensibility: Custom Models, M365 Agents & Copilot Studio — extending AI solutions with custom Foundry models, M365 Copilot declarative agents, and Copilot Studio extensibility.


© 2026 Sutheesh. All rights reserved.

Guided is an independent study resource and is not affiliated with, endorsed by, or officially connected to Microsoft. Microsoft, Azure, and related trademarks are property of Microsoft Corporation. Always verify information against Microsoft Learn.