Power Apps, WAF & Data Processing
Design AI-powered business processes in Power Apps canvas apps, apply the Power Platform Well-Architected Framework to intelligent workloads, and build data processing pipelines for AI grounding.
AI in Power Apps: beyond buttons and forms
A canvas app without AI is like a toolbox — useful, but the worker does all the thinking.
Adding AI to a Power App is like giving the worker an experienced colleague who watches over their shoulder and whispers advice. “That part looks defective — flag it.” “Based on the last 500 orders, this customer usually orders 200 units.” “This form entry doesn’t match the standard format — did you mean…?”
The Well-Architected Framework is a checklist to make sure your AI-powered app is reliable, secure, fast, easy to run, and pleasant to use — because AI adds new failure modes that regular apps don’t have.
AI components in canvas apps
Three ways to bring AI into a Power Apps canvas app, each suited to different scenarios:
| Feature | How It Works | Best For | Skill Level |
|---|---|---|---|
| AI Builder | Prebuilt and custom AI models accessible directly in Power Apps via the AI Builder control | Document processing, object detection, text classification, sentiment analysis — where prebuilt models fit the use case | Low — no code, drag-and-drop in the app designer |
| Custom Foundry models | Custom connectors that call deployed Foundry models via REST APIs | Complex, domain-specific AI tasks where prebuilt models are insufficient — custom classification, specialised generation, multi-step reasoning | High — requires Foundry deployment, custom connector setup, and API design |
| Copilot control | Embedded Copilot chat experience inside the canvas app — users ask questions in natural language | Conversational interfaces where users explore data or get recommendations through chat | Low — add the control and configure its knowledge sources |
Design pattern for AI in a business process:
- Capture — user enters data or uploads a document
- Analyse — AI model processes the input (image analysis, text extraction, classification)
- Present — show the AI result with a confidence indicator
- Confirm — user reviews and approves or corrects the AI output
- Act — proceed with the business process using the confirmed result
The confirm step is essential. Never design a process where AI output bypasses human review in safety-critical or high-value scenarios.
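The five-step pattern can be sketched in code. This is an illustrative Python sketch, not a real Power Apps API — `AiResult`, `analyse`, and `process` are hypothetical names, and the analyse step stands in for an AI Builder or Foundry model call:

```python
# Hypothetical sketch of the capture -> analyse -> present -> confirm -> act
# pattern. All names are illustrative; a real app would call an AI Builder
# model or a custom connector in the Analyse step.
from dataclasses import dataclass

@dataclass
class AiResult:
    label: str
    confidence: float  # 0.0 - 1.0

def analyse(document: bytes) -> AiResult:
    # Placeholder for the model call (image analysis, extraction, classification).
    return AiResult(label="invoice", confidence=0.92)

def process(document: bytes, user_confirms) -> str:
    result = analyse(document)                                    # Analyse
    print(f"{result.label} ({result.confidence:.0%} confident)")  # Present
    if user_confirms(result):                                     # Confirm
        return result.label                                       # Act
    return "manual-review"

# Capture happens in the app UI; here the document is a stand-in byte string.
final = process(b"...", user_confirms=lambda r: r.confidence >= 0.8)
```

The key design point is that `user_confirms` sits between the model output and the business action, so a human decision gates every high-stakes result.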
Well-Architected Framework for AI workloads
The Power Platform WAF has five pillars. Each applies differently when AI is involved:
| Pillar | Standard Concern | AI-Specific Concern |
|---|---|---|
| Reliability | App uptime, data availability | Model availability, fallback when AI service is down, graceful degradation |
| Security | Data access, authentication | Data in prompts (PII leakage), prompt injection defence, output filtering for harmful content |
| Operational Excellence | Deployment, monitoring | Model drift detection, response quality monitoring, feedback loops for continuous improvement |
| Performance Efficiency | Load times, query speed | AI call latency budgets (models can take 2-10 seconds), caching strategies for repeated queries, token consumption management |
| Experience Optimisation | Usability, accessibility | User trust (confidence scores, citations), explainability (why did AI recommend this?), managing expectations when AI is uncertain |
Exam tip: WAF pillars applied to AI
The exam tests whether you can apply WAF pillars to AI scenarios:
- “The AI model is occasionally unavailable” → Reliability — design fallback behaviour (queue the request, show cached result, or allow manual process)
- “Users don’t trust the AI recommendations” → Experience — show confidence scores, provide citations, let users give feedback
- “The AI costs are higher than expected” → Performance Efficiency — implement caching, reduce token usage, use smaller models for simple tasks
- “The AI occasionally returns inappropriate content” → Security — implement content safety filters on inputs and outputs
A good AI architecture addresses all five pillars. The exam rewards holistic thinking.
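Two of those pillars — Reliability and Performance Efficiency — combine naturally in code. Below is a minimal sketch, with hypothetical names, of graceful degradation when the AI service is down: serve a cached result if one exists, otherwise queue the request and fall back to the manual process:

```python
# Minimal sketch of graceful degradation for an unreliable AI dependency.
# call_model is a placeholder for a real model endpoint call.

def call_model(payload: dict) -> dict:
    # Simulates an outage; a real implementation would POST to the endpoint.
    raise TimeoutError("model endpoint unreachable")

def classify_with_fallback(payload: dict, cache: dict, queue: list) -> dict:
    try:
        result = call_model(payload)
        cache[payload["id"]] = result        # cache successes for reuse
        return result
    except (TimeoutError, ConnectionError):
        if payload["id"] in cache:           # Reliability: serve a cached result
            return {**cache[payload["id"]], "stale": True}
        queue.append(payload)                # ...or queue the request for retry
        return {"status": "manual", "reason": "AI unavailable - use manual checklist"}

cache, queue = {}, []
result = classify_with_fallback({"id": "part-42"}, cache, queue)
```

In this run the cache is empty, so the request lands in the retry queue and the user is routed to the manual process instead of seeing a frozen app.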
Data processing for AI grounding
Grounding is only as good as the data behind it. A data processing pipeline ensures your AI models have clean, current, relevant data to reason over.
The five-stage pipeline:
| Stage | What Happens | Example |
|---|---|---|
| Collection | Gather data from source systems — D365, SharePoint, databases, APIs | Pull product specifications from D365 SCM, safety manuals from SharePoint |
| Cleaning | Remove duplicates, fix formatting, handle missing values, strip irrelevant content | Remove boilerplate headers/footers, deduplicate versioned documents |
| Transformation | Convert data into a format the AI model can consume — chunking, structuring, embedding | Split long documents into semantic chunks of 500-1000 tokens |
| Indexing | Store processed data in a searchable index — vector indexes for semantic search, keyword indexes for exact match | Create vector embeddings in Foundry and index in AI Search |
| Serving | Deliver relevant data to the model at inference time — retrieval pipeline with ranking | RAG retrieves top-5 chunks, re-ranks by relevance, passes to the model as context |
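The Transformation stage above can be sketched with a naive chunker. This sketch treats a word as a stand-in for a token; a production pipeline would use a real tokenizer and split on semantic boundaries (headings, paragraphs) rather than fixed windows:

```python
# Naive sketch of the Transformation stage: split a document into overlapping
# fixed-size chunks. Word count approximates token count here; real pipelines
# use a tokenizer and semantic boundaries.

def chunk(text: str, max_tokens: int = 800, overlap: int = 100) -> list[str]:
    words = text.split()
    chunks, start = [], 0
    while start < len(words):
        end = min(start + max_tokens, len(words))
        chunks.append(" ".join(words[start:end]))
        if end == len(words):
            break
        start = end - overlap  # overlap preserves context across boundaries
    return chunks

doc = "word " * 2000          # a 2000-"token" document
pieces = chunk(doc.strip())   # 0-800, 700-1500, 1400-2000 -> 3 chunks
```

The overlap is a deliberate trade-off: it duplicates some content in the index, but prevents an answer from being split across a chunk boundary where retrieval would miss it.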
Scenario: Kai builds a quality inspection app for Apex's shop floor
Kai Mercer designs a canvas app for shop floor workers at Apex Industries. Workers photograph manufactured parts, and AI analyses each image for defects.
AI component: AI Builder custom model trained on 5,000 images of good and defective parts.
Business process:
- Worker opens the app and photographs the part
- AI Builder model analyses the image → returns “Pass” or “Defect detected” with confidence score
- If confidence is above 95%, result auto-populates the quality record in D365 SCM
- If confidence is 80-95%, the result is shown with a yellow indicator — worker confirms or overrides
- If confidence is below 80%, the app flags it for a quality engineer to review manually
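The three-tier routing above reduces to a small decision function. This is a sketch of the routing logic only, using the scenario's thresholds; the downstream actions (writing the D365 SCM record, flagging for review) are assumed, not shown:

```python
# Sketch of three-tier confidence routing for the inspection flow.
# Thresholds mirror the scenario; downstream actions are hypothetical.

def route(label: str, confidence: float) -> str:
    if confidence > 0.95:
        return "auto-record"       # auto-populate the D365 SCM quality record
    if confidence >= 0.80:
        return "worker-confirm"    # yellow indicator: worker confirms or overrides
    return "engineer-review"       # flag for manual review by a quality engineer

high = route("Defect detected", 0.97)
mid = route("Pass", 0.85)
low = route("Pass", 0.60)
```

Keeping the thresholds in one function makes them easy to tune as the model improves, and it keeps the confirm step explicit rather than buried in UI logic.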
WAF applied:
- Reliability: If the AI model is unavailable (network issue on the shop floor), the app falls back to manual inspection mode with a checklist
- Performance Efficiency: Images are compressed before upload. Results are cached for duplicate scans of the same batch
- Experience Optimisation: Confidence score is shown as a colour indicator (green/yellow/red) — workers trust it more because they can see how confident the AI is
- Security: Images are processed in Apex’s own Foundry instance — no factory images leave the tenant
Priya Sharma (data engineer) builds the continuous retraining pipeline — new defect images from the shop floor are labelled by quality engineers and added to the training dataset monthly.
Knowledge check
Kai's quality inspection app uses an AI Builder model to detect defects. The shop floor has unreliable Wi-Fi, and workers report that the app sometimes freezes when the AI service is unavailable. Which WAF pillar should Kai address, and what's the recommended design?
A data engineer sets up a grounding pipeline for a customer service agent. The pipeline processes 10,000 support documents, creates vector embeddings, and indexes them. Six months later, users report that the agent gives outdated answers about product features that have been updated. What went wrong?
Next up: Extensibility: Custom Models, M365 Agents & Copilot Studio — extending AI solutions with custom Foundry models, M365 Copilot declarative agents, and Copilot Studio extensibility.