
AB-100 Study Guide

Domain 1: Plan AI-Powered Business Solutions

  • Agent Requirements & Data Readiness
  • AI Strategy & the Cloud Adoption Framework
  • Multi-Agent Solution Design
  • Build, Buy, or Extend
  • Generative AI, Knowledge Sources & Prompt Engineering
  • Small Language Models & Model Selection
  • ROI, TCO & Business Case Analysis

Domain 2: Design AI-Powered Business Solutions

  • Copilot in D365 Customer Experience & Service
  • Agent Types: Task, Autonomous & Prompt/Response
  • Foundry Tools & Code-First Solutions
  • Copilot Studio: Topics, Flows & Prompt Actions
  • Power Apps, WAF & Data Processing
  • Extensibility: Custom Models, M365 Agents & Copilot Studio
  • MCP, Computer Use & Agent Behaviours
  • M365 Agents: Teams, SharePoint & Sales/Service in M365 Copilot
  • D365 AI Orchestration: Finance, SCM & Customer Experience

Domain 3: Deploy AI-Powered Business Solutions

  • Agent Monitoring: Tools, Metrics, and Processes
  • Telemetry Interpretation and Agent Tuning
  • Testing Strategy for AI Agents
  • Custom Model Validation and Prompt Best Practices
  • End-to-End Testing for Multi-App AI Solutions
  • ALM Foundations & Data Lifecycle for AI
  • ALM for Copilot Studio Agents
  • ALM for Microsoft Foundry Agents
  • ALM for D365 AI Features
  • Agent Security Free
  • Governance for AI Agents Free
  • Prompt Security & AI Vulnerabilities Free
  • Responsible AI & Audit Trails Free

Domain 1: Plan AI-Powered Business Solutions (~15 min read)

Generative AI, Knowledge Sources & Prompt Engineering

Agents need the right knowledge and the right prompts to give accurate answers. Learn how to design knowledge sources for Copilot Studio agents, build enterprise prompt libraries, and apply prompt engineering techniques that work at scale.

The three ingredients of a great agent

☕ Simple explanation

A great agent needs three things: intelligence (the AI model), knowledge (the data it can access), and instructions (the prompts that guide its behaviour).

The AI model is the engine — it can understand language and generate responses. But without knowledge, it makes things up. And without good prompts, it gives vague, unhelpful answers.

As an architect, your job is to connect the right knowledge sources, design prompts that get consistent results, and build a library of reusable prompts that the whole organisation can use.

Copilot Studio agents use generative AI orchestration to combine a large language model’s reasoning with grounded knowledge sources. The agent retrieves relevant information from configured knowledge sources, injects it into the model’s context, and generates a response — a pattern known as retrieval-augmented generation (RAG).
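
The loop above can be sketched in a few lines of Python. This is a simplified illustration of the RAG pattern, not Copilot Studio's actual orchestrator; the keyword-overlap `retrieve` function stands in for real semantic retrieval, and the prompt template and sample knowledge are invented for the example:

```python
# Simplified RAG sketch: retrieve relevant passages, inject them into the
# model's context with a grounding instruction, then hand off to the LLM.
# Keyword-overlap scoring is a stand-in for real semantic retrieval.

def retrieve(question, knowledge, top_k=2):
    """Rank passages by naive keyword overlap with the question."""
    terms = set(question.lower().split())
    scored = sorted(knowledge,
                    key=lambda p: len(terms & set(p.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(question, passages):
    """Inject retrieved passages into the context sent to the model."""
    context = "\n".join(f"- {p}" for p in passages)
    return ("Answer using ONLY the context below. "
            "If the answer is not there, say so.\n"
            f"Context:\n{context}\n"
            f"Question: {question}")

knowledge = [
    "Annual leave is 25 days for full-time employees.",
    "Expense claims must be submitted within 30 days.",
]
question = "How many days of annual leave do I get?"
prompt = build_prompt(question, retrieve(question, knowledge))
```

A production agent replaces `retrieve` with the configured knowledge-source search and sends `prompt` to the model; the grounding instruction is what keeps the generated answer tied to retrieved facts rather than the model's imagination.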

The exam tests three areas: (1) selecting and configuring knowledge sources in Copilot Studio, (2) designing enterprise prompt libraries with governance, and (3) applying prompt engineering techniques that improve response quality, consistency, and safety in production business solutions.

Knowledge sources in Copilot Studio

When you enable generative AI in a Copilot Studio agent, you connect it to knowledge sources — these are the data stores the agent searches before generating a response.

Knowledge sources available in Copilot Studio
Feature | What It Provides | Setup Complexity | Best For
SharePoint sites | Internal documents, policies, procedures, wikis | Low — just point to the site URL | HR agents, IT help desk, policy Q&A
Dataverse tables | Structured business data from D365 or custom apps | Medium — requires table selection and column mapping | Product catalogue agents, order status
Public websites | External web content — product pages, documentation | Low — provide URLs, the agent crawls them | Customer-facing FAQ, product information
Custom data (files) | Uploaded documents — PDFs, Word docs | Low — drag and drop | Quick prototypes, small document sets
Enterprise data (connectors) | External systems — Salesforce, ServiceNow, Jira, etc. | High — Power Platform connector config and auth | Cross-system agents needing external data
Azure AI Search index | Vector-indexed document collections for semantic search at scale | High — requires AI Search setup and indexing pipeline | Large-scale RAG across thousands of documents
💡 Architecture decision: when to use which knowledge source

The exam often presents a scenario and asks which knowledge source to configure:

  • “Internal company policies in SharePoint” — SharePoint sites
  • “Product data in Dynamics 365” — Dataverse tables
  • “Customer-facing product documentation on the website” — Public websites
  • “Need to search across 50,000 documents with semantic similarity” — Azure AI Search index
  • “Agent needs to pull data from Salesforce” — Enterprise data connectors

Key principle: Keep knowledge sources as close to the source of truth as possible. Don’t copy SharePoint content into uploaded files — point directly to SharePoint so the agent always has current data.
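
These decision rules can be captured as a small helper for design reviews or architecture checklists. The category names mirror the table above; the function itself and the 10,000-document threshold are illustrative, not an official sizing guideline:

```python
def pick_knowledge_source(data_location, doc_count=0, needs_semantic_search=False):
    """Map a scenario to a Copilot Studio knowledge source, following the
    decision rules above. Threshold and routing keys are illustrative."""
    # Very large or semantic-similarity workloads go to a dedicated index
    if needs_semantic_search or doc_count > 10_000:
        return "Azure AI Search index"
    routing = {
        "sharepoint": "SharePoint sites",
        "dataverse": "Dataverse tables",
        "website": "Public websites",
        "external_system": "Enterprise data (connectors)",
        "local_files": "Custom data (files)",
    }
    # Default to uploaded files for quick prototypes
    return routing.get(data_location, "Custom data (files)")

# "50,000 documents with semantic similarity" -> Azure AI Search index
assert pick_knowledge_source("sharepoint", doc_count=50_000) == "Azure AI Search index"
# "Internal company policies in SharePoint" -> SharePoint sites
assert pick_knowledge_source("sharepoint") == "SharePoint sites"
```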

Generative AI in Copilot Studio: how it works

Copilot Studio offers two modes for handling user questions:

Mode | How It Works | When to Use
Topic-based (classic) | Predefined conversation flows with branching logic; the agent follows a script | Structured processes — booking, ordering, form filling
Generative AI | The agent searches knowledge sources and dynamically generates natural-language answers | Open-ended Q&A — policy questions, product inquiries

Most production agents use both modes together: topics for structured workflows, generative AI for the “everything else” fallback. When no topic matches, the generative AI orchestrator kicks in and searches the connected knowledge sources.

💡 Deep dive: generative AI fallback design

The exam tests your understanding of fallback behaviour in Copilot Studio:

  1. User sends a message
  2. Copilot Studio checks for a matching topic (exact or intent-based)
  3. If a topic matches — follow the topic flow
  4. If no topic matches — trigger generative AI fallback
  5. Generative AI searches knowledge sources — generates a grounded response
  6. If knowledge sources have no relevant content — return a configurable fallback message

Architect’s design decision: How much should be topic-driven vs generative?

  • High-stakes processes (financial transactions, medical triage) — topics with strict flows
  • Knowledge retrieval and Q&A — generative AI with good knowledge sources
  • Mixed scenarios — topics for the structured parts, generative fallback for everything else
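
The six-step fallback chain above can be sketched as follows. Topic matching and knowledge search are naive stand-ins (substring checks) for Copilot Studio's intent recognition and semantic search, and the sample topics and knowledge are invented:

```python
def handle_message(message, topics, knowledge,
                   fallback_message="Sorry, I can't help with that."):
    """Sketch of the fallback chain: topic match -> generative AI over
    knowledge sources -> configurable fallback message."""
    # Steps 1-3: check for a matching topic and follow its flow
    for trigger, flow_response in topics.items():
        if trigger in message.lower():
            return flow_response
    # Steps 4-5: no topic matched - search knowledge sources and ground a response
    hits = [p for p in knowledge
            if any(word in p.lower() for word in message.lower().split())]
    if hits:
        return f"Based on our documentation: {hits[0]}"
    # Step 6: nothing relevant found - return the configurable fallback
    return fallback_message

topics = {"book a demo": "Let's schedule your demo."}
kb = ["Our refund policy allows returns within 30 days."]
```

The design decision above maps directly onto this structure: high-stakes processes live in `topics` with strict flows, while open-ended questions fall through to the grounded generative path.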

Building an enterprise prompt library

A prompt library is a governed collection of reusable prompts that teams across the organisation can use. Instead of every team writing their own prompts from scratch, the library provides tested, approved templates.

Prompt Type | Example | Governance Level
System prompts for agents | "You are a customer service agent for Apex Industries. Answer using only the provided product documentation…" | High — reviewed by AI CoE
Task prompts for business users | "Summarise this meeting transcript focusing on action items and owners" | Medium — curated and tagged
Analysis prompts | "Analyse this sales pipeline and identify the top 3 deals at risk of slipping" | Medium — domain-validated
Generation prompts | "Write a professional email to the supplier explaining the delivery delay" | Low — self-service with guardrails

Prompt library design guidelines:

  • Version control — prompts evolve. Track versions and allow rollback
  • Categorisation — organise by department, use case, and complexity
  • Testing — every prompt should be tested against representative data before publishing
  • Governance — system prompts require AI CoE review; task prompts can be self-service
  • Metrics — track usage, satisfaction, and accuracy over time
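
A minimal sketch of the version-control and rollback guideline. The `PromptEntry` class and its fields are a hypothetical illustration of the pattern, not a Copilot Studio or Power Platform API:

```python
from dataclasses import dataclass, field

@dataclass
class PromptEntry:
    """One governed prompt with its full version history."""
    name: str
    category: str  # e.g. "system", "task", "analysis", "generation"
    versions: list = field(default_factory=list)

    def publish(self, text):
        """Add a new version; earlier versions stay available for rollback."""
        self.versions.append(text)
        return len(self.versions)  # 1-based version number

    def current(self):
        return self.versions[-1]

    def rollback(self):
        """Drop the latest version and restore the previous one."""
        if len(self.versions) < 2:
            raise ValueError("no earlier version to roll back to")
        self.versions.pop()
        return self.current()

entry = PromptEntry("scheduling-agent", "system")
entry.publish("v1: You are a scheduling assistant for CareFirst Health...")
entry.publish("v2: ...adds a guardrail: never provide medical advice.")
```

A real library would add the other guidelines too: category tags for discovery, test results attached to each version, and an approval state that gates publishing for high-governance prompts.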
💡 Scenario: Jordan builds CareFirst's prompt library

Jordan Reeves creates a prompt library for CareFirst Health:

System prompts (AI CoE reviewed):

  • Patient scheduling agent system prompt — includes strict guardrails about not providing medical advice
  • Clinical summary generator — designed to summarise patient interactions for shift handoff

Task prompts (curated by Dr. Amara):

  • “Summarise this patient interaction for the shift handoff report”
  • “List all follow-up actions mentioned in this clinical note”

Guardrail: Every prompt in the CareFirst library includes a responsible AI footer: “This agent provides scheduling and administrative support only. For medical advice, please consult your healthcare provider.”

Versioning: When Jordan updates the scheduling system prompt, the old version is archived but available for rollback.

Prompt engineering for business solutions

The exam expects you to know prompt engineering at an architectural level — not just “write better prompts,” but how to design prompting strategies for enterprise AI.

Technique | What It Does | Example
System message design | Sets the agent's persona, scope, and rules | "You are Apex's supply chain advisor. Only answer questions about inventory and procurement."
Few-shot examples | Shows the model desired input/output patterns | Include 2–3 examples of good responses in the system message
Chain of thought | Asks the model to reason step by step | "Think through the decision step by step before providing your recommendation"
Output formatting | Specifies the response structure | "Respond with a table containing: recommendation, confidence, reasoning"
Guardrails and constraints | Defines what the agent must NOT do | "Never provide financial advice. If asked, redirect to the compliance team."
Grounding instructions | Tells the model to use only the provided context | "Answer using only the information from the attached documents. If the answer isn't there, say so."
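
In practice several of these techniques are layered into one system message. A sketch of a builder that combines persona, guardrails, grounding, few-shot examples, and output formatting (the function and template text are illustrative, not a product API):

```python
def build_system_message(persona, guardrails, few_shot=(),
                         output_format=None, grounded=True):
    """Layer standard prompt-engineering techniques into one system message."""
    parts = [persona]                                   # system message design
    parts += [f"Rule: {g}" for g in guardrails]         # guardrails and constraints
    if grounded:                                        # grounding instruction
        parts.append("Answer using only the provided context. "
                     "If the answer isn't there, say so.")
    for q, a in few_shot:                               # few-shot examples
        parts.append(f"Example question: {q}\nExample answer: {a}")
    if output_format:                                   # output formatting
        parts.append(f"Format: {output_format}")
    return "\n\n".join(parts)

msg = build_system_message(
    "You are Apex's supply chain advisor.",
    guardrails=["Never provide financial advice."],
    output_format="recommendation, confidence, reasoning",
)
```

Building system messages from shared components like this is one way a prompt library enforces consistency: every agent gets the same grounding and guardrail text, and updating it in one place updates every agent.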
💡 Exam tip: prompt engineering at the architecture level

The exam isn’t about writing individual prompts — it’s about designing prompting strategies:

  • Consistency: How do you ensure agents across the enterprise give consistent answers? Standard system prompts from the prompt library
  • Safety: How do you prevent agents from going off-script? Guardrails, grounding instructions, content filters
  • Quality: How do you improve answer quality over time? Feedback loops, A/B testing prompts, metrics
  • Scalability: How do you manage prompts across 50 agents? Prompt library with versioning and governance

If a question asks “how should the architect ensure consistent AI responses across departments?” — the answer is a governed prompt library, not telling teams to “write better prompts.”
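
The feedback-loop and A/B-testing idea can be made concrete with even a naive comparison of user ratings between two prompt variants. This is a toy sketch; a real pipeline would add significance testing and task-accuracy metrics:

```python
import statistics

def compare_prompt_variants(ratings_a, ratings_b):
    """Pick the prompt variant with the higher mean user rating.
    Naive A/B comparison; real pipelines add significance testing."""
    mean_a = statistics.mean(ratings_a)
    mean_b = statistics.mean(ratings_b)
    return "A" if mean_a >= mean_b else "B"

# Thumbs-up/down style ratings (1 = helpful, 0 = not helpful) per variant
winner = compare_prompt_variants([1, 1, 0, 1], [1, 0, 0, 0])
```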

Flashcards

Question

What is the default fallback behaviour in Copilot Studio when no topic matches?

Answer

The generative AI orchestrator searches the configured knowledge sources and generates a grounded response. If no relevant content is found, a configurable fallback message is returned.

Question

What is a prompt library and why does an enterprise need one?

Answer

A governed collection of reusable, tested, and versioned prompts that teams across the organisation can use. It ensures consistency, safety, and quality across all AI interactions instead of ad-hoc prompts.

Question

What are grounding instructions in prompt engineering?

Answer

Instructions that tell the AI model to answer using only the information from provided context (knowledge sources, documents). They reduce hallucination by constraining the model to grounded facts.

Question

Which knowledge source should you use for semantic search across 50,000+ documents?

Answer

Azure AI Search index. It supports semantic similarity search at scale, which SharePoint and Dataverse knowledge sources cannot match for large document collections. A pro-dev team may build the indexing pipeline in Foundry, but Copilot Studio consumes it as an Azure AI Search knowledge source.

Knowledge check

Question

Kai's client (Apex Industries) has 50,000 technical product specifications stored across multiple SharePoint libraries. The customer service agent needs to search these documents semantically to answer detailed technical questions. Which knowledge source should Kai configure?

Question

Adrienne's compliance team at Vanguard wants to ensure all AI agents across the company follow consistent communication guidelines and never provide unauthorised financial advice. Which approach should Adrienne recommend?

Question

A Copilot Studio agent for an HR department needs to handle both structured onboarding workflows (collecting documents, scheduling orientation) and open-ended policy questions from new hires. How should the architect design the agent?

Next up: Small Language Models & Model Selection — when to use SLMs, how model routing works, and how to select the right model for each scenario.


© 2026 Sutheesh. All rights reserved.

Guided is an independent study resource and is not affiliated with, endorsed by, or officially connected to Microsoft. Microsoft, Azure, and related trademarks are property of Microsoft Corporation. Always verify information against Microsoft Learn.