
AB-731 Study Guide

Domain 1: Identify the Business Value of Generative AI Solutions

  • Generative AI vs Traditional AI: What's the Difference?
  • Choosing the Right AI Solution for Your Business
  • AI Models: Pretrained vs Fine-Tuned
  • AI Cost Drivers and ROI: Tokens, Pricing, and Business Cases
  • Challenges of Generative AI: Fabrications, Bias & Reliability
  • When Generative AI Creates Real Business Value
  • Prompt Engineering: The Skill That Multiplies AI Value
  • RAG and Grounding: Making AI Use YOUR Data
  • Data Quality: The Make-or-Break Factor for AI
  • When Traditional Machine Learning Adds Value
  • Securing AI Systems: From Application to Data

Domain 2: Identify Benefits, Capabilities, and Opportunities for Microsoft AI Apps and Services

  • Mapping Business Needs to Microsoft AI Solutions
  • Copilot Versions: Free, Business, M365, and Beyond
  • Copilot Chat: Web, Mobile & Work Experiences
  • Copilot in M365 Apps: Word, Excel, Teams & More
  • Copilot Studio & Microsoft Graph: Building Smarter Solutions
  • Researcher & Analyst: Copilot's Power Agents
  • Build, Buy, or Extend: The AI Decision Framework
  • Microsoft Foundry: Your AI Platform
  • Azure AI Services: Vision, Search & Beyond
  • Matching the Right AI Model to Your Business Need

Domain 3: Identify an Implementation and Adoption Strategy

  • Responsible AI and Governance: Principles That Protect Your Business
  • Setting Up an AI Council: Strategy, Oversight & Alignment
  • Building Your AI Adoption Team
  • AI Champions: Your Secret Weapon for Adoption
  • Data, Security, Privacy & Cost: The Four Pillars of AI Readiness
  • Copilot & Azure AI Licensing: Every Option Explained


RAG and Grounding: Making AI Use YOUR Data

Out-of-the-box AI only knows what it was trained on. Grounding connects it to your organisation's data — and RAG is the most common way to do it. Learn when and how to use both.

Why does AI need your data?

☕ Simple explanation

Think of a large language model like a well-educated new hire on their first day.

They know a lot about the world — business, writing, analysis — but they know nothing about YOUR company. Your policies, your products, your customers, your internal processes. If you ask them a company-specific question, they’ll either say “I don’t know” or make something up that sounds right.

Grounding is giving that new hire a cheat sheet — your company’s actual documents, data, and knowledge — so their answers are based on real information, not guesses.

RAG (Retrieval-Augmented Generation) is the most common way to hand over that cheat sheet: find the right documents first, then let AI answer based on what it found.

Grounding is the practice of connecting a generative AI model to authoritative data sources so its responses are factually anchored in real, current information rather than relying solely on training data. Grounding reduces fabrications and makes AI outputs relevant to the specific organisational context.

Retrieval-Augmented Generation (RAG) is the dominant architectural pattern for grounding. It works in three stages: (1) Retrieve — search a knowledge base for documents relevant to the user’s query, (2) Augment — inject the retrieved content into the prompt as context, (3) Generate — the model produces a response grounded in the retrieved information.

RAG is preferred over fine-tuning for most enterprise scenarios because it uses current data, respects access controls, and doesn’t require retraining the model.

How RAG works: Three steps

RAG follows a simple pattern every time a user asks a question:

  1. Retrieve: the system searches your knowledge base for relevant documents. Example: a user asks "What's our return policy?" and the system finds the returns policy document, the FAQ page, and a recent policy update email.
  2. Augment: the retrieved documents are added to the prompt as context. The AI now has the actual policy text in front of it, not just its training data.
  3. Generate: the model produces an answer grounded in the retrieved content: "Our return policy allows returns within 30 days with receipt. As of March 2026, we also accept digital receipts."

Without RAG, the model would have to guess your return policy — and it would probably fabricate something plausible but wrong.
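The three-step loop can be sketched in a few lines of Python. Everything here is a toy stand-in: the knowledge base is a hardcoded list, retrieval is naive keyword overlap rather than a real search index, and the final generate step is left as a comment because it would call an external LLM.

```python
import re

# Toy knowledge base standing in for an indexed document store.
KNOWLEDGE_BASE = [
    "Return policy: items may be returned within 30 days with receipt.",
    "FAQ: digital receipts are accepted as of March 2026.",
    "Shipping policy: standard delivery takes 3-5 business days.",
]

def tokens(text: str) -> set[str]:
    """Lowercase word set, used for naive relevance scoring."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Step 1 (Retrieve): rank documents by keyword overlap with the query."""
    return sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(tokens(query) & tokens(doc)),
        reverse=True,
    )[:top_k]

def augment(query: str, docs: list[str]) -> str:
    """Step 2 (Augment): inject the retrieved documents into the prompt."""
    context = "\n".join(f"- {d}" for d in docs)
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

query = "What is our return policy?"
prompt = augment(query, retrieve(query))
# Step 3 (Generate): send `prompt` to an LLM, e.g. via Azure OpenAI.
print(prompt)
```

In a real system the keyword overlap would be replaced by a search index (typically vector or hybrid search), but the shape of the pipeline is the same.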

💡 Exam tip: RAG vs fine-tuning

The exam may ask you to choose between RAG and fine-tuning. Key distinctions:

  • RAG retrieves data at query time. Data stays current. No model retraining needed. Respects existing access controls.
  • Fine-tuning bakes knowledge into the model itself. Requires retraining when data changes. Better for teaching the model a specific style or behaviour.

For most business scenarios — answering questions over company data, policy lookups, customer support — RAG is the right answer. Fine-tuning is for specialised behaviour, not data retrieval.
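The rule of thumb above can be written down as a tiny decision helper. This is an illustrative encoding of the exam tip only, not an official Microsoft framework, and the parameter names are invented for the sketch.

```python
def rag_or_fine_tuning(data_changes_often: bool,
                       needs_citations: bool,
                       needs_style_change: bool) -> str:
    """Rule of thumb: RAG for data retrieval scenarios,
    fine-tuning for specialised style or behaviour."""
    if needs_style_change and not (data_changes_often or needs_citations):
        return "fine-tuning"
    return "RAG"

# Policy lookups over frequently changing documents -> RAG.
print(rag_or_fine_tuning(True, True, False))
# Teaching the model a consistent house writing style -> fine-tuning.
print(rag_or_fine_tuning(False, False, True))
```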

Grounding in the Microsoft ecosystem

Microsoft offers two primary grounding approaches:

Microsoft grounding approaches compared:

  • Microsoft 365 Copilot (Graph grounding). How it works: automatically grounded in your M365 data via Microsoft Graph. Data sources: emails, Teams chats, SharePoint documents, OneDrive files, and calendar events. Best for knowledge workers who need AI that understands their work context: meetings, emails, documents.
  • Azure AI Search + Azure OpenAI (custom RAG). How it works: you build a custom retrieval pipeline over your own data sources. Data sources: databases, APIs, file shares, CRM systems, custom knowledge bases, or any other data you index. Best for custom applications, customer-facing chatbots, or scenarios where M365 data isn't sufficient.
ℹ️ How Copilot grounding works behind the scenes

When you ask Microsoft 365 Copilot a question, here’s what happens:

  1. Your prompt goes to the Copilot orchestrator
  2. The orchestrator queries Microsoft Graph for relevant content you have access to — emails, files, chats, meetings
  3. The relevant content is injected into the prompt as grounding data
  4. The LLM generates a response based on your actual data
  5. The response goes through responsible AI filters before reaching you

Critically, Copilot only retrieves data the user already has permission to access. If you can’t see a document in SharePoint, Copilot can’t use it either.
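That permission behaviour can be illustrated with a sketch: the retrieval layer filters by the user's existing access rights before anything reaches the prompt. The data structures below are hypothetical; in Microsoft 365, Copilot enforces this through Microsoft Graph and SharePoint permissions, not application code you write.

```python
# Hypothetical index entries; in Microsoft 365 the permissions live in
# SharePoint/Microsoft Graph, not in your own data structures.
DOCUMENTS = [
    {"title": "Holiday policy", "allowed_users": {"alex", "sam"}},
    {"title": "Board minutes", "allowed_users": {"sam"}},
]

def retrieve_for_user(user: str) -> list[str]:
    """Security trimming: only documents the user can already open
    are eligible to become grounding context."""
    return [d["title"] for d in DOCUMENTS if user in d["allowed_users"]]

print(retrieve_for_user("alex"))  # board minutes are never retrieved for alex
print(retrieve_for_user("sam"))   # sam's permissions include both documents
```

The design point is that trimming happens at retrieval time, before augmentation, so restricted content can never leak into the model's context.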

Business requirements for grounding

Before deploying a grounded AI solution, leaders need to evaluate five key areas:

  • Data quality: is your data accurate, complete, and well-organised? If ignored, AI will ground on bad data and produce confidently wrong answers.
  • Access control: are document permissions correctly configured? If ignored, AI could surface confidential data to unauthorised users.
  • Data freshness: how current is the indexed data? If ignored, AI answers based on outdated information and misses policy changes.
  • Relevance: does the retrieval system find the right documents? If ignored, AI grounds on irrelevant content, producing unhelpful or misleading responses.
  • Coverage: is all necessary knowledge indexed and searchable? If ignored, AI says "I don't know" for questions it should be able to answer.
💡 The hidden risk: Oversharing through AI

This is a common exam topic. When organisations deploy Copilot without reviewing their SharePoint permissions, they often discover that employees can see documents they shouldn’t — salary data, board minutes, HR cases.

Before Copilot, nobody noticed because nobody searched for that content. Now Copilot proactively surfaces it. The fix isn’t to restrict Copilot — it’s to fix the underlying permissions. AI doesn’t create the access problem. It reveals it.

Real-world scenario: Ravi builds grounded customer support

🏗️ Ravi, CTO of TechVantage Solutions, needs a customer support chatbot that answers questions about their software products. Out-of-the-box ChatGPT knows nothing about TechVantage’s products.

Ravi’s team builds a RAG solution:

  1. Index all product documentation, release notes, and known issues into Azure AI Search
  2. Connect Azure AI Search to Azure OpenAI Service as a grounding source
  3. Configure the system prompt: “You are a TechVantage support assistant. Only answer from the provided documents. If you don’t have the information, say so.”
  4. Deploy the chatbot on the customer portal
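The "say so when you don't know" rule in step 3 is what keeps the fabrication rate down, and its core logic can be sketched without any Azure services. The documents, keyword scoring, and threshold below are toys; a real deployment would rely on Azure AI Search relevance and the system prompt rather than hand-rolled matching.

```python
import re

# Toy stand-ins for the indexed TechVantage documentation.
DOCS = [
    "Installation: run the TechVantage installer with admin rights.",
    "Known issue: export to PDF fails on version 2.1; fixed in 2.2.",
]

def tokens(text: str) -> set[str]:
    """Lowercase word set for naive relevance scoring."""
    return set(re.findall(r"[a-z0-9.]+", text.lower()))

def answer(question: str, min_overlap: int = 2) -> str:
    """Answer only from retrieved docs; refuse when nothing is relevant."""
    best = max(DOCS, key=lambda d: len(tokens(question) & tokens(d)))
    if len(tokens(question) & tokens(best)) < min_overlap:
        return "I don't have that information."
    return f"Based on our documentation: {best}"

print(answer("Why does export to PDF fail?"))
print(answer("What is your refund policy?"))  # out of scope -> refusal
```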

Results after 8 weeks:

  • 85% of customer questions answered without human escalation (up from 30% with their old rule-based chatbot)
  • Fabrication rate below 3% (the “say I don’t know” instruction works)
  • Customer satisfaction up 22 points
  • Support team focuses on complex issues instead of answering the same questions repeatedly
ℹ️ Why Ravi chose RAG over fine-tuning

Ravi considered fine-tuning but chose RAG because:

  • Product docs change weekly — fine-tuning would require constant retraining
  • He needs the bot to cite specific documents as sources
  • RAG lets him update the knowledge base without touching the model
  • Access controls ensure customers only see documentation for products they’ve purchased

Fine-tuning would have been appropriate if the goal was to change the model’s communication style or teach it a domain-specific language.

Key flashcards

Question

What does RAG stand for and what does it do?


Answer

Retrieval-Augmented Generation. It retrieves relevant documents from a knowledge base, adds them to the AI prompt as context, and then generates a response grounded in that real data — reducing fabrications.


Question

What is 'grounding' in the context of generative AI?


Answer

Grounding connects a generative AI model to authoritative data sources so its responses are based on real, current information rather than just its training data. It reduces fabrications and makes output organisationally relevant.


Question

How does Microsoft 365 Copilot ground its responses?


Answer

Copilot uses Microsoft Graph to retrieve content the user has access to — emails, files, chats, meetings — and injects it into the prompt as context before the LLM generates a response.


Question

When should you choose RAG over fine-tuning?


Answer

Choose RAG when data changes frequently, you need source citations, and existing access controls must be respected. Choose fine-tuning when you need to change the model's behaviour, style, or teach it domain-specific language.


Knowledge check

Ravi's support chatbot answers product questions using indexed documentation. When a customer asks about a feature, the system finds relevant docs, includes them in the prompt, and the AI generates an answer. What is this architecture called?


Dr. Patel is called in after a Copilot deployment at a financial services firm. Employees have discovered they can ask Copilot about executive salary data stored in a SharePoint site. What is the root cause?


Next up: Data Quality: The Make-or-Break Factor for AI — why clean, representative data is the foundation of every successful AI deployment.



© 2026 Sutheesh. All rights reserved.

Guided is an independent study resource and is not affiliated with, endorsed by, or officially connected to Microsoft. Microsoft, Azure, and related trademarks are property of Microsoft Corporation. Always verify information against Microsoft Learn.