
AI-103 Study Guide

Domain 1: Plan and Manage an Azure AI Solution

  • Choosing the Right AI Model Free
  • Foundry Services: Your AI Toolkit Free
  • Retrieval, Indexing & Agent Memory
  • Designing AI Infrastructure
  • Deploying Models & CI/CD
  • Quotas, Scaling & Cost
  • Monitoring & Security
  • Responsible AI: Filters, Auditing & Governance

Domain 2: Implement Generative AI and Agentic Solutions

  • Connecting Your App to Foundry Free
  • Building RAG Applications
  • Workflows & Reasoning Pipelines
  • Evaluating AI Models & Apps
  • Agent Fundamentals: Roles, Goals & Tools Free
  • Building Agents with Retrieval & Memory
  • Agent Tools & Knowledge Integration
  • Multi-Agent Orchestration & Safeguards
  • Agent Monitoring & Error Analysis
  • Prompt Engineering & Model Tuning
  • Observability & Production Operations

Domain 3: Implement Computer Vision Solutions

  • Image & Video Generation
  • Multimodal Visual Understanding
  • Responsible AI for Visual Content

Domain 4: Implement Text Analysis Solutions

  • Text Analysis with Language Models
  • Speech, Translation & Voice Agents

Domain 5: Implement Information Extraction Solutions

  • Ingestion, Indexing & Grounding Pipelines
  • Extracting Content with Content Understanding
  • Exam Prep: Putting It All Together

Domain 1: Plan and Manage an Azure AI Solution (~14 min read)

Monitoring & Security

Your AI solution is only as good as its data pipeline and security posture. Learn how to monitor search index health and data ingestion quality, and how to lock down your AI infrastructure with managed identity, private endpoints, and RBAC.

Monitoring data and search quality

☕ Simple explanation

Your AI is only as good as the data it searches. If the data pipeline breaks or the search index goes stale, your AI starts giving wrong answers — and nobody tells you.

Monitoring means watching two things: (1) Is new data flowing in correctly? (2) When users search, are they finding what they need? If either breaks, your RAG application starts hallucinating or giving irrelevant responses.

For AI solutions that use retrieval (RAG, agent knowledge), monitoring the data pipeline is as important as monitoring the model itself. You need visibility into:

  • Data ingestion quality — are documents being processed correctly? Are there parsing errors, missing fields, or encoding issues?
  • Search index health — is the index up to date? Are all documents indexed? Are embeddings current?
  • Relevance performance — when users query, do the top results actually match their intent? Are there systematic gaps?

Data ingestion monitoring

| What to Monitor | Why | Red Flag |
| --- | --- | --- |
| Indexer status | Confirms documents are being processed | Indexer in “failed” or “degraded” state |
| Document count | Tracks how many documents are indexed | Count plateaus when new docs should be flowing in |
| Parsing errors | Catches corrupt or unsupported files | Error rate above 1-2% |
| Field completeness | Ensures extracted metadata is populated | Required fields (title, date) returning null |
| Embedding freshness | Confirms vectors match current model | Embeddings generated with outdated model version |
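The red flags above are easy to automate. Here is a minimal sketch in pure Python, assuming a hypothetical `report` dict; the field names are illustrative, not an Azure API — in practice you would populate the report from your indexer's status endpoint and index statistics.

```python
# Illustrative ingestion health check based on the red flags above.
# The report structure is hypothetical; build it from your own pipeline.

REQUIRED_FIELDS = {"title", "date"}
MAX_ERROR_RATE = 0.02  # flag parsing error rates above 2%

def ingestion_red_flags(report: dict) -> list[str]:
    """Return a list of red-flag descriptions for one ingestion run."""
    flags = []
    if report["indexer_status"] in ("failed", "degraded"):
        flags.append(f"indexer is {report['indexer_status']}")
    if report["new_docs_expected"] and report["docs_indexed_delta"] == 0:
        flags.append("document count plateaued while new docs were expected")
    if report["error_rate"] > MAX_ERROR_RATE:
        flags.append(f"parsing error rate {report['error_rate']:.1%} above threshold")
    missing = REQUIRED_FIELDS - set(report["populated_fields"])
    if missing:
        flags.append(f"required fields missing: {sorted(missing)}")
    if report["embedding_model"] != report["current_model"]:
        flags.append("embeddings generated with an outdated model version")
    return flags

# Made-up example run that trips every red flag in the table.
report = {
    "indexer_status": "degraded",
    "new_docs_expected": True,
    "docs_indexed_delta": 0,
    "error_rate": 0.035,
    "populated_fields": ["title"],
    "embedding_model": "embed-v1",
    "current_model": "embed-v2",
}
for flag in ingestion_red_flags(report):
    print("RED FLAG:", flag)
```

A check like this can run after every indexer cycle and feed an alert when the returned list is non-empty.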

Search relevance monitoring

| Metric | What It Measures | How to Improve |
| --- | --- | --- |
| Precision at K | Of the top K results, how many are relevant? | Tune ranking profiles, adjust chunking |
| Recall | Of all relevant documents, how many were found? | Add more search types (hybrid), broaden index |
| Mean Reciprocal Rank | How high does the correct answer rank? | Improve semantic ranker configuration |
| User satisfaction | Do users rephrase and retry? | Track query reformulations as a proxy for poor results |

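The first three metrics can be computed offline from a labelled query set. A minimal sketch, pure Python with made-up judgment data (no search SDK involved):

```python
# Offline relevance metrics: ranked results plus a set of known-relevant docs.

def precision_at_k(ranked: list[str], relevant: set[str], k: int) -> float:
    """Fraction of the top-k results that are relevant."""
    return sum(1 for doc in ranked[:k] if doc in relevant) / k

def recall(ranked: list[str], relevant: set[str]) -> float:
    """Fraction of all relevant docs that appear anywhere in the results."""
    return sum(1 for doc in relevant if doc in ranked) / len(relevant)

def reciprocal_rank(ranked: list[str], relevant: set[str]) -> float:
    """1/rank of the first relevant result; 0 if none is found."""
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            return 1 / rank
    return 0.0

# Illustrative data: one query's ranked results and its relevant docs.
ranked = ["d3", "d1", "d7", "d2", "d9"]
relevant = {"d1", "d2", "d5"}

print(precision_at_k(ranked, relevant, 3))  # d1 is the only hit in the top 3
print(recall(ranked, relevant))             # d1 and d2 found, d5 missed
print(reciprocal_rank(ranked, relevant))    # first relevant result at rank 2
```

Mean Reciprocal Rank is simply `reciprocal_rank` averaged over every query in the evaluation set.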
💡 Exam tip: Stale index vs model drift

The exam may present declining quality and ask for the cause. Key distinction:

  • Stale index = data pipeline stopped, new documents aren’t indexed, answers are outdated
  • Model drift = model behaviour changed, but data is fine

Check the data pipeline first, then the model. In practice, stale indexes cause more quality issues than model drift.
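One cheap way to catch a stale index before users do is to compare the last successful indexer run against the cadence you expect. A sketch with an illustrative threshold (the two-times-cadence grace factor is an assumption, not an Azure default):

```python
from datetime import datetime, timedelta, timezone

def index_is_stale(last_success: datetime,
                   expected_cadence: timedelta,
                   now=None,
                   grace: float = 2.0) -> bool:
    """Flag the index as stale when the last successful run is older
    than `grace` times the expected ingestion cadence."""
    now = now or datetime.now(timezone.utc)
    return (now - last_success) > expected_cadence * grace

now = datetime(2026, 1, 10, 12, 0, tzinfo=timezone.utc)
daily = timedelta(days=1)

# Last run 3 days ago on a daily cadence -> stale: check the pipeline first.
print(index_is_stale(now - timedelta(days=3), daily, now=now))   # True
# Last run 6 hours ago -> fresh; if quality still dropped, look at the model.
print(index_is_stale(now - timedelta(hours=6), daily, now=now))  # False
```

This mirrors the exam heuristic: rule out the data pipeline first, then investigate model behaviour.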

Security fundamentals for AI

Four security pillars for AI infrastructure
| Pillar | Security Feature | What It Does |
| --- | --- | --- |
| Managed identity | System-assigned identity | Azure resources authenticate to each other without storing credentials in code. No API keys needed. |
| Private endpoints | Private networking | AI services communicate over Azure's private network backbone — never touching the public internet. |
| Keyless credentials | Token-based auth | Applications use Microsoft Entra ID tokens instead of API keys. Tokens expire automatically, keys don't. |
| RBAC (role policies) | Role-Based Access Control | Fine-grained permissions: who can deploy models, who can read data, who can manage agents. |

Managed identity in practice

Managed identity is the number one security best practice for Azure AI:

| Without Managed Identity | With Managed Identity |
| --- | --- |
| Store API key in app config or Key Vault | No keys to store — Azure handles authentication |
| Rotate keys manually | No rotation needed — tokens are short-lived |
| Risk of key exposure in logs or code | No secret to expose |
| Configure key per service | One identity, grant roles to each resource |

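The "no rotation needed" row is the key idea: the platform hands your app short-lived tokens and refreshes them transparently. In real code that work is done by Azure's credential libraries (e.g. `DefaultAzureCredential` from `azure-identity`); the stdlib-only sketch below just illustrates the lifecycle, and every name in it is made up for illustration.

```python
import secrets
import time

class ShortLivedTokenProvider:
    """Illustrative stand-in for a managed-identity credential: tokens
    expire quickly and are refreshed on demand, so there is never a
    long-lived secret to store, rotate, or leak."""

    def __init__(self, ttl_seconds: float = 3600.0):
        self.ttl = ttl_seconds
        self._token = None
        self._expires_at = 0.0

    def _fetch_token(self) -> str:
        # Real code would call the platform's identity endpoint here.
        return secrets.token_hex(16)

    def get_token(self, now=None) -> str:
        now = time.monotonic() if now is None else now
        if self._token is None or now >= self._expires_at:
            self._token = self._fetch_token()  # transparent refresh
            self._expires_at = now + self.ttl
        return self._token

provider = ShortLivedTokenProvider(ttl_seconds=60)
t1 = provider.get_token(now=0)
t2 = provider.get_token(now=30)  # still valid: same token reused
t3 = provider.get_token(now=61)  # expired: refreshed automatically
print(t1 == t2, t1 == t3)
```

Because the caller never sees a static secret, there is nothing to commit to source control and nothing to rotate — the contrast the table above is drawing.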
ℹ️ Real-world example: Atlas Financial's security posture

Atlas Financial handles sensitive financial data. Their AI security setup:

  • Managed identity on all Foundry resources — zero API keys in code
  • Private endpoints for Foundry Project, AI Search, and Storage — no public internet exposure
  • RBAC roles:
    • Data scientists: “Cognitive Services User” (can call models, can’t deploy)
    • AI engineers: “Cognitive Services Contributor” (can deploy models)
    • Security team: “Reader” + custom role for audit log access
  • VNet integration — all AI traffic stays within Atlas’s private network
  • Key Vault — only for third-party API keys (external services that don’t support managed identity)

RBAC roles for AI services

| Role | What It Allows | Who Gets It |
| --- | --- | --- |
| Cognitive Services User | Call deployed models and agents | Application service principals, developers |
| Cognitive Services Contributor | Deploy and manage models | AI engineers, DevOps |
| Search Index Data Reader | Query search indexes | Applications, agents |
| Search Index Data Contributor | Read and write search index data | Indexing pipelines |
| Search Service Contributor | Manage search service configuration | Infrastructure admins |
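The separation in the table boils down to a role-to-action map. A minimal sketch of that least-privilege check — the role names come from the table, but the action names and the mapping itself are illustrative, not Azure's actual permission model:

```python
# Illustrative mapping of the built-in roles above to actions they allow.
ROLE_ACTIONS = {
    "Cognitive Services User": {"call_model", "call_agent"},
    "Cognitive Services Contributor": {"call_model", "call_agent",
                                       "deploy_model", "delete_model"},
    "Search Index Data Reader": {"query_index"},
    "Search Index Data Contributor": {"query_index", "write_index"},
    "Search Service Contributor": {"manage_search_service"},
}

def is_allowed(roles: set[str], action: str) -> bool:
    """True if any assigned role grants the requested action."""
    return any(action in ROLE_ACTIONS.get(role, set()) for role in roles)

# An app service principal with only the User role can call models...
app_roles = {"Cognitive Services User"}
print(is_allowed(app_roles, "call_model"))    # True
# ...but cannot deploy or delete them (least privilege).
print(is_allowed(app_roles, "deploy_model"))  # False
```

This is the same reasoning the Atlas Financial example applies: grant each principal only the narrowest role that covers its actual actions.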

Key terms

Q: What is managed identity in Azure?
A: A system-assigned identity for Azure resources that enables passwordless authentication between services. Eliminates the need to store API keys or credentials in code. Resources authenticate using short-lived Microsoft Entra ID tokens.

Q: What are private endpoints?
A: Network interfaces that connect Azure services over Microsoft's private backbone network instead of the public internet. Ensures AI traffic (model calls, search queries, data transfers) never leaves the private network.

Q: What does 'keyless credentials' mean?
A: Using Microsoft Entra ID token-based authentication instead of API keys. Tokens expire automatically (no rotation needed) and can't be accidentally committed to code. Managed identity is the primary mechanism for keyless auth.

Q: What is search index health?
A: The operational status of your Azure AI Search index — whether documents are being indexed correctly, embeddings are current, and the index is serving relevant results. Degraded index health directly impacts RAG and agent quality.

Knowledge check

1. NeuralMed's RAG chatbot has been giving outdated drug interaction information, even though new research papers are being uploaded to storage daily. What should the team investigate first?

2. Kai's team stores an API key for the Foundry model deployment in their application's environment variables. The security team flags this as a risk. What's the recommended fix?

3. Which RBAC role should Atlas Financial assign to their AI application's service principal so it can call deployed models but NOT deploy or delete them?




© 2026 Sutheesh. All rights reserved.

Guided is an independent study resource and is not affiliated with, endorsed by, or officially connected to Microsoft. Microsoft, Azure, and related trademarks are property of Microsoft Corporation. Always verify information against Microsoft Learn.