
AI-300 Study Guide

Domain 1: Design and Implement an MLOps Infrastructure

  • ML Workspace: Your AI Control Room Free
  • Data, Environments & Components
  • Compute Targets: Choosing the Right Engine
  • Infrastructure as Code: Provisioning at Scale
  • Git & CI/CD for ML Projects

Domain 2: Implement Machine Learning Model Lifecycle and Operations

  • MLflow: Track Every Experiment Free
  • AutoML & Hyperparameter Tuning
  • Training Pipelines: Automate Everything
  • Distributed Training: Scale to Big Data
  • Model Registration & Versioning
  • Model Approval & Responsible AI Gates
  • Deploying Models: Endpoints in Production
  • Drift, Monitoring & Retraining

Domain 3: Design and Implement a GenAIOps Infrastructure

  • Foundry: Hubs, Projects & Platform Setup Free
  • Network Security & IaC for Foundry
  • Deploying Foundation Models
  • Model Versioning & Production Strategies
  • PromptOps: Design, Compare, Version & Ship

Domain 4: Implement Generative AI Quality Assurance and Observability

  • Evaluation: Datasets, Metrics & Quality Gates Free
  • Safety Evaluations & Custom Metrics
  • Monitoring GenAI in Production
  • Cost Tracking, Logging & Debugging

Domain 5: Optimize Generative AI Systems and Model Performance

  • RAG Optimization: Better Retrieval, Better Answers Free
  • Embeddings & Hybrid Search
  • Fine-Tuning: Methods, Data & Production

Domain 3: Design and Implement a GenAIOps Infrastructure (~14 min read)

Foundry: Hubs, Projects & Platform Setup

Microsoft Foundry is the platform for GenAI. Learn to create hubs and projects, configure RBAC with managed identities, and organize your GenAI development environment.

What is Microsoft Foundry?

☕ Simple explanation

Think of Foundry as a two-level office building.

The hub is the building itself — it has shared facilities like security systems, internet connections, and building policies. Every team in the building uses the same front door and follows the same rules.

Each project is an individual office inside the building. Teams get their own locked room with their own desk, files, and whiteboards. They share the building’s infrastructure, but their work is private.

This way, IT manages one building (hub) while every team (project) works independently.

Azure AI Foundry (formerly Azure AI Studio) is the platform for building, evaluating, and deploying generative AI solutions on Azure. It introduces a two-tier resource model:

  • Hub — a top-level Azure resource that provides shared infrastructure: networking, identity, key vault, storage, and Azure OpenAI connections. Governs all projects beneath it.
  • Project — a child resource of a hub. Each project is an isolated workspace for a team or application, with its own agents, deployments, evaluations, and datasets.

This separation lets platform teams set governance at the hub level while giving individual teams autonomy at the project level.

Note: The hub-based project model described here is how Foundry (classic) organizes resources. Microsoft Foundry (the new portal) uses a simplified “Foundry resource + projects” model. The exam may test both patterns — the concepts of shared governance and project isolation apply to both.

Hub vs project

Hub = shared governance, Project = team isolation
  • Hub: manages shared connections (Azure OpenAI, Search), networking, key vault, storage, and policies. Managed by the platform/IT team. Scope: one hub per business unit or region.
  • Project: manages agents, deployments, evaluations, datasets, and indexes. Managed by the application/dev team. Scope: one project per application or team.

Creating a hub

A hub is the foundation. Create it first, then add projects beneath it.

# Create a resource group for your GenAI platform
az group create \
  --name rg-genai-prod \
  --location australiaeast

# Create a Foundry hub
az ml workspace create \
  --kind hub \
  --name hub-genai-prod \
  --resource-group rg-genai-prod \
  --location australiaeast \
  --storage-account sagenaiprod \
  --key-vault kv-genai-prod

What’s happening:

  • az group create provisions a resource group to hold all the GenAI platform’s resources
  • --kind hub is the key flag: it creates a Foundry hub rather than a regular ML workspace
  • --storage-account and --key-vault link the hub to a storage account (for artifacts) and a key vault (for secrets and connection strings)

Creating projects under a hub

# Create a project for the customer support bot team
az ml workspace create \
  --kind project \
  --name proj-support-bot \
  --hub-id /subscriptions/<sub-id>/resourceGroups/rg-genai-prod/providers/Microsoft.MachineLearningServices/workspaces/hub-genai-prod \
  --resource-group rg-genai-prod

# Create a second project for the document analysis team
az ml workspace create \
  --kind project \
  --name proj-doc-analysis \
  --hub-id /subscriptions/<sub-id>/resourceGroups/rg-genai-prod/providers/Microsoft.MachineLearningServices/workspaces/hub-genai-prod \
  --resource-group rg-genai-prod

What’s happening:

  • --kind project makes the workspace a child of a hub
  • --hub-id links the project to its parent hub; the project inherits the hub’s storage, key vault, networking, and policies
  • The second command creates a project for a different team. Both projects share the hub’s infrastructure but have isolated workspaces
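The full resource ID passed to --hub-id is long and easy to mistype. A minimal sketch, reusing this module's resource names (the subscription ID is a placeholder), builds the ID once in a shell variable so every project-creation command can reference it:

```shell
# Placeholder subscription ID -- substitute your own.
SUB_ID="00000000-0000-0000-0000-000000000000"
RG="rg-genai-prod"
HUB="hub-genai-prod"

# Hub resource IDs always follow this shape:
# /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.MachineLearningServices/workspaces/<hub>
HUB_ID="/subscriptions/${SUB_ID}/resourceGroups/${RG}/providers/Microsoft.MachineLearningServices/workspaces/${HUB}"
echo "$HUB_ID"

# Each project creation can then reference the same variable (not run here):
#   az ml workspace create --kind project --name proj-support-bot \
#     --hub-id "$HUB_ID" --resource-group "$RG"
```

Keeping the ID in one variable also prevents drift when the same scripts are copied between environments.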

Deploying with Bicep

For repeatable, auditable deployments, use Infrastructure as Code:

// main.bicep — Foundry hub with a project
param storageAccountId string
param keyVaultId string
param appInsightsId string

resource hub 'Microsoft.MachineLearningServices/workspaces@2024-10-01' = {
  name: 'hub-genai-prod'
  location: resourceGroup().location
  kind: 'Hub'
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    friendlyName: 'GenAI Production Hub'
    storageAccount: storageAccountId
    keyVault: keyVaultId
    applicationInsights: appInsightsId
  }
}

resource project 'Microsoft.MachineLearningServices/workspaces@2024-10-01' = {
  name: 'proj-support-bot'
  location: resourceGroup().location
  kind: 'Project'
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    friendlyName: 'Customer Support Bot'
    hubResourceId: hub.id
  }
}

What’s happening:

  • The hub resource is declared with kind: 'Hub' and a system-assigned managed identity, which Azure creates and manages automatically
  • The storageAccount, keyVault, and applicationInsights properties link the hub to shared infrastructure passed in as parameters
  • The project resource is declared with kind: 'Project' and linked to its parent hub via hubResourceId
  • The project also gets its own managed identity for accessing resources

RBAC roles for Foundry

Foundry uses Azure RBAC to control who can do what. These roles appear on the exam:

  • Owner: full control. Manage access, create/delete resources, deploy models. Typical user: platform team lead.
  • Contributor: create/delete resources and deployments, but cannot manage access. Typical user: senior developer.
  • Azure AI Developer: create projects, deployments, and connections; cannot manage hub settings. Typical user: application developer.
  • Azure AI Inference Deployment Operator: deploy models to existing endpoints only; cannot create projects or manage connections. Typical user: CI/CD pipeline service principal.
  • Reader: view resources only, no changes. Typical user: auditor, stakeholder.

💡 Exam tip: Least privilege for Foundry

The exam loves “least privilege” questions for Foundry RBAC:

  • A developer who needs to create prompt flows and deploy models? Azure AI Developer (not Contributor — they don’t need to manage hub infrastructure)
  • A CI/CD pipeline that only deploys models to existing endpoints? Azure AI Inference Deployment Operator (minimum scope)
  • A compliance auditor who reviews configurations? Reader (view only)

Remember: assign roles at the narrowest scope possible. If someone only works in one project, assign the role at the project level, not the hub level.
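Project-level scoping works because a role-assignment scope is simply the project's resource ID. A minimal sketch, reusing this module's resource names (the subscription ID and assignee object ID are placeholders):

```shell
# Placeholder values for illustration -- substitute your own.
SUB_ID="00000000-0000-0000-0000-000000000000"
RG="rg-genai-prod"
PROJECT="proj-support-bot"

# Scoping to the project (not the hub) limits the role to that one workspace.
PROJECT_SCOPE="/subscriptions/${SUB_ID}/resourceGroups/${RG}/providers/Microsoft.MachineLearningServices/workspaces/${PROJECT}"
echo "$PROJECT_SCOPE"

# The assignment itself would then be (not run here):
#   az role assignment create \
#     --assignee <developer-object-id> \
#     --role "Azure AI Developer" \
#     --scope "$PROJECT_SCOPE"
```

The same pattern widens naturally: pass the hub's resource ID as --scope only when someone genuinely needs access to every project under it.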

Managed identities for projects

Every Foundry project should use a managed identity (not passwords or keys) to access Azure resources:

# Assign the project's managed identity access to Azure OpenAI
az role assignment create \
  --assignee <project-managed-identity-object-id> \
  --role "Cognitive Services OpenAI User" \
  --scope /subscriptions/<sub-id>/resourceGroups/rg-genai-prod/providers/Microsoft.CognitiveServices/accounts/aoai-genai-prod

# Assign access to the storage account for reading data
az role assignment create \
  --assignee <project-managed-identity-object-id> \
  --role "Storage Blob Data Reader" \
  --scope /subscriptions/<sub-id>/resourceGroups/rg-genai-prod/providers/Microsoft.Storage/storageAccounts/sagenaiprod

What’s happening:

  • The first assignment gives the project’s managed identity permission to call Azure OpenAI, so no API keys are needed
  • The second grants read access to blob storage for training data and evaluation datasets
  • Both assignments target the project’s system-assigned managed identity, which Azure manages automatically (no key rotation needed)

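The <project-managed-identity-object-id> placeholder above can be looked up with the CLI instead of copied from the portal. A sketch under one assumption: the --query path follows the az ml v2 output shape (identity.principal_id), so verify it against your CLI version before relying on it:

```shell
# Look up the project's system-assigned identity object ID.
# Assumption: az ml v2 returns identity.principal_id -- verify on your CLI version.
PRINCIPAL_ID=$(az ml workspace show \
  --name proj-support-bot \
  --resource-group rg-genai-prod \
  --query identity.principal_id \
  --output tsv)

# Reuse it in the role assignments shown above:
az role assignment create \
  --assignee "$PRINCIPAL_ID" \
  --role "Cognitive Services OpenAI User" \
  --scope /subscriptions/<sub-id>/resourceGroups/rg-genai-prod/providers/Microsoft.CognitiveServices/accounts/aoai-genai-prod
```

Scripting the lookup keeps CI/CD pipelines free of hard-coded object IDs, which change whenever a project is recreated.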
Scenario: Zara sets up Atlas Consulting's Foundry

Zara Okonkwo, GenAI engineer at Atlas Consulting (300 consultants), needs to set up their Foundry environment. Marcus Webb, her team lead, wants:

  • One hub for shared governance (Azure OpenAI connections, networking)
  • Separate projects per client engagement (data isolation is critical — Client A’s data must never leak to Client B)
  • Consultants can only access their assigned client project

Zara’s design:

  1. Hub: hub-atlas-genai in Australia East — shared Azure OpenAI connection, central key vault
  2. Projects: proj-client-alpha, proj-client-beta, proj-client-gamma — one per engagement
  3. RBAC: Each consultant gets Azure AI Developer role scoped to their specific project only
  4. Managed identity: Each project has a system-assigned identity with “Cognitive Services OpenAI User” on the shared Azure OpenAI resource

Result: consultants share the same GPT-4o deployment (cost efficient) but their agents, evaluations, and data are completely isolated.

Scenario: Dr. Fatima configures hub-level governance at Meridian

Dr. Fatima Al-Rashid, ML Platform Lead at Meridian Financial (5000 staff), needs strict governance. CISO James Chen requires:

  • All AI workloads must use private endpoints (no public internet)
  • Model deployments need approval from the security team
  • Audit trail for all configuration changes

Fatima’s approach:

  1. Hub: hub-meridian-genai with managed VNet and private endpoints (covered in Module 15)
  2. Hub RBAC: Only the platform team has Contributor on the hub. Development teams get Azure AI Developer scoped to their projects
  3. Deployment gate: CI/CD pipeline service principal gets Azure AI Inference Deployment Operator role — it can deploy models but cannot change hub settings or networking
  4. Audit: Azure Activity Log streams to Log Analytics for James Chen’s security team to monitor

Result: development teams move fast within their projects, but hub-level governance (networking, connections, policies) is controlled by the platform team.

Key terms flashcards

Question

What is the relationship between a Foundry hub and project?

Answer

A hub is the parent resource providing shared infrastructure (networking, key vault, storage, connections, policies). A project is a child resource — an isolated workspace for a team or application that inherits the hub's shared resources. One hub can have many projects.

Question

Why use managed identities instead of API keys in Foundry?

Answer

Managed identities are managed by Azure — no passwords to store, rotate, or risk leaking. The project's identity authenticates to Azure OpenAI, storage, and other services automatically. This is the recommended approach for production.

Question

What is the Azure AI Developer role?

Answer

A built-in RBAC role for Foundry. It lets users create projects, deployments, prompt flows, and connections — but NOT manage hub-level settings or access control. It's the standard role for application developers.

Question

What is the Azure AI Inference Deployment Operator role?

Answer

The narrowest deployment role. It allows deploying models to existing endpoints only — cannot create projects, manage connections, or change hub settings. Ideal for CI/CD pipeline service principals (least privilege).

Knowledge check

  1. Zara needs to give a consultant access to build prompt flows in one specific client project at Atlas Consulting, but the consultant should NOT be able to access other client projects or change hub settings. Which role and scope should she assign?

  2. Dr. Fatima's CI/CD pipeline needs to deploy updated models to production endpoints in Meridian's Foundry project. The pipeline should NOT be able to create new projects or modify hub networking. What role should the pipeline's service principal have?


Next up: Network Security and IaC for Foundry — locking down your GenAI platform with private endpoints and Bicep templates.



© 2026 Sutheesh. All rights reserved.

Guided is an independent study resource and is not affiliated with, endorsed by, or officially connected to Microsoft. Microsoft, Azure, and related trademarks are property of Microsoft Corporation. Always verify information against Microsoft Learn.