
AI-300 Study Guide

Domain 1: Design and Implement an MLOps Infrastructure

  • ML Workspace: Your AI Control Room
  • Data, Environments & Components
  • Compute Targets: Choosing the Right Engine
  • Infrastructure as Code: Provisioning at Scale
  • Git & CI/CD for ML Projects

Domain 2: Implement Machine Learning Model Lifecycle and Operations

  • MLflow: Track Every Experiment
  • AutoML & Hyperparameter Tuning
  • Training Pipelines: Automate Everything
  • Distributed Training: Scale to Big Data
  • Model Registration & Versioning
  • Model Approval & Responsible AI Gates
  • Deploying Models: Endpoints in Production
  • Drift, Monitoring & Retraining

Domain 3: Design and Implement a GenAIOps Infrastructure

  • Foundry: Hubs, Projects & Platform Setup
  • Network Security & IaC for Foundry
  • Deploying Foundation Models
  • Model Versioning & Production Strategies
  • PromptOps: Design, Compare, Version & Ship

Domain 4: Implement Generative AI Quality Assurance and Observability

  • Evaluation: Datasets, Metrics & Quality Gates
  • Safety Evaluations & Custom Metrics
  • Monitoring GenAI in Production
  • Cost Tracking, Logging & Debugging

Domain 5: Optimize Generative AI Systems and Model Performance

  • RAG Optimization: Better Retrieval, Better Answers
  • Embeddings & Hybrid Search
  • Fine-Tuning: Methods, Data & Production

Domain 1: Design and Implement an MLOps Infrastructure

ML Workspace: Your AI Control Room

Every ML project starts with a workspace. Learn how to create, configure, and secure Azure Machine Learning workspaces — the central hub for all your MLOps and data science work.

AI-300 is a BETA exam. Content may change before general availability (~June-July 2026). This guide is based on the official study guide published by Microsoft. We’ll update as the exam evolves.

What is a Machine Learning workspace?

☕ Simple explanation

A workspace is like a fully equipped science lab.

Imagine a research lab where everything has its place: the chemicals (data), the equipment (compute), the notebooks (experiments), and the safety protocols (access control). You don’t mix Lab A’s experiments with Lab B’s. Each lab has its own budget, its own team, and its own locked door.

An Azure Machine Learning workspace works the same way. It’s the central hub where your team stores data, runs experiments, tracks results, and deploys models — all in one organised, secure space.

An Azure Machine Learning workspace is a top-level resource that provides a centralised place to work with all the artifacts you create when you use Azure Machine Learning. It stores:

  • Compute targets — where training and inference run
  • Datastores — connections to your data sources
  • Environments — reproducible software stacks (conda/pip/Docker)
  • Models — trained artifacts registered for deployment
  • Experiments and runs — MLflow tracking history
  • Endpoints — deployed model serving infrastructure

Each workspace is backed by associated Azure resources: a Storage Account (for artifacts), a Key Vault (for secrets), an Application Insights instance (for monitoring), and a Container Registry (for Docker images).

Workspace architecture

When you create a workspace, Azure automatically provisions these supporting resources:

| Resource | Purpose | Created Automatically? |
| --- | --- | --- |
| Azure Storage Account | Stores experiment logs, model artifacts, datasets, snapshots | Yes |
| Azure Key Vault | Stores secrets — connection strings, API keys, passwords | Yes |
| Application Insights | Captures telemetry from deployed endpoints | Yes |
| Azure Container Registry | Stores Docker images for environments and deployments | Created on first use |

Scenario: Kai sets up NeuralSpark's first workspace
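One detail worth memorising: three of the four backing resources are provisioned eagerly, while the Container Registry is created lazily. A small stdlib-only sketch of that mapping, for self-testing (resource names only, no Azure calls):

```python
# Which backing resources a new workspace provisions immediately.
# Illustrative mapping for study purposes, not an Azure API.
BACKING_RESOURCES = {
    "Azure Storage Account": True,      # artifacts, logs, datasets, snapshots
    "Azure Key Vault": True,            # secrets
    "Application Insights": True,       # endpoint telemetry
    "Azure Container Registry": False,  # created lazily, on first use
}

def provisioned_at_create():
    """Return the resources Azure creates as soon as the workspace exists."""
    return [name for name, eager in BACKING_RESOURCES.items() if eager]

print(provisioned_at_create())
```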

Kai Nakamura, MLOps engineer at NeuralSpark (a 50-person AI startup), needs to set up their first production workspace. Priya, the CTO, wants a workspace that:

  1. Keeps staging and production experiments separate
  2. Lets the data science team run experiments without touching production
  3. Doesn’t blow the cloud budget

Kai’s plan:

  • One resource group for all ML resources (keeps billing visible)
  • Two workspaces: neuralspark-dev (experiments, cheap compute) and neuralspark-prod (production endpoints, managed compute)
  • Managed identity on both workspaces (no passwords stored anywhere)
  • RBAC: data scientists get “AzureML Data Scientist” role on dev, read-only on prod

This two-workspace pattern is common for startups that want speed without production risk.

Creating a workspace

You can create a workspace through the Azure portal, Azure CLI, Python SDK v2, or Bicep/ARM templates.

Azure CLI (most exam-relevant):

# Create a resource group
az group create --name rg-ml-prod --location eastus

# Create the workspace
az ml workspace create \
  --name ml-workspace-prod \
  --resource-group rg-ml-prod \
  --location eastus

What’s happening:

  • Lines 1-2: Create a resource group to hold all ML resources
  • Lines 4-7: Create the workspace — Azure auto-provisions the Storage Account, Key Vault, and Application Insights

Python SDK v2:

from azure.ai.ml import MLClient
from azure.ai.ml.entities import Workspace
from azure.identity import DefaultAzureCredential

# Authenticate
credential = DefaultAzureCredential()

# Define the workspace
ws = Workspace(
    name="ml-workspace-prod",
    location="eastus",
    resource_group="rg-ml-prod",
    description="NeuralSpark production workspace"
)

# Create it
ml_client = MLClient(
    credential=credential,
    subscription_id="your-subscription-id",
    resource_group_name="rg-ml-prod"
)
ml_client.workspaces.begin_create(ws).result()

What’s happening:

  • Lines 1-3: Import the SDK and authentication classes
  • Line 6: Uses DefaultAzureCredential — tries a chain of methods: environment variables first, then managed identity, then developer credentials such as the Azure CLI
  • Lines 9-14: Defines workspace configuration as a Python object
  • Lines 17-21: Creates an MLClient to talk to Azure
  • Line 22: Sends the create request and waits for completion

💡 Exam tip: SDK v2 vs SDK v1

AI-300 uses Azure ML SDK v2 (the azure-ai-ml package). If you see import paths like from azureml.core import Workspace, that’s SDK v1 — deprecated for new projects. The exam focuses on SDK v2 patterns:

  • MLClient instead of Workspace.from_config()
  • YAML-based definitions instead of pure Python objects
  • azure.ai.ml.entities for resource definitions
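
To make the YAML-first style concrete, here is a minimal workspace spec you could pass to `az ml workspace create --file workspace.yml --resource-group rg-ml-prod`. The field names follow the SDK v2 workspace schema, but treat this as an illustrative sketch and verify the keys against the published schema.

```yaml
# workspace.yml: illustrative SDK v2 workspace definition
$schema: https://azuremlschemas.azureedge.net/latest/workspace.schema.json
name: ml-workspace-prod
location: eastus
display_name: NeuralSpark production workspace
description: Production workspace for NeuralSpark endpoints
tags:
  team: mlops
  environment: prod
```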

Identity and access management

Workspaces use Azure RBAC (Role-Based Access Control) to control who can do what:

Built-in roles for Azure Machine Learning
| Role | Can Create Compute | Can Run Experiments | Can Deploy Models | Can Manage Workspace |
| --- | --- | --- | --- | --- |
| Reader | No | No (view only) | No | No |
| AzureML Data Scientist | No | Yes | Yes (to existing endpoints) | No |
| AzureML Compute Operator | Yes | No | No | No (compute only) |
| Contributor | Yes | Yes | Yes | Yes (except RBAC) |
| Owner | Yes | Yes | Yes | Yes (including RBAC) |
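
These built-in roles can be expressed as a small lookup for self-testing. A toy sketch, not an Azure API; the action names are invented for the example, and the permissions follow Microsoft's role definitions (notably, the AzureML Data Scientist role cannot create or delete compute):

```python
# Toy RBAC matrix for the Azure ML built-in roles (study aid, not an Azure API).
ROLE_PERMISSIONS = {
    "Reader": set(),  # view only
    "AzureML Data Scientist": {"run_experiments", "deploy_models"},
    "AzureML Compute Operator": {"create_compute"},
    "Contributor": {"create_compute", "run_experiments",
                    "deploy_models", "manage_workspace"},
    "Owner": {"create_compute", "run_experiments", "deploy_models",
              "manage_workspace", "manage_rbac"},
}

def allowed(role, action):
    """Check whether a built-in role permits an action in this toy model."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(allowed("AzureML Data Scientist", "create_compute"))  # False in this model
```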

Managed identity is the recommended authentication method for workspaces:

  • System-assigned managed identity — created automatically with the workspace, tied to its lifecycle
  • User-assigned managed identity — you create it separately and attach it; can be shared across resources

Scenario: Dr. Fatima locks down Meridian's workspace

Dr. Fatima Al-Rashid, ML Platform Lead at Meridian Financial, needs enterprise-grade access control:

  • Data scientists → “AzureML Data Scientist” role (can experiment, can’t change infrastructure)
  • ML engineers → “Contributor” role (can deploy and manage endpoints)
  • Compliance team → “Reader” role (audit experiments and model lineage)
  • Service accounts for CI/CD → User-assigned managed identity with “AzureML Data Scientist” role

James Chen (CISO) insists on no passwords in code — managed identity handles all auth between the workspace and its backing resources (storage, key vault, container registry).

💡 Exam tip: Managed identity vs service principal

The exam favours managed identity over service principals for workspace authentication. Key reasons:

  • No secrets to manage or rotate
  • Automatic credential lifecycle
  • Works natively with RBAC

If a question asks about the “most secure” or “recommended” way to authenticate a workspace to other Azure services, the answer is almost always managed identity.

Multi-workspace strategies

Most organisations use more than one workspace. Common patterns:

| Strategy | Workspaces | Best For |
| --- | --- | --- |
| Dev/Prod split | 2: dev + prod | Startups, small teams (Kai’s pattern) |
| Team-based | Per team: NLP, vision, forecasting | Large orgs with independent ML teams |
| Project-based | Per project: fraud-detection, churn-prediction | Regulated industries needing audit isolation |
| Environment-based | 3: dev + staging + prod | Enterprise CI/CD with promotion gates |

💡 When to use one workspace vs many

One workspace when:

  • Small team (under 10 data scientists)
  • Single project or tightly related projects
  • Want simplicity over isolation

Multiple workspaces when:

  • Different teams need different access controls
  • Regulatory requirements mandate data isolation
  • CI/CD pipeline needs distinct environments (dev → staging → prod)
  • Cost tracking per team or project is required

The exam may present a scenario where you need to choose between strategies. Focus on the requirement: isolation (security, compliance) favours more workspaces; collaboration (shared models, shared data) favours fewer.
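
That decision logic can be sketched as a toy function. The rules below are a paraphrase of the guidance in this section, not an official rubric, and the threshold values are invented for illustration:

```python
def choose_workspace_strategy(team_count, needs_isolation, needs_promotion_gates):
    """Toy heuristic paraphrasing the guidance above (not an official rubric)."""
    if needs_promotion_gates:
        return "environment-based"  # dev -> staging -> prod with CI/CD gates
    if needs_isolation:
        return "project-based"      # audit/compliance isolation per project
    if team_count > 1:
        return "team-based"         # independent ML teams, separate access
    return "dev/prod split"         # startup default (Kai's pattern)

print(choose_workspace_strategy(team_count=1,
                                needs_isolation=False,
                                needs_promotion_gates=False))
```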

Key terms flashcards

Question

What is a Machine Learning workspace?

Answer

A top-level Azure resource that serves as the central hub for ML operations — storing compute, data, experiments, models, and endpoints. Backed by Storage Account, Key Vault, Application Insights, and Container Registry.

Question

What resources does Azure auto-create with a workspace?

Answer

Storage Account (artifacts/logs), Key Vault (secrets), Application Insights (telemetry). Container Registry is created on first use (not immediately).

Question

What role should a data scientist have on a workspace?

Answer

AzureML Data Scientist — can run experiments and deploy to existing endpoints, but cannot create or delete compute, manage the workspace itself, or assign RBAC roles.

Question

Managed identity vs service principal for workspace auth?

Answer

Managed identity is recommended: no secrets to rotate, automatic lifecycle, native RBAC integration. Service principals require manual secret management.

Knowledge check

Kai is setting up workspaces at NeuralSpark. He wants data scientists to run experiments freely but not modify production endpoints. Which role should he assign to data scientists on the production workspace?

Dr. Fatima needs Meridian's workspace to authenticate to Azure Storage without storing any credentials in code. What should she configure?


Next up: Data, Environments & Components — the building blocks that make your experiments reproducible.


© 2026 Sutheesh. All rights reserved.

Guided is an independent study resource and is not affiliated with, endorsed by, or officially connected to Microsoft. Microsoft, Azure, and related trademarks are property of Microsoft Corporation. Always verify information against Microsoft Learn.