AB-731 Study Guide

Domain 1: Identify the Business Value of Generative AI Solutions

  • Generative AI vs Traditional AI: What's the Difference?
  • Choosing the Right AI Solution for Your Business
  • AI Models: Pretrained vs Fine-Tuned
  • AI Cost Drivers and ROI: Tokens, Pricing, and Business Cases
  • Challenges of Generative AI: Fabrications, Bias & Reliability
  • When Generative AI Creates Real Business Value
  • Prompt Engineering: The Skill That Multiplies AI Value
  • RAG and Grounding: Making AI Use YOUR Data
  • Data Quality: The Make-or-Break Factor for AI
  • When Traditional Machine Learning Adds Value
  • Securing AI Systems: From Application to Data

Domain 2: Identify Benefits, Capabilities, and Opportunities for Microsoft AI Apps and Services

  • Mapping Business Needs to Microsoft AI Solutions
  • Copilot Versions: Free, Business, M365, and Beyond
  • Copilot Chat: Web, Mobile & Work Experiences
  • Copilot in M365 Apps: Word, Excel, Teams & More
  • Copilot Studio & Microsoft Graph: Building Smarter Solutions
  • Researcher & Analyst: Copilot's Power Agents
  • Build, Buy, or Extend: The AI Decision Framework
  • Microsoft Foundry: Your AI Platform
  • Azure AI Services: Vision, Search & Beyond
  • Matching the Right AI Model to Your Business Need

Domain 3: Identify an Implementation and Adoption Strategy

  • Responsible AI and Governance: Principles That Protect Your Business
  • Setting Up an AI Council: Strategy, Oversight & Alignment
  • Building Your AI Adoption Team
  • AI Champions: Your Secret Weapon for Adoption
  • Data, Security, Privacy & Cost: The Four Pillars of AI Readiness
  • Copilot & Azure AI Licensing: Every Option Explained

When Traditional Machine Learning Adds Value

Generative AI grabs headlines, but traditional machine learning quietly powers most enterprise AI today. Learn when ML is the right tool — and how the ML lifecycle works.

When does traditional ML beat generative AI?

☕ Simple explanation

Generative AI is like a versatile writer. Traditional ML is like a specialist analyst.

If you need someone to draft a report, summarise a meeting, or brainstorm ideas — the writer is your pick. But if you need someone to predict which machines will break down next week, detect fraudulent transactions in real time, or forecast next quarter’s revenue — the specialist analyst wins every time.

Traditional ML is purpose-built for prediction, classification, and pattern detection. It’s faster, cheaper, and more accurate than generative AI for those specific jobs.

Traditional machine learning encompasses supervised, unsupervised, and reinforcement learning models designed for specific analytical tasks: regression (predicting numerical values), classification (sorting into categories), clustering (finding natural groupings), and anomaly detection (identifying outliers).

While generative AI excels at creating content and handling open-ended tasks, traditional ML is superior for scenarios requiring:

  • Numerical precision: Predicting exact revenue figures or failure probabilities
  • Real-time decisions: Approving transactions in milliseconds
  • Explainability: Showing exactly which factors drove a prediction
  • Cost efficiency: Processing millions of predictions at a fraction of LLM cost

The exam tests your ability to choose the right approach for a given scenario — not just default to generative AI for everything.
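
The cost-efficiency point is easiest to see with back-of-envelope arithmetic. A minimal sketch — both per-prediction prices below are hypothetical, chosen only to illustrate the order-of-magnitude gap, not actual vendor pricing:

```python
# Illustrative cost of scoring 1 million transactions.
n = 1_000_000
ml_cost_per_pred = 0.0001   # hypothetical: $0.0001 per ML inference
llm_cost_per_pred = 0.01    # hypothetical: $0.01 per LLM call

print(f"ML:  ${ml_cost_per_pred * n:,.0f}")   # ML:  $100
print(f"LLM: ${llm_cost_per_pred * n:,.0f}")  # LLM: $10,000
```

At high volumes, even a small per-call difference compounds into a decisive cost gap.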

ML vs generative AI: Choosing the right tool

Traditional ML vs Generative AI

Feature | Traditional ML | Generative AI
--- | --- | ---
Primary purpose | Predict, classify, detect patterns | Create new content, understand and respond to language
Output | A number, label, or category | Text, images, code, audio
Training | Task-specific model trained on labelled data | Foundation model pre-trained on massive diverse datasets
Cost per prediction | Fractions of a cent | Cents to dollars depending on complexity
Speed | Milliseconds | Seconds
Explainability | High — can show which features drove the decision | Low — difficult to explain why specific content was generated
Best for | Fraud detection, demand forecasting, predictive maintenance | Content creation, summarisation, Q&A, code generation

💡 Exam tip: The decision tree

When the exam presents a scenario, ask these questions in order:

  1. Does the task require creating new content? Yes = Generative AI. No = continue.
  2. Does the task involve predicting a number or classifying something? Yes = Traditional ML.
  3. Does the task need real-time, high-volume decisions? Yes = Traditional ML (LLMs are too slow and expensive at scale).
  4. Does the task require explainability for regulatory reasons? Yes = Traditional ML (easier to audit).

Many real-world solutions combine both: ML predicts which customers are at risk of churning, and generative AI drafts personalised retention emails.
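
The four questions above can be sketched as a tiny function — a simplification for exam practice, not a production rubric, and the boolean flags are assumptions standing in for a real scenario analysis:

```python
def choose_approach(creates_content: bool,
                    predicts_or_classifies: bool,
                    realtime_high_volume: bool,
                    needs_explainability: bool) -> str:
    """Walk the four exam questions in order."""
    if creates_content:
        return "generative AI"
    if predicts_or_classifies:
        return "traditional ML"
    if realtime_high_volume:
        return "traditional ML"   # LLMs are too slow and expensive at scale
    if needs_explainability:
        return "traditional ML"   # easier to audit
    return "unclear — analyse the scenario further"

# Fraud detection: no new content, classifies transactions.
print(choose_approach(False, True, True, True))    # traditional ML
# Drafting retention emails: creates new content.
print(choose_approach(True, False, False, False))  # generative AI
```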

Business scenarios where ML excels

Scenario | ML approach | Why ML, not gen AI
--- | --- | ---
Predictive maintenance | Regression/classification models predict equipment failure probability | Needs numerical precision and real-time sensor data processing
Fraud detection | Anomaly detection identifies unusual transaction patterns | Requires millisecond decisions on millions of transactions
Demand forecasting | Time series models predict future sales volumes | Needs accurate numerical predictions, not text generation
Customer churn prediction | Classification model flags at-risk customers | Explainability matters — sales team needs to know WHY a customer is at risk
Quality control | Computer vision models detect defects on manufacturing lines | Real-time image analysis at production line speed
Credit scoring | Regression models calculate risk scores | Regulatory requirement for explainable, auditable decisions
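
As a concrete illustration of the fraud-detection row, here is a minimal z-score anomaly check on transaction amounts. This is a sketch of the idea only — the numbers are made up, and production fraud systems use far richer features and models:

```python
from statistics import mean, stdev

# Hypothetical history of a customer's normal transaction amounts ($).
history = [42.0, 38.5, 55.0, 47.2, 41.1, 39.9, 44.3, 50.8]
mu, sigma = mean(history), stdev(history)

# Score incoming transactions: flag anything more than 3 standard
# deviations from the historical mean (a common rule of thumb).
incoming = [46.0, 4999.0]
flagged = [a for a in incoming if abs(a - mu) / sigma > 3]
print(flagged)  # [4999.0]
```

The same scoring runs in microseconds per transaction, which is why this class of model suits millisecond, high-volume decisions.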

The ML lifecycle: From idea to production

Understanding the ML lifecycle helps leaders set realistic expectations. It’s not “plug in data, get predictions.” It’s an iterative process:

Phase | What happens | Who's involved | Time
--- | --- | --- | ---
1. Define | Identify the business problem and success metrics | Business leaders + data team | 1-2 weeks
2. Collect | Gather and access the training data | Data engineers | 2-4 weeks
3. Prepare | Clean, label, and transform data for training | Data engineers + domain experts | 2-6 weeks
4. Train | Build and train the model on prepared data | Data scientists | 1-4 weeks
5. Evaluate | Test model accuracy against held-out data | Data scientists + business stakeholders | 1-2 weeks
6. Deploy | Put the model into production systems | ML engineers + DevOps | 1-3 weeks
7. Monitor | Track model performance in the real world | ML engineers + business team | Ongoing
8. Retrain | Update the model as data patterns change | Data scientists | Periodic

ℹ️ Why the lifecycle matters for leaders

Leaders often underestimate the time and effort in phases 2-3 (data collection and preparation). In most ML projects, 60-80% of the effort is in data work — not model building.

Key leadership decisions at each phase:

  • Define: Is this problem worth solving with ML? What’s the business case?
  • Collect/Prepare: Do we have the data? Is it clean enough? Who owns it?
  • Train/Evaluate: What accuracy is “good enough”? What’s the cost of wrong predictions?
  • Deploy/Monitor: Who’s responsible for keeping the model accurate over time?
  • Retrain: How often does the real world change? Budget for ongoing maintenance.

Model drift: Why ML models need maintenance

Model drift is what happens when the real world changes but the model doesn’t. A model trained on pre-pandemic data doesn’t understand post-pandemic customer behaviour. A demand forecasting model doesn’t account for a new competitor entering the market.

Signs of model drift:

  • Prediction accuracy gradually decreases
  • Business users report the model “feels wrong”
  • The distribution of real-world data no longer matches training data

The fix: regular retraining on fresh data and ongoing monitoring of model performance metrics.
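
A toy version of that monitoring: compare a feature's live values against its training-time distribution. Real monitoring uses proper statistical tests (e.g. population stability index or Kolmogorov–Smirnov), and the numbers below are invented — this sketch only shows the core idea:

```python
from statistics import mean, stdev

train_feature = [100, 102, 98, 101, 99, 103, 97, 100]    # values at training time
live_feature = [118, 121, 117, 122, 119, 120, 118, 121]  # current production values

# How far has the live mean moved, measured in training standard deviations?
shift = abs(mean(live_feature) - mean(train_feature)) / stdev(train_feature)
if shift > 2:  # arbitrary illustrative threshold
    print(f"Possible drift (shift = {shift:.1f} sd) — investigate and retrain")
```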

Real-world scenario: Tomás uses ML for predictive maintenance

🔄 Tomás at PacificSteel Manufacturing has 300 pieces of heavy equipment across 12 plants. Unplanned downtime costs approximately $50,000 per hour per production line.

His data team builds a predictive maintenance ML model:

  • Data sources: Sensor readings (temperature, vibration, pressure), maintenance logs, equipment age
  • Model type: Classification model predicting “failure likely within 7 days” vs “operating normally”
  • Training: 3 years of historical sensor data paired with actual failure records
  • Accuracy: 87% — catches most failures, with some false alarms

Results after 6 months:

  • Unplanned downtime reduced by 35% — estimated saving of $4.2M annually
  • Maintenance teams shift from reactive to planned maintenance windows
  • False alarms are manageable — better to check a machine that’s fine than miss one that’s about to fail
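
The headline saving can be sanity-checked with simple arithmetic (figures taken from the scenario above; this is illustration, not a financial model):

```python
downtime_cost_per_hour = 50_000  # $ per hour per production line (scenario figure)
annual_saving = 4_200_000        # $ estimated annual saving (scenario figure)

hours_avoided = annual_saving / downtime_cost_per_hour
print(hours_avoided)  # 84.0 — roughly 84 production-line hours of unplanned downtime avoided per year
```
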
ℹ️ Why Tomás chose ML, not generative AI

Tomás considered using generative AI for maintenance insights, but ML was the clear winner because:

  • Numerical precision matters — he needs failure probabilities, not prose about equipment health
  • Real-time processing — sensor data streams in continuously and needs millisecond classification
  • Explainability — maintenance teams need to know WHICH sensor readings triggered the alert
  • Cost — processing millions of sensor readings through an LLM would be prohibitively expensive

Where generative AI helps: after the ML model flags at-risk equipment, Copilot drafts the maintenance work order from the sensor data and maintenance history. ML predicts. Gen AI communicates.

Key flashcards

Question

What are the eight phases of the ML lifecycle?

Answer

Define, Collect, Prepare, Train, Evaluate, Deploy, Monitor, Retrain. Data preparation (Collect + Prepare) typically consumes 60-80% of total project effort.

Question

What is model drift?

Answer

Model drift occurs when the real world changes but the model doesn't — predictions become less accurate over time. It requires regular monitoring and periodic retraining to address.

Question

When should you choose traditional ML over generative AI?

Answer

When you need numerical predictions, real-time decisions at scale, regulatory explainability, or cost-efficient high-volume processing. Examples: fraud detection, demand forecasting, predictive maintenance.

Knowledge check

1. Tomás needs to predict which equipment will fail in the next 7 days based on sensor data. Why is traditional ML better than generative AI for this task?

2. Elena's consulting firm advises a retail client whose demand forecasting model was trained on 2023 data. In mid-2026, predictions are consistently 20% too high, and the client wants to know why. What is this an example of?

Next up: Securing AI Systems: From Application to Data — the security threats unique to AI and how to protect against them.


© 2026 Sutheesh. All rights reserved.

Guided is an independent study resource and is not affiliated with, endorsed by, or officially connected to Microsoft. Microsoft, Azure, and related trademarks are property of Microsoft Corporation. Always verify information against Microsoft Learn.