AI Strategy & the Cloud Adoption Framework
Implement the AI adoption process from the Microsoft Cloud Adoption Framework for Azure and establish an AI Center of Excellence to accelerate and govern AI initiatives.
AI adoption needs a roadmap
Adopting AI without a framework is like renovating a house without blueprints. You might get lucky with the kitchen, but the plumbing will not connect to the bathroom.
Microsoft’s Cloud Adoption Framework (CAF) is that blueprint. It breaks the journey into phases — from “why are we doing this?” to “how do we keep it running?” Each phase has AI-specific guidance so you do not skip critical steps like data governance or responsible AI review.
The AI Center of Excellence (CoE) is the team that maintains those blueprints, trains the builders, and makes sure every renovation follows the code. Without a CoE, every team invents its own approach — and you end up with 15 different AI projects that cannot talk to each other.
CAF phases with an AI lens
The Cloud Adoption Framework provides the foundation. Microsoft’s AI adoption guidance builds on it with AI-specific activities at each stage:
| Phase | Core Question | AI-Specific Focus | Key Deliverable |
|---|---|---|---|
| Strategy | Why AI? What is the business case? | Identify high-value agent use cases aligned to business outcomes, not technology curiosity | AI vision statement, prioritised use case backlog |
| Plan | What do we need to get there? | Skills gap analysis, data readiness audit (the five pillars from Module 1), platform selection | AI adoption plan with timelines and resource requirements |
| Ready | Is our environment prepared? | Provision Foundry projects, Copilot Studio environments, Dataverse, Azure AI Services | Landing zone configured with security, networking, and identity |
| Adopt | How do we build and deploy? | Build agent prototypes, test with real users, iterate. Also consolidate any existing shadow AI onto enterprise platforms | Working agent MVPs validated against success criteria |
| Govern | How do we stay safe and compliant? | Responsible AI review boards, data classification, cost guardrails, agent access policies | Governance framework with automated policy enforcement |
| Secure | How do we protect AI assets? | Agent security, model security, prompt injection defence, data residency | Security architecture and controls for all AI workloads |
| Manage | How do we keep it running well? | Monitor agent accuracy, detect model drift, manage prompt versioning, maintain SLAs | Operational runbooks, alerting, and continuous improvement |
Platform strategy for AI and agents
| Feature | Copilot Studio | Microsoft Foundry | M365 Copilot | D365 Embedded AI |
|---|---|---|---|---|
| Primary audience | Citizen devs + pro devs | AI engineers + data scientists | All M365 users | D365 functional consultants |
| Agent type | Declarative and custom agents | Code-first orchestrated agents | Embedded AI assistants | Prebuilt AI features |
| Customisation level | Medium — low-code with pro-code escape hatches | High — full control over models, tools, and orchestration | Low — extend via plugins and connectors | Low — configure, do not build |
| Data access | Dataverse, connectors, custom APIs | Any Azure data source, Fabric, external APIs | Microsoft Graph (M365 data) | D365 Dataverse tables |
| Best for | Customer-facing agents, internal assistants, process automation | Complex multi-agent systems, custom models, domain-specific AI | Productivity augmentation across M365 apps | Out-of-box AI in sales, service, finance, supply chain |
| Governance | DLP policies, environment controls | Azure RBAC, network isolation, model registry | Tenant-level admin controls | D365 security roles |
🏗️ Kai builds Apex Industries’ AI roadmap
Kai Mercer is advising Apex Industries (3,500 employees, manufacturing, D365 Supply Chain Management) on their AI strategy. He maps each CAF phase to Apex’s reality:
Strategy: Kai runs a workshop with Lin Chen (CTO) and business unit leaders. They identify three high-value use cases: demand forecasting, quality defect detection, and supplier risk assessment. The business case shows a projected 12% reduction in inventory carrying costs.
Plan: Priya Sharma (data engineer) audits data readiness. Demand data passes all five pillars. Quality data fails on timeliness — defect reports are entered manually with a 3-day lag. Supplier data fails on availability — it lives in a legacy system with no API.
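Priya's timeliness finding can be expressed as a simple automated check. The sketch below is illustrative, not a real Apex system: record fields (`event_time`, `entered_time`) and the one-day target lag are assumptions for the example; the scenario only states that defect reports arrive with a 3-day lag.

```python
from datetime import datetime, timedelta

# Assumed target lag for the timeliness pillar; Apex's defect data lags 3 days.
MAX_LAG = timedelta(days=1)

def timeliness_check(records, max_lag=MAX_LAG):
    """Return records whose entry lag exceeds max_lag.

    Each record is a dict with 'event_time' (when the defect occurred)
    and 'entered_time' (when it was keyed into the system).
    """
    return [r for r in records if r["entered_time"] - r["event_time"] > max_lag]

# Hypothetical defect records: one entered 3 days late, one entered same-day.
defects = [
    {"id": 1, "event_time": datetime(2024, 5, 1), "entered_time": datetime(2024, 5, 4)},
    {"id": 2, "event_time": datetime(2024, 5, 2), "entered_time": datetime(2024, 5, 2)},
]
late = timeliness_check(defects)  # only record 1 fails the check
```

Running a check like this on a schedule turns a one-off audit finding into a continuously monitored readiness gate.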
Ready: Tomasz Kowalski (D365 consultant) provisions a Foundry project for the forecasting model and a Copilot Studio environment for the supplier risk agent. Kai defines the AI governance guardrails: all agents must pass a responsible AI review before production.
Adopt (Innovate): The team builds the demand forecasting agent first (cleanest data, highest ROI). They test it with two factories, measure forecast accuracy weekly, and iterate the prompt engineering.
Govern: Kai establishes a responsible AI checklist: fairness review for supplier scoring, transparency requirements for demand forecasts (users must see which factors drove predictions), and data retention policies.
Manage: Priya sets up drift detection on the forecasting model — if accuracy drops below 85%, the team gets alerted and the model is flagged for retraining.
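Priya's drift rule is simple enough to sketch directly. This is a minimal illustration of the threshold logic, not Azure ML's drift-detection API; the function name and alert payload shape are invented for the example, while the 85% threshold comes from the scenario.

```python
# From the scenario: alert when weekly forecast accuracy drops below 85%.
ACCURACY_THRESHOLD = 0.85

def check_drift(weekly_accuracy, threshold=ACCURACY_THRESHOLD):
    """Return an alert payload; action is 'flag_for_retraining' on a breach."""
    drifted = weekly_accuracy < threshold
    return {
        "drifted": drifted,
        "accuracy": weekly_accuracy,
        "action": "flag_for_retraining" if drifted else "none",
    }

healthy = check_drift(0.91)   # action: "none"
slipping = check_drift(0.82)  # action: "flag_for_retraining"
```

In production this check would run on each weekly accuracy measurement and route the alert payload to the team's monitoring channel.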
The AI Center of Excellence
An AI CoE is not another IT team. It is a cross-functional body that enables the entire organisation to build, deploy, and govern AI responsibly.
| CoE Element | Purpose | Who Is Involved |
|---|---|---|
| Governance | Define policies for data access, model deployment, and agent behaviour | CISO, compliance, legal, AI lead |
| Reusable assets | Build shared prompt libraries, agent templates, data connectors, and evaluation frameworks | Platform engineers, senior developers |
| Skills and training | Upskill business users on prompt engineering; train developers on Foundry and Copilot Studio | L&D, AI champions, external trainers |
| Community of practice | Connect AI builders across business units — share learnings, avoid duplicate work | AI champions from each department |
| Measurement | Track AI adoption metrics: usage, accuracy, ROI, user satisfaction, incidents | Data analysts, product owners |
| Responsible AI oversight | Review agents for bias, fairness, transparency, and safety before production deployment | Ethics board, domain experts, AI engineers |
🏛️ Adrienne establishes Vanguard’s AI CoE
Adrienne Cole (VP Enterprise Tech, Vanguard Financial Group) is building the AI CoE for a 12,000-employee financial services firm:
Governance: Marcus Webb (CISO) insists that every agent accessing customer financial data must pass a security review and use customer-managed encryption keys. Yuki Tanaka (compliance) adds regulatory requirements — agents in advisory roles must log every recommendation for audit.
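Yuki's requirement that advisory agents log every recommendation can be enforced by wrapping the agent call in an audit layer. This is a hypothetical sketch: the decorator, the in-memory `AUDIT_LOG` list, and the placeholder agent are all invented for illustration; a real implementation would write to a tamper-evident audit store.

```python
from datetime import datetime, timezone

AUDIT_LOG = []  # stand-in for a durable, tamper-evident audit store

def audited(agent_fn):
    """Wrap an advisory agent so every recommendation is logged for audit."""
    def wrapper(client_id, query):
        recommendation = agent_fn(client_id, query)
        AUDIT_LOG.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "client_id": client_id,
            "query": query,
            "recommendation": recommendation,
        })
        return recommendation
    return wrapper

@audited
def advisory_agent(client_id, query):
    # Placeholder for a real agent call (e.g. a Foundry-hosted model).
    return f"Recommendation for {client_id}: rebalance portfolio"

advisory_agent("C-1001", "Should I rebalance?")
# AUDIT_LOG now holds one entry capturing the query and the recommendation
```

Making the audit wrapper part of the CoE's reusable asset library ensures every team satisfies the regulatory requirement without reimplementing it.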
Reusable assets: Dev Patel (AI platform engineer) builds a shared agent template library: pre-configured connectors for D365 Finance, standardised prompt templates for financial document analysis, and a shared evaluation dataset for testing.
Skills and training: Adrienne launches a three-tier training programme: “AI Awareness” for all staff (what AI can and cannot do), “AI Builder” for power users (Copilot Studio), and “AI Engineer” for developers (Foundry, custom models).
Community of practice: Monthly “AI Lunch and Learn” sessions where teams demo agents they have built. The wealth management team’s client summary agent inspires the insurance team to build a claims triage agent.
Measurement: Adrienne defines four KPIs: agent adoption rate (monthly active users), task completion accuracy, time saved per user per week, and incident count (hallucinations, errors, escalations).
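Adrienne's four KPIs can be rolled up from raw usage events with a small aggregation. The event schema below (`user`, `outcome`, `minutes_saved`) is an assumption made for the example; the four output metrics map to the KPIs named in the scenario.

```python
def kpi_summary(events, licensed_users):
    """Roll up the four KPIs from raw agent-usage events."""
    users = {e["user"] for e in events}
    completed = [e for e in events if e["outcome"] == "success"]
    incidents = [e for e in events if e["outcome"] == "incident"]
    return {
        "adoption_rate": len(users) / licensed_users,        # monthly active users
        "task_accuracy": len(completed) / len(events) if events else 0.0,
        "minutes_saved_total": sum(e.get("minutes_saved", 0) for e in completed),
        "incident_count": len(incidents),                    # hallucinations, errors
    }

# Illustrative events: two successful tasks and one incident.
events = [
    {"user": "a", "outcome": "success", "minutes_saved": 12},
    {"user": "b", "outcome": "success", "minutes_saved": 8},
    {"user": "a", "outcome": "incident"},
]
summary = kpi_summary(events, licensed_users=10)
# adoption_rate 0.2 (2 of 10 users), incident_count 1
```

Whatever the real telemetry pipeline looks like, the point is that each KPI has an unambiguous formula agreed by the CoE before teams start reporting numbers.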
Exam tip: AI CoE vs IT department
The exam distinguishes between an AI CoE and a traditional IT department:
- IT department: Operates infrastructure, manages access, handles incidents. Execution-focused.
- AI CoE: Enables business units to build their own AI solutions. Cross-functional. Focuses on governance, reusable assets, training, and community — not building every agent centrally.
Key differentiator: the CoE enables; IT operates. If a question asks who builds reusable agent templates and trains citizen developers, the answer is the CoE. If it asks who provisions the Azure subscription, it is IT.
Key terms
Knowledge check
Priya's data readiness audit shows that Apex's quality defect data has a 3-day manual entry lag and the supplier system has no API. Which CAF phase should address these issues before agent development begins?
Adrienne is setting up Vanguard's AI CoE. The wealth management team wants to build a client summary agent using Copilot Studio. Who should build it?
🎬 Video coming soon
Next up: Multi-Agent Solution Design — design multi-agent architectures that span Copilot Studio, Foundry, M365 Copilot, and Dynamics 365.