Agent Requirements & Data Readiness
Assess where agents add value across task automation, analytics, and decision-making — and evaluate whether your data is ready to ground them.
Where agents actually help
Think of agents like specialist contractors. You would not hire an electrician to paint walls. Agents are the same — each type of agent solves a specific category of problem.
Task automation agents handle repetitive work — like a robotic arm on an assembly line that never gets tired. They follow rules, process queues, and execute steps.
Analytics agents are like data detectives. They sift through mountains of information and surface patterns humans would miss.
Decision-making agents are advisors. They weigh options, apply business logic, and recommend (or take) action when speed matters more than deliberation.
Agent categories compared
| Feature | Task Automation | Data Analytics | Decision-Making |
|---|---|---|---|
| Primary function | Execute multi-step processes | Query and interpret data | Evaluate options and act |
| Data needs | Structured inputs, APIs, queues | Broad access across data sources | Real-time signals + historical patterns |
| Human involvement | Exception handling only | Ask questions, review insights | Approval gates on high-impact actions |
| D365 example | Auto-route cases by sentiment + priority | Forecast demand from sales + supply chain data | Recommend reorder quantities and trigger POs |
| Risk if wrong | Process delay, rework | Misleading insights, poor strategy | Financial loss, compliance violation |
| Autonomy level | High for routine, gated for exceptions | Advisory — presents findings | Semiautonomous with escalation rules |
Should this be an agent? A pre-flight checklist
Not every process benefits from an agent. Before designing one, run through these five signals:
- Volume — Is this task performed hundreds or thousands of times per month? Low-volume tasks rarely justify agent investment.
- Data availability — Can the agent access the data it needs? If critical data lives in someone’s head or a spreadsheet on their desktop, the agent cannot function.
- Rules with exceptions — Does the process follow general rules but have edge cases that need reasoning? Pure rule-based processes fit Power Automate flows. Agents shine when they need to interpret context.
- Time sensitivity — Does a delay in execution cost money, customers, or compliance? Agents add value when speed of response matters.
- Structured outcome — Can you define what “done” looks like? Agents need measurable success criteria.
If fewer than three of these signals are true, standard automation or a dashboard is likely the better fit.
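The checklist above can be sketched as a simple scoring function. This is an illustrative sketch, not official tooling — the signal names and the three-of-five threshold come from the checklist, but the function and its inputs are hypothetical.

```python
# Hypothetical pre-flight scorer for the five "should this be an agent?"
# signals. Signal names mirror the checklist above; the threshold of
# three is taken from the closing guidance.

SIGNALS = (
    "volume",
    "data_availability",
    "rules_with_exceptions",
    "time_sensitivity",
    "structured_outcome",
)

def recommend(signals: dict) -> str:
    """Return 'agent' if at least three of the five signals hold."""
    score = sum(bool(signals.get(s)) for s in SIGNALS)
    return "agent" if score >= 3 else "standard automation or dashboard"

candidate = {
    "volume": True,                 # thousands of cases per month
    "data_availability": True,      # data reachable via APIs
    "rules_with_exceptions": True,  # edge cases need interpretation
    "time_sensitivity": False,
    "structured_outcome": False,
}
print(recommend(candidate))  # → agent
```

Treat the output as a conversation starter, not a verdict — a two-signal process with extreme time sensitivity may still justify an agent.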
The five pillars of data readiness
Agents are only as good as the data that grounds them. Before building any agent, assess your data against these five pillars:
| Pillar | Question to Ask | Red Flag | Green Flag |
|---|---|---|---|
| Accuracy | Is the data factually correct? | Duplicate customer records, stale pricing | Master data management in place, validation rules enforced |
| Relevance | Does the data relate to the agent’s task? | Feeding an invoice agent with marketing data | Scoped data sources aligned to agent purpose |
| Timeliness | Is the data fresh enough for the use case? | Inventory counts updated weekly for a real-time reorder agent | Near-real-time sync from ERP to Dataverse |
| Cleanliness | Is the data free of errors, gaps, and inconsistencies? | 40% of address fields blank, mixed date formats | Data quality rules, automated cleansing pipelines |
| Availability | Can the agent actually access the data at runtime? | Data locked in on-prem SQL with no API exposure | Dataverse, Azure SQL, or Fabric lakehouse with proper auth |
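The five-pillar assessment can be expressed as a small go/no-go function. A minimal sketch, assuming a simple rule that any hard failure blocks launch while warnings allow a conditional go — the pillar names come from the table, but the statuses and decision rule are illustrative assumptions.

```python
# Illustrative five-pillar readiness check. Each pillar is marked
# 'pass', 'warn', or 'fail'; any 'fail' blocks the agent, any 'warn'
# means "go with remediation" (assumed rule, not official guidance).

PILLARS = ("accuracy", "relevance", "timeliness", "cleanliness", "availability")

def readiness_verdict(status: dict) -> str:
    """Map per-pillar statuses to a go / conditional-go / not-ready verdict."""
    fails = [p for p in PILLARS if status.get(p) == "fail"]
    warns = [p for p in PILLARS if status.get(p) == "warn"]
    if fails:
        return "not ready: fix " + ", ".join(fails)
    if warns:
        return "go, with remediation for " + ", ".join(warns)
    return "go"

# Example: one soft issue, everything else solid.
print(readiness_verdict({
    "accuracy": "pass",
    "relevance": "pass",
    "timeliness": "pass",
    "cleanliness": "warn",   # e.g. stale contact fields
    "availability": "pass",
}))  # → go, with remediation for cleanliness
```

The same pattern applies to the CareFirst walkthrough below: a single warning yields a conditional go, while multiple failures yield "not ready".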
🤖 Jordan’s data readiness assessment at CareFirst
Jordan Reeves is evaluating three agent candidates at CareFirst Health. She walks each through the five pillars:
Agent 1: Patient appointment scheduling
- Accuracy: ✅ Patient records synced from Epic via FHIR API
- Relevance: ✅ Scheduling data is directly applicable
- Timeliness: ✅ Real-time availability from the booking system
- Cleanliness: ⚠️ 15% of patient phone numbers are outdated
- Availability: ✅ Data exposed via Dataverse connector
Jordan’s verdict: Go, with a data cleansing sprint for contact info. The agent can launch while the team cleans phone numbers in parallel.
Agent 2: Clinical supply demand forecasting
- Accuracy: ❌ Supply counts rely on manual entry with known discrepancies
- Relevance: ✅ Consumption data from 8 hospitals is the right input
- Timeliness: ❌ Inventory updated every 48 hours — too slow for forecasting
- Cleanliness: ❌ Three hospitals use different unit-of-measure conventions
- Availability: ⚠️ Two hospital systems lack API access
Jordan’s verdict: Not ready. Three pillars fail. The team needs to standardise data entry, increase sync frequency, and expose APIs before this agent is viable.
Agent 3: Patient feedback sentiment analysis
- Accuracy: ✅ Feedback collected via validated survey platform
- Relevance: ✅ Direct patient voice data
- Timeliness: ✅ Surveys processed within 4 hours of submission
- Cleanliness: ✅ Structured fields plus free-text — both usable
- Availability: ✅ Survey data flows to Azure SQL with API access
Jordan’s verdict: Go. All five pillars pass. This is the strongest agent candidate.
Organising data for AI consumption
Once data passes the readiness assessment, it needs to be structured so that agents — and other AI systems — can consume it. Three architecture patterns dominate:
| Pattern | How It Works | Best For | Watch Out For |
|---|---|---|---|
| Centralised | All data lands in a single store (Fabric lakehouse, Azure SQL) | Small to mid orgs with unified data teams | Single point of failure, bottleneck for updates |
| Federated | Data stays where it lives; agents query across sources at runtime | Large orgs with autonomous business units | Latency, inconsistent schemas, auth complexity |
| Hybrid | Core reference data centralised; domain data federated with virtual views | Most enterprise D365 deployments | Governance overhead, requires clear ownership boundaries |
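The hybrid pattern can be sketched in a few lines: core reference data is read from a central store, while domain data is fetched at query time from the system that owns it. The in-memory dicts below stand in for Dataverse and per-unit APIs — the store names, SKU, and counts are all hypothetical.

```python
# Sketch of the hybrid pattern: centralised master data joined at
# runtime with federated domain data. Real deployments would replace
# these dicts with Dataverse queries and per-unit API calls.

CENTRAL_REFERENCE = {  # centralised reference (master) data
    "SKU-100": {"name": "Sterile gloves", "unit": "box"},
}

FEDERATED_SOURCES = {  # each business unit owns its inventory counts
    "hospital_a": {"SKU-100": 120},
    "hospital_b": {"SKU-100": 45},
}

def stock_view(sku: str) -> dict:
    """Build a virtual view: master record plus live counts per site."""
    record = dict(CENTRAL_REFERENCE[sku])
    record["stock"] = {
        site: counts.get(sku, 0)
        for site, counts in FEDERATED_SOURCES.items()
    }
    return record

print(stock_view("SKU-100"))
```

The governance overhead the table warns about shows up here as ownership: someone must decide which fields live in `CENTRAL_REFERENCE` and which stay federated.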
Exam tip: Agents vs Power Automate flows
The exam tests whether you can distinguish agent-appropriate tasks from flow-appropriate tasks:
- Power Automate flow: Predictable sequence, structured inputs, no reasoning required. Example: “When a new order arrives, create an invoice and email the customer.”
- Agent: Requires reasoning over unstructured or ambiguous inputs, handling exceptions, or making judgement calls. Example: “Read the customer complaint email, determine severity, check order history, and draft a personalised response.”
The dividing line is reasoning. If the task needs the system to interpret, evaluate, or adapt — it is an agent. If it follows a deterministic path every time — it is a flow.
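The dividing line can be sketched as a router: deterministic work goes to a flow, anything needing interpretation goes to an agent. The `needs_reasoning` heuristic below is a toy stand-in — in practice, deciding that a message needs judgement is often itself the agent's first job.

```python
# Toy sketch of the flow-vs-agent dividing line. The cue list is an
# illustrative assumption; a real system would use a classifier or
# the agent's own reasoning rather than keyword matching.

def needs_reasoning(message: str) -> bool:
    """Heuristic: ambiguous, judgement-laden text needs an agent."""
    ambiguous_cues = ("complaint", "frustrated", "urgent", "not sure")
    return any(cue in message.lower() for cue in ambiguous_cues)

def route(message: str) -> str:
    return "agent" if needs_reasoning(message) else "power_automate_flow"

print(route("New order #1042 received"))          # deterministic path → flow
print(route("Complaint: my delivery never came")) # needs judgement → agent
```

Note the asymmetry: a flow can safely handle everything the router sends it, but an agent downstream still needs approval gates for high-impact actions, as the comparison table above describes.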
Knowledge check
Jordan is assessing a supply chain agent at CareFirst. Inventory data is updated every 48 hours, three hospitals use different units of measure, and two systems lack API access. What should she recommend?
A D365 Customer Service team receives emails with complaints that vary widely in tone, urgency, and topic. They want to auto-classify and route these complaints. What is the best approach?
Next up: AI Strategy and the Cloud Adoption Framework — map your AI ambitions to a structured adoption roadmap using Microsoft’s CAF.