DSPM for AI: Policies & Monitoring
Configure DSPM for AI policies to track how AI services interact with your sensitive data. Monitor AI activity, detect anomalies, and ensure your data security posture stays strong as AI adoption grows.
From setup to ongoing governance
Setting up DSPM is like installing security cameras. Configuring policies and monitoring is like actually watching the footage and setting up motion alerts.
In the previous module, you prepared your environment for AI. Now you create policies that define what to watch for, and use monitoring dashboards to track how AI interacts with your sensitive data day to day. You want to know: What sensitive data is AI accessing? Are there anomalies? Are oversharing patterns emerging?
DSPM for AI policies
What policies monitor
| Policy Focus | What It Tracks |
|---|---|
| AI interactions with sensitive data | When Copilot or other AI services access, summarise, or reference content matching SITs |
| Oversharing in AI context | When AI surfaces broadly shared content that may not be appropriate |
| Unprotected sensitive data | When AI accesses sensitive content without sensitivity labels or encryption |
| Anomalous AI usage | Unusual patterns — sudden spikes in AI queries about sensitive topics |
Configuring a DSPM for AI policy
| Step | What You Configure |
|---|---|
| 1. Policy scope | Which AI services to monitor — Microsoft 365 Copilot, Azure AI services, third-party AI |
| 2. Data conditions | Which sensitive data types to watch — specific SITs, sensitivity labels, or all sensitive content |
| 3. Activity types | Which AI activities to track — prompts, responses, file references, summarisation |
| 4. Alerts and thresholds | When to generate alerts — volume thresholds, anomaly detection, specific pattern matches |
| 5. Recommendations | Enable actionable recommendations for improving data posture |
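The five configuration steps above can be sketched as a simple policy object. This is an illustrative model only; the field names are hypothetical and do not mirror the actual Purview policy schema.

```python
from dataclasses import dataclass

# Hypothetical model of a DSPM for AI policy -- field names are
# illustrative and do not match the real Purview schema.
@dataclass
class DspmAiPolicy:
    name: str
    ai_services: list[str]           # step 1: policy scope
    data_conditions: list[str]       # step 2: SITs / labels to watch
    activity_types: list[str]        # step 3: prompts, responses, file references
    alert_threshold: int = 50        # step 4: interactions/day before alerting
    recommendations_enabled: bool = True  # step 5

policy = DspmAiPolicy(
    name="Monitor Copilot PII access",
    ai_services=["Microsoft 365 Copilot"],
    data_conditions=["U.S. Social Security Number (SSN)", "Credit Card Number"],
    activity_types=["prompt", "response", "file_reference"],
)
```

In the real portal these choices are made through the policy wizard; the sketch just makes the shape of the decision explicit.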
Policy types
| Policy Type | What It Does | Use Case |
|---|---|---|
| Oversharing detection | Identifies content accessible by AI due to broad permissions | Pre-Copilot deployment assessment and ongoing monitoring |
| Sensitive data in AI | Monitors when AI accesses content matching specific SITs | Track AI interaction with financial data, patient records, or PII |
| Unlabelled content risk | Flags sensitive content without labels that AI could surface | Identify gaps in your labelling coverage |
| Anomalous AI usage | Detects unusual spikes or patterns in AI data access | Catch potential misuse or compromised accounts using AI |
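The "anomalous AI usage" policy type rests on a familiar idea: compare today's activity against a user's baseline and flag large deviations. A minimal sketch of that heuristic (the real detection logic inside Purview is proprietary; this is just the concept):

```python
from statistics import mean, stdev

def flag_anomalous_usage(baseline: list[int], today: int, k: float = 3.0) -> bool:
    """Flag today's AI query count if it exceeds the baseline mean
    by more than k standard deviations -- a common anomaly heuristic,
    not Purview's actual algorithm."""
    mu, sigma = mean(baseline), stdev(baseline)
    return today > mu + k * sigma

# A user who normally makes ~10 sensitive-topic queries a day
history = [8, 12, 10, 9, 11, 10, 12]
flag_anomalous_usage(history, 11)   # ordinary day -> False
flag_anomalous_usage(history, 60)   # sudden spike -> True
```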
Monitoring AI activities
The DSPM for AI dashboard
The dashboard provides a central view of AI data security:
| Dashboard Section | What It Shows |
|---|---|
| Overview | Summary of AI data risks — total sensitive items accessible, oversharing count, unlabelled content |
| Recommendations | Actionable steps to improve posture — fix permissions, apply labels, configure policies |
| Data assessments | Deep-dive into specific risk areas — which sites, which data types, which users |
| Activity monitoring | Timeline of AI interactions with sensitive data |
| Reports | Exportable reports for compliance teams and auditors |
Key metrics to track
| Metric | What It Tells You | Target |
|---|---|---|
| Sensitive items accessible by AI | Volume of sensitive data AI can surface | Decreasing over time as you remediate |
| Overshared files | Files with broad permissions containing sensitive data | Near zero before AI deployment |
| Unlabelled sensitive content | Sensitive items without labels | Decreasing — auto-labelling should close gaps |
| AI interaction volume | How actively AI services are being used with sensitive content | Baseline tracking — spikes may indicate misuse |
| Recommendations completion | Percentage of DSPM recommendations addressed | 100% for critical items |
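Most of the targets above are trend-based ("decreasing over time"), so a monthly snapshot series is enough to check them. A small sketch of that check, using made-up numbers:

```python
def is_improving(snapshots: list[int]) -> bool:
    """True when a risk metric (e.g. overshared files) trends downward
    month over month, matching the 'decreasing over time' targets."""
    return all(later <= earlier for earlier, later in zip(snapshots, snapshots[1:]))

overshared = [12000, 6400, 1100, 200]   # hypothetical remediation trajectory
is_improving(overshared)                # True
is_improving([200, 180, 450])           # regression -> False, investigate
```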
Recommendations workflow
DSPM for AI generates recommendations based on its assessment:
| Recommendation Type | Example | Priority |
|---|---|---|
| Fix oversharing | “Remove ‘Everyone’ access from site containing 340 source code files” | High |
| Apply labels | “8,500 documents contain PII but have no sensitivity label — configure auto-labelling” | High |
| Remove stale access | “45 sites have access for former employees — review and revoke” | Medium |
| Configure DLP | “No DLP policy monitors AI interactions with financial data — create one” | Medium |
| Enable audit | “Audit logging is not capturing AI activities — enable Copilot audit events” | High |
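Working the recommendation queue highest-priority-first is the obvious triage order for the table above. A minimal sketch (the recommendation titles are shortened from the examples; the structure is illustrative):

```python
# Map priority labels to sort order: High items surface first.
PRIORITY_ORDER = {"High": 0, "Medium": 1, "Low": 2}

recommendations = [
    ("Remove stale access", "Medium"),
    ("Fix oversharing", "High"),
    ("Configure DLP", "Medium"),
    ("Enable audit", "High"),
]

# Stable sort keeps the original order within each priority band.
triaged = sorted(recommendations, key=lambda r: PRIORITY_ORDER[r[1]])
```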
Scenario: Marcus monitors AI at NovaTech
Three months after deploying Copilot, Marcus reviews the DSPM dashboard:
Good news:
- Overshared files dropped from 12,000 to 200 (pre-deployment remediation worked)
- 95% of sensitive documents now have labels (auto-labeling closed the gap)
Concerns:
- 3 users are making unusually high volumes of AI queries about “client contracts” and “pricing” — DSPM flagged this as anomalous
- 500 new files were created without labels in the last month (new employees not trained on labelling)
Actions:
- Investigate the 3 users via Insider Risk Management
- Configure auto-labelling for the new document library
- Update the mandatory labelling policy to cover new user segments

Monitoring for Azure AI services
DSPM for AI extends beyond Microsoft 365 Copilot to Azure AI services:
| Azure AI Scope | What It Monitors |
|---|---|
| Azure OpenAI Service | Prompts and responses processed by your Azure OpenAI deployments |
| Azure AI Foundry | AI apps and agents built on the Foundry platform |
| Custom AI apps | Applications using Azure AI services that process your organisation’s data |
To capture these signals:
- Connect Azure subscriptions to DSPM for AI
- Enable prompt and response logging in your Azure AI deployments
- Configure policies to monitor for sensitive data in AI prompts and responses
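Once prompt and response logging is flowing, the core check is whether AI prompt text matches sensitive information types. Purview performs this matching itself with far richer SIT definitions (patterns plus checksums and supporting keywords); this sketch only illustrates the idea with two simplified stand-in patterns:

```python
import re

# Simplified stand-ins for sensitive information types (SITs);
# real Purview SITs are much richer than a bare regex.
SIT_PATTERNS = {
    "Credit Card Number": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "U.S. SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def detect_sits(prompt: str) -> list[str]:
    """Return the names of SITs whose pattern appears in an AI prompt."""
    return [name for name, rx in SIT_PATTERNS.items() if rx.search(prompt)]

detect_sits("Summarise the contract for card 4111 1111 1111 1111")
# -> ["Credit Card Number"]
```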
Exam tip: DSPM for AI monitoring scope
The exam may ask what DSPM for AI can monitor. Key scopes:
- Microsoft 365 Copilot — prompts, responses, file references, meeting summaries
- Azure AI services — Azure OpenAI, Foundry, custom apps (requires Azure subscription connection)
- Third-party AI — limited, primarily through DLP and sensitivity labels
DSPM for AI is NOT a real-time blocking tool. It monitors, assesses, and recommends. Blocking is done by DLP policies and sensitivity labels, which DSPM helps you configure correctly.
Knowledge check
Marcus at NovaTech sees a DSPM recommendation: '8,500 documents contain PII but have no sensitivity label.' What should he do to address this at scale?
DSPM for AI flags that 3 NovaTech users are making abnormally high volumes of AI queries about 'client contracts' and 'pricing data'. What should Marcus investigate and how?
🎬 Video coming soon
🎉 Congratulations — you’ve completed all 25 modules of the SC-401 study guide!
You’ve covered:
- Domain 1: Classification, sensitivity labels, encryption, and on-premises protection
- Domain 2: DLP policies, endpoint DLP, and data retention lifecycle
- Domain 3: Insider Risk Management, Adaptive Protection, audit, alerts, and DSPM for AI
Ready to test your knowledge? Head to the SC-401 Practice Questions when available.