Insider Risk: Foundations & Setup
The biggest threats often come from inside. Set up Microsoft Purview Insider Risk Management: roles, connectors, Defender for Endpoint integration, and the global settings that enable detection before damage happens.
What is Insider Risk Management?
Every security system focuses on keeping bad people OUT. But what about the threat from people already INSIDE?
An employee who copies customer data before leaving. A contractor who emails trade secrets to a competitor. A frustrated worker who deletes critical files. These are insider threats, and traditional perimeter security cannot stop them because these people already have the keys.
Microsoft Purview Insider Risk Management watches for patterns of risky behaviour by correlating signals from across M365 (unusual file downloads, abnormal email patterns, data exfiltration attempts) and generates alerts for investigation. Crucially, it protects user privacy through pseudonymisation until an investigation is formally opened.
Roles and permissions
Insider Risk Management uses strict role-based access to protect user privacy:
| Role Group | What It Can Do |
|---|---|
| Insider Risk Management | Full access: configure policies, view alerts, investigate cases, manage settings |
| Insider Risk Management Admins | Configure settings and policies, but cannot view alerts or cases |
| Insider Risk Management Analysts | View and triage alerts, but cannot view user-identifying information (pseudonymised) |
| Insider Risk Management Investigators | View alerts AND user details, manage cases, take action |
| Insider Risk Management Approvers | Approve forensic evidence capture requests |
Separation of duties
The role structure enforces separation between administration and investigation:
- Admins configure the system but cannot see investigation data
- Analysts triage alerts but see pseudonymised names (User1, User2)
- Investigators see real identities but only for escalated cases
- Approvers are a separate check for invasive evidence collection
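The separation of duties above can be sketched as a simple role-to-capability map. The role and capability names here are illustrative labels for the four bullets, not actual Purview API identifiers:

```python
# Minimal sketch of IRM's separation of duties as a role -> capability map.
# Names are illustrative, not real Purview identifiers.
ROLE_CAPABILITIES = {
    "IRM Admins":        {"configure_policies", "configure_settings"},
    "IRM Analysts":      {"triage_alerts", "view_alerts_pseudonymised"},
    "IRM Investigators": {"triage_alerts", "view_alerts_pseudonymised",
                          "view_real_identities", "manage_cases"},
    "IRM Approvers":     {"approve_forensic_capture"},
}

def can(role: str, capability: str) -> bool:
    """Return True if the given role grants the capability."""
    return capability in ROLE_CAPABILITIES.get(role, set())

# Admins configure but cannot investigate; analysts triage but never
# see real names; only investigators resolve identities.
assert can("IRM Admins", "configure_policies")
assert not can("IRM Admins", "view_real_identities")
assert can("IRM Analysts", "triage_alerts")
assert not can("IRM Analysts", "view_real_identities")
assert can("IRM Investigators", "view_real_identities")
```

The key property is that no single role holds both the configuration and the identity-resolution capabilities.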
Exam tip: pseudonymisation by default
The exam tests privacy controls in Insider Risk Management. Key facts:
- User identities are pseudonymised by default: analysts see "User1", not "John Smith"
- Only users in the Insider Risk Management Investigators role see real identities
- Pseudonymisation can be turned off globally, but Microsoft recommends keeping it on
- All investigator actions are logged in the audit log for accountability
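To make the idea concrete, here is a toy pseudonymisation sketch: a keyed hash produces a stable alias, so analysts can correlate a user's activity across alerts without learning who they are. This is not how Purview derives its labels internally; the secret key is a hypothetical stand-in for service-held state:

```python
import hmac
import hashlib

# Hypothetical tenant-scoped key held by the service, NOT by analysts.
SECRET = b"tenant-scoped-secret"

def pseudonym(upn: str) -> str:
    """Derive a stable, non-reversible alias for a user principal name."""
    digest = hmac.new(SECRET, upn.lower().encode(), hashlib.sha256).hexdigest()
    return f"User-{digest[:8]}"

alias = pseudonym("john.smith@atlasglobal.com")
# The same user always maps to the same alias (case-insensitive),
# so repeated alerts about one user are linkable without de-anonymising.
assert alias == pseudonym("John.Smith@atlasglobal.com")
assert alias != pseudonym("jane.doe@atlasglobal.com")
```

Only a component holding the alias-to-identity mapping (the investigator role, in Purview's model) can resolve the alias back to a person.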
Connectors: feeding signals into IRM
IRM needs data from multiple sources to detect patterns:
| Connector | What Signals It Provides | Why It Matters |
|---|---|---|
| HR connector | Employee departure dates, performance plans, terminations | Departing employees are the #1 data theft risk; the HR signal is critical for the "departing employee" policy template |
| Microsoft Defender for Endpoint | Device activities: USB usage, printing, application access | Endpoint signals detect physical data exfiltration (USB copies, printing sensitive docs) |
| Healthcare connector | Patient record access patterns | Detects inappropriate access to patient data (curiosity browsing) |
| Physical badging connector | Building access logs | Unusual after-hours access to secure areas |
| Third-party connectors | Custom data sources via API | Integrate with SIEM, HRIS, or other security tools |
Setting up the HR connector
The HR connector is the most important for exam purposes:
- Prepare a CSV file with columns: EmailAddress, ResignationDate, LastWorkingDay, EffectiveDate
- Create the connector in the Purview portal: Settings > Connectors
- Schedule uploads: automate CSV delivery on a regular basis
- Validate: ensure the connector is receiving and processing data
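The CSV preparation step can be automated. The sketch below builds a file with the four columns listed above and rejects rows with missing fields; the exact schema accepted by the connector should be verified against the sample file in the Purview portal:

```python
import csv
import io

# Column names from the step list above; verify against the connector's
# sample file before scheduling uploads.
REQUIRED_COLUMNS = ["EmailAddress", "ResignationDate",
                    "LastWorkingDay", "EffectiveDate"]

def build_hr_csv(rows: list[dict]) -> str:
    """Build the HR connector CSV, validating each row's columns."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=REQUIRED_COLUMNS)
    writer.writeheader()
    for row in rows:
        missing = [c for c in REQUIRED_COLUMNS if c not in row]
        if missing:
            raise ValueError(f"row missing columns: {missing}")
        writer.writerow({c: row[c] for c in REQUIRED_COLUMNS})
    return buf.getvalue()

csv_text = build_hr_csv([{
    "EmailAddress": "j.smith@atlasglobal.com",   # illustrative user
    "ResignationDate": "2024-05-01",
    "LastWorkingDay": "2024-05-31",
    "EffectiveDate": "2024-05-01",
}])
print(csv_text.splitlines()[0])
# -> EmailAddress,ResignationDate,LastWorkingDay,EffectiveDate
```

A script like this would sit in the scheduled-upload step, exporting from the HRIS and pushing the file on a regular cadence.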
Scenario: Zara sets up IRM at Atlas Global
Atlas Global has 15,000 employees across 40 countries. Zara's setup:
- HR connector: Automated CSV from the HRIS system (resignation dates, performance plans)
- Defender for Endpoint: Already deployed on managed devices, so signals flow automatically
- Roles: Zara as IRM Admin; two compliance investigators as IRM Investigators; three HR analysts as IRM Analysts (pseudonymised view)
- Privacy: Pseudonymisation ON. Investigators must request approval to view real identities.
- Settings: Analytics enabled in test mode for 30 days to establish baseline activity patterns before creating policies.
Global settings
Before creating policies, configure global IRM settings:
| Setting | What It Controls |
|---|---|
| Privacy | Pseudonymisation on/off for usernames in alerts |
| Policy indicators | Which activities IRM monitors (configured globally, policies select which to use) |
| Policy timeframes | How far back to look (activation window: 5-30 days) |
| Intelligent detections | File type exclusions, volume thresholds, anomaly sensitivity |
| Export alerts | Integration with SIEM via Office 365 Management API |
| Priority user groups | Users who receive extra scrutiny (executives, people with access to sensitive data) |
| Power Automate flows | Automated workflows triggered by IRM alerts |
| Analytics | Pre-policy analytics that show potential risk patterns before any policy is created |
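For the "Export alerts" setting, a SIEM collector typically pulls alert content through the Office 365 Management Activity API. The sketch below only constructs the request URLs (tenant ID is a placeholder; Azure AD app registration and OAuth token acquisition are omitted), and the assumption that IRM alerts surface under the Audit.General content type should be confirmed against the API documentation:

```python
from urllib.parse import urlencode

BASE = "https://manage.office.com/api/v1.0/{tenant}/activity/feed"

def subscription_start_url(tenant_id: str,
                           content_type: str = "Audit.General") -> str:
    """URL to start a subscription for a content type (POST)."""
    return (f"{BASE.format(tenant=tenant_id)}/subscriptions/start?"
            + urlencode({"contentType": content_type}))

def list_content_url(tenant_id: str, start: str, end: str,
                     content_type: str = "Audit.General") -> str:
    """URL to list available content blobs in a time window (GET)."""
    return (f"{BASE.format(tenant=tenant_id)}/subscriptions/content?"
            + urlencode({"contentType": content_type,
                         "startTime": start, "endTime": end}))

# Placeholder tenant ID; a real collector would then GET each returned
# contentUri with a bearer token and forward the events to the SIEM.
print(list_content_url("00000000-0000-0000-0000-000000000000",
                       "2024-05-01T00:00:00", "2024-05-02T00:00:00"))
```

Each listed content blob is fetched separately and its JSON events forwarded to the SIEM's ingestion endpoint.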
Analytics (pre-policy scanning)
Before creating your first policy, enable analytics to scan your tenant for potential risk patterns. This 48-hour scan reveals:
- How many users show departing employee patterns
- Volume of abnormal file activity
- Potential data theft indicators
This helps you prioritise which policies to create first and set realistic thresholds.
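The "set realistic thresholds" step amounts to deriving an alerting cutoff from baseline activity. A common approach (illustrative here, not Purview's internal algorithm) is mean plus three standard deviations over the observation window; the download counts below are made up:

```python
from statistics import mean, stdev

# Hypothetical per-user daily file-download counts observed during the
# analytics/baseline window.
daily_downloads = [12, 9, 15, 11, 8, 14, 10, 13, 9, 12]

mu = mean(daily_downloads)
sigma = stdev(daily_downloads)
threshold = mu + 3 * sigma   # flag activity well outside the baseline

def is_anomalous(count: int) -> bool:
    """True if a day's download count exceeds the baseline threshold."""
    return count > threshold

assert not is_anomalous(15)   # within the normal range
assert is_anomalous(400)      # e.g. a bulk export before resignation
```

Tuning the multiplier trades false positives (too low) against missed exfiltration (too high); the analytics scan gives you the baseline data to make that trade-off deliberately.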
Zara at Atlas Global wants an HR team member to triage Insider Risk alerts but not see the real names of flagged employees. Which role should she assign?
Dr. Liam wants to detect when departing employees at St. Harbour Health download patient records. He has configured an Insider Risk policy using the 'Data theft by departing users' template, but no alerts are being generated for employees who have submitted resignations. What is the most likely issue?
Next up: Insider Risk: Policies & Indicators - choose the right policy template, configure indicators, and create policies that detect real threats.