Detection Engineering: Putting It All Together
Detections, analytics, threat intel, and MITRE coverage: here is how they all connect into one unified detection strategy for your SOC.
The detection engineering mindset
You have built the entire security alarm system. Cameras are recording (data connectors). Sensors are placed on doors and windows (analytics rules). You know the burglar's playbook (MITRE ATT&CK). Motion detectors catch unexpected movement (anomaly rules).
But an alarm system is only as good as how the pieces work together. If the camera in the garage is recording but nothing ever reviews its footage, you have a blind spot. If every sensor triggers the same alarm, your team cannot prioritise.
Detection engineering is the discipline of making all these pieces work as one system: no gaps, no redundancy, no noise. This module ties together everything from Domain 1.
The unified detection stack
Here is how all the detection components connect:
| Layer | Tool | What It Does | Data Source |
|---|---|---|---|
| Defender XDR custom detections | Advanced Hunting queries | Detect endpoint, email, identity, and cloud app threats | Defender tables |
| Sentinel scheduled rules | KQL queries on schedule | Detect threats across ALL ingested data | Any workspace table |
| Sentinel NRT rules | Fast KQL queries | Time-critical single-table detections | Single workspace table |
| TI matching | Automatic indicator correlation | Match known-bad IOCs against log data | ThreatIntelligenceIndicator + log tables |
| Anomaly rules | Statistical baseline deviation | Catch unknown behavioural anomalies | Behavioural data (logins, access, transfers) |
| MITRE ATT&CK | Coverage analysis | Identify detection gaps | All of the above |
| SOC optimization | Recommendations engine | Suggest improvements | Workspace configuration |
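Sentinel performs TI matching automatically through its built-in threat intelligence analytics rules, but the underlying correlation is easy to sketch in KQL. The query below is illustrative, not the exact logic Microsoft ships; it joins active network indicators from the `ThreatIntelligenceIndicator` table against firewall traffic in `CommonSecurityLog` (both standard Sentinel tables):

```kusto
// Illustrative sketch of TI matching - not the built-in rule's exact logic.
// Take the latest copy of each active, unexpired IP indicator...
let indicators = ThreatIntelligenceIndicator
    | where TimeGenerated > ago(14d)
    | where Active == true and ExpirationDateTime > now()
    | where isnotempty(NetworkIP)
    | summarize arg_max(TimeGenerated, *) by NetworkIP;
// ...and match it against recent firewall/CEF destination IPs.
CommonSecurityLog
| where TimeGenerated > ago(1h)
| join kind=inner indicators on $left.DestinationIP == $right.NetworkIP
| project TimeGenerated, SourceIP, DestinationIP, Description, ConfidenceScore
```

The `arg_max` step matters: feeds often re-publish the same indicator, and deduplicating to the latest copy prevents one connection generating several matches.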
Where each detection type excels
| Threat Type | Best Detection | Why |
|---|---|---|
| Known malware hash | TI matching | Automatic indicator correlation; no query needed |
| Brute force attack | Scheduled analytics rule | Needs aggregation (count failed logins) across a time window |
| Admin account login from new country | NRT rule | Time-critical, single table, simple logic |
| PowerShell downloading executables | Defender XDR custom detection | Endpoint-specific, needs DeviceProcessEvents table |
| Insider using valid creds at unusual hours | Anomaly rule | No known pattern; deviation from learned baseline |
| Ransomware spreading across network | Automatic attack disruption | Real-time containment, faster than any analytics rule |
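The brute-force row is the classic case for a scheduled rule because it needs aggregation across a time window, which single-event rules cannot do. A minimal sketch against the `SigninLogs` table (Entra ID sign-in data; the threshold of 10 failures is an assumption you would tune to your environment):

```kusto
// Scheduled-rule sketch: many failed sign-ins for one account from one IP.
// In SigninLogs, ResultType "0" means success; anything else is a failure.
SigninLogs
| where TimeGenerated > ago(1h)
| where ResultType != "0"
| summarize FailedAttempts = count(),
            FirstAttempt = min(TimeGenerated),
            LastAttempt = max(TimeGenerated)
    by UserPrincipalName, IPAddress
| where FailedAttempts >= 10   // assumed threshold - tune to your baseline
```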
The detection development lifecycle
Detection engineering is not "set and forget." It follows a lifecycle:
1. Identify the threat
Start with a MITRE technique, a threat intelligence report, or a lesson learned from a past incident.
2. Write the detection
Create a KQL query in Advanced Hunting (for Defender) or the Sentinel query editor.
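For example, a Defender XDR custom detection for the PowerShell-download scenario from the earlier table might start from an Advanced Hunting query like this (a sketch; the command-line patterns are assumptions you would refine during testing):

```kusto
// Advanced Hunting sketch: PowerShell invoking a web download.
// Timestamp, ReportId, and DeviceId are required columns when this is
// saved as a custom detection rule.
DeviceProcessEvents
| where Timestamp > ago(1d)
| where FileName in~ ("powershell.exe", "pwsh.exe")
| where ProcessCommandLine has_any ("DownloadFile", "DownloadString",
                                    "Invoke-WebRequest", "iwr")
| project Timestamp, ReportId, DeviceId, DeviceName, AccountName,
          ProcessCommandLine, InitiatingProcessFileName
```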
3. Test against historical data
Run the query against past data to check:
- Does it catch known incidents? (true positives)
- Does it fire on normal activity? (false positives)
- What is the expected alert volume?
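All three checks above can often be answered by widening the query's own time window and summarizing the volume it would have produced. Continuing the brute-force sketch (threshold and bin sizes are assumptions):

```kusto
// Backtest sketch: replay the detection logic over 30 days of history
// and count how many alerts per day it would have fired.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType != "0"
| summarize FailedAttempts = count()
    by UserPrincipalName, IPAddress, bin(TimeGenerated, 1h)
| where FailedAttempts >= 10
| summarize WouldHaveAlerted = count() by bin(TimeGenerated, 1d)
```

If the daily count is far above what analysts can triage, tighten the threshold or logic before deploying, not after.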
4. Deploy in production
Save as a custom detection or analytics rule with appropriate frequency, severity, and entity mapping.
5. Tune continuously
Monitor the rule over 2-4 weeks:
- Suppress or exclude known false positives
- Adjust thresholds if too noisy or too quiet
- Update the query as the threat evolves
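A common tuning step is an explicit exclusion for a verified false positive. In this sketch the account name is a hypothetical placeholder:

```kusto
// Tuning sketch: exclude a verified-benign account that trips the rule.
// "svc-backup@contoso.com" is a hypothetical placeholder account.
SigninLogs
| where TimeGenerated > ago(1h)
| where ResultType != "0"
| where UserPrincipalName !~ "svc-backup@contoso.com"
| summarize FailedAttempts = count() by UserPrincipalName, IPAddress
| where FailedAttempts >= 10
```

In practice, a Sentinel watchlist (queried via `_GetWatchlist`) keeps growing exclusion lists out of the rule body so they can be updated without editing the rule.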
6. Review and retire
Periodically review all detections:
- Are they still relevant? (threats change)
- Are they generating value? (incidents that analysts act on)
- Can they be replaced by better detections?
Scenario: Anika's monthly detection review
Anika at Sentinel Shield runs a monthly detection engineering review:
- MITRE coverage audit: check for new gaps (new data sources may have created yellow cells)
- Noise review: list the top 10 noisiest rules. Any generating more than 50 alerts/week? Tune or suppress.
- Missed detections: review incidents discovered by hunting rather than by rules. Can a rule be written to catch this next time?
- TI health: are threat feeds active? Any expired indicators not replaced?
- SOC optimization: review Sentinel's recommendations and action the top 3
This monthly cycle ensures detection coverage improves over time rather than degrading.
Domain 1 summary
You have now covered the full scope of Manage a Security Operations Environment (40-45% of the exam):
| Area | What You Learned | Key Modules |
|---|---|---|
| Sentinel workspace | Roles, retention tiers, workbooks, SOC optimization | Module 1 |
| Data ingestion | Windows events, Syslog, CEF, Azure activities, custom tables | Modules 2-3 |
| Defender for Endpoint | Advanced features, rules, ASR, security policies | Modules 4-5 |
| Alert management | Notifications, tuning, suppression, correlation | Module 6 |
| Automation | AIR, attack disruption, device groups, automation rules, playbooks | Modules 7-8 |
| Detection engineering | Custom detections, analytics rules, TI, MITRE, anomalies | Modules 9-12 |
Exam strategy: Domain 1 key concepts
Domain 1 is the heaviest domain (40-45%). Focus on:
- Sentinel roles: know the hierarchy (Reader → Responder → Contributor)
- Data retention tiers: Analytics vs Data lake vs XDR
- AMA vs WEF: when to use which
- Automation levels: Full, Semi, No automation
- NRT vs scheduled: speed vs complexity trade-off
- MITRE ATT&CK: how to identify and close coverage gaps
- Suppression vs disabling: always suppress; never disable unless the detection itself is wrong
James at Pacific Meridian discovers that a new employee accidentally triggered a brute force analytics rule by mistyping their password 15 times. The rule is generating daily false positives for this user. What should he do?
Anika's monthly review shows that a Sentinel analytics rule for SSH brute force has not generated a single alert in 6 months. All SSH data connectors are active. What should she do?
🎬 Video coming soon
Next up: Domain 1 is complete. Domain 2 shifts to Respond to Security Incidents β from triage to investigation to remediation. We start with the incident lifecycle.