Microsoft Sentinel and SOAR Automation
Design Microsoft Sentinel workspace architectures, SOAR playbooks, multi-workspace strategies, cost management, and integration with Defender XDR for enterprise security operations.
Sentinel as Cloud-Native SIEM
Microsoft Sentinel fundamentally changes the SIEM model. Traditional on-premises SIEMs require hardware provisioning, storage management, and capacity planning. Sentinel runs on Azure Log Analytics — it scales automatically, requires no infrastructure management, and charges based on data ingestion volume.
Core Sentinel Components
Data Connectors bring logs into Sentinel. There are 300+ built-in connectors for Microsoft products, third-party security tools, cloud providers, and custom sources. The architecture decision is which data to ingest — more data means better visibility but higher cost.
Analytics Rules are the detection engine. They query ingested data on a schedule and generate alerts when conditions are met. Rule types include:
- Scheduled rules — KQL queries that run at defined intervals (every 5 minutes, hourly, daily)
- Near-real-time (NRT) rules — Run every minute for time-sensitive detections
- Fusion rules — ML-powered correlation that detects multi-stage attacks by combining low-fidelity alerts into high-fidelity incidents
- Anomaly rules — Baseline normal behavior and detect deviations
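At its core, a scheduled rule is just a KQL query evaluated on a timer. A minimal brute-force detection might look like the sketch below — the table, event ID, window, and threshold are illustrative and would be tuned per environment:

```kusto
// Hypothetical scheduled rule: alert when an account fails to log on
// more than 10 times in a 15-minute window (threshold is illustrative)
SecurityEvent
| where TimeGenerated > ago(15m)
| where EventID == 4625                  // Windows failed logon
| summarize FailedCount = count() by Account, Computer
| where FailedCount > 10
```

When the query returns rows, Sentinel raises an alert and (depending on grouping settings) creates or updates an incident.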
Hunting is the proactive component. Analysts write KQL queries to search for threats that automated rules haven’t caught. Sentinel provides built-in hunting queries, bookmarks for saving interesting findings, and livestream for real-time monitoring of hunting queries.
Workbooks provide visual dashboards for monitoring, investigation, and reporting. They’re built on Azure Workbooks and can visualize any data in the Log Analytics workspace.
Content Hub is a marketplace of packaged solutions — each solution includes data connectors, analytics rules, hunting queries, workbooks, and playbooks for a specific data source or scenario. Solutions dramatically accelerate deployment.
Workspace Architecture: The Critical Design Decision
The single most important architecture decision for Sentinel is workspace design. This decision affects data residency, access control, cost, and operational efficiency.
| Factor | Single Workspace | Multi-Workspace |
|---|---|---|
| Complexity | Simple — one workspace to manage, query, and monitor | Complex — requires cross-workspace queries, Azure Lighthouse, and careful coordination |
| Cost | Potentially lower — commitment tiers apply to total volume across all data | Higher overhead — each workspace may need its own commitment tier; cross-workspace queries add cost |
| Data Residency | All data in one Azure region — may violate regulations if users span regions | Data stays in its region — satisfies GDPR, data sovereignty, and local regulatory requirements |
| RBAC Granularity | Table-level and resource-context RBAC available but limited | Workspace-level RBAC provides strong isolation between business units or tenants |
| Query Performance | Fast — all data in one workspace for correlation | Slower for cross-workspace queries; some analytics rules don't support cross-workspace |
| Best For | Single region, unified SOC, no regulatory data residency requirements | Multi-region compliance, MSSP multi-tenant, strong isolation between business units |
When to Use Multiple Workspaces
You need multiple workspaces when:
- Data residency regulations require logs to stay in specific geographic regions (GDPR in EU, data sovereignty laws)
- MSSP scenarios where each customer’s data must be completely isolated
- Strong RBAC isolation is needed between business units that cannot see each other’s data
- Billing separation is required between departments or subsidiaries
Cross-Workspace Queries
When using multiple workspaces, analysts still need to correlate events across them. KQL supports cross-workspace queries using the workspace() function:
```kusto
union
    SecurityEvent,
    workspace("sentinel-eu-workspace").SecurityEvent
| where EventID == 4625
| summarize FailedLogons = count() by Account, bin(TimeGenerated, 1h)
```
Azure Lighthouse enables MSSP scenarios — the service provider can manage multiple customer Sentinel workspaces from their own tenant without switching directories.
🏛️ Torres Designs Regional Workspaces
Commander Aiden Torres is designing Sentinel for the Department of Federal Systems, which operates across three regions — US mainland, European facilities, and Pacific installations.
“We have a data sovereignty problem,” Torres tells Colonel Reeves. “Our European facilities process data covered by GDPR. That data cannot leave EU data centers. Our Pacific installations have similar requirements under local regulations. But our SOC in Virginia needs to see everything.”
Torres’s architecture:
- Three regional workspaces: US East (primary), West Europe, and Australia East
- Each workspace ingests logs only from assets in that region
- Cross-workspace queries allow the Virginia SOC to search across all three workspaces without moving data
- Azure Lighthouse gives the Virginia SOC visibility into all workspaces from a single pane
- Analytics rules run locally in each workspace for time-sensitive detections, plus a global rule set that correlates cross-region using scheduled queries
“What about incidents that span regions?” Specialist Diaz asks. “If an attacker compromises an account in Europe and uses it to access a system in the US?”
“The cross-workspace analytics rule catches that,” Torres explains. “It queries authentication events across all three workspaces and correlates by user principal name. The incident gets created in the US workspace since that’s where the SOC operates, but it references evidence from the EU workspace.”
“And the data never leaves Europe?” Colonel Reeves confirms.
“Correct. The query runs in the EU workspace and returns only the result — the summarized alert data. The raw logs stay in the EU region. The EU workspace retains full custody of the underlying data.”
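Torres's cross-region correlation rule could be sketched along these lines. The workspace names are hypothetical, and note that only the summarized rows leave each regional workspace — the raw sign-in records stay put:

```kusto
// Correlate successful sign-ins by user principal name across the three
// regional workspaces. Workspace names are hypothetical examples.
union
    SigninLogs,                                        // US East (local workspace)
    workspace("sentinel-eu-workspace").SigninLogs,     // West Europe
    workspace("sentinel-apac-workspace").SigninLogs    // Australia East
| where TimeGenerated > ago(1h) and ResultType == "0"  // successful sign-ins
| summarize Regions = make_set(Location), SignIns = count()
    by UserPrincipalName, bin(TimeGenerated, 1h)
| where array_length(Regions) > 1                      // same account, multiple regions
```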
SOAR: Automated Response with Logic App Playbooks
SOAR (Security Orchestration, Automation, and Response) in Sentinel is powered by Azure Logic Apps. Playbooks are Logic App workflows that execute automatically when triggered by analytics rules, incidents, or manual analyst actions.
What SOAR Playbooks Automate
Enrichment playbooks add context to incidents automatically:
- Look up IP reputation in threat intelligence feeds
- Query user details from Entra ID (department, manager, recent activity)
- Check device compliance status in Intune
- Retrieve file reputation from VirusTotal
Containment playbooks take immediate action:
- Isolate a compromised device via Defender for Endpoint API
- Disable a compromised user account in Entra ID
- Block a malicious IP in the firewall
- Revoke active sessions and force re-authentication
Notification playbooks keep stakeholders informed:
- Send Teams message to the SOC channel
- Create a ticket in ServiceNow or Jira
- Email the incident report to the CISO
- Page the on-call analyst via PagerDuty
Remediation playbooks clean up after incidents:
- Reset compromised user passwords
- Remove malicious inbox rules
- Quarantine malicious emails across all mailboxes
- Apply Conditional Access policy to block further access
☁️ Rajan Builds Auto-Containment Playbooks
Rajan is building SOAR playbooks for a client’s hybrid SOC. The MSSP handles Tier 1, but automated containment needs to happen immediately — before any human reviews the alert.
“Here’s the scenario I want to automate,” Rajan tells Priya. “When Sentinel detects a compromised device with high confidence — say, Defender for Endpoint raises a high-severity alert for ransomware pre-encryption activity — I want three things to happen automatically within 60 seconds.”
Rajan’s containment playbook:
- Isolate the device via the Defender for Endpoint API — the device can still communicate with the Defender cloud service but nothing else
- Disable the logged-in user account in Entra ID — prevents the attacker from using stolen credentials on other systems
- Send an urgent Teams message to the SOC channel with incident details, device name, user account, and a direct link to the incident in the Defender portal
“But what about false positives?” Priya asks. “What if we isolate a device that wasn’t actually compromised?”
“That’s why the playbook only triggers on high-confidence, high-severity alerts,” Rajan explains. “And even if it’s a false positive, the impact is limited — the device is isolated, not wiped. The analyst can un-isolate it in two clicks. The cost of a 30-minute false positive isolation is trivial compared to the cost of ransomware spreading for 30 minutes while we wait for human review.”
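The alert filter that gates a playbook like Rajan's can be expressed as a deliberately narrow analytics query, so automation fires only on the high-confidence cases. The provider and alert-name values below are illustrative:

```kusto
// Trigger auto-containment only on high-severity endpoint alerts
// (provider and name filters are illustrative, not a complete rule)
SecurityAlert
| where ProviderName == "MDATP"        // Defender for Endpoint
| where AlertSeverity == "High"
| where AlertName has_any ("ransomware", "encryption")
```

Anything that does not match falls through to the normal incident queue for human triage.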
Cost Management: A Critical Architecture Concern
Sentinel pricing is based on data ingestion volume (per GB). For large organizations ingesting terabytes daily, cost management is a major architecture concern.
Data Tiers
Analytics Logs (full-featured): Full query capability, 90-day interactive retention (extendable to 2 years), supports analytics rules, hunting, and SOAR. This is the default tier and the most expensive.
Basic Logs: Reduced cost (significantly cheaper per GB), limited query capabilities (KQL with restrictions), 30-day interactive retention. Good for high-volume, low-security-value logs (verbose application logs, flow logs) where you want them available for investigation but don’t need to run analytics rules against them.
Archive: Lowest cost. Logs are moved to cold storage after the interactive retention period. Can be restored to Analytics tier on-demand for investigation (restoration takes minutes). Good for compliance retention requirements.
Cost Optimization Strategies
- Data Collection Rules (DCR): Filter logs at ingestion time — only collect the fields and events you need, not everything a source generates
- Commitment tiers: Pre-commit to a daily ingestion volume for a discount (100 GB/day, 200 GB/day, etc.)
- Free data sources: Some Microsoft sources are free to ingest (Azure Activity logs, Office 365 audit logs, alerts and incidents from Defender XDR)
- Basic Logs for high-volume sources: Move verbose sources like firewall flow logs, DNS query logs, and debug logs to Basic tier
- Sentinel Content Hub solutions: Pre-built, vendor-tuned analytics rules and parsers avoid redundant custom ingestion and inefficient ad-hoc queries against large datasets
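Before choosing tiers or commitment levels, measure what you actually ingest. The built-in Usage table in Log Analytics reports billable volume per table (Quantity is measured in MB), which makes the biggest cost drivers easy to rank:

```kusto
// Rank tables by billable ingestion over the last 30 days
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = round(sum(Quantity) / 1024, 2) by DataType
| sort by IngestedGB desc
```

Tables that dominate this list — typically flow logs and verbose diagnostics — are the first candidates for Basic Logs or tighter Data Collection Rules.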
Integration with Defender XDR
When Sentinel is connected to Defender XDR in the unified portal:
- Defender XDR incidents appear in the Sentinel incident queue
- Sentinel analytics rules can reference Defender XDR data
- SOAR playbooks can be triggered by either Sentinel or Defender XDR incidents
- Advanced hunting in the portal queries both Sentinel and Defender XDR data
- Defender XDR incident data ingested into Sentinel is free (no additional cost)
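In the unified portal, a single advanced hunting query can draw on Defender XDR tables alongside Sentinel workspace tables. A minimal sketch (table availability depends on which workloads are connected):

```kusto
// Failed logons reported by Defender for Endpoint devices, queryable
// in the same portal as Sentinel workspace tables
DeviceLogonEvents
| where Timestamp > ago(1d)
| where ActionType == "LogonFailed"
| summarize Failures = count() by AccountName, DeviceName
| top 10 by Failures
```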
SC-100 Exam Strategy: Sentinel and SOAR
Torres is designing Sentinel for a government organization with facilities in the US, Europe, and Asia-Pacific. European regulations require that log data from EU facilities cannot leave the EU. The central SOC operates from the US. Which architecture should Torres recommend?
A security architect is optimizing Sentinel costs for an organization that ingests 500 GB/day. The breakdown is: 100 GB from security products (Defender, Entra ID), 50 GB from firewall threat logs, 200 GB from firewall flow logs (verbose), and 150 GB from application debug logs. Which cost optimization strategy is most effective?
Rajan wants to build a SOAR playbook that automatically isolates devices when Sentinel detects high-confidence ransomware activity. Which consideration is MOST important for the architect to address?
Next up: Identity and Access Architecture — We shift from security operations to identity design, covering Entra ID tenant architecture, external identity strategies, and workload identities.