Azure Integration and Monitoring
Monitor SQL databases, DAB endpoints, and data pipelines with Azure Monitor, Application Insights, and Log Analytics: build dashboards, set alerts, and diagnose issues with KQL.
You cannot fix what you cannot see
Think of a car dashboard.
Your car has a speedometer, fuel gauge, temperature warning light, and engine light. Without them, you would not know you are running out of fuel until the car stops. You would not know the engine is overheating until it catches fire.
Azure Monitor is the dashboard for your database. Metrics are the gauges (CPU usage, storage, connections). Alerts are the warning lights (an email when CPU hits 90 percent). Log Analytics is the mechanic's diagnostic computer (a deep dive into exactly what happened and when). Application Insights watches your application code: how fast API calls are and where errors are happening.
Azure Monitor for SQL databases
Azure Monitor collects two types of data from SQL databases:
Metrics (the gauges)
Metrics are numeric values collected at regular intervals. They answer βhow is my database doing right now?β
| Metric | What It Measures | Alert When |
|---|---|---|
| CPU percentage | Processor utilisation | Above 80% sustained |
| DTU/vCore percentage | Overall resource consumption | Above 90% sustained |
| Data IO percentage | Disk read/write pressure | Above 85% sustained |
| Log IO percentage | Transaction log write throughput | Above 80% sustained |
| Storage percentage | Database size vs max size | Above 85% (prevent hitting the limit) |
| Connection failed | Authentication or connection errors | Any spike above baseline |
| Deadlocks | Deadlock occurrences | Any count above zero |
| Sessions percentage | Active sessions vs max allowed | Above 70% |
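The "sustained" qualifier in these thresholds matters: a metric alert should not fire on a single noisy sample. As a rough illustration of the evaluation logic (plain Python over invented sample data, not the actual Azure Monitor implementation):

```python
from datetime import datetime, timedelta

def sustained_breach(samples, threshold, duration):
    """Return True if the metric stayed above `threshold` for at least
    `duration` without once dropping back below it -- the shape of a
    'CPU above 80% sustained for 10 minutes' alert condition."""
    breach_start = None
    for ts, value in samples:
        if value > threshold:
            if breach_start is None:
                breach_start = ts           # breach window opens
            if ts - breach_start >= duration:
                return True                 # held long enough: fire
        else:
            breach_start = None             # dipped below: reset
    return False

# One CPU sample per minute (invented data); sustained above 80% from 10:01.
t0 = datetime(2024, 1, 1, 10, 0)
cpu = [(t0 + timedelta(minutes=i), v)
       for i, v in enumerate([70, 85, 82, 88, 91, 86, 84, 89, 90, 87, 85, 88])]

print(sustained_breach(cpu, threshold=80, duration=timedelta(minutes=10)))
# prints True
```

Azure Monitor evaluates this with the alert rule's aggregation granularity and evaluation frequency; the sketch only shows why a lone spike to 95% does not page anyone.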
Diagnostic settings
To send detailed logs to Log Analytics, configure diagnostic settings on the SQL database:
```
Azure SQL Database
  → Diagnostic settings
    → Add diagnostic setting
      → Select categories:
          SQLInsights
          AutomaticTuning
          QueryStoreRuntimeStatistics
          QueryStoreWaitStatistics
          Errors
          DatabaseWaitStatistics
          Timeouts
          Blocks
          Deadlocks
      → Send to: Log Analytics workspace
```
These categories stream detailed data into your Log Analytics workspace where you can query it with KQL.
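Once logs are flowing, a quick sanity check confirms which categories are actually arriving. This is a sketch against the `AzureDiagnostics` table used throughout this section; if your workspace is configured for resource-specific tables instead, the table names will differ:

```kql
// Which diagnostic categories have arrived in the last hour?
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.SQL"
| where TimeGenerated > ago(1h)
| summarize EventCount = count() by Category
| order by EventCount desc
```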
Log Analytics and KQL
Log Analytics stores logs in tables that you query using Kusto Query Language (KQL). Think of it as SQL for logs.
Essential KQL queries for SQL monitoring
```kql
// Top 10 longest-running queries in the last 24 hours
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.SQL"
| where Category == "QueryStoreRuntimeStatistics"
| where TimeGenerated > ago(24h)
| summarize AvgDuration = avg(duration_d) by query_hash_s
| top 10 by AvgDuration desc
```

```kql
// Connection failures over the past week
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.SQL"
| where Category == "Errors"
| where TimeGenerated > ago(7d)
| summarize FailureCount = count() by bin(TimeGenerated, 1h), error_number_d
| render timechart
```

```kql
// Deadlock events with details
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.SQL"
| where Category == "Deadlocks"
| project TimeGenerated, deadlock_xml_s
| order by TimeGenerated desc
```
KQL basics for the exam
| KQL Operator | What It Does | SQL Equivalent |
|---|---|---|
| `where` | Filter rows | `WHERE` |
| `summarize` | Aggregate | `GROUP BY` with aggregate functions |
| `project` | Select columns | `SELECT` specific columns |
| `extend` | Add calculated column | Computed column in `SELECT` |
| `top N by` | Limit and sort | `TOP N … ORDER BY` |
| `bin()` | Time bucketing | `DATEPART` grouping |
| `ago()` | Relative time | `DATEADD` from `GETDATE()` |
| `render` | Visualise results | No SQL equivalent (charting) |
Exam tip: KQL is not SQL
KQL reads left-to-right with pipe operators. Do not confuse it with T-SQL:
- KQL: `AzureDiagnostics | where Category == "Errors" | summarize count() by error_number_d`
- SQL: `SELECT error_number, COUNT(*) FROM AzureDiagnostics WHERE Category = 'Errors' GROUP BY error_number`
The exam may show KQL snippets and ask what they return. Focus on the pipe flow: data starts on the left, each pipe transforms it, and the final result comes out the right side.
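If you read SQL more fluently than KQL, it can help to mimic that pipe flow in ordinary Python, one pipeline stage per step (the rows below are invented sample data, not a real schema extract):

```python
from collections import Counter

# Invented sample rows standing in for the AzureDiagnostics table.
rows = [
    {"Category": "Errors",    "error_number_d": 18456},
    {"Category": "Errors",    "error_number_d": 18456},
    {"Category": "Deadlocks", "error_number_d": 1205},
    {"Category": "Errors",    "error_number_d": 4060},
]

# | where Category == "Errors"   -- filter rows flowing through the pipe
errors = [r for r in rows if r["Category"] == "Errors"]

# | summarize count() by error_number_d   -- aggregate what survived
counts = Counter(r["error_number_d"] for r in errors)

print(dict(counts))  # prints {18456: 2, 4060: 1}
```

Each variable is the output of one pipe stage: the data enters on the left, every stage transforms it, and the final aggregate falls out the end.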
Application Insights for DAB
Application Insights monitors the application layer: your Data API Builder endpoints, web applications, and API gateways. While Azure Monitor for SQL watches the database engine, Application Insights watches the code that talks to it.
What Application Insights captures
| Signal | Example |
|---|---|
| Request rates | 500 REST requests per minute to /api/products |
| Response times | Average 45ms, P95 at 200ms |
| Failure rates | 2% of requests returning 500 errors |
| Dependencies | Calls from DAB to SQL taking 120ms average |
| Exceptions | Stack traces when requests fail |
| Custom events | Business-specific telemetry you add to code |
Instrumenting DAB
DAB supports Application Insights through configuration:
```json
"runtime": {
  "telemetry": {
    "application-insights": {
      "connection-string": "@env('APPLICATIONINSIGHTS_CONNECTION_STRING')",
      "enabled": true
    }
  }
}
```
With this enabled, every REST and GraphQL request is tracked: response time, status code, and SQL dependency calls. You can correlate a slow API response directly to the underlying SQL query.
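A query along these lines does that correlation in Log Analytics. It is a sketch assuming the classic Application Insights schema (`requests` and `dependencies` tables; joined columns from the right side pick up a `1` suffix), so adjust names if your workspace uses the newer resource-based tables:

```kql
// Slow API requests and the SQL dependency calls behind them
requests
| where duration > 1000                       // requests slower than 1 second
| join kind=inner (
    dependencies
    | where type == "SQL"
  ) on operation_Id
| project timestamp, name, duration,
          sqlCall = name1, sqlDuration = duration1
| order by duration desc
```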
The application map
Application Insights generates an application map showing how your components connect:
```
     Browser/Mobile App
             ↓
   Data API Builder (DAB)
       ↓           ↓
  Azure SQL   Azure OpenAI
```
Each node shows request rates, response times, and failure rates. Red nodes indicate problems. You can click any connection to drill into specific slow or failing requests.
Scenario: Leo diagnoses slow search at SearchWave
Users report that the SearchWave product search is slow during peak hours. Leo opens Application Insights:
- The application map shows the DAB-to-SQL dependency has a P95 latency of 3 seconds (normally 50ms)
- Drills into the dependency: it is the `usp_SearchProducts` stored procedure
- Checks Log Analytics for the SQL database: CPU is at 95% during peak hours
- Cross-references with Query Store: the search procedure's execution plan regressed after a statistics update
The fix: force the good plan in Query Store and scale up the SQL database tier during peak hours. Leo found the root cause in minutes because all three monitoring layers (Application Insights, Azure Monitor metrics, Log Analytics with Query Store data) connected the dots.
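The plan-forcing half of that fix is a single Query Store procedure call. The IDs below are hypothetical placeholders; in practice they come from the Query Store "regressed queries" report for the affected procedure:

```sql
-- Pin the previously good plan for the regressed search query.
-- 42 (query_id) and 7 (plan_id) are placeholder values from Query Store.
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;

-- Later, once statistics are corrected, release the pin:
EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 7;
```

Forcing a plan is a stopgap: it buys stable performance while you address the underlying cause (here, the statistics update that triggered the regression).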
Monitoring recommendations by workload
| Workload Type | Key Metrics to Watch | Alert Thresholds | Tools |
|---|---|---|---|
| OLTP (transactional) | DTU/vCore %, deadlocks, connection failures, response time | CPU above 80%, any deadlocks, response P95 above 200ms | Azure Monitor metrics + Application Insights |
| Analytics / reporting | Query duration, data IO %, tempdb usage, concurrent queries | Query duration above 5 min, data IO above 90% | Log Analytics with Query Store data |
| DAB API layer | Request rate, response time, failure rate, dependency latency | Failure rate above 1%, P95 above 500ms | Application Insights with dependency tracking |
| CDC / CES pipelines | CDC latency, Event Hubs throughput, backlog size | CDC scan latency above 5 min, consumer lag growing | Azure Monitor for Event Hubs + SQL diagnostic logs |
Alert rules
Alerts notify your team before users notice problems. Configure them in Azure Monitor:
Metric alert example
```
Alert rule:  High CPU on VaultBank production database
Condition:   CPU percentage > 85% for 10 minutes
Severity:    Warning (Sev 2)
Action group:
  → Send email to dba-team@vaultbank.com
  → Send Teams notification to #database-alerts
  → Create incident in ServiceNow (webhook)
```
Log alert example (KQL-based)
```kql
// Alert when deadlocks exceed 5 in 15 minutes
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.SQL"
| where Category == "Deadlocks"
| where TimeGenerated > ago(15m)
| summarize DeadlockCount = count()
| where DeadlockCount > 5
```
Smart detection
Application Insights includes smart detection that automatically identifies:
- Sudden spikes in failure rates
- Abnormal response time increases
- Memory leak patterns
- Dependency call degradation
No manual alert configuration is needed: it learns your baseline and alerts on anomalies.
Exam tip: Metric alerts vs Log alerts
The exam may ask you to choose between metric alerts and log alerts:
- Metric alerts: evaluate numeric time-series data at regular intervals (every 1-5 minutes). Fast to fire, low latency. Use for CPU, DTU, connection counts.
- Log alerts: run a KQL query on a schedule. More flexible (any data in Log Analytics) but higher latency (5-15 minute minimum). Use for complex conditions like "more than 5 deadlocks involving the Orders table in 15 minutes."
When the question describes a simple threshold (CPU above 80%), use metric alerts. When it describes a complex condition requiring query logic, use log alerts.
Bringing it all together
A complete monitoring setup for a DAB-backed SQL application:
| Layer | Tool | What You See |
|---|---|---|
| User experience | Application Insights (browser SDK) | Page load time, client errors, user flows |
| API layer | Application Insights (DAB telemetry) | Request rate, response time, failures |
| Database engine | Azure Monitor metrics | CPU, DTU, IO, connections, deadlocks |
| Query performance | Log Analytics with Query Store | Slow queries, plan regressions, wait stats |
| Change pipelines | Azure Monitor for Event Hubs | CES throughput, consumer lag, errors |
| Alerts | Azure Monitor alert rules | Proactive notification before user impact |
Leo at SearchWave notices that the product API response time has doubled this week. Application Insights shows the DAB-to-SQL dependency is the bottleneck. What should Leo check next to find the root cause?
Ingrid at Nordic Shield needs to be alerted immediately when deadlocks occur on the Claims database. She needs the deadlock graph XML for diagnosis. Which alert type should she configure?
🎬 Video coming soon
Next up: External AI Models and SQL Server 2025 AI Features, covering calling AI models from T-SQL, vector search, and intelligent query processing (Domain 3).