Securing AI and API Endpoints
Lock down your Data API Builder endpoints, AI model connections, and MCP servers with authentication, role-based authorization, managed identity, and CORS policies.
Every endpoint is a door: lock them all
Think of a hotel.
Your database is the vault in the basement. Data API Builder (REST and GraphQL) is the front desk: guests make requests and get information. AI model endpoints are like room service delivered from an outside restaurant. MCP endpoints are the staff-only service corridors connecting different systems.
Each door needs different security: the front desk needs guest ID checks (authentication), certain floors need keycard access (authorization by role), and the service corridors need verified staff badges (managed identity). You would never leave the vault door propped open, yet that is exactly what an unsecured endpoint looks like.
Securing DAB REST and GraphQL endpoints
Authentication providers
DAB supports multiple authentication providers configured in the runtime section:
"runtime": {
"host": {
"authentication": {
"provider": "AzureAD",
"jwt": {
"audience": "api://searchwave-dab",
"issuer": "https://login.microsoftonline.com/TENANT_ID/v2.0"
}
}
}
}
| Provider | How It Works | Best For |
|---|---|---|
| AzureAD (Entra ID) | Validates JWT tokens from Microsoft Entra ID | Enterprise apps, Microsoft ecosystem |
| EasyAuth | App Service/Static Web Apps built-in authentication | Managed hosting with built-in auth |
| Custom JWT | Any identity provider that issues standard JWTs | Third-party IDPs (Auth0, Okta) |
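For comparison with the AzureAD example above, here is a minimal sketch of the EasyAuth configuration. It assumes deployment to Azure Static Web Apps, where the hosting platform validates the user before the request reaches DAB, so only the provider name (`StaticWebApps`) is needed:

```json
"runtime": {
  "host": {
    "authentication": {
      "provider": "StaticWebApps"
    }
  }
}
```

No `jwt` section is required here because token validation happens in the hosting platform, not in DAB itself.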
Role-based authorization
After authentication, DAB checks what the user's role is allowed to do. Roles are defined per entity:
"Product": {
"permissions": [
{
"role": "anonymous",
"actions": [
{
"action": "read",
"fields": {
"include": ["id", "name", "price", "description"],
"exclude": ["internalCost", "supplierNotes"]
}
}
]
},
{
"role": "editor",
"actions": [
{ "action": "read" },
{ "action": "create" },
{
"action": "update",
"policy": {
"database": "@item.ownerId eq @claims.userId"
}
}
]
},
{
"role": "admin",
"actions": [{ "action": "*" }]
}
]
}
Key concepts:
- anonymous: no authentication required (public endpoints)
- authenticated: any valid token, regardless of role claim
- Custom roles: matched against role claims in the JWT
- Field-level security: include/exclude specific columns per role
- Database policies: row-level filters combining `@item.<column>` (database column values) and `@claims.<claim>` (token claim values)
Exam tip: Anonymous vs Authenticated vs Custom roles
These three tiers are tested frequently:
- anonymous: no token needed. Use for public-facing read-only data (product catalog, public API).
- authenticated: any valid JWT. Use when you need to know the user's identity but do not care about their specific role.
- Custom roles (e.g., "editor", "admin"): matched against claims in the token. Use for fine-grained permissions.
A common exam trap: setting all entities to "authenticated" when some should be "anonymous" (public data) or role-specific (admin operations). Match the permission level to the sensitivity of the data.
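Applied to that trap, here is a sketch of matching tiers to sensitivity on a single entity (the entity name and the exact role split are illustrative, not prescribed by the exam):

```json
"Product": {
  "permissions": [
    { "role": "anonymous",     "actions": [{ "action": "read" }] },
    { "role": "authenticated", "actions": [{ "action": "create" }] },
    { "role": "admin",         "actions": [{ "action": "*" }] }
  ]
}
```

Public reads stay anonymous, writes require a valid token, and destructive operations are reserved for the admin role. Note that DAB evaluates a request in a custom role only when the client sends the matching `X-MS-API-ROLE` request header; otherwise authenticated requests fall back to the authenticated role.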
CORS configuration
Cross-Origin Resource Sharing controls which frontends can call your API:
"runtime": {
"host": {
"cors": {
"origins": [
"https://searchwave.com",
"https://admin.searchwave.com"
],
"allow-credentials": true
}
}
}
Never use "origins": ["*"] in production. It lets scripts on any website call your API from a user's browser and read the responses, opening the door to data exfiltration and cross-site attacks.
Securing AI model endpoints
SQL Server 2025 and Azure SQL can call external AI models directly using sp_invoke_external_rest_endpoint and the new AI integration features. These external calls need security too.
Managed identity for model access
Managed identity eliminates secrets entirely. Your SQL database authenticates to the AI service using its Azure identity: no API keys, no connection strings, no passwords to rotate.
```sql
-- Azure SQL calling Azure OpenAI with managed identity
-- The database's managed identity needs the 'Cognitive Services User' role
-- on the Azure OpenAI resource
DECLARE @response NVARCHAR(MAX);
EXEC sp_invoke_external_rest_endpoint
    @url = 'https://searchwave-ai.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-06-01',
    @method = 'POST',
    @credential = [https://cognitiveservices.azure.com],
    @payload = '{"messages":[{"role":"user","content":"Summarise this product review"}]}',
    @response = @response OUTPUT;
```
The @credential parameter tells SQL to use the database-scoped credential mapped to the managed identity. No secret appears anywhere in the code.
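That database-scoped credential has to exist before the call. A sketch of creating it, following the documented pattern of naming the credential after the resource URL and using `'Managed Identity'` as the identity:

```sql
-- Credential named after the Azure OpenAI token audience URL;
-- sp_invoke_external_rest_endpoint resolves the credential by this name
CREATE DATABASE SCOPED CREDENTIAL [https://cognitiveservices.azure.com]
WITH IDENTITY = 'Managed Identity',
SECRET = '{"resourceid":"https://cognitiveservices.azure.com"}';
```

This is a one-time setup step per database; after it exists, no secret ever appears in calling code.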
When you must use API keys
Some external services do not support managed identity. Store keys in Azure Key Vault and reference them through database-scoped credentials:
```sql
-- Create a credential with a key retrieved from Key Vault
CREATE DATABASE SCOPED CREDENTIAL [ExternalAIKey]
WITH IDENTITY = 'HTTPEndpointHeaders',
SECRET = '{"api-key":"RETRIEVED_FROM_KEYVAULT"}';
```
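Calling the endpoint with an API-key credential then mirrors the managed identity example. One caveat worth hedging: the documentation for sp_invoke_external_rest_endpoint expects the credential name to match the URL of the endpoint it secures, so in practice a URL-named credential may be required rather than a friendly name. A self-contained sketch under that assumption (the hostname is hypothetical):

```sql
-- Hypothetical third-party endpoint; the credential name matches its URL
CREATE DATABASE SCOPED CREDENTIAL [https://api.external-ai.example]
WITH IDENTITY = 'HTTPEndpointHeaders',
SECRET = '{"api-key":"RETRIEVED_FROM_KEYVAULT"}';
GO

-- The key is attached as an HTTP header automatically via the credential
DECLARE @response NVARCHAR(MAX);
EXEC sp_invoke_external_rest_endpoint
    @url = 'https://api.external-ai.example/v1/classify',
    @method = 'POST',
    @credential = [https://api.external-ai.example],
    @payload = '{"text":"Classify this insurance claim by severity"}',
    @response = @response OUTPUT;
```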
| Auth Method | Secrets Stored? | Rotation | Best For |
|---|---|---|---|
| Managed Identity | None; Azure handles tokens automatically | Automatic (no action needed) | Azure-to-Azure service calls |
| API Key in Key Vault | Key Vault stores the secret | Manual rotation (automate with a Key Vault rotation policy) | Third-party AI services without Entra support |
| API Key in code | In source code or config | Manual and risky | Never; this is always the wrong approach |
Scenario: Ingrid secures the insurance AI endpoint at Nordic Shield
Nordic Shield Insurance uses an Azure OpenAI model to classify insurance claims by severity. The model receives claim descriptions, potentially containing personal health information.
Ingrid implements layered security: (1) managed identity from Azure SQL to Azure OpenAI (no API keys), (2) Azure Private Link so traffic never leaves the Azure backbone network, (3) Azure OpenAI content filtering to block prompt injection attempts, and (4) a database policy in DAB ensuring only the claims-processing service role can trigger AI classification.
If the exam describes a scenario with sensitive data going to an AI model, think: managed identity + private networking + content filtering.
Securing MCP endpoints
Model Context Protocol (MCP) endpoints connect AI agents to tools and data sources. They are powerful, which is exactly why they need strong security.
MCP security fundamentals
| Layer | What To Secure | How |
|---|---|---|
| Transport | Data in transit | HTTPS only, TLS 1.2+ |
| Authentication | Who is calling | OAuth 2.0 tokens, API keys, managed identity |
| Authorization | What they can do | Scope-based permissions, tool-level access control |
| Input validation | What they send | Schema validation, input sanitisation, rate limiting |
| Network | Where calls come from | Private endpoints, IP allowlists, VNet integration |
Key practices
- Use HTTPS for all MCP endpoints: never expose MCP over plain HTTP, even in development
- Authenticate every request: MCP servers should validate tokens before executing any tool
- Limit tool exposure: only expose the tools each client actually needs
- Log all MCP calls: every tool invocation should be logged for audit and debugging
- Rate limit: prevent abuse by capping requests per client per time window
Exam tip: Managed identity is almost always the answer
When the exam describes a service-to-service authentication scenario in Azure, managed identity is almost always the correct answer. It eliminates secrets, handles token rotation automatically, and follows the principle of least privilege.
The only exceptions: third-party services outside Azure that do not support Entra ID authentication, and scenarios specifically requiring certificate-based authentication.
Putting it all together
Here is the security surface for a typical DAB deployment:
```text
Internet → CORS filter → Authentication provider → Role check → Field filter → Database
                                                       ↓
                                             AI model endpoint
                                          (managed identity auth)
                                                       ↓
                                             MCP tool endpoints
                                         (OAuth + scope validation)
```
Each layer reduces the attack surface. Remove any layer and you create a gap:
- No CORS: any website can call your API
- No authentication: anyone can read your data
- No role check: all authenticated users have admin access
- No field filter: internal data leaks to public endpoints
Ingrid at Nordic Shield needs to call an Azure OpenAI endpoint from Azure SQL Database to classify insurance claims. The claims contain personal data. Which authentication approach should she use?
A developer configures a DAB entity with permissions set to 'authenticated' for all actions. Users report they can see internal pricing fields they should not have access to. What is the best fix?
🎬 Video coming soon
Next up: Change Detection with CES, CDC, and Change Tracking, where you react to data changes using streaming, capture, and tracking mechanisms.