Foundry Tools & Code-First Solutions
Microsoft Foundry provides a suite of AI tools for building custom solutions. Learn how to propose the right Foundry tool for each requirement, design code-first generative pages, and use agent feeds to surface AI intelligence in business apps.
Beyond no-code: when you need Foundry
Copilot Studio is like a kitchen where anyone can cook a meal using pre-made ingredients. Foundry is like a professional chef's kitchen where you can make anything from scratch – but you need to know how to cook.
Foundry Tools are the specialised equipment in that chef's kitchen: a vector search engine for finding similar documents, prompt flows for orchestrating complex AI pipelines, model catalogues for choosing the right AI model, and evaluation tools for measuring whether your AI actually works.
Code-first generative pages and agent feeds let developers embed AI-generated content directly into Power Apps and other business applications – so the AI intelligence surfaces where users already work.
Foundry Tools: matching requirements to capabilities
| Feature | What It Does | When to Propose It | Complexity |
|---|---|---|---|
| Model catalogue | Browse and deploy foundation models (GPT, Phi, Llama, Claude, etc.) | When the solution needs a specific model capability not available in Copilot Studio | Low – deploy from catalogue |
| Model router | Intelligently routes prompts to the best model for each request | When cost optimisation across multiple AI tasks is important | Low – deploy and configure routing mode |
| Prompt flows (classic) | Orchestrate multi-step AI pipelines with branching and tool calling. Note: prompt flow content in Microsoft docs is increasingly marked as Foundry (classic) | When the AI process has multiple stages (retrieve, reason, validate, respond) – consider whether newer Foundry experiences better fit your needs | Medium – visual or code-based flow design |
| Retrieval with AI Search | Semantic and vector search over large document collections using Azure AI Search | When RAG needs to search across thousands of documents with meaning-based matching | Medium – requires indexing pipeline and AI Search configuration |
| Evaluation | Measure AI quality (groundedness, relevance, coherence, safety) | When you need to prove that AI responses meet quality standards before deployment | Medium – requires test datasets and metrics definition |
| Content Safety | Detect and filter harmful content in AI inputs and outputs | Always – every production AI solution needs content safety filters | Low – configure filters on model deployments |
| Tracing and monitoring | Observe model calls, latency, token usage, and errors in production | When you need visibility into how deployed models perform and where failures occur | Low – enable on model deployments |
| Fine-tuning | Customise a foundation model with your domain-specific data | When RAG alone doesn't achieve the required accuracy for domain-specific reasoning | High – requires labelled training data and compute |
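The "Retrieval with AI Search" row hinges on vector search: documents and queries are embedded as vectors, and similarity is measured geometrically rather than by keyword overlap. Here is a minimal sketch of the ranking step only – the toy three-dimensional embeddings and the `vector_search` helper are invented for illustration; in practice an embedding model produces the vectors and Azure AI Search ranks at index scale:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "index": document id -> embedding. Real embeddings have hundreds
# of dimensions and come from an embedding model, not hand-written values.
index = {
    "spec-001": [0.90, 0.10, 0.00],
    "spec-002": [0.10, 0.80, 0.30],
    "spec-003": [0.85, 0.20, 0.10],
}

def vector_search(query_embedding, index, top_k=2):
    """Rank documents by embedding similarity to the query."""
    scored = [(doc_id, cosine_similarity(query_embedding, emb))
              for doc_id, emb in index.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

# A query embedding close to spec-001 ranks it first even though no
# keywords were compared - this is what "meaning-based matching" means.
results = vector_search([0.88, 0.15, 0.05], index)
```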
Scenario: Dev builds an AI pipeline in Foundry for Vanguard
Dev Patel (AI Platform Engineer at Vanguard Financial Group) designs a credit risk assessment pipeline:
Step 1 – Model catalogue: Deploys GPT-4 for complex reasoning and Phi-3 for simple classifications
Step 2 – AI Search with vector search: Creates a semantic index over 10 years of credit decision documents and regulatory guidelines using Azure AI Search
Step 3 – Prompt flow (classic): Orchestrates the pipeline:
- Receive loan application data
- Search AI Search index for similar historical cases
- Call GPT-4 with application + historical context + regulatory rules
- Validate output against compliance rules (deterministic check)
- Return risk score with reasoning
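The five stages above can be sketched as plain orchestration code. Everything here is illustrative – `search_index`, `call_model`, and the returned fields are placeholder stand-ins, not real Foundry or AI Search APIs – but it shows why the deterministic compliance check sits between the model call and the response:

```python
def search_index(application):
    # Placeholder: would query the Azure AI Search index for similar
    # historical credit decisions.
    return ["case-1042: approved, similar income profile"]

def call_model(application, context, rules):
    # Placeholder: would call the deployed GPT-4 endpoint with the
    # application, retrieved context, and regulatory rules.
    return {"risk_score": 0.32,
            "reasoning": "Profile resembles case-1042",
            "citations": ["case-1042"]}

def passes_compliance(result):
    # Deterministic check: score must be in range and cite at least
    # one source - no AI involved in this stage.
    return 0.0 <= result["risk_score"] <= 1.0 and len(result["citations"]) > 0

def assess_credit_risk(application, regulatory_rules):
    historical = search_index(application)                           # retrieve
    result = call_model(application, historical, regulatory_rules)   # reason
    if not passes_compliance(result):                                # validate
        raise ValueError("Model output failed compliance validation")
    return result                                                    # respond

outcome = assess_credit_risk({"amount": 25_000}, ["rule-7"])
```

The key design choice is that validation is deterministic code, not another model call: a regulated output like a risk score should never pass through an AI-only gate.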
Step 4 – Evaluation: Tests the pipeline against 500 historical loan decisions. Measures groundedness (are citations from real documents?), accuracy (does the risk score match expert assessment?), and safety (no biased language?).
Step 5 – Content Safety: Configures prompt shields and jailbreak detection to prevent manipulation of the risk assessment.
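Two of the Step 4 metrics can be computed without any special tooling. A hedged sketch – groundedness as "do the citations point at real documents?" and accuracy as "is the risk score within tolerance of the expert label?" – using made-up response records rather than Foundry's actual evaluators:

```python
def groundedness(response, document_ids):
    """Fraction of cited documents that actually exist in the corpus."""
    if not response["citations"]:
        return 0.0
    hits = sum(1 for c in response["citations"] if c in document_ids)
    return hits / len(response["citations"])

def accuracy(responses, expert_labels, tolerance=0.1):
    """Share of risk scores within tolerance of the expert assessment."""
    matches = sum(
        1 for r, label in zip(responses, expert_labels)
        if abs(r["risk_score"] - label) <= tolerance
    )
    return matches / len(responses)

corpus = {"doc-1", "doc-2"}
responses = [
    {"risk_score": 0.30, "citations": ["doc-1"]},
    {"risk_score": 0.70, "citations": ["doc-9"]},  # hallucinated citation
]
expert_labels = [0.35, 0.40]

grounded_scores = [groundedness(r, corpus) for r in responses]
overall_accuracy = accuracy(responses, expert_labels)
```

Running the same metrics over all 500 historical decisions turns "does the AI actually work?" into numbers you can gate a deployment on.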
Code-first generative pages
Generative pages are AI-powered pages in Power Apps that display dynamically generated content – summaries, recommendations, analyses – instead of static data views.
Code-first means developers write the logic that generates the content, typically using:
- Power Apps component framework (PCF) controls with AI backend calls
- Custom connectors that call Foundry APIs
- Dataverse plugins that trigger AI processing on data changes
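To make the pattern concrete, here is a small sketch of the grounding step behind a customer-summary page: before calling the Foundry model, the custom connector or plugin assembles the record's data into a prompt. The field names and the `build_summary_prompt` helper are invented for illustration, not a real Power Platform API:

```python
def build_summary_prompt(customer):
    """Assemble the record data a generative customer-summary page
    would send to the model. Schema is hypothetical."""
    lines = [f"Customer: {customer['name']}"]
    lines += [f"Open case: {case}" for case in customer["open_cases"]]
    lines.append(f"Last interaction: {customer['last_interaction']}")
    return "\n".join(lines)

# Illustrative record, as it might arrive from Dataverse.
customer = {
    "name": "Contoso Ltd",
    "open_cases": ["CAS-0012: delayed shipment"],
    "last_interaction": "support call about replacement parts",
}
prompt = build_summary_prompt(customer)
# The PCF control would send `prompt` to the model endpoint and render
# the generated summary in the page.
```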
| Use Case | What the Page Shows | How It's Built |
|---|---|---|
| Customer summary | AI-generated overview of a customerβs history, open cases, and recommended next actions | PCF control calls Foundry model via custom connector |
| Deal risk analysis | AI assessment of deal probability, risks, and recommended actions | Prompt flow returns structured analysis to a canvas app |
| Inventory forecast | AI-predicted demand for the next 30 days with confidence intervals | Foundry model output rendered in a model-driven app page |
Exam tip: generative pages vs static reports
The exam may ask when to recommend generative pages over traditional reporting:
- Static data, standard visuals – use Power BI or standard app views
- Dynamic AI-generated insights, personalised to the user – use generative pages
- Real-time recommendations based on current context – use generative pages
- Historical trend analysis – use Power BI
Generative pages add value when the content needs AI reasoning, not just data visualisation.
Agent feeds in apps
An agent feed surfaces agent-generated intelligence as a feed within a business application – similar to a social media feed, but with AI-generated cards showing insights, alerts, and recommendations.
Design patterns:
- Proactive insights: "3 customers are at risk of churn based on recent support interactions"
- Action suggestions: "Reorder brake pads – current stock covers only 5 days at current demand"
- Status updates: "The supplier communication agent resolved 12 PO discrepancies today"
- Learning moments: "Based on similar deals, adding a product demo increases win rate by 25%"
Scenario: Ravi builds an agent feed for a retail client
Ravi Krishnan (Natalie's senior developer at Cloudbridge Partners) implements an agent feed in a D365 Sales app:
Feed items:
- AI-generated deal summaries each morning (from Sales in M365 Copilot)
- Competitor mention alerts when a customer emails about a competitor product
- Recommended next actions based on deal stage and historical win patterns
- Weekly pipeline health summary with AI-identified risks
Technical implementation:
- Foundry prompt flow generates insights on a schedule
- Results stored in Dataverse as feed items
- Power Apps model-driven page displays the feed with card-based UI
- Users can dismiss, act on, or share each feed item
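Ravi's implementation boils down to a simple data shape: AI-generated items written to storage on a schedule, plus per-item state such as dismissal. A minimal in-memory sketch – the `FeedItem` schema is illustrative, not the actual Dataverse table the feed would use:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class FeedItem:
    """One AI-generated card in the agent feed (hypothetical schema)."""
    kind: str          # e.g. "insight", "alert", "action", "status"
    message: str
    created_on: date
    dismissed: bool = False

class AgentFeed:
    """Stands in for the Dataverse table the model-driven page reads."""
    def __init__(self):
        self.items = []

    def publish(self, kind, message):
        # In Ravi's design, a scheduled Foundry prompt flow calls this.
        self.items.append(FeedItem(kind, message, date.today()))

    def active(self):
        """Items still shown in the card-based UI."""
        return [item for item in self.items if not item.dismissed]

feed = AgentFeed()
feed.publish("alert", "Competitor mentioned in email from a customer")
feed.publish("insight", "3 deals at risk this week")
feed.items[0].dismissed = True  # user dismisses the first card
```

Separating generation (scheduled flow writes rows) from display (the page reads rows) means the feed stays responsive even when insight generation is slow or fails.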
Flashcards
Knowledge check
Kai's manufacturing client needs to search across 50,000 technical specifications to find products similar to a customer's description – even when the customer uses different terminology than the specifications. Which Foundry capability should Kai propose?
A D365 Sales user opens a customer record and sees an AI-generated summary showing recent interactions, open deals, and recommended next actions – updated in real time based on the latest data. What is this an example of?
🎬 Video coming soon
Next up: Copilot Studio: Topics, Flows & Prompt Actions – designing conversation flows, choosing between NLP and generative AI orchestration, and creating prompt actions in Copilot Studio.