Copilot Studio: Topics, Flows & Prompt Actions
Design conversation topics with fallback strategies, choose between NLP, CLU, and generative AI orchestration, and build agent flows and prompt actions in Copilot Studio.
How Copilot Studio agents think
Think of Copilot Studio as a restaurant with three types of order-taking:
Standard NLP is like a fast-food menu board — the customer picks from a fixed list. “I want a burger” maps to the burger topic. Simple, fast, predictable.
CLU (Conversational Language Understanding) is like a trained waiter who understands the menu deeply. “I’m in the mood for something grilled with cheese” still gets you a burger — because the waiter understands intent and entities, even when the customer doesn’t use the exact words.
Generative AI orchestration is like a chef who can cook anything. “I had this amazing dish in Tokyo with a crispy coating and a tangy sauce” — the chef improvises from knowledge and context. Powerful, but harder to predict exactly what you’ll get.
NLP vs CLU vs generative AI orchestration
| Feature | How It Works | Best For | Trade-offs |
|---|---|---|---|
| Standard NLP | Trigger phrases on each topic — keyword matching with basic natural language understanding | Simple, predictable scenarios with well-defined user intents. Internal tools where users know the vocabulary | Fast to set up, deterministic. But brittle — struggles with unexpected phrasing or synonyms |
| Azure CLU | Trained intent classifier and entity extractor — you provide labelled examples, CLU learns the patterns | Domain-specific language where standard NLP fails. Regulated industries needing auditable intent classification | Higher accuracy for trained intents. Requires labelled training data and ongoing model updates as language evolves |
| Generative AI orchestration | LLM dynamically understands intent and either routes to a topic or generates a response from knowledge sources | Open-ended conversations, broad knowledge domains, scenarios where users phrase requests unpredictably | Most flexible and handles novel phrasing. But less predictable — needs guardrails, content safety, and testing for hallucination |
Exam tip: the orchestration decision
The exam tests whether you can recommend the right approach for a given scenario:
- “Users ask the same 15 questions in predictable ways” → Standard NLP with trigger phrases
- “Users describe problems in domain-specific jargon that varies by region” → CLU with trained intents
- “Users ask open-ended questions across a large knowledge base” → Generative AI orchestration
- “The solution must provide auditable intent classification for compliance” → CLU (deterministic, logged)
- “The agent needs to handle both structured workflows AND open Q&A” → Hybrid — generative orchestration for routing, with topics for structured flows
The hybrid approach is often the right answer on the exam. Generative orchestration handles the “understanding” layer, and structured topics handle the “action” layer.
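The hybrid routing decision can be sketched in a few lines. This is a hypothetical illustration of the pattern — the topic names, confidence scores, and thresholds are assumptions for the sketch, not Copilot Studio internals:

```python
# Illustrative hybrid routing: structured topic first, generative answer
# second, fallback last. Thresholds and names are assumptions.

STRUCTURED_TOPICS = {"change plan", "dispute charge", "track order"}

def route(intent: str, topic_confidence: float, kb_confidence: float) -> str:
    """Route a classified intent to a structured topic, a generative
    knowledge-base answer, or the fallback topic."""
    if intent in STRUCTURED_TOPICS and topic_confidence >= 0.7:
        return f"topic:{intent}"      # structured agent flow handles the action
    if kb_confidence >= 0.5:
        return "generative_answer"    # ground a response in the knowledge base
    return "fallback"                 # clarify or escalate

print(route("change plan", 0.9, 0.2))  # topic:change plan
print(route("unknown", 0.1, 0.8))      # generative_answer
```

The ordering matters: structured topics are checked first so that deterministic workflows win when the intent is clear, and the generative path only fires when no flow matches.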
Topic types in Copilot Studio
Topics are the building blocks of a Copilot Studio agent. Each topic handles a specific user intent or system event.
| Topic Type | Purpose | Example |
|---|---|---|
| Custom topics | Handle specific user intents you define — the core of your agent design | "Track my order," "Reset my password," "Request a refund" |
| System topics | Built-in topics that handle common events (greeting, goodbye, escalation, error) | Greeting topic fires when a user starts a conversation |
| Fallback topic | Catches any message that no other topic matches — your safety net | "I'm not sure I understand. Would you like to speak with a human?" |
Fallback design is critical. A poorly designed fallback frustrates users; a well-designed one does one of three things:
- Routes to generative answers — “I don’t have a specific workflow for that, but let me check our knowledge base…”
- Asks a clarifying question — “I can help with orders, returns, or account issues. Which one?”
- Escalates gracefully — “Let me connect you with someone who can help with that.”
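The three fallback strategies above can be combined into one handler. This is a minimal sketch under assumed inputs (a knowledge-base hit flag and a retry counter) — it is not a Copilot Studio API:

```python
# Illustrative fallback handler combining the three strategies:
# generative answer, clarifying question, and graceful escalation.

def fallback_response(kb_hit: bool, categories: list, attempts: int) -> str:
    """Pick a fallback strategy; escalate after repeated misses
    rather than looping the user through the same dead end."""
    if attempts >= 2:
        return "Let me connect you with someone who can help with that."
    if kb_hit:
        return ("I don't have a specific workflow for that, "
                "but let me check our knowledge base.")
    options = ", ".join(categories)
    return f"I can help with {options}. Which one?"

print(fallback_response(False, ["orders", "returns", "account issues"], 0))
```

Counting attempts is the key design choice: clarifying questions are only helpful the first time or two, after which escalation is kinder than another reprompt.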
Agent flows: multi-step workflows
Agent flows are the multi-step workflows inside Copilot Studio topics. They define what the agent does after it understands the user’s intent.
Design elements of an agent flow:
- Trigger — what starts the flow (user message, event, schedule)
- Conditions — branching logic based on data or user responses
- Actions — calls to connectors, Power Automate flows, or external APIs
- Variables — store and pass data between steps
- Messages — responses sent back to the user at each stage
- Escalation — hand off to a human agent when needed
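The elements above can be modelled as data to make the flow structure concrete. This is a hypothetical sketch of a flow runner — the step names and stubbed actions are invented for illustration, not how Copilot Studio executes flows internally:

```python
# Minimal model of an agent flow: each step has an action (a stubbed
# connector/API call) and an optional condition gating whether it runs.
# Variables are threaded through the steps, as in the list above.

from dataclasses import dataclass
from typing import Callable

@dataclass
class FlowStep:
    name: str
    action: Callable[[dict], dict]                    # connector / API call (stubbed)
    condition: Callable[[dict], bool] = lambda v: True

def run_flow(steps: list, variables: dict) -> dict:
    """Run steps in order; a step whose condition fails is skipped."""
    for step in steps:
        if step.condition(variables):
            variables = step.action(variables)
    return variables

# "Track my order" sketch: authenticate, then look up the order.
flow = [
    FlowStep("authenticate", lambda v: {**v, "authenticated": True}),
    FlowStep("lookup_order",
             lambda v: {**v, "status": "shipped"},
             condition=lambda v: v.get("authenticated", False)),
]
result = run_flow(flow, {"order_id": "A123"})
print(result["status"])
```

Note how the lookup step's condition depends on a variable set by the earlier step — that is the conditions-plus-variables pattern from the list above.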
Scenario: Natalie designs a hybrid agent for a client's support portal
Natalie Torres (Cloudbridge Partners) designs an agent for a telecom client’s customer support portal. The agent needs to handle both structured workflows (plan changes, billing inquiries) and open-ended product questions.
Design decisions:
Orchestration: Generative AI orchestration as the primary router — it understands user intent from natural language and either triggers a structured topic or generates an answer from the knowledge base.
Structured topics (agent flows):
- “Change my plan” → Authenticate user → Show current plan → Present options → Confirm change → Call billing API → Confirm to user
- “Dispute a charge” → Authenticate → Pull billing history → Collect dispute details → Create case in D365 → Provide case number
Generative fallback: For open-ended questions (“Does 5G work in rural areas?”), the orchestrator searches the knowledge base and generates a contextual response.
Fallback topic: If confidence is low on both structured and generative paths: “I want to make sure I give you the right answer. Let me connect you with a specialist.”
Zoe Park (PM) tracks the design: 40% of interactions hit structured topics, 50% are handled by generative answers, and 10% reach the fallback escalation.
Prompt actions
A prompt action is a custom AI-powered step within a topic that calls an LLM with specific instructions. It lets you inject AI reasoning at any point in an agent flow.
When to use prompt actions:
- Summarise — condense a long document or conversation into key points
- Classify — categorise user input (sentiment, urgency, product category)
- Extract — pull structured data from unstructured text (dates, amounts, entities)
- Generate — create personalised responses, recommendations, or content
- Transform — rewrite text for a different audience or format
Design considerations:
- Prompt actions use tokens — design prompts to be concise to control cost
- Output should be validated before passing to downstream steps
- Include guardrail instructions in the prompt (“respond only about our products,” “do not provide medical advice”)
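Putting those three considerations together — a concise prompt, guardrail instructions inside it, and output validation before downstream use — looks roughly like this. `call_llm` is a stand-in for whatever model endpoint the agent uses; it is not a real Copilot Studio function:

```python
# Hedged sketch of a prompt action with guardrails and validation.
# `call_llm` is a hypothetical stand-in for the model call.

ALLOWED = {"Low", "Medium", "High"}

def classify_urgency(message: str, call_llm) -> str:
    prompt = (
        "Classify this support message as Low, Medium, or High urgency. "
        "Respond with exactly one word. "
        "Do not provide medical advice.\n\n" + message
    )
    raw = call_llm(prompt).strip()
    # Validate before passing downstream — reject anything off-menu
    # and fall back to a safe default instead of routing on junk.
    return raw if raw in ALLOWED else "Medium"

# Stubbed model calls for demonstration:
print(classify_urgency("My service is completely down!", lambda p: "High"))   # High
print(classify_urgency("hello", lambda p: "banana"))                          # Medium
```

Constraining the output to a known set is what makes the classification safe to use for routing; a free-text answer would need fuzzier handling.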
| Prompt Action Step | Example Prompt | Output Used For |
|---|---|---|
| Classify urgency | "Classify this support message as Low, Medium, or High urgency based on these criteria…" | Routing to the right queue |
| Summarise case history | "Summarise this customer's case history in 3 bullet points for the support agent" | Agent handoff context |
| Generate recommendation | "Based on this customer's usage data, recommend the best plan from our current offerings" | Personalised upsell suggestion |
| Extract entities | "Extract the product name, order number, and issue description from this message" | Populating a case form |
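For the extraction case in particular, validating the model's output before it populates a case form prevents half-filled or malformed records. This sketch assumes the prompt asks for JSON with three fields — the field names are illustrative, not a fixed Copilot Studio schema:

```python
# Illustrative validation of extraction output before form population.
# Field names are assumptions for the sketch.

import json
from typing import Optional

REQUIRED = {"product_name", "order_number", "issue_description"}

def parse_extraction(llm_output: str) -> Optional[dict]:
    """Parse the model's JSON and check required fields. Returning None
    lets the flow re-prompt or fall back instead of filing a bad case."""
    try:
        data = json.loads(llm_output)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict) or not REQUIRED.issubset(data):
        return None
    return data

good = ('{"product_name": "Router X2", "order_number": "A123", '
        '"issue_description": "no signal"}')
print(parse_extraction(good)["order_number"])   # A123
print(parse_extraction("not json"))             # None
```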
Knowledge check
Natalie's telecom client has a customer support agent built with standard NLP trigger phrases. Customers frequently complain that the agent responds with 'I don't understand' — even when asking valid questions using slightly different words than the trigger phrases. What should Natalie recommend?
Ravi is designing an agent flow for a client's return processing. The flow needs to: authenticate the customer, look up the order, determine if the item is eligible for return based on policy, and either process the return or explain why it's ineligible. At which step should Ravi use a prompt action?
A Copilot Studio agent uses generative AI orchestration. A user sends a message that could match either a structured 'billing inquiry' topic or a generative knowledge-base answer. What determines which path the agent takes?
Next up: Power Apps, WAF & Data Processing — applying the Well-Architected Framework to intelligent workloads, embedding AI in Power Apps, and designing data processing pipelines for grounding.