Building a Text Analysis App
Put text analysis into practice. Build a lightweight Python app that extracts sentiment, entities, and key phrases from text using Azure AI Language in Foundry.
Building with text analysis
You learned what text analysis can do in Module 7. Now you'll build an app that actually does it.
Imagine DataFlow Corp's support team receives thousands of customer emails. Your app will automatically: detect the sentiment (happy or frustrated?), extract key phrases (what's it about?), and identify entities (which customer? which product?). All in a few lines of Python.
Two approaches to text analysis
| Feature | Azure AI Language (Foundry Tools) | GPT-4o (Chat Completions) |
|---|---|---|
| How it works | Dedicated NLP API with specific endpoints for each task | Send text to GPT-4o with instructions to analyse it |
| Output format | Structured JSON with scores and labels | Natural language response (flexible format) |
| Best for | High-volume processing, consistent structured output | Flexible analysis, combined with other tasks |
| Cost | Lower per-transaction for dedicated tasks | Higher per-token but more versatile |
| SDK | azure-ai-textanalytics | azure-ai-projects / openai |
Approach 1: Azure AI Language SDK
Sentiment analysis
```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://your-language-resource.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("your-key")
)

documents = [
    "The product is amazing! Best purchase I've ever made.",
    "Delivery was late and the packaging was damaged.",
    "I received my order on Tuesday."
]

result = client.analyze_sentiment(documents)
for doc in result:
    print(f"Sentiment: {doc.sentiment}, "
          f"Scores: pos={doc.confidence_scores.positive:.2f}, "
          f"neg={doc.confidence_scores.negative:.2f}")
```
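The synchronous Language endpoints cap how many documents one request may contain (the limit for sentiment analysis has been 10 documents per call; verify against the current service quotas). High-volume workloads therefore need to send batches. A minimal batching sketch, independent of the SDK:

```python
def batch(documents, size=10):
    """Yield successive slices of at most `size` documents.

    The Language service rejects requests that exceed its per-call
    document limit, so large workloads are processed in chunks.
    The default of 10 is an assumption; check the current quota.
    """
    for start in range(0, len(documents), size):
        yield documents[start:start + size]

# Usage with the client above (not executed here):
# for chunk in batch(all_emails, size=10):
#     for doc in client.analyze_sentiment(chunk):
#         ...
```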
Key phrase extraction
```python
result = client.extract_key_phrases(documents)
for doc in result:
    print(f"Key phrases: {', '.join(doc.key_phrases)}")
```
Entity recognition
```python
result = client.recognize_entities(documents)
for doc in result:
    for entity in doc.entities:
        print(f"  {entity.text} ({entity.category})")
```
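To tag each email the way the DataFlow scenario describes, the entities from a call like the one above can be grouped by category. A sketch using plain dicts shaped like the SDK's results (simulated here so it runs without a live resource):

```python
from collections import defaultdict

def group_entities(entities):
    """Group recognised entities by category, e.g. Person, Organization."""
    grouped = defaultdict(list)
    for entity in entities:
        grouped[entity["category"]].append(entity["text"])
    return dict(grouped)

# Simulated records in the shape of the SDK's CategorizedEntity results.
sample = [
    {"text": "Sarah Chen", "category": "Person"},
    {"text": "MediSpark", "category": "Organization"},
    {"text": "April 15th", "category": "DateTime"},
]
print(group_entities(sample))
```

In the real app you would build `sample` from `doc.entities` using `entity.text` and `entity.category` instead of dict lookups.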
Approach 2: Using GPT-4o for text analysis
```python
from azure.ai.inference import ChatCompletionsClient
from azure.core.credentials import AzureKeyCredential

# Endpoint, key, and deployment name are placeholders for your
# Foundry project's values.
chat = ChatCompletionsClient(
    endpoint="https://your-resource.services.ai.azure.com/models",
    credential=AzureKeyCredential("your-key")
)

response = chat.complete(
    model="gpt-4o-deployment",
    messages=[
        {"role": "system", "content": "Analyse the following text. Return JSON with: sentiment (positive/negative/neutral), key_phrases (list), entities (list with name and type)."},
        {"role": "user", "content": "Dr. Sarah Chen from MediSpark reviewed the order placed on April 15th for $4,500 worth of medical supplies. She was very pleased with the quality."}
    ],
    temperature=0
)

print(response.choices[0].message.content)
```
When to use which approach
Use Azure AI Language when:
- Processing large batches of text (thousands of documents)
- You need consistent, structured JSON output
- Cost per transaction matters
- You need specific NLP features like opinion mining or PII detection
Use GPT-4o when:
- You need flexible analysis combined with other tasks
- The analysis requires understanding complex context
- You want natural language explanations alongside the analysis
- You're already using GPT-4o for other features in the same app
🎬 Video walkthrough
Video coming soon: Building a Text Analysis App – AI-901 Module 17 (~14 min)
Knowledge Check
1. DataFlow Corp processes 50,000 customer emails daily and needs to tag each with sentiment (positive/negative/neutral) in a consistent JSON format. Which approach is most appropriate?
2. MediSpark wants their app to read patient feedback, determine sentiment, extract the doctor's name and clinic location, and generate a personalised response. Which approach makes most sense?
Next up: Multimodal: Responding to Speech – using AI to respond to spoken prompts.