
AZ-400 Study Guide

Domain 1: Design and Implement Processes and Communications

  • Work Item Tracking: Boards, GitHub & Flow
  • DevOps Metrics: Dashboards That Drive Decisions
  • Collaboration: Wikis, Teams & Release Notes

Domain 2: Design and Implement a Source Control Strategy

  • Branching Strategies: Trunk-Based, Feature & Release
  • Pull Requests: Policies, Protections & Merge Rules
  • Repository Management: LFS, Permissions & Recovery

Domain 3: Design and Implement Build and Release Pipelines

  • Package Management: Feeds, Versioning & Upstream
  • Testing Strategy: Quality Gates & Release Gates
  • Test Implementation: Code Coverage & Pipeline Tests
  • Azure Pipelines: YAML from Scratch
  • GitHub Actions: Workflows from Scratch
  • Pipeline Agents: Self-Hosted, Hybrid & VM Templates
  • Multi-Stage Pipelines: Templates, Variables & Approvals
  • Deployment Strategies: Blue-Green, Canary & Ring
  • Safe Rollouts: Slots, Dependencies & Hotfix Paths
  • Deployment Implementations: Containers, Scripts & Databases
  • Infrastructure as Code: ARM vs Bicep vs Terraform
  • IaC in Practice: Desired State & Deployment Environments
  • Pipeline Maintenance: Health, Migration & Retention

Domain 4: Develop a Security and Compliance Plan

  • Pipeline Identity: Service Principals, Managed IDs & OIDC
  • Authorization & Access: GitHub Roles & Azure DevOps Security
  • Secrets & Secure Pipelines: Key Vault & Workload Federation
  • Security Scanning: GHAS, Defender & Dependabot

Domain 5: Implement an Instrumentation Strategy

  • Monitoring for DevOps: Azure Monitor & App Insights
  • Metrics & KQL: Analysing Telemetry & Traces


Domain 3: Design and Implement Build and Release Pipelines (~12 min read)

Test Implementation: Code Coverage & Pipeline Tests

Implement automated tests in CI/CD pipelines. Configure test tasks and agents, publish test results, and analyse code coverage with Cobertura, JaCoCo, and Istanbul.

Implementing tests in pipelines

☕ Simple explanation

Think of a factory quality control line.

Every product on the assembly line passes through inspection stations. Each station has specific testing equipment — one measures weight, another checks colour, a third stress-tests durability. The inspector logs results on a shared dashboard and stamps “PASS” or “FAIL” on each product.

CI/CD pipelines work the same way. Each test task is an inspection station. It runs specific tests (unit, integration, security), records results in a standard format (JUnit, Cobertura), and publishes them to the pipeline dashboard. If any station fails, the product — your code — doesn’t ship.

Implementing tests in a pipeline involves three concerns:

  • Test execution — choosing the right task or action to run your test framework (dotnet test, npm test, pytest, VSTest)
  • Result publishing — converting test output into a standard format so the pipeline can display pass/fail/skip counts and trend them over time
  • Code coverage — measuring which lines, branches, and functions your tests actually exercise, then enforcing minimum thresholds
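Wired together in Azure Pipelines, the three concerns become three consecutive steps. A minimal sketch for a .NET project (the file globs are illustrative):

```yaml
steps:
# 1. Test execution — run the framework and collect coverage
- task: DotNetCoreCLI@2
  inputs:
    command: 'test'
    arguments: '--collect:"XPlat Code Coverage"'

# 2. Result publishing — surface pass/fail/skip counts in the Test tab
- task: PublishTestResults@2
  condition: succeededOrFailed()   # publish even when tests fail
  inputs:
    testResultsFormat: 'VSTest'
    testResultsFiles: '**/*.trx'

# 3. Code coverage — surface line/branch coverage in the Code Coverage tab
- task: PublishCodeCoverageResults@2
  inputs:
    summaryFileLocation: '$(Agent.TempDirectory)/**/coverage.cobertura.xml'
```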

The AZ-400 exam tests both Azure Pipelines tasks and GitHub Actions steps for all three. You need to know which tasks produce coverage, which formats are supported, and how to configure distributed testing for large test suites.

Test tasks in Azure Pipelines

Azure Pipelines provides built-in tasks for running tests across different tech stacks:

Key test tasks

| Task | What It Runs | Coverage Built In | Result Format |
|---|---|---|---|
| DotNetCoreCLI@2 | dotnet test for .NET projects | Yes — add --collect:"XPlat Code Coverage" | VSTest (.trx) or JUnit |
| VSTest@2 | Visual Studio Test runner for .NET Framework | Yes — enable code coverage in task settings | VSTest (.trx) |
| Npm@1 | npm test for Node.js projects | No — configure Jest/Istanbul separately | Depends on test runner config |
| Maven@4 | mvn test for Java projects | No — configure the JaCoCo plugin in pom.xml | JUnit (Surefire reports) |
| PythonScript@0 | pytest or custom Python test scripts | No — configure pytest-cov separately | JUnit (with --junitxml flag) |
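For the rows without built-in coverage, the coverage tool has to be wired in alongside the task. A sketch of the Maven case (assuming the codeCoverageToolOption input to inject JaCoCo; many teams configure the plugin in pom.xml instead):

```yaml
- task: Maven@4
  inputs:
    mavenPomFile: 'pom.xml'
    goals: 'test'
    publishJUnitResults: true
    testResultsFiles: '**/surefire-reports/TEST-*.xml'  # Surefire's JUnit XML
    codeCoverageToolOption: 'JaCoCo'                    # inject JaCoCo at build time
```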

PublishTestResults task

After tests run, the PublishTestResults@2 task uploads results to the pipeline’s Test tab:

- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'JUnit'    # JUnit, NUnit, VSTest, XUnit, CTest
    testResultsFiles: '**/TEST-*.xml'
    mergeTestResults: true

Supported formats: JUnit, NUnit, VSTest (.trx), XUnit, CTest. JUnit is the most universal — most test frameworks can output JUnit XML.

💡 Exam tip: format matching

The exam may ask which format a specific test framework produces:

  • .NET (dotnet test) — VSTest (.trx) by default, JUnit with --logger "junit"
  • Java (Maven Surefire) — JUnit XML in target/surefire-reports/
  • JavaScript (Jest) — JUnit with jest-junit reporter package
  • Python (pytest) — JUnit with --junitxml=results.xml flag

If the question says “publish .NET test results,” the format is typically VSTest. If it says “a cross-platform format that works with any language,” the answer is JUnit.
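As a sketch, the four framework commands above map onto plain script steps — in practice you would keep only the line for your stack (note that the junit logger for .NET and the jest-junit reporter are add-on packages, not defaults):

```yaml
steps:
- script: dotnet test --logger "junit"       # .NET: needs the JunitXml.TestLogger NuGet package
- script: mvn test                           # Java: Surefire writes target/surefire-reports/TEST-*.xml
- script: npx jest --reporters=default --reporters=jest-junit   # JavaScript: jest-junit npm package
- script: pytest --junitxml=results.xml      # Python: JUnit XML via flag
```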

Test steps in GitHub Actions

GitHub Actions doesn’t have built-in “test tasks” — you use run steps with your test framework’s CLI commands:

- name: Run tests
  run: dotnet test --logger "junit" --collect:"XPlat Code Coverage"

- name: Publish test results
  uses: dorny/test-reporter@v1
  if: always()
  with:
    name: 'Test Results'
    path: '**/*.xml'
    reporter: 'java-junit'

Key differences from Azure Pipelines:

  • No built-in PublishTestResults equivalent — use community actions like dorny/test-reporter or EnricoMi/publish-unit-test-result-action
  • Results appear as check run annotations on the PR, not a dedicated Test tab
  • The if: always() condition is critical — without it, test publishing is skipped when tests fail (which is exactly when you need the results)

Azure Pipelines has richer built-in test integration; GitHub Actions relies on community actions and external services:

| Aspect | Azure Pipelines | GitHub Actions |
|---|---|---|
| Test execution | Built-in tasks (DotNetCoreCLI, VSTest, Maven, Npm) | run steps with CLI commands (dotnet test, npm test, pytest) |
| Result publishing | PublishTestResults@2 task — native integration with the Test tab | Community actions (dorny/test-reporter) — results as PR check annotations |
| Coverage publishing | PublishCodeCoverageResults@2 task — native Code Coverage tab | Community actions, or upload to external services (Codecov, Coveralls) |
| Supported formats | JUnit, NUnit, VSTest, XUnit, CTest | Depends on the action — typically JUnit, plus framework-specific parsers |
| Test analytics | Built-in pipeline analytics — flaky test detection, pass-rate trends | None built in — use third-party services or custom dashboards |
| Distributed testing | VSTest multi-agent parallelism with test slicing | Matrix strategy — run test slices across parallel jobs |

Code coverage analysis

Code coverage measures how much of your codebase is exercised by your test suite. It answers: “When tests run, which lines of code actually execute?”

Coverage metrics

| Metric | What It Measures | Example |
|---|---|---|
| Line coverage | Percentage of executable lines hit by tests | 450 of 500 lines executed = 90% |
| Branch coverage | Percentage of conditional branches (if/else, switch) taken | Both the if and the else paths tested |
| Function coverage | Percentage of functions/methods called | 38 of 40 functions invoked = 95% |
| Statement coverage | Percentage of statements executed (similar to line coverage, but counts multi-statement lines) | Each statement on a multi-statement line counted separately |

Branch coverage is the most valuable metric — high line coverage with low branch coverage means your tests hit the happy path but miss edge cases and error handling.
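A tiny illustration of the gap between the two metrics (a hypothetical function, not from the lesson):

```python
def final_price(price, coupon):
    """Apply a flat discount when a coupon is present."""
    if coupon:          # one condition, two branches
        price -= 10
    return price

# A suite that only ever calls final_price(50, True) executes every
# line of the function, so line coverage reports 100% -- yet branch
# coverage is only 50%, because the no-coupon path is never taken.
print(final_price(50, True))   # 40
```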

Coverage tools by language

| Language | Coverage Tool | Output Format | Pipeline Task |
|---|---|---|---|
| .NET | Coverlet (built in with dotnet test) | Cobertura XML | PublishCodeCoverageResults@2 |
| Java | JaCoCo | JaCoCo XML or Cobertura | PublishCodeCoverageResults@2 |
| JavaScript/TypeScript | Istanbul (via nyc or Jest --coverage) | Cobertura, lcov, text | PublishCodeCoverageResults@2 |
| Python | coverage.py (via pytest-cov) | Cobertura XML | PublishCodeCoverageResults@2 |

Cobertura is the universal format — every major coverage tool can output Cobertura XML, and both Azure Pipelines and most GitHub Actions coverage tools accept it.

Publishing coverage in Azure Pipelines

- task: PublishCodeCoverageResults@2
  inputs:
    summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/coverage.cobertura.xml'

This publishes coverage results to the pipeline’s Code Coverage tab, showing line-by-line highlights of covered and uncovered code.

💡 Scenario: Jordan's test configuration at Cloudstream Media

☁️ Jordan Rivera sets up the test pipeline for Cloudstream Media’s Node.js streaming API:

  1. Unit tests — Jest with --coverage flag, outputs Cobertura XML
  2. Integration tests — Supertest against a Docker Compose environment (API + Redis + PostgreSQL)
  3. Coverage threshold — 80% branch coverage minimum, enforced in Jest config (coverageThreshold in package.json)
  4. Pipeline publishing — PublishTestResults for JUnit XML, PublishCodeCoverageResults for Cobertura
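The threshold from step 3 might look like this in package.json (a sketch of Jest's coverageThreshold option, showing only the branches key; the 80% figure is from the scenario):

```json
{
  "jest": {
    "coverageThreshold": {
      "global": {
        "branches": 80
      }
    }
  }
}
```

With this in place, Jest exits non-zero when branch coverage falls below 80%, which in turn fails the pipeline step that runs it.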

Avery (dev lead) asks: “What happens if I add a new endpoint but don’t write tests?” Jordan’s answer: “Branch coverage drops below 80%, Jest fails with a threshold violation, the pipeline fails, and your PR can’t merge.”

Chen (SRE) adds load tests separately — they run on a schedule (nightly) rather than on every PR, because they take 15 minutes and hit shared infrastructure.

Distributed testing

Large test suites can take too long on a single agent. Both platforms support distributing tests across multiple machines.

Azure Pipelines — VSTest multi-agent

The VSTest@2 task supports test slicing across multiple agents:

  • By test count — each agent runs an equal number of tests
  • By previous run duration — agents get balanced workloads based on historical timing
  • By test assemblies — each agent runs a different DLL

Configure a job with strategy: parallel and the desired agent count. The VSTest task automatically partitions tests.
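A hedged sketch of that job (input names follow the VSTest@2 task; the agent count of 4 and the assembly glob are arbitrary):

```yaml
jobs:
- job: DistributedTests
  strategy:
    parallel: 4                     # four agents each pick up a slice of the suite
  steps:
  - task: VSTest@2
    inputs:
      testSelector: 'testAssemblies'
      testAssemblyVer2: '**/*Tests.dll'
      distributionBatchType: 'basedOnExecutionTime'  # balance slices by past run times
```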

GitHub Actions — matrix strategy

Use a matrix to split tests across parallel jobs:

strategy:
  matrix:
    shard: [1, 2, 3, 4]
steps:
  - run: npm test -- --shard=${{ matrix.shard }}/4

Each job runs a different shard. Combine results with a downstream job that aggregates coverage reports.
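That aggregation job might be sketched like this, assuming the sharded job above is named test and each shard uploads its Istanbul coverage JSON as an artifact:

```yaml
  merge-coverage:
    needs: test                               # wait for every shard to finish
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4    # collect the per-shard coverage artifacts
        with:
          path: .nyc_output                   # nyc's default temp directory
          merge-multiple: true
      - run: npx nyc report --reporter=cobertura   # emit one combined Cobertura report
```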

Question

What is the PublishTestResults task in Azure Pipelines, and what formats does it support?


Answer

PublishTestResults@2 uploads test results to the pipeline's Test tab for visualisation and trending. Supported formats: JUnit, NUnit, VSTest (.trx), XUnit, and CTest. JUnit is the most universal — most test frameworks can output JUnit XML regardless of language.


Question

What is the difference between line coverage and branch coverage?


Answer

Line coverage measures the percentage of executable lines hit by tests. Branch coverage measures the percentage of conditional paths (if/else branches) taken. Branch coverage is more valuable — you can have 90% line coverage but miss error-handling branches entirely. High line + low branch coverage means tests only cover the happy path.


Question

Which code coverage format is universal across languages?


Answer

Cobertura XML. Every major coverage tool (.NET Coverlet, Java JaCoCo, JavaScript Istanbul, Python coverage.py) can output Cobertura format. Azure Pipelines PublishCodeCoverageResults@2 task accepts Cobertura natively. It is the safest choice for cross-language pipelines.


Question

Why is 'if: always()' important when publishing test results in GitHub Actions?


Answer

Without 'if: always()', the publish step only runs when the previous step succeeds. But when tests fail, the test-run step has a non-zero exit code — so subsequent steps are skipped by default. This means you lose test results exactly when you need them most. 'if: always()' ensures the publish step runs regardless of test outcome.


Knowledge check

Jordan's .NET microservice pipeline runs 'dotnet test' and produces coverage data. He wants to see line-by-line coverage highlights in the Azure Pipelines UI. Which task and format should he use?

Knowledge Check

Nadia's team has a Python test suite using pytest. The tests produce JUnit XML and Cobertura coverage. In Azure Pipelines, which two tasks does she need to add after the test step?

Knowledge Check

Kai's team at Launchpad Labs has 2,000 Jest tests that take 20 minutes on a single GitHub Actions runner. What is the most effective way to reduce the test time?


Next up: Azure Pipelines: YAML from Scratch — master pipeline structure, triggers, and multi-stage CI pipelines in Azure DevOps.



© 2026 Sutheesh. All rights reserved.

Guided is an independent study resource and is not affiliated with, endorsed by, or officially connected to Microsoft. Microsoft, Azure, and related trademarks are property of Microsoft Corporation. Always verify information against Microsoft Learn.