Test Implementation: Code Coverage & Pipeline Tests
Implement automated tests in CI/CD pipelines. Configure test tasks, test agents, publish test results, and analyse code coverage with Cobertura, JaCoCo, and Istanbul.
Implementing tests in pipelines
Think of a factory quality control line.
Every product on the assembly line passes through inspection stations. Each station has specific testing equipment — one measures weight, another checks colour, a third stress-tests durability. The inspector logs results on a shared dashboard and stamps “PASS” or “FAIL” on each product.
CI/CD pipelines work the same way. Each test task is an inspection station. It runs specific tests (unit, integration, security), records results in a standard format (JUnit, Cobertura), and publishes them to the pipeline dashboard. If any station fails, the product — your code — doesn’t ship.
Test tasks in Azure Pipelines
Azure Pipelines provides built-in tasks for running tests across different tech stacks:
Key test tasks
| Task | What It Runs | Coverage Built-in | Result Format |
|---|---|---|---|
| DotNetCoreCLI@2 | dotnet test for .NET projects | Yes — add --collect:"XPlat Code Coverage" | VSTest (.trx) or JUnit |
| VSTest@2 | Visual Studio Test runner for .NET Framework | Yes — enable code coverage in task settings | VSTest (.trx) |
| Npm@1 | npm test for Node.js projects | No — configure Jest/Istanbul separately | Depends on test runner config |
| Maven@4 | mvn test for Java projects | No — configure JaCoCo plugin in pom.xml | JUnit (surefire reports) |
| PythonScript@0 | pytest or custom Python test scripts | No — configure pytest-cov separately | JUnit (with `--junitxml` flag) |
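To make the table concrete, here is a minimal sketch of a `DotNetCoreCLI@2` test step with the built-in Coverlet collector enabled (the project glob and configuration are illustrative placeholders):

```yaml
# Run .NET tests and collect coverage with the built-in Coverlet collector.
# The project pattern and build configuration are placeholders for illustration.
- task: DotNetCoreCLI@2
  displayName: 'Run unit tests'
  inputs:
    command: 'test'
    projects: '**/*Tests.csproj'
    arguments: '--configuration Release --collect:"XPlat Code Coverage"'
```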
PublishTestResults task
After tests run, the PublishTestResults@2 task uploads results to the pipeline’s Test tab:
```yaml
- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'JUnit'   # JUnit, NUnit, VSTest, XUnit, CTest
    testResultsFiles: '**/TEST-*.xml'
    mergeTestResults: true
```
Supported formats: JUnit, NUnit, VSTest (.trx), XUnit, CTest. JUnit is the most universal — most test frameworks can output JUnit XML.
Exam tip: format matching
The exam may ask which format a specific test framework produces:
- .NET (`dotnet test`) — VSTest (`.trx`) by default, JUnit with `--logger "junit"`
- Java (Maven Surefire) — JUnit XML in `target/surefire-reports/`
- JavaScript (Jest) — JUnit with the `jest-junit` reporter package
- Python (pytest) — JUnit with the `--junitxml=results.xml` flag
If the question says “publish .NET test results,” the format is typically VSTest. If it says “a cross-platform format that works with any language,” the answer is JUnit.
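For instance, the pytest case from the list above pairs a test step with a JUnit publish like this (file names are illustrative; `succeededOrFailed()` ensures results are published even when tests fail):

```yaml
# pytest writes JUnit XML via --junitxml; PublishTestResults picks it up.
- script: pytest --junitxml=test-results.xml
  displayName: 'Run pytest'

- task: PublishTestResults@2
  condition: succeededOrFailed()   # publish results even on test failure
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '**/test-results.xml'
```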
Test steps in GitHub Actions
GitHub Actions doesn’t have built-in “test tasks” — you use run steps with your test framework’s CLI commands:
```yaml
- name: Run tests
  run: dotnet test --logger "junit" --collect:"XPlat Code Coverage"

- name: Publish test results
  uses: dorny/test-reporter@v1
  if: always()
  with:
    name: 'Test Results'
    path: '**/*.xml'
    reporter: 'java-junit'
```
Key differences from Azure Pipelines:
- No built-in `PublishTestResults` equivalent — use community actions like `dorny/test-reporter` or `EnricoMi/publish-unit-test-result-action`
- Results appear as check run annotations on the PR, not a dedicated Test tab
- The `if: always()` condition is critical — without it, test publishing is skipped when tests fail (which is exactly when you need the results)
| Aspect | Azure Pipelines | GitHub Actions |
|---|---|---|
| Test execution | Built-in tasks (DotNetCoreCLI, VSTest, Maven, Npm) | run steps with CLI commands (dotnet test, npm test, pytest) |
| Result publishing | PublishTestResults@2 task — native integration with Test tab | Community actions (dorny/test-reporter) — results as PR check annotations |
| Coverage publishing | PublishCodeCoverageResults@2 task — native Code Coverage tab | Community actions or upload to external services (Codecov, Coveralls) |
| Supported formats | JUnit, NUnit, VSTest, XUnit, CTest (5 formats) | Depends on action — typically JUnit, plus framework-specific parsers |
| Test analytics | Built-in pipeline analytics — flaky test detection, pass rate trends | No built-in analytics — use third-party services or custom dashboards |
| Distributed testing | VSTest multi-agent parallelism with test slicing | Matrix strategy — run different test slices across parallel jobs |
Code coverage analysis
Code coverage measures how much of your codebase is exercised by your test suite. It answers: “When tests run, which lines of code actually execute?”
Coverage metrics
| Metric | What It Measures | Example |
|---|---|---|
| Line coverage | Percentage of executable lines hit by tests | 450 of 500 lines executed = 90% |
| Branch coverage | Percentage of conditional branches (if/else, switch) taken | Both the if and else paths tested |
| Function coverage | Percentage of functions/methods called | 38 of 40 functions invoked = 95% |
| Statement coverage | Percentage of statements executed (similar to line but counts multi-statement lines) | Each statement on a multi-statement line counted separately |
Branch coverage is the most valuable metric — high line coverage with low branch coverage means your tests hit the happy path but miss edge cases and error handling.
Coverage tools by language
| Language | Coverage Tool | Output Format | Pipeline Task |
|---|---|---|---|
| .NET | Coverlet (built-in with dotnet test) | Cobertura XML | PublishCodeCoverageResults@2 |
| Java | JaCoCo | JaCoCo XML or Cobertura | PublishCodeCoverageResults@2 |
| JavaScript/TypeScript | Istanbul (via nyc or Jest `--coverage`) | Cobertura, lcov, text | PublishCodeCoverageResults@2 |
| Python | coverage.py (via pytest-cov) | Cobertura XML | PublishCodeCoverageResults@2 |
Cobertura is the universal format — every major coverage tool can output Cobertura XML, and both Azure Pipelines and most GitHub Actions coverage tools accept it.
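As a sketch of the JavaScript row above, Jest (which uses Istanbul under the hood) can emit Cobertura directly via its coverage reporter option:

```yaml
# Jest emits Cobertura XML when the cobertura coverage reporter is selected.
- script: npx jest --coverage --coverageReporters=cobertura
  displayName: 'Run Jest with Cobertura coverage'
```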
Publishing coverage in Azure Pipelines
```yaml
- task: PublishCodeCoverageResults@2
  inputs:
    summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/coverage.cobertura.xml'
```
This publishes coverage results to the pipeline’s Code Coverage tab, showing line-by-line highlights of covered and uncovered code.
Scenario: Jordan's test configuration at Cloudstream Media
☁️ Jordan Rivera sets up the test pipeline for Cloudstream Media’s Node.js streaming API:
- Unit tests — Jest with the `--coverage` flag, outputs Cobertura XML
- Integration tests — Supertest against a Docker Compose environment (API + Redis + PostgreSQL)
- Coverage threshold — 80% branch coverage minimum, enforced in Jest config (`coverageThreshold` in package.json)
- Pipeline publishing — PublishTestResults for JUnit XML, PublishCodeCoverageResults for Cobertura
Avery (dev lead) asks: “What happens if I add a new endpoint but don’t write tests?” Jordan’s answer: “Branch coverage drops below 80%, Jest fails with a threshold violation, the pipeline fails, and your PR can’t merge.”
Chen (SRE) adds load tests separately — they run on a schedule (nightly) rather than on every PR, because they take 15 minutes and hit shared infrastructure.
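One way to wire Jordan's 80% branch threshold into a pipeline step is via Jest's CLI (a sketch; the inline JSON mirrors what a `coverageThreshold` entry in package.json would declare):

```yaml
# Fail the build if global branch coverage drops below 80%.
# The inline JSON is equivalent to a coverageThreshold entry in package.json.
- script: npx jest --coverage --coverageThreshold='{"global":{"branches":80}}'
  displayName: 'Unit tests with coverage gate'
```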
Distributed testing
Large test suites can take too long on a single agent. Both platforms support distributing tests across multiple machines.
Azure Pipelines — VSTest multi-agent
The VSTest@2 task supports test slicing across multiple agents:
- By test count — each agent runs an equal number of tests
- By previous run duration — agents get balanced workloads based on historical timing
- By test assemblies — each agent runs a different DLL
Configure a job with strategy: parallel and the desired agent count. The VSTest task automatically partitions tests.
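A minimal sketch of that setup, assuming a self-hosted pool name and test assembly pattern chosen for illustration:

```yaml
# Four agents each run a slice of the test suite; pool name and assembly
# pattern are placeholders for illustration.
jobs:
  - job: RunTests
    strategy:
      parallel: 4   # four agents pick up slices of the same job
    pool: 'Default'
    steps:
      - task: VSTest@2
        inputs:
          testSelector: 'testAssemblies'
          testAssemblyVer2: '**/*Tests.dll'
          distributionBatchType: 'basedOnExecutionTime'   # slice by historical duration
```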
GitHub Actions — matrix strategy
Use a matrix to split tests across parallel jobs:
```yaml
strategy:
  matrix:
    shard: [1, 2, 3, 4]
steps:
  - run: npm test -- --shard=${{ matrix.shard }}/4
```
Each job runs a different shard. Combine results with a downstream job that aggregates coverage reports.
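The aggregation step can be sketched as a downstream job (job names, artifact layout, and the merge command are illustrative assumptions — shards would need to upload their coverage output as artifacts first):

```yaml
# Sketch of a downstream job that gathers shard outputs; 'test' is the
# assumed name of the matrix job, and paths are placeholders.
merge-coverage:
  needs: test          # wait for every shard in the matrix to finish
  runs-on: ubuntu-latest
  steps:
    - uses: actions/download-artifact@v4   # pull coverage from each shard
    - run: npx nyc merge coverage/ merged-coverage.json
```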
Knowledge check
Jordan's .NET microservice pipeline runs 'dotnet test' and produces coverage data. He wants to see line-by-line coverage highlights in the Azure Pipelines UI. Which task and format should he use?
Nadia's team has a Python test suite using pytest. The tests produce JUnit XML and Cobertura coverage. In Azure Pipelines, which two tasks does she need to add after the test step?
Kai's team at Launchpad Labs has 2,000 Jest tests that take 20 minutes on a single GitHub Actions runner. What is the most effective way to reduce the test time?
🎬 Video coming soon
Next up: Azure Pipelines: YAML from Scratch — master pipeline structure, triggers, and multi-stage CI pipelines in Azure DevOps.