Continuous Integration Metrics
Continuous Integration (CI) metrics help you identify bottlenecks in the build and test phase of your development lifecycle. By measuring the speed, stability, and cost of your CI processes, you can shorten the developer feedback loop, improve code quality, and reduce wasted cloud spend.
These metrics collectively provide a comprehensive, data-driven view into the health, speed, and cost of the Build/Test stage that occurs before a pull request is merged.
Key CI Metrics
Below are the key CI metrics tracked on our platform.
| Metric | Definition | Why It Matters |
|---|---|---|
| Time to CI Success | The time from when a pull request is created to the first complete and successful CI run. | This metric measures how quickly a developer gets a definitive “green” signal on their changes. A long delay creates a major context-switching penalty, as developers either wait idly or move on to another task, losing valuable focus. |
| CI Run Time | The time it takes for a single CI workflow, including all of its jobs, to complete on a pull request. | This directly impacts the speed of the developer feedback loop. Faster CI runs allow for quicker iteration, code reviews, and merging, which accelerates the overall Lead Time for new features and fixes. |
| CI First Pass Rate | The percentage of pull requests where all CI checks passed on the very first attempt. | This is a strong leading indicator of code quality and effective pre-commit local testing. A high first pass rate means less developer time is wasted waiting on CI and more time is spent on new, productive work. |
| CI Failure Rate | The percentage of pull requests that had at least one failed CI job. | A high failure rate indicates wasted developer time, increased context switching, and higher cloud costs due to repeated runs. Lowering this rate directly improves team productivity and reduces operational spend. |
| Wasted CI Compute Time | The total CI execution time spent on failed runs across all pull requests. | This metric quantifies the direct cloud infrastructure cost of inefficient or flaky CI processes. It translates developer friction into a clear dollar amount, making it a powerful tool for justifying investments in better testing or infrastructure. |
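The rate and cost metrics above can be illustrated with a minimal sketch. The `CIRun` record and `ci_metrics` helper below are hypothetical names for illustration, assuming each CI run is tagged with its pull request, attempt number, outcome, and duration:

```python
from dataclasses import dataclass

@dataclass
class CIRun:
    pr_id: int            # pull request this run belongs to
    attempt: int          # 1 = first CI run on the pull request
    succeeded: bool       # definitive final status
    duration_min: float   # total execution time in minutes

def ci_metrics(runs: list[CIRun]) -> dict[str, float]:
    """Aggregate per-PR CI runs into first-pass rate, failure rate,
    and wasted compute time, per the definitions above."""
    prs = {r.pr_id for r in runs}
    # A PR "first-passes" if all checks succeeded on the very first attempt.
    first_pass = sum(
        1 for pr in prs
        if any(r.pr_id == pr and r.attempt == 1 and r.succeeded for r in runs)
    )
    # A PR counts toward the failure rate if at least one run failed.
    failed_prs = {r.pr_id for r in runs if not r.succeeded}
    # Wasted compute is the total time spent on failed runs.
    wasted = sum(r.duration_min for r in runs if not r.succeeded)
    n = len(prs)
    return {
        "first_pass_rate": first_pass / n,
        "failure_rate": len(failed_prs) / n,
        "wasted_compute_min": wasted,
    }

runs = [
    CIRun(1, 1, True, 8.0),    # PR 1 passed on the first try
    CIRun(2, 1, False, 10.0),  # PR 2 failed once...
    CIRun(2, 2, True, 9.0),    # ...then passed on retry
]
print(ci_metrics(runs))
```

In this sample, half the PRs pass on the first attempt, half see a failure, and the single failed run contributes 10 minutes of wasted compute.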
CI Signals
We analyze the following signals from your Git provider to calculate CI metrics.
| Git Provider | Supported Signals |
|---|---|
| GitHub | External processes that report the status of a commit. There are two types: **Check Suites**: a collection of pass/fail check runs for the latest commit on a branch; **Commit Statuses**: a simpler pass/fail label attached to a commit, often set by an external CI/CD tool. |
| GitLab | **Pipelines**: The core of GitLab’s CI/CD functionality, composed of jobs and stages that build, test, and deploy your code. |
| Bitbucket | Pipelines : Bitbucket’s integrated CI/CD service for automating builds, tests, and deployments. |
| Azure DevOps | Pipelines : The language-agnostic CI/CD platform within Azure DevOps for building, testing, and deploying applications. |
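As an illustration of consuming one of these signals, the sketch below lists GitHub check runs for a commit via the REST Checks API and rolls their conclusions up into a single commit-level result. The function names and the roll-up rule are illustrative, not our exact implementation:

```python
import json
import urllib.request

def fetch_check_runs(owner: str, repo: str, ref: str, token: str) -> list[dict]:
    """List GitHub check runs for a commit via the REST Checks API."""
    url = f"https://api.github.com/repos/{owner}/{repo}/commits/{ref}/check-runs"
    req = urllib.request.Request(url, headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {token}",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["check_runs"]

def overall_conclusion(check_runs: list[dict]) -> str:
    """Roll individual check-run conclusions up into one pass/fail signal.
    Illustrative rule: any failure fails the commit; only all-success passes."""
    conclusions = {c.get("conclusion") for c in check_runs}
    if "failure" in conclusions:
        return "failure"
    if conclusions <= {"success"}:
        return "success"
    return "pending"  # still running, skipped, neutral, etc.
```

Commit statuses would be handled analogously via GitHub's commit status endpoint, then merged with check-run results per the counting rules described below.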
How We Calculate CI Metrics
Considerations for how we calculate your metrics:
- Completed Runs Only: Our calculations only include CI runs that have a definitive final status of “success” or “failure”. Runs that were skipped, cancelled, timed out, or had a startup failure are excluded.
- Commit Status Checks: All of the GitHub commit status checks that run for a commit count as one CI job. Each individual commit status check is effectively a step within that job.
- Concurrent Check Runs and Commit Status Checks: If a commit triggers both GitHub check suites and commit statuses, they are counted together as a single CI job.
- Rebase and Force Push: Workflows that involve rewriting Git history (like rebasing locally and then force pushing) can cause a mismatch between what is visible in your Git provider’s UI and the data we analyze. We use the true event timeline (including status checks on the commits before you rebased) to provide an accurate calculation, which may differ from the commit history shown in the UI.
- Outliers: To ensure the accuracy of your metrics, we automatically remove extreme outliers from the following calculations:
- Time to CI Success: We use the 99.5th percentile as the upper bound when calculating averages.
- CI Run Time: We use the 99.5th percentile or 24 hours (whichever is smaller) as the upper bound when calculating averages.
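The outlier-capped averaging above can be sketched as follows. This is a simplified illustration using a nearest-rank percentile; the exact percentile method we use may differ:

```python
from typing import Optional

def capped_average(durations: list[float], pct: float = 99.5,
                   hard_cap: Optional[float] = None) -> float:
    """Average durations after discarding values above the given percentile,
    and above an optional hard cap (e.g. 24 hours for CI Run Time)."""
    if not durations:
        return 0.0
    ordered = sorted(durations)
    # Nearest-rank percentile: index of the pct-th percentile value.
    k = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    bound = ordered[k]
    if hard_cap is not None:
        bound = min(bound, hard_cap)  # whichever upper bound is smaller
    kept = [d for d in durations if d <= bound]
    return sum(kept) / len(kept)
```

With the 99.5th-percentile bound, a single pathological 20-hour run in thousands of 10-minute runs no longer skews the reported average.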