As an engineering VP or manager, you don’t have consistent metrics to show the business what your teams are working on. Plus, it’s hard to prove that what they’re working on aligns with business needs. What you need is a way to quantify your engineering org’s efficiency so you can justify scaling your team up, and that means using DevOps metrics.

By using DevOps metrics and key performance indicators (KPIs), you can effectively gauge your DevOps effectiveness and success—trust us, this is tried and true.

What Are DevOps Metrics?

DevOps metrics are specific data points that show you how well a DevOps software development pipeline is operating. These stats are quantitative assessment tools that help you evaluate your DevOps efficiency as well as the quality of its output.

You can use these metrics to measure, benchmark, and improve team operations and technical skills. What’s more, these metrics provide key insight that DevOps teams can use to quickly pinpoint and address any process bottlenecks.

[Image: Meme pie chart of what programming actually entails: mostly debugging, then thinking and planning, and finally, a tiny sliver for actually programming.]

You can’t show this graph to stakeholders—you need better DevOps metrics!
Source: i.imgflip

If you want to discover bottlenecks to continuously improve teams’ performance, you can always rely on DevOps Research and Assessment (DORA) metrics—deployment frequency, cycle time, change failure rate, and mean time to restore. They have become the standard method engineering leaders use to get a high-level overview of how their organizations are performing. They’re also a great starting point for setting up an engineering metrics program.

But DORA metrics aren’t a perfect solution. They’re lagging indicators: they only become available after work has gone through the pipeline. And they don’t tie directly to business outcomes. Too much focus on DORA metrics ignores the bigger picture. So let’s look at a few key metrics you should work to optimize alongside the 4 DORA metrics.

Top 7 DevOps KPIs

DevOps metrics are plentiful, so you need to narrow in on the key performance indicators relevant to your business needs. We’ve compiled this list of 7 DevOps KPIs to help you get started. 

1. Cycle Time

Cycle time measures the amount of time from work started to work delivered, typically from first commit to release to production. This metric can be further broken down into 4 phases—coding time, pickup time, review time, and deployment time.

Teams with shorter cycle times often deliver quality code (and business value) more quickly, more often.
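As a rough sketch, cycle time and its 4 phases can be computed from PR event timestamps. All timestamps and names below are hypothetical; real values would come from your Git provider and deployment tooling:

```python
from datetime import datetime

# Hypothetical event timestamps for a single PR
first_commit = datetime(2024, 5, 1, 9, 0)
pr_opened    = datetime(2024, 5, 1, 15, 0)
review_start = datetime(2024, 5, 2, 10, 0)
pr_merged    = datetime(2024, 5, 2, 16, 0)
deployed     = datetime(2024, 5, 3, 11, 0)

coding_time = pr_opened - first_commit    # first commit -> PR opened
pickup_time = review_start - pr_opened    # PR opened -> first review
review_time = pr_merged - review_start    # first review -> merge
deploy_time = deployed - pr_merged        # merge -> in production

# Cycle time is the sum of the four phases
cycle_time = deployed - first_commit
assert cycle_time == coding_time + pickup_time + review_time + deploy_time
```

Breaking the total into phases this way shows exactly where work stalls—for the PR above, pickup and deployment dominate.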

[Image: Cycle time benchmarks]

2. Deployment Frequency

Deployment frequency measures how often an organization successfully releases to production. If your teams are deploying often, that signals a healthy and stable CI/CD pipeline. A great goal for any team should be small, on-demand deployments. This ensures regular value delivery, minimizes delays, keeps quality high, and keeps service interruptions low (more on this in a sec).
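Measuring this is just counting. A minimal sketch, assuming you can pull a list of successful production deployment dates from your CI/CD tooling:

```python
from collections import Counter
from datetime import date

def deployments_per_week(deploy_dates):
    """Count successful production deployments per ISO (year, week)."""
    return Counter(d.isocalendar()[:2] for d in deploy_dates)

# Hypothetical deployment log
deploys = [date(2024, 5, 6), date(2024, 5, 8), date(2024, 5, 8),
           date(2024, 5, 14)]
deployments_per_week(deploys)  # Counter({(2024, 19): 3, (2024, 20): 1})
```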

[Image: Deployment frequency benchmarks]

3. Change Failure Rate

Change failure rate (CFR) measures the percentage of deployments to production that fail. The best teams keep their CFR under 15%. In a perfect world, you’d have a CFR of zero—but humans write code, and sometimes they make mistakes. That said, the GenAI revolution is here (87% of companies are likely or highly likely to invest in GenAI tools in 2024), and its impact can be measured. It’s very possible that a new benchmark for CFR could be set in the near future!
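The formula itself is simple—failed deployments as a share of all deployments. A minimal sketch (the counts are illustrative):

```python
def change_failure_rate(failed_deployments, total_deployments):
    """Percentage of production deployments that resulted in a failure."""
    if total_deployments == 0:
        return 0.0
    return 100 * failed_deployments / total_deployments

# Example: 3 failed deployments out of 40 this month
change_failure_rate(3, 40)  # 7.5 -> comfortably under the 15% bar
```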

[Image: Change failure rate benchmarks]

4. Mean Time To Recovery

Mean Time to Recovery (MTTR) measures how long it takes an organization to recover from a failure in production. If your team is quickly able to resolve production bugs or outages, you’ll have a better user experience when things hit the fan. The key to an elite MTTR metric is small PRs that can be easily and quickly scanned. It makes the job of root cause analysis and addressing bugs much easier and faster–all of which contributes to a healthy MTTR. 
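MTTR is the average duration from failure detection to restored service. A sketch, assuming each incident is recorded as a (detected, restored) timestamp pair (the incidents below are hypothetical):

```python
from datetime import datetime

def mean_time_to_recovery(incidents):
    """Average hours from failure detected to service restored.
    incidents: list of (detected, restored) datetime pairs."""
    hours = [(restored - detected).total_seconds() / 3600
             for detected, restored in incidents]
    return sum(hours) / len(hours)

# Two hypothetical incidents: one resolved in 2 hours, one in 4
incidents = [
    (datetime(2024, 5, 1, 10, 0), datetime(2024, 5, 1, 12, 0)),
    (datetime(2024, 5, 8, 9, 0),  datetime(2024, 5, 8, 13, 0)),
]
mean_time_to_recovery(incidents)  # 3.0 hours
```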

[Image: Mean time to restore benchmarks]

5. PR Size

Pull request (PR) size is a metric calculating the average code change (in lines of code) per pull request. It’s also one of the most powerful leading indicator metrics out there: it has a huge impact on all four DORA metrics as well as things like merge frequency and overall quality. Large PRs can be a huge bottleneck and hurt your team’s efficiency. Why? Large PRs:

  • Take more time to review
  • Are usually more complex and therefore harder to review
  • Sit idle for longer (because they take more time to review)
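Computing the metric is straightforward: lines added plus lines deleted, averaged across PRs. A sketch, assuming PR stats pulled from your Git provider’s API (the field names and numbers are illustrative):

```python
def average_pr_size(prs):
    """Average lines of code changed (additions + deletions) per PR."""
    return sum(pr["additions"] + pr["deletions"] for pr in prs) / len(prs)

# Hypothetical PR stats
prs = [
    {"additions": 40,  "deletions": 10},   # small: quick to review
    {"additions": 120, "deletions": 30},
    {"additions": 600, "deletions": 200},  # large: likely to sit idle
]
average_pr_size(prs)  # ~333 lines changed per PR
```

One outsized PR can drag the whole average up, which is exactly why tracking this metric surfaces review bottlenecks early.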

One way to make the PR process more efficient is to automate processes and provide more context for reviewers. gitStream provides automations for many aspects of the PR process including applying helpful labels about code context, automating reviews and approvals for things like docs changes or Dependabot-generated PRs, and helping new developers adhere to best practices.


If you want DORA metrics, leading indicator metrics like PR size and merge frequency, and automation to help you improve (all in one place), sign up for a free LinearB account!


6. Rework Rate

Rework refers to code that is rewritten or deleted shortly after being added to your codebase. LinearB counts changes to code that has been in your codebase for less than 21 days as reworked code.

Your rework rate is the percentage of changes that are considered rework for a given timeframe. Ideally, you’d keep this rate as low as possible, as low rework rates mean your teams are writing clean, efficient code and have more time to focus on delivering new features.
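A minimal sketch of that calculation, using LinearB’s 21-day window. The input format here is an assumption for illustration—each change is a (lines changed, age in days of the code being modified) pair:

```python
REWORK_WINDOW_DAYS = 21  # changes to code younger than this count as rework

def rework_rate(changes):
    """Percentage of changed lines that modify recently added code.
    changes: list of (lines_changed, age_of_modified_code_in_days)."""
    total = sum(lines for lines, _ in changes)
    rework = sum(lines for lines, age in changes if age < REWORK_WINDOW_DAYS)
    return 100 * rework / total

# 50 of 400 changed lines touched code less than 21 days old
rework_rate([(50, 5), (150, 90), (200, 365)])  # 12.5
```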

[Image: Rework ratio benchmarks]

7. Accuracy Scores

Planning accuracy is the ratio of planned issues or story points completed to the total issues or story points planned for the iteration. Capacity accuracy is all completed work (planned or unplanned) as a ratio of planned work. These two scores together—in addition to being a great leading indicator of whether a team will deliver a project on time—also paint a picture of a team’s ability to:

  • Scope/plan their iterations
  • Execute on those plans
  • Adapt to the normal rhythm of engineering work
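Both scores are simple ratios against the iteration’s plan. A sketch with illustrative story-point counts:

```python
def planning_accuracy(planned_completed, planned_total):
    """Share of planned work that was actually completed."""
    return 100 * planned_completed / planned_total

def capacity_accuracy(all_completed, planned_total):
    """All completed work (planned + unplanned) vs. what was planned."""
    return 100 * all_completed / planned_total

# Example iteration: planned 20 points, finished 16 of them,
# plus 6 points of unplanned work (22 completed in total)
planning_accuracy(16, 20)  # 80.0
capacity_accuracy(22, 20)  # 110.0
```

Read together, the pair tells a story: an 80% planning accuracy with a 110% capacity accuracy suggests the team worked at full capacity but was pulled onto unplanned work.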

Nobody loves telling the execs, sales, customer success, and marketing that all their plans have to change because engineering missed a deadline. 

It’s inevitable that you’re going to have scope creep and unplanned work each iteration, but keeping a high planning accuracy means you’re delivering on your promises to the rest of the business most of the time. 


In addition to measuring these 7 DevOps KPIs, knowing how your teams compare against industry standards helps you justify where you need to focus improvement efforts. We studied over 2,000 teams in our Engineering Metrics Benchmarks study to determine what makes engineering teams elite. Then, we incorporated these benchmarks into our analytics dashboards so you can see at a glance how your team is performing and contrast that against the industry average.


Want to learn more about being an elite engineering team? Check out this blog detailing our engineering benchmarks study and methodology.

Improving Your DevOps KPIs

Once you’ve identified the key metrics to measure your software development process efficiency and benchmarked where your teams stand, you can create an action plan and strategies to improve your numbers. 

As we’ve learned with CI/CD, automation is key to dramatic improvement in engineering productivity and team health. So rather than try to micromanage your teams to improve DevOps metrics, provide them with the automation tools that help them self-improve.

[Image: WorkerB notification explainer]

LinearB provides both automated real-time alerting and programmable workflow automation at the code review level. The alerts keep teams aware of work in flight and PRs awaiting review, help set operational goals, surface additional context, and provide one-click automations to remove manual work—like creating Jira tickets and approving small PRs directly from Slack.

[Image: Code Experts gitStream rule]

gitStream workflow automation helps pave a golden path toward higher merge frequency—not to mention happier developers—by automating the most frustrating parts of the PR review process. It drives both efficiency and quality using workflows for things like flagging deprecated components, finding and assigning the appropriate reviewer, and automatically labeling PRs missing tests.