DORA metrics are a proven framework for measuring software delivery performance. This article provides everything you need to know about DORA metrics and how to use them to drive continuous improvement in your engineering organization.

In this guide:

  • What is DORA?
  • What are DORA metrics?
  • Why should you track DORA metrics?
  • How do you track DORA metrics?
  • How do you improve DORA metrics?
  • What does elite DORA metrics achievement look like?

What is DORA?

DevOps Research and Assessment (DORA) is a research program that defines metrics to help organizations measure and improve their software delivery performance. Gene Kim, Jez Humble, and Nicole Forsgren founded the organization in 2015, and Google Cloud acquired DORA in 2018.

The primary output of DORA's research is four metrics that measure software delivery performance. These metrics, known collectively as the DORA metrics, have become the industry standard for evaluating software delivery performance. Every year, the DORA research team publishes the State of DevOps Report, which contains benchmarks and performance indicators for DevOps productivity. They update their metrics definitions as part of this report.

What are DORA Metrics?

DORA metrics provide a quantitative way to measure software delivery performance, and they focus on four key areas:

  1. Deployment Frequency represents how often an organization successfully releases to production.
  2. Change Lead Time shows the time it takes for a commit to get into production.
  3. Change Fail Percentage is the rate of deployments that lead to a failure in production.
  4. Failed Deployment Recovery Time is how long it takes to recover from a failure in production.
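To make these definitions concrete, here is a minimal sketch of how the four metrics could be computed from deployment records. The Deployment fields and the choice to report averages are assumptions for illustration; in practice, these timestamps come from your git, CI/CD, and incident tooling.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Deployment:
    commit_at: datetime                    # earliest commit included in the change
    deployed_at: datetime                  # when the change reached production
    caused_failure: bool = False           # did this deployment fail in production?
    recovered_at: datetime | None = None   # when the failure was resolved, if any

def hours_between(start: datetime, end: datetime) -> float:
    return (end - start).total_seconds() / 3600

def dora_metrics(deployments: list[Deployment], window_days: int) -> dict[str, float]:
    """Compute the four DORA metrics over a reporting window.
    Assumes at least one deployment occurred in the window."""
    failures = [d for d in deployments if d.caused_failure]
    recovered = [d for d in failures if d.recovered_at]
    return {
        # Deployment Frequency: successful production releases per day
        "deployment_frequency": len(deployments) / window_days,
        # Change Lead Time: average hours from commit to production
        "change_lead_time": sum(hours_between(d.commit_at, d.deployed_at)
                                for d in deployments) / len(deployments),
        # Change Fail Percentage: share of deployments causing a production failure
        "change_fail_percentage": 100 * len(failures) / len(deployments),
        # Failed Deployment Recovery Time: average hours to restore service
        "recovery_time": sum(hours_between(d.deployed_at, d.recovered_at)
                             for d in recovered) / max(len(recovered), 1),
    }
```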

Additionally, Change Lead Time is sometimes referred to as Cycle Time and is broken down into the following components:

  • Coding time: The period between the first commit to a branch and the opening of a PR to merge this branch into the primary branch.
  • Pickup time: The period between when someone creates a PR and when a code review has started.
  • Review time: The period between the start of a code review and when someone merges the code.
  • Deploy time: The period between when code is ready to merge and when your CD system deploys it to production.

Cycle time breakdown showcasing coding time, pickup and review time, and deploy time.
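As a rough sketch, the breakdown could be computed from PR timestamps like this. The parameter names are illustrative stand-ins for whatever fields your git provider exposes:

```python
from datetime import datetime

def cycle_time_phases(first_commit: datetime, pr_opened: datetime,
                      review_started: datetime, merged: datetime,
                      deployed: datetime) -> dict[str, float]:
    """Split change lead time into its four phases, in hours."""
    hours = lambda start, end: (end - start).total_seconds() / 3600
    return {
        "coding_time": hours(first_commit, pr_opened),    # first commit -> PR opened
        "pickup_time": hours(pr_opened, review_started),  # PR opened -> review starts
        "review_time": hours(review_started, merged),     # review starts -> merge
        "deploy_time": hours(merged, deployed),           # merge -> production deploy
    }
```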

These metrics are essential because they provide a comprehensive view of an organization's software delivery performance. Deployment frequency and change lead time measure speed, while change failure rate and failed deployment recovery time measure stability. Together, they offer a balanced picture, allowing organizations to track and improve both the speed and the reliability of their software delivery.

Why Should You Track DORA Metrics?

DORA metrics are designed to help teams focus on improvements that ensure engineering efforts contribute to the business's overall success. Teams that perform well against DORA metrics are more likely to achieve better customer satisfaction, operational efficiency, and overall organizational performance. Whether you're building a product for end users or creating internal tools, reliably delivering software directly impacts the bottom line.

DORA metrics can help you answer critical questions related to your software delivery lifecycle:

  • Are we quickly and efficiently delivering value to our customers?
  • Are software quality issues impacting our ability to execute? 
  • Are we minimizing risks and the potential for failure? 
  • Do we recover quickly when things go wrong?

Furthermore, tracking DORA metrics enables you to identify bottlenecks and make data-driven decisions about areas for improvement and investment opportunities. For example, a high change lead time could point to a range of collaboration and productivity challenges in areas such as task scoping, code review workflows, or CI/CD pipelines. Breaking change lead time down into its parts - coding, pickup, review, and deploy time - gives you the visibility to identify the specific aspects of your software delivery process that experience the most friction.

How Do You Track DORA Metrics?

The biggest challenge you must overcome to track DORA metrics is correlating data across multiple tools attached to your software development process. Specifically, you need to connect your git, project management, incident management, and deployment data. Each of these sources provides critical metadata that is necessary to track DORA metrics. 

Dashboard displaying DORA metrics you can use to showcase engineering health.

Git (GitHub, GitLab, BitBucket)

Your git repositories house the bulk of your DORA metrics data, and nearly every DORA metric relies on it directly or indirectly. Commits, code reviews, and merge pipelines all contain essential data for tracking change lead time. Additionally, change fail percentage and failed deployment recovery time both require git metadata as part of their formulas.
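As an illustration of pulling this metadata yourself, the sketch below fetches merged PR timestamps from GitHub's REST API, which feed directly into change lead time. The endpoint and fields are part of GitHub's public API, but pagination, rate limiting, and error handling are omitted, and owner, repo, and the token are placeholders:

```python
import requests  # third-party: pip install requests

def merged_pr_timestamps(owner: str, repo: str, token: str) -> list[dict]:
    """Return created/merged timestamps for recently merged PRs."""
    response = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/pulls",
        params={"state": "closed", "per_page": 100},
        headers={"Authorization": f"Bearer {token}"},
    )
    response.raise_for_status()
    return [
        {"number": pr["number"],
         "created_at": pr["created_at"],
         "merged_at": pr["merged_at"]}
        for pr in response.json()
        if pr.get("merged_at")  # closed-but-unmerged PRs have merged_at = null
    ]
```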

Project Management (Jira, Shortcut, Azure Boards)

Your project management tool provides data about the inputs and outputs of your engineering team and helps measure the business impact of developer activities. For some organizations, project management data defines the start of their change lead time and establishes what constitutes a failure for change fail percentage.

Incident Management (PagerDuty, DataDog)

Depending on your organization’s process, your incident management platform may contain data necessary to track change fail percentage and failed deployment recovery time. Specifically, your platform contains timestamps for when an incident starts, when someone first initiates the effort to resolve it, and when the issue is resolved. Without this data, it’s impossible to accurately track how your team recovers from failure.
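Here is a minimal sketch of turning those timestamps into response-phase durations, assuming your incident platform exposes ISO-8601 fields like the ones named below (the field names are illustrative, not any particular vendor's API):

```python
from datetime import datetime

def incident_phases(incident: dict) -> dict[str, float]:
    """Hours spent in each response phase of a single incident."""
    started = datetime.fromisoformat(incident["started_at"])
    acknowledged = datetime.fromisoformat(incident["acknowledged_at"])
    resolved = datetime.fromisoformat(incident["resolved_at"])
    hours = lambda start, end: (end - start).total_seconds() / 3600
    return {
        "time_to_acknowledge": hours(started, acknowledged),
        "time_to_resolve": hours(acknowledged, resolved),
        "recovery_time": hours(started, resolved),  # feeds failed deployment recovery time
    }
```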

Deployment (GitHub Actions, GitLab CI, CircleCI, Harness)

All four DORA metrics require deployment data, and one of the biggest challenges in tracking DORA metrics is accounting for the variability in deployment processes across different teams and tools. Every organization defines "done" differently, whether it’s code deployed to production, added to a release, or committed to a pre-production environment. You can accurately track and improve your DORA metrics by designing your system to handle this variability.
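One way to handle that variability is to normalize deployment events from every tool into a common shape, with the definition of "done" as configuration. This is a sketch under assumed stage names, not a prescription:

```python
from dataclasses import dataclass
from datetime import datetime

DONE_STAGE = "production"  # another team might choose "release" or "pre-production"

@dataclass
class DeploymentEvent:
    service: str
    stage: str              # e.g. "pre-production", "release", "production"
    finished_at: datetime
    succeeded: bool

def completed_deployments(events: list[DeploymentEvent]) -> list[DeploymentEvent]:
    """Keep only the events that match this organization's definition of done."""
    return [e for e in events if e.succeeded and e.stage == DONE_STAGE]
```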

Use Software Engineering Intelligence to Track DORA Metrics

While manually measuring these metrics is possible, the process can be time-consuming and prone to error. Instead, most organizations use automated tools that integrate with their existing CI/CD pipelines. These tools collect data from version control systems, deployment servers, and project management environments to calculate DORA metrics in real time. Software Engineering Intelligence (SEI) platforms are a standard solution for connecting these data sources in a way that makes it easy to slice and dice metrics to match your organization’s specific needs, whether by initiative, team, project, or repository.

Engineering metrics dashboard showing cycle time, coding time, pickup time, review time, and deploy time.

How Do You Improve DORA Metrics?

Tracking DORA metrics is not a one-time effort. To be successful, you’ll need to provide ongoing attention and analysis to ensure that you're continuously improving your performance. Once you’ve begun to track DORA metrics, it’s time to identify opportunities for improvement and track your progress toward achieving goals over time. This section covers how to use each DORA metric to drive operational improvements.

Deployment Frequency

High deployment rates result from a product team that produces well-scoped code changes, uses automation to streamline the merge process, and has implemented an efficient CI/CD pipeline to automate testing and delivery stages. High deployment rates correlate with project predictability and increased trust from users and clients. This metric reflects your team’s agility and speed in delivering value, which impacts user satisfaction and your business’s ability to capture market opportunities.

Here are three tips for improving deployment frequency:

  • Adopt a practice of releasing early and often by shipping code in smaller, more manageable chunks. Set goals to limit PR size and use automation to notify your team of large PRs that can slow down deployment pipelines (see the sketch after this list).
  • Use policy-as-code to automate standardized merge workflows that set clear expectations for all participants about the end-to-end experience. Automate routine code reviews, route reviews based on the contents of PRs, and minimize excessive review notifications.
  • Strengthen your CI/CD pipelines with parallel testing and automated code integration, testing, and deployment. Identify and remediate infrastructure bottlenecks such as limited server capacity, sluggish networks, and outdated hardware.
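Here is a hedged sketch of the large-PR notification from the first tip. The 300-change threshold mirrors the goal shown later in this article, and the payload fields follow GitHub's pull_request webhook; treat the wiring as an assumption to adapt to your own stack:

```python
MAX_PR_CHANGES = 300  # goal: keep PRs at or below 300 changed lines

def large_pr_warning(pr: dict) -> str | None:
    """Return a warning for an oversized PR, or None if it's within the goal."""
    total_changes = pr["additions"] + pr["deletions"]
    if total_changes > MAX_PR_CHANGES:
        return (f"PR #{pr['number']} has {total_changes} changed lines "
                f"(goal: <= {MAX_PR_CHANGES}). Consider splitting it up.")
    return None
```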

Read the guide on improving deployment frequency for more practical strategies.

Change Lead Time

Change lead time is the best place to look for an at-a-glance read on engineering efficiency because it represents the time it takes software to reach your customers. Most organizations start with change lead time when seeking to improve their efficiency because it offers the most immediate and direct opportunities for improvement.

Here are some tips to improve your change lead time:

  • Benchmark your performance against industry standards to identify your software delivery lifecycle bottlenecks.
  • Streamline merge pathways by automating manual tasks, unblocking low-risk changes, quickly routing PRs to relevant code experts, and providing deeper context to help developers better understand and evaluate code.
  • Invest in CI/CD tooling to automate scanning, testing, and building requirements. Use programmable workflows to empower engineering teams to optimize workflows while providing guardrails with policy-as-code.

WorkerB notifications over PR lifecycle and review-time charts: one flags a PR that took 23 days and 4 hours to merge against a 4-day goal; another flags a PR with 967 code changes against a 300-change limit, with links to view more long-lifespan and large PRs.

Read How to Master Lead Time for Changes for a detailed exploration of how to reduce change lead time.

Change Failure Percentage

Change Failure Percentage is a trailing indicator of your product’s predictability and reliability, two critical components of building customer trust. An inability to minimize failures can cause long-term damage to your product’s reputation and may result in customers seeking more reliable alternatives. A high Change Failure Percentage can also drastically reduce developer productivity as developers manage rollbacks and hotfixes that disrupt their daily workflows. Failures force developers to context switch and introduce frustrating unplanned work.

Here are some tips on how to improve your change failure percentage:

  • Adopt testing frameworks across your engineering organization to standardize testing practices. Use automation to maintain best practices for unit, integration, and end-to-end testing.
  • Improve quality assurance processes to optimize code reviews by ensuring product and engineering experts review all code before it reaches production. Use automation to reduce the cognitive burden of code reviews while minimizing operational overhead.
  • Use a phased deployment strategy and avoid rushing deployments into production. Use tools like feature flagging to roll out new code gradually (see the sketch after this list), and improve your CI/CD automation to identify risky code and roll it back before it hits production.
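To illustrate the feature-flagging tip, here is a minimal sketch of a percentage-based rollout. Deterministic hashing keeps each user's experience stable as you ramp exposure from 1% to 100%; the function is a generic stand-in for whatever flagging tool you use:

```python
import hashlib

def flag_enabled(flag_name: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministically bucket users so a gradual rollout stays consistent."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100) per flag/user pair
    return bucket < rollout_percent
```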

Look at How to Improve Change Failure Percentage to learn more about strategies for improving this metric.

Failed Deployment Recovery Time

Failed deployment recovery time is a trailing indicator of your organization’s operational resilience and the robustness of your product; these are two critical components of building customer trust. Failure to maintain quick response times can cause long-term damage to your product’s reputation and may result in customers seeking more reliable alternatives.

Poor failed deployment recovery time can also impact developer productivity. The more time developers spend responding to failures, the less time they have to produce new value. Extended disruptions also negatively affect business continuity and the ability to deliver against goals and deadlines, particularly in mission-critical environments.  

Here are some tips on improving failed deployment recovery time:

  • Provide the tooling and training your teams need to be prepared to respond to incidents quickly and effectively. As reliability improves, proactively invest in stress testing, controlled disruptions, and preemptive fortification.
  • Conduct regular post-mortems after failures to identify the root causes of vulnerabilities and weaknesses. Cultivate a culture of continuous improvement that uses blameless post-mortems as a preventative measure to improve processes, documentation, and tooling.
  • Unblock workflows to quickly organize a response team and remove operational inefficiencies that prevent teams from responding to significant issues. Use automation to notify stakeholders when their attention and support are required.

For practical advice on reducing failed deployment recovery time, check out How to Improve Failed Deployment Recovery Time.

What Does Elite DORA Metrics Achievement Look Like?

Every year, LinearB publishes the Engineering Benchmarks Report, which collects data from thousands of engineering teams and compiles it into a series of benchmarks across industries, locations, and company sizes. This report is a good place to start when evaluating how close your organization is to elite-level performance against DORA metrics.

The following table breaks down the benchmarks for all DORA metrics.

| Metric | Elite | Good | Fair | Needs Improvement |
| --- | --- | --- | --- | --- |
| Deployment Frequency (per service) | > 1/day | > 2/week | 1 - 2/week | < 1/week |
| Change Lead Time (hours) | < 19 | 19 - 66 | 66 - 218 | > 218 |
| Change Fail Percentage (%) | < 1% | 1% - 8% | 8% - 39% | > 39% |
| Failed Deployment Recovery Time (hours) | < 7 | 7 - 9 | 9 - 10 | > 10 |
| Coding time (hours) | < 0.5 | 0.5 - 2.5 | 2.5 - 24 | > 24 |
| Pickup time (hours) | < 1 | 1 - 3 | 3 - 14 | > 14 |
| Review time (hours) | < 0.5 | 0.5 - 3 | 3 - 18 | > 18 |
| Deploy time (hours) | < 3 | 3 - 69 | 69 - 197 | > 197 |
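To put these numbers to work, you can bucket your own measurements against the table. Here is a minimal sketch for change lead time, using the thresholds above; the other metrics follow the same pattern:

```python
def lead_time_tier(hours: float) -> str:
    """Classify change lead time (in hours) against the benchmark tiers above."""
    if hours < 19:
        return "Elite"
    if hours <= 66:
        return "Good"
    if hours <= 218:
        return "Fair"
    return "Needs Improvement"
```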

Habitualize DORA Metrics for Continuous Improvement

DORA metrics provide a robust framework for measuring and improving software delivery performance. However, the actual value of these metrics lies in their ability to empower teams to improve continuously. Here are some habits your team can form to ensure long-term success in tracking DORA metrics:

  • Optimize change lead time by setting goals for pickup time, review time, and PR size, and by notifying developers when their attention is needed to keep work on track.
  • Hold monthly metrics meetings where engineering leadership shares their wins and challenges and works together to unblock developer workflows.
  • Encourage managers and team leads to check DORA metrics weekly to uncover work or teammates who need additional support.
  • Use workflow automation to optimize your code review processes. Code reviews are the most frequent source of inefficiencies related to change lead time.

It's important to remember that improving DORA metrics is not an end goal. Instead, they’re a means to measure engineering’s ability to deliver quality software efficiently and enhance organizational performance. DORA metrics are helpful for engineering leaders who want to make data-driven decisions that improve their ability to provide value to customers.

If you’re ready to track DORA metrics, sign up for a free LinearB account.