Engineering Metrics Features

DORA Metrics

What are DORA metrics?

DevOps Research and Assessment (DORA) is a research team at Google that outlines a set of metrics to provide an end-to-end snapshot of your engineering quality and efficiency. The key DORA metrics are lead time for changes, deployment frequency, change failure rate, and mean time to restore.
Cycle time breakdown, CFR, MTTR, Deploy Frequency Line charts

Why should you monitor DORA metrics?

DORA metrics are the industry-standard way to get a high-level overview of an engineering organization's performance. They are a valuable tool for determining where your team should focus to make operational improvements and track progress toward goals. Improving your DORA metrics leads to improvements in software delivery and profitability and encourages a culture of continuous improvement and growth. Lastly, DORA metrics also provide an effective solution for engineering to demonstrate their operational ability to company leadership.

How do you improve your DORA metrics?

Dive deeper into LinearB resources to learn more about each DORA metric.

Cycle Time

What is cycle time?

Cycle time is the time from the first commit to production release, and it consists of four phases: coding time, pickup time, review time, and deploy time. LinearB uses cycle time in place of the DORA lead time for changes metric.
Cycle time breakdown
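The four phases are simple differences between PR lifecycle timestamps. A minimal sketch of the breakdown (the event names and timestamps below are illustrative assumptions, not LinearB's data model):

```python
from datetime import datetime

# Hypothetical PR event timeline; field names are illustrative only.
pr = {
    "first_commit":   datetime(2024, 5, 1, 9, 0),
    "pr_opened":      datetime(2024, 5, 1, 15, 0),
    "review_started": datetime(2024, 5, 2, 10, 0),
    "merged":         datetime(2024, 5, 2, 14, 0),
    "deployed":       datetime(2024, 5, 2, 16, 0),
}

def hours_between(start, end):
    """Elapsed time between two events, in hours."""
    return (end - start).total_seconds() / 3600

breakdown = {
    "coding_time": hours_between(pr["first_commit"], pr["pr_opened"]),
    "pickup_time": hours_between(pr["pr_opened"], pr["review_started"]),
    "review_time": hours_between(pr["review_started"], pr["merged"]),
    "deploy_time": hours_between(pr["merged"], pr["deployed"]),
}

# Cycle time is the end-to-end span, which equals the sum of the four phases.
cycle_time = hours_between(pr["first_commit"], pr["deployed"])
```

Because the phases partition the timeline, the four values always sum to the total cycle time.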

Why should you monitor cycle time?

Cycle time is the best at-a-glance indicator of engineering efficiency because it represents how long it takes software to reach your customers. Most organizations start with this metric when seeking opportunities to improve their efficiency. Each stage of cycle time is covered in detail below.

How do you improve cycle time?

  1. Benchmark your performance against industry standards to identify your software delivery lifecycle bottlenecks.
  2. Streamline merge pathways by automating manual tasks, unblocking low-risk changes, quickly routing PRs to relevant code experts, and providing deeper context to help developers better understand and evaluate code.
  3. Invest in CI/CD tooling to automate scanning, testing, and building requirements. Use programmable workflows to empower engineering teams to optimize workflows while providing guardrails with policy-as-code.

What is a good cycle time?

Elite
< 19 hours
Good
19 - 66 hours
Fair
66 - 218 hours
Needs Focus
> 218 hours
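The benchmark bands above can be applied programmatically when building dashboards or alerts. A small sketch using the thresholds from the table (the function name and the choice to treat each upper bound as exclusive are our own assumptions):

```python
# Cycle time benchmark bands from the table above, in hours.
CYCLE_TIME_BANDS = [("Elite", 19), ("Good", 66), ("Fair", 218)]

def rate_cycle_time(hours):
    """Map a cycle time in hours to its benchmark band."""
    for label, upper_bound in CYCLE_TIME_BANDS:
        if hours < upper_bound:
            return label
    return "Needs Focus"
```

For example, `rate_cycle_time(12)` returns `"Elite"`, while a 300-hour cycle time falls into `"Needs Focus"`.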

Coding Time

What is coding time?

Coding time is the period between the first commit to a branch and the opening of a PR to merge this branch into the primary branch.
Coding Time Graph

Why should you monitor coding time?

A high coding time can indicate upstream problems with project management and engineering planning and design. Development requirements should be as detailed as possible; poor planning can result in excessive development time spent responding to uncertain conditions. High coding time can also indicate underinvestment in developer experience, because it suggests developers are struggling with the most critical aspect of their job: writing code. To fully understand the sources of poor developer experience, it’s important to analyze coding time by team, repo, project, and domain.

How do you improve coding time?

  1. Focus on breaking tasks into more manageable pieces to limit the number of changes introduced in individual PRs. Commit to a culture of commit early, commit often to make work visible as early as possible, and limit the time you spend tracking down sources of inefficiencies.
  2. Empower engineering teams to manage goals around PR size and make it easy to investigate sources of high coding time. Foster an environment of collective accountability where each team self-manages working agreements.
  3. For projects with higher coding time, invest cycles into reducing technical debt and code complexity. Minimize unknown unknowns by spending more time planning and designing before beginning development work.

What is a good coding time?

Elite
< 0.5 hours
Good
0.5 - 2.5 hours
Fair
2.5 - 24 hours
Needs Focus
> 24 hours

Pickup Time

What is pickup time?

Pickup time is the time between when someone creates a PR and when a code review has started.
Pickup time line chart, WorkerB pickup time notification

Why should you monitor pickup time?

Research shows that PR pickup and review time are the most frequent sources of software delivery inefficiency. For most organizations, this is one of the quickest paths to immediate improvements in software delivery efficiency. The longer code waits for review, the bigger the context switch for all involved participants. Long pickup times and excessive code review notifications reduce developer productivity and negatively impact developer experience by hurting developers’ ability to achieve a flow state.

How do you improve pickup time?

  1. Unblock merges by reducing or eliminating review requirements for documentation improvements, test additions, non-critical updates, and other low-risk changes to reduce the review burden on your engineering teams.
  2. Optimize merge routing to ensure developers are brought into the code review process when their attention is required. Flag outliers that need reviews from specialized teams, automate standard code reviews, and minimize excessive review notifications to keep developers focused on productive work.
  3. Empower engineering teams to manage work agreements for the code review process. Foster a culture of collective responsibility to ensure no PR is left unchecked and redeploy resources quickly to support teams in need.

What is a good pickup time?

Elite
< 1 hour
Good
1 - 3 hours
Fair
3 - 14 hours
Needs Focus
> 14 hours

Review Time

What is review time?

Review time is the time between the start of a code review and when someone merges the code.
Review time line chart and bar chart

Why should you monitor review time?

Research shows that PR pickup and review time are the most frequent sources of software delivery inefficiency. For most organizations, this is one of the quickest paths to immediate improvements in software delivery efficiency. Long review times can indicate your organization might have unclear policies and processes for the merge process, that specific teams or individuals may be overworked, or that you may need more code expertise to distribute review responsibilities—all of these negatively impact developer productivity and experience.

How do you improve review time?

  1. Foster a culture of shared accountability in the code review process by establishing working agreements that ensure code gets the attention it needs. Set goals to limit PR size and review time, and use automation to notify your team when reviews are at risk of falling behind.
  2. Use policy-as-code to automate standardized workflows for the merge process that set clear expectations for all participants about the end-to-end experience. Enforce code requirements without creating excessive operational overhead.
  3. Optimize the PR merge process to efficiently route code reviews based on the contents of PRs. Automate standard code reviews and minimize excessive notifications to keep developers focused on productive work.

What is a good review time?

Elite
< 0.5 hours
Good
0.5 - 3 hours
Fair
3 - 18 hours
Needs Focus
> 18 hours

Deploy Time

What is deploy time?

Deploy time is the time between when code is ready to merge and when your CD system deploys it to production.
Deploy Time line chart

Why should you monitor deploy time?

Deploy time represents how efficiently your organization delivers finished product to your customers. A high deploy time can affect reliability by delaying the ability to identify and respond to production failures.

How do you improve deploy time?

  1. Minimize personnel bottlenecks by reducing or eliminating the need for manual intervention during software deployment. Build robust processes that diversify the responsibility of gate-checking code before it enters production to eliminate reliance on individuals or single teams.
  2. Adopt a mindset of release early, release often by releasing code in smaller, more manageable chunks. Set goals to limit PR size and use automation to notify your team of large PRs that have the potential to slow down deployment pipelines.
  3. Invest in optimizing and automating your CI/CD tooling to reduce the time spent waiting for pipelines to finish.

What is a good deploy time?

Elite
< 3 hours
Good
3 - 69 hours
Fair
69 - 197 hours
Needs Focus
> 197 hours

Deployment Frequency

What is deployment frequency?

Deployment frequency is how often your organization deploys code to production over time.
Deploy Frequency bar chart
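Computing this metric amounts to counting production deploys per service over a time window. A minimal sketch (the deployment log format below is a hypothetical example):

```python
from collections import Counter
from datetime import date

# Hypothetical deployment log: (service, deploy date) pairs over one week.
deploys = [
    ("api", date(2024, 5, 6)),
    ("api", date(2024, 5, 7)),
    ("api", date(2024, 5, 9)),
    ("web", date(2024, 5, 8)),
]

def weekly_deploy_frequency(deploys, weeks):
    """Average number of deploys per week, per service."""
    counts = Counter(service for service, _ in deploys)
    return {service: n / weeks for service, n in counts.items()}

frequency = weekly_deploy_frequency(deploys, weeks=1)
```

Here the `api` service deploys three times per week and `web` once, which the benchmark table below would rate "Good" and "Fair" respectively.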

Why should you monitor deployment frequency?

High deployment rates result from a product team that produces well-scoped code changes, uses automation to streamline the merge process, and has implemented an efficient CI/CD pipeline to automate testing and delivery stages. High deployment rates correlate directly with project predictability and increased trust from users and clients. The metric reflects your team’s agility and speed in delivering value, which impacts user satisfaction and your business’s ability to capture market opportunities.

How do you improve deployment frequency?

  1. Adopt a mindset of release early, release often by releasing code in smaller, more manageable chunks. Set goals to limit PR size and use automation to notify your team of large PRs that have the potential to slow down deployment pipelines.
  2. Use policy-as-code to automate standardized workflows for the merge process that set clear expectations for all participants about the end-to-end experience. Automate standard code reviews and minimize excessive review notifications to efficiently route code reviews based on the contents of PRs.
  3. Implement parallel testing incorporating automated code integration, testing, and deployment to strengthen your CI/CD pipelines. Identify and remediate infrastructure bottlenecks such as server capacity, sluggish networks, and outdated hardware.

What is a good deployment frequency?

Elite
> 1/day per service
Good
> 2/week per service
Fair
1 - 2/week per service
Needs Focus
< 1/week per service

Change Failure Rate

What is Change Failure Rate?

Change Failure Rate (CFR) is the percentage of production deployments that result in one or more failures. Failures may include bugs, incidents, feature rollbacks, or other regressions.
Change Failure Rate bar chart
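As a formula, CFR is simply the number of deployments that caused a failure divided by total deployments. A hedged sketch (the deployment record shape is an illustrative assumption):

```python
def change_failure_rate(deployments):
    """Percentage of deployments that resulted in one or more failures."""
    failed = sum(1 for d in deployments if d["failures"] > 0)
    return 100.0 * failed / len(deployments)

# 20 deployments, 2 of which caused at least one failure.
deployments = [{"failures": 0}] * 18 + [{"failures": 2}, {"failures": 1}]
cfr = change_failure_rate(deployments)
```

Note that a deployment with multiple failures still counts once; CFR measures failed deployments, not total incidents.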

Why should you monitor CFR?

CFR is a trailing indicator of your product’s predictability and reliability, two critical components of building customer trust. An inability to minimize failures can cause long-term damage to your product’s reputation and may result in customers seeking more reliable alternatives. High CFR can also drastically reduce developer productivity as they manage rollbacks and hotfixes that disrupt their daily workflows. Failures force developers to context switch and can contribute to frustration over introducing unplanned work.

How do you improve CFR?

  1. Adopt testing frameworks across your engineering organization to standardize testing practices. Use automation to maintain best practices for unit, integration, and end-to-end testing.
  2. Improve quality assurance processes to optimize code reviews by ensuring product and engineering experts review all code before it reaches production. Use automation to reduce the cognitive burden of code reviews while minimizing operational overhead.
  3. Use a phased deployment strategy and avoid rushing deployments into production. Use tools like feature flagging to gradually implement new code and improve your CI/CD automation to identify risky code and roll it back before it hits production.

What is a good CFR?

Elite
< 1%
Good
1% - 8%
Fair
8% - 39%
Needs Focus
> 39%

Mean Time To Restore

What is Mean Time To Restore?

Mean Time To Restore (MTTR) is the average amount of time it takes your team to respond to and recover from failures in production.
Mean Time to Restore bar chart
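MTTR is the mean of each incident's recovery duration. A minimal sketch (the incident record fields are hypothetical):

```python
from datetime import datetime

# Hypothetical incident records with detection and restoration timestamps.
incidents = [
    {"detected": datetime(2024, 5, 1, 10, 0), "restored": datetime(2024, 5, 1, 12, 0)},
    {"detected": datetime(2024, 5, 3, 9, 0),  "restored": datetime(2024, 5, 3, 15, 0)},
]

def mttr_hours(incidents):
    """Mean time to restore across all incidents, in hours."""
    total_seconds = sum(
        (i["restored"] - i["detected"]).total_seconds() for i in incidents
    )
    return total_seconds / len(incidents) / 3600

mttr = mttr_hours(incidents)
```

A two-hour and a six-hour outage average out to a four-hour MTTR, which falls in the "Elite" band below.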

Why should you monitor MTTR?

MTTR is a trailing indicator of your organization’s operational resilience and the robustness of your product; these are two critical components of building customer trust. Failure to maintain quick response times can cause long-term damage to your product’s reputation and may result in customers seeking more reliable alternatives. Poor MTTR can also impact developer productivity because the more time developers spend responding to failures, the less time they have for producing new value. Long disruptions also negatively impact business continuity and the ability to deliver against goals and deadlines, particularly in mission-critical environments.

How do you improve MTTR?

  1. Provide the tooling and training your teams need to be prepared to respond to incidents quickly and effectively. As reliability improves, proactively invest in stress testing, controlled disruptions, and preemptive fortification.
  2. Conduct regular post-mortems after failures to identify the root causes of vulnerabilities and weaknesses. Cultivate a culture of continuous improvement that uses blameless post-mortems as a preventative measure to improve processes, documentation, and tooling.
  3. Unblock workflows to quickly organize a response team and remove operational inefficiencies that prevent teams from responding to significant issues. Use automation to notify stakeholders when their attention and support are required.

What is a good MTTR?

Elite
< 7 hours
Good
7 - 9 hours
Fair
9 - 10 hours
Needs Focus
> 10 hours

Rework Rate

What is rework rate?

Reworked code is relatively new code that is modified again in a PR, and rework rate is the percentage of total code changed that was reworked over a specified period.
High Risk Work: Avoid merging PRs with more than 150 code changes and over 65% rework or refactor
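The computation reduces to classifying each changed line by the age of the code it touches. A sketch under stated assumptions (the 21-day recency cutoff and the input format are illustrative choices, not a fixed rule):

```python
def rework_rate(changed_line_ages, recent_days=21):
    """Percent of changed lines that modify recently written code.

    changed_line_ages: age in days, at modification time, of each changed line.
    The 21-day recency cutoff is an illustrative assumption.
    """
    reworked = sum(1 for age in changed_line_ages if age <= recent_days)
    return 100.0 * reworked / len(changed_line_ages)

# Four changed lines: two touch code written within the last three weeks.
rate = rework_rate([3, 10, 40, 100])
```

Changes to lines older than the cutoff would instead count toward the refactor rate discussed later in this document.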

Why should you monitor rework rate?

Code rework is a leading indicator of team frustration because it is often a symptom of a misalignment between product and development. This misalignment could result from process failures, incomplete requirements, a lack of skills, poor communication, excessive tech debt, poor testing practices, architectural problems, or high code complexity. High rework can also indicate software quality problems because, statistically speaking, the more code is changed, the more likely you are to introduce additional problems. High rework often correlates with a higher change failure rate and can indicate a need to invest in CI/CD to catch errors earlier in the development cycle.

How do you improve rework rate?

  1. Benchmark your investments into developer experience to ensure you’re making adequate investments relative to new value generation. Long-term underinvestment in tooling, platforms, test automation, technical debt, and backlog reduction can force developers to face more unexpected challenges.
  2. Identify disruptive tasks by monitoring added, unplanned, and incomplete work during each sprint. Investigate these tasks to determine if there are deeper problems with task scoping, code complexity, knowledge gaps, or other issues that hinder productivity.
  3. Investigate PRs with high rework to find opportunities to improve development guardrails. Use static analysis, test automation, and code review workflows to help developers identify and resolve common mistakes and reduce the number of errors introduced into production.

What is a good rework rate?

Elite
< 2%
Good
2% - 5%
Fair
5% - 7%
Needs Focus
> 7%

Additional Resources

PRs Merged Without Review

What are PRs merged without review?

PRs merged without review represent the number of code changes that enter production without peer review.
Merged Without Review: A pull request with 2.8k code changes was merged without review.
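Tracking this metric means counting merged PRs with no recorded review, usually alongside their share of all merges. A minimal sketch (the PR record fields are hypothetical):

```python
def unreviewed_merge_share(prs):
    """Count and percentage of merged PRs that received no review."""
    merged = [pr for pr in prs if pr["merged"]]
    unreviewed = sum(1 for pr in merged if not pr["reviewed"])
    return unreviewed, 100.0 * unreviewed / len(merged)

prs = [
    {"merged": True,  "reviewed": True},
    {"merged": True,  "reviewed": True},
    {"merged": True,  "reviewed": False},
    {"merged": True,  "reviewed": True},
    {"merged": False, "reviewed": False},  # open PRs are excluded
]
count, share = unreviewed_merge_share(prs)
```

Expressing the count as a share makes it comparable across teams with different merge volumes.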

Why should you monitor PRs merged without review?

Code reviews are essential to ensure software quality because even the best developers sometimes make mistakes, and peer review helps cover individual blindspots. A high volume of PRs without review often correlates to a higher change failure rate. In other words, a lack of formal review policies significantly increases the risk of bugs or other problems reaching production.

How do you reduce the number of PRs merged without review?

  1. Create dashboards to visualize code review depth and identify teams, projects, or initiatives with a high rate of PRs merged without review for further investigation.
  2. Coordinate with engineering managers to set team working agreements related to the PR process and establish standards for code review metrics like review depth, pickup time, and review time. Monitor team progress towards achieving PR review goals and notify your team when PRs risk failing to meet objectives.
  3. Optimize merge pathways by automatically distributing code review responsibilities to ensure you aren’t over-relying on a small number of developers for reviews. Use static analysis, test automation, and other workflow automations to minimize common errors and reduce over-notifying your developers for reviews you could automate.

Review Depth

What is review depth?

Review depth measures the average number of comments per pull request review.
WorkerB: A pull request with 44 code changes was merged without review.
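As an average of comments over reviewed PRs, the metric is straightforward to compute. A sketch (the PR record fields are illustrative assumptions):

```python
def review_depth(prs):
    """Average number of review comments per reviewed pull request."""
    reviewed = [pr for pr in prs if pr["reviewed"]]
    return sum(pr["comments"] for pr in reviewed) / len(reviewed)

prs = [
    {"reviewed": True,  "comments": 4},
    {"reviewed": True,  "comments": 2},
    {"reviewed": False, "comments": 0},  # unreviewed PRs are excluded
]
depth = review_depth(prs)
```

Excluding unreviewed PRs from the denominator keeps this metric distinct from the merged-without-review count above.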

Why should you monitor review depth?

Code reviews are only helpful when each PR gets adequate attention. When developers experience overwork or unclear expectations, they often respond by limiting engagement with non-development tasks like code reviews. Shallow code reviews indicate potential risk areas for software quality. It’s important to note that review depth should not be a goal in itself. Instead, it should be used as an indicator to identify areas for investigation. Changes that affect documentation, test automation, non-production systems, and other low-risk areas often require little to no scrutiny; review depth has much less significance in these situations.

How do you improve review depth?

  1. Create dashboards to visualize code review depth and identify teams, projects, or initiatives with a high rate of PRs merged with little to no review for further investigation.
  2. Coordinate with engineering managers to set team working agreements related to the PR process and establish standards for code review metrics like review depth, pickup time, and review time. Monitor team progress towards achieving PR review goals and notify your team when PRs risk failing to meet goals.
  3. Optimize merge pathways by automatically distributing code review responsibilities to ensure you aren’t over-relying on a small number of developers for reviews. Use static analysis, test automation, and other workflow automations to minimize common errors and reduce over-notifying your developers for reviews you can automate.

Non-Functional Work

What is non-functional work?

Non-functional work consists of tasks that ensure reliability, availability, stability, speed, performance, security, and similar qualities.

Why should you monitor non-functional work?

A primary goal of most software engineering organizations is to deliver new value aligned with strategic business objectives. However, dedicating all resources solely to new features can accumulate technical debt, bugs, and a poor developer experience. Monitoring non-functional work is crucial for sustaining long-term value delivery and software quality. If neglected, these issues undermine software quality and increase the likelihood of unexpected problems that disrupt the team's workflow. By monitoring and addressing non-functional work, engineering leaders can ensure a more stable, efficient, and scalable development environment, ultimately supporting the continuous delivery of high-quality software.

How do you manage non-functional work?

  1. Set benchmarks for your team that right-size investments into new value vs. keeping the lights on. Track long-term progress toward optimizing your organization to generate new value while maintaining consistent investments in maintenance and overhead.
  2. Monitor your inefficiency pool to identify opportunities to improve developer productivity. Invest in developer tooling improvements and workflow automation to minimize wasted time on non-productive work.
  3. Build dashboards to track bug investment and rework rate to identify teams managing high volumes of technical debt and reallocate engineering resources to reduce code complexity in projects that cause large numbers of failures.

PR Size

What is PR size?

Pull request size is a metric that measures the average number of code changes, in lines of code, per pull request.
Pull Request Size: A pull request was opened with 967 code changes.
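Counting a PR's code changes as lines added plus lines deleted, the metric is a simple average. A sketch (the PR record fields are hypothetical, and this definition of "code changes" is our assumption):

```python
def average_pr_size(prs):
    """Average code changes (lines added plus lines deleted) per pull request."""
    return sum(pr["additions"] + pr["deletions"] for pr in prs) / len(prs)

prs = [
    {"additions": 80, "deletions": 20},
    {"additions": 30, "deletions": 10},
]
size = average_pr_size(prs)
```

An average of 70 code changes would fall in the "Elite" band of the benchmark table below.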

Why should you monitor PR size?

PR size is a leading indicator for cycle time and software quality. Smaller, more manageable pull requests help developers minimize the risk of introducing new bugs or causing production issues. When PRs are concise, they are easier to test and review, allowing developers to focus on specific functionalities without being overwhelmed by numerous changes. Additionally, smaller PRs enable teammates to conduct more thorough and focused reviews, increasing the likelihood of identifying problems within the code. This practice improves code quality and enhances collaboration and efficiency within the team. Engineering leaders can foster a more productive development environment by monitoring and encouraging smaller pull requests, ensuring more reliable software delivery.

How do you improve PR size?

  1. Create dashboards to visualize PR size and identify teams, projects, or initiatives that merge a high rate of large PRs for further investigation.
  2. Use team working agreements to set PR size goals and monitor progress towards meeting these goals. Alert your team when PRs don’t meet expectations to raise awareness in situations where team members need additional help.
  3. Make PR size a part of retrospective analysis following bugs, outages, and other failures to determine if excessive PR complexity contributed to the failure. Reallocate resources to reduce complexity on projects that consistently experience high PR size.

What is a good PR size?

Elite
< 98 code changes
Good
98 - 148 code changes
Fair
148 - 218 code changes
Needs Focus
> 218 code changes

Refactor Rate

What is refactor rate?

Refactor rate is the ratio of changes made to legacy code versus newly written code.
Refactor rate chart

Why should you monitor refactor rate?

Monitoring your code refactor rate helps maintain a healthy balance between new value creation and system upkeep. While refactoring is necessary for improving system quality, excessive refactoring can jeopardize existing functionality and indicate underlying software quality issues. If you’re more focused on stability and optimization, you should expect a higher refactor rate because it helps improve and maintain the codebase. However, if the goal is new value creation, a high refactor rate could signal problems that hinder progress. By monitoring the refactor rate, engineering leaders can ensure their teams effectively balance innovation with maintenance, leading to a more stable and scalable development environment.

How do you improve your refactor rate?

  1. Create dashboards to visualize code refactoring and identify teams, projects, or initiatives that merge a high rate of refactored changes for further investigation.
  2. Set benchmarks for your team that right-size investments into new value vs. keeping the lights on. Track long-term progress toward optimizing your organization to generate new value while maintaining consistent investments in maintenance and overhead.
  3. Monitor your inefficiency pool to identify opportunities to improve developer productivity. Invest in developer tooling improvements and workflow automation to minimize wasted time on non-productive work.
