It’s notoriously hard to evaluate quality and productivity when it comes to software development. That’s true at both the individual and the team level. So, it shouldn’t be surprising that engineers and team leads are always looking for ways to obtain an objective assessment of a team or project’s health. One could argue that there’s no better place to look for such assessments than the code itself. That’s why the idea of using Git metrics, or Git analytics, to evaluate and improve the performance of software teams has been gaining traction lately.
Commits don’t lie. Real work doesn’t lie. That’s why the analytics you gather from your repository are so valuable in painting an accurate picture of what your codebase and project really look like. However, not all Git metrics are created equal. Some are genuinely useful; others, not so much. In this post, we’ll walk you through nine metrics that can help you not only track your team’s performance but also give it the boost you want. Let’s begin.
Git Analytics: The 9 That Can Help You Most
We’re about to start covering our nine most valuable metrics for your Git repositories. But first, a brief caveat is in order. Keep in mind that we use the expression “Git analytics” in kind of a loose way. For instance, you’re probably aware that pull requests aren’t a native feature of Git but rather of repository management services like GitHub and GitLab. Though such services aren’t strictly required to work with Git, many organizations use them because they have workflows that rely on pull requests. So, because pull requests are an essential part of many organizations’ workflows, our list will feature metrics that rely on them.
Cycle Time

Cycle time refers to how much time passes from the start of work on a change until that change gets delivered. Simply put, the shorter the cycle time, the better. Shorter cycle times are a sign of a great software development process, complete with excellent communication between clients and the development team, efficient automation, and skilled engineers.
Coding Time

Coding time is a component of cycle time. It refers to how long it takes from the moment an engineer starts working on a task until they create the pull request. All else being equal, shorter coding times are better. That’s a sign, among other things, that the developer received clear and unambiguous requirements from the business, allowing them to develop the required functionality in an efficient way.
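To make these definitions concrete, here’s a minimal sketch in Python, assuming you’ve already pulled the relevant timestamps from your issue tracker and Git host. The event names below are made up for illustration, not a standard schema:

```python
from datetime import datetime, timedelta

def phase_durations(events: dict) -> dict:
    """Compute cycle-time phases from event timestamps.

    `events` maps milestone names to datetimes; the keys used here
    (work_started, pr_opened, deployed) are illustrative only.
    """
    return {
        # Coding time: start of work until the PR is opened.
        "coding_time": events["pr_opened"] - events["work_started"],
        # Cycle time: start of work until the change is delivered.
        "cycle_time": events["deployed"] - events["work_started"],
    }

durations = phase_durations({
    "work_started": datetime(2023, 5, 1, 9, 0),
    "pr_opened":    datetime(2023, 5, 2, 17, 0),
    "deployed":     datetime(2023, 5, 4, 12, 0),
})
```

In practice, “work started” might come from a ticket transition and “deployed” from your CI/CD pipeline; the arithmetic stays the same.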
PR Pick-Up Time
The amount of time it takes for pull requests (PRs) to be reviewed after the moment of their creation is what we call PR pick-up time. When PRs get parked for a long time before being reviewed, that might be a red flag indicating a problem with the team’s health.
Engineers might be overwhelmed with work and not have enough bandwidth to dedicate to code review. Maybe they do have the time but lack an incentive to review the pull requests. Alternatively, the size and complexity of PRs might scare reviewers away.
The opposite is also true: smaller PRs present benefits for the team. They’re usually easier to review since they’re simpler and less cognitively taxing to the reviewers. Since they have less code, they’re less likely to introduce defects into the code—and if bugs are introduced, they’re more likely to be spotted during review. Additionally—and most importantly—small PRs are more likely to be carefully reviewed; reviewers might just glance over gigantic PRs and approve them, but they’re less likely to do that with small PRs. So, besides contributing to a lower pick-up time, small PRs also contribute to better software quality. What’s not to like?
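If you want to flag oversized PRs automatically, the check is simple. Here’s a sketch, assuming PR records with `additions` and `deletions` counts like those reported by hosts such as GitHub; the 400-line threshold is an arbitrary starting point, not a standard:

```python
def oversized_prs(prs, max_lines=400):
    """Return titles of PRs whose total diff exceeds `max_lines`.

    400 changed lines is an arbitrary cutoff; tune it to your team.
    """
    return [
        pr["title"]
        for pr in prs
        if pr["additions"] + pr["deletions"] > max_lines
    ]

flagged = oversized_prs([
    {"title": "Fix typo in README", "additions": 1, "deletions": 1},
    {"title": "Rewrite billing module", "additions": 1800, "deletions": 950},
])
```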
Another cause of PRs not being picked up is inefficiency in automation and integrations. For instance, a team might decide to get Slack notifications from their CI/CD pipeline. That sounds like a great idea, but the team can quickly become overwhelmed by the sheer number of notifications and start to ignore them. A better solution here would be to use a strategy that eliminates the noise by focusing only on the most meaningful Slack alerts, like LinearB does.
Long PR pick-up times aren’t only a sign of potential problems. They can also be the cause of further issues, such as integration problems. The more time a branch is isolated, the likelier it is for it to have merge conflicts—and worse, logical conflicts—when it does get merged. Additionally, PRs that get stuck waiting to be picked up can also be a source of bottlenecks in the development process. Ideally, the work should be broken down into parts that are as independent from each other as possible, allowing the engineer to switch between tasks efficiently. However, that’s not always possible and, in such a case, the engineer would waste time waiting for their PR to be picked up, reviewed, and approved.
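Measuring pick-up time itself is straightforward once you have the timestamps. A sketch, assuming each PR record carries its creation time and the time of its first review (the field names are illustrative):

```python
from datetime import datetime

def pickup_time_hours(pr):
    """Hours between PR creation and its first review, or None if unreviewed."""
    if pr["first_review_at"] is None:
        return None
    delta = pr["first_review_at"] - pr["created_at"]
    return delta.total_seconds() / 3600

pr = {
    "created_at": datetime(2023, 5, 2, 17, 0),
    "first_review_at": datetime(2023, 5, 3, 11, 0),
}
hours = pickup_time_hours(pr)
```

PRs where the function returns `None` are exactly the “parked” PRs the section warns about, so they’re worth surfacing on their own.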
PR Review Time
Once someone picks the PR up, they might still take a long time to finish the review. That’s what this metric is about: the amount of time it takes until the PR gets merged, counting from the moment the first comment is posted.
Like the previous metric, the shorter this is, the better. So, why do pull requests sometimes take a long time to review? There’s some overlap between this metric and the previous one. For instance, a long review time might indicate that the PR is too large or complex, or that the reviewers are overwhelmed, among other problems. Finally, long review times contribute to a longer cycle time, meaning it takes longer for value to reach the customer.
How can you improve this metric? As with the previous metric, we advise you to leverage the benefits of efficient automation and meaningful alerts. Also, simpler, smaller PRs should be encouraged over large and complex ones. Last but not least, the team lead should pay special attention to the health of their dev team. By tracking important health indicators, the team lead should be able to identify when team members are overwhelmed, which might be one of the causes of PRs taking a long time to be reviewed.
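Computing review time follows the same pattern as pick-up time. Here’s a sketch that takes the median across PRs, assuming illustrative field names for the first comment and the merge:

```python
from datetime import datetime
from statistics import median

def review_time_hours(pr):
    """Hours from the first review comment until the PR is merged."""
    delta = pr["merged_at"] - pr["first_comment_at"]
    return delta.total_seconds() / 3600

prs = [
    {"first_comment_at": datetime(2023, 5, 3, 11, 0),
     "merged_at": datetime(2023, 5, 3, 15, 0)},
    {"first_comment_at": datetime(2023, 5, 3, 9, 0),
     "merged_at": datetime(2023, 5, 5, 9, 0)},
]
median_hours = median(review_time_hours(pr) for pr in prs)
```

The median is usually a better summary than the mean here, since one abandoned PR can skew an average badly.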
Branches Without a Linked Issue

Sometimes the repository will contain branches that you can’t trace back to a user story, issue, or ticket. Generally speaking, that’s a bad sign. First of all, the lack of traceability is certainly a bummer. But most importantly, the existence of such branches indicates the occurrence of shadow or ghost work. Why are engineers working on tasks that aren’t mapped to a user story? Most likely, it’s a problem of prioritization in the project.
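One way to surface such branches is to check their names against your ticket pattern. A sketch, assuming branch names embed Jira-style keys like `PROJ-123` (adapt the regex to your own convention):

```python
import re

# Hypothetical naming convention: branch names should embed a ticket key.
TICKET_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def untraceable_branches(branch_names):
    """Return branches whose names contain no recognizable ticket key."""
    return [b for b in branch_names if not TICKET_PATTERN.search(b)]

orphans = untraceable_branches([
    "feature/PROJ-42-add-login",
    "bugfix/quick-hack",
    "PROJ-7-cleanup",
])
```

Name-based matching is only a heuristic; linking branches to tickets through your tracker’s integration is more reliable when it’s available.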
Rework Rate

When code gets rewritten soon after it was first created, that can be a problem. The proportion of such code is what we call the “rework rate,” also sometimes known as “code churn.” This phenomenon usually tells us that the developers have been struggling with that area of the code. They might lack experience in the domain, the tech stack, or both. Or maybe the requirements weren’t properly defined or communicated.
How might you improve this metric? Well, you can’t improve what you don’t measure. So, the first step in keeping your rework rate at bay is actually tracking it.
Step number two is making sure that people know when the rework rate starts getting dangerously high. A great way to do that is by defining alerts.
Finally, it’s essential to address the root cause of rework. Though some causes of rework are related to individual developers, a better solution is to focus on the causes that affect the whole team or organization. For instance, if rework is high because developers aren’t getting clear requirements, that’s a problem that doesn’t affect a single developer, but the whole team. It’s important to investigate why they’re not getting better requirements—e.g., there might be communication problems between the business and the customer—and then fix that.
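The exact definition of rework varies from tool to tool, but the core idea can be sketched as follows. The record format and the 21-day window for what counts as “soon after” are assumptions for illustration, not a standard definition:

```python
from datetime import datetime, timedelta

REWORK_WINDOW = timedelta(days=21)  # assumed cutoff for "soon after"

def rework_rate(changes):
    """Fraction of changed lines that rewrite recently authored code.

    `changes` is a list of dicts with `lines`, `changed_at`, and
    `original_authored_at` (when the lines being touched were first
    written) -- a simplified, illustrative record format.
    """
    total = sum(c["lines"] for c in changes)
    reworked = sum(
        c["lines"]
        for c in changes
        if c["changed_at"] - c["original_authored_at"] <= REWORK_WINDOW
    )
    return reworked / total if total else 0.0

rate = rework_rate([
    {"lines": 30, "changed_at": datetime(2023, 6, 10),
     "original_authored_at": datetime(2023, 6, 1)},   # 9 days old: rework
    {"lines": 70, "changed_at": datetime(2023, 6, 10),
     "original_authored_at": datetime(2022, 1, 1)},   # old code: not rework
])
```

In a real pipeline you’d derive these records from `git log --numstat` plus `git blame`; the ratio itself stays this simple.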
Number of Commits

This one is pretty much self-explanatory. It’s just the number of commits authored by the team over a certain amount of time. Sure, this metric is very basic. And it could definitely be gamed—for example, tracking this metric could encourage engineers to make many tiny commits. But the idea here is that you shouldn’t look at this metric in isolation. Instead, use it as part of a larger picture. For instance, you could use it to create a baseline for the weekly productivity of a given department or team. If that number swings up or down by several standard deviations in a given week, that should trigger an investigation.
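The baseline-and-deviation idea above can be sketched like this. The commit counts and the two-standard-deviation threshold are made up for illustration:

```python
from statistics import mean, stdev

def is_anomalous(weekly_counts, this_week, threshold=2.0):
    """Flag a week whose commit count sits more than `threshold`
    standard deviations away from the historical baseline."""
    mu = mean(weekly_counts)
    sigma = stdev(weekly_counts)
    return abs(this_week - mu) > threshold * sigma

history = [40, 42, 38, 41, 39, 40]  # hypothetical weekly commit counts
```

A sudden spike is just as worth investigating as a drop: it might mean a deadline crunch, or someone gaming the count with tiny commits.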
Impact

“Impact” here refers to how much code is affected by any given commit. This metric is interesting because it can help balance the information we obtain from the previous metric, commits per developer. So, if the number of commits by an engineer goes up, but the impact of each commit is negligible, it’s a sign we have someone trying to game the metric.
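As a crude proxy, you could treat impact as the total lines touched per commit and cross-check it against commit volume. The cutoffs below are arbitrary, and commercial tools compute impact with far more nuance (file count, edit location, and so on):

```python
def mean_impact(commits):
    """Average lines touched per commit (insertions + deletions)."""
    if not commits:
        return 0.0
    return sum(c["insertions"] + c["deletions"] for c in commits) / len(commits)

# Hypothetical week: many commits, but each one barely touches the code.
busy_week = [{"insertions": 1, "deletions": 0} for _ in range(50)]
impact = mean_impact(busy_week)
suspicious = len(busy_week) > 30 and impact < 5  # arbitrary cutoffs
```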
Git Analytics: Your Way to a Data-Driven Team
In a field that’s strangely full of subjectivity, intuition, and appeals to authority, we can think of metrics and analytics as compasses that point us toward objectivity. But metrics, despite being immensely valuable, aren’t a panacea. There are important things you should always keep in mind if you don’t want your attempt at objective analysis to backfire.
First, if you don’t use metrics carefully, you could end up harming team morale. For instance, if you attach perks or rewards to individual metrics, you create an incentive for developers to game them.
Second, engineering metrics are most useful when you track them not in isolation but in correlation with one another. Even though Git metrics are in general more accurate than statistics from project management services—like Jira—you really get the best of both worlds when you can bring the two together to get the big picture view. For instance, you can visualize Git metrics in the context of each individual iteration.
Finally, team metrics are more useful than individual metrics. For instance, let’s say you start analyzing the number of commits per developer along with pull request pick-up time. You might find a positive correlation between a high number of commits per developer and long pick-up times, and conclude that engineers are handling way too many tasks, which is why they don’t have time to review PRs. That’s a team-level insight about how work is distributed, and it’s far more actionable, and far less damaging to morale, than singling out any one developer.
By adopting a tool like LinearB, you gain access to Git analytics along with project information. By correlating repository data with stats from your project management software, LinearB can give you unprecedented visibility into the health of your codebase, project, and team. That way, you can move away from guesswork and intuition and toward a data-driven approach to managing a software team.
Thanks for reading!