Fact: You can’t become better at anything unless you understand what getting better would actually look like. This is especially true in the case of engineering teams.

Following the analysis of 2,000 dev teams and over 4 million code branches, the 2023 Engineering Benchmarks report is out.

To walk us through the performance metrics of the top 10% of engineering teams, LinearB’s Head of Developer Relations Ben Lloyd Pearson makes his first Dev Interrupted appearance.

From how long elite teams take to complete code tasks to the size of their pull requests, this is a great episode to understand where your dev team stands and where they have concrete room to improve.

Episode Highlights:

  • (0:00) Accelerate State of DevOps survey
  • (2:15) Introductions
  • (7:38) Research behind the engineering benchmarks
  • (11:23) Delivery lifecycle metrics
  • (14:51) PR automation tooling
  • (18:32) Elite developer workflow metrics
  • (25:40) State of business alignment metrics
  • (34:19) Predictability and planning accuracy

Episode Transcript:

(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)

Dan Lines: Welcome to our latest LinearB Labs episode. These are episodes where we dive into the latest data, look at our research and thought leadership in engineering, and then see how we can apply it more tangibly to our own dev organizations. This episode is actually a follow-up to one of our first Labs episodes where we took a look under the hood.

We had about 2,000 dev teams, and we wanted to find out what metrics the top 10%, the elite portion of those dev teams, achieved. So today we're going to look at the most recent data we have on what makes an engineering team elite, with help from a new voice on Dev Interrupted: Ben Lloyd Pearson, LinearB's Director of Dev Relations.

Ben, welcome to the show.

Ben Lloyd Pearson: Thanks, Dan. It's really great to be here.

Dan Lines: Yeah, it's awesome to have you here, man. Ben, could you let our listeners know a bit about your background? And why you love doing this stuff and talking about this stuff?

Ben Lloyd Pearson: Yeah, so I've worked in dev for nearly 10 years now, and my career has been split pretty equally between engineering and marketing teams.

So I've always had this innate affinity for technology, like, my entire life. And I love this career choice because it really lets me merge my creative and my technical sides to produce persuasive and informative content for software developers. So before I got into Dev Rel, I was a bit of a technology jack of all trades.

I bounced between various roles in web development, systems administration, network management, and IT support, and that was where I built the technical foundation that I draw from today to produce all this Dev Rel content. And over the years I've worked with some of the largest and smallest engineering organizations in the world, from massive companies like Samsung and Verizon to small startups like LinearB.

And a lot of my work is focused on helping these teams achieve high-impact goals in an efficient way, and tracking those impacts via metrics. I've also been involved in a lot of open source community metrics in the past. And I really just love getting to apply that expertise to evaluating top-performing engineering teams.

And every team has its unique nuances, but there are also practices that I've seen over time that are pretty consistent from team to team. And the fact that LinearB can evaluate such a large volume of engineering metrics puts us in a pretty unique position to quantify software development best practices.

And I just think that's really neat.

Dan Lines: Yeah, that's awesome, man. I wanted to ask you, actually, I was gonna ask you this before the show and I forgot, but the role that you have, Director of Dev Relations, we won't get into the specific numbers, but it's a pretty lucrative career path and I think a really fun career path. And for the people that are listening that maybe are developers but wanna try something new, or are thinking, how do I get into this Dev Rel mode?

Do you have any advice on how to transition into that, or how you got your first job? If I'm a developer saying, I really like this stuff, I wanna enable communities, and it seems like there's a cool path there, any tips for the audience?

Ben Lloyd Pearson: Yeah. Communication is probably one of the skills that I think a lot of people, particularly people coming from technical backgrounds, can struggle with the most.

Learn how to do things like write blogs, get comfortable speaking to people about technical subjects, and practice breaking down really complicated things into simple concepts that people can understand. The more you can learn how to communicate the value of the technical work that you're doing, the more potential you have to move into this type of role.

You know, Dev Rel people like me are always looking for friends over in the engineering organization to help us with content. So if you have someone who does Dev Rel already in your organization, or who publishes technical blog posts or other content, it can be really helpful to get involved with them and contribute to what they're doing.

Dan Lines: Obviously you speak really well, present well, and all of that. But one thing that I will say, if you are thinking about this career path: it doesn't mean that you have to be perfect in your presentation, like you would see in a sales pitch or something like that.

I think the people that really do well in this role, yeah, you have to be comfortable going in front of an audience and putting yourself out there a bit, but it doesn't mean you have to be refined. The more real you are about being a developer, about implementing a solution, and it'd be good to get your feedback on this, but it usually goes over great, actually.

Not as polished. What do you think about that?

Ben Lloyd Pearson: What it really comes down to is combining synthetic and analytical skills. So being able to analyze a complex thing and break it down into its fundamental components, and then synthesizing that into some sort of content piece that you can share with the world.

And it can be as simple as a configuration for some software you build that just helps illustrate how things work, or some code examples, or anything like that. Yeah, it doesn't have to be a finely polished solution all the time, as long as it helps developers and is presented in a way that is familiar to them.

Dan Lines: Yeah. Okay, great. Thanks for sharing that. I'll move us on to the main point so our producers don't get angry with us. We're here to talk about this research and the data, and the first section that we have here is titled The Power of Having Metrics.

And the reason it's titled that way is what we've seen with engineering organizations and engineering teams: just having these types of metrics that you're gonna speak about accessible and visible makes a positive impact on how dev teams are working, and actually improves some of the metrics fairly significantly. So Ben, could you take us through this research on the impact of having these real-time engineering metrics accessible for everyone, along with what metrics your team should be aiming for?

Ben Lloyd Pearson: I'll start with the data that we drew from. So we analyzed more than 3,000 organizations who adopted LinearB metrics and our improvement guidance last year. This totaled about 2.8 million pull requests from more than 75,000 contributors. And the results that we saw reinforce many of the beliefs that we have about developer productivity.

70% of the organizations who adopted our product improved their cycle time, which is a concept that I want to discuss here in just a moment. And then 65% of these organizations improved their PR size. So overall, the organizations saw PRs that were about one-third smaller on average, and their cycle, review, and pickup times all improved by about 47 to 49% each.

Dan Lines: Wow. That's awesome. And we'll probably work it in a little bit throughout the episode here, but one thing that you touched on is the PR size, which is a great thing to be tracking. I think I said it in another series that we're doing, but that's becoming, for me, one of those golden metrics.

If I see a very small PR size, a very manageable PR size, usually there's a really good experience for developers. And then on the flip side, when you see those PR sizes getting big, like too large, 600, 800 lines of change, you start thinking, okay, things are probably moving really slowly here.

And then the other thing that I'll say as we start diving into some of these, into what it means to be elite: usually, and this is where, Ben, you're focusing a lot of your time with gitStream, those elite teams accompany that with automation or things that are helping in that PR lifecycle. How much would you say automation factors into being elite or not being elite?

Ben Lloyd Pearson: I guess the question is how much time are you wasting on repeatable actions that are really more of a distraction than a value add to the software development process? If you're constantly having to bug your teammates just to give a thumbs up so that you meet your one-size-fits-all review policy, then automating stuff like that can really make a big impact on this.

Dan Lines: Yeah, for sure. So we're going to move on to the next area here. We're gonna start running through some really cool data for the rest of the episode.

If you do wanna take a deeper look, you can find all of this data that we're talking about at linearb.io/benchmarks. That's where you're gonna get your community benchmarks data. You can compare how you're doing against the community, that type of stuff. It's really cool content, and you get it for free.

So definitely check it out. And to set a little more context before we dive into each section, there's three different buckets that we're gonna be talking about today. So the first one is delivery lifecycle metrics. The second one is developer workflow metrics, and the third one is business alignment metrics.

And we'll go through each one of those individually. We'll start with: what do elite delivery lifecycle metrics look like? So Ben, I'll kick it over to you to go through this first grouping of metrics for engineering teams.

Ben Lloyd Pearson: So delivery lifecycle metrics are your leading indicators for how long it takes a feature to go from programming to deployment.

So there are five key metrics that we track in this area. First, we have cycle time. This measures how long it takes for a single engineering task to go through all of the phases of the delivery process, from coding to production. Second, we have coding time, which measures the time from the first commit until a pull request is issued.

Short coding time tends to correlate with small PR sizes and much clearer requirements. Third is pickup time. This measures the time a pull request waits for someone to start reviewing it. A low pickup time indicates that you have a team that is pretty responsive to the needs of others.

Next we have review time. This measures the time it takes to complete a code review and merge the pull request. Review time is a good indicator of how collaborative your team is. And last is deploy time, which measures the time from when a branch is merged to when the code is released to production.

So a lower deploy time is good when you're an organization that has a high deployment frequency.

Dan Lines: To summarize that: cycle time is the end-to-end result of all of those smaller ones that you mentioned. So the smaller ones are the coding time, the PR pickup time, the PR review time, and then the deployment time.

And actually, we've said it a bunch of times on this pod, but I'll say it again. Cycle time is one of those classic DORA metrics that everyone should be measuring. You need that at your engineering organization level, but also for every business unit, for every group of teams, for every team.

But what we wanna talk about on this pod is: what does being super elite look like? And I think the numbers I have here are what the top 10% of engineering teams are putting up for these.

Ben Lloyd Pearson: Yeah, so there's a lot of variability in the individual metrics, but generally speaking, elite engineering teams have a cycle time of 42 hours or less, a coding time that is under 30 minutes.

And pickup time, review time and deploy time are each under about an hour.
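If you want to sanity-check your own numbers against those thresholds, here's a minimal sketch of how the five metrics fit together, assuming you've already pulled the relevant timestamps from your Git provider's API. The event record and its field names are hypothetical; the thresholds are the ones Ben just quoted.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PullRequestEvents:
    """Hypothetical timestamps pulled from your Git provider's API."""
    first_commit: datetime    # first commit on the branch
    pr_opened: datetime       # pull request issued
    review_started: datetime  # first reviewer activity
    merged: datetime          # branch merged to main
    deployed: datetime        # change released to production

def lifecycle_metrics(pr: PullRequestEvents) -> dict:
    """Break a PR's life into the four phases described above."""
    coding = pr.pr_opened - pr.first_commit
    pickup = pr.review_started - pr.pr_opened
    review = pr.merged - pr.review_started
    deploy = pr.deployed - pr.merged
    return {
        "coding_time": coding,
        "pickup_time": pickup,
        "review_time": review,
        "deploy_time": deploy,
        # Cycle time is the end-to-end sum of the phases.
        "cycle_time": coding + pickup + review + deploy,
    }

# Elite (top 10%) thresholds quoted in the episode.
ELITE = {
    "cycle_time": timedelta(hours=42),
    "coding_time": timedelta(minutes=30),
    "pickup_time": timedelta(hours=1),
    "review_time": timedelta(hours=1),
    "deploy_time": timedelta(hours=1),
}

def elite_report(metrics: dict) -> dict:
    """True for each metric that lands inside the elite range."""
    return {name: metrics[name] <= limit for name, limit in ELITE.items()}
```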

Dan Lines: So those are amazing times. And that's the top 10%. And that's why I was talking a little bit at the beginning of the episode here: a lot of these teams that are achieving that level of efficiency, you have to accompany that with automation.

Like a gitStream or another tool like that saying, hey, let me auto-assign the PR to the right person, or let me actually do an automated review and say, hey, this is really low risk, we're not gonna have anybody physically waste their time reviewing it. That's where I see a lot of these teams getting that, I don't know what you would say, what would you say? It's like a cheat code in video games, or jumping to the next level.

Ben Lloyd Pearson: Left left left down. A, B, select, start.

Dan Lines: Is that Contra? I don't even remember. No, I think that's NES Contra. Yeah, exactly like that. Could you say a word or two about gitStream, or any tool like that, and what it can do for PR pickup time and PR review time?

Ben Lloyd Pearson: Yeah, I think it really just comes down to unblocking reviews where you can, and then adding context and appropriate assignments where necessary. So if it's a thing that isn't gonna hurt you to just let it be merged without any sort of deep review, then just do that. Just let it go through.

And that way you're giving the space to the things that actually do need additional human attention to fully understand. And then when you're bugging someone to get a review on a PR, they know that because we've implemented these automations, it's something that actually is important, that they should pay attention to and give the attention that it needs.
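To make that concrete, here's a toy sketch of the kind of routing rule an automation tool like gitStream applies. This is not gitStream's actual configuration syntax, just a Python illustration of the idea: docs-only and test-only changes skip deep review, everything else gets a human. The path rules are placeholder assumptions you'd tune per repository.

```python
# Placeholder rules for what counts as "low risk"; tune per repo.
SAFE_SUFFIXES = (".md", ".rst", ".txt")  # documentation files
SAFE_PREFIXES = ("docs/", "tests/")      # doc and test directories

def needs_human_review(changed_files: list) -> bool:
    """Return True if any changed file falls outside the low-risk paths."""
    for path in changed_files:
        if path.endswith(SAFE_SUFFIXES) or path.startswith(SAFE_PREFIXES):
            continue
        return True   # something risky changed; assign a reviewer
    return False      # docs/tests only; safe to approve automatically

# A docs-only PR skips the queue; a source change gets real attention.
print(needs_human_review(["docs/setup.md", "README.md"]))               # False
print(needs_human_review(["src/billing.py", "tests/test_billing.py"]))  # True
```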

Dan Lines: So Ben, when we're thinking about, in particular, the middle of that cycle time, the PR pickup time and the PR review time, one of the things that I hear you talk about that's different between the really elite engineering teams and the engineering teams that are suffering a little bit more in terms of dev experience is cognitive load.

And the idea of: okay, I have to go onto something new, I haven't thought about this PR in a really long time, and now I have to come back and answer a question, versus, okay, it's in my mind, it's in my flow. How do you think those long wait times versus the short wait times affect the developer experience?

Ben Lloyd Pearson: Yeah, I think there's kind of two sides to this and the first is just the nature of the job that a software developer has. They need to have a lot of focus on the task at hand, and they also need to have a lot of knowledge present in their mind in the moment to fully understand how to approach the challenge.

And because of that, it takes time to construct that level of focus. So anything that draws you out of it, it's not like jumping off and jumping right back on. You have to rebuild that present knowledge before you can regain the focus. The other side is that this applies to PR reviews as well.

So if you're looking through a list of PRs, you probably don't know a whole lot about what's in each one until you click into it, look at the files that are changed, figure out what's going on, and again build that present knowledge to understand what you need just to review the code.

An elite engineering team is gonna make sure that when you're tasking a developer with that level of focus and that level of knowledge, it's really necessary and important, and that you're reducing the cognitive load however you can, through things like PR labels and integrations that give you details about the PR, stuff like that. It really just comes down to letting developers focus and build that present knowledge.

Dan Lines: Yeah, that context switching, there's so much that goes into it. I gotta look at someone else's code and dive into that world versus what I was doing, and vice versa.

So I'm really happy that you hit on that point.

Ben Lloyd Pearson: Yeah. And keep in mind, if you submit a PR and have to wait three or four days for it to be reviewed, you may have already moved on to something completely different, and now you have to come back and, like I said, rebuild that present knowledge.

Dan Lines: Yeah, it feels so bad to go backwards. The next section that we have here is around elite developer workflow metrics. So this is our second group. Ben, what do you have here for us around elite engineering teams and what they should be measuring? And then we'll get into what good really looks like.

Ben Lloyd Pearson: Yeah, so developer workflow metrics are measurements that look at frictions or inefficiencies within the process. There are three of these. The first is deployment frequency, and this measures how often code is released. Frequent deployments represent a stable and healthy continuous delivery pipeline.

Second, we have pull request size, which measures the lines of code that are modified in a pull request. A smaller PR is easier to review, safer to merge, and tends to correlate with an overall lower cycle time. And then third, we have rework rate, which measures the amount of changes made to code that is less than 21 days old.

So if this number is high, it could signal code churn and is a leading indicator that there might be some quality issues.

Dan Lines: Okay, great. And what does doing super elite look like here?

Ben Lloyd Pearson: Yeah. Elite teams deploy daily. The average PR is less than 105 lines of code and rework rate is somewhere under 8%.
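As a rough illustration of how those two PR-level numbers can be derived, here's a minimal sketch with the elite thresholds above noted in the comments. The per-line change record is a hypothetical shape; in practice you'd derive it from something like git blame over the diff.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class LineChange:
    """Hypothetical per-line record, e.g. derived from git blame on a diff."""
    file: str
    previous_author_date: Optional[datetime]  # None for brand-new lines

def pr_size(additions: int, deletions: int) -> int:
    """Total lines modified; elite teams average under 105 per PR."""
    return additions + deletions

def rework_rate(changes: list, now: datetime, window_days: int = 21) -> float:
    """Share of changed lines that rewrite code newer than the window.

    Elite teams keep this somewhere under 8%; a high value signals
    churn and is a leading indicator of quality issues.
    """
    if not changes:
        return 0.0
    cutoff = now - timedelta(days=window_days)
    reworked = sum(
        1 for c in changes
        if c.previous_author_date is not None and c.previous_author_date > cutoff
    )
    return reworked / len(changes)
```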

Dan Lines: Let's spend one more minute, just cuz it's my favorite one, on that PR size. So the data that you have is for the top 10%, this is still 10%, right? Top 10? Yep. Yeah, okay. So for the top 10%, the average PR size is less than 105 lines of code. If we go back to that developer experience and think about that, how do you think about the difference between, okay, I'm gonna review something that is 105 lines or less, right?

Versus getting up in there into the 400, 500, 600, 800 lines. How does each one of those feel as a reviewer?

Ben Lloyd Pearson: There's what you might theorize, but then there's the practical reality as well, and the practical reality is a massive PR is probably not getting the attention that it needs.

Yeah, you may be getting more of the just "LGTM" and go on with their day, cuz it's just too much mental burden, and they don't wanna block a release or anything like that just to deal with this massive PR. A smaller PR means you're probably sharing your code with your teammates much more frequently.

So people will tend to be more in the loop with each other about the work that's happening in the moment. Plus the obvious benefit of just not having to read a book to get through a single code review.

Dan Lines: Yeah, it reminds me of school assignments from when I was a kid. If I'm in English class and I have to read this huge book, there's no way that I'm actually doing this.

I'll date myself here. You remember Cliff Notes? Oh yeah, Cliff Notes. That's what I was thinking about. Gotta buy Cliff Notes, cuz I can't actually read this huge book. But if you're telling me, hey, and I went to school in like the late nineties, okay, go read a small internet article and tell me what you think about it?

Yeah, I can do that. So I think it's a similar thing. If you're giving me 105 lines of code or less, I'm into it. Got it. I'll read every one and give really detailed feedback. I'll block off half an hour or less of my time and still get my own work done. If you're coming at me with 400, 500, 600 lines of code, I'm giving a cursory review at best.

And I'm really thinking to myself, what the fuck is going on here? I can't comprehend all of this. So for everyone that's listening to this pod: it's not just about how quickly you can deliver work, which, I mean, it is about that, but it's not only that. It's about what experience you're giving to the developers in your engineering org, because that matters.

It's a highly competitive market right now. So yeah, man, anything else to add to that?

Ben Lloyd Pearson: Yeah. Something we didn't really talk about too much was merge frequency. Yes. We don't really have any specific recommendations on this from elite teams, but it is something that you should also consider, because if you have teams that are merging daily, it means you are keeping things small and digestible, you're being much more collaborative, and you're having a faster review cycle.

Dan Lines: Yeah. Thank you so much for bringing that up, because that's also, I know we don't have benchmark data on it, our benchmark data is on the deployment frequency, but the merge frequency, especially if you're listening and you're a larger organization, you might be saying to yourself, okay, the deployment stuff is someone else's job, or another department's, or whatever, depending on how you work. But merge frequency, getting that code merged, that's what feels good. That's what feels good for developers. So make sure that you're tracking that. As we are coming up to the end of our time, we do have another section that I'm pretty excited about, and it's titled The State of Business Alignment Metrics.

This is a section that is, I would say, new and evolving. New data is coming in all the time, and the way you think about this type of data, or the way the industry might be thinking about it, is probably gonna change every six months here. But there's two areas that we wanna talk about in terms of business alignment metrics.

One is gonna be more around resource allocation: where are you investing your time? And the second one that we're gonna talk about is planning accuracy. Sometimes people talk about capacity accuracy; that's more, okay, am I delivering on time, that type of thing. So out of these two, let's first start with resource allocation.

This is the way that we think about it at LinearB, and you can find it in our app, and it's also the way that the industry is starting to think about this. Maybe we gotta see if our producers can include this in the show notes, but ICONIQ came out with a really good report.

And there's four areas that most engineering organizations are now talking about for resource allocation, when they're talking to their board, when they're talking to executive peers, when they're thinking about where their engineering time is going. The four of them, and I'll read them out loud here, are: keeping the lights on, internal productivity, quality improvement, and new capabilities.

Let's see where we want to go here, Ben. Where do you think we should take it first?

Ben Lloyd Pearson: Yeah. I've seen this same research, and I know this is something that is relatively new to us here at LinearB in the grand scheme of things. What do you think are some common breakdowns of how those four categories manifest within engineering organizations?

Dan Lines: Yeah, absolutely. So I'll go based off the ICONIQ report. And the way that they break it down, again, is keeping the lights on, internal productivity, quality improvement, and new capabilities. And when they talk about keeping the lights on, that's anything that has to do with responding to bugs found in production.

Anything that an SRE team would have to deal with: lag in prod, the service is down, something like that. And the way that they think about that is that you just need to allocate time, because it's not really elective time. So it's 19%, for example, if you're making 50 million ARR or less, and 15% if it's more than that.

So that's 19% of allocation going to just keeping the lights on. And then these other three categories, internal productivity, quality improvement, and new capabilities, those are more elective. That's how you're having a conversation with your peers on the executive team or your board, and you're saying something like, okay, what percentage of investment do we wanna put into internal productivity?

What percentage do we wanna put into quality improvement? What percentage do we wanna put into new capabilities? And I'll just read the numbers from the report. So, for 50 million ARR or less: internal productivity averages 14% of time. And quality improvement is a little bit of a misnomer; what it really means is improving the usage of existing features that you've already released.

That's at 32% of that elective time. And for new capabilities, these are new features, new things that can bring in sales, the average is 55% of time. So that's what the data says.
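As a back-of-the-envelope way to compare your own split against those figures, here's a small sketch. The benchmark values are the sub-$50M-ARR averages Dan just read out; the part you'd have to supply yourself, and the main assumption here, is a mapping from your tracker's work items to the four categories.

```python
# Benchmark allocation for organizations at $50M ARR or less, as read
# from the report in this episode. KTLO is non-elective; the other
# three categories split the remaining elective time.
BENCHMARK = {
    "keeping_the_lights_on": 0.19,
    "internal_productivity": 0.14,
    "quality_improvement": 0.32,
    "new_capabilities": 0.55,
}

def allocation_gaps(hours: dict) -> dict:
    """Compare an actual time split to the benchmark.

    `hours` maps each category to time spent (any consistent unit),
    assumed to come from tagging work items in your tracker. KTLO is
    measured against total time, the electives against elective time.
    """
    total = sum(hours.values())
    ktlo = hours.get("keeping_the_lights_on", 0.0)
    elective = total - ktlo
    gaps = {
        "keeping_the_lights_on":
            ktlo / total - BENCHMARK["keeping_the_lights_on"],
    }
    for cat in ("internal_productivity", "quality_improvement",
                "new_capabilities"):
        gaps[cat] = hours.get(cat, 0.0) / elective - BENCHMARK[cat]
    return gaps  # positive = over-invested vs. benchmark, negative = under

# Example: the under-investment pattern Dan describes next, roughly
# 6% of elective time on internal productivity instead of ~14%.
print(allocation_gaps({
    "keeping_the_lights_on": 19,
    "internal_productivity": 5,
    "quality_improvement": 16,
    "new_capabilities": 60,
}))
```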

Ben Lloyd Pearson: Gotcha. And I imagine, after hearing that, some organizations are probably comparing how they stack up to that.

And maybe they don't really have a way of gauging that yet. But what do you think are some signs that we're overinvesting in new features, or underinvesting in things like internal productivity? What can an organization look for to identify where they might have issues?

Dan Lines: Yeah, absolutely. We're fortunate now because in the LinearB app we're reporting on this type of stuff, so I get to talk to our customers about what they're running into, and I wrote down the three situations that I hear about the most based on this data. So the first situation that I hear about is underinvestment in internal productivity.

And again, what is internal productivity? It's all the things that you would do to decrease your cycle time, to increase your deployment frequency, to increase your merge frequency, to decrease your PR size. This is investment in automation or practices. And again, what the report says is you wanna average about 14 to 15% there.

Oftentimes we're seeing five, six, seven percent there. And that's a situation where, okay, the developer experience isn't as good, and maybe we're not delivering on time. That's an area where you can then go back to your CEO or your executive peers and say, hey, we have this information now: we're a little underinvested in internal productivity.

Our dev experience is suffering. You might wonder why we're not delivering as predictably as we want to; that's why. And therefore, I'd like to have the conversation about increasing that to about 15%, which is industry standard. That's the first one, and that's a great conversation to have.

And the only way to do it is with data. Otherwise it's really hand-wavy: okay, we need to do this internal stuff that is not a new feature. That conversation doesn't go as well as when you have data. So that's the first one.

Ben Lloyd Pearson: Why are developers so angry, is what you're saying?

Dan Lines: Yeah, you gotta quantify it. It's like anytime that you're talking to an executive staff, you gotta come with data.

So that's the first one that I've seen: underinvestment in internal productivity. The second thing that I've seen, and it relates to that, is overinvestment in new capabilities. So probably from your CEO, if you're under 50 million ARR, you might be more in the startup mindset, or you just might be more in, like, we-gotta-compete-against-the-competition mode, which is all well and good.

You're gonna get that pressure on the product team to come out with new capabilities. The report says a good average is around 55% of that elective effort, but what I've seen is when you're getting that up to 65%, 70%, that's where you're going to suffer. And where are you going to suffer?

Usually it's in those features that you just released, maybe last quarter or a few weeks ago. Instead of investing in the stability of those features, the usability of those features, the user retention rate, getting another iteration going on them, getting feedback from your customers, you're already moving on to something new.

And so now you're piling on more and more feature set, but not taking care of what you've already created. So that's the second situation, where I see a little more overinvestment. And again, you can come back to your CEO and your product owner, like your VP of product, and say, hey, we're a little off-kilter on new feature investment.

I wanna bring it back a little bit into quality improvement on our existing feature set. That's a conversation that usually goes really well. And the last one, I think, is more around not accounting for the KTLO, which is the keeping the lights on. If you look at the report, it's about 19 to 20% of the time, and as you get over 50 million ARR, about 15% of the time. That's still very significant, and you need to account for it when you're doing your allocation planning.

For example, you might be committing too much to new features, or you might be committing too much to existing features, and not really accounting for all the work of the bugs and the performance issues coming into production, and cost, scalability, all these things to keep the lights on. When that gets under 10%, everything moves slower.

So those are the top three that I've seen.

Ben Lloyd Pearson: Yeah, and I don't know if people really think that much about keeping the lights on being something that takes as much as a fifth of your development time. Think about it in these terms: a developer may have to spend one day a week just making sure that everything continues to function normally, before they can even begin to focus on that other stuff.

Dan Lines: It almost feels like sometimes it goes unaccounted for. But when you don't do that, bad things pile up and everything else suffers. So again, that's the one: if you have that data, and you can get that data with LinearB, you can have that conversation. Yeah, we have to have at least 15% there so we can just operate in a healthy, sane way.

So for the last part of the show here: oftentimes what I see, once you have that resource allocation conversation, you're talking to your executive staff, you're talking to your CEO about your investment, is that usually the next question is, okay, cool, let's adjust this a little bit, let's get into better alignment on this.

They usually then will say, okay, I wanna go into that new feature development. How is project A going, or how is project B going? And that's where we get into this predictability and planning accuracy stuff. Ben, what information do you have for us from the report around planning accuracy and such?

Ben Lloyd Pearson: Yeah. So our very last metric: planning accuracy. This measures the ratio of planned work versus what is actually delivered during a sprint or iteration. A high planning accuracy is gonna signal better predictability, more stable execution, stuff like that. And from an elite engineering team perspective, they tend to have a planning accuracy above 80%.

So you know about a B minus.
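Planning accuracy is the simplest of these to compute yourself. Here's a minimal sketch, assuming you count planned versus delivered issues per iteration; weighting by story points instead would be a reasonable variation.

```python
def planning_accuracy(planned: set, delivered: set) -> float:
    """Ratio of planned work actually delivered in the iteration.

    Elite teams land above about 80%, leaving room for the unplanned
    work and unknown unknowns discussed in the episode.
    """
    if not planned:
        return 1.0
    return len(planned & delivered) / len(planned)

# Example: 10 items planned, 8 delivered, plus one unplanned item.
planned = {f"TASK-{i}" for i in range(10)}
delivered = {f"TASK-{i}" for i in range(8)} | {"TASK-99"}
print(planning_accuracy(planned, delivered))  # 0.8 -- elite territory
```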

Dan Lines: Yeah. And you know, when I first looked at the data, I thought the planning accuracy would be like a hundred percent. But it's interesting that you're saying the elite engineering teams typically have an accuracy of 80%. And when we think about planning accuracy, you just think about, okay, what did we commit to do within this iteration versus what did we actually complete?

And I think the reason why it's still just 80% is that I do think it's healthy to leave at least 20% wiggle room for learning during the iteration, or taking something that comes in from prod, something like that. Do you think the same?

Ben Lloyd Pearson: I look at it as the unknown unknowns. It's very common; just about every challenge in software development is gonna have unknown unknowns that you only begin to understand as you build out whatever it is that you're doing in that sprint.

I think it is natural, just from the nature of the profession, that there is gonna be some level of inaccuracy from a planning perspective. Technology is hard to build, and sometimes you just run into problems in unexpected places that can set back what you thought you would be able to accomplish.

Dan Lines: Ben, this has been super informative. Thanks for coming on, giving us all the data, and letting us know what elite engineering teams actually look like from a data perspective. Really happy to have you here.

Ben Lloyd Pearson: Yeah, absolutely. It's been a pleasure and we should do this again.

Dan Lines: A hundred percent.

So if you want to see your team's metrics, you can get all of this information, I think we call it the Engineering Benchmarks report, at LinearB.io. I'm sure we will put the link in the show notes. Also, if you liked this episode and there's data that you would like to see us break down in the future, for example, Ben mentioned merge frequency and we can probably get some standards around that, let us know. Leave us a review or comment on social media. We definitely appreciate all of your feedback. Thanks everyone again for listening, and we'll see you all next week.


A 3-part Summer Workshop Series for Engineering Executives

Engineering executives, register now for LinearB's 3-part workshop series designed to improve your team's business outcomes. Learn the three essential steps used by elite software engineering organizations to decrease cycle time by 47% on average and deliver better results: Benchmark, Automate, and Improve.

Don't miss this opportunity to take your team to the next level - save your seat today.