The most critical aspect is that you don't under-invest in DevEx. You should know how much you're investing over the long term in DevEx, because if you're under-investing, you'll start to see things like unexpected work creep into your sprints, and other things that directly impact your organization's ability to continuously produce new value.

Want to ship better software? Get in the fast lane.

This week, hosts Ben Lloyd Pearson and Andrew Zigler open the episode with a discussion of some of the tech topics dominating the news cycle, like Microsoft's restructuring to focus on AI, a potential TikTok ban in the U.S., and a status check on New Year's resolutions.

Then, joined by Dan Lines, they shift their focus to the 2025 Engineering Benchmarks Report by LinearB. This in-depth conversation explores the crucial link between developer experience and productivity, the impact of project management hygiene on development, and new metrics like PR maturity and project traceability.

Within this data-driven discussion, uncover key insights like:

  • how smaller PRs boost velocity and quality
  • the connection between longer cycle times and change failure rates
  • the ideal investment profiles for engineering teams

Show Notes

Transcript

Dan Lines: 0:00

I wasn't going to answer any questions on this pod, BLP, but I'm going to do it 'cause I love you. And here's what I would say. First of all, go read the report, right? It's there for you. You should know what the standards are, what the benchmarks are. But I think that 2025 is the year of productivity for engineering organizations. So if you're an engineering leader, understand the report, know where you stand against the report. And then for any of the metrics that you want to get better at, use automation, use AI. You gotta be into this stuff now to improve.

Ben Lloyd Pearson: 0:44

Welcome, everyone, to Dev Interrupted. I'm your host, Ben Lloyd Pearson, and I'm joined today by your other host, Andrew Zigler. Andrew, how's it going? Like, I just want to check in. Like, how are your New Year's resolutions going? Are you still running every day?

Andrew Zigler: 0:58

Oh yes, yes, you know, um, I went out last night, but I still woke up this morning and got my run in. It's still going pretty strong for me. What about you, Ben? How are your resolutions? What were they again? What were yours?

Ben Lloyd Pearson: 1:10

Yeah, you know, I don't really do resolutions, except I did resolve to not get sick too much this year, and that's already, like, gone out the window. So, like, unfortunately, with all the sickness going around, it's, like, unavoidable.

Andrew Zigler: 1:23

Oh yeah, I'm always sick this time of year. It's pretty much unavoidable, so you set yourself up for failure with that one.

Ben Lloyd Pearson: 1:30

Yeah, yeah, exactly. So, you know, before we get into our interview today, I just want to talk a little bit about some stuff that we're seeing happen in the space and how that relates to some of the stuff that we cover here on Dev Interrupted. So, Andrew, what's on your mind right now? What stories have caught your attention?

Andrew Zigler: 1:48

Yeah, something that has been top of mind for me, especially in the last week, is this new memo from Microsoft about its internal changes, where it's combining its DevDiv and AI platform teams internally, and they're taking the entire developer division and making it focused on AI. It's signaling a huge change within Microsoft, and I think this is something that really resonates with large orgs and small ones too, all across software engineering right now: we recognize there's a lot of change that has to happen, and it has to happen internally with the tools that developers are using. So the most standout quote for me from the memo on Monday was that they needed to compress 30 years of change into three years. So they're trying to move lightning fast and make a whole lot of innovation happen in a very small amount of time. The idea of compressing that is quite fascinating to me.

Ben Lloyd Pearson: 2:43

I mean, that is a very stark claim. You know, of course, every leader is going to claim wild things when they take over a new organization. But for me, it really resonates with the idea that AI is not going to be this tool that replaces you. Like, if your job was to weave together threads, then, sure, the loom will replace you. But the vast majority of people can provide far more value than that. So I feel like what we should all be focused on is how we can be more productive, more efficient, and do better things with AI, rather than, you know, listening to all these LinkedIn influencers out there improperly quoting people like Mark Zuckerberg, which I said I wasn't going to talk about today, but...

Andrew Zigler: 3:27

Ben, you got to let it go. You gotta let it go, Ben.

Ben Lloyd Pearson: 3:30

Yeah, so you've got all these influencers saying that, you know, AI is going to take your job, and either they're trying to sell you something or they're probably misinterpreting the words of somebody that's more important than they are. So, you know, I really like this story, because I think it is probably more emblematic of what the future of software engineering is going to look like.

Andrew Zigler: 3:52

I totally agree, and I felt like the memo was very positive in that regard, and the idea of realigning developers around the tasks they need to be focused on, now that they can use these kinds of tools to automate away other ones. And that was really the unlock: they weren't just saying that they needed to move 30 years in three years, they were saying that they could move 30 years in three years. It's an opportunity, and it's an opportunity that involves everybody in the process. So what about you, Ben? Any items top of mind for you?

Ben Lloyd Pearson: 4:21

Yeah. So, you know, we're recording this episode on Friday, January 17th, just to date this for everyone out there. In two short days, there's set to be a potential ban of TikTok here in the U.S. Of course, by the time this comes out, it'll either have happened or not, but it sounds like it's going to happen at this point, unless there's some sort of last-minute sale of the company. But, you know, from my perspective: bring on the ban? Like, I'm actually kind of okay with this, partly because it's not actually even a ban. If you actually look into it, really all it is is forcing divestiture from a company that's owned by a company that is very closely associated with the Chinese Communist Party. The thing is, when you look at how China treats social media, they've banned dozens and potentially even hundreds of American apps within their borders. And over there, it's an actual ban. Like, you can be arrested for using some of these apps. But I think what was really missing from this entire conversation, and this is what I want to get to, because whether or not an app gets banned is not going to change the world, is that there really are deeper data privacy issues in our society, right? Like, we are concerned when a foreign, potentially adversarial government might be misusing personal data of American citizens, but we don't seem to take that same rigorous approach to privately owned or publicly owned American companies. So really, what I think we're missing is that deeper conversation about how we actually implement data protection laws that protect all of us and prevent organizations from misusing our personal data and trying to use it to manipulate us. But we'll find out, you know, before this episode comes out, I guess.

Andrew Zigler: 6:13

Right, I'm sure we'll tell our future selves, or our future selves will tell us now. And I agree about the timely topic. When I think of the TikTok ban, it reminds me of videos I've seen of people in Congress asking questions about how technology works while in the process of creating very large, sweeping rules that will impact the usage of that technology, the development of it, the ownership of it. And these are really impactful decisions. What always stands out to me, and of course these clips go viral because they capture the silliest of moments, is the lack of understanding and the lack of true grasp of how technology is used by people. And when I hear of something like this, you know, "TikTok ban" is a little more catchy than "TikTok divestiture," and I guess that's why they're calling it that. But it just seems largely unenforceable, in the way that many other bills related to technology have gone through that same motion before it. And what it signals to me every time is just the lack of technical literacy in our world, especially among the people that make the decisions that shape the world we live in. I think it's on all of us to do better to understand how technology is impacting the world we live in.

Ben Lloyd Pearson: 7:29

Yeah, and man, it always comes back to the same topic, it seems like, but I believe it was Facebook that went in front of the U.S. Congress and had a senator grill them with very basic questions about how they make money, and it quickly devolved into, like, a basic, fundamental conversation about how the internet functions. You know, how can you expect good regulations to be applied consistently in that environment?

Andrew Zigler: 7:56

It reminds me of just very recently, when we asked, like, would you rather have to teach tech debt to your grandma? It's kind of that, you know: you need to explain these concepts to grandma, because grandma might be in Congress.

Ben Lloyd Pearson: 8:08

Yeah, well, man, we need a GPT that educates our congressional representatives on technology. So, cool. Well, you know, those are the stories that we're covering today. But I want to go ahead and hop forward to the guest that we brought on today. In fact, we actually don't really even have a guest, because it's just me and Dan talking about something that's really important to me, which is the 2025 Engineering Benchmarks Report from LinearB. And in this conversation, we're going to talk about some of the biggest takeaways: cycle time, PR maturity, PR size, planning accuracy. We've got lots of really great metrics that we're going to dive into, and then we'll share a bunch of new insights that we've found this year. So after the break, stick around, 'cause you don't want to miss that conversation.

Andrew Zigler: 9:03

On January 29th, join LinearB and Luca Rossi of Refactoring for a workshop to learn the three key traits shared by highly successful engineering teams. This workshop gets down to brass tacks with actionable examples, so check the show notes to RSVP your spot for January 29th, and we'll see you there.

Dan Lines: 9:24

I'm your host, Dan Lines, LinearB COO. And today's episode will focus on the 2025 Engineering Benchmarks Report from LinearB. We're going to take a look at what the report found this year, what the most recent data says about engineering teams, and what it says about the health of those teams. So, like, a lot of amazing information in this report. And to help me walk through all this data, I'll be putting Ben Lloyd Pearson, BLP, LinearB's Director of Developer Experience, in the hot seat and making him answer all the questions while I get to just relax and chill as a host. So Ben, welcome to the show.

Ben Lloyd Pearson: 10:09

Yeah, thanks. Glad to be here as a guest, I guess, now, instead of the host.

Dan Lines: 10:15

Yeah, I'm just going to chill over a drink, maybe have a beer, ask you a few questions. But no, in all seriousness, I think this 2025 report is super interesting. So, you know, thanks so much for walking us through it. To kick us off, can you maybe just give us, like, a general overview of this year's report?

Ben Lloyd Pearson: 10:35

Yeah, if I had to pick, like, one focal point of this year's report, it's the relationship between developer experience and developer productivity. With DevEx, I'm specifically referring to your team's overall morale and engagement with the organization, their tools, the processes, the environments. And then on the dev productivity side, it's a matter of how effectively and efficiently developers can complete their meaningful tasks with minimal waste. So we're hearing more and more from engineering leaders that dev productivity is a major focus area, but you can't actually improve productivity without making DevEx a significant part of that strategy. So if I had one key takeaway from the report, it's that you should be measuring your investments into DevEx to ensure that you're minimizing long-term risks. If you're always over-investing in new features and feature enhancements, and you're ignoring the keeping-the-lights-on, maintenance, and DevEx side of the equation, you may get some short-term gains, but over the long term, that's certainly not going to work out for you.

Dan Lines: 11:39

Yeah, you know what I think is funny is that developer productivity and developer experience relate to each other, right? They're very, very highly correlated, I think. And maybe in 2024 or 2023, I would say DevEx was hot, or maybe, hey, let's make sure that we have an amazing experience for our developers, which is important. But I think for business leaders, even developers themselves, it's like, I want to be productive. And hopefully we'll get into some of this stuff like PR maturity and merge rates and all that kind of stuff, but if I know I'm highly, highly productive, I'm highly likely to be having a good experience.

Ben Lloyd Pearson: 12:28

Yeah. And I think in particular, it's knowing how DevEx impacts your overall productivity. Like, when the rest of the business comes to you and says, hey, we need to ship all of these new features on this scheduled timeline, and you have concerns that maybe you don't have the resources you need to deliver on that, you need to be able to have that conversation where you can push back, to actually make sure that your engineering teams are getting the support they need while you're still delivering the value that your business is expecting.

Dan Lines: 13:00

So this year, I think this is at least the second year, maybe even the third year, of the report. And I'm sure we'll attach the report to the show notes and all of that, and there's some webinars that we could point you to and that type of thing. But is there anything new this year about the report versus previous years?

Ben Lloyd Pearson: 13:19

Every year we look for opportunities to add new things to the report, so we've added a few metrics. The first two that I'll cover give you more of, like, a granular analysis of code review bottlenecks. The first one is approved time. This is the amount of time from the first comment on a PR to when it's actually approved. And then merge time, which is the time from the first approval to when the PR is merged. There's also a new metric to help measure upstream conditions that create review cycles, and we call this PR maturity. This is probably one of the more unique additions we've made this time, and it's the ratio of changes added to a branch after a PR is opened. So if you see a ton of commits coming into a PR after it's been moved out of the draft stage and is now actually in the ready-for-review stage: how many more changes after that do you actually see on the PR? I think our goal here really is both to give you more granular analysis and also to give us more indicators that can help us comprehensively understand productivity and experience.
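
To make those definitions concrete, here is a minimal sketch of how you might compute the three metrics from a PR's timeline. The record fields and the exact maturity formula are illustrative assumptions for this example, not LinearB's actual implementation.

    from datetime import datetime

    # Hypothetical PR record; the field names are assumptions, not LinearB's schema.
    pr = {
        "first_comment_at": datetime(2025, 1, 6, 10, 30),
        "approved_at": datetime(2025, 1, 7, 9, 0),
        "merged_at": datetime(2025, 1, 7, 15, 0),
        "lines_changed_at_open": 180,   # diff size when marked ready for review
        "lines_changed_at_merge": 240,  # diff size when merged
    }

    # Approved time: first review comment until approval.
    approved_time = pr["approved_at"] - pr["first_comment_at"]

    # Merge time: first approval until merge.
    merge_time = pr["merged_at"] - pr["approved_at"]

    # PR maturity, one plausible formulation: the share of the final diff that
    # was already present when the PR was opened. Changes pushed after opening
    # lower the score, so a well-prepared PR scores close to 1.0.
    pr_maturity = pr["lines_changed_at_open"] / pr["lines_changed_at_merge"]

    print(approved_time, merge_time, round(pr_maturity, 2))

A PR that arrives well prepared and picks up few review-driven commits would score near 1.0 here, while a PR that doubles in size during review would score 0.5.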

Dan Lines: 14:27

Yeah, I like that they added these, the approved time and the merge time. Obviously they're part of cycle time, but every team works a little bit differently. Sometimes you might want to focus more on approved time or more on merge time. Like, for example, let's say, okay, merge time, the time from first approval to when the PR actually gets merged. Maybe you have some big process, or some slow process, where, hey, the PR is actually done being reviewed, but it takes a long time to get merged for some reason, I don't know, maybe integration failures or something like that. So I think that granularity probably reflects being flexible about how teams are actually working. And then the other one that you mentioned, PR maturity, that one is pretty cool. The ratio of changes added to a branch after a PR is open. So that's kind of looking at, like, how many times do I need to go back and forth? And I'm hoping, as we move forward here, you can see so much is happening with AI reviews and, like, bot-based reviews, I'm hoping PRs are going to get more mature. I guess it seems like I'm talking about my kids, like, oh, they're great, these PRs, they're growing up, they're getting more mature now with AI. Like, let's say, okay, I'm a developer, I'm developing, I open up a PR. I can have all of this AI stuff run on it before I'm actually even looking for a review. So it's like I can kind of get it, with my, like, bot buddy, to a better state before I'm wasting other people's time. So I'm pretty interested in how that metric goes over time.

Ben Lloyd Pearson: 16:08

Yeah, and we'll actually get into that, I have some insights about PR maturity later, too. That's what really excites me about having it: we were actually able to start drawing correlations with it now as well.

Dan Lines: 16:20

Okay, so is there anything else new in the report that you would like to highlight to us?

Ben Lloyd Pearson: 16:26

This report also includes a whole bunch of new metrics related to project management hygiene. And I know this sounds so boring, but it's actually pretty fascinating, or it gives us some very fascinating insights about how engineering teams operate. So we have a few metrics that serve as proxy metrics for project traceability. These are things like issues linked to a parent: the percent of tasks that are linked to a specific parent, such as an epic or a story, trying to help you make sure that things are actually being rolled up into the overall initiative within your organization. We also have benchmarks for branches linked to PM issues. This is the percent of code branches that contain a reference to a specific PM task. So again, helping you roll that up, going all the way from the actual work that's being done in your code repositories and making sure that's connected all the way up to an initiative that's happening within your organization. And what we're trying to do is help teams monitor their development progress and ensure that actual development work is being tied to defined tasks within the organization.
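
As a rough illustration of the branches-linked-to-PM-issues idea, here is a sketch that scans branch names for a Jira-style ticket key. The key pattern and the branch names are assumptions for the example, not how LinearB actually detects the link.

    import re

    # Jira-style keys like "PROJ-123"; the pattern is an illustrative assumption.
    TICKET_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

    branches = [                       # hypothetical branch names
        "PROJ-482-add-retry-logic",
        "fix-login-typo",
        "PROJ-519/benchmarks-report",
    ]

    linked = [b for b in branches if TICKET_KEY.search(b)]
    print(f"{100 * len(linked) / len(branches):.0f}% of branches reference a PM issue")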

Dan Lines: 17:36

Yeah, I think that's pretty cool. It's actually nice to see how this benchmark report has kind of evolved. Similar to PR maturity, I think this report is a little bit more mature, because the things that you're talking about, yeah, they might sound boring on the surface, but I don't think they're boring at all. Let's give an example. Let's say that we're giving you a benchmark for cycle time, or even those PR merge metrics or whatever, and let's say that you're really, really good at them and you're elite. So you read the report, you find out you're in the elite category. It doesn't mean that you're actually delivering, like, good business value or good project value. You could be working on something that's completely irrelevant to the business. And I think with some of this hygiene stuff, we're kind of evolving as an engineering organization to say, yeah, we're certainly not just, like, code monkeys. We're not just a machine that goes out and does stuff super, super fast. We're highly essential to the business. So let's have some metrics that show that we're delivering great value, that we can measure the value, that type of stuff. So yeah, I'm super pumped that this got added into the report.

Ben Lloyd Pearson: 18:49

And then we have two other new measures, sort of related to this, for planning and ownership of work items. One of our new benchmarks is In Progress Issues with Estimation. That's the proportion of ongoing PM tasks that have a time or effort estimate assigned. And then In Progress Issues with Assignees is another new benchmark: the percent of active PM tasks that have a designated team member responsible for completing them. So again, what we're trying to help solve here is ensuring more predictability and accountability, and measuring whether your organization is able to effectively manage workloads. Because, you think about something like time estimations, you know, not every organization t-shirt sizes their work, but it can be very helpful to know when you have large proportions of work that are not getting time estimated, because that could indicate, for example, that there may be some predictability risks coming down the pipeline.
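
In the same spirit, here is a tiny sketch of those two planning-hygiene percentages computed over a hypothetical in-progress task list; the field names are, again, illustrative assumptions.

    # Hypothetical in-progress PM tasks.
    tasks = [
        {"estimate": 3, "assignee": "maya"},
        {"estimate": None, "assignee": "sam"},
        {"estimate": 5, "assignee": None},
        {"estimate": None, "assignee": None},
    ]

    with_estimate = sum(t["estimate"] is not None for t in tasks)
    with_assignee = sum(t["assignee"] is not None for t in tasks)

    print(f"in-progress issues with estimation: {100 * with_estimate / len(tasks):.0f}%")
    print(f"in-progress issues with assignees:  {100 * with_assignee / len(tasks):.0f}%")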

Dan Lines: 19:49

I love it. I'll double down on what you said. For, like, a pro engineering team, a highly elite engineering team, predictability matters a lot. And some of these metrics, I can't wait to see what the benchmarks actually are here, but they're really saying, yeah, I'm an elite efficiency and quality organization, but I'm also elite at understanding workloads, therefore understanding developer experience, therefore delivering on time to the business. Like, that's how you be the best, I would say, professional engineering organization. I'm going to move us on now, because we have an insights and takeaways section. So yeah, Ben, great stuff giving us the overview, really interesting things that have been added. Love that they have all of the project delivery and predictability stuff now in the report, so thanks for running that all down with the audience. Let's close the episode on the insights and takeaways engineering leaders can gain from the report.

Ben Lloyd Pearson: 20:51

Yeah, so the biggest one, which was actually very surprising to me, is related to cycle time. Specifically, longer cycle times correlate with quality risks, particularly around change failure rate. We've long known that there's this correlation between cycle time and efficiency. It's an obvious correlation, but we now have data to validate it. It was actually pretty surprising to see the stark difference in metrics like CFR, change failure rate, for teams that had faster cycles versus slower cycles. In fact, if I recall, I believe organizations who fell into the elite category for cycle time were, like, half as likely to fall into the needs-focus area for CFR. Additionally, this carried over into deploy time as well: a longer deploy time also correlates with a higher CFR.

Dan Lines: 21:47

No, that's really cool. It's like the rich get richer, right? If you're really efficient, then you're also kind of doubling down on quality. I would have to guess it's probably because, if you're the type of organization that can have an elite cycle time, it probably means that you have the right automations in place. You probably also have a lot of automated testing, a lot of integration testing. Maybe you've even started playing the AI game, the AI review game, all of that, maybe even AI test creation. And so I could see, without even thinking about the data, if you think about some of the best engineering teams in the world, you think of, like, Netflix, you think Spotify, yeah, they probably move super fast and their quality is super high, because that ecosystem, or however you want to describe it, surrounding the developer is probably really tight. And the other thing is, you know this, Ben, from being a developer: as code is sitting there, it almost gets stale in a sense. Or, like, things are moving underneath it. It's like only bad things can happen the longer code is not merged.

Ben Lloyd Pearson: 22:58

And picking it back up after it's sat for a week can be such a huge challenge. So, keeping the theme on cycle time: now that we know cycle time is a thing to focus on if you want both quality and efficiency, let's talk about some of the things that impact cycle time. One of them is a thing that we've thought was the case for quite a while, but it's nice to just have data that validates our theories, and that is that PR size drives velocity. Smaller PRs reduce pickup time and reduce merge time, and large PRs do sort of the opposite, and typically even require more review modifications. We found things like: larger PRs wait longer for reviews and have a longer overall cycle time; PRs that wait longer for reviews also wait longer to be merged after approval; and larger PRs are more likely to be heavily modified during the review process. So there's almost this cascading negative effect that is created when your PR size gets too big.

Dan Lines: 24:01

Yeah, again, totally makes sense. With all the companies that I work with, I always recommend trying to get your PR size into the elite benchmark, because you get that double-down effect. One, you're going to be more efficient, you're going to be faster. Two, your quality is going to increase. So any time that you can focus on a metric that does that, you get the two-for-one, right? What else do you got for us here?

Ben Lloyd Pearson: 24:30

Yeah, so I mentioned PR maturity, and we found some insights with that. Specifically, it correlates directly with efficiency, which is probably not super surprising to our audience, but higher PR maturity reduces rework and speeds up merges, so it really encourages better-prepared PR submissions. Higher PR maturity also correlates with higher merge frequency and shorter pickup times. You definitely want to foster a culture where developers have all of the insights and all of the expectations set before they've sent their PR off for review, because the more you have to revisit it after it's been opened, the more it just creates complexity issues for your organization.

Dan Lines: 25:13

Yeah, I mean, any time that you're asking someone else to stop what they're doing and, like, come help with the PR, and then you review it, make the PR change, and then they come help again, it's kind of that opposite double-down. It's like the double-down effect in a negative way, because you're affecting multiple people. Anything else in the... okay, go for it.

Ben Lloyd Pearson: 25:33

Yeah, I've just got one last one, and it's a bit of an example of how not to do it. It was an insight that we didn't really expect to encounter: we found that teams with poor project management hygiene actually move faster than those with good hygiene. If you aren't referencing your PRs against a Jira ticket, you actually probably have a faster cycle time than companies that do that consistently. However, I would stress that there are probably some issues you're introducing to the organization if you take this approach, related to, like, misaligned goals, technical debt, and lack of visibility into where product work is actually happening. So, you know, I'm not going to go out and make the recommendation that we all need to stop updating our Jira statuses. But I will make one recommendation: automate it.
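
As one hedged sketch of what "automate it" could look like, here is a script that transitions a linked Jira ticket when a PR merges. The ticket-key extraction mirrors the branch-naming convention from the earlier example, and the site URL, credentials, and transition id are placeholder assumptions; the endpoint shape follows Jira's public REST API, but check your own instance's workflow before relying on anything like this.

    import re
    import requests  # assumes the third-party 'requests' package is installed

    TICKET_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

    def on_pr_merged(branch_name: str) -> None:
        """Hypothetical merge hook: transition the linked Jira ticket."""
        match = TICKET_KEY.search(branch_name)
        if match is None:
            return  # no linked ticket, nothing to update
        key = match.group(0)
        # Placeholder site, auth, and transition id; "31" is just an assumed
        # id for the "Done" transition in this sketch.
        requests.post(
            f"https://example.atlassian.net/rest/api/3/issue/{key}/transitions",
            json={"transition": {"id": "31"}},
            auth=("bot@example.com", "API_TOKEN"),
            timeout=10,
        )

    on_pr_merged("PROJ-482-add-retry-logic")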

Dan Lines: 26:30

Yeah, no, I love that. Automate it, for sure. All right, what about the investment profile section of the report? I talk to a lot of engineering leaders about this who want to invest in developer experience and demonstrate that type of spend to the business. Like, what do you got cooking for us there?

Ben Lloyd Pearson: 26:49

Yeah, so this is actually one of my favorite aspects of this report, and I'm really happy that it returns this year. You know, the name makes it sound very dry and high level, but we hear from a lot of organizations about how hard it is to justify putting investments into DevEx, and I legitimately believe that measuring your investment profile is how you do it. So our benchmark found that, on average, and this is just a simple average of, I think, 3,000 organizations that were a part of this, organizations spend about 55 percent of their time on new value. That's new features, roadmap work, new platform or partner applications, things like that. About 20 percent goes to feature enhancements, so focusing on performance, reliability, and improving quality. About 15 percent is spent on DevEx; in this category we include things like refactoring, test automation, dev tooling, and reducing the keeping-the-lights-on bucket. And then the last 10 percent is for keeping the lights on, so that's maintenance and services. I think what's really important to take away from this specific benchmark is that it's really just a recommendation, and every organization should adjust it depending on their situation. And really, what I want to stress is that the most critical aspect is that you don't under-invest in DevEx. You should know how much you're investing over the long term in DevEx, because if you're under-investing, you'll start to see things like unexpected work creep into your sprints, and other things that directly impact your organization's ability to continuously produce new value. So I encourage our audience to check out the webinar that we ran back in December of last year, where we brought in some experts from CircleCI and MongoDB to cover this, because we really did get to dive into it a lot more. Dan, I actually have one question for you now. After you've heard all of this summary, put yourself in the shoes of our audience, engineering leaders out there making their plans for 2025. You know, we're already into the year a little bit, but it's still not too late to define your plans. What do you think our audience should take away from this report?
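
To ground those percentages, here is a minimal sketch that tallies a team's own investment profile from labeled work items and flags buckets that fall well below the report's averages. The labels, counts, and the five-point tolerance are all hypothetical choices for the example.

    from collections import Counter

    # Average split from the report, as percentages of time.
    BENCHMARK = {"new value": 55, "enhancements": 20, "devex": 15, "ktlo": 10}

    # A hypothetical sprint's work items, labeled by investment bucket.
    items = ["new value"] * 9 + ["enhancements"] * 3 + ["devex"] * 1 + ["ktlo"] * 3

    counts = Counter(items)
    for bucket, target in BENCHMARK.items():
        actual = 100 * counts[bucket] / len(items)
        flag = "  <- under-invested?" if actual < target - 5 else ""
        print(f"{bucket:13s} actual {actual:5.1f}%  benchmark {target}%{flag}")

Run against this sample sprint, the DevEx bucket lands around 6 percent, well under the 15 percent average, which is exactly the long-term risk Ben is describing.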

Dan Lines: 29:14

I wasn't going to answer any questions on this pod, BLP, but I'm going to do it 'cause I love you. And here's what I would say. First of all, go read the report, right? It's there for you. You should know what the standards are, what the benchmarks are. But I think that 2025 is the year of productivity for engineering organizations. So if you're an engineering leader, understand the report, know where you stand against the report. And then for any of the metrics that you want to get better at, use automation, use AI. You gotta be into this stuff now to improve. The best way to improve cycle time is through automation. The best way to improve PR maturity is through automation. Make sure that you're deploying that within your organization, focus on productivity, and I think you'll be in great shape.

Ben Lloyd Pearson: 30:02

Awesome.

Dan Lines: 30:03

Alright everyone, so, first of all, thanks so much, Ben, your overview was super insightful. For anyone listening, if you or your team want to see where your team's metrics stand, be sure to visit the Engineering Benchmarks Report at LinearB.io. Of course, we'll put the link in the show notes. If you want to listen to some other great engineering leaders break down everything in the report, you can watch the webinar that we've mentioned a few times here, where we have Rob, CTO at CircleCI, and Tara, VP of Developer Productivity at Mongo, sharing their thoughts and insights. There are some great anecdotes there, so be sure to check that out. We'll also put a link to that in the show notes. Thanks again, Ben, and thanks for listening, everyone. We'll see you next week.