What does this year’s Accelerate State of DevOps Report 2023 mean for your team?

LinearB & DORA have officially joined forces. On this week’s episode of Dev Interrupted, co-host Conor Bronsdon interviews Nathen Harvey, Head of Google Cloud’s DORA team. 

With data gathered from over 36,000 global professionals, this year’s report investigated how top DevOps performers integrate technical, process, and cultural capabilities into their practices for success.

Listen to learn how your team can focus on three core outcomes of DevOps: enhancing organizational value, boosting team innovation and collaboration, and promoting team member well-being and productivity.

Episode Highlights:

  • (3:30) What's new in this year's report?
  • (12:00) Key insights & takeaways
  • (16:30) Approaching code reviews
  • (21:30) Biggest surprises in the report
  • (29:30) Benchmarking your team
  • (31:45) How should teams start improving?
  • (36:30) DevOps trends & forecasting

Episode Transcript:

(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)

Conor Bronsdon: Hey everyone. Welcome to Dev Interrupted. I'm your host, Conor Bronsdon, and I'm delighted to be joined by Nathen Harvey, Head of Google's DORA team. Nathen, welcome to the show.

Nathen Harvey: Hey, thanks so much for having me. I'm super stoked to be here.

Conor Bronsdon: It's going to be really fun to talk to you because you just released this year's 2023 Accelerate State of DevOps report with a ton of new data, 3.6 times as many people were surveyed, you said. It's really exciting to see that continued research grow.

Nathen Harvey: Yeah, yeah, it's awesome. And when you talk about those 3.6 times more respondents that we have, there's a couple of things that led to that, I think. One, it's sponsors like LinearB that help us come together and get the word out about the survey.

Conor Bronsdon: We're proud to partner with you on it. It's the coolest initiative.

Nathen Harvey: Yeah, we love that. Another thing I think that's, uh, it's kind of indicative of the momentum that DORA is seeing over the years. Uh, you're seeing more respondents. We're seeing more vendors like yourselves putting together great developer productivity dashboards where you can get access to your Dora metrics very easily.

I think the other thing from a surveying perspective, and this is a shout out to our researchers: we looked at what happened last year, and we had a lot of people start the survey but fewer people finish it. And so we had to focus down. We had a hypothesis that maybe the reason they didn't finish last year was because the survey asked too many questions.

It was too long. And so we set a hard limit for ourselves: we want to make sure that a typical respondent can get through the survey this year in 15 minutes or less. And what we saw was that the rate of, we'll call them abandoned shopping carts, went down dramatically this year because of the shorter survey.

So in this case, our hypothesis was borne out. It was pretty awesome.

Conor Bronsdon: That's fantastic to see. How much did you have to shrink the survey to do that?

Nathen Harvey: Yeah. So each year we look at different capabilities. Um, and so we have to be very particular about which capabilities we look at. And then we have to figure out what are the best questions to ask about that capability.

We want to ask enough questions, and we have to make sure a question isn't asking you two things at once. Do you like the color red or yellow? Or what if we asked, do you like the color red and yellow? Typically, the way we pose the questions is really as statements, and we give you a Likert scale, right?

So: I like the colors red and yellow, and you can place yourself on a scale from strongly disagree to strongly agree. Well, what if you like red, but not yellow? How do you answer that question? So we have to be very careful in our survey design. How do we ask the questions? And then we have to make sure that we ask enough questions that we're really getting to the meat of the thing, right?

Continuous integration is a good example of that. You can't just ask, my team practices continuous integration, strongly disagree to strongly agree, because what does continuous integration mean to you? What does it mean to me? The definition piece is challenging. Yeah. So instead we ask about the characteristics of that term.

Code commits result in an automated build of your system: do you strongly disagree or strongly agree with that? That's getting more to the heart of the capability itself. So we spend a lot of time on survey design. To get it down to 15 minutes, the first thing we really have to do is be strict about what sort of areas we want to investigate.
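(Editor's note: to make the survey-design point concrete, here is a minimal sketch of how answers to characteristic-level Likert items for a capability like continuous integration might be rolled up into one score. The item wording, the 1-to-5 scale, and the simple averaging are illustrative assumptions only, not DORA's actual instrument or scoring method.)

```python
from statistics import mean

# Hypothetical Likert items describing characteristics of continuous integration,
# each scored 1 (strongly disagree) to 5 (strongly agree) by one respondent.
ci_items = {
    "Code commits result in an automated build of the software": 4,
    "Automated tests run on every commit": 3,
    "Developers get build and test feedback within minutes": 2,
}

def capability_score(responses):
    """Collapse the per-item answers into a single 1-5 capability score."""
    return mean(responses.values())

print(f"continuous integration score: {capability_score(ci_items):.1f} / 5")
```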

A great example of this, in 2022, we looked deeply into software supply chain security. Great news, we got a lot of data last year, and we didn't investigate it again this year. We have insights from that, those insights we think are pretty solid, we can carry those forward. But that was maybe a thing that we could carve out of this year's survey, to keep that time short, get more data.

And of course with more data, we can explore more of the pathways and the connectivity across those capabilities.

Conor Bronsdon: Yeah, because the DORA report is really our report on the state of engineering teams, understanding how to create healthy, high performing engineering teams. And so I'd love to get into, what did this year's report encompass?

Nathen Harvey: Yeah, so this report encompassed a lot of things that you know and love when you talk about Dora. For example, software delivery performance. This is something that we've looked at year in and year out. We actually made some slight changes to how we asked about software delivery performance this year. I won't go into all of the details.

I'll just point you to the appendix of this year's report to get some of those nuanced details, for those of you that really, really care. But one of the things I will mention just quickly is listening to our community and listening to all of the practitioners in this space. We were hearing a lot of concerns about our use of MTTR as one of our metrics.

And there's a lot of concern about whether that's a valuable metric. Can you actually learn something from MTTR? So instead of asking generally about how long it takes you to recover from a failure, this year we really went back and focused that question on software delivery performance. Instead of asking how long it takes to recover from a failure, we asked: when a deployment causes an issue, how long does it take you to recover from that deployment?

And so we don't talk about time to restore anymore. We now talk about change failure recovery time, or failed deployment recovery time, as we call it. So again, it's kind of nuanced. I said I wouldn't tell you about it and point you to the appendix, and now I've just read you half of the appendix from memory.
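(Editor's note: as a rough illustration of the renamed metric, this sketch computes a failed deployment recovery time from hypothetical deployment records. The field names and timestamps are invented; DORA itself gathers this measure through the survey rather than from system data.)

```python
from datetime import datetime
from statistics import median

FMT = "%Y-%m-%dT%H:%M"

# Hypothetical deployments that caused an issue, with the time service was restored.
failed_deployments = [
    {"deployed_at": "2023-09-04T10:00", "recovered_at": "2023-09-04T11:30"},
    {"deployed_at": "2023-09-18T16:20", "recovered_at": "2023-09-19T09:05"},
]

def recovery_hours(record):
    """Hours from the bad deployment going out to recovery from that deployment."""
    deployed = datetime.strptime(record["deployed_at"], FMT)
    recovered = datetime.strptime(record["recovered_at"], FMT)
    return (recovered - deployed).total_seconds() / 3600

times = [recovery_hours(d) for d in failed_deployments]
print(f"median failed deployment recovery time: {median(times):.1f} hours")
```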

Conor Bronsdon: Bring it in. I think this nuance is really important for the audience to understand the depth of this reporting and the evolution that's happened in the survey over time, in what you're studying and how you approach it. And I mean, the change from last year to try to increase the survey completion rate, I think, is fantastic.

Nathen Harvey: Yeah. So back to your original question, though: software delivery performance. Of course, we looked at that. We look at organizational performance. We always look at that because we really want to show, like you said, that this is about how you build a technology team that is able to further the business goals and help the business succeed.

Because look, I love playing with new toys as much as anyone else, and new technologies are awesome to get my hands on, but we do it for a reason. We do it to help our businesses succeed. We also looked at team performance: is your team able to be healthy, productive, and innovative?

We also looked at the well-being of people on your team. Well-being is so, so important. We talk about systems all the time, and we have to remember that we, you and I, are part of the system. The humans are a big, big part of this. And so we looked at things like burnout, and we looked at things like Westrum's organizational typology.

But we also looked at things this year, like how is work distributed across a team? Is there an imbalance for who has to do which pieces of work or who chooses to do which pieces of work on a team? A couple of other things in that space as well.

Conor Bronsdon: This is fascinating, and I'll say like, Dora's research has really formed the foundation for so much of modern software engineering's approach to metrics and analytics.

Like, I mean, LinearB's own software engineering benchmarks report, where we take a more quantitative look at it, is really based on the research DORA does and saying, okay, how can we expand on this and add more context? Because I think that's the really crucial thing, to your point: creating psychological safety for teams so they can say, okay, our leadership has the context.

We have the context. Let's go figure out how we can perform better and work more on the things that we care about and deliver the things we care about in a reliable way. And I think that's what DORA is really all about, is creating great teams.

Nathen Harvey: Absolutely. It is about creating those great teams that can really work well together and succeed.

And I love the fact that you bring in that quantitative research as well, because the truth is, you know, DORA is survey based. We are asking people and gathering data that way. There are benefits and drawbacks to both. The system data that you get and the survey data that you get, they both have different strengths and weaknesses.

I find that in practice, the best way for me to get signal when I'm working with a team is to have a conversation. Totally. And when you start with those metrics, those DORA metrics, those software delivery performance metrics, that's a great place to start to gather some signal.

And, as Dr. Nicole Forsgren said yesterday at the DORA Community Summit, it's signal plus action. So how do we act on that signal and put improvements in place?

Conor Bronsdon: What's the friction point we've identified that now we can say, okay, we need to try to improve this piece. Maybe we need automation in this tooling.

Maybe we just simply need to talk to our team and improve something here by having a conversation or refactor our process. A lot of options.

Nathen Harvey: Absolutely. And I'll give you a good example that I ran into over the past year. When we looked at those four software delivery metrics for a team, one of the things that we saw was their change, uh, sorry, their deployment frequency was about once every three days.

Pretty solid. You're deploying a couple of times a week. But then when we looked at their change lead time, so how long it takes for a change to actually get deployed, it was more along the lines of 15 to 18 days, which is really fascinating, right? How can you be in a place where you have this regular cadence of deployments, but it takes your changes three to five times longer to actually make it into production?

And so, unfortunately, those four metrics don't tell you the answer to how does that happen, but they did give you a good signal, or that team a good signal. Where, where is the friction? How does this happen? And then, of course, you can imagine, like, how did they get into that state? The truth is I have no idea, but I'll tell you what I imagined.

I imagined that at one point they said, Hey, everything's going bad, stop all deployments. And so they did that. They froze all deployments. And then eventually they're like, Okay, things are better, we can let deployments happen again. But they were smart and said, Oh, but we can't just release everything that's backed up.

So they started this cadence: we deploy every three days, and we deploy a smaller change set. That backlog that was created just continued to persist over time, and potentially grew over time. But it's information and experiences like that that you cannot get from four measures. But the conversation can start with those four measures.
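(Editor's note: the numbers below are invented, but this toy simulation illustrates the dynamic Nathen describes: once a freeze builds a backlog, a steady every-three-days cadence that only ships batches the size of the arrival rate leaves change lead time far longer than the deployment interval.)

```python
from collections import deque

ARRIVALS_PER_DAY = 2     # hypothetical: new changes committed per day
FREEZE_DAYS = 15         # hypothetical deployment freeze that built up the backlog
DEPLOY_EVERY = 3         # deployment cadence after the freeze, in days
BATCH_SIZE = 6           # changes shipped per deployment (only matches the arrival rate)

queue = deque()          # committed-but-undeployed changes, stored as commit day
lead_times = []

for day in range(1, 181):                     # simulate about six months
    for _ in range(ARRIVALS_PER_DAY):
        queue.append(day)
    if day > FREEZE_DAYS and day % DEPLOY_EVERY == 0:
        for _ in range(min(BATCH_SIZE, len(queue))):
            committed_on = queue.popleft()
            lead_times.append(day - committed_on)

recent = lead_times[-60:]                     # steady-state sample
print(f"deploying every {DEPLOY_EVERY} days")
print(f"steady-state change lead time: ~{sum(recent) / len(recent):.0f} days")
print(f"backlog still waiting to deploy: {len(queue)} changes")
```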

Conor Bronsdon: Right. That's the place where you begin to identify, okay, what do we need to do from here? I mean, Goodhart's Law says that people will game metrics if you're just saying, oh, four DORA metrics, you have to hit these numbers. Great. You can game that.

Yep. You need the context. You need the richness of the conversations with the team. You need qualitative, quantitative, bring in all the signal you can. And you need to understand the impact of these decisions. It's so easy to say, hey, engineering team, I need you to deploy every X days. What does that mean for team culture?

What's the impact of that? What are the challenges around that?

Nathen Harvey: Yeah. It's also why we always encourage folks to look at all four metrics together. Don't try to optimize one of the four. You have to look at them all as a unit, because they are interrelated. And they're a good proxy, really, for this question of what's your batch size?

How big are the changes that you're pushing out? And again, from a surveying perspective and asking people questions, I can't ask you, how big is your batch size? Because your small is going to be someone else's medium and someone else's large, right? So we use these four signals as a proxy for that.

Conor Bronsdon: So this brings something to mind, which I'd love to dive into a bit more, which is what are the key numbers that jumped out to you this year in the report?

You've kind of alluded to a couple. Let's dig in a bit.

Nathen Harvey: Yeah, so one of the most interesting findings, I think, from this year, and it's a new thing that we looked at this year: we kind of had this question of, how focused on the users are your teams? And what does that mean for customer connections?

Yeah, like we have to think about the service that we're building and who's using it. And I think that first, as we think about that, that really changes the perspective and the way that a lot of different teams approach the work. Take, for example, a reliability team or an operational team. Traditional operations, I might care about what's the CPU utilization across my fleet.

My customers don't care about that. And maybe on a typical operational team, I don't even know who my customers are or what they're trying to do. So the best signal I have is CPU utilization, as an example. But as we get into more modern practices around operations, understanding how our customer is using this application changes what reliability means.

Am I able to meet the promises that we've made to our customer? I think you can extend that further into platform engineering, which is, of course, a hot topic right now. How do we help? How do we, how do we drive platform engineering? Well, the truth is, if you do that without focusing on the users, wait, who are the users?

Oh, it's the developers in my team. I need to make my developers more efficient. The best way for me to do that is to go and understand how they're working today. Where are their friction points? It's not to go off into a corner and build a platform and then go yell at them because they aren't using our platform.

So the fascinating thing that we saw though was that teams that have a higher focus on the user, they have 40 percent better organizational performance. Wow. This matters. This matters a lot.

Conor Bronsdon: I mean, it's something that's been talked about a lot by, you know, thought leaders, people like ourselves pontificating on podcasts: you've got to talk to your users, you've got to understand them.

So it's, it's awesome to have that data backing it up. Yeah. What are some of the other key insights you saw?

Nathen Harvey: Yeah. So another key insight that we saw when it comes to software delivery performance: we've oftentimes in our research seen that the change approval process is something that really slows teams down.

And in that change approval process, we're typically looking at an external body that has to approve changes like a change approval board. And we see, we've seen over the years that not only does that change approval board slow things down, it also actually has a bad impact on the stability of your changes.

In other words, with a heavyweight change approval process, your stability goes down; you're more likely to have a change failure. Not only that, it gets worse. Don't worry, there's more. From a developer's perspective, when there's a heavyweight change approval process, I might not even understand what it takes for my changes to go from committed to approved, much less deployed.

And because I don't understand that, I think about it a lot more, and that increases burnout on the team. Increases stress. Yes, exactly. So change approvals, we've shown over many years that they're not good. Like, we need a lightweight change approval process. We didn't investigate that this year, though.

Instead, we said, all right, well, change approval process is one of those steps between committed and into production. There's another step that doesn't necessarily involve an external body, but instead a peer. Right. And that's peer reviews for our code. How long does it take for code reviews to happen is a slightly different question than how long does it take to get a change approved, right?

And one of the things that we saw that was fascinating was that the teams with slower code review times, or, sorry, the teams with faster code review times had 50 percent better software delivery performance. 50 percent better.

Conor Bronsdon: Again, that's just an incredible efficiency gain. This is something we've seen in our own research, where code reviews are such a friction point in the development process if you're not careful.

And a lot of folks don't pay that much attention to it. They just kind of throw it over the wall and hope it works. And that creates a ton of context switching, a ton of stress to your point. And, it can make really fractured workflows that derail your software delivery lifecycle.

Nathen Harvey: I think there's another really interesting sort of subtle thing here with this, and this is going back to sort of the principles of DORA.

We investigate those capabilities to help teams find their bottleneck. So we just, I just told you, faster code reviews means 50 percent better software delivery performance. Guess what? If your code reviews are already fast, making them faster might not help. It's those slow code reviews. So you've really got to find that bottleneck.

Conor Bronsdon: So if I'm going to benchmark my code reviews, I'll shout out here, LinearB has its own benchmarks. If you want to look at some quantitative data, we've done that in our 2023 engineering benchmarks. But what would be the approach you would take as you're saying, okay, like, how do you determine if this is a fast code review or not?

Nathen Harvey: Yeah, it's a really good question. I think there are kind of two or three things that you want to look at. First, I want to understand how my code review time looks over time. Like, what is the trend of that? Is it taking longer and longer for code reviews to happen, or is it getting faster over time?

I might even want to talk about the sentiment of people on the team. Hey, are you happy with the amount of code reviews that you have to do? Are you happy with the amount of time it takes to get your code reviews? We talked earlier about work distribution. Hey, who's doing the code reviews on your team? Is there an uneven balance, where this particular person or this particular group of people has to do all of the code reviews?

So we're putting an undue burden on them. I think all of those things are interesting to look at. And then the thing that I would really look at is, as you look at those software delivery metrics, maybe you see your change lead time trending up, like in that example from earlier. Maybe it's code reviews.

Let's at least look at how long a code review takes. And part of that might also be how many handoffs we have in the code review process, how many back and forths we have.
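(Editor's note: a minimal sketch of the review-turnaround trend and work-distribution checks Nathen mentions, computed from hypothetical pull request records. The field names, reviewers, and timestamps are made up; in practice this data would come from your Git provider or a metrics dashboard.)

```python
from collections import defaultdict
from datetime import datetime
from statistics import median

FMT = "%Y-%m-%dT%H:%M"

# Hypothetical PRs: when review was requested and when the first review arrived.
prs = [
    {"reviewer": "ana", "requested": "2023-10-02T09:15", "first_review": "2023-10-02T16:40"},
    {"reviewer": "ana", "requested": "2023-10-03T11:00", "first_review": "2023-10-05T10:30"},
    {"reviewer": "sam", "requested": "2023-10-09T14:20", "first_review": "2023-10-10T09:05"},
    {"reviewer": "ana", "requested": "2023-10-11T08:45", "first_review": "2023-10-11T13:10"},
]

def hours_to_first_review(pr):
    requested = datetime.strptime(pr["requested"], FMT)
    reviewed = datetime.strptime(pr["first_review"], FMT)
    return (reviewed - requested).total_seconds() / 3600

# Trend over time: median pickup time per ISO week.
by_week = defaultdict(list)
for pr in prs:
    week = datetime.strptime(pr["requested"], FMT).isocalendar()[1]
    by_week[week].append(hours_to_first_review(pr))
for week, hours in sorted(by_week.items()):
    print(f"week {week}: median time to first review = {median(hours):.1f}h ({len(hours)} PRs)")

# Work distribution: who is carrying the review load?
load = defaultdict(int)
for pr in prs:
    load[pr["reviewer"]] += 1
print("reviews per reviewer:", dict(load))
```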

And I think there's something really interesting there with this new thing that we looked at this year. We don't really have strong signals yet, but there may be something there. That new thing that we looked at, I don't know if you've heard of it: artificial intelligence. What's that? So we looked at some AI stuff this year, of course, because it's on everyone's mind right now. Unfortunately, it's also so new that we don't have good ways to ask questions about AI yet.

Like, we're all still learning together. Yeah. How do you quantify it? Exactly. So the thing that we did do was say: hey, for the primary application or service that you work on, today, what is the importance of the role of AI in a series of tasks? And we laid out a bunch of tasks, like generating code, writing documentation, analyzing logs, incident response, and you could basically use a slider to say it's not at all important or it's super important to this thing today.

What we saw was that about 50 percent or so of our respondents were using AI in some way across those tasks. What we didn't see yet, though, was that it's actually contributing to organizational performance. So, in other words, we see teams that are starting to use it, but not dramatic impacts on overall performance.

One thing that we did see, though, was the well being of the people on those teams was lifted slightly.

Conor Bronsdon: Interesting. So, hopefully, AI is taking care of tasks that maybe are frustrating, or helping automate pieces of friction in the cycle.

Nathen Harvey: Absolutely. And then going back to our code review thing: hey, if code reviews are taking a long time, now I have a signal of where I might experiment with AI next.

Conor Bronsdon: I know Uber is using CodeReview AI and it's having a lot of success. I think the figure I saw was they saved 10 million this year, they think, based off of that.

Nathen Harvey: That's incredible. And think about that from a well being perspective, with your employees, the friction reduction that happens there, and like, let's let the AI help us write better code.

Conor Bronsdon: Yeah, this is an area of focus for us, too: looking at programmable workflows and saying, okay, how can we leverage AI in the process to speed it up? How can we also leverage automation that developers can fine-tune and configure around their cycle? There are huge opportunities to speed up code reviews, to your point.

And yeah, and really speed up the whole cycle because of it. Absolutely. What are other friction points you saw?

Nathen Harvey: Another area that we looked at, and have been looking at for the past couple of years, is operational performance, right? So we look at software delivery performance: how quickly and safely can you get something into production?

But that operational performance is, frankly, that's where all the value comes when our customers start using the service. For better or worse, operational performance and sort of the reliability of a service is super context specific. How reliable does your banking application need to be versus how reliable does the cake store that you buy your cakes from?

Like, how reliable does that application need to be? Well, I'm a big cake fan, so that one's actually important to me. I mean, you know, we can get into the debate on cake versus pie, no problem. All right. But like, yeah, there, there are different concerns there. So we can't just like... What's your reliability number?

Is it 99.9? What's the context? Yeah, the context super matters there. But one of the things that we did see this year was that teams that are investing in better reliability engineering practices see some immediate lift in their operational performance, and then, maybe not surprisingly, kind of a plateau, where their operational performance levels out as they bring on more of these reliability practices.

And then it ramps up again over time as they get better at these practices. I mean, you've probably seen that in real life. Think about using a code generator. Immediately it's helping you a lot, and then you're like, uh, I mean, it's helping me some. I need to fine-tune.

Yeah, I need to fine-tune it. And so that sort of lived experience, we see it in the data. When we see what we feel and what we experience reflected in the data, to me, that's one of the most exciting times.

Conor Bronsdon: Yeah, it really validates this kind of anecdotal piece in your brain where you're like, I think this is an issue and then you get, you know, qualitative or quantitative feedback and you're like, Okay, great.

Now I can go in and try to figure out how to solve this in a way that's positively impactful for the whole team. I'm curious if there were surprises in the data that came out for you where, you know, something came up or that just kind of shook you.

Nathen Harvey: Yeah, I mean, I think that, um, I think one of the surprises is...

that trunk based development, like trunk based development for the past couple of years has been a really interesting capability. We've seen in years prior, like 2019 and before that, trunk based development really had a dramatic impact on teams ability to do things like continuous delivery, to get better software delivery performance.

For the past couple of years, we've kind of seen mixed results across trunk based development. We aren't really sure why. But here are some of the things that we do know are positive about trunk based development. When you have better documentation in place, documentation unlocks a team's ability to do better trunk based development.

It was something like, and I'm going to take this number from memory so it might be slightly off by a percentage or two, but with good documentation in place, trunk based development contributed about 12.8x more to your overall performance. The other thing that's important is that trunk based development gets sort of mediated through continuous delivery.

What does all that mean? That sounds like I'm a data analyst now. What it means is that when you're doing trunk based development and have continuous delivery in place together, they help drive better software delivery performance. Now, you might argue that it's hard to have trunk based development without continuous delivery or even continuous delivery without trunk based development.

The truth is, it's all kind of messy, and it also helps us see how all of these capabilities build on, reinforce, sometimes tear down each other.

Conor Bronsdon: Do you think this would be associated, perhaps, with the quality gains that we've seen previously in our research, where when a team has high quality, their speed actually improves, even though some folks think that's a little odd?

Nathen Harvey: Yeah, that could absolutely be the case. And I think you're totally right there that, traditionally, or maybe it's not even traditionally anymore, maybe in the old school way, we thought in order to be safe, we have to be slow. Otherwise, you'd think...

Conor Bronsdon: right? You'd be like, oh, well if you're, if you're going faster, it's harder to be safe.

Nathen Harvey: Yes, absolutely. But what we've seen year in and year out, to the point where it's no longer surprising to me, though I understand it's still surprising to some people, is that in order to be safe, you have to be fast. And those two things, that speed and stability, they tend to move together with one another.

Conor Bronsdon: I wonder if that is part of why the documentation piece is so important for trunk based development: because you're improving the quality of the process, improving the quality of what you're delivering, and then that enables that speed and productivity.

Nathen Harvey: Yeah, absolutely. That could very well be the case. And this is also one of the challenges and beautiful things that we have here, right?

The data can only reveal so much. We can have hypotheses and like, guess what we see and use our lived experience and our experience with teams to tell stories about that data. But the data can only take us so far.

Conor Bronsdon: Interesting, yeah, so we saw some interesting data in our own benchmarking research that we did recently that I think correlates to this.

So we saw that cross-team collaboration, just looking at PR data, where we saw more comments from other teams, improved quality and lowered rework rate. And so there was actually a correlation in our data: enterprises had fewer of those cross-team contextual pieces added to PRs and to the code review process, and therefore had a higher rework rate.

So I wonder if that's kind of that same dynamic DORA's identified. You know, when there's more activity, more engagement from different teams, you get more quality, more perspectives. I wonder if that's part of why we saw that data.

Nathen Harvey: Yeah, interesting.

And that, that data also helps me think about things like loosely coupled architecture, which is another thing that we investigated, have investigated for a number of years. And I think that there's a, there's a, like, this is another good example of you can ask a question and people will hear it differently.

When I say loosely coupled architecture, what I'm really talking about is actually not the architecture of your system, but more so the communication pathways of your team. Is a team able to independently deploy the service or the portion of the service that they're responsible for? And I think the thing that you just described there where across organizations, we aren't seeing a whole lot of PR comments.

They may be acting as if they're loosely coupled, but they see their quality going down. And the truth is they are tightly coupled across those teams. But they behave as if they are loosely coupled. So they aren't participating in that code review process, commenting on those PRs. That to me is like fascinating.

I'd love to dig in.

Conor Bronsdon: Oh yeah. Happy to, happy to talk more about it at some point with you. I think the organizational system design is such an important piece. Like we all know it, but I think we all underestimate how much harder things get as you scale. I mean, just look at enterprises who now have platform teams in place because they're saying, Hey, look.

We need to centralize our ability to have these like key infrastructure pieces so that they are coupled across teams.

Nathen Harvey: Absolutely. And this organizational structure, how we keep teams aligned properly, all comes back to leadership as well, because of course the leaders in our organization shape the organizational culture.

They have control over the incentives. And, and look, DevOps was born because we had bad incentives across two different teams. Like, if you go back 10 years, 15 years ago, we told developers, move faster. We told operators, keep everything stable. Uh, and it's easy for one or the other of them. The operators, and I come from an operational background, I knew how to keep my system stable, accept no changes.

Right? But that's, that's, it's no good for either one of us in the long run. It's certainly not good for our customers. And so DevOps was sort of born out of this idea of, Maybe we could collaborate and align on the same goal of keeping our customers happy, and that's going to help us improve, but we can't get there if our leadership isn't bought in.

Conor Bronsdon: I think that leadership point is so crucial, because it can be challenging sometimes to translate the operational metrics that are really important for internal engineering to, you know, even the VP sometimes, let alone getting the C-suite to care about why these efficiency metrics matter as much.

You have to translate it to: oh, I'm helping deliver projects more predictably; my teams are able to have higher velocity and quality; that rework rate is lowered. We like to talk about the psychological safety piece and the need for that, and we can generally opine on that, but until we are able to show and translate that to the business metrics, it can be hard to get buy-in for some of these projects.

Nathen Harvey: Yeah, absolutely. Actually, I just came from a talk led by Denali Luma, who happens to be a DORA community guide. She was giving a talk about a bunch of statistical analysis that she's done across codebases, and she's taken that statistical analysis and also looked at developer sentiment, using developer sentiment as a proxy for the value of some open source projects.

And one of the things that she demonstrated through her analysis was that when you have higher variability and slower lead times or slower cycle times, that actually those two things are negatively impacting sort of the developer sentiment. And so when we get to better, like less variability, shorter cycle times, we're increasing the developer sentiment, which is a way of saying that we're increasing the value of this particular product.

Talking about that value is a thing that leaders can understand. They can help internalize that. And frankly, our job as technical practitioners is to build empathy with those in the business, right? And be able to talk to them in their language, just like we expect them to build up some capacity to do the same for us.

Conor Bronsdon: It takes effort from both sides. No question. Absolutely. That's super fascinating to hear, and it aligns with a lot of the things that we're seeing in the data trends. I'm curious, though. Let's get back to that benchmarking piece and thinking about how you define success.

You know, because you mentioned: great, we can see these gains, we can start to get an idea of what to do. That's all wonderful. But if you don't know... is deploying every three days good or is it bad? How should teams go about that approach?

Nathen Harvey: Yeah, this is a really interesting space because I think, you know, one of the things also that we do when we look at software delivery performance as an example, we talk about low performers, medium performers, high performers, and elite performers.

Well, I have to admit that it pains me a little bit every year when we do that. Look, we have these four different clusters, we have to put a name on them. But the truth is, when you put in sort of an evaluative name, like low, medium, high, and elite, everyone immediately wants to be elite. I want you to be elite too.

But the elite that I want you to achieve is elite improvement. Not elite as a raw number. I want to find the teams that are making those investments and improving. And getting better over time. In fact, the way that I talk about Dora all the time, it's a framework that helps you get better at getting better.

This is, this is a journey of continuous improvement.

Conor Bronsdon: Absolutely. And I think it's also important to say that when we show those elite metrics, it may be that the elite metric needs to be broken down to say, are you an enterprise? Are you a startup or a scale-up? But you should take a look at that. This is benchmarking that I know our team's looking at, and I know your team's looking at as well.

Nathen Harvey: Absolutely. And like saying, like, look, putting a moonshot out there and saying, we want to get to elite performance. That's great. But what you need to reward on your journey to elite performance are those continual improvement gains, right? And so at the end of any year, I'd much rather give a most improved award than highest performance award.

Conor Bronsdon: I really appreciate that perspective because, I mean, we've talked a bit about the need for creating good engineering cultures and having a longevity to it. And there's a huge risk when you just say, we have to push for X because it can be something that is really toxic to the internal culture, can burn people out.

It can do the things that are the exact opposite of what builds high performing teams.

Nathen Harvey: Exactly. And when you think about improving: improvement work, for better or worse, is never done, right? There's always an opportunity to improve. And as your teams get better at improving, they're also building up their resiliency.

Because I don't know what's going to happen from a market perspective, from a financial perspective, from so many different things. What I do know is that the world is going to continue to change. And the better you are able and capable of adopting and adapting to change, the better off you're going to be.

Conor Bronsdon: So how should teams start improving?

Nathen Harvey: That's a great question. How do teams start improving? I think the first step in, uh, starting any improvement journey is figuring out where you are. So like, you have to figure out where are we on this map, right? And I think that using things like the Dora metrics is a great way for you to set a baseline.

Here's where we are. And then we have to ask that question of what's, what's keeping us from improving. And this is where you start to go from the Dora metrics into the capabilities that we investigate. And of course we investigate a ton of technical capabilities, but we also look at process and cultural capabilities.

And as a team, through a conversation, try to identify which one of these is my bottleneck, and then commit to changing something in that capability. We want to improve that. It's really the scientific approach. We have a hypothesis that code reviews are the thing that's slowing us down. Let's do some work to improve the speed of our code reviews.

Now let's retest. Did our baseline numbers improve or did they deteriorate? Either way, we will have learned something and then we can make further investments and you basically rinse and repeat that process. That's how you get into that mindset and practice of continuous improvement.
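(Editor's note: a minimal sketch of the baseline, change, retest loop described above, using invented numbers. The metric names, values, and the "on-call reviewer" experiment are placeholders for whatever a team actually measures as its baseline.)

```python
# Baseline DORA-style measurements before the experiment (hypothetical values).
baseline = {
    "deployment_frequency_per_week": 2.0,
    "change_lead_time_days": 16.0,
    "failed_deployment_recovery_hours": 8.0,
    "change_failure_rate_pct": 18.0,
}

# Hypothesis: slow code reviews are the bottleneck. Experiment: an on-call
# reviewer rotation for four weeks, then re-measure the same metrics.
retest = {
    "deployment_frequency_per_week": 2.5,
    "change_lead_time_days": 11.0,
    "failed_deployment_recovery_hours": 8.5,
    "change_failure_rate_pct": 17.0,
}

# Lower is better for everything except deployment frequency.
lower_is_better = {
    "change_lead_time_days",
    "failed_deployment_recovery_hours",
    "change_failure_rate_pct",
}

for metric, before in baseline.items():
    after = retest[metric]
    improved = after < before if metric in lower_is_better else after > before
    verdict = "improved" if improved else "regressed or flat"
    print(f"{metric}: {before} -> {after} ({verdict})")
```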

Conor Bronsdon: And I'll say, if you're someone who's listening, we're going to make this as easy for you as possible.

We'll link the report in the show notes. Absolutely. We also have a free DORA dashboard that teams of any size can use, free forever. We'll link that as well. If you want to get started, start benchmarking yourself against the DORA metrics, or just take a peek to start understanding. I think it's a great place to start, because then you can say, okay, maybe you've gotten your free dashboard set up.

You've, you've gone and you've looked at the report. I feel like we're off on code reviews, to your point, like we need to figure this out. What would you advise a leader who's starting to see that friction point to do as they kind of implement it? Yes, we want to come back to the data. But how should they go about trying to improve these key areas?

Nathen Harvey: Yeah, so that, that really comes down to the context of your team and which area you want to improve. So if it's code review as an example, maybe there's, uh, maybe you look at the process of code reviews. Do code reviews just sit and wait? Maybe there's something you can do on your team where you almost have an on call code reviewer, right?

So we have a rotation. This week I'm on call for code reviews, so I'm not doing any real project work or product work; I'm here to pick up the slack this week. Code reviews come to me, so I prioritize them this week, and next week I hand that off to someone else. Let's try that for three or four weeks and see how our code review times adjust.

And that's just one way that you might look at that, but again, it comes back to the context and sort of the constraints within your organization, right?
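(Editor's note: the on-call reviewer idea above could be as simple as a rotating weekly schedule. This sketch is illustrative only; the team names and start date are made up.)

```python
from datetime import date, timedelta
from itertools import cycle

# Hypothetical team and a Monday start date for the rotation.
team = ["ana", "sam", "dev", "kai"]
start = date(2023, 10, 2)

def review_on_call(members, first_monday, weeks):
    """Yield (week_start, reviewer) pairs, rotating one reviewer per week."""
    reviewers = cycle(members)
    for week in range(weeks):
        yield first_monday + timedelta(weeks=week), next(reviewers)

for week_start, reviewer in review_on_call(team, start, weeks=4):
    print(f"week of {week_start}: {reviewer} is on call for code reviews")
```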

Conor Bronsdon: Maybe your org is more into pair programming and you need to take that approach. Or maybe you need to leverage programmable workflows to do, you know, automation that estimates how long reviews will take, to cut down on that context switching. There are a lot of approaches.

Nathen Harvey: Absolutely. A lot of approaches there. I love that you say pair programming, because with pair programming, we know how you get faster code reviews: you code review while you write the code. That's pair programming, right? And so that's a beautiful way to do that. Not everyone is ready or convinced that pair programming is actually going to help them.

Uh, so again, that's a good place to experiment then. Go try it out.

Conor Bronsdon: Yeah, and I'll just say we have a whole unit within LinearB where we look at these experimentation pieces, because we're really inspired by the work that the DORA team's doing, to say, how can we leverage this for improvement?

Whether that's our GitStream programmable workflow tooling, evaluating pair programming, or the benchmarking research, it's fascinating to see how the work that DORA has done has really led the industry these last 10 years. How do you view the position of your team, and of Google Cloud's role in it, in moving the industry forward?

Nathen Harvey: Yeah, so I think from, I'll start from a higher level with Google Cloud. I think Google Cloud's commitment to keeping this research going and keep bringing it out into the world is super important. And I super appreciate also the fact that Google has maintained its commitment to this research being program and platform agnostic.

This is not and will never be research into how do you get better on Google Cloud. That's not to say that, uh, that research shouldn't be done or isn't being done elsewhere.

Conor Bronsdon: And to be clear, you can use Google Cloud to improve these things.

Nathen Harvey: Absolutely, you can use Google Cloud to improve these things. But, that's not the remit of this research.

And I think it's really important that we maintain that. I think that, then second, my team's role, really, uh, I don't know if this came across or not, but I am not the data analyst. I'm not the PhD analyzing all the data. I work with them. But my role and the role of my team is to do things like this, bring the research out to the world and specifically take it to teams to help them go from basically academic research, beautiful, great, I can get inspired by that, but what does it mean for me?

What does it mean for my team? How do we put that research into practice? And that, I think, is the role of my team.

Conor Bronsdon: I love it. And I think it also gives you these incredible insights into where the industry is going. I'd love to ask you: based on the trends that you're observing in this year's report, what do you foresee as the future trajectory of this space?

Nathen Harvey: Yeah, so I think that, you know, um, first, AI, AI, AI. I should have said a non AI category here. Well, and I think from a DORA perspective, we're really excited about AI. Primarily because as more teams are adopting it, we're going to find and have better ways to evaluate the efficacy of AI and what is it actually doing from a trend perspective.

I think there are also a lot of interesting things around FinOps, if you will. Like, how much are these things costing us? How do we actually quantify the value of improvement work? How do we quantify the cost of friction in our process? How do we get to a place where teams can actually leverage flexible infrastructure to get better cost out of their systems?

And I think one of the fascinating things there is, if you can't change your system, it doesn't matter what the financial people or the finances say about running it on this platform instead of that platform. If you can't change, you can't do anything with that. So change is where all of that starts.

So I think there's AI, I think there's that finance, and I really think that we, as a research team, owe it to ourselves and the industry to look again at leadership and the role that leadership plays in driving all of this improvement capability.

Conor Bronsdon: And I think it's crucial to ask that question as organizations position themselves to adapt to these upcoming shifts in development practices, particularly around AI and others.

Nathen Harvey: Yeah, absolutely.

Conor Bronsdon: Absolutely. It's interesting. We've seen some trends here that really speak to the need for this understanding at the leadership level, too. I mean, at the DORA Community Summit that you and I were both at yesterday, I talked to so many people who were saying, hey, I care about the DORA metrics, I understand their importance to the efficiency of our organization; how can I get my leadership to understand?

And having that conversation on how to translate to business metrics is really important. I don't want to talk too much about this because this is the whole thing we do, and it's not what this podcast is about, but it's just a crucial conversation for any engineering org to have.

And one of the things we've seen that was really interesting, and this wasn't statistically significant, so I'm going to preface it with that, but we saw an indicator that startups and scale-ups deploy code 18 percent faster than enterprises in our dataset. Obviously not as big a dataset as DORA's, but these kinds of indicators speak to: okay, is there a leadership problem, or is this a tooling problem?

What's happening here? There are so many things you can dive into on the DORA side, and leaders need to be thinking about this all the time. That's why I think this report is so important: it should give leaders a roadmap to improvement.

Nathen Harvey: Yeah, absolutely.

And, you know, when we think about those improvements, we talk about those capabilities, right? Technical capabilities, process capabilities. I really think of each capability kind of as a dial. You can amp it up, you can lower it down. But here's the thing: if I'm an individual practitioner or, you know, on a small team, there are only so many dials that are within my reach, right?

Like, let's say we have an organizational policy about how our change approval process works. I probably can't reach that dial. I can't impact it today. I might make longer-term plans to build up to how I actually turn that dial. But today I can impact how frequently I'm running tests, what my continuous integration looks like. So it is this understanding of: are these capabilities within my reach, or are they things where I'm going to have to partner across and up and down the organization in order to impact improvement there?

Conor Bronsdon: So interestingly, the majority of listeners on our show are actually engineering leadership.

They're engineering managers, senior managers, directors and on up. So I'd love to also hear from you: what do you think that leadership approach should be?

Nathen Harvey: Yeah. So I think that leadership approach has to start with listening, and really seeking to understand what your teams are doing. The truth is, as a leader, I can tell you this with a lot of certainty.

Not, maybe not 100 percent because you can't say anything with 100 percent certainty, but I will tell this to leaders. Your team knows what hurts. Your team also knows how they can make that pain go away. It's your role, really, to go and listen to them and then create that space and capacity in their roadmap, in their tooling, in their way of working to allow them to eliminate that friction.

And I also think, from a leadership perspective, you have to remember to be careful with how you use metrics. Metrics can obviously be used for good, and they can obviously be used for bad as well. And so I think that means understanding that there is no one metric to rule them all, no one metric by which you can make decisions.

You have to understand that we are talking about complex systems. Complex systems that involve, you know, computer systems and stuff, but within that system, it's us. It's the humans and how we interact with one another. And so I think from a leadership perspective, it really is about listening to your team, making sure that they have the autonomy and the space to make that improvement work.

And your role is to support them in learning, support them in making those improvements.

Conor Bronsdon: Absolutely. If you can give them the support and the space and safety to make those improvements, developer led improvement is always going to be more sustainable than saying, top down, change this number, because that will then create all these negative knock on effects that you mentioned.

And I'm curious though, because... Like, this is obviously so important. Operational efficiency of an engineering team is a crucial part of the dual mandate of engineering leaders. The other part, though, is delivering for the business. How do you make the leadership and the rest of the business care about all this, you know, willy nilly software stuff you're doing?

Nathen Harvey: Yeah, absolutely. I think, I think, and this advice kind of goes both for the practitioners and for the leaders. We have to learn to speak the language of our partners across the business. So I have to understand, you know, a great example that a lot of us maybe have experienced. We moved from data centers into the cloud.

Great. That's awesome. Maybe we didn't think about the financial implications of that. Our CFOs said you moved from CapEx to OpEx. Which X? No, I'm in the cloud. I'm not on X. I'm in the cloud. Like, we don't necessarily have that communication pathway. So I think it is important for both sides of that equation, for the CFOs and for us on the practitioner side, to be able to speak in their language and build up some empathy there.

Conor Bronsdon: Absolutely agree. That translation layer is crucial. Metrics like DORA's can help give you a first step, and you need to take the effort to then apply them and translate them for your leadership. There's education to be done too. Absolutely. So I'd love to just give you an opportunity to share any closing thoughts, maybe a call to action on where to go to learn more about the DORA report.

What do you want our listeners to know?

Nathen Harvey: Yeah, absolutely. So it's come up a couple of times here in our podcast today: we were both at the inaugural DORA Community Summit. Fantastic time. You're doing one next year, right? Well, we don't have any solid plans yet, but based on how things went yesterday, I think, yeah, we definitely want to do another one next year, and I hope that we'll see everyone come back, and more.

So I think, really, the truth is that DORA doesn't have the answers. LinearB doesn't have the answers. I don't have the answers. But together, as we work as a community learning from each other, that's where we're going to get closer to those answers. And it is about learning from the experiences of others, looking at pitfalls that others fell into, helping others avoid them, or lifting them out when they do fall into them.

So I think that community, and we're here at the DevOps Enterprise Summit, that's what this is all about: learning from each other. So from a DORA perspective, my call to action is this: go to dora.dev. At dora.dev, you're going to find the latest research, you're going to find capabilities articles, and you're going to find a link over to the DORA community.

If you're listening to this, you owe it to yourself to join us on the Dora community so that you can share your experiences and learn from others.

Conor Bronsdon: Well, you heard it from Nathen. It's been an absolute pleasure here, and we're definitely encouraging our community to join yours. I think it is such a fantastic place to learn about the future and to help shape the future of software development.

And, uh, thank you so much for coming on, Nathen.

Nathen Harvey: Yeah, thank you so much. It's been a blast.

Conor Bronsdon: If you want to download the DORA report, we'll absolutely be linking it in the show notes. I mentioned we'll also have a free DORA dashboard and the benchmarks because, as Nathen put it, more context, that's going to give you more information to understand how to improve.

Nathen, thanks again for coming on the show. It's been amazing to partner with you on this year's report, and we're super excited. Thanks so much, Conor.