"Metrics are a great way to understand where you should start looking, they're not going to give you all the answers to what's happening in your organization" - Rob Zuber, CircleCI CTO

This week, Ben and Andrew dive into the (surprisingly?) complex world of calculator apps, analyze how AI is revolutionizing the technical interview, and dissect the emerging “two-tier” economy around AI. What side of the curve does your org fall on?

Then, the conversation goes on site to San Francisco, where host Dan Lines sits down with Rob Zuber (CTO, CircleCI) and Tara Hernandez (VP of Dev Productivity at MongoDB) for a discussion of LinearB's 2025 Software Engineering Benchmarks Report.

We unpack the report's surprising findings on the PR lifecycle, project management hygiene, DORA metrics, code quality, and predictability, with key takeaways for optimizing your engineering team's performance.

Be sure to grab your copy of the report to follow along with Dan, Rob & Tara.

Show Notes

Transcript

Ben Lloyd Pearson: 0:08

Welcome to Dev Interrupted. I'm your host, Ben Lloyd Pearson.

Andrew Zigler: 0:11

And I'm your host, Andrew Zigler. In today's news, we're talking about why startups are leaving big corporations in the dust when it comes to AI, why building a simple calculator app is actually a complex puzzle, and what to do now that AI has made your whiteboard interviews completely useless. Ben, what do you want to talk about first today?

Ben Lloyd Pearson: 0:30

Well, I feel like since everything seems to be about AI these days, maybe we just start with the one story that's not AI this week.

Andrew Zigler: 0:37

Oh, I love the calculator story. So this calculator story that we're going to include in the roundup is a deep dive into something that we all take for granted every day. You know, we all open up a computer or our phone and do a simple calculation on the calculator app. But did you know that under the hood, this is actually a lot more complex than you would have thought? And this is because of how we represent numbers in computers, and the abstract ways in which we have to do so. And then when you start introducing calculations, you start sacrificing accuracy for precision. And in this article, it's a really interesting deep dive. I will say I'm not the biggest math nerd in the world, so I can't do it justice. But for those that are, I know they'll find this a fun ride through the engineering journey and the complexity of a calculator app.
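
For a quick sense of the kind of problem the article explores, here's the standard floating-point example in Python; this is a generic illustration, not code from the article itself:

```python
# Binary floating point can't represent most decimal fractions exactly,
# which is one reason a "simple" calculator app is harder than it looks.
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False

# Calculator apps typically work around this with decimal or rational arithmetic.
from decimal import Decimal
from fractions import Fraction

print(Decimal("0.1") + Decimal("0.2"))    # 0.3
print(Fraction(1, 10) + Fraction(2, 10))  # 3/10
```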

Ben Lloyd Pearson: 1:21

Yeah, you know, I wanted to include this story because it is just a fun look at something that, like you said, we all take for granted. In fact, one of the biggest mistakes that my teachers taught me in high school and elementary school was that I wasn't always going to have a calculator in my pocket, so I had to learn math. But it turns out, for my reality, that the opposite was true. I do always have a calculator in my pocket. And actually, the entire reason I got into software development was because of how much I hate math. So I'm really glad that people like this exist, because it enables people like me to learn. You know, I think my distaste for math really took off when I learned about the quadratic equation. And I literally learned how to write software on my calculator that did my homework for me. And, you know, technically teachers might call that cheating, but it actually sparked a whole interest in me. You know, I would learn the formulas enough to write an application that solved it for me. But yeah, just to say, I don't understand it either. I'm really glad that there are people in this world who can. And if you're someone who is really geeky about math, it is a pretty fascinating breakdown of how this very standard technology works today.

Andrew Zigler: 2:34

And another thing as well on this one that was interesting to me is how, when you solve a problem like a calculator app, you might be able to solve 99 percent of it really, really easily, but getting that last 1 percent so that it's perfect, and putting in that blood, sweat, and tears to make something foundational work, is incredible. That's what great technology is all about.

Ben Lloyd Pearson: 2:55

Yeah, so I want to talk about one of the stories that I read this week, and that's about how there's this two-tier AI economy that's emerging between startups and corporations. So large organizations really kind of appear to be falling behind startups in terms of AI adoption. Startups move very quickly, they're able to innovate rapidly, they try new things frequently. But, you know, big corporations, conversely, have a tendency to get stuck in red tape. And they may have money and resources, but pouring money and resources into this AI challenge isn't really what solves it. You know, startups are experimenting really fast with these tools, and I think they're able to do it a lot more efficiently. So you're seeing AI adoption take off within startups, but lag behind quite a bit in larger corporations. We work for a small company, and we're already using agentic workflows with AI to optimize some of the stuff we do for the show, for polling and research. We're using this new workflow we just built to collect research for newsletter articles that will be coming out over the next weeks and months. This is true from our perspective: we work at a startup and we are very rapidly innovating with AI.

Andrew Zigler: 4:10

Indeed, and it's about being able to iterate very quickly. Sometimes if you're in a larger organization, your cycles are naturally so much longer that it can be hard to get the momentum needed to build with something that's changing as quickly as AI. And I'm excited for what these kinds of things will unlock for us to help us work more efficiently. I also think it calls out the real root of where the AI innovation comes from, which is scrappy teams making resourceful decisions with what's available to them in the time they have. And I think it's going to cause us to see a lot more diversity in the kinds of problems that get solved with technology by startups, because they have the ability to move really fast within regions and industries that are traditionally managed by those large enterprises that can't move as quickly.

Ben Lloyd Pearson: 4:58

Yeah. Speaking of something that we're probably going to have to iterate on very soon: the tech interview process. Tell me what you read about this week on that, Andrew.

Andrew Zigler: 5:08

Yeah, I read this amazing article from Kane Naraway about how AI has killed the interview process for tech folk. And it really just kind of shines a light on everything that we already understand and know about the reality of applying and interviewing for jobs in an AI world now. The idea that, one, you're often tested on things that are expected to be used alongside AI, and maybe the interviews are not testing, you know, your AI capabilities, which is what you'd actually be using on the job now. But on the inverse of that, there are so many opportunities and ways for AI to get between you and the interview and inject its knowledge into the process. So if someone's interviewing you, are they really interviewing your skills or are they interviewing Claude's? You know, that becomes the question of the article. And it's really shining a light on how hiring practices need to change. Just like how the day-to-day work of developers is evolving to be more like a manager of AI, to be in charge of these workflows and processes, to maybe not be writing as much code, but definitely be overseeing a large technical project or architecture. Those are the things that we should be testing, and testing them alongside a candidate's ability to use AI tools.

Ben Lloyd Pearson: 6:22

Yeah, I mean, I feel like at a minimum, everyone should be incorporating AI questions into their interview process to figure out how candidates are using it. Like, have they experimented in positive ways with it? Have they seen success? I think in the future, really every engineer needs to become product oriented. And I think the sooner you can do that, the better. And what I mean by that is it's not enough to just understand code and to be able to generate code anymore. You need to understand, or have an awareness of, how that code impacts the end users. So even if you're making simple database updates, you should know how that database is used in real world conditions so that you're optimizing it for how it's actually going to be used. But I think beyond that, some things that maybe organizations should start thinking about today are how to lean into the human side of the experience. You know, if you can, in-person interviews may actually make a lot of sense to bring back now. But even if you can't, I mean, make AI a part of the interview process. If you're going to do technical challenges or technical interviews, you should make AI a part of that. How can they use AI? How can they leverage it to solve higher order challenges? You don't need to just test them on their ability to produce code anymore. But it also makes me wonder, like, is LeetCode dead?

Andrew Zigler: 7:55

Yeah, the LeetCode of tomorrow will probably be more about testing your ability to prompt and use AI tools to solve the kinds of problems that were originally on LeetCode. But ultimately, what LeetCode is testing maybe isn't something that needs to get tested. Um, I think it really harkens back to, what's the goal of asking those technical questions in an interview of a candidate? It's because you want to see their skills. Well, the skills you want to see now are their ability to work with the tools that are going to produce that kind of content, as well as to understand the fundamentals behind why it's working. So really, it's about going back to the reason that we do interviews and focusing on the human elements that you can actually understand.

Ben Lloyd Pearson: 8:35

This article specifically called out Max Howell, who's the creator of Homebrew. If you're a developer who works on Mac, there's like a 90-plus percent chance that you're using something that this guy created. And, you know, he probably can't even get hired at Google, because even though most of their company probably uses the stuff he built, if he can't solve LeetCode, he may not get in the door. So, you know, I think in the future, people like Max Howell are the ones that are really going to excel.

Andrew Zigler: 9:03

Absolutely. This kind of actually touches on something very similar that's evolving and changing within the tech industry right now, especially for entry-level software developers. When they're coming on the scene, a lot of times the way that they've acquired their knowledge, or that they start to learn on the job, is, you know, by interfacing with people. With technologies like AI, they're able to get fast answers based upon the context they're working in. But when you compare that to the last decade or two, if you became a developer, you probably spent a fair amount of time on Stack Overflow trying to find an obscure answer to your very obscure question. But as we know, now you can ask those kinds of questions to ChatGPT, even, and get a response. So there was an article this week about, you know, the loss of the struggle that you might go through as an individual developer to learn those things, to go on Stack Overflow and find the 30 questions and answers that don't answer your question. Like, that's the struggle of understanding the fundamentals of what you're writing.

Ben Lloyd Pearson: 10:00

Yeah, I think you're referencing this article that I brought up in one of our channels about how new junior developers appear to possibly be losing the ability to code. And this kind of made me think, like, Andrew, have you ever met a developer that doesn't know what Stack Overflow is?

Andrew Zigler: 10:16

No, that's kind of what I'm thinking of immediately. It's so ubiquitous.

Ben Lloyd Pearson: 10:21

Yeah, I mean, you can bring up something that vibes with that Stack Overflow nature, and any developer knows what that experience is like, where you have loads of questions that are only tangentially related and a bunch of surly people who are providing very pointed and sometimes aggressive answers.

Andrew Zigler: 10:38

Yes.

Ben Lloyd Pearson: 10:40

But, I mean, that's changing, you know. We sort of grew up, for lack of a better phrase, in an environment where everyone was cutting their teeth on Stack Overflow, but now, when a developer encounters a problem, they can just simply paste that error into one of the many generative AI tools that are on the market and instantly get an answer that is directly relevant to them. So I'm kind of like, you know, forget calculators in your pocket. Now you've got an encyclopedia in your pocket, and not only that, you can have a conversation with it. That completely changes the paradigm of how we solve problems.

Andrew Zigler: 11:17

Good point. I think that's the most powerful experience that I have with AI so far: not just asking the question and getting the answer, but then continuing in a dialogue to ask other questions, to do a follow-up. This is why Perplexity is really successful, because when you go to search, you're not trying to magically pull up a list of things, you're trying to actually find your answer. So when you start by asking your question, and you're having a dialogue, it's a really powerful experience that becomes deeply personalized, just like how every conversation, even this one right now, is personalized, we're having our own discussion. When you're able to introduce knowledge in that way, that's how we learn as people. So it's a really powerful way to build that knowledge over time. And really what it calls out to me is, if you are building things with AI, and maybe you're new to technology and exploring how to implement things, to also balance those inquisitive moments with ChatGPT against going and investigating what other people are sharing and building online. I think that's a really great opportunity to have that communal developer experience, share knowledge, and it'll get us all out of our silos as well. And you're always going to learn something if you can saddle up to a, you know, a mentor developer who's able to shine a light on why those struggles are so important to building that knowledge.

Ben Lloyd Pearson: 12:42

It makes me wonder, like, are we sacrificing knowledge gain for speed? So AI lets you move faster, but are we sacrificing the ability to learn? So yeah, the author of this article that I mentioned does have some great recommendations. Among other things: bring a learning mindset. You should interrogate your AI; you shouldn't just accept answers that it gives you at face value. But sort of touching on the more humanistic aspects of our previous story on interviews, there are a lot of places for human discussion out there that aren't overwhelmed by bots and GPT-generated stuff. You know, there are lots of communities out on Discord and Reddit, so I think affinity groups are becoming more important than ever. We can all learn from GPTs, but there's still a lot of value in learning from other humans as well. But then there are also some recommendations like bringing more conversation into code reviews. So don't focus as much on whether or not the code works, but more on why that approach was taken in the first place. And then, you know, just occasionally build things from scratch. It's fun to make stuff from scratch. I love to cook; a lot of times I buy stuff from the store, but sometimes it's better to just do it yourself, because you learn something along the way and you take that knowledge with you into the future. So, you know, it's not all bleak. It's not like we're gonna suddenly forget how to code, but I think we do have to take a different approach to learning in this environment.

Andrew Zigler: 14:14

Yeah, I don't think it's bleak at all. It's just a new level of abstraction that will require new skills.

Ben Lloyd Pearson: 14:20

Yeah, So, Andrew, tell me about our guest today.

Andrew Zigler: 14:24

After the break, we're diving into some amazing insights from a recent event that Dev Interrupted had with LinearB in San Francisco, covering our 2025 Software Engineering Benchmarks Report. And so what we're going to be listening to is what happens when you get Rob Zuber, the CTO of CircleCI, and Tara Hernandez, the VP of Developer Productivity at MongoDB, together in a renovated church with a live audience and a bunch of technical experts. And, well, you get magic. So stick around after the break. We're gonna have a roundtable discussion with these industry leaders on dev productivity and experience.

Ben Lloyd Pearson: 15:05

Let's talk about something that's been on every engineering leader's mind: how do you measure success beyond just DORA metrics? That's exactly what LinearB tackles in their latest guide on how top teams track developer experience beyond DORA. If you're relying on deployment frequency and lead time or cycle time, you're missing the full picture of how engineering actually drives business impact. This guide breaks down real world metrics that matter, stuff like developer satisfaction, cognitive load, and engineering impact beyond just shipping fast. Because what's the point of going fast if you can't see where you're going? If you're serious about improving your team, go grab your free copy now. The link's in the show notes.

Dan Lines: 15:49

We're going to get started with our first session today, titled Software Engineering Benchmarks and Insights Roundtable. Now we have a wonderful agenda for this session. We're going to look at a bunch of different insights, five of them. But before we get there, I want to introduce our amazing roundtable guests, starting with Tara Hernandez, VP of Dev Productivity at MongoDB. Welcome, Tara. Hello. And of course, we also have Rob Zuber, CTO of CircleCI. Welcome, Rob.

Rob Zuber: 16:31

Hello, everybody. It's really weird being up here. It's just like a bunch of bright lights in my face, and all I can see is glowing LinearB things. But they're very cool.

Tara Hernandez: 16:40

And a really big pipe organ. Welcome, everyone.

Dan Lines: 16:43

Yeah, can we play the organ? You know what I told them is make this look like a nineties roller rink. And I think they executed it really well. Maybe we can play the organ, but, uh, amazing to have you both here.

Tara Hernandez: 16:57

Free bird.

Dan Lines: 17:02

So for today's agenda, we're going to look at five insights. We're going to look at the PR lifecycle, PM hygiene, DORA metrics, code quality, and predictability. This is all coming from the 2025 engineering benchmarks report that you should all have. And just a little bit of information about the report before we get started. On the left hand side, you'll see the population size. We looked at about 6.1 million pull requests, over 3,000 organizations, 32 countries, and you have a bunch of different calculation types in there: P50, P75, new for this year is P90, and average. This is the fourth generation of the report. And this year we have seven new metrics that we've added. The first two, approve time and merge time. That's looking a little bit more at developer experience, breaking down review time, getting a little more granular. We added PR maturity, and then we added a bunch of different ones here: issues linked to parents, branches linked to issues, in-progress issues with estimations, in-progress issues with assignees. These are all what I consider predictability, or predictable delivery, metrics. Let's start with PR maturity. So if we think about PR maturity, think about putting up a pull request, and maybe it has a hundred lines of code of change in that PR. What PR maturity is looking at is the ratio of the amount of changes. So let's say that you had 20 different lines that were changed after the PR was opened. You'd have a PR maturity ratio of 80%. What are your thoughts on that new metric that was added, PR maturity?
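
A minimal sketch of the arithmetic Dan describes; the function name and inputs here are illustrative, not LinearB's actual definition or API:

```python
def pr_maturity(total_changed_lines: int, lines_changed_after_open: int) -> float:
    """Share of a PR's changes that were already in place when it was opened."""
    if total_changed_lines == 0:
        return 0.0
    return 1 - (lines_changed_after_open / total_changed_lines)

# Dan's example: a 100-line PR with 20 lines modified after it was opened.
print(pr_maturity(100, 20))  # 0.8, i.e. an 80% maturity ratio
```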

Rob Zuber: 18:50

We might talk about this for an hour. One of the things that I think about metrics in general is that they're a great way to understand where you should start looking; they're not going to give you all the answers to what's happening in your organization. They're going to say, hey, that's interesting, let's go ask a question about that. And what's really important when you ask those questions is the surrounding context: why do these numbers look the way they look? And the thing that I love about this and a few of these different metrics is you could describe a fantastic organization and a terrible organization that show the same numbers, right? So PR maturity, again, how much change is there after someone puts up a PR? In the fantastic organization, you have teams pairing on code. By the time the PR goes up, two people have looked at it. They've built it together and someone goes, yep, let's put that in. No problem. You know, we've all agreed that this is the right thing. In the less awesome organization, people are rubber stamping PRs. Nobody actually asks any questions. They just put them up and they're like, yeah, go for it. You know, it's like, hey, it looks interesting. I don't know. I don't really have time to read your PR. I got my own work to do. And I'd always ask, which one of those do you want to work in? Right. So you can't just look at one of these metrics and say, my company is elite, my company is terrible. You can say, that tells me something about what's happening. What else do I need to know to sort of triangulate what's really happening in the organization?

Tara Hernandez: 20:28

Yeah. You know, I was at re:Invent last week, 'cause, like half the living world, gosh, there were a lot of people in Vegas, and there was a talk about collaborative AI, this new concept that the speaker was trying to hype up. And she said something really interesting, which was, you know, collaborative AI is when you recognize that AI has all of the data and knowledge, and the human has the context. And I'm like, yeah, okay, I buy that. But that's actually true for just about anything, and I think it's certainly true for metrics. And then the example that I think I gave before that we liked is, when I was at Google, I almost never made commits. I was a manager, you know, don't touch the code, you'll break something. But there was a cleanup that we did, a code refactoring. It was like a million lines of code that was in the wrong spot and it was just sitting there, and I'm like, I'm gonna go clean that up. So I made a PR that was negative 900,000 lines of code. I can assure you that it was not reviewed thoroughly, but it was the most exciting button press I've ever made in my life into GitHub. And you know, without context, that would have been like, you just deleted the entirety of our intellectual property, right? Or some other responsive thing. So I think that these metrics, PR maturity being, I think, a good one, it's like a piece of your Lego kit as an engineering leader: what do you want to be true, and what do you want to value, right? And I would argue, I think Rob has some good examples, but I would argue other variations are, where are you in your career? As an engineering manager, you might want a lot of visibility on what your junior engineer is doing, and you see this long conversation in a GitHub PR about giving that person feedback, and now it's stored as part of the commit history, right? Whereas with the senior engineer, you might see a couple of things, 'cause you know they had a really deep whiteboard session at some point with the staff engineer and it came in pretty clean. So again, I love the concept of it, but I think we really need to think about what we do with it now that we're measuring it. That's kind of the important next step.

Dan Lines: 22:31

Yeah, I think, well, first of all, you know, I love that story. That's a great story. But for all of these metrics, and for our audience listening today, the takeaway here is context matters a lot. So you're going to look at these metrics, we're going to look at some of these insights, we're going to talk about the benchmarks, but think for yourself, if you're sitting in the audience and listening: do I know the metrics for my organization? Can I say what they are? You are the ones that understand the context of your business, your project, what you're looking to deliver, the seniority of your team, the juniorness of your team. And I think what's great about everything that we're going to talk about today is that it opens up conversation to ask the right questions of yourself, of your team, and of your business. Go ahead.

Tara Hernandez: 23:17

So I think there's another aspect of this that we've been talking about recently, and I'd love to see your reaction to this idea, which is, a lot of the time we think about developer velocity metrics: how fast your PR is going through, how long does it take you to do your code reviews, how long does it take you to write your tech spec, execute, et cetera, et cetera, right? But if you think about that in the aggregate of a team, I realized that one of the most interesting aspects of this is how well it shows how that team's leader is doing, right? The team's leader is responsible for what their team delivers, right? So if the team's missing a lot of deadlines, maybe their engagement survey scores aren't so great, there are quality problems, there's a low test rate, maybe that's a piece of a leadership puzzle, part of your toolkit, rather than a developer piece.

Rob Zuber: 24:08

We think a lot, talk a lot, I think, in this industry, not just at CircleCI, about kind of the delivery unit as the team, measuring velocity as a team. The way that I think about that is, to your point of the manager, I expect the team to be effective and I care about the team delivering, but then what happens inside the team? Right. If one person is not contributing, that's a concern of the manager. And then I guess your point is, if the whole team is not delivering, that's probably a manager issue. It's not an individual issue. Of course, there are other dynamics in organizations, we could go on forever, but

Tara Hernandez: 24:40

Everything depends, right? Exactly.

Rob Zuber: 24:42

But I think the resolution, like being able to see below the level of the team, to me is the responsibility of the manager, meaning I can't see that much from the outside. From the outside it's really hard to see the dynamic that created the scenario, right? Okay, yes, your team is not delivering as much value as another team, or you're missing your deadlines, you're unpredictable in your delivery, you have lots of incidents, whatever you care about. But there are some dynamics inside the team that are causing that to happen, and to me, it's the manager's responsibility to figure that out.

Tara Hernandez: 25:17

And having that signal, right, is really kind of a key indicator.

Dan Lines: 25:20

We probably could have spent an hour just on this first metric.

Tara Hernandez: 25:26

Keep us going, Dan.

Dan Lines: 25:27

I'm going to move us forward and actually start looking at some of the insights. We have five or six to go over here. And the first insight is around the pull request lifecycle. So I'm going to read the key takeaway here: PR size, so pull request size, is the most significant driver of velocity across the PR lifecycle. And what the little graph is showing you here is, as PRs get larger, cycle time also gets larger. And as PRs get smaller, your merge frequency increases and your cycle time decreases. Now, I love this insight. When I'm working with customers, it's usually the first one that we talk about, around PR size. And usually when I come in and work with a customer, the PR size is a little bit large. And it's not even about the PR size being large. It's about what I was saying earlier, the context of what is causing it. There could be a lot of different things causing that, ranging from, you know what, we're working in a situation where we're only releasing once every month, so developers try to get everything into a PR to make sure it goes out, otherwise they'll miss the next release. Or it could be something that has to do with technical debt: every time I try to change something within this code base, I actually have to touch five different files and change all this stuff, so even though it seems like a really, really small request of me, I'm actually having to touch a large amount of code. When you hear this takeaway, that PR size is the most significant driver of velocity across the PR lifecycle, what comes to mind for the two of you?

Rob Zuber: 27:10

I think the first thing is the word driver, and trying to, you know, rep on behalf of my DSA team: I think there's correlation. I don't know that we necessarily have causality there. There are feedback loops in software development that work both ways. You know, I think this is trying to say PR size leads to reduced velocity, but I would argue that reduced velocity leads to large PR size, right? If people are slow to review and give me feedback, or it's expensive to get a PR through the process, then I will jam more stuff into the PR, and then I get this negative feedback loop. I make it bigger, people take longer, so I make it bigger, so people take longer, and now we're into month-long PRs. Instead, like, I've worked in teams where we would not even mention anyone's name, just write the PR, drop a link into Slack, or HipChat back in the day, and someone would just review it in three minutes because they knew exactly what I was working on. They had the full context, right? So it can tell you something else about your organization. Again, everything is context, right? Are the people showing up to review saying, I don't know anything about this, now I need to sit down and understand everything that surrounds this change, therefore I'm only going to do this on my coffee break in the morning? Versus, I could stop what I'm doing, review this, agree to it, and move on in a very short period of time. And so I think there's organizational culture and structure that drives large PRs, versus it always being large PRs driving that organizational structure.

Tara Hernandez: 28:49

100 percent agree on that one. I think it may be non-obvious to some, but on the large PR side, it's like, what bureaucracy do you have that's causing your developers pain, right? And especially in your example, if that developer had to re-merge 14 times, now they're angry, right? So God only knows what's going into those updates.

Dan Lines: 29:10

I mean, you talked about, I think, a good situation where there's a shared context within the development team. I think you said, hey, let's say that I was making a change, you already knew that I was going to be working on that. So when the PR goes up, you would say, oh yeah, I knew Dan was doing that, I have all the context, I can review this really quick. Now, I think on the flip side, in a situation where there's not a lot of context between the teams, maybe the one team is working on many different things in parallel and also the PR size is large, that doesn't sound like a fun review to me. If I'm a developer, I might say, you know what, I'm going to wait till, I don't know, Friday to do this if it's Monday, or at least wait till the next day. I'd rather be coding. I'd rather be working on something on my own than unpack this enormous PR. So I'm not going to pick it up.

Rob Zuber: 30:04

Yeah, I think that's exactly right. I'm thinking WIP limits, right? I mean, it's kind of nuts how quickly you get from PR size to WIP limits or whatever, whatever other tool we reach for. And I think as leaders, I don't know the makeup of the audience, but I know someone's laughing at this as I say it: we're like, well, I have six engineers, that means I could do six things at the same time, right? And someone is saying, but we pair and that's more productive, and you're like, that doesn't make any sense, two people are doing the work of one person. This is where that matters, right? Two people driving, or even working kind of in the same area, so that they're thinking about that problem. You and I are working on similar things, you're thinking about the same problem, and I say, hey, I'm going to make this change here. Like, that totally makes sense, I was just thinking about that. Versus, I haven't looked at that code base in a year, now I need to go reread this whole chunk of the code base. So, again, it's a sign of the dynamics that are happening in your organization. Because what happens, a warning to all you leaders out there, what happens when you read something like this? The simple version is, we're going to tell all of our engineers they're not allowed to have more than 20 lines in a PR. Those PRs will be terrible, right? Every single one will be exactly 20 lines and you'll be like, what does this do? Well, there's another 20 lines coming, right? Like, it's nonsense.

Tara Hernandez: 31:33

I'm pretty sure there's a Dilbert cartoon about this.

Rob Zuber: 31:36

We all live in a Dilbert cartoon for sure, but you can't drive from the number, right? That's the point. This is the beginning of a conversation. You go say, okay, what are we doing? Where are we falling down? Why are we working on too many different things? Right? And so I just think there's a ton that comes out of this.

Dan Lines: 31:53

What you made me think of also, if I'm a team leader, or maybe I'm planning my sprint, maybe we would be more inclined to say, hey, let's tackle a particular problem together this sprint, so everyone has the same context when the PR goes up. Because the other way to do it is to say, okay, I have six developers on the team, I'm going to send you off on six different missions. Well, you know, probably your review time is going to be larger.

Tara Hernandez: 32:18

And you better hope you don't, you know, find any buses near your team, right? So, you know, single points of failure or anything. I think the other thing I'm actually curious about, and I remembered it after the last one, when we did the webcast, is we asked this question: what constitutes a pull request? So for the context, for example, of really tight teams moving quickly, how much of that code and how much of the culture is around developer documentation, right? You know, is there a markdown file that's included in that, and maybe there's more documentation, or maybe there's more documentation plus unit tests than there is code, right? So I think, again, there's another sort of contextual aspect of this metric that might be kind of interesting to think about, because in my ideal world, the PRs are of reasonable size for whatever it is that they're solving for, but also include some tests and some documentation and hopefully something like an architectural decision record. So six months from now, when we go look at it, like, why did we do that? Oh, right, because Rob said that we're going to do this thing. And so I think that, again, loving the concept of it, how can you make this an actionable metric to drive the culture that you want to see? Maybe, you know, something like young developers, smaller scope problems, smaller PRs; bigger developers probably touching more things, you're doing a big refactor, that's going to be huge, right? But then, you know, the outcome of that is that your subsequent PRs will start to get smaller again, because now you don't have to touch, you know, at Netscape, we had this one header file, net.h, I'll never forget this. It had the world's largest global symbol table in it. It was so big, we had to have a custom compiler from SGI because it wouldn't compile in the regular compiler. I'm saying this now 'cause, you know, everybody involved is now dead. Um, they're not dead, but they should be dead. Intrigue. It was a wretched, wretched thing. Right. And so, you know, the interns weren't allowed to touch that file. And you think, oh, well, that was 30 years... Jesus Christ, that was 30 years ago. But we're humans. We keep making the same mistakes over and over again. Right. And so I think the interesting thing is, before we were leaning in on where the leadership comes into this, but I also wonder, how can you make this directly actionable for that junior developer, right? For him or her to understand what they're doing.

Rob Zuber: 34:30

I mean, regardless of causality or whatever, tooling, right, or the approaches. I guess I would just share a story to say, but like, feature flags, right? Branch by abstraction. How do I make PRs that are not the whole unit of work, but allow people to keep up with my context? That's another approach, right? I've got someone, maybe we're working on slightly different things, and I'm like, hey, I'm thinking of doing this, I'm thinking of doing this. I'm testing it out, I'm getting feedback in production. These are all great things, right? Instead of, I need to write the whole thing. It's kind of going back to the PR maturity thing. And then, oh, and then we're going to get into refactoring later. I'm so excited. Sorry, spoilers. But like, what tools do I have as a developer to actually write small PRs? Are we coaching, are we training our younger developers on how to break down work in a way that isn't necessarily complete, but allows us to evaluate our ideas along the way? And I think there's lots of opportunity in there.
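
A minimal sketch of the feature-flag approach Rob mentions, assuming a flag read from an environment variable; the flag name and rendering functions are hypothetical, not anyone's production setup:

```python
import os

def dark_mode_enabled() -> bool:
    # A real team would likely use a flag service or config store instead of an env var.
    return os.getenv("FEATURE_DARK_MODE", "false").lower() == "true"

def render_light() -> str:
    return "<body class='light'>...</body>"   # existing behavior, untouched

def render_dark() -> str:
    return "<body class='dark'>...</body>"    # new path, built up across small PRs

def render_page() -> str:
    # The incomplete feature merges and deploys continuously but stays off for
    # users until the flag flips, so each PR can stay small and reviewable.
    return render_dark() if dark_mode_enabled() else render_light()
```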

Dan Lines: 35:26

Yeah, that's it. I'll push this forward, but that's exactly why I usually like starting with PR size when I go to work with a customer. And for you all in the audience, even if you just know what your PR size is and you have a conversation about it with the team, you're going to discover something, whether you want to make it smaller or you're okay with the size. Like Rob's saying, they might say something back to you like, yeah, I'd love to have smaller PRs, but I don't trust our feature flagging capability, or whatever it is, we don't have it, I don't know how to use it, so my PRs are here. So, a few of the supporting insights, if we move on to the next slide. Larger PRs wait longer to get picked up for review. I think it makes sense, right? Who wants to review an enormous PR? No one. I'm going to save that for another day. Larger PRs have longer cycle times, probably for the same reason. Larger PRs take longer to approve. I think all of these pretty much correlate together. PRs that wait longer for the review to start also take longer from approval to merge. And larger PRs are modified more heavily during the review.

Tara Hernandez: 36:38

With my ginormous PR being an exception.

Dan Lines: 36:41

Yeah. Besides you took two

Tara Hernandez: 36:42

hours.

Dan Lines: 36:43

There's an exception to

Rob Zuber: 36:44

every

Dan Lines: 36:44

rule, but

Rob Zuber: 36:45

You did not build your plan on, what was it, 900,000 lines? Maybe it's something stupidly large.

Dan Lines: 36:52

Let's move on. Oh, this one's funny. Let's move on to the next insight here: project management hygiene. So, key takeaway: poor project management hygiene is directly correlated with higher velocity. I'll say it again. Poor project management hygiene is directly correlated with higher velocity. You know what this made me think about, actually?

Tara Hernandez: 37:18

Jira tickets are for suckers.

Rob Zuber: 37:21

I'm ordering my t-shirt right now. Hackathons.

Tara Hernandez: 37:26

Yeah, no.

Dan Lines: 37:27

But why did it make me think about hackathons? You're usually working with a few developers that you probably know pretty well. You're definitely not going into JIRA with your idea and saying, let me write a story before we go and work on this hackathon project. And at hackathons I've seen developers not only come up with some of the coolest features for the business, but move at a really rapid rate, you know, in 24 hours. And I think, when I think back to a hackathon, it's a situation where the developer is also the product manager in a sense. We have a shared context. I don't need to open a JIRA ticket. I don't need to explain everything that I'm doing. And therefore I move very, very quickly. Now, what's the downside to putting hackathon code into production?

Tara Hernandez: 38:23

Oh, that's a really good way to kill your company.

Dan Lines: 38:27

Quality, right? So anyways, when I saw this, I started to think about that type of hackathon mentality, but I'd love to hear what you two think about this key takeaway.

Tara Hernandez: 38:38

The more, you call it bureaucracy, you call it process, call it whatever, the more of it, whatever it is, it's going to slow things down. So going back to something that Rob was saying earlier: how do we have a collective shared context that allows us as engineers to move forward with the lowest amount of friction, right? Inevitably, I mean, this really becomes ultimately a scale problem. A lot of people love working in the tiny startups, you know, where the engineering team is less than a hundred people, right? Those are the greatest glory days. It's the greatest time of their life. And then if they stick with it, and the company is successful, next thing you know, the engineering team is a thousand people and they're bemoaning their very existence, right? Why? Because there are probably an order of magnitude more products and an order of magnitude more enterprise customers that are picky, and all the other things, and we've got to slow down. I would also argue, I work for a company that makes databases. Let me tell you how often our enterprise customers want to upgrade their database. Never. Right? So that's a different kind of problem. But I do think, again, as you scale your organization, as you grow your development team, keeping friction as low as you can, recognizing it will never be as low as it was when you started, I think is a good goal to keep in mind. So you don't let that tech debt get too crazy, because what inevitably happens is your quality goes down. Right. And then the VPs are going, oh no, we've got to bring in all of these scrum masters and we're going to do Agile and everything will be better. Right. Sorry, I didn't mean to make you choke, but you know, that's what, you know, I mean.

Dan Lines: 40:19

How long are we going to do the training for with the Agile?

Tara Hernandez: 40:22

So, you know, not to bag on scrum masters and Agile, it's fine if that's your thing. I'm not an Agile person. But I think that there's a balance, right? Sometimes even within a company, let's say CircleCI has got this new thing. You're going to go out quickly, you know, go-to-market, product testing, whatever, and then you're going to slow down as much as you have to in order to make sure that you meet your enterprise needs.

Rob Zuber: 40:48

I think there's another example that I would use, similar to the hackathon, and we use this a bunch: incidents. And one of the things that I would say, when I reflect on incidents, not that I wish them upon anyone, but I kind of like them, or enjoy them anyway, is you have clarity of purpose, right? You've got shared context. Everyone knows there's one priority, right? It's what everyone says they want in their organization, but it only occurs during the most stressful time that anyone experiences in their organization, after you enter the war room together and get the context. Right? It's like, well, does anyone say, oh, but we also need to worry about that typo over here? Everyone's like, why are you talking about that right now? We have one thing that matters, right? And that's what people kind of strive for. And so the question is, can you create that? And I don't mean with the caffeine-fueled 72-hour stretch of insanity or whatever. But can you create that quality of work, that sort of singular understanding of what matters, more broadly than in those hackathons? I mean, the hackathons are always funny to me, because everyone's like, oh yeah, we need to have a hackathon so we can do some innovation. I'm like, what are you doing the other 51 weeks of the year? We should probably just make innovation something that we do, right? The fact that we need exceptions in order to be great at delivering feels weird. It feels like a problem, so if that exists in your organization, please go address that problem. The other thing that I think about with this particular metric is, I personally believe, I don't know if people are familiar with this meme, but great project management hygiene is probably maximizing your mediocrity. Teams that are at the absolute bottom of the curve and have no idea what they're doing, and are, you know, cowboy coding or whatever, they'd probably rank poorly on this. But the best teams that I've ever worked on have not been that concerned about whether they put the right JIRA ticket number in the PR so that everyone could follow the flow, because they knew exactly what they were trying to deliver. Everybody knew exactly what they were trying to deliver and said, hey, take a look at this real quick. You think we should fix this? Yeah. Okay. What if we moved it over there? Okay, great. Yeah, let me do that right now. Oh, now it's in production, right? And so I think the best teams capture that in a way that's hard to represent. Now, that's really tricky, because I have this conversation all the time. People are like, how would we know from a metric perspective if we were truly a high-performing team? And I'm like, I don't know. You'll feel it. Tell me if I'm wrong, but at some point, all the process is creating overhead, right? Like we said before, processes to me create a floor, right? It's like, how do we raise our lowest performance up to a point? But being phenomenal, I just don't think you can process your way to phenomenal.

Dan Lines: 43:48

I'll give you a few examples on the other side. So I agree with you. Let's say that you're a really elite team, you've worked together a long time, maybe you're in the office together, and you can easily turn around and say, hey Rob, just letting you know I'm going to do this thing. You know, you have the context. Now, if I think about some large enterprises, we're in an era of distribution, we're working all around the world. Some of my teammates might actually not be sitting across the way from me, we may be working from home, we may be a new team that just came together. And if we're working in this mode where nothing is documented, tickets don't have an associated branch, I think the flip side can also happen, where you're actually producing maybe terrible code that would not help the business in the long run, and causes bugs in production. So I think there's a really fine line here. The other thing that I would say is, when you told me about the innovation, like with the hackathon, why do we have to do a hackathon in order to do innovation? Some of the companies that I'm working with say, hey, we want to do more innovation. I'll say, okay, well, how much are you doing today? They'll say, I have no idea, I just need to do more. Well, a different situation can be, hey, I actually do have pretty good hygiene. I can tell you how much effort we're putting into innovation versus keeping the lights on versus enhancing features. And maybe we can go back to the business and say, look, we're pretty organized here. We don't get any time to do any innovation, we're doing it 1 percent of the time. Could we bake that into our normal workflow? So I think it goes back to the beginning of the conversation: context matters a lot.

Tara Hernandez: 45:34

So again, I'll reiterate, I think this is ultimately a scale problem, right? My specific team runs from Sydney to Frankfurt, so me being on the West Coast is actually nice, because I can actually take meetings without being up in the middle of the night like I had to do at my last job, where I had a team in Bangalore. Not that there's anything wrong with Bangalore, other than it's twelve and a half hours away. There's just no good meeting time. So what I always tell my team is, you need as much process as helps you. If it's getting in your way, get rid of it. And you're not allowed to complain about it without saying what you're going to do to fix it. Right. And so I think that goes back, it's a scale issue and it's also a leadership issue. I think if you let your engineering team, you know, bitch and moan, pardon my French, about, oh, this is getting in our way, we can't get any work done, I'm like, that's not my problem. Get your work done. Right. If this is getting in your way, solve it. Or ignore it and then justify it, because now you've made your delivery. So don't have a culture of learned helplessness around this. I think that's the trap that companies often fall into as they scale, and that's just dark times. And then you have a lot of work to do to unravel that.

Dan Lines: 46:42

Or maybe even automate it. So, okay. A few supporting insights here, if we're moving over to the next slide. Hopefully up top: when the percentage of branches not linked to issues is higher, the shorter the coding time. When the percentage of branches not linked to issues is higher, the shorter the review time. I think that's a mutual context type situation. And when the percentage of branches not linked to issues is higher, the shorter the merge time. So this is our second insight. Move us forward to the third insight: DORA metrics. Do we have a key takeaway here? Organizations with a longer cycle time have a higher rate of failures in production. Okay, so the longer my cycle time, the more failures I have in production. You know what this one made me think of? CI/CD. It made me think of you, Rob. But before I thought of you, I thought of my four-year-old daughter, who's learning to ride a bike, because I'm observing this actually every day. As she's going slower, there are more failures in production, like falling over. This is the metaphor that I've heard lots of people use. But as she's starting to learn, be brave, and pedal faster, she's steadying out. So that's the first thing that I thought about, my daughter. And then I thought about you, Rob.

Tara Hernandez: 48:12

yeah.

Dan Lines: 48:13

So I'm going to pass it over to you, because I think, you know, with your experience and where you're working now, what do you think about that?

Rob Zuber: 48:20

Yeah. I mean, this is my life, but I think it's everyone else's, right? The DO in DORA is for DevOps, is it not? Like, at some point we realized that if something is hard, you should do it more often. I will say, before I got to CircleCI, I started a company with the gentleman who's now the CEO of CircleCI, and he was actually, he's a product person by background, he was the one who said, I think it was 2011, you know, we should do continuous deployment. I was like, what are you talking about? And he said, oh, what we're going to do is, as soon as we write the code, we're going to push to production. I'm like, nah, releasing is terrifying. Releasing requires three pots of coffee and a weekend, because you've got to put it in production and then you've got to clean up the mess, right? That's the life we all lived prior to that. And I remember, I actually go through this with the new hires at CircleCI. If you go back and find the original blog post, does anyone remember IMVU? How's that pronounced, by the way? Anyone know?

Tara Hernandez: 49:25

I don't even know if I ever saw it written.

Rob Zuber: 49:27

Oh, Eric Ries, right? So he wrote The Lean Startup, he started that company, IMD, IMVU, I don't know, whatever. Doesn't matter. They wrote a blog post about continuous deployment. They're sort of one of the early practitioners, Etsy as well, I don't know if anybody remembers this time. Anyway, they wrote a blog post, and then the author of the blog post from IMVU wrote a follow-up describing the discussion on Hacker News, which is where you should get all your feedback, by the way, about this concept. And the thread was like, this company obviously hates their customers. You are not going to be in business a year from now if you think that delivering continuously into production is a good idea. Fast forward, it's 2024, and it's the gold standard. Yeah. Right. So, thanks. Thank you. Also, if there are any visionaries out there who are trying things that no one's ever done before, someone has to go do it, right? Someone has to go get the bloody nose and be like, wow, that was terrifying, but we got through it, and now we're doing great stuff. This is like the most unsurprising thing, right? Small units of work, fresh context. I mean, I used to work in telephony. Has anyone worked on year-long release cycles? Please tell me you're not still doing that. But someone says, oh, this is broken, and you're like, wait, did I write that? I literally have no idea what I was thinking when I wrote that 10 months ago. As opposed to, I wrote it this morning, I put it in production this afternoon. So first of all, super low risk, I can fix it. I'm putting out releases that are like 10 lines of code change or whatever, right? Tiny PRs going straight into production, instead of the last three months of work going into production, where the risk is so high. And when the risk gets high, you slow down. You're like, well, we could release this thing, but we've got to get a whole bunch of people together, we've got to prep the war room, 'cause there will be an outage. We don't know what outage, we just know it's going to break. And then everyone's so cautious. And once you feel comfortable that you can deploy whatever you've built, then you just go, and you go, and you go, and you go. And it's so good. If you're not doing this right now, I don't know, come find me. You don't need to use my product, but my God, please do this.

Tara Hernandez: 51:39

Well, so I have a funny story about this. The company will remain unnamed, but we had two major engineering teams, and this is when I really would have loved the DORA metrics, this predated the DORA metrics. The one team was getting into continuous delivery: let's go fast, let's go fast. But when they started, they went super slow as far as quality. They didn't have enough tests, they didn't have the systems, they didn't have the muscle. And the other team was inheriting their changes. And we could never convince that other team that, even when the first team was actually doing successful delivery multiple times a day, anytime there was a problem, it was that team's fault. You know, and that's when I was really missing, like, we need a platform that tells us, you know, to prove: no, it's you, it's not them. Anyway, you want to keep going?

Dan Lines: 52:24

No... I know where you could find one. But yeah, you keep going.

Tara Hernandez: 52:27

It was just a very frustrating experience.

Dan Lines: 52:30

Yeah, you know, I think the other thing, to add on to what both of you are saying, especially Rob, is: if you have code that's sitting there for a long time, it's going to degrade. If you are not confident to get code out in small chunks, it probably means that you don't have the right testing practices, automation practices, feature flag type situation, system architecture. So I think the other context here is, here's an indicator: if you are able to have small releases that go out all the time, you probably have the full infrastructure and ecosystem in place to do so. And therefore, you're going to be in a much better situation in terms of quality.

Rob Zuber: 53:15

The one thing that I want to add, sorry, I'm going to drag this out, but this is my life: the superpower of this is not the code quality. Like, yes, you recover from incidents, you take out the risk, but the thing that you're actually de-risking when you deliver all day, every day, is not the quality of your code. It's the fact that you are, undoubtedly, I don't care what business you're in, I don't care how good your product managers are, building the wrong thing right now. And the longer you go on the wrong path, building the wrong thing, the more money you are wasting not building the right thing for your customers. And if you are shipping three, four times a day, and getting customer feedback three, four times a day: we change this thing, we change this thing, no one gives a crap about this thing we just built, let's take it back out. People familiar with painted doors? Like, we put up the button that doesn't even do anything, and no one clicked on it. You know what we should not do? Build the feature behind that button. But so many teams spend six months building the feature, and the last thing they do is put up the button, only to find out that nobody clicks on it. That's six months of engineering effort that you could have put into something that your customers care about. And so, yes, you should absolutely reduce the risk of your changes. But what you're really doing is turning yourself towards the path of what your customers care about as quickly as possible. And that's where this really matters. Your cycle time will be lower, but it's not just that your cycle time is lower, it's that everything you deliver is high value to your customers because you're building the right thing, and that is when your business is going to take off.
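
A minimal sketch of the "painted door" test Rob describes. Every name here is invented for illustration (the flag, the rollout helper, and the analytics events are assumptions, not anyone's real tooling): ship only the button to a small slice of users, count the clicks, and only then decide whether to build what's behind it.

```python
# Hypothetical painted-door test: ship only the button, measure interest,
# and decide whether the real feature is worth building.
import random

FLAG_PAINTED_DOOR = "export-to-csv-button"   # assumed flag name for the example

def is_enabled(flag: str, user_id: int, rollout_pct: float = 0.10) -> bool:
    """Bucket a stable (per-process) slice of users into the experiment."""
    return hash((flag, user_id)) % 100 < rollout_pct * 100

def render_page(user_id: int, analytics: list[dict]) -> None:
    if is_enabled(FLAG_PAINTED_DOOR, user_id):
        # The button exists, but the feature behind it does not yet.
        analytics.append({"event": "painted_door_shown", "user": user_id})
        if random.random() < 0.02:  # stand-in for a real click stream
            analytics.append({"event": "painted_door_clicked", "user": user_id})

def decide(analytics: list[dict], threshold: float = 0.05) -> str:
    shown = sum(1 for e in analytics if e["event"] == "painted_door_shown")
    clicked = sum(1 for e in analytics if e["event"] == "painted_door_clicked")
    rate = clicked / shown if shown else 0.0
    return f"click-through {rate:.1%} -> " + ("build it" if rate >= threshold else "skip it")

if __name__ == "__main__":
    events: list[dict] = []
    for uid in range(10_000):
        render_page(uid, events)
    print(decide(events))
```

The expensive part, the months of engineering behind the button, only happens after the cheap signal says it's worth doing.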

Tara Hernandez: 54:59

Which is the one thing about Agile I really like, right, which is: get to customers early and iterate. Which is what the whole DevOps thing was invented for, how do you do that, right? And so, punchline, there it is: DevOps itself is kind of the button. And then we broke other things. We said, oh, we have DevOps now, so we don't need QA teams. There, I threw that hot take down.

Dan Lines: 55:20

That company.

Tara Hernandez: 55:21

That's a whole hour.

Dan Lines: 55:22

That's

Rob Zuber: 55:22

for dessert.

Dan Lines: 55:23

Yeah. Okay. Perfect. Let me push us forward here. A few supporting insights on the next slide, just what we've already been saying: the longer the cycle time, the higher the change failure rate, there's correlation there. The longer the deploy time, the higher the change failure rate, correlation there. I think it makes sense. Insight number four, well, we talked about this a little bit already in the beginning, but a higher pull request maturity ratio correlates with higher velocity. So a higher maturity ratio correlates with higher velocity. Talked about it a little bit in the beginning. Do we have more to add here?

Tara Hernandez: 56:03

I mean, so we say that, you know, highly mature PRs go quickly. Yes, right? I struggle with this one, to be honest, because we'll go back to the hackathon example. Yes, if we have no process, no Jira tickets, no product planning, no PRQs, whatever, you can move fast. You can move really fast, but you're producing unusable garbage from a business perspective. That's sort of a harsh take, and that's an extreme example, but yes, I think that if all of those other things have lined up, right, the culture is good, the planning is good, the modularity of your system architecture is sufficient, et cetera, et cetera, et cetera, then absolutely, right. But I think the quality bar has to be correlated to the previous thing, which is: what's your failure rate, right? And then your resolution rate.

Rob Zuber: 56:47

So I feel like I'm in the right room to make a request for next year's report. I'd like to see this over time, because velocity over time is actually, I mean, lowercase-v velocity, not story points. I couldn't even remember what they were called for a second, they're story points, whatever. But like, how fast am I delivering business value, and how are my approaches to software engineering impacting that over time? Because I think one of the most common things that happens, to take your hackathon example, is we're fast at the beginning, and then it just degrades, right, because we didn't have the maturity, the process, whatever that is. And it would be really interesting to see some of these metrics and how they evolved over the course of like 12, 24 months, because you have all the data. So that's awesome, because I think that would be really telling, right? Like, what I care about as an engineering leader is not how fast did you move this week or this month or this quarter, but can we sustain that for an extended period and consistently, predictably deliver value? And all the times we're like, we're really fast right now, we know we're gonna hate ourselves in about six months for whatever it was that we did.

Tara Hernandez: 58:04

Yeah, I mean, you know, what's the definition: quality, time, feature set, pick any two, right? Because what do we know for a project that has a certain longevity? You get to the point where it's like, okay, it's basically there. And then what you want to actually see is that the quality correlates to the churn rate, right? So you know that for the successful, mostly complete project, the pull request maturity might be good, but the velocity should just go through the floor, because you're not touching it anymore. You're moving on to the next thing, the thing that's going to make you money a year from now or six months from now. So I think there are some really interesting ways to look at that, and I plus-one it, I would love to see this over time.

Dan Lines: 58:40

Yep. So, a new metric that was added this year. So I think a fifth generation will be even better here. I'm going to push us forward. Here's the last insight that we have, and it's around predictability. The key takeaway is around capacity accuracy. What it's saying here is that over half of engineering projects under-commit against their goal, and less than 25 percent of projects fell within the ideal range. So when we're thinking about under-committing, what it really means is: hey, I can really deliver this amount of story points, but I'm only going to say that I can deliver maybe 80 percent of them. I'm under-promising so that I can over-deliver, in order to hit my predictability goal or to be on time.

Tara Hernandez: 59:31

This one blew my mind, I have to say, because I've been doing this for 30 years, and I can't think of an engineer that did not overcommit. I cannot think of one. In fact, I have this system where, depending on who you are, you get a multiplier: no matter what you say, I'm going to multiply it by three, because I just know.

Rob Zuber: 59:50

So

Tara Hernandez: 59:50

This one is fascinating to me. I don't know your experience.

Rob Zuber: 59:53

You have a Monte Carlo simulation for all your engineers. Well, two things that I think are interesting here. A hundred percent agree with that, I'm just like, wait, there are engineers who actually undercommit. That's cool.

Tara Hernandez: 1:00:04

They're very smart, by the way. I agree

Rob Zuber: 1:00:06

with that. So, one thing that I read between the last time we talked about this and now: I don't know if people are familiar with Kent Beck. I love his latest book, Tidy First?, and I started following his Substack. He published this thing called "The Forest and the Desert." Did anybody read this? So he talks about different kinds of projects and the feeling, right? Again, kind of stepping away from the metric, but what it feels like. And in his forest model he has roots, and he presented this with someone else, and I forget who it was, and I apologize. The roots are kind of the core things that have to be true in order to get to a great project or a great team, or however you want to think about it. And one of the things in his list of roots was that you never commit to more than 50 percent of what you can get done. So that's really interesting against this metric. And his whole point is that you set yourself up, you put in the time and space to do things well, right? Again, to that sustained velocity. And then he talks about the feeling, right? What does it feel like to be there? The feeling is, someone shows up and says we're going to do a thing, and we know exactly how to do it, we know exactly where it goes, we know what it would take to do it. All of those things that, in so many projects, you're like, I have no idea, let's just pick a number and pretend we could get it done in that amount of time. And so I think leaving space kind of connects to your point: everyone's overcommitted, right? Everyone underestimates what it really takes to get something done, and then they get pushed up against the wall, and they're like, well, we said we'd get this stuff done, and maybe if we just cut some corners, then we will get it done. So now they got it done, and the next time their estimate is even worse, because they just made the system harder to work on, and there's this kind of negative feedback loop. I just think we as engineers tend to put ourselves, and as a leader I'll say it's probably my fault, I don't know why yet, but it probably is, put ourselves in that situation where we believe something is achievable, we feel we're smart, we know how to get stuff done, and then we create that negative scenario where we're like, well, I said I could get it done, so now I am gonna get it done, by not sleeping, and cutting corners, and all these kinds of things, and that makes it worse for the next round, and the next round, and the next round. Whereas if you leave yourself the space, I think you can avoid that. So, yeah. And then the other thing that I was going to say about this, I can't remember, are people familiar with Marty Cagan, Silicon Valley Product Group? Anyway, he's got a very large number of books. I don't know how he also does his job and writes all these books, and he reads his own audiobooks, so props to Marty Cagan. Anyway, he talks about predictability versus innovation, since we talked about innovation earlier. Many R&D leaders are focused on predictability as the most important thing, but, I can't remember who he's quoting when he says it, 100 percent predictability is 0 percent innovation, right? Like, when we are perfect at understanding how long everything is going to take, that means we are not taking any risks, we're not trying anything novel or anything new. And so, you know, I think predictability tells you a lot about whether you understand your system, whether you're capable of doing the thing that you need to get done. But if all you're doing is, we ticketed out the work and we delivered the work exactly as was planned, nobody's pushing the envelope, no one's taking risk, no one's innovating. And as a business, that's not going to be a good outcome.
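
Rob's "Monte Carlo simulation for all your engineers" quip is closer to a real technique than it sounds. Here is a rough, illustrative sketch, with an invented backlog and an assumed multiplier distribution (Tara's 3x rule of thumb sits near the pessimistic end of it), of forecasting delivery from optimistic estimates instead of committing to their straight sum.

```python
# Illustrative Monte Carlo forecast: engineers' raw estimates tend to be
# optimistic, so sample a per-task multiplier and look at the distribution
# of total effort rather than trusting the straight sum.
import random
import statistics

raw_estimates_days = [2, 3, 1, 5, 8, 3]   # invented sprint backlog

def sample_multiplier() -> float:
    # Assumed "reality" distribution: usually 1x-3x the estimate.
    return random.triangular(low=1.0, high=3.0, mode=1.5)

def simulate(runs: int = 10_000) -> list[float]:
    return [sum(est * sample_multiplier() for est in raw_estimates_days)
            for _ in range(runs)]

totals = simulate()
naive = sum(raw_estimates_days)
qs = statistics.quantiles(totals, n=100)
p50, p85 = qs[49], qs[84]
print(f"naive sum of estimates: {naive} days")
print(f"simulated median:       {p50:.1f} days")
print(f"85th percentile:        {p85:.1f} days  <- a safer commitment")
```

Committing to something like the 85th percentile rather than the naive sum is one mechanical way to leave the kind of slack Kent Beck's 50 percent rule is pointing at.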

Dan Lines: 1:03:34

One thing that I will say, when you look at the report, is that there are two different metrics here. One of them is capacity accuracy, and the other is planning accuracy. So when you think about capacity accuracy, it's kind of like the know-thyself as a team, which means: know how much work you can get done in a sprint. Now, that's different than planning accuracy. Planning accuracy is more so saying, I'm coming to you, Rob, and saying, hey, I think I can get this feature delivered on this date. Or, let's say within a sprint, I think I'm going to do these exact 10 stories; did I actually do those exact 10 stories, or did I have bugs come in from production, or want to do some innovation stuff, and I ended up doing 10 things, but it wasn't what I actually said I was going to do? So just a little bit of a distinction there. But yeah, it's all in the report, interesting stuff. Okay, so just a few takeaways here. All of you do have the benchmarks report. Highly recommend you check it out. Highly recommend that you know, for your own organization, what your data points are and how you match up against the benchmarks, so that you can have the right conversations, kind of like how we did up here live, with your teams. And I wanted to give a big round of applause and thank you to Tara and Rob for being our panelists today. Great job.
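
A small sketch to make Dan's distinction concrete. The formulas are a plausible reading of the two metrics (committed versus delivered points for capacity accuracy; the share of originally planned items that actually shipped for planning accuracy), not LinearB's exact definitions from the report.

```python
# Rough sketch of the two metrics Dan distinguishes. The definitions here
# are assumptions for illustration, not the report's exact formulas.

def capacity_accuracy(committed_points: int, delivered_points: int) -> float:
    """Know thyself as a team: did we commit to roughly what we can deliver?
    < 1.0 means we under-committed, > 1.0 means we over-committed."""
    return committed_points / delivered_points if delivered_points else float("inf")

def planning_accuracy(planned_items: set[str], delivered_items: set[str]) -> float:
    """Of the things we said we'd do, how many actually shipped?
    Unplanned work (production bugs, innovation detours) doesn't count toward this."""
    return len(planned_items & delivered_items) / len(planned_items) if planned_items else 1.0

# Example: the team committed 40 points / 10 stories, delivered 50 points,
# but 3 of the shipped items were unplanned fixes and a spike.
planned = {f"STORY-{i}" for i in range(1, 11)}
delivered = {f"STORY-{i}" for i in range(1, 8)} | {"BUG-101", "BUG-102", "SPIKE-7"}
print(f"capacity accuracy: {capacity_accuracy(40, 50):.2f}  (under-committed)")
print(f"planning accuracy: {planning_accuracy(planned, delivered):.0%}")
```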

Ben Lloyd Pearson: 1:05:10

So, hello everyone. I'm Ben Lloyd Pearson. I'm one of the hosts of Dev Interrupted, and I'll be emceeing some of the event tonight. We've got a few minutes before dinner is ready, so we have time, I think, for three questions. If you have a question, we have a microphone that Andrew will bring around to you.

Audience Speaker 2: 1:05:28

I love the example of the hackathon and the project management correlation there, but a factor is how many of these hackathon projects actually become features, because quality, a clear understanding of the intent, there's a lot more that happens, right? On the product side, speed is the primary factor in a hackathon, but there are other things that make product-market fit. So any thoughts on that?

Tara Hernandez: 1:05:57

I certainly do. I've seen a bunch of different companies do hackathons, and I think the ones that are the most successful are the ones where the senior leadership in engineering, and also the product team, is heavily involved in seeding the themes, and then committing to identifying and executing, and making space for execution. That creates a really positive feedback loop. Like, here is a customer problem we don't have a good solution for, or here's a set of challenges, and then the engineering team sees: oh, we actually took a couple of those and turned them into actual features. We invested that time and it was an enormous success. I think where you don't see success is where it's, you know, very low-key; it's mostly a reason to hang out and code from the bean bag in the office and eat free pizza. You don't get the value-add. And then at some point somebody realizes, wow, this is a really expensive prospect. We've got 500 engineers who are basically just fucking off for 24 hours. Let's stop. And now we don't have hackathons anymore.

Rob Zuber: 1:06:52

How many things can I plus-one in there? All of them. I didn't even keep track. Ten plus-ones, is that a plus-ten? Yeah. I worry less about the quality. I mean, I worry about quality, but that's not the thing about hackathons. It's more the, is it aligned with what we're actually trying to achieve? And what I would encourage, and I sort of got into this on the DORA metrics thing, right, is that if you can get real feedback from your customers by delivering something in the same day, life feels like a hackathon. Right? I think a lot of that hackathon appeal is spawned from: my life is drudgery, my project management hygiene is super high, and all I do is execute the tickets that were, you know, perfectly framed for me,

Tara Hernandez: 1:07:42

Kill me now.

Rob Zuber: 1:07:43

and involving your engineers, even if it's just the more senior ones, your tech leads or whatever that might be, involving them in the product discovery is kind of the ideal for me. Like, if it feels like I need to take a week off and sit in the bean bag and eat the pizza and make whatever I want, that's often because my job doesn't feel particularly fulfilling. So I would encourage a deep think about what it is to get your folks engaged all the time, so it doesn't feel like a hackathon is this special treat where I get to work on stuff I like. Uh, that sounds terrible.

Tara Hernandez: 1:08:17

One variation I've seen that's been kind of interesting, and I think this depends on the company, is that rather than hackathons, they would do rotations of engineers into sales engineering or technical services, right? So now you're getting the customer empathy. So I think there's a lot of different ways that you could approach this theme of getting what you ultimately want, which is a sense of engineering ownership and accountability in the success of the customer experience.

Audience Speaker 3: 1:08:52

Hello, I liked this talk. I was, I want to, back when Tara was working at Mongo, do you, I mean, I was working on that.

Tara Hernandez: 1:08:59

My God, is that Marcus?

Audience Speaker 3: 1:09:00

It is, hi Tara, I just sent you an email, but it's irrelevant. There's a lot of talk right now about how there are a ton of engineers, most engineers even, at these larger companies, where the vast majority of them just do the bare minimum. And maybe that's reflected in this mythical under-committed metric that I never would have guessed. What do you do about a situation, a team, where one or two engineers are really outperforming, but everybody else is, they're fine. No one's terrible. So it's not a situation like you were talking about earlier, where there's a leadership puzzle piece to address, but everyone's doing okay, and then some people seem to be fantastic. Is that normal, or do you have something else to investigate? Is there more to get out of people? Is there a platform issue, tooling issue, or culture issue? I'm just curious what you all think about this notion of people just barely doing any work. I've seen this.

Tara Hernandez: 1:10:09

It's the 10x employee, right? I know. Do you want to take it? I went first last time.

Rob Zuber: 1:10:13

Oh man. You know, you listed some ideas there, and it's a really interesting question. Like you said: is it a platform problem, is it a tooling problem, is it a culture problem? The answer is yes, to one of those. And this is where the whole theme of metrics being the start of a conversation comes in, right? Like, okay, I can see that a couple of people on my team, whatever your measure might be, are just delivering a lot more, they're writing better code, they're super engaged, while other people are a little checked out. That's your manager's job, right? To figure out: do these people have the skills? Are they engaged in the problem we're solving? Are they, you know, 5 percent of the time, I have no percentage, some percentage of the time, that person just has some shit going on in their life. And they were actually great three months ago, and they're gonna be great in three months, and they're just going through some stuff. And sometimes they are absolutely terrified to show up at work every day because they feel like they have no idea what they're doing and someone's gonna find out. I cannot tell you which of those scenarios it is from looking at a dashboard, right? And so it's absolutely telling: if some people are able to execute really effectively in that environment, then you have an interesting baseline. Some people are executing here and some people are executing here, so it's not the environment holistically, right? We talk about this with twins: raised in the same environment, same genetics, okay, but they end up different. Okay, so what is it? What do we learn? I can't tell you what the answer is, but I can tell you there's something interesting there, and I would want a manager to be going and figuring that out. You know, the question kind of opened with, what do you think about people drifting by and just doing the bare minimum? It's not the ideal. It's not what I want in my organization. I understand it happens, but the first step is just understanding why, right? And some people might just be burned out on your organization. Maybe they need a change. That's totally fine, right? But if you're not even having that conversation, what will happen in that scenario, again, I'll call it 80 percent of the time, I'm making up random percentages, is that those two people who are crushing it are probably gonna leave. Because they're like, wait a second, I'm carrying the team, and everyone else is making the money I'm making, but I'm doing all the work, right? So it's important to address. There are plenty of human factors and it's complex, for sure, but if you don't address it, what you're gonna get is just a team of people who are phoning it in. That much I guarantee.

Dan Lines: 1:12:41

Yeah, just to add: I think it totally goes back to the team leader and to the manager. Each one of the developers on the team is going to have different aspirations, what motivates them, what a good experience looks like. You can probably ask three to four questions and find out: hey, do you like the product that you're working on, yes or no? Are you inspired by the technology that you're using? Do you feel that you have the right career path here? And are you motivated, if I'm the team leader, by me? Am I giving you inspiration? You ask a few questions, you'll probably find out. The two that are performing really well probably have good alignment on most of those. The ones that aren't performing as well, there's going to be a major gap.

Tara Hernandez: 1:13:23

So, yes to all that. I think there's another thing, and I've had some success with this, which is: how intentional are you as a leader about your culture, kind of in the broad sense? First of all, humans aren't fungible, and don't treat them as if they are, because that way lies madness and chaos. But yeah, bench management, to Rob's point, is super critical. Don't take your top performers for granted. Then think about, how do you incentivize the type of culture you want? You know, we say as leaders that we want employees that have a learning attitude, a learning mindset, right? Well, how do you know that, and how do you incentivize it? So there was a company I was working at that was kind of migrating out of more stodgy, infrequent deployment and wanted to get into the cloud, wanted to, you know, go faster. And I'm like, alright, well, what's the test story? Well, we don't have very many automated tests, but we have a good QA team. Okay, well, they need to focus on the stuff that's hard to automate; we need the developers to write tests. Well, developers don't write tests, they're not QA. Now, this is the opposite side of the problem that we have now, which is developers can't write negative tests, but in any case. So, we're in the office, we have screens everywhere. I wrote some code that would pull stuff out of, we were using Bamboo, which is a terrible CI system, don't use it, and we were using Jira. Okay: top build breaker, you would get like a negative cheevo; top, you know, bug generator, you get a negative cheevo; top bug fixer, oh, you get happiness and light. And it was a totally Mario Kart sort of experience all over the engineering organization. And what do you know, the number of tests went up and the number of failures went down, because we made a game out of it, right? So, you know, that's not gonna work for every organization, it's not gonna work for every product team, right? But, again, it goes back to: these metrics are critical tools, not outcomes in and of themselves, right? That is the biggest thing. When you get the data, make sure it's as accurate as possible, and then do something with it. Don't just have a dashboard that you kind of look at every once in a while and think, huh, that's interesting.
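
A toy version of the leaderboard Tara describes, with fabricated events standing in for whatever you would pull out of your CI system and issue tracker; no real Bamboo or Jira API calls are shown here.

```python
# Toy leaderboard in the spirit of Tara's story: pull events out of CI and
# the issue tracker, score them, and put the results on the office screens.
# The event list is fabricated; wiring this to real CI/tracker APIs is left out.
from collections import Counter

SCORES = {
    "build_broken": -3,     # negative cheevo
    "bug_introduced": -2,   # negative cheevo
    "bug_fixed": +2,        # happiness and light
    "test_added": +1,
}

events = [
    ("alice", "test_added"), ("alice", "test_added"), ("alice", "bug_fixed"),
    ("bob", "build_broken"), ("bob", "bug_introduced"), ("bob", "test_added"),
    ("carol", "bug_fixed"), ("carol", "bug_fixed"), ("carol", "build_broken"),
]

def leaderboard(events: list[tuple[str, str]]) -> list[tuple[str, int]]:
    totals: Counter[str] = Counter()
    for person, kind in events:
        totals[person] += SCORES.get(kind, 0)
    return totals.most_common()

for rank, (person, score) in enumerate(leaderboard(events), start=1):
    print(f"{rank}. {person:<6} {score:+d}")
```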

Audience Speaker 1: 1:15:17

All right, one more. Just one more. So, during the discussion about CI/CD, Tara, you dropped a little grenade about companies throwing out their QA processes in sort of the search for CD nirvana. And you touched on it just briefly, but I want to know, from your perspective as a leader who's seen the adoption of CI/CD, what's the best approach for folding an existing QA process into a migration to CI/CD?

Tara Hernandez: 1:15:44

So, I mean, to me, the value of QA, like a QA engineer, their value is that they understand how to break things. They're multipliers on the robustness and reliability of your system. Developers, statistically speaking, I'm not saying absolutes, are really good at writing tests that prove that what they wrote worked, not at proving that it doesn't break under different circumstances. And so the domain of QA is something I think we as an industry have too often blithely tossed, and we've lost that art. Now, you could argue it used to be that you could automate back-end stuff that's all APIs, but you couldn't automate the front end, and then SonarQube and other companies came out and, well, now you can automate your front end. But that still doesn't solve, I think, having the domain expertise to guide the negative aspects, right, the stress testing, what have you. And what we've turned it into is incident management. It's like, oh, well, it got out to production, and if we can fix it fast, we're okay, and maybe that's the right answer, right? I'm not, like, an oracle of truth here. But I do think that as an industry we've lost something there, and I think it'll be interesting, and I suspect it will actually come back around; everything in this industry is cyclical, in my experience. So I think people are identifying: they still have QA teams, they probably are still getting value. If they're struggling a lot with quality issues, they're probably thinking, huh, maybe we need some QA experts, hopefully they do still exist, to come in and help guide how we do production operations, how we do CI/CD, in interesting ways. I think it's an interesting thought exercise, honestly.

Rob Zuber: 1:17:15

Yeah, I mean, I definitely agree with that: developers testing the assumptions that they've made, proving that the assumptions they made work, as opposed to questioning the assumptions they've made, right? The best sort of QA folks that I've known in my life were like, what happens if I type this crazy string in here? And you're like, well, you just crashed the entire site. That's pretty impressive. How'd you think of that? I don't know, it just made sense to me, right? And I think that that's missing. I will say, at CircleCI we do not have anyone with the title quality, QA, anything like that. I mean, that's been true since I got there. And we have built a large amount of tooling for ourselves, and unsurprisingly for our customers, to allow you to mitigate risk in production deployment, to make it so that if something does go wrong, it impacts a very small number of your customers, it's easy to remediate, et cetera. So I think there are many strategies, but I do think that's an interesting piece that we've, you know, does anyone have a replacement for the baby-and-bathwater metaphor? Because it kind of freaks me out, but that's the one you would use, right? Like, we're like, oh, we have automated testing, but we threw out with that the ability to reason about the weird edge cases that are going to break a system because you didn't think about them as you were implementing. But I think the question was really about making the transition. Take it in small increments. Find a way to test a certain part of your platform. Take the things that are fairly stable and put in regression tests, those sorts of things. I think a lot of people, when they try to make big transitions to anything, any technology or whatever, try to do it all at once, and, spoiler, I don't even know what you're thinking about right now, but that's gonna fail. Doing anything all at once is just guaranteed to fail. It's gonna be way bigger than you ever imagined, and you're never gonna get there. But if you can find a way to do it in small increments, you can make any transition over time.

Tara Hernandez: 1:19:07

Great. And I think, just to close it out: one of the main reasons that, as a DevOps aficionado, I kind of mourn the fact that DevOps contributed to this concept is because you can't have manual QA gatekeeping as part of a continuous delivery or continuous deployment mechanism, right? But I assert that there are probably different ways we can approach this asynchronously, right? Where do you insert them? Is it in the design phase? There's a lot of things there, but I think what you have to think about as a business is: where are you struggling, and can a different type of domain expertise help? And then figure out what that means.

Ben Lloyd Pearson: 1:19:42

Awesome. Well, let's hear it one more time for our guests.