This week, our host Dan Lines sits down with Tara Hernandez, VP of Developer Productivity at MongoDB. Together, they explore the nuances of developer productivity and the impact of AI in engineering environments.

Tara emphasizes that achieving developer productivity requires a focus on outcomes, reduced noise for developers, and a healthy balance between technology, processes, and communication. She also touches on the strategic framework of the 'three horizons' for conceptualizing your investment breakdown across different projects, and how to maintain focus on meaningful development work.

Episode Highlights:

01:43 How should you think about developer productivity?
09:09 Three pillars to improve developer productivity
16:07 Automated does not equal autonomous
24:46 Making the golden path the easy path for developers
27:51 What’s exciting in developer productivity and AI?
29:57 The three horizons
38:34 Developer performance vs productivity data
40:23 What is the right way to think about goal setting for noise reduction?

Transcript:

(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)

[00:00:00] Tara Hernandez: And one of my other sayings is, automated does not equal autonomous, right?

And so, yeah, if you have automation that goes nuts, it probably is doing the reverse of what you would hope. I think a lot of what things boil down to actually in our world is what outcome are you trying to achieve?

And really start from there and then work backwards. Because a lot of times you think, oh, well, hey, if we had a bot that was, you know, like Dependabot: my dependencies are out of date, go fix, right? Huge. Yeah. But if you multiply that out five times, absolutely, you could get in a situation where it could take three engineers all day long, every day, to keep up.

[00:00:35] Conor Bronsdon: How can you build a metrics program that not only measures, but improves, engineering performance? What's the right metrics framework for your team? On May 2nd and 7th, LinearB is hosting their next workshop, where you will learn how to build a metrics program that reduces cycle time by 47% on average, improves developer experience, and increases delivery predictability.

At the end, you'll receive a free how-to guide and tools to help you [00:01:00] get started. You can register today at the link in the description. Hope to see you there.

[00:01:04] Dan Lines: Hey, what's up everyone? Welcome to Dev Interrupted. I'm your host Dan Lines, COO and co-founder at LinearB, and I'm delighted to be joined by cloud curmudgeon, nerd manager, process wonk, and of course, VP of Developer Productivity at MongoDB, Tara Hernandez. Tara, welcome to the show.

[00:01:30] Tara Hernandez: Hello, I'm glad to be here.

[00:01:32] Dan Lines: Totally awesome to have you on today. And of course, we have a crucial topic: developer productivity. Very, very popular today among engineering teams and engineering leaders. Everybody's talking about it. And you probably can't talk about developer productivity without talking about AI, so we'll dive into that a bit as well.

And, you know, I'm [00:02:00] really, really excited to talk with you. I think you have a great approach to improving efficiency and productivity, and we're going to get into all of that.


How should engineers think about developer productivity?
---

[00:02:12] Dan Lines: How do you think about the problem of increasing developer productivity?

[00:02:21] Tara Hernandez: I want to start with, this is just like the latest name.

It's like marketing, it's branding, right? When I first started, it was like, oh, there's the build and release engineers, and then there was the infrastructure engineers, and then what was this whole platform engineering thing? And then Google has the whole engineering productivity thing, and now there's developer productivity, and it's all the same thing, which is: there's just going to be a category of people that help their fellow developers do their jobs.

Right, whatever we call it. And, you know, DevOps and Agile methodology came out around that. And now we've got the SPACE framework and, you know, other forms of new research. And [00:03:00] really, the challenge is that the industry evolves so fast, right? You went from, you know, local computers with floppy disks, and then there was optical media, and then there were the beginnings of hosted SaaS-based solutions, and now everything is cloud. And so it's really just this constant evolution of: how do we support the industry needing to go as fast as it does without the developers losing their minds, right?

Um, you know, the DORA metrics, which are very popular and invaluable, I think; they talk about things like time to deploy and time to restore, and our appetite as an industry for those to be as close to zero as possible for our customers is, you know, probably unreasonable now, just, you know, how fast we expect things to be.

And so I think that's one of the biggest challenges about, you know, trying to be an organization supporting the facilitation of software development within that realm, right?

[00:03:59] Dan Lines: Yeah, [00:04:00] you made me think of a bunch of things there. All right, hit me. One of them that stood out to me is, like, driving developers a little bit crazy. I remember that there's all these different movements, and then you said, like, the marketing terms behind them: okay, we're doing platform engineering, we're doing DevOps, we're doing Agile methodologies.

We're doing pair programming. No, we're doing Kanban. We got AI coming. We got bots. We got, it's not exactly the same, but, like, SRE movements. And then there's, like, hey, you're going to be a full stack engineer, we're going to remove QA, everything's going to be automated, you're going to do the QA. And, you know, right now, I think probably the popular terms that I'm hearing are, like, platform engineering, developer experience, you know, AI. And I'm thinking to myself: do you think it is all the same thing?

Like, [00:05:00] from the beginning of time, it's kind of like, hey, we want to ship our code faster? Or do you think some of these marketing names, some of this terminology we're using now, platform engineering, developer experience, AI, do you think it matters? Like, is there meaning behind it?

[00:05:16] Tara Hernandez: I mean, yes, mostly, but also no. Right, and let me tell you what I mean by that. At the end of the day, as a human being, you know, we've evolved over, depending on who you ask, some 3 million years from Lucy or whatever, right? And there's a certain, like, progression of how our brains have evolved, and all that stuff.

But you think about technology, and you think about where were we 50 years ago, 20 years ago, 10 years ago, last year? Right, uh, from, you know, the Industrial Revolution, which is a nice sort of inflection point that people like to talk about: the amount of change around us is, [00:06:00] like, hundreds of orders of magnitude faster than the previous, you know, 2.9999 million years or whatever, right? And so, if I were to describe what is the biggest value of anything around developer productivity or the developer environment, it's to reduce the noise. Right? It's to allow the developer to be able to focus on what they need to do by removing, hopefully through automation or telemetry or insights or whatever, the things that they don't have to focus on, because we've helped them prioritize the things that they do.

Because we just can't, our brains, you know, that's how we burn out. Like, we, we drive ourselves nuts trying to keep track of everything. Well, let's make it so that we don't actually have to have our developers keep track of everything. Let's make it so that the system allows them to stay focused on the things that are most important.

And then as they finish a thing, they can move on to the next thing, because we have a process that supports it, right? And, uh, I think that part is really hard. And, you know, it's my way of [00:07:00] looking at it, maybe not everybody's way of looking at it. But what I think is adding to the noise, and you touched on this, is, you know, back in the day when I first got into it in the 90s, the build and release engineers were not, you weren't the real engineers. You weren't, you know, you weren't working on the product; you were working on the makefiles and the shell scripts and the Perl code and whatever. And, like, fine, okay, whatever, you're wrong, but sure. But now we've recognized over the, you know, subsequent decades that infrastructure is actually a valid thing.

You know, look at GitHub. That's what GitHub is: it's an infrastructure company. CloudBees, CircleCI, um, half of all of the hyperscalers, it's developer tools, right? And so now there's a market, and we're trying to sell into that market, and so now there's a lot more marketing around it.

And, like, oh, Atlassian, I think, has made tons of money on, like, hey, we're going to solve all your problems by giving you a wiki and a bug tracking system and a CI, all of that stuff; just use our system and you're great, right? And not to bag on Atlassian at all; they recognized it probably sooner than [00:08:00] most.

And so now I think that has added to the noise, right? Now it's like, oh, if only I have the solution, or if only I have this tool or that tool, or, you know, GitHub stacked PRs is one that's coming up for me lately, everything will be better, right? And so, you know, the job of my team is to try and distill the signal from the noise.

Will that actually help us? Right? Or not?

[00:08:21] Dan Lines: Yeah, I love the way that you frame it. And there's always pros and cons. I think that's definitely, in, like, a capitalist society, the one that we live in, like, we get better through competition. That's the way we work, at least in the, you know, the U.S., and for a lot of people on the planet today.

And again, there are pros and cons to that. But one of the things that I am seeing is that businesses, so if you think about business leaders, even CEOs, but also heads of engineering, you know, now we have heads of productivity and all of that, they've recognized that. And I'm not saying it's [00:09:00] not about making money, I know everyone wants to make money, but at the end of the day, leadership has recognized: hey, if we reduce the noise,

if we streamline what they're doing, if we make their day-to-day better, better things happen to our business. So I think there's a realization of that, which I think probably overall is a good thing. And you have some of these metrics you called out, like the DORA metrics, right? They've been around for a while.

At LinearB, we offer these for free. It's like, you know, you get your cycle time, you get your deployment frequency, your change failure rate.


What are the three pillars to improve developer productivity?
---

[00:09:38] Dan Lines: But when you start looking at those at a business level, and you say, hey, our cycle time is eight days, or whatever it is, and it would be really cool if it were, like, five days.

And I think that I could get there by making the lives of the developers less noisy. I think [00:10:00] it's cool, I can buy into that. But I also agree, on the other hand, there's a lot of terminology out there, because businesses are trying to make money. We work with a lot of companies that work on reducing cycle time, and it helps, and a lot of the ways that they are doing it is through reducing the noise. One of the areas that we see is around, you know, reducing the noise in terms of pull requests and the review process, and streamlining it. It was really cool that you kind of gave a history of this that you've seen throughout your career.

And now that we're at the point of, I think you said, noise reduction: what are the top things that you're seeing now, like, today, in order to either reduce noise, or measure it? Where do you stand with all of that?

[00:10:54] Tara Hernandez: One thing that I think will always be true, and it's a tension that we have as an industry, is balancing what are the things that are industry standards that we should conform to, and tools that we should adopt, versus what are the things that are going to be really relevant to a particular company and the engineers within that company, right?

Because it does matter, and I will go on and on about this. To me, there's three key pillars to developer productivity. The first one is the obvious one: it's the technology. Like, what are the tools that you use? But the next one is the processes that you use, you know, the mechanics of how you organize yourselves.

And then the third pillar is the communication, which is both how do you talk to each other, but then also how do you have transparency around the metrics, or around the key insights that you need to elevate, right? And the three of those things are the three pillars of your stool. But notice, if you go back, only one of those is tech.

Right? So people, I think, ultimately, are the most important aspect of good developer productivity. It is always going to come down to people: how do you, um, engage and understand what is going to enable your developers? It's invariably not going to be a tool, or not solely a tool. And I think that's the thing that's really important to recognize, but it's also the hardest thing to quantify, right?

Because it's intangible, uh, or, or is often intangible.

[00:12:24] Dan Lines: Right, so two-thirds. So if we recap: you have tools, and then the other two-thirds have to do with people. You have process, and then is the third one, like, communication?

[00:12:36] Tara Hernandez: Well, it's communication: like, how do you talk to each other?

Like, what are the forms? Like, you know, if you have an idea or you need to give feedback. But also, the other way around: how do you, as a team, communicate how you're doing against your little piece of those goals?

[00:12:55] Dan Lines: Right. So let's do this: let's talk about measuring. Maybe [00:13:00] we can talk about measuring the three areas, or, if you want, dive into the people side, whatever you're comfortable with.

Like, how do you measure success? And then we can move into, you know, what approaches have you seen or are we taking to actually improve the experience and efficiency? So let's start with the measurement side.

[00:13:18] Tara Hernandez: So one of the things I loved when I got to MongoDB, and it's almost two years now that I've been here:

MongoDB has had an incredibly invested culture around testing, right? From the get-go. It was very much an early adopter, to the point where we've actually implemented our own CI system, purpose-built to build and test a distributed document database. It's called Evergreen CI.

You can see it on GitHub. And that's great, right? Usually when I go to a company, I have to convince them to write more tests. Well, at MongoDB, we actually have so many tests, it's like, I want to know which tests are providing the most value, right? So we need telemetry to tell us: these tests are actually providing us the most value.

Let's make sure [00:14:00] those tests are getting the most, um, visibility. These tests, maybe, are not providing that much value, or no value at all; let's stop running them, because, you know, it's taking up thought space, right? Or it's taking up consumption time. So, you know, code coverage, statistical analysis, there's tools that you can use to try and figure that out, right?
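As a concrete aside, here is a minimal sketch of that kind of test-value triage, assuming you already collect per-test run counts, caught regressions, and runtimes; the field names and the threshold are hypothetical, not Evergreen's actual schema:

```python
from dataclasses import dataclass

@dataclass
class TestStats:
    name: str
    runs: int                 # executions in the measurement window
    regressions_caught: int   # failures that pointed at a real bug
    avg_runtime_sec: float

def value_score(t: TestStats) -> float:
    """Crude value signal: real regressions caught per hour of total runtime."""
    total_hours = (t.runs * t.avg_runtime_sec) / 3600
    return t.regressions_caught / total_hours if total_hours else 0.0

def triage(tests: list[TestStats], low_water: float = 0.05) -> tuple[list[str], list[str]]:
    """Split tests into 'keep' and 'review' (candidates to demote or retire)."""
    keep, review = [], []
    for t in sorted(tests, key=value_score, reverse=True):
        (keep if value_score(t) >= low_water else review).append(t.name)
    return keep, review
```

The point is not the exact scoring rule; it's that once the telemetry exists, deciding which tests earn their runtime becomes a reviewable policy instead of a gut call.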

But then there's other parts, like, okay, you have an idea, or you've been requested to do something. Well, now you need to understand what you are going to do. So there's, like, review time, design time; how long does that take? Right. And then we're actually getting into DORA metrics there, right? You know, from business insight to implementation, and then from implementation to deployment.

But here's an interesting one, which is: we don't have one thing to measure here. Because at MongoDB, we have MongoDB that you can download and run on your local Debian instances, right? You can self-host it. But then we also have Atlas, which is a SaaS, right? So obviously we're not going to say, oh, well, for our distributed solution, we're going to have a two-hour turnaround [00:15:00] time, because our customers do not want to install a new version on their operating system every two hours, right?

But our Atlas folks probably want the latest and greatest. You know, as quickly as possible if there's bug fixes and security remediation, right? And so then we have to have different metrics for different types of products. And then have that, you know, be a thing. And then communication, like, if my boss, who's Jim Scharf, the CTO of MongoDB, needs to be able to answer a question, how fast can he get to that answer?

Does he have to come find me, or can he just go to a dashboard and go, Oh, look, I need that. Next time I talk to Tara, I'm going to ask her about that, right? Or the TPMs, you know, are they able to report back to the product team how well we're doing? So there's like all these different tendrils of information and how much, and this comes back to tooling again, how much can we automate that, right?

And make those things be automatically discoverable, so that we know they're therefore always up to date, right? So that's another kind of side challenge around information [00:16:00] flow.

[00:16:00] Dan Lines: Let's finish this thought, because you had me thinking about something. We were talking about tooling, right? Then we were talking about process, and then we were talking about people; essentially, I think you said, collaboration?

[00:16:16] Tara Hernandez: communication, collaboration,

[00:16:17] Dan Lines: communication.

Okay. We're working with a customer, and I won't say who the customer is, but we use cycle time to kind of benchmark where they stand. And in our cycle time, the way that we do it at LinearB, is we have coding time.


What do you mean by automated does not equal autonomous?
---

[00:16:36] Dan Lines: So how long am I spending in coding? Then we have: once I put a pull request up, how long does it take for that review to start? So, how long am I waiting for feedback? And then, once the review starts, how long does it take to complete the review? So that's kind of looking at the reviewer there, and, like, is it a big PR?

And then, how long does it take to get deployed, merged and deployed? Right. [00:17:00] So that's our cycle time. And what we saw at this company is they started using a lot of bots. So it's starting to get into, it's not, like, pure AI, but they had a lot of bot activity, and these bots were opening up PRs.

Actually, 20 percent of all their PRs were coming from a bot. Now, some of these are the more common bots, like Dependabot, but some of them have to do with, like, localization and accessibility, and there's more and more bot-generated code. So that's on the tooling side: they're using a bunch of bots.

But what was happening on the process side is all of this was getting put onto humans for review. And not all of it actually needed, like, a human review, 'cause sometimes these bots are doing little version bumps, or, like, tiny, tiny changes. So they're creating a ton of PRs with a tiny change. And then on the communication side, it [00:18:00] was kind of like, okay, who's going to do this review?

I have a lot of work on my plate. It's not coming from a human. And these PRs were just sitting there. And now your cycle time is increasing. And so, you know, we worked out a way with them where we say, okay, we look at what the, the bot was actually changing and we make a decision if we need a human review or an automated review and so on, we, you know, we have a solution.

But I wanted to just see what you thought about that, 'cause that kind of reminded me of putting all of those pillars together.
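As a quick aside for readers who want the mechanics: the phase breakdown described above can be computed from a handful of PR timestamps. A minimal sketch, with illustrative event names rather than LinearB's actual data model:

```python
from datetime import datetime, timedelta

def cycle_time_phases(first_commit: datetime, pr_opened: datetime,
                      review_started: datetime, merged: datetime,
                      deployed: datetime) -> dict[str, timedelta]:
    """Break one PR's cycle time into the four phases described above."""
    return {
        "coding_time": pr_opened - first_commit,    # writing the change
        "pickup_time": review_started - pr_opened,  # waiting for a reviewer
        "review_time": merged - review_started,     # review through merge
        "deploy_time": deployed - merged,           # merge to production
    }

# Total cycle time is the sum of the phases:
# sum(cycle_time_phases(...).values(), timedelta())
```

A flood of small bot PRs inflates pickup time in particular, which is exactly the pattern in the story above.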

[00:18:32] Tara Hernandez: I have many sayings, right? I'm half Irish. I can't help it, right? We just, we have the bardic tradition. And one of my other sayings is, automated does not equal autonomous, right?

And so, yeah, if you have automation that goes nuts, it probably is doing the reverse of what you would hope, right? Yeah, exactly. And so, being thoughtful about what's the outcome; I think a lot of what things boil down to, actually, in our world, is what outcome are you [00:19:00] trying to achieve.

Right? And really start from there and then work backwards. Because a lot of times you think, oh, well, hey, if we had a bot that was, you know, like Dependabot. Dependabot is one of the greatest, you know, inventions of our modern time. Like, oh, my dependencies are out of date, go fix, right? Huge. Yeah. But if you multiply that out five times, absolutely, you could get in a situation where it could take three engineers all day long, every day to keep up, right?

So yeah,

[00:19:23] Dan Lines: Piling up and distracting me from the work that I wanted to do at this company, which is, like, I don't know, maybe build cool features.

[00:19:32] Tara Hernandez: exactly. Yeah. And they're not going to last very long. Very long, right? Because they're like, I'm tired of just reviewing Dependabot PRs. So, so yeah, that is a great example of where you can kind of get yourself into trouble and why I talk about the three pillars, like technology is one, but only one of the three.

But they all do interact, because it is a complete system, right? The technology enables processes, right? But then the [00:20:00] communication comes in, and, like, is the right information happening? And here you can see where those things are out of balance; to follow the analogy, your stool is wobbling, right?

It is not.

[00:20:09] Dan Lines: Yeah, the tooling is taking over. There's a bot army doing a bunch of work, and, you know, a negative

[00:20:16] Tara Hernandez: impact on your process.

[00:20:18] Dan Lines: And the interesting thing is, you know, and I do want to hit on the three horizons, but maybe we can go into the world of AI. Like, it seems to me more code is being generated, faster, if that makes sense.

That's what the data says, at least the data that we have in benchmarking: more stuff is happening, let's say. But that's why I think you said, like, automated doesn't mean autonomous. More stuff is happening, but it doesn't mean that the experience is better, and it doesn't mean that we're more productive.

[00:20:54] Tara Hernandez: Right? And it doesn't mean that the outcomes that we're getting are actually what we want. And let me give you two great [00:21:00] examples of the things that we've seen around the use of AI. And by the way, MongoDB very much is pro-AI, right? And as a platform, you know, sticking your data into our databases, we're all about that.

And we're trying really hard to make that be awesome, and there's a lot of customers that are using us for that purpose. But you really want to be thoughtful about how you use it and why. So here are two areas of concern, from my perspective, from a developer productivity perspective. One is, going back to automated does not equal autonomous.

If you say, you know what, I'm eliminating 50 percent of my engineering resources because of this AI thing, I am so confident, right? It's going to just generate all the code we need, we're good. You have two problems with that. One is what happens when it fails, right? The customer is not calling

[00:21:50] Dan Lines: you. "AI did it. I don't know."

[00:21:54] Tara Hernandez: Exactly. Like that is going to fly? Not at all, right. You know, the customer is going to drop you so fast if you [00:22:00] were to say something, you know, as ridiculous as that. So

[00:22:02] Dan Lines: yeah,

[00:22:04] Tara Hernandez: There's a joke I saw floating around. It's like, you know, AI lets you implement code ten times faster, but the debugging time is now a hundred times slower, because somebody's going to have to go in there and figure it out, right?

In a worst-case scenario. I mean, obviously that's extreme, but

[00:22:19] Dan Lines: Yeah, you can't go to a customer and say, yeah, AI did it. Let me, well, at least now, let me ask AI what... it's like, no, I'm done trying to talk to you as a human. What? We don't know who wrote that code.

[00:22:31] Tara Hernandez: Yeah. So that's not going to fly, right?

And so, AI as an assistant to work? Yes, absolutely, I think that is a great thing. But so many people, I think, are overcommitting on what AI actually provides as it stands now. And here's the other thing around AI that's kind of interesting: copyright law, right?

[00:22:52] Tara Hernandez: If you have AI-generated code, by its very definition it is learned from another [00:23:00] source.

Therefore, by its very definition, it is prior art. So if you have a whole bunch of generated code in your corporate intellectual property, you may not be able to retain your copyright.

[00:23:12] Dan Lines: Oh, that's interesting.

[00:23:14] Tara Hernandez: Right? Like, it's legally untested. So, for the things that our customers are doing in MongoDB and Atlas and, you know, whatever, they're using vector search, and they're trying to do very specific things that keep them safe, right?

But as a company, if you put too much AI into your actual product code, you might get into a little bit of trouble. You know, you might lose your copyright, someone might sue you. And we saw that, right? When Copilot rolled out, there was all kinds of lawsuits that instantly started taking place because of, you know, the failure to attribute code that was not licensed to be scraped for machine learning, you know.

So it's a really challenging thing. And as companies, you know, engage with AI, as we should, the genie is out of the box, it's not going away, we have to be really thoughtful about it. And so part of developer productivity's [00:24:00] task, I think, is to help keep our engineers safe, right? To, again, make it so they don't have to track: here's where you can use it, here's where you can't. We're going to make it so that you're not going to accidentally shoot yourselves in the foot, right?

That's another form of noise, to be quite honest.

[00:24:16] Dan Lines: Yeah, well, it's another thing they have to keep in mind. It's not a good thing, I think, for developers if it's like, hey, go use AI-generated tooling, but also you have to keep all this other stuff in mind while you do it. To me, then, that would be like, wait, what?

I thought this was supposed to make my life easier. Like, why are you putting more on my plate?

[00:24:39] Tara Hernandez: So going back to my three pillars, right: there is a tool, but then the process and policy and, you know, the communication around that help create guardrails, so that they don't have to think too hard. And that's where the three things work together, in a challenging way.

Right. Here's where the technology has gone faster than the industry [00:25:00] was probably ready to absorb it, right? Um, and so, for each company, we have to figure out what is our safe space here. You know, how do we keep our developers safe, and then how do we keep our customers safe? And that's, I think, the responsibility of every company that's getting into the AI space right now.


How can we make the golden path the easy path for developers?
---

[00:25:15] Dan Lines: I like that you mentioned the guardrails, and I'll just add one thing, because I've been thinking about this a lot. We're experimenting with some of that kind of stuff, like guardrails and policy. At the end of the day, I'm an AI proponent, I think, like you are, and I'm trying to say, okay, how can we do this

but actually increase efficiency? Because it's not correlating yet. And on the guardrail side, if I'm using the terminology correctly, I'd like to see that be automated. I'll try to explain why. Again, I don't think the developer should have to keep in their head what all the guardrails are, and when they should do this and when they should do that. So think about if we had, now, like, a rule engine, where

the rules are the guardrails, and you can tell the developer: yeah, go use our approved AI tooling to your heart's desire. Now, once you put that pull request up, we have a set of guardrails and policies that automatically kick in for you. Depending on what code was changed, sometimes it will need a human reviewer.

Sometimes another AI could actually do that review, and you'll be okay. Sometimes it will automatically call in, you know, the security team or legal. That's the way I think about the guardrails: they need to be automated, so the developers don't have to keep all of that in their head. Otherwise I think it will be crazy.
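A toy sketch of the kind of rule engine Dan is imagining; the paths, rules, and routing labels here are hypothetical, and a real system would load them from reviewable configuration rather than hard-coding them:

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    author: str                # e.g. "dependabot[bot]" or a human login
    files_changed: list[str]
    ai_assisted: bool          # change flagged as AI-generated

def route_review(pr: PullRequest) -> str:
    """Apply guardrail rules in order; the first matching rule wins."""
    def touches(prefix: str) -> bool:
        return any(f.startswith(prefix) for f in pr.files_changed)

    if touches("auth/") or touches("billing/"):
        return "security-or-legal"     # sensitive areas always escalate
    if pr.author.endswith("[bot]") and all(
            f.endswith((".lock", ".txt")) for f in pr.files_changed):
        return "automated-review"      # trivial version bumps from bots
    if pr.ai_assisted:
        return "human-review"          # AI-generated changes get human eyes
    return "standard-review"
```

The developer never consults the rules directly; they open the PR, and the right reviewer, human or otherwise, is pulled in automatically.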

[00:26:48] Tara Hernandez: A hundred percent. You cannot expect the developers to keep, you know, a thousand things at the forefront of their mind when they're doing their day-to-day work, right? It's just not going to work. My staff engineers say [00:27:00] this to me a lot, and I don't know where it came from, but I believe it fully: our job is to make the golden path the easy path.

Right? And to make it so that you will naturally go the direction we want you to go, because that's the easiest way to do it.

[00:27:16] Dan Lines: Yeah, I like that.

[00:27:17] Tara Hernandez: We have to work harder to go out of bounds, right?

[00:27:21] Dan Lines: Because people go out of bounds when it's easier.

[00:27:23] Tara Hernandez: Right, exactly.

[00:27:24] Dan Lines: Yeah.

[00:27:26] Tara Hernandez: This is a challenge that, you know, you may recall, I mean, this is probably still true now, but it was certainly true when cloud hyperscalers started to become more and more of a thing, um, you know, security controls, right?

It was really easy to have super bad security in the cloud, because we were used to everybody sitting behind a firewall, right? And so, behind the firewall, our security profile would be Swiss cheese, and that was okay, because, all right, the network engineers will make sure that the firewall is in place.

Well, now we're in the cloud, and we have developers that are, you know, in the early days, spinning up [00:28:00] random EC2 instances. And now they're spending $15,000 a month, and it's wide open to the internet, because they could, right? And so then we had to put in controls. So, you know, another aspect of humans is that we tend to learn our lessons the hard way.

[00:28:19] Dan Lines: Yep.


What is exciting in developer productivity and AI?
---

[00:28:20] Dan Lines: I think we covered some of the concerns, and we talked about using guardrails and policies, and automating them, to extract the efficiency out of AI but send it in the right direction. But what excites you, on the other side, in terms of developer productivity and AI?

[00:28:39] Tara Hernandez: Anything that improves developer velocity in a safe way, and in a sustainable way, I'm going to be in favor of, right? Like, that's my job: to help my partner teams go fast with high quality, right? Because that serves our business, ultimately. And so, as much as we can [00:29:00] incorporate AI such that we improve our time to market,

and we improve our sustainable product quality, and we improve our ability to maintain the stuff that we've deployed, you know, reduce our call rate into support, for example, that's another type of metric that matters, right? Then we're being successful. And what are the different ways that we can do that?

And then maybe now is the time to kind of talk about the whole idea of the three horizons, because it kind of comes into play around this idea, right?

[00:29:32] Dan Lines: I would love to talk about the three horizons. I have one point that I'm going to make, and then we're going to move to the three horizons, because you made me think of something else.

Okay. One thing that we're doing right now, at, you know, some of these larger companies, is measuring the adoption of Copilot. So that's something that we do at LinearB: who's using it, how much usage, how much code is in the PRs, that type of thing, [00:30:00] which is great. And it hit me, because the other thing that we're then doing is saying: how is that impacting your cycle time, your change failure rate, bugs found in production, MTTR? And I think that kind of just rounds out the point that you were making of, yeah, I want all of this to happen, but my success metrics, the point is they improve, like, these KPIs, right?
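A rough sketch of that kind of comparison: split PRs by whether Copilot assisted them, then summarize the same KPIs for each group. The record fields here are hypothetical, not LinearB's actual schema:

```python
from statistics import median

def copilot_impact(prs: list[dict]) -> dict[str, dict]:
    """Compare KPIs for AI-assisted vs. other PRs. Each record is assumed to
    look like: {"copilot_assisted": bool, "cycle_time_hours": float,
    "caused_incident": bool}."""
    def summarize(group: list[dict]) -> dict:
        if not group:
            return {"median_cycle_time_h": None, "change_failure_rate": None}
        return {
            "median_cycle_time_h": median(p["cycle_time_hours"] for p in group),
            "change_failure_rate":
                sum(p["caused_incident"] for p in group) / len(group),
        }
    return {
        "copilot": summarize([p for p in prs if p["copilot_assisted"]]),
        "baseline": summarize([p for p in prs if not p["copilot_assisted"]]),
    }
```

Adoption alone tells you nothing; the pairing with cycle time and change failure rate is what turns usage numbers into an outcome signal.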

 


What are the three horizons as related to development?
---

[00:30:26] Dan Lines: So let's talk about the three horizons; they're related to software infrastructure. Do I have that correct?

[00:30:32] Tara Hernandez: Well, so the original model was the three horizons of business. And I used to think, like, I thought it was an Intel or an IBM thing, but actually it came out of McKinsey,

I don't know, 20 years ago or so. And the idea is: how do you do business investments, right? Your first horizon is your core business; that's what's making you money now, right? Yes. So if you're Intel, to follow that example, you know, your Pentium chips back in the nineties. Your second horizon is what's about to make your business money,

so your Xeon chips, or, you know, whatever the [00:31:00] next generation is. And then the third horizon is, you know, looking down the road, five years from now, where do you want to be? Right. And the model is: how much do you invest in each thing?

[00:31:10] Dan Lines: Oh, perfect.

[00:31:12] Tara Hernandez: Right. So your core business, hopefully you're not actually putting a lot of investment in it, because that investment's already been made.

That product is released. It's making you money. And hopefully the cost of maintenance is, is low enough that it's, you know, it's mostly profit. Your margins are beautiful. Right. The second horizon is where you want the most of your investment to happen because that's the thing that's going to take over your first horizon.

That's going to become the next horizon one. So that's where most of your focus is. But you want to still have some focus on that horizon three, because in order to sustain your business, right, you have to have that, say, quantum computing, or, you know, Q chips, or whatever it's going to be for Intel.

So I love that idea, and I took it and adapted it for software development. My focus is infrastructure, because that's who I am, but, you know, any software team could think about it. And the idea [00:32:00] is: what does it cost you right now, in engineering resources and budget, if you're a budget person, to just keep the lights on?

You've got your products out there.

[00:32:09] Dan Lines: KTLO, they call it.

[00:32:11] Tara Hernandez: KTLO, absolutely, right? And so, you know, if 60 percent of your engineering effort is answering support tickets, that's your Horizon 1, right? That means at most you have 40 percent going into your Horizon 2, and probably zero going into your Horizon 3, if your ratios are that off. So you're pure tactics. You are hanging on by your fingernails, right? And so as a leadership team, you want to think about: what do we need to do to get that number down? We probably have technical debt that desperately needs to be addressed, right? Maybe we have documentation improvements.

Maybe we need to get our DevRel folks out there building more demos. Like, I don't know, right? Whatever it is from a business perspective. In my [00:33:00] case, it's like, okay, my support engineers are having to answer way too many questions about, you know, using this particular service. Well, clearly that service is not where it needs to be.

So, hey, director who owns that service, your next round of quarterly planning better involve how you're going to make that number drop by half, right?

[00:33:19] Dan Lines: Right.

[00:33:19] Tara Hernandez: In an ideal world, you think about what you want your ratios to be. To me, I think 30 percent on average across all of developer productivity, and I've got 50-something engineers, right?

30 percent would be really good, right? My range is somewhere between 25 and 80 percent, depending on the team, and that's okay, right? Because it depends on what you're doing. But I want to make space for that what's-next. And for us, that what's-next is: how do we up-level, right? How do we improve the type and quality of information we're giving our developers, so that they can improve their productivity?

They can increase their velocity. Right? So that they are then supporting the business goals of getting those features out, getting those bug fixes out.
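To make the arithmetic concrete: if you can tag engineering effort by horizon, the ratios and the keep-the-lights-on flag Tara describes fall out directly. A sketch, using her 30 percent figure as a hypothetical default target:

```python
def horizon_split(effort: dict[str, float]) -> dict[str, float]:
    """Percent of total effort per horizon, from person-weeks (or any unit)."""
    total = sum(effort.values())
    return {h: round(100 * v / total, 1) for h, v in effort.items()}

def over_ktlo_target(team_ktlo_pct: dict[str, float], target: float = 30.0) -> dict[str, float]:
    """Flag teams whose keep-the-lights-on share exceeds the target ratio."""
    return {team: pct for team, pct in team_ktlo_pct.items() if pct > target}

# Tara's worst case: 60% KTLO leaves at most 40% for Horizon 2, 0% for Horizon 3.
print(horizon_split({"H1/KTLO": 60, "H2": 40, "H3": 0}))
# -> {'H1/KTLO': 60.0, 'H2': 40.0, 'H3': 0.0}
```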

[00:33:57] Dan Lines: Got it. I love this. We [00:34:00] did some benchmarking around this, specifically around investment into future value. So I think that lines up with the second one that you said; it's like, where will my money come from next? Investment into, let's say, enhancements to what I have today,

that was your first category. And then I think you had a category that was kind of like future-future, you know, like five years out. Okay, I love that. And then there's things like KTLO, keeping the lights on. And then we had another category; at least, we have this module called Resource Allocation, where we had an investment profile.

We had another one that we called developer experience. That's what we said in our product, but at the end of the day, what it means is: how much investment are you internally putting in to reduce the noise for your developers? And I wanted to see: do you [00:35:00] have, in your head, benchmarking on this?

I think our report said, for KTLO, we wanted it to be 11 percent or less, keeping the lights on. Like, if you're spending more than that, it means that you can't invest in the other areas. Do you have any benchmarks on these?

[00:35:18] Tara Hernandez: Yeah, so for our development team, 11%, or 11 to 15% maybe, I think, is a great number for KTLO, right?

[00:35:26] Dan Lines: Yeah. Like a lead.

[00:35:27] Tara Hernandez: Yeah, my team is both an engineering team and a service team, right? So we're always going to be in a position where we're answering questions, you know, fixing bugs, like, you know, helping the developers out. So to me, 30 percent is probably a more realistic goal, right?

Again, aggregated across the whole organization; some teams will have more support load than others, um, because that's just the reality of our function, right? And so that's why I also, it's like, you know, our tech support organization, it might be that 60 percent would be okay for [00:36:00] them, right?

Because that's, that's their core business.

[00:36:02] Dan Lines: But then they

[00:36:03] Tara Hernandez: also want time to create tools to help them get better at what it is that they do, right? So I think every leader can think about, you know, what is the right truth for us? And then, how do we get there?

[00:36:14] Dan Lines: Yeah, for us, I'll see if we can include our benchmarking reports with the pod and all of that.

But I think the way that we're looking at it would be your entire software engineering organization combined; as opposed to, you know, yeah, if you're a service team, it's going to be much higher than 11%.

[00:36:36] Tara Hernandez: Yeah. And there's so many nuances, right? And I, I really hate, I always hate talking in absolutes, even though they're easier ultimately, right?

At a certain level. But, you know, I actually wanted to circle back to something else that you had been talking about, which is, you know, understanding, you know, for example, on, on an individual level, how long does it take to get the code written and get the PRs reviewed? And those are all like individual actions.

And one of the things I worry about, I won't [00:37:00] lie, one of the things I worry about is the misuse of data, right? It is very easy, I think, to even inadvertently end up weaponizing some of that stuff, right? Because, again, you're trying to quantify something that has a lot of, sort of, intangibles around it. Like, one engineer who's really good might land fewer changelists, but they're bigger, right?

So they take more time. Versus someone else who's really dialed into small check-ins, you know, small commits are better, right? And so, you know, the numbers will look different, but the ultimate outcomes could be identical. But how do we measure the differences between those two, right? And also, you don't want to inadvertently disempower the leadership.

That's it, right? Like, a frontline manager's job is to kind of have a handle on how the team is operating and how the team works as a system. The human systems are as important as the technical systems, right? [00:38:00] I love the companies that are coming out with good tooling around metrics.

The ones that are super focused on using language around, you know, "help identify your underperformers," I'm like, whoa, okay, that's not the direction I want to follow. But the ones that are like, how are your teams doing, and then provide to the manager an additional set of information that could be helpful to them, because now they have the context, right?

You don't want the individual developer to think, oh my God, I'm going to be fired based on these numbers, right? And so that's another part of it, the culture element. What kind of culture are you building throughout the tools, the processes, the communication, that is going to incentivize and inspire your engineers, right?

Which ultimately isn't tangible, but it's the thing that we have to think about as leaders throughout all of this, because engineers that feel inspired and motivated and rewarded work better, right? You know? Absolutely. I mean, I think that's as important as any tool. More important, [00:39:00] honestly.

[00:39:00] Dan Lines: The culture needs to be there.


Developer performance vs productivity data
---

[00:39:04] Dan Lines: In order to, I would say, deploy data correctly: when we're talking about productivity data, I want to be very clear that we're not talking about developer performance.

[00:39:18] Tara Hernandez: Yes.

[00:39:20] Dan Lines: Nothing in this podcast is about developer performance. What we're talking about is productivity of the engineering team.

Yeah. And I think that that's the difference. Yeah.

[00:39:32] Tara Hernandez: And sometimes they get conflated.

[00:39:34] Dan Lines: And they can get very easily confused.

[00:39:38] Tara Hernandez: Yeah.

[00:39:38] Dan Lines: Now, when we're looking at data, you can see signals where there's a bottleneck in the process, the tooling, and maybe the way teams are communicating together.

Those were your pillars. And using the data to find bottlenecks, and then going and implementing [00:40:00] solutions, whether it's automation or not, or, you know, something else, that's great. That is way different than saying, I'm a performance management tool from, like, an HR perspective; and that is not what we're advocating for here.

So just, you know, I've seen,

[00:40:19] Tara Hernandez: But I think it bears repeating, right? Because, going back: in the same way that AI is not going to assume the responsibility for an outage to a customer, right, an AI, or an AI-enabled analyzer, cannot assume the responsibility of a people manager, you know, viewing people's performance.

And so that's where we have to understand: this falls on that side of that boundary. This is a people thing, and it needs to stay that way.

 


What is the right way to think about goal setting for noise reduction?
---

[00:40:53] Dan Lines: I think a good topic, maybe, to kind of round out the pod: there's a lot of leaders [00:41:00] today, and I consider you a leader.

Like I said, there's a lot of really smart people today that are trying to improve, you know, developer productivity. They want to set goals around this. They want to use numbers. How should leadership think about goal setting for noise reduction and all that? Like, what is the right way to do it, in your opinion?

[00:41:25] Tara Hernandez: I mean, to me, I said it before: it always comes down to outcomes. What are the things that, at a high level, we state are really important? Whether it's, you know, getting features out the door, hitting revenue targets, whatever. We need to have clearly defined outcomes. And then you need to empower the teams, as you go down the org chart, to figure out what their piece of that is, and then be able to define success.

Right? At the end of the day, it's outcomes and success criteria. And the more you can make those be really clear and concrete, the more successful you're going to be, 'cause you have something to measure yourself against. The lowest-level engineer should be able to know: why does what I'm doing right now matter, and how am I moving the needle?

Right. That, to me, is the best scenario.

[00:42:17] Dan Lines: I love it. Do you have anything, for those outcomes, that you have used or seen teams use that's more concrete? You talked a lot about, uh, noise reduction; maybe an outcome can be, like, we want to increase developer focus time by 20 percent, and we're going to do that by reducing meeting time by 20 percent.

Like, is there an outcome that you've seen work, or not work, as well?

[00:42:55] Tara Hernandez: Um, yeah, I mean, there's lots, right? And again, it's going to be, like, what's most relevant to [00:43:00] your organization. For me, uh, a great goal would be: a development team could start from scratch, get a repo set up, get their CI set up, get a preliminary set of automated tasks, get performance analysis, get security checking, all those other things, without having to go ask for help.

[00:43:19] Dan Lines: That's cool.

[00:43:20] Tara Hernandez: That would be a stupendous outcome, right? What do we need to do to do that? We start working backwards and figure out where are the gaps, right?

[00:43:29] Dan Lines: And I would just say, for the leaders listening, and then we'll wrap it up here: what I really liked about the way that you said that is that it was really tangible, with the example of what we're trying to do. And the outcome is, you know, hey, a developer doesn't have to ask for help. So let's measure maybe how many internal help tickets we get, or whatever it is that...

[00:43:53] Tara Hernandez: Slack requests, whatever they are. Yeah.

[00:43:55] Dan Lines: So, you know, we're up on time here. But Tara, it's been [00:44:00] awesome. I really enjoyed having you on the pod and speaking with you.

[00:44:07] Tara Hernandez: Uh, this has been fun. I always like nerding out on this stuff. I've been doing this job for 30 years. It never gets old to me.

[00:44:14] Dan Lines: Amazing. And thank you everyone for tuning in with us today.

And remember, if you haven't given us a review on your podcasting app of choice, it does mean the world to us if you could take 90 seconds to give us, uh, a review on this pod. Tara, thanks again for coming on, and, uh, we'll talk again, hopefully soon.

[00:44:39] Tara Hernandez: Sure thing. Thanks, Dan.