On this week’s episode, our hosts Dan Lines and Conor Bronsdon are joined by long-time friend of the show, industry expert, and Director of Engineering at Spot AI, Kelly Vaughn. Together, they tackle a range of topics, including how tax laws impact engineering teams, AI's evolving role in software development, and the great build vs. buy debate. We’re excited to have Kelly joining us as a regular contributor throughout 2024!

The conversation starts with Kelly and Dan’s takes on Section 174, which poses a looming threat to US tech companies. From there, they pivot to GenAI to discuss how you can measure its impact, leadership’s role in the process, and the importance of navigating this implementation wisely. They conclude by talking about Kelly’s recent article on the debate of building software in-house vs. buying third-party solutions and why the answer isn’t black or white.

If you want to hear how Kelly and our other hosts are thinking about key events happening in tech and within engineering organizations, this episode is a must-listen.

“We used to say that the best senior engineers know how to Google. They know how to ask the right questions. It's going to be very much the same with writing prompts as well.”

Episode Highlights: 

  • 01:35 Section 174's impact on US tech companies
  • 08:01 Why intention matters for remote vs. in-person work 
  • 12:46 Kelly and Dan’s take on what’s causing tech layoffs
  • 18:21 What should leaders be doing to encourage Gen AI tool knowledge?
  • 24:28 How are we tracking the impact of Gen AI?
  • 31:50 How should organizations set up standardization?
  • 33:55 The great build vs. buy debate 
  • 39:47 How the engineering leaders’ role has changed in recent years

Episode Transcript:

(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)

Dan Lines: 0:00

I'll kick us off. I have a lot of thoughts. I can't think of any more of an exciting topic to start with, Conor, than taxes.

Conor Bronsdon: 0:07

I know you love taxes. 

Ad Read

Is your engineering team focused on efficiency, but struggling with inaccessible or costly DORA metrics? Insights into the health of your engineering team don't have to be complicated or expensive. That's why LinearB is introducing free DORA metrics for all. Say goodbye to spreadsheets, manual tracking, or paying for your DORA metrics. LinearB is giving away a free, comprehensive DORA dashboard packed with essential insights, including all four key DORA metrics tailored to your team's data, industry-standard benchmarks for gauging performance and setting data-driven goals, plus additional leading metrics, including merge frequency and pull request size. Empower your team with the metrics they deserve. Sign up for your free DORA dashboard today at linearb.io/dora, or follow the link in the show notes. 

Hey, everyone. Welcome back to Dev Interrupted. We're excited to be with you here in the new year, and we have a fun treat for you. I'm obviously Conor Bronsdon. You've probably heard my voice before if you've listened to this show, and I'm delighted to be joined once again by our original OG host, the LinearB COO, Dan Lines, and he is also a fantastic co-founder. So we have some founding topics to dig into today, and we have one of our fan favorites who is going to be a regular guest this year and kind of serve in a co-host role with us. And that's Kelly Vaughn, Director of Engineering at Spot AI, an entrepreneur in her own right, and a speaker. Kelly, we're so excited to feature you throughout the season here at Dev Interrupted as our regular expert. And having both you and Dan here to dive into some of these topics is just ideal. Thank you so much for coming on.

Kelly Vaughn: 1:39

I am so excited to be doing this even more regularly. You'll get to, uh, listen to my hot takes, or my old takes, or really terrible takes. A whole lot more now.

Conor Bronsdon: 1:47

And I think your hot takes in particular are going to be great with Dan, because I love to push him on things a little bit and see where his reactions are, because you'd be surprised, but he has some good hot takes too. We just gotta make sure he gets them on recording.

Dan Lines: 2:00

Not as good as Kelly, but Kelly, it's awesome to have you. I am super stoked to be doing this with you.

Kelly Vaughn: 2:05

Same, same.

Conor Bronsdon: 2:07

So let's kind of dive right in. We're going to discuss some of the top topics we've heard in engineering leadership circles lately, and a couple of topics that have actually been driven by listener questions. And listeners, I'll just note here before we officially start: if there is a topic that you want to hear us discuss, let us know. We want to hear from you, whether that's on LinkedIn, Twitter, or our Substack, and we'll try to get it into an upcoming episode. We'll be doing these episodes once or twice a month, time permitting. We have a really exciting initial topic, and that is taxes. Uh, I know everyone's so stoked to kick off the episode with this, but both of you, I think, are uniquely positioned to have this discussion, because you both have experience working with engineering teams worldwide as well as in the United States, where we're all based, and you've both been founders before. And recently, we've seen this discussion ignite online, where many tech companies have begun to move product and engineering operations outside of the U.S. due to costs. And some in the industry are now theorizing that this is directly related to the recent tax code change in Section 174 of the U.S. tax law, which has put this into overdrive. Gergely Orosz of The Pragmatic Engineer, in particular, has started to bring this to light over the last few weeks. And to give a quick summary for our audience: basically, all costs related to R&D, so that's software engineering, including labor for software development, used to be costs you could write off as expenses and treat differently, which would help lower your tax burden in the United States. Now, though, these costs have to be capitalized and amortized over five years, or 15 if the labor is done outside of the U.S., and many feel this is deeply impacting bootstrapped startups. I'm going to give one example before I dive into Kelly and Dan's thoughts. 
And this is from a small business owner in Pennsylvania who said: our taxes increased 500%. It's so bad that if it's not repealed, we'll find a way to move our company to Canada, where my co-founder is located. We're a high-growth technology software company. So it seems this law is doing the exact opposite of its intent, pushing business out of the U.S. rather than keeping it here. It's having a major impact, particularly on bootstrapped startups. Is the era of U.S.-based software startup dominance coming to a close? Yeah, I'd love to hear your thoughts.

Dan Lines: 4:14

I'll kick us off. I have a lot of thoughts. I can't think of any more of an exciting topic to start with, Conor, than taxes.

Conor Bronsdon: 4:21

I know you love taxes.

Dan Lines: 4:23

Okay, there's a few things that come to mind. One is, you have these lawmakers that have a much different intention than someone that's trying to found a company or create something new or be really driven to innovate. I remember for myself and for Ori early on when we started LinearB: you want to focus on value. You want to focus on your product. You want to focus on customers. You want to lower the things that are distracting you from doing that, and this seems to add an additional complication of what you need to think about in terms of financing, which is obviously a bad thing for innovation and new startups. The other thing that hit me is it seemed unexpected, which is really lame. Like a 500 percent increase, that's crazy. And if that's going to hit you out of nowhere, I don't know, I can see why it's like, yeah, I want to move my startup to Canada or something like that. Now, the last thing that I'll say: I had a lot of success with engineers that were not from the US when we were at my previous startup, not LinearB, the previous one. Early on in a company, I love the idea of having your CEO, your head of product, and the engineering team all in one location. I think that matters a lot early on. Why does it matter? You're iterating, you're collaborating, you have an idea, you're whiteboarding, things are moving really, really, really quickly. But what I've seen is as the company matures, let's say you start getting to 50 engineers, 70 engineers, a hundred engineers. And that's what happened to our startup. I had awesome success. And this was back in the day a bit; I know times have changed. Hiring out of Ukraine, Ukrainian engineers, much cheaper. If you're paying someone in the U.S. like $100K or $120K, you can honestly get the same talent for $30K or $40K. So I know there's a ton to unpack there, but those are the things that came to mind for me.

Kelly Vaughn: 6:40

That's an interesting take. I do understand that there's absolutely benefit to being face to face, working in person. Uh, you can collaborate a whole lot more effectively. I will say Spot did not go in that direction. Most of our senior leadership team is in the Bay Area. Our product team is in the Bay Area. And there was one point where I was the engineering leader at Spot, and I'm on the East Coast. It's about the intention that you go into building with. And if you're going to build with the intention of being a fully remote company, you're going to approach it very differently. And you'll find ways to continue to be more efficient, even if you have a remotely distributed team. Now, that aside, going back to the wonderful topic of taxes, I don't see Section 174 sticking around. I do think that it's going to massively impact innovation in the United States. I think it was a mistake to even pass, and I feel like it was a big whoopsie in general. They were like, oh wait, this thing is now here, and now we have to adjust, so that kind of sucks. But the capital markets have been screaming at us for quite some time, saying we're no longer in a world that is growth at all costs; it's about efficiency, it's about profitability. So we have already been moving in a direction where we're hiring, you know, we're nearshoring, we're offshoring, we're looking for that cheaper talent. And so I think Section 174 is going to exacerbate the issue. I think we've already been moving in that direction.

Conor Bronsdon: 8:06

Yeah, I think you're spot on, Kelly, as far as this is a thing where it is making a problem that was already there worse, but we are in a competitive global marketplace. And I mean, particularly, you know, after 2020 and the push to have remote work be more and more accessible, and people all having to experience it whether they wanted to or not, we've really opened our eyes, I think, to the availability of talent worldwide. And to your point, setting up engineering teams this way now gives you access to this wider talent pool, which can be a huge advantage. Dan, you spoke to your experience hiring in Ukraine with CloudLock several years ago. I know, Kelly, you've done hiring in Poland; you know, we have team members in Israel. And I think there is a huge advantage to this ability to access this global talent marketplace, whether that's for costs or simply competitiveness as far as, you know, having the best people on your team. But there is also something special about in-person collaboration. And I think it's hard to make the decision about, you know, where do you balance there. And so I would love to dive into what you said a bit, Dan, which was you really feel that there's something special about having that opportunity early on in a startup to scale from an initial, you know, crew of people, maybe the first 10 or so who are building that product, really iterating on it, having that be in person. How do you make the transition to, hey, we now want to access this global talent marketplace? Or if you couldn't co-locate, let's say it was the middle of the pandemic or simply costs are too high, how would you approach setting that organization up differently?

Dan Lines: 9:32

What I like about what Kelly said is she said something like intention matters. So if you're remote from day one and you know you're in that situation, I would assume, Kelly, you're, you're planning stuff on your calendar. You're doing certain things to know, okay, this is when we collaborate.

Dan Lines: 9:52

I love getting in a room with a group of engineers and product people and creating something. And I think it's like a speed thing early on. And the reason that I also included the CEO is oftentimes in a very early startup, let's say you have 10 people, you have 15 people, something like that, I always like to keep the CEO looped in, so there's no, uh, "Hey, is it in the right vision?" Because the worst thing that can happen is you do a bunch of collaboration between product and engineering, you think you have something amazing, and then the CEO comes in and says, "Oh, that's not in my vision." And you find out three days later. That's such a nasty feeling. So that's kind of more where I was saying, okay, it doesn't even have to be co-located; even closer time zones, or staying close together, matters so much for me. And then later on you get to like 70 engineers, and you have more and more work to do, and you start thinking about efficiency. I would just want to say I think there's great engineering talent all over the world. It's not like only the US has good talent or something like that. Again, I'll speak back to Ukraine. These guys and girls were super smart, just as talented as the people in the US. Now, they cost a lot less. That's the truth about it. One thing that I will say, Kelly, to get your perspective on: we were in Boston, and we had people in Israel, and we expanded out a team into Ukraine. And you know what was the toughest part? Who was the product owner that would work with the team in Ukraine?

Dan Lines: 11:29

And we found that there was only one of our product owners that had, I don't know what it was, I don't know if it's intention or it's like EQ, she was amazing at communication and working with this team in Ukraine and that's the key to making it work. None of our other product owners, not because they were bad, were able to do it. There was too much communication breakdown.

Kelly Vaughn: 11:52

Yeah, no, absolutely. As soon as you expand overseas, and I believe I talked about this in the episode where I was expanding into Poland. Uh, one of the most challenging things you are going to face as you expand overseas or start to build a team elsewhere is that, as you create an ocean between the product and the team building it, you're going to have to be much more intentional, I want to keep on using that word, about communicating the history of the product, where you're headed, why we made the decisions we made in the past, why we're making the decisions we're making now. Because they don't have the same context, and because they're not physically there, or even in a closer time zone, they're going to miss out on context in passing from conversations that are actively happening. So you have to have a really strong product owner, as you said, to be able to communicate that in a way that's going to make sense to them as well.

Conor Bronsdon: 12:43

I think you're both really on track with this intentionality piece, and, like, the need to say, hey, we have to understand the differences between how we're constructed versus how we used to be. I remember I talked to Darren Murph, the head of remote at GitLab back in the day, and one of the things that he really highlighted when I was talking to him about how they were successfully doing remote work early in the pandemic, how they could kind of teach others to be successful in this, was that they had very intentionally set up their communication to facilitate it. And I think a lot of companies, because they were pushed into doing remote work, or maybe moved along faster than they intended to, didn't have the time to think about that. And suddenly they were just delivering on, hey, we've got to deliver our products, we've got to, you know, build this, we've got to sell. And that lack of intentionality really starts to grind you down as your company hits Dunbar's number, hits, you know, a hundred-plus headcount, because it becomes a lot harder to rely on those, like, tribal pieces of knowledge that have been built in, where people go, oh, I know how to do this, I've been here for a while. I think you're both really on track with that need. And it's interesting, because I think there's this secondary layer that is also coming in here, and frankly, it's part of why we've potentially seen layoffs across the tech sector here too: we're also now treating engineers as someone who likely has a copilot in hand. And I mean that very explicitly, like, you know, a GitHub Copilot or an AI-enabled tool. And so now you have this other layer of how we're collaborating with these machine-enabled tools that are becoming much more powerful and helping drive a lot of our tech. And so, you know, there's a few examples of layoffs that have already happened in 2024, from Discord to Google, Amazon Prime Video, X, Twitch, Audible, YouTube, InVision, Xerox, Duolingo. I could go on. 
Do the two of you think that either this change in how companies are constructed or the impact of AI or both, are helping drive these layoffs, or is it something that's more cyclical?

Kelly Vaughn: 14:41

I mean, we're still dealing with the capital markets. I think we're looking for any excuse to explain why there are so many layoffs. As we're actively fundraising, we're watching what's happening in the markets right now. You know, we're seeing that companies are still struggling. And we know we can use the excuse that, well, we're going to replace X number of people with AI, we're going to leverage AI, and so it's going to speed things up and we're going to be a much more efficient company with fewer people. Is that true? I mean, it could very well be true, but a lot of the AI tooling we're using is still very much in its infancy, and so we have a lot of people who are having to also stay up to date with this tooling that's changing. And I think it's a mistake to believe that AI is directly driving all of these layoffs. I've seen companies say they're replacing their support team with an AI bot, which is a decision, uh, that I'm not going to get into. When it comes to engineering in particular, I don't see AI being the driver for causing these layoffs.

Dan Lines: 15:40

I 100 percent agree with Kelly. I'm talking to lots of companies that are experimenting with GenAI in software engineering. I'm also in contact with lots of companies that unfortunately are having to do, like, RIFs, you know, reductions in force. None of them are saying it's because AI is taking over a software engineer's job. I can't speak to, like, the support side, you know, the chatbot and all of that. I think that's something separate. None of them are taking software engineers' jobs. And a ton of companies are investing significant money into things like Copilot as an experiment. And the biggest thing that they're at least asking me, because for big companies it's millions of dollars that you're investing, they're trying to say: is this, let's say, a $5 million investment into Copilot making my engineering workforce more effective or not? And that's the question that they're trying to answer. But no, directly to your question, Conor, I don't think it's a replacement thing. I haven't seen anything like that to date.

Kelly Vaughn: 17:02

I also firmly believe that I think it's great that we're experimenting with GenAI in the workplace. Especially for those who are, you know, needing to write more tests. That's one of the biggest reasons I see people use GenAI. I've used it for writing SQL queries again and again, but I still call, you know, Copilot, my friendly junior engineer who's helping me. Because it's not necessarily the greatest code ever written, and often doesn't actually work on the first try, unless it's something very, very simple.

Dan Lines: 17:29

You know what I was thinking about, Conor, when you posed this question, which is actually a new thing for me. You know, with LinearB, we're measuring the effectiveness of GenAI, we're doing all that kind of stuff, but actually something different came up for me. If I was a developer right now, and Kelly, I think you're closer to that, because you said, oh, it's like my junior engineering buddy. I would think, career-wise, I would say to myself, how could I use Copilot to be the most effective engineer? I think the developers that understand when to use it, when not to use it, what to use it on, what will, like, succeed. Do you see what I'm saying? I think there's an art there. Who's gonna take advantage of it and use it in the right way, and who's not?

Kelly Vaughn: 18:16

Yeah, it's writing prompts. That is becoming a skill, you know, you, we used to say that the best senior engineers know how to Google. They know how to ask the right questions. It's going to be very much the same with writing prompts as well.

Dan Lines: 18:27

You can think of it even like our parents' generation. It's like the most basic example: who can use Google and the internet to get information faster? And you could see, in that workforce, it was generational. So I'm talking, like, I don't know, 20, 30 years ago, you could see the people who adopted it and the people who didn't. I think of it, like, similar. Who can use, like, AI coding to their advantage and is, like, on top of it, knows when to use it, prompts, all of that? And who's gonna say, like, oh, I'm a senior engineer, I'm a principal engineer, I don't need this? Well, that's not keeping up with the times.

Kelly Vaughn: 19:06

Yeah. Back in my day, we coded everything by hand ourselves.

Dan Lines: 19:09

Yeah. I think of it like, otherwise you'll get left behind.

Kelly Vaughn: 19:13

Exactly. It's a skill you need to be leveraging, for sure.

Conor Bronsdon: 19:16

Is it on the individual though, or what should leaders be doing to either encourage or maybe mandate this use of these Gen AI tools to hopefully improve efficiency?

Kelly Vaughn: 19:27

I think it's important to understand when, as you said, when is the right time to use the tool and how can you support your team to introduce it in a safe environment. And what I mean by that is there are certain things you shouldn't be dumping into any sort of generative AI prompt, you know, like secret keys, environment variables, et cetera, those types of things. But I think, you know, as a leader, talking to your team about how they can leverage it and how they're interested in leveraging it is going to be something that'll be really helpful for them.

Dan Lines: 19:54

So if you're in, like, a VP or a CTO role, you also need to be communicating with your business about it. And what does that mean? Again, it costs a lot of money. So you're running this expensive experiment. What you owe back to the business is: is it helping us deliver more projects on time, yes or no? Is it improving or decreasing our quality? And is it allowing our engineers, who we do pay, I think, you know, a good salary compared to other jobs in the world, is it making them more or less effective? That's what I think you owe as, like, a VP or a CTO to the business. And then you can have that good conversation of: we should invest more or less into it, and hopefully it's more, but you've got to back it up. That's what I see from the top.

Kelly Vaughn: 20:48

Completely agree. Yeah. Also worth noting: if you're going to be starting to use generative AI or any copilot or anything in your work environment, you should probably have a policy around using any sort of generative AI. Which, I mean, you could technically use, you know, ChatGPT to generate that policy for you if you really wanted to. But, not saying I know somebody who did that.

Dan Lines: 21:11

I love that you brought that up. Cause I was like, the next thing on my mind now is: not all code is created equal. What does that mean? Hopefully it doesn't sound negative. There's some code that's more or less sensitive.

Kelly Vaughn: 21:23

Oh yeah.

Dan Lines: 21:24

There's some, right. So it's like not every pull request is the same. Now it's interesting because, like, not every piece of code is human-created. So now when you think about a policy, especially if you're in, like, a regulated industry or a huge enterprise, you may want to write some rules that say: this is how we treat GenAI-driven code versus human-driven code. And then you ask yourself, okay, do I have the tools and process in place to implement that policy?
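In practice, a rule like the one Dan describes, treating GenAI-driven code in sensitive areas differently, could be enforced as a simple check in the review pipeline. This is only a minimal sketch: the `gen_ai` label, the file list, and the path names are all hypothetical, not any real tool's schema.

```python
# Hypothetical policy check: flag Gen AI-labeled PRs that touch
# sensitive areas of the codebase so they get extra human review.
SENSITIVE_PATHS = ("auth/", "billing/", "crypto/")  # illustrative only

def policy_violations(pr):
    """Return changed files where Gen AI-generated code needs extra review."""
    if not pr["gen_ai"]:
        return []  # human-written PRs follow the normal review policy
    return [f for f in pr["files"] if f.startswith(SENSITIVE_PATHS)]

pr = {"gen_ai": True, "files": ["auth/login.py", "docs/readme.md"]}
flagged = policy_violations(pr)  # only the auth/ file is flagged
```

A check like this is the "tools and process" half of the policy question: the rule is easy to state, but it only works if PRs actually carry a reliable label.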

Kelly Vaughn: 21:59

Yeah, and I'm curious from a, like, a legal perspective, and obviously I am not a lawyer, but I'm curious, when you have something that you have patented, for example, but part of it was written with generative AI code, who owns that code? Who owns that unique combination of code at this point?

Conor Bronsdon: 22:17

That legal battle is going to be really interesting. I mean, we've seen the New York Times now suing AI companies, particularly OpenAI, over the use of their content in model training. I think there's so much more depth that we're going to see around who owns what part, do people get percentages off it, how's that kind of attribution piece going to work. And I noted a couple of interesting things that both of you brought up. One: tracking the effectiveness of your GenAI initiative. Dan, I know you're looking at PR labeling, and LinearB is starting to leverage that so that engineering execs can actually see, hey, you know, this was a PR that was generated with AI assistance or fully AI-built. And so I'm really interested to dive into that data as we start to have that come in. And the other piece that I'll mention is the compliance piece, Kelly. So I know this is a top concern for leaders. And the reason we know this is we just got our preview data back from a survey we ran with 156 different engineering leaders who are starting to leverage GenAI code. And for the vast majority of them, their number one concern is, you know, what's my regulatory risk here? What's my compliance risk? You know, is there a legal concern? So I think there's definitely a need, as you're both kind of pointing out, to say: let's actually track the effectiveness here, let's track how broadly this risk is sprawling for us. And companies that fail to do that are potentially at risk. We'll see how it goes.

Kelly Vaughn: 23:36

Yeah. We were talking about things you need to leverage as an engineer; there's a whole new, uh, sector for attorneys to focus on now, which is, like, leveraging AI and the legal battles around it. So that's going to be interesting to see.

Dan Lines: 23:49

I mean, I can't talk about AI without saying something, some, like, sci fi trippy thing, so I'll say it. It will be the funniest situation where, in court, the AI bot shows up and says, I own this code. I own this IP. It's not OpenAI that owns it. It's also not the company. It's like, I wrote it.

Conor Bronsdon: 24:12

Alright, so this is our first book reference, I think, of this episode. If you haven't read Foundation, Isaac Asimov's, like, incredible OG sci-fi novels: great time to do it. You're going to see some parallels to the future we're headed toward. Another interesting aspect of this is, you know, Kelly, you mentioned you're treating AI as, like, your junior engineer. Dan, I think you brought up this point earlier about the value of co-location for improving code quality. So, something we actually saw in the study on engineering benchmarks that LinearB did last year, and we'll be repeating every year: uh, we saw that co-located teams had more collaboration on pull requests, had more code review happening, and that drove a correlating higher code quality. So there was this positive correlation between merge frequency, code quality, and percentage of overlapping working hours. Which is a really interesting thing to consider as we also look back at that discussion we had about, you know, where are you locating your engineering teams? How are they working together? And I think this active workday for the team's contributors, where they can have this collaboration, and now the collaboration with AI, is going to be fascinating to understand, because there are just so many extra factors that we're adding in as we start leveraging more and more bots here. Dan, do you want to speak to how you're thinking about tracking the impact of Gen AI a little bit here? Because I'd be curious to have you expand on that and maybe get Kelly's thoughts on it.

Dan Lines: 25:32

Yeah, what we're doing today, it's more of a comparison. So what we're saying is: let's identify, classify, label all PRs or code changes that include GenAI-created code, and do a comparison versus code changes that do not. Because what a lot of companies, especially the bigger ones, are doing right now is they're rolling out Copilot or an equivalent to some part of the engineering organization, and not the whole thing. So once we have that comparison, or we have that label, now we're looking at data, and we're looking at things like bugs created, incidents in production, review times, does it take more or less time to review, and we're seeing the differences. And I actually think, Conor, we have a webinar coming up on this soon.

Conor Bronsdon: 26:30

I think it will have happened by the time this, this publishes, but we'll definitely link to that in the show notes because I think there's a lot of exciting stuff that could be happening here in the next few weeks.

Dan Lines: 26:39

So anyways, Kelly, it's done on like a comparison basis with or without, and then see the impacts on efficiency, quality, speed, that type of thing.
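Concretely, the with-versus-without comparison Dan describes might look something like this minimal sketch. Everything here is illustrative: the `gen_ai` label, the metric names, and the sample values are hypothetical, not LinearB's actual data model.

```python
from statistics import mean

# Each PR record carries a label for whether it includes Gen AI-created
# code, plus outcome metrics like the ones Dan mentions (all hypothetical).
prs = [
    {"gen_ai": True,  "review_hours": 3.0, "bugs_linked": 0},
    {"gen_ai": True,  "review_hours": 5.5, "bugs_linked": 1},
    {"gen_ai": False, "review_hours": 4.0, "bugs_linked": 0},
    {"gen_ai": False, "review_hours": 6.0, "bugs_linked": 2},
]

def compare(prs, metric):
    """Average a metric for Gen AI-labeled PRs versus the rest."""
    with_ai = [p[metric] for p in prs if p["gen_ai"]]
    without = [p[metric] for p in prs if not p["gen_ai"]]
    return mean(with_ai), mean(without)

ai_review, human_review = compare(prs, "review_hours")
ai_bugs, human_bugs = compare(prs, "bugs_linked")
```

The same split-and-average pattern extends to any of the metrics mentioned (incidents, cycle time, and so on) once the label exists, which is why the labeling step comes first.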

Kelly Vaughn: 26:48

Is it self reported?

Dan Lines: 26:49

The initial launch of it will be self-reported, unless you're working with a company that we have an integration with, and then it will be auto-generated. So, like, we're in the Copilot partnership program. There are other competing companies, like Tabnine, and they're opening up the AI information to us through APIs.

Kelly Vaughn: 27:13

That's super interesting. I am curious to see the long-term validity of the results, because, as you know, you might use generative AI to, let's say, create a function, but you're going to go in and make a bunch of changes to it. And so it'll be kind of a blur as to who generated what code, and how does that actually divvy up to where, perhaps, a regression was introduced in production.

Dan Lines: 27:38

Yeah, yeah, I totally agree. You said you were kind of using it for, did you say, like, writing tests and that type of thing?

Kelly Vaughn: 27:45

I've heard plenty of, uh, of people who do use it for writing tests. I personally am not writing code at Spot. And so most of mine is just writing SQL queries.

Dan Lines: 27:53

The other thing that we see is, depending on the leader of your engineering organization, there's kind of a thesis around why to use GenAI. So, for example, I do see some engineering organizations saying, we think we can increase our test coverage in these teams, because GenAI can, like, help, you know, write tests faster. Yeah. Cool. So what we're doing there is saying: that's your thesis for this team. Over the last three months, has the code coverage increased at a greater rate compared to before? So I think the other thing about it is, what is your expected output after you use it? Is it to move faster? Is it quality? Is it to deliver projects on time? So I think that's the other piece to the puzzle.
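Checking a thesis like "GenAI will grow our test coverage faster" reduces to comparing the rate of change before and after the rollout. Here is a rough sketch with made-up weekly coverage numbers, purely for illustration:

```python
# Hypothetical weekly test-coverage percentages for one team,
# before and after a Copilot rollout.
before = [61.0, 61.4, 61.9, 62.1]   # % coverage per week, pre-rollout
after  = [62.1, 63.0, 64.2, 65.1]   # % coverage per week, post-rollout

def weekly_growth(series):
    """Average week-over-week change in coverage, in percentage points."""
    deltas = [b - a for a, b in zip(series, series[1:])]
    return sum(deltas) / len(deltas)

pre = weekly_growth(before)    # roughly 0.37 points/week
post = weekly_growth(after)    # roughly 1.0 points/week
```

As Kelly notes right after, a faster rate alone doesn't prove causation; a parallel org-wide push on coverage would produce the same signal, so the result needs that caveat attached.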

Kelly Vaughn: 28:43

Absolutely. There's so much that goes on in engineering work as far as, you know, collaborating with others and what you're actually working on in third-party circumstances. Let's say your hypothesis is that if we use generative AI, we will increase our test coverage, but there's also a push within the organization to just generally increase test coverage. There are a lot of factors at play here that can muddy the results a little bit, but that does not negate the results. It's just something to take into consideration.

Conor Bronsdon: 29:16

So this brings up an interesting corollary here. We've talked a lot on this show about the need to drive visibility, automate, and then, you know, understand and drive improvement. And I think what's happening here is a lot of folks are actually jumping right into that automation step and saying, hey, we want to automate and improve, and now we're trying to go back and say, okay, we want to get visibility into what's actually happening. And I saw a LinkedIn post from another engineering leader with an interesting opinion on this. They said that the easier it is for you to create a dashboard like DORA, the less helpful those metrics actually will be to your organization, because being able to draw that clear line from a commit all the way to the code being used in front of users in production, since it's all automated and logged, probably means you're actually pretty good at your software delivery capabilities, and maybe your challenges are elsewhere on your team. I'm curious, what's your take on that perspective, given how much we're all diving into automation these days?

Dan Lines: 30:13

When I saw that, Conor, in the show notes, I had to read it a few times. Same. I'm like, okay, what does that mean, if it's really easy for you to measure your DORA metrics? So let's go back to why it would be hard to measure DORA metrics. Okay, let's look at change failure rate, something like that. In order to measure change failure rate, you have to have a way to identify that there was a failure. You have to have a way to understand the severity of that failure, is it a P0 or a P1? And you have to have a way to, let's say, link it back to the code that's been released. What does it take to measure cycle time? Well, you have to understand when a pull request is opened. Is it in draft mode versus review mode? You have to know when the review started and when the review ended. You have to know when the deployment happened. So after I read it over and over again, my takeaway was: if it's easy for you to measure your DORA metrics, it doesn't necessarily mean that you're fast, or you're of high quality, or you don't have bottlenecks. What it means is you've worked on your standards enough across your engineering organization that it's easy to track things like an incident, or when coding starts and ends, and you probably have a lot of your teams working in a similar style. Whereas if it's really hard to measure this, it probably means you don't have a lot of standardization in place. That was my takeaway from reading that quote.
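The two measurements Dan walks through can be sketched in a few lines. The event fields and class names here are illustrative assumptions for the example, not any vendor's actual schema; real tooling would pull these timestamps from Git, the PR system, and an incident tracker.

```python
# Minimal sketch of the two DORA measurements discussed above.
# All field names are hypothetical, chosen for illustration only.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Deployment:
    deployed_at: datetime
    caused_incident: bool  # requires linking incidents back to the release

@dataclass
class PullRequest:
    review_started_at: datetime  # time spent in draft mode is excluded
    deployed_at: datetime

def change_failure_rate(deployments: list) -> float:
    """Share of deployments that led to a production failure."""
    return sum(d.caused_incident for d in deployments) / len(deployments)

def avg_cycle_time_hours(prs: list) -> float:
    """Mean hours from the start of review to deployment."""
    seconds = sum((p.deployed_at - p.review_started_at).total_seconds() for p in prs)
    return seconds / len(prs) / 3600
```

The hard part, as Dan says, is not this arithmetic but having standards in place so that each of those events is reliably identifiable in the first place.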

Kelly Vaughn: 31:54

I love that. I echo that sentiment. I think the more you are focused on measurement, the more intentional you're going to be about how you approach measuring it, and in order to measure it, you have to introduce standards, you have to introduce processes, which by default is going to improve the way your team works.

Conor Bronsdon: 32:12

I mean, I think some devs would push back on that standardization piece, but that's a broad debate for developers, who I think often try to go their own way. And because automation is becoming so much easier, and standardization is becoming so much easier with these digitally enabled tools, we're just moving in a direction where we have clear steps for how a software development process can work, if you choose to go that route. So, let's say you're an organization that wants to do that standardization. You're saying, hey, I want to enable my devs, but within this framework. I want to get visibility. I want to help them automate away annoying tasks, which is what we're seeing AI do a lot of in the DORA research, for example, where developer experience is improving when AI is enabled at an organization, because they can automate away tests and these other tasks they don't really want to do. How should organizations start that journey of standardization?

Kelly Vaughn: 33:05

Gotta collect data. You gotta have a really clear understanding of the problem statement first. And you have to understand also how this is going to map back to the business, because if you're going to be investing any time, any resources into standardizing anything, you need to understand what problem you're solving in the first place. And then, with that, you need to understand where the buy-in is going to be, where you're going to get pushback, and where you might be introducing process for process's sake, which could actually be detrimental to the developer experience. So it's going to be a lot of push and pull, it's going to be a lot of open conversations, getting feedback regularly. You have to be iterating on it. It's not going to be a "here's our problem, here's the solution, done, we're good to go." It is something that's going to have to consistently be built upon.

Conor Bronsdon: 33:51

Dan, I see you're nodding your head. Seems like that's spot on for you too.

Dan Lines: 33:53

I think it's well said. I think it's just an iterative process. Yeah, you gotta measure, you gotta have the data, you gotta look at the data, iterate on the policy. We talked about policy at the beginning of the pod. Yeah, go unleash Copilot on all of your engineers, but have something in place to understand if the code is GenAI-generated. Maybe at first you want to have, I don't know, multiple reviewers, or whatever you want to do. Just put a few lanes in place so everyone can drive fast together instead of crashing into each other. That's the way that I think about it.

Kelly Vaughn: 34:29

Yeah, it's important. You cannot boil the ocean. You cannot do everything all at once. So pick one or two things to start with. And that's why it's an iterative approach as well. You're going to continue to build upon those processes and standards as you're building them, and so you have to keep revisiting what you're working on.

Conor Bronsdon: 34:44

Well, a crucial question here then is: should I build that visibility myself, or should I buy it? And Kelly, I know you recently weighed in on this great build versus buy debate in your excellent Lessons in Engineering Leadership newsletter. To level set, I'll use your own words from there: the question is straightforward. Do we build this software in-house, or do we use a third-party software product? Kelly, how do you recommend engineering leaders think about this and assess when to build versus buy, particularly when it's something as crucial as getting visibility into your software development pipeline, or maybe your developer experience metrics?

Kelly Vaughn: 35:15

The way I tend to think about this is, first off, and I write about this in the newsletter, understanding the needs and goals that you're trying to reach here. And we're talking about that problem statement right now. How crucial is this to the core inner workings of your organization? And then, what is the technical assessment of your team to be able to implement something like this? Who has the experience to actually build these processes now? If that's a gap, then I would absolutely consider buying a solution versus building something. You can always, and I talk about this towards the end of the piece, start with buying a solution with the intention of building something in-house. We do this all the time. So long as you're not getting yourself into the trap of "oh, we'll build it at some point," and then you actually never do. Because if you're using somebody else's software, you understand there are going to be limitations, in the sense that you are building within the confines of what they're offering you. And so, is this software going to meet your goals, and for how long? You could outgrow it as well. So, much like everything else here, it's an iterative process. When you're having this build versus buy debate, especially on the topic of building standardizations and putting these processes in place, continue to iterate on it. Continue to check whether the tooling you have right now still fits, especially in a space that is developing so rapidly, like the GenAI space. You're going to have to keep revisiting this.

Conor Bronsdon: 36:36

Dan, what's your perspective on the build versus buy idea, particularly as a founder who, I mean, you have to convince teams that they should buy LinearB's platform and not build it in-house. Why should they, or why should they not, and in which cases?

Dan Lines: 36:50

First of all, I really want to read what Kelly wrote, so everyone should check that out. I think nothing in the world is black and white. I'm sure there's an in-between, but this is how I would explain it. First, ask yourself: is the thing customer-value facing or not? So, for example, if the answer is no, at the end of the day my customers will not receive value from it, then I think buying is okay. And I'll give an example. Do customers get value at LinearB from email? No, they don't. We buy email from Google. We're not in the business of building an email solution, okay? I don't want to create something like that; that would take us back to, like, the 70s or 80s or something. Now, we provide metrics. So, for example, Conor, what we would say to our customers is: you're not in the industry of creating metrics. Your customers don't care about that. It helps you improve internally. So buying is the right move. Anything of value that you particularly provide to your customer, I think you should build that yourself. That's your IP. That's why you exist as a company. The only time I haven't done that, where I bought instead, is when we have something that's really, really nice customer-facing value, but we're looking to add on to that value and we don't have the in-house expertise to do so, or the timeframe I need to supply this to the customer is too small. In that situation, what's worked best for me is acqui-hiring. I buy the technology from a smaller company, but the key is that I want to bring the engineers with me, and I want them to be the best engineers possible. That's worked really, really well.

Kelly Vaughn: 39:05

You know, as you said, it's not black and white. There are always going to be situations where we find it's better to buy something at first because we don't have the resources in-house to actually build it out right now, and, you know, we'll eventually get back around to building it once we have a timeframe set. That's why I really stress: if you're going to go the buy route with something you can build in-house, and you're signing a one-year contract, there's your timeline. Make sure you're working within that one-year contract to research and build and test a replacement for what you purchased.

Dan Lines: 39:39

Yep. I'll give you another non-black-and-white thing. With LinearB, right, you get this metrics program. Some of our best customers had their own metrics program, let's say for a year. So they started by building their own. And then when they came to LinearB, they were highly educated. They knew what they were shopping for. They understood: oh, I get a lot of this metrics stuff out of the box, but I can combine it. They're basically smarter buyers, smarter customers, who understand, okay, we took it as far as we could, and now it's worth buying. So, I mean, I like that as well. Start experimenting first in-house. Otherwise, you're like a naive shopper.

Kelly Vaughn: 40:24

For you, that is an ideal customer. Because they already understand the value of your product.

Dan Lines: 40:29

They understand, they can expand upon it, and they make a smart decision of what vendor to go with because they know what they need.

Conor Bronsdon: 40:36

And part of that need is this change we've seen in the role of the engineering leader over the last couple of years: that engineering leaders today need to be stronger business leaders. They need to be able to report to the executive team on, you know, here's the value we're delivering, here's the efficiency we're delivering. Particularly as CFOs and others are starting to look really closely and say, hey, is this org as efficient as I want it to be? Where can we cut costs? And Kelly, you highlighted at the start of this morning this great tweet by Taylor Poindexter on Twitter, that's engineering underscore bae, saying: I'm helping a friend's job search, and I saw a hands-on Director of Engineering role at a company that's 200 people. Expected to manage several teams, oversee technical direction, code 15 percent of the time, and help with company-wide strategy, recruiting, etc. Max total comp of 180k. Would you do it? I'd just love to get your thoughts, Kelly, because I know this hiring piece is a huge one.

Kelly Vaughn: 41:31

Absolutely. My first response is LOL, absolutely not. I would not go for that job. I think if you removed the coding piece of it, I might have time to do the rest of it. Now, kind of going back to your initial question about being a stronger business leader, I think that is such an incredible value to have in an engineering leader, if they understand how the business works and how business decisions are made. Why? Because as you're reporting up to senior leadership, trying to make a case for purchasing something or hiring more people, the more you understand about the business, and about how the decisions you're making and the things you're requesting are going to improve the business, the easier a case you're going to make. So I think that's one of the most valuable things you can find in a leader as you're managing up. It's also really helpful for your team: if you can explain the business reasons why decisions are being made, then you can explain that down as well.

Conor Bronsdon: 42:26

And Dan, I know you've talked about this as the dual mandate of the engineering leader, being both a technical expert and a business leader. What do you think about this kind of, you know, job posting, for example?

Dan Lines: 42:36

I think being a director is one of the hardest jobs in any organization, and it's exactly because of what that role description said. They want you to be a manager and understand the business, but they also don't want you to get too far away from the code, where you can't have an impact. That's probably why it says, yeah, 15 percent of the time you'll be coding. I'm translating that as: we want you to stay close to engineering so you know what's going on. I've seen this with every company that I work with; that director role is one of the hardest roles to be in, because they want you to do both. Now, Conor, a little bit differently, when we think of a VP role, where you're the head of engineering, I think the new thing that's been added on is that not only are you responsible for the execution, meaning is the software getting out to production on time with high quality, which is where the industry first started, you also need to know your resource allocation. You need to know everything that has to do with what the CEO wants you to deliver. That's what the dual mandate is: business alignment plus execution. And that's how I think it's changed, I don't know, over the last 10 to 15 years for that VP role.

Conor Bronsdon: 43:59

I think that's a great point to end on, Dan. I know we could keep going for another hour or two here, but these episodes are long enough already, so hopefully everyone stuck with us through this one. Kelly, Dan, thanks for joining me for this really wide-ranging discussion. It's been a ton of fun. Listeners, if you're still here, remember: if there's a topic you want to hear Kelly and Dan discuss, let us know on social media, via our Dev Interrupted Substack, or, I guess, tweet at Kelly and me. We intend to do more of these types of episodes, and we'll see you all next week. Until then, you can find more of our content on LinkedIn and via our Substack at devinterrupted.substack.com. And don't forget to check out Kelly's newsletter as well, at engineeringleadership.xyz. Thanks again, Dan and Kelly. Thank you. Thank you.