"Just having software engineering intelligence... that used to be enough, but that's no longer the case... the difference is we now need to take that information and we need to apply immediate actions."
Imagine saving as much as 75 days of work within a six-month period, all through intelligent automation.
Building on last week’s discussion about the critical shift from passive metrics to active productivity, host Ben Lloyd Pearson and LinearB co-founder Dan Lines look at what that shift delivers in practice: a 19% cycle time reduction and significant reclaimed engineering time. They move beyond the common narratives surrounding AI to present actionable success stories and strategic approaches for engineering leaders seeking tangible results from their AI initiatives.
This concluding episode tackles how to safely and effectively adopt AI across your software development lifecycle. Dan explains the necessity of programmatic rules and control, detailing how LinearB's gitStream technology empowers teams to define precisely when, where, and for whom AI operates. This ranges from AI-assisted code reviews with human oversight for critical services, to enabling senior developers to make judgment calls, and even automating merges for low-risk changes. Ben and Dan also explore the exciting future of agentic AI workflows, where AI agents could manage tasks from design and Jira story creation to coding and deployment, making developer control even more critical.
Transcript
(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)
[00:00:06] Andrew Zigler: Welcome everyone to Dev Interrupted. I'm your host, Andrew Zigler,
[00:00:10] Ben Lloyd Pearson: and I'm your host, Ben Lloyd Pearson.
[00:00:12] Andrew Zigler: This week we're talking about how the Pentagon is kick-starting its own kind of Y Combinator and looking for the next Palantir on college campuses, Reddit's lawsuit against Anthropic for data scraping that's hitting the news, and finally, an engineer's reflection on the AI skeptics in his life that gave us both a good chuckle.
[00:00:31] Andrew Zigler: Ben, what are we cracking into first?
[00:00:34] Ben Lloyd Pearson: Yeah, let's just start right at the top and talk about what's happening in the defense industry.
[00:00:38] Andrew Zigler: Yeah. So the Pentagon, the defense sector of the US, is launching a major initiative to increase the number of software startups that are incubated for the purpose of serving the military. So we're talking about startups that are funded for military purposes and incubated in a very similar way to what we've seen with startups for consumer businesses, B2B [00:01:00] businesses, historically through things like Y Combinator.
[00:01:02] Andrew Zigler: Right. It's an attempt by the government to modernize internal things running on old systems. It's actually a theme we've been covering quite a bit here on Dev Interrupted; we talked recently about changes at the FAA and at airports around updating technology there. This was a really interesting shift into a tech-focused world, incubated by the defense sector.
[00:01:24] Andrew Zigler: So, Ben, what did you think?
[00:01:25] Ben Lloyd Pearson: Yeah, I saw they were offering some startup investments for recent grads that were wanting to launch their own company that supported both the defense and the commercial sector. But you know, one thing that's really important to remember whenever stories like this come out is that software engineering as a profession is still pretty young, relatively speaking. You think about other types of engineering, like civil engineering or architecture or material science; all these other types of engineering have a much more storied and established history [00:02:00] around them. And you know, I've previously brought up how I would love to see far more public investment into building more robust software foundations for society. And I think we're increasingly heading to a world where software does end up being the ultimate defense in a lot of situations. We're all familiar with the story of how software has eaten the commercial world, and I think it's going to eat our governance systems as well, just probably on a slower timeline. You know, there's been a lot of modernization effort happening in the US federal government in the last decade or so. If you're familiar at all with the Air Force, they went through a really massive digital transformation during the COVID era.
[00:02:42] Ben Lloyd Pearson: That ended up then becoming a model for a lot of other organizations within the Department of Defense. And I've actually met some of the people who have started and worked for some of these more defense-oriented startups; they operate surprisingly similarly to your typical tech company. You know, the same types of people you would [00:03:00] see in a Silicon Valley tech company, with the same approach to building software. So I think that's a great thing. I think this is probably gonna be a good investment over the long term, and I hope we see some cool companies come out of it.
[00:03:11] Andrew Zigler: I think there's a lot of opportunity to innovate, especially within the government, so I'm interested to see what comes of it. You know, related to that, you talked a bit about modernizing defense infrastructure; there's a lot of need for it, right? Well, there was a fun tidbit that came out recently in Japan about how they modernized a part of their own social infrastructure. Maybe that's a fun news bit that resonates with your own interests there, Ben, talking about the opportunities in society to improve things. Because when I read this article, all I could get was immediate flashbacks to living in Japan, where I was for two years after college, and the dreaded experience of having to order
[00:03:49] Andrew Zigler: like a movie ticket or a concert ticket, or buy furniture, from the vending machine. And the process was always so daunting, because putting in my address, which was like four sets of numbers and three sets [00:04:00] of kanji, a really complex code, was very hard for me on the machine. But in Japan,
[00:04:06] Andrew Zigler: they've just launched a digital address system that crunches all of that hard-to-understand, hard-to-input information into a simple seven-digit code that folks can use on these types of machines to do everyday life things like pay their bills, and order concert tickets if you were me in Japan.
[00:04:24] Andrew Zigler: So this was a fun tidbit that came out. It's a really great effort from a government that typically hasn't done a lot of technological innovation; Japan is actually not really known for having this type of system. So it's really great to see this quality-of-life improvement for everyone there.
[00:04:38] Ben Lloyd Pearson: Yeah, I love how it's just like a low-hanging fruit to help people out, you know? And I'm sure for both foreigners that are visiting and long-term residents, there's probably a lot of value to these types of improvements.
[00:04:49] Andrew Zigler: For sure.
[00:04:50] Ben Lloyd Pearson: Yeah. So let's talk about something that's very near and dear to my heart:
[00:04:53] Ben Lloyd Pearson: Reddit. What's the story that's going on there?
[00:04:55] Andrew Zigler: Oh yeah. So Reddit, we're all familiar with the great field of Reddit. You know, [00:05:00] Reddit has built a $2 billion business on user-generated content that's categorized by any kind of topic you could think of. It's a treasure trove of information for AI. We've seen this already play out over the last several years as foundation models have come into existence, wanting to get their hands on
[00:05:18] Andrew Zigler: Reddit, because Reddit has these naturally organized sets of information that are natural communication between people on highly intricate topics. Ultimately, Reddit is a place where that information and data has gathered; it's people's conversations over, you know, decades even. But something interesting has happened: Anthropic apparently has been making a bunch of unauthorized accesses to Reddit's servers to get at this information.
[00:05:43] Andrew Zigler: And this is after a licensing deal fell through, you know, and so Reddit is not getting paid what they think is their fair share. Now, Reddit hasn't really done anything to make that information AI-ready for consumption; there's no cleaning or curating or enrichment [00:06:00] that they do on there, they just hand you all of it,
[00:06:02] Andrew Zigler: the spam and the bots and all, the snake eating its own tail. So Reddit didn't really write the content, it didn't enhance it, but now it wants rent from the innovators that want to build on top of it. It's a really interesting evolving play in the power economy.
[00:06:18] Ben Lloyd Pearson: Yeah. What about model collapse?
[00:06:22] Andrew Zigler: I mean, if anything, it would happen through Reddit being trained on Reddit.
[00:06:26] Ben Lloyd Pearson: Yeah, exactly. Exactly. I've been a Reddit user for well over a decade at this point, and if you're a long-time Reddit user, you may know that it's pretty easy to pile on the hate right now with them, 'cause they've made a lot of decisions recently that have upset a lot of their users. But I wanna point out something really important, and that is: Reddit is probably one of the most astroturfed platforms on the internet, and I think this problem gets worse every single year. So if you're like me and you've normalized this practice of turning to Reddit to find advice [00:07:00] for local communities and for niche problems and all of that, which historically it's been very good at, a very valuable platform for that, I think whenever you do that today there's a pretty good chance you're now reading bot-generated content, because it's been largely unregulated on that platform. But to get back to the story: Reddit, like many other companies, is being disrupted by all of these new AI tools that are coming out.
[00:07:24] Ben Lloyd Pearson: Like, think about how there are probably lots of people now going to all of these GPT services to ask the types of questions they might have taken to Reddit in the past. And companies like Reddit may be facing a future where their primary value proposition is all of the data they have to train those AI models. And, you know, we mentioned model collapse; I suspect, with just how much bot-generated data is on Reddit, there's always a risk that this data is not as valuable as you might expect. It could actually end up becoming a [00:08:00] liability if you're facing model collapse because your data isn't good enough.
[00:08:04] Ben Lloyd Pearson: And I think we should follow this story either way. You know, it's all alleged at this point; Anthropic denies scraping this stuff outside of the terms of service. So maybe they
[00:08:13] Andrew Zigler: Yeah.
[00:08:13] Ben Lloyd Pearson: decided that the data just wasn't valuable enough to continue using.
[00:08:17] Andrew Zigler: Yeah, I think it'll be an evolving story, so I'll be interested to see how Reddit continues to chase down folks using their data. I'm sure this won't be the last one.
[00:08:25] Ben Lloyd Pearson: Yeah, yeah. And the last story we have is a bit of a fun one. It's not really news, it's more of an editorial. So what do we have for this, Andrew?
[00:08:33] Andrew Zigler: Yeah, I love this article. We came across it on Hacker News. This is an article from Thomas Ptacek, an engineer at Fly.io, and he wrote this really candid piece, "My AI Skeptic Friends Are All Nuts," laying out several key arguments about the skeptics in his life, the arguments they have against AI and using it in coding, and his personal responses, grounded in his experience as someone using these [00:09:00] tools. I found it to be a really fun reflection, because some of this is obvious in terms of level setting, making sure
[00:09:06] Andrew Zigler: you're even discussing the same thing going in, understanding that it's different using ChatGPT versus using an agent in your IDE.
[00:09:14] Ben Lloyd Pearson: Yeah.
[00:09:14] Andrew Zigler: There's a lot of level setting that he does in this article that can help folks navigate these conversations, if you find yourself constantly starting back at the starting point in skeptic conversations. But he also balances it all throughout with humor: the silliness of why you would be against innovation and trying out new things as an engineer and builder. I found this one fun to dig into, and I recommend everyone give it a read.
[00:09:37] Ben Lloyd Pearson: Yeah. And I like to routinely reflect on where I was about a year and a half ago, when I first started really using AI as a part of my day-to-day work, and just how far the technology has evolved in that short 18 or so months. There are a lot of overhyped aspects of AI.
[00:09:56] Ben Lloyd Pearson: Like, you know, there are all these claims of artificial general [00:10:00] intelligence, AGI, being right around the corner. Those may or may not come soon, or ever; it's hard to tell. But you really can't stick your head in the sand and claim that AI is not gonna disrupt many aspects of work, because it's already happening today. I keep hearing a lot of comparisons, in this article as well, between what's happening now and the adoption of the internet and of mobile computing in the past. I think it's a pretty good comparison, 'cause the transformation we're going through now mimics those periods in many, many ways, and we're gonna come out of this with new paradigms in how we build and consume technology. Like, just think about how different the technology landscape was before everyone had a computer in their pocket and before everyone's computer was connected to this immense
[00:10:47] Andrew Zigler: Mm.
[00:10:48] Ben Lloyd Pearson: source on the internet.
[00:10:49] Andrew Zigler: Yep.
[00:10:50] Ben Lloyd Pearson: You know, the reality is that agentic, autonomous workflows are here, and I think now, like today or the next year or so, is the time [00:11:00] that these tools are really gonna prove exactly how much productivity improvement they're gonna create. We're kind of in the wild West right now; it's very chaotic, there's a lot going on.
[00:11:10] Ben Lloyd Pearson: I think we're slowly gonna come under more and more control over the next couple of years as AI products become a lot more hyper-focused on solving real-world problems rather than these general-purpose chatbots that we have today. As much as I love them, that's only one aspect of where all this is going.
[00:11:29] Andrew Zigler: Yeah, exactly. And as AI's involvement in the things that we use matures, it's also going to fill in those niches and parts of the process that we haven't seen it really resolve yet. It's gonna rise, it's gonna lift all of those boats, once we can figure out how it's actually going to play with all those parts of a process.
[00:11:47] Andrew Zigler: So Ben, what's coming up next on the pod?
[00:11:50] Ben Lloyd Pearson: Yeah, so this week, Dan Lines and myself sit down to continue our discussion about the future of AI-driven software development. We're gonna dive into a lot of really [00:12:00] interesting topics, so make sure you stick around after the break.
[00:12:04] Andrew Zigler: AI is creeping into every part of the SDLC, but how far have teams really gone? LinearB surveyed over 400 devs, many of them, like yourself, from the Dev Interrupted audience, and found that 67% are already using AI to write code. But is that creating opportunities or bottlenecks?
[00:12:23] Ben Lloyd Pearson: Our new DevEx guide breaks it all down, including adoption patterns, pitfalls, and the AI collaboration matrix that charts your own team's journey with AI. Check out the guide, and maybe even the panel I hosted about it last week with Atlassian, AWS, and ThoughtWorks; I'll drop the link in the show notes. Hey everyone, welcome to part two of our conversation on developer productivity. Once again, I've got Dan Lines here with me in the studio. And if you haven't listened to part one of the conversation, we covered the shift from engineering intelligence to developer productivity insights and developer [00:13:00] experience,
[00:13:00] Ben Lloyd Pearson: so go check it out if you haven't already, because it's full of great insights that really set the stage for today's conversation, where we're gonna be more forward-looking and think about how AI is impacting the engineering teams that we talk to every week. So Dan, it's great to have you back for this conversation.
[00:13:18] Dan Lines: Yeah. Awesome to be back. Thanks, Ben.
[00:13:20] Ben Lloyd Pearson: Yeah, so AI is all over the place. We can't seem to get through a single week anymore without discussing it on some level on this show. But it's really hard sometimes to see through the hype to find the actionable success stories, and that's where I wanted to bring you in today, Dan, because we've got some big views here at LinearB about where AI is going, and I wanna talk about that.
[00:13:47] Ben Lloyd Pearson: So, you know, everyone's talking about it, but very few people have a concrete plan for the future. What should engineering leaders be doing right now to prepare for AI?
[00:13:57] Dan Lines: Wow. What a big question, [00:14:00] Ben.
[00:14:00] Ben Lloyd Pearson: I like to
[00:14:01] Dan Lines: Okay. Yeah. You like to start big. Maybe let's break down: where are we all at with AI? Where are engineering leaders at with AI? Maybe what's on their mind, something like that. So, one is: most of the engineering leaders that I'm speaking with, I would say they are dabbling.
[00:14:26] Dan Lines: Maybe that's the best word for it. Or experimenting,
[00:14:28] Ben Lloyd Pearson: Yeah.
[00:14:28] Dan Lines: with AI in one form or another. Usually they'll say something to me like, I bought some Copilot licenses, maybe for everyone, or maybe for a few teams. And that's the interesting part: it almost feels like they did that because they felt the pressure to do it, so that they could say they're doing something with AI.
[00:14:54] Ben Lloyd Pearson: I think their developers would revolt if they didn't give them
[00:14:57] Dan Lines: Yes.
[00:14:58] Ben Lloyd Pearson: you know, in this day and [00:15:00] age.
[00:15:00] Dan Lines: Right. But on the flip side of that, it doesn't always fit into a cohesive strategy, let's say, of what they're doing with AI or why they're doing it. And of course there's kind of an assumption that if I provide AI to all of my developers in their IDEs, it could be Cursor, it could be something else, like,
[00:15:25] Dan Lines: good things will happen.
[00:15:27] Ben Lloyd Pearson: Yeah.
[00:15:29] Dan Lines: But beyond that, I think there's a lot of room to understand what the next step is. And then I would also say, just because we purchase a bunch of these licenses and developers are using them in the IDE, it doesn't mean that it's actually having a positive effect on productivity or the business.
[00:15:55] Dan Lines: Now, the other thing that I see a little bit is: okay, I want to roll [00:16:00] out AI maybe everywhere in my SDLC, but I don't know, do I need control around it? I can't actually roll it out. Maybe I just wanna put it on a few services. How do I get visibility into it? Can I write rules around it? So there are the control aspects of this as well.
[00:16:22] Dan Lines: So there's kind of a lot to unpack here. Where do you wanna take it? Where should we take this talk?
[00:16:27] Ben Lloyd Pearson: What we're kind of getting at here is that AI is clearly having an impact on just about everyone. There is a lot of hype around it, and there are many times where organizations adopt AI, get very hyped up about it, but don't do it in some sort of structured, coordinated rollout.
[00:16:47] Dan Lines: Yeah.
[00:16:47] Ben Lloyd Pearson: Then it doesn't live up to the expectations, and there's almost like a backlash where everyone becomes disappointed in AI, and then, you know, suddenly you're fighting pessimism around adopting it. And often [00:17:00] it's tied to top-down directives as well that don't align with the bottom-up need. So maybe we can start with how LinearB is approaching this, because I think that's probably where we have sort of a unique perspective. I think one of the biggest concerns around AI is how do you safely adopt it across your organization? Maybe we can just start there.
[00:17:20] Ben Lloyd Pearson: Like, how can you make sure that the next step you take into AI is gonna be done safely?
[00:17:26] Dan Lines: Yeah.
[00:17:27] Dan Lines: Okay. You need rules around it. I think it's actually pretty simple. So, you know, with LinearB we have our gitStream technology, and that gitStream technology essentially allows you to programmatically create rules around when AI runs, where it runs, who it runs for, with full monitoring end to end.
[00:17:50] Dan Lines: So usually when you think about control, you might think about policy rules, programmatic control, and that's what LinearB offers. Now, [00:18:00] for example, let's give something more concrete than that. Let's say that you have deployed Cursor or Copilot and you see more code is being created, more pull requests are being created, and therefore
[00:18:17] Dan Lines: there seems to be a little bit of throughput gain. Now we have all of this, let's call it gen-AI-assisted code, right? You still have a human there, a developer there. Now it's made it to the next step of the SDLC, which is the pull request and the review process and the merge process, and we're stuck a little bit.
[00:18:39] Dan Lines: Because we have a large amount of code that's now making it there, and it's almost like the pipe was opened on the development side, but the pipes that actually get the code out to production in a controlled and safe way have remained the same width. Now what LinearB allows you to do is deploy [00:19:00] these rules, these policies, and have the control.
[00:19:03] Dan Lines: once the pull request is open, all the way to merge. And that's doing two things for you. It's actually making kind of that promise of AI and the efficiency come to life, but also you can start saying things like this, and this is all codeable within gitStream:
[00:19:25] Dan Lines: for these services that are low risk, I'm actually going to allow an AI review to run, and if everything passes with a looks-good-to-me, because, for example, these services are maybe more in an incubator state or an experimental state and I need to just run fast, I'm going to let that LGTM go through,
[00:19:46] Dan Lines: and then I'm actually going to merge that pull request. That's one rule that I can have. Now, in other services, I might want much more control than that. I might be in a situation where, [00:20:00] okay, the PR is opened, the AI code review is going to run, and what I'm actually going to do with gitStream is still have a human reviewer, of course. But that human reviewer, that developer reviewer, is actually gonna be sent all of the information from the review that the LinearB AI code review has already done, and it'll say something like this: Hey Ben, I actually
[00:20:26] Dan Lines: did 80% of this review, and I feel really good about it. I need you to review this remaining 20%. Okay, now that's a nice level of control. And the third one, and I've seen this kind of in an in-between state: let's say, Ben, you opened up a PR, and some of that was aided by
[00:20:47] Dan Lines: Copilot. LinearB is gonna
[00:20:49] Dan Lines: have a rule in your service area that says: Hey, the AI code review has run, but Ben, since you're a senior developer, you are going [00:21:00] to decide now whether you're gonna bring another developer in to review or not. You have the power. So these are a few, I think, kind of down-to-earth examples of actually coordinating and controlling where AI is able to run, have more power, have less power, and you can kind of turn the knobs between
[00:21:22] Dan Lines: efficiency and your level of control.
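To make the shape of these rules concrete, here is a minimal sketch in Python of the decision logic Dan describes. gitStream expresses rules in its own configuration files, so nothing below is LinearB's actual syntax; field names like `service_risk` and `ai_coverage` are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    service_risk: str       # "low" or "high"; an assumed classification
    ai_review_passed: bool  # the AI reviewer returned "looks good to me"
    ai_coverage: float      # assumed share of the diff the AI reviewed confidently
    author_is_senior: bool

def route_pr(pr: PullRequest) -> str:
    """Hypothetical routing mirroring the three rules from the conversation."""
    # Rule 1: low-risk or experimental services, an AI LGTM can merge directly.
    if pr.service_risk == "low" and pr.ai_review_passed:
        return "auto-merge"
    # Rule 3: senior authors get the judgment call on pulling in a reviewer.
    if pr.author_is_senior:
        return "author-decides-on-extra-reviewer"
    # Rule 2: everything else gets a human reviewer, pre-briefed with what
    # the AI review already covered.
    return f"human-review (AI pre-covered {pr.ai_coverage:.0%} of the diff)"

# Example: a junior developer's PR on a critical service still goes to a human.
print(route_pr(PullRequest("high", True, 0.8, False)))
```

The point of the sketch is the knob-turning Dan mentions: each branch trades a little human attention for a little speed, per service.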
[00:21:26] Ben Lloyd Pearson: Yeah, the word control, I think, is gonna be the word of the year, maybe, for 2025. You know, you really think about how the perception of AI started: everyone started with chat interfaces, predictive typing, these natural-language-to-code tools. But that's really only a tiny portion of the actual software development process, and I think many orgs still view it in that first way. In reality, it's manifesting more like automated workflows. Like, even within the
[00:21:56] Dan Lines: Yep.
[00:21:56] Ben Lloyd Pearson: IDE, when you look at how IDEs like Cursor operate, everything [00:22:00] is becoming an automated workflow now. So, you know, on that thread, where do you see the future of these agentic AI workflows heading?
[00:22:10] Dan Lines: Yeah, I mean, obviously no one can predict the future, so this is just what I'm seeing, and maybe giving a hint, without fully being able to say it, of what LinearB has coming next. But let's say right now we're in the era of code assistants and agents in the IDE, code assistants and agents, like I said, in the PR process, and then the deployment process.
[00:22:37] Dan Lines: But there's still developer involvement and there's still a lot of human control. That's, I think, the era that we're in right now. But quickly, I kinda see two other eras progressing here. The next era would be something like: hey, your AI agent actually produces all of the [00:23:00] code,
[00:23:00] Dan Lines: let's say, and autonomously does the review and makes a decision on whether it can go out to production or not. And maybe in between there, there are still some developer and human checks and balances, but it's more autonomous end to end. And then maybe after that, again giving hints of where LinearB's capability set is headed,
[00:23:25] Dan Lines: and it might sound a little bit futuristic, but I think there will be an era where agents are saying something like: Hey Ben, I created a design for you. I created a Jira story for you. I actually created the code for that story. Does it look good to you? I'm ready to release it for you. So when you're talking about a workflow, I can see us headed to a point where all areas of the SDLC start [00:24:00] being more AI- and agent-driven, end to end. And then the control becomes more important, because now you can start controlling: okay, where do I want it to stop and have
[00:24:12] Dan Lines: a human review, and where am I gonna allow it to go on its own, to actually move as fast as possible.
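The end-to-end flow Dan describes, agents working every SDLC stage with developer-chosen stopping points, can be modeled as a pipeline where each stage carries a human-gate flag. Here's a minimal sketch under that assumption; it's an illustration, not a description of any LinearB product, and every name in it is invented.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Stage:
    name: str
    run: Callable[[str], str]  # the agent's work at this stage
    human_gate: bool           # the control knob: pause here for sign-off?

def run_pipeline(stages: list[Stage], artifact: str) -> str:
    """Walk the SDLC stages, pausing wherever a human checkpoint is configured."""
    for stage in stages:
        artifact = stage.run(artifact)
        if stage.human_gate:
            print(f"[{stage.name}] paused: waiting for human sign-off")
    return artifact

# Example wiring: design, story creation, and coding run autonomously,
# while review and deployment stop for a human, matching the
# "where do I want it to stop" framing.
pipeline = [
    Stage("design", lambda a: a + " -> designed", human_gate=False),
    Stage("story",  lambda a: a + " -> story written", human_gate=False),
    Stage("code",   lambda a: a + " -> coded", human_gate=False),
    Stage("review", lambda a: a + " -> reviewed", human_gate=True),
    Stage("deploy", lambda a: a + " -> deployed", human_gate=True),
]
print(run_pipeline(pipeline, "idea"))
```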
[00:24:19] Ben Lloyd Pearson: Yeah. And in the past here on Dev Interrupted, we've had some content that shows how a lot of AI adoption can be mapped somewhere between human-produced assets versus AI-produced assets, and human-managed process versus AI-driven process. And I know, just personally speaking, there have been aspects of my personal workflow that I have four or five x'd through AI. But again, it's only one aspect of my workflow, and gradually, over time, it feels like more and more aspects, more and more tasks, more and more things that I have to engage with are gonna get wrapped into these AI workflows. At some point, you [00:25:00] really do start to see these five to ten x improvements in specific areas.
[00:25:05] Dan Lines: You are right. And maybe, if you're thinking about how to get a 100x improvement or something crazy, or a 10x improvement, it's those different aspects of the workflow actually talking to each other, if that makes sense. Again, someone from the product side saying, hey, I have a design, or I have an idea, all the way out to, okay, an agent has coded that idea
[00:25:30] Ben Lloyd Pearson: Mm-hmm.
[00:25:30] Dan Lines: and reviewed that idea and felt safe enough to deploy.
[00:25:35] Ben Lloyd Pearson: Yeah, so this is all very, very future-forward thinking, which is really great. I love getting to talk about that stuff, but it's also really great to talk about what success looks like today for organizations. Now, obviously, anyone who's claiming to be 100x-ing their developers or getting rid of their entire software development team through AI is [00:26:00] probably not being honest with you. But we are seeing some improvements. I know we've been working with some companies that have started to realize some pretty substantial gains from adopting this automation-driven, metrics-informed approach to developer productivity, developer experience, and adopting AI. So
[00:26:21] Dan Lines: Yep.
[00:26:22] Ben Lloyd Pearson: what have we seen so far today?
[00:26:24] Dan Lines: Yeah, let me just hit you with some concrete numbers that we're seeing from the LinearB community.
[00:26:30] Ben Lloyd Pearson: Yeah.
[00:26:30] Dan Lines: On average, after deploying AI and automations with gitStream, we see about a 19% cycle time reduction. That's an average across the customer base. In addition to that, because we have some things that are helping team leaders and managers, mostly in the iteration flow, like the sprint flow and the retrospective, we see
[00:26:57] Dan Lines: about five hours saved per [00:27:00] manager as well, every sprint. These are just kind of hard pieces of data. Now, even more so than that, Ben, I pulled a few examples. I won't say the names of the companies that we're working with, but I'll describe just three that came to mind. One is we have a company that saved 71 days.
[00:27:24] Dan Lines: So 71 days; you can do the math of hours saved. 71 days of savings just by deploying the gitStream AI code review and the PR summary. You add up all the minutes of time saved from automating the summary of the PR, and the time saved from reviewing the PR, which doesn't mean that a human reviewer can't be involved as well.
[00:27:49] Dan Lines: Okay, that's the control part. Add that all up, you get a 71-day saving. Okay?
[00:27:55] Ben Lloyd Pearson: 71 days across how long of a time period?
[00:27:58] Dan Lines: Okay. So that's gonna be [00:28:00] over, let me just check the data here, that's gonna be within a six-month time period for this example.
[00:28:07] Ben Lloyd Pearson: Yeah. I love the
[00:28:08] Dan Lines: Okay?
[00:28:08] Ben Lloyd Pearson: analysis.
[00:28:09] Dan Lines: Yeah, yeah, yeah. I have it up here running live. Now, I have another customer that saved 41 days of work with gitStream, and they've done this more so on safely merging 2,500 pull requests that were created by bots.
[00:28:32] Dan Lines: So think about a RenovateBot, a Dependabot. Think about bot-created work: 2,500 PRs that have been merged without human review. Okay, and that's within a five-month period of time.
[00:28:48] Ben Lloyd Pearson: Yeah, I mean, even if it only takes you 30 seconds to click those green buttons to merge, you still have to imagine the time savings from that.
[00:28:57] Dan Lines: Yeah, so even if it's taking you 31 [00:29:00] seconds, you're also removing yourself from what you were doing
[00:29:04] Ben Lloyd Pearson: Yeah.
[00:29:05] Dan Lines: to do that, you know, merge, then having to come back to what you were doing as well. So you can kind of put the
[00:29:10] Ben Lloyd Pearson: every week to click green buttons for Dependabot.
[00:29:13] Dan Lines: Right, and so that's with the gitStream safe merge. And the last one that I have here is another company that's combining those two use cases. So they've rolled out the gitStream AI code review, they've rolled out the gitStream AI PR summary, and then they've rolled out the safe merging rules.
[00:29:38] Dan Lines: And again, the safe merging rules mean you have to pass the criteria: the AI review has to pass, the tests have to pass, and it has to be in a low-risk area. And they were able to save 75 days of work in that same time period. So, when we think about some hardcore stats, or what it means to be successful, and we [00:30:00] said this in part one of the episode as well,
[00:30:02] Dan Lines: it's this: how many days or how many hours of work have you returned back to the engineering organization, or back to a developer, so they can decide what to do with that time? That's what success looks like.
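The bot-PR numbers above rest on the safe-merge gate Dan lists: AI review passed, tests passed, low-risk area, and a known bot author. Here is a minimal sketch of that gate, assuming invented names; this is not gitStream's API, and the bot logins are just common examples.

```python
TRUSTED_BOTS = {"dependabot[bot]", "renovate[bot]"}  # illustrative allowlist

def can_safe_merge(author: str, tests_passed: bool,
                   ai_review_passed: bool, low_risk_area: bool) -> bool:
    """Every criterion from the episode must hold before a bot PR
    merges without human review."""
    return (author in TRUSTED_BOTS
            and tests_passed
            and ai_review_passed
            and low_risk_area)

# A Dependabot bump with green CI and a passing AI review in a
# low-risk service merges automatically; anything else waits for a human.
assert can_safe_merge("dependabot[bot]", True, True, True)
assert not can_safe_merge("some-human", True, True, True)
```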
[00:30:15] Ben Lloyd Pearson: Yeah, you took the words outta my mouth. I was going to mention: if you didn't watch the first half of this interview, this is a great moment, again, to remind you to go back and watch it. We cover how you can look at DevEx investments, convince your engineering leadership that they had a positive impact on your organization, and demonstrate how they drive the business forward. So, just to wrap things up, I wanna get a little practical with a couple of, maybe, short-answer questions. But who knows, maybe you've got a lot to say.
[00:30:47] Dan Lines: Okay.
[00:30:48] Ben Lloyd Pearson: So yeah, question number one: what is one habit that every engineering team should start tomorrow?
[00:30:56] Dan Lines: Ooh, that's a good question. And actually, you know what? [00:31:00] The answer to this question, and it kind of shows how fast the world is moving, has changed for me over the last eight months, six months, something like that.
[00:31:10] Dan Lines: Oh
[00:31:10] Ben Lloyd Pearson: interesting.
[00:31:11] Dan Lines: the one habit that I would say is: each week, pick one task that you're going to remove from a developer's plate.
[00:31:22] Dan Lines: Probably with AI or an automation. See if you can quantify the time returned if you were to remove that task, and then take an action to remove it. If you did that every week, or every month, you're gonna get to the end of the year and be in a good state.
[00:31:40] Ben Lloyd Pearson: Yeah, very well put. I know my team has started to do something similar, and it's amazing how quickly those single tasks add up over time. So, the second question: what do you think is the biggest misconception about productivity?
[00:31:57] Dan Lines: I would probably say the [00:32:00] biggest misconception, and you can even say maybe it's coming from some of the stuff in the news, like Elon Musk saying, hey, my teams work like a hundred hours a week, and all this. I think the biggest misconception is that productivity gain is associated with developers working more hours, working the weekends, staying up all night, that type of stuff.
[00:32:27] Dan Lines: And it can then take on a negative connotation, 'cause working a ton of hours is not a smart way of working.
[00:32:34] Ben Lloyd Pearson: Yeah.
[00:32:35] Dan Lines: As opposed to what we said in the previous answer: productivity gain being associated with the removal of tasks off of people's plates, and how often you're able to do that.
[00:32:48] Ben Lloyd Pearson: Yeah. Yeah. What if instead of having your developers work weekends, you have their AI agents doing work for them over the weekend?
[00:32:56] Dan Lines: Yes,
[00:32:57] Ben Lloyd Pearson: Yeah.
[00:32:58] Dan Lines: exactly.
[00:32:59] Ben Lloyd Pearson: Third [00:33:00] question. What is the biggest mistake that engineering leaders make today, and what can they do to fix it?
[00:33:05] Dan Lines: Yeah, I mean, in this pod we have gitStream, we have automations, we have AI; that's the context I'm answering this question in. But I would say the biggest, I don't know if it's a mistake, but the biggest thing that I see is a lack of a strategic plan for rolling out AI and automations to each area of the SDLC.
[00:33:27] Ben Lloyd Pearson: Yeah.
[00:33:28] Dan Lines: As opposed to just purchasing Copilot and being like, that's what I did. And it's like, okay, well, you're not gonna get the productivity gain there, or you're not gonna get the outcome that I think you're looking for.
[00:33:40] Ben Lloyd Pearson: Yeah.
[00:33:41] Dan Lines: As opposed to saying, okay, let me take a holistic view. Now I understand where all my bottlenecks are.
[00:33:47] Dan Lines: I have a plan for rolling out to each stage of the SDLC that then adds up to the outcome that I probably promised, either to the CTO or the CEO or the board.
[00:33:59] Ben Lloyd Pearson: [00:34:00] Yeah. We've covered this actually quite a bit on Dev Interrupted already. There's been a lot of research on this, as a matter of fact: organizations who adopt AI without some sort of coordinated and structured plan see a fraction of the benefits of organizations who do.
[00:34:16] Ben Lloyd Pearson: So it's definitely critical. All right, just one last question and then I'll let you go. What's next for LinearB, and what is LinearB doing to help engineering organizations stay ahead of the curve?
[00:34:28] Dan Lines: Yeah, listen. Every customer that's using LinearB today, they're providing hours back to their organization with gitStream AI and automations. That's where we are today, and we plan on doing more of that. I'm not going to disclose on this pod what we're doing next within the LinearB product, but the hint that I will give is what we talked about at the beginning of the pod.
[00:34:57] Dan Lines: We are gonna bring more AI and [00:35:00] automations that take work off of developers' plates and return them hours, so they can work on the most important, creative, and interesting tasks. That's what's next for LinearB.
[00:35:10] Ben Lloyd Pearson: Wonderful.
[00:35:11] Ben Lloyd Pearson: That sounds great, Dan. As always, it was great to have you here. Thanks for joining me today.
[00:35:16] Dan Lines: Thanks, Ben.
[00:35:17] Ben Lloyd Pearson: Yeah, so make sure you subscribe to the Dev Interrupted newsletter for tons of awesome content that doesn't always make it into the main show. And if you're struggling to navigate the uncertainties of the AI-driven software development future, check out LinearB.io. Thanks for joining us. We'll see you next week.