
The fundamentals of agent-driven software workflows

By Dan Lines

"The future of AI in software development is not about isolated tools... It's gonna be more about orchestrated systems where agents are reasoning, delegating, and operating across your entire software delivery lifecycle."

You've bought the AI tools, but are you seeing the promised productivity gains? 

Join Dev Interrupted hosts Dan Lines and Ben Lloyd Pearson as they break down the fundamental shift from AI coding assistants to intelligent, agentic systems. They discuss the key findings from their new guide, "The Six Trends Shaping the Future of AI-Driven Development," and challenge the common misconception that simply buying tools guarantees success by explaining why sustainable AI transformation depends less on better prompting and more on building better systems.

Dan and Ben reveal the foundational pillars required for AI adoption, from creating unified knowledge sources to modernizing your infrastructure for agentic workflows. Learn why the future lies in moving from isolated tools to fully orchestrated systems where AI agents can reason and operate across the entire software delivery lifecycle. This episode offers a strategic playbook for engineering leaders aiming to build a successful and sustainable AI strategy for 2025 and beyond.

Show Notes

Transcript 

(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)

[00:00:06] Andrew Zigler: Welcome to Dev Interrupted. I'm your host, Andrew Zigler.

[00:00:09] Ben Lloyd Pearson: And I'm your host, Ben Lloyd Pearson.

[00:00:12] Andrew Zigler: This week we're talking about LeadDev's engineering leadership report, the list of most desirable AI scientists that Meta and OpenAI are fighting over right now, and the day my toaster started taking phone calls. Ben, what catches your interest?

[00:00:27] Ben Lloyd Pearson: I, I mean, as much as I wanna learn about a toaster making phone calls, uh, I, I did get a chance to read LeadDev's leadership report, so maybe we can start with that and we'll get to the toaster a little bit later.

[00:00:37] Andrew Zigler: So LeadDev, they released their recent engineering leadership report. This is something they do yearly where they surveyed 600 plus engineering leaders and asked them about how their roles are changing in response to things in the economic and industry environment. Things that are changing all around us.

[00:00:54] Andrew Zigler: Right? What's the sentiment of leaders and developers in this space right now? And there were a lot of interesting [00:01:00] findings in the report. Ben, what stood out to you?

[00:01:02] Ben Lloyd Pearson: Yeah, so I mean, the first one that, you know, I think everyone's always looking at in these kinds of data sets is the effects of AI on productivity. So, in this report they mentioned that 19% of survey participants said they saw no impact from AI, and then a further 62% were somewhere in the range of like one to 30% improvements.

[00:01:23] Ben Lloyd Pearson: I think about, you know, 20 or 25% were in the like 11 to 30% range, and then the rest were below that. And, you know, it's interesting 'cause that really does kind of align with a lot of the research, including our own research that we've done around productivity improvements from coding assistants, primarily.

[00:01:42] Ben Lloyd Pearson: You know, I think that's the primary way that most engineering teams have adopted AI at this point, and you see usually around a 10 to 30% improvement if you're using it regularly. So, yeah, definitely aligning with stuff we've seen already in the past. One thing that was really interesting, though, is [00:02:00] 65% of the people that were surveyed were worried about a recession.

[00:02:04] Ben Lloyd Pearson: And I do understand that people's sentiment around the economy and around the future of their own financial prosperity can often be a pretty good indicator for how things are actually looking in the real world.

[00:02:17] Ben Lloyd Pearson: So, you know, that's kind of concerning. Something that counters this is they had lots of charts in this report comparing responses this year to various questions versus responses last year. One of them asked about things that your company has done over the last 12 months, and a promising sign was that both hiring freezes and layoffs dropped quite a bit this year versus the year prior. So, you know, I think it kind of goes counter a little bit to the narrative that AI is either causing people to rethink hiring plans or maybe put off hiring. But there were also some mixed-bag responses, particularly around AI. A lot of companies are out there shipping AI in their products.

[00:02:57] Ben Lloyd Pearson: Like, uh, 63% of [00:03:00] respondents said that their company has shipped an AI capability in the last year. In terms of biggest concerns, the people they surveyed were a little more concerned this year about things like upskilling junior developers for AI as well as code maintainability. Some of the issues that we've been seeing around, call it vibe coding or what have you, people using AI in ways that aren't as productive, particularly junior developers. But there have also been concerns around, you know, data privacy, intellectual property, the learning curve around this stuff.

[00:03:32] Ben Lloyd Pearson: Those have actually been reduced. So it's not all a bad picture for AI adoption, but, yeah, it's kind of all over the place. Uh, and I did wanna mention, we had Scott Carey from LeadDev on the show a while back, uh, where we talked about where all the laid-off engineers were going, back during one of the last of many rounds of layoffs.

[00:03:51] Ben Lloyd Pearson: Uh, it'd be great to have him or someone else from LeadDev come back on and talk about, you know, this discrepancy between the hype around AI replacing software [00:04:00] developers and the reality that it seems like teams still need to hire plenty of software developers.

[00:04:05] Andrew Zigler: Yeah, I think there's a lot to learn from this report, and something that really stood out to me as I was reading the results is kind of the verbs used in the measuring: it's listening to what engineers right now believe and what they worry about. And those are really interesting indicators about the environments that people work in.

[00:04:25] Andrew Zigler: And for me that really hints at larger issues that obviously not just throwing tools at things can fix. These are communication, team, and structural issues, being on the same page, right? So I think there's a really big opportunity to really crack open this report and talk about that, and maybe even identify some strategies for teams that do find themselves with a lot of worry or anxiety, or that have beliefs that don't match up on both sides of leadership and IC.

[00:04:54] Andrew Zigler: I think there's a lot to learn from this report, so it's a great, it's a great kind of start to, to figuring [00:05:00] that out.

[00:05:00] Ben Lloyd Pearson: Yeah, and one thing

[00:05:01] Ben Lloyd Pearson: I think is really important to keep in mind is, you know, AI adoption is happening in a lot of different ways across teams. And the really important thing is

[00:05:08] Andrew Zigler: Yeah.

[00:05:09] Ben Lloyd Pearson: to have visibility into what your team is doing.

[00:05:11] Ben Lloyd Pearson: Like, where are you being successful, where are you encountering difficulties? And just really break down that problem. Don't just assume that adopting AI and giving it to your developers is gonna be successful or not successful across the board. There's actually gonna be a variety of success stories, and the only way you're gonna know is to have visibility into its adoption.

[00:05:31] Andrew Zigler: Exactly. Exactly.

[00:05:33] Ben Lloyd Pearson: Yeah. So let's talk about this next story. I, I can't leave the toaster phone calls hanging for too long. So what's going on here?

[00:05:42] Andrew Zigler: Okay, fine. So this is a story that I loved. You loved it too, I think, when you read it.

[00:05:46] Ben Lloyd Pearson: I.

[00:05:46] Andrew Zigler: This is by Scott Werner, about MCP being the accidental universal plugin system for all things in the world right now. It introduces a really clever concept called the Toaster Principle: the idea that every great [00:06:00] protocol gets used for something

[00:06:01] Andrew Zigler: its creators never envisioned. He lists some iconic ones: HTTP was for academic papers, and now the whole world runs on it. Bluetooth was just for, you know, hands-free calling when you're in the office, and now it does everything, like unlock your front door. USB was just to plug in your mouse and your keyboard.

[00:06:19] Andrew Zigler: That's all it was for, and now you can plug everything through the universal port, and it also carries power and data and all sorts of stuff. So there were a lot of fun captured moments in this short, sweet essay that I think everyone should go check out on Scott Werner's blog. Uh, "it works on my machine" is what it's called, and I'll be sure to include the link in the show notes.

[00:06:39] Andrew Zigler: Uh, Ben, what do you think about this one?

[00:06:41] Ben Lloyd Pearson: Yeah, so I mean, the kind of core principle of this is that MCP has accidentally become this universal plugin system for basically everything, from toasters to pillows to whatever else you could connect. The reference that I love the most from this, actually,

[00:06:57] Ben Lloyd Pearson: was the Warcraft III reference, about [00:07:00] having your GPT workflows or your MCP workflows responding to you as if they were the peons in Warcraft III. Like, I've always kind of secretly dreamed of that.

[00:07:10] Andrew Zigler: That's funny.

[00:07:11] Ben Lloyd Pearson: But you know, we've been trying to create this one API standard to rule them all for decades, and it kind of seems like we might have accidentally stumbled into the thing that achieves that, you know? But, you know, given the nature of AI, of course this stuff is probabilistic and a little bit chaotic, so it's almost like we've standardized on the most random option possible. It's like, instead of having strictly defined criteria, we just kind of hook an AI up to it and say, have at it, you know? But I, for one, welcome the day that my toaster talks to my pillow.

[00:07:47] Andrew Zigler: I mean, it, it's a, it's a JSON schema, like on top of an API. Right? It's like a nice wrapper. It works for so many things that already exist in our world. It's also immensely fun to play with. If you haven't messed around with [00:08:00] MCP technology, trying to build one, use your own. There's so many interesting use cases.

[00:08:04] Andrew Zigler: I've been tinkering with my own in the last week and I've learned a lot about how LLMs interact with tools, which helps me use all of these tools better in general now. So if anyone, you know, listening to this is also tinkering with these types of things, I know we talked about it last week too, with Andrew Hamilton on the show.

[00:08:20] Andrew Zigler: I would love to hear, maybe we can work together on some stuff because we're all figuring it out together.
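
(A quick aside to make the "JSON schema on top of an API" point concrete: the sketch below shows a minimal MCP-style tool, a name, a description, and a JSON Schema for its inputs, wrapped around an ordinary function call. The tool, field names, and dispatcher are illustrative assumptions, not taken from any particular MCP SDK.)

```python
# Illustrative sketch only: an MCP-style tool is essentially a name, a
# description, and a JSON Schema for its inputs, wrapped around a plain
# function call. Field names here are simplified and hypothetical.
import json

TOAST_TOOL = {
    "name": "toast_bread",
    "description": "Start the (hypothetical) smart toaster for N seconds.",
    "inputSchema": {
        "type": "object",
        "properties": {"seconds": {"type": "integer", "minimum": 10, "maximum": 300}},
        "required": ["seconds"],
    },
}

def toast_bread(seconds: int) -> dict:
    # Stand-in for the real device/API call the schema wraps.
    return {"status": "toasting", "seconds": seconds}

HANDLERS = {"toast_bread": toast_bread}

def handle_tool_call(request_json: str) -> str:
    # Dispatch the way an MCP-style server might: look up the tool by name
    # and pass the JSON arguments to the underlying function.
    request = json.loads(request_json)
    result = HANDLERS[request["name"]](**request["arguments"])
    return json.dumps(result)

print(handle_tool_call('{"name": "toast_bread", "arguments": {"seconds": 90}}'))
```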

[00:08:25] Ben Lloyd Pearson: Yeah. And this next one is a story we've been covering for a little while, so we just kind of felt like we couldn't leave it hanging. Uh, so what do we have here with Meta?

[00:08:32] Andrew Zigler: Yeah, so we're giving another update on this story. I know folks have been tuning into Dev Interrupted. We've been covering this now for like the last three weeks, and really, uh, every week we try to leave it and be like, oh, let's talk about something else. But then there's just a new development that's too juicy or interesting or totally unorthodox in the engineering hiring world that it would be remiss of us to skip it.

[00:08:53] Andrew Zigler: So another one of those has happened. So I'm gonna try to sum up the latest in the saga for you. So after [00:09:00] OpenAI's chief researcher kind of went after Meta for aggressively poaching their scientists, they likened it to a home burglary, someone coming into their own home and taking things.

[00:09:10] Andrew Zigler: Uh, and it became evident that the social media giant Meta released a giant list of names and bios of employees that recently held roles at places like OpenAI, Anthropic, and Google DeepMind, showing us that they're not just going after OpenAI. They're poaching all of the best AI talent in the industry right now and bringing them together.

[00:09:31] Andrew Zigler: So, Zuckerberg dropped this memo, or rather the chief researcher dropped this memo, that explained the incoming list of staff and their amazing, you know, backgrounds and credentials. These are incredibly seasoned and intelligent AI researchers and scientists that have built the technology that's got us to where we are, right?

[00:09:49] Andrew Zigler: So this recruiting frenzy is in total hype right now. Meta is looking to staff a 50-member superintelligence lab, which, you know, has a lot of question marks about [00:10:00] what its goals are and where it will go. But he's staffing it with the most sought-after talent, and it's bringing a sharp response from other leaders in the field who have been competing directly against Meta in this escalating war for AI dominance.

[00:10:13] Andrew Zigler: So, a really interesting development in the saga. A lot of emotions on both sides. You can tell that this runs deeper than maybe some of their past quarreling, and these transfers of staff are really hitting all of these teams hard.

[00:10:27] Ben Lloyd Pearson: I'm not sure how big OpenAI is at this point, but you know, even if they're at a few hundred people, if you lose five to 10 of your top experts, that can really hurt. It changes the culture of your company. But yeah, I'm wondering if there are people out there listening to us from OpenAI, like maybe Sam Altman himself is listening to this.

[00:10:47] Ben Lloyd Pearson: 'cause he mentioned how they were going to readjust compensation packages as a part of this, and that was what I was telling everyone who worked there when we first covered this to go talk to their boss about. So yeah, maybe we did have an [00:11:00] impact on that, Andrew.

[00:11:01] Andrew Zigler: Oh yeah, maybe you influenced them. I mean, in this, like, genius scenario where all of your colleagues are getting poached for tens and hundreds of millions of dollars, you might go ask your boss for a little bit more money. I think that's the good play here for sure.

[00:11:14] Ben Lloyd Pearson: Yeah. But,

[00:11:15] Andrew Zigler: but where do you think it's going, Ben?

[00:11:16] Ben Lloyd Pearson: I mean, it's starting to feel like an exercise in who can burn money the fastest, you know?

[00:11:20] Andrew Zigler: Yeah, totally.

[00:11:21] Ben Lloyd Pearson: Like, this superintelligence lab, I'm not really sure how I feel about it yet. And I kind of personally just wonder if this effort to get, like, superintelligence or AGI is really misguided. I kind of wonder if maybe the future of AI isn't in having a singular model that rules everything in your life, but rather, what if it was just more quiet, like everyday interactions just start having specialized AI that shows up and solves very discrete problems. I think that actually may be, long-term, the thing that has the biggest impact

[00:11:55] Andrew Zigler: Yeah.

[00:11:55] Ben Lloyd Pearson: on the quality of our life, on productivity, however you wanna measure it. Like, I [00:12:00] just don't know about this chase for trying to have some sort of artificial general intelligence. I don't know if that's really the play that all of these billionaires should be focusing on.

[00:12:12] Andrew Zigler: Yeah, it's always been tough to parse that as the goal. I think anyone who's not the CEO of one of those companies right now maybe has a bit of trouble parsing that goal, and it's a bit nebulous, right? I think that the superintelligence lab could be an incubator for all sorts of things with AI's impact on the world.

[00:12:29] Andrew Zigler: Because lots of people have pointed out, and this is something that I always come back to and think about, is, you know, we could stop researching and innovating and discovering new things about LLMs today, and there would still be so much that we need to do and implement and change about our world.

[00:12:45] Andrew Zigler: That's possible now. It's like a before-and-after scenario that we haven't even really begun to truly dig into and get that impact deep, right? So there are so many problems in front of us we can solve. All of these smart people together in one place, they're gonna [00:13:00] continue to push the frontier and make that more possible for more people.

[00:13:03] Andrew Zigler: So I think that's just kind of inspiring a bit, that people will be able to come together and maybe solve higher-order problems at a faster and faster pace thanks to technology like this. But it does mean we need to focus on what those problems are, bring people to the table, and everyone work together to understand it, because there's a lot of work ahead of us and we're not all super fundraisers like OpenAI and Meta, and, you know, they need

[00:13:27] Andrew Zigler: They need AGI and superintelligence because they need the northest North Star they can possibly come up with to rally everyone in that direction. But for the rest of us, I think it's important to focus on the impact we can make right here, right now, today.

[00:13:41] Ben Lloyd Pearson: Yeah, I think what you're trying to say is more MCPs, less super intelligence.

[00:13:47] Andrew Zigler: Always more MCPs. I'm having tons of fun with those, Ben.

[00:13:50] Ben Lloyd Pearson: There's one really quick story I just want to cover here at the end, uh, as part of our ongoing coverage of, you know, all the companies out there trying to make their AI push. And this is [00:14:00] Grammarly announcing that they're acquiring the email startup Superhuman as part of a push to get more AI into their platform.

[00:14:06] Ben Lloyd Pearson: The reason I bring this up is, like, literally the only two productivity apps in the world that I use are Grammarly and Superhuman. So it's my two worlds just colliding right now. Great tools. It's really interesting to see a company like Grammarly continuing to try to adapt to the AI age.

[00:14:24] Ben Lloyd Pearson: Particularly when, you know, you have people like our producer Adam out there claiming that companies like Grammarly are gonna lose their market or something because of AI. You know, I don't know if that's the future, but Grammarly has done a lot of AI things in their product up to this point, and I'm not really sure if they've had a lot of success getting users to adopt them.

[00:14:47] Ben Lloyd Pearson: Like, I've tried out a few. But you know, for me, Grammarly has always been sort of this deterministic check on the content that I write. And, you know, in the age of AI helping us produce a lot of content here at [00:15:00] Dev Interrupted now, it's still serving as a pretty valuable deterministic check, in the same way that an engineering team is gonna continue to have a CI/CD system that builds and deploys their software and have static analysis that analyzes it before it goes into production. Yet another company, you know, we saw in the LeadDev survey, lots of people are adding AI to their product now. It's interesting to see this industry-standard tool also making that push.

[00:15:29] Andrew Zigler: Yeah, this is a fun news bit 'cause I've known you for a while, Ben, and Grammarly and Superhuman coming together, that must have been just about the biggest news on your recommended news dash that day. That's highly specialized news for a power user like you. And honestly, at this point, all it needs to do is just roll into Asana, and then you would have just one app you could use for everything. That would be pretty cool.

[00:15:50] Ben Lloyd Pearson: I need my Grammarly MCP and my Superhuman MCP and my Asana MCP all working together, I think.

[00:15:56] Andrew Zigler: Yeah. Yeah. You need to, you gotta start working on that [00:16:00] toolbox, or at least figuring out what goes inside of it. you know, Ben, I think you're sitting down with our guest this week. So wanna tell us about, what we're about to be listening to?

[00:16:07] Ben Lloyd Pearson: Yeah, well, my guest is me actually, believe it or not.

[00:16:10] Andrew Zigler: Ooh.

[00:16:11] Ben Lloyd Pearson: stay tuned. After the break, Dan Lines is gonna join the show to interview me about the guide that we recently published, the six trends in AI driven software development. So stick around.

[00:16:23] Andrew Zigler: Are you ready to upgrade your SDLC with ai? Join me for a virtual workshop where we break down the latest AI powered code review workflows that are transforming delivery performance around the globe.

[00:16:36] Andrew Zigler: We're gonna discover the top three AI PR automations enterprises are using today, see how tools like Copilot, LinearB, and CodeRabbit stack up to each other, and get the inside scoop on building a high-velocity PR automation stack designed for modern teams. So register now to make sure you get the full recording and the benchmark report that I'm producing, and join me for one of the live [00:17:00] sessions.

[00:17:00] Andrew Zigler: It's gonna be amazing. Don't miss out.

[00:17:04] Dan Lines: What's up, everyone? I'm your host, Dan Lines, co-founder of LinearB. In today's episode, we're gonna focus on a fundamental shift we're seeing in the industry. We have the rise of AI-driven development, of course. But the thing is, this isn't just about AI coding assistants anymore.

[00:17:25] Dan Lines: Now we're in 2025. We're witnessing a move towards intelligent systems that can drive decisions and execute actions. A lot of this stuff is reshaping how software is built, and to help me unpack the how of these trends shaping the future of software development, I've got Ben Lloyd Pearson here, BLP, LinearB's Director of DevEx Strategy, in the hot seat.

[00:17:55] Dan Lines: So Ben, welcome to the show.

[00:17:57] Ben Lloyd Pearson: Yeah, thanks for having me on, I guess, to [00:18:00] the show that I helped run. But no, it is nice to have something that we've produced that we get to talk about a little bit. You know, I think one of the best things about working here at Dev Interrupted and LinearB is that

[00:18:12] Ben Lloyd Pearson: you know, we get to meet engineering leaders every single week and just discuss the challenges that they're facing. And you know, one of the things I think both of us are hearing from kind of the two ends of that spectrum is everyone's talking about AI strategy these days. Like, how do you take it to the next level?

[00:18:27] Ben Lloyd Pearson: Like, you've bought some tooling, you've maybe seen some success, but how do you use that to then start actually improving productivity, making improvements to developer experience? And, you know, the truth is it's just really hard to navigate that space right now. So, yeah.

[00:18:44] Ben Lloyd Pearson: So I'm happy to be here today to talk about some of the insights we've pulled from our community and from past DI guests about how companies are maturing their AI workflows. And you kind of referenced it, but we're out with this new guide that we published called The Six Trends Shaping the Future [00:19:00] of AI-Driven Development.

[00:19:01] Ben Lloyd Pearson: And yeah, really looking forward to breaking down some of the findings from this

[00:19:05] Dan Lines: Awesome, man. Yeah, I think you're totally right. Like, honestly, every customer call that I have, or any call that I have with like a CTO or a VP, this is the topic of discussion. It is hard to navigate right now. And I think the other thing that makes it hard, like,

[00:19:22] Dan Lines: with the navigation, is how fast things are moving. Exactly. It's almost like a week-by-week thing. What LLM are you using? Is it working, not working? There's legal aspects, there's developer experience. So that's super cool that we have the guide available. I know we're gonna include it in the show notes and all of that, and I know you're pretty deep into it.

[00:19:44] Dan Lines: I'm gonna kick us off. I think we have about six topics, five to six topics here. I'm gonna ask you a more broad question to begin, and then we'll see where this goes. So here's the question. Is AI [00:20:00] becoming the backbone of modern software engineering?

[00:20:04] Dan Lines: And what does that actually mean in practice? What are the biggest misconceptions leaders have about this transition today? What are you hearing, Ben?

[00:20:15] Ben Lloyd Pearson: Yeah, just real quick, I want to touch on a point you just made, you know, with how fast this is shaping up. And I'm pretty proud of the team we have here at Dev Interrupted and how quickly we've adapted to AI to really adopt it into our workflows.

[00:20:28] Ben Lloyd Pearson: We've had lots of requests to help teach other people how to do it, but it's changing so rapidly that I feel like the moment we understand something that solves our own workflow, before we can even share that knowledge with others, our understanding of it changes and we've moved on to the next thing.

[00:20:44] Ben Lloyd Pearson: And, you know, you asked about misconceptions. I think the biggest one right now is this perception I've seen that you can buy a tool, give it to your developers, and just have them start prompting it to do something for their work. Uh, and frankly, [00:21:00] this is a recipe for disaster.

[00:21:02] Ben Lloyd Pearson: AI tools, you know, GPT tools, they require a lot of context to be successful. So, uh, think of them like a tourist: they're new here, they have a specific goal in mind, and they disappear forever after they've been successful. When you think about a tourist, they need lots of signs, they need clear pathways that guide them.

[00:21:22] Ben Lloyd Pearson: Sometimes they need information kiosks, or occasionally a human expert to step in and help them get back on the right track. And AI behaves in a very similar pattern, and you need the infrastructure to provide enough context. So what we've really tried to do with this guide is to break through all of these things that are changing daily and look at the stuff that, as an engineering organization, you need to build so that fundamentally you're in a place to take advantage of, you know, this AI revolution, for lack of a better phrase.

[00:21:54] Ben Lloyd Pearson: Right? Because it really doesn't come down to better prompting. It comes down to better systems. And that's what this guide [00:22:00] is trying to show.

[00:22:01] Dan Lines: Yeah, I think so. I think it's systems and strategy. I mean, the other thing is, like you mentioned, hey, if you're a leader and you're saying, hey, I'm just gonna buy a few tools and try to piece them together or something, then the worst thing you can do if you're doing that is also promise productivity back to the business.

[00:22:19] Dan Lines: On the other side, it's like, yeah, I bought some Cursor, I bought some Copilot, I bought this other tool for, like, review or something like that, and we're gonna be good to go. Okay. So I think that's another, I don't know, maybe it's a misconception or just a warning upfront. I do believe the CTOs, the VPs, and, you know, you have titles now like head of AI strategy, all of that.

[00:22:42] Dan Lines: You do need a surrounding strategy, some of the foundational stuff that you were just talking about, to actually execute well, I think, to get the productivity that you may be promising back to the business. Yeah. So, something to be aware of there. Now, [00:23:00] I think the first section of the guide is around unified knowledge.

[00:23:06] Dan Lines: So the first trend you identify is that unified knowledge is the new productivity fuel. This one stood out to me. So why is something as fundamental as knowledge management the first and most critical piece of this AI puzzle, Ben?

[00:23:24] Ben Lloyd Pearson: Yeah. And, and I like that you say most critical 'cause I, I do actually believe if, if you're listening to this episode and you take one thing away from it, this is probably where you could get the most advantage from focusing.

[00:23:35] Ben Lloyd Pearson: The truth here is that to unlock real productivity from AI, your engineering teams are gonna have to turn any sort of fragmented tribal knowledge into unified, high-quality context that you can feed to AI agents. So, you know, you think about the typical organization today. Often, information that is important for decision making can be fragmented across tools or trapped in [00:24:00] various locally owned parts of your knowledge base.

[00:24:02] Ben Lloyd Pearson: It could be a chat message or an email thread or some outdated documentation, or, like, comments within the code itself. You know, it's scattered across all of these different places. Yeah. And there's a really big risk here. You know, humans are really great at sort of stitching together context without having the complete picture in front of them.

[00:24:24] Ben Lloyd Pearson: But AI isn't. You know, if you're giving incomplete or missing or out-of-context data to an AI agent, it's gonna make bad decisions. So really, any sort of data silo that you have within your organization is gonna hinder giving AI agents access to what might be essential context. And in any areas where you have inconsistent or missing documentation, you're gonna see resistance or difficulties adopting AI into workflows.

[00:24:55] Dan Lines: Let's talk about that a little bit, 'cause I think it's interesting. And while you were [00:25:00] kind of breaking that down, I was thinking about, okay, AI versus human developer. Now, if I'm a human developer, which I was at one point, I was a human at one point, I was a developer, both of those, it's almost like while you're developing, you might get to a point where you lack context, mm-hmm,

[00:25:18] Dan Lines: To make the right decision, and then you start searching for the context. You're like, okay, I know we have documentation. Let me go and find it. Oh shit. Actually the documentation isn't so good, but I know the developer that wrote this, so let me go talk to that person. Exactly. Now I gain the context or you might say, Hey, I'm working in this repo or this area of code.

[00:25:40] Dan Lines: I need to call someone else's area of code. I don't know exactly how that works or the API. And you can always, I think, go and find someone, usually, and have that conversation, get the context. Now, the downside to that obviously is that it takes a lot of time. I have to stop what I'm doing. Yeah.

[00:25:58] Dan Lines: Now with AI, right, the [00:26:00] promise of it and how it's working today, this stuff is pretty much instantaneous. But like you said, if the AI cannot find what it needs, it is going to give an answer. It might give, like, a snippet of code, but it's probably gonna be incorrect. Exactly. Now, one thing that I've been talking about, so LinearB, we have an AI code review.

[00:26:22] Dan Lines: Today we have a pretty killer AI code review. I talked to our head of AI, who I consider our head of AI, Ofer, and where we're going with this is actually in this context area. So if you think about what LinearB can do, yeah, we provide, as probably a lot of our listeners know, all these different metrics.

[00:26:44] Dan Lines: But the way that we get these metrics is we're connected into Jira, we're connected into Slack. We can see all of the incidents, we can see bugs that went out into production, change failure. We can see documentation. We have this wealth of [00:27:00] connectivity. And now, actually, what we're doing next in the roadmap is going to be, while our AI review is processing, if it doesn't have a piece of information that it needs, yeah, of course it can go

[00:27:12] Dan Lines: get information from other repos. It can get it from documentation. We can see if there's a bug in the area. So we're actually connecting together that missing context, which totally makes sense, because that's what you're saying in this first part of the guide: making sure that you have all of that connectivity so the AI can essentially find the answer that it needs.
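
(As a rough illustration of the "connect the missing context" idea Dan is describing, and not LinearB's actual implementation, a review step might assemble its prompt from several knowledge sources before drafting feedback. Every function and source name below is a hypothetical stub.)

```python
# Hypothetical sketch of a context-gathering step for an AI code review.
# The source lookups are stubs; the point is that the review prompt is
# assembled from several systems, not just the diff.

def fetch_related_tickets(pr_title: str) -> str:
    return "JIRA-123: customers report timeouts on checkout"  # stub

def fetch_recent_incidents(files_changed: list[str]) -> str:
    return "Incident 42 touched payments/api.py last month"  # stub

def fetch_docs(files_changed: list[str]) -> str:
    return "payments/README.md: all handlers must be idempotent"  # stub

def build_review_context(pr_title: str, files_changed: list[str], diff: str) -> str:
    # Assemble one prompt from several knowledge sources so the model isn't
    # guessing when the diff alone lacks context.
    sections = [
        ("Diff", diff),
        ("Related tickets", fetch_related_tickets(pr_title)),
        ("Recent incidents in these files", fetch_recent_incidents(files_changed)),
        ("Team documentation", fetch_docs(files_changed)),
    ]
    return "\n\n".join(f"## {label}\n{text}" for label, text in sections)

print(build_review_context("Fix checkout timeout", ["payments/api.py"], "<diff here>"))
```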

[00:27:32] Ben Lloyd Pearson: You're touching on a lot of points that we're gonna get into later in this episode too, so stay tuned for that, 'cause I really love how this stuff is all kind of connected. But you know, as a part of this guide, we went back through recent Dev Interrupted history and we found a lot of former guests that really articulate some of the challenges that we're highlighting in this guide.

[00:27:51] Ben Lloyd Pearson: And for this one, we had Brandon Jung from Tabnine on, who last year discussed with us this concept of creating golden [00:28:00] repositories, or in other words, a piece of code or a collection of code that you trust as being really high quality and that has all the context behind it for why it was constructed that way.

[00:28:12] Ben Lloyd Pearson: Those types of resources are really valuable for AI when you're starting to get it to make more decisions. And I think what's really important to understand from this guide is that all the recommendations we're making here are things that will both help your human developers today, as you put it, Dan, uh, and also the AI developers that are coming around the corner for you.

[00:28:32] Ben Lloyd Pearson: So you wanna do things like establishing data hygiene practices across your organization. Maintain high-quality training data sets, establish responsibility within your organization for who maintains those high-quality data sets, and start working towards centralizing that knowledge into a place where you can start feeding it into your AI models.

[00:28:56] Ben Lloyd Pearson: And the great thing about this is that AI can actually help you do this. So [00:29:00] you can start using AI today to start preparing for these changes that we can see around the corner.
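
(One minimal sketch of what "centralize that knowledge so you can feed it to AI" can look like in practice: gather documents from sources a team has designated as golden into a single, size-bounded context bundle. The paths and tagging scheme below are invented for illustration.)

```python
# Minimal sketch: collect documents from repos/folders a team has designated
# as "golden" (trusted, well-documented) into one context bundle that can be
# handed to an AI agent. Paths and the tagging scheme are hypothetical.
from pathlib import Path

GOLDEN_SOURCES = {
    "payments-service": Path("repos/payments-service/docs"),
    "platform-standards": Path("wiki/engineering-standards"),
}

def build_context_bundle(max_chars: int = 8000) -> str:
    # Concatenate golden-source documents, tagged by origin, and truncate to
    # a budget so the bundle fits in a model's context window.
    chunks = []
    for name, root in GOLDEN_SOURCES.items():
        if not root.exists():
            continue  # skip sources that aren't checked out locally
        for path in sorted(root.rglob("*.md")):
            text = path.read_text(encoding="utf-8", errors="ignore")
            chunks.append(f"[source: {name}/{path.name}]\n{text}")
    return "\n\n".join(chunks)[:max_chars]

print(build_context_bundle()[:500])
```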

[00:29:06] Dan Lines: The golden repo concept is pretty cool, actually.

[00:29:10] Ben Lloyd Pearson: We are adopting it for a lot of workflows.

[00:29:12] Ben Lloyd Pearson: You know, it's, it's, kind of been incredible to me how much software engineering has aligned with the content creation workflows we're building here at Dev Interrupted. And the more we have these, like idealized sources of information, the more effective our AI workflows can be.

[00:29:27] Dan Lines: And, uh, I don't know if it says it in the guide or not, so I'll ask you

[00:29:32] Dan Lines: a question maybe, and maybe we don't know. Let's see if we have it in the guide. But is there anything that, like, qualifies as a golden repo? Is there any criteria behind it? I know it has to do with a level of trust, we think the code is really, really great, something like that.

[00:29:47] Dan Lines: But I don't know if it gets into it. Is there anything that qualifies, like, okay, it's meeting the bar of what's needed?

[00:29:53] Ben Lloyd Pearson: Yeah. I mean, the big thing is this varies from organization to organization. What you don't want to do [00:30:00] is feed code that, you know, has significant difficulties into an AI as a training resource.

[00:30:06] Ben Lloyd Pearson: You know? Yeah. So as long as you're avoiding that, this is the kind of thing where you can iterate and get better. Like, the more you feed better data into it, the more it can help you create better data for training. So there's definitely a great benefit to taking an iterative approach to

[00:30:20] Ben Lloyd Pearson: continuously improving whatever data you're feeding into your AI.

[00:30:26] Dan Lines: Yeah, very cool. I'm gonna move us on to the next pillar that we have here. So this is around modernizing for agentic workflows. So the second trend that was uncovered is in this area of modernizing infrastructure for agentic workflows.

[00:30:43] Dan Lines: It feels like a lot of teams maybe are still struggling with basics like CI/CD, and now we're talking about modernizing the infrastructure for agents here. What new layers, like agentic observability or sandbox testing, are required for [00:31:00] these more advanced AI systems?

[00:31:02] Ben Lloyd Pearson: Yeah. So you mentioned how LinearB's AI is able to go out and connect with all these various tools to pull additional data and context, yeah,

[00:31:10] Ben Lloyd Pearson: into the workflows. And for agentic AI to be successful, you're gonna need platforms that are designed in that way. Right. You know, a lot of teams are still stuck with more legacy infrastructure that is potentially stifling automation and scalability and developer velocity. So when you really think about it, agentic AI and AI-driven software development are gonna require tooling that isn't widespread yet within most engineering organizations.

[00:31:37] Ben Lloyd Pearson: So you might have some outdated or manual ops practices that have little to no automation, or maybe there's no self-service capabilities. You probably also have ticket-based workflows within your company, like, I'm thinking Jira for project management, ServiceNow. Those aren't all going to scale for AI adoption in the way that we use them today.

[00:31:59] Ben Lloyd Pearson: Like, if [00:32:00] your coding time is going from, let's say two days, down to two hours, what does that mean for a QA process that might take a day or two, like that now becomes a substantial bottleneck within your workflow. And you know, there's also just like the challenge of like, modern engineering just requires a lot of infrastructure knowledge that developers just aren't always trained for.

[00:32:20] Ben Lloyd Pearson: Like, the technologies are changing constantly. Most developers are trained to write code rather than to think about infrastructure problems. And this all just creates a lot of friction for adopting agentic AI.

[00:32:33] Dan Lines: Actually, you know what, here's one of the things that I'm seeing, at least with the customers that I'm working with.

[00:32:41] Dan Lines: And you know, of course, a lot of this goes back to the developer's experience. Like, okay, again, you might buy an AI tool, but the developers are the ones that are expected to interact with it, use it. I think a lot of this comes down to the accuracy or the quality or the integrity

[00:32:59] Dan Lines: of [00:33:00] the suggestions or the code generation that the, uh, AI is able to provide. So for example, in the first two sections that we talked about, yeah, if maybe there aren't these golden repos, or maybe the agent's having a hard time getting an answer, working its way through the different areas of your workflows and the infrastructure, the quality is not going to be there, right?

[00:33:26] Dan Lines: And what I've seen is, when we're interacting, now we're talking with developers, and maybe it's obvious, but I'll say it: the best AI tools that they like working with are the ones that are giving accurate, high-quality suggestions. Yep. And that's where their experience is actually improving and their productivity is improving.

[00:33:47] Dan Lines: Now you have the flip side of that. Hey, I bought an AI tool, maybe my infrastructure or workflows or the context wasn't there, and actually it can be pretty annoying to work with [00:34:00] because it's giving me suggestions that are incomplete, wrong, that type of thing. So I think it's also a sensitive time for developer experience.

[00:34:09] Dan Lines: Absolutely. And therefore it's really important to get these first two pillars that the guide talks about. Right? I don't know if you've seen that at all, Ben. That's what I'm experiencing, yeah, like, in the field.

[00:34:20] Ben Lloyd Pearson: Absolutely. There's definitely a very strong perception I've seen of developers who are very frustrated with how AI initiatives have been going, and it's almost always rooted in high volumes of low-quality output.

[00:34:35] Ben Lloyd Pearson: And we'll actually dive into this a little bit more later in this episode. But I think back to an episode we had not too long ago with Cory O'Daniel from Massdriver, and one of the things that he pointed out is that platform engineering is gonna become much more important in this era as a way to abstract the complexity of working with AI.

[00:34:56] Ben Lloyd Pearson: Like, you don't have to, you shouldn't be relying on every individual developer to [00:35:00] navigate all of these difficulties. You want a more centralized, platform-engineering-oriented team, even if it's not a dedicated team, if it's just a group of people who care about it, who have been given authority to act on it.

[00:35:13] Ben Lloyd Pearson: You want them to be focused on enabling developers to ship code using AI without adding infrastructure or just burdensome overhead. So the more you're investing into better pipelines, better automation, test coverage, the more you'll enable your developers to accelerate without amplifying risks.

[00:35:34] Ben Lloyd Pearson: So if you already have really reliable pipelines, introducing AI into them helps reduce the risk.

[00:35:42] Dan Lines: Yeah, totally. Makes sense to me. What I like about this guide, I think we only made it to the first two and we have a few more to go here, but if you're looking at each one of these six pillars of the guide, it essentially is helping you form a strategy.

[00:35:57] Ben Lloyd Pearson: Exactly.

[00:35:58] Dan Lines: Because again, like we said from the [00:36:00] beginning, it's not just, buy an AI tool, turn it on, you're good to go. It's like the best engineering orgs, the best companies, actually, are looking at all these different factors and saying, this is how I'm gonna actually put a strategy together to get, I guess, productivity, output, and the best experience for our developers.

[00:36:18] Dan Lines: So this is, uh, I like this about, about the guide.

[00:36:21] Ben Lloyd Pearson: Yeah. And like I said, we're also focusing heavily on stuff that is gonna benefit you today too. There's a lot of benefits to adopting these practices now beyond AI. So you're just helping developers today while being prepared for tomorrow.

[00:36:35] Dan Lines: The next section that we have here, and I hear this, uh, actually a good amount from customers, but there's this area of, like, governance or control. It's a pretty critical topic here, I think. And maybe it's even just around trust. So as agents start taking actions, like they're merging PRs or they're triggering a deployment.

[00:36:59] Dan Lines: They're creating [00:37:00] more and more code. Governance, that topic, is starting to become essential, and especially if you're leading a company that's in, like, finance, healthcare, critical systems, stuff like that, banking, all of that. This third trend is kind of reframing this.

[00:37:19] Dan Lines: It's saying to use governance as an AI enabler as opposed to a blocker. So why is it important to see governance as an enabler rather than a barrier?

[00:37:31] Ben Lloyd Pearson: You mentioned how AI agents are gonna start taking real actions in our workflows. What you want to try to do is shift compliance from being a hurdle that developers have to overcome to something that establishes trust with your agentic AI and makes it more explainable and scalable.

[00:37:51] Ben Lloyd Pearson: So you have all these autonomous agents making decisions. Like you said, they're merging PRs, they're identifying and deploying hotfixes, or at least they [00:38:00] will be soon. And this naturally is gonna raise a lot of concerns around safety, auditability, and accountability. And really, what's at the center of this is, you know, LLMs are inherently probabilistic.

[00:38:13] Ben Lloyd Pearson: Yet we're often sort of viewing them through like the deterministic lens of software development. Even in like, as you said, high stakes environments, like when you're working in banking, there's certain situations where you don't want to have like a probabilistic actor, such as like when determining how much money somebody has in their bank account.

[00:38:32] Ben Lloyd Pearson: But the challenge is that with AI, they're kind of like a black box, and it can be very difficult to trace their reasoning or ensure a high level of explainability. So if you don't have clear audit trails, if you don't have trustworthy data sources, as we mentioned earlier, your teams aren't going to be able to reliably validate the decisions of AI.

[00:38:53] Ben Lloyd Pearson: I think back to an episode we had quite recently with Brooke Hartley Moy, the CEO of Infactory. And, uh, [00:39:00] she made a really bold statement where she said that trust is the only thing that is gonna matter at the foundational level with your LLMs. If your developers don't trust the outputs of their AI, or if your business doesn't trust the output of it, then you're fundamentally gonna have difficulties with your AI strategy.

[00:39:19] Dan Lines: I'm saying the same thing, and what I keep thinking about is that it makes sense for AI, and it also makes sense for developers, both. Like, let's say you had a situation where you don't trust the output of your developers, well then the business isn't gonna trust it. And if you don't trust the output of the AI,

[00:39:38] Dan Lines: the business also isn't going to trust it. So it's actually a very similar, or the same, paradigm, there's just probably much less trust right now with the AI. But the cool thing that I'm seeing is companies still want to use it and move forward. I haven't seen governance be a blocker to trying.[00:40:00]

[00:40:00] Dan Lines: But what I will say is, with most of the companies that I'm working with, there have to be rules in place around the AI. And these rules are always, I'm sure we'll talk about it again with, like, orchestrating and stuff, but you have to have rule automations that kind of surround the AI around what's allowed to be merged, what's not allowed to be merged.

[00:40:21] Dan Lines: How does your review work? It's kind of like vibe coding, but with the backstop, the review check, the, yeah, okay, AI's generated a bunch of stuff, but let's make sure that we still have gates, checks, quality in place for what does make it to production. 'Cause we still wanna be accelerated, still wanna move faster.

[00:40:43] Dan Lines: And a lot of the customers that I'm working with, they are actually writing rules, usually around the PR process, to ensure that the code is making it out with super high quality.
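
(A sketch of the kind of merge rule Dan is describing, written as plain Python rather than any particular PR-automation product's syntax; the thresholds and PR metadata fields are made up for illustration, and note how a blocked result carries a next step, which ties into Ben's point below.)

```python
# Illustrative merge-gate check for AI-assisted PRs. The metadata fields and
# thresholds are hypothetical; real teams would encode this in CI or a
# PR-automation tool rather than a standalone script.
from dataclasses import dataclass

@dataclass
class PullRequest:
    ai_generated_lines: int
    total_lines: int
    tests_passed: bool
    human_approvals: int
    touches_critical_path: bool  # e.g. billing, auth

def can_merge(pr: PullRequest) -> tuple[bool, str]:
    # Return (allowed, reason); a blocked PR always gets a reason so the
    # developer knows the next step instead of just hitting a wall.
    if not pr.tests_passed:
        return False, "Blocked: test suite must pass. Re-run CI after fixing failures."
    ai_ratio = pr.ai_generated_lines / max(pr.total_lines, 1)
    required = 2 if (pr.touches_critical_path or ai_ratio > 0.5) else 1
    if pr.human_approvals < required:
        return False, f"Blocked: needs {required} human review(s), has {pr.human_approvals}."
    return True, "OK to merge."

print(can_merge(PullRequest(ai_generated_lines=400, total_lines=500,
                            tests_passed=True, human_approvals=1,
                            touches_critical_path=True)))
```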

[00:40:54] Ben Lloyd Pearson: Yeah. And you call 'em gates, but I think it's important to recognize that if you're [00:41:00] giving a developer who's been blocked by one of these gates, so to speak, if they just get blocked and they don't have clear instructions on what the next step is or why things are being blocked, then you create frustration.

[00:41:12] Ben Lloyd Pearson: You destroy trust. Whenever you're creating like a governance framework, it should always be centered around enabling developers. if you're going to block them, tell them what the next step is to get unblocked and help them take that first step. I think that's a really important distinction here when you're thinking about governance.

[00:41:29] Ben Lloyd Pearson: I mentioned Brooke Hartley Moy's episode. You know, she had a lot of great advice about building better audit trails that are built on trustworthy data sources. I think one thing that's really critical is that you implement smart human-in-the-loop mechanisms in any AI workflow.

[00:41:45] Ben Lloyd Pearson: So there are certain things that you can hand off to an autonomous system, delegate that task. But there are always gonna be moments, at least for the foreseeable future, where a human will need to interject or be prompted to step in and be an active participant in the [00:42:00] decision making.

[00:42:01] Ben Lloyd Pearson: So the more fluid those human-in-the-loop moments are, the better. And, as I mentioned, you want your governance frameworks to be designed to give developers the easiest path forward, and avoid shifting the responsibility for compliance left onto developers without giving them the automated tools to take action.

[00:42:18] Ben Lloyd Pearson: So there's this sense that as you're building autonomous systems, there need to be automated guardrails that keep everything going down the right pathway.
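
(A minimal way to picture the human-in-the-loop guardrails Ben describes: the agent runs low-risk steps on its own and pauses for sign-off on steps marked as needing a person. The step names and approval prompt are hypothetical.)

```python
# Hypothetical human-in-the-loop loop for an autonomous workflow: the agent
# executes low-risk steps on its own and pauses for sign-off on risky ones.

def run_workflow(steps):
    # steps: list of (name, action, needs_human) tuples.
    for name, action, needs_human in steps:
        if needs_human:
            answer = input(f"Agent wants to '{name}'. Approve? [y/N] ")
            if answer.strip().lower() != "y":
                print(f"Paused at '{name}'; waiting on a human decision.")
                return
        action()
        print(f"Completed: {name}")

run_workflow([
    ("draft hotfix for reported bug", lambda: None, False),
    ("open pull request with the fix", lambda: None, False),
    ("deploy to production", lambda: None, True),  # guardrail: requires approval
])
```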

[00:42:27] Dan Lines: Yeah. It's almost like gates is not the best word. It's almost like you gotta smooth out the workflow with automation and make it visible and transparent to the developer on what the rules of the game are.

[00:42:41] Dan Lines: So there's not, like, confusion, which, like you said, uh, can be a frustrating experience. Why am I getting blocked versus something else? And all of that. That makes, yeah, total sense to me.

[00:42:52] Ben Lloyd Pearson: I think when you're talking to your head of legal, it is a gate, you know? It's a gate to protect against compliance issues, security, that's for sure.

[00:42:58] Ben Lloyd Pearson: Hey,

[00:42:58] Dan Lines: Hey, we have gates. [00:43:00] We have critical paths. We have, uh, control. That's what they're looking for: automated control.

[00:43:06] Ben Lloyd Pearson: When you're talking to your developers, it's, it's a guidebook, right? It's a, it's a path for, for doing things the right way.

[00:43:13] Dan Lines: Yep. Alright, let's move on to the fourth part of the guide.

[00:43:18] Dan Lines: So this is talking about isolated tools versus a more orchestrated system here. So most AI tools today, they can feel like they're a little bit bolted on to an older workflow, which they are, or maybe a workflow that didn't know that AI was going to arrive so rapidly. And this is the fourth trend that's highlighted in the guide.

[00:43:42] Dan Lines: So it's moving from tools to an orchestrated system. What's the key difference between an AI tool that assists a developer versus a truly orchestrated system where agents communicate with each other?

[00:43:55] Ben Lloyd Pearson: Well, speaking of bolted-on AI tools, I feel like the first major wave of [00:44:00] this technology was either a chatbot or something built into, like, predictive typing in your IDE.

[00:44:06] Ben Lloyd Pearson: But, you know, I think we picked chatbots primarily because of convenience, and it was just easy to bolt them onto existing workflows. And I think years down the road, we'll look back at this period where we're interacting with our AI through chatbots as a relatively brief period in the AI adoption journey.

[00:44:22] Ben Lloyd Pearson: But, the important thing here to understand is that the future of AI and software development is not about isolated tools. And you're not gonna be buying a bunch of different AI platforms. It's gonna be more about orchestrated systems where agents are reasoning, delegating, and operating across your entire software delivery lifecycle.

[00:44:42] Ben Lloyd Pearson: So when you think about today's developer tech stacks, most of them lack native support for autonomy or inter-agent communications. Google recently announced the A2A protocol, which just recently went to the Linux Foundation. This is like the first attempt to create a [00:45:00] standardized protocol within the space, but it's still very early on.

[00:45:03] Ben Lloyd Pearson: There's no standard protocol that has emerged for agent communications, which is really forcing teams to rebuild integrations on their own for each use case. So, you know, a lot of our existing infrastructure treats tools sort of like isolated functions rather than interconnected workflows.

[00:45:22] Ben Lloyd Pearson: And the thing I think we're reiterating a lot here is that your agents are gonna be operating across many tools, across many data sources. And until the native integrations are there, everything that we're gonna be doing in that space as engineering leaders is gonna be more custom and bespoke.

[00:45:41] Ben Lloyd Pearson: There just simply aren't a lot of products out there that have solved this problem yet.

[00:45:46] Dan Lines: Yeah. I think that's the trend of where it's going. I also think that's why I see the demand for, hey, I need one platform that gives me kind of like that, I guess you could say, control [00:46:00] plane again, but that transparency of, show me what all of my agents are doing.

[00:46:04] Dan Lines: Are there actually bottlenecks or not bottlenecks? Who's talking to who? Do I have the control? Mm-hmm. Because I think piecing it together is tough. Piecing it together is, like, it just seems like it would be so hard to manage strategically over a long period of time. So maybe that's why I'm seeing a rise in that demand on my side,


[00:46:26] Dan Lines: yeah,

[00:46:26] Dan Lines: from talking to customers.

[00:46:27] Ben Lloyd Pearson: Yeah. Yeah. That makes sense. And I think back to an episode we had with Amir Behbehani from Memra, where one of the big points that he made was that, you know, developers really sort of have to shift from a carpenter mindset, like building discrete components, to more of an architect mindset.

[00:46:45] Ben Lloyd Pearson: So designing systems, orchestrating higher-order systems. And one of the ways you can start tackling this problem today is to focus on challenges that are like low-hanging [00:47:00] fruit within your workflows, things that consume large amounts of time but also have really strong API integrations and standardized usage patterns.

[00:47:09] Ben Lloyd Pearson: These are gonna be the easiest to build a custom integration layer with today that starts to connect the most critical systems within your agent development workflows. Be prepared for this to be bespoke for the time being, until more standard and operational platforms start to emerge in this space.

[00:47:28] Ben Lloyd Pearson: In the meantime, you can identify a lot of these repeatable, high-leverage tasks with clean APIs and consistent usage patterns. One really great example that we've covered here at Dev Interrupted is that Google recently published research where they showed how they're now using AI to do 32-bit to 64-bit migrations.

[00:47:47] Ben Lloyd Pearson: Like, this isn't a glamorous job. It's something that, you know, I'm sure there are some developers out there that love this kind of work, but I would say most developers don't get outta bed excited to go migrate a bunch of, but once, once they get

[00:47:59] Dan Lines: paid [00:48:00] a lot, they like it. Other than that.

[00:48:02] Ben Lloyd Pearson: Yeah. Uh, probably not, but Google has implemented a workflow that uses human-in-the-loop mechanisms with a few autonomous systems strung together to handle the bulk of that work now.

[00:48:15] Ben Lloyd Pearson: And in this experiment that we covered, they did, I think it was close to 600 changes, and 75% of that was written by their AI system. And the developers involved with that estimate about a 50% productivity improvement. So, you know, these are the kinds of things that you can start solving now, even though the platforms to solve this as a product aren't quite there yet.

[00:48:37] Dan Lines: Totally makes sense. Ben, actually I wanted to ask you, and it will lead to the next section in the guide here. But you said something previously that right now, or maybe when it started, it was kind of like, okay, humans are interacting with the AI agent, maybe asking questions of the agent or getting responses.

[00:48:59] Dan Lines: [00:49:00] And the section that we're gonna talk about here is this trend that you're seeing with, like, okay, recalibrating for developer-to-agent collaboration, right? Yeah. Developers are used to using tools, not used to collaborating with agents that, like, take initiative. And the first question, two parts here: I'm sure that creates some friction, and the guide talks about that friction.

[00:49:25] Dan Lines: Yep. But also just wanted to get your opinion: how long do you think that will last, where it's like developer to agent versus agent to agent, and a developer overseeing a bunch of agents together?

[00:49:38] Ben Lloyd Pearson: Yeah. Well, I, you know, I think at the core of this is gonna be... you know, today, if you think about how most people use AI tooling, they do their work, and then when they need AI's help, they go and they prompt AI.

[00:49:50] Ben Lloyd Pearson: Mm-hmm. Imagine a world where that gets completely flipped on its head, like AI is doing a lot of the legwork, and then when it gets stuck, or when it's been told that this is a moment where you need to bring [00:50:00] in an expert, or, you know, for whatever reason you might stop the workflow. Uh, it then goes and says, I cannot proceed.

[00:50:06] Dan Lines: Yeah. I can't proceed. Like it means something. Exactly. Yeah.

[00:50:08] Ben Lloyd Pearson: It, it goes and then prompts the human to intervene. You know? So it's actually like almost a complete reversal from what we're doing today. And, you know, as agents start to take that initiative within developer workflows, it's gonna create a lot of

[00:50:23] Ben Lloyd Pearson: challenges around designing natural and low-friction collaboration between humans and AI. And eventually, you know, as you mentioned, it will be more agent to agent, but I, I think, you know, I don't want to discount the, the role of the software engineer here. I think the humans in the loop are still gonna be valuable for quite some time, if not forever, effectively.

[00:50:43] Ben Lloyd Pearson: But as you mentioned, you know, developers really aren't used to collaborating with agents, like, as a coworker. I don't wanna call them a coworker, but that's kind of how they might start to behave. Especially when you start thinking about them taking initiative and going out, and maybe they see a customer report for a bug and they go out and fix it, [00:51:00] and then prompt a human to go see if that actually, you know, fixed it before merging it.

[00:51:04] Ben Lloyd Pearson: Like that kind of workflow. Yeah. And the risk is that if these agents are misaligned with your goals, with the way that you build software, with anything within your team, anytime they're misaligned, you're gonna create friction and inefficiency. And if you're asking your developers to override agent decisions, you're increasing cognitive burden.
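
One way to picture that reversal is a loop where the agent drives and only escalates when it genuinely cannot proceed. This is a minimal sketch of the pattern being described, not a real product API; run_agent_step and ask_human are hypothetical stand-ins for whatever agent runtime and review channel a team actually uses.

```python
# Minimal sketch of the "flipped" human-in-the-loop pattern described above.
# run_agent_step and ask_human are hypothetical stand-ins, not a real API.
from dataclasses import dataclass
from typing import Callable


@dataclass
class StepResult:
    done: bool            # the agent finished the task on its own
    needs_human: bool     # the agent hit an explicit escalation point
    reason: str = ""      # why it stopped, e.g. "cannot proceed: failing test"


def process_task(
    task: str,
    run_agent_step: Callable[[str], StepResult],
    ask_human: Callable[[str, str], str],
    max_steps: int = 10,
) -> None:
    """Let the agent drive; pull a person in only when it says it cannot proceed."""
    for _ in range(max_steps):
        result = run_agent_step(task)
        if result.done:
            return
        if result.needs_human:
            # The reversal: the agent prompts the human, not the other way around.
            guidance = ask_human(task, result.reason)
            task = f"{task}\nHuman guidance: {guidance}"
    raise RuntimeError("Agent did not finish within the step budget")
```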

[00:51:26] Dan Lines: So if you have, you know, getting back to this, this automation and guardrails, like if you have all of this autonomous automation without guardrails, it's a recipe for overload, especially as you think about a world where you have more and more bot-generated pull requests. Honestly, for, for me, it goes back to, uh, also, again, the beginning of what we talked about: the accuracy or the quality of the agent.

[00:51:50] Dan Lines: Because you said, I think, uh, actually there can be situations where it increases cognitive load, because I keep having to correct the work that the [00:52:00] agent has done. Then you might say to yourself, why don't I just do it myself? Right? Yep. So it's like, it starts with, I think, that foundational thing of having the good

[00:52:09] Dan Lines: accuracy of the agent, by having great context and having an infrastructure, uh, that supports that great context.

[00:52:16] Ben Lloyd Pearson: Ex, exactly. And, and I'm reminded of, uh, a great episode we had a while back with Tara Hernandez from, uh, MongoDB, and, you know, she talked about how important it is to make the golden path the easy path.

[00:52:30] Ben Lloyd Pearson: So on the probabilistic side, you can, like, work to change organizational norms to accept agents as valid contributors, but then there's also, like, the deterministic side, where you can start automating your guardrails to reduce that mental load. I've already mentioned the risk of shifting AI compliance left onto developers.

[00:52:50] Ben Lloyd Pearson: Like, if you're doing that without actionable, automated oversight, that's where you're gonna create a lot of those friction points, uh, in the collaboration workflows. So the more that you [00:53:00] establish clear interaction patterns between developers and agents, uh, and design AI experiences that naturally fit into existing workflows,

[00:53:09] Ben Lloyd Pearson: that's how you're gonna build a lot stronger trust with your developers as you roll out your AI initiative and achieve better productivity improvements. Yeah. So actually, Dan, this brings us to the last and, like, one of the most crucial trends, I think, and that is that feedback loops can make or break AI adoption.

[00:53:28] Ben Lloyd Pearson: And this time I wanted to flip the script. You've been asking me a lot of questions, so I actually wanna flip this one around and ask you about this, 'cause I think it's something that you probably have a lot to share on. So why are feedback loops so critical? And how do teams create, like, a signal-rich environment that agents will need to get better over time?

[00:53:44] Dan Lines: Yeah. Okay. I wanna talk about what I'm seeing from real-life deployments, strategic AI deployments across, like, really cool and important companies. We'll start with: feedback loops are important. I think [00:54:00] everyone that's in engineering already knows this. It's so that you can get the information in a tight little feedback loop, so you can iterate quickly on your plan and strategy and execute.

[00:54:10] Dan Lines: Yeah. Now with an AI strategy, what is needed to have a great feedback loop? We talked about it earlier in the episode. You need a platform that provides transparency, in this case, measurable transparency into what are my AI agents actually doing? What are they producing? So I call that kind of like the first step.

[00:54:33] Dan Lines: It's like, how much are they doing? What are they doing? Where are they doing it? Then, as part of that, you need to understand, put the AI agents aside: do I still have bottlenecks in my workflow? Let's say that I got a Cursor or a Copilot. Am I actually producing more work, yes or no? So you also have to have that telemetry to say, and this is what I see most often,

[00:54:58] Dan Lines: oh, I'm actually not producing [00:55:00] more work 'cause I have a bottleneck. For example, in my PR process: I have some agents generating some great code in the IDE, but my business didn't get any faster 'cause I still have a bottleneck, for example. Yeah. That's having great transparency. And the third part, I think, to that, just starting with the transparency and actually receiving the feedback, is the feedback from the developer: do I have a way to engage, to see what the developers are experiencing?

[00:55:28] Dan Lines: They can say, is this agent working well for me or not well for me? And so on and so forth. So the first part is end-to-end, I'll call it, like, single-pane transparency. Now, the second part that I'm seeing to a strategy like this is actually taking action. So not just measuring, but taking action. And usually what, what I'm seeing customers do is they're asking themselves, am I purchasing the right AI tools?

[00:55:57] Dan Lines: And am I purchasing the right AI tools [00:56:00] in the right areas of my SDLC? So, for example, I'll use Cursor and Copilot to start with, 'cause that's where most people started, like, in-IDE code generation. But am I also purchasing agents that are maybe helping with testing, documentation, or, where LinearB specializes, AI code review, pushing code from PR to merge?

[00:56:24] Dan Lines: And can I, again, measure each one of those? So the second part of this is actually getting the AI tools in place that will unblock the workflow end to end. And the third thing around it, we already touched on it in this episode, is: do I have automated workflows and rules where controls and governance are actually accelerating my workflow?

[00:56:49] Dan Lines: So, do I have automation, do I have policies in place that support the tools that I've built? I have the transparency, and now I have the automated [00:57:00] rules in place to actually accelerate the workflow. Those are the companies that I see that are doing the best job in their AI initiative for productivity.
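
A rough sketch of what "automated rules that accelerate the workflow" can look like at the pull-request level: a deterministic policy check that routes bot-generated changes instead of dumping all of them on reviewers. The field names and thresholds here are illustrative assumptions, not a specific vendor's policy engine.

```python
# Illustrative only: a deterministic guardrail for bot-generated pull requests.
# Field names and thresholds are hypothetical, not a real product's policy API.
def merge_gate(pr: dict) -> tuple[bool, list[str]]:
    """Return (auto_merge_ok, reasons) so policy does the first pass, not a person."""
    reasons = []
    if pr.get("author_is_bot") and pr.get("human_approvals", 0) < 1:
        reasons.append("bot-authored PR needs at least one human approval")
    if not pr.get("tests_passed", False):
        reasons.append("CI has not passed")
    if pr.get("lines_changed", 0) > 500:
        reasons.append("too large for auto-merge; route to a reviewer")
    return (not reasons, reasons)


if __name__ == "__main__":
    example = {"author_is_bot": True, "human_approvals": 1,
               "tests_passed": True, "lines_changed": 120}
    ok, reasons = merge_gate(example)
    print("auto-merge" if ok else f"blocked: {reasons}")
```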

[00:57:11] Ben Lloyd Pearson: Yeah, and I, and I want to call out a past episode. That's quite the journey that you're, you're outlining there. And if, if you're an organization that's still, like, pretty early on to this, it can feel very daunting. Like, you know, let alone adopting AI, but even just getting the more foundational practices of having visibility into where, like, engineering bottlenecks are.

[00:57:31] Ben Lloyd Pearson: Like, that alone is a challenge, and solving that today can be a huge benefit even without AI. And a lot of organizations out there, I think, are still trying to solve that. So we had Somya Subramanian on the podcast earlier this year. She shared an amazing story about how she's taken a lot of the, the knowledge she gained from Google about standardizing metrics and building a foundation of data-driven habits.

[00:57:55] Ben Lloyd Pearson: And she shared a lot of great insights about how to effectively [00:58:00] implement that within your engineering team. So, if you're deep into AI, I think we've got a lot of great past episodes, but also if you're brand new to this stuff, there are still some great episodes for you to consume from our archive.

[00:58:12] Dan Lines: Awesome stuff. Alright, I think that wraps us up. Ben, thanks so much for walking through all of this with us; this guide is freaking awesome. If you or your team want to dive deeper into these six trends that we just discussed and get the full checklist for preparing your organization

[00:58:35] Dan Lines: for your strategic AI transition, you can find the guide. It's called The Six Trends Shaping the Future of AI-Driven Development, and it's on the LinearB website, LinearB.io. We'll be sure to put a link in the show notes as well. And of course, Ben, thanks for joining, and thanks everyone for listening.

[00:58:57] Dan Lines: We'll see you next week. 
