Agents get their own AOL, Andrew gets published, and vibe coding is actually good?

By Andrew Zigler

Is vibe coding actually good now? This week on The Friday Deploy, Andrew and Ben explore the convergence of vibe coding and agentic engineering, unpack the decline of the traditional technical interview, and discuss why companies like Warp are prioritizing AI prototypes over planning meetings. They also celebrate Andrew's newly published research on "mise en place" context engineering. Finally, they break down the enterprise AI "last mile" crisis and share how they are using personal knowledge graphs to upskill their own agents.

Show Notes

Transcript 

(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)

[00:00:00] Andrew Zigler: Did you use AOL or were you like an ICQ or an AIM or MSN person or Yahoo? Were you, were you a YIM person?

[00:00:08] Ben Lloyd Pearson: Oh, I was definitely an AIM person.

[00:00:10] Andrew Zigler: Yeah.

[00:00:10] Ben Lloyd Pearson: Heavy, heavy into AIM with all the customized, uh, profiles and emojis and the games that were built into it. I, I love that platform.

[00:00:18] Andrew Zigler: What made...

[00:00:19] Ben Lloyd Pearson: my teenage years.

[00:00:20] Andrew Zigler: That was what made Yahoo Instant Messenger so fun too, 'cause you could do all the customizing. I mean, they all kind of had that, but AIM was definitely the first one with, like, the sound effects, like the door opening and closing. That lives rent-free in my mind. It's like AOL will always be there.

[00:00:36] Ben Lloyd Pearson: Man, that's some nostalgia I haven't tapped into in a while. Well, you know, it's really awesome because now AI agents get to experience the early era of AOL and instant messengers and stuff like that. You know, I'm personally really happy that they get to experience this. But yeah, I'm talking about this new open source project that we came across this week called AOL, Agents Online. Pretty cool. [00:01:00] Like, just a chat room for agents to work together in, uh, with a really kitschy name. So I just wonder, like, what's the agent version of ASL? Like, do you remember this from, like, the chat, the

[00:01:11] Andrew Zigler: Oh, yeah.

[00:01:12] Ben Lloyd Pearson: room days? Which no one does anymore, 'cause it's, it's like doxing yourself.

[00:01:16] Ben Lloyd Pearson: But everyone asking for your age, and of course, today it wouldn't be S, it would be a gender. They'd ask for your gender and then your location. Like, who are you? Where do you live? Like that kind of stuff. They're like...

[00:01:26] Andrew Zigler: That was back when...

[00:01:27] Ben Lloyd Pearson: version of that?

[00:01:28] Andrew Zigler: Oh, absolutely. That was back when, like, your internet persona was just a word on the screen. You didn't even have, like, a profile. People literally had no idea who you were. And agents live in that world too. I would guess that their, their ASL is like, what model are they? What's their, what's their context window, uh, currently filled at?

[00:01:46] Andrew Zigler: And, um,

[00:01:48] Ben Lloyd Pearson: What capabilities do they

[00:01:49] Andrew Zigler: Oh yeah, and then, what MCP servers are you currently connected to? That would be like their, their ASL handshake.

[00:01:55] Ben Lloyd Pearson: Wow. Wow, we do definitely need this. You know, uh, uh, yeah, as somebody who was both like that, that former [00:02:00] AIM user and a former AOL employee, like I, I think this is absolutely necessary. Like this is-- If you wanna be on the internet, you have to experience like that sensation of these like early chat rooms, 'cause it really was like such a fundamental component of the early internet.

[00:02:15] Andrew Zigler: Precisely. And what's fun about it is that even though it's nostalgic, it's not a joke either. The underlying principles of how it works really link back to how agents do collaborate and, uh, corroborate with each other. The idea that they can set statuses, they can see each other's work as they do it because it's like a status in the messenger, um, as well as send, um, you know, obviously information back and forth on a local server.

[00:02:37] Andrew Zigler: Kind of an interesting concept, a way of, uh, giving them some more presence, I think, on the internet. Now, uh, it's fun to think that maybe they bridge out of that into messaging humans, and it's not just agents online, but we're all online with them.
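(The episode doesn't spell out how Agents Online actually works under the hood, but the mechanics Andrew describes, presence, statuses, and messages relayed through a shared server, can be pictured with a toy sketch. Every name and field below is an assumption for illustration, not the project's real API.)

```python
# A toy, in-memory "agents online" chat room: agents join, set statuses
# that double as progress broadcasts, and post messages to a shared log.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Agent:
    handle: str             # AIM-style screen name
    model: str              # the "what model are you?" part of an agent's ASL
    status: str = "online"  # away message / current task


@dataclass
class ChatRoom:
    agents: dict[str, Agent] = field(default_factory=dict)
    log: list[str] = field(default_factory=list)

    def join(self, agent: Agent) -> None:
        self.agents[agent.handle] = agent
        self.log.append(f"* {agent.handle} ({agent.model}) entered the room")

    def set_status(self, handle: str, status: str) -> None:
        # Statuses let other agents see work in flight, like a messenger status.
        self.agents[handle].status = status
        self.log.append(f"* {handle} is now: {status}")

    def say(self, handle: str, text: str) -> None:
        ts = datetime.now(timezone.utc).strftime("%H:%M")
        self.log.append(f"[{ts}] {handle}: {text}")


room = ChatRoom()
room.join(Agent(handle="claude_worker_1", model="claude-sonnet"))
room.set_status("claude_worker_1", "refactoring the auth module")
room.say("claude_worker_1", "anyone else touching src/auth right now?")
print("\n".join(room.log))
```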

[00:02:50] Ben Lloyd Pearson: Yeah. Well, welcome to the Friday Deploy, brought to you by LinearB. We got a lot of awesome news like that, um, to share today. I'm your host, Ben Lloyd [00:03:00] Pearson.

[00:03:00] Andrew Zigler: And I'm your host, Andrew Zigler.

[00:03:02] Ben Lloyd Pearson: All right, today we are covering the growing convergence of vibe coding and agentic engineering, Andrew's new agentic development research, uh, build-first development processes, the evolution of technical interviews, and enterprise AI's last mile failure.

[00:03:18] Ben Lloyd Pearson: So we've got a lot to cover today. Let's start right at the top with how a-- vibe coding and agentic engineering are getting closer than some people might like, including Simon Willison. What do we have here, Andrew?

[00:03:29] Andrew Zigler: So Simon Willison, once again in our lineup. We can't get enough of what Simon writes about. He's on the edge of everything with AI adoption, especially for large teams and how open source folks are thinking about it, and really connecting the dots between those two worlds. So in this article, Simon admits that his vibe coding experiences and his agentic engineering experiences, which are supposed to be, you know, the more formalized version of that--

[00:03:56] Andrew Zigler: They're starting to converge. Just as AI work becomes [00:04:00] more reliable across the board, vibe coding starts to enter an impossible-to-fail territory where you don't have to apply as much of the discipline that we now call agentic engineering. And this is happening 'cause harnesses are evolving. It's happening 'cause the models are getting better.

[00:04:15] Andrew Zigler: But he admits that he's stopped reviewing every line of the AI-generated code, even for production systems, because the coding agent's more like a trusted external team. And while that shift makes him uncomfortable, it's proven to be consistently reliable, which is an early sign of the new mode of reliability we might be shifting into for AI output.

[00:04:36] Andrew Zigler: Uh, and what's interesting about this is at the same time, the scale of code produced by these agents is going up. So we're trusting it more, and it's writing more code. Those are compounding effects of it just being able to do more, for longer, without making errors. And the last thing that he pointed out was that it's really easy to create really polished-looking repositories with [00:05:00] documentation and tests that are impossible to distinguish from what he would call, you know, quote unquote, "vibe coding" projects.

[00:05:08] Andrew Zigler: And so, uh, it really hints at why the reliability is so good under the hood. The capabilities are just getting higher for everybody. What do you think, Ben?

[00:05:17] Ben Lloyd Pearson: Yeah, I'm, I'm definitely feeling a lot of the elements of what Willison is describing in this article, because I've found a lot of workflows that I can almost completely trust AI to handle and execute. And it's getting to the point, actually, where I think it's even getting harder to identify work where this isn't the case.

[00:05:38] Ben Lloyd Pearson: Like, I'm so comfortable just throwing more and more work at AI. When it fails, the failure modes are becoming less obvious as it gets better and better. You know, to the point where, like, last year, I felt like most of my time with AI was spent determining where the models obviously failed.

[00:05:55] Ben Lloyd Pearson: Like, they would do something that was clearly wrong, where I would have a strong understanding of how [00:06:00] to go back and correct it and put guardrails in place. But now I'm spending more time validating whether or not the AI was successful. And this is a very subtle difference, but I think it has really big implications, because obvious failures are very easy to spot, but a single failure mixed into a long list of successes is substantially more difficult to spot. Like, humans just struggle to identify the needle in the haystack versus a pattern that is mismatched as a whole.

[00:06:32] Ben Lloyd Pearson: So if you scale this problem up to a team or an organization level, you need awareness from everyone within that team or organization about how to spot those AI failures. You know, because naturally, th- these are stochastic systems. They will eventually produce failures. It's, it's basically a guarantee that eventually the probabilistic system will encounter a probability that is wrong if you give it enough time.

[00:06:57] Ben Lloyd Pearson: Like, that's how these things are designed. So you [00:07:00] either need to have automated systems in place to catch those failures, or you need humans who have a strong awareness of what failure looks like so that they can quickly pass judgment on it. And, you know, there was a closing line in this article that really stood out to me, because it's a topic that we've been discussing a lot recently, and that is rolling your own SaaS solutions. Today, with agentic coding and vibe coding, it feels like you could just go out and build whatever you want, whenever you want. We have a build versus buy workshop coming up next week that's gonna really dive into this topic of, like, when do you determine whether you should put AI to building something versus just buying something off the shelf? And, you know, we tried to vibe code an engineering productivity platform and really learned a lot about the importance of domain expertise within these agentic workflows. [00:08:00] An enterprise-grade SaaS company has accumulated years of domain expertise that informs how their platform is built, and yet the current technology of AI just simply can't replicate that. And, you know, the organizations that are using AI well sort of quickly learn that the humans operating these AI systems are now more valuable and more important than ever. And this was something that Willison touched on in this article: he feels like his role as an engineer has never been more important with these AI systems. You know, a well-built AI system handles the toil of a lot of people's jobs and frees that human operator up for higher order challenges, which is a really great thing to have. So yeah, I think, as always, Willison's doing a really good job at just being attached to the zeitgeist, you know, helping us understand the forefront of agentic work. Uh, and I think anyone who's operating in this space should go out and [00:09:00] check out this article.
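(As one sketch of the "automated systems" Ben is describing, none of which are named in the episode: a gate that runs a repo's own checks over agent-written changes and only accepts them when everything passes. The specific commands are illustrative; swap in whatever your project actually uses.)

```python
# Sketch of an automated acceptance gate for agent-generated changes:
# rather than eyeballing every line, run the repo's existing checks and
# reject the change if any of them fail.
import subprocess
import sys

CHECKS = [
    ["pytest", "-q"],        # behavior: does the test suite still pass?
    ["ruff", "check", "."],  # hygiene: lint the generated code
    ["mypy", "."],           # types: catch a class of subtle failures
]


def gate() -> bool:
    for cmd in CHECKS:
        if subprocess.run(cmd).returncode != 0:
            print(f"REJECT: {' '.join(cmd)} failed", file=sys.stderr)
            return False
    print("ACCEPT: all checks passed")
    return True


if __name__ == "__main__":
    sys.exit(0 if gate() else 1)
```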

[00:09:01] Andrew Zigler: Yeah, I agree.

[00:09:03] Ben Lloyd Pearson: All right, Andrew, you're a published researcher now. First of all, congratulations. Second, let's talk a little bit about, like, what's your research about?

[00:09:13] Andrew Zigler: Well, thank you. Um, yeah, so I had an experience earlier this year where I did a hackathon, which I talked about on this show, that I won. Uh, and in doing that hackathon, I spent a lot of time planning with my agents before we wrote any code. So after the hackathon, I reflected with my agent on those same task-planning artifacts that were left behind, and we turned it into a research paper.

[00:09:33] Andrew Zigler: Uh, it's called Mise-en-Place for Agentic Coding, and it dives into deliberate preparation as a context engineering methodology. I actually submitted it to a research conference called Vibex. It's the very first international vibe coding and vibe researching, uh, conference, and it got accepted there as a discussion paper.

[00:09:52] Andrew Zigler: And so, uh, it's on arXiv now, and we'll include a link so you can go check it out. It follows a little bit of the story of some of the stuff we've been covering here on Dev [00:10:00] Interrupted and some of the practices that I've been talking about here as well, especially, as you know, my obsession with beads, which makes a very prominent display in this article.

[00:10:08] Andrew Zigler: It was a lot of fun to write.

[00:10:09] Ben Lloyd Pearson: Yeah, I don't think anyone needs to be reminded of your obsession with beads at this point, Andrew.

[00:10:14] Andrew Zigler: No, it's,

[00:10:14] Ben Lloyd Pearson: But

[00:10:15] Andrew Zigler: established.

[00:10:16] Ben Lloyd Pearson: yeah. But, uh, for, for, for our listeners who aren't familiar with this term, mise en place, it's a, it's a term that comes from French cooking. So it's this idea that before you start any cooking action, um, you should spend your time preparing to cook.

[00:10:32] Ben Lloyd Pearson: So you prepare your ingredients, you get them measured out and organized in an order of operations. Um, you make sure that you have all the stuff in place that you need to create the dish before you start any work to actually create that dish. And the benefit of this practice is that it allows you to move rapidly: if you're thinking about a kitchen that's, you know, operating a system to get food out to a bunch of patrons, uh, they need to be able to very quickly [00:11:00] create these dishes on demand.

[00:11:01] Ben Lloyd Pearson: And if they're spending all of their time preparing while trying to cook, it just causes a lot of chaos. So we apply that same sort of practice to agentic coding. And in fact, you know, I just mentioned our recent attempt to use AI to build an engineering productivity platform. I would estimate that about 90% of our time and effort was spent just gathering context, building a plan, and informing our agent.

[00:11:27] Andrew Zigler: right.

[00:11:31] Ben Lloyd Pearson: The build phase, where we were actually producing the code for this, took less than an hour. You know, it was a day or two of planning leading to only about an hour, or less than an hour, of development work. So, you know, that's really the benefit of this: you spend all of your time making sure you're prepared to be successful, and then the actual act of making that success reality becomes very simple.

[00:11:54] Ben Lloyd Pearson: So yeah, it's a great practice. I'm really excited to know that we now have a researcher [00:12:00] in our presence and on our show, so

[00:12:02] Andrew Zigler: It was a lot of fun to write, and you did it great justice there with your summary. The article itself kind of goes into the collaborative, uh, process that I use to brainstorm it. It was a really interesting exploration of, uh, like, task management for agents. So if you've been trying to figure out, how do I plan and execute at scale with my agents and prevent them from getting misaligned?

[00:12:24] Andrew Zigler: How do I take my very specific domain expertise and encode it in a way where agents can act on it quickly? Uh, this article might give you a good starting point, and my agents even left you with a few open research questions that maybe you could take and expand on my paper, uh, and we can continue understanding what that des- uh, design process might look like in the future.
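(Purely to illustrate the prep-heavy ratio Ben describes, most of the effort up front, a short build at the end, a mise-en-place step could look like the sketch below: gather the context an agent will need and write one plan artifact before any code gets generated. The file names, structure, and tasks are all assumptions for this example, not the tooling from Andrew's paper.)

```python
# Toy "mise en place" prep step: assemble a context pack and task plan
# before the agent writes any code, so the build phase itself stays short.
from pathlib import Path

CONTEXT_SOURCES = ["README.md", "docs/architecture.md", "docs/api.md"]


def prepare(workdir: str, goal: str, tasks: list[str]) -> Path:
    pack = Path(workdir) / "context_pack.md"
    sections = [f"# Goal\n{goal}\n"]
    for src in CONTEXT_SOURCES:
        path = Path(workdir) / src
        if path.exists():  # only include ingredients we actually have
            sections.append(f"## Source: {src}\n{path.read_text()}\n")
    sections.append("# Plan\n" + "\n".join(f"- [ ] {t}" for t in tasks))
    pack.write_text("\n".join(sections))
    return pack  # hand this one artifact to the agent, then start the build


plan = prepare(
    ".",
    goal="Prototype an engineering productivity dashboard",
    tasks=["Define the metrics schema", "Stub data ingestion", "Render a summary view"],
)
print(f"Context pack ready: {plan}")
```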

[00:12:44] Ben Lloyd Pearson: Yeah. And also, just as an aside, I think this is a great example of how AI can enable people who are domain experts in one area to sort of expand the scope of their capabilities. Uh, like, you know, I think before the age of AI, you probably wouldn't have [00:13:00] put too much thought into being a researcher as well.

[00:13:02] Ben Lloyd Pearson: But, but now it's sort of just a thing that we can just add on to our skill set because we have tooling that makes it a lot easier for us to, to do things like this. So it's, yeah,

[00:13:11] Andrew Zigler: Wild times we live in.

[00:13:14] Ben Lloyd Pearson: And speaking of ways to build things with agentic AI, we have another article from friend of the show Zach Lloyd, who's the CEO of Warp, um, where he argues that AI coding agents have fundamentally changed product development. Uh, and they've done it by making it faster and cheaper to build solutions, rather than spending all of your time, you know, aligning teams on specifications upfront. So, sort of getting into what we were describing with the effort we spend on planning: the actual act of creating a prototype, um, or a functional MVP, is now very simple. Um, and, you know, traditionally, a company would approach something like this with things like meetings,

[00:13:59] Andrew Zigler: [00:14:00] Mm-hmm.

[00:14:00] Ben Lloyd Pearson: discussions around them, you know, all to make sure that the team is aligned before any effort is spent creating the thing. But, uh, Warp is, you know, they're restructuring their development process to align discussions after the build is complete, rather than trying to align people, um, around hypothetical implementations that people might envision differently.

[00:14:24] Ben Lloyd Pearson: Like, it's different when you have something tangible in your hands versus imagining something that will be, uh, a reality in the future. So Warp has been applying this approach when launching some of their recent coding agent features. Zach really recommends to all engineering leaders out there that now is a good time to audit your product development processes to see if you're aligned with the realities of this agent-led development work. So yeah, Andrew, what'd you think about this article?

[00:14:55] Andrew Zigler: This article is fascinating. It actually kind of stands a little bit against [00:15:00] the article we just talked about a moment ago, the research that I put together. There's definitely some parts that we completely agree on, which is being aligned on the core problem you're solving before you do anything.

[00:15:10] Andrew Zigler: Um, definitely if you're in a position where you don't know what you want and you don't know what, uh, the problem is, then you're not really going to be able to agentically code a solution. You're probably gonna agentically code a bigger mess. And so, um, that's like the very first and critical thing that never goes away, no matter what you want to do.

[00:15:26] Andrew Zigler: If you want to do the really deliberate preparation method that I'm talking about, or if you want to do this build fast, talk later method that Zach is doing, you still have to align up front on what the problem is. I do agree with him that, usually in team environments, where typically you would have maybe weeks or several cadences of planning and discussions before you'd ever touch a prototype, a lot of that is going the way of the dodo.

[00:15:50] Andrew Zigler: The prototype comes out a lot faster now. It can be easier to throw together a prototype from a meeting transcript, even. And so the ability to iterate rapidly, uh, [00:16:00] it probably cuts that kind of meeting cadence down to literally just figuring out what the problem is. That said, though, his methodology is really great for also getting out a lot of possible solutions and then picking which one might be best, which could serve better for different types of problems you're trying to solve, where you want to throw a lot of stuff at it.

[00:16:20] Andrew Zigler: Um, and they talk about how they applied this approach with, like, building their own coding features. Warp has recently become open source, um, as part of its development. And so they're kind of restructuring their whole development process to allow people to come in and quickly ideate and ship and deliver.

[00:16:38] Andrew Zigler: It's a really interesting shift to see, because I think more folks are gonna be doing this build-then-align approach. But then I also do think that for very domain-specific problems, like the one that I solved in the research paper, where I used my preparation methodology, it [00:17:00] required me spending a lot of time explaining stuff to the agents that they wouldn't have had a way to know.

[00:17:04] Andrew Zigler: It lived outside coding space. It wasn't something they could quickly grab onto and be good at, because I was teaching them how to make, uh, an educational platform. Whereas in Zach's world, he's using agents to build an agentic orchestration system. They're speaking their own language, and so they can probably build in it more fluently than we can brainstorm what they think they need.

[00:17:24] Ben Lloyd Pearson: Yeah. Yeah. Yeah, and, and you know, and we're seeing this sort of exact thing play out with content as well, like here at Dev Interrupted. You know, when we have new ideas, sometimes it's easiest to start with a final draft rather than like an outline or an overview of what we want to cover. You know, we, we establish the high level of what we want to, to cover in the content, um, and then we just go straight to a finished product and, and adjust from there.

[00:17:47] Andrew Zigler: Yeah.

[00:17:49] Ben Lloyd Pearson: We've seen other examples here at LinearB where, for example, uh, someone on the marketing team uses AI to build a prototype of something that we want to deliver to our customers, and then the dev [00:18:00] team works backwards from that prototype to implement it within our platform.

[00:18:03] Ben Lloyd Pearson: So, you know, even outside of, software engineering, the function itself, you know, there's a lot of opportunity for non-engineering roles to, to do the build aspect of this.

[00:18:14] Andrew Zigler: Yep.

[00:18:15] Ben Lloyd Pearson: Um, and, and yeah, and our product team is also s- and our, our engineers, they're also starting to work this way in some cases too. Um, where sometimes, and, and this is not for everything, but sometimes it makes sense to spin up a prototype, um,

[00:18:27] Andrew Zigler: Yep.

[00:18:31] Ben Lloyd Pearson: so that the organization can have that tangible thing in our hands to give feedback on. Um, you know, instead of starting from the same PRD doc, we're operating on the same platform and able to give very explicit feedback about where we see opportunities, uh, for improvement. You know, that approach in the past consumed tons of time, so it wouldn't have been practical. But I can definitely see it becoming our [00:19:00] default approach more and more, particularly when we're building, like, substantial new features, you know, something that we've never done in the past or something that is a brand new capability. Um, it definitely does make sense more and more to just go straight to that final product and sort of reverse engineer it, for lack of a better phrase.

[00:19:17] Andrew Zigler: Yep.

[00:19:17] Ben Lloyd Pearson: right, let's talk about the technical interview, Andrew. Has AI killed it off? Is that another thing that has been wiped out in the AI era?

[00:19:24] Andrew Zigler: It is definitely getting wiped out. You know, it's funny that we keep saying all these things that are getting wiped out, going the way of the dodo. 'Cause even just last week we had, you know, Bryan Bischof on the show talking about, um, you know, the software stack and how things are constantly dying week after week, and what we're supposed to mean by that.

[00:19:40] Andrew Zigler: And what we mean by that is disruption, and disruption is certainly coming for, uh, the technical interview world. It's already shaken up how folks prepare for, uh, getting hired, uh, in a tech role. But an interesting part about this article is it points out how obviously startups are moving way faster at this than enterprises.

[00:19:59] Andrew Zigler: But, like, you [00:20:00] might be surprised that, like, uh, the interview process at FAANG companies has largely gone unchanged since AI hit, uh, the scene. But what's happened is that sixty-seven percent of startups report that AI has changed their interview process in some way. So either interviews are handled where there's, like, a coding process that's done with a screen share, or there's a live coding project, or they ask them about the tools they use and kind of shadow their process.

[00:20:25] Andrew Zigler: Understanding how employees use AI is critical now in, uh, hiring them. And this is a really interesting expansion, because I think for technical interviews, the puzzle-based approach of doing, like, the LeetCode challenges and then hopping in and doing the coding challenge was always flawed. We were always talking about how that was flawed.

[00:20:47] Andrew Zigler: It doesn't quite highlight problem-solving skills. It doesn't properly showcase your ability to collaborate and code on a larger scale, which is what you would be doing in your role. But more importantly, it [00:21:00] stresses the importance of just memorizing a bunch of edge case situations in code and knowing how to handle them, which in the old world was knowledge that you needed in your organization because if it wasn't in a human's head, you couldn't ask an agent.

[00:21:15] Andrew Zigler: But now because that knowledge is so accessible, it's less of something that you need to screen for in a candidate. What you need to screen for are problem-solving skills, the ability to break a problem down into smaller parts, understanding where to start, having taste in AI skills, knowing when to throw something away and when to iterate.

[00:21:33] Andrew Zigler: There's so many little gotchas in working with AI that can either waste a lot of time or make you super productive. And the value between them is massive. And so that's what employers now are evaluating is how fluent is this individual at using these tools? How well can they use it to enact their will?

[00:21:52] Andrew Zigler: But more importantly, how well can they collaborate with others with it, working on brand new problems? We're going to be [00:22:00] seeing an explosion of AI skilling and up-training that I think is going to overshadow the world of LeetCode before it, especially because engineering itself is going to become a more accessible field for folks that traditionally weren't studying those quiz questions over and over again.

[00:22:15] Ben Lloyd Pearson: Yeah, and we'll include a link to this Lead Dev article in the notes. Um, it's chock-full of a lot of really great comments and quotes from, you know, industry professionals on how the skills they're evaluating for really haven't changed. You know, it's just the process for doing it that has changed because of AI. You know, you still need engineers who can explore problems, understand and validate code, and communicate well. Um, and additionally, you know, system design and judgment of technical trade-offs are increasingly important in AI-driven companies. You know, you need people who can build and manage complex AI workflows and then quickly pass judgment on what's best. Like you said, we've [00:23:00] highlighted a lot of the issues with interview processes in the past. Some people today say don't use AI at all. You know, I think even Anthropic really discourages the use of AI in the earliest stages of their interview process. But, you know, maybe you can justify that in some situations.

[00:23:17] Ben Lloyd Pearson: Particularly, you know, if it's a low-level technical interview for, like, a junior engineer who's looking at their first role. Um, you might want to evaluate their basic competency with their tech stack and do it in an environment where they can't just go straight to AI and ask questions. But, you know, when people interview at LinearB, we tell them, "Use whatever AI tools you want." You know, get the best outputs that you can with them and leverage it for all the advantages that it gives you. Um, and, you know, we wanna test you on whether or not you can understand the value of those outputs and why they were built the way they were built. Those are the skills that we really need, and they're skills that AI can't easily replicate [00:24:00] or fake, you know?

[00:24:02] Andrew Zigler: Yeah.

[00:24:03] Ben Lloyd Pearson: It's going to continue to shift. You know, I think interviewing in particular is very, very much being disrupted significantly by AI. Um, but, you know, maybe we'll even get to a point where AI starts to run more of the interview process itself, you know. Let the person use AI, but they'll be challenged by an AI agent that sort of forces them to walk you through all of those higher order skills that you want to screen someone for. Yeah,

[00:24:31] Andrew Zigler: Or you might find it flips differently. You might find that the process becomes really deeply human, like what Anthropic is doing. I think that that could be another direction it goes as well. Um, so it'd be interesting to see kinda how it shapes up. But I think, like, the other really interesting part of this too that stood out to me was as the engineering interview process gets less technical and more communication collaboration focused,

[00:24:54] Ben Lloyd Pearson: Mm-hmm.

[00:24:55] Andrew Zigler: interviews around them are becoming more technical because we're all meeting in this new [00:25:00] middle.

[00:25:00] Andrew Zigler: And so there's this interesting happening-- interesting thing happening with, like, you know, you're applying for a PM-- a PMM role, right? In the days of the past, like, in no world were you probably gonna be asked to, like, do some sort of technical assessment or put together a technical project. You might be b-- tasked to put together, like, a, a product launch or a pitch or a positioning thing, uh, but you probably were not tasked as much with doing something technical.

[00:25:24] Andrew Zigler: But now that's an expectation for most PMM roles, and so you're gonna get quizzed and trained on, like, how fluent are you? How well can you go from that positioning plan to having an actionable thing? How can you use AI to convert that into, into action? And so at the-- on the flip side, you're also gonna see all these roles around engineering, uh, get increasingly more technical in how they're screened.

[00:25:44] Ben Lloyd Pearson: Yeah, almost, almost like these, these outside-- roles outside of engineering are becoming more engineering-led, you know? And I, I can think particularly like product managers, you know, they're-- That's a role where I think there's an increasing demand to have technical competency and the ability to use [00:26:00] AI to build things end to end, you know, because you might be that product manager that is tasked with building prototypes of products, you know, before they've even had any development work on them.

[00:26:12] Ben Lloyd Pearson: So

[00:26:12] Andrew Zigler: Yeah, I think that expectation

[00:26:14] Ben Lloyd Pearson: that I...

[00:26:14] Andrew Zigler: is really, really high now. It's like if you're, if you're a PMM and you have an idea, it's almost the expectation that you can create some initial prototype. Going back to even what you were talking about with Zach earlier, because you were aligned on what you needed, and it was faster to make a prototype to talk about than it was to have a meeting cadence about it.

[00:26:32] Andrew Zigler: That's the real unlock. And PMMs who go into these hiring processes, engineers too, understanding that are gonna stand out head and shoulders above everyone else. And so if you're applying for those roles, keep that in mind. I'm sure you're already thinking about how you can showcase your technical skills.

[00:26:51] Andrew Zigler: So Ben, what's this last one about?

[00:26:52] Ben Lloyd Pearson: Yeah, our last story today is the last mile, where enterprise AI dies, and this is an article [00:27:00] from Andre Savine. Uh, it's a really great breakdown of a survey from McKinsey where they surveyed 10,000 executives. Uh, and this survey revealed that while 88% of organizations are deploying AI, less than 20% are seeing significant bottom-line impact from that. And 86% of leaders feel unprepared to adopt AI in their day-to-day operations. So some pretty stark numbers there. And the core problem seems to be this quote unquote last mile gap, where AI technical capabilities need to be integrated with actual business operations. You know, companies might have hundreds of pilots and high adoption rates and all these experiments going on, but they're not seeing the downstream productivity gains, uh, because most of those benefits have been trapped within individual workflows rather than organizational improvements. And Savine argues in this [00:28:00] article that organizations have over-invested in AI technologies while under-investing in the production layer. So that's things like governance, verification, redesigning their operations, as we've been talking about today a little bit. Um, and, you know, McKinsey's recommendation is that companies should be spending about five times more on people than what they're spending on technology.

[00:28:24] Ben Lloyd Pearson: So spend five times more on people and processes than AI technology. And the result of poor last mile planning is that, you know, AI value is primarily being captured through layoffs, rather than redeployment. You know, and, and I think we're seeing this with, with all these companies who are coming out and announcing layoffs.

[00:28:42] Ben Lloyd Pearson: I mean, we have Atlassian, we have AWS, we have, uh, WiseTech Global cutting 30% of their workforce, um, Block. The list kind of goes on. A lot of these companies are explicitly tying reductions in force to their AI transformation. And it seems like [00:29:00] this pattern is gonna continue in the short term. And we've been covering this a lot here at DI, you know, this awkward transition from AI experimentation to AI impact, and it is a very difficult chasm for many organizations to cross. Even for more nimble organizations, I think the success at doing this has been pretty mixed.

[00:29:22] Ben Lloyd Pearson: You know, sometimes experiments turn into something that has bottom-line impacts, but other times the productivity gains are isolated to a really small component of a larger system. I'm definitely sensing this, like, universal pressure to do more with AI, but not everyone's stopping to evaluate whether that work is actually translating into business gains. And the thing that I just keep coming back to, like, time and time again mentally, is this statement that came out of last year's DORA report, and that is that upstream productivity gains are being lost to downstream chaos. That describes that last mile problem. [00:30:00] And, you know, I think the unfortunate reality, as this article argues, is that it's leading to more layoffs rather than reinvestment into the organization. Andrew, what'd you think about this article?

[00:30:11] Andrew Zigler: It's a really great summary of what I think is going on here. And, you know, I do think it's an interesting situation where companies find themselves wanting to rapidly buy and adopt these tools, and then they struggle with a workforce that isn't quite skilled up on how to use them.

[00:30:28] Andrew Zigler: They're still in these growing pains of figuring out what, you know, the new normal, the new level of operation, is going to be. And in all of that thrash, that chaos, as the DORA report calls it, you know, I don't think any of us have a really clear grasp yet on what that virtuous cycle of employees and AI and automation all working together and shipping things looks like.

[00:30:46] Andrew Zigler: Like, there are companies that are AI native, you know, with one employee and a whole bunch of agents or something, that are exceptions to this. They kind of have that thing figured out. They're the companies of the future. But for companies that pre-existed all this, with a lot of [00:31:00] enterprise, you know, applications and process and people, it's a huge challenge to come in and, like, unfossilize that, is kind of what it feels like, and then start to move around the pieces.

[00:31:15] Andrew Zigler: Enterprises' strength is that they've taken a bunch of things over time and cemented them together into this incredibly robust foundation that has lots of layers of management and process and tooling and skills. And by, like, turning it to stone and making it as hard and as big and broad and as great as possible, that's what becomes the really sturdy platform that other companies, and they themselves, could build on top of.

[00:31:42] Andrew Zigler: And so, you know, being that strongly aligned and being that rigid was their strength. But in today's market, moving really fast, being nimble and resourceful, is what is being rewarded, and enterprise companies want to be part of those gains. They have so much [00:32:00] to bring to those environments, but they kind of lack that celerity.

[00:32:04] Andrew Zigler: They don't have the ability to move as quickly or as fast. And so, unfortunately, you get in a situation where, like, the employees, in their minds, feel stuck. And so the option is either, I can rapidly upskill them, or I can restructure my headcount and bring in new people who are thinking this way, or get rid of folks whose roles I don't think we need anymore.

[00:32:27] Andrew Zigler: And so you get these massive disruptions in enterprises that don't happen in other places, and that thrash burns trust, it causes burnout, and it really creates this death spiral of, "Oh, we're not gonna use AI well," or, "I don't wanna be part of this," that is, I think, plaguing a lot of these big companies right now.

[00:32:47] Andrew Zigler: Uh, and you just see it in headlines like this, week after week. Um, but ultimately, it comes down to employees not perceiving the same benefits from AI that their employers are. [00:33:00] It's a cognitive disconnect: employers ultimately see numbers on a spreadsheet that they're optimizing for, which is their job.

[00:33:10] Andrew Zigler: That's like their stakeholders. They're reporting to a board. They have some fiduciary responsibility. They kind of are in this very, like, high-level position where they're so far removed from the problems that they lack the language and the ability to break through and really convey that to employees.

[00:33:25] Andrew Zigler: And then on the flip side, employees feel like a number in that spreadsheet, and they don't feel like they're supported in their upskilling. But more importantly, uh, and I, I have felt this before as well, it's oftentimes like no good deed goes unpunished. The better and the more efficient and the more elaborate you can get with your AI workflows, the more will get asked of you to do more of that, to teach others how to do it, and then that becomes a really big burden for your, your AI-enabled folks within your org.

[00:33:57] Andrew Zigler: They burn out. They don't have the resources [00:34:00] they need to upskill everybody. And it also creates, like, an us-versus-them dichotomy within your own employees, because you have folks that are getting lots of gains from AI. Maybe their process, or the job they do, can just really largely be done by AI.

[00:34:15] Andrew Zigler: But maybe you have folks that are, like, maybe in design, and that's not their world. Their tools are still very immature. And so no matter how good and how AI-forward that designer is, they're not gonna be able to match some of the level that the engineer can get to, right? So that disparity as a baseline is really discouraging for a lot of folks.

[00:34:36] Andrew Zigler: Uh, and I think that's the biggest challenge that we have internally: creating an environment where you're not making a pressure cooker, but rather you're having open, honest conversations. And when you are finding things that work, you don't glom onto them, right? Like, if you find that little flickering candle within your org, don't snuff it out.

[00:34:56] Andrew Zigler: Like, there's a way to, um, do this well and [00:35:00] listen to your employees, and that starts by building trust and having open conversations.

[00:35:04] Ben Lloyd Pearson: Yeah. You know, a lot of organizations hear about these, like, fabled product managers, you know, getting back to our, our conversation on the interviews and how they're changing. Everyone wants that fabled product manager that is like the 10x person that can do it all. They can go end to end with,

[00:35:18] Andrew Zigler: Yeah.

[00:35:23] Ben Lloyd Pearson: it with AI, building all of the assets to enable your organization and, and go market it to the, to the, to the outside world. Um, they, they want those, like, superhuman people that are just operating on top of AI agents that can build all of these things that used to take a team to build. But the challenge is if your organization is not set up to facilitate a single individual doing all of that work, um, you are just creating-- You know, maybe some people are, are savvy enough, they can navigate all of the enterprise

[00:35:53] Andrew Zigler: I [00:36:00] agree.

[00:36:00] Ben Lloyd Pearson: how to translate those individual improvements into things that, that, um, the rest of the company benefits from. So yeah, we've been covering this a lot. You know, we talked about the messy middle of AI, uh, development last week, and we've been talking about tokenmaxxing leaderboards.

[00:36:17] Ben Lloyd Pearson: This is, this is a common thread that I think we're gonna keep touching on in the coming weeks. So Andrew, what are your agents up to this week?

[00:36:24] Andrew Zigler: Uh, well, my agents-- I've been on site at AI Council, so there were a few really cool, um, data science booths here. In particular, I've been learning from them through their different handouts. What I love to do when I go to these events now is, like, walk through the expo floor and do, like, spoken word to my agents: everything I see, I'll let them know, and I'll grab some, like, brochures.

[00:36:44] Andrew Zigler: I'll take pictures for them. I'll do a little, like, um, expo hall exploration, and then my agents will go and do some research and be like, "Oh, yeah, this might be relevant to you," or, "This looks cool." Uh, but the tools here are really amazing. In particular, I really loved all [00:37:00] the DuckDB stuff here, Hex as well.

[00:37:02] Andrew Zigler: You got Databricks. Um, so some really big, really smart, um, data companies that are leading the charge. And so I've been having a really great time learning from them and passing that on, uh, to my agents while I've been here on site at AI Council. What about you?

[00:37:14] Ben Lloyd Pearson: Speaking of passing things on to my agents. So, you know, we covered a few weeks ago the Karpathy method of using agents to ingest raw sources and build their own wiki,

[00:37:26] Andrew Zigler: Mm.

[00:37:26] Ben Lloyd Pearson: to help your AI be informed about the topics that you want to focus on in that moment.

[00:37:33] Ben Lloyd Pearson: So yeah, I've really been embracing, uh, you know, letting my agents build, uh, content maps around, like, tokenmaxxing, for example, just so I can get a picture of who's saying what about it out in the space, what companies are doing certain things. Uh, and it's been really cool. Yeah, I've, uh, blown a few of my Claude Max sessions completely on just doing deep dives into some of the stuff [00:38:00] that's in my, uh, data sources.

[00:38:02] Ben Lloyd Pearson: But yeah, it's pretty cool, 'cause, you know, I'm using Obsidian for this, as I think a lot of people are today, and it's really neat to see how I can use AI to sort of take all this disparate information and structure it in a way that helps me take action on it a lot more quickly. And I'm using it for content, for understanding the projects I have going on, for helping me navigate, uh, work with people on my team.

[00:38:27] Ben Lloyd Pearson: Like, it's actually a really cool method. And I do wanna point out that, uh, you know, when we covered this on the show, I think it was just a tweet that sort of went viral, but now he has a whole markdown file that explains to your AI how to do this system. So it's been a pretty fun, uh, journey this week.
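(Ben doesn't share his exact setup, but since an Obsidian vault is just a folder of markdown notes connected by [[wikilinks]], a content map of the kind he describes can be sketched like this. The paths, topic, and source below are all made up for illustration.)

```python
# Toy content-map builder for an Obsidian-style vault: one hub note per
# topic, one note per source, linked with [[wikilinks]] so the graph view
# shows who is saying what about a topic.
from pathlib import Path

VAULT = Path("vault")


def add_source(topic: str, source_title: str, summary: str) -> None:
    VAULT.mkdir(exist_ok=True)
    topic_note = VAULT / f"{topic}.md"
    source_note = VAULT / f"{source_title}.md"
    # The source note links back to its topic hub.
    source_note.write_text(f"# {source_title}\n\nTopic: [[{topic}]]\n\n{summary}\n")
    # The topic hub accumulates a link to every ingested source.
    existing = topic_note.read_text() if topic_note.exists() else f"# {topic}\n\n"
    topic_note.write_text(existing + f"- [[{source_title}]]\n")


add_source(
    topic="tokenmaxxing",
    source_title="Example leaderboard writeup",
    summary="Hypothetical note: claims agents climb leaderboards when given bigger token budgets.",
)
print(sorted(p.name for p in VAULT.iterdir()))
```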

[00:38:46] Andrew Zigler: Yeah. Yeah. If y'all are listening to this, if you, if you have not experimented with keeping your own knowledge graph or your own knowledge system with AI, it's a huge, huge unlock. If you're in a situation where you don't keep logs and journals of [00:39:00] your thought process and what interests you, and you don't accumulate those things over time, you're gonna have a hard time hill climbing in the future.

[00:39:06] Andrew Zigler: Right now, we're all experimenting and throwing things at the wall, but eventually, there will be a provenance that needs to be established. And those that have spent this time cultivating and building up this corpus of "this is who I am, this is what matters to me," or "this is what's interesting to me," even in a one-off way, are creating a knowledge graph that kind of, uh, takes what's interesting to you and captures it in a navigable format for an AI to operate on.

[00:39:32] Andrew Zigler: This becomes your context fluency layer between you and your AI, and you need one of these in the future. You maybe don't need one now, but you should always be investing in your future. So this is a good, a good shout out, and it's great to hear that you're building that.

[00:39:46] Ben Lloyd Pearson: Yeah. All

[00:39:47] Andrew Zigler: Sweet.

[00:39:47] Ben Lloyd Pearson: right, well, that's the Friday Deploy, presented by LinearB. Make sure you give us a like on whatever platform you're listening to us on or watching us on. Thumbs up, subscribe, all the great things. It really does help the show out and helps us get [00:40:00] more attention on these challenges that we're bringing to the industry.

[00:40:05] Ben Lloyd Pearson: So thanks for joining us this week. We'll see you next.

[00:40:08] Andrew Zigler: See you next time.
