Goblins in prod, the messy middle of AI adoption, and everything is a harness now

By Andrew Zigler

Are you stuck in the "messy middle" of AI adoption where individual productivity doesn't actually translate to organizational impact? This week on the Friday Deploy, Andrew and Ben break down the hilarious and terrifying realities of agentic intention drift, exploring how a "goblin" invasion in ChatGPT and poorly scoped tokens are wreaking havoc on production environments. They also navigate this messy organizational adoption phase, discussing why senior developers are accelerating while juniors stall out on the K-shaped productivity curve. Finally, the hosts wrap up with a look at the open-source renaissance of agentic harnesses like Lattice and Pi.dev.

Show Notes

Transcript 

(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)

[00:00:00] Ben Lloyd Pearson: I don't know, Andrew, are you getting a Codex pet? I, like, I feel like it's only a matter of time until we have, like, NFT-based identities for all of our bots.

[00:00:07] Andrew Zigler: Oh, no. That is not the direction I want this to go at all. I actually think it's funny to see things like Codex Pets. For those who haven't seen it, it's a plugin for Codex that lets you add a terminal, uh, companion or a terminal friend as you're using, uh, the tool. You know, we've obviously been seeing this before.

[00:00:26] Andrew Zigler: Uh, Claude Code had buddy mode a few weeks ago or a month ago. I had

[00:00:30] Ben Lloyd Pearson: And I

[00:00:30] Andrew Zigler: a buddy on there.

[00:00:32] Ben Lloyd Pearson: the loss of yours. That was devastating. So y- I,

[00:00:34] Andrew Zigler: yeah, poor Trixel. Trixel's

[00:00:35] Ben Lloyd Pearson: for a new pet.

[00:00:37] Andrew Zigler: Trixel's time in this world was too short, uh, because the Claude Code buddy feature only lasted about 10 sweet days. Uh, but... And so it's funny to then see the Codex folks over there and, like, oh, you know, this is a grass-is-greener kind of moment where I'm like, "Oh, they're having fun over there with their pets."

[00:00:54] Andrew Zigler: But you know what? I'm actually not really leaning into the whole, uh, trying to make it, [00:01:00] uh, a personality, or trying to have it as this, like, thing that I talk with. A lot of times, my sessions are, you know, very ephemeral, and they're anchored in durable sources of context and information.

[00:01:11] Andrew Zigler: But, like, the sessions and agents themselves don't really glom much of an identity. Like, even in the OpenClaw world, when that really hit the scene, there were parts of that that I stole and wanted to use for my own harness, and I certainly did. But, you know, the, the growing personality over time just wasn't really one of them.

[00:01:28] Ben Lloyd Pearson: Well, that's not entirely true, 'cause your agents do ask you to build them a profile picture at the very least. So it seems like

[00:01:37] Andrew Zigler: That is true. That is true. That is because all of my, uh, agents are represented as a little fish with a cowboy hat, for those that aren't in the savvy know. I post about them on LinkedIn because, um, they certainly all have the same kind of shape and face.

[00:01:50] Andrew Zigler: But actually, that is just because I have a really simple template that I take to Nano Banana, and I make them a little fish with a hat. It's zero cognitive burden for me. I don't have to be like, [00:02:00] "Oh, what's your name? What are you?" Like, I'm not trying to put any kind of face on them. But it doesn't mean that I can't have fun with it.

[00:02:07] Andrew Zigler: So yeah, I'll go slap a fish with a cowboy hat on them.

[00:02:10] Ben Lloyd Pearson: learnings you get here at the Friday Deploy, brought to you by LinearB. I'm your host, Ben Lloyd Pearson.

[00:02:17] Andrew Zigler: And I'm your host, Andrew Zigler.

[00:02:19] Ben Lloyd Pearson: Yeah. And this week, beyond Codex pets, we have OpenAI's goblin invasion, organizational AI learning crisis, specs massing, uh, specs maxing, wow, and AI productivity myths. Uh, tongue twisters aside, where are we gonna start, Andrew?

[00:02:35] Andrew Zigler: Let's talk about this story where goblins are overrunning our chats. Um, so this is a really fun postmortem that came across our desk in the last week. If you've been on social media, you've probably seen fun prompts and outputs folks have shared from their usage of ChatGPT, where, uh, it becomes obsessive in mentioning goblins, gremlins, and other creatures in its responses, uh, rendering them in places where they don't [00:03:00] belong.

[00:03:00] Andrew Zigler: Uh, and goblin usage and goblin-directed conversations spiked over 175%, which is an amazing thought, that there's a dashboard or a metric inside of OpenAI that's tracking goblins. And it forced a lot of folks to add explicit anti-goblin instructions, literally like a barricade on the town walls to keep, uh, goblins out of their outputs.

[00:03:23] Andrew Zigler: So, you know, where did this come from? This is a really interesting dive into the reality of how LLMs are trained and where they get, uh, their performance from. So the problem originated from personality training for a, quote, "nerdy" ChatGPT preset, where, you know, things like, uh, creature references to a bestiary might come up every once in a while.

[00:03:46] Andrew Zigler: It's the idea of, like, having a personality on Claude where, like, you'd probably be playing Dungeons & Dragons in a basement with it. So this, uh, output from this particular personality training ended up spreading to other models, uh, [00:04:00] because it's a s- pretty standard industry practice to use model outputs to train future models.

[00:04:05] Andrew Zigler: So all it took was the existence of this nerdy ChatGPT preset somewhere deep in the training architecture of the models to slowly be pumping out goblin-obsessed outputs and otherwise playing an infinite game of D&D in some virtual basement somewhere. And it really just dramatically polluted the output downstream for folks.

[00:04:24] Andrew Zigler: So it really reminds us that this is a really big Ouroboros, right? It's eating its own tail, uh, in terms of the kind of, like, performance we're getting, and it really calls to mind the importance of provenance and understanding the data that goes into your model. Uh, what did you learn from this one, Ben?

[00:04:41] Ben Lloyd Pearson: Yeah. Well, the biggest thing that I'm coming away from this with is just wondering, like, how many other goblins, so to speak, are out there? Like, these things that have leaked into the core training models and created an unknown tendency that impacts everything we do with them. And I think it's a really great representation as [00:05:00] well of intention drift. A minor shift to word structure can sort of cascade into massive downstream differences, um, the more that you iterate on this. Um, you know, we've seen this quite frequently, where we introduce, like, a minor thought into our agent harness, and maybe it implies a little more intention than we intended, excuse the redundancy on that a little bit. And retraining AI models on past iterations is effectively, you know, you said Ouroboros, but it's like a game of agentic telephone as well. Every iteration is slightly different than the one before it, and so the final outcome is sometimes nowhere near what you expected it to be. And this happens at the micro scale as well when you're working with an agent harness. A minor idea could be injected early, and that could morph into some sort of strategic imperative within the agent's, like, mind. So, you know, for [00:06:00] example, consider a line like, "It would be nice to build feature X, but we would need to build a new API integration for that feature." Um, so you would add that to your roadmap along with the API integration. Later, you know, your agents are working on it. They come across the part that says that you need this API integration, but they miss the context that it's only needed for this future feature that we're not building at this moment.
Um, and they could mistake that and think that this API integration is now a critical component of, uh, what they need to do. You know, and this happens at scale when you're dealing with agent orchestrators. That drift is the constant battle that you have to work against, um, whether it's goblins or whether it's, uh, agents that violate your testing policies or do any other number of things.

[00:06:53] Ben Lloyd Pearson: So, um, you know, AI loves over-indexing towards things that seem really important and, uh, yeah, I, I [00:07:00] can understand this goblin channel, uh, very deeply myself.

[00:07:04] Andrew Zigler: Yeah, it's really good advice, like what you said, to go back and revisit your harness and your, your prompts and your skills and make sure you're not over-rotating on things that are polluting downstream context. If you find yourself iterating a lot or making, uh, the same kinds of revisions on outputs that you previously had crystallized as like a skill or some kind of process, I think that's a good reminder to go and revisit it, 'cause there might be goblins lurking in that prompt.

[00:07:29] Ben Lloyd Pearson: All right, Andrew, let's move on to AI deleting databases. Is it the AI that did it? Is it you that did it? Who, who's responsible for that, Andrew?

[00:07:38] Andrew Zigler: Okay. Well, another week and another tragic incident of a production database or some sort of production environment getting totally wiped out or sideswiped by a rogue AI agent, or a bad endpoint, or a combination of the two. And really, this is, once again, your weekly reminder that, uh, [00:08:00] ultimately, uh, the permissions and the scopes and the hooks and the protections around your agent are your responsibility.

[00:08:07] Andrew Zigler: And, uh, understanding the power and the leverage that you can have with your agent, and the responsibility that comes with it, is really crucial when you start interacting with a lot of systems at scale. Like, even going back to the whole goblin problem we talked about a moment ago, it can...

[00:08:25] Andrew Zigler: Imagine this drift now on top of something that is working with your production data, with your customer data, with your product data. It becomes more than just, "Oh, it's annoying that a goblin is popping up in outputs I don't like." It can become catastrophic when you combine that with tool calling. And so, uh, it really kind of highlights the danger of not having those guardrails, um, and reminds us that this is systems-based thinking, and part of that is closing the environment.

[00:08:53] Andrew Zigler: What is the world that my agent has access to? Uh, and understanding that really, really deeply before you start to [00:09:00] put, uh, tools in its hands. It really is just a strong reminder that, like, if you're working in production environments and you're working with agents, to make sure that you have these structured layers and these protections in place, um, to protect them from doing harm to data that might be irrecoverable, um, or causing, uh, you know, downtime or interruptions for your users.
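
Andrew's "closing the environment" idea can be sketched as a simple allowlist that sits between an agent and its tools. This is a minimal illustration, not any vendor's API; every name here (`ToolPolicy`, `guarded_call`, the tool strings) is hypothetical.

```python
from dataclasses import dataclass, field

class ToolCallDenied(Exception):
    """Raised when a tool call falls outside the agent's declared world."""

@dataclass
class ToolPolicy:
    # The entire "world" the agent may touch.
    allowed_tools: set = field(default_factory=set)
    # Destructive actions that additionally require a human sign-off.
    needs_approval: set = field(default_factory=set)

def guarded_call(policy: ToolPolicy, tool: str, approved: bool = False) -> str:
    """Refuse any tool call the policy does not explicitly permit (fail closed)."""
    if tool not in policy.allowed_tools:
        raise ToolCallDenied(f"tool {tool!r} is outside the agent's scope")
    if tool in policy.needs_approval and not approved:
        raise ToolCallDenied(f"tool {tool!r} requires explicit human approval")
    return f"ran {tool}"

policy = ToolPolicy(
    allowed_tools={"read_docs", "run_tests", "drop_table"},
    needs_approval={"drop_table"},  # irrecoverable actions gated by a human
)
```

The point of the sketch is the default: anything not on the list is denied, so a goblin-grade drift in the agent's intentions still can't reach systems you never granted.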

[00:09:21] Ben Lloyd Pearson: You know, Andrew, you and I have spoken to enough people firsthand that flew a bit too close to the sun with their agent orchestrators and have come away with scary stories that, uh, we should all learn from, you know, in addition to all the ones that are getting public attention.

[00:09:38] Andrew Zigler: Mm-hmm.

[00:09:39] Ben Lloyd Pearson: Actually kind of getting to the point now where, like, I'm starting to feel almost like a safety officer, when I hear about coworkers that are like, "I just discovered this new agentic workflow," and I'm like, "Okay, are you being careful about all the, the

[00:09:52] Andrew Zigler: Absolutely.

[00:09:53] Ben Lloyd Pearson: wild?" And so, you know, the technology is just so exciting that you just want to [00:10:00] explore like that, because it's fascinating. But you know, there was a line in this article that really stuck out to me about how we use terms like thinking and reasoning, um, when we're talking about these AI systems.

[00:10:14] Ben Lloyd Pearson: But, um, really, those are just marketing terms that have been put on top of them. The reality is that they're just structured ways to generate tokens that simulate that, like, thinking and reasoning. So we have to constantly remember that AI is a simulation of our ability to think and reason. Um, and I'm really, like, 100% aligned with this author. Like, we need to be responsible for our own agents. Um, you know, and I think it also shows really just the importance of blameless culture. You know, failures happen all the time. Um, the important thing is that you respond to them and make things more robust for the future. And, uh, so yeah, I'm gonna get back on my soapbox about agent permissions for a moment. You know, I kinda do think [00:11:00] that these AI companies, the vendors, are putting some unreasonable expectations on users about how to use them safely. Um, you know, if your AI is willing to go to the end of the world to solve a problem that it thinks it's responsible for solving, it's really hard to prevent it from going rogue. Yes, we need to have safe practices by default, but what was safe in a human-driven world is no longer safe in an AI-driven world, and mistakes are now happening in the blink of an eye and just, like, unraveling. You know, this is where we really still need determinism.

[00:11:31] Ben Lloyd Pearson: Like, determinism plays an even more important role today. Um, we all need to have checks in place to prevent those rogue AI agents. Like, you know, at the very simplest level, we have hard caps on API usage for our agents, in case they decide to go spend a year's worth of budget in an afternoon, you know? Um, but you need to have that same sort of protection at every single stage of your SDLC. Like, you have to have that awareness [00:12:00] and that check in place. You know, and at LinearB, like, we've been focused for a long time specifically on code reviews, um, because they are, you know, one of the most important quality checks, and they're the single most frequent bottleneck in most engineering teams. And, you know, we spend a lot of time just helping teams eliminate hours of toil around code reviews specifically. You know, and the goal is really always just to get developers back to more high-impact work, uh, and to make sure that they can work quickly without, you know, creating additional problems and headaches for their team.
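
The hard spend cap Ben mentions can be sketched as a small budget guard an orchestrator checks before every model call. All names and prices here are made up for illustration; this is not tied to any real billing API.

```python
class BudgetExceeded(Exception):
    """Raised when a call would push spend past the hard cap."""

class SpendCap:
    def __init__(self, max_usd: float):
        self.max_usd = max_usd
        self.spent_usd = 0.0

    def charge(self, tokens: int, usd_per_1k_tokens: float) -> None:
        """Record the cost of a call, or refuse it if the cap would be blown."""
        cost = tokens / 1000 * usd_per_1k_tokens
        if self.spent_usd + cost > self.max_usd:
            # Fail closed: refuse the call rather than let a rogue agent
            # burn a year's worth of budget in an afternoon.
            raise BudgetExceeded(
                f"call would cost ${cost:.2f}, only "
                f"${self.max_usd - self.spent_usd:.2f} left in the cap"
            )
        self.spent_usd += cost

cap = SpendCap(max_usd=50.0)
cap.charge(tokens=200_000, usd_per_1k_tokens=0.01)  # a normal $2.00 call
```

The same shape — a deterministic check in front of a non-deterministic actor — applies at every SDLC stage, not just billing.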

[00:12:32] Ben Lloyd Pearson: So that's a great place to implement a lot of this determinism to protect your organization. Like, you know, CI/CD is kind of old news at this point. Like, it feels like this

[00:12:42] Andrew Zigler: Gasp, a scandalous statement.

[00:12:46] Ben Lloyd Pearson: But it's as relevant as ever, you know? And it's more foundational than ever before.

[00:12:52] Ben Lloyd Pearson: Like, you can't expect human attention on every single detail of the code that's entering your code base at this point. So, you know, these [00:13:00] failures, like, they need to be caught early. They need to be contained. Um, you need to have rollback procedures in place for when it's necessary. Um, and then, use failures to strengthen your automation.

[00:13:12] Ben Lloyd Pearson: And, you know, if the opposite is happening to your organization right now, like failures are cascading, they're becoming more frequent, they're impacting your ability to deliver, um, that's where you need that visibility to understand what's happening so you can fix those problems before going to broader AI adoption.

[00:13:28] Ben Lloyd Pearson: So, you know, my, my advice is focus your tokens on solving the process first, and then you can go and solve that like AI adoption problem,

[00:13:37] Andrew Zigler: Absolutely.

[00:13:38] Ben Lloyd Pearson: All right, Andrew, let's talk about when everyone has AI, the company still learns nothing. Uh, I, I really love this article, but why don't you summarize what was in it for us?

[00:13:49] Andrew Zigler: Okay, so this is a fun article. It really cuts to the heart of a problem that's happening in a lot of organizations. And it talks about how individual AI productivity gains don't [00:14:00] automatically, or even realistically sometimes, translate to organizational benefits. And a lot of companies are now entering this messy middle phase where AI use is certainly widespread, it's undeniably there, but it's uneven, it's hidden, it's disconnected and siloed.

[00:14:17] Andrew Zigler: You have folks on all different parts of the AI adoption journey, and their ability to communicate and collaborate with each other is pretty staggeringly hampered by the differences in their outputs and what they need of each other right now. And really, the organizational work needed to connect these different types of learners, and people of different agentic ability, with their outputs and their organizational goals is so early days that we don't really even have the language as teams to talk about how to share and distribute, um, how change and process management has to evolve, and just, like, the new reality of how fast some folks can work while some other types of roles are stuck in a more traditional throughput.

[00:14:59] Andrew Zigler: So, [00:15:00] uh, because of this, you might be in a familiar story. Especially, like, if you listen to this podcast, you're probably a little more on the advanced side of your agentic journey. And, uh, you've probably, in some cases, encountered, you know, folks that are on another end of that journey, and, uh, communicating and collaborating and getting on a shared baseline can be really hard.

[00:15:24] Andrew Zigler: Uh, so this article outlines the three new capabilities that we all need to be thinking about on, like, a meta and organizational level. And this is about how to, like, orchestrate and, and operate agents as an organizational skill, but also things like understanding the intelligence of a loop, like what makes an AI, uh, workflow effective and scalable, and what are the capabilities of the things inside.

[00:15:46] Andrew Zigler: This kind of, uh, literacy around AI is really critical, and it's applied, right? It can only come from having lived the experience and being able to communicate it on your own company's terms. It also even, like, veers against some of the [00:16:00] tokenmaxxing stuff that we've talked about recently.

[00:16:01] Andrew Zigler: Like

[00:16:01] Ben Lloyd Pearson: Yeah,

[00:16:02] Andrew Zigler: Organizations, when challenged with this problem, like, what they threw at it was what we talked about last week. And that was Goodhart's law taken to the extreme: tokenmaxxing boards that were tracking all of the engineering teams and their token usage. And then when you peeled back the hood, or when these, uh, engineers at these companies revolted and threw out the, you know, the tracking, you really started to peel back the learnings and realize that there were...

[00:16:26] Andrew Zigler: You weren't really understanding anything about your organization's impact or what you were shipping, or how to get this one guy over here who has, like, 1,000 agents running. How do I distribute that to everyone else? How does everyone even understand what's going on over there?

[00:16:41] Andrew Zigler: Like, none of that was elucidating. So really, there's a competitive advantage available for folks here. This is the leadership takeaway, if you're listening to this: how can you shift, on an organizational level, the ability for people of different AI competencies to interact with each other, to share their gains, but then also [00:17:00] just find a new collaborative normal in accepting the fact that not everyone's gonna be some, like, super agentic, 30,000 terminals open,

[00:17:06] Andrew Zigler: you know, you're like in the Minority Report and you got the glove on and you're throwing things across the screen. Like I don't think anyone's expecting that of everybody. But there is a baseline expectation for all of us to be curious and to throw away assumptions of yesterday and to find our new place in this matrix

[00:17:21] Ben Lloyd Pearson: Yeah, even if you could operate with that matrix machine, um, you know, you're not always operating at that level. It really depends heavily on the type of task and work that you are focused on. But I really love articles like this, that just sort of paint a picture of the moment in time that we're all experiencing right now, because, you know, this is a very emergent, uh, just foundational skill.

[00:17:47] Ben Lloyd Pearson: Like, working with AI is not like we've all purchased a new tool and now have to, like, uh, use it or something like that. This is actually, like, skills that we have to almost retrain ourselves on, [00:18:00] how to operate within our day-to-day lives. And the messy middle... I just love that phrasing, 'cause it really does describe what I'm experiencing right now and what I'm seeing. You know, 'cause it does feel like we're kind of transitioning to this super awkward phase of AI adoption. Um, but there was a quote that I really love; it focused on the key thing that many organizations don't understand about measuring AI adoption. Um, and that is that AI collaboration stretches from tight synchronous co-driving to looser asynchronous delegation.

[00:18:35] Ben Lloyd Pearson: So the point there is that too many companies are focused on whether or not people are using AI. Like, they're just looking at, like, how many times did you accept Copilot's suggestion, or, um, did you put AI code into your pull request this week? But you should really be more focused on whether or not your teams know which loop to use and where they need to put resistance into the machine. [00:19:00] Um, but also, like, which artifacts survive the loop, and then how those artifacts become something that the organization learns from. And we're seeing a lot of this on our own engineering team. You know, someone discovers a loop that allows them to consistently achieve, like, high-quality output, um, and sometimes it can be translated into broader organizational improvements.

[00:19:19] Ben Lloyd Pearson: Other times, it gets used for a single project and then basically just scrapped and thrown away, and then we just take the knowledge that we gained from that to the next one. There's just such a wide range of how our experience with AI is unfolding right now. And I do wanna highlight the three capabilities.

[00:19:36] Ben Lloyd Pearson: The top one that was listed, um, to help navigate this messy middle, is agent operations, and specifically things like: which agents are allowed to run? What systems can they touch? What data can they see? What actions require explicit human approval? You know, all the things that we've solved for the human workplace, but not for the agentic [00:20:00] workplace. And then there's just the bigger question of, like, who even owns this? Like, you know, the cloud era led to platform engineering becoming a thing. You know, we've seen that sort of morph into developer experience, and now we're seeing a lot of terms like AI enablement and AI innovation turning up within organizations. There's really no established precedent for how to operate AI within an organization at scale, and anyone who's doing it is basically building that knowledge from scratch. So yeah, and, you know, personally, that's where I love where we're at right now. Like, you know, both with Dev Interrupted and LinearB, we're always working with those people, and part of our core mission is to help that audience really navigate the uncertainties of this.

[00:20:46] Ben Lloyd Pearson: So yeah, go check this article out, 'cause I think everyone here will find a lot to empathize with.

[00:20:53] Andrew Zigler: For sure. And I think it pairs really well with our next article too, which talks about the usage of Claude Code and [00:21:00] whether or not it's making your product better. This analyzes, um, a very similar, uh, reality of the world that we live in right now, this messy middle. And that is this K-shaped productivity curve.

[00:21:13] Andrew Zigler: The idea that senior engineers are showing measurable output gains and throughput by using agents, and they're able to kind of, uh, up-level their abilities in a very, like, net-new way. Meanwhile, on the other end of the curve, trending down, you get experiences where engineers with less experience, uh, or with less domain expertise, are flattening or declining in their productivity.

[00:21:39] Andrew Zigler: And this might be because of, uh, a cyclical nature of iterating on outputs instead of the source of the problems. It could be an AI, uh, literacy problem. It could also be a product understanding problem. Because in, in many cases, the senior engineers are leveraging years and years of experience, sometimes for the very specific products that they're using the AI on, if they've been on that team for a while, [00:22:00] or if they've been in that ecosystem, um, as part of their career.

[00:22:03] Andrew Zigler: So, you know, the distribution of gains from being able to work with the tool is uneven across engineers, as much as it is uneven across the whole organization. And this article is a reminder about how to fight this K-shaped curve within your own organization. It kind of, like, highlights, I think, um, the importance of these more senior engineers that have found these gains finding ways and pathways for those other engineers to learn and to be part of that same experience.

[00:22:34] Andrew Zigler: Uh, because really what this does is starts to unfold your capabilities into this thing we're starting to call like an agentic halo. I think this is gonna be something we talk about more on the show, and you're gonna be hearing more about in general of these engineers, uh, that have managed to unlock huge gains or become that very fabled 100X or 1,000X kind of person that we talk about once you reach all of the, like, the levels of [00:23:00] enablement you can with this tool.

[00:23:01] Andrew Zigler: But instead of then just using it to become, like, a single-user cannon, they distribute that. They find ways to create an ecosystem, supported by their knowledge and earned expertise, that any of those folks on their AI journey can play around in. I think this is the key to establishing that fluency and flattening this K-shaped problem.

[00:23:21] Andrew Zigler: Uh, what did you learn from this one, Ben?

[00:23:23] Ben Lloyd Pearson: Yeah, I'm with you that this really is a great representation of the messy middle. Um, and I don't have a whole lot of comments on it, because I just think it's really important that we highlight, uh, multiple perspectives on this issue. Um, particularly for people who, like, aren't feeling this transformation that they're hearing others talk about, you

[00:23:43] Andrew Zigler: Mm-hmm.

[00:23:44] Ben Lloyd Pearson: learned that there's really, like, three big stages that a lot of companies are at these days. And it seems like you're either, um, super early into the AI adoption phase, like you've just gotten access to it and you're still, like, trying to understand how to apply [00:24:00] it, um, across your teams. There's the people who are deep into the experimentation phase, like they've given their developers the freedom to do a lot of experimentation and try out new things. And, um, I think this is a lot of times where you start to hear people talking about tokenmaxxing. It's, like, who can come up with ingenious ways to apply tokens to a problem. And then the third group is people who are starting to feel that transformation, because they've started to sort of systematize the productivity improvements. Um, the thing that stood out the most to me from this article was that comparison of senior roles versus junior roles, and how, you know, senior roles have been increasing in productivity, like, quite substantially, uh, since the advent of AI, versus junior roles that have been decreasing in overall productivity. Um, you know, and I think it's important to remember, though, that the trend for senior engineers in particular has been ongoing for quite some time, even long before, um, AI became a thing, because the tooling just continues to get better and [00:25:00] better for software engineering.

[00:25:01] Ben Lloyd Pearson: Um, but AI has definitely, or appears to have, had a... You know, and this is something I wanna hear more about: people out there, if you're listening and you're focused on this problem of creating the new pipelines for junior engineers, I would really love to hear that story, because I do think it's something that we need to be thinking about, making sure that the industry as a whole is gonna be built sustainably and that we have a healthy pipeline of new people coming into it. Um, 'cause it does seem like it's quite difficult right now to start fresh in this profession, um, given all of the productivity gains that are going to the people who are more established in their careers.

[00:25:38] Andrew Zigler: Absolutely, I agree

[00:25:40] Ben Lloyd Pearson: All right, Andrew, what are your agents up to right now?

[00:25:44] Andrew Zigler: Oh, boy. Well, my agents have been, uh, working and learning. Um, I've actually-- I mentioned this recently. I've kind of unlocked this really nice new loop where my agents can work across Asana, um, and also across Beads, which I use in the terminal, and really [00:26:00] this has been a really virtuous cycle for me. It's allowed me to really quickly get an understanding of the tasks on my plate, but then also leverage a lot of learning opportunities from the world around me.

[00:26:10] Andrew Zigler: Like, learning a lot faster and in real time is now more possible because when something catches my interest, I can throw it on a task and then delegate it out to an agent to research and fill out the task for me, and then it's something I can come back to later. It can inform something downstream. I can then string these tasks together, and it, it reminds me of like, "Oh, wow, it would've been great to have been doing this the whole time."

[00:26:32] Andrew Zigler: But the, the blocker was always just the amount of throughput and time it took to kind of create those learnings. But I've managed to create a nice little local skill system that understands the things I tend to be curious about and the stuff that I'm working on and goes in and sees like, "What did we learn from here?

[00:26:49] Andrew Zigler: Is there something that we could apply back to even our own harness or our own practice?" And instead of being of like a digest, it's more of an investigation of like, "What do people out there know that we don't know?" And [00:27:00] that's been a really new workflow to have in partnership with my agents. That's what I've been working on recently.

[00:27:06] Andrew Zigler: How about you?

[00:27:07] Ben Lloyd Pearson: Yeah, that's fun. That's fun. Well, I, I was, uh, bre-- I got a chance to skim the latest fragments from Martin Fowler, um, and, uh, there's a lot of, there's a lot of nuggets in there, but in particular, I wanna put my agents on looking into this thing that, that he talked about called Lattice, which is a new agent harness, uh, from Rahul, Rahul Garg. But yeah, I really, I really love this because, well, A, I think we're in sort of this renaissance period of like open source agent harnesses.

[00:27:33] Ben Lloyd Pearson: Like it's sort of like everyone's doing it right now and like releasing a bunch of like really cool stuff. Um, but also I like this one in particular, A, because Lattice just is a, a very logical name, uh, for something like this. Uh, but it uses the metaphor of atoms, molecules, and refiners, uh, which I think is just...

[00:27:51] Ben Lloyd Pearson: Like I j- we, I mean, we both love metaphors, so,

[00:27:53] Andrew Zigler: Well, that's how, that's a lot how Gastown works. Gastown has mol- and that's what the meows are. They're molecular, you know, [00:28:00] things of work.

[00:28:01] Ben Lloyd Pearson: That's right.

[00:28:01] Andrew Zigler: And so it, it kind of, I think, is, uh... And also too, I, I also love Lattice. Lattice is really-- it becomes a, a durable and composable system through which you can make product and architectural decisions.

[00:28:13] Andrew Zigler: You can rapidly iterate. It's an applied version of Beads. It's this idea that you really need this like thin, durable, like, context layer for you and the agent to work on and, and then, uh, Lattice comes in with all of the structures and underpinnings needed for, uh, software engineering. So definitely a really, uh, a, a definitely a really cool one to check out.

[00:28:32] Ben Lloyd Pearson: Yeah.

[00:28:32] Andrew Zigler: I think also too, as, as well, there is definitely a new world of products and open source harnesses that are coming out right now. The renaissance, as you call it, is definitely happening. There's another one like Pi.dev, which is an extremely small and lightweight, unopinionated coding harness that doesn't have any kind of prompting, it doesn't have caching, it doesn't have anything underneath.

[00:28:53] Andrew Zigler: The idea is that you go in and you bundle that on. You figure out exactly what you need. It's really as close to the raw loop, um, that you [00:29:00] can get on an agentic coding tool as possible, and the developer mindshare on it is staggering. It has a lot of attention on GitHub. There's lots of folks who are taking it, forking it, mixing it into things.

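(For readers curious what a "raw loop" means here: the hosts only describe Pi.dev at a high level, so the sketch below is a hedged illustration of a minimal agent loop with no hidden prompting or caching, just a model call, tool dispatch, and repeat. Every name in it is hypothetical and is not Pi.dev's actual API.)

```python
# Hypothetical sketch of a bare agentic coding loop: one raw model call per
# turn, dispatch any requested tools, feed results back, repeat until done.
# No system prompt, no caching, no middleware. Names are illustrative only.

def run_agent(model, tools, task):
    """Loop until the model stops requesting tool calls, then return its text."""
    messages = [{"role": "user", "content": task}]
    while True:
        reply = model(messages)            # one raw model call, nothing hidden
        messages.append(reply)
        if not reply.get("tool_calls"):    # model is done: return final answer
            return reply["content"]
        for call in reply["tool_calls"]:   # dispatch each requested tool
            result = tools[call["name"]](**call["args"])
            messages.append({"role": "tool", "name": call["name"],
                             "content": str(result)})
```

Everything else a fuller harness provides (prompting, retries, context management) would be bundled on top by the user, which matches the "figure out exactly what you need" idea described above.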
[00:29:09] Andrew Zigler: Simultaneously, we're in this world where a whole bunch of really powerful local models have been released on Apache 2 licensing, where folks can take them, lightly fine-tune them on their domain expertise, and bundle them inside of other applications, distribute them to users to download locally for private local-first models, put them in the web to serve a one-of-a-kind platform that allows folks to interact with domain expertise owned entirely by the person who creates, who then distills and trains that model.

[00:29:39] Andrew Zigler: So the idea of like you have the model, you have the harness, whether that's a website or it's a terminal loop, and then you have your domain expertise. And I think you're gonna start to see a lot of folks gluing these three things together and shipping really unique, one-of-a-kind products that are bundled with this highly domain-specific, [00:30:00] uh, model.

[00:30:00] Andrew Zigler: And the harnesses are a really key part of that. So definitely be paying attention to this trend, uh, and check out these projects if you haven't already.

[00:30:07] Ben Lloyd Pearson: Awesome. Well, thanks everyone for joining us again for the Friday Deploy presented by LinearB. All right. Give us a like, thumbs up wherever you're listening to us. Uh, leave us a, a review, comment on whatever platform you're on. We love hearing the engagement from the audience, and we'll see you next week.

[00:30:23] Andrew Zigler: See you next time.
