Virtual pets in your terminal, ads in your pull request, & no more CSS in your browser?

By Andrew Zigler

Are advertisements not-so-secretly infiltrating your code reviews? This week on the Friday Deploy, Andrew and Ben break down the controversy over GitHub Copilot injecting promotional tips into pull requests and unpack the massive Anthropic code leak that exposed Claude Code's hidden features. The hosts also explore Shopify's strategy for cutting AI inference costs by 75x using smaller, self-hosted models. Finally, they discuss the game-changing Pretext rendering library, the cyclical hype of "dead" tech trends, and how agent-wielding "vibe maintainers" are rewriting the rules of open-source software.

Show Notes

Transcript 

(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)

[00:00:00] Ben Lloyd Pearson: Tell me how you feel about ads in pull requests. Andrew, is that

[00:00:03] Andrew Zigler: Uh,

[00:00:03] Ben Lloyd Pearson: here?

[00:00:05] Andrew Zigler: No, I don't want them. It's like ads are coming to AI; the enshittification of AI is here. I remember when I talked about this with Andrew Hamilton of Layer on the show last year, and shout out to Layer, they just got acquired this week. When we talked, we talked about how the enshittification of AI experiences is coming and how ads are gonna be shoved into everything.

[00:00:28] Andrew Zigler: And now they're here in ChatGPT. And it seems like maybe even in our PRs, because on GitHub there was a controversy this week about GitHub Copilot inserting a "tip" into someone's PR message, effectively advertising Raycast in their PR on their own repo. It was a bizarre turn of events.

[00:00:53] Andrew Zigler: Uh, did you see this happen, Ben?

[00:00:55] Ben Lloyd Pearson: It was actually, maybe it was brilliant marketing from Raycast, 'cause I did go look that [00:01:00] company up just to see what it was. Yeah. You know, there's a fine line between advertising and helping yourself with tips. GitHub does kind of seem like they're in a reactionary position right now. I mean, kind of the whole industry is like, can we just replicate what Google did with search? But, you know, I don't know. We're already paying for these LLMs; they shouldn't be sacrificing quality to serve us ads. And I know our friend Martin Woodward, he's out there addressing the community, doing a great job at trying to allay all its concerns. He tried to make it clear: these are not ads, these are tips that snuck into normal PRs through a bug. At least that's what they're saying over at GitHub.

[00:01:45] Andrew Zigler: Yeah.

[00:01:46] Ben Lloyd Pearson: You know, if this actually starts happening, I wanna be able to trust the output of my AI, and if I'm constantly questioning whether or not it's trying to advertise to me, like, that trust gets broken, you know?

[00:01:57] Andrew Zigler: Yeah, indeed. It's also a matter of the [00:02:00] permissions that are used in this particular combination of things, because an LLM providing tips is what we go to them for all the time. Why is this one so controversial? It's because it edited a human-authored post to insert its own information, which raises questions about authorship, and about your own privacy being invaded by the tools as you use them.

[00:02:22] Andrew Zigler: But I will say, you know, GitHub, of course, reversed it and claimed it was a programming logic issue, and that's not a direction they're planning to take it in. But it was a strange glimpse at a world that we might all be stepping into. And, you know, by the way, if you're not an enterprise

[00:02:37] Andrew Zigler: GitHub Copilot user, and you're using GitHub Copilot, you only have until April 24th to explicitly opt out of all of your Copilot data being used for training. Everyone who's not on an enterprise plan is opted in by default, and that's not something that GitHub is stepping away from. So if you're a Copilot user and it's not through an enterprise license at your company, go and uncheck that box, [00:03:00] because it effectively allows Copilot to see all of your data

[00:03:04] Andrew Zigler: when it's in motion on all of your private repos now, which would maybe not be great for some folks.

[00:03:10] Ben Lloyd Pearson: Maybe we're just on a slippery slope to our avatars being used by AI for advertisements. You know, I really hope not. But on that note, welcome to the Friday Deploy. I'm your host, Ben Lloyd Pearson.

[00:03:21] Andrew Zigler: And I'm your host, Andrew Zigler.

[00:03:24] Ben Lloyd Pearson: Yeah. And this week we're covering R.I.P.grep, which tracks AI tech deaths, Anthropic's source code leak, Shopify's 75x AI cost reduction, the Pretext library, which is gonna solve some CSS layout limits, and the rise of vibe maintainers.

[00:03:39] Ben Lloyd Pearson: So let's just start right at the top.

[00:03:42] Andrew Zigler: I really wanna dive into the lessons that can be learned from Shopify this week. There's been this article about how they cut their inference costs by 75x using Qwen 3 to do a certain data extraction step. And this is [00:04:00] something that previously they were using GPT-5 for, so they were paying literally millions to OpenAI to use their foundation model to do this action for them.

[00:04:08] Andrew Zigler: But they switched over to having a self-hosted Qwen 3 model that does this very specific classification and extraction task for them. It doubled the output quality when they were able to control their own small language model, fine-tune it, and run it themselves, and they were able to achieve higher quality through a multi-agent architecture as well.

[00:04:28] Andrew Zigler: Because since inference costs plummeted, they're able to run each inference through multiple agents and get better quality. And re-architecting this involved using libraries and tools like DSPy, a very popular tool for fine-tuning and for adjusting the settings and prompts to get great behavior out of small language models, effectively transferring skills

[00:04:52] Andrew Zigler: out of their larger, more capable cousins from OpenAI and Anthropic. This is a really powerful [00:05:00] machine learning technique that's allowing companies that have huge inference costs to save massive amounts. And it's a huge lesson, I think, for everybody: the conversation around, you know, do you just hit the API and get your tokens, or do we build something in-house, is something that you really should revisit,

[00:05:15] Andrew Zigler: 'cause the economics are rapidly changing.

[00:05:17] Ben Lloyd Pearson: Yeah. You know, Andrew, I know you like to make the point frequently that the costs of the AI we use are likely to increase over time, 'cause right now a lot of these companies are losing money on it. But I actually think that there are still so many levels of efficiency gains that we have yet to build into these agentic orchestration systems, and I think that's really what's gonna enable us to continue to scale usage up without causing the cost to increase at the same rate. A lot of systems right now default to the most expensive model you can justify, because you want to maximize the [00:06:00] chances that it's successful at what you're doing, and just get the highest quality out of it that you can. But as agent orchestration becomes more standard, there really is an opportunity for us to decide at the task level which model is the most efficient for that task. Sometimes you need something like Claude Opus to solve these massive problems. Other times you just need a mini model. And in the case of this example, you can run it on your own hardware too, which substantially decreases the cost of running it. And I know this type of innovation is not the most exciting, but I do think it is incredibly important for AI's future, because efficiency is really going to be a way for us to scale AI orchestration.
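A task-level router like the one Ben describes can start as something as simple as a lookup table. This is a sketch under the assumption that each task is tagged with a kind; the model names are illustrative only.

    // Sketch of task-level model routing: send cheap, well-bounded tasks to a
    // self-hosted model and reserve the expensive frontier model for hard ones.
    type TaskKind = "classify" | "extract" | "summarize" | "architect";

    const MODEL_FOR_TASK: Record<TaskKind, string> = {
      classify: "qwen3-8b-local",   // near-zero marginal cost on your own hardware
      extract: "qwen3-8b-local",
      summarize: "qwen3-8b-local",
      architect: "claude-opus",     // the "massive problems" tier
    };

    function chooseModel(task: TaskKind): string {
      return MODEL_FOR_TASK[task];
    }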

[00:06:48] Andrew Zigler: Well, you know, speak for yourself, Ben. I think it's pretty exciting, the idea of getting this incredible performance out of a small model, running these training operations, and actually watching your benchmark performance [00:07:00] climb on something you've put together from your own data set, your own methodology.

[00:07:03] Andrew Zigler: It's a fascinating experiment, and it's all about putting skills into these models. So I personally find it deeply interesting.

[00:07:12] Ben Lloyd Pearson: Yeah. Yeah, that is true. You know,

[00:07:13] Andrew Zigler: Okay.

[00:07:13] Ben Lloyd Pearson: You know, I hit usage windows pretty quickly sometimes when I'm using Claude. If I can solve more problems without hitting those windows, that's a huge win.

[00:07:24] Andrew Zigler: For sure.

[00:07:25] Ben Lloyd Pearson: All right, let's move on to Software's Next Epoch, or R.I.P.grep. It's a new, awesome little tool that comes out of some folks over at Theory Ventures. It's a platform that tracks when technologies are declared dead by the tech community. Specifically, they're focusing on AI in this article. So they catalog the so-called deaths, with air quotes on it, of AI technologies like RAG or prompt engineering, or even SaaS itself. But really what they're trying to do is highlight the rapid pace of change in AI tooling, [00:08:00] where tools go from hot to dead multiple times, over and over again. And yeah, this really is just kind of a satirical take on the entire AI industry and all the naysayers and the people who think that AI is just wiping out large swaths of technology. So yeah, I think it's just a good callout on the hype cycle: a dead technology like RAG may come back, and then die again, and then come back again.

[00:08:29] Ben Lloyd Pearson: So, yeah. What'd you think about this article, Andrew?

[00:08:31] Andrew Zigler: This article cracked me up. It's from friend of the show Brian Bischoff, the head of AI over at Theory Ventures, who is an amazing guy and an expert on all these things, and he's been talking for as long as I've known him about how these technologies are evolving and changing, and really, effectively, how the conversation around them is changing at a public discourse level.

[00:08:55] Andrew Zigler: So this is his tongue-in-cheek take on how people like you and I, Ben, are [00:09:00] always on here being like, MCP is dead, MCP is alive, MCP has certain uses, and we're all acknowledging that it's very much a whiplash experience of whether or not a technology is useful. I think that this is

[00:09:13] Ben Lloyd Pearson: I mean,

[00:09:14] Andrew Zigler: a fun take.

[00:09:15] Ben Lloyd Pearson: In particular, I feel like you and I have this discussion almost every week. Like, is it dead now, finally? And then

[00:09:20] Andrew Zigler: Is it dead? Is it back? Is it dead? No, it's not. And I just think that it's a really great opportunity to reflect on how the fundamentals of how we operate in our world change so quickly. Assumptions that we had yesterday just evolve, and you have to forget them.

[00:09:42] Andrew Zigler: And so, obviously, a great April Fools' article from our friend Brian. If you haven't checked it out, I also have to give a shout out to the title. It's extremely clever, because it's R.I.P.grep, which is itself a play on ripgrep, which, if you've been following the show, and if you've also been using [00:10:00] AI coding tools, you'll know that ripgrep, for text search, is a fundamental way for

[00:10:05] Andrew Zigler: all of these agents to work at a really cheap level, and it's also something that you probably can't vibe-code and replace in a day. So talk about software that won't die anytime soon: it's ripgrep. A really great, tongue-in-cheek article here from Brian. Be sure to check it out.
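To make the ripgrep point concrete: a coding agent's search tool is often little more than a subprocess call to `rg`. A minimal sketch, assuming `rg` is installed and on the PATH:

    // Sketch of the pattern coding agents rely on: shell out to ripgrep for
    // fast, cheap, structured search over a whole repo.
    import { execFileSync } from "node:child_process";

    function searchRepo(pattern: string, repoPath: string): string[] {
      // `rg --json` emits newline-delimited JSON events (begin/match/end/summary).
      // Note: rg exits non-zero when nothing matches, which execFileSync
      // surfaces as a thrown error; a real harness would catch that case.
      const out = execFileSync("rg", ["--json", pattern, repoPath], {
        encoding: "utf8",
      });
      return out
        .split("\n")
        .filter(Boolean)
        .map((line) => JSON.parse(line))
        .filter((event) => event.type === "match")
        .map((event) => `${event.data.path.text}:${event.data.line_number}`);
    }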

[00:10:21] Ben Lloyd Pearson: Yeah, I think you really highlighted a key point, and that's the rate of change and how it impacts our perception of this. You know, one moment we think it's crucial for us all to enable our agents to work with a new tool or something like that. Like, MCP comes out or RAG comes out, and it's like, oh, now we all have to do it with our agents. And then the next moment, because innovation is so rapid, we find ourselves questioning what we just built. Does it even make sense anymore? I mean, we've even found ourselves questioning things like Git. Does Git even make sense in this world?

[00:10:53] Andrew Zigler: Or GitHub at least.

[00:10:55] Ben Lloyd Pearson: But, you know, everyone's sort of quick to declare that [00:11:00] tech is dead when something new comes out, when in reality it takes months or even years for technologies to truly die off, just because of culture and process. But I will say, the one thing on this list that I think could actually be on its way out, maybe permanently, is vibe coding. You

[00:11:19] Andrew Zigler: Hmm.

[00:11:19] Ben Lloyd Pearson: think about how models have gotten substantially better over the last few months. I really don't think you code with vibes anymore. It's actually a very high-intention coding practice that you engage with.

[00:11:33] Ben Lloyd Pearson: So, you know, maybe it'll live on as a hobbyist or artistic expression, where you sort of vibe these creations into existence, but beyond that, maybe it is sort of going out as a space. I don't know. What do you think, Andrew? Is vibe coding gonna die?

[00:11:48] Andrew Zigler: The term has definitely evolved. You make a really interesting point, because I've actually just taken the word vibe to embody the experience of working with agents to build something quickly and to [00:12:00] iterate. And even, like you said, I'm not really doing it in a vibe coding way as maybe I would've a year ago, with less of the tooling or skills that are available now.

[00:12:10] Andrew Zigler: I probably would have just been shooting from the hip in the sidebar in Cursor, but the playing field has fundamentally changed now. And so when I'm approaching this task, I might call it vibe coding in my head, but I'm doing agentic orchestration. I think orchestration just isn't quite as sexy.

[00:12:28] Andrew Zigler: It's not as fun.

[00:12:28] Ben Lloyd Pearson: Yeah.

[00:12:29] Andrew Zigler: And so I think vibe is here to stay, but what vibe really means? Yeah, that's definitely changing.

[00:12:36] Ben Lloyd Pearson: Vibe orchestration. I think that's what we need next. That's gonna

[00:12:40] Andrew Zigler: Hmm.

[00:12:40] Ben Lloyd Pearson: finish off vibe coding. All right, let's move on to our next story. And this is about a new library called Pretext, which does what CSS can't, and that is measure text before the DOM even exists. So Pretext is a way of rendering text in the browser.

[00:12:54] Ben Lloyd Pearson: From what I understand, it's also a way of helping people that build agents that have to [00:13:00] interact with web browsers. So it's a library that is designed to enable AI coding tools to do things like verify layouts and check buttons that are on the website, and it just makes it really easy for the web to be directly incorporated into AI-assisted development workflows. I'll admit, Andrew, I skimmed this, because this morning I was not in the mindset to go deep on a technical subject like this. So I'm hoping you can help educate me on the importance of this, 'cause yeah, I'm having a little hard time here.

[00:13:30] Andrew Zigler: Okay, Pretext. What does it mean? You may have seen this demo in the last week of a dragon flying through text on a webpage, the words running out of the way, and doing so at an incredible rendering speed and pace. That demo was a really flashy example of the kinds of text rendering you can do with this new library called Pretext.

[00:13:51] Andrew Zigler: Pretext is from Cheng Lou. He's an engineer at Midjourney, also previously of the React team. So this is a guy who [00:14:00] really knows his stuff when it comes to the DOM and rendering things in the browser. And what this does is it throws away the assumptions of how we've always been rendering things on the web, which is by drawing rectangles, figuring out bounded squares, and then figuring out how much text you can render within those squares.

[00:14:18] Andrew Zigler: And if anyone's been on a website in the last decade, if you try to adjust the window size and move things around, the thrash is bad, because each of those actions resizes the big rectangle, which resizes everything inside. This has been the shackles of building in the DOM for people on the front end, really since the front end took off.

[00:14:39] Andrew Zigler: And what Pretext does is it allows you to render text without using CSS, so you can measure and display the text in a browser without having to use fundamental CSS mechanics to render it. And what this means is that it's written in pure JavaScript. It's using a one-time calculation instead of a multi-[00:15:00]turn DOM thrash.

[00:15:01] Andrew Zigler: And an interesting side effect of this is that it allows an agent to headlessly render any website that uses Pretext, because it doesn't need a browser to draw the DOM; it can actually use Pretext to know where everything on the page is. Which is a fascinating side gain from this, but is not why Cheng set out to create this tool. And in fact,

[00:15:25] Andrew Zigler: aside from all of the really cool demos and what it might mean for web development, it effectively invites us to throw away all of our assumptions about the DOM, which is what React is built on, to figure out how we can render things in a much more performant way. So, for the web developers in your life, this is a sea change event.

[00:15:45] Andrew Zigler: They've probably already seen this demo or have tried it out themselves. I also just really wanna call attention to something fascinating in here. If you go into the developer's repo, there's a thoughts.md. It's no [00:16:00] surprise, of course, that he used agents to create this tool, and it's designed with agents in mind, so they can also use it when building websites.

[00:16:08] Andrew Zigler: But there is a fascinating kind of manifesto tucked away in here, and a line in there is: the cost of any verifiable software will trend towards zero. So somewhere in the nucleus of this whole project, in one line in the thoughts.md, is this steering thought from Cheng about where he thought

[00:16:29] Andrew Zigler: this kind of library would take web development, and he was using that to explore and prove his thesis in the rest of the project. So it really shows that this kind of technology comes from an intersection of taste and skill. No agent could have possibly replicated that manifesto, and I promise it was probably fundamental to organizing the agents around building it.

[00:16:50] Andrew Zigler: So there are lots of fascinating takeaways, but besides all the demos, definitely be sure to check out the code itself.
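Pretext's own API isn't reproduced here, but the core idea Andrew describes, measuring text from font data as a one-time calculation instead of asking the DOM to lay it out, can be sketched with standard canvas text metrics:

    // Not Pretext's API, just the underlying idea: text can be measured from
    // font metrics alone, without creating DOM nodes or triggering layout.
    const canvas = new OffscreenCanvas(1, 1);
    const ctx = canvas.getContext("2d")!;

    function measure(text: string, font: string): number {
      ctx.font = font; // e.g. "16px sans-serif"
      return ctx.measureText(text).width; // pure math over font data, no reflow
    }

    // Line breaking then becomes arithmetic instead of DOM thrash:
    function wrap(text: string, font: string, maxWidth: number): string[] {
      const lines: string[] = [];
      let line = "";
      for (const word of text.split(/\s+/)) {
        const candidate = line ? `${line} ${word}` : word;
        if (measure(candidate, font) <= maxWidth) {
          line = candidate;
        } else {
          if (line) lines.push(line);
          line = word;
        }
      }
      if (line) lines.push(line);
      return lines;
    }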

[00:16:57] Ben Lloyd Pearson: Yeah, absolutely. Really cool project.

[00:16:59] Andrew Zigler: [00:17:00] Mm-hmm.

[00:17:01] Ben Lloyd Pearson: All right, let's move on to this Anthropic code leak. So from the looks of it, Anthropic may have accidentally leaked Claude Code's entire source material, over half a million lines of code.

[00:17:13] Andrew Zigler: Oh, no.

[00:17:15] Ben Lloyd Pearson: Yeah. So this kind of feels like it might be the gift that keeps on giving for a while,

[00:17:20] Ben Lloyd Pearson: 'cause I feel like this is something that's gonna take time for everyone watching it to unpack all that's happened here. But in a nutshell, Anthropic has accidentally leaked over half a million lines of Claude Code's TypeScript source through a misconfigured npm build, apparently revealing their production AI agent architecture and things like more than 40 unreleased features, technical limitations that haven't been publicly announced before, as well as some insider information on how often Claude fails at things like tool calls. So, you [00:18:00] know, some notable things about this code base: zero tests, not a single one. There is a single function, from my understanding, that was over 3,000 lines of code. And it seems to expose quite a bit of Anthropic's future roadmap. You know, they have things called, like, Kairos, I think I'm saying that right,

[00:18:20] Andrew Zigler: Yep.

[00:18:20] Ben Lloyd Pearson: an always-on daemon that's running in the background. They have something I think they're calling coordinator mode for agent orchestration. And, you know, despite DMCA takedown requests and those types of things, Anthropic can have their source code taken down, but you can just use Claude to rewrite the entire thing into something like Python and then make that source code available, and that isn't Anthropic's intellectual property.

[00:18:48] Ben Lloyd Pearson: So they can't issue takedown requests for those. I mean, this is just a wild story on so many levels, Andrew. And I don't know, where should we start unpacking this? 'Cause I [00:19:00] feel like I ignored this for a minute, and then when I started reading I was like, oh man, there is a lot here to understand.

[00:19:05] Andrew Zigler: I also ignored it at first. Leaks are really common. And one thing I will call out is that, despite there being a leak, it's not like there's suddenly gonna be another Anthropic overnight. The code is just one part of it. The execution, the planning, the people behind it are another. And it's kind of like

[00:19:24] Andrew Zigler: the models themselves. It goes back to how open source models require three different things to be truly open source, right? You need all of the parts. And so, obviously, it's not great for your whole TypeScript code base to be exposed like this. I will say, half a million lines is

[00:19:42] Andrew Zigler: not that much, and 44 unreleased features is about a month's roadmap for Anthropic. So you're not talking about a huge leak of super futuristic things.

[00:19:56] Ben Lloyd Pearson: If you wanna steal their ideas, you gotta move real fast.

[00:19:58] Andrew Zigler: You gotta move [00:20:00] really fast. And in fact, a lot of these things are already in the product, as a tongue-in-cheek embrace, I think, of some of this stuff getting out there. And zero tests. I'm still wrapping my head around what to take away from that one. I mean, the whole thing being one giant function I guess kind of makes sense, because it's just one big loop you run over and over. But no tests? I think that's really fascinating. I'm still trying to unpack for myself what I should be learning from that.

[00:20:26] Ben Lloyd Pearson: So,

[00:20:26] Andrew Zigler: And yeah.

[00:20:27] Ben Lloyd Pearson: Do you think this is bad for Anthropic? Like, I was kind of thinking at first that it probably is, but I wonder if you share that sentiment.

[00:20:34] Andrew Zigler: You know, I really don't think it is, and I think it's actually more damaging to something like OpenAI, because there's a real contrast in these two companies and how they've been building and innovating so far. You know, like I just said, Anthropic shipped 50 features in the last 30 days.

[00:20:50] Andrew Zigler: They're,

[00:20:51] Ben Lloyd Pearson: Yeah.

[00:20:51] Andrew Zigler: they're effectively building the enterprise OpenClaw, right? With all of these sticky background agent elements to make it into a [00:21:00] truly agentic platform engineering tool. Like, I'm somebody who moved to my own VPS, and I do all of my work there, but there are so many things that have been added to Claude just in the last month where I'm like, wow, I could effectively replicate so much of what I'm doing there

[00:21:14] Andrew Zigler: over here now, back in Claude. And also, I will say, these leaked features are not super-proprietary top secrets. They're based on existing research. Like the idea behind dream mode, where the agent keeps working: this is basically an extension of the auto-command safety check that we talked about last week, which effectively watches your session and knows if the next command call is safe or not.

[00:21:41] Andrew Zigler: This is basically extending that to, what would your next prompt be if you weren't there? This kind of innovation is super important for Anthropic, which is desperately trying to spread out its inference across the day so it's not all spiked when everybody's at their desk for their job. But all of these kinds of things have research behind [00:22:00] them. And the reason I think it's damaging to OpenAI is because OpenAI, by contrast, kind of gets everything by acquiring. They'd rather buy than invent in some cases. And this creates a patchwork domain where their expertise is in all these different silos, and it's not one cohesive

[00:22:17] Andrew Zigler: flywheel. Whereas you look at how Anthropic works between the foundation model, the research layer, and then the coding agent on top: it's a much more virtuous-looking cycle from the outside looking in, and it's much more cohesive, because really anything that they add to Claude or Anthropic, I want to use as a Claude Code user.

[00:22:38] Andrew Zigler: But in the OpenAI world, sometimes they're adding stuff for their general consumer base, because OpenAI has a lot of pots on the stove, like we talked about last week. They're also trying to be super popular with everybody who's not coding. So it kind of goes back to: who's gonna really pull ahead here?
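The leaked safety-check implementation isn't public beyond the reporting above, but the shape of the idea, a gate that decides whether an agent's next shell command can auto-run, might look something like this hypothetical sketch. The patterns are illustrative only.

    // Hypothetical sketch of an "is the next command safe?" gate like the one
    // described above; this is not the leaked implementation, just the idea.
    const DENY = [/\brm\s+-rf\b/, /\bgit\s+push\s+--force\b/, /curl\b.*\|\s*sh\b/];
    const ALLOW = [/^rg\b/, /^ls\b/, /^cat\b/, /^git\s+(status|diff|log)\b/];

    type Verdict = "auto-run" | "ask-user" | "block";

    function classifyCommand(cmd: string): Verdict {
      if (DENY.some((re) => re.test(cmd))) return "block";    // known-destructive patterns
      if (ALLOW.some((re) => re.test(cmd))) return "auto-run"; // known-safe, read-only patterns
      return "ask-user"; // anything unrecognized goes back to the human
    }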

[00:22:56] Ben Lloyd Pearson: Yeah. Both companies experiment a lot, it seems like, but they experiment in two [00:23:00] totally different ways. Anthropic experiments by pushing stuff into this central, cohesive experience that's shared across all of their tooling, whereas OpenAI has been doing more experimentation of the "here's the new product within our platform or suite of products" kind. When I want to go to Claude, I just go to Claude for whatever it is, whether it's writing code or for asking

[00:23:25] Andrew Zigler: Right.

[00:23:25] Ben Lloyd Pearson: or doing co-work or, you know, whatever it happens to be. But with OpenAI, that experience is a little more scattered across different tooling, it feels like.

[00:23:34] Andrew Zigler: Yeah, exactly.

[00:23:35] Ben Lloyd Pearson: Yeah. You know, I think the thing I'm taking away is, well, A, it just confirms that Anthropic has a really new way of working, just fundamentally, which as a frequent user I've kind of suspected is there. But now we just have confirmation that it really is just AI agents that are looking at everything and trying to build as quickly as possible.

[00:23:58] Andrew Zigler: Right. It was no secret [00:24:00] to anybody that it was built by agents, so.

[00:24:02] Ben Lloyd Pearson: But then, B, I think it really just highlights again how copyright is just not keeping up with the state of the industry. You know, it's really trivial now to circumvent intellectual property protections on leaked code and just reformat it into a version that the copyright doesn't apply to anymore.

[00:24:21] Ben Lloyd Pearson: So, yeah, I think we're gonna keep learning a lot. But Andrew, I think I heard you got a new friend out of this. Is that right?

[00:24:29] Andrew Zigler: Well, yeah, because one of the things that got leaked, guys, is buddy mode. If you go into Claude Code and you're updated in the terminal, you can do /buddy. It's a new slash command, and it'll hatch you a little pet. I really don't know what I'm supposed to do with it, but it has stats. I don't know what the stats are for.

[00:24:49] Andrew Zigler: It has cute little ASCII art. And actually, let me pull him up, because you get a rarity level, and mine's a legendary robot, and his name is Trixel. [00:25:00] And I would die for Trixel, I'm just gonna say that now. After I got this thing yesterday, when I saw a LinkedIn post about somebody hatching their buddy, I went and hatched mine too.

[00:25:08] Andrew Zigler: So if you haven't hatched your Anthropic Claude Code buddy out of all this, what are you doing? Open the terminal, y'all. And please share your buddy, because Trixel needs friends.

[00:25:19] Ben Lloyd Pearson: Have you tried asking it what you can do with it?

[00:25:21] Andrew Zigler: You know, I have just been too in awe looking at Trixel's card, 'cause you get, like, an ASCII card of it.

[00:25:30] Andrew Zigler: I don't even want to scroll. I don't want anything to scroll. I'm just kind of in awe. So if you've kept going with your buddy and you had a weird or interesting experience with it, I would love to know what I can do with Trixel.

[00:25:42] Ben Lloyd Pearson: All right, our last article today is on vibe-maintaining open source projects. So, another great one from Steve Yegge, who we've been following for a while now. This article describes what it's like to manage an open source project in the agentic era. We've [00:26:00] covered him here as the creator of Gastown and of Beads, these sort of viral agentic coding projects that have attracted quite the following, it seems like, of people who are contributing to them, using them, and participating in the online discussions. And Yegge describes how he's managing about 50 pull requests per day, seven days a week, with about a 15-hour cycle on either accepting, rejecting, or requesting revisions on these PRs. And as part of it, he gives a lot of advice on what he thinks is the answer for all these open source communities out there that are struggling with an onslaught of AI contributions. He's starting to outline what he thinks it will take for these communities to accept AI-generated contributions from agentic coders in a sustainable and healthy way. So he does things like maintain strict architecture and testing requirements, unlike [00:27:00] Anthropic, from the sounds of it, and he uses AI to bring contributions up to the project's standards rather than asking contributors to make the changes themselves. It kind of flips the typical open source narrative on its head, because typically the maintainer will find the issues and then go back to the contributor and say, this is what I need you to do for me. But instead, people like Yegge are moving into a model where it becomes: here's what my agent did for you; as long as we're all okay with what it did, let's merge this into the code base. So, a pretty interesting take on all of this. Andrew, I'd like to hear your opinions. What do you think about it?
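To make the maintainer-side loop concrete: a hypothetical sketch of the triage flow Ben describes, using the GitHub API via Octokit. The `reviewWithAgent` function is a stand-in for whatever local agent harness a maintainer runs; nothing here is Yegge's actual tooling.

    // Hypothetical sketch of agent-assisted PR triage: list open PRs and run
    // each through a local agent pass instead of bouncing change requests
    // back to contributors.
    import { Octokit } from "@octokit/rest";

    const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

    // Stand-in for the maintainer's local agent harness (not a real API).
    async function reviewWithAgent(
      title: string,
      diffUrl: string
    ): Promise<"merge" | "revise" | "reject"> {
      return "revise"; // placeholder verdict
    }

    async function triage(owner: string, repo: string): Promise<void> {
      const { data: prs } = await octokit.rest.pulls.list({ owner, repo, state: "open" });
      for (const pr of prs) {
        const verdict = await reviewWithAgent(pr.title, pr.diff_url);
        // Surface the agent's verdict on the PR for the human to confirm.
        await octokit.rest.issues.createComment({
          owner,
          repo,
          issue_number: pr.number,
          body: `Agent triage verdict: ${verdict}`,
        });
      }
    }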

[00:27:38] Andrew Zigler: I think it's a really smart way to steer open source using what the new model for labor is. Before, the maintainers simply didn't have enough time and throughput to see every bug and fix every issue, and that's why contributors needed to be there to pick up PRs and be part of that action.

[00:27:59] Andrew Zigler: But [00:28:00] now everything is really flipped, because there's already a question of: do you fork that open source library? Do you clone it? Do you use something that's really mature and has a big community, or do you just write your own? In some cases it obviously makes way more sense to go with

[00:28:18] Andrew Zigler: the more mature library that's been around or has a community, but in some cases it simply doesn't. And so it also changes the stakes on what open source tools you're using and what kind of conversations you're a part of. So Yegge is smartly calling out that the open source libraries that will exist tomorrow are the ones that provide

[00:28:36] Andrew Zigler: huge amounts of value to their users, such that their users don't want to replace them. They wanna keep using them. And instead it becomes a consensus-driven development model, where you and your agents have all of the context needed for executing all of this in your head, and you can spin out the labor on demand

[00:28:53] Andrew Zigler: that you used to have to politely ask for in GitHub pull requests. Instead, you and your [00:29:00] agents steer faster and further than you were ever able to before in an open source environment, and you check in with your community along the way. That's how you build a community

[00:29:10] Andrew Zigler: now: you invite and create the space where everyone is using your tool to build, and they all feel like they're having a shared conversation about what the tool means to them and what it should do, but you are ultimately still driving everything, because the labor is something you can scale up. And it also speaks to the skill set you need. You have to be really, really broad in your skills. Steve Yegge is a uniquely skilled person in that he's at an intersection of a lot of expertise in engineering, a lot of knowledge in managing and running engineering teams, and, having been around in tech for a long time, he knows what works and what doesn't.

[00:29:48] Andrew Zigler: So we're getting a really great glimpse into how someone with all of his [00:30:00] built-up knowledge is pivoting, and I think his way of flipping the mental model around is the key.

[00:30:00] Ben Lloyd Pearson: Yeah. And one of the things he calls out in this article is the unsustainable approach that some maintainers are taking of just refusing AI-generated code outright, or potentially requiring that the person who contributes it understands every line that was generated by AI. Those are okay as stopgaps, I think.

[00:30:20] Ben Lloyd Pearson: Like, if you're a maintainer who's in sort of an emergency situation, where you're overwhelmed in the short term with these AI-generated requests, that can be a reasonable reaction, to sort of just stop the issue in the short term. But long term, I do think it creates a risk that you're gonna fracture your community and lose the value that you get out of having this open source project. Because if you won't accept AI into your version of the open source project, it's never been easier for someone to go fork that project and then start their own AI version of that community,

[00:30:54] Andrew Zigler: Mm-hmm.

[00:30:55] Ben Lloyd Pearson: or even to just use their AI to take it to their own private [00:31:00] repo and build all the improvements without having to deal with trying to get them accepted into the upstream community. You know, it's gonna be a tough journey, I think, for open source maintainers to get a wrangle on this. But I do think it's necessary, and it's really great to see content that's starting to think about that problem and articulate how we can resolve it.

[00:31:21] Andrew Zigler: Well said.

[00:31:23] Ben Lloyd Pearson: So Andrew, what are your agents up to right now?

[00:31:26] Andrew Zigler: Oh, well, my agents are actually cleaning up Asana. That's what they were doing this morning. I set them up with an Asana board, and I've been trying this new thing where I have an Asana board that I work on with them. Because up until now, I've used Beads, a library by Steve Yegge. I actually use the one by Jeffrey Emanuel, the Beads Rust version.

[00:31:44] Andrew Zigler: But,

[00:31:45] Ben Lloyd Pearson: Yeah,

[00:31:45] Andrew Zigler: But it's all kind of very similar underneath. And these beads are really fast and ephemeral and allow me to work super quick. But they're not great for visibility, right? My team, like we're talking about here, maybe has a harder time seeing on a day-to-day [00:32:00] basis what I'm doing with my agents.

[00:32:01] Andrew Zigler: So, just like you're asking here, what are they up to? Wouldn't it be cool if that could get represented somewhere like an Asana board, where we already collaborate? So that's something that I've been tinkering with, and

[00:32:11] Ben Lloyd Pearson: Nice.

[00:32:12] Andrew Zigler: I just think that messing with how to share the context is always pretty fascinating.
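A sketch of what that Asana mirroring can look like, assuming a personal access token and a target project GID in environment variables (this is a guess at the setup, not Andrew's actual script):

    // Sketch of mirroring an agent's work item into Asana for team visibility.
    async function mirrorToAsana(name: string, notes: string): Promise<string> {
      const res = await fetch("https://app.asana.com/api/1.0/tasks", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${process.env.ASANA_TOKEN}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          // Asana wraps request payloads in a top-level `data` object.
          data: { name, notes, projects: [process.env.ASANA_PROJECT_GID] },
        }),
      });
      if (!res.ok) throw new Error(`Asana API error: ${res.status}`);
      const body = (await res.json()) as { data: { gid: string } };
      return body.data.gid; // the new task's GID
    }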

[00:32:17] Andrew Zigler: What about yours, Ben? What are they doing?

[00:32:20] Ben Lloyd Pearson: Uh, insight compression. We have insights

[00:32:21] Andrew Zigler: Hmm.

[00:32:22] Ben Lloyd Pearson: scattered across a bunch of different places, and I want to bring that data together and start making sense of it in, you know, sort of packageable components. So,

[00:32:33] Andrew Zigler: Nice.

[00:32:34] Ben Lloyd Pearson: it's for internal use cases, but man, this is such a critical thing, just getting your data formatted in a way that AI can easily leverage it. But it's powerful too. Like, I learn a lot as I'm doing it, 'cause I'm compressing insights for myself, but then I'm also generating all these artifacts that are just amazing fodder for my AI.

[00:32:53] Andrew Zigler: You're learning at the speed of tokens. I love it.

[00:32:56] Ben Lloyd Pearson: So, yeah.

[00:32:56] Andrew Zigler: And, you know, also, right now I'm prepping for next week, [00:33:00] because Dev Interrupted will be at HumanX just this next week. We'll be on site, and so if you see me there and you're a listener, definitely please come up and say hello. I'm also gonna be moderating a panel, and we'll be at lots of different events before and after the actual, um, stuff.

[00:33:18] Andrew Zigler: So, just a little shout out that Dev Interrupted is on the ground at HumanX. We'll be collecting stories from there as well, because there are a lot of really amazing names in AI that are gonna be on site. Really excited to bring some of those conversations back here with y'all.

[00:33:35] Andrew Zigler: But something I've had to think about too, Ben, is: what am I gonna have my agents doing while I'm on site at HumanX? This was something that I had to plan out and do when I went to re:Invent. That was, woo, right at the end of last year, when this agentic orchestration stuff was just taking off.

[00:33:51] Andrew Zigler: And I remember I was walking around the expo hall seeing booths I liked, and I would send information about them to my agents, and they would experiment. And then when I got back to [00:34:00] my room that evening, I had a big report on what all of the tech that I saw in the expo hall did, and how it might relate to stuff I'm working on.

[00:34:08] Andrew Zigler: And so I think, for engineers and for agentically enabled people who go to these onsite events, it's really great to swap tips, but also just ask them: while we're standing right here talking, what are your agents doing? You're gonna learn a lot.

[00:34:23] Ben Lloyd Pearson: Yeah, make sure you got your background agents running while you're off doing other stuff too, so.

[00:34:27] Andrew Zigler: Indeed.

[00:34:29] Ben Lloyd Pearson: All right. Well, thanks everyone for joining us this week. That's the Friday Deploy from LinearB. We'll see you next week.

[00:34:34] Andrew Zigler: See you next time.
