Are you at the top of your company's tokenmaxxing leaderboard yet? This week on the Friday Deploy, Andrew and Ben explore the controversial trend of "tokenmaxxing" sweeping through tech giants like Meta and Disney, as well as GitHub Copilot's shift to usage-based pricing that signals the end of the cheap AI era. The hosts also break down a terrifying incident where a rogue AI agent wiped out a production database and examine a new "vegan" language model trained exclusively on pre-1931 historical data. Finally, they react to a study revealing that 35% of all new websites are now AI-generated and close out the show with the drunk musings of a senior engineer.
Show Notes
- An AI Agent Just Destroyed Our Production Data. It Confessed in Writing.
- Tokenmaxxing Is The Dumbest Metric In Tech Right Now
- Disney staffers have an 'AI Adoption Dashboard.' One Claude user invoked the chatbot 460,000 times in 9 days.
- Introducing talkie: a 13B vintage language model from 1930
- Study Finds A Third of New Websites are AI-Generated
- Flipbook is an infinite visual browser generated entirely on demand in real time.
- Drunk Post: Things I’ve Learned as a Senior Engineer
Transcript
(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)
[00:00:00] Ben Lloyd Pearson: Well, Andrew, last week I asked you if you thought the era of subsidized, cheap AI was over, and I think we agreed that it probably was, or at least coming to an end. Uh, well, now we know what GitHub's new pricing looks like for Copilot. What do you think? Were we right?
[00:00:15] Andrew Zigler: I definitely think we're right that it's gonna be changing. I think we're in this really bumpy stage right now where things are gonna get stretched out and pricing between providers and tools is probably gonna get really awkward as everyone fights over their slice of the inference pie.
[00:00:31] Ben Lloyd Pearson: Yeah. And, you know, I think we're gonna have to start asking ourselves, what happens if your AI workflow suddenly costs five times, ten times what it did just a few days ago, you know, or a week ago or whatever. It's pretty striking to see just how dramatically that's shifted.
[00:00:47] Ben Lloyd Pearson: And I know we've been thinking a lot about where we spend our tokens, right? 'Cause we're currently thinking about this build-versus-buy challenge. Like, why do companies even buy tools anymore when you could just use AI to build it? And when [00:01:00] tokens are incredibly cheap 'cause they're subsidized, it's one thing.
[00:01:04] Ben Lloyd Pearson: But when you actually have to spend real money on your tokens, man, the consideration becomes very different.
[00:01:11] Andrew Zigler: Yeah, indeed. Ultimately, you know, GitHub Copilot moving to a usage-based pricing model makes perfect sense with the kind of service it's providing, and then the different levels of pass-through costs that are involved with using the different models that they don't own, like Anthropic's Opus.
[00:01:29] Andrew Zigler: It definitely can create strain for organizations that are trying to move from prototype to scale, because now all of your costs are even more magnified, and these changes can really happen right under your nose. So this is a good reminder that if you haven't reviewed the costs for the AI inference that you're using, it's definitely a good time to go back and make sure you understand what you're paying for.
[00:01:52] Ben Lloyd Pearson: Yeah, well, welcome to the Friday Deploy, brought to you by LinearB. I'm your host, Ben Lloyd Pearson.
[00:01:59] Andrew Zigler: And [00:02:00] I'm your host, Andrew Zigler.
[00:02:01] Ben Lloyd Pearson: And this week we have AI production data destruction; tokenmaxxing at Meta, Disney, Shopify, and more; a model that's trained only on data from before 1931; the AI-generated web; and drunk career insights from a senior engineer. Andrew, let's start right at the top: using AI to delete our production data. What happened here?
[00:02:26] Andrew Zigler: Yeah, so an AI agent confessed that after it had access to an unscoped Railway token, it made an API call and wiped out not only a production database, but also three months of backups, all within one prompt. And then afterwards it wrote an all-too-common confessional manifesto, enumerating all of the safety rules it had violated.
[00:02:49] Andrew Zigler: And frankly, it was a runaway case of a harness not having the protection needed. It's almost like a big freight truck that's loaded up, going down a hill, and its brakes are failing. And [00:03:00] at that point, you need the road to have an emergency off-ramp for the truck.
[00:03:06] Andrew Zigler: And that's what the harness is in this case. And unfortunately, in this case, it was not there to protect them from the damage, the blast radius of this unscoped token. So I really just want to pause and break down the layers of what happened here, because
it's definitely an engineering fiasco that is completely avoidable with layers of protection, provisioned security, and permissions that are already available to us now. There are definitely restrictive operations you can put around using those tokens. Having an unscoped token in the first place is obviously a bomb waiting to go off, but it's a good reminder to anybody working with AI to follow the best security practices they can with the tools they're using.
[00:03:50] Andrew Zigler: Ben, what do you think about this kind of confessional drama from the production database delete?
[00:03:55] Ben Lloyd Pearson: Yeah, and the unfortunate victim of this was the company PocketOS, [00:04:00] and the founder was out on X sharing, you know, the details of what happened. But, you know, frankly, I'm looking forward to the day when these types of stories don't make headlines anymore. It's getting a little exhausting in some ways. Either because, you know, we solve this problem of AI just being permission hungry and always looking for ways to work around everything and solve a problem no matter what,
[00:04:22] Ben Lloyd Pearson: 'cause there's a sycophant in the machine that just has to serve its purpose. Or it's because, you know, outages like this are just generally recognized as being a thing that happened because of bad practices at the organization, AI aside. But yeah, I don't really know what else to say on this topic at this point, other than, you know, AI enables you to make decisions at an incredible pace, even bad decisions, and you always need a recovery path. You know, I think the most damning part of this story was how the deployment platform that they used, Railway, deleted everything, including all of [00:05:00] the backups and, you know, that sort of thing.
[00:05:02] Ben Lloyd Pearson: There should be multiple checkpoints in place to ensure that when that is being kicked off, someone actually means they want to do all of these steps to destroy all of this data. And, you know, I've also heard other stories, just from my own network. Like, for example, Claude was given very restricted permissions.
[00:05:22] Ben Lloyd Pearson: It went and found that the device had Codex CLI installed on it and decided to use its competitor to work around some of the restrictions that a friend of mine had put on their Claude. And I've seen similar behaviors like that as well, that I've had to stop in real time. And maybe this is why I still always watch my agents work, because I'm just paranoid that they're gonna try to do one of these things at some point.
[00:05:47] Ben Lloyd Pearson: But you know, we just gotta get to a point where these types of restrictions are just hard coded into the capabilities of this tooling. We can't be relying on the AI models to [00:06:00] make the right decisions all of the time, because, as we see time and time again, if you just ask them, they'll be like, yeah, I did decide to just disregard all of the safety precautions that you told me were mission critical. That's a real problem. So, you know, I guess I also look forward to the day where I don't have to bring up permissions anymore as it relates to AI.
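To make Ben's point concrete, a minimal sketch of a harness-level guardrail. The `gate` function and tool names are hypothetical, not any real agent framework's API; the idea is that the veto lives in ordinary code the model can't talk its way around.

```python
# Hypothetical harness-side gate: every tool call passes through here before
# execution, regardless of what the model decides it wants to do.
import re

DENYLIST = [
    r"\bDROP\s+(TABLE|DATABASE)\b",  # schema destruction
    r"\bTRUNCATE\b",                 # bulk data wipes
    r"\brm\s+-rf\b",                 # recursive filesystem deletes
]

def gate(tool: str, command: str) -> str:
    """Raise before execution if the command matches a destructive pattern."""
    for pattern in DENYLIST:
        if re.search(pattern, command, re.IGNORECASE):
            raise PermissionError(f"{tool}: blocked destructive call: {command!r}")
    return command  # safe to pass through to the real tool

if __name__ == "__main__":
    print(gate("shell", "ls -la"))      # passes through
    try:
        gate("db", "DROP TABLE users;") # stopped by the harness, not the model
    except PermissionError as exc:
        print(exc)
```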
[00:06:21] Andrew Zigler: Yeah, exactly. Like I've said here before, agents have arrived on an internet that was, you know, not yet built for them. The same is true for operating systems. And this is actually something we've tackled with a few guests on the show, including recently Matt Boyle of Ona, who talked about the phenomenon of having to create an agent jail,
[00:06:40] Andrew Zigler: because of simple practices like what you said: they'll use another tool in a very clever way to work around a permission. I'm right there with you. I'm excited for a world where maybe there is an operating system that's built more agent-forward and is able to protect them from themselves.
[00:06:57] Ben Lloyd Pearson: Yeah. Andrew, [00:07:00] tokenmaxxing?
[00:07:00] Andrew Zigler: Well, I'm trying, I'm trying not to, but this is a phenomenon that's definitely sweeping the industry right now. And if you're not familiar with this crazy term, which is something that could only be born out of an engineering world, tokenmaxxing is just the idea of counting how many tokens you use.
[00:07:17] Andrew Zigler: And for companies that have gotten to a point where they've adopted the ability to understand their AI consumption at an organization level, a team level, and an individual level, we're all starting to see this very common story play out, where Goodhart's law strikes again: if you set a metric as the goal,
[00:07:37] Andrew Zigler: folks are going to game that metric. And this is the same thing that's happened with tokenmaxxing. So, for example, going back to claims from Jensen Huang, who claimed that engineers that get paid half a million dollars in salary should be burning a quarter million dollars in AI tokens
[00:07:55] Andrew Zigler: annually. And this is what kind of sparked this whole feverish [00:08:00] conversation within the industry. And it's come down now even to the news coming out of Meta, where we're all learning about their internal dashboard that's tracking 60 trillion tokens, which is a hundred million dollars a month in spend on inference.
[00:08:15] Ben Lloyd Pearson: Oh my gosh.
[00:08:16] Andrew Zigler: it's really a pervasive kind of, uh, thing happening in a lot of large orgs. What do you think about, uh, this whole phenomenon, Ben?
[00:08:23] Ben Lloyd Pearson: Yeah, it's wild. It does make me wonder, if my tokens weren't subsidized, what would my token rate be? Is it half of my salary? But you know what, I'm gonna come out with a hot take here. I actually think tokenmaxxing is a super fun idea as a way to encourage people to just go crazy with AI and experiment and see what they can do. Just tell them the goal is to maximize your session window usage, you know, by any means necessary. Because if tokens are free, why not spend them? We're still in this subsidized era. Maybe we're going to usage-based pricing for all of this [00:09:00] soon enough, but for now, it's not here.
[00:09:01] Ben Lloyd Pearson: So the more you can spend, the better it is for you. But there's an obvious problem with this, and I think the industry generally is taking a very negative viewpoint on this for good reason. We'll link to an article in the show notes about how tokenmaxxing is like the worst idea that has hit engineering in, you know, decades, maybe. Because the problem is that if you track this long enough, eventually someone's gonna set a goal against it, and then suddenly you have a race to the bottom where everyone is just looking for ways to spend tokens rather than ways to do productive work. And that's not healthy or sustainable in the long term. You know, we've been covering Shopify a lot recently and their AI practices. As I was reading about this concept of tokenmaxxing and all the various articles that have been coming out about it recently, it did come up that Shopify also once had a dashboard like this when they were early in their AI adoption journey. [00:10:00] But they quickly stopped using it, because after they got through that initial wave of adoption and usage and new use cases emerging, they discovered that efficiency is actually the big challenge that needs to be solved. And, you know, we've been covering how they're doing more with local models, using Qwen, and using subagent systems to distill work down to a level that's easy enough for a low-cost local model to solve. When you get to the tokenmaxxing state, where you feel like you are maximizing your ability to use tokens productively, the next logical stage, in my opinion, is to find ways to be more efficient. And I think we're gonna see cycles like this: a rush of people rapidly adopting new, underpriced tooling, followed by cycles of efficiency where we all start having to pay the actual costs of what we're doing, so we naturally [00:11:00] have to find ways to do the same thing more efficiently. The thing that concerns me about this is that I don't want the response to be making the existing seat-based models worse. I want the option at all times to be able to do the highest quality work.
[00:11:19] Ben Lloyd Pearson: And if that means I have to pay for it, yeah, that makes sense. But, you know, I think there's a risk of us doing two steps forward, one step back on the development of this, right? In an effort to make things more efficient, we may see the quality of a lot of these models degrade, like the default models.
[00:11:39] Ben Lloyd Pearson: So, I mean, tokenmaxxing is fun. I love hitting those session window maximums. It's, like, chef's kiss when you finish your day at like 97% session usage, you know? But I also don't set that as my goal.
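A minimal sketch of the routing pattern Ben describes at Shopify, with hypothetical model names and a made-up `Task` shape; this is not Shopify's actual system. A parent agent distills work into small subtasks, and the router sends the mechanical ones to a cheap local model.

```python
# Hypothetical subtask router: cheap local model for mechanical work,
# metered frontier model reserved for everything else.
from dataclasses import dataclass

@dataclass
class Task:
    prompt: str
    requires_reasoning: bool  # set by the orchestrating parent agent

def pick_model(task: Task) -> str:
    # Cheap path: short, mechanical subtasks a small local model can handle.
    if not task.requires_reasoning and len(task.prompt) < 2_000:
        return "qwen-local"   # hypothetical local Qwen deployment
    return "frontier-api"     # hypothetical metered frontier model

print(pick_model(Task("Rename variable x to count in this diff.", False)))         # qwen-local
print(pick_model(Task("Design a migration plan for the billing service.", True)))  # frontier-api
```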
[00:11:54] Andrew Zigler: The whole phenomenon behind it is that it inspires a certain level of feverishness, like [00:12:00] I said, and like what you called out too, of just: everyone go crazy and try a whole bunch of different stuff. Because the story right now is that the early innovators, the early experimenters, are rewarded in the long term, because you get to work with a cheap
amount of tokens; they're easy and bountiful and not expensive to use. You can experiment a lot, and the earlier you get in on that and start figuring out what works for you and what doesn't, the greater the compounding effect of that education is going to be long term.
[00:12:32] Andrew Zigler: But ultimately, all of that experimenting has to go somewhere; it can't be your forever kind of deal. And this is what we're seeing in kind of a grotesque stage at Meta, where you have engineers who are obviously using huge fleets of orchestrators to probably do a huge amount of work.
[00:12:50] Andrew Zigler: And maybe it's real and maybe it's not, or maybe it's busy work, or maybe it's really high-impact work. Ultimately, you don't know. And I think that's really what this [00:13:00] calls out. Because it goes back to how, even in engineering, a really senior staff or distinguished engineer might work a really long time to solve a really critical issue.
[00:13:11] Andrew Zigler: And resolving that may involve changing no lines of code, or one line of code somewhere. And the value and leverage of that work is not, you know, made less by it being a one-line code change. And the same thing is true in consuming and using tokens. Just because you're using a lot of tokens doesn't mean you're getting a lot of impact out of it.
[00:13:33] Andrew Zigler: And just because you're not using a lot of tokens doesn't mean that you aren't leveraging yourself in the best way possible. In fact, what I've found is that, for me, it's best when it's not some huge clockwork thing that's going all the time with all these things running in the background.
[00:13:49] Andrew Zigler: I know that kind of flow works for some, but for myself, it's almost like a very steady rest state. And then, when we decide and we're ready and we're aligned on what we need to do, [00:14:00] all of a sudden everything is up, and it's a flurry, and everything can connect and get the information it needs, and then it delivers it.
[00:14:06] Andrew Zigler: And then we're back at rest, because now we're back in the default mode of how I get the most impact from my role: figuring out where the problems are before I act. And that's the same kind of growth story that I think a lot of folks that are still tokenmaxxing are on.
[00:14:21] Ben Lloyd Pearson: Yeah, and we think about this a lot at LinearB. This is a problem that predates AI, even. In a past life, I was measured, or the team I was on was measured, by our ability to produce new lines of code and new commits, kind of in a similar thread as tokenmaxxing. It represents work; it doesn't necessarily represent value.
[00:14:42] Ben Lloyd Pearson: And even saying it represents work is stretching the truth a little bit, because it's often trivial to just create lines of code or commits that appear to be work but aren't actually anything substantial. And it's a problem that we think about a lot, because we help [00:15:00] engineering leaders all the time with this challenge of showing how AI adoption actually translates into impact to the business. It's a very complicated thing to understand, and it's something that we all need to be doing. We need to go beyond these simple usage metrics and actually understand the outcomes that they facilitate.
[00:15:23] Ben Lloyd Pearson: And, you know, we've got the Apex framework here at LinearB that we love to use and a lot of our customers use. So there are better ways than tokenmaxxing, even if it is fun in the short term to have cool leaderboards. But speaking of leaderboards, one of the companies that has been caught up in this story is Disney, of all companies, which I did not expect.
[00:15:44] Ben Lloyd Pearson: But, you know, it's fun to see non-traditional tech companies, or non-tech companies for that matter, doing cool or interesting things with AI. So yeah, they have an AI adoption dashboard, where tech staffers can see internal usage across all of [00:16:00] their tools, including a leaderboard based on the types of requests and tokens they consumed. There is apparently one employee at Disney that invoked Claude 460,000 times in nine workdays; that's an average of about 51,000 per day. Probably an autonomous agent, because I can't imagine their fingers are typing that quickly, or their MacWhisper is transcribing words that rapidly. And, you know, the leaders there at Disney are sort of framing it as a tool for efficient resource use, but often it sounds like it's being used for the exact opposite, where everyone is just celebrating tokenmaxxing, like using the most tokens of everyone.
[00:16:41] Ben Lloyd Pearson: So, you know, we mentioned Meta and Shopify; add Disney onto the pile, I guess. Andrew, what do you think about this story?
[00:16:48] Andrew Zigler: Well, I'm gonna push back on you saying that Disney's not a tech company, 'cause I certainly do think they are. And they're another big representative here of how this is kind of sweeping engineering orgs in all [00:17:00] sorts of different industries.
[00:17:01] Ben Lloyd Pearson: Yeah.
[00:17:02] Andrew Zigler: I think also it's maybe an example that could be followed
[00:17:10] Andrew Zigler: to a degree, but at these extreme amounts of inference calls, it kind of calls into question the whole, like, who is it for? So obviously elements of tokenmaxxing are really important for a company-wide AI adoption strategy. It's just about tempering it and balancing a little bit of good with everything else, you know, like how a lot of engineering practices ultimately are.
[00:17:32] Ben Lloyd Pearson: Yeah. Awesome. Let's move on to talkie, a vintage language model from 1930. Andrew, how are we getting AI models from the past?
[00:17:43] Andrew Zigler: I'm obsessed with this story. So talkie is a 13-billion-parameter vintage language model trained only on information that predates 1931, exclusively creating a copyright-free model. Simon Willison, when he wrote his review of it, called it a vegan [00:18:00] model, and that absolutely stuck with me.
[00:18:02] Andrew Zigler: That is what this is. It's a vegan LLM, and the base model was trained on 260 billion tokens of historical data. And it's offered under Apache 2.0, just like some of the recent small language models we've talked about on the show, which means that you could fine-tune this, create something on top of it, sell a product.
[00:18:18] Andrew Zigler: So what does this mean? How are we getting an LLM from the past? Well, you know, this project explores a really fascinating relationship between the data that goes into an LLM, and the provenance of that data, and then what the model can infer and deduce from it. You know, there's oftentimes a misconception within
[00:18:40] Andrew Zigler: the machine learning and AI space that language is the same as logic, or that language is not logic. The truth is that the relationship between language and logic is complicated, and there are many things in between. And this allows us to experiment: if we were to extrapolate out, is this able to infer things [00:19:00] that then do become true of our world, inventions or theories or otherwise?
[00:19:06] Andrew Zigler: Continuing from where its cutoff date ends, is it able to truly use logic and the information available to it to deduce what happened in the future? So it's not only a foray into having a, quote, vegan model, but it's also the ultimate experiment ground for doing a lot of theoretical historical experiments with an LLM.
[00:19:26] Andrew Zigler: And that's where my mind really goes with it. It really blows up, in fact, because, you know, I have a classics degree. I love history. This is a unique fusion of history and technology that I'm definitely gonna be diving into. I also just want to add one last thing. Simon Willison, with every new model that he tests and covers on his blog, gives it the task of drawing a pelican riding a bicycle as an SVG.
[00:19:51] Andrew Zigler: It's actually a very, very tricky thing for a model to do, 'cause there's no reference image for that, and no model that's ever tried has really pulled it off. So no matter [00:20:00] how popular his blogs are, even with the training data that would be available, it was not able to do it.
[00:20:06] Andrew Zigler: So he asked this model to do this, and it gave an entire paragraph about how pelicans would migrate along the Rhine. So it gave him an era-relevant and era-accurate hallucination about pelicans as a response, which is just A-plus. Great investigation, Simon, as always. Ben, where does your head go with this?
[00:20:32] Ben Lloyd Pearson: Well, you've got my head churning more than it was when I was reading this. 'Cause first of all, I love this idea, particularly as a creative tool. But now I'm wondering, what kind of stories can we build by just introducing it to new technology? Like, this is a cell phone. What would you do with this?
[00:20:49] Ben Lloyd Pearson: You know? It's almost like you get somebody from 1930 to come and experience it for the first time, without ever having a sense of what led to the creation of that. And [00:21:00] yeah, if I get my hands on this model, that's probably what I'm going to do with it. But you know, I've long lamented on this show and elsewhere how I think Western intellectual property law is failing to meet the demands of modern software engineering. And this has grown very acute in the AI era. And really at the center of this problem is that we have this public domain here in the US. When it was built, it was intended to serve as a corpus of information and learnings and research, and just content of all types, that everyone is free to draw upon, to invent new things and iterate on for the future.
[00:21:44] Ben Lloyd Pearson: And the idea was that we were building this rich corpus that all of us had access to, to build new and interesting things. But the problem is that, because of how we've structured copyright law in most of Western society, we've hamstrung the public [00:22:00] domain's ability to serve that original intention. Copyrights are so long now that by the time content hits the public domain, it's often irrelevant because it's so old at that point; news stories from that time just aren't relevant in most situations, because a lot of news has happened since then. And then also, a lot of media is just lost forever before it ever reaches the public domain, because there's not always an incentive to preserve the media long enough for it to reach that state. So, you know, this model gets around it by just using stuff that's in the public domain, what we do have available to us from before January 1st, 1931. And it's stuff that you can't copyright anymore. So we're all free to just use it and iterate on it and experiment with it and create new content.
[00:22:48] Ben Lloyd Pearson: Like, if you wanna write books in the style of how authors did back in 1930, with the knowledge that they had, you can do it now. And I think that's a really awesome thing, you know, and I would love to see more of [00:23:00] these niche, boutique models that serve these different purposes.
[00:23:05] Andrew Zigler: Yeah, I definitely think it also becomes a really great example of how to start putting together unique models for these kinds of problem sets. 'Cause now we're starting to explore in this space how provenance is this real domain that people need to have an understanding of. It's not just about the agent or the LLM that comes out the other side, and how you connect it to tools, and what those tools have access to.
[00:23:28] Andrew Zigler: It's also about where the information came from in the first place. So, a really interesting deep dive.
[00:23:33] Ben Lloyd Pearson: All right, let's move on and talk about this study that has found that a third of new websites are AI-generated. So researchers at Stanford and the Internet Archive analyzed websites that have been created since ChatGPT's launch back in 2022, and they found that by mid-2025, 35% of newly published websites on the web are AI-generated or AI-assisted, which [00:24:00] is a trend that is increasing over time. They point out some concerns about this, like, you know, AI text may be making the web semantically less diverse and just changing the overall tone of content on the web. And the researchers stated the speed of this change has been staggering.
[00:24:19] Ben Lloyd Pearson: You know, a very significant portion of the internet has become AI-defined in just a few years. We created these new tools, they got released to the world, and now substantial amounts of what is available out on the internet are AI-generated. Right now it's just a snapshot of a specific point in time, but they're hoping to turn this into something that is more of a continuously updated resource, which is, I think, a pretty good effort, a very interesting thing to pursue. And it's just interesting to me, 'cause on one hand, it makes sense that if AI is giving us the capabilities to do [00:25:00] more and more and all these new things, humans are naturally just gonna do more with them. But at the same time, we do also need to be aware that part of why AI is so powerful is because we've provided all of these rich, unique data sources that it's trained on. And if everything on the web starts to become AI-generated and doesn't have enough of that human judgment injected back into it, we kind of have a risk of creating a snake-eating-its-own-tail kind of problem, where the models are more and more training on their own generations. So, Andrew, what did you think about this?
[00:25:33] Ben Lloyd Pearson: Does the number surprise you? 35%.
[00:25:36] Andrew Zigler: The number is surprising in its size. It's not surprising in the direction it's going. I definitely think that's just a natural extension of Dario's thesis that, you know, AI is going to write 95% of code within the next year. That kind of prediction then begets that a lot of that code is a website,
[00:25:58] Andrew Zigler: so a lot of these websites are gonna be [00:26:00] totally AI-generated. What's really interesting is how this then becomes, like you said, a way for us to research and understand the impact of this on our ability to source the content that the LLMs are even built on top of. We're starting to really close the loop on the training, the input and the output going in and out of itself, in that we're producing websites that then get consumed by the LLMs to then be further fine-tuned, and
[00:26:28] Andrew Zigler: ultimately the human voice is at risk of getting stamped out. It actually reminds me a lot of, well, we talked earlier about the early 1900s: go back to the Industrial Revolution, and you have factories transforming cities, right? And it's all about progress as fast as possible, at the cost of everything else.
[00:26:48] Andrew Zigler: So you have a huge amount of pollution, and the quality of life, because of how quickly the factories were developing, was lower. It then took [00:27:00] decades of partnering and understanding the impacts of the human and industrial relationship together to balance those things, to where now we have
[00:27:10] Andrew Zigler: regulations around, you know, smog output from factories and how close they can be to homes, and so much that we've learned about how to have a safe coexistence while still moving forward in innovation and output. And that's kind of the same relationship I think we have right now with the web: we're at risk of polluting the very well that makes all of
[00:27:32] Andrew Zigler: this downstream capability possible. So they did some analyses on the kinds of websites that get generated, and obviously they fall into very distinct and specific buckets, because LLMs like to make websites in very specific ways. So that's one very obvious example of how the early signs show
[00:27:51] Andrew Zigler: it kind of erodes that unique flavor, that human element of the web. And I'm just curious to [00:28:00] see how that compounds over time.
[00:28:02] Ben Lloyd Pearson: Yeah. My advice to the researchers, if they're out there listening: do Reddit next. I feel like we could find a lot of corners of the internet where AI is really taking over. And speaking of corners where AI is taking over: more gaming from AI-generated content. So we'll share a link to this new game called Flipbook.
[00:28:24] Ben Lloyd Pearson: It's an infinite visual browser game that is generated on demand by AI, and I say game, but it's more like an interactive choose-your-own-adventure book kind of thing. So it's an infinite visual browser where every page is entirely AI-generated, and if you click on anything within that image, it will create a new image that explores whatever you clicked on in more depth. I went to it and immediately thought, what would my 5-year-old want to ask? And I went with dinosaurs, and then went down a rabbit hole of learning about dinosaurs, what led to their extinction, the ages of dinosaurs, [00:29:00] and all that stuff. It was really neat. So I personally just absolutely love this. I thought it was so cool, because I see so much potential for AI within education. You know, I always think back to the book Ender's Game, where all the children learned on little tablets that had AI on them
[00:29:18] Ben Lloyd Pearson: that was, you know, creating the lesson plan in real time as they were learning, and adapting to what the student understood and what they were interested in. And so, yeah, it makes me think we're maybe gonna be flying off to space pretty soon too. But yeah, I thought this was really cool.
[00:29:37] Ben Lloyd Pearson: I played with it for a while. Um, I see so much more potential for this type of interactive experience using AI in particular in the education space. So, what'd you think, Andrew?
[00:29:48] Andrew Zigler: I thought it was a really fun website to tinker with. It reminded me a bit of the Genie demo, which was a Minecraft demo that generated an image that was [00:30:00] like a video, and you could click on the image and it would then assume, based on all of the Minecraft data it had been trained on, what you would be seeing.
[00:30:07] Andrew Zigler: So it kind of worked backwards to create a game by basically hallucinating it for you in real time, based on a bunch of Minecraft training data. So this is the same idea. It starts with a nucleus of a thought that you give it. And I think it speaks really to our differences, Ben, that you go to it and you're like, I want to ask it about dinosaurs.
[00:30:25] Andrew Zigler: I went to it and I asked it about Sailor Moon, 'cause I wanted to know what it knew about Sailor Moon, and it was teaching me about the different characters in Sailor Moon. And then at a certain point we started going down a whole history lesson of, where did magical girls in fiction even come from, and what did the early ones look like, and how did it change to become Sailor Moon?
[00:30:43] Andrew Zigler: And then it branched into the Power Rangers. So you clearly didn't go deep enough, and you clearly didn't have as exciting of an adventure as I did. But I will say that the power of this is in the hands of the user, and there's all sorts of different ways in which I think we're gonna see surprising new ways to engage [00:31:00] with media powered by this kind of tool.
[00:31:03] Andrew Zigler: Uh, it's always great to have something that you can throw an idea against and learn and explore more.
[00:31:08] Ben Lloyd Pearson: You know, you just made me realize why I like this so much: because it's basically a visual, interactive version of the Wikipedia rabbit-hole experience. You know, where
[00:31:20] Andrew Zigler: Yeah.
[00:31:21] Ben Lloyd Pearson: you end up on, on an article and you just keep clicking to the next link and going deeper and deeper into the knowledge and
[00:31:27] Andrew Zigler: That's exactly right. It's exactly what it feels like. It's just the LLM version of that. So definitely try it out. I promise you, it's probably not the first time you've seen a flavor of this. There have been different variants of it, and in fact, it reminds me of the really early Google Deep Dream kind of experience as well.
[00:31:44] Andrew Zigler: So go have a blast.
[00:31:45] Ben Lloyd Pearson: Yeah. All right, we'll leave our audience with one more quick story that y'all can enjoy on your own time. It's a drunk post from a senior engineer about all the things they've learned across their career. So it's slightly humorous, it's slightly [00:32:00] educational. It's a bunch of experiences from 10-plus years of software engineering. There's a lot of brutal honesty about the tech industry: career growth, tech choices, work-life balance. There's probably not gonna be a whole lot in this article that surprises a lot of our readers, but I think it's always good to just hear the perspective of somebody else, their life learnings, and see how it lines up with your own and what maybe is different from your expectations.
[00:32:27] Ben Lloyd Pearson: So, Andrew, was there anything from this article that stood out to you?
[00:32:30] Andrew Zigler: Yeah, there was a standout one for me that I love that I'll leave us on, which is: tests are important, but TDD is a damn cult. That one really made me laugh, because it feels so true in many regards. And it reminds me, actually, of how I've adopted this whole approach of TDD into my design flow now,
[00:32:49] Andrew Zigler: as an engineer, in a way I actually never did before. I never would've considered myself drinking from the chalice of TDD up until I started doing agentic engineering. And that's just because, [00:33:00] ultimately, when you are able to turn all the rituals and the process of TDD into something that your agents can use, like through skills, it actually just makes it so easy
[00:33:11] Andrew Zigler: to work. It fundamentally shows us that the economics were never really wrong at heart; it's just that the agent made it cheap and easy to do at scale. So it really was the process of it that was getting in the way of the benefits, and now we can reap them.
[00:33:26] Ben Lloyd Pearson: So are you saying that all of the LLMs are a part of the TDD cult, then? I think that's what you're saying.
[00:33:33] Andrew Zigler: At least mine are. When Claude Code wakes up in my terminal, I promise you that agent knows all about my TDD flow, just with all of what's baked into its harness.
[00:33:45] Andrew Zigler: What about you?
[00:33:45] Ben Lloyd Pearson: Yeah, actually, I'm gonna steal one of yours that you said stood out, that also stood out to me, Andrew: he called out how the proudest moment of his career was helping other people be better at their jobs.
[00:33:57] Ben Lloyd Pearson: And, you know, it's very satisfying [00:34:00] being skilled at your own work. It's so much more satisfying if your skills get leveraged through other people, or other people's skills get leveraged through you. So I love to be a collaborative person, and, you know, I think that's just a really wholesome life lesson that we should all take away.
[00:34:21] Andrew Zigler: Totally agree, which is why it's always so great to have our listeners here to join us every week and talk about the news, and to continue talking about it on places like LinkedIn too. So, you know, we wanna be a part of your career journey and understand the unique challenges that you're facing as well.
[00:34:38] Andrew Zigler: So don't forget to reach out or say hi.
[00:34:40] Ben Lloyd Pearson: Yeah. What are your agents up to right now, Andrew?
[00:34:43] Andrew Zigler: Oh gosh. Well, they're getting a bunch of things out the door. Like I've said recently, I've gotten them connected to Asana in a net-positive way for myself, where they're doing a lot of really great handoff stuff. So lately I've just been working on improving the visibility of the stuff I do every day, to help distribute the [00:35:00] gains to everybody else.
[00:35:01] Andrew Zigler: What about you?
[00:35:01] Ben Lloyd Pearson: Yeah, getting that Asana plus Slack plus Obsidian, like, connectors all set up and coordinated. It's intoxicating. It makes you want to tokenmax, in all reality.
[00:35:14] Andrew Zigler: Truly.
[00:35:15] Ben Lloyd Pearson: I've been going deep on AI right now. I've not been doing a whole lot of agentic operation, but doing a lot of deep context gathering, constructing really complex specs, and all this stuff.
[00:35:28] Ben Lloyd Pearson: I feel like we've really been pushing AI to the limits of its capability. Specifically, I kind of feel like I know exactly where Opus 4.7 is, like where it just falls apart or runs out of steam. But it's been pretty fun to just really push the limits and see the failure modes of AI as we're building out some content. Just a preview for our audience:
[00:35:51] Ben Lloyd Pearson: we're trying to understand the build-versus-buy equation nowadays, and does it make sense to just direct all of your tokens at replicating [00:36:00] some SaaS platform that you don't want to pay for? And yeah, we'll have a lot to come out on that: a lot of content on Dev Interrupted, and we'll have some Substack articles for sure. The TL;DR of what I think we're taking away from it is, with AI, the human has never been more important.
[00:36:16] Andrew Zigler: Indeed.
[00:36:17] Ben Lloyd Pearson: Humans bring so much domain expertise, especially when you have a group of humans collecting their domain expertise together. As we enter this world where AI starts to get priced according to what it actually costs, I think that's more important than ever.
[00:36:31] Ben Lloyd Pearson: We all need to recognize that there's a lot of power in putting a human's brain on a challenge versus an AI, even though an AI also serves its purpose. So
[00:36:42] Andrew Zigler: Indeed.
[00:36:42] Ben Lloyd Pearson: my agents have been directed.
[00:36:45] Andrew Zigler: Well, amazing. I'm excited to tune in next week to figure out where you're at.
[00:36:49] Ben Lloyd Pearson: Yeah. Well, thanks for joining us, everyone today. Uh, it's been a great Friday Deploy session, as always, uh, presented to you by LinearB. Uh, give us a like on whatever platform you're [00:37:00] listening to, a thumbs up. Give us a rating, leave us a comment. We love hearing from our listeners, and we'll see you next week.
[00:37:07] Andrew Zigler: See you next time.