Podcast
The self-authoring wiki, beating brain fry, and Obsidian as memory is a trap

By Andrew Zigler

Have you or a loved one been afflicted by "brain fry" after managing too many autonomous agents? This week on the Friday Deploy, Andrew and Ben explore the cognitive toll of orchestrating AI swarms and share Kelly Vaughn’s expert strategies for avoiding burnout. The hosts also discuss Google's new campaign to punish websites that hijack the back button, the breakthrough of running Gemma 4 natively on mobile devices, and a new 8-step maturity model for building agentic data pipelines. Finally, they dive into a heated debate over whether Obsidian flat-files are a scalable memory solution for AI, comparing the methodology to Andrej Karpathy's new agent-compiled wiki system.

Show Notes

Transcript 

(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)

[00:00:00] Ben Lloyd Pearson: So, Andrew, are you as excited as I am about this news that Google is gonna start punishing websites that hijack your back button?

[00:00:10] Andrew Zigler: Uh, I mean, for me it's a little too late. I'm not going to those websites anymore. My agents are, though. Did it take my agents complaining for them to finally do something about the back button not working on, like, bad websites?

[00:00:23] Ben Lloyd Pearson: Oh yeah. So you're saying Google's token costs were getting too high because their agents' back button was getting hijacked, so now they're gonna take action on it.

[00:00:32] Andrew Zigler: Well, maybe I didn't take the thesis that far, but now that you've said it, that's very compelling that these agents were getting caught in these loops. But I do love the, I do love the movement that Google is.

[00:00:44] Andrew Zigler: You know, continuing to fight spam, uh, especially during a time when, like, bad practices on the web are at an all-time high in terms of spam and vibe-coded websites that don't act like you'd expect.

[00:00:54] Andrew Zigler: And so, uh, any kind of pulse check from Google of, like, hey, we still care about spam on the internet, or, you know, [00:01:00] bad website practices, is a win in my book. What do you think about it?

[00:01:03] Ben Lloyd Pearson: Yeah, I mean, it makes me just wonder how many other things out there Google sees where it's like, wow, our agents are having a really hard time consuming websites that do this, so we should go punish those websites. They're probably, like, the only company in the AI space that actually has the authority and power to do that, right?

[00:01:22] Ben Lloyd Pearson: Like.

[00:01:22] Andrew Zigler: They truly are,

[00:01:24] Ben Lloyd Pearson: Yeah.

[00:01:24] Andrew Zigler: they have a unique moat in terms of, you know, kind of owning how people were consuming the web in the first place.

[00:01:30] Ben Lloyd Pearson: Yeah. Yeah. Well, awesome.

[00:01:32] Andrew Zigler: Yes.

[00:01:33] Ben Lloyd Pearson: Welcome to the Friday Deploy from LinearB and Dev Interrupted. I'm your host, Ben Lloyd Pearson.

[00:01:38] Andrew Zigler: And I am your host, Andrew Zigler.

[00:01:40] Ben Lloyd Pearson: And this week we're covering Google's offline AI breakthrough, agent swarms for data teams,

[00:01:45] Ben Lloyd Pearson: Obsidian isn't AI memory, or is it, when we talk about Karpathy and his self-writing wiki. And lastly, we'll close out with the AI brain fry epidemic. Andrew, let's just start right at the top with this new Google Gemma news. What do we [00:02:00] have here?

[00:02:00] Andrew Zigler: Yeah, so continuing the story we've been covering about, um, small language models, open source models, um, alternatives to foundation models like Anthropic's Claude. Uh, this is an article highlighting Gemma 4's ability to run natively on the iPhone, which is something that we have called out here on the show before, when we first talked about it after its unveiling about two weeks ago.

[00:02:21] Andrew Zigler: Uh, it's a great reminder, um, about all the different variants of this model and how they're optimized for mobile devices, devices on the edge, uh, devices with internet as bad as mine. And this represents, like, a major shift towards AI in places where internet connectivity is spotty.

[00:02:39] Andrew Zigler: there's like a real danger in the world of a lot of populations getting frankly left behind in the AI revolution.

[00:02:46] Andrew Zigler: Personally, I see Gemma as a really great step towards making AI more accessible to the rest of the world.

[00:02:52] Ben Lloyd Pearson: Yeah. A lot of people and use cases, I think. Um, and I think it's interesting to note that it feels like Google's really [00:03:00] trying to position this as something for developers and power users to treat as, like, a foundation for future capabilities, rather than a feature that they're rolling out to users. That totally makes sense to me. I mean, the average AI user isn't gonna know which tasks are appropriate to hand off to a local model, nor would they often even know how to do that in the first place. You know, and like you said, we've been covering a lot of these stories of how local models are becoming more efficient, higher quality, easier to delegate subagent tasks to. And I really do think this type of stuff is gonna open up a lot of efficiency gains, but it's also gonna open up a lot of new use cases where you need either completely offline AI or some sort of edge AI capability, where it's just too expensive to go back to a central service for, uh, you know, API services, versus just trying to do it locally.

[00:03:51] Ben Lloyd Pearson: So, yeah, you know, a lot of companies are competing in this space more and more. It's really exciting to see Google being a part of it as well, [00:04:00] particularly 'cause, you know, as we covered in our opening, they have a lot of power in their incumbency that lets them, you know, push this stuff out in a way that other companies may not be able to do.

[00:04:11] Ben Lloyd Pearson: So, we'll definitely be following this a lot more, 'cause I think local models are increasingly gonna become, like, the story of the rest of this year, maybe.

[00:04:22] Andrew Zigler: I completely agree. Um, it's an opportunity for everyone to get involved and to reduce their costs. Like, we even covered the Shopify story, I think, like, two weeks ago, about how they not only cut their inference costs from OpenAI by 75 times, but they also, um, got a multi-agent

[00:04:38] Andrew Zigler: architecture basically for free, I mean cheaper, even, out of the box, because they were able to leverage these really

[00:04:44] Andrew Zigler: well-tuned models. So lots to come on this story. Don't be sleeping on small language models. They are here to stay.

[00:04:51] Ben Lloyd Pearson: All right, let's move on from Google. We've given them enough attention. Let's talk about Watertown, the agent swarm data stack. What do we have here, Andrew?

[00:04:59] Andrew Zigler: [00:05:00] Yes. So this is a really fun article from Jordan Tigani. It introduces Watertown. He works at MotherDuck. It's a database service, largely used for AI inference and vector search, but it's also just, you know, a database, and he works with large amounts of data. And this framework takes Steve Yegge's, uh, Gastown idea and maps it into an eight-stage progression for data teams that are using AI automation.

[00:05:26] Andrew Zigler: Starting from the very beginning, like automating SQL assistance, all the way to having self-healing pipelines around querying and manipulating data. And it uses structured communication in really smart ways, borrowing lots of, uh, techniques that Steve has used to create his software factory. Now they kind of have a data analysis factory. It's really a great glimpse at how people can adapt this kind of methodology for their own, uh, use cases.

[00:05:51] Ben Lloyd Pearson: Yeah, there was a line in this article that really stood out to me and made me chuckle a little bit. Uh, and that is: agents are like violence; the only solution to the [00:06:00] problems they cause is to use more of them. Now, I may not have used violence as the example, but the line resonates with me extremely well.

[00:06:08] Ben Lloyd Pearson: You know, 'cause I feel like the challenges that we solve with AI only create additional challenges that can also be solved with AI. And hopefully those new challenges are bigger and better challenges, rather than just, like, dealing with the toil and the slop that poorly optimized AI can create. Uh, so yeah, there's this eight-step maturity model for agentic use within data pipelines that is outlined in this article, and I really think this is an exercise that we should be applying to most knowledge work today. So, you know, we do a ton of content production here at Dev Interrupted, and I haven't really thought about a system for, like, leveling where we are agentically within that. Uh, you know, we are definitely progressing deeper and deeper into agentic workflows. But I think what makes these models particularly important is that [00:07:00] they give you a maturity roadmap that shows you where you should be focusing next. So you can see where you are now, but then also what the next step looks like, the types of things that you would need to build to achieve that step. Uh, there's also, you know, another really great metaphor in this article, um, borrowing again from Steve, based on the book Master and Commander, I think, or something like that. It's about, you know, basically how an agentic data pipeline operates sort of like a ship at sea. So you have a log of observations, you have orders that agents receive to do something, there are flags that show when human feedback is needed, there are captains, there are other roles within this ship, um, that help maintain and build data pipelines. Uh, there's also this metaphor about a scribe, which I particularly love, 'cause I've used that exact metaphor myself in the past when building agentic content pipelines.
So, yeah, you know, Andrew, I think you and I are both firmly in the camp of metaphors being a really great way of conveying [00:08:00] meaning to AI, and it can be really powerful, particularly if you can tie everything to that central metaphor. So, you know, I love seeing these concepts getting applied to new fields, and I think, like other stories we're covering today, we're just gonna see more and more of this, uh, as time goes on.

[00:08:16] Andrew Zigler: I love the idea of agents needing to get brought in to solve problems that agents create. That's so true. And it really speaks, actually, not to a whack-a-mole situation of

[00:08:27] Andrew Zigler: using them as a solution, but rather to the next order of scale that you end up working on. What it makes me think of is, like, if you're gonna go build a huge highway bridge over, like, a bay or a river, um, you're probably not gonna use, like, hammers and nails and, you know, things by hand.

[00:08:46] Andrew Zigler: You're probably gonna use power tools and trucks and, like, massive cranes and equipment to do it. And you know what? When you bring those in, that is then gonna require more heavy machinery, more specialized people, more logistics, [00:09:00] and you might think, like, oh gosh, why couldn't they have just built it with, you know, hammer and nails?

[00:09:04] Andrew Zigler: And we all know the answer to that. To build, like, this sturdy thing that's going to, uh, last into the future, we have to use the tools smartly, and that means acknowledging that, like, sometimes you gotta put tools on tools. But I'm strongly with you here on the idea that, like, Gastown is coming to lots of different disciplines.

[00:09:22] Andrew Zigler: I think this is a great example of using it to create, uh, like, a semi-autonomous data working system. And particularly what was really interesting to me is how it can flag these states for users, for humans to come in, almost like the human in the loop, but more like a human tool call that the agents can do to kind of get, um, more information.

[00:09:44] Andrew Zigler: I think we're gonna start seeing this pattern more. Um, I first learned about this, I think, uh, last week when I was at HumanX and I hosted a panel. Uh, one of the panelists, Angela McNeil of Thread AI, she's on the team that created Lemma. She made this really [00:10:00] smart point about exactly that: that humans are evolving from being in the loop to being, like, on the loop, or they are the loop, they're the tool call itself.

[00:10:09] Andrew Zigler: Um, it's a really fascinating way of looking at it, and it's really great to see it make its way into this example. I love how people are adapting Gastown, and I hope to see more of these.

[00:10:17] Ben Lloyd Pearson: All right, let's move on to our next one: stop calling it memory, the problem with every AI plus Obsidian tutorial. And I wanted to cover this one because I feel like this is almost like a direct attack on me as somebody who recently became an Obsidian convert because of agentic workflows. Uh, but in this article, the author argues that Obsidian markdown files as AI "memory," in quotes, is a fundamentally flawed architecture that doesn't scale. Um, so, you know, while Obsidian works really well for, like, personal notes, it fails as a data store, according to the author, because you can't query, filter, or handle relationships of that data at scale. And in this article, the author argues that engineering [00:11:00] teams need to use actual databases, like SQLite, instead of flat files that get dumped into context windows. Uh, you should use proper infrastructure, like Zu, Open Brain, Supabase, you know, all of the tools that help build these agentic systems, rather than just sort of hacking it with flat files in Obsidian. You know, like I said, I feel a little bit attacked by this. But I also am not gonna argue that a flat file system, with, like, markdown and JSON, is the most scalable system for when you need to do a whole bunch of agentic work.

[00:11:34] Ben Lloyd Pearson: I'm not gonna make that claim, but for personal use, or even for a small team, um, it is really often the quickest and easiest way to start building better context for your AI. Uh, and you know, it's funny, Andrew, 'cause I feel like you and I are sort of taking opposite directions on this. You know, you've gone very cloud-centric, where all of your agent workflows are triggered, uh, with stuff in the cloud that you [00:12:00] can, you know, run when you're, like, on your phone in a cab or working out in the morning or something. Versus me, I'm going, like, pure local only with everything.

[00:12:09] Ben Lloyd Pearson: Like, everything is now just a markdown or a JSON document that gets saved locally so I can just feed it into Claude Desktop or something like that. So what'd you think about this article, Andrew?

[00:12:18] Andrew Zigler: As a heavy Obsidian user, I think that, um, they really hit the nail on the head on how it can be used and what Obsidian really means for AI. Because Obsidian existed, obviously, before AI came onto the scene. Note taking in this kind of way, locally on your machine, with markdown files, with tags and links to each other in, like, a semi-wiki way.

[00:12:39] Andrew Zigler: This is a really established way of building your own personal knowledge management system, something that I've been a big fan of even before AI. I really love how this calls out the cargo cult thinking in, like, the AI influencer space, thinking that, like, oh wow, when OpenClaw came out, you know, it just had this, like, memory markdown file and

[00:12:59] Andrew Zigler: this was the [00:13:00] secret sauce.

[00:13:00] Andrew Zigler: This was why it was able to, like, have this persistence to its memory and its capabilities. But when you look closer, that's not actually the truth. Like, one of the things being that not long after OpenClaw hit the scene and became viral, uh, they quietly bolted on SQLite for this exact reason.

[00:13:19] Andrew Zigler: You need the ability to query, uh, and fetch only what you need. Uh, if you just dump a file into your context window, that's just brute force. It's not retrieval. There's no schema, no matter how beautiful your

[00:13:31] Andrew Zigler: front matter is on your markdown. There are no joins or traversals, and the limit on your ability to scale.

[00:13:37] Andrew Zigler: It actually comes a lot faster than you think. And the wiki links and stuff that link between notes are great for human discovery, but they are not queryable. You can, of course, create all sorts of tools for an LLM to query it, but then

[00:13:53] Andrew Zigler: you know, I'm just gonna call it out: then you're just writing a database, and so you should just use a database.
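
The distinction Andrew is drawing can be sketched in a few lines. This is a toy illustration, not OpenClaw's actual schema; the table, topics, and note text are all made up. The point is that a database lets the agent fetch only the rows that match, where a flat memory.md would be pasted into the prompt wholesale.

```python
import sqlite3

# In-memory DB for the sketch; a real agent would persist to disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, topic TEXT, body TEXT)")
conn.executemany(
    "INSERT INTO notes (topic, body) VALUES (?, ?)",
    [
        ("gemma", "Gemma runs natively on phones."),
        ("obsidian", "Flat files work for personal notes."),
        ("swarms", "Agent swarms need structured communication."),
    ],
)

# Retrieval: pull only the rows relevant to the current question,
# instead of dumping every note into the context window.
rows = conn.execute(
    "SELECT body FROM notes WHERE topic = ?", ("obsidian",)
).fetchall()
context = "\n".join(body for (body,) in rows)
```

Here only one of the three notes reaches the prompt; with a flat file, all three would, whether relevant or not.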

[00:13:59] Andrew Zigler: Now, [00:14:00] all of this said, there are very powerful ways to use Obsidian as a memory tool. One of them being that you use, for example, an AI-focused plugin. There are a few; there's, like, a Copilot plugin for Obsidian. You connect it with a subscriber token that you have, which could be, like, from your favorite foundation model provider.

[00:14:20] Andrew Zigler: And the key here is taking those notes and embedding them. These tools can actually take your notes and turn them into an embedded semantic lookup, basically, like, a local vector database for your LLM to use. This is the connector that you really need to make it more like a memory store.
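
A rough sketch of what those embedding plugins do under the hood, with a toy bag-of-words counter standing in for a real embedding model (the note names and contents here are invented): each note becomes a vector, and retrieval is a nearest-neighbor lookup by cosine similarity rather than a dump of every file.

```python
import math
from collections import Counter

def embed(text):
    # Stand-in for a real embedding model (e.g. one exposed by an
    # Obsidian AI plugin); here, just a toy bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

notes = {
    "gemma.md": "small language models on mobile devices",
    "memory.md": "obsidian markdown notes as agent memory",
    "swarm.md": "agent swarms for data pipelines",
}
vectors = {name: embed(body) for name, body in notes.items()}

# Retrieval: embed the query and return the closest note.
query = embed("markdown memory for agents")
best = max(vectors, key=lambda name: cosine(query, vectors[name]))
```

With real embeddings the vectors capture meaning rather than exact word overlap, but the lookup step is the same shape.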

[00:14:38] Andrew Zigler: However, I will say, as someone who's used Obsidian every single day of my life for probably the last four years, I don't use Obsidian at all in any combination with agents. Uh, like you called out, I work in the cloud. I like the ability to access everything from everywhere, talk to my phone or my terminal from any kind of place.

[00:14:57] Andrew Zigler: And that's what fits me. Like, right now I'm sitting in a [00:15:00] hotel room, I'm about to go spend the whole day at a conference. Yesterday I was in a keynote and I shipped two things. The ability to work that way is only because I've taken the opposite stance. I don't want to be locked to my local machine.

[00:15:13] Andrew Zigler: The one other thing I will call out here is that the unlock of taking things that matter to you and distilling them into a place where you can collect them over time, and where you and the agent can access them, is absolutely critical. And while I don't use Obsidian for this, I certainly have

[00:15:32] Andrew Zigler: tools and mechanisms for doing that.

[00:15:33] Andrew Zigler: So this gets you, like, 50% of the way there. Just be mindful of how retrieval actually works. Um, and don't just assume that it's working under the hood. You'd be surprised at how much you lean into the hallucinations of your agent when you think it's sitting on top of all your thoughts.

[00:15:49] Ben Lloyd Pearson: Yeah.

[00:15:49] Ben Lloyd Pearson: You know, and your approach, I'm not going to try to sugarcoat this at all, your approach is way more scalable than mine, if you're talking about scaling it out to a team or to your [00:16:00] organization or putting it in production or something like that. Um, but I also don't really view what I'm doing with Obsidian as building memory when I record all of my artifacts into it. Instead, I view it as more of a record of historical context, right? So instead of losing all that information to the ether as I used to, it all now gets captured in a single location that I can push to a Git repo, feed directly into Claude Desktop, or copy and paste files into whatever

[00:16:29] Ben Lloyd Pearson: GPT service I might want to use instead. And, you know, I think we don't really need to be as focused on the tech that we use to solve this, especially in this day and age. Like, it's really easy to use your agents to just migrate to new tech, um, particularly if it's something that you're just doing for your personal use. Instead, we should focus more on the process behind it all. So if Obsidian is the thing that solves it for you today, then you should use Obsidian, 'cause it's a great tool. But if you need to be more scalable, this article has some really great insights on how you can take it [00:17:00] beyond just flat file architecture. But, you know, I would actually bet that most people right now are not yet at a state where they need that level of scale. Um, and again, the most important thing that you can do right now is start these practices, rather than focusing on which tools you're going to adopt.

[00:17:17] Andrew Zigler: Bingo.

[00:17:18] Ben Lloyd Pearson: Yeah, with all that said, I'm sticking with flat files on my local machine for now.

[00:17:23] Ben Lloyd Pearson: I was not convinced to change from this. It just works so well for me, and, you know, I can move really quickly, and it's great. But I also understand why this article exists and why some people out there might have issues with it as well. All right. Well, let's move on from an article that says Obsidian is the wrong answer to all this, to the opposite:

[00:17:42] Ben Lloyd Pearson: An article that makes an argument for how you can use Obsidian in really great ways for this type of work. Uh, so this article focuses on some recent learnings from Andrej Karpathy, who, you know, recently shifted from using LLMs for coding to building self-maintaining knowledge [00:18:00] bases. So he's spending more time manipulating tokens for knowledge rather than manipulating code itself. Uh, and this article does a really great job outlining a system that uses a two-tier architecture with raw sources and an AI-compiled wiki that synthesizes and cross-links information automatically, so, talking about some of the relational challenges that we were just discussing. And this approach is a little different from, you know, traditional note-taking practices because AI is doing all of the synthesis work, basically just removing a lot of the cognitive friction, uh, around understanding things. For listeners that don't know, Andrej Karpathy was one of the founding figures of the modern AI era and coined the term vibe coding. And this article kind of breaks down a lot of the practices that, um, he's been following recently. It really resonates with how I feel. Like, you know, I don't do a whole lot of code writing these days, but I do a lot of content, which has a lot of similar [00:19:00] challenges. And for the last four months, I feel like most of my time at Dev Interrupted and LinearB has been really just focused on the things that I want to convey, rather than how I convey those things. AI does all of the structuring and turning it into clear and actionable information, but I'm the one that's there giving guidance along the whole way, and, you know, just making sure that the AI stays on the right track and has all the information it needs to make the right decisions.

[00:19:27] Ben Lloyd Pearson: So what'd you think about this article, Andrew?

[00:19:30] Andrew Zigler: Really cool piece. I'm a big fan of Karpathy. I follow all of his thoughts about how this space is evolving. I really liked his idea of taking knowledge-working tools that we traditionally use for coding and applying them

[00:19:43] Andrew Zigler: towards knowledge working and knowledge maintenance. And it's a really cool piece.

[00:19:47] Andrew Zigler: Um, and Karpathy has invented a lot of, like, net new things, but I will say, you know, Karpathy maybe didn't necessarily invent this way of building a knowledge base. What he's introducing here is [00:20:00] making it agentic, which is a really interesting level up.

[00:20:02] Andrew Zigler: But this goes, uh, all the way back to, like, Niklas Luhmann's Zettelkasten, which literally means note box in German, and Karpathy made the concept agentic. And, uh, Luhmann, he was a prolific writer. He wrote dozens of books in his life on a huge array of topics and was widely seen as just an incredibly bright, multidisciplinary person.

[00:20:26] Andrew Zigler: And the key to his secret, his prolific ability to write, was this note-taking system that has a lot of callbacks to what Karpathy has built here. Uh, he had a very specific notation and linking system on index cards, physical index cards, because this guy lived in, like, the 1800s, and he used it to produce, like, more than 70 books.

[00:20:46] Andrew Zigler: And we're talking, like, addresses, like 1/1a/3b, like a weird pidgin version of, like, a Dewey Decimal system. And he used this to organize knowledge and [00:21:00] traverse it

[00:21:00] Andrew Zigler: in a physical way. And so, uh, what Karpathy is doing is taking those same techniques, upleveling them into a virtual agentic system, and taking the compilation, the linking, and the health checking and making them more, uh, streamlined with agents.

[00:21:15] Andrew Zigler: And so imagine, 200 years ago, using this by-hand note-taking system, using it to write

[00:21:23] Andrew Zigler: more than 70 books, more than 400 articles in your life. And now you're living in a modern age and you're able to do all of that same stuff, but agentically, at scale. And these things can, uh, even, like, work and manipulate knowledge while you're sleeping or not at the wheel.

[00:21:38] Andrew Zigler: Uh, I think it's really fascinating, because it's also a really great reminder that words do not equal knowledge, right? Uh, manipulating words on a screen, or otherwise throwing tokens to get, uh, an inference result, isn't necessarily, in that form, knowledge working, because an important thing to remember is that knowledge does not [00:22:00] require language.

[00:22:01] Andrew Zigler: Um, someone can be unable to communicate via language. There are lots of people who are unable to grasp or speak with language, but they still have a large amount of intelligence and knowledge. They just don't have the ability to express it. And conversely, just because you can combine and move around words

[00:22:18] Andrew Zigler: doesn't mean that there's actually logic and knowledge underneath, because words are a representation of symbols, and symbol manipulation is where logic comes from. Karpathy is using that to, kind of, create a knowledge synthesis system. He's using words as, like, a through-path to actually manipulate logic and build up information at scale, uh, using the connections between them.

[00:22:43] Andrew Zigler: And I actually relate to this a lot. This is how I use Claude Code quite a fair bit. Um, I'd say that, like,

[00:22:49] Andrew Zigler: I use it, of course, for writing, as a writing tool. It's very helpful for that. But, um, I just have so many Claude Code sessions where no code is written, no article is outputted, but we're taking [00:23:00] ideas and cycling them around.

[00:23:01] Andrew Zigler: Rotating it, seeing it from a different angle, um, adding to it, stripping things out. And this kind of, um, surgical way of getting in with words and manipulating them to find patterns and knowledge is really powerful. I definitely think there's a huge unlock in using Obsidian in this way, in any kind of knowledge-working project.

[00:23:22] Andrew Zigler: So definitely, people should be taking this kind of approach more seriously, because writing is the thinking, and agents can create that map for you, uh, and allow you to

[00:23:31] Andrew Zigler: traverse and create those really incredible original thoughts.

[00:23:34] Ben Lloyd Pearson: Yeah. The simplicity of the metaphor, I think, is what really makes it so powerful. It's all centered around this idea that you basically have two directories: you have raw and you have wiki. Raw is just all of the raw assets that are being brought into the system, um, and then the wiki is the AI taking those raw assets and processing them into some sort of refined component. And I've been using a metaphor like this a lot lately, actually. I've been calling [00:24:00] some of the agentic AI work that we've been doing sort of like running an ore refinery.

[00:24:06] Ben Lloyd Pearson: Like we have all this really raw resource coming into it and we have to process it into something that. Is more usable and refined like an init, for example. And it's very noisy while it's happening. But if you do it right, the thing that comes out the other end is this really nicely packaged thing that has a lot of uses. But you know, I mostly just love how this also kind of just contradicts the article that we just covered, because, you know, Carpathia also discovered that it doesn't need stuff like RAG to search all of his documents. You know, in fact, the LLM often does a really good job just with raw text files, particularly if you have some sort of index.

[00:24:40] Ben Lloyd Pearson: That is, an index that sits along with those text files and helps the AI determine which files may be relevant to a query that it's received. Um, and, you know, the article also points out how there are other similar tools out there, like NotebookLM, that sort of solve similar

[00:24:56] Andrew Zigler: Yeah.

[00:24:57] Ben Lloyd Pearson: challenges. But, you know, NotebookLM lets you, like, upload a bunch of [00:25:00] documents, and then you can ask questions and use it to generate new assets and do all of that stuff.

[00:25:05] Ben Lloyd Pearson: But that's actually more of, like, a session-based approach. Like, you're bringing all the context that you need for one challenge, and then solving that one at a time before you move on to the next session. What was outlined in this article is more of a systems-level approach to doing that same thing for all of the things, rather than just one effort. Uh, and of course, I love that there were some ideas in there about, like, using these tools to create automatic podcasts with AI avatars talking to each other about the topic. And, you know, I'll assure our listeners out there that I'm a human still. Andrew's a human. Our producer, Adam, behind the scenes, we're all

[00:25:38] Andrew Zigler: We're gonna stay humans. No, we're gonna stay humans. Y'all, I can tell you right now: they will drag me out of this podcast before they will put a virtual Andrew in it.

[00:25:50] Ben Lloyd Pearson: Yeah, well, maybe we'll just switch me out, and we won't tell you. I don't know, we'll see. But, you know, honestly, I think what this article really helps [00:26:00] solve is the challenge of keeping a system of record for all of the context, but also injecting new ideas into that system.

[00:26:07] Ben Lloyd Pearson: So, you mentioned the Lumen method. You know, every relevant idea needs to be captured and brought into the system as an artifact so that it can be improved over time and doesn't become stagnant. AI is just not great at that original thought that needs to be incorporated to improve a system. And this article focuses mostly on writing, but I do think there are many parallels between what is happening in the content production world and engineering right now. Like, we're solving very similar challenges with LLMs, and we also have very similar roadblocks in achieving them. So I think there's a lot of stuff that software engineering leaders could learn from this to help them improve their organization. All right, Andrew, let's talk about the brain fry and how we can break out of this AI spiral. What do [00:27:00] we have here?

[00:27:01] Andrew Zigler: Yeah, so friend of the show and past guest Kelly Vaughn has another article on After Burnout about how managing multiple agents creates this brain fry phenomenon in folks. That's, you know, a way to describe the mental fatigue from all of the context switching, of course, but also just the different ways in which you have to think during your day in order to navigate through a day with agents.

[00:27:23] Andrew Zigler: She cites a Harvard study that found that overseeing AI tools is actually more mentally taxing than using them. And I definitely have some thoughts on this that we'll get into.

[00:27:32] Ben Lloyd Pearson: Mm-hmm.

[00:27:33] Andrew Zigler: And she gives some recommendations on how to protect your mental health while using these tools. I think this is a really great reflection from Kelly, who brings an incredibly educated and unique perspective to engineering with her background in therapy as well.

[00:27:48] Andrew Zigler: And so, like, understanding how these things impact, uh, your cognitive health is really important to avoiding burnout.

[00:27:56] Ben Lloyd Pearson: Yeah, it is another great article from Kelly.

[00:27:58] Ben Lloyd Pearson: She always is producing [00:28:00] great content, and I feel the pressure that she's talking about myself quite frequently, to be honest. And she had a really great description of how it feels: every agent is essentially pinging, "I need attention, I need attention," and all you're doing is context switching between them. It sort of ends up feeling like you're less an engineer and more a manager of a team of engineers that you can't fully trust yet. And I think that last bit is really important, because since there's this distinct lack of trust, you have to constantly validate and course correct these systems on pretty high-level cognitive tasks. And I have felt that brain fry she describes on some days. I've had days where I've done very high volumes of agentic work and produced a lot of things in a very short order of time, and it's extremely difficult to use your brain that way for an extended period.

[00:28:53] Ben Lloyd Pearson: You know, I think Steve Yegge mentioned this a while back, how he feels like he can only do that for about three or four hours a day [00:29:00] before his brain really just starts to not be able to focus in the right way anymore. And Kelly Vaughn used a really great metaphor, referring to your brain as kind of like a plate: if you want to add more stuff to a plate, it doesn't make the plate bigger.

[00:29:13] Ben Lloyd Pearson: You just have to rearrange stuff, and eventually you just run out of room on that plate. And yeah, she mentions some fixes. She has a fairly simple fix that she proposes: one of them is just to stop watching what your agents are doing in real time. Like, it's really tempting to just watch all of the steps they're taking and analyze and be like, do I need to stop them and course correct? And maybe that's not the best way to do it. Maybe we should just be more conscious of stepping away from the agent, letting it finish, and then coming back when it needs the next step. And then she also recommends that if you're feeling brain fry at the end of the day, you just need to sit and really think about what the things were that contributed to it that day, and try to find ways to just [00:30:00] not replicate that in the future. Find ways to resolve it, find ways to get the focus time you need, or to step away from focus time to give your brain some time to recharge. So yeah, definitely go check this out if you feel like you're a little overwhelmed by AI.

[00:30:16] Andrew Zigler: I think the thing that stands out to me is this definition from the Harvard study that she cited, about what is really the biggest takeaway: it's not that using AI fries your brain, it's overseeing it. And that distinction is really important, because in the last several months, many engineers have been taking these steps from individual collaborations with agents, or single-threaded conversations, to managing and orchestrating agents at scale. And this isn't like, oh, I threw in three more agents, so I'm just doing three times the amount of cognitive load. It's actually more like you're doing a cubed amount of cognitive load, because you're thinking about it across all of the surface areas that they're working in.

[00:30:59] Andrew Zigler: The [00:31:00] more that you throw into it and kind of stare at it, the harder it is for you to pull away and do anything else. And I've definitely seen stories from founders citing problems with sleep or insomnia or focus issues that really only started to kick in after working with agents this way.

[00:31:17] Andrew Zigler: Like Yegge, who you cited, had an entire article on avoiding AI burnout through using Gas Town, the very orchestration system that went viral at the top of the year, because he himself was feeling it. And I see these candid stories from folks.

[00:31:33] Andrew Zigler: Uh, I really resonate with it. I can't say for myself that I've fallen into this trap, into a situation where it negatively impacts my ability to focus or even to sleep. I do agree that sometimes it can be hard to peel away from the terminal when it's in the middle of a session. I do like to watch and monitor what my agents are doing, at least at some stages of certain [00:32:00] tasks. And if you find yourself in this position of, oh, I have to watch them,

[00:32:03] Andrew Zigler: oh, I have to course correct, my challenge to you is to figure out what you're watching for and bake it into a skill, because that's ultimately what I was able to do to scale up my practice. My agents know very specifically the order and flow of operations in which I work, to the point where I can just kind of spew ideas at it about what I want. And I can trust that the system is going to rightfully create a spec, then review it with another third-party agent, and then create some tests right before it goes to write code. So I know that once I kick off that process, there's actually a lot of steps the agent is able to do without me interfering at all, where I can turn away and go focus on something else.

[00:32:46] Andrew Zigler: And I think getting to that point is how you avoid the fry.
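Andrew's flow, spec first, then a third-party review, then tests, then code, can be sketched as a fixed stage order that an orchestrator runs end to end without supervision. This is a hypothetical shape in Python, not his actual tooling; each stage function here is a stub standing in for an agent call.

```python
from typing import Callable

def write_spec(ctx: dict) -> dict:
    ctx["spec"] = f"spec for: {ctx['idea']}"   # stub for a spec-writing agent
    return ctx

def review_spec(ctx: dict) -> dict:
    ctx["review"] = "approved"                 # stub for a third-party review agent
    return ctx

def write_tests(ctx: dict) -> dict:
    ctx["tests"] = ["test_happy_path"]         # stub for a test-writing agent
    return ctx

def write_code(ctx: dict) -> dict:
    ctx["code"] = "def solution(): ..."        # stub for the coding agent
    return ctx

# The agreed order of operations, baked in once so no human has to enforce it per run.
PIPELINE: list[Callable[[dict], dict]] = [write_spec, review_spec, write_tests, write_code]

def prompt_and_forget(idea: str) -> dict:
    """Kick off the fixed stage order and walk away; inspect only the final context."""
    ctx = {"idea": idea, "log": []}
    for stage in PIPELINE:
        ctx = stage(ctx)
        ctx["log"].append(stage.__name__)
    return ctx
```

The "prompt and forget" part is that once `prompt_and_forget` is kicked off, every step runs in its agreed order, and the human only looks at the final result rather than supervising each stage.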

[00:32:51] Ben Lloyd Pearson: Yeah.

[00:32:51] Ben Lloyd Pearson: Well, coming back to the story we covered earlier: agents are like violence. The only answer to the problems they create is more agents and more deterministic [00:33:00] checks.

[00:33:00] Andrew Zigler: Or just, I mean, honestly, this reminds me a lot of classroom management. Like, okay, you have a homeroom, and then, what, you've gotta be sick or you're gonna be out for a day. And then you're out and you're thinking, like, oh God, what are my students doing to the substitute teacher?

[00:33:16] Andrew Zigler: Are they learning? Are we gonna be behind on our module? Like, there are so many things that go through your head. But if you're a teacher that creates systems and skills and support methods for that substitute teacher to actually step in and be that helper,

[00:33:31] Andrew Zigler: and also for your agents, for your students, to be able to know: like, oh gosh, if Mr. Zigler were here, what would he tell me to do next right now? How would he tell me to move forward with this? Like, the more that you can put in someone else's mind about how you would act

[00:33:50] Andrew Zigler: in that situation, the less you're gonna feel like you have to monitor. So that's what this reminded me a lot of: the fun of having my first absence as a teacher and then being like, oh God, this is more stressful than just being at the school. So there are definitely a lot of techniques to take away here. Prompt and forget is a really powerful one. Everyone should be striving to get there.

[00:34:12] Andrew Zigler: And if you haven't subscribed already to After Burnout, please go do so, because Kelly writes these really great deep dives on how to protect yourself in the agentic era.

[00:34:20] Ben Lloyd Pearson: Absolutely. All right, Andrew, so what violence are your agents up to right now?

[00:34:27] Andrew Zigler: Well, as you may have noticed, I'm at TDX, and I'm in my hotel room right now. So between the sessions and being on the floor, getting really cool tours of all of the developments from Salesforce, I've definitely been talking with my agents. As y'all know, I roam with them and I talk with them on my phone,

[00:34:46] Andrew Zigler: because I work via a terminal on a remote machine. So while I was in the keynote yesterday, I worked with two of my agents. Salesforce unveiled a lot of really cool stuff around Agentforce: headless tools that you can use to create tools in [00:35:00] Salesforce, like using Claude Code instead of just the proprietary Salesforce tools they had before.

[00:35:05] Andrew Zigler: The really cool thing I'm taking away from this is all the Slack bot innovations that they've come up with. They're really making the bet that, you know, where your chat happens is where the AI is gonna happen too. And I strongly, strongly agree. Even just two years ago, I gave this talk on multiplayer AI in a chat program and what that would look like: the trust problems, managing a shared context, how you would pass that information back and forth.

[00:35:30] Andrew Zigler: And I was exploring that, you know, two, three years ago. So it's fascinating to see Slack finally meet the moment and become this, like, agentic operating system for people and their agents to solve problems together. So, a lot of really cool demos,

[00:35:42] Andrew Zigler: so I'm really excited to be on site here.

[00:35:44] Ben Lloyd Pearson: Yeah, yeah. You know, if people say flat files and local stuff are not scalable, Slack is probably also not scalable. But it is quickly becoming sort of a central hub for agent interactions, for gathering information, [00:36:00] for doing research. I mean, it actually is a pretty powerful place.

[00:36:03] Ben Lloyd Pearson: I mean, we have more and more agents showing up in our own Slack, and it seems to be kind of the central hub of activity, you know?

[00:36:13] Andrew Zigler: Yeah, I'll even push back on that a little bit. I'll say that Slack can scale. Slack is where the work happens; it's where the chat happens. Or it doesn't have to be Slack, it could be anything that you're using to communicate, but your communication tool is, you know, always gonna scale with your company.

[00:36:28] Andrew Zigler: That's why teams of three people can use Slack, just like teams of 10,000 people can.

[00:36:33] Ben Lloyd Pearson: Yeah, yeah. I think it's kind of like the challenge of, we do need a system, though. In the same way that we need to be able to go from flat files to a more robust system,

[00:36:45] Ben Lloyd Pearson: I feel like Slack is gonna have to go on a similar journey, right? Because I think we want to use our Slack for our agents in ways that it's not really optimized for yet, and I think there's tons of potential for that. But yeah, that's where I think my agents are really kind of focused right now.

[00:36:59] Ben Lloyd Pearson: Like, you know, we've [00:37:00] been doing a lot of stuff to roll out our agents to other teams that haven't had as much exposure to this level of work yet, and it's all happening over our chat system, you know?

[00:37:09] Andrew Zigler: Yeah,

[00:37:10] Ben Lloyd Pearson: Pretty exciting times, 'cause, you know, a lot of cool things are happening because of that.

[00:37:15] Andrew Zigler: Indeed.

[00:37:16] Ben Lloyd Pearson: All right, well, that's the Friday Deploy from LinearB. Thanks everyone for joining us this week, and we'll see you next week.

[00:37:23] Andrew Zigler: See you next time.
