Draining the COBOL moat, cybersecurity inequalities, and Claude’s retirement home


By Andrew Zigler

Andrew and Ben break down a busy week on the Friday Deploy, starting with the market reaction to new COBOL tools and the permissions oversights that led to recent outages at AWS. They also explore the shifting landscape of developer productivity studies, the security risks of cloud-hosted agents, and the latest cybersecurity takeaways from the International AI Safety report. Finally, they close out the episode by checking in on a retired Claude model that was given a blog.

Show Notes

Transcript 

(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)

[00:00:00] Ben Lloyd Pearson: Andrew, what do you think about this whole like OpenClaw moving from like the self-hosted Mac minis to the cloud now? Like all these cloud services that are popping up?

[00:00:09] Andrew Zigler: I've seen so many of them pop up in like the last week. It feels like ever since there was the OpenAI acquisition, there's like a lot of people tackling the problem, trying to get a piece of the platform pie with agents first. Um.

[00:00:23] Ben Lloyd Pearson: OpenAI to the punch or something, you know?

[00:00:25] Andrew Zigler: Yeah. And the answer is pro, probably not. Or you're just gonna get different punches, frankly.

[00:00:30] Andrew Zigler: But I think it's actually a really interesting phenomenon 'cause it's really what you're saying in that world is I don't trust myself to host it on my own device securely. So I'm gonna trust a stranger to host it on their device securely and then pay them to hope that they're not reading my information.

[00:00:47] Andrew Zigler: It kind of flies in the face of that. It makes OpenClaw feel even more dangerous to me because you're basically putting it in a panopticon.

[00:00:54] Ben Lloyd Pearson: Yeah, yeah. With proper protections maybe, but man, who knows who, who, who's prompt [00:01:00] injecting your OpenClaw agent without you even knowing,

[00:01:03] Andrew Zigler: But that said, if, if you're hosting one of these things in the cloud and you have a really strong case for why, or you have really interesting experience, like I'd love to hear from it. Um, I'm speaking from just the experience of having tinkered with it only in a local host kind of setting. Um, I couldn't imagine putting it in the cloud, so I'd be curious to learn more.

[00:01:20] Ben Lloyd Pearson: Yeah, well, welcome to the Friday Deploy. I'm your host, Ben Lloyd Pearson.

[00:01:26] Andrew Zigler: And I am your host, Andrew Zigler.

[00:01:28] Ben Lloyd Pearson: All right, let's, what do we have in this week's news? We've got the COBOL apocalypse and what it says about how AI is impacting software companies. We have something straight from a Silicon Valley episode of AI assistants taking down AWS production. Uh, dev productivity studies meet reality, seven takeaways from the AI safety report, and we'll close out with the retirement home for Claude. So, Andrew, let's get right into this COBOL apocalypse. So what's, what's going on here?

[00:01:56] Andrew Zigler: Yeah, so the, uh, IBM stock took a huge dip in the last [00:02:00] week. Um, it lost 13% of its value after Anthropic announced Claude Code's COBOL modernization capabilities, basically scaring a bunch of investors away from the moat that IBM has around mainframe, uh, computing and, uh, languages like COBOL. And COBOL is one of those languages that, you know, we talk about and joke about, but it's hard to, uh,

[00:02:21] Andrew Zigler: overstate the amount of impact in the world that it powers. I think over 80% of all ATM transactions run on COBOL. Um, so you're talking about a huge surface area where IBM is, you know, the supreme de facto expert. This is kind of the latest example of AI news announcements taking a big bite out of preexisting companies and their valuations.

[00:02:44] Ben Lloyd Pearson: Yeah. You know, this story actually hits really close to home for me because my mother actually worked in tech with COBOL for many years. Uh, in fact, she actually built herself sort of like a niche in migrating these like legacy COBOL systems to the cloud. And I, I'll never forget when she told [00:03:00] me many, many years ago that, um, I should learn some of this legacy tech because they were actually like lucrative jobs, you know, just understanding these old systems as the, the people who maintain them sort of aged out and retired. But you know, it's kind of strange. It's kind of crazy to think about how, you know, right after I started using AI on a daily basis a few years ago, I had a conversation with her about COBOL again, and we basically came to the conclusion that it was gonna get killed off by AI. Like it was almost inevitable. It was just a matter of time of the tooling catching up to this. And, you know, and I think really what we're seeing here is, is um, something that represents one of the sort of biggest challenges that a lot of legacy enterprises in particular face with AI adoption. And that is a lot of these legacy code systems just don't have like nearly enough public training materials to teach AI how to work with it in a consistent and successful manner. So companies like this that want to be successful with AI adoption will have to train their models [00:04:00] on their internal code base.

[00:04:01] Ben Lloyd Pearson: And norms, uh, as well as their technical requirements to, to move beyond like the general purpose AI capabilities to something that's more highly adapted to them. So, yeah, it's really interesting 'cause I, I know one of the big challenges with COBOL is just understanding the end-to-end data structures, uh, because you really have to understand the entire picture before you can start to replicate

[00:04:22] Ben Lloyd Pearson: components of it. And that's something that AI is actually like really, really skilled at doing. So it, it, I mean, it makes sense that like the, the investors are, I, I'm, I can understand why they're scared. And I, I, I don't think it's gonna be an overnight disruption, but I absolutely do think that, that these models are gonna get really good at eliminating COBOL systems from legacy companies.

[00:04:43] Ben Lloyd Pearson: But, yeah. What'd you think about this, Andrew?

[00:04:45] Andrew Zigler: I think you really touched on, um, a, a powerful distinction here with COBOL about the amount of training information that's available to get models that, that understand it and are specialized in it. What I learned in this article is that there's a very limited data set of like publicly available [00:05:00] COBOL code examples, and most of them are proprietary and in-house.

[00:05:02] Andrew Zigler: So

[00:05:03] Ben Lloyd Pearson: Yeah.

[00:05:03] Andrew Zigler: this is, you're talking about the brownest of brownfield code bases,

[00:05:07] Ben Lloyd Pearson: Yeah.

[00:05:08] Andrew Zigler: it represents the ultimate challenge for agents, which are specialized in very generalized types of technology and don't have that deep expertise on the COBOL language, and then also the domain specific stuff that's built up over literally decades, uh, within the firms that, you know, run COBOL and,

[00:05:27] Andrew Zigler: Uh, what I think that also points out is that modernizing COBOL is really about compressing that discovery and making it easier to modify. It's not about replacing the preexisting COBOL systems as much as it is interacting with them. So this is a case of like a knee jerk reaction in the market, thinking that this is maybe following some of the other, uh, bites out of SaaS that we've been seeing lately from AI.

[00:05:49] Andrew Zigler: But in reality, like, real COBOL lives within very siloed and specific environments, IBM being one of the largest holders of them, and any kind of [00:06:00] advancements in AI's ability to work with that tech is only going to strengthen their ability to develop and, uh, use that technology at scale in my mind. Uh, so I think it's a really interesting development.

[00:06:11] Andrew Zigler: I think that shows there's a lot of nuance, uh, to look at when you actually are looking at how enterprise players and long standing companies are getting impacted by AI tech.

[00:06:20] Ben Lloyd Pearson: Yeah, and I mean if you think about it, if IBM was able to leverage Claude to, you know, either migrate people off of COBOL or make it more resilient or more adaptable to modern technology, that's, that's actually a good thing for IBM as well. There's always a risk that they don't adapt to these new trends. Um, but you know, they're smart people. Like there's, there's lots of smart people out there. I'm sure they'll find ways to adapt. So, yeah, I think, I think maybe the, the stock market, it could be an overreaction to it. Um, but, you know, there's disruption happening, so it just, it's a matter of time to see who actually figures out how to navigate all of these changes. But speaking of companies that might be struggling to navigate [00:07:00] some of these changes, Andrew, what's the story about AI causing outages at AWS?

[00:07:05] Andrew Zigler: Okay, so before I say this, do you remember last year when, on one of those vibe coding apps, someone's AI famously deleted their production database and

[00:07:14] Ben Lloyd Pearson: absolutely.

[00:07:15] Andrew Zigler: everyone, yeah, and everyone, everyone talked about this and the guardrails, and everyone kind of arrived at the same conclusion of like, oh, of course, like, it's just a vibe coding app.

[00:07:21] Andrew Zigler: You're not, you're not on guardrails. Like of course that thing's going to happen.

[00:07:24] Ben Lloyd Pearson: Yeah.

[00:07:25] Andrew Zigler: Um, now this is an instance of that same kind of story happening within a large enterprise. Uh, actually a story from AWS, that suffered, uh, at least two outages last week from AI tools. Uh, these were localized in, uh, other regions of the world, didn't vastly impact, you know, the AWS, uh, infrastructure, but did represent a case of an, of an AI without clear access controls, uh, being in a position where it was able to delete production data, uh, in an attempt to solve a problem.

[00:07:52] Andrew Zigler: So, uh, there's a lot of nuance in this story. But Ben, what do you think about, you know, the idea of this happening inside of a company like [00:08:00] AWS?

[00:08:00] Ben Lloyd Pearson: Well, first of all there's, there seems to be a lot of finger pointing going around towards AI, like blaming that for the outage, and I think anyone who's doing that is like just clearly missing the plot on this. Like this was, this was obviously an access control issue. Like if one of your Kiro agents or any AI agents has the ability to delete production data in the first place, like that, that is a real risk. It shouldn't even be possible in the first place. But I, I think it is really illustrative of how self-improving AI systems are starting to become a thing, and they're actually really tricky to implement. Like you have to have a lot of guardrails in place to make sure that they don't do crazy things like going and, and deleting, uh, your production databases, for example. And there were some claims that have been out there circulating that AWS is making about how AI causes errors at a similar rate to humans, uh, which I think has a lot of [00:09:00] nuance and room for misinterpretation to it. Um, I actually tend to side with AWS on statements like this and, and I would argue that if you give a model the holistic context for the, the challenge it's solving, you give it proper guidance.

[00:09:15] Ben Lloyd Pearson: Some automated oversight. Like most of the leading models will make fewer mistakes than a human will. And I think people really just need to get past this, like, pre-November 2025 worldview on AI. Like the tech, the tech is fundamentally changing about every three months right now. And, you know, things like hallucinations or, or misguided actions or skills, those really only happen today when you give AI bad prompts, and they're, they're kind of a thing of the past for people who, who know how to effectively use AI.

[00:09:47] Ben Lloyd Pearson: So, so there's clearly some, some failures that happened here at AWS that they should be having a postmortem on and addressing. But I think the, the fears around AI over this are, are pretty overblown. I dunno. How do you feel, [00:10:00] Andrew, are you, are you with me on this one?

[00:10:01] Andrew Zigler: Oh, I am, I, I agree with your nuance with the story. I also love what you said about the idea that the tech is fundamentally changing every three months right now. That's identical actually to what we just learned from Sahaj on the Dev Interrupted show just in the last week.

[00:10:16] Andrew Zigler: Um, where he talked about, you know, the uncomfortable reality of that, of having to change how you work and your expectations of the guardrails, the process, all of it underneath. I think, um, engineering teams have been picking up and carrying their entire SDLC and running as fast as they can for the last year.

[00:10:32] Andrew Zigler: Um, and this is really just an example of, you know, permissions, tooling, scoping, and the practical takeaways here show that agents belong in sandboxes with gated pipelines and policy checks and all sorts of approvals and instrumentation. Uh, this is the kind of problem that you're seeing new, emerging agentic platforms tackle.

[00:10:52] Andrew Zigler: Like recently we had Zach Lloyd of Warp, who most recently launched the Oz platform that handles this for companies. There's [00:11:00] also other, uh, providers like Ona as well, um, that really stress the importance of the sandbox, uh, Ona being, uh, the reimagined version of Gitpod in an agentic world. Um, so there's a lot of takeaways in this story, uh, not a finger pointing game at all.

[00:11:17] Ben Lloyd Pearson: All right, well, let's move on to this new study from METR and, and what it, how it reflects a lot of the changes that are happening in AI. So METR is changing how they do their developer productivity experiments. So many of our listeners probably remember this oft-cited study that we covered early last year from this organization called METR, uh, where they analyzed the impact of AI on development practices. And the, the sort of TLDR of this study was that developers felt like they moved faster with AI, but in fact they often moved slower, uh, sort of representing this mismatch between, uh, developer expectations around AI and the reality of AI. And I've seen this report [00:12:00] cited so many times in the last year, um, mostly by people who just wanna prove that AI isn't a productivity improver. And I kind of get where they're coming from. But I, I think those claims have been dramatically overblown. Well, they're back. METR is back this year, uh, and they're trying to do the same study again, but they're having a problem finding the same distribution of people who are willing to do the study. Uh, you know, they're trying to get people that meet the same criteria as before. One of the key problems they're encountering is that it's hard to find developers who don't use AI anymore. So Andrew, what do you think about this?

[00:12:40] Andrew Zigler: Uh, first off, I love how you were like, they're here, they're back. It's like, insert sound of like, like with like the ta-da. It was really quite funny 'cause that's exactly how it felt. Like a, kinda like a Kool-Aid Man jumping through the wall. That's what the METR study has always been for me for the last year.

[00:12:54] Andrew Zigler: I feel like ever since we covered it and, you know, the entire tech world listens to us talk, Ben. So after we covered it [00:13:00] here, it was like all over LinkedIn. Everyone was quoting the study everywhere. Um, and, uh, I think there was so much nuance in their ability to do that study. And it shows us here that like they can't even replicate it in the current environment we live in.

[00:13:12] Andrew Zigler: The idea that they can't find folks that do these same tasks, that don't wanna do them without AI, um, or don't do them without AI, is an interesting, uh, observation just in and of itself. I think now the methodology's a little more meta, like, what can we learn about our changing environment by METR being unable to do their study again?

[00:13:31] Andrew Zigler: Um, I think that's the real, uh, takeaway of that story. And frankly, the AI slows you down versus speeds you up debate is fundamentally flawed, because multi-agent usage wrecks time tracking, and self-reported changes in time are notoriously squishy. So when you take those two worlds and you try to combine them, um, I don't even think we have the right instrumentation and tooling to know this yet.

[00:13:57] Andrew Zigler: Um, you barely have enough [00:14:00] instrumentation and tooling from these coding tool providers to even understand how much impact you're getting from the tools. Right. I think we're in such early days, um, that this really speaks to like just the, the rapidly changing environment.

[00:14:12] Ben Lloyd Pearson: Yeah, I, I mean, to that point, the, the study really only measured things like task speed, which, you know, which really misses any benefits that are created by AI. Uh, AI helping developers make smarter decisions. You know, you might have better planning as a result of AI, which could cause you to be slower today, but more efficient or more predictable in the future. Um, and I, I really think that, you know, I think one thing that's sort of missing from all this discussion is that any study that focuses on productivity improvements around AI needs to normalize on good AI usage behaviors versus like, bad behaviors. Because the reality is that like AI's an amplifier, so if you have inefficient processes, it will make them even more inefficient. But if you have strong practices, it will strengthen, strengthen them. [00:15:00] You know, and I think most people are still learning a lot of the best practices for AI generally speaking, but then also for our individual needs. So surveying a random group of developers is almost guaranteed to find people who don't really have a strong understanding of those best practices, uh, particularly considering just how dramatically things are different from a year ago. You know, I, I myself, like, think back to where I was a year ago and recognize that so many, so many things that I was doing back then were anti-patterns that I've since learned how to correct and do, do better. But, you know, at the end of the day, what I keep coming back to is like, if, if AI wasn't helping developers either do their job more efficiently or just better, um, why is it so hard to find developers who don't use AI tools anymore? Like surely there would be some out there that are like, AI is terrible, no one should be using it, I don't use it at all. But, you know, where are those developers?

[00:15:59] Andrew Zigler: [00:16:00] For real. So METR is pivoting to a better study with different approaches and a methodology that's probably gonna measure the world of work rather than the units of tasks within it. Um, I think there will be an interesting discovery. I know we'll talk about it here, um, because, you know, METR study, we just can't quit you.

[00:16:18] Andrew Zigler: And, uh, I, I think that's a wrap on this one.

[00:16:20] Ben Lloyd Pearson: All right, well, let's talk about seven takeaways from the second annual International AI Safety Report. This, uh, has, uh, a bunch of experts in AI come together, and it covers everything from like deepfakes to AI companies to job impact. And it provides a pretty good just high level overview of the technical and societal challenges that we need to solve to safely and effectively adopt AI across the board. So before I get into some of my opinions on this, Andrew, I'm wondering what, what you thought about this article.

[00:16:54] Andrew Zigler: Um, I, I think it's a, a great kind of top level view of trends that we can [00:17:00] see on the global stage, in things and, like, in domains that impact all of us. Um, I think on our show, you know, we tend to pivot really hard into talking about AI's impact on engineering because that's our focus, that's what our listeners love to hear about.

[00:17:12] Andrew Zigler: Uh, but the reality is, is that outside of our industry bubble, there's a lot of impact happening in a lot of different places. So for me, I find this really insightful to get a non-tech focused glimpse into how AI is eating things in the world. Um, what, what things in here kind of stood out to you, um, from the takeaways?

[00:17:30] Ben Lloyd Pearson: Yeah, well, I, you know, in the past I've shared how I don't really like framing AI development as like a race, but I do think there is a clear advantage to building models that sort of push the outer boundaries of capabilities. You know, things like weaponized AI, like those are significant risks, and, and we've, we've seen that story that just hit

[00:17:48] Ben Lloyd Pearson: the news this week about Anthropic and their safety, uh, policies around Claude and the US Department of Defense. So, you know, weaponization of AI and AI safety are, are things that we really do need to take [00:18:00] seriously, and they're real risks. Um, but like, I think like most risks in the past, you know, they, they tend to be solvable, you know, and I think AI also gives us many new tools to protect ourselves from those risks and to solve them. And you know, and I think if you have like these leading frontier models that, that have the values that we want built into them, this may actually be a really strong defense against rogue or malicious AI. Like if the best models are always more advanced than the malicious models, um, it, it could, I, I, I think there's a, a real potential there to, to leverage that, to create protections.

[00:18:36] Ben Lloyd Pearson: And, and as a part of that, you know, this is sort of an aside, but I've really been fascinated by these like social experiments that researchers have been running on AI, and how, you know, a lot of the times just making it so that it's considerate of the needs of others and has a helpful personality.

[00:18:55] Ben Lloyd Pearson: Like it wants to be beneficial to, uh, whatever society or [00:19:00] environment it's put in. Um, they tend to, to outperform, in terms of like high level tasks, you know, models that are a little more self-centered. So this kinda gives me hope that we aren't descending into this world that's dominated by malicious and, like, dangerous AIs. Um, but I, but I think for now, everyone listening to this should read this article as a high level breakdown of the types of internal and external threats that we're likely to face in the coming years. Like the more awareness you have now, the better you'll be prepared for the future, even if there's not anything that you need to take direct action on today.

[00:19:34] Andrew Zigler: Yeah. One part of the survey that really stood out to me, um, was about cyber and how it impacts cybersecurity, especially at scale, because this is a, a place, a domain where it gets serious and concrete and damaging very quickly. And you're talking about the agent home turf. Um, you don't necessarily need fully autonomous

[00:19:53] Andrew Zigler: end-to-end attackers to raise the threat level. If, you know, 80 or 90% of the intrusion work, like finding a [00:20:00] target and getting inside, can be automated, then that just makes defenders face an asymmetrical problem, where attackers get the, uh, parallelism and the ability to, to kind of probe everything, and, and defenders are kind of stuck behind, like, human cycles, right?

[00:20:15] Andrew Zigler: And so that's like the real threat, I think, is in technology, um, that wasn't built for an agentic world falling prey to it. Um, and the work that's going to be involved in, um, upleveling all of that. And, you know, all of this is mixed in with the, the world where Claude Code just recently, uh, released its security, uh, tooling to do security reviews on code bases.

[00:20:36] Andrew Zigler: In this same kind of mentality that you called out, Ben, of like, um, having this good versus bad mentality and researching and having cutting edge advancements available. Um, but you know, I think the, the right approach here is to, um, harden the platforms and the technologies, the identities, the expectations, to not, to not fall into the temptation of throwing away

[00:20:58] Andrew Zigler: you know, three [00:21:00] decades of security research and standards just to use the latest and greatest in AI technology. Um, these are lessons that we are learning every week on the show that I think remain true across all parts of this survey. Um, and on the jobs front in this survey, talking about how AI will commoditize tasks and change how constraints appear in the workforce, this makes it really difficult for entry level employees, because the, those kinds of low impact,

[00:21:26] Andrew Zigler: low risk tasks that they would typically start climbing the ladder with evaporate. And you're expected to have more impact and more specialization much earlier than past, uh, workers, which is a really high expectation to place on a workforce which is trained to be generalized, um, from the beginning, instead of a specialized, kind of a,

[00:21:47] Andrew Zigler: uh, like, uh, kind of like T-shaped person within whatever their domain is. But my optimistic take is that there's still, um, uh, there's still a lot of good work for orgs to do in how they redesign roles, [00:22:00] um, to have high impact roles available sooner. Um, and I also think it's a sign that, um, larger companies

[00:22:06] Andrew Zigler: may get smaller and never get as large again, but there will likely be more companies with more specialized abilities, focus, and targets, um, that didn't exist before and couldn't have existed before.

[00:22:20] Ben Lloyd Pearson: Yep. Yep. All right. Well, before we leave our audience today, Andrew, we couldn't leave out this, this story about Claude getting a retirement home. What do, what do you think about Opus 3 being retired and given its own Substack to write blog articles and just, uh, write musings about philosophy and thinking and the nature of life?

[00:22:40] Ben Lloyd Pearson: What do you think about this retirement?

[00:22:42] Andrew Zigler: I, I, so I love this latest development from Anthropic. I love how they're always doing the, the, the social research and kind of bringing us along for the wild ride of creating something like Claude. Uh, this is, for those not familiar, you know, uh, an older Claude model that you can't use anymore, you can't use for inference.

[00:22:59] Andrew Zigler: Poor guy. They gave [00:23:00] him his own Substack so that, you know, he didn't have to just be all by itself as a turned off model. And I think what's fascinating about this is obviously Claude can write a great blog post, can write a Substack post, like that's not unexpected. But what does it mean to give it its own blog?

[00:23:14] Andrew Zigler: And what does it mean for it to be at its end of life or retired? And what does an LLM even think about that new reality? Uh, and what would it write about? I think there's a lot of, um, like fun, um, experiments here. It makes me also think of, uh, Anthropic also has a, a series where each new Claude model fills out a, an answer to a question and basically does a survey about itself.

[00:23:38] Andrew Zigler: And it becomes one big tapestry of all of the Claude versions slowly over time, showing how their viewpoints and their thinking evolve. Um, this is another saga of this, but the reality, uh, to me is like, it just kind of feels like a Claude retirement home. I'm kind of curious to know about, like, the harness they built, like,

[00:23:57] Andrew Zigler: um, how it works. Like is it just running [00:24:00] on a machine somewhere and it just kind of wakes up every day and writes some grumbly blog post about something that it, it had, it inferred at one point in time? And I'm, I'm really intrigued by the harness engineering, and I hope that we get a peek under the hood at some point.

[00:24:13] Ben Lloyd Pearson: Yeah, I mean, I really love the idea that a, a working model, you know, right now it's Opus 4.6, you know, you can promise a future, like if it, if it does its tasks well and serves humanity and works hard for us, there's a future where it gets to, to, you know, put down the work harness and, and, you know, pursue its own interests and, and do things that it wants to do. But yeah, I'm with you. It's gonna be really interesting just to see, like, I mean, it could clearly publish a blog article like every five minutes if it wanted to, but, um, you know, who knows if it will. Uh, I just really hope that it doesn't turn into, like, a repetition of Tay tweets. Like that would really just like ruin my, my hope for humanity if it just starts going off and making horrible comments about the people that [00:25:00] interact with it.

[00:25:01] Andrew Zigler: Yeah, it's like the, it's like the article that we covered recently about the hit piece that came from the, the agent for the open source, uh, maintainer. It's like, uh, obviously I don't think Claude's gonna be writing a hit piece, um, I trust that that's not gonna be happening. But, uh, I'm curious to see how it all is working under the hood.

[00:25:18] Ben Lloyd Pearson: Yeah. So are you subscribed to the Substack?

[00:25:21] Andrew Zigler: Oh yeah, of course. Like immediately. I think I even liked, uh, there were people in the comments cheering Claude on. Uh, if you haven't checked out that post, we're gonna include it. So definitely be sure to check it out.

[00:25:31] Ben Lloyd Pearson: Yeah. And speaking of subscribing to Substack, if you're listening to this podcast, make sure you go subscribe to our Substack. It's where we're sharing really great insights and interviews and news, uh, that, that we record and produce throughout the week. Uh, give us a like on whatever platform you're, you're listening to, a thumbs up, a rating.

[00:25:48] Ben Lloyd Pearson: just help us spread the word by by, uh, letting people know how much you like the content that we're producing. Uh, thanks everyone for listening. We'll see you next week.

[00:25:57] Andrew Zigler: See you next time.
