The T-shaped leader, Disney can’t catch a break, and will you trust Auto mode?


By Andrew Zigler

Is OpenAI killing off its viral video generator to pivot toward the enterprise market? This week on the Friday Deploy, Andrew and Ben banter over the demise of Sora and examine Anthropic's new Auto Mode safety controls. The duo then explores a major New York Times piece that proves the conversation about the end of traditional computer programming is officially going mainstream. Finally, they cover Microsoft's attempt to win back frustrated Windows 11 users and break down the POST leadership framework to help you build a more balanced engineering team.

Show Notes

Transcript 

(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)

[00:00:00] Ben Lloyd Pearson: What do you think? Is this, is this the time that they finally start clamping down on all these underpriced AI services? Like, is it all gonna start getting expensive now? Is that where

[00:00:08] Andrew Zigler: Clamping down. Yeah, you know, I've talked on the show before about how sometimes the token cost world that we live in right now feels, uh, very temporary, almost fictional. Guests will come on the show and give me varied opinions on why they think I'm wrong, or why they think I'm right.

[00:00:24] Andrew Zigler: Why token costs will go up, or why they could go down.

[00:00:27] Ben Lloyd Pearson: I tell you that.

[00:00:28] Andrew Zigler: you know, you're not the only one as well, but I, one thing remains for certain, like with the tokens available to me right now, I try to use 'em and learn from 'em because I just feel like eventually the belt's gonna tighten. Uh, and maybe some people are starting to feel that heat now.

[00:00:42] Andrew Zigler: 'Cause I definitely think our industry, you know, as we approach the middle of the year, is really evaluating its spending.

[00:00:50] Ben Lloyd Pearson: Yeah, yeah. Use those usage session windows while you got 'em, and use all of them if you can.

[00:00:57] Andrew Zigler: So what does that mean for people that are users of AI, you [00:01:00] think?

[00:01:01] Ben Lloyd Pearson: Oh man, it's too early for me to really know the answer to that. Um, yeah, I mean, I think people will pay for it. If they're getting value out of it, they're gonna pay for it. So yeah, it doesn't matter if it costs $200 or $2,000; if you get more value out of it, then it's worth it, you know?

[00:01:17] Andrew Zigler: Indeed.

[00:01:18] Ben Lloyd Pearson: Well, anyways, welcome to your Friday Deploy.

[00:01:20] Ben Lloyd Pearson: I'm your host, Ben Lloyd Pearson.

[00:01:23] Andrew Zigler: And I am your host, Andrew Zigler.

[00:01:25] Ben Lloyd Pearson: And this week we're covering the death of Sora, Claude Code's new safety controls, the end of computer programming as we know it, and Microsoft's attempt at a redemption tour. Andrew, we gotta start off with this sad, sad news of Sora being shut down. So what's going on here?

[00:01:43] Andrew Zigler: Yeah. Speaking of belts tightening, OpenAI is shutting down its standalone Sora video app to focus on enterprise customers, they claim, as the company prepares for a potential IPO. Um, you know, that's despite Sora reaching over 1 million downloads [00:02:00] within days of its September launch. We all know how fast ChatGPT grew.

[00:02:04] Andrew Zigler: Sora grew even faster. And the decision to kill a proposed three-year deal with Disney accompanies this, 'cause Disney was looking to invest, you know, $1 billion into OpenAI; in exchange, OpenAI would have access to Disney's IP for a limited time. Uh, you know, this represents a pretty strategic pivot, I think,

[00:02:27] Andrew Zigler: from consumer-facing products back towards the enterprise and the business market. Uh, maybe OpenAI, like Anthropic, sees more sustainable revenue in enterprise sales rather than in direct-to-consumer solutions. What do you think, Ben?

[00:02:44] Ben Lloyd Pearson: It sounds to me like you're saying that selling trinkets at a loss doesn't get fixed when you do it at scale. Is that, you know, is that what we're describing here? Yeah, it's hard for me to know if this is like a cost thing, like it just cost them too much to run [00:03:00] Sora, or if it's more like they had this partnership with a company like Disney, and maybe Disney wasn't impressed by the technology and decided that the partnership wasn't gonna work. And by backing out of it, it kind of de-legitimized the product itself, like just the foundational market fit of it. But, you know, I think really what we should take away from this is that AI is really like this lumpy bubble that's forming. We talk about lumpy adoption with AI a lot. Like, there are teams that are doing really well with it and accelerating super quickly, and then there's other teams that aren't, and teams everywhere in the middle. I actually think that something very similar is happening with the bubble that's being created around AI. You know, it really does, in many circumstances, deliver a lot of value. It helps you write code faster, helps you do complex analysis and research. But with video, you know, a tool like Sora just doesn't really provide those same sorts of gains. You don't get a productivity gain or a [00:04:00] quality gain; it's just like a cool tech with some gimmicky consumer applications, I feel like. You know, and, and

[00:04:08] Andrew Zigler: Uh, yeah.

[00:04:08] Ben Lloyd Pearson: really kind of dealing with, like, I don't know, it feels like a physics problem at some point. Like, there's a complexity issue with this. Chat is just like a single string of tokens, so it's relatively affordable to generate large amounts of text with an AI model. But then if you think of an image, it's almost like a matrix of tokens, right? It becomes like a two-dimensional array. And then if you have a video, it's like a series of a bunch, like potentially millions, of those matrices in a row. So you're actually stepping up in orders of magnitude to go from text to image and then to video. I personally just think it's too expensive for consumers to use AI video generators in a useful way. Like, they're a gimmick that's fun to play with, but to actually produce something with them, the costs are just too high [00:05:00] at this point. Um, and I just really think it's a great example of how it might be cool technology, but it doesn't seem to meet society's needs right now. Like, I feel like if you're looking for one place where society is gonna resist AI adoption in particular, I think video really is it, both from just the creepiness factor of having AI-generated videos, but also the tech just isn't there for the ways that people would want to actually use it.
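The scaling Ben sketches here, a string of tokens, then a matrix of tokens, then a stack of matrices per frame, can be put in rough numbers. This back-of-envelope sketch uses purely illustrative figures (the page size, patch grid, and frame rate are assumptions, not any provider's real tokenizer or pricing numbers):

```python
# Back-of-envelope comparison of generation cost by modality.
# All constants below are illustrative assumptions, not real model parameters.

TEXT_TOKENS_PER_PAGE = 500        # roughly one page of prose
IMAGE_TOKENS = 32 * 32            # one image as a 2D grid of patch tokens
FRAMES_PER_SECOND = 24
CLIP_SECONDS = 10

image_cost = IMAGE_TOKENS                                      # one "matrix" of tokens
video_cost = IMAGE_TOKENS * FRAMES_PER_SECOND * CLIP_SECONDS   # one matrix per frame

print(f"page of text: {TEXT_TOKENS_PER_PAGE:>8,} tokens")
print(f"one image:    {image_cost:>8,} tokens")
print(f"10s clip:     {video_cost:>8,} tokens")
```

Even with these modest assumptions, one ten-second clip costs as many tokens as several hundred pages of text, which is the core of the cost argument above.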

[00:05:27] Andrew Zigler: I think all of that nuance is why Altman is killing the video generator because

[00:05:31] Ben Lloyd Pearson: Yeah.

[00:05:32] Andrew Zigler: you know, at the end of the day, the company also has too many pots on the stove and it just doesn't have enough heat to cook them all. That's just the reality of the matter. And Anthropic really becomes a real winner here. They bet on knowledge work.

[00:05:43] Andrew Zigler: At the beginning, they did not pick up distractions. They did not try to own the consumer market as aggressively, and that's been paying dividends for them. Uh, you know, literally, their adoption rate among the enterprise is way up. But I wanna pivot for a second on this story too.

[00:05:58] Andrew Zigler: You know, we talked about [00:06:00] OpenAI, but I think a lot of folks are missing one of the most interesting parts of this story, and that's Disney's new CEO having a pretty brutal first week because of all of this news. So, you know, Disney loses the $1 billion investment and partnership with OpenAI.

[00:06:15] Andrew Zigler: They're caught off guard. He found out 30 minutes after OpenAI announced it in a press conference; they were not on the same page. And then Epic Games, um, laid off, uh, you know, a thousand employees after a partnership they'd been making with Disney as well. And ABC canceled The Bachelorette because of, uh, its star being caught in a video that ruined the show's reputation.

[00:06:41] Andrew Zigler: And they made the decision to not air the season. And The Bachelorette, you know, with ABC a subsidiary of Disney, is one of their biggest shows. It's just a brutal pile-up here in the news for his first week as CEO, uh, and Altman just, you know, really added a doozy on [00:07:00] top.

[00:07:00] Andrew Zigler: So that was the interesting tidbit that I extracted from this one.

[00:07:04] Ben Lloyd Pearson: All right. Well, I think we should pivot from there to another foundation model provider that has also been doing some really cool things, one that we've already brought up here, and that's Anthropic. So what's going on with Claude Code now?

[00:07:15] Andrew Zigler: Yeah, so Claude Code just announced Auto Mode. It's a new permission system that automatically approves safe coding actions while blocking risky ones. Um, and you may recognize what it replaces as, uh, an all-or-nothing kind of behavior. Developers have come to call this YOLO mode. And if you've been using Claude, you know this as invoking Claude with the dangerously-skip-permissions flag, a flag that, uh, if you use Claude Code with any seriousness, is probably aliased in your bash.

[00:07:44] Ben Lloyd Pearson: Yeah. Everyone who's using Claude Code has this turned on.

[00:07:48] Andrew Zigler: Everybody used it. And what did this mean? It meant we were making a trade-off here: speed, uh, and autonomy over security. And for some folks that's a [00:08:00] palatable decision and a palatable exchange. But you know what the rest of the story is: this results in a lot of situations where AI does things that are unexpected.

[00:08:09] Andrew Zigler: And ultimately we've been tackling this as engineers with things like harness engineering, you know, write great rules and skills and hooks, be deterministic about how you prevent the LLM from stepping on these landmines. But Auto Mode represents an ability for Claude to understand the long-running coding tasks that are active right now and, uh, be able to make the right decision about whether or not something needs a closer look

[00:08:33] Andrew Zigler: or approval. And of course it doesn't eliminate all risk, because how could anything eliminate all risk? Uh, but it certainly helps users trust it without an all-or-nothing experience. Uh, what do you think, Ben?

[00:08:47] Ben Lloyd Pearson: Well, I'm assuming you're breathing a huge sigh of relief over this one, Andrew, 'cause I know you've had that configuration turned on for some of your agents. And I've just been sitting over here waiting for [00:09:00] you to share your own version of the "my agent deleted the production database" type of story.

[00:09:05] Ben Lloyd Pearson: Uh, so hopefully this will save us from that. But we'll see. You know, I've been saying, and maybe I sound like a broken record, but advancement in AI is a lot like other technological progress. It often feels like you take three steps forward, you take two steps backwards. But in the AI era, I feel like with this permissions and access control problem, it feels almost like we've gone ten steps backwards, because we've kind of just thrown caution to the wind with a lot of these tools. And then by their nature, AI agents can operate at a scale that is just, you know, almost incomprehensible.

[00:09:40] Ben Lloyd Pearson: So if they do crazy, destructive things, they can do it faster than any human could ever be able to do it. But, you know, with tools like Claude Code, you can sort of manage this risk by isolating it on a VPS or something like that. As this agentic way of working becomes more common, [00:10:00] um, you know, I think we're gonna see a lot more AI doing work on people's laptops, on their devices. And I do think that we're gonna see more examples of rogue AI that will break out of that environment and, you know, cause some destruction. So, you know, when I switched to Cowork, or at least started trying out Cowork and using it just to see how it worked relative to Claude Code, uh, it immediately became apparent to me that there is this severe lack of control over what Claude can access and do. So like, um, you know, it wants you to install the Chrome browser so it can go browse the web and do things on your behalf.

[00:10:42] Ben Lloyd Pearson: But then when you go to install that, there's these super scary warnings that are like, anyone can just put a prompt injection on their website and it will hijack Claude, and that hijacked Claude will be operating on your computer. And that is, I mean, that's terrifying. We should never be in that situation. Yeah, I mean, [00:11:00] with all of the companies doing stuff with AI, I feel like we're missing a foundational layer. Like, there's some sort of fundamental technology that all agents should be using to make sure that their access control is properly scoped. 'Cause I mean, we're hearing stories of outages at AWS caused by rogue agents.

[00:11:18] Ben Lloyd Pearson: Like, it's the same type of problem. So, you know, this is a real problem, and unfortunately there aren't great solutions, but it's at least positive to see that companies like Anthropic are thinking about this challenge.

[00:11:29] Andrew Zigler: Yeah. And it's funny you do say that, Ben, that, you know, I'm probably having a sigh of relief over this. It's definitely great to be moving towards, like you're saying, a more baseline layer that all LLMs or coding flows can use in a shared way that is more adaptive and smart. Let's leverage the intelligence and the long-running context of the model to make its decision making smarter.

[00:11:53] Andrew Zigler: That's what harness engineering is all about. So I'm really excited to see this development. Um, you know, I've certainly been in [00:12:00] situations before where LLMs make boo-boos while we're coding and it's hard to undo, or I wish I would've had better guardrails on the things that executed. Like, for example,

[00:12:10] Andrew Zigler: operating sometimes on a database, you know, on a dev server of course, never in production can be a little harrowing using a, using a coding agent. And I've definitely had to do some really bad surgeries before, but whenever those happen, uh, I think it's really important to use the harness to learn from itself.

[00:12:27] Andrew Zigler: Think of how you can build your own auto mode, you know; this is how people create these guardrails for themselves. So in this case, I had Claude review everything that happened, thoroughly review all of the actions it took, and it made a detailed report about the failure mode. From there, you should have Claude write a skill, create a deterministic hook, and put a check on those types of commands in the future.

[00:12:50] Andrew Zigler: Because if you didn't know this even before auto mode. You could already set, uh, per bash command types of checks and hooks on your Claude Code configuration to give [00:13:00] you closer control over things and to make sure stuff passed certain sniff tests. So, uh, you can always turn these solutions into uh, protections, but ultimately, sometimes, uh, one mistake is all it takes.

[00:13:12] Andrew Zigler: So auto mode is hopefully gonna charter a safer future for the rest of us.

[00:13:16] Ben Lloyd Pearson: And most AI will put in a surprising amount of effort to get around your guardrails,

[00:13:22] Andrew Zigler: So much effort.

[00:13:23] Ben Lloyd Pearson: ' cause it views it as a helpful workaround rather than like a malicious action that's against the wills of it, uh, against the will of its, uh, operators. So, yeah.

[00:13:34] Andrew Zigler: So what's our next one, Ben?

[00:13:35] Ben Lloyd Pearson: Yeah, I'm sure we'll see more on that story.

[00:13:37] Ben Lloyd Pearson: Up next, we got the end of computer programming as we know it. after coders. So this is an article from The New York Times Super in depth, very long article. There's a really great read, a lot of stuff to take away from this. Um, but it really just explores how AI is disrupting software development.

[00:13:53] Ben Lloyd Pearson: first among all of the knowledge work professions, and developers' roles are really shifting from [00:14:00] creating to judging on things like code quality and architectural designs. The article really highlights how AI has done a great job at taking away the drudgery associated with producing code, um, whereas when you compare it to other creative professions, AI so far has really sort of taken away the more soulful, creative parts and has left a lot more of the tedious work, like what do you do with this asset you've created after it's already been produced. It's really helped developers free up their time in a way that a lot of other knowledge professions haven't fully realized yet.

[00:14:35] Ben Lloyd Pearson: So they really are like the vanguard in terms of this transformation. But you know, with that said, there's also like non-technical people that are increasingly solving problems with code through AI assistance. You know, just sort of showing how like, not only is like code generation being democratized, like I feel like that's. Kind of, uh, obvious, but, um, you know, the role of development is now going horizontal. Like more [00:15:00] of a wider range of people are now able to be developers, uh, for lack of a better term. So, what did you think when you read this article?

[00:15:09] Andrew Zigler: That's a great call-out that people's skill sets are getting broader, just like how you have the emergence of T-shaped engineers who are able to execute on almost every element of the engineering process but deeply specialize in one, or a designer who can ship. You know, these types of personas.

[00:15:27] Andrew Zigler: They're becoming more and more mainstream because of how, um, agent capabilities are unlocking, you know, the, the outputs of these types of folks. And I think this is happening in every industry and it, we can learn a lot by studying what's happening in tech because it's so deeply disrupted and disrupted first as a knowledge working environment

[00:15:47] Andrew Zigler: on code. And I, I think it's a really great sign when topics that we cover, so feverishly on Dev Interrupted go mainstream like this, you know, this New York Times p uh, piece, it features, it's, it features Steve [00:16:00] Yegge, who we've talked about extensively on this show, um, as well as engineers in the network in the, in, in the world around Dev Interrupted.

[00:16:07] Andrew Zigler: Uh, I, I recognized several names scrolling through the article, and I'm sure you will too, if you've been paying attention and there's, A really interesting divide we've been covering on the show as well between software engineering and software development. And this article blows that wide open and talks about how engineers now spend their time developing their tastes and creating the right environment to execute that taste rather than actually writing the code manually, uh, themselves.

[00:16:37] Andrew Zigler: You know, I, I think this was a really interesting drill down. Again, the topic of going mainstream on the New York Times, and it represents really how the seismic shift is going to extend beyond tech because like you said, Ben, uh, you're getting folks that are now being able to use technology in ways they never could before.

[00:16:55] Andrew Zigler: You get T-shaped everyone, not just engineers.

[00:16:59] Ben Lloyd Pearson: Yeah. These [00:17:00] trends are coming for all of knowledge work. You know, it's really just a confluence of factors that has made software engineering the first profession that's being disrupted. And yeah, I think it's a really great article that taps into the zeitgeist of the moment with this AI transformation. And yeah, judgment over creation.

[00:17:22] Ben Lloyd Pearson: Like that's, that's really like the, the way that we, we as humans can continue to be. know, important as AI takes over our work. You know, and I also do wanna point out that I, you know, I don't think it's hopeless for people that like the craft of writing code. Um, yeah. The, the typical software engineering job is, is transitioning away from being craft focused, uh, in terms of like your ability to create something in more outcome focused. but I, I do still think that like expertise, like is a thing that humans can still like innately develop that that helps, you know, it really lends to improving AI substantially. So, you know, we [00:18:00] still need the people that, that create the genesis information, the, the, the upstream frameworks that we all build upon. those sorts of things, you know? And, and that does take a, a high degree of craft ability. So, you know, I don't think that's gonna be the common, the normal job for everyone. and, and I did really like, there was a moment in this article where someone referenced the, in the before Times phenomenon, like how, how like our perception of work is now shifting to where like, you know, tasks that, that before ai, like I either wouldn't have done it at all because it would've taken too much effort versus what it was worth. Or I can do it now in a fraction of the time. So I do it more and then I do other stuff on top of that, you know, so I, I really do like that moment. It, it kind of made me chuckle.

[00:18:43] Andrew Zigler: so what about this next one, Ben?

[00:18:46] Ben Lloyd Pearson: Yeah. Let's talk about Microsoft's redemption tour, or at least an attempt at it. We'll see if it pays off. So Microsoft has announced this seven-point plan to fix Windows 11 after what some people view as a [00:19:00] systematic degradation of its capabilities, with things like forced AI integration, privacy violations, vendor lock-in, stuff like that. And our producer, Adam, really wanted me to cover this 'cause he's felt a lot of these frustrations as a Windows user. You know, it's, uh, something that we really should be aware of, I think. Because, you know, it's not clear yet if these fixes are going to be structural or if they're going to be more cosmetic. I do think that Microsoft has really struggled in the last year in particular. You know, they were early to the game with things like Copilot and GitHub, and GitHub in particular, I think, has remained relevant in a lot of ways, but at the same time, they just don't seem to be creating the momentum around AI.

[00:19:52] Ben Lloyd Pearson: And I'm seeing a lot of, like, I, I feel like it's anti-patterns that are coming out of them, they're doing like the forced adoption. Like you can't, you [00:20:00] can't like force somebody to use AI when they would prefer to use something else. You know, I've seen them applying it to like, use cases where there's like high failure rates.

[00:20:10] Ben Lloyd Pearson: that AI is just not like very good at successfully doing consistently. and, and then they've also just kind of been focused on like building a thin layer on top of foundational models rather than being like a company that like, um. like really understands how to like, leverage these, these foundational technologies.

[00:20:29] Ben Lloyd Pearson: So, yeah. I don't know. Andrew, I feel like there, there was a lot of grievances listed in this article. I don't use Windows as much anymore these days. Um, but I'm wondering what your, your opinion is on it and I'm hoping this won't become too much of a, a hate session.

[00:20:42] Andrew Zigler: I'll try to be nice. I'm just joking. I've been a long-time Windows user for a lot of my life, actually. Um, and there's plenty of things that I can still only use on Windows, but I definitely get all of my work done either on a Mac or a Linux. And [00:21:00] there's been a confluence of decisions for why, over time, I've just migrated into that ecosystem, one over the other.

[00:21:05] Andrew Zigler: But to me, as a individual consumer. We're swimming in a whole different sea than what's the reality of the Microsoft lock-in? Because Microsoft is, you know, can be the quicksand of tech. Once you're standing in it, it's too late and you can't move because Microsoft doesn't need you to love Windows 11.

[00:21:23] Andrew Zigler: They need you to calculate that the migration cost exceeds the tolerance. And for a company that is a real budget item that gets balanced maybe every quarter or every year. And if the decision falls, no, then Microsoft stays another year. And deeper and deeper it goes. obviously creating experiences that, uh, hold your data even closer.

[00:21:43] Andrew Zigler: So Microsoft won this platform, war, war first, and then they came in and kind of stripped mind, the user experience. Because they could. That's what, how enterprise lock-in works, uh, especially when it's deep enough to absorb customer contempt, which is definitely boiling over in this article. It uses [00:22:00] a, a number of metaphors and analogies to express Microsoft relationship to its consumers.

[00:22:05] Andrew Zigler: And across these 20 years of enterprise lock-in that's been fully baked, Windows and office along the way were their own separate monopolies that had kind of separate things going on as well within it. So it's monopolies all monopolies in some cases, but ultimately, you know what the takeaway is?

[00:22:22] Andrew Zigler: When you're building a platform, you want to evaluate obvious. Your experience, but your ability to migrate and move. And I think a really interesting takeaway from this compared to things like, how folks are adopting like AI tooling into their engineering flow is right now there's like not a lock in that's happening.

[00:22:40] Andrew Zigler: People are experimenting with lots of different stuff and being portable between their ideas.

[00:22:44] Ben Lloyd Pearson: Yeah.

[00:22:46] Andrew Zigler: We keep that portability because we've learned from 20 years of experiences like Microsoft that we have to be running as fast as we can, but carrying our stuff with us in a portable way. Um, so there's definitely lots of lessons to unpack.

[00:22:59] Andrew Zigler: [00:23:00] If you are a Microsoft lover or a hater alike, you're gonna find something in this article that intrigues you. So I encourage you to give it a closer read.

[00:23:07] Ben Lloyd Pearson: Yeah, that point on portability is, uh, something I hadn't considered. But yeah, that's definitely a really key point. I mean, I've normalized a behavior where anything I'm doing should be exportable, to send over to another system. You know, nothing should ever be fully contained within or fully restricted to a single place, but

[00:23:27] Andrew Zigler: O. One thing I'll say on that is like,

[00:23:29] Ben Lloyd Pearson: yeah.

[00:23:29] Andrew Zigler: you know, Jeffrey Huntley, he taught us to capture the back pressure. Don't let someone else capture your back pressure. You capture it for yourself.

[00:23:37] Ben Lloyd Pearson: Yeah, exactly. And, you know, and to be, to give Microsoft credit, they have, they have a history of redemption arcs that have been pulled off successfully. You know, I was there when they started showing up in the open source community with, uh, I'll never forget this they had all of these like. When pigs fly a swag that they were giving out like little pigs with wings and they had designs that were all based on like flying pigs, and [00:24:00] they're showing up to these open source events and being like, Hey guys, we, we we're, we want to be a part of the club now too.

[00:24:05] Ben Lloyd Pearson: And, uh, and they, and to their credit, they, they pulled it off very well. They. Um, uh, you know, there's always, I think, gonna be some animosity in that community, but I think for the most part they, they really did smooth a lot of it over. So I'm not gonna count Microsoft out yet. There's a lot of smart people there.

[00:24:22] Ben Lloyd Pearson: They, they do have great products, you know, so maybe we'll see them come back for a, a second wind at some point.

[00:24:28] Andrew Zigler: Maybe so, and, and, and we will, we'll look to see if, uh, hopefully in post we can maybe get that micro, uh, that, uh, the Simpson's of the pig flying at the pig roast because, uh, from Burn's office. Specifically, I think that exactly describes, uh, the feeling of this. But, uh, also, two of you've been listening, you know, part of this story was rooted in our, our editor Adam, migrating from Windows to Linux, and we've each given our own recommended distro, and Ben is trying to send him into Mint.

[00:24:55] Andrew Zigler: Was it, and I'm, I'm, I wanted to suggest Ubuntu. So if you have a, [00:25:00] you know, a, a burning favorite Ubuntu distro that you think our producer Adam should try first, please let us know.

[00:25:06] Ben Lloyd Pearson: I, I feel like by saying, Ubuntu, you're, you're just like, I want to agree with Ben, but I don't quite want to recommend.

[00:25:12] Andrew Zigler: I could be admitting that I'm an Arch Linux user, but I'm not, you know?

[00:25:16] Ben Lloyd Pearson: Yeah. We're not gonna go there. We're not gonna go there. All right, cool. We got one short article to, to leave our readers with. Uh, definitely a great leave behind for just a quick read if, if you got a moment. Uh, and this one's titled Post, People, Operations, Strategy, and Technology. This is from Philip Sue, uh, and it's a model that he work uses for engineering leader. Strengths. Um, so you know the people that you have, the way they operate and collaborate with each other, the strategy that you build around them and the technologies that you use, uh, to, to support them. originally was taught by, uh, Jocelyn Goldfine, the core insight of this is that leaders excel by becoming exceptionally strong in one domain rather than being like competent across all four areas. Uh, [00:26:00] and it's meant to serve as like a self-assessment tool and a framework for building balanced leadership teams where, you know, their collective strengths cover all of those four domains. And I think it's a really great way to frame leadership. You know, it's just a nice reminder of the types of traits that leader should seek to develop and to be aware of where you or your team might have those gaps.

[00:26:20] Ben Lloyd Pearson: So if you are looking to hire or to, to fill roles within your team, like. You know, you can sort of identify what are the types of leadership skills you would hope to, to fill in with, with across your team. So, yeah, I don't have a lot to say about this 'cause it is a pretty short article and I, I just think it's a great read.

[00:26:36] Ben Lloyd Pearson: It's just a, a nice little reminder. So what did you think, Andrew?

[00:26:39] Andrew Zigler: The call-out in here that really stuck with me is that there's a temptation to find your strength and stick with it, but it calls out the importance of being balanced and seeking out a way to improve your weakest element on this, like, POST model. 'Cause right now, you know, we just talked about the types of skill sets that folks in engineering, and also in other industries, are [00:27:00] picking up.

[00:27:00] Andrew Zigler: They're becoming what we're calling T-shaped or broad generalists with a deep specialization in one or more areas. And, um, t-shaped engineers have their counterparts in leadership as well. And so you want to be reaching across and be a great generalist if, if you are excellent with technology decisions.

[00:27:17] Andrew Zigler: Challenge yourself to get closer to understanding the people problems within your organization. If you are an operational person, challenge yourself to step back and understand the strategy of the bigger picture I items that you move through every day. I think that building those muscles and turning into a T-shaped leader with a deep specialization in one of these four is gonna be really crucial for, um, you know, your career.

[00:27:40] Ben Lloyd Pearson: Awesome. Well, lots of great stories today. Uh, before we wrap up today, Andrew, what are your agents up to right now?

[00:27:46] Andrew Zigler: like I said, uh, very recently I had a, a, a boo booo where it was interacting with, uh, a database and it, in, well, my Dev environment, and it got rid of some traces that I was looking for. So,

[00:27:58] Ben Lloyd Pearson: Oh

[00:27:58] Andrew Zigler: sadly, I'm, I'm, I'm [00:28:00] recreating some things with my agents right now, but thankfully.

[00:28:03] Ben Lloyd Pearson: Context. We need

[00:28:05] Andrew Zigler: Not my historical context, not my beautiful eval data. I'm a big observability nerd, so anything that like tries to touch my observability data is sacred to me. Um, but thankfully, uh, thanks to, uh, harness engineering and skills and hooks like the ones I've described before, recovering is totally possible because I work in a way where I can take snapshots and understand things as I move.

[00:28:26] Andrew Zigler: It's all about, you know, uh, what are, uh, finding those failure modes and then engineering them away. What about your agents, Ben?

[00:28:32] Ben Lloyd Pearson: Yeah, I mean, I've had a lot of fun operating the harnessed agents that you've been producing lately. But yeah. You know, I mean, we've been dealing with this access control issue a lot. It's like, how do we, how do we feed the data into this stuff without giving it the ability to wreck our stuff? You know, as, as you've, uh, encountered a little bit of, you know, so like, yes, I want you to be able to go search Slack for useful information.

[00:28:57] Ben Lloyd Pearson: No, I do not want you to go write a [00:29:00] DM to one of my executives when I'm like, doing some research, you know, so, and those controls are, you know, it's difficult. You need, you need good harnesses, you know, and that's, yeah. And, and I'm really, it's, it's been nice to see us really starting to solve that problem, I think.

[00:29:14] Andrew Zigler: Lots of engineering problems abound, and I'm sure there'll be new ones next week when we meet back up to talk about it.

[00:29:21] Ben Lloyd Pearson: Yeah. We'll see you all next week.

[00:29:22] Andrew Zigler: Take care.
