
From Kubernetes to AI maximalism

By Craig McLuckie

When you co-create Kubernetes, you earn the right to have strong opinions on the next platform shift. This week, Ben sits down with Craig McLuckie, Co-founder & CEO of Stacklok, who is advocating for a shift in leadership mindset. He argues we need to move from asking if we can use AI to demanding to know why we can’t. Listen to hear why he believes an "AI maximalist" philosophy is the only way to survive the next cycle.

Show Notes

Transcript 

(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)

[00:00:00] Ben Lloyd Pearson: My guest today is Craig McLuckie. He is the CEO and co-founder of Stacklok and co-creator of Kubernetes. Craig will be sharing his thoughts on how engineering leaders can navigate the shift to an AI-first world. He's gonna offer some practical advice on how to rethink where value is created, adapt team structures, and adjust processes to integrate generative AI for maximum efficiency.

[00:00:24] Ben Lloyd Pearson: Craig, welcome to the show today. Yeah, and I just wanna point out, like, I got my career started around the time that Kubernetes was really starting to take off. So it's a real pleasure to be in the same room with somebody who was pivotal in such a foundational technology.

[00:00:42] Craig McLuckie: journey. It also ages me a little bit if that's when you

[00:00:44] Ben Lloyd Pearson: Yeah. Yeah. No,

[00:00:45] Craig McLuckie: career going.

[00:00:46] Ben Lloyd Pearson: Yeah. No, but awesome. So let's just jump into it. So, you know, one of the reasons we wanted to bring you in here to talk today is that you've been talking about this concept of an AI maximalist philosophy. So [00:01:00] let's talk about that a little bit, and how that sort of differs from the more gradual or conservative approach to, uh, AI adoption.

[00:01:08] Ben Lloyd Pearson: So let's, let's start there. So what, what is this AI maximalist philosophy?

[00:01:11] Craig McLuckie: AI maximalism. Well, I think, you know, for me it's really about shifting the mindset of a team. Um, you know, you'll encounter a lot of organizations that'll ask, you know, can we use AI to improve this? And I think, you know, changing the narrative is like, you know, why can't we use AI to do this?

[00:01:29] Craig McLuckie: You know, sort of, you know, really pushing the, the envelope. And so for me, um, the way I describe the intent behind AI maximalism is: we're living in a world where the cost economics of doing business is changing. We really need to find ways to kind of push the envelope and, you know, jump into a world where we start to challenge a lot of our operating assumptions around what works well and what doesn't work well.

[00:01:54] Craig McLuckie: Uh, you know, how value is being created in organizations. And so, yeah, the catchphrase I use for my team [00:02:00] is, like, let's embrace an AI maximalist approach. And it's really about inverting the question from, you know, can we do this with AI, to, why can't we do this with AI? Show that it cannot be done with AI first.

[00:02:10] Ben Lloyd Pearson: Yeah. You know, as I personally have been adopting AI, I've found a lot of situations where, you know, early on, like, hallucinations were a big problem, and they still are today, but I feel like, you know, as context windows get bigger and bigger and as the technology improves, like, that becomes less of an issue.

[00:02:29] Ben Lloyd Pearson: But particularly early on when I would encounter an issue where I was trying to solve something with AI and it would fail. Um, taking that sort of approach of like, why can't AI do this? And then actually just asking my model, like whether it's ChatGPT or whatever, like, why are you struggling with this?

[00:02:44] Ben Lloyd Pearson: And like, help me help you unblock this issue. Then suddenly you're getting to more of a productive state where you're actually, like, starting to unpack the challenge, and you can almost, like, piece by piece understand the problem of, like, why [00:03:00] is AI struggling with each individual component of this?

[00:03:03] Ben Lloyd Pearson: And then just solve it piece by piece, you know?

[00:03:05] Craig McLuckie: I think that's really essential. You know, one of the key learnings I've had has been, you know, I've been in distributed systems for a long time. You know, the first project I worked on was, uh, clustering for Windows NT 3.51. If you wanna, like, really date me, that kind of tells the story of how long I've been working in distributed systems.

[00:03:25] Craig McLuckie: And one of the observations that, you know, I've made about myself is that, yeah, as I started to approach this technology, I thought I understood it. I thought I was clever. I thought I was smart, right? It's like, well, I've built a few things, and some of them have been okay. Um, you know, it's just another distributed system.

[00:03:42] Craig McLuckie: It's just another technology asset. But it's not. It's a stochastic system. It's a probabilistic system. The way that it works is, it's a natural source of entropy in your environment that enables you to unlock new capabilities. But it comes with these very specific [00:04:00] limitations. And, you know, where I had to start trying to invert the way that I was thinking was, I just didn't intuitively understand a lot of the properties of these systems when I first encountered them. I remember having this argument with one of my, um, you know, kind of principal researchers about, you know, the use of synthetics in, uh, in training and refinement.

[00:04:26] Craig McLuckie: And I couldn't get my head around it. You know, I'm like, I'm a kind of information theory geek, and from a simple information theory perspective, you know, you add entropy, you start to reduce these things down, the results should get worse, not better. But that's not how it works. And so, you know, I think one of the kind of key observations that I've certainly seen for myself, and I've seen a lot of others kind of embrace, is that the value creation

[00:04:49] Craig McLuckie: narrative around the use of generative systems is, is somewhat inverted. You know, people tend to approach it with a platform engineering mindset. And that, that idea that like, well, I, I understand the, the system [00:05:00] conceptually. I'm gonna need a vector database, I'm gonna need this, I'm gonna need that. I'm gonna plug them together.

[00:05:03] Craig McLuckie: I'll get it basically working, and then I'll just, you know, tune and refine the system until it produces the results that I want, which is how we build things like Kubernetes.

[00:05:10] Ben Lloyd Pearson: Yeah.

[00:05:11] Craig McLuckie: That just doesn't work. You gotta kind of invert it. You gotta figure out what works first, why it works, and then you gotta figure out how you can, you know, kind of improve it over time.

[00:05:23] Craig McLuckie: And it does, it does involve this inversion of thought, which I think is very important.

[00:05:27] Ben Lloyd Pearson: Yeah, and, and you mentioned entropy and

[00:05:34] Ben Lloyd Pearson: hallucinations. Both are kind of viewed as, I mean, hallucination is primarily viewed as a failure of an AI system, when it, like, hallucinates, or that probabilistic system, like, goes down a path that is unexpected, or is viewed as being wrong, or a failure. But that's actually the way it's designed, fundamentally.

[00:05:52] Ben Lloyd Pearson: Like, it's supposed to behave that way, because there are many times that it does exactly what you want it to do [00:06:00] because that's what it was intended to do.

[00:06:01] Craig McLuckie: You know, there's papers being published these days that frame

[00:06:06] Craig McLuckie: hallucination as effectively an artifact of the reward system during training. You're rewarded for a result; you're not rewarded to say, I dunno. So of course you're gonna do that. But I mean, at the roots of it, the core operating sort of principle here is that you've basically taken a lot of information.

[00:06:23] Craig McLuckie: You've extracted the semantic meaning of that information and created a set of embeddings. And then you've created a set of attention heads that are, you know, uh, you know, receiving a certain amount of context and then working to collapse down a probability field, one token at a time to produce a result.

[00:06:38] Craig McLuckie: It's stochastic, it's probabilistic. Um, you know, things are gonna change. And, you know, attention is everything. You know, like the Google paper, "Attention Is All You Need." It's great, but it's also finite, you know. And, like, at some point, you know, self-attention mechanisms do collapse under their own weight.

[00:06:53] Craig McLuckie: At some point, things will start to hallucinate. And you won't know when until you try, and you won't know [00:07:00] how to kind of maintain a system until you do. And I think that's why you really have to jump in with both feet and start to challenge your own assumptions around, um, the use of these technologies.
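For readers who want to see the mechanic Craig is describing, the step of "collapsing down a probability field, one token at a time" can be sketched at toy scale. This is purely illustrative, not from the conversation: the function names and logit values are invented, and a real model computes its logits with attention layers rather than receiving them as a list.

```python
import math
import random

def softmax(logits):
    # Turn raw scores into a probability distribution over tokens.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(logits, temperature=1.0, rng=random):
    # "Collapse the probability field" to a single token: scale the
    # logits by temperature, then sample from the distribution.
    probs = softmax([x / temperature for x in logits])
    r = rng.random()
    cum = 0.0
    for token_id, p in enumerate(probs):
        cum += p
        if r < cum:
            return token_id
    return len(probs) - 1

# Low temperature makes the choice nearly deterministic; higher
# temperature flattens the distribution, adding entropy to the output.
print(sample_next_token([10.0, 0.0, 0.0], temperature=0.01))  # → 0
```

Generation is just this sampling step in a loop, feeding each chosen token back into the context, which is why the output is stochastic rather than deterministic.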

[00:07:12] Ben Lloyd Pearson: One of the problems I think a lot of people face is that, uh, you know, they apply this probabilistic system to what is really more of a deterministic problem set, or deterministic problem area. But also, you know, I think a lot of people look for deterministic ways to sort of constrain and control these systems.

[00:07:32] Ben Lloyd Pearson: So do you feel, like, a push and pull between, um, the probabilistic nature of this, along with, like, a need for more deterministic control systems? Like, does that play into your, like, maximalist philosophy in some way?

[00:07:47] Craig McLuckie: I mean, I think it's important to recognize what it is and what it isn't, right? And, like, you know, it's interesting. You're not gonna know what it is until you try it. So I'll give you, uh, two kind of examples of how we [00:08:00] embraced an AI maximalist view as an organization, one that succeeded and one that failed spectacularly, right?

[00:08:05] Craig McLuckie: Just to give you a sense of it, right. So, um, the thing that succeeded spectacularly for us was building our own knowledge management server, right? So basically what we did was we built a server that would go and capture documentation from Google Drive, would go capture, um, content from GitHub, Discord, all of these things. You know, we generated, um, a system that would go process that, you know, create a set of embeddings, put a semantic engine in front of it. You could query it, and you could ask it things using, you know, any of these existing systems, and it would come up with very lucid answers.

[00:08:36] Craig McLuckie: And, you know, conventional wisdom would say, like, hey, I'm an engineering organization, that's a tool I can go buy. Why would I build that tool? Well, the answer is, like, it shocked me. What we built took about two weeks, and it's shockingly useful. It's just changed the way that we work. And we would never have known that unless we'd actually gone and tried to build it.
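The shape of that system (ingest documents from various sources, embed them, answer queries by semantic similarity) can be sketched in miniature. This is a hypothetical illustration, not Stacklok's implementation: the `embed` function below is a crude bag-of-words stand-in for a real embedding model, and the source names are invented.

```python
import math
from collections import Counter

def embed(text):
    # Stand-in for a real embedding model: a sparse bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class KnowledgeIndex:
    """Minimal semantic index: ingest documents, retrieve best matches."""
    def __init__(self):
        self.docs = []  # list of (source, text, embedding)

    def ingest(self, source, text):
        self.docs.append((source, text, embed(text)))

    def query(self, question, top_k=1):
        q = embed(question)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[2]), reverse=True)
        return [(src, txt) for src, txt, _ in ranked[:top_k]]

idx = KnowledgeIndex()
idx.ingest("drive/onboarding.md", "How to set up your dev environment and cluster access")
idx.ingest("discord/#support", "Troubleshooting failed CI runs and flaky tests")
print(idx.query("dev environment setup")[0][0])  # → drive/onboarding.md
```

A production version would swap in a real embedding model and a vector store, and put an LLM in front of the retrieved passages to produce the "lucid answers" Craig describes, but the retrieval core is this small.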

[00:08:55] Craig McLuckie: And it was, uh, it was just a fantastic success story for our own, you know, [00:09:00] kind of self-actualization on the AI journey. It's not something we plan to commercialize, but it's just, wow, it's a really cool system. And then we tried to do the same thing with, um, you know, Prometheus data. We're running Kubernetes clusters, and we're starting to ask questions like, hey, you know.

[00:09:14] Craig McLuckie: A pod goes into CrashLoopBackOff. Uh, can we use these models to do some initial RCAs, so that our SRE,

[00:09:20] Ben Lloyd Pearson: Uh, yeah.

[00:09:20] Craig McLuckie: instead of our SRE getting the page, you know, gets some kind of RCA that reduces, like, you know, TTR, time to recovery, for these systems. And the answer was no.

[00:09:31] Craig McLuckie: Actually, um, you know, we started playing with it. We tried to build these systems. We got some data, we generated a whole bunch of synthetic data. Uh, we ran it. And what we discovered is that, you know, when we started just presenting raw time series data to these things, they would just collapse the context, the self-attention mechanisms.

[00:09:50] Craig McLuckie: And we were getting far, far better results by just performing basic Bayesian analysis, looking for correlation between Prometheus streams, than we would ever get out of the transformer. [00:10:00] And, you know, not every organization needs to know that, but, like, until you actually spend a couple weeks giving it a go, you just don't know.
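The statistical baseline Craig is pointing at can be as simple as a correlation coefficient computed across pairs of metric streams. A minimal sketch using plain Pearson correlation (a simpler stand-in for the fuller Bayesian analysis he mentions), with invented sample values in place of real Prometheus data:

```python
import math

def pearson(xs, ys):
    # Correlation coefficient between two equal-length metric streams.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

# Invented samples: container memory usage alongside pod restart counts.
memory_mb = [210.0, 240.0, 300.0, 380.0, 470.0, 560.0]
restarts = [0.0, 0.0, 1.0, 2.0, 4.0, 6.0]
print(round(pearson(memory_mb, restarts), 2))  # → 0.99
```

Scanning every pair of streams this way and surfacing the strongest correlations around the time of an incident is cheap and deterministic, which is the contrast Craig is drawing with feeding raw time series into a transformer.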

[00:10:08] Ben Lloyd Pearson: Yeah, root cause analysis has been such a wild one with AI. Because, you know, I have seen a lot of people give you the advice of, like, just paste your stack trace into AI and it will tell you the answer. And it's like, sometimes it does, just, like, right off the bat.

[00:10:23] Ben Lloyd Pearson: And if it happens the first time, you might think, wow, this is amazing, and maybe we'll do this all the time. And then the second time you do it, it's like, oh, your Kubernetes cluster has a memory leak, so just allocate more memory and it will be solved. Right. Yeah. Like, no, it

[00:10:36] Craig McLuckie: And like no, it's just, it's gonna crash in 15 minutes.

[00:10:39] Ben Lloyd Pearson: yeah, yeah. Well, instead of 15 minutes, it'll be 10 minutes or

[00:10:43] Craig McLuckie: exactly. So like, you know, it's, um, it's funny how these things work.

[00:10:46] Ben Lloyd Pearson: Yeah. Yeah. So, um, you know, something that you've mentioned in the past is, you know, how some of this is impacting, like, team roles, team structures, that type of stuff. So, [00:11:00] um, what are some of the most, like, surprising or counterintuitive changes that you've seen to

[00:11:07] Ben Lloyd Pearson: the roles or the structures of teams, particularly among teams that have successfully integrated AI? Like, how is it impacting a team?

[00:11:15] Craig McLuckie: There's a couple of things here that I'll draw out for folks. So, even in an AI maximalist world, where we're doing everything we can, we're using every tool, every trick we can think of to kind of drive our own productivity, you know, when you look at the cost associated with producing great code, we've seen perhaps a 20, 30% reduction in the cost of producing

[00:11:41] Craig McLuckie: great code. And this is, yeah, we're writing the unit tests using these technologies. We're doing everything we can to get there. Um, turns out the cost to produce bad code is infinitely cheaper, right? Because I can produce bad code, like, overnight. I can produce bad code, uh, you know, using [00:12:00] Cursor, uh, to, you know, solve anything.

[00:12:02] Craig McLuckie: And so I think one of the key things is, you know, it's definitely changing the narrative, right? The distinction between roles, right, and the work associated with a specific role. There's nothing stopping a designer, instead of producing Figma files, producing a scaffolded prototype that you can click through, right?

[00:12:22] Craig McLuckie: The class of work product, the way people start to self-identify, changes. But then there's also some invariants. Like, you know, bad code takes work to become good code. Um, you can certainly use these tools to sort of shorten that path. But the ability to iterate and produce disposable code comes with the need to dispose of disposable code, because disposable code can be free, and if you choose to hold onto it, it's gonna cost you.

[00:12:58] Ben Lloyd Pearson: Yeah.

[00:12:59] Craig McLuckie: And, [00:13:00] um, you know, there's obviously, you know, tensions that emerge within an organization, where there's this sort of temptation to vibe code.

[00:13:08] Craig McLuckie: Um, you know, there's the sort of natural tension where someone's like, what the hell? You just vibe coded a 1,500-line PR. Like, why am I reviewing this for you? This makes no sense. So there's definitely some kind of cultural, you know, reinforcement necessary: you produce the code, you own the work artifact.

[00:13:26] Craig McLuckie: And so definitely the way that people work, and sort of, you know, radically improving that fast turnaround loop, and changing the way that we think about disposable code as an asset for that early iteration cycle, and then the rigors associated with actually disposing of the code that's been produced, is one thing.

[00:13:43] Craig McLuckie: The second thing I'd say is kind of back to that knowledge server example. These systems are fantastic at synthesizing and summarizing data. And, like, you may not actually get, you know, sort of perfect results every time. They'll occasionally miss a document.

[00:13:54] Craig McLuckie: They may, you know, to your point, there's occasional hallucination, if they get [00:14:00] into a situation where that sort of self-attention system becomes a little bit overwhelmed, or there's just not enough, uh, parameterized data to render a great result. But bringing this into the work so that teams don't have to context shift, don't have to switch to Slack and interrupt another engineer to get an answer to a question, is fantastic. And being able to flatten the organization significantly, because you don't have to have managers as a primary arbiter of the flow of information, and information summary kind of up the management chart. Because this information's now democratized; it's immediately accessible.

[00:14:37] Craig McLuckie: I don't need to go to an engineering manager to know what an engineer is doing. I can just ask the knowledge server, you know, like, hey, what did so-and-so do last week? And it'll give me a pretty decent summary. It won't be perfect. But if I really care, I can go look at their, you know, their PRs. I can go and actually ask a manager to get some more details.

[00:14:52] Craig McLuckie: Um, but it's just, it's just a, it's just creating this running downhill experience from an engineering management perspective. Because a lot of things that were [00:15:00] hard are just suddenly free.

[00:15:02] Ben Lloyd Pearson: Yeah, and I love that in particular, because so much focus when people talk about AI, even still today, which still kind of surprises me, is on generating code, you know. 'Cause that second half of your answer there is very distinctly not about generating code, right? And that's potentially, you know, a big impact that can improve a developer's life without actually touching anything related to writing code.

[00:15:29] Ben Lloyd Pearson: Right. You know, and personally, that is a big area that I've seen, you know, like, knowledge and skills acquisition, you know, as, like, just a thing that we all have to do as knowledge workers. Like, that has been one of the biggest, I think often overlooked, things that AI does that really helps a lot of people.

[00:15:47] Ben Lloyd Pearson: Yeah. Um, but I wanna touch on the concept of disposable code, 'cause you're not the first person that has discussed that concept on our show. Andrew and I, uh, you know, the hosts here at Dev Interrupted, we have talked about this concept quite a [00:16:00] bit in the past. We actually have referred to a lot of what we do internally, uh, both at Dev Interrupted and LinearB, as disposable code, because we are in an era now where, you know, a marketing team can build their own internal app

[00:16:14] Ben Lloyd Pearson: in an afternoon, that doesn't have to serve any purpose outside of their own team, or even an individual moment, because now you have an AI that can just build it for you. So I'm really curious if you have any sort of, like, success stories that you can share of where you've seen someone take a more, like, disposable approach to code.

[00:16:35] Ben Lloyd Pearson: Um, and even, like, specifically, I really like, you know, how you're saying it's one thing to generate disposable code. It's another thing to know when it's time to get rid of that code, and to have, like, a process for efficiently and, you know, effectively getting rid of that code when its time has come to an end.

[00:16:53] Craig McLuckie: Yeah, I mean, I think for us, you know, probably the biggest success story around disposable code [00:17:00] was, you know, having an engineer vibe code a system. Like, having an idea on a Friday, vibe coding over the weekend, showing up with a POC that we could look at and, you know, get a real sort of acute sense of, and basically say, like, yeah, you know, that's actually, that's a really good idea.

[00:17:19] Craig McLuckie: Look, look, I'll give you an example of this from my, my previous life, right? Like, um, you know, when we were getting Kubernetes going, the whole thing that caused Kubernetes to click in my head was, there's this guy, Brendan Burns, he's one of the most creative, uh, amazing engineers I've ever worked with.

[00:17:34] Craig McLuckie: And, like, I was having this conversation with him, and, like, we were playing with ideas in this area, and I was pretty keen on Docker as a, as a container framework, you know, for, like, you know, packaging software. And I described this thing to him, and he went off and coded it. And, like, he's super fast.

[00:17:55] Craig McLuckie: He just, he just jammed it out there and came right back.[00:18:00]

[00:18:03] Craig McLuckie: I was like, holy shit, you just built a personal Borg cell. Like, it was just like, you know, I was like, this is it. Like, this is the thing, right? And, you know, it would take someone like Brendan Burns to do that. Like, he was the kind of guy who's just this creative genius. He would just produce things. Like, you could walk around with this guy, and, like, I swear, if a VC just followed him around for two weeks, there's 10 fundable ideas that would just drop outta his mouth.

[00:18:27] Craig McLuckie: Right? Like, he's just that kind of guy. But his ability to execute quickly and produce prototypes is a superpower. And not everyone had it. Not everyone has that power. And, like, I've seen this in my own organization time and time again, where, you know, people come up with ideas. You know, like, we built a system we call ToolHive, which is, uh, something we are, like, all in on as a company.

[00:18:48] Craig McLuckie: And, you know, ToolHive didn't spring into life as a fully formed concept. It was just a great engineer on my team, you know, having a weekend fling vibe coding something that we looked [00:19:00] at, and it was great. And, you know, we've also produced probably 20 other things just like it, which didn't sort of stick the landing.

[00:19:08] Craig McLuckie: And so I think, you know, that sort of early ideation, rapid ideation, has been, um, wonderful as an enabler for the organization. I've also seen, you know, having a designer go off and vibe code something makes it much more real. Like, the ability to actually, you know, touch it and feel it and reason about it is fantastic.

[00:19:27] Craig McLuckie: Um, uh, you know, from the marketing side of the house, like, for a lot of the content we create, you know, we'll give the devs a week to go off and just code some crazy ideas together, uh, to show how, you know, you can use the system to create, you know, outcomes. And some of the greatest ideas aren't coming out of engineering.

[00:19:45] Craig McLuckie: You know, like, actual working prototypes aren't being generated by the developers. They're being generated by other members of the organization. So there's just so many examples of bringing creative energy into the organization that's provable through an artifact you can see, touch, [00:20:00] work with.

[00:20:01] Craig McLuckie: And then, if it's great, invest in it and turn it into great code. Or, if it's not, just throw it away.

[00:20:06] Ben Lloyd Pearson: Yeah. So do you feel, do you sense where the potential is?

[00:20:18] Ben Lloyd Pearson: Do you feel like the real potential is like non-engineering folks coming in, getting their hands on AI and being like, look at this prototype that I built. I think like maybe it's a marketer, maybe it's a sales rep. They're like, I built this thing, I think I can market it or sell it. Um, you know, here's the prototype.

[00:20:36] Ben Lloyd Pearson: I need an engineer and a product person to see if maybe we can turn this into reality. Or is it the engineers doing kind of the reverse of that? Or is it a mixture of both? Like what do you

[00:20:47] Craig McLuckie: You know, like, I mean, this is an age-old story, right? Like, um, you know, shadow IT has always been a thing that's existed, uh, you know, since time

[00:20:57] Ben Lloyd Pearson: I, I used to manage shadow IT, so...

[00:20:59] Craig McLuckie: Right? You [00:21:00] know, like, I remember, like, if you've ever run an IT organization, you know, you'll eventually have this moment where the business comes to you after they went ahead.

[00:21:08] Craig McLuckie: Like, they just, they weren't happy with you as an IT, you know, kind of provider. And so they secured some budget, and then they hired some consultants, and they went off and built this crazy janky thing in two weeks that they start using, right? It gets to a point where, you know, eventually you get called in, and they're like, yeah, we, um, we're using this now.

[00:21:27] Craig McLuckie: What are you using it for? Oh, we're using it to, you know, do our X, Y, Z, and it's not working anymore. And then you're like, well, why is this my problem? And it's like, well, you're the IT person. And then you look at it, and you're like, holy crap, what were you thinking?

[00:21:43] Craig McLuckie: Like, this is never gonna scale past a certain point. You've done this, you know, kind of crazy thing, and now you're left living with this thing that someone else owns. So I think there is this very real, you know, problem associated with, you know, kind of weaponized shadow IT, where you're gonna start ending up with a lot of systems that are gonna be [00:22:00] showing up with production implications, that are gonna become a natural part of a team's lifecycle, because they can simply build them and they dunno any better. Um, so there's definitely, um, risks to an IT organization from that perspective. And it does beg the question, you know, like, if the team that you're serving can produce this system in a certain amount of time.

[00:22:21] Craig McLuckie: If you are not able to kind of embrace a certain amount of agility where you're serving their needs, they will self-service. They will ultimately, absolutely do that. We may get to a point where these technologies get good enough that code is more like an intermediate language than actual code. We may well get to the point where, you know, they're actually producing sufficiently robust capabilities that, you know, things will work well.

[00:22:46] Craig McLuckie: Um, but the reality is they're not there yet. Like we, we just aren't there yet. You know, the, the, the reality is that we don't have reasoning systems. We have systems that are emulating reasoning through planning loops and iterations and other pieces, you [00:23:00] know. Context is everything, but it's finite.

[00:23:02] Craig McLuckie: Eventually the self-attention mechanisms collapse, things become too complex, and then you're left, you know, kind of holding the bag for, uh, for the team.

[00:23:10] Ben Lloyd Pearson: Yeah, yeah. You know, there's a lot of hype around, like, artificial general intelligence. I think really what we have now, and maybe even for the foreseeable future, is simulated intelligence. It's, like, something that looks exactly like intelligence most of the

[00:23:23] Craig McLuckie: Yeah.

[00:23:24] Ben Lloyd Pearson: time, and then suddenly it doesn't.

[00:23:25] Craig McLuckie: I think, you know, I'll probably get yelled at by AI purists for describing it this way, but I keep coming back to, um, this book by Kahneman, Thinking, Fast and Slow, right? So Kahneman describes these two systems in the brain. The first system is, you know, system one, which is, like, the fast thinking system.

[00:23:42] Ben Lloyd Pearson: Yeah.

[00:23:43] Craig McLuckie: And then system two is the sort of reasoning system. And you kind of need both, right? Like, you need to be able to, you know, process visual information: that's a chair.

[00:23:50] Ben Lloyd Pearson: Yeah.

[00:23:51] Craig McLuckie: Basically, you know, turn visual information into something that has semantic meaning. And the closest analog that we have, you know, in terms of generative AI, is it's effectively [00:24:00] kind of, you know, like, that sort of fast thinking system.

[00:24:03] Craig McLuckie: It's not a reasoning system. It's emulating reasoning by making snap judgments to generate planning and then running kind of internal iterative cycles. And the essential thing that's missing is the bridge between system one and system two. You know, like, when we see a chair, and if our brain says it's a chair but it's not a chair, like, something will make us think, like, is that really a chair?

[00:24:21] Craig McLuckie: You know, is that really a something? And we actually have the ability to catch ourselves. And then we have the linkage where, you know, the reasoning starts to program the fast thinking systems. Those don't exist. So, you know, I think you're right.

[00:24:38] Craig McLuckie: It's emulated reasoning. It's not actual reasoning. And because it's emulated reasoning, it'll work within the boundaries of a certain set of, um, operating attributes. But at some point, you know, things do tend to fall over.

[00:24:51] Ben Lloyd Pearson: So say a listener out there, their company is just starting their AI journey, or maybe they've, you know, made a little bit of progress along it, but, you [00:25:00] know, they're still struggling along the way, or they're still learning a lot. What's some practical advice that you would give to somebody who's still, like, sort of early on in that experimentation stage, and they're, you know, maybe not quite at full-scale adoption, but trying to get to that

[00:25:15] Ben Lloyd Pearson: phase? Like, what is the advice that you would give to someone who's in that stage right now?

[00:25:20] Craig McLuckie: I would say that, first of all, you have to recognize that a lot of your intuitions and instincts are gonna be wrong. And so there's this learning arc that you have to embark on, which you have to embrace a certain amount of patience with. And I think for me, the learning arc that worked for myself and my organization was: first, use the tools just to understand what's possible.

[00:25:42] Craig McLuckie: Then actually attempt to build the tools and optimize the tools. Once you've got to a point where you're successfully feeding yourself with tooling, you then may be positioned to start building [00:26:00] kind of general products. So, like, go buy a Cursor license, start using it, use it a lot, figure out where it falls over, figure out how to make it better.

[00:26:01] Craig McLuckie: Figure out how to connect Cursor to a variety of the existing systems using MCP servers. Great. Get to a point where you can say, well, okay, what would a knowledge server look like? Don't buy it, just build it. Just throw yourself down the cliff. Try it, see what happens. At the end of the day, it's a lot easier to take a great engineer and get them to that point where they have that intuition, that instinct around what works and doesn't, than it is to take someone who has learned the intuition and instinct around how these things work and turn them into a great engineer.

[00:26:33] Craig McLuckie: And you just kind of have to be patient with yourself and your teams. Look at it as a path towards understanding through use, then optimization, and then delivering actual systems. There's a lot of work associated with building long-running, stateful agentic systems. It eventually will deconstruct into a distributed transaction problem, I promise.

[00:26:57] Craig McLuckie: Workflow always does. So like, so unless you're [00:27:00] willing to kind of, you know, work your way into it, just, uh, over time you, you're gonna get hurt.

[00:27:03] Ben Lloyd Pearson: Yeah, you've used the phrase vibe coding a lot, which gets a lot of negative attention, and we've addressed that on this show a lot, so our listeners are very familiar with our take on it. But you've used a little bit harsher language than I typically use. I think you said throw yourself off a cliff with it. I like to say vibe code until the wheels fall off.

[00:27:23] Ben Lloyd Pearson: Like, could you do that at least once? You know, just so you see what happens.

[00:27:26] Craig McLuckie: it's faith-based engineering. Throw yourself off a cliff with a bag of parts and make sure you construct a plane before you

[00:27:31] Ben Lloyd Pearson: yeah, yeah. Just so you know, like this is, this is what happens when it all falls apart on you, so that you're prepared so that you know when you're serious about it. Like you can protect yourself and be ready

[00:27:42] Craig McLuckie: And I like, you gotta, like, it's a, it's a sharp new tool. You gotta treat it with bit respect. Yeah,

[00:27:48] Ben Lloyd Pearson: Yeah. Yeah, exactly.

[00:27:49] Craig McLuckie: like it's, it's um, you know, like sidle up to it. Use it a bit, learn it, understand it. Know, just, just chartering a team to go off and like, you know, hey, build [00:28:00] this long running state full subsystem with reflection patterns and memory and, you know, kind of tool calling and context retreatment, all that.

[00:28:07] Craig McLuckie: I promise it's not gonna work. You know, you'll get there eventually, but I promise, like, unless your engineers have actually walked the journey, uh, it's, you're gonna be

[00:28:16] Ben Lloyd Pearson: there's a lot of frustration and then

[00:28:17] Craig McLuckie: of frustrations. And you just dunno. You're like, why is this breaking? I don't understand what it's breaking. Well, you just add a new tool.

[00:28:23] Craig McLuckie: Well, what happened? Well, you just bumped the thing over a threshold where it, you know, like you've, you've, it's not rendering too much context and it just is not performing the same way. Or, uh, you know, like, Hey, you just updated a model. You've gotta realize the prompt and the model and the orchestration pattern are all tightly coupled.

[00:28:38] Craig McLuckie: These things can't, you know, like they cannot be easily decoupled. Like, you know, behavior is, is optimized for probabilistic outcomes. And until you get your head around that, it's gonna be a bit of a bumpy path.

[00:28:49] Ben Lloyd Pearson: yep. So, you know, we've talked about how like, you know, there's efficiency gains around generating quality code. There's efficiency gains around generating bad code. But beyond [00:29:00] efficiency, like, what do you think are, you know, the biggest benefits that you know, or even challenges that, that engineering leaders are gonna face as they start to integrate generative AI and agents into their core processes?

[00:29:13] Ben Lloyd Pearson: Um, you know, particularly if you know of any like, unexpected ones that people maybe don't typically see.

[00:29:17] Craig McLuckie: Well, I mean, it's, there's the bill at the end of the day,

[00:29:22] Ben Lloyd Pearson: Well, everything's, everything's subsidized by venture capital right now, so the bill's a lot cheaper than,

[00:29:27] Craig McLuckie: you know, and like, and a lot of people are like, you know, I'll speak to a lot of very smart people and they're like, ah, don't worry about it. Like, you know, like you the, you know, like using sparse or matrices and this and that, and all these other things are gonna, you know, interesting costs coming down.

[00:29:40] Craig McLuckie: It'll be

[00:29:40] Ben Lloyd Pearson: yeah.

[00:29:41] Craig McLuckie: No, the, the bill is gonna continue to increase. I think the, the biggest thing I see is as a problem for organizations is this, it's effectively change management in the context of something, right? So it's, it's not, you start to get to a system where you, you've, you've, you've, you've, with, with care and [00:30:00] love and, and, and iteration and running, um, you know, kind of running.

[00:30:06] Craig McLuckie: Brilliant descent for your prompt optimization. All of these things, you produce this relatively optimal system. The problem is that, the minute something changes, and it could be like, Hey, I'm using a tool called system with MCP and you know, I'm, I'm starting to kind of, you know, project tools that are now available.

[00:30:22] Craig McLuckie: A tool description changed. Bet are off, off. There's no guarantee that it's gonna continue to invoke that tool the way you want. you know, I layer on five more tools in my registry and I was doing dynamic tool binding and suddenly, um, you know, I, I've laid in a hundred tools and suddenly my, my, my, my context when I start to just present a tool list has been blown up from 10,000 tokens to tokens. You know, the, the fact that you, you, you know, you really need to kind of, it's not enough to just get it working. You also need to understand why it's working and what's gonna be introducing entropy and, and sort of inconsistencies into the system over time is, is, is [00:31:00] really big. and I think that's been the, probably one of the more surprising things I've seen is just how things are fine until they're not.

[00:31:07] Craig McLuckie: And uh, and that's sometimes surprises people.

[00:31:08] Ben Lloyd Pearson: Awesome. Well, I, Craig, I appreciate you very much coming on our show and sharing your insights with our audience. Yeah. Uh, if our audience wants to follow you after the show, where, where's the best place for them to tune in and keep up with you?

[00:31:19] Craig McLuckie: uh, hit me up on, uh, LinkedIn if you wanna have a, a conversation. Craig McLuckie, um, or just, uh, you know, just look at the, um, the, the tool Dev, Dev website if you wanna, uh, learn more about some of the technology we're working.

[00:31:33] Ben Lloyd Pearson: Awesome. And to our listeners, thanks for tuning in today. If you're not subscribed to our Substack, make sure you sign up for that and thanks for joining us this week.
