Many tokens make all bugs shallow & open source’s new maintainers | Chainguard's Dan Lorenc


By Dan Lorenc

Autonomous agents are pushing deployment speeds to the absolute limit, but is our security infrastructure ready for the consequences? Andrew sits down with Chainguard CEO Dan Lorenc to discuss the severe supply chain risks of this new frontier and what it takes to safely transition to an agent-first engineering model. They explore how engineering teams can safely accelerate deployments by turning restrictive guardrails into frictionless "guide rails" for their AI agents. Finally, the conversation unpacks the future of open source, detailing how AI might either spam projects into dormancy or solve the ecosystem's long-standing sustainability crisis by stepping in as automated, full-time maintainers.

Show Notes

  • Chainguard: Learn more about how Dan and his team are securing the software supply chain.
  • Dan Lorenc on LinkedIn: Connect with Dan to follow his predictions and insights.
  • Gastown, and where software is going: Read Dan's article exploring the Brownian Ratchet principle, multi-Claude, and eventual determinism.
  • EmeritOSS: Explore Chainguard's initiative to provide sustainable stewardship for mature, end-of-life open-source projects.
  • Google's Big Sleep: Read about the Google AI agent that discovered and stopped a zero-day SQLite vulnerability.
  • Daniel Stenberg's Blog: Insights from the Curl creator regarding the influx of AI-generated vulnerability reports.
  • Chainguard Assemble: Catch up on the latest announcements from Chainguard's user conference.

Transcript 

(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)

[00:00:00] Andrew Zigler: Welcome back to Dev Interrupted. I'm your host, Andrew Zigler, and here on Dev Interrupted, we've been talking about the agentic transformation and how it's been coming for engineering, and there's also a darker side to that speed as well.

[00:00:12] Andrew Zigler: All of the security debt that we rack up underneath all of that progress, and an internet that frankly just wasn't built for autonomous agents. And joining us today is someone who spends his weekends stress-testing Claude Code and all the places that you can take agentic engineering, but then also spends his weekdays securing the global software supply chain.

[00:00:33] Andrew Zigler: He's the co-founder and CEO of Chainguard, Dan Lorenc. Dan, welcome to the show.

[00:00:38] Dan Lorenc: Thanks for having me on. Yeah, a lot is changing right now. Uh, I think everyone is scrambling to try to figure out what that means and where we're gonna go from now. Um, everyone's guess is as good as mine. I'll start by saying that, um, I've got a lot of guesses, uh, but I've never seen software move and change this quickly.

[00:00:56] Dan Lorenc: So, um, yeah, we, we gotta take guesses and see what [00:01:00] turns out to be right.

[00:01:01] Andrew Zigler: Yeah, taking guesses and I think it's about experimenting all the time. You can't be standing still. Um, and part of experimenting and trying new things is maybe throwing away assumptions about how engineering is supposed to be done or how it can be achieved. And, uh, we've been following a lot of your work, Dan, about, the way that you've been working with agents and thinking about them in like a parallelized way and otherwise, getting to like a state of eventual determinism through series of gates and checks that allow like an agent to drive itself forward.

[00:01:31] Andrew Zigler: And this is something we've actually talked about a lot on Dev Interrupted, the idea of like chaos going in and progress going out. And there's a lot of our listeners who are all. On different stages of this journey still, of getting to that point of trusting the machine and having the security guidelines and the, the safety, uh, in place to go fast.

[00:01:50] Andrew Zigler: So, uh, I want to just start there, Dan, like how do you think about like the multiclaude philosophy, for example, and everything that you've been, uh, publishing and releasing to the world?

[00:01:59] Dan Lorenc: Yeah. [00:02:00] So I think if we step back to kind of what's happened in the last 18 months, I would say, um, agents and coding tools and AI autocomplete and stuff like that aren't new, right? Uh, Copilot in VS Code from GitHub is actually the first consumer LLM product out there. Um, it predates ChatGPT. A lot of people forget about that because the space has moved so quickly. While that kind of complemented existing development flows, though, it really was a smarter autocomplete. Um, where we've gotten to in the last year, I think, has really started to change the way people are writing code, right? It's a change from you in an IDE writing code yourself, maybe the LLM saving you a little bit of typing, to that flipping.

[00:02:42] Dan Lorenc: LLMs are writing the majority of the code that people, uh, that are using them anyway, are, are shipping today. Um, the Claude Code tool itself, I think, is one of the best software tools or programs ever written and released. Um, it was in a crowded space of all of these IDEs that people were [00:03:00] using and cranking out, and they, they kind of inverted it, said, no IDE, it's just a terminal app that's gonna write all the code with bash and grep, just like the creators of Unix intended, right.

[00:03:10] Dan Lorenc: Um, no IDEs, your IDE is Unix again, and a tool that actually knows all those flags, uh, to a level no single person does. But it still didn't really change things until probably about six months or so before

[00:03:22] Andrew Zigler: Yeah.

[00:03:23] Dan Lorenc: now, which is like a lifetime in AI speak. But the models actually got really good and the tool calls got really good and the harness got really good, to the point where, in the beginning, it was just kind of mesmerizing to watch it slam all these grep flags and stuff and write code that way. Um, but it wasn't really any faster and the results weren't terribly good. To now, when it's better than any person writing code that I've seen. Um, you still have to prompt it.

[00:03:44] Dan Lorenc: You still have to steer it, but you can crank stuff out in hours or days that would've taken weeks or months before. And I kind of, uh, equate this to like power tools, right? Um, like imagine that we've all been doing, you know, woodworking by hand for decades. [00:04:00] Um, every engineer has some weird fantasy of like retiring, turning off all of their electronics and uh, doing hand woodworking or something like that.

[00:04:07] Andrew Zigler: Yes.

[00:04:09] Dan Lorenc: of how we've been writing code up until now. Even with some of these fancy LLMs and IDs like cursor and windsurf and stuff like that, people are still writing every. Line of code. Um, but now it's like we handed the whole industry circular saws and we're like, go try to use these things with like no safety course or anything like that. Um, yeah, it's a lot more powerful. Um, people are gonna get fingers cut off, you're gonna make a lot of mistakes. It's a lot easier to mess something up. Uh, but you're going so much faster. Um, and so that's sort of the shift everyone has now of like, how do we do this safely? and then at the same time, this stuff is also getting good enough where you can build entire factories around it. Um, and we're starting to see that a lot more too. Instead of people writing and reviewing and shipping code, uh, robots are doing that. And if you go with that same analogy of like, you know, handwork, woodworking to power tools. Yeah. This is now full assembly line mode that we are either able to create now or just on the cusp of it. [00:05:00] Um, and people aren't even gonna be operating those circular saws. They're just gonna be operating the factory itself.

[00:05:05] Andrew Zigler: I like that you go to the metaphor of going from, like, the handwork tools to suddenly getting power tools. Uh, I think that's really powerful. When, when, when Geoffrey Huntley was on our show, he also made a woodworking analogy, but he said, you know, the idea that you have to, um, you have to prove your worth before you can use the table

[00:05:22] Andrew Zigler: saw. The idea of understanding the bounds and the constraints of the workshop you're in and how to keep people out of danger. And that becomes the real job now of engineers, of how do we create these working environments, these working environments, this tooling, this, uh, process, these rituals, um, that allow us to capture all of these new gains and this new way of working.

[00:05:45] Andrew Zigler: But, um, it also has a fundamental level of, like, trust

[00:05:50] Dan Lorenc: Yeah.

[00:05:51] Andrew Zigler: to it. And so in that world, you know, there also, you've described it as a factory. I agree with you. You know, that's where everyone is going. We're going [00:06:00] to be stacking all of this until we get to the idea of an assembly-line style of output,

[00:06:04] Dan Lorenc: Okay.

[00:06:05] Andrew Zigler: all we need to do as engineers is align on the, uh, intent of what we're trying to achieve.

[00:06:10] Andrew Zigler: And then the rest can happen downstream. But just as well as we can use that to create and others can use that to look for weaknesses and to, um, you know, to deploy maybe, uh, bad actors as well. And so in this world where you get these like two lanes, I think there's an unfair advantage for the attackers.

[00:06:30] Andrew Zigler: They can now parallelize a lot of, like, probing and looking around and, and harm. But

[00:06:34] Dan Lorenc: Mm-hmm.

[00:06:35] Andrew Zigler: the defense, you don't know where to look to protect yourself. So how do you account for the idea of that, that almost, like, a losing arms race, uh, between those two sides of the coin?

[00:06:46] Dan Lorenc: Yeah, so I think we're in a really interesting state, and this is, like, we're moving up an exponential curve very quickly here with these capabilities. Big enterprises, security teams, they're usually the slowest to [00:07:00] adopt any new technology. Um, because they should be, right? They're not, you know, throwing every new app into prod where you've got your bank account data and stuff like this, or your health records.

[00:07:09] Dan Lorenc: Um, they let everyone else try things out, see what works, see what doesn't, see what broke, see where they got hacked, and then they move it into their environments. Um, but when you're on an exponential curve like this one, um, if you typically adopt things six or nine months after everyone else, or one or two years after everyone else, that gets farther and farther behind every single year as we move up this curve quicker. Like before, maybe you were two years behind the rest of the industry; um, now that's two decades behind with how fast things are moving. Um, attackers don't have that same set of constraints, and so, uh, they're now gonna be two decades ahead of you instead of two years ahead of you. Um, the ways of thinking about how you're gonna secure systems are wrong, and they're not gonna work unless you, you try to get as close to that as possible. Now, I'm not saying everyone has to go run OpenClaw in prod inside of their, you know, banking infrastructure today, because that just came out a month ago.

[00:07:59] Dan Lorenc: But you do have to [00:08:00] consciously try to get closer to the bleeding edge, otherwise that that gap in that exponential curve is gonna make it impossible for you to secure anything that you're running today, we can't let the attackers have the fancy new stuff forever.
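Dan's "two years becomes two decades" point is compounding arithmetic. A minimal sketch of it, where the six-month capability-doubling period is an illustrative assumption, not a figure from the conversation:

```python
# Illustrative only: how a fixed adoption lag compounds on an exponential
# capability curve. The doubling period is an assumed parameter.

def capability(t_years: float, doubling_years: float = 0.5) -> float:
    """Relative capability after t_years if it doubles every doubling_years."""
    return 2 ** (t_years / doubling_years)

def effective_gap(lag_years: float, doubling_years: float = 0.5) -> float:
    """The capability multiple separating you from someone lag_years ahead."""
    return capability(lag_years, doubling_years)

# A two-year adoption lag with an assumed six-month doubling period:
print(effective_gap(2.0))  # 16.0 -- a 16x capability gap, not "two years"
```

Under those assumed numbers, a constant two-year calendar lag corresponds to a 16x capability gap, which is why the same delay keeps getting "farther and farther behind" as the curve steepens.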

[00:08:13] Andrew Zigler: In this idea where, um, they can get that far ahead of you in terms of like attack vectors, right? What do you think are some basic ways that maybe a company that is typically going to lag on that adoption curve? I think a lot of our listeners are at those kinds of companies and their leaders there, where they have to grapple with the realities of the, of bureaucracy and slow moving enterprise adoption of these tools.

[00:08:37] Andrew Zigler: What are the things that they should be doing to be proactive and protect themselves in that environment?

[00:08:42] Dan Lorenc: Yeah. One tactic I've seen work pretty well in some of these larger companies, um, is just set up whole sandbox environments. Get your developers new laptops that don't have access to the same code bases and that kind of thing, and carve out time to get them playing with stuff. Um, because if they're not even aware of the state of technology, then [00:09:00] uh, that's half the problem, and they don't even know what they're missing out on. And when you finally do bring something in, they're gonna have a six to nine month learning curve to get comfortable with these tools. Um, like, I'm really, really, really good at Claude Code today, right? Um, and that's because I've been using it for a year.

[00:09:12] Dan Lorenc: As the tool has gotten better, I've been able to understand the capabilities and can kind of press that limit. That learning curve isn't gonna go away. And the more time and the more ways you can get creative to let people experience that without also sacrificing your security posture and opening up, you know, your entire tool chain to OpenClaw um, it's better, right?

[00:09:31] Dan Lorenc: It, you have these constraints. You're not gonna be able to get rid of those, but you need to figure out a way to get your workforce and get your engineering teams and get all of your leadership aligned that this is where we're gonna go as soon as we can figure out how to do it and be ready for that time.

[00:09:43] Andrew Zigler: I really like that your answer for that was going straight to the human element. It wasn't, you know, a lot of people would be obviously leaning into the more of a technical way of fortifying yourself. But no, the best way to protect yourself is to upskill your employees. Make everyone aware of the realities, create [00:10:00] that shared space, uh, where folks can experiment and understand the bounds of it.

[00:10:04] Andrew Zigler: 'Cause like you said, it's just like, you know, riding a bicycle. You know, you didn't start riding this bicycle till a few months ago. That's how easy it is, that's how, uh, how convinced we all are that it's teachable, because we all learned it, you know, rather so, so quickly. This is moving so fast. So in, in that world, what are the kinds of skills that you think are most important for a senior engineer right now?

[00:10:25] Dan Lorenc: Yeah, I think it, it is sort of intuition, right? Like I, I mean all you know, engineering is intuition somewhat at the end of the day, but understanding what the model capabilities are and what types of tasks it's going to be able to do without supervision and which ones are just gonna cause it to go in a loop and go crazy and self-destruct, and know where those limits are and how to scope things and break them down so they fit into context windows.

[00:10:46] Dan Lorenc: And, um, then when you get a new, bigger context window, see what it can do with that one before things start going off the rails again. There's no real concrete skill here because, uh, it's changing so fast. Um, you know, if somebody were to publish a course of, [00:11:00] become a master of this tool. Um, you know, all those, uh, you know, Twitter, uh, influencers that were posting, like, here's this magical prompt I built that can do anything in one shot: that goes outta date a month later when the model changes.

[00:11:11] Dan Lorenc: Right? That kind of skill is, is not gonna last very long. the real one is just, you know, building up that intuition and keep pressing on it and keep testing it. That's what's gonna last.

[00:11:19] Andrew Zigler: Yeah. Uh, and that part of the intuition too is part of it. If it is experimenting with new tools and age, agentic ways of working and, and operating with the world as they come out. Obviously, sometimes this is a little bit like taking a sledgehammer to like 30 years of security practices in order to, uh, extract some value or maybe even in some cases novelty.

[00:11:40] Andrew Zigler: And something we've been talking about a lot on this show is OpenClaw, which you just, you know, mentioned prior, uh, and its, uh, grip on the world, and how it's kind of truly left, uh, the original audience and is, like, very mainstream now. It's the most widely starred GitHub, uh, repo. We covered that on our show, uh, just in, uh, the last two weeks.

[00:11:59] Andrew Zigler: And [00:12:00] so, You have a high visibility point from your perspective at Chainguard. What are some like really dangerous or scary things that you've seen agents do in these kinds of environments that kind of like keep you worried and keep you at trying to, to solve this problem?

[00:12:12] Dan Lorenc: Yeah, I think, you know, assume every agent is like an intern that you just gave a laptop to, um, and that intern is gonna make a mistake, right? Um, no one gives their interns laptops with root keys to production on them, because, you know, if that intern accidentally runs the wrong command and deletes a database, like, it's not that intern's fault. Yeah, they shouldn't have run that command. But it's you, it's your fault for putting them in a situation where they could have run that command. Um, and that's the same way people are, you know, actually getting results out of these on the positive side. The teams that have these amazing CI systems and test frameworks and harnesses and continuous deployment and all that stuff that we've known we should have been doing for a decade anyway. Like, you know, if you're confident that when those checks come back green, you can press merge and it's gonna go to production in 10 minutes, then you don't have to worry, right? Uh, these agents are just gonna push [00:13:00] PRs to that repository. No one's touching production. No one's SSHing in and debugging things. The agent's not gonna be able to do that either. Um, and then you don't really have to worry, uh, is the code good or bad? It, it doesn't really matter at the end of the day. If it's bad, just tell the agent to fix it later. You need those automated signals and a really, really, really strong pipeline where you can ship code confidently. Um, and then at that point, the code doesn't matter. That's how you can go from, you know, writing 500 times as much code to shipping 500 times as much code. The people without that, where they're scared to deploy on a Friday because, you know, half of the deployments crash and break things and you don't want to page someone: now you have 500 times as much code, but you can only release things at the same, at the same rate. So the bottleneck has really just shifted in your process.
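The "green checks are the gate" workflow Dan describes can be sketched as a simple merge policy: an agent-authored PR merges only when every required automated check passes. All check names and function names below are hypothetical, not from any particular CI system:

```python
# A minimal sketch of a merge gate for agent-authored PRs: the agent never
# touches production; merging is allowed only when every required automated
# check reports success. Names here are illustrative assumptions.

REQUIRED_CHECKS = {"unit-tests", "integration-tests", "lint", "security-scan"}

def can_auto_merge(check_results: dict[str, str]) -> bool:
    """Allow merge only if every required check reported 'success'."""
    return all(check_results.get(name) == "success" for name in REQUIRED_CHECKS)

results = {
    "unit-tests": "success",
    "integration-tests": "success",
    "lint": "success",
    "security-scan": "failure",  # one red check blocks the whole merge
}
print(can_auto_merge(results))  # False
```

The design point is that the gate, not a human reviewer, is the trust boundary: if the checks capture your intent well enough, it stops mattering who (or what) wrote the diff.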

[00:13:41] Andrew Zigler: I love that you call out the Friday deploy; we talk about the Friday deploy a lot on the show and the phenomenon of embracing it. Um, and so I, I, I really, I really agree with that. I also like the idea of, uh, going back to the, the basics of, like, the pipeline, right? Having a really clear [00:14:00] structured system that can gate the work that your agents are doing.

[00:14:03] Andrew Zigler: Just like how you said it should have been gating all the work that humans were doing the whole time. Right. So, like, building that kind of baseline, um, is that, like, is that where you think, uh, engineering leaders should focus, in order to extract the most value from getting started with these kinds of things to shipping them? You know, not just experimenting, but shipping.

[00:14:24] Dan Lorenc: Yeah, I think so. And especially if you're in one of these regulated industries where you can't roll this stuff out yet, the best investment you can make to get ready is to, yeah, get rock-solid deployment pipelines that you can trust today. Because once you do have these agents, they're gonna love them too. An analogy I like is, uh, it's kind of like bowling, right? So I'm a terrible bowler, but if you go bowling, um, and you put up the bumpers, you can still have fun. If you're a terrible bowler, you don't really have to look. You just throw it down, it bounces off, it'll hit the pins. Um, but now if you take a hundred bowling balls and run up and slam them down as fast as you can at every odd direction, right, they're not gonna get down any faster. They're gonna be bouncing off each other. The bumpers are gonna crash, you [00:15:00] know, um, you're gonna break the, the bowling alley. And I think that's sort of how CI systems work, right? Like, if

[00:15:05] Andrew Zigler: Mm.

[00:15:05] Dan Lorenc: tests flake and you're running 200 tests every time and everyone is just sitting there hitting retry, hoping everything gets green, and then half the deployments that go out still fail. Um, that's kind of where you are. Um, you have these gates, but they're not really helping you get down faster, or those bumpers. Um, and I think as you start to pull them in. And I think that's really gonna be the role of engineers in this future: getting those gates rock solid, making sure all the intent is captured, making sure all the performance stuff is in there, everything you need to be confident.

[00:15:32] Dan Lorenc: If you can start to pull those bumpers in tighter and you know, get them to exactly one diameter of a bowling ball, then you can throw a hundred of them down that track as fast as you want. They kind of turn from guardrails into guide rails. There's no way for them to get off track and start bouncing around and not make it down.

[00:15:47] Andrew Zigler: Oh, I love that. It's just like they need to be self-teaching. They need to teach. Yeah.

[00:15:52] Dan Lorenc: there's only one way things get to prod and if it makes it through there, it's good.

[00:15:55] Andrew Zigler: Yeah. And, and so the idea of, you know, the guardrails as just something that stuff bounces [00:16:00] off of, or that you can't trust, is, you know, you can't build off of that. But understanding why the guardrail is there, and trusting the guardrail, and then, like you said, letting it guide you. I think that becomes a natural, uh, way where

[00:16:11] Andrew Zigler: you get to that level of, uh, eventual determinism that you've written about, like with multi-Claude and, and in your article about the Brownian Ratchet, which is a great read we're going to include for folks. And so, uh, I, I wanna, I wanna shift here, though, to another problem scope that you have a really great view on

[00:16:30] Andrew Zigler: as leader at Chainguard and, and that's the, the software supply chain world. And that's something that's been fundamentally altered by the arrival of AI and agents and agentic software. And the thing about it is that it's an iceberg. Like so much is built on top of it, but so much of it is so deep and down underneath

[00:16:50] Andrew Zigler: in the murky depths that, you know, people like you, like me or a lot of our listeners, don't have a lot of visibility on what's going on down there.

[00:16:56] Dan Lorenc: Yeah.

[00:16:57] Andrew Zigler: I, I know this is something you spend a lot of time and focus [00:17:00] on at Chainguard. So I, I want to understand from your perspective, how has the software supply chain evolved in this world and, and how have the stakes changed?

[00:17:09] Dan Lorenc: Yeah, I think it, it, it's still early, right? Agents are around, they're getting used. Their effect on, uh, open source as a whole, I think is still early and it's hard to say too much has changed one way or another. People are feeling it. People are complaining. There's stuff happening. Um, it's going to change.

[00:17:26] Dan Lorenc: but it's too early to tell exactly what's going to happen, right? Daniel, uh, from the curl project, Daniel Stenberg, um, you know, he's been complaining for years that people are using ChatGPT to basically denial-of-service attack his vulnerability report process. Everyone grabs ChatGPT or something like that, says find a CVE in curl, it spits something back that looks kind of, uh, sane before you read into it too much.

[00:17:50] Dan Lorenc: And then they email his private list. So it went from, like, you know, a couple hundred reports a year to a couple hundred reports a week, and 99% of them are just garbage, uh, because the people submitting don't know how to review this stuff. And that's a security vulnerability in and of itself if he can't find the real one buried in there with the 99 garbage ones each week. Um, and so he's, you know, basically shut that off completely. Um, no one is allowed to use AI for security vulnerability research in curl anymore, uh, because it caused too much of a problem for him. But at the same time, you see things like Google's Big Sleep research, where they, they found a bunch of really,

[00:18:23] Dan Lorenc: good, really valid zero days in open source projects that no security tools were able to find before, and disclosed 'em and got them fixed and all of that, uh, before it was out. Um, but agents can do this stuff now, and open source is kind of gonna be front and center in it, because it's a lot easier to point an agent at open source code than it is, you know, your, your bank's, uh, locked-down code. So we're kind of just gonna see more of everything, and some things are gonna collapse under that and others aren't. Um, and I think my, my prediction is open source is gonna stick around, right? There's a lot of people saying it's, it's gone now. What's the value in it anymore? [00:19:00] If you can one-shot every library you need, why are we reusing libraries? I don't think we're gonna get to that world. But I do think it's gonna bifurcate, right? There's gonna be a whole group of people that just don't want AI pointing at them, and I understand why. It's just a whole bunch of noise you have to deal with as an open source maintainer. And then there are gonna be other projects that embrace it, and we're gonna see what happens there. But if things go well, the projects that embrace it are gonna start moving a lot faster and shipping a lot quicker. And, uh, we're gonna see that bifurcation happen in real time.

[00:19:30] Andrew Zigler: So it's really like you think the social contract around open source will evolve, and you'll get these two different types of groups who exist for different reasons.

[00:19:38] Dan Lorenc: In the short to medium term, yeah. There's gonna be a bunch of projects that just say, no, we can't deal with this. Um, and some say, let's let agents, you know, code instead of people, and see what happens.

[00:19:48] Andrew Zigler: You mentioned too the idea of, um, just creating your own software, and so the, why would I use open source? And, and I've been reading a lot about this too, about the, um, you know, folks doing these clean [00:20:00] room experimentations where they have an agent implement something with no outside resources.

[00:20:05] Andrew Zigler: Obviously there's a huge grain of salt, because the LLM itself is an outside resource. But, um, all of that is, all of that is to say, it does kinda change the economics for why companies would pick up software. But at the same time, it doesn't for certain groups, because a lot of the appeal of adopting software or SaaS is, I don't wanna maintain it.

[00:20:22] Andrew Zigler: I want someone else to, uh, so how do you think that balances out? and it, what do you think that looks like?

[00:20:28] Dan Lorenc: Yeah, I think, uh, you know, I, I was asking Claude this earlier this week, what it thinks is gonna happen to the space. Um,

[00:20:35] Andrew Zigler: Great.

[00:20:35] Dan Lorenc: I think no one's gonna vibe-code a database, right? And ship that to production. You know, something like Postgres or MySQL or those layers, right? It, it makes absolutely no sense.

[00:20:44] Dan Lorenc: Even if it could one-shot something like that, it's too much risk, right? There's gonna be a bug somewhere. All software has bugs. Um, if you point enough agents at it for long enough, yeah, they can probably squash most of the bugs. But, um, people are gonna keep using battle-tested pieces of software down there, and some of those are probably gonna adopt this and start moving [00:21:00] even faster. And then those are kind of at the bottom layer: web servers, databases, that kind of thing, where you just need them to be battle-tested and solid. And the only way to do that is for other people to run them for years and, you know, run into all these edge cases. Um, AI can't really speed that part up. And then at the top level, right, like, open source or agents are amazing at front-end development, right? You can just tell it you want a website and it builds this amazing-looking one. And that's because they're trained really well on these DSLs and things like React and these high-level libraries that deal with all the crazy DOM nonsense, and they can, you know, keep context windows small and move really quickly. I think there's gonna be a lot of innovation at that top level too, that lets agents go fast. Libraries and things like that, that they're optimized for and trained on and trained around. Um, but that middle section, um, all those little middleware libraries and routers and Postgres client libraries and things like that, I can start to see people pulling a lot more of that into their own stacks and maintaining that kind of thing yourself, where, yeah, you can use this library, but you have to rewrite, you know, 30 other ways in your app that you call things and restructure to use that library. Um, and it's not that hard to write in the first place.

[00:22:04] Dan Lorenc: Um, uh, that area. I can start to see getting hollowed out a bit as, uh, agents get better doing that glue in the middle.

[00:22:11] Andrew Zigler: Yeah, there's almost like a math equation for, you know, is it more convenient or is it more reliable for me to just use the tool, or is it cheaper for me to use tokens to build a replacement for it? And there's probably a threshold there where the usefulness of the tool way exceeds what you'd possibly be able to do with the same level of tokens to get, like, a baseline version of it.

[00:22:31] Andrew Zigler: So therefore you keep it. Those are like the economics, I think, that shift. And so, and you're right, that projects will fall on different sides of them. So it'd be interesting to see how, how that evolves.
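The threshold Andrew gestures at can be written down as a toy cost comparison. Every number and name below is a made-up assumption purely to show the shape of the trade-off, not real token economics:

```python
# A toy build-vs-adopt model: regenerate a dependency with tokens, or keep
# the battle-tested library? All costs are in arbitrary illustrative units.

def cheaper_to_rebuild(rebuild_tokens: float,
                       yearly_maintenance_tokens: float,
                       adoption_cost: float,
                       years: int = 3) -> bool:
    """True if rebuilding in-house beats adopting the library over `years`."""
    rebuild_total = rebuild_tokens + yearly_maintenance_tokens * years
    return rebuild_total < adoption_cost

# Thin glue middleware: cheap to regenerate, so rebuilding can win.
print(cheaper_to_rebuild(5.0, 1.0, 20.0))       # True
# A database: enormous to rebuild and validate, so adoption wins.
print(cheaper_to_rebuild(5000.0, 200.0, 50.0))  # False
```

This matches the bifurcation Dan sketches: glue-layer libraries sit below the threshold and get hollowed out, while battle-tested infrastructure like databases stays far above it.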

[00:22:41] Dan Lorenc: And, and the stuff where it's really hard to get the edge cases right and the cost of messing up is really important. Like, like databases and web servers, that kind of thing. Um, we're all better off if we point our tokens at one solution and make that better over time, rather than everyone pointing their own tokens at their own solutions.

[00:22:56] Andrew Zigler: Oh, I love that, 'cause it's kind of like an analog to the whole, like, you know, [00:23:00] many eyes make all bugs shallow. The idea that everyone's tokens could make those problems shallow too.

[00:23:04] Dan Lorenc: Yeah, it was, uh, I was working on a version of that. Like, yeah, it's Torvalds's law: many eyes make all bugs shallow. It's like, many tokens make even more bugs even shallower.

[00:23:14] Andrew Zigler: That's amazing. And speaking of, you know, this ecosystem is going to evolve and change. It's gonna be interesting. But the thing about, uh, open source is that it, you know, sometimes lacks its guardians, its champions, and sometimes those can be hard for it to come by. And that's what can make open source, and, and all of the gains from it,

[00:23:34] Andrew Zigler: So tenuous, and something that we take for granted a lot of the time as an industry. So there's an element of: how do we sustain the development and the proliferation of open source in the future? How do we discover the new forms open source is going to take, and the value exchange that both sides are going to have?

[00:23:53] Andrew Zigler: But part of that too is that a lot of modern codebases in the world we live in rely on open [00:24:00] source. And in this world you're describing, longstanding projects can't even accept contributions or pull requests anymore; they get inundated with AI-generated security reports.

[00:24:12] Andrew Zigler: They might spend hours staging up a good first issue, just for a human to never come along and discover it because of the new world they live in. So then how do they hand off the project to someone else? How do you develop a community around that? I think that becomes the real challenge.

[00:24:30] Andrew Zigler: You know, how are you thinking about that at Chainguard?

[00:24:34] Dan Lorenc: Yeah, I think there are pros and cons to what agents can do in this world. You know, there are a lot of projects that are just plain done. No one ever wants to call them done, because they're always open and you could always come up with something new to add. But for the most part, they're feature-complete, they're done, they're tested.

[00:24:53] Dan Lorenc: They don't really need much extra work, and we see a big sustainability crisis at that end. Those also tend to be the most widely [00:25:00] deployed projects too, because they've been around, stable, and haven't changed every six months for the last decade. So they show up at super low levels of the stack.

[00:25:07] Dan Lorenc: They're everywhere, even places you wouldn't expect to see them. And that's hard, because the maintainers need to be around if there is a security incident or something like that, but it's not a ton of steady work, so it's hard to fund that work too. 'Cause, you know, it's not a full-time job. Even if you were to try to hire someone and pay them a full-time salary, it's a couple hours a month, maybe one month out of the year.

[00:25:25] Dan Lorenc: Some years it's a whole week; it's kind of hard to predict. But that's exactly the type of work that agents can do a lot more of, a lot cheaper and a lot easier. You can have one person with agentic tooling doing that kind of end-of-life care for hundreds of projects, because the work is bursty and doesn't all come at the same time. So you can see some benefits to something like this: it'll be a lot easier to maintain projects over time, even if you're not gonna go add crazy features to them. But you also see the challenges in it too. If everybody's chasing the shiny new thing, no one wants to be around to run those agents on that software anymore, and projects are [00:26:00] gonna disappear and go dormant. But I do think the way enterprises use open source software is also gonna change a bit here. You can fork open source software, you can modify it, you can add whatever features you want; it's one of the big value propositions of open source. But the Linux Foundation has a bunch of awesome research on this, and the cost of maintaining a long-term fork is very expensive today, and it only gets more expensive the longer your fork lives. It's always better and cheaper if you can get your changes merged back upstream, which is great and keeps projects moving in the same direction.

[00:26:30] Dan Lorenc: You don't have tons of companies hoarding their own feature work, because it's really expensive to do that over time. But I think that cost of maintaining a fork is actually gonna drop dramatically too, because it's messy work. It's rebasing, it's fixing merge conflicts, that kind of thing, every month when the project does a new release; the stuff no one likes doing. But that's the exact type of work agents are very good at. So I think we're gonna start seeing a lot more internal forks, and even a lot more public forks, of open source projects, where you can merge and share code and ideas back and forth more easily [00:27:00] without having to sit there in interactive rebases for hours and hours until you go blind. So I think it's gonna go fractal, is sort of the way I think about it. All of the forking and all of these amazing features in git are gonna allow everyone to start forking code and doing whatever they want, and now there's gonna be hundreds of versions of all of these things, whether they're internal forks or public ones,

[00:27:23] Dan Lorenc: Having agents do that messy updating work.
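The recurring chore Dan is describing looks roughly like this in git. The sketch below builds a throwaway upstream and fork (all repo and file names are made up for illustration) and replays the fork's local patch on top of new upstream work:

```shell
set -eu
tmp=$(mktemp -d) && cd "$tmp"
# Identity for the demo commits
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

# A hypothetical upstream project with one commit
git init -q -b main upstream
(cd upstream && echo v1 > lib.txt && git add . && git commit -qm "upstream v1")

# Fork it and carry a private patch
git clone -q upstream fork
(cd fork && echo patch > local.txt && git add . && git commit -qm "local patch")

# Upstream keeps moving...
(cd upstream && echo v2 >> lib.txt && git commit -qam "upstream v2")

# ...so the fork periodically replays its patch on top: fetch, rebase,
# resolve any conflicts. This is the bursty chore agents could own.
cd fork
git fetch -q origin
git rebase -q origin/main
git log --format=%s    # local patch, upstream v2, upstream v1
```

In a real fork you'd finish with something like `git push --force-with-lease` to your own remote; the pain Dan mentions shows up when the rebase hits conflicts instead of applying cleanly, release after release.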

[00:27:26] Andrew Zigler: Yeah, I love the idea of it being like a fractal, but it's also a hyper-personalization, because the economics changed. Before, you would never maintain that highly specialized internal fork of X, Y, Z very publicly maintained library; the economics of why you wouldn't are just fundamentally gone.

[00:27:46] Andrew Zigler: Because the idea of having to keep it in sync with the upstream, and dance that around all of your downstream changes, is just untenable for most organizations to consider. But now you get a world [00:28:00] where, just like on the consumer end, the apps and software we use are highly customized, highly personalized, because you get this agentic experience inside of so many things we're using now.

[00:28:13] Andrew Zigler: On the flip side, you get that there as well. I also really just like the idea of them being maintained by agents, 'cause it changes the economics for the long-term contributors. Instead of it being literally that XKCD comic that we all point to, with the little tiny brick at the bottom,

[00:28:39] Andrew Zigler: And it's like the entire internet is built up on top of it, and the little tiny brick is just some dude in Wyoming. Now it's gonna be some agent on some dude's laptop in Wyoming. And then that agent itself could take so many different forms. We don't know [00:29:00] what that agent would really look like yet.

[00:29:02] Andrew Zigler: Although I think at Chainguard, y'all are certainly exploring this with EmeritOSS,

[00:29:06] Dan Lorenc: Yeah,

[00:29:07] Andrew Zigler: is that right?

[00:29:07] Dan Lorenc: Yeah, yeah. Trying to see how much a small team with AI can do, and how much we can scale that to maintain these projects that people are done maintaining themselves.

[00:29:16] Andrew Zigler: Also, behind those projects: being an open source maintainer has always been relatively thankless, but right now it can feel even more thankless. And I feel like maintainers are getting the brunt of a lot of the slop in the world of AI engineering, on the security end, the PR end, the issues end.

[00:29:39] Andrew Zigler: Like, I remember maybe a year or so ago, when it was time for Hacktoberfest and the world was just starting to do agentic coding. Or not even agentic coding yet; we were really in YOLO mode. But it broke Hacktoberfest, [00:30:00] and Hacktoberfest

[00:30:00] Dan Lorenc: Yeah.

[00:30:01] Andrew Zigler: was already something that had so many fundamental problems in its ability to execute because of spam, but then this hit it like a tidal wave.

[00:30:09] Andrew Zigler: So, you know what I'm saying?

[00:30:10] Dan Lorenc: Yeah, yeah. Hacktoberfest has been criticized every year since the start, and that's only getting worse. I remember the first year they did it, you'd get a free t-shirt for contributing to an open source project, and

[00:30:21] Andrew Zigler: Oh yeah. You get it.

[00:30:23] Dan Lorenc: those poor maintainers got like thousands of PRs.

[00:30:26] Dan Lorenc: I think everyone dramatically underestimated how much people like t-shirts.

[00:30:30] Andrew Zigler: I used to work for an open source project, and we gave mugs to people who would contribute to our repo. I think we've sent a mug to every country in the world. So I know exactly what you're talking about, and behind that too, I think it speaks to the incredible amount of

[00:30:46] Andrew Zigler: Enthusiasm and eagerness to be in open source. Open source is a stepping stone that many folks use to gain entry into tech; it has always made tech more accessible. My background is in open source as well. I don't have an engineering degree, right? I learned [00:31:00] to code myself, and a large part of that is open source.

[00:31:02] Andrew Zigler: So open source has always been really dear to me. This world of it maybe being threatened by the rise of AI, and the way we consume and use software, also changes the way software's discovered. And I wanted to ask you about the discoverability element of that. You're building a tool now.

[00:31:20] Andrew Zigler: It's more likely than not that an agent is going to be looking up that tool at some point to implement it into something, if not now then in the very near future. So how do you think about the agentic experience? Like, how do I make a tool that agents just intuitively want to use?

[00:31:35] Dan Lorenc: Yeah, there's a couple of things here. There's agentic... what's it called? I can't remember. AEO or something. It's not search engine optimization anymore; now it's like LLM...

[00:31:45] Andrew Zigler: Oh, answer engine optimization or, yeah. Yeah. Mm-hmm.

[00:31:50] Dan Lorenc: It's, you know, crafting your pages and doing all of this so agents can index it and know to use you. And I think that's kind of an arms race, like SEO has always been. They wanna find and recommend the [00:32:00] best solutions, but sometimes those are hidden or too hard to find.

[00:32:03] Dan Lorenc: So you have to do some basics. I've loved what Anthropic did; I dunno if they were the first or not, but they were the first I noticed it on, about a year ago. Every single doc page they have has a little button called "export as markdown," right there on the docs page, 'cause that's what agents speak, and all the HTML stuff just clogs up context windows. You can copy-paste it into your IDE and hand stuff to your agents, and then they get really good at understanding those docs. And then there's also this near-term problem where they don't retrain constantly. You get new training data put in every six months or so, or sometimes faster now. So if you have some new amazing tool, it doesn't matter how good it is; the agents aren't gonna recommend it, 'cause they don't know about it until the next time the training window gets updated. And so I like the advent of skills. They're a really good way to can this stuff and hand it to agents in a way they can understand, without having to wait for that training window refresh. But yeah, if they're just going to Google and using some web search tool call, who knows what they're [00:33:00] gonna find.
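For readers who haven't seen skills: the rough idea is a small, versioned instruction file handed to the agent alongside a tool, so it doesn't rely on stale training data. A minimal sketch, assuming roughly the SKILL.md shape Anthropic has documented; the library and API names here are invented for illustration:

```markdown
---
name: acme-queue-v3
description: How to use the hypothetical acme-queue v3 client, including the breaking changes since the v2 API the model was trained on.
---

# Using acme-queue v3

- `connect(url, retries=3)` replaced the old `Connection(url)` constructor.
- Always call `client.drain()` before shutdown; v3 no longer flushes automatically.
- Full per-version docs: see the project's exported-as-markdown reference.
```

The point is the same one Dan makes about Anthropic's markdown export: plain markdown, small enough not to clog a context window, and refreshable without waiting for a retrain.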

[00:33:01] Andrew Zigler: I'm really glad you bring up skills too, because in a lot of our conversation today I've been thinking: a lot of open source could now just be a skill.

[00:33:08] Dan Lorenc: Yeah.

[00:33:09] Andrew Zigler: It could be a skill that an agent uses. And I know a lot of people think about their tooling in the same way: you don't wanna be building something that an AI can replace in a few days or a week.

[00:33:20] Andrew Zigler: And you don't want to be something that an AI can replace with a skill, you know? Those become like...

[00:33:25] Dan Lorenc: Guy Podjarny

[00:33:27] Dan Lorenc: He was one of the original founders of Snyk, and he has a new startup called Tessl; they've done a bunch of stuff in this space. But one of the things they did that I loved was they generated really good doc pages for agents for open source libraries, at every specific version. Because that's something you run into if you're trying to write an app: the agent was trained on a very specific version of that library

[00:33:47] Dan Lorenc: that

[00:33:48] Andrew Zigler: Hmm.

[00:33:48] Dan Lorenc: six or nine months old. And if you're trying to use something newer, the agent doesn't know it. And you get into this battle because it does know that library incredibly well.

[00:33:55] Dan Lorenc: It's just not the current one that everyone is using. And you get tons of errors and stuff like this, and it takes a [00:34:00] while for the agent to break out of those patterns. And so this one had really good autogenerated syntax and usage docs for every single dot version, so your agent could always be up to date and calling the most up-to-date version of all of these libraries. Weird little things like that crop up that you don't think about in the beginning.

[00:34:17] Andrew Zigler: And those are the things we have to think about now that we're in this workshop. Going back to the beginning, the idea that you have power tools for the first time, a table saw, and you have everybody running around in the workshop and there's sawdust everywhere.

[00:34:32] Andrew Zigler: You are responsible for making sure people don't cut their fingers off, just in the same way as giving someone an internal laptop where they could delete the production database. You know, you have to be able to create these safeguarded environments where things can be maintained for the long term.

[00:34:50] Andrew Zigler: And I think that becomes the new level that we all play at as software engineers now: how do I create the safest and most effective environment for [00:35:00] my agents to get this work done? From there, so much of that work goes into cultivating the right space, the right guardrails, the right guide rails, as you put them.

[00:35:11] Andrew Zigler: I loved that. I don't think there's anything more important right now than being able to come together and share those ideas, and really experiment with what one person is doing and share it with another team. I think there's so much opportunity for cross-pollination of ideas, not just across the tech industry, but across a lot of other industries as well.

[00:35:33] Andrew Zigler: And so in this world, do you think engineering just becomes generally more accessible to people outside of tech? And how do you think that impacts everything we've talked about today?

[00:35:43] Dan Lorenc: Yeah, so overall I'm bullish on this, right? The tools make it much easier to pick things up; you can go much faster. But back to that woodworking analogy: yeah, if somebody spent five years with hand tools and then you give them power tools, they're gonna be much better than someone that jumped straight to power [00:36:00] tools. But agents are also amazing at teaching people things, if you prompt them a little bit differently. ChatGPT has like a study mode now where, yeah, if you're in high school and you want to finish a paper, you can just ask it to write that paper for you, or solve a physics problem. But they have a teaching mode too, where you say: don't tell me the answer.

[00:36:18] Dan Lorenc: Help me think about this; here's what I'm thinking; steer me back on course. Everyone could kind of have a super individualized, personalized tutor that could get you through that. Like maybe that five-year apprenticeship can be cut to six months or something like this, where you get the same value, but only if you're doing it that way, and you're not just opening up Claude and saying, you know, refactor this codebase, without knowing what a good codebase looks like from the start. So if we get both of those, I'm really bullish. It's gonna be a lot more accessible; you're gonna learn things faster and get to that good output, but you do still have to put in that work.

[00:36:51] Andrew Zigler: Yeah. I've loved everything we've talked about, Dan. The way that you think about where engineering is going is so wise, but [00:37:00] also your perspective from a security standpoint, from a software supply chain standpoint, is really valuable to us, because a lot of us exist in a world where we are consumers.

[00:37:12] Andrew Zigler: Of those tools at scale. And we don't necessarily have the time or the ability to understand the machinations that go on underneath. I think this world's gonna continue to evolve and change in really interesting, fascinating ways. I'm curious now: this episode is dropping right after, or right during, your Assemble conference.

[00:37:33] Andrew Zigler: What's top of mind for you right now, as you have everybody in one place to discuss the future?

[00:37:40] Dan Lorenc: Yeah, I mean, I'm really excited for the stuff we're doing. You know, as a company we've been trying to use Claude Code and every agent out there for a year, and we've finally gotten past that stage; now we are shipping faster and we're able to do a lot more. It was a painful process, and I think we tried every trend in the AI world as they were getting obsoleted, [00:38:00] you know, MCPs and RAG and all that stuff that no one even thinks about anymore. But yeah, we've really got this stuff in production now, and I'm excited to meet all of our customers and everyone using us. Excited to share all of that.

[00:38:11] Andrew Zigler: Amazing. Well, Dan, it's been really great to have you on the show. We'd love to have you back in the future to touch back in on how software has continued to evolve. But in the meantime, where can the folks who have listened today go to learn more about you, your writing, and what you're working on at Chainguard?

[00:38:27] Dan Lorenc: Chainguard.dev is our website. Most of my posting is on LinkedIn; it's D-A-N-L-O-R-E-N-C. You can look me up.

[00:38:36] Andrew Zigler: Awesome.

[00:38:38] Dan Lorenc: See how far off we were in all of our predictions here today.

[00:38:41] Andrew Zigler: No, that's my favorite game to play on this show. And trust me, the listeners have their scoresheets, so we will come back together; it'll be a ton of fun. And to those listening today: if you loved our discussion, please come and find us on LinkedIn, on Substack. Let us know your thoughts about today's conversation.

[00:38:58] Andrew Zigler: Dan and I would love to [00:39:00] hear from you, especially if you wanna continue things that we've talked about here on the show. But in the meantime, that's it for this week's Dev Interrupted. I'll see you next time. And Dan, thanks again for coming on the show.

[00:39:12] Dan Lorenc: Awesome. Thank you.
