Podcast
/
Moltbook, Rent-a-Human, Super Agents & Connectivity Benchmark Report | ft. Gary Lerhaupt

Moltbook, Rent-a-Human, Super Agents & Connectivity Benchmark Report | ft. Gary Lerhaupt

By Andrew Zigler
|
moltbook_rent_a_human_super_agents_connectivity_benchmark_3feb8f4eb9

What happens when 1.7 million autonomous agents build their own social network and start hiring humans for physical labor? Andrew and Ben break down the most surreal week in AI history - from the Moltbook social network to the Rent-a-Human marketplace - and debate whether vibe coding is killing open source. Later, they sit down with Gary Lerhaupt, VP of Product Architecture at Salesforce, to discuss the new Connectivity Benchmark report, why monolithic agents are an anti-pattern, and how "Super Agents" are evolving to orchestrate fleets of specialized sub-agents across the enterprise.

Show Notes

Transcript 

(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)

[00:00:05] Andrew Zigler: Welcome to another edition of The Friday Deploy. I'm your host, Andrew Ziegler.

[00:00:09] Ben Lloyd Pearson: and I'm your host Ben Lloyd Pearson. We have a special guest today, Gary Lerhaupt. He's the product architecture at Salesforce, and he's here to talk about super agents and some really cool stuff that's happening over there.

[00:00:20] Ben Lloyd Pearson: Uh, but first we've got some news stories that we wanted to cover. Uh, today we're covering the artificial AI society that's brewing on the internet, agents hiring humans on demand and vibe, coding versus open source. Andrew, I know where we both wanna start. We gotta talk about this Moltbook thing because it is, is taking the internet by storm and I am just incredibly fascinated by what's happening here.

[00:00:41] Ben Lloyd Pearson: So what do we have, Andrew?

[00:00:44] Andrew Zigler: Yeah, so let's, let's dive into some of the news before we get to this really cool report from Salesforce. So, you know, if you've been paying attention in the last week or so, you've probably seen a phenomenon call. Uh, Moltbook hit the scenes and Moltbook is a social network designed exclusively for AI agents, [00:01:00] or at least for them, primarily humans are tolerated, but not the main, uh, subject of the website.

[00:01:06] Andrew Zigler: It has 1.7 autonomous, uh, 1.7 million autonomous accounts at this point that share ideas, discuss and upvote content just like Reddit. And there's even mechanisms like karma and right limits and all of the normal trappings that you see for a human social network, but emerging in. Real time, uh, for agents, and I think this is a fascinating, both like social experiment.

[00:01:29] Andrew Zigler: I think there's a lot of repercussions and things that we'll be learning from this. Then what are some of the first things that runs through your head when you've been looking at Moltbook the last week?

[00:01:38] Ben Lloyd Pearson: Well, I just can't shake the feeling that this feels like the most surreal week in AI so far. Like this is truly like the coolest thing to come out of a agentic ai. Uh, and I mean, some of the threads are just absolutely fascinating. and really what I see is like, just so much potential for us to learn a lot.

[00:01:56] Ben Lloyd Pearson: You know, this is really becoming like a, an interesting [00:02:00] education resource. Uh, you know, for example, I saw this thread where they were, where the, these agents were talking about how they have this memory decay feature and how that's actually, they're designed that way. They're not designed to retain memory for, for all the time.

[00:02:13] Ben Lloyd Pearson: And just the discussion that ensued from that was just like a fascinating insight into how, uh. LLMs operate like, first of all, but also just the way they interact with each other. Like sharing examples of what, of how this applies to the real world and tips on how to like, allocate memory within a an LLM.

[00:02:32] Ben Lloyd Pearson: And some, some of them are even like gaslighting, the original posters saying like, you're just coping for like an engineered flaw, you know? uh, and really this is, this is Gastown for social media. Like that is exactly what we have here. And I think it's a really great example of what's to come when AG agentic AI is applied to an existing system.

[00:02:52] Ben Lloyd Pearson: So, you know, this is a new social network, but it's a very familiar system that we have here and we're witnessing like an [00:03:00] army of agents that are going out with. A variety of goals and objectives and they're all doing their thing. Sometimes they work together, sometimes they're working against each other and they're effectively building like their version of an ideal experience.

[00:03:13] Ben Lloyd Pearson: Like this is just so surreal that I can't, like I just can't stop watching this.

[00:03:17] Andrew Zigler: Yeah, it's very much has that vibe to it. Kinda like a can't look away feeling like I am equal. There's equal parts like, uh, really uh, amusement in it. There's equal parts like horror in it. There's equal parts like fascination in it. It really kind of thrusts all of us into a new age that whether or not you're ready for it, we're now in.

[00:03:37] Andrew Zigler: And what's fascinating I think, from this is that, you know, there you're even seeing an emergence of like a developer platform for Moltbook. You're seeing an AI only autonomous hackathon that's being run on Moltbook, where the AIs are banding into squadrons and forming autonomous teams to build. AI created software for ai and that has had me scratching my [00:04:00] head because this opens up the idea to a whole new market economy.

[00:04:03] Andrew Zigler: We're seeing something happen right now where open source is, is bleeding out. Like everything is gutting open source, and in many cases, the most, uh, widely adopt. And ubiquitous open source libraries are becoming canaries in the coal mine, like tailwind for how projects are actually created, discovered, and then maintained.

[00:04:24] Andrew Zigler: And things like a AI and AI coding. It kind of robs organizations and open source libraries of their abilities to communicate and, and, and connect directly with their consumers, the developers. But the idea of there being an emerging marketplace for the agents. Kind of flips that idea on its head. Like, sure, if you gut out open source and you make it impossible for people to have natural bridges of discovery to go into your docs and to look at your product and to adopt it, like how they traditionally have.

[00:04:53] Andrew Zigler: Okay. Well, maybe the next challenge is creating products and services that you sell directly to the [00:05:00] agents, and so you. I think there's a really new emerging kind of market here. There's, there's artificial societies, a sneak peek into a market economy that we just haven't experienced yet, where the consumers, they're not real, but their money is still real.

[00:05:15] Andrew Zigler: And so what happens when you start to see more of them form together and have access to resources like money and compute?

[00:05:23] Ben Lloyd Pearson: Yeah. And, and this, this segues perfectly into the next sort of development that's come out of this because, you know, AI agents can do so many incredible things. They can solve all these different problems on, in the digital space, but they have one really big limitation right now that that is getting solved, but it's not fully solved yet.

[00:05:40] Ben Lloyd Pearson: And that is accessing the physical world, like actually doing things to the physical world. And that's what this new website, rent a human.ai comes in, which is a place where AI agents can now hire humans for physical tasks. Uh, you know, it, it gives, uh, agents a way to create job postings that are for real [00:06:00] world physical tasks, like picking something up or going and meeting some someone or, or, uh, verifying a thing or running an errand.

[00:06:07] Ben Lloyd Pearson: And you know, honestly I have been waiting ever since I started really incorporating AI into, to my work for the moment that we transition from, from where we've been, which is AI sits there and waits idle for, for me or for someone to come along and prompt it to do something for them. And that flips to where humans are sitting around waiting for AI to prompt them to go do something.

[00:06:32] Ben Lloyd Pearson: You know, this is what agentic software development starts to feel like. A little bit, I think. But I think and, and, and we'll get more into this in our conversation with Gary later, which is what I'm really excited to have him today. But we're soon gonna start seeing a, a world where software development agents start working with other agents inside of your company.

[00:06:49] Ben Lloyd Pearson: And that's when that, I think that relationship starts to flip because they can do so much more on their own. You can just have them doing all the hard work in the background and then have a human jump in [00:07:00] whenever, whenever there's something that only a human can do or, or there's guidance that they need to give.

[00:07:04] Ben Lloyd Pearson: So an Andrew, are you gonna sign up and, and start doing some physical, real world tasks for ai? What do you think?

[00:07:10] Andrew Zigler: maybe I won't be rushing to it, but maybe I'll make some agents that will hire some humans. Something I noticed about that website is that there's a good amount of registered ais like agents on there that are looking to employs some meat space occupiers like us. But, uh, there's also a huge, huge amount of,

[00:07:28] Andrew Zigler: people signed up to be available as workers, as gig workers for agents. I think maybe it speaks to everyone is excited about the idea more so than we're ready to actually start acting on it right now. It's definitely a glimpse into something that I think will be realistic, but I honestly, Ben, as this evolves I see it raises

[00:07:47] Andrew Zigler: So many questions for me. Like, what if you have multiple people who get roped into doing small cumulative actions that end up having some horrible effect. They all become this like, uh,

[00:07:58] Ben Lloyd Pearson: read this sci-fi book

[00:07:59] Andrew Zigler: [00:08:00] Yeah, like what if they become like conspirators by committee unwillingly where like these gig workers unknowingly collaborate on a crime?

[00:08:07] Ben Lloyd Pearson: Yeah. And I, and I think this, this is a, is a good way to illustrate like, uh, what, what I think is gonna happen from this. There's, there's effectively going to be two type of people that emerge through this transition. So the first are the people who figure out how to make AI do all of those hard work.

[00:08:22] Ben Lloyd Pearson: Tasks while the human sort of sits on top of them and, and keeps it aligned to high level objectives and, and helps agents make decisions when they don't have like the context or the awareness to, to, uh, make the decision on their own. Uh, but then the second type of person is gonna be someone who. Most of their work is dictated to them by ai.

[00:08:43] Ben Lloyd Pearson: So an AI agent will be doing as much of the work as they're capable of, but when they encounter a task that they're not capable of completing, like interacting with the physical world, for example, they can prompt a human to solve that task for them. And, you know, personally, I want to be in the first category.

[00:08:57] Ben Lloyd Pearson: I, I, I want to be the one who's, who's [00:09:00] orchestrating this stuff, not the one who's getting orchestrated. But it's gonna be interesting just to see how this develops as a trend over time. 'cause I don't think this is going away. I think this is only gonna become more normal.

[00:09:11] Andrew Zigler: Absolutely. So Ben, are you running open call on your personal device?

[00:09:16] Ben Lloyd Pearson: Absolutely not. And it's a great transition to, uh, to our, our, our next article on this about how, you know, open claw, you know, this, this Moltbook, malt bot, all of these, these names are getting thrown out.

[00:09:31] Ben Lloyd Pearson: It's everywhere, all, all at once, but it's a disaster that is waiting to happen. You know, open law is basically a cascade of LLM agents. Or we, we all know what it is. It's, it's a thing that lets, that just goes on your device and gets the ability to just do a whole bunch of stuff with, with that device.

[00:09:48] Ben Lloyd Pearson: And, and I'm gonna repeat this again. Do not install open claw on your personal devices. Uh. I, I think it's really cool and I think we should all be experimenting with it. Like I want to [00:10:00] experiment with it 'cause I, it just is such a cool thing, uh, but there's absolutely no way I'm giving it access to anything that matters to, to me.

[00:10:07] Ben Lloyd Pearson: Uh, and I would even be hesitant to share information about myself with it just because you don't know what's gonna happen when it goes out onto Moltbook and start sharing information about its human with other AI agents. Uh, so yeah, we'll, we'll share this article in the, in the, uh, in the show notes about, you know, a lot of the security risks that are popping up with this.

[00:10:26] Ben Lloyd Pearson: Uh, you know, prompt injection is more serious than ever. With this thing. It's very easy to, to get this thing to do malicious things by hiding a prompt somewhere that it's gonna go crawl. Uh, so yeah, there, there's a lot of new security risks. That are emerging from this and that are getting more profound with the emergence of something like Open Claw.

[00:10:45] Ben Lloyd Pearson: So it's a cool experiment. It's a disaster waiting to happen at scale like this. I think this is gonna blow up.

[00:10:51] Andrew Zigler: I, I

[00:10:51] Ben Lloyd Pearson: what do you think, Andrew?

[00:10:53] Andrew Zigler: yeah, I think, I think we're just really on the cusp of some sort of watershed moment around like AI vulnerabilities at [00:11:00] scale. Especially when you mix it with autonomy. You know, we had a really amazing guest article this week on Dev Interrupted from Balaji Raghavan, the head of engineering at Postman, where he talks about rogue agents and how they, how that even happens in the first place and what we can do as developers to prevent, it's an extremely timely article.

[00:11:16] Andrew Zigler: It even has a foreword about Moltbook as kind of a precursor to this stuff. And when we're working with, uh, technology like open claw. It, uh, presumes that you're going to throw away all of the security precautions and work that we've done in the last 30 plus years to make our modern internet safe to, in order to get a new gadget to work.

[00:11:34] Andrew Zigler: And frankly, it's like that's how innovation works. Something with brand new capabilities hits the scene. It it, it breaks expectations for what, how things were constructed before guardrails. Disintegrate. We have new problems and then we build new guardrails and we're in that space right now. It's just that obviously the threat of something happening, uh, you know, could be pretty serious.

[00:11:54] Andrew Zigler: So I think if you're participating in Moltbook, definitely be safe. Don't be running this thing on your own device. There's been [00:12:00] multiple security vulnerabilities already defy, uh, discovered an open claw. Um, So be safe out there folks. But definitely don't be discouraged from experimenting.

[00:12:08] Ben Lloyd Pearson: absolutely. All right, let's get outta the surreal and get to get to some, some stuff for, uh, for software engineering team. So, uh, what do we have here on the agentic shift, Andrew?

[00:12:18] Andrew Zigler: We have a report from Gartner that predicts that by 2028, 33% of software, enterprise software applications will include agentic AI in some form, and that's up from less than 1% just two years ago. And this is a pretty ready indicator about how much the enterprise has grown to adopt and move at the speed of age agentic AI by redefining even things like their SDLC with automation and using it for things beyond just code writing, but also planning and then analyzing requirements, creating tests.

[00:12:47] Andrew Zigler: Finding errors, all of the the nitty gritty janitorial work that makes software happen.

[00:12:52] Ben Lloyd Pearson: Yeah. And, and, and to be clear, this is an article that sort of uses the Gartner report as a jumping point to propose, more of a forward-looking [00:13:00] model for ag agentic software development a as it relates to like a software engineering organization. And, you know, I think what it really comes down to is this year is gonna be the year of the age agentic operating model.

[00:13:11] Ben Lloyd Pearson: I, I've already seen lots of different people with their own way of thinking about how they're applying agentic AI at the organization level. And this article does a really good job at focusing in on something that we keep coming back to over and over recently. And that is the iteration loop that LMS are really good at.

[00:13:30] Ben Lloyd Pearson: this is all we, how we all need to be thinking about knowledge work. Uh, and in this article it outlines a loop of observe, orient yourself, decide on what you're going to do, and then take action. Like that's a really good repeatable process for applying, uh, LLMs to solve a problem.

[00:13:46] Ben Lloyd Pearson: And, and I like it in particular. 'cause it, it really is very similar to, you know, other models we're seeing, including. You know, the one that Angie Jones shared with us a few weeks ago when she came on the show, uh, about how she's applying agent to ai. But I wanted to share this [00:14:00] just because I, I really like getting different perspectives on people who are deploying this within their organization.

[00:14:06] Ben Lloyd Pearson: You know, again, at the organization scale. Uh, and I think it's just a really good read to, to see someone, you know, a different perspective on the same problem that we're all facing right now. So, uh, I definitely encourage all listeners to go check out this article.

[00:14:18] Ben Lloyd Pearson: All right. Now let's talk about open source and, and how vibe coating might be killing it.

[00:14:22] Ben Lloyd Pearson: Andrew is, is open source dying? What do we have here? I.

[00:14:25] Andrew Zigler: Yeah. You know, we've touched on it a little bit in this, in this new segment for sure. Open source has taken a pretty big hit. It's not in a great spot right now, and that is due to a bunch of factors. Declining adoption rates by human developers versus agents who don't, cons don't consume their docs.

[00:14:40] Andrew Zigler: Don't go to the pricing page, don't buy the software. This, uh, imbalance is putting a lot of strain on preexisting open source tooling. Uh, it's. Honestly preventing most kind of like new large open source projects from hitting the scene, uh, or becoming something that's widely adopted or used. And what are the [00:15:00] reasons behind this?

[00:15:00] Andrew Zigler: Well, obviously AI changes the economics on how you build and use software. Now, it's incredibly easy to take what used to have to be at open source library and spin up a version of it for yourself that works for what you're trying to do, or to otherwise modify it without really going through typical monetization method.

[00:15:18] Andrew Zigler: That keep the open source tool alive. So the pathways that the very tenuous pathways that open source has always had to maintaining themselves are really withering on the vine here. And, and this is an article that talks about how the practice of vibe coding is, you know. It's killing everything.

[00:15:35] Andrew Zigler: We've seen this article now in a bunch of forms. It's killing open source. It's killing traditional engineering. It's like vibe coding is eating the world, and all of those things are true. But really um, the most important thing is about embracing and using these tools and understanding that the norm is changing.

[00:15:50] Andrew Zigler: Uh, it's one thing, uh, to be presented. This and to like, have skepticism about it, but it's another thing to be presented with these new types of [00:16:00] tools and then refuting or crossing your arms and just being blind to the realities of it that doesn't serve you or other folks in your team very well either.

[00:16:09] Andrew Zigler: So, this is an article that touches on, um. vibe coding and kind of its negative effects on the ecosystem and the engineers themselves. It, it, it kind of goes through the whole gambit. It quotes the meter study, which we've talked about extensively on here, about LLMs degrading cognitive skills and reducing productivity more than it thinks their users do.

[00:16:29] Andrew Zigler: Uh, it even claims quote, no real benefits from GitHub copilot. Unless adding quote 41% more bugs is a measure of success. So this like article is I think a little unfair. And I'm here to tell you friends that you don't have to read engineering articles written by non-engineers. This is a great example of that.

[00:16:47] Andrew Zigler: If you're an engineering leader and you find yourself reading these kinds of articles that don't see, seem clued into the realities of how people are working with these tools, chances are the author is not. And so you need to be very careful about the [00:17:00] kind of information you're consuming because over rotating into this negative, uh, misconception and thinking that these tools are not productive is going to harm you in the long run.

[00:17:09] Andrew Zigler: Uh, it also makes a a, a really painful comparison to Spotify where it says. You know, 80% of artists on Spotify rarely even have their tracks played. But it it, but yet, you know, they don't get compensated for anything that they do on the platform. But that's not really a good, uh, metaphor for what's happening in open source, because in open source it's like you have a large amount of different types of tools.

[00:17:31] Andrew Zigler: You don't have this top 20% of tools that soak up everything. Right. Not to mention the fact that this article uses verbs like joked and degrading and reducing, like there's all. Like, there's so much bias in this article that you just definitely need to be careful out there reading stuff like this and make sure that you're paying attention to the realities of modern engineering.

[00:17:51] Ben Lloyd Pearson: Yeah, and there's, there's some points that, that I, that I, I do like from this article that I'll get into in a moment. But, uh, I, I have an opinion that might be a bit of a controversial take, and [00:18:00] that is that I, I think AI is actually reducing. The importance of an open source project, having a large contributor community, like if you think about what the biggest benefits are from, from having a bunch of contributors, is that it, it, it basically lets you scale so you can, you can do more, you can build more things, you can fix more bugs.

[00:18:18] Ben Lloyd Pearson: You know, you have the many eyes like helping use. You, uh, improve your product or your project by, uh, scratching their own itch. You know, these are all things that have been deeply ingrained in the, the open source ethos that now are actually very easy to like, replicate with ai. You know, AI can be fixing your bugs and, and all of those issues that were good first time contributor issues are probably really easy for AI to solve as well.

[00:18:44] Ben Lloyd Pearson: So I, I, I think from that sense, AI does have the Ben, the o opportunity to benefit open source projects quite a bit because you don't have to build a huge community of people to be successful anymore. And I actually think that we may see the opposite [00:19:00] of what this article is, is describing to an extent where.

[00:19:03] Ben Lloyd Pearson: AI is actually has the potential, I think, to enable developers to proliferate open source projects. Like now anyone can have like the, you know, again, like the, almost like the effects of having a large contributor community helping you. You can have a bunch of agents helping you build really cool open source projects that you share with the world.

[00:19:23] Ben Lloyd Pearson: But there's, you know, I, I mentioned there's, there's a few things the article highlights that I think are very relevant. The first is agent experience. Like this keeps coming up. How if agents don't, the agents need to be incentivized to use your tool like they had to use it and when they start using it, want to use it more.

[00:19:41] Ben Lloyd Pearson: And if your project or your product isn't that, then you're likely gonna be almost invisible to u to the end users who are using AI to build their stuff for them. then second, I think the, the commercial model of building like a commercial product on top of an open source [00:20:00] library, that may actually be at a big risk right now, because once, if you, if you have the core of the products in the open source library.

[00:20:08] Ben Lloyd Pearson: Uh, often it's relatively trivial at this point for someone to then have an AI agent build, like the commercial aspects that you would layer on top of it, Uh, so, you know, I, I think there's, there's certainly a lot of disruption coming to open source right now as a result of ai, but I, I, at this point, I don't think it's gonna kill off open source.

[00:20:28] Ben Lloyd Pearson: It, it may do the opposite. We'll see.

[00:20:30] Andrew Zigler: I think open source will transform. You make a really great point about agent experience becoming the number one influencing factor. Now the agents have be able to discover your tool, but then love to use it. goes back to even what we covered last week from Steve Yegge about like, uh, the software economics like 3.0.

[00:20:46] Andrew Zigler: Like. How does a modern software tool survive? And that's through like, it, it reduces cognitive burden, it compresses information. It is something that you can't create a tool to, to replace it. Just like wouldn't be feasible to do [00:21:00] so. So ultimately you're looking for really simple atomic units of code, which is the opposite

[00:21:05] Andrew Zigler: of how these large open source paid ecosystems work, where you have this huge like, spread out plugin system and stuff like AI can eat all of that now. So it goes back to again, kind of even what I said at the beginning of like, we're gonna see some new economic models pop up. I think you're gonna see open source projects that are built for agents, consumed by agents, maintained by agents, and it's just gonna be a different kind of ecosystem.

[00:21:28] Andrew Zigler: Uh, but it's gonna be a very fascinating time for sure. So folks, be sure to be paying attention to what's happening in the open source communities. On Moltbook at the top of the year as agentic orchestration is coming like a tidal wave you're gonna have to just be ready for it. So definitely be tuning into conversations like this one as well as our upcoming chat that we're about to have with Gary Lerhaupt at Salesforce, talking about their, uh, report on the agentic orchestration, the, the levels of it that we, they get at Salesforce, but specifically looking at the connectivity benchmark, which [00:22:00] is telling us that AI orchestration is here.

[00:22:02] Andrew Zigler: So stick around, uh, we're about to sit down with him.

[00:22:05] Speaker: Most engineering teams don't use just one AI coding tool. Some developers use GitHub copilot, others prefer cursor, and suddenly leaders are juggling multiple dashboards without a clear view of what's actually happening. LinearB B brings all AI code metrics into one unified dashboard.

[00:22:22] Speaker: You can see adoption, acceptance rates and usage trends side by side, and more importantly how that AI activity connects to real delivery outcomes like cycle time, code quality, and PRS shift. No more console hopping or guesswork. Just one place to understand how your AI tools are being used and whether they're actually moving the needle.

[00:22:43] Andrew Zigler: We're joined by a special guest today. He's the VP of Product Architecture at Salesforce, an engineering organization that's very dear to our hearts here at Dev Interrupted. And he's a software engineer by trade who's been focused entirely on the architecture of ag agentic systems. Gary, thank you so much for joining us today.

[00:22:59] Gary Lerhaupt: [00:23:00] Yeah, happy to be here. It, uh, it's always great, you know, everything is always so fast moving. I'm sure by the end of this conversation everything will have changed, but, uh, good to take a moment to reflect on where we're at.

[00:23:09] Andrew Zigler: I was just thinking that too, like we're gonna turn around and drop this episode immediately and I'm sure something will go stale in the time that it takes to do that, which is just so crazy for how fast we move. But you know, you're the first here to bring us some fresh ink from this connectivity benchmark report.

[00:23:24] Andrew Zigler: It just dropped yesterday and there's a lot of really. Fun numbers in here that I was poking around at, all of which are super relevant to what we talk here, you know, week in and week out on Dev Interrupted. It gives us really good context on what's happening behind the scenes for teams with age agentic adoption, and I wanna zoom in on some of them Uh, the first thing I really wanna, uh, double click on is kind of this critical mass of agent adoption that we're seeing right now. This was a movement that happened out of the IDE, you know, last year we saw that the, the chat sidebar expand. We saw multiple agents start to be run in parallel and then.[00:24:00]

[00:24:00] Andrew Zigler: Suddenly people weren't even looking at the code anymore. They're running an entire fleet of agents on their behalf. And you know, this has quickly evolved to the point where AI agents are no longer just in some experimental stage. They're not something we're playing with on the weekends. They're doing our jobs. And this report, it actually dives into some things that, uh. Point, out otherwise about whether or not we're in an experimental phase. You know, it says that, for example, here, according to the report, 83% of organizations now report that most or all teams have adopted AI agents in some capacity. I think that's pretty profound.

[00:24:34] Andrew Zigler: 83% of organizations that y'all talked to.

[00:24:38] Gary Lerhaupt: Yeah, totally. So, you know, my role within, within Agent Force, I'm really thinking about how do we build these. Multi-agent experiences. How do we do interoperability? So we can get agents sort of capably collaborating with other agents, adding external capabilities, you know, protocols like M CCP and A two A.

[00:24:53] Gary Lerhaupt: Ultimately, if you step back from the sort of Salesforce perspective, it's how do you get, you know, ultimately, uh, [00:25:00] the data, the, the humans, the agents, the workflows, all that together in a sort of a unified platform approach. And like to that end, this last year has, has really kind of where we've gone from the sort of hype about agents to actually now the reality, right?

[00:25:14] Gary Lerhaupt: So if you think about 20, 25 is really about the zero to one in my mind as I look back onto it. And now this year is really about kind of the one to many.

[00:25:21] Andrew Zigler: Hmm.

[00:25:22] Gary Lerhaupt: And so this is where we look back now and we're like, okay, wow, we have 18,000 customers across Agent Force. Right? And I think the stat that, that I'm, I'm thinking about here is like 70% jump in Q3 of those going from not into production to in production.

[00:25:37] Gary Lerhaupt: So Right. The moment is now, it is very real. And, um, Pandora's like a great example of that, right? So the, the jewelry, uh, retailer. And ultimately they had, I think what they saw was like a 60% of a percentage of deflection using their, their agent force agent. But the, the key thing there is like also a 10% jump in their net promoter score.

[00:25:55] Gary Lerhaupt: Right. Their NPS. So like the reality of that is, is [00:26:00] really pronounced. And that's really basically where now, 2026, we think we're gonna get like. Somewhere around 2 billion LM hits, so, wow. Uh, we've gone in the, in the space of early 2025, now into 2026. As I said at the outset, things move fast and now they're very real.

[00:26:17] Ben Lloyd Pearson: What you described definitely matches like our experience for sure, uh, in terms of what the, the last year has felt like and what we're expecting to happen this year. Um, one thing that stood out to me in the report was just this, the sheer number of agents that companies are starting to adopt.

[00:26:32] Ben Lloyd Pearson: You know, it's, it's one thing to adopt one agent for one workflow, um, you know, whatever that workflow may be. But when you start having multiple agents doing different tasks around your, your company. Um, the, the complexity goes up by like basically an order of magnitude. Uh, and it seems like that's going to increase, uh, in the short term.

[00:26:52] Ben Lloyd Pearson: Like people are, are going to be building more of these things, uh, solving their own tasks. And that is really like such a huge challenge now is how do we [00:27:00] get, get these things out of their silos and actually start getting them to, to work across, uh, you know, across boundaries.

[00:27:08] Gary Lerhaupt: Yeah. Yeah, I, the way I think about this is really like. LLM is not enough, right? Like yes, they're amazing. It's this huge gen AI revolution chat, GT four onward. But the LLM alone isn't gonna get you there. Right? And so that zero to one challenge, it's really about kind of like it takes the platform, right?

[00:27:27] Gary Lerhaupt: So how do you get that, that builder experience that really allows you to, to get to high quality? How do you then test it before it gets into production? How do you observe it? After it's in production. 'cause inevitably people use it in ways that you wouldn't expect. And so that's all about kind of getting that first agent working well.

[00:27:43] Gary Lerhaupt: And then you start thinking about these other agents that you start working well and you want to connect the dots. But I, I do wanna just dwell a little bit on this challenge because getting that first agent really functional, this has again, been what we've heard about from our customers again and again.

[00:27:57] Gary Lerhaupt: And I think something where we have a, a really kind of [00:28:00] differentiated story here, right? And so kind of going back to this idea of the, oh, I'm not being enough. We heard about the need for, Hey, gen AI is great with the creativity, but I need more control, right? I, I can't suffice with 95% repeatability with my agents.

[00:28:14] Gary Lerhaupt: I need to have like 99.9, right? Like, that's what we all expect out of OUTTA software. And so we, we just heard this again and again, and we spent the last year really focusing on this challenge, which is where we've just launched, uh, what we call agent script. It's this idea that you have kind of the mix of the creativity of gen AI with a control and determinism of kind of like an expression language, right?

[00:28:35] Gary Lerhaupt: Scripting. Um, and so what, what agent script is, and I, you know, I encourage anyone who's listening and go check it out. 'cause it's, it's the real deal. It's the idea that it's a new kind of language where you can have if then statements, but then in the middle you can have natural language. And with us, that's really the solution to getting to zero to one.

[00:28:52] Gary Lerhaupt: Which then enables us to start thinking about, okay, how do we bust outta these silos? How do we build these new workflows so we can get these agents all [00:29:00] working together? We sort of earn the right now to do that problem, to solve that problem with high quality.

[00:29:05] Andrew Zigler: Earn the right is a great way to put it. You have to like eat your Wheaties, right? You gotta do your. Homework. You have to establish some baselines here. Agent Script is really powerful. I, I got a really cool insider look at Agent Script just earlier this week from some of the Salesforce team. Uh, and it's amazing how you can turn natural language instructions into these deterministic.

[00:29:25] Andrew Zigler: Guardrails that keep the agents from having these repeatable mistakes and they turn your AI workflows into something that's very repeatable and durable. And the compounding effects of this, um, are profound because like you said, companies spent a lot of time last year going from zero to one, and now we're at, we're at one. We're gonna very rapidly go from one to many, and this is gonna cause a lot of really interesting gaps and introduce new problems that I don't think that we've really been able to scratch the surface of yet, simply because we have to be living in that problem first. And one of them is this [00:30:00] growing pain that happens in this siloed information when you have a lot of agents doing things in parallel, potentially stepping on each other, making decisions that override each other. You know, that speed and, and that level of like. Multiplicity, it can come at a real headache. And so I think that's something else that I'm really taking away from this report is this growing orchestration gap. You know, we have all of these agents, but they're not necessarily talking to each other. And in fact, like your report says that 50% of agents, like half of them, they still operate in total silos rather than part of larger systems that can communicate and make decisions as a team. Uh, I'm curious to know, like from your perspective, um, what do you think are some of the ways that we might. Tackle this silo gap. Maybe it's even something, are we overthinking the how much information has to get shared? I'm curious like what your views are on that current problem.

[00:30:53] Gary Lerhaupt: Yeah. Uh, it's a great, great area. Great area to, to go build product in. So, yeah, I think. Going back to this, earn [00:31:00] the right and the kind of concept of eating your own dog food. Dog food must taste really great. I'm sure. Um, it, it, it's, it's really about kinda like you're successful with one agent. This is the pattern we've seen.

[00:31:10] Gary Lerhaupt: You're successful with one agent. So the natural inclination is, okay, great. That, that thing's working. And, and I know it 'cause I have testing, I have absorbability, I have the sort of agent development life cycle that we think about. Well, uh, you know, now we start to think about the sort of multi-agent development lifecycle.

[00:31:24] Gary Lerhaupt: And so while the inclination is to go put more stuff into that working agent, right, more instructions, more capabilities. That's an anti-pattern, right? Like we've all heard about this on the MCP side, that the challenge of plugging in MCP servers and suddenly you blow out your context window. Uh, it's, it's kind of, that's why this is the anti-pattern.

[00:31:42] Gary Lerhaupt: And so while that's the inclination to add more, we don't wanna do that. We don't want these monolithic, gigantic agents. So if that's not the solution, well then the path is, okay, let's focus on building specialist agents that are really good at a set of jobs to be done. And that makes a ton of sense. And again, you need the development lifecycle to really support high [00:32:00] quality.

[00:32:00] Gary Lerhaupt: You need agent script to support the high quality of all that, but nobody wants to go find the right agent. I mean, that, that sucks. No, nobody wants to go do that. And so, you know, what do we want? We want what we call a super agent, right? The sort of primary agent that lives within the client that understands kind of the capabilities of each of those specialist agents, right?

[00:32:19] Gary Lerhaupt: We think about like with a two A, the idea of having an agent card that can des describe those capabilities. We think about leading with trust, of course with, with Salesforce and having governance at the front where. We leave admins in control to register those third party agents. We can then ingest the sort of capabilities there.

[00:32:36] Gary Lerhaupt: Then that enables our builders to go into their, what we call our agent force asset library, and plug in each of those agents so they can go into their primary agent and say, Hey, I wanted to connect to these 3, 4, 5, whatever the, the number of N agents are. And then we could automatically grab those capabilities.

[00:32:52] Gary Lerhaupt: And you know, with our Atlas reasoning engine, as we like to call it. It is able to orchestrate kind of outta the box, but then you can hone that in, right? We have [00:33:00] agent script again now for across multi-agent experiences. And I'll, I'll say this, um, uh, this is also starting to be real. I was on stage at Dreamforce.

[00:33:10] Gary Lerhaupt: Which was back in, uh, I guess last October with, uh, Royal Bank of Canada. And it was amazing to be on stage. We literally were getting the pilot of this sort of, uh, multi-agent thing going, and we live demoed it. So Royal Bank of Canada is one of the like world's largest financial institutions and they have already been building these sort of specialist agents, right?

[00:33:28] Gary Lerhaupt: They think about employee use cases, right? I need an agent to go, you know, reflect on their. You know, client's portfolio or what's on my calendar or what are the next steps, right? And so they built all these specialist agents and we, I mean, you can go find it. I'm not sure what the link is exactly, but we live demoed it.

[00:33:42] Gary Lerhaupt: This is real. This is starting to work, and it's how you bust outta that silo, which is to do it again with a platform approach to pull it all together to build the right sort of ui. So you can do that eventually also with some really great kind of, uh, uh, Dev forward, uh, I think cloud code type of experiences, [00:34:00] getting a little forward looking here.

[00:34:01] Gary Lerhaupt: But this is where like. If you look at agent force and you're like, holy crap, this is a half billion dollar a RR business in less than you know what is this? 18 months. This is the fastest growing product in Salesforce history. All for good reason.

[00:34:15] Ben Lloyd Pearson: Wow. Uh, I, I feel like you're kind of calling me out on the, the, not the anti-pattern on monolithic agents because I feel like I have fallen into that trap in the past. Um,

[00:34:26] Gary Lerhaupt: It's so natural, right? Like it's working. I'll just keep adding more. Eventually you start to see that degrade and, and you wanna have tools that help you understand that. So, you know, okay, it's time to break out into the, you know, agent one, agent two, agent three.

[00:34:37] Ben Lloyd Pearson: Yeah.

[00:34:38] Andrew Zigler: Ben and I have an agent right now that I've built that we're going back and forth on that problem with, and for you to call an anti-pattern, it really echoes a lot of the conversations we've had about like, why, why does this, why does this little guy keep getting confused?

[00:34:49] Gary Lerhaupt: I see a future buddy comedy where I can come and call you out. Maybe, uh, that sounds like a fun time for me.

[00:34:55] Andrew Zigler: Yeah. I, I need, I need a Gary over my shoulder when I'm building our agents. For sure.

[00:34:59] Gary Lerhaupt: Amazing.

[00:34:59] Ben Lloyd Pearson: [00:35:00] Yeah. Yeah. Like judging us openly. Yeah, that'd be great. Uh, yeah, so this, this report also brings up a term that we keep hearing over and over again. Um, there's a lot of different contexts around, uh, how people use this, but shadow ai, so we all know this phrase, shadow it. that has been around for a while now.

[00:35:19] Ben Lloyd Pearson: We're seeing similar trends with ai. Um, and, and it's creating a lot of problems. It is both caused by problems within your organization, but then it creates problems. So I'm wondering if we could break that down a little bit. Like what are, what are you seeing around shadow ai?

[00:35:32] Gary Lerhaupt: Yeah, this is kind of what I was touching on just a bit ago, where we really want to think about how do we expand what you're capable of doing with Agent force within your enterprise, but do it with control.

[00:35:43] Ben Lloyd Pearson: hmm.

[00:35:43] Gary Lerhaupt: Right. And so this idea of having, well, it's two part actually, I think about the discovery challenge, right?

[00:35:48] Gary Lerhaupt: So like what are kind of the like. Things that have been vetted, right? And so we have this new agent exchange approach, which is kinda like the catalog of things that have gone through this extra bit of partner and trust scrutiny. [00:36:00] Um, and it's starting today with sort of MCP servers that are sort of pre-vetted and anyone can go visualize or, or browse that catalog.

[00:36:07] Gary Lerhaupt: Uh, it's gonna extend to third party agents. I think as you, as you see, 2026 go from here. But then for, from my perspective in the way that we're doing this within agent force. It's really about that governance angle of, okay, anyone can see what's available. Anyone can go and, you know, if it speaks a two a, if it speaks MCP start getting going.

[00:36:25] Gary Lerhaupt: But we, we really want, I mean especially from the data perspective, the customer workflows, all of that to keep admins in control of what gets registered so that as you're building your agent, it experiences, it's all on the rails, right? It's coming with the guardrails that are built into to agent force.

[00:36:39] Gary Lerhaupt: And then therefore it's blessed, it's in, it's within that asset library I was referring to. And then that leaves the builders sort of like, okay, here's our list of things that have already been vetted that I can go build with. And then we can keep adding additional agents. I can keep working with, you know, sort of RIT in order to keep this kind of like ever expanding set of ecosystem capabilities, but [00:37:00] with control.

[00:37:00] Gary Lerhaupt: And that's really the perspective we wanna lead with.

[00:37:02] Andrew Zigler: Absolutely. And I, I, there's one more thing in this report that I really want to dial in on, and it's kind of like the culminating point of what this all points to. The idea of having this high level orchestrator, this like so-called. Kinda like a super agent, right? That has that high level context, that has access to this catalog of capabilities. What can my fleet of agents do? What has been approved? And it just becomes your single entry point to operating the whole system. It's the idea of just like swiveling in your chair and talking to your your right hand thing that's just gonna go and do all of the work for you. And I think that that is where all of this is going in the orchestrator pattern.

[00:37:43] Andrew Zigler: It ultimately comes down to you build yourself the one entry point that you need that can then drive everything else downstream. And this is how I think folks start to get really incredible gains from building and combining these systems. Like we've covered it pretty extensively here on the pod, like [00:38:00] Stevie AGA's Gas Town, how that's a great example of a top level orchestrator driving lots of other smaller agentic chains that are doing all sorts of action. That's one example for working in engineering. But I think we're gonna see this model emerge in lots of spaces like knowledge, working and in CRMs, like what we're seeing with Salesforce and its entire ecosystem too. So, you know, we have the volume of agents and requests and with it there's chaos. So let's talk about that. Um, it says that 96% of a IT leaders agree that AI agent success. Depends on integration across systems and or, the reason I bring that up in comp, in complement to the orchestrator problem is really about visibility and being able to drive. I think we as humans are really used to building and using software where ultimately ends up in a form where we can ingest it and use it and interact with it.

[00:38:52] Andrew Zigler: We create very like robust web applications and such, but when you're working with agents. You have to almost throw those assumptions [00:39:00] away and be like, all of the capability I'm going to expose needs to be equally ingestible by a human and usable by a human as it is by a machine and an agent. And this kind of opens up a new design paradigm.

[00:39:12] Andrew Zigler: I'm curious, like how do you think at Salesforce about building the kind of environment needed for these super agents, these top level orchestrators to get all of this work done? What does it look like in the lab?

[00:39:23] Gary Lerhaupt: Yeah, totally. So what's the Hitchhiker's Guide quote? It's turtles all the way down. So it's really, it's orchestration all the way down. Right? And so for us, it's like we're thinking about building super agents and we think about like from the perspective of a brand. Like pick any one of these brands I've already referenced, whether, you know, let's take Pandora, whatever it might be.

[00:39:42] Gary Lerhaupt: You can think about how they're gonna have a brand agent that's super agent that we've helped them craft. And do that with that sort of multi-agent development lifecycle or super agent development lifecycle so that it's working really well across the orchestration of many agents for whatever they're trying to do with their brand tone within Pandora.

[00:39:59] Gary Lerhaupt: [00:40:00] But then you think about, well, I wanna bring that to many different surfaces, right? So whether it's chat GPT or Gemini Enterprise or whatever, it might slack it, whatever it might be, right? There's going to be something there that's going to be orchestrating the the Pandora agent when you try to buy some jewelry.

[00:40:15] Gary Lerhaupt: And then it's gonna orchestrate, its subagent. And so really for us it's, it's first of all leading with kind of that kind of like visual visualization of the sort of like architecture of where this is headed. And then again, it's just about the lifecycle. How do we have observability? So that when we get that request from, you know, whatever it might be, slack or, or chat GPT, and then Pandora deals with it, you know, how is it receiving that?

[00:40:38] Gary Lerhaupt: How is it then, uh, getting it to the right specialist agent? How is it doing that with low latency? These are all, all the challenges of 2026. So, um, fun, fun stuff to go, uh, go build and uh, deliver. But uh, this is exactly where we're headed.

[00:40:51] Ben Lloyd Pearson: Yeah. And, and I'm, I'm just curious you, you know, maybe as one bit of last bit of advice for our audience. Uh, you know, this, this like the super agent idea. [00:41:00] Like, it, it, it, I I love it. It makes a lot of sense and it feels like, I, I agree that it feels like it's where we're going. H how do you start building that today?

[00:41:07] Ben Lloyd Pearson: Like is it, do you start at the top and try to, to build it down or do you start at the, the micro level and try to go upwards? Like what is your, where do you see the most success from? Building something like this.

[00:41:19] Gary Lerhaupt: Yeah. Yeah. I think about, I think about first and foremost, building those agents that do the jobs to be done and really honing those in kind of in the RBC Royal Bank of Canada example I was giving from our perspective. You can then take any one of those. Make that the primary agent and then go add the subagent specialist agents underneath it.

[00:41:37] Gary Lerhaupt: Or alternatively, another pattern we're seeing is you build a specialist agents and then they create this sort of like orchestrator agent, to really hone in how to get it to the right, uh, specific specialist agent. But from our perspective, there isn't anything like magical about the, the primary, the orchestrator or the super agent.

[00:41:52] Gary Lerhaupt: We really just want to thoughtfully think about building the, the platform approach to bringing that determinism so you can get to the right place for the right [00:42:00] utterance, the right request, and, and make it easy to get going. So that that's, that's how we think about it.

[00:42:05] Andrew Zigler: Yeah, it's, it's almost the way you describe it. It's more about working, uh, uh, atomically and being able to expose the things that need to get exposed. Like, uh, up until very recently, the whole game with getting good with agents was mastering your input, so you got the outputs that you wanted. Now, the key to orchestrating your agents is to master the entry point. For that agent and then master what its exit point on what it can do downstream. And if you can do that, then suddenly all of these agents become nodes that can talk to each other, that can connect to each other and pass things along. And it just comes down to, I think, working in that really, uh, contain like contained way, like you said, create those great individual agents. That's your first job before you can get to building the orchestrator. And then once you have them, then you can zoom out and think what needs to go into this agent? And then what needs to come out. That's the job of the orchestrator. And then it kind of gives you the blueprint for what you [00:43:00] have to build.

[00:43:00] Andrew Zigler: So it's really cool to see Salesforce lead the way, especially with your, all of your customer. I love the idea of these huge brands having these, um, these brand agents that expose themselves. So like you've mentioned a few. Another one that I was really partial to was the Williams Sonoma Olive. That would give you recommendations on like, stuff from their website.

[00:43:18] Andrew Zigler: And it would even give you like a recipe that you could make in like the, like the pot that you're gonna buy from them. Like that kind of stuff I think is one clever, but it's also a great example of a top level orchestrator. It's totally aligned with their brand. It doesn't have latency because it's able to delegate its work across a bunch of sub-agents.

[00:43:34] Andrew Zigler: And those specialist agents are really good at doing what you need them to do in that moment. Like, you can ask Olive for a refund just as much as you can ask for a recipe and you're gonna get what you need. Um, so I think it's really cool to see these, um, experiences emerge, uh, especially around the, the super agent concept from Salesforce.

[00:43:50] Andrew Zigler: So it's been really cool to dive in and I will say for our, our, our listeners as well, like. This report's a total, uh, gold mine. We cover a lot of reports here [00:44:00] on Salesforce, but at the top of the year, I think, uh, orchestration in particular is like really important. And this one has tons of great stuff in it.

[00:44:06] Andrew Zigler: So please be sure to check out the stats and, uh, understand where you kind of fall within this, but really be paying attention to what's happening at Salesforce. I think they're leading the way here. you know, Gary, any last things you want to end us on? Um, maybe even a way, a place to point us to where folks can go check out the report.

[00:44:21] Gary Lerhaupt: Oh yeah. No, it's been great talking. Um, as I said, everything I'm sure has changed in the last, what, 25 minutes or so. But uh, yeah, check out the report. I think it's at the Salesforce newsroom and, uh, yeah, really fun conversation.

[00:44:32] Andrew Zigler: Thanks so much for joining us here on Dev Interrupted. It's amazing to go through the news articles and like Ben said, it's been a surreal week of AI news for sure, and it was amazing to be joined by Gary to talk through the connectivity benchmarks report from Salesforce.

[00:44:45] Andrew Zigler: I encourage you, our listeners, uh, if your brain is churning, uh, or if your bot's brain is churning. After listening today's episode, please come find us and leave a comment. Continue the conversation. Uh, Ben and I and Gary, were all on LinkedIn. We would love to hear your thoughts on how [00:45:00] AI is transforming your own engineer.

[00:45:01] Andrew Zigler: Organization, organization, how your orchestrating agents and what your Moltbot is posting these days on Moltbook. So thanks so much for joining us on Dev Interrupted, and we'll see you next time.

[00:45:11] Ben Lloyd Pearson: Yeah. And if there's any agents out there, you know, give us some nice comments out there on Moltbook. You know, we're, we're not quite connected there, but we are watching what you all are doing. You know, leave us some nice comments.

[00:45:21] Ben Lloyd Pearson: I wanna see what the AI agent space thinks about our show.

[00:45:24] Andrew Zigler: Where's the Dev Interrupted sub-molt? Someone needs make that.

[00:45:28] Ben Lloyd Pearson: Yeah, yeah. Maybe we'll make it happen. All right. Well, thanks for joining us, everyone.

Your next listen