"If you're still building for humans, then you're missing half of, or even more of your potential consumer base. It's a whole new wave of product adoption."
If you're still building products only for humans, you're already missing out on a massive new customer base: AI agents.
Joining Dev Interrupted is Andrew Hamilton, co-founder and CTO of Layer (a first-of-its-kind MCP agency), to unravel this monumental shift in how products will be discovered and consumed. He dives into how AI agents are rapidly evolving from developer tools to direct consumers of APIs and products, with new standards like MCP spearheading this transformation by effectively creating an "app store for LLMs." This evolution demands a complete rethink of product design, packaging, and user experience for an entirely new kind of user.
Andrew educates us about how successfully leveraging MCP isn't about a simple one-to-one API mapping, but about thoughtfully designing an "agent experience" based on key user workflows and providing pre-packaged capabilities. He shares insights on identifying good MCP candidates, the importance of experimentation in this fast-moving space, and how tools like Layer are defining the frontier of agent-accessible tooling.
Show Notes
- Follow Andrew on LinkedIn: Andrew Hamilton
- Learn more about Layer: buildwithlayer.com
Transcript
(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)
[00:00:00] Andrew Zigler: Hey everyone. Joining us today is Andrew Hamilton, co-founder and CTO of Layer, a company defining the frontier of agent accessible tooling. Today we're diving into something that's going to be redefining how your products get used, whether you are ready for it or not.
[00:00:16] Andrew Zigler: AI agents, they're not just helping developers anymore, they're becoming AI consumers, with new protocols like MCP. And we've been discussing this a bit on Dev Interrupted. We had a recent chat with our guest Sagar Batchu of Speakeasy, who dove into this topic about MCP and how it bridges into APIs, and that's why I'm really excited to have Andrew here today, a fellow Andrew, to continue informing us about how MCP is going to impact the work we do every day.
[00:00:43] Andrew Zigler: Because when MCP is on the scene, if you're still building for humans, then you're missing half of, or even more of your potential consumer base. It's a whole new wave of product adoption. So Andrew, welcome to the show.
[00:00:56] Andrew Hamilton: Yeah. Thanks for welcoming me.
[00:00:58] Andrew Zigler: So we have a lot of cool [00:01:00] stuff to cover. I'm gonna go ahead and jump into our first one here. You know, we've been talking about MCP, the Model Context Protocol, and how it's enabling LLMs to consume APIs. So, Andrew, you're using MCP to build a whole new layer, uh, no pun intended, for accessing tech on the internet. Why do you think MCP is such a transformative shift in how products will be used and consumed?
[00:01:21] Andrew Hamilton: So we've been in something that we call the LLM extensibility space for a long time now, probably a year, year and a half. That's a relatively long time in the AI cycle of things.
[00:01:32] Andrew Zigler: Yes.
[00:01:33] Andrew Hamilton: And what we found is that everybody's tried to create this, I call it an app store for LLMs, over and over and over again. The first iteration of it that I saw was ChatGPT with their GPTs. That unfortunately didn't go very well. After that, GitHub Copilot tried something with Copilot Extensions. There were a bunch of other attempts as well; I believe LangChain [00:02:00] tried an extensibility protocol too. But MCP is the first attempt that's really creating that app store for LLMs, basically enabling you to plug and play your own custom software with an existing client application.
[00:02:13] Andrew Zigler: And so when you move consumption into this model, you're building MCP servers that allow LLMs to make calls into someone's API, and that forces you to reevaluate how you package your products, right? 'Cause now they're being consumed in a new way.
[00:02:28] Andrew Hamilton: Yeah, you know, it's really interesting. It's a user experience shift. We've done this with a lot of companies now, a bunch of different companies. For some of them, the MCP model fits really well: they generally have really good iterative workflows, so the user is aware of what they're asking, then a change is made, and you're able to see it really quickly. There are some companies where, if you just map the API directly, it's not a very good or useful MCP server.
[00:02:58] Andrew Hamilton: I've spent a great deal of time trying [00:03:00] to differentiate what makes one product really good for MCP and what makes a product not good for MCP. And what I've found is that it comes down to the workflows of execution that a user on your platform experiences. A good example of an MCP server that I thought was pretty neat was Sentry's: it can look at your code, tell you what errors came from the code, hit the API to, you know, update or get more information from Sentry, and then update the code accordingly.
[00:03:40] Andrew Hamilton: And so it's a quick, iterative loop, and I think a lot of MCP is finding those types of loops.
[00:03:48] Andrew Zigler: So it's worth calling out an important distinction you made: it's not just about one-to-one matching your API to an MCP server. It's about thinking about the actual flows people are going to be [00:04:00] trying to use your tools in, and you call out a good one with Sentry. It's probably doing a combination of things, like analysis of your code, right?
[00:04:07] Andrew Zigler: Of its security, of its quality, and it's checking it against Sentry's own proprietary information and APIs. Effectively, MCP becomes a way of getting a bunch of really valuable context on demand into the conversation. So maybe that becomes a marker of a good candidate for MCP: when there's really rich context.
[00:04:29] Andrew Zigler: Right.
[00:04:29] Andrew Hamilton: Yeah, the rich context definitely plays a role. I can give you another great example. the one that I personally use the most is Docker.
[00:04:38] Andrew Zigler: Okay.
[00:04:38] Andrew Hamilton: Very similar flow. One of the first things that happens when I'm writing code is I'll take a bunch of the output of a Docker log, take any errors that it throws me, and throw it into ChatGPT, like, immediately, or throw it into whatever service I'm using. And that's the first step. And so it just does that automatically, right? It's a tiny little optimization that [00:05:00] benefits my workflow. And that's where MCP is kind of pushing the boundary. I have seen some poor uses for MCP servers out
[00:05:06] Andrew Hamilton: there. I don't wanna mention any or call any specifically out,
[00:05:10] Andrew Hamilton: but there are a lot that do just directly map their API, one to one.
[00:05:17] Andrew Zigler: Yeah, that's what I've kind of seen. That's what people are seeing and so people get confused right. About like, MCP must just be my API and it becomes a a, a bad starting place.
[00:05:25] Andrew Hamilton: What's the point of MCP if it's just gonna map your API? And I think that's a big problem a lot of people are bumping into: oftentimes the best MCP servers use like three or four API endpoints, but a lot of people map their entire API. So, for example, Twilio maps 1,400 API endpoints, and that's a lot of endpoints for a model to handle. It's a lot of endpoints for a human to handle. So, you know, the utility you can get out of a one-to-one [00:06:00] mapping just doesn't seem to be where MCP servers really shine.
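For context on why endpoint count matters: each tool an MCP server exposes is advertised to the model through the protocol's `tools/list` response as a descriptor with a name, a natural-language description, and a JSON Schema for its inputs, so every mapped endpoint costs context-window space. A rough sketch of a small, curated tool list (the tool names and fields' contents here are hypothetical examples, not any vendor's real API):

```python
# Each MCP tool is advertised to the model as a descriptor: a name,
# a description, and a JSON Schema ("inputSchema") for its arguments.
# The model has to read all of these, which is why a curated handful
# of tools beats mapping 1,400 endpoints one-to-one.
CURATED_TOOLS = [
    {
        "name": "send_message",
        "description": "Send an SMS to a phone number and return the delivery status.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "to": {"type": "string", "description": "E.164 phone number"},
                "body": {"type": "string", "description": "Message text"},
            },
            "required": ["to", "body"],
        },
    },
    {
        "name": "lookup_delivery_status",
        "description": "Look up the delivery status of a previously sent message.",
        "inputSchema": {
            "type": "object",
            "properties": {"message_id": {"type": "string"}},
            "required": ["message_id"],
        },
    },
]


def tools_list_response(tools):
    """Rough shape of the result an MCP server returns for tools/list."""
    return {"tools": tools}


print(len(tools_list_response(CURATED_TOOLS)["tools"]))  # 2 tools, not 1,400
```

The point of the sketch: the model's "menu" is just these descriptors, so two well-described workflow tools are far easier for it to use correctly than hundreds of raw endpoints.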
[00:06:06] Andrew Zigler: It makes sense to me. It also puts a lot of burden on the LLM, not only in its context, understanding this massive library of tools available to it, but also in understanding how to connect them together, like, atomically: oh, I'm gonna chain these few API calls to achieve this goal.
[00:06:24] Andrew Zigler: What you're calling out is that the MCP server should actually come to the LLM with that chain of API calls, or that specific layer of endpoints, kind of prepackaged and ready, so that it's a more end-to-end experience instead of it being like a bunch of Lego pieces.
[00:06:40] Andrew Zigler: It's in fact just like partially assembled pieces of what they wanna build.
[00:06:45] Andrew Hamilton: I actually like the Lego pieces analogy a lot, right? say you have a few blocks and each of these blocks is a respective API endpoint function or a local function. 'cause we don't have to restrict ourselves to just APIs
[00:06:56] Andrew Zigler: Yeah.
[00:06:57] Andrew Hamilton: And either you can [00:07:00] show the LLM: look, this is a common workflow that our users take, which is like an assembled part of your little Lego creation. Or you can give it the blocks completely on its own. Some things work phenomenally in the use case where you just give it whatever to do. For really intuitive, really well-thought-out APIs, that oftentimes works sufficiently.
[00:07:27] Andrew Hamilton: For the more complicated, bigger APIs where you have to do a lot of stuff, I think we see that workflow of building larger initial chunks for the LLM is what allows it to give a better user experience.
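One way to picture the "assembled Lego" idea in code: instead of exposing each endpoint as its own tool, the server exposes one tool that runs the common workflow end to end. This is a stdlib-only sketch with stubbed-out endpoints; all function names and data here are hypothetical, not any real product's API:

```python
# Stubbed "endpoints" standing in for real API calls (hypothetical).
def fetch_recent_errors(project: str) -> list[dict]:
    return [{"id": "e1", "message": "KeyError: 'user_id'", "file": "app.py"}]


def fetch_error_context(error_id: str) -> dict:
    return {"id": error_id, "stack": ["app.py:42 in handler"], "release": "1.3.0"}


# The packaged workflow tool: one call the model can make, instead of
# several endpoint-shaped tools it would have to learn to sequence itself.
def triage_top_error(project: str) -> dict:
    errors = fetch_recent_errors(project)
    if not errors:
        return {"status": "no_errors"}
    top = errors[0]
    ctx = fetch_error_context(top["id"])
    return {
        "status": "needs_fix",
        "error": top["message"],
        "file": top["file"],
        "stack": ctx["stack"],
        "release": ctx["release"],
    }


print(triage_top_error("demo")["status"])  # needs_fix
```

The design choice: the chaining logic lives in the server, so the model only has to decide *whether* to triage, not *how* to sequence three calls.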
[00:07:44] Andrew Zigler: That makes sense. And actually this is starting to bridge into something that you and I have talked about a bit and that I'm seeing emerge as a new topic. And that's this idea of, you know, we have the developer experience, very tried and true part of building any great [00:08:00] product is having a delightful experience for your developers and, and your developer users to be able to interact with the technology you build.
[00:08:07] Andrew Zigler: But it sounds to me like there's a similar emerging concept for AI. You could almost call it an agent experience. And it's not just a semantic differentiation; there's actually a lot that goes into thinking about how an LLM would use your API, just like what you called out in terms of what's available to it.
[00:08:27] Andrew Zigler: So, have y'all been exploring agent experience at Layer and what has that looked like?
[00:08:33] Andrew Hamilton: Yeah, I, I think a good place to start with that is, why do we need agent experience, right?
[00:08:39] Andrew Zigler: Right.
[00:08:40] Andrew Hamilton: So, for example, and this is a straw man, but why is it that you are interviewing me rather than an agent? Why am I even on the interview if an agent can handle it? I think that highlights a really glaring discrepancy between a developer experience, which is designed for a human being, and an agent experience, [00:09:00] which is designed for an LLM. I think early on in this, a lot of people had the expectation that agents were going to be just as autonomous as humans. So far we've been pushing the boundaries, and it's pretty impressive what we've been able to do, but we haven't gotten to that human level of autonomy needed to just essentially use the developer experience. And I'm gonna define agent in a little bit here, so we're all kind of talking about the same thing.
[00:09:28] Andrew Zigler: Yeah.
[00:09:29] Andrew Hamilton: you can't just say to an agent, oh, onboard me onto this like, AI platform. And so I think that's where the emergence of this term agent experience has come from so far.
[00:09:42] Andrew Zigler: Do you wanna define agent for us then, so we're all on the same page with what that might look like?
[00:09:46] Andrew Hamilton: So when I'm gonna refer to agent here, I am just meaning an LLM in a loop that has the ability to make decisions as to whether or not it continues working or it [00:10:00] terminates, pretty simple. It can call tools, it can do that basic kinda stuff. The cursor agent is probably the closest, like implementation of that abstraction there.
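That definition, an LLM in a loop that can call tools and decide whether to continue or terminate, can be sketched in a few lines. The `model` here is a scripted stand-in function, not a real LLM call:

```python
# Minimal agent loop matching the definition above: a model in a loop
# that either calls a tool or decides to terminate. The "model" is a
# scripted stand-in, not a real LLM.
def agent_loop(model, tools, task, max_steps=10):
    history = [("task", task)]
    for _ in range(max_steps):
        action = model(history)            # model decides the next step
        if action["type"] == "finish":     # model chose to terminate
            return action["answer"]
        result = tools[action["tool"]](**action["args"])
        history.append((action["tool"], result))
    return "stopped: step limit reached"


# A scripted model: call the tool once, then finish.
def scripted_model(history):
    if len(history) == 1:
        return {"type": "tool", "tool": "add", "args": {"a": 2, "b": 3}}
    return {"type": "finish", "answer": f"sum is {history[-1][1]}"}


tools = {"add": lambda a, b: a + b}
print(agent_loop(scripted_model, tools, "add 2 and 3"))  # sum is 5
```

Everything interesting in a real agent (Cursor's, or one built on LangChain or CrewAI) is a more sophisticated version of those same three pieces: the loop, the tool calls, and the decision to stop.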
[00:10:09] Andrew Hamilton: So I think we've seen this a lot in open source frameworks out there. There's a few that I'll call out: there's LangChain, which has an agent orchestration system, CrewAI, and there's mcp-agent, which popped up specifically around MCP; I think LastMile AI built it. That's a good one. And so a lot of people are taking that initial, you know, do-everything agent and constraining it to execute specific, very predictable workflows. And there's kind of a scale, from a very weak workflow, where you're not really giving the LLM very much direction but you're giving it a ton of flexibility, so the LLM could do whatever it wants, to a very strict workflow, where essentially it's [00:11:00] not really doing anything more impressive than clicking through a bunch of buttons.
[00:11:03] Andrew Hamilton: We almost call that RPA, uh, Robotic Process Automation. There's a
[00:11:08] Andrew Zigler: Okay.
[00:11:08] Andrew Hamilton: Company, UiPath is responsible for something like that.
[00:11:11] Andrew Zigler: It's very like set flow, like it does this one thing and it's just equipped with the tools to do that one thing.
[00:11:17] Andrew Hamilton: Yeah.
[00:11:18] Andrew Zigler: Okay.
[00:11:19] Andrew Hamilton: it's able to, stuff like that, it's used like UiPath is used very, very frequently at big organizations to automate stuff like filing, filing things, or creating a bunch of, Highly, easily automateable stuff. I think what we're seeing is we're seeing RPA be pushed a little bit into the more autonomous direction, and that the agent experience lives right in between that like what developers can handle, which is total autonomy.
[00:11:47] Andrew Zigler: Right.
[00:11:48] Andrew Hamilton: you know what RPA does on its own, which is a perfectly strict workflow.
[00:11:54] Andrew Zigler: Okay, so in that world, you have this agent experience between [00:12:00] the full autonomy of the devs and this very rigid, formulaic approach to automation. Most people would think of this as when you go and build a drag-and-drop automation for your business or your org, and maybe now it has a little bit of LLM in it, because that's very common these days.
[00:12:18] Andrew Zigler: From your perspective, since you spend all your days looking at how people are building and using these tools, what do you think engineering leaders are still getting wrong about building or integrating their tools into things like MCP or otherwise?
[00:12:34] Andrew Hamilton: Hard to say what they're getting wrong, because there's so much experimentation in the space.
[00:12:39] Andrew Hamilton: I think everybody is trying to figure out what's working and what's not working, and trying to find that balance as to how strongly to strictify a given workflow.
[00:12:54] Andrew Zigler: This is sparking something I've been thinking about: how does this [00:13:00] start to evolve for teams as they do experiment? Because you're rightfully calling out that, you know, how can we know that anything anyone's doing right now is truly wrong? We're in an early experimentation phase.
[00:13:11] Andrew Zigler: I think the takeaway from that is the only thing you could really be getting wrong right now is doing nothing, or turning up your nose at it, or thinking it's just going to pass. Because with every day like that, you're missing out on opportunities that your competitors, other people in your industry, are gonna be taking advantage of.
[00:13:28] Andrew Zigler: And that's just kind of the situation a lot of leaders I know find themselves in. As that evolves, do you see organizations taking concerns like agent experience more seriously, to the point of it becoming a first-class concern for engineering organizations? Like how we saw the evolution of DevOps and platform engineering teams to cater to the developer experience that's so critical for shipping good software.
[00:13:54] Andrew Zigler: Do you think that a natural continuation of this might result in a similar team for [00:14:00] the agent consumption of a product?
[00:14:02] Andrew Hamilton: I do, and my reasoning for this is, I think, pretty simple. ChatGPT, and these agent tools generally, have enormous user bases. I think they quote themselves at around 500 million weekly users, and Cursor has, you know, another several million users. At some point we're getting to, like, an eighth of the population using these gen AI tools.
[00:14:32] Andrew Zigler: Right,
[00:14:32] Andrew Hamilton: now these GenAI tools are beautiful walled gardens. If you take a look at them, there's no ads, there's nothing really, it's like, it's like the good old
[00:14:43] Andrew Zigler: right.
[00:14:43] Andrew Hamilton: search, right?
[00:14:45] Andrew Hamilton: There's no advertising, there's nothing. But eventually people are gonna find out how to sell their products through these tools, through
[00:14:52] Andrew Zigler: ChatGPT,
[00:14:53] Andrew Hamilton: through Claude, through these things. And so it's not so much that, I think that existing enterprises [00:15:00] will probably do all right for, the next couple years with their existing models. But where one enterprise goes beyond another is are they able to figure out a way to tap into this user base, like someone will figure out how to tap into this user base. I can't tell you who. But someone in every single one of those sub-sectors in law, in insurance, in whatever, is going to find a way to make money from the users on ChatGPT. I think that's where the majority of thought should really be allocated is how do you, how do you harness this new, semi-technical user base.
[00:15:42] Andrew Zigler: And then how do you take your product, package it up, and put it into their prompts, so that now, effectively, you don't have to rely on them going to your website and being like, oh, I want to build with this, signing up, getting the API key, and doing all these things. Now there's an opportunity to wrap all of this up in an MCP server that's just [00:16:00] available in tools like ChatGPT.
[00:16:02] Andrew Zigler: Yeah. To tap into that massive user base.
[00:16:05] Andrew Hamilton: Yeah. I mean, I wouldn't mind selling a $1 product to all of ChatGPT's user base. I would be very happy to do that.
[00:16:15] Andrew Zigler: Right. That opens up a whole new paradigm of who's the gatekeeper for that. It makes me think of Apple and the App Store; you know, there have been rulings about the fees they can charge people who use it as a marketplace to sell stuff. Apple takes a huge cut.
[00:16:29] Andrew Zigler: So I think of environments like OpenAI's, with ChatGPT: do they create a marketplace? They had the GPT marketplace, right? That was kind of their first bit of letting people come in and build tools for their user base. So they've shown that their user base can be something other companies could access.
[00:16:47] Andrew Zigler: So maybe MCP is how people, start getting their products in front of them.
[00:16:53] Andrew Hamilton: I think you hit on something really good there, which was the App Store and Apple's notorious walled garden in the App [00:17:00] Store. If you look at all the court proceedings, they're really fighting tooth and nail to keep that 30%, uh,
[00:17:09] Andrew Zigler: Exactly.
[00:17:10] Andrew Hamilton: on everything there. And I think that, ChatGPT, Claude, or the people building these marketplaces are in a potentially similar position
[00:17:21] Andrew Hamilton: to charge money on transactions similar to what the Apple App Store looked like, right?
[00:17:29] Andrew Zigler: So yeah, I think there's a lot of parallels there, right? With the walled garden approach, with people having these massive user bases, and suddenly there's a chance for other companies to connect their tools, their products, into that user base.
[00:17:39] Andrew Zigler: But also, you touched on something I hadn't really thought about before, and you're absolutely right: these beautiful walled gardens. You go in and there are no ads. It's not an ad-laden experience, and you're not getting sold stuff like when you go and search, and maybe I'm looking for cool places to hike, and now Google's trying to sell me hiking boots everywhere.
[00:17:59] Andrew Zigler: You know how [00:18:00] long until you're having conversations and then the LLM is suggesting stuff based on other conversations where it knows what you're looking for and it's like, Hey, I know you've been planning that camping trip. Here's uh, some reservations you should look at. Here's, uh, I know you like glamping, so here's the hotels
[00:18:15] Andrew Zigler: By the park and you know, that kind of stuff. Um,but also it kind of scares me about this, like almost, we talk about this on social media with how, eventually over time it gets worse. You get like this enshittification of any kind of text space with ads and just a poor user experience.
[00:18:30] Andrew Zigler: And just like everything becoming about like the algorithm, making sure that. People can advertise on it. Like people, there's a term for that, right? In on the internet, we call it like enshittification of a platform, and it happens with every kind of golden platform. So maybe that's what will happen with LLMs.
[00:18:45] Andrew Zigler: But, it's definitely, if anything, a transformative opportunity. Do you think so?
[00:18:50] Andrew Hamilton: I think we're in the golden days right now. I think this is as clean as it's gonna get. Everything is so heavily venture-backed; I mean, at some point they run out of [00:19:00] money to invest, to some degree.
[00:19:04] Andrew Zigler: Right. The good, the goodwill only goes for so long until they're like, okay, we need to start getting some returns off of this.
[00:19:10] Andrew Hamilton: And so I think that, uh, a VC subsidy tends to be when you get your golden ages, at least, you know, in modern history.
[00:19:20] Andrew Zigler: Yes, that makes sense. And actually, this ties into another thing I've been thinking about with AI agents that I think you're well suited to inform us about, and that's how this evolves as a user experience. I love that you draw these lines between ChatGPT's experience and early Apple.
[00:19:37] Andrew Zigler: And I think there are also some parallels to when we were building experiences for mobile on the web for the first time. You had this world where you had the desktop website of a service, and then you had the mobile website of a service, and sometimes that was a whole different domain.
[00:19:54] Andrew Zigler: There was a whole period of time when we were going to the m-dot version of whatever the website was to get its mobile [00:20:00] version, right? And then you see this convergence: okay, now we have responsive design, now it's all one website, it just works in whatever you view it in.
[00:20:09] Andrew Zigler: And right now, with LLMs, I think we're in a similar place. It reminds me a lot of early mobile web. But how do you view it? Do you think there are gonna be different lanes, like you go to ChatGPT to ask a question, but you work with an agent to do something else?
[00:20:25] Andrew Zigler: Or do you think eventually it's just gonna be one experience?
[00:20:29] Andrew Hamilton: No, I think it will fractionalize, if that's the word.
[00:20:34] Andrew Zigler: Yeah, it'd actually just become more lanes.
[00:20:37] Andrew Hamilton: I think we're seeing that already, because you have tools like Cursor, which is clearly an IDE, a software development IDE. And Windsurf, which no longer exists.
[00:20:48] Andrew Zigler: Yes. Was just acquired very recently.
[00:20:51] Andrew Hamilton: Was just acquired by OpenAI, yeah. And so each of these experiences is usually first an MCP client; almost all of them [00:21:00] support MCP, at least the big ones we're talking about here. They all have their own unique experience, but generally gen AI is part of that flow. And I don't think we're necessarily gonna see gen AI just embedded in existing products. Like, sure, there's some gen AI in Google Docs, but I don't really use it, and I don't find it a very good experience. At least I personally haven't; I've heard of some people who do. I think instead we're gonna see a huge offshoot of highly specialized products with gen AI embedded in them in very meaningful and very useful ways. That's where I think the use cases are really gonna shine.
[00:21:46] Andrew Zigler: Yeah, so maybe the winner isn't gonna be whoever builds the Swiss Army knife that does everything. The winner's gonna be the person who builds the very specific tool that does maybe one thing very well. [00:22:00] And maybe that one thing it does very well is also tied to something economic, right?
[00:22:04] Andrew Zigler: You can charge for it; it suddenly becomes a tool people in that practice pay for. So it sounds like everyone's gonna be getting very specialized tool belts, full of very specialized workflows and tools they can use.
[00:22:21] Andrew Zigler: Instead of this kind of genie-in-a-bottle, one-size-fits-all, you-ask-it-and-you-get-what-you-need experience.
[00:22:24] Andrew Hamilton: Yeah, I think Lovable is a really good example of one of those specialized tools.
[00:22:29] Andrew Zigler: Oh yeah,
[00:22:29] Andrew Hamilton: For web applications.
[00:22:31] Andrew Hamilton: It's good at prototyping those web applications; it's excellent at it. It's truly a magical experience to use something like that. And that is a very specialized tool. You can't use it to build every piece of software you want to build; it can't hit everything. If I want to build a quick web app with Cursor, it's more work. It's not as easy, not as one-and-done. I'm [00:23:00] really, really fascinated. It's almost impossible for me to predict which tools are gonna be the ones people really love, but I feel like it's pretty easy to assume we're gonna see a lot across the spectrum, from highly non-technical prototyping-type things all the way down to, you need to be a highly trained engineer in a particular space to understand this.
[00:23:22] Andrew Zigler: Right. Highly specialized.
[00:23:24] Andrew Hamilton: Yeah, highly specialized.
[00:23:25] Andrew Zigler: If we were to bottle up all of this advice, let's say for our listeners who are on a software team working on technology that maybe is getting consumed by an LLM, maybe isn't: what are some concrete things an engineering org can start doing today to be ready for this kind of shift you're describing?
[00:23:45] Andrew Hamilton: I think conforming to Model Context Protocol is probably the best thing you can do to somewhat future-proof yourself a little bit. That means launching an MCP server, for the most part, if you have an API service, and integrating an [00:24:00] MCP client into your existing application.
[00:24:02] Andrew Hamilton: And the reason I'm so bullish on MCP right now is that it's a protocol that seems able to handle a lot of the use cases that have been built over the last year or two. So, for example, if you build an MCP server, and if you build it correctly, you can create a RAG chatbot out of that.
[00:24:22] Andrew Hamilton: So you've seen a lot of chatbots for documentation. You can also have it power some form of AI copilot that can go off and execute really nice flows for users. And by building an MCP server, you're building that foundational block to get your product ready for the agentic space. You can also conform to the agent-to-agent protocol, although I can't speak on that as well as I'd like to, because I haven't had the chance to read the protocol. If I was going to allocate bandwidth, I'd say 90/10 [00:25:00] in MCP's favor, because with the MCP server, you get experimentation. And that's when you get to figure things out: if you have one power user of your MCP server, you gotta ask yourself, oh, is this a new avenue I'm gonna be exploring? And I think that's what engineers should be looking for: power users of MCP servers, not necessarily large amounts of usage.
[00:25:27] Andrew Zigler: Right.
[00:25:28] Andrew Hamilton: adoption of these things is going to be challenging While you still need, fairly technical background in order to, to adopt them.
[00:25:36] Andrew Zigler: Yeah, there's so much you can learn from a power user right now. With everything we talk about, it's easy for us to feel like we're in a bubble, or that everyone thinks this way, just because we talk about it so intently and so frequently. And in your user base, that manifests as power users, right?
[00:25:51] Andrew Zigler: But we're still so early in the adoption [00:26:00] flow of LLMs that those power users are your most powerful experimenters. You should be learning from them; I think that's a great takeaway for how teams can get started. So if you haven't started working or experimenting with LLMs, or with MCP specifically, you should look at the current users of your tooling, of your APIs, of the things you'd be interested in packaging into an MCP server, and learn the workflows they use, right?
[00:26:21] Andrew Zigler: Because if you build those workflows into the MCP server, now you have some velocity and you've justified what you just built. So it sounds like you have to start by understanding your core power users first.
[00:26:33] Andrew Hamilton: That would be my suggestion for MCP in particular. They also have an SDK for almost every language you'd want to use.
[00:26:41] Andrew Zigler: Yeah,
[00:26:41] Andrew Hamilton: And so it's just a really nice, easy framework to build on top of
[00:26:46] Andrew Hamilton: You really should be able to put something like that together in three, four days one engineer should be able to handle that in three, four days.
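To give a feel for why the lift is that small: the SDKs generally boil down to registering ordinary functions as tools. The sketch below reimplements that registration pattern with the standard library only, so the registry, decorator name, and tool functions here are illustrative assumptions, not the actual MCP SDK API:

```python
import inspect

# Stdlib-only sketch of the tool-registration pattern the MCP SDKs
# provide: a decorator records each function's name, docstring, and
# parameters so a server could advertise them to a client.
TOOL_REGISTRY = {}


def tool(fn):
    sig = inspect.signature(fn)
    TOOL_REGISTRY[fn.__name__] = {
        "description": (fn.__doc__ or "").strip(),
        "params": list(sig.parameters),
        "fn": fn,
    }
    return fn


@tool
def summarize_logs(service: str, lines: int = 100) -> str:
    """Return a short summary of the last N log lines for a service."""
    return f"summary of last {lines} lines for {service}"


@tool
def restart_service(service: str) -> str:
    """Restart a service and report the new status."""
    return f"{service} restarted"


def call_tool(name, **kwargs):
    """Dispatch a tool call by name, the way a server would."""
    return TOOL_REGISTRY[name]["fn"](**kwargs)


print(sorted(TOOL_REGISTRY))  # ['restart_service', 'summarize_logs']
```

With the real SDKs handling the protocol plumbing on top of a pattern like this, wrapping a few existing workflow functions really is a days-long effort, not a months-long one.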
[00:26:53] Andrew Hamilton: And by the way, when we speak to people, usually that's what we hear: it's a three-to-four-day [00:27:00] engineering initiative, and then a two-to-three-day marketing initiative, to at least get something out there and try it out. And I think that's why we saw an immense amount of hype last month.
[00:27:11] Andrew Hamilton: I mean, last month was, I think we definitely
[00:27:13] Andrew Zigler: Well, relative, you know, for our listeners, for sure, but very recently there was a big push, right? Just like you're saying, I felt like everyone was dropping an MCP server.
[00:27:26] Andrew Hamilton: Everyone was dropping one.
[00:27:28] Andrew Zigler: Yeah, and it was cool. It's easy to build; you're so right that it just takes a day or two, or a few, to really get up and going with a prototype.
[00:27:35] Andrew Zigler: There are also a lot of really great techniques you can use to rapidly prototype them as well. I think that's a good call-out that you should start experimenting, that the lift is low. Basically you should test the waters and just try it out,
[00:27:49] Andrew Zigler: because it's not a heavy investment to do so.
[00:27:51] Andrew Hamilton: Yeah.
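Andrew's earlier point about packaging key user workflows, rather than mapping an API one-to-one, can be sketched in plain Python. This is a hypothetical illustration, not the real MCP SDK: all names are made up, and the registry/dispatch shape only loosely mirrors how an MCP server exposes a pre-built "tool" (a name, a description, and a JSON input schema) that chains several low-level API calls into one capability an agent can invoke.

```python
import json
from typing import Callable

# Hypothetical sketch of "workflow as a tool": instead of exposing every raw
# API endpoint 1:1, package a common user workflow as a single pre-built
# capability. Names are illustrative, not the real MCP SDK.

TOOLS: dict[str, dict] = {}


def tool(name: str, description: str, schema: dict):
    """Register a function as an agent-callable tool with a JSON input schema."""
    def wrap(fn: Callable):
        TOOLS[name] = {"description": description, "inputSchema": schema, "fn": fn}
        return fn
    return wrap


# Two imaginary low-level API calls that the workflow chains together.
def create_ticket(title: str) -> int:
    return hash(title) % 1000  # pretend ticket id


def assign_ticket(ticket_id: int, user: str) -> dict:
    return {"id": ticket_id, "assignee": user}


@tool(
    "file_and_assign_ticket",
    "Create a ticket and assign it to a teammate in one step.",
    {
        "type": "object",
        "properties": {"title": {"type": "string"}, "user": {"type": "string"}},
        "required": ["title", "user"],
    },
)
def file_and_assign(title: str, user: str) -> dict:
    """One workflow-shaped tool that wraps two raw API calls."""
    ticket_id = create_ticket(title)
    return assign_ticket(ticket_id, user)


def call_tool(name: str, arguments: dict) -> str:
    """Dispatch a tool call roughly the way an MCP host would, returning JSON text."""
    result = TOOLS[name]["fn"](**arguments)
    return json.dumps(result)


if __name__ == "__main__":
    print(call_tool("file_and_assign_ticket",
                    {"title": "Fix login bug", "user": "sam"}))
```

The point of the sketch is the granularity: the agent sees one `file_and_assign_ticket` capability matching a real user workflow, not two disconnected endpoints it has to sequence itself.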
[00:27:52] Andrew Zigler: With a company like yours, you know, we don't get a lot of opportunities to talk with AI native companies and startups, because, [00:28:00] one, they're new, and two, they're all in stealth mode right now, building like maniacs, and they're not coming up for air or to talk with people right now as much.
[00:28:08] Andrew Zigler: And the ones that are, they're like yourself: they're informing people how they need to get ahead of the adoption curve. And this is really great, because it's an opportunity for everyone to learn about the organizational footprint of the new companies popping up right now.
[00:28:23] Andrew Zigler: And I wanted to ask you, Andrew, in our chat: what makes an AI native company like yours different from, say, a digital native company?
[00:28:31] Andrew Hamilton: Fascinating. It's a great question. So, speed of iteration is quick, like it's really, really fast. You'll see something, you'll have an idea, and it will have been built a day before because of the amount of stuff that's shipped in the space. But people have a very specific bandwidth, and so a lot of those projects are abandoned very early on in their lifecycle. As an example, I'll give you one that we personally work [00:29:00] on all the time now, and it's something we do consistently. There's a specification out there called the OpenAPI specification. It's basically a standard
[00:29:08] Andrew Zigler: Yeah.
[00:29:08] Andrew Hamilton: that, I'm sure most of your listeners are aware of. But, it's a standard protocol for describing an API. And one of the first ideas I had when open API, this was like a day or two after the MCP launch, I was like, oh, I'm gonna do an open API to MCP, server creator. And within think 24, 48 hours, one had been built. One had been built using almost entirely ai.
[00:29:33] Andrew Zigler: Right.
[00:29:35] Andrew Hamilton: The thing is, like most AI products, that one was abandoned, and then I saw another one pop up.
[00:29:42] Andrew Hamilton: That one was abandoned. So you see a lot of these really quick ideas, quick testing and iterating and moving on. It gives the semblance that the industry is moving really, really, really quickly, when in reality it's still trying to perfect a lot of, [00:30:00] honestly, some of the really basic stuff.
[00:30:02] Andrew Hamilton: For example,
[00:30:02] Andrew Zigler: Right.
[00:30:03] Andrew Hamilton: OpenAPI to MCP, you would think, would be solved by now. You would think, oh, that's done,
[00:30:09] Andrew Zigler: Right.
[00:30:10] Andrew Hamilton: it's not. I've seen a hundred projects of it. seen a hundred like different versions of it, and they're all like, okay. is what I've seen, and the ones that, you know, most of them aren't maintained. So I think that an AI native company, does its best to try and figure out what parts of itself, it can optimize a way with current, AI utility. So for example, if you, I don't want to. Generate project proposals, I can template them and customize them, very easily for specific stuff. So I can automate that basic grunt work now, a more traditional company, I'm sorry, it was a digital native, the term you used.
[00:30:52] Andrew Zigler: Yeah, digital native, I think is what we have to say,
[00:30:54] Andrew Hamilton: has processes like that, which are already embedded in the company's overall [00:31:00] function,
[00:31:00] Andrew Hamilton: I think you can become an AI native company by looking at those somewhat automatable workflows, trying to shove AI into them, and seeing if it works or not. It doesn't always work. Sometimes you'll find out that the quality of something you're producing plummets, right? In which case it's not a good use case. But in other cases you're freeing up a lot of your employees to do stuff that is, you know, a much more human task, something that AI just can't handle quite yet.
[00:31:29] Andrew Zigler: Right.
[00:31:30] Andrew Hamilton: I think that's what differentiates the digital native from the AI native. It's that we get to start off building our workflows, like
[00:31:38] Andrew Zigler: Right.
[00:31:39] Andrew Zigler: They didn't get to start off having somewhat autonomous agents to be able to build companies. Andrew, this has been a super, incredible conversation, and it's been great to have your insights here. I think you gave our listeners a lot of actionable takeaways, and you painted a picture of how AI is going to evolve.
[00:31:57] Andrew Zigler: We talked a bit about walled gardens and [00:32:00] how they're going to change, and the opportunities presented for engineering teams right now. So if you're an engineer on a team that could potentially have an impact with MCP, I think this is your invitation, your call to action, to go experiment, because the lift to get an initial idea going is so low.
[00:32:17] Andrew Zigler: and that it's really effective to just start brainstorming now about how your teams can use MCP, whether internally to go faster, better at building software or externally to give your developer users superpowers. I think there's a lot to, to gain here. And before we wrap up, Andrew, where can our audience go to learn more about Layer in the work you're doing?
[00:32:37] Andrew Hamilton: I think we'll link our website. I think that would probably be the best place.
[00:32:41] Andrew Zigler: Perfect.
[00:32:41] Andrew Zigler: We'll share the Layer website so y'all can go check it out, and obviously we'll be staying in touch and following the story on socials as well. And so to you, our listener: if you've made it this far, then you clearly liked what we chatted about today, nerding out about all things MCP.
[00:32:57] Andrew Zigler: I think it's a cool conversation that we're gonna continue, but I would love to [00:33:00] hear what you think about it. So be sure to subscribe and share the episode, check out our Substack, drop a comment, and let us know what you think. Andrew and I are both on LinkedIn; it's really easy to get ahold of both of us.
[00:33:10] Andrew Zigler: In fact, you can probably just type Andrew into the search bar and we might even both be there. So, you should definitely reach out to us, drop a comment. We'd love to hear from you about what you're experimenting with. And thanks for joining us for Dev Interrupted. We'll see you next time.