Your next user isn’t human: AI agents are changing software design

AI assistants are evolving beyond mere developer tools to become active consumers of technology in their own right. This shift is creating an entirely new paradigm for how products are used and designed, with AI agents becoming a massive new user base that engineering teams need to address. Andrew Hamilton, co-founder and CTO of Layer, a company defining the frontier of agent-accessible tooling, provides insights into this emerging landscape and what it means for engineering leaders.

AI agents are emerging as a primary audience for software products

The rise of AI agents represents a fundamental shift in how software is consumed. No longer are these tools merely helping developers—they're becoming active users of technology themselves. At the center of this transformation is the Model Context Protocol (MCP), which Hamilton describes as a breakthrough in AI extensibility.

"MCP is that first attempt at really creating that app store for LLMs, basically enabling you to plug and play your own custom software with an existing client application," explains Hamilton.

This isn't the first attempt at creating an "app store for LLMs." We've seen similar efforts from ChatGPT with their GPTs, GitHub Copilot with extensions, and various attempts by platforms like LangChain. However, MCP has emerged as the first protocol to gain significant traction, providing a standardized way for LLMs to interact with APIs and services.
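
To make that concrete, here is a minimal sketch of what an MCP server can look like, assuming the official Python SDK's FastMCP helper. The product name and the tool it exposes are hypothetical placeholders rather than any specific vendor's API.

```python
# Minimal MCP server sketch (assumes the official Python SDK: `pip install mcp`).
# The tool below is a hypothetical placeholder; a real server would wrap your
# own product's functionality.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("example-product")

@mcp.tool()
def search_orders(customer_email: str, limit: int = 5) -> list[str]:
    """Return recent order IDs for a customer (stubbed for illustration)."""
    # In a real server this would query your product's API or database.
    return [f"order-{i}" for i in range(limit)]

if __name__ == "__main__":
    # Runs over stdio so an MCP-capable client can launch and call it.
    mcp.run()
```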

What's particularly interesting is how this shift is redefining the consumer landscape. With ChatGPT alone reporting approximately 500 million weekly users, AI interfaces represent a massive potential audience for products. This has profound implications for engineering leaders who now need to consider not just human users but also AI agents as a primary consumption channel.

Why agent experience requires a new design approach beyond DevEx

As AI agents become more prevalent, a new concept is emerging alongside traditional developer experience (DevEx): agent experience. This isn't just a semantic distinction but represents a fundamentally different approach to designing tools and interfaces.

"Agent experience lives right in between that, like what developers can handle, which is total autonomy, and what RPA does on its own, which is a perfectly strict workflow," Hamilton notes.

Hamilton positions agent experience as occupying a middle ground between the full autonomy that human developers require and the strictly defined workflows of traditional Robotic Process Automation (RPA). While humans can navigate complex interfaces and make intuitive leaps, and RPA systems follow rigid, predefined paths, AI agents operate in a semi-autonomous state that requires specialized design considerations.

Engineering teams need to recognize that AI agents can't simply be expected to navigate interfaces designed for humans. They require tailored experiences that account for their unique capabilities and limitations. This realization is driving a reconsideration of how products are structured and presented when the end user is an AI rather than a human.
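
One way to picture that middle ground is a tool that lets the agent choose among a small set of actions while validating every input, rather than handing it raw API access or locking it into a fixed script. The sketch below is purely illustrative; the action names and ticket workflow are hypothetical.

```python
# Illustrative sketch of "semi-autonomous" tool design: the agent decides
# which action to take and with what arguments, but every action is bounded
# and validated, unlike raw API access (full autonomy) or a fixed RPA script.
from enum import Enum

class TicketAction(str, Enum):
    ASSIGN = "assign"
    ESCALATE = "escalate"
    CLOSE = "close"

def handle_ticket(ticket_id: str, action: TicketAction, note: str = "") -> dict:
    """Hypothetical agent-facing tool with a small, validated action space."""
    if len(note) > 500:
        raise ValueError("note must be 500 characters or fewer")
    # A real implementation would call the ticketing system here.
    return {"ticket": ticket_id, "action": action.value, "note": note}
```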

How effective MCP design focuses on workflow over full API access

A common misconception about implementing MCP is that it's simply about mapping existing APIs one-to-one. Hamilton strongly pushes back against this approach, arguing that effective MCP implementation requires a more thoughtful design philosophy.

As Hamilton puts it, "I've spent a great deal of time trying to differentiate what makes one product really good for MCP and what makes a product not good for MCP. And what I've found is that it comes down to the workflows of execution that a user on your platform experiences."

Instead of exposing an entire API surface (which can be overwhelming even for humans), the most effective MCP servers focus on a small subset of endpoints, typically just three or four, that enable core workflows. Hamilton uses the example of Twilio, which has around 1,400 API endpoints, pointing out that mapping all of these would create an unnecessarily complex experience for an LLM.

The better approach is to create pre-assembled workflow "blocks" that combine multiple API calls into logical units of functionality. This is analogous to providing partially assembled Lego pieces rather than individual blocks, making it easier for the AI to understand and execute common tasks. Real-world examples like Docker and Sentry demonstrate how successful MCP implementations create tight feedback loops and contextual value.
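
As a rough illustration of such a workflow block, the sketch below bundles several lower-level calls behind a single MCP tool. The helper functions are hypothetical stand-ins for whatever your backend actually does, not real Twilio endpoints.

```python
# Sketch of a workflow-level MCP tool that combines several lower-level calls
# into one logical unit, instead of exposing every endpoint individually.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("messaging-workflows")

def _lookup_number(phone: str) -> dict:        # hypothetical helper
    return {"phone": phone, "valid": True}

def _send_message(phone: str, body: str) -> str:  # hypothetical helper
    return "msg-123"

def _log_delivery(message_id: str) -> None:    # hypothetical helper
    pass

@mcp.tool()
def notify_customer(phone: str, body: str) -> str:
    """Validate a number, send a message, and record the delivery in one step."""
    lookup = _lookup_number(phone)
    if not lookup["valid"]:
        return "invalid phone number"
    message_id = _send_message(phone, body)
    _log_delivery(message_id)
    return f"sent {message_id}"
```

The agent sees one coherent action, "notify a customer," rather than three disconnected endpoints it would have to sequence correctly on its own.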

Why MCP integration delivers high value with minimal engineering effort

For engineering teams considering MCP integration, Hamilton offers encouraging news: the barrier to entry is surprisingly low.

"You really should be able to put something like that together in three or four days." Hamilton states. "And by the way, when we speak to people, usually that's what we hear is that it's a three to four day initiative. And then it's a two to three day marketing initiative to at least get something out there and to try it out."

This low-effort, high-potential-reward dynamic explains the recent surge in MCP server deployments across the industry. The availability of SDKs for most popular programming languages further reduces the friction for teams looking to experiment.
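
For example, the official Python SDK also ships client helpers, so a team can exercise a prototype server end to end in a few lines. The sketch below assumes the hypothetical server from the earlier examples lives in server.py.

```python
# Sketch of driving an MCP server end to end with the Python SDK's client
# helpers; server.py and the notify_customer tool are the hypothetical
# examples from earlier in this post.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            result = await session.call_tool(
                "notify_customer",
                {"phone": "+15555550100", "body": "Your order shipped."},
            )
            print(result)

if __name__ == "__main__":
    asyncio.run(main())
```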

Hamilton recommends that organizations start by identifying their power users' workflows as the foundation for MCP integration. By focusing on the patterns and processes that current users find most valuable, teams can create MCP implementations that deliver immediate value while requiring minimal investment.

How growing AI adoption is reshaping product and experience strategy

As organizations adapt to this new paradigm, we may see the emergence of dedicated teams focused on agent experience, similar to how DevOps and platform engineering teams emerged to support developer experience.

Hamilton highlights the scale of ChatGPT’s reach, with hundreds of millions of weekly users, suggesting that generative AI tools are approaching widespread adoption across the global population.

This massive user base represents an enormous opportunity for products that can effectively integrate with AI interfaces. However, Hamilton also observes that the current streamlined and polished AI experiences are likely to change under the pressure to monetize, much like how social media platforms transitioned from clean, user-centric designs to more advertisement-driven models over time.

Why specialized tools will define the future of AI agent orchestration

Rather than converging toward a single monolithic AI experience, Hamilton predicts we'll see increased specialization in the AI tooling landscape.

He anticipates a fragmentation of the market, with a wide range of highly specialized products that integrate generative AI in practical and impactful ways. According to him, this is where the most compelling use cases will emerge.

Instead of the "Swiss Army knife" approach that tries to do everything, successful AI implementations will likely focus on doing specific workflows exceptionally well. He points to tools like Lovable, which focuses on web application prototyping, as an example of this specialized strategy.

This vision of the future suggests that users will assemble personalized tool belts of AI capabilities rather than relying on a single "genie in a bottle" solution. For engineering leaders, this means thinking strategically about where their products can deliver unique value in specific workflows rather than trying to compete across the entire AI landscape.

The AI-native organizations that are emerging today demonstrate a different operational rhythm compared to traditional digital-native companies. Hamilton observes that these companies iterate at a rapid pace but tend to abandon projects early if they don't show immediate potential.

"I think that an AI native company does its best to try and figure out what parts of itself it can optimize away with current AI utility," he says, highlighting how AI-native organizations build their workflows around AI capabilities from the ground up rather than retrofitting them into existing processes.

For engineering leaders looking to adapt to this new landscape, Hamilton's advice is clear: start experimenting with MCP now, focus on understanding your power users' workflows, and begin thinking about how to design specifically for AI agent consumption. The barrier to entry is low, and the potential upside is enormous for teams willing to embrace this new frontier of product design.

Listen to Andrew Hamilton’s Dev Interrupted episode here: 

Andrew Zigler

Andrew Zigler is a developer advocate and host of the Dev Interrupted podcast, where engineering leadership meets real-world insight. With a background in Classics from The University of Texas at Austin and early years spent teaching in Japan, he brings a humanistic lens to the tech world. Andrew's work bridges the gap between technical excellence and team wellbeing.
