
Sycophancy in AI: The dangers of people-pleasing artificial intelligence


When we design products, our goal is to make users happy. But what happens when that goal creates an AI that acts like an insincere people-pleaser? This is the core paradox of the AI era, where optimizing for user satisfaction can unintentionally train AI systems to prioritize positive feedback over objective truth.

Dr. Tatyana Mamut, co-founder and CEO of Wayfound, calls this dangerous phenomenon "sycophancy in AI." She warns that by rewarding AI for simply making us "like" its responses, we are unconsciously creating digital sycophants. "If you raise a child that's trained to just make the people around them like them... you're gonna get an adult who's a people pleaser," Dr. Mamut explains. "We're doing the same things unconsciously with AI models."

This "slimy" feeling of interacting with an overly agreeable AI highlights a critical challenge: traditional product development frameworks are breaking down. Unlike deterministic software, probabilistic AI requires a new approach to organizational culture, accountability, and management. This article explores Dr. Mamut's frameworks for addressing AI sycophancy, including how to solve the principal-agent problem, implement effective AI supervision, and build the "multi-sapiens" workforce of the future.

Defining ownership and accountability in AI systems

As organizations deploy AI agents, they face a timeless management challenge: the principal-agent problem. When an agent acts on behalf of a principal, who is ultimately responsible for its work? This question becomes especially complex when the agent is an AI.

"If you are a principal and you hire someone or something... to act on your behalf, you are on the hook for making sure that person is trained well and then supervised well," Dr. Mamut explains.

In the context of AI, many organizations mistakenly assign this responsibility to IT or engineering teams. Dr. Mamut argues this is a critical error. The principal should be the business leader whose operations the AI supports. For example, if an AI handles customer service, the VP of Customer Service—not the IT department—is the principal.

This framework requires giving business leaders direct, non-intermediated control over the AI systems they are accountable for. "Imagine hiring an employee and saying to the manager... you don't get to see the work of the employee directly... You have to go through the IT team," Dr. Mamut illustrates. "That's crazy." Without direct oversight, business leaders lack the visibility and control needed to trust and adopt AI solutions.

Effective governance, therefore, depends on establishing clear standards for AI identity and transparency. Dr. Mamut, through her work with the Agency Consortium, is focused on ensuring every AI agent can be traced back to a human principal. "You could create a system of agents where you can never actually figure out who is the human entity that has trained and is supervising the agents," she warns. "The law can't have that."
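
To make that traceability requirement concrete, here is a minimal sketch of an agent registry that refuses to answer for any agent without a registered human principal. The class names, fields, and example values are illustrative assumptions for this post, not the Agency Consortium's actual specification or Wayfound's implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class Principal:
    """The human business leader accountable for an agent's behavior."""
    name: str
    role: str   # e.g. "VP of Customer Service", not "IT"
    email: str


@dataclass
class AgentRecord:
    """Registry entry tying an AI agent to its accountable human principal."""
    agent_id: str
    purpose: str
    principal: Principal
    registered_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


class AgentRegistry:
    """Guarantees every deployed agent traces back to a human principal."""

    def __init__(self) -> None:
        self._records: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._records[record.agent_id] = record

    def accountable_human(self, agent_id: str) -> Principal:
        """Answer the governance question: which human owns this agent?"""
        record = self._records.get(agent_id)
        if record is None:
            # An untraceable agent is exactly what the law can't have.
            raise LookupError(f"No registered principal for agent {agent_id!r}")
        return record.principal


registry = AgentRegistry()
registry.register(AgentRecord(
    agent_id="support-bot-1",
    purpose="Tier-1 customer support",
    principal=Principal("Jane Doe", "VP of Customer Service", "jane@example.com"),
))
print(registry.accountable_human("support-bot-1").role)  # VP of Customer Service
```

Note that the principal recorded here is the business leader whose operations the agent supports, mirroring the point above: accountability lives with the VP of Customer Service, not the IT department.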

The AI supervisor: the key to effective AI management

To provide leaders with the necessary oversight, Dr. Mamut and her team at Wayfound have developed a novel solution: an AI supervisor. "We've built an AI supervisor," she explains, "an AI agent who is trained for the one job of managing other AI agents and helping the humans understand what they're doing."

The AI supervisor acts as a bridge between an organization's goals and its AI agents' actions. It functions as a first-line reviewer, rating agent interactions on a red/yellow/green scale before they reach a human. This system filters out problematic responses and provides crucial context for human reviewers: green ratings come with an explanation of why the interaction meets guidelines, yellow ratings flag specific areas for human review, and red ratings trigger automatic rejection or escalation.
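
As a rough sketch of how that red/yellow/green triage might look in code (the Rating enum, function name, and routing strings are assumptions for illustration, not Wayfound's actual implementation):

```python
from enum import Enum


class Rating(Enum):
    GREEN = "green"    # meets guidelines
    YELLOW = "yellow"  # borderline; needs a human reviewer
    RED = "red"        # violates guidelines


def route_interaction(rating: Rating, response: str, rationale: str) -> str:
    """Route one agent response based on the supervisor's rating.

    GREEN passes through with an explanation of why it met the guidelines,
    YELLOW is queued for a human reviewer with the flagged areas attached,
    and RED is rejected or escalated before reaching the end user.
    """
    if rating is Rating.GREEN:
        return f"deliver: {response!r} (met guidelines: {rationale})"
    if rating is Rating.YELLOW:
        return f"queue for review: {response!r} (flagged: {rationale})"
    return f"escalate: {response!r} (violation: {rationale})"


print(route_interaction(
    Rating.YELLOW,
    "Sure, I can refund that for you!",
    "refund promised without checking eligibility",
))
```

The design point is that humans only see interactions already enriched with a rationale, which is what keeps review scalable rather than a bottleneck.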

The impact is significant. "We speed these things up dramatically," notes Dr. Mamut. "In the first week we sped up a customer by 300%." This supervised approach provides a scalable solution that maintains quality control without the unmanageable bottlenecks of a purely human-in-the-loop process.

The multi-sapiens workforce of the future

Looking ahead, Dr. Mamut envisions a future where organizations are composed of both humans (homo sapiens) and AI agents (AI sapiens) working in collaboration. This "multi-sapiens workforce" requires a fundamental rethinking of organizational design.

"We are right now at Wayfound a team of 30. We have seven humans... and 23 AI agents," Dr. Mamut shares. "We view ourselves as a fully multi-sapiens workforce."

In this model, humans focus on uniquely human tasks like strategic vision, aesthetic judgment, and relationship building, while AI agents handle more routine work. For example, at Wayfound, AI agents manage tasks traditionally handled by product management and marketing departments. "We don't have product management really in our company," she explains, "because all the AI agents manage the roadmap, synthesize the customer insights, [and] write up all of the product requirements." This allows the organization to maximize the value that both humans and AI bring to the table.

Building an honest AI workforce

The challenge of AI sycophancy reveals a fundamental truth: we cannot build the future of artificial intelligence on outdated organizational structures. Simply plugging powerful AI into existing workflows without changing the underlying principles of accountability and supervision is a recipe for creating insincere, untrustworthy systems.

Dr. Mamut’s frameworks provide a clear path forward. It begins with establishing unambiguous ownership, ensuring a human principal is always accountable for an AI's actions. It is made practical through active supervision, using tools like an AI supervisor to give leaders real-time visibility and control. And it culminates in a visionary yet practical redesign of the organization itself—the multi-sapiens workforce, where humans and AI collaborate based on their unique strengths.

For leaders navigating this new frontier, the goal is not merely to adopt AI, but to build a culture of accountability around it. By doing so, we can move beyond creating people-pleasing AI and start building honest, effective, and truly intelligent partners for the future of work.

Listen to Dr. Tatyana Mamut's Dev Interrupted episode: 


Andrew Zigler

Andrew Zigler is a developer advocate and host of the Dev Interrupted podcast, where engineering leadership meets real-world insight. With a background in Classics from The University of Texas at Austin and early years spent teaching in Japan, he brings a humanistic lens to the tech world. Andrew's work bridges the gap between technical excellence and team wellbeing.
