Artificial intelligence is no longer just a technology; it's a core operating metric for the modern enterprise. As Lake Dai, a globally recognized expert on AI and professor at Carnegie Mellon University, points out, "AI right now has become a core operating metric. In S&P 500 earnings calls, 287 out of 500 have mentioned AI in their earnings." This massive strategic shift means engineering leaders are being pulled into new, high-stakes conversations far outside their traditional technical domains.
The metrics that matter have evolved from response times and scalability to concrete business outcomes, like lifting conversion rates. This new reality presents two unprecedented challenges: the staggering and unpredictable cost of compute and the massive, often-overlooked blind spot of AI governance.
With experience stretching back to 2002 and pivotal roles at Apple and Alibaba, Dai offers a uniquely informed perspective on navigating this transformational period. This article explores her insights on why leaders must now think like CFOs, the three-level framework for tackling governance, and the fundamental trends to focus on when the hype feels overwhelming.
The growing challenge of compute costs
One of the most significant and immediate shifts in the AI landscape is the dramatic increase in compute requirements. Dai notes that compute consumption "has increased by 100x" since the launch of GPT. Even as models become more efficient, she is firm that "compute will remain a constraint," forcing a new kind of financial discipline on engineering teams.
This growth has transformed the financial structure of AI-driven companies, where "30-50% of operating costs are compute costs." This puts engineering leaders in an unfamiliar position. They must now "think from the lens of a CFO to predict and manage these expenditures," developing financial acumen alongside their technical expertise.
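To make that CFO lens concrete, here is a minimal back-of-the-envelope forecast of monthly compute spend. The request volumes, token counts, price, and growth rate below are hypothetical placeholders, not figures from Dai; the point is simply that spend compounds with usage and has to be modeled rather than discovered on the invoice.

```python
# Rough compute-spend forecast for an LLM-backed product.
# All numbers are hypothetical placeholders; substitute your provider's
# actual per-token pricing and your own usage telemetry.

MONTHLY_REQUESTS = 2_000_000      # assumed request volume
TOKENS_PER_REQUEST = 1_500        # assumed avg prompt + completion tokens
PRICE_PER_1K_TOKENS = 0.002       # assumed blended $ price per 1K tokens
MONTHLY_GROWTH = 0.15             # assumed month-over-month usage growth

def forecast_compute_spend(months: int) -> list[float]:
    """Project monthly compute spend under a simple compound-growth model."""
    spend = []
    requests = MONTHLY_REQUESTS
    for _ in range(months):
        tokens = requests * TOKENS_PER_REQUEST
        spend.append(tokens / 1_000 * PRICE_PER_1K_TOKENS)
        requests *= 1 + MONTHLY_GROWTH
    return spend

if __name__ == "__main__":
    for month, cost in enumerate(forecast_compute_spend(6), start=1):
        print(f"Month {month}: ~${cost:,.0f}")
```

Even a toy model like this turns compute from a surprise line item into something a leadership team can budget, challenge, and track against revenue.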
The compute landscape is also fracturing. Leaders must now create a strategy that balances the power of cloud-based models with the need for edge computing—running AI directly on devices. This trend is driven by a desire to reduce costs and cloud dependency, address privacy concerns by keeping sensitive information local, and improve response times for real-time applications.
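In practice, that balance often comes down to a per-request routing decision. The sketch below assumes a hypothetical `choose_backend` policy with made-up thresholds; a real decision would be tuned against measured latency, quality, and cost data.

```python
from dataclasses import dataclass

@dataclass
class Request:
    contains_pii: bool        # sensitive data should stay on-device
    latency_budget_ms: int    # real-time features need fast responses
    complexity_score: float   # 0.0-1.0 rough estimate of task difficulty

# Illustrative thresholds; tune against your own latency and quality data.
EDGE_LATENCY_CEILING_MS = 200
EDGE_COMPLEXITY_CEILING = 0.4

def choose_backend(req: Request) -> str:
    """Route a request to the on-device (edge) model or the cloud model."""
    if req.contains_pii:
        return "edge"   # keep sensitive information local
    if req.latency_budget_ms < EDGE_LATENCY_CEILING_MS:
        return "edge"   # real-time path: avoid network round trips
    if req.complexity_score > EDGE_COMPLEXITY_CEILING:
        return "cloud"  # heavier reasoning goes to the larger hosted model
    return "edge"       # default to the cheaper, local option

print(choose_backend(Request(contains_pii=False, latency_budget_ms=500,
                             complexity_score=0.8)))  # -> "cloud"
```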
AI governance: the biggest blind spot for leaders
When asked about the most significant challenge facing engineering leaders, Dai doesn't hesitate: "AI governance is the biggest blind spot." The regulatory landscape is evolving at a breakneck pace, and leaders are caught between policymakers who can't keep up with technology and innovators who aren't complying with emerging requirements. The key is to "create a framework that allows you to innovate while protecting your organization and consumers."
Dai recommends a comprehensive governance framework structured across three levels:
- Unit Risk: Addressing issues at the level of individual code and components.
- System Risk: Implementing systematic processes to identify bias, hallucinations, and safety concerns during development.
- Ethical Risk: Establishing clear organizational values and cultural norms that guide AI development from the top down.
The first step is simple: "Process and document it." Dai advises that the framework doesn't have to be perfect from day one. By starting with a basic checklist, much like the one a surgeon runs through before an operation, organizations can begin to build the muscle of responsible and repeatable AI development.
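One way to "process and document it" is to encode that starter checklist directly in code or configuration so it runs before every release. The checks below are illustrative examples mapped to Dai's three levels, not a complete or prescribed list.

```python
# A minimal, checklist-style governance gate organized around the three
# risk levels above. The specific checks are hypothetical placeholders.

GOVERNANCE_CHECKLIST = {
    "unit_risk": [
        "Prompt templates and model configs are code-reviewed",
        "Unit tests cover known failure cases for each AI component",
    ],
    "system_risk": [
        "Bias and hallucination evaluations run on every release candidate",
        "Safety incidents have a documented escalation path",
    ],
    "ethical_risk": [
        "Use case reviewed against the organization's published AI principles",
        "Sign-off recorded from the accountable owner",
    ],
}

def release_gate(completed: dict[str, set[str]]) -> bool:
    """Return True only if every documented check has been completed."""
    for level, checks in GOVERNANCE_CHECKLIST.items():
        missing = [c for c in checks if c not in completed.get(level, set())]
        if missing:
            print(f"[{level}] missing: {missing}")
            return False
    return True
```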
The evolution of AI-native infrastructure
As organizations embed AI more deeply, the very nature of software infrastructure is changing. "We're switching from user interfaces to agent-friendly infrastructures, like orchestration and API layers," Dai explains. This requires engineering leaders to champion new architectural approaches that are built for AI-to-AI communication, not just human-to-machine interaction.
This shift introduces complex new relationship patterns. As Dai notes, "We're talking about a matrix now. Internal and external collaborations, human to human, human to AI, and AI to AI." This evolving web of interactions, where agents can spawn sub-agents, creates a significant challenge in maintaining alignment with the original human intention.
Ensuring that the final output of a complex, cascading network of agents remains true to the initial goal is a novel problem. Leaders must now develop new strategies and architectures specifically designed to maintain context and alignment across these distributed, autonomous systems.
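One simple pattern is to thread the original human intent through every delegation and validate each result against it before it propagates back up. The sketch below is a hypothetical skeleton under assumed abstractions (the `Intent` object, the `aligned_with` check, the placeholder agent body), not a reference implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    """The original human goal, carried unchanged through the agent tree."""
    goal: str
    constraints: list[str] = field(default_factory=list)

def aligned_with(output: str, intent: Intent) -> bool:
    """Placeholder alignment check; a real system might use an evaluator
    model or rubric here. This stub only enforces the stated constraints."""
    return all(banned not in output for banned in intent.constraints)

def run_agent(task: str, intent: Intent, depth: int = 0, max_depth: int = 3) -> str:
    """Each (sub-)agent receives the same root intent and validates its
    output against it before handing the result back up the chain."""
    if depth >= max_depth:
        raise RuntimeError("Agent cascade exceeded allowed depth")

    # Placeholder "work": a real agent would call a model or spawn sub-agents,
    # passing `intent` along unchanged with every delegation.
    output = f"[{task}] handled toward goal: {intent.goal}"

    if not aligned_with(output, intent):
        raise ValueError(f"Output drifted from goal: {intent.goal!r}")
    return output

print(run_agent("summarize customer feedback",
                Intent(goal="produce a board-ready summary",
                       constraints=["raw customer emails"])))
```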
Simulated training environments: the new key to upskilling
How can organizations safely and effectively upskill their entire workforce for this new AI-native world? Dai proposes an innovative solution: AI-simulated training environments. These "flight simulators" for AI would allow teams to practice working with powerful tools in safe, controlled settings before deploying them in production.
"I think [an] AI simulated training environment is one of the keys to upskill your organization," Dai suggests. This approach is especially valuable for junior developers, who can build expertise with powerful but potentially risky technologies without endangering critical systems.
This concept extends to other areas, such as cybersecurity. In a simulated environment, a "red team" AI can be trained to continuously find new attack vectors, while a "blue team" AI learns to develop defensive countermeasures in response. This reinforcement learning loop allows organizations to build far more robust security solutions than was previously possible.
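Stripped to its essentials, that loop alternates attack proposals and defense updates inside a sandbox. The toy sketch below only shows the shape of the loop; the agent classes, their attack vectors, and the learning step are hypothetical stand-ins for what would be model-driven components in practice.

```python
import random

class RedTeamAgent:
    """Hypothetical attacker: proposes an attack vector each round."""
    VECTORS = ["prompt injection", "data exfiltration", "credential stuffing"]

    def propose_attack(self) -> str:
        return random.choice(self.VECTORS)

class BlueTeamAgent:
    """Hypothetical defender: learns a countermeasure for each seen attack."""
    def __init__(self) -> None:
        self.defenses: set[str] = set()

    def defend(self, attack: str) -> bool:
        return attack in self.defenses      # blocked only if already learned

    def learn(self, attack: str) -> None:
        self.defenses.add(attack)           # add a countermeasure after a breach

def simulate(rounds: int = 10) -> None:
    red, blue = RedTeamAgent(), BlueTeamAgent()
    for i in range(1, rounds + 1):
        attack = red.propose_attack()
        blocked = blue.defend(attack)
        if not blocked:
            blue.learn(attack)              # the defense improves each round
        status = "blocked" if blocked else "breached, defense updated"
        print(f"Round {i}: {attack:<20} {status}")

simulate()
```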
The wisdom to slow down
The rise of AI has placed engineering leaders at the center of a complex storm of new financial, ethical, and technical challenges. In a landscape defined by breakneck speed, Lake Dai's most valuable advice is also the most counterintuitive: slow down.
Rather than chasing every new model or trend, she recommends focusing on the fundamental, historical patterns of technology adoption. "I can't wait for us not to talk about AI," she says, just as "we don't talk about the internet anymore... or mobile anymore." Eventually, AI will mature from a buzzword into a deeply embedded, foundational capability.
By taking a measured approach and focusing on the fundamentals—sound governance, financial discipline, and a robust training strategy—leaders can guide their organizations through the hype cycle. They can build a durable advantage by focusing on the long-term patterns that matter, positioning their teams to thrive as AI becomes the new, invisible foundation for business value.
For the full story on navigating AI leadership in the enterprise, listen to Lake Dai discuss these ideas in depth on the Dev Interrupted podcast.