The promise of AI-driven development was supposed to be simple: generate code faster, ship features faster, and transform productivity. Yet as 2025 draws to a close, engineering leaders are confronting an uncomfortable reality. Despite a 30% increase in pull requests created with AI assistance, organizations are only releasing roughly 2% more features to production. The code is flowing, but the value is not following.
Ori Keren, co-founder and CEO at LinearB, has been tracking this phenomenon closely. Last year, he made a contrarian prediction that productivity would actually decline in 2025 as teams grappled with AI adoption. Now, looking ahead to 2026, he is doubling down on pragmatism over hype. His view: teams still won't see the promised 2-3x productivity gains, because the bottleneck is no longer code generation. It is everything else in the software delivery lifecycle.
Why the SDLC is the new bottleneck
The data tells a sobering story about where AI-generated code actually goes. While upstream velocity has increased dramatically, those gains evaporate as code moves through the pipeline. Keren observes that upstream velocity increases are frequently lost to downstream chaos, resulting in a net decrease in stability and quality across the industry.
The problem is not that AI can't write code. It is that our delivery systems were not designed for this volume. Code review queues have become massive bottlenecks, with pull requests stalling at the review and merge stages. Testing infrastructure struggles with increased load and flakiness. The entire back half of the SDLC has become a constraint on the productivity gains AI promises.
This reality check forces engineering leaders to return to basics.
"It's almost like going back to SDLC fundamentals," Keren says. "Like what are the phases? Where does AI really play? Where does it give us the productivity gain and where does it hurt us or take us even back?"
Code review is the critical constraint
If code generation is where AI started, code review is where it needs to go next. The mathematics are simple. If AI enables developers to create 50% more pull requests, but your review process remains unchanged, you have just created a massive backlog. Code review has emerged as the critical bottleneck in 2025, and it will take center stage in 2026.
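The arithmetic behind that backlog can be sketched in a few lines. The numbers below are hypothetical, chosen only to illustrate the dynamic: if AI pushes PR arrivals 50% above a team's fixed review capacity, the queue grows linearly, week after week.

```python
# Hypothetical numbers: a team that could review 40 PRs/week sees AI
# push arrivals to 60 PRs/week while review capacity stays flat.
arrivals_per_week = 60   # PRs created, boosted by AI assistance
capacity_per_week = 40   # PRs the team can actually review and merge

backlog = 0
for week in range(1, 9):
    # Every week, the excess arrivals pile up in the review queue.
    backlog += max(0, arrivals_per_week - capacity_per_week)
    print(f"week {week}: backlog = {backlog} unreviewed PRs")

# After 8 weeks, 160 PRs sit unreviewed. Throughput to production is
# still capped at 40/week, no matter how fast the code was generated.
```

The point is that the excess never drains on its own: as long as arrivals exceed review capacity, the gap compounds, which is why the review stage, not generation, sets the ceiling on delivered value.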
Keren predicts that 2026 will be the "year of code reviews" where leaders must decide how to deploy AI smartly. This will involve brave decisions, such as allowing AI agents to review code written by other agents, with humans stepping in only when risk thresholds are exceeded.
Automation alone is not enough. The real opportunity lies in context-aware automation that integrates change risk analysis and enterprise policy constraints. Traditional code review policies, such as mandatory approvals for every single line of code, are insufficient for the volume and velocity that AI enables.
Dynamic workflows unlock sustainable velocity
The future of software delivery is not about making every process faster. It is about making smart decisions about where to apply rigor and where to optimize for speed.
This approach requires organizations to define clear policies upfront regarding what constitutes a high-risk change versus a low-risk one. Keren advocates for dynamic pipelines that perform a risk analysis on every pull request. Based on that analysis, the system decides whether to route the code to an AI agent for review or require human intervention.
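A risk-routing policy like the one Keren describes can be sketched as a small scoring function plus a threshold. Everything here is an assumption for illustration: the signals (`files_changed`, `lines_changed`, `touches_auth_or_payments`), the weights, and the `RISK_THRESHOLD` knob are invented stand-ins for whatever a real platform would measure and enforce.

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    files_changed: int
    lines_changed: int
    touches_auth_or_payments: bool  # stand-in for "sensitive area" signals

def risk_score(pr: PullRequest) -> float:
    """Toy risk heuristic in [0, 1]; real platforms weigh far more signals."""
    score = 0.0
    score += min(pr.lines_changed / 500, 1.0) * 0.4   # larger diffs are riskier
    score += min(pr.files_changed / 20, 1.0) * 0.2    # wide blast radius
    if pr.touches_auth_or_payments:
        score += 0.4                                  # sensitive code paths
    return score

RISK_THRESHOLD = 0.5  # assumed policy knob, set by the organization

def route(pr: PullRequest) -> str:
    """Send low-risk PRs to an AI reviewer; escalate the rest to humans."""
    return "human_review" if risk_score(pr) >= RISK_THRESHOLD else "ai_review"
```

Under this sketch, a two-file, 30-line documentation tweak routes to `ai_review`, while a 600-line change touching payment code crosses the threshold and demands a human. The design choice is that humans are spent only where the risk analysis says they matter.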
The organizations that successfully implement these dynamic workflows will be the ones that move beyond legacy, one-size-fits-all pipelines. They will achieve measurable, sustainable productivity gains precisely because they are not treating all code the same way. They are making intelligent trade-offs based on actual risk rather than blanket policies designed for a pre-AI world.
Moving from adoption to impact
Perhaps the most critical challenge facing engineering leaders in 2026 is moving from measuring AI adoption to measuring AI impact. Tool usage statistics and interaction counts are vanity metrics. They tell you developers are using AI, but they don't tell you whether it is improving outcomes.
"I think this is a year where every engineering organization will go through this question: How do I measure the success and the ROI?" Keren notes. "If leaders do a good job in defining success... people will start talking in these terms and start showing what they are actually getting."
Keren recommends a funnel-based approach to measurement that tracks code from generation through production. This funnel reveals where value is being created and where it is being lost. If 50% of AI-generated PRs are getting rejected in code review, that is a signal that your code generation prompts need refinement or that your review criteria need adjustment.
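The funnel idea can be made concrete with stage-to-stage conversion rates. The stage names and counts below are hypothetical, not LinearB's actual metrics; the shape of the calculation is what matters.

```python
# Hypothetical weekly counts at each stage of the delivery funnel.
funnel = {
    "generated_prs": 200,   # PRs opened with AI assistance
    "passed_review": 120,   # survived code review
    "merged": 110,          # merged to main
    "deployed": 100,        # reached production
}

stages = list(funnel)
for prev, cur in zip(stages, stages[1:]):
    rate = funnel[cur] / funnel[prev]
    print(f"{prev} -> {cur}: {rate:.0%} conversion")

# generated_prs -> passed_review: 60% conversion
# passed_review -> merged: 92% conversion
# merged -> deployed: 91% conversion
```

In this example the review stage is the leak: only 60% of generated PRs survive it, which is exactly the kind of signal that points at prompt refinement or review-criteria adjustment rather than at generating more code.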
Closing the loop with AI productivity platforms
The complexity of measuring and optimizing AI impact across the entire SDLC has created an opportunity for a new category of tooling called AI productivity platforms. These platforms integrate measurement, automation, and policy enforcement to help organizations optimize their entire delivery pipeline rather than isolated pieces.
The power of these platforms lies in their ability to close feedback loops. When a quality issue is detected in production, the platform can trace it back through the pipeline to understand what prompt was used or what review process it passed. Armed with this context, the platform can suggest improved prompts or adjusted review policies.
Engineering leaders who succeed in the year ahead will not be the ones who adopt the flashiest new AI tools. They will be the ones who take a systematic approach to identifying bottlenecks, implementing risk-based workflows, and continuously improving based on data. The promise of AI-driven development remains real, but realizing it requires moving beyond code generation to address the full complexity of delivering software.
To dive deeper into the future of AI-accelerated development, listen to Ori Keren discuss these ideas in depth on the Dev Interrupted podcast.