Every engineering leader is being asked the same question: "What's our AI strategy?" As AI tools move from novel experiments to essential parts of the development lifecycle, the pressure is on to demonstrate impact. But simply generating more code faster isn't the answer. In fact, it can lead to accelerating in the wrong direction—creating more technical debt, more bugs, and more confusion.
According to Itamar Friedman, co-founder and CEO of Qodo, the key to navigating this shift is to distinguish between two critical ideas: speed and velocity.
"The difference between velocity and speed is that velocity has a direction," Friedman explains. This distinction offers a useful mental model for implementing AI, helping teams move beyond chasing raw pace toward purposeful, sustainable progress.
Velocity vs. speed: Purpose matters more than pace
When discussing AI's impact on development, many organizations focus solely on coding faster. But Friedman draws an important distinction between raw speed and purposeful velocity. While startups may embrace the "move fast and break things" ethos, larger enterprises must consider more complex factors.
"There's a very big difference between a startup with a team of three people, a company with 200 developers, and a Fortune 500 with 10,000 developers. What matters is different for each," notes Friedman.
For early-stage startups, the cost of moving slowly often exceeds the cost of mistakes. They may prioritize raw speed and embrace "vibe coding" – the practice of using AI to generate code quickly based on high-level prompts without extensive planning or verification. This approach can work well when teams are small and the stakes are relatively low.
But for enterprise organizations, especially those in regulated industries like banking or aerospace, untangling bottlenecks and ensuring quality take precedence over simply writing more code faster. The cost of errors is substantially higher, and the development culture must reflect this reality.
Targeting organizational bottlenecks maximizes AI’s effectiveness
When implementing AI, Friedman’s advice is to start not with a tool, but with a question: "Where is my bottleneck that I can untangle?"
This approach shifts the focus from general AI adoption to targeted implementation where it can provide the most significant benefits. For some teams, the bottleneck might be code generation, but for others – particularly senior developers – it might be in architecture planning, code review, or deployment processes.
Beyond identifying bottlenecks, successful implementation requires clear ownership. Without it, AI adoption risks becoming fragmented and ineffective, with individual developers simply saying, "I really love this code generation tool. Let's install that," absent any cohesive strategy. To prevent this, Friedman recommends that organizations "put someone in charge of implementing AI across the SDLC in a thoughtful way."
Platform engineering teams emerge as "agent keepers"
Looking ahead, Friedman offers a compelling prediction about the evolution of platform engineering teams: "Let's assume that in five years...agents are writing and reviewing and verifying most of our code. Who is going to be there at the dev organization? I guess it's the dev platform team."
According to Friedman, platform teams are increasingly becoming the natural owners of AI implementation within development organizations. These teams take a holistic view of the software development lifecycle, enabling them to integrate AI tools with appropriate guardrails and best practices.
This trend is already visible in the rapid growth of platform engineering teams. Friedman notes that in some large companies with thousands of developers, platform teams have grown from 5 to 25 engineers in just one year. As AI becomes more deeply integrated into development workflows, these teams will likely become the "agent keepers" – managing, steering, and verifying the work of AI agents across the development process.
Measuring AI impact: Define success on your terms
With any technological investment, measuring impact is crucial. But with AI, organizations sometimes struggle to move beyond vague assertions about developer happiness or productivity. Friedman emphasizes the need for specific metrics tied to organizational goals:
"Think about what matters for you, and then this is how you measure."
For teams concerned with feature delivery, metrics might include time from Jira ticket to production release. For those focused on quality, tracking the percentage of sprint work dedicated to new features versus bug fixes could reveal whether AI is helping or hurting overall code quality.
Establishing a baseline before implementation is essential. Without understanding your starting point, it's impossible to demonstrate meaningful improvement. Key metrics might include feature delivery time, bug resolution percentage, and pull request cycle time – but the specific combination should reflect what "velocity" means for your particular organization.
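To make the baseline concrete, here is a minimal sketch of how a team might compute two of the metrics mentioned above, median ticket-to-release time and the feature-versus-bug split, before rolling out any AI tooling. The `WorkItem` shape and the sample data are hypothetical, not part of any specific tracker's API.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median

@dataclass
class WorkItem:
    kind: str            # "feature" or "bug" (hypothetical labels)
    opened: datetime     # e.g. when the Jira ticket was created
    released: datetime   # when the change reached production

def baseline_metrics(items: list[WorkItem]) -> dict:
    """Compute a pre-AI baseline: median ticket-to-release time in
    days, and the share of completed work that was new features."""
    cycle_days = [(i.released - i.opened).total_seconds() / 86400
                  for i in items]
    features = sum(1 for i in items if i.kind == "feature")
    return {
        "median_cycle_days": median(cycle_days),
        "feature_ratio": features / len(items),
    }

# Hypothetical sample of one sprint's completed work
items = [
    WorkItem("feature", datetime(2025, 1, 1), datetime(2025, 1, 8)),
    WorkItem("bug",     datetime(2025, 1, 2), datetime(2025, 1, 4)),
    WorkItem("feature", datetime(2025, 1, 3), datetime(2025, 1, 13)),
]
print(baseline_metrics(items))
# → {'median_cycle_days': 7.0, 'feature_ratio': 0.666...}
```

Recording these numbers for a few sprints before adopting a tool gives the "starting point" the text describes; the same computation run afterward shows whether AI sped up delivery or merely shifted effort from features to bug fixes.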
Context management is essential
As development teams scale, Friedman predicts that "vibe coding will evolve to grounded coding."
This shift represents the maturation from unstructured, prompt-based interaction with AI to more rigorous, context-aware processes. While "vibe coding" might work for small projects or startups, enterprise software development demands greater precision and reliability.
The critical challenge in this evolution is context management – ensuring AI tools have access to accurate, relevant information about codebases, requirements, and standards. As Friedman notes, context gathering can become a major bottleneck if not properly automated.
Effective AI-powered development requires structured workflows rather than freeform prompting. By creating defined processes for AI interaction, teams can guide these tools toward producing higher-quality, more consistent results. This approach represents the future of "grounded coding" – where AI assistance is firmly anchored in organizational context and engineering best practices.
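As an illustration of what a structured workflow might look like, the sketch below routes every AI request through an explicit context-gathering step before any prompt is built, rather than sending a freeform prompt directly. All function names, the keyword-matching heuristic, and the sample codebase are hypothetical; a real system would use far richer retrieval.

```python
def gather_context(task: str, codebase: dict, standards: str) -> dict:
    """Collect the grounding the model needs: source files that look
    relevant to the task (naive keyword match, for illustration only)
    plus the team's coding standards."""
    relevant = {path: src for path, src in codebase.items()
                if any(word in src for word in task.split())}
    return {"task": task, "files": relevant, "standards": standards}

def build_prompt(context: dict) -> str:
    """Turn the structured context into a single grounded prompt."""
    files = "\n".join(f"--- {path} ---\n{src}"
                      for path, src in context["files"].items())
    return (f"Task: {context['task']}\n"
            f"Standards: {context['standards']}\n"
            f"Relevant code:\n{files}")

# Hypothetical two-file codebase
codebase = {
    "billing.py": "def charge(invoice): ...",
    "auth.py": "def login(user): ...",
}
context = gather_context("update invoice charge logic",
                         codebase, "PEP 8; tests required")
prompt = build_prompt(context)
print(sorted(context["files"]))  # → ['billing.py']
```

The point of the structure is that relevance filtering and standards are applied the same way on every request, which is what distinguishes grounded coding from ad hoc prompting.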
From faster code to true velocity
The journey to becoming an AI-driven development organization is not a race to generate more code. As Itamar Friedman’s insights show, it’s a strategic shift from measuring raw speed to directing true velocity. This requires looking beyond individual tools and towards the entire system.
It begins with focusing AI's power where it matters most—on your unique bottlenecks. It matures by building guardrails and context, empowering platform teams to become the "agent keepers" who guide AI effectively. And it culminates in the evolution from unstructured "vibe coding" to "grounded coding," where AI operates with precision and purpose.
Ultimately, the question leaders must ask is not "How can we use AI to go faster?" but "Where do we want to go?" By defining that direction first, you can harness AI not just to accelerate your team, but to propel them toward building the right things, the right way.
Listen to Itamar Friedman's Dev Interrupted episode for the full conversation.