
Watch On-Demand
GitHub Copilot here, Cursor there, Claude Code somewhere else. AI is adopted across your tools, but is it actually working?
Speakers

Ben Lloyd Pearson
Director, Developer Experience
LinearB

Ofer Affias
Senior Director of Product
LinearB
About the workshop
The real challenge isn’t adoption; it’s understanding what happens after.
In this session, you’ll learn how to connect AI activity to commits, PRs, and delivery outcomes, and walk away with an operating model to prove and scale impact.
Answer whether your AI investments are working:
- How much of your code is AI-assisted, and where is AI contributing across code, reviews, and PRs?
- Which tools are making the most impact across your delivery pipeline?
- Which users, teams, and workflows are getting the most impact from AI?
- Is AI improving throughput, or simply shifting work from coding to review and rework?
Your next read

Guide
The APEX framework
An operating model for engineering productivity with practical guidance for how to measure AI impact.

Report
2026 Software Engineering Benchmarks Report
Created from a study of 8.1M+ PRs from 4,800 engineering teams across 42 countries.

Workshop
2026 Benchmarks Insights
Explore new AI insights from the 2026 Software Engineering Benchmarks Report – backed by 8.1M+ PRs across 4,800 engineering teams and 42 countries.