AI has successfully solved the blank page problem for developers, but it has created a massive new bottleneck downstream in the SDLC. LinearB CEO Ori Keren joins us to explain why 2026 will be a year of norming as organizations struggle to digest the flood of AI-generated code. In this annual prediction episode, he details why upstream velocity gains are being lost to chaos in reviews and testing. We also discuss why enterprises aren't ready to hand over the keys to autonomous agents and how to build dynamic pipelines based on risk.
Show Notes
Transcript
(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)
[00:00:00] Ben Lloyd Pearson: Hey everyone, it's your host, Ben Lloyd Pearson. I'm here today with Andrew Zigler, and we're joined by Ori Keren, co-founder and CEO at LinearB. Ori, it's always wonderful to have you on our show.
[00:00:11] Ori Keren: It's great to be here and it's fun for me too.
[00:00:14] Ben Lloyd Pearson: Yeah, we love getting your predictions every year. It's become an annual tradition at this point to have you come on our show and share what you think the next year looks like for engineering leaders. And of course, we sat down with you last year, and I actually have to give you credit for your prediction.
[00:00:29] Ben Lloyd Pearson: It was a little bit contrarian, maybe a little controversial, but I think it turned out to be spot on. Before we get into your predictions for 2026, I want to look back on last year, when everyone was hyping up how AI was gonna 10x engineering output, and you went on record and said that productivity would actually go down in 2025.
[00:00:51] Ben Lloyd Pearson: Like I said, I think that was a great prediction, and I wanna ask you about that. But first, I wanted to see if you had any bold predictions for this next year.
[00:00:59] Ori Keren: [00:01:00] Yeah, I hope it's as bold as the previous one. My prediction is that code generation is still gonna be very interesting. New stars will pop up and new hype will be there. But we're still not gonna see the 2x, 3x productivity improvement that everybody's expecting.
[00:01:24] Ori Keren: So that's my prediction. Maybe not as bold, but I still believe that this is a year of norming, if you will, before we get that promise.
[00:01:36] Ben Lloyd Pearson: Yeah, well, if you consider we may be at peak hype, that may actually be a pretty bold statement to make right now. So I've got a lot of questions I want to ask about that. But first I just wanna give you a chance to reflect on what we shared a year ago and see how things have played out in that time.
[00:01:54] Ben Lloyd Pearson: One of the big arguments you made back then is that the friction of adopting new tools [00:02:00] and the natural resistance to change would slow teams down before it sped them up. And like I said at the top, you described a dip, where teams would have to figure out how to work with this new technology before they actually received many of the benefits.
[00:02:14] Ben Lloyd Pearson: Looking at the state of the industry now, do you feel vindicated? Do you think we actually went through this productivity dip?
[00:02:21] Ori Keren: I actually think we did, or at least we stood still in the same place. You can look at the data that's out there. We see something like 30% more pull requests being created. That's great. But as you go downstream in the development pipeline, you see that maybe only 2% more are actually being released, because there are a lot of gates where code gets stopped.
[00:02:47] Ori Keren: I think as an industry we saw a decrease in stability and quality. There's research that talks about it. The DORA [00:03:00] metrics speak about something like a 7.2% decrease in stability. And qualitatively, people are talking about it.
[00:03:06] Ori Keren: So I think if you balance all of it, we actually had this dip, or at least we stood still, and we're still learning how to utilize these tools, right?
[00:03:17] Ben Lloyd Pearson: And I really like that you brought up DORA, because I think the research report that came out earlier this year is a big part of the vindication. One of the statements they made was that upstream velocity increases are lost to downstream chaos. So even if you're moving faster, there are still so many other aspects of our SDLC that haven't been impacted in the same way by AI.
[00:03:39] Ori Keren: Yeah, I absolutely agree with that. There are so many factors, and we can elaborate on that later, but it's almost like going back to SDLC fundamentals. What are the phases? Where does AI really play? Where does it give us a productivity gain, and where does it hurt us or even take us [00:04:00] backwards?
[00:04:00] Andrew Zigler: You know, last year you also made a bold prediction around our adoption of AI agents, and you were spot on that 2025 would be a year of experimentation versus full adoption. But now that we've spent that full year experimenting, is 2026 the year that we hand over the keys?
[00:04:18] Ori Keren: Yeah. Unfortunately, I think I'm probably gonna be a downer here as well, because one of my favorite quotes is from Box CEO Aaron Levie. I think he once said that it's not about how fast the technology progresses, it's about how fast enterprises can adopt new workflows and processes.
[00:04:41] Ori Keren: So the technology is there, but, you know, the merge rate of agentic code is around 20%. Maybe you get it to 35 or 40% if you're elite. And enterprises are not ready to let agents write code, have other agents check that code for quality, and let it [00:05:00] go all the way through.
[00:05:01] Ori Keren: There are definitely early adopters doing amazing things with agents, especially when you go zero to one, right? You build your app for the first time. But that's different from enterprise software with tons of microservices out there, where you need to maintain quality.
[00:05:18] Ori Keren: So it's different. Again, I think the technology is amazing, and it's there to create code. But handing over the keys isn't a technology question; it's an enterprise workflow and process question. So unfortunately, they're still not ready. There are still steps to take.
[00:05:37] Ori Keren: Yeah.
[00:05:38] Andrew Zigler: That's so spot on, because it's not a technology problem, it's a communication problem. It's a human problem at its core, and I think that's what everyone spent 2025 grappling with: [00:06:00] technology comes in and it exposes these core communication friction points and problems within your org.
[00:05:54] Andrew Zigler: And that's actually what people are gonna have to spend time fixing.
[00:05:58] Ori Keren: Absolutely. You phrased it perfectly.
[00:06:01] Ben Lloyd Pearson: Well, you might be referring to yourself as a downer; I call that being pragmatic, actually. And regular listeners of Dev Interrupted know that I frequently argue that the application of this technology is the most important thing right now, and that its capabilities are already quite good.
[00:06:18] Ben Lloyd Pearson: We just need to apply it to all the aspects of our SDLC. And one place it's being applied a lot is within the IDE. So one of the things you discussed last year was the risk of losing the spark of creativity, so to speak, if developers rely too heavily on AI to generate code.
[00:06:39] Ben Lloyd Pearson: And we've now gone through about a year of heavy usage. Are you seeing that loss of creativity, or have developers still found a way to keep being creative despite all of these tools?
[00:06:50] Ori Keren: Yeah. Here, I think I missed, by the way, because I've seen senior developers, and even product [00:07:00] managers, doing this thing where you can switch modes. Okay, when I'm in enterprise-work mode and I need to get things done, I need to work in a more organized way.
[00:07:13] Ori Keren: But when I switch into creativity mode and I can do some vibe coding, that's where I missed, because even myself, I experimented with that, and you can see that you didn't lose that part of creativity. You just need to understand which mode you're operating in.
[00:07:30] Ori Keren: So thanks for all the compliments on what I hit spot on at the beginning, but here I missed. I actually missed.
[00:07:37] Ben Lloyd Pearson: Yeah, no, that's a great admission, actually. And I've certainly felt that personally. When I'm in those more creative modes with AI, it really does allow me to think bigger than I've been able to think before, and it's a pretty great feeling.
[00:07:53] Andrew Zigler: I think AI has actually enabled my output and my ability to be creative more [00:08:00] than it hindered it. Instead of it being a spark, it was like a whole fire, and I could turn ideas into prototypes so fast that the math on building everything changed overnight. So I think it actually increased my productivity.
[00:08:14] Andrew Zigler: It makes me think of how you have inventors who invent things that are pragmatic and useful, but you also have people who invent things for fun, for show, and to teach. I think code suddenly had that revolution overnight, where people used to build code for pragmatic and useful reasons.
[00:08:31] Andrew Zigler: But now you can build code for fun, to experiment, to teach, to learn, to explore, and that's a whole new modality.
[00:08:38] Ori Keren: I think you phrased it perfectly. And my concern was that I actually think coding and building is very creative work. I remember coding very early in my career: you start to write things, and as you get into the zone, you get ideas that come bottom-up, ideas you didn't set out to have; [00:09:00] you were actually maybe tasked to do something else.
[00:09:00] Ori Keren: I was afraid we would lose that. And you're right that when you switch modes, okay, I'm not in enterprise mode now where I need to deliver this feature, you actually give yourself artistic freedom for ideation. AI enables you to do that much faster and actually amplifies it.
[00:09:23] Ben Lloyd Pearson: All right, so that covers the recap from last year, both the good and the bad. And you've already previewed your thoughts on 2026, but I'm wondering what ROI looks like for AI in 2026. We've got a lot of engineering leaders out there now who are looking for a return on their investments in AI.
[00:09:43] Ben Lloyd Pearson: Is this the year that we finally figure that out? Even if we're not gonna see a two or three x improvement, by your estimation, is this still the year that ROI starts to become something engineering leaders understand?
[00:09:57] Ori Keren: I think engineering leaders will [00:10:00] work harder at the beginning to say, hey, what does success look like, and what does ROI look like for my organization? If they're doing themselves a favor, that's what they need to do early on. Unfortunately, there are tons of data points, right?
[00:10:18] Ori Keren: So you can have some politics get into it: what am I measuring to prove my ROI? I think this is a year where every engineering organization will go through this question: how do I measure success and ROI? Like I told you at the beginning, leaders need to do a good job defining success and how they measure ROI, which, beyond the basics, can differ from one org to another depending on the stage the company is in, et cetera.
[00:10:53] Ori Keren: People will start talking in these terms and start showing, hey, [00:11:00] I'm actually getting an ROI. But again, I think it'll unfortunately still stay in single-digit productivity gains, like 5%, 8%, something like that. So it's definitely a year where you can start showing it, especially if you do a good job at the beginning: defining it, thinking about it internally, exposing it externally to stakeholders.
[00:11:21] Ori Keren: I think this is the year people will start seeing it. But again, it's just the beginning.
[00:11:27] Andrew Zigler: I wanna talk a little bit about how the developer experience has also evolved and changed this year, especially the tooling and the dev workspaces. How people get their work done on a day-to-day basis as software engineers has fundamentally changed, and in 2025 we saw a shift from chat-like interfaces to more composer-like ones. We've gone beyond the world of autocomplete for tab-completing code.
[00:11:54] Andrew Zigler: We moved into the chat era of having the chat window generate your code alongside you. Now we're moving into a [00:12:00] composer mode, where you have multiple agents working in parallel or sequentially, and you're maybe even looking less at the code than you did before. Evolutions of coding tools like Cursor are making that chat and composer experience first class, and hiding away the code in some cases.
[00:12:16] Andrew Zigler: So how do you think this change in developer tooling will impact 2026, and what do you think we can expect to see?
[00:12:25] Ori Keren: I have to admit, I think we're gonna see more innovation around that, maybe another move into web interfaces where I activate a bunch of agents, et cetera. So there's gonna be more innovation there. But again, if organizations really wanna see the productivity improvement, they need to start thinking about how to apply AI to smart decisions further downstream, not just code generation.
[00:12:49] Ori Keren: So, for example, I get really excited if organizations start to think about [00:13:00] reviews and quality. Instead of just "hey, this is how we do code review, we review every piece of code," maybe do change risk analysis and decide where you deploy AI and where you use humans, stuff like that.
[00:13:12] Ori Keren: I think if people move from manual deployments, or, you know, CD, to AI-driven canary deployments and automatic rollbacks, et cetera, that will actually move the needle in developer productivity much more dramatically than any change in how you generate the code.
[00:13:37] Ori Keren: Unfortunately, I think the industry will stay focused on code generation, because that's the problem LLMs know how to solve, and everybody gets it.
[00:13:46] Andrew Zigler: Exactly.
[00:13:47] Ori Keren: And the industry's so vested in it. So there's still gonna be improvement there, but it won't move the needle.
[00:13:56] Ori Keren: What will move the needle in the SDLC is, I [00:14:00] think, the stuff that I spoke about.
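To make the change-risk routing Ori describes concrete, here's a minimal sketch in Python. The signals, weights, and thresholds are illustrative assumptions for the sake of the example, not LinearB's actual model or any specific product's behavior.

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    lines_changed: int
    touches_auth_or_payments: bool  # sensitive code paths
    author_is_agent: bool           # code written by an AI agent
    test_coverage_delta: float      # change in coverage, e.g. -0.02

def risk_score(pr: PullRequest) -> float:
    """Blend a few signals into a 0..1 risk estimate (weights are illustrative)."""
    score = min(pr.lines_changed / 500, 1.0) * 0.4
    score += 0.3 if pr.touches_auth_or_payments else 0.0
    score += 0.2 if pr.author_is_agent else 0.0
    score += 0.1 if pr.test_coverage_delta < 0 else 0.0
    return min(score, 1.0)

def route_review(pr: PullRequest) -> str:
    """Decide where to spend human attention based on the risk score."""
    score = risk_score(pr)
    if score < 0.3:
        return "ai-review-then-auto-merge"
    if score < 0.6:
        return "ai-review-plus-one-human"
    return "two-human-reviewers"

# Example: a small agent-authored change in a non-sensitive area stays on the AI path.
print(route_review(PullRequest(120, False, True, 0.01)))
```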
[00:14:01] Andrew Zigler: That's fascinating, and we're gonna talk more about how AI is going to impact the rest of the SDLC in closing the delivery gap. I think what I'm hearing from you here is that it doesn't matter how shiny and new we make the ways developers create code; the center of gravity in our industry right now is around code generation.
[00:14:20] Andrew Zigler: But there are a lot of problems in delivering software that can and need to be solved by teams working with AI. So that's a really great insight into how next year might look.
[00:14:30] Andrew Zigler: And Ori, looking at the economic climate as well: engineering leaders in this environment are under immense pressure from above and below. You have developer teams with varying degrees of wanting to adopt the tools.
[00:14:46] Andrew Zigler: You have your executive leadership with varying degrees of appetite for taking on new AI experimentation, and you're stuck in the middle navigating that as an engineering leader. So do you think that in 2026 the budgets will start [00:15:00] shrinking back, that the growth budgets are going to change from last year, where it was "buy every tool, experiment with everything, see what happens"?
[00:15:07] Andrew Zigler: Now that we've had a whole year of that, what does that mean for a budget cycle in 2026 for these engineering leaders?
[00:15:14] Ori Keren: In terms of the economy and how engineering leaders need to prepare for this year, I think it's more of how things were last year. I don't think you're gonna see a lot of "let's put more budget into growth": hire more engineers, buy more AI tools, et cetera.
[00:15:33] Ori Keren: Because we spoke about the dip, or at least staying in the same spot, and about improving by single digits, I think the main issue is this big expectation gap between executives who are maybe not tech-first, or the industry at large, expecting [00:16:00] 3x or 4x, and the single-digit reality.
[00:16:04] Ori Keren: And this will put more economic pressure on engineering leaders to do more with less. Unfortunately, it will stay the same. That's what I think will happen. We won't get much more budget, but the expectation to do more will still be there, and we'll still under-deliver against it.
[00:16:25] Ori Keren: Because, again, as an ex-VP of engineering, getting a 5 to 8 or 10% improvement is amazing. It is amazing. But it still won't impress, because everybody wants the 3x. So for that reason, plus macro conditions and other factors, budgets will still be constrained.
[00:16:46] Ben Lloyd Pearson: Ori, I feel like I'm surprised almost on a weekly basis by all the new things that happen in the world of AI and new technologies. With that said, what technologies or trends do you think have a chance of surprising engineering [00:17:00] leaders in the next year?
[00:17:01] Ori Keren: If we think about surprises, there are two interesting areas. One is the supply chain. I think there are gonna be cases where big outages surprise engineering leaders, and not necessarily because of security incidents in the supply chain. We've seen more and more dependencies on software that's not created in-house, where all of a sudden a small change there explodes in a very glorious way. I think we'll see more outages like that, because everybody's mindset about the supply chain is to think about it from the security perspective, and not from "okay, this library changed something, and now half the world is not working."
[00:17:49] Ori Keren: We've seen one or two incidents like that. So that's one thing. And the other surprise, which could be positive or negative, is [00:18:00] the technology around CI. What I see with CI, and I think a lot of engineering leaders would agree with me,
[00:18:08] Ori Keren: is that it came back to this plateau where more coverage and more tests don't produce more quality. And it's actually the flakiness and instability of these systems, which were designed years ago, that are slowing engineering teams down. So I think there could be negative surprises there.
[00:18:31] Ori Keren: Like, okay, I'm blocked; I can't release for a week or two because my infrastructure is not stable. Or a positive surprise: new technology and new innovation that makes smart moves there and accelerates things. Those are the areas. And of course there's quantum computing, which everybody talks about. I think it's still gonna be a conversation in 2026 [00:19:00] for CISOs and security teams that are preparing, but still not
[00:19:04] Ori Keren: the thing that catches people by surprise in 2026. Hopefully I'm not sitting here in 2027 saying otherwise.
[00:19:11] Ben Lloyd Pearson: We'd be in a very different world, I think.
[00:19:13] Andrew Zigler: I think so.
[00:19:14] Ben Lloyd Pearson: But who knows? We've been covering quite a lot here on Dev Interrupted, like some of these new security concerns arising from AI usage, things like poisoning models to recommend malicious packages and prompt injection within products.
[00:19:29] Ben Lloyd Pearson: There are a lot of things we've never really viewed as a security risk that are now a new category of problem we have to face. But I wanna shift gears to another specific problem we're seeing in the market right now, and that's the gap forming between how fast AI allows us to generate code and how quickly we can ship it to our customers.
[00:19:51] Ben Lloyd Pearson: Coding has obviously sped up dramatically, but things like reviewing and testing code haven't kept pace quite as well. [00:20:00] So for 2026, Ori, I'm curious: is the biggest bottleneck going to be the code review process, or are there other things that will also start to appear within the SDLC as bottlenecks for AI-driven teams?
[00:20:14] Ori Keren: Yeah, I think it's gonna be in phases, and it depends how early-adopting teams are. But if you look at the classic phases of the SDLC, you write code, then you need to get it merged, and that's definitely a big bottleneck that exists now in the SDLC.
[00:20:32] Ori Keren: I think this is the year where, again, we're talking about technology and we're talking about processes and workflows. This is the year where the technology is getting better, by the way, not just as a quality tool. We talked before about the challenges when security incidents come in, but we're also talking about quality problems.
[00:20:55] Ori Keren: So I look at what we sometimes in the industry call code review as a [00:21:00] quality gate. Sometimes it's silent, by the way, and sometimes it's active, prompting the developer: hey, we spotted a bug here. So I think it's the year of code review, or quality gates,
[00:21:16] Ori Keren: definitely taking a front seat in the SDLC and being adopted. Leaders will think about how to do it in a smart way. Am I letting an AI agent review the code another agent wrote, going back and forth? I think there will be brave organizations that start making these decisions with risk analysis, et cetera.
[00:21:41] Ori Keren: But even if you're not going all the way there, I think people will need to assess the quality of the code, whether actively in the face of the developer or in a silent mode, on an every-pull-request basis, and produce measurements and [00:22:00] keep on improving.
[00:22:01] Ori Keren: So yeah, it's the year of code quality, if you will. One instance is code review; another is measuring it and creating a feedback loop to improve how you use the AI tools. I definitely see this as a year where it's being fully adopted.
[00:22:19] Andrew Zigler: I wanna zoom in on that anecdote for a moment: the idea of AI agents generating code that other agents are then reviewing, and maybe making that decision based on some risk analysis. That involves acknowledging that code comes with different levels of risk, quality, and need out of the gate, right?
[00:22:37] Andrew Zigler: And so code can't be treated in this one-size-fits-all way anymore. And in fact, with AI generating the code,
[00:22:44] Andrew Zigler: we have the ability to then improve our systems around understanding and shipping that code. So do you think 2026 is the year teams move away from rigid, legacy, one-size-fits-all pipelines and start [00:23:00] building or adopting more dynamic workflows that change based on the level of risk of the code and who wrote it?
[00:23:07] Ori Keren: Absolutely. I think teams will need smart pipelines and will need to make smart decisions, especially, like you said, around review and merge. Do a risk analysis and decide what you're doing with this code: where do you let an agent review it and go back and forth? Where do you let code get merged automatically?
[00:23:28] Ori Keren: You need to put it in the framework of an enterprise policy, because remember, that's what's slowing us down, not the technology. And I think this is where people will define their policies and then want to implement tools that help them automate and enforce those policies, both in review and merge, but also in how you run tests.
[00:23:51] Ori Keren: And, like we said, we've reached a plateau: more coverage doesn't mean more quality. So what do you do differently in CI? [00:24:00] What smart decisions are you taking there, what do you run conditionally, and how do you handle all this flakiness?
[00:24:09] Ori Keren: This is definitely the year where the organizations that think about it in advance, decide on their policy, and choose tools to implement it with automation workflows will hit the jackpot. These are the ones that will get a high productivity increase.
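Here's one way the policy-driven, dynamic pipeline Ori outlines might look, as a minimal sketch. The risk levels, test scopes, and the quarantine list of flaky tests are all hypothetical, for illustration only; a real system would derive them from its own policy and test history.

```python
RISK_POLICY = {
    "low":    {"tests": "unit-affected-only", "retries": 0, "canary": False},
    "medium": {"tests": "unit+integration",   "retries": 1, "canary": True},
    "high":   {"tests": "full-suite",         "retries": 2, "canary": True},
}

# Tests known to be flaky, tracked separately so they never block a merge on their own.
FLAKY_TESTS = {"test_checkout_timeout", "test_search_pagination"}

def plan_pipeline(risk_level: str, selected_tests: list[str]) -> dict:
    """Build a run plan: test scope, flake handling, and whether to canary."""
    policy = RISK_POLICY[risk_level]
    stable = [t for t in selected_tests if t not in FLAKY_TESTS]
    flaky = [t for t in selected_tests if t in FLAKY_TESTS]
    return {
        "scope": policy["tests"],
        "run_now": stable,
        "quarantined": flaky,              # run with retries, reported separately
        "flaky_retries": policy["retries"],
        "canary_deploy": policy["canary"],
    }

# Example: a low-risk change runs only affected unit tests and skips the canary.
print(plan_pipeline("low", ["test_cart_total", "test_checkout_timeout"]))
```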
[00:24:30] Ben Lloyd Pearson: So I wanna address a problem we hear quite often from engineering leaders today. They're looking at their pipeline right now, and they're seeing that their developers are using AI to generate, let's say, 50% more code. However, they're not always seeing more features being shipped as a result.
[00:24:51] Ben Lloyd Pearson: If you're an engineering leader out there listening to this, Ori, what advice would you give as the first lever they should pull to start [00:25:00] closing that gap between the increased velocity of code and the lack of a velocity increase for impact and features?
[00:25:09] Ori Keren: Yeah, it goes back to what we said before. Define your policy on where you're willing to take calculated risks. Where do you break some of the old paradigms, like the one-size-fits-all we mentioned? Do I have three reviewers looking at every pull request that comes in?
[00:25:29] Ori Keren: No. In some cases it's okay for an agent to review the code. So definitely pull that lever: decide on new policies and new ways of reviewing and merging code. That's the second phase, by the way, after writing the code, right? And you need to make smart decisions about the tech.
[00:25:47] Ori Keren: I think you need to choose a vendor that sees everything all the way downstream. If you stay in code generation and you don't see how that impacts the microservice that sits there, how it interacts with others, and how it affects my change failure rate, my rework, my quality, and other metrics, you just did code review for the sake of code review. So also choose the right tools, the ones that see the [00:26:00] downstream impact. That's the first lever I would pull. And, going back, maybe it's boring to give the same answer as before, but the second lever I would pull is smart decisions in your CI/CD systems.
[00:26:26] Ori Keren: I think it'll move in chronological order by the phases. If last year was the year of code generation, and we'll continue to hype code generation, this is the year where smart policies get into how we review and merge code, and then we get some extra throughput gains.
[00:26:48] Ben Lloyd Pearson: And I think one of the best gains a team can get from approaching it this way is that you don't have to boil the ocean. You don't have to solve all problems at once with AI. You can pick the [00:27:00] painful parts of your process, even ones that aren't creating big bottlenecks, and focus in and solve those.
[00:27:05] Ben Lloyd Pearson: It's not gonna 2x your output, but it will solve a problem that might be consuming five or 10% of your team's time, and if you solve multiple problems like that, it adds up over time. So the last topic I wanna talk about today is AI enablement, ROI, executive reporting, all that really big, important stuff that a lot of engineering teams are grappling with right now.
[00:27:30] Ben Lloyd Pearson: In 2025, basically everyone bought AI tools and rolled them out to their teams. Now we're all facing this critical question: how do we actually know if these tools are working? Are they improving productivity? Are they improving quality and efficiency? And one of the biggest challenges we're hearing is that it's really hard to distinguish between AI adoption and AI impact.
[00:27:54] Ben Lloyd Pearson: We know developers are using Copilot and Cursor, but it's hard to actually prove those tools [00:28:00] are improving delivery. So do you think AI impact is gonna be the focal point for a lot of engineering teams in 2026?
[00:28:08] Ori Keren: Oh, absolutely. I love this question, and I wanna answer it in a couple of phases. First of all, even adoption is not a yes/no question. It's: what are we adopting? In which teams? And what's the level of adoption? That's also not a fully solvable problem, because I think the metrics that the vendors generating the code give you are, unfortunately, vanity metrics.
[00:28:38] Ori Keren: "Oh, a lot of interactions with this." Who cares? Did it really create a pull request, or value that got all the way to production? So adoption is an interesting question, but you're spot on that people will move from just measuring adoption to measuring impact.
[00:28:58] Ori Keren: And here it's really [00:29:00] interesting. One way to think about it, at least, is that there's a funnel here: code is being written, a pull request is created, it's reviewed and merged, it passes CI and CD, it's ready to deploy, it's out in production, the feature flag is enabled, et cetera.
[00:29:18] Ori Keren: Now, even if I take a very famous metric like cycle time, which we used to break into segments to look at the velocity between coding and how fast it reaches production, there's a new, interesting statistic within AI impact, which is: what's the drop-off?
[00:29:39] Ori Keren: That's why I'm saying it's a funnel. It's not just how fast I move; it's how many pull requests I lose along the way. A lot of work was created, but how much of it got merged? How much really got to production, et cetera? So to measure impact, people need to [00:30:00] realize there's a funnel here.
[00:30:01] Ori Keren: At least that's how I think about it: what's the drop-off in those phases, not just the speed. And then there's a quality question. After we look at all of this and improve all of this, let's look at the final quality score, right? How many incidents did we have? How many bugs did we have? We need to keep track that it doesn't get hurt. So it's definitely a year where measuring AI will be a major thing. I agree about moving from adoption to impact. And it's really important for leaders to establish an agreement with their peers and their businesses on what impact looks like.
[00:30:47] Ori Keren: How do I measure it? Establish it early on and be consistent in measuring it.
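A minimal sketch of the funnel view Ori describes: count work at each SDLC stage and report the drop-off between stages, not just cycle time. The stage names and counts below are made up for illustration.

```python
# Stage counts for some reporting period; names and numbers are illustrative.
FUNNEL = [
    ("prs_created",  1300),
    ("merged",        900),
    ("passed_ci_cd",  820),
    ("deployed",      780),
    ("flag_enabled",  700),
]

def drop_off_report(funnel: list[tuple[str, int]]) -> None:
    """Print conversion and loss between each pair of consecutive stages."""
    for (stage, count), (next_stage, next_count) in zip(funnel, funnel[1:]):
        kept = next_count / count
        print(f"{stage} -> {next_stage}: {kept:.0%} kept, {count - next_count} lost")

drop_off_report(FUNNEL)
```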
[00:30:51] Andrew Zigler: I love that example you gave of the developer generating a lot of code, and then, [00:31:00] along the way: does this pull request go away? When it gets delivered, does it get rewritten and refactored later? Did it cause an incident? How many bugs were in it? These are still not only unknowns; people aren't even looking at them yet, which is fascinating, and I think it's a big opportunity.
[00:31:16] Ben Lloyd Pearson: It's like, if a tree falls in the woods and no one's around to hear it, did it ever make a sound? If AI generates a pull request and no one ever reviews or merges it, did it ever exist? It's a very familiar challenge.
[00:31:30] Andrew Zigler: And I wanna take that into my next question, about something we've been talking about this entire conversation: this velocity paradox of "oh, you have more code, so you can ship faster." Well, no, there are a lot of other steps in bringing code to production, actually shipping it, and creating value.
[00:31:48] Andrew Zigler: But with the center of gravity in our industry right now being in code generation, everyone's staying fixated there. And you're right that people are gonna start focusing on the adjacent problems to try to smooth out this bump [00:32:00] in our production pipeline. But in the meantime, do you think this
[00:32:05] Andrew Zigler: almost grotesque proliferation of code is going to warp how we think about and work with the rest of the pipeline? I think it's gonna be a fundamental change: if you can create code this rapidly and this easily, it calls into question many things that were part of the pipeline before.
[00:32:22] Ori Keren: Yeah, here you're spot on. It's almost like organizations will be split into two cohorts: one that stays focused on generating code, and, by the way, I'm not saying you can't always improve how you generate code. Put more quality in; it's really fascinating how you create that feedback loop, and we can talk about it.
[00:32:40] Ori Keren: But there are gonna be organizations that stay focused solely there, and there are gonna be organizations that understand. They already get it. I talk to engineering leaders all the time; they get it. They just don't know how to
[00:32:57] Ori Keren: maybe measure it, or what to [00:33:00] apply. But the organizations that understand, okay, if I really wanna start getting closer to this, I don't know, 25% improvement, et cetera, I'm gonna put my focus on the rest of the SDLC, on what to improve there and how I can apply AI there. By the way, that's what gets me really excited. I'm looking for
[00:33:22] Ori Keren: a company that will build something where the LLMs are actually looking at all the logs coming from the services and learning them, knowing your services in such an intimate way that they tell you: wait, for this thing you're about to write, here's what I think is going to happen.
[00:33:41] Ori Keren: And, you know what, I'm gonna release it to 5% of the population with a canary release. Oh, I'm rolling it back, I saw a thing. I'm fixing it now, I'm deploying it again. Now it looks better, because I keep watching. When that happens, this is when we get [00:34:00] the 2x and the 3x.
[00:34:01] Ori Keren: So that's what gets me excited. And again, I don't know if somebody's already working on it, or if companies are thinking about it, or if LLMs will actually be great at solving those problems; maybe there are different technologies we need to adopt there.
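For flavor, here's a toy sketch of the canary-with-automatic-rollback loop Ori imagines. Every helper below is a stand-in for real deploy tooling and observability signals, and the error budget and traffic steps are assumptions, not a recommendation.

```python
import random
import time

ERROR_BUDGET = 0.01               # max tolerable error rate for the canary
CANARY_STEPS = [0.05, 0.25, 1.0]  # fraction of traffic at each step

def deploy_canary(version: str, fraction: float) -> None:
    print(f"routing {fraction:.0%} of traffic to {version}")

def error_rate(version: str) -> float:
    # Stand-in for querying real observability signals (logs, traces, SLOs).
    return random.uniform(0.0, 0.02)

def rollback(version: str) -> None:
    print(f"rolling back {version}")

def rollout(version: str) -> bool:
    """Step traffic up; roll back automatically if the error budget is blown."""
    for fraction in CANARY_STEPS:
        deploy_canary(version, fraction)
        time.sleep(1)  # in reality: minutes of signal gathering per step
        if error_rate(version) > ERROR_BUDGET:
            rollback(version)
            return False  # the incident report can wait until morning
    return True

rollout("v2.14.0")
```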
[00:34:21] Ben Lloyd Pearson: Well, if there's nobody working on it already, after this episode I imagine they will.
[00:34:26] Andrew Zigler: Someone will crack it.
[00:34:27] Ori Keren: I think I've said it like five times in different conversations, and...
[00:34:30] Andrew Zigler: You're just hoping, just praying: "somebody, steal my ideas."
[00:34:33] Ori Keren: No, at the end of the day, trust me, you can see that people already know it. They're already thinking like that. These ideas exist in brains across the universe right now, even if I didn't say it.
[00:34:48] Andrew Zigler: There are a lot of companies sitting on top of a lot of earned knowledge, domain expertise, and partnerships that are building these kinds of agents, right? Agents that get ahead of [00:35:00] you and guess what your next intentions are. What you just described makes me think of the AI SREs
[00:35:05] Andrew Zigler: out there that respond to incidents and read logs. I think those are fascinating, because we always think of interacting with an AI as something we initiate. But imagine you get that 3:00 AM text from the AI about your outage. It's a different kind of world.
[00:35:22] Ori Keren: Or imagine they fix it.
[00:35:25] Andrew Zigler: Yeah, or imagine you don't hear about it at all, because you don't even get the 3:00 AM text. You get an incident report the next morning about what it fixed while you were sleeping.
[00:35:32] Ben Lloyd Pearson: So I wanna talk about something that's been really central to LinearB this last year, and that is being an AI productivity platform. The idea is to combine measurement with all of these automations and policy enforcements you've been sharing with us in this episode. So I wanted to touch on this concept of visibility: metrics, dashboards. What role do those play within the next year of AI adoption and impact, for organizations that [00:36:00] wanna leverage AI to be more productive?
[00:36:03] Ori Keren: I think a major role, and there's a major opportunity for these platforms, like LinearB. Because, like we said before, you need vendors that see all the way downstream, the downstream impact, right? Let's take code quality as an example, even if you're still thinking about code generation.
[00:36:23] Ori Keren: Think about SEI two years ago, where you looked at your metrics and spotted a problem. You'd say, hey, my problem is quality in this service. Think about what you had to do: you had to build a program to educate everybody working on this service. Hey, this is what we need to do; this is where we need to focus.
[00:36:46] Ori Keren: And I'm not saying you shouldn't do that. But now, if you find a problem and you see, hey, code that was written by this AI tool, or by this team, in this specific service, with this [00:37:00] policy, produces a lot of security problems, here's what I'm going to do: I'm gonna tell you, take this prompt and give it back to whatever tool you're using.
[00:37:09] Ori Keren: And all of a sudden, the quality improvement cycle is automatic, and the loop closes. That's the huge opportunity that exists inside an AI productivity platform. Look at the pipeline; don't just suggest fixes, suggest prompts that go back, whether to the code generation tool or to the thing that reviews the code, and the improvement is right there.
[00:37:38] Ori Keren: You don't need a rollout program that educates everybody, et cetera, and people appreciate it and love it. So there's a real chance here for AI productivity platforms to close the loop and get improvements fast. That's why I think, and I know I'm biased because I'm the CEO of such a company, an AI productivity platform,
[00:38:01] Ori Keren: that if people choose their quality or code review tool from companies like us that see everything else downstream, the chances of constant improvement increase dramatically.
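A minimal sketch of the closed loop Ori describes: recurring downstream findings get aggregated into prompts that are fed back to the upstream code-generation tool. The `Finding` shape, the threshold, and the prompt text are illustrative assumptions, not how any particular platform implements it.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    service: str
    source_tool: str  # which AI tool generated the code
    category: str     # e.g. "sql-injection", "missing-timeout"

def prompts_from_findings(findings: list[Finding], threshold: int = 3) -> list[str]:
    """Turn repeated downstream findings into guidance for the upstream tool."""
    counts = Counter((f.service, f.source_tool, f.category) for f in findings)
    prompts = []
    for (service, tool, category), n in counts.items():
        if n >= threshold:
            prompts.append(
                f"[{tool}] Code generated for {service} produced {n} recent "
                f"'{category}' findings; add a guardrail for this pattern."
            )
    return prompts

# Example: three repeated findings cross the threshold and become one prompt.
findings = [Finding("billing", "copilot", "missing-timeout")] * 3
print(prompts_from_findings(findings))
```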
[00:38:16] Ben Lloyd Pearson: Well, Ori, this has been a really great episode. I just have one more question for you, and I think it's the ultimate question for 2026. If you're an engineering leader out there listening to this, you're probably hearing this question from someone on your executive team:
[00:38:31] Ben Lloyd Pearson: what's our ROI for all of our AI investments? So, Ori, in your opinion, what's the answer that shows the impact they can provide today?
[00:38:41] Ori Keren: Yeah, maybe I'll disappoint you, but I would say: go and define, as engineering together with your business, how you measure success. Ask them, or say, hey, together we're gonna decide how we measure success. That's my [00:39:00] advice for every engineering leader: start running this question with your teams internally.
[00:39:06] Ori Keren: Then expose it to your business peers. That's my advice, because that's what I think every engineering leader needs to ask now. Answers will come up, such as: do we measure throughput? If we measure throughput, given all the things we spoke about today, let's measure it across the entire SDLC.
[00:39:27] Ori Keren: So I guess that's my answer. If I have to ask engineering leaders one question, it's: how do you measure success? Ask yourself, ask the people who report to you, then take those answers back to the business. And set expectations, because you're gonna have a rough ride next year. Remember, the expectation is for 3x, and you're gonna be proud of your 8% improvement.
[00:39:54] Ori Keren: But if you set the expectations right, I think you're going to have a little bit [00:40:00] easier life as an engineering leader.
[00:40:03] Andrew Zigler: Okay, Ori, I have one quick question before we go. What is the most interesting thing that you have vibe coded this year?
[00:40:10] Ori Keren: Oh, I love this question. So, I did some things related to the business, but let's move those to the side and talk about a cool project. I love music, so I vibe coded a tape, like the tapes I used to have as a teenager. I'm still working on it. With a tape, you can't really just say "okay, next song"
[00:40:37] Ori Keren: and hear the next song. You need to press fast-forward, and you don't know where it will end up; oh, it's in the middle of the old song. So you build a playlist, and you have to really think about what songs you wanna put in it. And then you can move forward or backwards, or record, et cetera.
[00:40:57] Ori Keren: And we're now working on a [00:41:00] visualization of the tape. It's so much fun doing it over the weekend.
[00:41:05] Ben Lloyd Pearson: And if you come back to listen to it, you gotta rewind it if you wanna hear it again.
[00:41:06] Andrew Zigler: I love that.
[00:41:09] Ori Keren: Gotta rewind, yeah. You gotta rewind and wait. You can't just start playing. Yeah, we gotta rewind.
[00:41:15] Andrew Zigler: Great. I love that. It's, it, it's fun to like use code to explore other hobbies and interests too. One thing I built this year was, uh, a recipe app. You know, I love to cook, but I wanted something a little more bespoke for how I collected my recipes and my ingredients and stuff. So I just kind of whipped my up myself.
[00:41:30] Andrew Zigler: So that's a fun, uh, anecdote. I love that.
[00:41:33] Ben Lloyd Pearson: Yeah, and what I love most is that it represents my belief that we're entering this "code is art" phase of the world, where if you have an artistic idea, you just write code that generates that idea for you. It's really cool. Thanks for joining us today, Ori. It's always a pleasure to have you share your insights and check how your predictions performed year over year with our audience.
[00:41:57] Ben Lloyd Pearson: And that's it for today's show. If you enjoyed the [00:42:00] episode, the best way to support us is to rate our podcast on your preferred platform, whether that's Spotify, Apple, or wherever else you might be listening. And if you wanna learn more about LinearB and all the things we discussed today, head over to LinearB.io to check out all of our latest research and content.
[00:42:17] Ben Lloyd Pearson: Lastly, we love to hear from our audience. You can connect with Andrew, Ori, and myself on LinkedIn, or join the conversation about this episode on the Dev Interrupted Substack or LinkedIn newsletter. Thanks everyone, we'll see you next week, and thanks again, Ori.
[00:42:33] Ori Keren: Thank you for having me.



