As the landmark book Atomic Habits laid out, the key to life-changing habits is adopting one effectively and then layering another desirable habit on top of it.
The same is true for efficiencies in software engineering.
When your team adopts one efficiency, sees it bear fruit, then adds the next efficiency habit on top of it, the result is compounding efficiencies.
In this conversation, LinearB’s CTO Yishai Beeri reveals the data on compound efficiencies as experienced by real dev teams out in the wild.
Episode Highlights:
- (2:53) Sourcing the data
- (5:57) Visibility for devs & managers
- (12:15) Improving code reviews
- (19:30) What are compounding efficiencies?
- (21:48) Streamlining the PR process
- (25:52) Results from efficiencies gained
- (33:40) Giving devs back more focus time
- (40:45) How to get compounding efficiencies for your team
Episode Transcript (disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)
Conor Bronsdon: Welcome to another Labs episode of Dev Interrupted. In these episodes, we dive into the most impactful research and insights about engineering, and most importantly, how to apply them to your organization. I'm your host, Conor Bronsdon, and today's topic is one the data team at LinearB has a unique authority to speak on.
Because they've spent all of their time providing visibility into the development workflow process, they've unearthed parts of the software development pipeline that seem to be taking way too long, and they've experimented with tools and ideas that have helped dev teams program workflows and make these problem areas more efficient.
In this episode, we're gonna be talking about what these efficiencies are and the compounding effect they can have on your entire organization when you implement them together. For example, we'll be discussing a way dev teams have cut cycle time by 60%, but only after they implemented another efficiency that previously cut cycle time by almost 47%.
In short, this episode is all about the power of compounding efficiencies in engineering. For those of you who've read Atomic Habits, you may be familiar with the concept: building habits within your teams or your personal life to compound efficiencies. And we're gonna run through these efficiencies so that you can layer them on top of one another to completely change the way your organization works and improve productivity.
To walk me through the places we've found room for improvement in dev pipelines, ways to create efficiencies, and their overall impact, I've got LinearB CTO Yishai Beeri with me. Yishai, welcome to the show.
Yishai Beeri: Hey, it's great to be here.
Conor Bronsdon: Yeah, it's great to have you back. I really enjoyed the last episode we did together around the best programming languages for dev workflow.
I'd love to start by diving into where all this research comes from. Yishai. Can you tell us a bit about where all the information that we're gonna talk about today was sourced from?
Yishai Beeri: Sure. Yeah, at LinearB we're fortunate to have access to the workflows and the data coming from thousands of dev teams, on our free offering and on our paid offerings.
And we have a unique window into understanding what's happening in the PR process, what's happening throughout the development process, where the bottlenecks are. So we can see millions of PRs, thousands or tens of thousands of developers, and thousands of teams, a lot of data that helps us understand how this looks in smaller teams, in large orgs, across different behaviors and, like you mentioned, programming languages.
So it's a wealth of data, and my data team is spending its time picking through it to surface insights about the things we care about: how to improve the developer experience and how to improve productivity by removing those bottlenecks.
Conor Bronsdon: And there's a great analogy for how you're improving efficiency for dev teams, something we can all understand: food. If you ask a nutritionist what is most effective as a diet for losing weight, they might tell you that all diets are effective in helping you lose weight,
because they give you visibility into what you're eating and they make you think about it and how much you should be eating. By being conscious of what you're eating and knowing the right amount of food to consume for your activity level, you'll start to move towards your natural balance and you'll see those opportunities for improvement.
And I know you've said that this can apply to engineering as well. Can you tell me a bit more about that?
Yishai Beeri: A lot of the change that we wanna create, whether it's about our personal lives or about the dev process, comes down to human behavior. So changing human behavior is about changing habits.
You mentioned the food analogy: being aware of what I'm eating or not eating. That's a behavior change. It's not just the awareness itself; it's not eating automatically, not eating emotionally, for whatever reason. Being aware helps me control and helps me improve my behavior.
The same thing happens with development and dev teams and dev processes: by being aware of the productivity, the inefficiencies, the bottlenecks; by talking about them, measuring them, surfacing them; by making them important and part of what we're striving to improve.
That alone starts to move the needle, because if my focus is only on writing the code and I'm not aware of those inefficiencies, those wait times, those idle times, then I may be able to create great code, but at a level of efficiency that is not optimal. If I start to think about and live the pain, give it a name, give it a number, like the context switches, everything that is blocking the team from moving faster,
now I can begin to focus and change behaviors around it. Awareness is the first level, the first layer of behavior change, and that begins to move the needle.
Conor Bronsdon: And is this awareness and visibility component mostly crucial for leaders within the org, or do you see it as something that should cascade throughout the org so that every dev has access to it?
Yishai Beeri: If you compare this to food: if your nutritionist, or a consultant you had, is aware of what you're eating but you are not, that would probably not move the needle, right?
If you meet every three weeks and look at a report together, then they may have insights, and they may say, oh, you're eating too much of that, you need to change. But day to day, if you're not in the loop and you're not the one with the ongoing awareness, that's not gonna work.
And with developers it's the same. If the VP or senior manager is looking at reports and metrics saying there are inefficiencies, the problem is here, they could try to move the needle by saying, hey, let's fix this. That's a long process. And if the developers are not aware, day to day, of the same problem, in a way that they can also react to,
that's not gonna work. And we've seen that: dev orgs that focus just on visibility, show me the metrics, I'm gonna visit the dashboard every three weeks, every month I'm gonna do a retro with my teams. That's nice, but that's not enough. That's not really changing the behaviors.
Conor Bronsdon: So how do you think teams should go about implementing this initial layer of visibility for both leaders and individuals, so that they have the right amounts of information? Cuz I think the concern you're hearing me allude to is: okay, I know most devs do want to know what they're eating, so to speak, and the different impacts their decisions are having on the broader organization, but their focus is typically on delivering their project.
It's not necessarily looking elsewhere. How can you start setting up this program so that everyone is getting the visibility they need?
Yishai Beeri: So I think it begins with understanding what to measure and which metrics are going to be conducive to improvement, to the culture, and so on. This can go wrong if you're measuring the wrong things, if, I dunno, you're counting lines of code, and we've all seen those public examples. It can go wrong in so many ways.
So begin by understanding what to measure: what are the metrics you should be focusing on, and which really represent the problem you're trying to solve? DORA metrics are a good example, a well-known example in our space, of a set of metrics that are team-focused. They're focused on the process, not on individuals, and are a good place to start.
So we have DORA, and there's another set of metrics called SPACE, which is also becoming popular. But the key is to measure the process, not people. You're not measuring or stack-ranking developers; you're looking at where the process is broken or can be improved. You're measuring and helping the teams.
And I think the final step here, and this is where I mentioned not just the manager looking at metrics: you want to use metrics that also lend themselves to an immediate response, something I can do right now to fix it, like a micro-change in my behavior that changes those metrics, or that problem, in a small way for a single pull request or a single piece of work, instead of looking at the metrics and saying, I had a problem last month.
Conor Bronsdon: So this would be something like breaking PRs down into smaller chunks to enable easier review and cut down on context switching.
Yishai Beeri: That's a great example. Small PRs are a leading indicator of good DORA metrics, of short cycle times and quick time to delivery. So measure the leading indicators, the things that you can actually start changing to change the picture.
You get a chance of doing a little better next time, or a little better in this PR, if I'm aware that PR size matters, if I'm aware of the size of the PRs that I'm creating and the team is creating, and we also talk about the desired state. We can set a goal, or decide that we want to have our PRs typically lower than, I don't know, a hundred lines of code, 200 lines of code, whatever.
Now my awareness as a developer of the importance of breaking down PRs is not just a theoretical thing I learned in school or had a manager tell me; this is a live thing. If now, when I'm working on this new item, new bug fix, new ticket, I've broken it down into two PRs which are independent and smaller,
I've started fixing the problem. This is a good metric that lends itself to immediate behavior change, in small amounts, right? This single PR that I broke down into two is not gonna change the overall metric for the team over a month, not on its own. But every dev doing this every day, every dev doing this for even some of their PRs, and starting to shave the PRs down to become smaller.
That is the behavior change. And it's not just for the metric. Having that number on the wall, if you like, helps move that behavior. But the real reason the developers will stick to the new behavior is that they see the benefits: those smaller PRs get reviewed much faster.
They get merged faster, they create less risk in the code base, so I don't have to roll back or fix a bad release. All these are things that developers feel in their gut. Having the number on one side, having awareness, is important, but so is seeing the immediate impact of my small changes.
Oh, it's so much fun, my PR got reviewed like this, I'm gonna do more of that. That behavior change will now persist, and it's no longer dependent on the metric.
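The PR-size norm described above is simple enough to check mechanically, in CI or via a bot. A minimal sketch of that idea; the 200-line budget, function names, and message wording are illustrative, not LinearB's implementation:

```python
# Sketch of a PR-size check a team could run on each pull request.
# The 200-line budget is an example value, not a universal rule.

def pr_size(added: int, deleted: int) -> int:
    """Total changed lines: the simplest PR-size measure."""
    return added + deleted

def size_verdict(added: int, deleted: int, budget: int = 200) -> str:
    """Return a short label a bot could post on the PR."""
    size = pr_size(added, deleted)
    if size <= budget:
        return f"ok: {size} lines (budget {budget})"
    return f"consider splitting: {size} lines exceeds budget {budget}"
```

A team might start by only reporting the verdict, and gate merges on it later, which matches the visibility-first progression discussed in this episode.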
Conor Bronsdon: Great, so let's put metrics into place. Let's get visibility for both leadership and the entire team; that has a certain impact.
Yep. But now it becomes time to say, okay, how do we apply that visibility and knowledge to your actual software development pipeline? And it sounds like code reviews, something we've talked about quite a bit on the show, are the key area where you think teams can make an impact, based on your research.
Yishai Beeri: So first of all, while there are some alternative methods of collaborating as a team to create value through software, there's trunk-based development and other modes, the vast majority of the industry uses pull requests. That has become a standard, and it carries a lot of benefits.
Almost everyone uses PRs or MRs as the way to manage how the team collaborates on creating new code. Given that, we now have a stage in the development process that is very human-centric and relies on asynchronous interaction between people. I put up a PR, I ask some people to take a look.
They need to know that I'm waiting for them, they need to find time, they're reviewing, giving me async comments back and forth. Eventually we get to a place where the PR is ready to merge. This is a very different dynamic than the rest of the pipeline. When I'm coding a new PR, it's typically one person.
Or there's some collaboration, but typically it's just me with the problem and the code. I may have some interaction with the requirements and the product side of things to understand what I need to do, but it's more like creator time.
I need to be in the zone; I need to be uninterrupted to be very productive there. But I'm not dependent on other people in a material way. After the PR is merged, it's typically machine time: there are gonna be automatic tests, and sometimes there's manual testing, a QA process, that happens before or after merge.
But in many ways, what happens after merge is already automated or semi-automated, and again less dependent on frequent interaction between humans to reach an agreement. So the PR process, from my suggestion to add to the code base all the way until it's approved and merged, has a different timescale.
It dominates the time it takes to create value, because it's human-dependent. We humans do not respond in seconds in our jobs. Reviewing a PR takes time; it's not a 30-second run of a CI job.
Conor Bronsdon: It's where dependencies start to come into the process.
Yishai Beeri: Yes. And these are dependencies on humans, rather than on machines or code or automations.
So those two specific behaviors make the PR process very important. If we are spending time there, waiting for the review or waiting to respond to a review, we are creating context switches, which are expensive and painful for developers. I have to respond to comments on a pull request
I finished three days ago. I've already moved on; I'm working on the next thing. I have to reset my brain to what I was doing before and what this review means. And this back and forth can take a while. So that is why the PR process is a very lucrative place to find and remove dependencies and bottlenecks.
And those kinds of improvements really help the developer experience. Everyone wants to get their job done, to push value, to fix the bug and move on. And if I have to wait for someone else to review my stuff, and eventually to convince them that it's okay and we can merge, that is holding me back.
And developers typically want to be in teams that respond fast, that help each other quickly get things done, small steps through small PRs. That all creates an ongoing feeling of achievement.
Conor Bronsdon: Yeah, the statistics I'm seeing here in your research are fascinating. The average cycle time for a piece of work, across LinearB's research, is a full seven days, but half of that is spent in the PR lifespan. So there's clearly a huge stumbling block here, even though, when we break it down, cycle time in fact will go down significantly
if you cut a PR from, to your point, 200 lines of code to a hundred lines of code. And conversely it'll double going in the other direction. So I guess my question is: if we know there are these sticking points, that this is a problematic area for dev production, what benefits do you get simply from adding visibility so that teams are aware of it?
Yishai Beeri: Yeah, so if we identify that this is a key area of the process that has a lot of inefficiencies and creates friction, then you want your metrics and your visibility to focus on it. It's not the only thing to measure, but cycle time is one of the important metrics, and a good place to start if you're not measuring anything. It really gives you visibility into how long it takes me as a team to move value from the beginning of coding a new piece of value, or a fix, or whatever, all the way until it's in production.
A large part of that is gonna be the PR process. Typically, you wanna break cycle time down into its different segments or behaviors, because when I'm coding, the behaviors and the bottlenecks are different than during the PR process, waiting for a review, waiting for someone to even take a look.
And eventually, when the PR is merged, there's deployment time, which could be automated, could wait on external signals, could be another team's problem. Because the dynamics inside cycle time are different, you want to tease them out and have different measurements for each phase of the process.
So measuring that area and getting some detail on the different steps and behaviors is crucial. As I mentioned, so is adding some of the leading indicators, and PR size is a great leading indicator. We've seen, like you mentioned, that smaller PRs get reviewed faster, get merged faster,
and get deployed faster and safer. The data is very strong here: as you mentioned, going from a hundred lines of code to 200 lines of code typically tends to double the cycle time. 200 lines is still pretty small; as an average it's a great place to be. Moving up the PR size makes everything slow down.
Because we know it's such an important leading indicator, let's measure it and start changing the behavior around it, not just around the final cycle time metrics, which are the result. So that is a good example: measure the important thing, which is the cycle time, and then add measurements of what is gonna predict and impact the cycle time, like the PR size.
Give the developers ongoing visibility into PR size, not just a total, retro kind of view. Do I have a problem with my current PR? Is it waiting too long? Is it too large? Is it within the boundaries of what the team wants to have as its best behavior? That allows me as a developer to start shifting my behavior.
So: metrics that measure the right problem, plus ongoing awareness and the ability to respond to those metrics, not just in retrospect.
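The segmentation described above can be computed from a handful of timestamps per piece of work. A minimal sketch; the field names and phase boundaries are assumptions for illustration, not a LinearB schema:

```python
from datetime import datetime

def cycle_time_segments(first_commit, pr_opened, first_review, merged, deployed):
    """Split cycle time into the phases discussed: coding, pickup, review, deploy.
    Each value is a duration in hours."""
    def hours(start, end):
        return (end - start).total_seconds() / 3600
    return {
        "coding": hours(first_commit, pr_opened),   # writing the change
        "pickup": hours(pr_opened, first_review),   # waiting for a reviewer
        "review": hours(first_review, merged),      # review back-and-forth
        "deploy": hours(merged, deployed),          # mostly machine time
    }
```

Breaking the one number into phases like this is what lets a team see that, say, pickup time, not coding time, is where the week goes.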
Conor Bronsdon: So there's that first level of visibility, where you can break down your cycle time into segments, understand your metrics, and start to respond as a team or as an individual.
When someone does that with LinearB and starts to manage their software development life cycle that way, what is that first compounding efficiency impact? What do you see in the numbers when people do that?
Yishai Beeri: So what we see is that you should be expecting a very sizable, very noticeable change in those metrics: in the size of PRs,
in cycle time and its parts, like how long it takes to pick up or review a PR. I'm talking about 30, 40, 50% improvement across a quarter, maybe four months, of using LinearB or a similar system to measure. And if the context is provided to developers ongoing, like I said, not just in retrospect, then we see across the board that cycle time gets slashed by half.
That's a very typical, and very immediate, improvement that you see. A lot of it comes from the PR size: PR sizes go down by a third, and that causes a lot of the improvement, because, again, smaller PRs get picked up faster. But there's also awareness of the actual cycle time
and its segments. If I'm looking at pickup time, which is how we define how long it takes a reviewer to begin reviewing a PR that's waiting for them, that's all about communication, all about how I let them know that they need to review a PR. Getting the email from GitHub is not good enough;
no one sees that. We used to be able to shout to each other in the room, hey, I just have a new PR, can you take a look? Or Slack each other. But these are all manual, broken methods. Having some automation, some solution around the communication between people, to help people know that something's waiting for them, creates the improvement. You can slash pickup time by half in a very typical case, so I can get responses to my PRs very quickly, and if they're small, even more quickly.
And that really creates those double-digit 30, 40, 50% improvements across the board.
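The nudge mechanism can be pictured as a small scheduled job: find PRs that have waited past a threshold and ping the reviewer in chat. This is a sketch only; the PR record shape and the four-hour threshold are invented for illustration, and a tool like WorkerB does considerably more:

```python
from datetime import datetime, timedelta

def review_nudges(open_prs, now, max_wait=timedelta(hours=4)):
    """Yield one reminder message per PR whose review has stalled.
    Each PR is a dict with number, reviewer, state, review_requested_at."""
    for pr in open_prs:
        waiting = now - pr["review_requested_at"]
        if pr["state"] == "awaiting_review" and waiting > max_wait:
            hours = int(waiting.total_seconds() // 3600)
            yield f"@{pr['reviewer']}: PR #{pr['number']} has waited {hours}h for review"
```

The point is the delivery channel: a message that lands where the reviewer already is, instead of an unread GitHub notification email.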
Conor Bronsdon: So as you move from informal processes and begin to streamline them for pull requests, things like assigning reviewers, standardizing PR size like you mentioned, maybe adding context through tooling: how can you attempt to resolve these issues in the PR process?
Yishai Beeri: So yeah, you mentioned tooling, and I think context is key. We talked about the metrics and understanding why the behavior change is needed, but additional things like context, and I'm gonna give some examples, are also crucial to start moving faster. We analyzed some data, we talked with customers, and we understood that people reviewing a PR, people basically getting pulled away from what they're doing to help you get your job done,
besides sometimes being interrupted and sometimes having an overload of reviews, on the surface all of these review requests look the same. Someone needs a review. Okay, what is this? Is it urgent? Is it a bug fix, like an urgent bug fix for production? Is it a new feature?
How long is it gonna take me? Is this a big thing or a small effort? Can I do this in the five minutes I have left before a meeting, or do I need to set aside an hour? There are many pieces of context that are typically not available. You just get a link or an email to a PR; you have to go into it and start looking to understand what's going on.
You open the PR, you get this big wall of red and green, and you say, oh, I don't have time for this, this is complex. The more context you are able to push as a developer towards your reviewer, the more upfront work you save them: the discovery work and the orientation work. So great PRs also describe what they're doing in terms of the code.
If you as a developer can spend some time to add context, that will get you a better review, and you'll get it faster, because the reviewer can make an intelligent choice about when to do it, and am I the right person to do it? The more context, the better the process will be. So some of that is on the developer: improve how the PR looks and how the PR describes itself.
But there's also tooling that can help you with some of that context: surface information about the Jira ticket this PR is assigned to or related to. Is this a bug? Is this a story? Is this a P0? Tell me how long it's gonna take me to review. We have implemented a machine learning model for estimated review time, and we can tell a reviewer, when they get a request to review a PR: this is gonna take you two minutes,
or this is gonna take you 30 minutes. Okay, now I can make...
Conor Bronsdon: Based on the area of the code base and the number of lines of code, or what are the inputs?
Yishai Beeri: Yeah, based on a multitude of features or parameters: which files, what types of files, the size of the change, the area of the code base, who the reviewer is and who the coder is.
And again, we have a great data set to train this with, and we see the reviewers respond to those cues and to that additional context and change their behavior. Now, when they know this is a small PR and a quick win, or this is gonna take me two, three minutes, they jump on it quickly; and when they see this is gonna take me 30 minutes or an hour, they allocate time.
They schedule it for later, so they can now manage their context switches better. And this means they respond faster overall, faster for the small PRs, for the easy wins, instead of: everything looks the same, I have to guess, or spend my time taking a look and stepping back. So context is very important.
And it's part of solving the human communication problem that is inherent in the PR process. So it's about letting people know, and about letting them have enough context so they can make a smart decision.
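LinearB's estimated review time comes from a trained ML model over features like the ones listed above. As a stand-in, here is a deliberately crude heuristic that shows the shape of the idea; every weight and path pattern below is invented for the sketch:

```python
def estimated_review_minutes(changed_lines: int, files: list[str]) -> int:
    """Crude review-time guess: base cost, per-line cost, file-type tweaks."""
    minutes = 2 + changed_lines * 0.08           # base plus per-line effort
    for path in files:
        if path.endswith((".md", ".txt")):       # prose reviews faster
            minutes -= 1
        if "auth" in path or "payment" in path:  # sensitive code needs care
            minutes += 10
    return max(2, round(minutes))
```

Even a rough number like this lets a reviewer decide "now or later", which is the behavior change the real model enables.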
Conor Bronsdon: So once teams have this visibility into their metrics, and maybe they've benchmarked it against industry standards or some of the research you've done, they go into the PR process and start to fix things. They go, hey, maybe we need to move from 250 lines of code to 150, or 200 to 100,
and provide some context with tooling, programmable workflows like gitStream, which I know your team's experimenting with a lot. What are the effects you're seeing once those efficiencies are introduced into the code review process, on top of the visibility work that's already in place?
Yishai Beeri: You can talk about three layers. There's the visibility and awareness. Then there's micro-automation of the human process; in LinearB, this lives inside our WorkerB smart bot, which lives in Slack and Teams. This is about nudges for delayed PRs, and about the additional context I mentioned, like estimated review time.
It's about letting the reviewer know that they're up, they have some work to do, and letting the coder know that the review is waiting for them to respond to. All of these micro-optimizations move you from the 30% improvement to the 50 or 60% improvement. That's, I would say, the first line: visibility and some automations.
The second line is what you mentioned, gitStream. That's going even deeper into automation, and it's about not just the human interaction problem; it goes deeper into understanding the process. The process for doing a PR and approving code today is pretty uniform. Almost all companies have a policy:
if someone has reviewed it, four eyes, we're good. This is very typical. Everything needs to get a review, and it doesn't matter what kind of review; as long as it's approved by someone with authority, we're done. Some companies, or some orgs with higher regulation or requirements, have a blanket policy of any two reviewers, six eyes.
But typically the process is very rigid, very uniform, and cannot take into account the differences between a PR that changes a line in the documentation or replaces an image for a website, and a PR that changes code logic, or touches sensitive areas, changes an API, handles tokens, whatever.
And a lot of that artificial uniformity in the PRs is because of a lack of tooling. There's no easy way to make those distinctions and to codify the required behavior across different types of code changes. So teams just default to a blanket approach, and that is not efficient.
So gitStream is our way of codifying a process and starting to tease apart what actually should be done: what is our desired process, as a team or as an org, for getting code approved? Now you can automate things like deciding whether this needs one reviewer or two reviewers. Maybe pull in a security expert for this PR because it touches relevant areas in the code, or the changes are such that you need a security expert to approve them.
Being able to do that on only the 5% of PRs which really need it, instead of having a blanket policy saying security has to approve everything, is a huge gain. If your default policy is having two reviewers for every piece of code, but you can carve out some of your load, maybe the 20% of your PRs that can live with just one reviewer because they're simple,
they only change static files, or whatever you need to carve out, you've now gained a huge improvement. You're just not doing work you did before that is not really needed. So the review burden is much smaller for the reviewers; their precious time is spent where it really is needed and matters.
And those kinds of automations and codifications of a smart process can really push the needle, again in double-digit numbers. In our research, what we've seen is that when gitStream is employed and doing automations on PRs, we see another 50 or 60% reduction in cycle time.
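gitStream itself expresses rules declaratively in configuration files; to show only the routing logic being described, here is a small sketch in Python. The file suffixes, path prefixes, and policy shape are invented examples, not gitStream's syntax or defaults:

```python
DOC_SUFFIXES = (".md", ".rst", ".png", ".svg")
SENSITIVE_PREFIXES = ("src/auth/", "src/api/", "src/billing/")

def review_policy(changed_files):
    """Route a PR to a review policy based on what it touches."""
    if changed_files and all(f.endswith(DOC_SUFFIXES) for f in changed_files):
        return {"reviewers": 0, "label": "safe-to-merge"}    # docs/assets only
    if any(f.startswith(SENSITIVE_PREFIXES) for f in changed_files):
        return {"reviewers": 2, "label": "security-review"}  # pull in an expert
    return {"reviewers": 1, "label": "standard-review"}      # default four eyes
```

The gain comes from the first and second branches: the blanket policy stops applying to changes that never needed it, and extra scrutiny lands only where it matters.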
Conor Bronsdon: So these programmable workflows are enabling this next layer: okay, we've got visibility, we've got benchmarks, maybe, of what success looks like. We can gain a 47% improvement in cycle time, and now we're gonna do it again to get a 50, 60% improvement.
Once again, by helping ensure that every PR is treated differently, the way it should be, instead of being treated with a uniform process.
Yishai Beeri: And the nice thing about this automation is that you can slow-roll it. You begin with, let's add some visibility; let's start to add labels on PRs.
This is a PR that could be approved without a review, that could be automatically approved. That's a label, and the team can now simulate the rules they would have. If I'm saying documentation changes do not need a reviewer, I can start by just adding a label saying this could be automatically approved.
Once the team is confident, they can then turn on an automation switch and say, now it's actually automatically approved. Or look at the flip side: we have teams using this to say, if your PR is using a deprecated API, automatically reject the PR with a comment. So this is like an automated reviewer doing the obvious thing instead of a human saying, hey, don't use this deprecated API.
So you've now saved some time, you've saved a review process, and you haven't risked anything, because you're not even letting the machine decide what to accept; you've just removed work. So you can start with more visibility: labels, pulling reviewers into the PR. Selecting the right reviewer becomes a problem in larger organizations.
It's not always clear who should be reviewing my PR: people onboarding to teams, new people in the code base. Hey, I don't really know who to pull in. And GitHub code owners is very rigid. It's hard to maintain, and it doesn't really give me the flexibility of knowing who is the right person to pull in right now.
So we have code experts in gitStream, which uses the actual data and history. You can tune it to say, give me the best expert, the one who knows this code base. Or you can tune it to say, pull in people so that knowledge gets spread across the team and the org: maybe pull in not the best expert, but someone who, by reviewing the PR, will now create some spread of the knowledge.
So having that flexibility is a game changer for, eventually, a faster process and a better experience. I don't have to hunt around for the right reviewer, and I can also start serving complex security and compliance use cases, making those practitioners happy, because they can get pulled in at the right moment if the PR is touching a sensitive area or a security tool is complaining.
Again, anything that is blanket and uniform, where all PRs have the exact same procedure and control, is a losing proposition.
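The deprecated-API auto-rejection mentioned above is essentially a pattern check over a PR's added lines. A minimal sketch of that automated first-pass reviewer; the deprecation table and function are made-up examples, and a tool like gitStream would express this as a declarative rule rather than code:

```python
# Hypothetical deprecation table: banned identifier -> suggested fix.
DEPRECATED = {"old_fetch": "use http_client.get instead"}

def review_added_lines(added_lines):
    """Return (approved, comments) for the added lines of a PR diff."""
    comments = []
    for lineno, line in enumerate(added_lines, start=1):
        for api, advice in DEPRECATED.items():
            if api in line:
                comments.append(f"line {lineno}: {api} is deprecated; {advice}")
    return (not comments, comments)
```

The machine only rejects with an explanation; it never approves on its own, which is why this removes work without adding risk.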
Conor Bronsdon: So once you layer this programmable workflow tooling, gitStream, onto your visibility and start improving the PR process, you're seeing these incredible gains, which is fantastic. But there are still efficiencies to be had in improving the quality and amount of focus time, and building habits that can help
limit disruption for devs. And yes, programmable workflows will help with some of that, but it also goes beyond tooling. As most people who listen to this podcast know, we're adamant that developers need to be treated as knowledge workers who need time to problem-solve, think, and create. They're not robots on a production line.
I know from personal experience that it doesn't work very well when you're asked to treat work that way, and you can't necessarily be productive on a whim. So that's why, in addition to providing this visibility and the metrics, I think we typically evangelize providing devs with core blocks of uninterrupted time to code, create, and think.
Yishai, I know you and your team have also done research on this topic. What have you found to be effective?
Yishai Beeri: It could sound like an oxymoron. We're saying, add some automation to the process. Doesn't that move me toward treating my developers as cogs in a machine? Oh, everything's automated.
I'm just being called on like an API. But in reality, those automations actually help the developers, because again, they provide them with more autonomy and context, so they can choose when to do what, and they remove toil, which is work that could be avoided, like all those reviews I mentioned.
So the first thing I wanna say is, automations in the PR process do not mean, do not entail, that the developers are working in a more automatic mode. It's actually removing some of the toil and letting them spend more time creating. But as you say, there are very common behaviors and issues around what's needed to be a productive coder, a productive developer.
For example, uninterrupted focus time to actually code my stuff is crucial. And we've seen that context switches, if I'm in focus and I need to move to another problem, respond to someone, respond to a review, or review someone else's code, those are expensive.
It takes 20 to 25 minutes on average to get back to what I was focusing on. That's lost time. We all feel it in our gut: it makes us more tired, it makes us more irritable, but it's also lost time. And if I'm always interrupted throughout my day, I feel like the day is gone and I've done nothing.
That's the feeling. I've actually accomplished very little. I may be helping others, I may be interacting, and those are important things too, but I have made very little progress on my creation. I need focus time to write my code, to test it correctly, to wrap my head around the problem.
Context switches and interruptions are really an efficiency and productivity killer.
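To make that cost concrete for readers, here is a quick back-of-envelope calculation using the 20 to 25 minute refocus figure from above; the daily interruption count is purely an illustrative assumption.

```python
# Back-of-envelope cost of context switches. The ~22.5-minute refocus
# time is the midpoint of the 20-25 minute figure cited above; the
# number of interruptions per day is an illustrative assumption.

def focus_lost_per_day(interruptions: int, refocus_minutes: float = 22.5) -> float:
    """Minutes of focus lost just getting back into the zone after each interruption."""
    return interruptions * refocus_minutes

lost = focus_lost_per_day(6)  # e.g. six interruptions in a day
print(f"~{lost:.0f} minutes (~{lost / 60:.1f} hours) lost to refocusing")
```

Even at a modest half-dozen interruptions a day, the refocusing overhead alone eats a couple of hours, before counting the interruptions themselves.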
Conor Bronsdon: This becomes a bigger issue as organizations scale as well.
Yishai Beeri: Correct, yeah. With larger and larger development teams and orgs, there are more people, more stakeholders, more people affected by and interacting with what you're doing.
And developers tend to find themselves in more meetings, sometimes meetings that could be avoided or should be avoided. So if you're looking at it, there might be, I dunno, around 17 hours of focus time left for me in a week, while time spent in meetings could range from 17 hours up to 22 or 23 hours.
The larger the org gets, the more communication is needed to keep the org functioning, and developers are finding this very taxing. Even very simple methods help, like grouping the meetings together. Obviously, avoid meetings that are avoidable, that's always a battle, but group them so that the rest of the time is uninterrupted focus time, which needs to be long.
You can't get into the zone in, oh, there's a 30-minute hole between my meetings; I'm not gonna get anything done as a developer there. So having consecutive, aggregated time which is free from meetings is a very basic step. It's hard, it's not easy to do in large organizations with many moving parts, many stakeholders.
There are some automation tools out there, like smart agents or helpers that live in your calendar and help you move things around. But it's also a challenge for the team leaders and for the managers to constantly look at that and keep it in check, because it's easy to schedule a recurring meeting with 15 people: two clicks and you're done.
So being aware of that, pushing back, and allowing the developers to have that consecutive free time for focus is crucial.
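The "group your meetings" advice above can be sketched as a small scheduling check: merge a day's meetings and measure the longest uninterrupted block left over. The workday bounds and example meeting times below are assumptions for illustration only.

```python
# A minimal sketch of meeting grouping: given a day's meetings as
# (start_hour, end_hour) pairs, find the longest meeting-free interval.
# Workday bounds (9:00-17:00) and example times are illustrative assumptions.

def longest_free_block(meetings, day_start=9.0, day_end=17.0):
    """Return (start, end) of the longest meeting-free interval in the day."""
    merged = []
    for start, end in sorted(meetings):
        if merged and start <= merged[-1][1]:        # overlapping or back-to-back
            merged[-1][1] = max(merged[-1][1], end)  # merge into one busy block
        else:
            merged.append([start, end])
    best, cursor = (day_start, day_start), day_start
    for start, end in merged + [[day_end, day_end]]:  # sentinel closes the day
        if start - cursor > best[1] - best[0]:
            best = (cursor, start)
        cursor = max(cursor, end)
    return best

# Scattered meetings leave only short holes in the day...
print(longest_free_block([(9.5, 10), (11, 11.5), (13, 13.5), (15, 15.5)]))
# ...while the same total meeting load, grouped back-to-back in the
# morning, frees one long block for focused work.
print(longest_free_block([(9, 9.5), (9.5, 10), (10, 10.5), (10.5, 11)]))
```

The point of the sketch is exactly Yishai's: the total meeting hours are identical in both cases, but only the grouped schedule leaves a block long enough to get into the zone.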
Conor Bronsdon: And from the polling of managers and devs, it seems everyone agrees this is crucial: 76% of managers say more focus time for developers leads to more revenue. 90% of leaders say more focus time leads to more productivity.
80% of engineers say more focus time leads to faster production. So what can we do to make sure dev teams are using their time to code and not fragmenting it with meetings?
Yishai Beeri: I think this is a place where developers need help. It's very hard for the ICs, the developers, to protect themselves from the ongoing pull to have more meetings and more interruptions.
This is mostly on the team leaders and on the managers: to make sure that the calendars are not fragmented, to help fight off these recurring meetings or, again, aggregate them so that there's enough consecutive, unfragmented focus time. Try to remove mandatory meetings or make meetings optional.
Move things to async. And we have a constant dilemma as managers: we wanna push context down. In my view, the only way developers can be successful is by having context. They need to know the why. They need to understand the business. They need to understand the customer pains, even if they're just fixing this widget.
And typically the way to deliver that context is through meetings. So it's a balance, and you don't wanna lose that context; if the developer is left alone and has no context, they're not gonna do the right thing. So this is an ongoing challenge. But being aware of how fragmented the calendar is and of the ongoing tax of mandatory meetings, and consistently backing your developers and fencing off their time, I think is a crucial part of being a team leader and an engineering manager.
This should be top of mind because of that cost, the very expensive cost of fragmentation of my time.
Conor Bronsdon: Makes total sense. And I know that as you continue to layer these together, automate meeting blocks, support programmable workflows, and have visibility for teams, you start to see these major impacts.
So to recap: you need to improve and increase your team's focus time. Average devs get way less than they should, and when I say average, I just mean any dev. You need to provide them with blocks of uninterrupted time to code. You need visibility into metrics on how your team's performing, so that you can understand not only what's happening within your org but also benchmark it against other teams through research like what you've done.
That way you can make decisions on how and what to improve. And then you also need to reduce the number-one inefficiency in cycle time you talked about: code reviews and pull requests. By layering these three elements together, your team has more time to code, makes smarter decisions about how to code and what to work on, and turns around code reviews, that big sticking point, faster.
If listeners are interested in using these compound efficiencies, what can they do to get started?
Yishai Beeri: Obviously, I'd invite listeners to take a look at LinearB's offering. We have gitStream, which is the automation engine for pull requests. This is where you can get started with removing toil and adding context automatically.
It lives in the PR, it's very easy to get started, and it's completely free. LinearB also has the full suite of metrics and visibility, and WorkerB, which I mentioned, with its nudges helping the human communication parts. So it's really an end-to-end approach to measuring, to improving, and to allowing the developers to improve, through automation and through helping their ongoing awareness of what matters.
And on the manager side, there's the connection with the business: understanding investment versus business priorities. These things are important to keep all that improved, more efficient effort aligned with what we should be building, because you can be extremely efficient in your work,
reduce all the friction, remove context switches, and still be building the wrong things. So the final piece is understanding what you should be building and what to focus on. All of that is available in our solution for engineering managers and for devs. But it's really easy to get started with some automations using gitStream.
You can get this into a new repo in five minutes and start getting the benefits.
Conor Bronsdon: Fantastic. Yishai, thank you so much for coming on the show. Really appreciate you walking us through this, and hopefully this will help other engineering leaders apply not just one efficiency but multiple, in order to compound these impacts.
Because it's awesome to see the impacts of this research: you get that initial 47% improvement in cycle time, and then you add 61% for programmable workflows. The results are just astounding. Thank you for sharing the research with us. If folks wanna learn more about the research, what is the right place to go?
LinearB.io/benchmarks
Yishai Beeri: Yep. Thanks, Conor. This was great.
Conor Bronsdon: Yeah, always great chatting with you. For those listening, if you want more content from Labs like this, make sure to let us know. We love hearing from our listeners, whether that's in a review, a tweet, on LinkedIn, or anywhere else.
We want to know if this is the kind of topic you want us to cover. And again, thanks so much for coming on the show.
Yishai Beeri: Thank you.
A 3-part Summer Workshop Series for Engineering Executives
Engineering executives, register now for LinearB's 3-part workshop series designed to improve your team's business outcomes. Learn the three essential steps used by elite software engineering organizations to decrease cycle time by 47% on average and deliver better results: Benchmark, Automate, and Improve.
Don't miss this opportunity to take your team to the next level - save your seat today.