To open the show, Ben and Andrew dissect the latest headlines, from DeepSeek's challenge to ChatGPT to the growing importance of AI cybersecurity. They explain why everyone suddenly searched 'Jevons Paradox' last week and discuss strategic AI investments from IBM and financial giant Goldman Sachs. These investments underscore the growing importance of strong engineering leadership in the age of AI.

Then Luca Rossi of Refactoring joins the show to discuss his latest research. Drawing from a comprehensive survey of engineering professionals, Luca breaks down the key traits and practices of successful engineering teams, revealing surprising correlations between team happiness, shipping frequency, and recognition by non-technical leadership.

Show Notes

Transcript

Ben Lloyd Pearson: 0:00

Hey, Andrew, I heard you have been doing some really cool experiments with generative AI. Why don't you tell our audience about what you're up to?

Andrew Zigler: 0:08

I have been in the lab this week, full confession, working on the developer experience for LinearB's gitStream, where we're using the power of context with generative AI in the code review process. I imagined the process a bit like a restaurant. Bear with me. The model is the chef: it's skilled and trained and capable of preparing an incredible variety of things, depending on what you give it. And the data, that's like the ingredients. Without good ingredients, even the best chef will struggle. And when the ingredients are local as opposed to store-bought, they're often better. But the context is the order from the customer. And the customer is everything. They tell the chef what to prepare, how to season it, if they have any allergies, whether to mix it in or put it on the side. And no matter how great the chef is, or the ingredients are, if the order is unclear or missing, the dish doesn't meet your expectations, and you're not going to be happy.

Ben Lloyd Pearson: 1:07

I think I'm starting to see where you're going with this.

Andrew Zigler: 1:10

So, at LinearB, we're focusing on that power of the context to give the customer exactly what they ordered. Because no one cares if your chef's from an elite academy, or if your ingredients are fully organic, if you order a steak and you get a pizza. To that end, LinearB released the AI Starter Kit for PR reviews. It's a collection of workflow automations that make it easy for engineering teams to start experimenting now with the power of context in their PR reviews. It brings the power of context to your AI models and services, along with native gitStream capabilities.
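
To make that restaurant analogy concrete, here's a minimal, hypothetical sketch of the idea: the prompt sent to a code-review model combines the diff (the ingredients) with team-specific context (the customer's order). The function and field names below are illustrative only, not gitStream's actual API.

```python
def build_review_prompt(diff: str, team_context: dict) -> str:
    """Assemble a code-review prompt: the model is the chef, the diff is the
    ingredients, and the team-specific context is the customer's order."""
    guidelines = "\n".join(f"- {rule}" for rule in team_context.get("review_guidelines", []))
    return (
        f"You are reviewing a pull request for the {team_context['service_name']} service.\n"
        f"Team conventions to enforce:\n{guidelines}\n"
        f"Focus areas for this repo: {', '.join(team_context.get('focus_areas', []))}\n\n"
        f"Diff to review:\n{diff}"
    )

# Example usage with made-up context:
prompt = build_review_prompt(
    diff="--- a/billing.py\n+++ b/billing.py\n+def charge(amount): ...",
    team_context={
        "service_name": "billing",
        "review_guidelines": ["No bare excepts", "New endpoints need tests"],
        "focus_areas": ["error handling", "payment data paths"],
    },
)
print(prompt)
```

Same model, same diff; change the context and you get a very different review.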

Ben Lloyd Pearson: 1:45

Yeah. So this is really cool because we're seeing so many products out there that are emerging to tackle this space. There's definitely a lot of confusion about what works versus what doesn't, and what I've really liked about the focus we've been taking with this experiment so far is that it's on responsible adoption. So if you're just getting started with AI, maybe you've just bought one or two tools and you have an API that your developers can query, or if you're somebody who's already purchased a bunch of AI tools and is trying to roll them out responsibly and securely and effectively, we're building these tools to help you mature that process. So where can our audience get their hands on this?

Andrew Zigler: 2:23

Well, anywhere that Google searches or search engine results are served is probably going to point you in the right direction. The gitStream documentation is going to have the best starting places for getting started with the AI Starter Kit, as well as the other features available in gitStream. But if you want someone to walk you through a more in-depth guide or a walkthrough based upon your own needs, definitely reach out on the LinearB website. It's really easy to book a demo with an expert and get your hands on some great ideas for how to take this back to your team. And we'll put the links for this in the show notes as well.

Ben Lloyd Pearson: 2:54

Yeah, awesome.

Andrew Zigler: 2:55

So Ben, what have you been reading this week?

Ben Lloyd Pearson: 2:58

Last week we were talking about all these investments that are going into AI. Of course, this was before DeepSeek completely upended the market, but I am proud to say that many of our predictions from then were relatively accurate. You know, some might say we have some foresight here. Another story has caught my attention for reasons that we'll get into in a moment. IBM is, you know, a company that has been around forever at this point, very early to the AI space with Watson, but a company that I think we've all kind of forgotten about for a while because there just hasn't been a lot of groundbreaking news other than maybe them buying Red Hat and HashiCorp. Their Q4 earnings report just came out. Usually these are not interesting at all, but they showed some extremely bullish growth in one area in particular, and that was their AI services. They announced 5 billion in total sales, and 2 billion of that was in Q4 alone. So massive, massive quarter-over-quarter growth. Naturally there was a huge spike in their stock price, sort of bucking the trends this week of many of the companies in AI. But there's one finer detail that was hidden under all of this, and that's that 80 percent of this is actually consulting. So only about 1 billion was from actual software. Getting back to our opening, where we were talking about experimenting with AI: you hire consultants when you're not sure what you're doing and you're experimenting, you know? So I think that's actually a very good indicator of where the industry as a whole stands today. The question I keep asking myself is, when is that going to change? When is purchasing software tools going to become the main driver of AI adoption? But we've got an upcoming guest from IBM, don't we, Andrew?

Andrew Zigler: 4:51

Yes, I'm very excited for our upcoming guest on the show. We have Dr. Ashoori of watsonx AI at IBM coming on to talk about how engineering teams are making those first moves now to adopt AI practices, and the things they can do to be more effective in the future with the tools they pick up. Getting a playbook like that from an expert at IBM is invaluable. Like you just said, so much of their market value comes from providing insights and details on how people can implement this. So imagine: we got a bite of that for free here on Dev Interrupted for our listeners. I'm really excited to share it in an upcoming episode.

Ben Lloyd Pearson: 5:27

Yeah, awesome. You know, I think it's really going to be an awesome story of how legacy companies are reinventing themselves. Everyone is facing this new reality of AI disrupting them, so everyone has to be focused on how to reinvent themselves for this new era. So let's talk about some of that disruption. I think you've got a new story that you wanted to share that gets into some of the events of the last week.

Andrew Zigler: 5:50

I read a great article that kind of summed up some of the disruptions this week. At the beginning of the week, DeepSeek dethroning ChatGPT in the App Store. NVIDIA losing $593 billion in market value in a single day, which is a record one-day loss for any company. And everyone Googling 'Jevons Paradox.' And Ben, were you among those Google searches?

Ben Lloyd Pearson: 6:12

No, I let my GPTs do my searching for me these days. I don't need Google anymore. That's a joke. I actually did Google it, so I am definitely one of those millions of people who wanted to learn what it is. And for anyone else out there who hasn't been exposed to this yet this week: the Jevons Paradox was observed in the late 1800s, when the invention of a more efficient steam engine led some people to think that coal demand would plummet, but the reverse was actually true. By making a more efficient steam engine, it enabled more people to afford the amount of coal that they needed to run it, which increased demand for coal. And when you really think about it, chips are just rocks, you know? Better efficiency is going to drive more demand for rocks. So I think companies like NVIDIA in particular will probably end up being fine through all of this. But one thing in the article, which we're going to link to in our Substack newsletter and in the show notes, is that it reinforced a theme we brought up last week about how there's a bit of an arms race emerging as it relates to this artificial general intelligence that everyone is pursuing. And the companies I would be concerned about, if I worked for them, are definitely the ones that are focused entirely on AI. The question I'm asking is: will these events be a catalyst for VCs to pull back from AI to focus on more efficiency, or are they going to double down on quality to try to keep leapfrogging the competition? Another story that came out that I was just thinking about, and that is maybe related to this: OpenAI just announced a new partnership with the U.S. national laboratories that is specifically focused on cybersecurity for the U.S. federal government. So we're already seeing this idea of there being real national security interests in building this stuff and securing it. And I think this also highlights an issue that a lot of these Chinese companies are going to have, which is that they have to adhere to their local laws. Companies that really value their security and their privacy, and that don't want to hand over potentially competitive intel to foreign agents, are in a pretty risky situation if they try to go with something like DeepSeek or any of these others. And I think more companies with similar services have emerged from China in the last week.
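
As a rough, back-of-the-envelope illustration of the paradox Ben describes, here's a tiny sketch with entirely made-up numbers: if efficiency gains cut the cost per unit of work enough, total consumption of the underlying resource can rise rather than fall.

```python
# Hypothetical numbers only, to illustrate the Jevons Paradox for compute.
cost_per_unit_old = 10.0   # arbitrary cost of one unit of AI work before
cost_per_unit_new = 1.0    # a 10x efficiency gain makes each unit cheaper

units_demanded_old = 1_000    # workloads people could justify at the old price
units_demanded_new = 50_000   # cheaper inference unlocks far more use cases

spend_old = cost_per_unit_old * units_demanded_old   # 10,000
spend_new = cost_per_unit_new * units_demanded_new   # 50,000

print(spend_new > spend_old)  # True: total spend on compute rises, not falls
```

Whether demand actually grows that much is the open question, but that's the mechanism behind the paradox.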

Andrew Zigler: 8:37

And while that thread is definitely there, on the other hand, this environment also makes it even more difficult for engineering leaders to justify costly AI spend on maybe more traditional or Western model providers, especially justifying that to non-technical executives who may not fully understand the risks. They see the cheaper number and they see the savings they could be gaining while still innovating faster than their competitors, and that's very appealing. So when a cheaper service is sitting right there, ready for use, that really calls everything into question about how the strategy should play out. And while some industries and organizations could never tolerate those risks to their IP, like you said, handing over their information to a foreign entity, nearly every industry is experimenting with AI in some capacity, and everyone's risk level is going to be very different. And, you know, even Perplexity, which you just mentioned, you know, doing your Google searches for you, they even have a DeepSeek R1 mode now. You can use it there. And DeepSeek is also open source, so you could run that efficient model locally with that benchmark performance. But of course, that does nothing to address the censorship concerns related to the LLM, which also hit the news this week, about what people could and couldn't get it to say. And what it really comes down to is that for some businesses, it may make sense to pay for or use a cheaper service if that means you can scale faster and turn, perhaps, your AI wrapper into a fully fledged business with wings that can go vertical.

Ben Lloyd Pearson: 10:10

And our producer Adam, he shared this really awesome article with us, and he does not get enough credit on the show, which changes today. So thanks, Adam. It was actually an interview with the CEO of DeepSeek from a few months ago, so before all this hype took off, and one of the things that really stood out to me is they were talking about how China in modern history has been more of a one-to-ten culture when it comes to technology. They take existing ideas and they 10x that idea, whereas Western society has been more of a leader in the zero-to-one, in originating new ideas. And their CEO did really seem invested in this notion of China becoming more innovative rather than, you know, copying Western counterparts. And with it being open source, it'll be interesting to see what American companies learn from this, because I think Hugging Face is already reproducing their models, trained on Hugging Face data, at this point. So yeah, I definitely think it's going to lead to more competition.

Andrew Zigler: 11:14

Yeah, and sourcing AI is only one part of the puzzle. Another thing that was in the news, going back to legacy companies, traditional enterprises and how they're adapting: Goldman Sachs hired Daniel Marcu from Amazon as their global head of artificial intelligence engineering and science. And why does this matter? What does this mean? A huge, traditional, highly regulated financial company just poached a leading AI leader to implement these practices within their own company, because Goldman Sachs' AI assistant has already rolled out to 10,000 of its knowledge workers across the organization. That's a huge rollout at a big scale. You know, I mentioned earlier that all industries right now are experimenting with AI, and that means you get some really big players in this space, outside of maybe the traditional tech world, right? And given that they're going to roll this out to more of their knowledge workers and have it in all of their hands by the end of the year, they need somebody with that kind of expertise. And I think there's a lot to learn from a traditional enterprise making a move like that.

Ben Lloyd Pearson: 12:15

This scale, you know, the scale of AI adoption that we're seeing in this story specifically, I think this is where things fundamentally start to change in how businesses operate. We have even these traditional companies making strategic hires to align themselves with all of this rapid AI development. With new models every week, you really need to have somebody there who can be an expert at how to implement that for your organization. And I think every company should have people who are leading internal initiatives like this.

Andrew Zigler: 12:48

I agree. It's about identifying those opportunities now that will have a big impact tomorrow. And speaking of big impact, last week we had Luca Rossi of Refactoring and Yishai Beeri of LinearB, and they sat down to discuss how engineering teams can go beyond academic frameworks to adopt impact-based practices for their engineering organizations. And that survey was made possible by our listeners, the Dev Interrupted community, as well as the Refactoring community, for responding to our survey. So thank you again for lending us your voice and your perspective. It led to some really great insights that I don't want you to miss. So after the break, we're going to bring Luca on the show to tap into some of those insights. And what we're going to do is go hunting for flywheels. We're going to find those things that you can take back to your engineering team to start building momentum for tomorrow.

Ben Lloyd Pearson: 13:42

Yeah, I was really excited to record this interview, so make sure you stick around. Last week, we launched an audience survey to gather feedback on Dev Interrupted, to make the show as great as we can for you, our listeners. So far, we've received overwhelming feedback that we should pivot hard to AI content. At least three people have told us this. I'm almost completely convinced that this has to become an AI podcast at this point. And I've also heard that it basically guarantees you endless VC investments. So we'll see. If you don't want us to become an AI podcast, or maybe if you do, now is your last chance to give us that feedback, because the survey will be closing very soon. Andrew, what else do you have to tell them about this survey?

Andrew Zigler: 14:26

Well, you know, Ben is mostly joking, but we are very serious that we want your feedback. We want to hear from you. And I want to take this moment to highlight you, the listener, for joining us on this journey. Your opinions on topics like developer productivity and experience really matter to us. And so, joined by you, we're rethinking every week what it means to build engineering teams, tackle software challenges, and lead in a world that never stops shipping. And so if you're ready to roll up your sleeves and join our conversation, the stakes are high, but the problems are fascinating, and we'd love to include you. So in that survey there's a way for you to raise your hand, and it only takes a minute to complete. So please check it out in the show notes, and thanks in advance for lending us your time. Welcome back to Dev Interrupted. I'm your host, Andrew Zigler, and today we have a special episode in store for you. I'm joined by Luca Rossi, the mind behind Refactoring and a returning guest on the show, as well as my co-host Ben Lloyd Pearson. We're diving into the traits and practices that make engineering teams successful, based on a survey made possible by you, the Dev Interrupted and Refactoring community members. Luca, welcome back. How have you been?

Luca Rossi: 15:37

Thank you. Thank you so much for having me. I'm great. I'm so happy to discuss this great work that we did and excited.

Andrew Zigler: 15:44

We're excited too. So, your latest report, it uncovers some surprising insights, at least they were surprising to me when I went through, about engineering teams, from how shipping code impacts happiness to the hidden flywheels in team dynamics that are hiding right in front of us. So let's dig in.

Ben Lloyd Pearson: 16:03

Yeah. And I wanted to start with the thing that, really, since I first read this report, I've been thinking about nonstop, and that's these three successful traits that you defined for engineering teams and why they matter so much. In this report, there were three key items that we found were consistent among successful engineering teams. First, they're well regarded by non-technical leadership. They're happy with their development practices. And they're able to ship code quickly. So Luca, let's just dive in with those three things to start with. Why are these things so tightly correlated with engineering success?

Luca Rossi: 16:42

That's such a good question, because when we started diving into the data from the survey, we tried to do that without bringing in any kind of bias of our own, because of course we all have our own opinions about what makes engineering teams successful, you know, when it comes to practices or the types of outcomes. We really tried to go into the data with eyes wide open, and we found that these traits were, by a wide margin, the most correlated with all kinds of good practices you can think of. Which means: engineering being well regarded by non-technical stakeholders, engineering being happy about their practices, projects shipping on time, all these things. You can trace a gradient: the more these things are good and well considered by engineers, the better the engineering team performs on things like having enough focus time, spending time as it was planned, engineers not needing to wait for others. They're happy with everything they're doing. So I would love to be able to say these are the successful traits because of this and that, but I can only observe the data and say this is apparently what matters the most, or what is most correlated with success.

Ben Lloyd Pearson: 17:55

Yeah. I mean, we hear constantly that developer experience is a critical component of being productive: having a good experience at work and being engaged with the things that you do. And I do think it's important to note that these are correlations. So shipping quickly doesn't necessarily mean that everyone's going to be happier about things, but a team that is able to ship quickly and is successful over the long term is probably more likely to be well regarded within their entire organization. So that brings me to my next question: what are some practices or cultural elements that you think help teams develop these traits?

Luca Rossi: 18:39

Yeah, this is again another great question, because as you briefly said, we cannot know for sure which direction the correlation goes, right? So I think that in some cases, engineering being well regarded among non-technical stakeholders, for example, is downstream of the fact that the engineering team has proven they are worthy of being well regarded because of good practices like the ones you mention. Other times, it is maybe also a top-down element that comes from technically savvy leadership, you know, technical founders, that creates a culture where engineering is well regarded. So I think there might be different directions that depend on the team. But what we are seeing, when it comes to practices that lead to engineering being well regarded, is many of the things that you might expect: projects actually shipping on time, people not needing to wait for others, engineering investment. So speed, for example, is very important, but it seems that predictability, about outcomes and about where the time goes and where the time is spent, plays an even bigger role when it comes to trust.

Ben Lloyd Pearson: 19:56

I've been thinking about that first point a lot, being well regarded by non-technical leadership. If you're not well regarded as an engineering organization, you should dig in with those other organizations into why that's the case, you know? Like, marketing is going to launch a new campaign around features that we're going to release, and sales is promising these features to people they're speaking to in conversations. And if your engineering team isn't predictably meeting those expectations, then you're probably not going to be well regarded. So if you have that conversation and you find out that sales is frustrated for this reason or whatever, I feel like that's such a great place for an engineering leader in particular to start the conversation about how we make the justification for our engineering organization as a whole to be more effective.

Luca Rossi: 20:46

Yes, I completely agree. And I think that this is a conversation that is more and more relevant these days, at this time in history, where you see companies getting leaner and engineering teams getting more productive, with better tooling than ever, with AI. I think there was a time when engineering value was kind of taken for granted and teams were ever expanding, in big tech and in various departments, but now there is tighter control over what kind of budget goes into engineering teams. And I think this is in many cases even a healthy thing, because it makes leaders have to come to the table with a good understanding of the business value of the things that they're doing. And I think it makes engineering organizations stronger, because they have to go outside of their technical castle and be able to speak business, which is a good thing, if you ask me.

Ben Lloyd Pearson: 21:44

Yeah, yeah. And a lot of organizations aren't experienced or used to doing that, you know? It's a level of maturity that I think is still very fresh to a lot of people on the engineering side of the equation.

Andrew Zigler: 21:56

You described the data as a gradient, which is something that really stuck with me, about how all of these things trend together. And it makes sense, because if the team is well regarded by non-technical leadership, it's probably because they're shipping quickly. And if they're shipping quickly, it's probably because they're happy with their practices. So they all feed into each other, but it creates a unique chicken-and-egg scenario, a hard problem to solve for the engineering team that doesn't have all of those in place already. Because if they're not happy with their practices, then they're not able to ship quickly, so they're not able to become well regarded. And if they're not able to become well regarded, they can't improve their practices so that they can ship quickly. So it's an empowerment issue. How do you think leaders can get that moving if their team feels stuck proving themselves through these chains of traits that we're talking about?

Luca Rossi: 22:50

Yeah, that is such a tough question to answer. First of all, I want to say that what you just said is completely confirmed by the data. What you see is that successful teams are not successful at maybe just one big trait, you know, they're successful at everything. So good teams do everything well. It's like a flywheel, as you mentioned: success brings more success, and of course makes people happier about engineering, and that gives engineering teams more leeway and space to have more impact, and that creates, you know, more trust. And so this is the positive flywheel, which also goes in the other direction: things don't go well, trust decreases, and you get less scope, less impact, and so on. So I think, when it comes to what to focus on, there are a lot of things that you can work on and a lot of tools to measure pretty much everything these days, but everything takes work. I mean, when you take one part of the development process and you want to improve that, it's just a lot of work, whatever you focus on, because you have to measure what works and what does not work, you have to embed that into, I don't know, team processes, and see if things improve over time, and design initiatives to make the thing get better. So I think the safest way is just to focus on one thing at a time, rally your team on that and create momentum on that thing, and then go really one by one, knowing that many things are related to each other, so they don't really work in isolation, but you can't fool yourself by thinking that you can just address all the problems of your team at once.

Andrew Zigler: 24:32

That's right. Our co-host Dan, he says this a lot: you should just find one thing and put an automation or a fix in place to make it better. Even if it's not perfect, just making that one small change is really important to getting started. And when it comes to a flywheel, one small change is how you start pushing it the other direction, turning it from a negative flywheel into a positive one.

Ben Lloyd Pearson: 24:53

Yeah. And I think visibility is such a key aspect of this too. Because I think back to any time that I've tried to unpack an overly complicated process, or something that just isn't working as efficiently as it should be: mapping everything out so that when one of those inefficiencies appears, you have a point of discussion. It's like, here's a pain point we're experiencing today, and here's the data that shows us we're experiencing that pain. What do we do to minimize that in the future? And, you know, that helps you pick out, one by one, the things that are slowing your team down. And speaking of slowing teams down, one thing that was found in this report is that the frequency of shipping code impacts team happiness and performance. The report shows that shipping code multiple times a day makes teams dramatically happier and more effective. So why do you think that shipping frequency is such a game changer for morale?

Luca Rossi: 25:51

Well, I think everyone can relate from our own experience, as software engineers, or for those of us who are managers or CTOs, from the time when we were engineers. I think the best times, in terms of personal satisfaction and feeling impact, were when we were able to do things really fast and push new features and code nonstop. Maybe you can think about the experience you have with your side projects, right? Where everything feels fast, you ship in minutes, and you can make incredible progress by yourself. That's the exact feeling that some of the best teams are able to replicate for the full team. And I think, to answer your question, what makes shipping fast really important is this feedback loop. When developers are able to create such a tight feedback loop between the thing that they do and whether it works or doesn't work, whether it goes well, whether it's adopted by users or not, it makes work just more effective. Otherwise, when things go slowly and they have to context switch to other tasks before seeing their work being released, everything gets slower, because if you have to fix something, you have to context switch again, or you have to wait for a long code review and you context switch again. The less you have to switch to other tasks, and the more you can stay in your lane doing your thing, the better, and that's only enabled by a fast workflow that allows you to ship things very, very frequently.

Ben Lloyd Pearson: 27:21

We live in a world of immediate gratification. Think about the impact: if you write code today and it takes a week or two weeks for it to get merged into production, the gratification you feel from that is going to be so much diminished versus something that just gets shipped within an hour or so, you know?

Luca Rossi: 27:42

Totally. And think about this: one of the kind of scary things that came out of the survey is that like 35 percent of teams take days to release to production. It's a big number, both the percentage and the amount of time it takes to ship to production. And when this happens, it's usually because maybe you have a staging environment where changes get batched, so when things get released, a lot of stuff gets released together. What you worked on gets mixed in with other things, and you can't really know how it's going. So everything gets kind of worse. So I think any work that we do to make this feedback loop better and faster and leaner, the better.

Ben Lloyd Pearson: 28:28

Do you have any advice or tips for organizations that do want to get faster at pushing code through to production?

Luca Rossi: 28:37

Of course. Pushing code to production is like a pipeline made of many steps. Some of these are technical steps, like build times and deploy times, and others are instead manual steps, like reviewing the code in the PR or doing manual QA. And it's way easier to work on the manual parts of the process and try to reduce them. They're also usually the most impactful: the things that take hours are rarely the technical stuff. Some engineers are more comfortable spending weeks trying to shave minutes off the CI/CD builds, but then you see code sitting idle for like 15 hours waiting for a review, and that just doesn't make sense. Focusing on the manual parts of the process is usually the best bet, and it's where you can shave off the most time.
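
As a rough sketch of how a team might put numbers on that idle time, here's a minimal example, assuming you can pull PR timestamps from your Git provider or a metrics tool; the records below are made up for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical PR records: when each pull request was opened, when the first
# review landed, and when it was merged.
prs = [
    {"opened": datetime(2025, 1, 6, 9, 0), "first_review": datetime(2025, 1, 7, 1, 0), "merged": datetime(2025, 1, 7, 3, 0)},
    {"opened": datetime(2025, 1, 6, 14, 0), "first_review": datetime(2025, 1, 6, 15, 30), "merged": datetime(2025, 1, 6, 16, 0)},
]

def hours(delta: timedelta) -> float:
    return delta.total_seconds() / 3600

for pr in prs:
    wait = hours(pr["first_review"] - pr["opened"])            # code sitting idle, waiting on a human
    review_to_merge = hours(pr["merged"] - pr["first_review"])  # review-to-merge time
    print(f"waited {wait:.1f}h for first review, {review_to_merge:.1f}h from first review to merge")
```

Numbers like these tend to make the "code waited 15 hours for a reviewer" conversation much easier to have than anecdotes do.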

Ben Lloyd Pearson: 29:29

Yeah. And from our experience, a lot of the issue tends to be around code reviews. We've seen this quite frequently: developers are generally pretty good at writing code, and CI/CD has done a pretty good job of making it easy to get code deployed to production. It's all the human-to-human stuff that happens in the middle, and sometimes the human-to-machine stuff that happens in the middle, that I think a lot of organizations get really bogged down in today.

Luca Rossi: 29:55

Yeah.

Andrew Zigler: 29:56

It's like the scary problem to solve too, right? The human problem. Confronting each other, or otherwise figuring out how to fix broken communication or a broken process, is way harder than, like you said, shaving a few minutes off your CI/CD pipeline or whatnot.

Luca Rossi: 30:11

Yeah, I agree. I mean, when you look at the practical implications of solving that problem, solving the human parts of the problem can actually be easier. I think many of us technical people are just more comfortable working on systems, which are a lot easier: totally under our control, and they don't require hard conversations. I mean, I totally get it. I'm like that.

Ben Lloyd Pearson: 30:35

Mm-hmm.

Luca Rossi: 30:35

But we have to say that the most gains usually are in the human parts of the process.

Andrew Zigler: 30:43

Right, right. And they're closed systems, right? You go to them, you know how they work, you put your inputs in, you get your outputs out. But when you work with another person, it's a lot harder, because there's context shifting, because you might have to take a break from that conversation. Maybe they're in a meeting right now and you have to wait for them to get back to you. Maybe they're on the other side of the world, right? So those human problem-solving things happen at a different cadence, and they don't always happen on your own time. So that's another barrier, right, to actually solving them.

Ben Lloyd Pearson: 31:12

Yeah. And imagine, not very many developers get training on how to productively critique a coworker's work. It's not just software developers; most people don't really get training on that. And I think it's a particularly difficult task for developers. It makes me think of some research I read recently, where one of the benefits of using generative AI within the code review process was that it always defaults to this professional, friendly, engaging, helpful tone. Even if it's not giving you the best feedback, it makes you feel a lot better, because it's structuring it in a way that makes you feel like it's helping you, you know?

Luca Rossi: 31:54

It's true.

Andrew Zigler: 31:55

I really like how you compared the ideal experience to your side project, because that's something that you work on in your own time, in your own closed system, and all the human problems are you talking to yourself about how to solve things or make them better, so it's a much happier place to be. I think what that drives out is the cohesion that's needed between teams to solve it. They need to work together in a quick way, much like how you would on your hobby project on a weekend. And you also pointed out a really good thing: when you have longer gaps in this review process, or it takes a while to solve these problems, it effectively adds risk to the project, to whether it's going to ship on time, whether it's going to hit the things needed, whether it's going to be secure, right? So many things can go wrong in the cracks between those human interactions. So how can teams address that without overwhelming themselves?

Luca Rossi: 32:48

So I think that, first of all, one of the major reasons why code reviews take too long in some teams is that they're too hard: the human reviewer needs to do too much work. That's because the pull request is too long, it's too big, and so it doesn't get shipped in small commits. Or there are no guidelines for how the code review needs to be done, or maybe there are guidelines, but the human reviewer needs to check for too much stuff. So I think some of the easiest advice to implement is actually not about the human factor but about the systems, and you can do a lot of improvement there. You can focus on creating small pull requests and breaking down work into small batches, so that reviewers can more easily take some time to review the pull request, instead of putting it off until the end of the day because it will take an hour, you know. And another thing is that a lot of the work that doesn't require too much intelligence can be delegated to good tooling: static analysis, AI-powered static analysis, which today is able to catch a lot of stuff, from code smells to security issues to vulnerabilities. I mean, there is a lot that you can check, both in your build and even in your IDE with good plugins, with AI stuff, and that of course raises the bar for the stuff that gets into the pull request. And so human reviewers need to do less work. The way I see it, code review is basically the worst place to catch any issue with code, because it's basically the last step before shipping. The earlier you intercept problems, the more you can shift left, the better. So ideally, if you see that your code review process constantly catches a lot of stuff, that's a bad sign. You should be able to catch more stuff before code review, and ideally code reviews shouldn't catch a lot of things and should mostly be used for knowledge sharing.
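
Here's a minimal sketch of the "catch it before review" idea Luca describes, assuming a hypothetical CI step that fails early when a change is too big to review comfortably, then runs a cheap static check before a human ever looks at the diff. The size threshold and the choice of linter (ruff here) are illustrative, not a prescription.

```python
import subprocess
import sys

MAX_CHANGED_LINES = 400  # illustrative threshold; each team should pick its own

def changed_lines(base_branch: str = "origin/main") -> int:
    """Count lines added plus removed versus the base branch."""
    out = subprocess.run(
        ["git", "diff", "--numstat", base_branch],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        added, removed, _path = line.split("\t", 2)
        if added != "-":  # binary files report "-" for line counts
            total += int(added) + int(removed)
    return total

def main() -> int:
    size = changed_lines()
    if size > MAX_CHANGED_LINES:
        print(f"Change touches {size} lines; consider splitting it so review stays quick.")
        return 1
    # Cheap automated checks run before the PR reaches a human reviewer.
    return subprocess.run(["ruff", "check", "."]).returncode

if __name__ == "__main__":
    sys.exit(main())
```

Wiring something like this into CI keeps the human review focused on design and knowledge sharing rather than on formatting and obvious defects.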

Andrew Zigler: 34:51

That does take work, to solve those problems earlier, but it makes a lot of sense. And it definitely requires a reimagining of how to use code review, and maybe some best practices for teams to apply moving forward.

Ben Lloyd Pearson: 35:03

So you mentioned all these really great tools that are helping developers increase the quality of their code, but I think they're also representative of a lot of the difficulties that developers are facing today, because if these tools are tacked on but there's no clear guidance about how to effectively use them, what the expectations are around them, what to do when they report a failure or whatever, then what that really represents is this idea that code review has just gotten so complicated in many ways that it's actually getting much more challenging for developers to navigate all of the requirements. And that gets back to manual processes: developers shouldn't have to keep all that knowledge in their heads. There should be automated guardrails that push them down the right path.

Luca Rossi: 35:51

100%. And in fact, one of the key areas when you talk and think about developer experience these days is cognitive load: keeping the amount of stuff that developers have to keep in their heads under control, making tools and parts of the development process simpler and more efficient, so they don't require too much thinking. And when it comes to code reviews, it's a very hard task, not just because you have to check code that you have not written, which is a lot of complexity, but then there is the additional complexity of the human relationship. You don't want to make the other person feel bad; you want to write things the right way. So there is so much complexity in doing a good code review that the more you can streamline it, the better, really.

Andrew Zigler: 36:39

That kind of calls out a phenomenon that happens a lot in teams: because that is a skill, you often get one or two really high performers on code reviews who then get crazy bogged down in code reviews, because people turn to them as being the best ones. But really, what that indicates is that there are problems in the process that need to get solved earlier, right?

Luca Rossi: 37:01

Yes. 100%. Yeah.

Andrew Zigler: 37:03

And so far we've talked about the practices and how they can be applied to help you ship code faster and have team practices that make you and your devs happier. But there's another side of this conversation too, which is measuring that over time, measuring the impact, and then rolling that impact up to your non-technical leadership, to the rest of the company, getting them on board with your successes and understanding your failures when they do happen. And so metrics are crucial. Your report highlights that, and how they're often misunderstood or misused by either side of that conversation. It can be kind of a double-edged sword; it can also lead to unintended consequences like mistrust within a team. But metrics should be a tool for understanding, and I think that we all strive for that: not just a target to hit, but a way to really understand what you and your teams are doing. So why do you think it is that teams fall into the trap of misusing metrics, and what can they do to avoid it?

Luca Rossi: 38:00

I think because metrics sometimes feel like low-hanging fruit, they seem easier to use for good than they actually are. On one side, I believe they're crucial to being sure that you're actually getting better. The analogy that I always make is: let's say you're trying to lose weight and you're not weighing yourself. How do you know? So you have to measure things, right? But at the same time, even for things that are easy to measure, sometimes it's not easy to set targets for them and make them healthy. Even when it comes to your weight, you have to take into account your height and your muscles and many things. So you can't just blindly take a number and say, 95, people always weigh this much, and that's my target. I mean, it doesn't work like that. But instead some teams get into this metrics workflow kind of blindly, setting benchmarks that are not always comparable, because it depends: you're a startup, you're a big company, you're a high-growth company, you're into B2B products, B2C, it's all very different. So it's hard. But the act of measuring, and using measurements as a ground for conversations, for feedback loops, for improvements, that's the most important thing, I think.

Andrew Zigler: 39:22

You bring up a really good point here, about how it's not about what the metrics mean for everyone else; it's about what the metrics mean for you and your team, and about having that conversation up front. Because having the metrics in place, that's the first step, and that's relatively easy. A lot of places are drowning in all sorts of data points that they could look at. But it's more about having a common understanding of what those data points mean, and when they should be taken in context or with other numbers. I really like your analogy about weight, and taking into account other things like your height and your muscles, because otherwise you're not getting a holistic view, and in that case weight is giving you an imperfect view of your health. So how can someone introduce these metrics without creating resistance? If we acknowledge that there are different uses for metrics, and that we have to have a common understanding, and that it's not just good or bad, how do you start?

Luca Rossi: 40:14

Yeah. So first of all, I think one of the hard parts of this is that using metrics right is a pervasive practice that involves everyone. I mean, software engineers, managers, and eventually, you know, the leadership gets reports of stuff and so on. So everyone needs to believe that these things are an ally for them, that they create value for them, that they make their work easier or their work more valuable. And so it's really important to present them the right way. If you are, let's say, your team's advocate for a metrics program, it's really important that you present them to your developers or to your leadership in a way that speaks to their needs and their pain points, because that can be very powerful. But you also have to understand what concerns the other parties have, and what they want. So it's important to create a cohesive mood and culture around them, first of all. And then you don't have to boil the ocean, which is, I think, the most common issue that teams run into. There are so many things that you can measure automatically right now: you attach a tool to your CI/CD and you get all kinds of measures. But again, as we said before, you can probably only afford to take one thing at a time, in an intentional way. You can use these KPIs for good conversations, in retrospectives and whatnot, but if you want to improve anything, you have to design projects and initiatives to improve them, you have to measure them over time, and yes, you may set some sub-target, but for how many things can you afford to do that? Maybe it's one thing per quarter, you know, because you also have other things to do. So start small, build momentum, and go step by step, and I think you might be on the right track.

Andrew Zigler: 42:12

So metrics become like a common language for people to understand the shared problem that they're experiencing and how they can drive towards a solution together. But the metrics themselves are not the solutions; you use the metrics to put projects in place to improve that efficiency over time and get those gains. And when that happens, you have communication and collaboration with your non-technical leadership. This sounds a lot like our flywheel from the very beginning, where you're helping build trust with them and what you're doing. And I can't think of a better way to build trust with someone than establishing a common language and a common rapport about what you're both doing. It's a healthy relationship, I think, for leadership to have with engineering, like they have with sales. Something we say all the time is, a VP of sales knows their pipeline, they know their leads coming in, they know what deals are on the table. As an engineering VP, you need to have those same common language points ready to talk about. That's how you build trust within your organization. And so, connecting all of these parts together: building this trust with non-technical leadership then gets teams more buy-in to have the practices they want and ship their code quickly. All of this feeds into each other, and it requires communicating in business terms too. So how can teams secure that buy-in from their non-technical leadership? Let's say they're starting to establish these pieces: they're starting to look at those really tough bottlenecks, like code review, and they're starting to establish a common vernacular on metrics and how they're talking about it. Now it's time to go have that meeting with your leadership team or the VPs. How do you start that, and how can everyone on the team contribute to it? It's a socialization thing. How can engineers do it? How can team leads do it? How can managers do it? What's the best playbook?

Luca Rossi: 44:09

So I'm not sure there is a playbook. I think this should be an open conversation where the various levels of the organization are able to talk to each other and understand each other's concerns and what they care about, so that your measurement system feeds into the things that really matter. I've rarely seen any non-technical leadership oppose measuring the engineering process more, because, as you said, it's usually a pain point that engineering kind of does its own thing, right? But it's also true that you can measure things in a way that doesn't matter to leadership. I mean, they don't care how many comments you do per day, right? And many other things. So I think, when it comes to the highest-level usage of this data, the usage that matters to your CTO, to your chief of product or whatnot, it's really important to understand what these people care about. What do they want engineering to achieve, and how would they like to partner with engineering? Do they care about shipping fast, because maybe you are at the stage of your product where you have to validate many hypotheses, find product-market fit, and go as fast as possible, right? Or do they care about predictability, about upholding your commitments and shipping things on time, because maybe you are more on a schedule where you need longer plans and those plans need to be maintained? It really depends. And I think nobody can just go, on their own, measure things, and then report them to others without them knowing, right? The best programs are always well shared and understood by people across the whole ladder.

Ben Lloyd Pearson: 46:08

You know, I think what we're really getting to is the core of that top-line bullet point we had, where successful engineering teams are well regarded by non-technical stakeholders. And the reality is that what we're really articulating here is that it's a two-way street: engineering leaders need to be able to communicate in business terms, and then the business needs to be able to communicate their expectations back using this common language. And as you mentioned, it often just seems like engineering does its own thing, and the reality is that engineering sometimes does have to do its own thing, because it has to do things like keeping the lights on, reducing tech debt, and improving developer experience. So what do you think is the key for these engineering leaders to secure that buy-in for long-term improvements to the things that only engineering cares about?

Luca Rossi: 47:03

I think that when it comes to these things, visibility and transparency into these issues, for example given by hard data provided by these kinds of metrics, is really helpful. When engineering managers say that the team is drowning in tech debt and KTLO work, it's one thing to say that anecdotally, or based on the last sprint. Another thing is to bring up the data that shows that, for example, 60 percent of the tickets go into maintenance and the team has only 20 percent of its time to work on new features or improvements, right? And then the conversation, of course, doesn't stop there, because you have to advocate for the value of doing technical work and repaying technical debt. You have to continuously answer the question: and now what? And now what? Right. And you can talk about product enablement, how the things you do enable more things on the product side and free up people's time. And when there are good relationships between leaders and people are savvy, it's rare that these things go unheard, but it's important to present them in a way that is believable and that speaks to the business needs. What can we achieve by doing this kind of technical work? Because I don't believe in technical work for its own sake, of course. I don't think you should have a 'technical roadmap,' quote unquote, that is separated from everything else and has its own agenda. You should speak to the value that you create for the business. And sometimes that's harder to do when it's pure, you know, under-the-hood work, but there are always angles that you can leverage, especially if you have the hard data about them.
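
A tiny, hypothetical sketch of "bringing up the data" the way Luca describes: tally tickets by category from an issue-tracker export and report the maintenance share. The labels and counts below are made up for illustration.

```python
from collections import Counter

# Hypothetical ticket labels pulled from an issue tracker export.
tickets = ["maintenance", "feature", "maintenance", "ktlo", "feature",
           "maintenance", "ktlo", "maintenance", "maintenance", "feature"]

counts = Counter(tickets)
total = len(tickets)
maintenance_share = (counts["maintenance"] + counts["ktlo"]) / total
feature_share = counts["feature"] / total

print(f"Maintenance/KTLO: {maintenance_share:.0%} of tickets")  # 70%
print(f"New features:     {feature_share:.0%} of tickets")      # 30%
```

A two-line summary like this, tracked over a few sprints, is usually a much stronger opening for the "we need time for tech debt" conversation than a single anecdote.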

Ben Lloyd Pearson: 48:49

Yeah. I mean, I think if you ask any non-technical stakeholder, what do you want out of engineering, they're just going to give you a very simple answer, and that is new features and improvements. That's all we want, you know? And yeah, you need to be able to predictably deliver that, but you can't just always do new features and enhancements; you also have to be doing the infrastructure things behind the scenes that empower those.

Andrew Zigler: 49:14

Which is the power of that common language. Because then you can express that to your non-technical leadership and help them understand. And maybe they don't need to get drowned in all the details, like how many XYZs per day you're doing on all of those things. That may matter internally, but it doesn't matter to them. All they need to understand is how that impacts your ability to ship features faster, or to address things faster. That's a really great way to start wrapping up our discussion on the report. And I want to say, Luca, this has been a really insightful conversation. You bring so much expertise, and you explain it so eloquently. There are so many nuggets of advice in this conversation that teams can apply moving forward to get that little bit of extra velocity, right? Turn that negative flywheel into a positive one. And the report itself gives leaders both the data and the roadmap. Listeners, I really encourage you to check it out; it'll be available in our show notes. So definitely go and get a copy of the report so you can follow along with what we talked about today. But Luca, where can people go to learn more about you? Can you tell us a little bit about Refactoring?

Luca Rossi: 50:23

Yeah, first of all, thank you for hosting me. It's been a great conversation. So Refactoring is my own newsletter and podcast, and it goes out weekly to more than 100,000 engineers and managers. It's what made this whole work possible, because this report started as a joint survey between the Refactoring community and the LinearB community. You can find the Refactoring newsletter at refactoring.fm. There are more than 300 articles written on engineering and management topics, including the latest report on engineering maturity and how teams are using data to improve, which is what we have been talking about today.

Andrew Zigler: 51:02

We're big fans of Refactoring, and we definitely encourage everyone to check it out.

Ben Lloyd Pearson: 51:06

Thank you so much again for joining us today, Luca. Make sure to subscribe to the Dev Interrupted podcast and newsletter on Substack. Share this episode and check out our Substack newsletter for more insights from today's discussion, including the full report that we discussed here today. That's it for this week's Dev Interrupted. We'll see you next time.