"The most common one that I see is teams are trying to do way too much. Like, way too much. You stuff like, 30 items in a sprint, just in case you can get to the 30th, right? And then you're in the sprint and you finish 5, and you move the other 25 back in there." — Thanos Diacakis

Imagine a world where your engineering team ships features at lightning speed, and they're genuinely happy doing it. Sounds like a fantasy? It doesn't have to be. 

This week, we're diving into the secrets of building high-performing, happy engineering teams with Thanos Diacakis, a fractional CTO and engineering coach with over 25 years of experience spanning companies like Uber, where he optimized software engineering teams and drove technical innovation.

Thanos reveals the common mental models that hold teams back, why 'technical debt' is a myth, and how to break free from the 'feature factory' mindset. Together, he and Andrew explore practical strategies for achieving engineering excellence, from mastering iterations to managing complexity, and discover how to build a culture where velocity and happiness go hand in hand. 

Show Notes

Follow the hosts:

Follow today's guest(s):

Referenced in today's show:

Transcript 

0:06

Welcome to Dev Interrupted. I'm your host, Andrew Zigler. And I'm your host, Ben Lloyd Pearson. In today's news, we're talking about the latest ups and downs in the tech job market, how the persistent effects of layoff anxiety impact our work, and how, right now, the only thing protecting all of our free and open source software, websites, and infrastructure from constant AI scraper DDoS attacks is a proof-of-work proxy with an anime logo. So Ben, what do you want to talk about first?

An anime logo. Well, that caught my attention, but I actually want to talk about that last story first, simply because you and I have both worked on open source in the past, so it holds true for us in particular. So what do we have there?

You hit it spot on: open source is really important to you and me, and it's the backbone of our world, really. And this article from Niccolò Venerandi shows how AI scrapers are aggressively targeting free and open source projects like SourceHut, KDE, GNOME, Fedora: big names that are foundational parts of the infrastructure that powers our modern world. These DDoS attacks coming from AI scrapers, LLM scrapers, are collecting information off their docs pages, sometimes going through their git repos, looking at every single possible file, every git blame and git diff you could look at in an open source project repository, and sometimes doing it multiple times. This effect is causing free and open source software development to slow down, crawl, and be very unresponsive for folks. And it's actually kind of wild to see how quickly it has impacted developers' ability to actually work on those projects.

And we're seeing that a lot of AI companies are ignoring these conventional web standards, which, when you really think about it, aren't even standards. It's more like a gentleman's agreement when you have a robots.txt file. Generally speaking, we've all been honest enough on the internet, if you have a scraper, to respect the contents of that file. But now we have a new type of scraper that's powered by AI, and they're just ignoring those things and doing their best to actively evade detection. All these models need tons of data for training and for all the context they pull in, but they really don't have any incentive to do it in a responsible way, other than keeping their data source from crashing and dying. The thing that really bugs me about this is that open source is built on this concept of transparency and doing things in the open. And it's that very thing that makes open source so awesome that is now putting these projects in the crosshairs of these AI bots.
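For readers who want to see the idea in code: robots.txt really is just the honor system described above. A minimal sketch of what a well-behaved crawler does, using Python's standard urllib.robotparser (the site URL and user-agent string are made up for illustration):

```python
from urllib.robotparser import RobotFileParser

# A polite crawler fetches the site's robots.txt and asks permission
# before requesting each URL. Nothing enforces this; compliance is the
# "gentleman's agreement" discussed above.
rp = RobotFileParser("https://example.org/robots.txt")
rp.read()  # download and parse the site's crawl rules

url = "https://example.org/docs/some-page"
if rp.can_fetch("ExampleBot/1.0", url):
    print("allowed to fetch:", url)
else:
    print("disallowed by robots.txt:", url)  # an honest bot stops here
```

The scrapers in the article simply skip this check, which is why the server-side defense discussed next became necessary.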
Yeah, you couldn't have put it better. And the thing that also makes open source awesome is the same thing that's ultimately going to protect it. Because out of all of this, a tool has emerged that's helping some of these infrastructures, these websites, protect their files from these kinds of DDoS attacks. Using a new proof-of-work type of captcha, it catches the bots and prevents them from causing all of those inefficiencies on their systems. The tool is called Anubis, and it's kind of cool because it does the same thing as the Egyptian god from the mythology it's named after. It weighs the soul of the incoming HTTP requests, using proof of work to stop AI crawlers, the same way that, in Egyptian mythology, Anubis weighs the heart of the dead to decide their fate. So no gnu or llama or penguin mascot here: Anubis has an anime logo. That's the funniest part about it. Right now you can go to a lot of these FOSS websites, and what pops up is this anime logo that checks that you're human before you can go in.

Strangely enough, this reminds me of the Eternal September. I'm getting that vibe from this situation. For anyone that may not know, back in the mid-nineties, the Usenet community got used to having, every September, all of the college students who got access to Usenet for the first time in their lives flood the airwaves trying to learn how to actually use it. And then you had this moment where AOL gave everyone a Usenet connection when they signed up for their service. So they called it the Eternal September, because you had this endless stream of people trying to learn something new for the first time from this community. And that's kind of what we're seeing here with these bots, right? Instead of the AOL masses, it's the AI masses descending on these websites and constantly pulling the same data sets over and over again. But this is really not going to be sustainable. We really do need to start thinking about how we responsibly give AI access to data. It doesn't make sense to constantly scrape a web source over and over again; if you need that data, you probably need to store it yourself, for example. And this Anubis approach kind of feels like the nuclear option: instead of dealing with the root cause, we're doing what we can to shut it down and then trying to pare back from there.

Exactly. That's kind of the core of their problem: all of the approaches they can take also harm actual users, maintainers, and developers trying to access and use the tools, so there's not an easy way to win.
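To give a feel for the proof-of-work idea, here is a toy sketch of the general scheme (not Anubis's actual code): the server hands the client a challenge, and the client must find a nonce whose hash clears a difficulty target before it gets a page. One page view costs a human's browser a fraction of a second; millions of scraped pages cost a bot farm real compute.

```python
import hashlib
from itertools import count

def solve(challenge: str, difficulty: int) -> int:
    """Find a nonce whose SHA-256(challenge + nonce) starts with
    `difficulty` zero hex digits; expected cost grows ~16x per digit."""
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    """Verification is a single hash, so the server's cost stays tiny."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = solve("demo-challenge", difficulty=4)   # work happens client-side
assert verify("demo-challenge", nonce, 4)       # cheap server-side check
```

The asymmetry is the whole trick: solving is expensive, verifying is one hash.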
So let's move on. I want to talk about these programmer jobs vanishing, because this has been a little controversial when I've been talking about it out on social. So what can you tell us about that story, Andrew?

Yeah. So this story talked about how more than a quarter of computer programming jobs are simply vanishing. "Programming is dying" is how it framed it, while software development is booming. So this is very divisive and polarizing, I think, even for some people. But really, what it calls attention to for me is the meaning people attach to those two words and what those people do at their jobs. It talks about how US programming jobs are down 27.5% over the last two years, the lowest since the 1980s, which seems so contradictory to everything that we see and hear constantly with the development of new technology, right?

We're playing semantics here a little bit, because this is the US federal government's definition of a programmer versus a software developer. Adam, our producer, told me I have to bring up all the LinkedIn takes I've gotten from this, along the lines of "well, I'm a programmer, but some people would call me a software developer." And it's like, yeah, I understand that these are titles, and they have different meanings depending on who you are. But the US government is specifically looking at people whose job is to take existing specifications and implement some sort of low-level routine for that specification. So they don't make a lot of strategic decisions themselves. It's almost like a translator, somebody who just takes a text from one language to another. This really, to me, sounds like the type of work that is perfectly ripe for AI to take over. So it doesn't really surprise me that this is happening, and I don't know a lot of people today that I would consider a programmer according to this definition. I certainly have worked with those people in the past.

I think what we're really seeing is that AI is not killing a profession; the profession has evolved from programming to software development, for the most part. What's really happening is the erosion of the moat that coding skills created. Being able to just write code isn't enough anymore, because AI is also able to write code. It's more about understanding, making higher-order decisions, relating work to the end-user impact, things like that.

Absolutely. And if you've been listening to Dev Interrupted, you know that Ben and I talk about this a lot on the show: the new skills that teams need to develop, that individuals have to build, in order to be the software developer or programmer or coder or designer of the future. You put it well as a moat: that's what coding skills gave a lot of people, and that moat is going away. It's impacting large companies, and it's impacting individuals and what they do at work too. But that also opens up so many more opportunities: to build new skills, to move your expertise into new areas, and to use the augmentation of things you used to spend a lot of time doing to spend more time on the impactful work. And I think that's the real evolution we're all living through.

Yeah. And you know, things don't look super great for software developers right now, just as a job market; our final story will get into some of the difficulties with that too. But the chart we're looking at in this article does show very steep long-term growth. So there's a bit of a pullback happening right now, but the long-term trend has been very positive for the industry as a whole. Really, what it comes down to is that routine coding is being automated. That's just the reality we live in, and it's the developers who have that system-design thinking and the ability to integrate AI capabilities who are still going to be thriving in the market.

What you just said, Ben, reminded me so much of what our recent guest, Dr. Ashoori, said on the show.
She talked about how the roles that will have growth in the future, the roles that have opportunities, are the decision managers. The action doers are evolving into the decision managers, and that's the same thing we're seeing in this article.

Yeah. Thanks for giving me a chance to say the thing I've been saying over and over again: everyone's a manager now. You need to learn how to manage your AI.

Yeah, precisely. And if you've listened to this and you have some thoughts too, definitely come find us on LinkedIn. We're talking about this stuff constantly, not just here on the podcast, and we'd love to hear what you think as well.

Yeah, absolutely. So let's get into our last story, about layoff anxiety. This article addresses how layoff anxiety is pushing employee confidence to its lowest point in nearly a decade. It talks about how one in three Americans have layoff anxiety, and of them, remote workers were twice as likely to feel that anxiety compared to people in the office. And among everybody surveyed, Gen Z were the most anxious about layoffs. So this really indicates where the anxiety is in the market and why people feel those insecurities. Ben, what did you think about this article?

As a remote worker, I did previously make a decision to live in a city where I knew, if remote work dried up, I would still be able to get local opportunities. I've since been fortunate enough that I feel comfortable enough with remote work that I don't think that's the reality anymore. But these January, February layoff waves in the tech sector have sort of become a tradition at this point. Granted, this article is about the broader industry, but I think there are still some parallels we can bring into software development, because based on what I'm hearing in my network, from engineering teams, this year's wave did feel like it hit a little bit harder than it has in recent years. Practically every engineering leader I've spoken to has been focused on efficiency. They're not really thinking about growth; they're thinking about cost cutting, how to do more with the same or less, or the same with less. Today we're in a period of relative stability in terms of the economy, when you look at interest rates. I'm not going to talk about the stock market; that's a different thing. But interest rates have been fairly stable for a while, and we're still in the wake of the zero-interest-rate, free-venture-capital-money era. So I feel like there's still a ways to go: things aren't really getting worse in the tech sector, but we still haven't fully recovered.

Another thing that was interesting to me was how the workers surveyed discussed how they would change or evolve their role in order to stay at their job or find more security in it. In fact, 46% of the people asked were willing to take on more responsibilities, and 69% even said they would prioritize that security over their long-term career growth.
So definitely a lot of people are feeling the stressors in the economy in terms of their employment, especially with recent layoff waves, as you've said, hitting every year. But really, my takeaway from this article goes back to what we talked about a moment ago, about evolving your skills, because I think that's one strong way to address your layoff anxiety. Of course, layoffs, when they happen, are ultimately outside of your control. But what is in your control is your ability to build skills and be indispensable no matter where you end up going.

Yeah, especially now that we're in the age of AI, it's never been easier to build new skills like that. If you're in a position where you're struggling to find work, or you don't have the position you want right now, that's my recommendation: you have more tools available to you today than we've ever had in the history of humanity. So take advantage of them. So Andrew, tell me about our guest this week.

Oh yes, I'm so excited for our guest, Ben. After the break, we're bringing Thanos Diacakis onto the pod. Thanos is a technology leader, a fractional CTO, and a total wizard to me. He has over 25 years of experience optimizing software engineering teams. At Uber, he led the technical integration of the Jump acquisition, scaling 45,000 vehicles to 2 million monthly trips and $150 million in revenue. But my biggest takeaway is: could you imagine being at Uber as the rideshare category exploded? Sounds awfully familiar to the times we're in now. So after the break, we're diving into the strategies and the mental models that helped Thanos's teams go from zero to one, and then from one to 100.

If you've ever struggled to explain developer experience to non-technical leadership, this workshop is for you. Join LinearB to learn how to translate DevX metrics like developer satisfaction and AI performance into clear business outcomes. We will give you proven strategies to align engineering priorities with what execs care about most: faster delivery, reduced cost, and ROI on AI investments. Plus, you'll get early access to our CTO board deck template to make your next leadership meeting effortless. Head to the show notes to register.

I'm your host, Andrew Zigler, Developer Advocate at LinearB, and today we're asking a timeless question: how can engineers ship both fast and happy? Joining us is Thanos Diacakis, a fractional CTO and engineering coach whose background includes technical innovation for Uber at the office of the CTO. Thanos has years of experience building high-performing teams, and you could say he turns chaos into predictable results with just a snap of his fingers. Thanos, thanks for being here.

It's great to be here. Looking forward to the conversation.

Yeah, it'll be a fun one today. One of the things that most excites me about your background is how coaching-oriented your perspective is. I think there will be a lot of learning opportunities along the way. So let's get started and kick things off with mental models, which represent how people frame and think about things over time. Thanos, you've spent a lot of time decoding this for engineering teams. What are the common mental models for how leaders might perceive engineering work?

Right. One of the most common things that I see is that the mental models don't really match.
So the leader has a mental model in their head, and the engineering team works in a certain way. When those two don't match, bad things happen. One of the most common things that I see is that people outside engineering think of engineering as some kind of feature factory: I put engineers into the process, I give dollars to hire engineers, and I get features out the other door. And to some extent that kind of works, but it's not true all the way. People don't understand what engineering does and how engineering works. We have hands, or butts in seats, or however you count engineers, and you think that's a direct correlation to the output.

One of the things I like to start explaining, and make sure leaders outside the technical organization know, is that engineering does other kinds of work besides putting out features. They have to fix bugs: when you put out a feature and it hits the customer, things always change and are always a little bit different, so you have to go fix that delta and clean up some of the bugs. In addition to that, every now and then you have to take a pause and sharpen your tools. You can use any analogy you want. Some people call it tech debt; I don't really like tech debt. I think of it more as investments. You have to invest in your team, whether that's the processes, the tools and technologies you use, cleanups, refactorings. All of these things will help you go faster in the future, and that's valuable work a team has to do. And then you also have to manage risk. These are the four items; they come from something called the flow framework. It's not just feature work; it's all these things that engineering teams have to do. And oftentimes they just don't do them. Sometimes they say they don't do them, but they do them behind the scenes. But a lot of times, because the business is so insistent on features, the engineering team gets trained that features are the only thing that matters, and this work never gets done. And when you have too many bugs, customers complain. If you don't invest in your team, your velocity really slows down. And if you don't manage risk, you get hacked, or you get some catastrophic event because someone failed to apply security patches on time.

So that is one of the most common places where the mental model breaks down. And when leaders realize this, it's fairly straightforward to connect the lines and say: okay, my long-term velocity is going to suffer if I don't invest in these things, so how do we do it? They're often taken by surprise when, with a fairly mature team, you tell them that maybe 30 percent of your time is going to go to features, 30 percent to bugs, and 30 percent to investments. They expect it to be like 90 percent features. If you have a greenfield new startup, you might start at something like 90 percent and move to a different split later; this varies very much depending on the maturity level you're at. But that is one of the most common issues I see across teams, and it usually helps to correct it up front.
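To make that capacity split concrete, here is a trivial sketch that turns a sprint's capacity into per-bucket budgets across the four flow-framework work types. The 30/30/30/10 numbers are just the illustrative mature-team split from the conversation, not a prescription:

```python
# Illustrative split across the four flow-framework work types; a
# greenfield startup might instead run something closer to 90/5/3/2.
CAPACITY_SPLIT = {"features": 0.30, "bugs": 0.30, "investments": 0.30, "risk": 0.10}

def budget(points_per_sprint: int) -> dict:
    """Allocate a sprint's capacity (in story points) across work types."""
    return {kind: round(points_per_sprint * share)
            for kind, share in CAPACITY_SPLIT.items()}

print(budget(40))  # {'features': 12, 'bugs': 12, 'investments': 12, 'risk': 4}
```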
Yeah, and you know, we don't like that phrase technical debt around here either. It's a very core part of what engineers do, and a phrasing I hear people use that I really like is "keeping the lights on." It acknowledges that this is part of the work you do in order to produce your output and maintain it over time. But when you're in that feature-factory mental model with your execs or your non-technical leadership, it hurts you over time. How can technical folks help those non-technical leaders reset that mental model?

I think it's a conversation that you need to have, and you can have it by using real examples. For me, it's perhaps also a bit beyond keeping the lights on, because it's about engineering excellence, right? When you go build a bridge, you do some design up front. You calculate the load bearings and all that before you even start putting anything between the two ends of the river. In software engineering, somehow we tend to skip that sometimes: "yeah, we'll do the testing afterwards," or something like that. So it's about the realization that these are not valid shortcuts. They're not really shortcuts. Once you start incurring technical debt, it doesn't bite you next year or next month; it's usually the next day, or the day after that. And because you've made a decision, you kind of want to stick with the decision: "no, this is the right approach." So it impacts you very quickly. I think you can use these examples to go back to your leadership and say: look, we tried this, and three days later this is what happened, so this is how we should do it better. And these are all the other activities we have to do as part of engineering to make sure we're shipping things the right way. Shortcuts are often really not shortcuts, and as you start pointing that out and reasoning from first principles, you can get there.

Right. It's about connecting the dots between the problems.

Yeah. And it's not so much "hey, I told you so." It's: let me show you the right way of doing this thing, and let me show you how this actually works. And I think you can also present trade-offs to the business that help them. You can say: instead of shipping that feature in a crappy way in two days, it will take us three days, but then it will be repeatable, and you can ship the next one three days later, and the next one three days after that. Now we have a lead time from idea to actual implementation that is fairly quick, unlike what we had when we were building up this technical debt and then paying all the interest: dealing with the outages, people distracted because they have to go fix bugs. If we do it right, we don't have to go back to it much, or a lot less than we did before, so it really helps our long-term velocity.

Right, because it's incorporated into the process you're already doing. It's well understood on both sides that this is part of the work. It's not just new feature, new feature, new feature; it's about maintaining it. And I really like the way you described the bridge: before you put anything over the water and connect the two sides of land, you've got to do a lot of work. Imagine if we built bridges the way we build software. That would be scary, because we would build the bridge first, and then we would do the testing and see what goes wrong with it.
But at the same time, there's a lot of software that gets built that is just as critical to human lives as bridges. So it's definitely a really interesting mentality to apply. Do you think the engineering teams have some mental-model work to do too? Is it maybe more than just the executives who have to change how they think about it?

It's hard to put this in a politically correct way, but I think we as engineers have often failed because we have accepted these requests. If you told whoever's responsible for a bridge project to start sending the traffic through and put the guardrails up afterwards, because in the beginning there's not going to be that much traffic and people are going to be careful, so it's fine, most people would say: no, there's no way I'm doing that, and I would quit before you do that, because people are going to get killed. In software, we're like: yeah, let's push it out, we'll test it afterwards, no problem. And I've been guilty; I've said yes to it sometimes. Now, as I've had more experience and been burnt more, I'm more empowered to say no. And you know what? That's not a responsible way to do software engineering, and I will not do that. Now, it's hard, because you don't always have the luxury of being able to say no. And it's especially hard because I think we've developed a culture where this is somewhat the norm, and that's something I would like to disrupt a little bit, as best I can, by having conversations about it. This is not cool. It is not acceptable to put things out without testing, without documentation. And we're not talking about going to extremes. We're not saying you write a feature for one day and then spend ten days doing other stuff, with perfect documentation, perfect grammar, and the nicest formatting. But there are some basics that have to be done before you can consider something complete.

I agree. So maybe once you've done that work, and you and your engineering teams are on the same page with your non-engineering leaders about how to think about building software and the time and energy people need to invest in it, what's the next step for building that engineering success over time? Is there a way to then build velocity once you have the buy-in that we have to do this work to keep the lights on, or do more?

There are practical things you can do as you move into: okay, now we've changed our thinking a little bit, let's see how we go about it. But I want to start with one more mental model before we jump into the how, which is: what is important here? If you think of a business that writes software, something along these lines is true: we deliver value to our customers through software, and we get paid for that in some way. We ship some new feature that is valuable to whoever is using it, and we make money off of that, whether it's subscriptions or new software versions or however it works. But the general gist is that the benefit is in shipping.

Yep.

The benefit is not in doing work. Doing work doesn't get us anything. The benefit is getting the feature, in usable form, a good feature, into the hands of our customers. And what you see in a lot of places is a pipeline model, where you start off with maybe some kind of user research.
Maybe then some design, maybe product, maybe then it goes to engineering, maybe QA, and sometimes operations deploys the software separately. You have this pipeline where a feature goes in and passes five or six steps before it comes out. Maybe it's not five or six in your org; maybe it's two or three. But by the time it pops out, the first phase has moved on and is doing something else, maybe three features down. And this keeps everyone busy. You have this very pipelined model where everyone has work to do, and if everything were flowing smoothly, everything would be happening in parallel and you could be spitting out several things as they hit the end of the pipeline. The problem is that it never really happens like that in reality. Because what happens is, when you ship something at the end, you find a bug. And the bug now has to go all the way back to the beginning: oh, we didn't think of this, how do we fix this? That creates a gap in the pipeline, since whatever work was being done isn't being done anymore, and things skip and hop around the pipeline. And that doesn't end well. The pipeline ends up breaking. You have lost context, because you may be several weeks on from when you first shipped the feature to when you revise it. And it turns into a big fat mess.

I'm saying this because, as we go over the different phases of how to actually implement this, the point is to focus on the goal: what is important is shipping. You don't want to be locally efficient, where everyone is busy in their little cubicle doing their thing, running in their maze. What you want is to organize your teams in a way that they can actually produce results. That usually means small teams. If you subscribe to Team Topologies, those would be the stream-aligned teams, as they call them: cross-functional, able to do the work inside the team, without many interactions with other teams.

With that said and done, I like to break this into four linear steps. Now, they're kind of linear; you may have to jump around sometimes, especially if you're at a more mature phase, but it's easy to break it down in a linear way. They are: iterations, quality, complexity, and, last, planning and predictability.

So let's break that down, because that's a lot to take in at once.

Right. The first step is iterations. What we want from a software team is to be able to go from an idea to software in customers' hands in a fairly quick period of time. This could be days, this could be a week, in some cases longer, but that's the exception. Not a month, or six months, or a quarter, or anything like that. There's a bunch of tooling you need to have in place to be able to do that, from the moment the engineer is typing some code. If your linters are working correctly, you're going to save yourself a bunch of hassle all the way down at the other end of the pipeline, when things are coming out. CI/CD pipelines, tests, automations, all that good stuff goes into iterations. And you'd be surprised: I work with a lot of teams, and you go in and look at what they have, and it's not that they're stupid, right? They're busy.
So of the 10 or 20 things they should be doing in the iterations category, they might be doing 5 or 10. Sometimes it's lack of knowledge. Sometimes they heard something was a good thing and put it in, but it turns out it wasn't that great a thing to do. Nailing iterations first is a good way to get started.

Once you have iterations down, you start worrying a little more about quality. As part of iterations you're definitely having some tests and some automations and things like that, but quality goes a bit beyond that. Are the customers using it like you thought they were going to use it? Is what you think is running in production what is actually running in production? Are the metrics you're reading correct? That has to do with tying the feedback loop a little further to the right of the process, and making sure everything is working as you think it's working. This is establishing things like on-call: keeping those quick iterations, but also getting pretty good feedback, closing the loop, and being able to iterate back, so you have software that is pretty solid.

Once you've got these nailed down, you're doing pretty well. And where it begins to bite companies is that they build more and more complicated stuff until it collapses under its own weight. We've all been there. I've done this several times: you put something together that seemed like a good idea at the time, but it turns out the customer uses 10 percent of it and you can throw the rest away. Or throwing Kafka in as your queue sounded like a great idea, but we don't really know how to manage this stuff. I was working with one of my clients recently, a three-person startup, and they were advised that a Kubernetes cluster would be the right thing to do. You can imagine how that went. Someone was well-meaning with that idea, but it didn't work.

So it didn't work for them?

Yeah, it did not pan out very well, because no one had Kubernetes experience, and it's really hard to manage for such a small team.

So for step three here, we try to manage complexity. And this is not just architectural complexity, by the way; it could be team complexity, procedural complexity, all these things that build up. We can talk about bottlenecks in a little bit, but generally speaking, if you have bottlenecks or constraints, you tend to put certain rules in place to manage those constraints. If some customer has a thing that happens on Fridays, we don't do releases on Fridays, because we don't want to break that customer's Friday thing. And when that customer goes away and we still have the rule of not doing releases on Fridays, well, that rule needs to go, because now we're losing 20 percent of our releases for a reason that doesn't exist anymore. All these kinds of complexities are what we look at in phase three and start untangling. This is interesting: I said this is linear, but if you're jumping into a mature company that has been writing software for a couple of years, they might have complexity problems you need to jump on first. And this is where it gets fun, because if they have enough iteration capabilities and quality capabilities, you can jump in and fix some complexity problems right up front.
But if they don't, you can't go refactor complexity out without having some tests, because that's going to be problematic. So we go back to the beginning and start from there. But sometimes, if you have enough capabilities, you can go fix your top complexity problems with whatever iteration capabilities you have, and then jump back a little and fix your iterations.

So: we've got iterations, we've got quality down, and we now have a mechanism to manage complexity. We've got a pretty good loop, and now is the time to bring in planning and predictability. A lot of people try to reach for this first. But if you couldn't plan a sprint, and half the things are falling out of the sprint within a day or a week, you're not going to be able to plan six months or a year ahead. Yet most companies seem to spend an inordinate amount of time planning six months out, because we have to, because the business wants to know. And if you go back and reflect on whether our planning was correct, it never is. It's terrible; it wasn't even close to reality. Yet next year, we go and plan again. What is the value of this activity? Now, if you've got your first three phases down pretty solid, I think most companies can get to a point where there's some planning and predictability. You should be able to look a month out. You should be able to look a quarter out. A year, unless you're really good, is maybe stretching it, but you can get there. You just need to get the other pieces in place first. So I like to guide folks through these four phases. Again, it's not always exactly right, but I think it gives people a way to understand and say: okay, I can kind of see the path here.

Yeah, and I think it's really key how you point out that they're kind of linear, but not necessarily linear. They certainly build on each other, and before you can do the later ones, you need to have figured out the earlier ones. But that ordering can be really unique, depending on the resources and even the age of your organization. And you're so right about teams that will plan six months, or a year, or even just six weeks ahead, miss, and then plan again and again. A common pitfall, going back to engineers accepting the world that execs put them in, is that if they get into that mentality of predicting wrong over and over again, it burns through all of their trust with their non-technical leaders. So then later, when you do come to them and say, we really need some unit tests, or we really need to fix our CI/CD pipeline, they don't value or understand that, because your predictions in the past, about the features they wanted to ship, were wrong. So it's about fixing your ability to be accurate, so you can build that trust.

Yeah, that's exactly right. When you slow down, you often have to get to a mode where, because you're going to take longer to do certain things, you're going to get better velocity in the long run. You're going to do fewer things, but hopefully, as you do those fewer things, you can go a little bit faster on each of them.

Yeah, and that reset is really tough. It's not convenient either, which kind of brings me to some more things I want to learn about the companies you're exposed to and what you see them do.
You have a large surface area across all sorts of different-sized teams that have very unique problems when they bring you in. So when you come into these teams and you see something that's good or bad, what is a bad habit in engineering that you love to call out, maybe when you're first on the scene?

Yeah, the most common one that I see is teams trying to do way too much. Like, way too much. You stuff, like, 30 items into a sprint, just in case you can get to the 30th, right? And then you're in the sprint, and you finish 5, and you move the other 25 back in there.

Rolling the boulder. It's the very Sisyphus moment for us all. That's relatable; I push my tasks back like that all the time.

And this happens because you go into the planning meeting with your business stakeholders, and they really want these 30 things done, right? And I understand that. We laugh a little bit, but this is serious, because the business depends on these 30 things getting done. And yet we keep doing this silliness of moving them around. So I like to go in and explain that there is a hidden cost to having all this WIP, work in process. And by the way, some really wise person, Drew, told me this: it's not work in progress, because if it was progressing, we wouldn't be talking about it. It's work in process. It's in the process, but nothing's happening to it. Because we put these 30 items in, and they just sit there. These items have a cost, because we will spend a little time on them, a little time looking at them. Same thing as work in process in a factory: you'll trip over it, it will go rusty, you will misplace it. And at the end of the day, if I put these 30 items in, worked on two, and shipped one, well, guess what? The priorities next week are different. So the one I worked on but didn't finish is not going to get shipped, and that work was thrown away.

I advise a lot of teams to just drop the sprint model. The sprint model was useful 20 years ago, when we were doing six-month releases and doing a one-week or two-week release was earth-shattering. But now I think most teams do better with a Kanban model. We pick things off the pile, work on them, ship them, make sure they're good, then go pick something else off the pile. And that makes the business side uneasy. But when you show that, because you're not distracted, you can pick something off the pile and get it done quickly, that also enables you to be agile, with a small a. For me, if I say agile, it's always with a small a. Because when the business decides they want to do something else, you haven't sunk time into anything else; you can just grab the next item off the pile, whatever they decided the next item is. Now you're really being agile with a small a, now you're really being nimble. So that's one of the first things I like to go look at: can we do more by doing less, to begin with?
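One standard way to put a number on that hidden WIP cost is Little's Law: average lead time equals WIP divided by throughput. A quick sketch with invented numbers:

```python
def avg_lead_time_weeks(wip_items: int, throughput_per_week: float) -> float:
    """Little's Law: average time an item spends in process."""
    return wip_items / throughput_per_week

# Same team, same throughput (5 finished items/week), different WIP:
print(avg_lead_time_weeks(30, 5))  # 30 items in flight -> 6.0 weeks each
print(avg_lead_time_weeks(5, 5))   # 5 items in flight  -> 1.0 week each
```

Nothing about the team changed except how much it started at once, which is the "do more by doing less" point above.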
Right, so they're chomping at the bit. They have that 30-ticket sprint. They're going to get as much of it done as they can. They get five. So the teams are overplanning and underdelivering. If you then pause, hit the reset, scale it back, and maybe just pick up the two or three things, or four things, that are actually going to get done this week: what do you think is a mistake you see people make when they're in that state and then try to speed back up? They're used to doing those 30-ticket sprints; the instinct there is palpable.

The speed-versus-quality trade-off is an interesting one. I think a lot of people confuse this; they think the trade-off is: I either get speed or I get quality. And I don't think that's necessarily true. It could be that if you're going too fast and you don't have the process in place, your quality is going to suffer. But the idea is to move the trade-off curve to where you can move fast and have quality. The most typical thing you see when slowing down for quality is: we're going to have a thorough regression test done by hand at the end of this release, to make sure that what we're putting out is not crap. And you usually find out two things. You're not as good as you think you are at finding these mistakes, so you're probably going to put them in production anyway. And your customer is going to have to sit with that bug for all the time you're doing your thorough QA analysis. If instead you had some better way of doing automated testing that actually finds the bugs better than people do, you can become more nimble again: when you find the bug, you can turn it around without waiting five days for a long QA regression, and then you can ship it again. So the thing I try to help teams with here is to start thinking of these not as a trade-off, but asking: how can you get both of them at the same time? And then start making the changes to your team to get into that space. It's not easy. It is hard. It takes some figuring out, some analysis: where are my bugs? Where are my bugs coming from? Why are we making those bugs? And then jumping back and making the process changes necessary.

And let's talk about that need for speed, that instinct to go fast. At the beginning, I introduced this by saying that when teams are happy, they ship faster. And this is something we see and hear all the time on the show. We recently partnered with Luca Rossi of the Refactoring engineering blog and podcast, and we did a survey with both of our communities, learning about the environments where people worked and how that impacted the perception of engineering leaders by their non-technical stakeholders, as well as how happy the teams were and how fast they could ship features. And what we found is that all three were extremely tightly correlated. They're a flywheel. If you're a team that's well regarded by leadership, you're probably shipping fast, and you're probably happy with your practices. And if all the opposites are true (you're not happy with how you're working, you're not shipping very quickly, you're not well regarded), it becomes very hard to start, because that flywheel is working against you. How would you recommend a team in that position start making those changes?

Slowly. Slowly. There's a lot. Usually a team will have a bottleneck that controls what is going wrong. And oftentimes a team that is struggling will be scrambling, trying to fix too many things at once. So, just to illustrate the point, think of a factory. Software is more complicated than a factory, so we'll make this a bit more complicated in a second.
But if you have a sequence of machines that makes a widget, and the bottleneck is somewhere in the middle, then if you improve to the left of the bottleneck, you're going to jam that machine; it's going to be overloaded. And if you improve to the right of that machine, those stages are going to be starved for work, because the thing before them is the bottleneck. Now, in software, this is a bit more of a web than a linear thing, but it's generally true that there's one thing somewhere that is constraining you. And when you fix that one, the bottleneck moves somewhere else. And when you fix that, it moves somewhere else again. So a team that is unsure where to start should spend some time thinking about where its bottleneck is, what is controlling its speed, and start fixing that. The risk, if you try to do too many things, is that you lose focus, because you're trying to fix too many things at once, and that's hard. But it's also hard to see the results: if I made a change here and a change there, did it work or did it not work? Which of these changes was the one that fixed the problem? So I usually advise thinking of this in terms of bottlenecks, figuring out where your bottleneck is, and fixing one thing at a time.
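A toy model of that factory picture, with invented stage rates, shows why improving anything other than the bottleneck buys you nothing:

```python
def throughput(stage_rates):
    """A pipeline can't flow faster than its slowest stage."""
    return min(stage_rates)

# design, build, review rates in items/day; 'build' is the bottleneck
print(throughput([10, 4, 8]))  # 4: the middle stage sets the pace
print(throughput([20, 4, 8]))  # still 4: faster design just piles up WIP
print(throughput([10, 8, 8]))  # 8: fix the bottleneck, and it moves elsewhere
```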
Also, going back to the work-in-process comment: I fully agree with you on the correlation between all these things and happiness. You can take a fairly straightforward view of this. As an engineer, if I have one thing to do, and I can do it well, and I'm not interrupted by six other things hitting my plate today, I feel content. It hits the users, and I get all my neurons firing happily, because I see the value of my work being deployed to production. Then you contrast this with: I'm working on something, someone interrupts me, I have to jump and work on something else, and now I'm annoyed because I lost my train of thought. Then I ship my feature, but because I lost my context, I shipped a bug. So then someone else comes along pissed off because there's a bug in my thing, and I feel bad because I made a bug, and now the customer's upset, and your morale, everything, goes downhill. You know the old saying: the beatings will continue until morale improves. It's kind of the same thing here: the beatings will continue until velocity improves. Well, it's not going to.

So start slowly. Talk to the stakeholders. Understand where you're each coming from and get aligned, because you're all aligned on the same thing: you all want to ship features for the customers in a successful way. And show all the stakeholders, from first principles, how this works, and have these discussions. I've seen teams that try to hide this: oh, we'll tell the business we're doing all features, but we're going to go fix our technical debt behind the scenes.

Yeah, that's common.

It works a few times, but at some point they're going to start shoving more stuff in. And even when the leader is trying to do this, the team members keep hearing "features, features, features" from the business, so they're all conditioned to work on features. They all think features are the most important thing. And that usually falls apart eventually.

So if you're an engineering leader who's trying to connect all the pieces of this conversation to foster that positive team culture, there are a lot of different places to start. It sounds like, from your perspective, it's about starting with that iteration, and about finding the bottlenecks so you can speed up, but doing so in a sustainable and even way. Are there other things you think they should look for, or even be tracking, on the metrics side?

I think we've covered the way to think about it and the route by which to go through it. What I would stress at this point is that you, as the leader, are kind of stuck as the interface between your team and the outside world. And you have to be careful and diligent in managing both sides of that equation. You have to do all the good stuff we talked about to explain how the engineering team works to the outside world, and then you have to explain to the engineering team what is important and how they will be rewarded, so there aren't any disconnects and you don't end up with a feature-trained team or anything like that. And as with anything in leadership, it's communication, communication, communication. You've got to say it a hundred times, and then you've got to say it all over again, because, what is it? Culture eats strategy for breakfast. If your culture has developed in a certain way, you can keep making any strategy you want and repeat it as many times as you want, but until you actually get people doing it, and that becomes the culture of the place, very few things are going to happen.

And in your organization, ultimately driving toward engineering happiness: do you see a correlation, in the teams that you work with, where when engineers are happy, everything's going well for both sides?

Absolutely. It's like night and day. I think most engineers went into engineering because they enjoy the craft of building things, and they enjoy building things right. No one came out and said, I want to be a software engineer because I want to write crap. They all enjoy the process of creating software. So it's really sad to go to places where you see people burnt out and so, so unhappy. When I first went solo and started the coaching business, I started off with what people want; people want efficiency. So I went out and said, look, we'll go with engineering efficiency, engineering effectiveness. And very quickly I realized: no, there's a very important feeling here, which is that everyone is frustrated. Not just the engineering team; everyone seems to be frustrated. So we have to address that. So now my pitch is that it's effectiveness and happiness, and those go together.

Are there successful habits or practices that you see teams picking up right now that alleviate these stresses for their teams?

There's some stuff happening with tooling. So, for example, all the AI coding tools. They can make my life easier as an engineer, because I can probably get some stuff done faster. But in many ways, some of these tools are patching the symptoms rather than fixing the cause. If I can write some stuff faster, then maybe of the 30 things you put in my sprint, I can get 10 done instead of 5, but the problem is still very much there. I've seen some interesting metrics coming out, and I like metrics.
I very much enjoy metrics and seeing them. And I think the enlightened places that use metrics always point out: this is a debugging tool. It's not the be-all and end-all; metrics change, and that's good. If I'm looking to diagnose something, I want to find out: what are the lead times here? What is happening in this area? So be very much metrics-driven. But when you make goals or reward people based on metrics, those metrics will change. I often say: make metrics that people want to game, because you make a metric to affect behavior. And if the question is, well, what if people game it? Game it! That's what I want. I want this metric to move; that's why I'm making this metric. Now, if people game it in an unhelpful way, then just go change the metric. Just change it. So don't shy away from putting metrics in there.

There's a lot of activity happening right now in tools that put metrics on top of engineering systems, and engineers are a little afraid of them. So I think businesses need to be very careful to say: we have these because we want to help ourselves, and we want to help you. The instant you start saying, we fired Bob because his metrics were low, that's going to be a real problem, because then no one's going to pay attention to the metrics, no one's going to support them. They're not going to trust them; they're going to game them. So I see these tools in general as supporting these processes. But I think the beginning comes from changing your philosophy, changing your culture, changing how you approach things, and then the tools will help you. The tools are not going to do it for you just by being installed: I'll take this nice tool, it's very nice, I'll install it, and now my problems are done.

Precisely; they're tools. You're using them to effect some kind of change or reach an end result. I agree. The best teams using metrics are using them to track something else, to drive toward another goal. It's not metrics for the sake of metrics. And when you put them in place, they show the symptoms, and you can address the symptoms. But oftentimes, in an engineering org, the hardest problems to solve are human problems. And so what the automation can do is free you up to address those human issues.

Yeah, and there are tools you can put in place that keep you from making silly mistakes, and those are fantastic. We've had automatic tests in the CI/CD pipeline forever; these are no-brainer tools that will keep you from making mistakes later. I was working with a team recently that, when I started, didn't have linters working. So we would ship things with the silliest mistakes, mistakes the linter would have caught early on. Since then, CI/CD runs a linter, and if it doesn't pass, you may not merge your PR. It's such a brain-dead thing, but it caught half of the mistakes. Happily, that team has graduated, and now the mistakes get much more sophisticated. It's nice to see that progress.
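That kind of merge gate can be as small as a script whose exit code fails the CI job. A minimal sketch, assuming ruff as the linter (any linter that exits nonzero on findings works the same way):

```python
import subprocess
import sys

def lint_gate() -> int:
    """Run the linter; a nonzero exit code fails CI and blocks the merge."""
    result = subprocess.run(["ruff", "check", "."])  # swap in your linter here
    if result.returncode != 0:
        print("Lint failed: fix the issues above before merging.")
    return result.returncode

if __name__ == "__main__":
    sys.exit(lint_gate())
```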
Yeah, it's a good problem to have, I suppose. You solve the easy problems, and now you've got some bigger ones to solve. But it means you're moving up those steps we talked about earlier; you're moving past that iteration stage.

That's the constant evolution, right? Over the last 25 years that I've been doing this, the levels of abstraction keep moving up and up. I was writing some code for a client the last few days, and I mostly didn't write a line of code. It was: hey chat, do this; hey chat, do that; and it just keeps building it. I can type a sentence, and it gives me a page of code that I can read much faster than I could type it. You have to make some changes and such, but the levels of abstraction move up, so hopefully humans can focus on higher-level things than: was this the right little bit here, and is it one equals sign or two that I should be typing?

Yeah, exactly. And you touch on something really interesting about how you use it: hey chat, what's this? Hey chat, what's that? You get that high-level view, and then maybe you throw some away and you go deeper on others. That's actually a really common way I hear more senior engineering minds think about using the tool: ideate quickly, almost like having a mini sprint planning with yourself, throwing a bunch of things at the wall. That's really different from how newer engineers or ICs, for example, might use the tool, as more of a step-by-step assistant for the things they're producing. So this is an interesting perspective I'd love to get your insight on: are you seeing a certain kind of habit or way that teams adopt a tool like an AI assistant to get their work done right now?

I think this massively depends on the company. If you're in a small company, then one person uses it and goes, oh, this is great, and then everyone else grabs it and starts using it.

And your results may vary.

Yeah, your results may vary. In a bigger company, you might have a much more thorough analysis before you start doing that. This seems to be kind of all over the place. But I want to go back to what you said earlier, because it was an interesting point about junior engineers and how this technology is going to affect them. This may be a little bit of a tangent, but it scares me a little. A lot of what a mid-level or senior engineer can do, and I'm not going to measure that by how many years of experience you've had, but by how battle-scarred you are...

How battle-scarred you are, yeah.

Having the benefit of being battle-scarred, you kind of know how you would code it, and you can guide the AI to take you step by step through how you'd want it structured, the way you want to do it. If you told the AI to build the whole thing, it would probably fail; it wouldn't have the ability to go through that. So I'm wondering how the more junior engineers are going to start getting into the industry, because I don't need a junior engineer to do that thing for me anymore; I can pretty much do it myself. So we may have to start training those folks in slightly different skills: learning for themselves how some of this coding is done, playing at a different level with the tools, and how they join companies and work inside these teams.
That's going to be, I think, quite different for the coming years. Exciting and scary. I agree. And I think teams are learning now that engineers spend a lot of their time, in fact most of their time, not writing code; they're doing other things in the engineering world that aren't code creation. So I completely agree with you. It's about upskilling and understanding what skill sets new engineers on the scene should bring in order to be the best possible contributors. If we don't need them to write that kind of boilerplate code, or take that low-level approach to getting things out in a systemic way, that's great. The AI has freed us up, and it can handle that really formulaically. But now, what's the new impact? Writing code is just one way to have impact as an engineer, but there are so many ways to have an even greater impact, and those are the things we're going to start trending toward, I think. It's funny you say that, because I wrote a post on LinkedIn six months ago about how engineers should probably be spending about 5 percent of their time writing code. It was one of the most polarizing posts I've ever had. Because the main point is, if you're in a company that's building any kind of sophisticated software, there's a lot of collaboration that needs to happen. There are a lot of discussions and meetings and that sort of thing that are productive ways of understanding what you're going to build. And the actual building, well, maybe 5 percent was too low, but it's a small part of it. And if you're the person who wants to just sit in your cube and write code all day, you're probably not going to be successful in one of the outfits where these kinds of interpersonal things come into play. So it's kind of interesting how that changes given some of these tools, and how companies are managing that, right? We all agree that we don't want to be in meetings all day, but are these productive meetings? And if I'm coding all day, am I missing out by not collaborating with people and understanding the other parts of the problem I'm trying to solve? Because at the end of the day, most software engineering work is taking some abstract requirements that someone wrote down or told you and converting them to something very, very concrete. Code is extremely concrete as a specification. Most of the time it's not just a translation; there's a bunch of interpretation involved in doing the work, and that depends on the context we have. And that context isn't just context we hold inside ourselves; it's developed by talking to product managers and designers and engineering managers and other leaders and our peers, and then figuring out how to make that translation. That's right. It's about building that context so you can have a bigger impact. And I think that's a really great takeaway. You know, we've learned so many things in our discussion today, and it's been really fantastic to open your head up, figure out all these different mental models, and see how different teams you've interacted with have used them, to both good ends and bad ones. And I really liked the framework you gave us for building that success over time.
And I think there are so many actionable takeaways in this conversation for our listeners, so I'm really excited to hear what people think. Before we wrap up, where can our audience go to learn more about you and what you do? You can find me on LinkedIn and send me a message there, and I'm happy to chat about all these things; I really like to nerd out about everything engineering related. You can also find information about me on my website, cosmicteacups.com. There's a bunch of information there, including a PDF with all the mental models I try to get people thinking in. So go grab that one if you're interested, and I'm happy to chat about it. Fantastic. We'll make sure these links get into our show notes and our Substack newsletter so you can follow along. And to you, our listener, thank you for joining us this far. If you're joining us on your podcast app, be sure to like and subscribe, share today's episode with folks you think would enjoy it, and check out our Substack if you haven't already; we'll include the links from today's conversation. You can also check out the video of this on LinkedIn as well as on YouTube, where we'll be sharing the video version of this podcast. And I'd really love everyone to check out the video, because behind Thanos, for our entire conversation, has been the Thanos gauntlet. I've been informed by our guest that it is a gauntlet, not a glove, and I highly recommend everyone check it out. So that's it for this week's Dev Interrupted. Thanks for joining us, and see you next time.