It doesn’t matter if you have an innovative technical strategy if you’re not solving problems the business cares about…

This week, host Conor Bronsdon sits down with Rob Zuber, CTO at CircleCI. They delve into the evolving role of engineering leaders, and the importance of building a technical strategy that aligns with overarching business goals.

Throughout the conversation, Rob emphasizes the importance of focusing on customer needs, gathering direct feedback and maintaining strategic flexibility. If you want to understand the balance between technical strategy and business leadership, this episode provides a wealth of knowledge, strategies, and real-world examples.

“For me personally, I even think of technical strategy as secondary. It allows me to focus more of my energy on my first team, which is the executive of the company.

My job is to solve the problems of the business, and if technology is helpful, I will apply technology.”

Episode Highlights: 

  • 01:38 Crafting technical strategy for teams at CircleCI 
  • 07:26 Making the most informed choices about your business
  • 17:47 Using postmortems to fuel a growth mindset 
  • 22:39 Applying hypotheses to be prepared for worst-case scenarios 
  • 27:43 Solving Business Problems > Technical Strategy
  • 33:17 Advice for ICs or Directors on becoming a business leader
  • 39:17 Building trust & organizational design 
  • 44:36 Being a technical founder
  • 55:12 What is CircleCI doing in ML?

Episode Transcript:

(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)

Rob Zuber: 0:00

And the first time I did continuous deployment, which was like 2011, I was terrified. Deployment used to be seven gallons of coffee and four all-nighters in a row to try to get the thing to work. And I was like, we're going to do that every day? That sounds awful, right? But then we tried it and I was like, this is amazing. I will never not do this, right? I will never stop delivering value to customers as quickly as I think about it. Here's my idea, now it's in the hands of customers. Like, there's a little typing involved, but that's going away, apparently.

Ad Read

Is your engineering team focused on efficiency, but struggling with inaccessible or costly DORA metrics? Insights into the health of your engineering team don't have to be complicated or expensive. That's why LinearB is introducing free DORA metrics for all. Say goodbye to spreadsheets, manual tracking, or paying for your DORA metrics. LinearB is giving away a free, comprehensive DORA dashboard packed with essential insights, including all four key DORA metrics tailored to your team's data, industry-standard benchmarks for gauging performance and setting data-driven goals, plus additional leading metrics, including merge frequency and pull request size. Empower your team with the metrics they deserve. Sign up for your free DORA dashboard today at linearb.io/dora, or follow the link in the show notes.

Conor Bronsdon: 1:13

Hey everyone. Welcome back to Dev Interrupted. I'm your host, Conor Bronsdon, and today I am joined by the CTO of CircleCI, Rob Zuber. Rob, welcome to the podcast.

Rob Zuber: 1:21

Glad to be here. Thanks for having me.

Conor Bronsdon: 1:23

My pleasure. It's, uh, really great to have a technical leader like yourself here. You've been a two-time founder, obviously doing massive work at CircleCI, and it's given you this in-depth perspective on when efficiency is paramount, and technical, uh, expertise. You know how businesses grapple with aligning technical direction to business goals, and avoiding wasted time, resources, and missed opportunities when you're leading. And as an expert in crafting technical strategy, I know you're giving a talk today around how to do so effectively, but I'd love to understand the risks of failing to do so, because I know so many companies struggle with this.

Rob Zuber: 2:02

I think it's a great thing to think about, and to be super clear, like, I would not call myself an expert.

Conor Bronsdon: 2:07

I'll do it for you, don't worry.

Rob Zuber: 2:08

Well, I mean, all of that comes from trying many, many different things and learning and, you know, trying and maybe we'll call it failing. Having a hypothesis, seeing what works, what doesn't. All that stems from exactly what you're asking about, which is the risks of being misaligned, the cost, the overhead of maybe building things in a disparate direction. So I would say a couple key things in there. Aligning folks across an organization so that you get more leverage out of the work that you do. Often the same problems are being solved in pockets around organizations. And you don't necessarily want everyone waiting for one person to solve that problem, but as a leader you often have visibility that individuals or maybe teams don't have, where others are facing similar challenges. And so helping lift those up, surface them, identify, hey, these folks have a really great solution. Maybe you can learn from them, take from them. Maybe we can build something that others can use, et cetera. That duplication is a trade-off. There's sort of individual velocity or throughput, right? But over time, that tends to build up as sort of debt across your organization, where you're paying for the maintenance of many different implementations, let's say, of a solution to a similar problem. I think engineering leaders as a whole, once you get to, let's call it a director level, which is sort of the audience here, are really not just engineering leaders, right? They're, they're business leaders. The responsibility is to use technology to drive the goals and the outcomes of the business. And so one of the challenges that we face as engineering leaders is the investment time horizons for technical strategy tend to be longer than the time horizons that we have clarity about the business, right? I mean, particularly look at the last few years. So many things have changed so quickly. Feels like ChatGPT and, like, generative AI showed up overnight, right? I mean, it was like 60 years in the making, but it still feels like November 30th, 2022 was like the day, sort of thing. And so being in a position to adapt is a big part of what you should be investing in, technically. And I think people tend to get focused, engineering leaders, and again, I'm speaking for my own mistakes, too focused on sort of a precise view of where the business is going and what you're trying to achieve, versus what I now refer to as the sort of set of most likely outcomes, right? Like the, the span of those possible futures, if you will, and putting yourself in a good position to adapt. I mean, from a software delivery perspective, we've spent the last 20, 25 years moving from Waterfall to Agile and then CI and CD and, like, progressive delivery. All of these tools are designed around the assumption that we are wrong. That we won't know what to build. That that MRD, PRD, whatever, was not right. And so how can we quickly get feedback? And so if you assume that we're going to be wrong next week, you definitely have to assume that we're wrong about next year and the year after. So how do I build systems and strategies that support the ability to change, not support this very specific view that we had at this point in time?

Conor Bronsdon: 5:16

Yeah, that accounting for variability is a really crucial skill I see a lot of engineering leaders develop as they go through things. If I look more broadly, absolutely, maybe the feature we're building, we're gonna get customer feedback that we need to adapt it. Like, I feel like every time I've ever built something, once it's in the field, whether it's a piece of content or, or, you know, a new app or a new feature, customers go, oh, I use this differently. Yeah. Like, yeah, I like the way you thought about this, but no, no, no, no. Mm-Hmm, the triangle goes in the square hole, 'cause the square hole's the easiest one. Yeah. Yeah. And I kind of hear this as a story from you that we think a lot about, which is the dual mandate that faces engineering leaders. So for a long time, I think engineering leaders have really just thought about efficiency. Like, how can I make my team more efficient? How can we move faster? How can we solve these software delivery problems that you bring up? And now we've really realized, and of course some folks have known this for a while, but we've fully accepted as an industry, that if we fail to be business leaders, we're gonna see cuts hit us in, in ways that are maybe not conducive to long-term success. Mm-Hmm. And so this dual mandate for engineering leaders now is like, yes, operational efficiency. Yes, be great at delivering software, but also deliver the right software. Make it something that's impactful for the business. And to your point, that's a, it's a hard thing to learn when often engineers are promoted for their technical skills. Mm-Hmm. And then have to learn the business side.

Rob Zuber: 6:45

I think that's, I think that's very true. I think it's a, a big growth step as you move up through engineering leadership, you know, from a manager to director to VP sort of thing. But I would say, even as a director, like that level, think about your peer group, your first team, which is an expression that we use a lot. Like, I think directors think about their peer group as other engineering directors maybe versus all of the directors in an organization.

Conor Bronsdon: 7:07

Oh.

Rob Zuber: 7:07

When realistically, how marketing is trying to achieve results out in the market, how your sales team is meeting with customers, what customer success folks are hearing back from customers. All of that stuff is critical. I mean, yes, to the feature that you're building now, but more importantly, to your ability to set yourself up to adapt as things continue to change. And I think when you talk about engineering efficiency, delivery, et cetera, you have to think about why we want that, right? It's so we can be more prepared for change, right? Like, things are constantly changing in the market. The only consistent thing is change, right? So, you wanna set yourself up to be able to change. One option is to make everything perfectly abstract, right? And what engineering team that made everything perfectly abstract can we even name? Because they're not in business anymore. Yeah. Right? So you have to make choices about where you're gonna have that flexibility. And in order to make the most informed choices, you want to be as informed and knowledgeable as possible about the business, about the market, about how you play in that market, not just about the feature that your PM handed to you to go implement, sort of thing. And so maybe as an IC, you, you have to be able to focus on that thing that we're doing right now. Um, but even as you get more senior as an IC, you want to have that visibility out into the market, so you understand where the value and flexibility is coming from. Yeah, totally.

Conor Bronsdon: 8:25

Great staff-plus engineers have context on what customers want. Mm-Hmm. They have context on the business needs. They understand the priorities of the business and they help mentor and guide others in that. I think a lot of folks hear this and they go, oh, great general advice. Like, I think you had a great nugget in there about making sure that you're, you're communicating with your first team, not just within engineering, but also, you know, marketing, CS, product, you know, product marketing. What are other ways that you think engineering leaders should work to find that business context and understand what's happening in the rest of the business or the needs of the customer?

Rob Zuber: 8:56

Talk to customers.

Conor Bronsdon: 8:57

Great advice.

Rob Zuber: 8:57

We're probably in a bit of a unique position because we sell to developers, right? Like, our customers are basically peers. Yeah. To many of those folks. So there's no excuse for us. I mean, every engineering leader I'm talking to here is either a customer or a potential customer, right? So they're thinking about the problems that we're thinking about. So that's, that's probably easier for some, but all of the indirect feedback that you're getting is passing through filters that prevent you, I think, from being able to apply your own creativity to a problem, right? Like, if I hear a customer has said something to a salesperson, who passed it to a sales leader, who passed it to me, sort of thing, then I'm getting a very prescriptive view of what the customer is looking to do. But if I talk to that customer directly, again, particularly in the case where they're more like me than they are like any of the people in that chain in terms of what their interests are, I have a much better chance of understanding the problem they're trying to solve, and maybe even how that problem is going to evolve in the future. That allows me to better think about how do we build the system that enables us to solve all of those problems, as opposed to how do we implement the feature that they asked for right now. And I think that's true across many spaces. I mean, I haven't always worked in developer tools, so I would give that advice to anybody. And even if you can't talk to them directly, like, we, we use Gong, which, I think many people do, right? You have some kind of call recorder, you have access to this feedback. And sitting and watching those calls as an engineering leader is fantastic for me, because I hear the phrasing and sort of nuance in the way people describe things that maybe doesn't necessarily get translated by the time it would get to me some other way.

Conor Bronsdon: 10:29

Yeah, it's interesting you bring up this idea of reducing the game of telephone that occurs so often in organizations. But there's multiple layers of communication within that where things can get adjusted, perspectives come in. Mm-Hmm. I mean, this is true of any sort of communication, and I, I love this advice. You know, whenever I hear amazing engineering leaders give it, uh, they say, look, talk directly to customers. Listen directly to customers. So, you know, Gong is a great resource. Um, finding, finding ways to be the customer, frankly. Like, on this podcast, it's one of the reasons we do it, is we're like, let's get this qualitative signal, understand what's on people's minds. And I think it not only provides this information benefit and gets you directly to the source, but it also is something where your customers end up appreciating it. They're like, yeah, thank you for listening, thank you for having this communication. And you end up with better product results. How do you retain that information, though, as you're building out longer-term technical strategies? Is it more of a set of signposts and guideposts so we make sure we're getting this feedback repeatedly and adjusting? Or what do you think about as far as building out the longer-term strategy?

Rob Zuber: 11:48

So it's less, we're trying to solve this specific problem, and more, there's a set of problems or a type of problems that our customers are consistently running into, and we're not sure how we're going to solve them. So how do we build a system that will make solving them easy, as opposed to how do we solve that specific problem, right? And then, what you're looking for is, okay, we believed that would put us in a good position to solve these problems, and then when we got to that problem, it was hard, right? We were cutting across the grain, if you will, of our system design. Okay, that tells us something new about our system design. Can we incorporate that in a way that, you know, continues to make our system design appropriately flexible? Like, that's the way that I would think about it. Again, if it's abstracted in every possible way, it becomes so generic, it's useless. But I want to be able to make this kind of change rapidly. So how do I ensure, for example, that everything related to that change is sort of consolidated in one area, or easy to make by the folks trying to make it, right? If every project is a 10-team, cross-functional, you know, Gantt chart, then your likelihood of delivering what your customer's looking for gets very, very low.

Conor Bronsdon: 12:54

Yeah. Just hearing you say that, I'm kind of like, over here, I'm getting goosebumps, like, oh no, this is a problem. Right? Because it creates such an issue with prioritization and the ability to pivot when needed. So, I think we hear this advice a lot, of like, okay, get to customers, learn more, have more direction. I think a challenge a lot of engineering leaders face is, okay, how do I then prioritize that feedback and actually apply it to make decisions?

Rob Zuber: 13:19

Well, there's a tough balance in there. So yes, I'm trying to set myself up for sort of future success. You can, you can ratchet that down, or sort of focus in or zoom in maybe a little bit, and this is down to the IC, right, even below the, the engineering leader, um, in that everything you're doing, everything you're building, everything you're implementing, you want to consider what's going to be easy to change or hard to change, right? Simplicity, I think, would be the first thing that I would point to. Again, with a, with a sense that we have a really clear direction and we're obviously gonna be on this path for a while, we tend to, like, orient specifically around that direction. I'm going to use hard-coding as, like, a metaphor more than specifically, but say, you know, everything's going to move through our system in this one path, and changing that path gets hard. Whereas we don't really know. Like, really accepting that ambiguity and saying, okay, again, all the way down to the way that I implement specific parts of the system, how do I make this easy to understand, easy to change? A lot of that comes down to simplicity for me. Yeah. In terms of everything from, like, simple function structure to how services communicate with each other, and then sort of like that change co-location that I talked about before.

Conor Bronsdon: 14:37

I think this concept of applying simplicity is a great one, and it's really useful, I find, for engineering leaders to have, like, a clear example in their mind. Do you have an example from your career, whether at CircleCI or previously, that you could describe, about how you've applied this concept to drive that success?

Rob Zuber: 14:52

Is it easier if I talk about where it hasn't worked?

Conor Bronsdon: 14:55

I mean, that's, that's a great option too.

Rob Zuber: 14:56

Yeah. So I mean, the very sort of long story of my arc at CircleCI has been, in order to get to market in the early days of CircleCI, we leveraged our relationship with GitHub, right? Which was, you've already got an organization, it's got users, you've got a relationship between those users and the projects, et cetera. And so we were able to take advantage of a lot of things to get ourselves into the market. But what we unfortunately also did was sort of include that understanding in many parts of our system, instead of pushing it all to the boundary and then making the internals of our system, I guess, simpler and less coupled to, to something else, right? So loosely coupled systems is something you hear a lot when you talk about simplicity, right? Yes. Over time we've changed that. I mean, there are many providers of Git, there are many people building things today, like prompts for AI and ML, that have nothing to do with Git repos. And so we now are effectively disconnected from that, but we had to take a lot of pieces and push them out to the boundary and define our own internal model. And if we had thought about that future, which is not really that surprising now that we're in it, we would have been in a much better place to make some of those changes faster. So, we had to go back and do the work to basically say, okay, this is not the full set of ways that people are going to use our system, and therefore we have to abstract ourselves away from that and push that, again, to the boundary. And I think, even on that first implementation: if you are talking to an external system, make all of your understanding of that external system isolated to a single spot, so that changes in the external system don't impact you, right? And so that you can connect into a different one, or, like, change the way that your internal systems work without worrying about any of the understanding of that external system. And your ability to make change, right? Or your ability to respond to change. Like, when someone else is changing their API, one of the third parties that you depend on, you want that very clearly isolated in one spot. I mean, they will give you notice and all those other things, but, yeah, exactly, okay, sometimes. But you wanna make sure that you have a very easy path to deal with that. Yeah. And it's not this scramble where 17 teams are trying to fix things right now because one external API changed, right? It should be really easy and isolated.

Conor Bronsdon: 17:21

To pick an example outta a hat here, like, look at what happened with Reddit earlier this year, where we've seen massive protests against the changes to the API. Mm-Hmm. Part of that's that so many of the systems people were relying on were directly relying on Reddit. They weren't using other platforms. But it's so easy for what you build to be fantastic and work incredibly well until one thing changes with this key system that you're integrating with, and it can completely destroy the infrastructure of what you're trying to do. So, yeah, I think this is a, a smart approach to ensuring that you have reliability and control of the technical approach you have. However, I'm also, from what you're saying, hearing this thing that from the start, maybe that wasn't a priority. You had to just get into market. What did you do to take this retrospective and almost, like, postmortem approach to decide how you needed to adapt?

Rob Zuber: 18:10

In that particular example, there were many processes. It's been a long ride, but there was effectively an early realization that, you know, we wanted to talk to other systems. And we put some initial implementations in place that were more abstract, but still kind of in multiple parts of our infrastructure or code base. And ultimately we looked at that and said, okay, this is actually getting more expensive, not less expensive, to make changes. So how do we push all this stuff, I keep calling it pushing it to the boundary, and in DDD, if you want real advice, just learn about DDD, it would be called an anti-corruption layer. And so how do we take this and push it out to a boundary where we then translate it effectively into our own representation of how we want all these things to work? I think from a, a retro perspective, like, some of this was obvious in terms of how the market was shifting, and then also in terms of our own ability to move. I think there was some natural progression, but then along the way, if we're going to be naturally progressing, this is kind of where it comes back to technical strategy. Like, if there's a natural forcing function, let's say, to change the way this works, let's actually stop and have more people, just for a minute, think about what we think this needs to look like, versus solving the next day's problem and the next day's problem. This is where that time horizon sort of situation comes up, which is, yes, we could add the next thing right now, but what do we think the next 5 or 10, or does this turn into 300, look like? And in that world, what would we need to build? Okay, we need to build this. Great. Let's start. And now I need to back that down to, like, what are the first steps? Like, how do I take useful steps? And one thing that I think about, you know, kind of coming all the way back to tech strategy, is, do I have something in one of these projects that is an easy off-ramp? Like, if we actually only end up being able to invest two weeks or four weeks or two months or whatever it might be, have we made real progress? Or were we still, you know, drawing designs and talking about new systems, so that now we've put a little bit of a new system in place but we're still using the old system, and we're actually net worse off? And so, you have to then draw on all of your skills from an agile, incremental, however you want to think about it, perspective to evolve towards that outcome, and not just say, oh yeah, in the future we're going to be here, and that's gonna be awesome, and then try to build the future. Yeah. Right. There was actually a talk yesterday at Lead Dev about a multi-year kind of rewrite of an entire system, and kudos to them for getting through it, but that is an extremely, extremely dangerous place to be. And so looking at how can I do these things incrementally? Like, of course we think about that with product, because we assume we're going to be wrong, but, like, technically we're going to be wrong also. There was another talk actually yesterday, later in the day, about a project that just totally went off the rails and they had to start over, sort of thing, because they were wrong about some of the decisions they made. Which is what's going to happen. You know, what is the thing that we can do to get better information, right? If we're unsure about this, let's not make a guess and charge ahead. I mean, unless it's like both of these options are totally fine. But rather, how do we get started and learn something new?
What's the smallest thing we could do to get better information so that we can make a better decision if it's something that's going to take a big investment?
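
For readers who want to see what that anti-corruption layer idea looks like in practice, here's a minimal sketch in Python. The names and payload shapes are illustrative, not CircleCI's actual code; the point is that knowledge of the external provider lives in one adapter at the boundary, and the rest of the system only ever sees your own internal model.

```python
# A minimal anti-corruption layer sketch. All names here are hypothetical:
# the webhook payload is a stand-in for whatever a provider like GitHub sends.
from dataclasses import dataclass


@dataclass
class Project:
    """Internal model: the only shape the rest of the system reasons about."""
    id: str
    display_name: str
    default_branch: str


class GitHubAdapter:
    """Boundary translation for one provider. Supporting GitLab, Bitbucket,
    or a non-Git source later means adding another adapter, not touching
    the core."""

    def to_project(self, payload: dict) -> Project:
        repo = payload["repository"]
        return Project(
            id=f"gh/{repo['full_name']}",
            display_name=repo["name"],
            default_branch=repo.get("default_branch", "main"),
        )


def enqueue_build(project: Project) -> str:
    # Core logic depends only on Project, never on the provider's payload.
    return f"queued build for {project.display_name} on {project.default_branch}"


if __name__ == "__main__":
    webhook = {"repository": {"full_name": "acme/api", "name": "api", "default_branch": "main"}}
    print(enqueue_build(GitHubAdapter().to_project(webhook)))
```

If the external API changes, only the adapter changes; the "17 teams scrambling" failure mode Rob describes never happens, because nothing downstream ever saw the provider's shape in the first place.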

Conor Bronsdon: 21:30

Yeah, this key through line that I'm hearing in what you're saying is learning from your mistakes, and learning quickly, and iterating. This is something that we talk about on the technical side, of like, oh, iterations, we should learn, we should get feedback, we can keep doing this. But it's also something that is really important to apply as a leader. Mm-Hmm. Whether you're looking at the strategy side or just managing your team. This, like, fast feedback. I mean, you'll hear folks talk about it. Uh, obviously continuous learning, that phrase comes up a lot. Or you'll hear growth mindset, which really just means learn from your mistakes, learn from things you're doing, and keep improving. Do these postmortems, keep evaluating, and, to your point, break that unit of learning down as far as you can so you can be doing it as often as possible, because that's how you outpace competition, so to speak, by improving your knowledge.

Rob Zuber: 22:10

I think that, you know, your point about mindset is huge in there, which is: we don't know. There are so many sayings about this, but as we grow as leaders, I think we tend to, like, realize that more and more, that it's not about having the right answer. It's about having the right set of questions. Like, how can we get better information? How can I help people get better information so that they can make good decisions? Right. And how can I grow them? How do you get them to the place where they realize, I don't have to have all the answers; what I have to have is an approach to get some answers so that we can move forward.

Conor Bronsdon: 22:39

The other key piece that I'm hearing from you about how to adapt strategy and priority based on conditions, obviously applies to that continuous learning mindset. But I'm also hearing this idea about looking around corners, trying to project ahead and say, let me be a little prepared for some of these risks. Let me understand some of the directions it can go. Let's say something goes wrong, you're like, oh yeah, I thought that might happen. Like, Mm-Hmm, well, okay, now we can learn from it and apply it. And instead of going into the moment and saying, oh God, something went wrong. What do we do? You have at least a general idea of how to apply it.

Rob Zuber: 23:14

Yeah. I think that notion of being prepared for something going wrong, something just, I mean, I'm a big fan of hypotheses. Yeah. Right? Like, at the end of the day, this is a hypothesis. We're making a bet, and so we're trying to measure the risk of taking that bet. Right. And there's lots of ways to de-risk bets, right? Again, how can I learn a little more to sort of make a better decision? Um, but if you treat it as that, this is an exploration to get new information, then it's a much more comfortable conversation, first of all, right? Like, what did we learn? Oh, we learned that that was the incorrect hypothesis. Cool, like, no one's upset about that. No one's emotions are tied to it, right? That was our plan, was to learn something, right? I just finished this book, Modern Software Engineering, and I think, uh, it's Dave Farley's book. He does a great job of talking about applying the scientific method, if you will, to building software. And all of science, like, everything that we know came from guessing and then testing. I don't know, maybe it's this. Oh, can we get empirical evidence that that's true? Oh, actually it's been proven wrong because this other thing came up. No one's like, I failed as a scientist. They're like, I did my job. I learned something really interesting about how the universe works, or whatever. And if you think in that way, then all you're trying to do is get better information all the time, instead of, again, tying everything to, this was my idea and now it's a bad idea, sort of thing. That does not allow you to move forward.

Conor Bronsdon: 24:34

It does bring up another risk, though, where some folks get so obsessed with this idea of being right the first time, or like really nailing it that they spend too much time on that. Like, let me figure out my hypothesis. Let me strategize. Let me try to project the future instead of saying, okay, let's get my hands dirty. Make one hypothesis, try it out, fail. Learn from it, have more context and information, then go apply that.

Rob Zuber: 24:55

We have tried to instill, and this is hard, I'll be honest, that even doing things that you know are the wrong direction, if you will, can be useful if they will give you new information. So, in an incident handling situation, right? Let's say we're having a production incident. We would want to restore service to our customers first and foremost. That's the most important thing, right? Totally. And saying, what if we turn this thing off? Well, we're not 100 percent sure. Or what if we roll this back? Let's take a really simple example. What if we roll that thing back? Well, we're not 100 percent sure that that's the problem. But if we roll it back and it doesn't fix it, now we know it's not the problem, we don't have to debate it anymore, right? Instead of feeling like, oh, but, you know, we need to have the right answer in order to make a decision right here. Like, even doing things, and that's not necessarily the wrong direction, but doing things that we aren't confident in, if they will tell us something new and we can gather new information about the situation, is a completely reasonable approach. And then, if it does fix it and we don't understand why it fixed it, let's discuss it in the retro instead of while customers are not able to do their work. A, that's a little bit on the side about having clear priorities and what you value, but B, saying if it gives us new information, then we're willing to try it. And, you know, we apply that across a lot of things, where we say, oh, you know, we're not really sure if this is important or if this is important, like, let's get rid of it and see what happens, sort of thing. Right. Like, particularly in this, you know, year that we've all been operating in, from a macroeconomic perspective, there's a lot of, you know, do we need all these tools? Can we be more efficient, sort of thing? And you sort of, you know, you look around and see, well, someone's using this and someone's using this and whatever. Like, could we consolidate on one? And, you know, well, that would slow us down in this spot, but would it speed us up overall? Right? And sort of, like, trying to zoom out a little bit and look at what's the big picture, and will this drive us towards our goals? And that might feel a little backwards over here, but we can make that decision to sort of drive all of us forward.

Conor Bronsdon: 26:48

I think this aligns back to one of the first themes you mentioned around how to create great technical strategy and make sure you carry it forward, which is getting business context. Mm-Hmm. Because what you're talking about is two things. One, it's getting technical context, you know, on an incident, on something else. Like, this is something we already do, or should do, as a best practice. And so then if you apply that out to saying, okay, we also have to be business leaders at a certain level. Yeah. Great. Then it's logical you should try to get the business context from these communications. Yeah. Hypotheses, et cetera. And the other piece, I would say, is where you're thinking about efficiency of tools, thinking it through. In some ways, that's also business context. We need to cut our burn rate, or we need to reduce our tool spend. Or simply, we need to manage fewer tools, because the overhead of that is challenging for our platform team, or it's challenging for just organizational throughput because it creates these communication barriers, and it creates information barriers. And so I'm hearing this theme from you where you deeply value learning and context, and I think it applies directly to your talk. And so I'd love to zoom back out a bit and say, like, what are other things that you view as key to maintaining this crucial technical strategy direction?

Rob Zuber: 28:01

It depends on your level a little bit, but I would say, for me personally, I even think of technical strategy as secondary, because what it allows me to do is focus more of my energy on my first team, which is the executive of the company, right? When you're a CTO, your peers are not mostly technical leaders, right? They're the CFO and the CEO and the CRO and whatever. I don't necessarily think of it as just collecting context and putting that into technical implementation. Like, my job is to solve the problems of the business, and if technology is helpful, I will apply technology. If my team or my organization, the engineering organization, is helpful, or how they're behaving needs to change, or, you know, what they're focused on needs to change, then absolutely, like, I will use that knowledge to drive that. But at the end of the day, I'm trying to create more alignment, whatever, so that folks are more effective and I don't need to be as involved, so I can go focus on the things that really matter for the business. Right. Yeah. I mean, I am a salesperson for the company. I am a, you know, customer-focused leader. I concern myself with the issues of legal and finance and everything. Right. And we, as a team, look at all those problems and work to solve them together, more than everyone focusing on their own department and then coming and reporting status. Like, we don't deliver effectively as an organization until we as an executive are working on those problems together. And yes, we have specific skills, we have specific organizations. But as a CTO, I need to understand, in pretty decent intricacy, how our financial model works, or what our legal concerns are, because all of those pieces matter. And I can help the legal team make good decisions about how we should be structured globally, right? I can help the finance team make good decisions about what we should be spending in different areas, or look at where their concerns are and say, oh, I think I know a way to solve that problem, that may not even be coming from a technical perspective, but just from my understanding of the business and the market.

Conor Bronsdon: 30:12

That brings up two things. So one, I, I hear you talking about helping provide context, right? Like, CFOs need context into how technical leaders are thinking too. Yeah. Like, you have a role in providing them that, Mm-Hmm, and helping share, oh, this is why we're thinking about the product this way. Yeah. This is why we're, you know, engineering it this way. So I think that's, that's something that I would love to talk a bit about and understand how you approach that. Because yes, absolutely, what you're saying, we need to get context from the rest of the business. But you're also at a role where now you've started providing it. And I think a lot of folks who are maybe at that director level for the first time, and are starting to have to do more of that, would love to understand how you approach best practices. How do you then bring that context that you gather back to your team effectively?

Rob Zuber: 30:58

I think it's a little bit collaborative problem solving, right? Like, where I see these things come together. And, you know, there was an interesting talk on AWS cost in the last couple days, and I'm sure cloud cost is exciting for everybody. There's a lot of situations where folks in engineering, for example, feel like finance provides them a set of, you know, a set of budgets. You can spend money on AWS, you can spend money on people, you can spend money on whatever. Let's say R&D, right? So I'm thinking, like, in accounting lines from a public statement, but R&D is going to spend a bucket of cash and try to drive the maximum outcome for the business. And I, as a technical leader, can say, actually, if we spent less on people over here and more on third-party software, or actually reduced this part of third-party software and put that into our cloud spend, or whatever, we would have greater impact for the business. Right? So that's not necessarily something that someone in a finance role is going to see. Like, I need to help them see that, help them, and learn from them. Like, what is the difference if I hire a contractor versus a full-time person? Like, what does that mean for the company? Do I want to buy a three-year commitment to get a, an additional discount right now, or is that too much risk for us as a business, right? Like, all of those things are places where I have useful information and my CFO has useful information, and the best outcome is not going to be them telling me something or me telling them something, but us sitting down together and saying, here are our options, let's figure this out, let's understand the trade-offs. Yeah, exactly. And it's going to take both of us. And I think that's not something that has to happen only at my level. I think at a, again, at a director level, like, every director has a finance partner or something like that, like, someone in the finance org who's working with them, and they should be having the same conversation. Not, here's your cell in this spreadsheet, try to stay under this number, but, this is what I'm trying to achieve, here's what I think it would cost, here's the impact that I believe we can have on the business. You know, are we in a position to spend that cash right now if it's going to return in 12 months? Or do I need to be spending cash that's going to return in 6 or in 3? How can I think about structuring that? Like, that's a, that's a conversation that I would want directors to be having. And if you're not having those conversations, you're not preparing yourself to then be a VP, where you're definitely having them, or a CTO, or something else.

Conor Bronsdon: 33:17

If I'm a director, or maybe even just, like, a senior engineering manager, Mm-Hmm, who wants to take a step to be a business leader, how should they be thinking about that level-up process and how to, to grow?

Rob Zuber: 33:28

Yeah, it's, it's a fantastic question, and I always struggle a little bit in these conversations because I've been a CTO for, like, 16 years, by starting companies. And so that's one option. If you go run your own business, you're going to learn all this stuff very, very fast. You're going to be the CFO at some point, because you won't have one yet. And it gives you a very, very good perspective of what matters. It's a very good perspective. But you don't have to do that. It's not for everyone. Uh, I mean, people are stressed out right now. It's like a very high-stress thing. The other path is to participate in those conversations. I mean, whatever level you're at, there's someone above you who's having a bigger conversation about that topic. And I would say, I don't, I don't have a lot of people coming to me asking. Some of my folks, I mean, to be clear, I actually don't run the engineering organization. I have a sort of small group of very technical IC folks that work for me. But some of those folks are interested, like, help me understand, you know, what the CFO is thinking about right now. Yeah. And I'm happy to have that conversation, 'cause I have enough context about what the CFO is thinking about, 'cause we are very close, to be able to share that with someone. And then if they said, I'd like to understand more, tell me about this, and I say, I don't have an answer to that question, but she would love to talk to you. Right. Because that's helpful to our CFO as well. I mean, I'm not gonna send 200 engineers to talk to the CFO. Right. But for people who are keenly interested in trying to get that growth, or, you know, if I see enough of that interest, like, all of you wanna understand? Let's have a session and talk through the things that we're seeing and the things that, you know, our finance team is seeing and what really matters, so that we can each get a little bit more context about what matters. I don't know if it's everyone, but I'd say folks are generally reluctant to just ask around. Hey, you know, this, this is out of my wheelhouse, therefore I shouldn't talk about it. But those things out of your wheelhouse are the things that are going to help you be a good business leader, and ultimately growing as an engineering leader is about growing as a business leader, because the more value and impact that you can bring around the business, the more you're going to be able to move up through those roles. Like, at some point you're not about technology anymore. You know everything you need to know, other than the ability to learn new tech or understand the impact. And what you need to know now is how the business functions and how to be impactful.

Conor Bronsdon: 35:43

That alignment piece, and bringing that alignment back to your team, as you brought up, is also a crucial skill set. What were some of your best practices that you mentioned around helping share all the context you gathered, or encouraging others to gather context themselves?

Rob Zuber: 35:56

I, um, I had this moment recently, which was, I'll just describe it, where I had received some questions about some other stuff related to, you know, how we were structuring teams and some sort of low-level stuff, but the signal that I got from that set of questions was that people didn't understand, you know, within parts of engineering, basically how CircleCI makes money. And I don't mean we charge customers and therefore we get paid and we make money, but literally, like, how we really go to market, right? What our growth and expansion model is inside of our customers, et cetera. Because they think about the user and they think about the problems of that user, but they maybe don't see, you know, our go-to-market motion and the split between what happens with self-serve customers versus what happens with, you know, customers where we have a sales engagement. Those sorts of things. And so I basically kind of drew a bunch of diagrams and did a talk at one of our R&D call-hands, we call it, whatever ridiculous name, but anyway, an all-hands that's on a call. And just walked through, step by step, this is how people join. This is, you know, this is what it looks like when they're free. This is how they pay us. This is what revenue recognition is. Like, very sort of accounting-pointed terms, but that matters in terms of, you know, how we build features, honestly, like what kind of capabilities we're trying to give to our customers so they'll be successful on the platform. You know, there might be big opportunities like that to say, okay, like, this is how our business functions from a purely, sort of, someone else's perspective, right? A finance perspective, a go-to-market perspective. And understanding that picture, I mean, maybe doesn't change everything that they do every day, but might change the way that they think about a very specific problem. And, you know, one thing that I often say is, the number of decisions that your developers are making every day, you can't keep track. Every line of code, in some fashion, is a decision, right? I'm making a trade-off decision every time I hit the keyboard, and you're not going to be involved in 99 percent of those decisions. And so, trying to give people that bigger picture understanding, which, again, you can do at very large scale. Also, at smaller scale, you have a whole management structure, right? So you want your directors to be really clued in so they can help the managers understand, and also translate that down into context for that sort of more localized pursuit. Just, as a CTO, telling everyone everything about what's happening in the business is probably a little overwhelming, would take a really long time, and often ends up being in a place like, okay, I think I understand that, but I don't see how that applies to what I'm doing. And so use director, manager, senior manager, maybe, in there, to localize those, those problems and add the context of the challenges of a particular team, right? So, you can imagine, we have certain teams where, like, how money flows through the system is really critical. Right. And then we have other teams who maybe aren't thinking about it, but that might impact, oh, if we, you know, if we built this feature in this way, that would help us grow, you know, free users, which is helpful to us in the long term. Right. So how can we do that?

Conor Bronsdon: 38:57

Yeah, I think this scaling thing you're talking about is really important too. Like we think a lot about, like as leaders, how do we scale ourselves and scale our organizations? And one of the things that really underlies a lot of the approaches you're taking here is like you clearly trust that your team will get context, be provided context by you and others. Mm-Hmm. And then make good decisions. And there's a lot of research that shows that when engineers or any team members feel that trust, they perform better too. They, they understand what they need to do. They keep doing that learning process and they feel enabled to do their best work. What's your approach to building that trust in your organization and making sure you have the right people in place?

Rob Zuber: 39:38

Yeah, it's a great question. I mean, that's an ongoing pursuit, obviously. And I mean, I'll always hedge, like, we are not perfect, right? Nobody is. Yeah. There are places where I think we do this well and there are places where I think we struggle, and that can depend on people. You know, it can depend on the people involved, it can depend on the particular context. Like, I think if you're a CircleCI engineer, um, we talked about money, that's not the core of what we do. Absolutely we need to charge people and we need to make money and all that, but that's not how most of our developers, for example, think about the product. They use the product every day, unsurprisingly, to deliver their own software. If you're working on those features, then it's really easy to have context. You have this, like, oh, I know, this would be easier if I did this, right? And you can go talk to your PM and you can sort of share that locally. You know, what we don't necessarily see, I won't go all the way into, like, payment systems, but what we don't necessarily see is really large enterprises, right? The, the challenges of really large enterprises are not as close to our challenges as individual developers, 'cause we're a smaller company. Things like SOX compliance, right? Public companies, and sort of separation of duties and things like that, that, you know, a lot of us earlier in our organization thought, nobody needs that, right? That's not a good way to deliver software. Versus, okay, so now we have a bigger picture. So yes, how do we take that and help folks see it, right? And a good part, we talked about talking to customers, and obviously we have PMs who gather that at a higher level, but really encouraging the discussion, right? Oh, that's interesting. Why are we doing that? Like, help us understand, give us some examples of customers, whatever that might be, to understand this is why they're asking for it, this is how this works. And that works well in some cases and doesn't in others, but trying to get everyone to understand the customer and that particular customer problem in a way that they can come up with creative solutions. You know, it's one of the things that I've always loved about CI and CD, agile, whatever. Like we talked earlier about, we're probably gonna be wrong, right? And we're all about getting faster feedback. Yeah. And one of the, the key elements of that fast feedback for me is that, as an engineer, I am putting code in front of customers constantly. And so they might not be calling me and saying, hey Rob, I don't like this feature, but I can see the metrics moving. Oh, we put this thing out and, yeah, everyone loves it. Or we put this thing out and no one's touched it. Like, what did we do wrong? How can we learn more? We do lots of user research, which is kind of in the middle, and we have access to the recordings and, and sort of backroom access and stuff like that, so that, like, individual engineers can go really hear how customers are thinking about the problem. And I think all of those things to get you that fast feedback directly from customers really connect you as an engineer with the problem, right? I mean, I grew up through waterfall and 12-month delivery of products that nobody cared about and nobody wanted by the time we got them in the market, and the first time I did continuous deployment, which was like 2011, I was terrified, right?
Deployment used to be seven gallons of coffee and four all-nighters in a row to try to get the thing to work. And I was like, we're going to do that every day? That sounds awful, right? But then we tried it and I was like, this is amazing. I will never not do this, right? I will never stop delivering value to customers as quickly as I think about it. Here's my idea, now it's in the hands of customers. Like, there's a little typing involved, but that's going away, apparently. And so, uh, so then, like, why would I stop doing that? Right? And the thing that I love about it is that I understand how the customer is responding, and that direct connection and direct feedback allows me, to your point, to be more engaged in the problem. More so, I can say, okay, I understand how to do something for my customers, and I feel that. I don't know, call it a dopamine hit if you want, but, like, I feel that positive energy towards delivery. And when it's negative feedback, great, I know how to fix it, right? Versus, okay, let's sit and talk about it for a while, and then, like, this person's going to talk to that person who's going to talk to that person. So, I think everything we've done around software delivery in the last 20-plus years has really got us to this place where we're connected to the customer in a way that's really engaging.

Conor Bronsdon: 43:54

Yeah. This is a, a generalism, but something we think a lot about is, like, merging devs are happy devs. Mm-Hmm. You wanna deploy code. Yeah. You want that dopamine hit, you wanna see how things worked and then learn and grow off it. We like problem solving. Yeah. And I mean, frankly, every time we put out a podcast, I get excited too. Yeah. I'm like, oh, great. Like, we're, we're putting something out there. Hopefully it has value. I think a lot of what you're saying is great advice for technical leaders at every level. Yeah. Get business context. Understand the customer. Um, you know, talk to your peers. Learn, learn, learn, learn, iterate, test hypotheses. And I would love to understand a little more in depth what advice you would have for people who want to go be technical founders, because I know a lot of our audience is like, okay, I'm an engineering manager, I wanna be a director, or I'm a director, I wanna be a VP. And I think you've given some great advice for them, but I know you also have this unique perspective as a multiple-time founder who's had success doing it.

Rob Zuber: 44:43

It's gonna be a ride. It's hard work. Uh, that's not really helpful. I mean, I think a lot of my approach as a leader comes from that, right? Like, a lot of what I'm saying is how I would approach starting a business. Even more than in anything you're doing now, the faster you can get feedback about your idea, the more chances you're going to have to get it right. And to go to the sort of simplest extreme of that, like, when I want to launch a new product, I'm going to start with a landing page, right? I'm thinking about this idea. If I put up five, ten landing pages, which means just, hey, coming soon, put in your email if you want to be notified, does anyone respond, right? Does anyone click on the ads that I put up? Does anyone put in their email? If no one cares about this thing, how much money am I going to invest in it, right? Like, I think there's a good amount of, you know, founders who have this amazing idea that no one believes in, but they have this tenacity and they go after it for years and then they succeed. That is an outlier. That is an extreme outlier. What most people do is pivot, pivot, pivot, pivot, pivot. Like, try to figure out, there's something here, I know there's something here, but I need to get it into a shape that people are going to respond to. If I phrase it in this way versus, okay, people are interested in this idea, but if I explain it this way versus this way, if I call it this versus this, does that drive differences? Right? These are like 50 experiments, right? I talk to people who are thinking of founding companies and they're like, I'm going to mortgage my house and hire a team and... I'm like, please stop. Like, until you know that you have product-market fit or something, some signal that you're on the right track, the cost of experimentation today is almost non-existent, right? Like, I can spin up a single host in a cloud, I can run a Lambda job to respond to requests, I can just pay for a subscription on, I don't know if Unbounce still exists, but, like, a landing page site. The cost of experimentation shouldn't even show up on your personal monthly budget, is what I would say, right? And so there's so many opportunities to, to learn and figure out where you have an opportunity. And then from there, don't lose that discipline, is all I would say. Like, it doesn't matter how big you get, it doesn't matter how fast you're going, like, stick with doing the things that you have some evidence are going to drive value, and if you don't have evidence, find evidence, right? That same, like, how can I test the hypothesis that tells me if I'm going the right direction or not? Because the faster you adjust, the more time you have, right? Let's say you're funded at this point, right? You've got cash in the bank, and you need to put that cash to maximum use. Like, are you going to spend it all building the first version of the product that no one likes? Or are you going to build 18 versions in that time? Right? Because if you build 18, you've got way more chances to get it right. And so, that discipline, I think, is super critical.
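
To make that cheap-experiment idea concrete, here is a hypothetical sketch of the kind of near-zero-cost setup Rob is describing: a single AWS Lambda function behind a "coming soon" landing page form that just records who expressed interest. The table name and fields are illustrative, not anything CircleCI or LinearB actually runs.

```python
# Hypothetical landing-page signup handler: one Lambda function, one DynamoDB
# table, nothing else. Cheap enough to run dozens of these experiments.
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("landing-page-signups")  # assumes this table already exists


def handler(event, context):
    """API Gateway proxy handler: record an email from a 'coming soon' form."""
    body = json.loads(event.get("body") or "{}")
    email = body.get("email")
    if not email:
        return {"statusCode": 400, "body": json.dumps({"error": "email required"})}

    # Tagging each signup with the page variant lets you compare which of your
    # five or ten landing pages (different names, framings, prices) gets traction.
    table.put_item(Item={"email": email, "page": body.get("page", "default")})
    return {"statusCode": 200, "body": json.dumps({"status": "thanks"})}
```

If nobody ever hits this endpoint, you've learned something for pennies; if one variant keeps collecting emails, that's the signal worth pivoting toward before any real money gets spent.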

Conor Bronsdon: 47:41

Keep making hypotheses or, or bets, and, and getting more information and learning from it. Yeah. And I, I mean, you brought this up, or alluded to it earlier, that, I mean, things change really fast. Mm-Hmm. Both in technology. I mean, yes, ChatGPT's been technically out for a while, but it went viral really fast and now it's this normalized thing. And LLMs are everywhere, and we're all talking about how can we apply an LLM within our technology? How should we be managing our data? Two years ago, not everyone was doing that. Yeah. Uh, or look at just what's happening in the business world broadly. I mean, a couple years back, like, the, the balance of how people were hiring was very different. Mm-Hmm. Uh, there weren't massive layoffs that were happening. Uh, maybe you could fully grow a business off of, oh, we're selling to small startups and scale-ups. And it's a lot harder now because money is a lot more expensive, just looking at interest rates. Yeah. And things changed fast. We went from boom times to not-so-boom times, and if you lose that learning that you keep on bringing up, it's so easy to be caught flat-footed.

Rob Zuber: 48:44

Absolutely. I think things are always going to change, right? Change is the job, right? Like, if you think about it, even if you go to the very, very micro, right? I actually did not study computer science. I have an engineering background. Almost every other engineering discipline has way less flexibility, right? I design a bridge, I build the bridge.

Conor Bronsdon: 49:05

It's going to be around for a while.

Rob Zuber: 49:06

Yeah, exactly. I mean, unfortunately, the new Oakland Bay Bridge, they had to retrofit, I think, two or three times right after building it. But for the most part, like, you're committed to a design by the time you start implementing it, right? So the process, the process of designing, of, like, multiple engineers signing off, and, you know, lots of research up front, et cetera, that's how those things have to get done. But software is, like, it's so pliable, right? So you can start from, you know, everyone builds their hello world, right? And that's it. Initial commit, git init, whatever. And then everything after that is a change. I change it, I change it, I change it, I change it, right? And so orienting even the software that you build at the smallest level to be prepared to change is the job, right? Because all you're doing after that is changing it, and so you want to make it easy to change. And so if you take that discipline and apply it to how you think about business, you're going to put yourself in a really good spot, right? Like, we want to be well positioned to change. Doesn't mean we should change all the time, right? Confusing our customers, launching different products, whatever, like, that's not a great place to be. But there's always going to be shifts in the market. There's always going to be dynamics. Like, we're going to go from cash is easy to come by to cash is hard to come by, right? So we're changing the model of how we operate our business. We're going to go from AI/ML is nowhere to if you don't have it in your product, no one wants to talk to you, right? So how fast can I make that change? Right? And if I am 100 percent committed, all of my weight going in one direction, and now I have to turn, that's way harder. Yeah. And it's gonna leave me exposed when someone else is going to grab that idea and, and run with it in another direction.

Conor Bronsdon: 50:46

Actually, you brought up, I think, an important piece of advice to founders, which is: just buy a .ai domain. That's the first thing you should do. After that, you know, it all gets a little easier. It does bring up something I want to ask you, which is, what are the changes that you see coming next? Are there things that you have your mind on and you're like, you know, I'm anticipating this? Whether, you know, broader market conditions or technological things that you see coming down the line that people should be thinking about?

Rob Zuber: 51:11

We're definitely in a time of uncertainty. Ask anyone about, you know, future economic conditions, and you're going to get a broad spectrum of answers. So, I mean, there you go. That's, like, your set of possible futures is quite wide. I'm not going to say this is the one I'm betting on. What I'm betting on is a high variability in there, right? So, so then your question is, how do I hedge? Right? Which I think is, I've been asking that question a lot more lately than I have in the past. And I think it's a really interesting question. I don't know if people are familiar with, like, financial hedging, and I'm not gonna talk about that model.

Conor Bronsdon: 51:41

I'll, I'll just say that's a mistake I've made before, right? Like, thinking I have the potential future in mind. Yeah. And going all in, jumping on it. And when it works, it's great. Yeah. But when you're wrong, the risk is high.

Rob Zuber: 51:51

And I would say in those kinds of conditions, another thing that I would highlight, which I think is good for founders and for many folks, is, I can only think of it metaphorically, but knowing what your stop-loss is. What is the point at which, if this is not working and we can't prove it, we are going to call this plan into question? I love this expression, a Ulysses contract, and I'm not going to explain all the details, but it's basically, like, before, when I am of sound mind, I make the commitment to certain behaviors, such that when we are in the moment, right, it's so easy to say, what if we just keep going, what if we just keep going. And I like, my favorite examples of this are all from, like, mountaineering disasters, and I'm sorry for bringing this up, but like a turnaround time, where you say, if we are not in the final stretch by 2 p.m., we are going back down the mountain, because then we're gonna live to, like, do it another day. And every mountaineering disaster story you read has people going past that time. Like, oh, we'll get to the summit. Yeah, yeah, no, we're just, but we're so close, right? We're so close.

Conor Bronsdon: 52:46

Maybe we're oxygen deprived, and now we're, like, not thinking straight.

Rob Zuber: 52:49

Right, because you're making terrible decisions. Yeah. Right? And we may not be at, you know, 8,000 meters or whatever in our day jobs, but under stress, right? Oh my gosh, we've, like, burned a bunch of cash, and now we're feeling stressed that we have to make this work, versus we've proven it doesn't work, how much time do we have to pivot? Or, like, what is our hedge, what's our other option? You know, all those things I think are good to frame early. There's this belief that founders are people who commit to this one idea, and they just keep, you know, charging forward. Don't let that be your thinking. Absolutely there are points where people will question, and you have a vision and you need to proceed, but you need to decide, like, where that is. Because if you just think the first thing you thought of is gonna be amazing, you know, you can burn tons of cash, tons of trust, tons of future opportunities, by pursuing something sort of without checking in, I guess.

Conor Bronsdon: 53:40

Yeah. That check-in is crucial. Rob, I've really enjoyed this conversation. Do you have any closing thoughts you wanna share with the audience? What's coming, advice, or anything else you want to mention?

Rob Zuber: 53:50

Prepare yourself for change. I think the thing that I would want to land out of all of that is that that approach makes things easier. I think we often end up in this spot, and we're all under a lot of pressure right now. We're all sort of exhausted from just a few years of what's been a grind. But it generally pays off to put yourself in the situation where you are more adaptable. So I would take that extra time, like, focus less on what's the thing I have to do right now and a little bit more on how do I set myself up, particularly as leaders, for a wide variety of outcomes in the next, in the next couple years. I mean, we at CircleCI, unsurprisingly, are trying to adapt and, and focusing on adapting to, you know, this whole shift in AI/ML. How do we put capabilities in our product? Like, are we prepared for that change? How do we support other people trying to build those things? Like, what does software development look like over the next couple years? That's super fascinating. I'll just say, if anybody wants to talk to me about that, come find me. There's a broad spectrum of possible futures, probably more than we're used to, at the moment. And so take the time to step back and look at how are we going to act in each of these situations and how do we set ourselves up to be successful across that broad spectrum.

Conor Bronsdon: 55:04

Yeah, de-risk a bit and understand what could be coming, because you'll be more prepared for it. I think that's a great insight. Do you want to talk about what CircleCI is doing in AI/ML? I'm happy to dive in there a bit more.

Rob Zuber: 55:15

I would say that there are two things that we're very interested in. One, again, is how do we help people deliver software of any kind faster. Yeah. Right. Yeah. We launched this error summarizer, so you have a stack trace, you know, you can click a button and get a plain-English explanation of what the problem is and how to fix it. We're gonna continue to iterate on tools like that. We've, uh, in the lab at least, demoed a self-healing pipeline. So if your build breaks, we'll just fix it for you, 'cause we have all the tests. We know what it looks like to be right, so we can generate code that is right, according to your definition, even if your code is wrong. Right. That's a pretty cool spot to me. Yeah. Um, and then on the other side, many of our customers are in the same situation, right? They're building AI-enabled capabilities into their products, and they're unfamiliar with how to test those. Right. As engineers, we're all used to determinism, like, two plus two has been four for a really long time, and now it's, like, somewhere between 3.1 and 4.9, it's probably okay. Right? Yeah. And so helping folks, A, just learn and understand how to do that effectively in their standard software practices. Like, we don't need to reinvent all of software delivery to deliver software that happens to be AI enabled. And so including some additional capabilities around non-determinism, stuff like that. And then, and then we're looking at all the other tools that we have and helping folks apply them. Like, I'm putting out a new, I'm testing a new endpoint in a product, right? Does that look much different from, uh, I'm just rolling out a new version of something? Like, canaries, blue-green, feature flags, like, we've built all the tools for these things over time, and software engineering has solved a lot of these problems. But we're at this interesting intersection of kind of two communities: folks who have been building AI/ML in the lab, and folks who have been delivering software that they expect to behave exactly the same every single time. And so we're bringing those pieces together to help those software engineers deliver with the tools that, like, honestly, the AI/ML folks have built all the tools. It's just that the software engineers don't necessarily know that they exist or how to use them. So putting that into your delivery pipeline so you can continue to deliver effectively. And I always tell this joke, or whatever, I guess it's, like, self-deprecating, but when the CTO shows up and says, I don't really know what AI is, but we need it in the product by next week. Right. And, and as a developer, you're like, what do I do? Yeah. And the reality of this situation is so much of this stuff has been done for you and you have all the pieces, and so just helping people sort of see that path and continue to deliver effectively. So that's what we're, we're focused on at the moment. I mean, it's, it's an awesome opportunity, I think, for everybody. People are building some very, very cool products. People are building tons of products that are not cool, but it's experimentation, right? It's like, absolutely be hypothesis driven, because this is brand new. We see tons of possibilities. Not all of those possibilities are going to play out, but some of them are. Send a request to a foundation model and put something in your product and see if it works, right? Yeah. And then when you know that it works, you can worry about how do I scale it, how do I optimize it, how do I deal with cost, et cetera.
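
Rob's "somewhere between 3.1 and 4.9" line points at a testing pattern for non-deterministic output: assert properties and tolerances instead of exact strings. The sketch below is illustrative only; `call_model` is a stand-in for whatever foundation-model client you actually use, and the thresholds are arbitrary assumptions, not a CircleCI feature.

```python
# Property-based check for non-deterministic model output: instead of
# asserting exact equality, assert that the answer lands inside an
# acceptable band and contains the facts we care about.
import re


def call_model(prompt: str) -> str:
    # Stand-in for your real foundation-model client (assumption: it takes a
    # prompt string and returns a text completion). Replace with a real call.
    return "2 + 2 is 4, which is an even number."


def test_arithmetic_answer_is_close_enough():
    answer = call_model("What is 2 + 2? Answer with a number and one sentence.")

    # Pull the first number out of the free-form text.
    match = re.search(r"-?\d+(\.\d+)?", answer)
    assert match is not None, "model returned no number at all"

    # Tolerance instead of equality: the 'somewhere between 3.1 and 4.9' idea.
    value = float(match.group())
    assert 3.1 <= value <= 4.9

    # Cheap guardrails that don't depend on exact wording.
    assert len(answer) < 500


if __name__ == "__main__":
    test_arithmetic_answer_is_close_enough()
    print("ok")
```

A check like this drops into an existing test suite and CI pipeline unchanged, which is the broader point: you don't need a new delivery process just because one step is probabilistic.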

Conor Bronsdon: 58:14

Fantastic. Thank you for all these great insights, Rob. I've really enjoyed our conversation. If you want insights from more engineering leaders just like Rob, make sure you tune in every week, uh, whether you're on YouTube or Spotify or Apple Podcasts, we're on all your podcast players. And, uh, you can also check us out on Substack at devinterrupted.substack.com. We put articles out there every week, along with, uh, information on the podcast. And, uh, thanks for tuning in. Hope you enjoyed the conversation. Thanks for coming on, Rob.

Rob Zuber: 58:37

Thanks for having me. It's great to be here.