By giving more visibility into what's happening with your software, your application, you actually do enable development to be faster as well. It's not just insurance for when something goes wrong.
This week, co-host Conor Bronsdon sits down with Daniela Miao, co-founder and CTO of Momento, to discuss her journey from DynamoDB at AWS to founding the real-time data infrastructure platform Momento. Daniela covers the importance of observability, the decision to rebuild Momento's stack with Rust, and how observability can speed up development cycles. They also explore strategies for aligning technical projects with business objectives, building team trust, and the critical role of communication in achieving success. Tune in for valuable insights on leadership, technical decision-making, and startup growth.
Topics:
- 02:01 Why is observability often treated as an auxiliary service?
- 06:14 Making a push for observability
- 13:32 Picking the right metrics to observe and pay attention to
- 15:49 Has the technical shift to Rust paid off?
- 19:23 How did you create trust and buy in from your team to make a switch?
- 26:31 What could other teams learn from Momento’s move to Rust?
- 38:15 What advice would you give to other technical founders?
Links:
- Daniela Miao
- The Momento Blog
- Momento: An enterprise-ready serverless platform for caching and pub/sub
- Unpacking the 2023 DORA Report w/ Nathen Harvey of Google Cloud
- Google SRE
- Rust Programming Language
Transcript:
Daniela Miao: 0:00
The iterative development cycle is often feedback driven. It depends on: well, what's my current product doing? Where are the requests going? Why is my app slow, right? A lot of that feeds back into the next cycle of development. So rather than letting that opaqueness grow, observability gives you that venue where you can find the slowdowns, you can find the performance issues...
0:29
Developer productivity can make or break whether companies deliver value to their customers. But are you tracking the right metrics that truly make a difference? Understanding the right productivity metrics can be the difference between hitting your goals and falling behind. To highlight how you can adjust your approach to both measure what matters and identify the right corrective strategies, LinearB just released the Engineering Leader's Guide to Accelerating Developer Productivity. Download the guide at the link in the show notes and take the steps you need to improve your organization's developer productivity today.
Conor Bronsdon: 1:03
Welcome back to Dev Interrupted, everyone. I'm your host, Conor Bronsdon, and today I am delighted to have an exciting episode with Daniela Miao, co-founder and CTO of Momento, a real-time data infrastructure platform that simplifies architecture at any scale. Daniela has a really impressive background, having helped build DynamoDB at AWS and worked with IBM at their Canada Software Lab before founding Momento. Thanks so much for joining me on Dev Interrupted.
Daniela Miao: 1:28
Thanks for having me, Conor. It's super exciting to be here today.
Conor Bronsdon: 1:32
Yeah, Daniela, I'm excited to tell a bit of your story, talk a bit about your approach as a founder, and dive into the importance of observability with you. A lot of our listeners are founders themselves or thinking about becoming technical founders in the future. We've done quite a few episodes focused on different leadership skills and the approaches leaders take, but for today's conversation we really want to take a look at your playbook as a CTO and understand both why you think observability is so crucial yet often overlooked until it's too late. I also want to get a sense of why, Daniela, you are rebuilding Momento's entire stack with Rust, and how you got the buy-in to do so. But before we jump into all that, I do want to remind our listeners: if you enjoy these episodes, I know we ask you every week, but please take a brief moment to rate and review Dev Interrupted on your podcasting app of choice. Your feedback really helps us bring you more great conversations with leaders like Daniela, and we really appreciate it. It means a lot to us every time we see one of your reviews. But with that said, I want to start off with some congratulations. So Daniela, first off, congratulations on your Series A funding.
Daniela Miao: 2:34
Thank you. Yeah, we're super excited for what's, uh, coming up ahead for us, for our company.
Conor Bronsdon: 2:40
Yeah, it's got to be a huge milestone and really exciting to have that opportunity to take this next step. And I know one of the big things you were thinking about as we first started to talk about having you on the show is the importance of observability and how you think about it. Why is it that you think observability is often treated as an auxiliary service? And what are the consequences you see of not prioritizing it from the start, when your company's just getting started in that seed or pre-seed stage?
Daniela Miao: 3:06
So a lot of my opinions and my thoughts here have been formed by my previous experience working four-plus years at an observability startup in San Francisco called LightStep. There we saw all kinds of customer stories, and that's where I saw that a lot of companies, especially startups that are trying to move fast, break things, and find product-market fit on the shortest timeline, really focus on building the product. Observing the actual product is usually an afterthought. Once you have the feature ready, you want to push it out, show it to your customers, and let the market try it. That's very understandable. However, most of us do not build perfect software the first time around. What we've seen is that when issues occur, there's just a lack of visibility into what's going on at all, and the developers are flying blind at that point. You really want these things not to happen when you hit that growth curve, when your product becomes successful. Unfortunately, issues usually occur when you become popular. So we've seen a lot of customers rushing in to say, oh, we're really struggling, what's the fastest way to get observability into our stack? And unfortunately, it's something you have to invest in over time, and there really aren't a whole lot of shortcuts.
Conor Bronsdon: 4:42
How do you weigh these technical challenges as a technical founder yourself, while you're also dealing with things like a fundraise, which you just closed? How do you balance these priorities, given that I would presume your investors are more excited about you driving new product features than about internal observability and understanding what's happening with the ongoing platform?
Daniela Miao: 5:04
I think the challenge here as a founder or technical leader is that you do have to make a call on how to balance that. There's no perfect formula. Every company's product is different. Every company's timeline to find product-market fit is different, depending on your runway. For us, for instance, we are a real-time infrastructure platform. We power the underlying infrastructure for a lot of other companies, so to me observability is crucial, because if we're down, our customers' software is down. It's sort of the DNA of our software. So I do set up guidelines. There is a minimally viable launch list that we have to check before a feature is launched, so it just becomes natural. Yes, you have to do a little bit of actual work to ensure that you've added the correct metrics and that you can observe what your new feature is about to do. It's like a little bit of tax, although tax maybe has too negative a connotation. It's that little bit of effort you put in with every feature that gives us the assurance that we can see what's happening if issues start occurring. That may not be necessary for every single product, to be honest, so I think as a leader you have to make a call on how much you want to attach to each product launch. I would just argue that for most products, having zero observability before you launch is probably not the right call.
Conor Bronsdon: 6:43
I think that's fair. Understanding the impacts of your product once it's out there in, I'll call it, real space, actually being used by customers, is definitely something you can iterate and learn from. You and your colleague Chris Price have also talked about this concept of moving bugs forward in time, or dealing with them before they become bigger problems. You've kind of coined this as shifting your bug catching and fixing left. Is this approach, this idea of "hey, I want to solve these problems before they become bigger ones," a main reason why you're making this push for observability?
Daniela Miao: 7:16
Yeah, totally. It's not just Chris and I, although he has definitely authored a lot of content, including talks, on this. It's something that by now our entire technical team subscribes to: you want to catch the errors, hopefully in development, and if not in development, definitely in early testing, in the alpha and beta stages. By the time something makes it into production and becomes popular, a lot of those errors and issues have been caught already. This boils down to everything we do, including observability. We have many stages of deployment before something actually makes it into production. Again, there can be this balance of, well, I've built this thing, I just want to put it out there for people to try. And again, it's risk management as a leader. You have to think about: what are the chances that this will go wrong, and what are the impacts if it does? How important is the launch? If you're a little bit earlier and you're still chasing product-market fit, and you launch something and not many people are using it anyway, then maybe you do want to keep launching, shorten that testing, and reduce the amount of observability you invest at that stage. So it's kind of a sliding scale that you can decide on.
Conor Bronsdon: 8:43
This sliding scale approach that you're bringing up makes a lot of sense to me, because obviously the context is different for every team and the needs of every product are different. But I'm sure there are CTOs or other engineering leaders listening who are thinking: look, my team's really early, this is a new product, I just want to get it into the hands of customers, at least in a beta, so we can start tooling around with it and see what happens. And then maybe a few months later, going into different iterations, they realize, oh, we need more observability on this product, but they're getting pressure from business leaders, whether that's their CEO, the board, or shareholders, saying, hey, we need to move faster on this, let's just get this out there. What advice would you give to those technical leaders who are facing that challenge about how to shift the perspective?
Daniela Miao: 9:39
By the way, I think the perspective you're talking about, about speed and velocity, is a totally valid one, and one that I still carry today. Again, it's all about the balance. We've talked a lot about issues and using observability to debug. I think what people often don't realize is that it's not only about issues. By giving more visibility into what's happening with your software, your application, you actually enable development to be faster as well. So one way to think about that investment is that it's not just insurance for when something goes wrong. Although that can be valuable, not a lot of people like to invest in it, because we don't like to think about what happens if things go wrong, and what does it matter if you haven't achieved product-market fit and all that matters is getting your product out there faster? But the iterative development cycle is often feedback driven. It depends on: well, what's my current product doing? Where are the requests going? Why is my app slow? A lot of that feeds back into the next cycle of development. So rather than letting that opaqueness grow, observability gives you that venue where you can find the slowdowns, you can find the performance issues. We're a very performance-driven company, so if you surface that directly to developers, then naturally on the next development cycle your features will ship faster and better. I think that's the perspective switch we also need to make: it's not just about debugging, it's not just about incidents.
Conor Bronsdon: 11:14
Yeah, I think it's interesting to connect this to a conversation I had with Google's Nathen Harvey when he came on the podcast last year to talk about the DORA report. A couple of the insights he shared from their research were that when companies had intensive change review processes, it actually reduced the stability of their code, and that when the code review or PR process was faster, they saw something like a 50 percent faster software delivery lifecycle. It's a little counterintuitive. Sure, faster PR reviews mean getting code out there faster, and it makes sense that the overall team would move faster, because that's a blocker for a lot of teams. But you would think that more in-depth, intensive change reviews would actually improve code stability and quality. Instead, because of the way it changes the team's workflows and limits those feedback loops, instead of saying, "hey, we have great observability, we can see how this code functions," you're actually creating a blockage in the process. So it's interesting to look at these two adjacent problems, because I would argue that having better observability throughout your software development lifecycle is maybe a better approach than focusing just on something like the change review process, which creates a blocker in your process whereby you get less of a feedback loop, like what you're talking about.
Daniela Miao: 12:49
Yeah, exactly. If you take it to an extreme, if you make the review process super lightweight, then you're constantly rolling out new changes into your environment. Now, if you have good observability on that environment, then you can quickly get feedback, like you said, about the behaviors of your software, things that are really difficult to catch in code review and even in testing. In local development, a lot of runtime behavior is not predictable, so you have to sort of "see what happens," quote unquote. And like I said, going back to the development cycle, that honestly is one of the fastest ways for feature velocity to go up: you kind of just ship it. Of course, basic review is still important; I think code review is necessary. And actually, at Momento we do implement that strategy. Our code review process is very light. The intent of the code review is not to catch bugs, because reviewers rarely can; it's really to share knowledge and to have some basic sanity checks on the approach, the architecture of the code. We rely on observability in our pre-prod environments to catch issues and to observe the behavior of the new feature.
Conor Bronsdon: 14:08
And I'm sure there are folks listening right now who are saying, this is just doing CI/CD effectively. And yes, in a lot of ways you're right; these are crucial elements of building resilient systems that can effectively do CI/CD. So I appreciate you taking this angle of: let's talk about the importance of the analytics, let's talk about the importance of actually understanding what's happening. You mentioned earlier that one of the crucial things is identifying the right metrics, the right KPIs, I'll call them, to pay attention to as part of your observability process for your infrastructure and your code. How do you think about that process of picking the right metrics, the right things to pay attention to?
Daniela Miao: 14:49
Yeah, there's been a lot of literature on this. It's talked about extensively in the Google SRE book, around service level objectives, SLOs. Honestly, the simplest way to think about it without having to go read all that literature is: what affects the experience of your customers? Put yourself in the shoes of your customer. If you're building an app, go use the app. We're building an infrastructure service, so build something on top of the infrastructure service and think about what makes you evaluate this vendor, this piece of software, this app versus another app. Is it speed? Is it the design of the features? Is it the completeness of the feature set? There are basic things like: when I click a button, does it work? Those are things you need to be able to measure. Think about waking up at 3 a.m. because there's a critical issue and you need to pull up a chart. What does that chart need to tell you for you to know whether your app is working or not? Honestly, it usually boils down to very simple things. There's a minimum set of three: the number of requests, the number of errors as a percentage of those requests, and latency, how fast those requests are getting served. I think that's the bare minimum set, and it's very generic. Depending on the actual product, you could then have application-specific metrics that measure the success of your app.
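(For readers who want to see what that bare-minimum set can look like in code, here is a rough sketch in Rust, assuming the `prometheus` crate. The metric names and the `record_request` helper are illustrative only, not anything specific to Momento's stack.)

```rust
// Three golden signals sketch: request count, error count, latency histogram.
use prometheus::{Encoder, Histogram, HistogramOpts, IntCounter, Registry, TextEncoder};

struct RequestMetrics {
    requests: IntCounter,
    errors: IntCounter,
    latency: Histogram,
}

impl RequestMetrics {
    fn new(registry: &Registry) -> prometheus::Result<Self> {
        let requests = IntCounter::new("requests_total", "Total requests served")?;
        let errors = IntCounter::new("errors_total", "Requests that returned an error")?;
        let latency = Histogram::with_opts(HistogramOpts::new(
            "request_latency_seconds",
            "End-to-end request latency in seconds",
        ))?;
        registry.register(Box::new(requests.clone()))?;
        registry.register(Box::new(errors.clone()))?;
        registry.register(Box::new(latency.clone()))?;
        Ok(Self { requests, errors, latency })
    }

    // Call once per request; the error *percentage* is derived at query time
    // as errors_total / requests_total.
    fn record_request(&self, latency_seconds: f64, is_error: bool) {
        self.requests.inc();
        if is_error {
            self.errors.inc();
        }
        self.latency.observe(latency_seconds);
    }
}

fn main() -> prometheus::Result<()> {
    let registry = Registry::new();
    let metrics = RequestMetrics::new(&registry)?;

    // Simulate two requests: a fast success and a slower failure.
    metrics.record_request(0.012, false);
    metrics.record_request(0.250, true);

    // Dump Prometheus text format: roughly the chart you'd pull up at 3 a.m.
    let mut buf = Vec::new();
    TextEncoder::new().encode(&registry.gather(), &mut buf)?;
    println!("{}", String::from_utf8_lossy(&buf));
    Ok(())
}
```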
Conor Bronsdon: 16:21
Totally. And I think that's the crucial thing to understand: every engineering team is going to have to tweak this a little bit. We're talking general approaches and specific examples. And speaking of examples of building resilient systems, I understand you've made a bold decision to rebuild Momento's whole stack on Rust, which I can imagine has been a transformative process that's also taken quite a bit of work. What led you to make this kind of big technical shift and bet big on Rust, and has it paid off?
Daniela Miao: 16:51
I'll answer your second question first. I think for the most part it has, but it's a long-term investment, so some of the benefits we'll reap over time. We're still reaping them today, and I'm hoping we'll see more pop up in the future. In terms of what enabled us to invest in making this big shift: we're still pretty early in our startup journey, and it gets more and more expensive to do this later. We see a clear trend in the industry. We have a lot of people, friends, and investors who have worked with Rust, either at their own companies or through their friends' companies. We trust their judgment, and they've given us real feedback. We've also talked to people who've run Rust in production. Surprisingly, there aren't actually that many companies that have run Rust at scale, but the ones that have, we've talked to them and gotten really good reviews of the language. Ultimately, Momento is a very performance-focused company. We launched the first version of our product in a JVM-based language, in Kotlin. Most of our engineers have very deep JVM expertise, and I think it was the right choice to start with a JVM-based language because it allowed us to go from no product to a version of our product very quickly. Once we got that out and got enough product-market fit signals to know this is the product we want to stick with, we had to think about our competitive edge and the bread and butter of our service, which is low-latency, high-performance services at scale. That is what Rust is touted for. So once we realized that our North Star was going to be a different language that's not JVM-based, it was an easy decision to say, okay, we've got to rip off the bandaid now, rather than wait until we have 10x the number of customers and 100x more tech debt before we make that switch. It definitely came with its own set of challenges and pain, but we're over that hump now, and we're very happy with that decision.
Conor Bronsdon: 19:13
And how did you deal with those challenges?
Daniela Miao: 19:15
A lot of the challenges were really big. Few members of our team actually knew the language, so not only do you have to operate an existing service and not let your existing customers down, you have to switch it out from underneath while trying to maintain feature velocity. So it had to be a team decision. People had to feel committed, and they had to feel excited about the change, because they had to learn a lot. And it is a decently steep learning curve compared to some of the most popular languages like Python and JavaScript, and, coming from a JVM-based world, especially Java and Kotlin, it just took a lot of getting used to. Technical-challenge-wise, we have great engineers. I believe in them and they believe in themselves; they knew that wasn't going to be an issue. I think it was more emotional and psychological, having to pick up a completely new language and think about how to optimize things for performance and concurrency at the same time.
Conor Bronsdon: 20:17
So how did you create that, I'll call it, trust and space and buy-in, to get your team to say, hey, yes, let's do this together?
Daniela Miao: 20:26
Yeah, I think that's actually a good question, because when you're a founder or a technical leader and you see the path, you know, my co-founder and I had talked about this, and we talked about it with a couple of engineers, and we quickly came to the alignment that Rust is the way to go. It can be tempting as a leader to want to lead and drive the team forward. But my approach, and one that I recommend, depending on your leadership style, is definitely to get a lot of alignment and a lot of buy-in, and to lean into this idea of asking questions rather than instructing. So for us it was: is our product at the point where we believe we know what it looks like, we know the feature set, we believe in the current architecture? Yes, we've reached that point. Do we care about performance? Yes, obviously. Have we optimized our JVM stack to the point where we feel there aren't any more low-hanging fruits? Yes, we did that. Can we get faster with Rust, or a different low-level language like C? Yes, in our spikes we discovered we can, and a lot of that is inherent to how the language is structured: the fact that there's a JVM, the fact that there's garbage collection. As you ask these questions and let the engineers and the team discover, they will come to the same conclusions a lot of the time. And that is really powerful, because then they're committed to it, they believe in it, they see why the North Star is the North Star. It takes more time, which can be frustrating as a leader, because you're like, well, I know the answers and you should just trust me and go. And sometimes you do need to do that with certain decisions; it's not one-size-fits-all. But with something as instrumental as this, where I needed the whole team to be bought in and to spend weeks and months on it, you need a lot of internal motivation. That's how we went about it. And there is a network effect: once you get a few senior leaders bought in, other people start to see it the same way. So once we got over that initial resistance, everything else was pretty easy for us.
Conor Bronsdon: 22:56
I think that's great advice for other engineering leaders who are considering similar transformations. And I'll say personally, I'm interested to see if these kinds of rebuilds become opportunities to leverage some of the AI-assisted coding tools that are coming out, to see whether they can help with this kind of, I'll call it, translation layer from one language to another. Because we're seeing claims from certain enterprises, Amazon for example, that, hey, upgrading from Java 7 or Java 8 to 11 or 17 is much easier now with some of these AI code assistants. And I wonder if in the next couple of years we'll get to a point where shifting from, let's say, Python to Rust is a lot easier. That'll be an interesting one to see if it gets more solved in the next couple of years.
Daniela Miao: 23:42
By the way, I was totally ready to be convinced otherwise as well. I think as leaders it's good to have conviction in your opinions, but you always have to keep an open mind that discovery and talking to your engineers might change your mind, right? So I was ready to be convinced otherwise. Like I said, it gave everyone more conviction that, as each person did more discovery and thought about our North Star, we came to the same conclusion.
Conor Bronsdon: 24:17
Yeah, you've talked about this idea of wanting to build a robust, scalable platform, obviously caring about latency and about performance. And I would presume that you view this transformation to Rust as complementary to your perspective on observability and how you want to broadly approach software development at Momento.
Daniela Miao: 24:35
Yeah, like I said, what you call fixing left, shifting your problems, shifting your bugs forward in time, is very much ingrained in our DNA now with every single feature launch. Part of the draw of Rust for us is that the compiler catches a lot of issues. That can be frustrating for developers, especially if you're coming from a world like JavaScript or Python, where with a few lines of code you can have a fully working application just like that. That's the reason those languages became really popular: how quickly they let you get prototypes out. But when you shift from being a prototype to being a real service, which, like I said, we've gained confidence our product is at that point, then you really want to spend more time in development, because that is honestly the cheapest place to catch a lot of these issues. And there are areas, I mentioned concurrency, that are very difficult to observe. It's kind of an abstract concept, to the point where you can't easily visualize it on a chart either. You can try, and there are all kinds of numbers around how many threads you have or how many parallel processes, but it's just really difficult. Things like data races, and, in classic programming, things like buffer overflows, accessing a variable that's been freed, memory management issues: a lot of that is inherently designed and built into Rust as guardrails that protect you from making those mistakes. So the way I think about our shift to Rust is that the compiler helps us catch a lot of that. The ownership model prevents us from having concurrency issues all over the place, so it catches issues that are difficult to observe. Then, for integration, for the end-to-end feature set, observability fills in the gap. There's development, and there are certain things like concurrency that the language helps us catch, and then for the rest, where you can't catch it at compile time because it has to do with the end-to-end behavior of your application, we have observability. To me, it's really about thinking about the gaps in where we're able to see and catch issues and giving our developers more visibility into what's happening. And it's not just the language or observability; there are other things, of course. You mentioned CI/CD, and things like having pre-prod environments. All of those things add up to our commitment to the robustness of our platform.
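(As a tiny, generic illustration of the guardrail Daniela is describing, not Momento code: sharing a plain mutable counter across threads simply won't compile in Rust, so the only version that ships is the one guarded by `Arc<Mutex<...>>`, and the data race never reaches a pre-prod environment for observability to catch.)

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Mutating a plain `let mut hits = 0u64;` from several threads would be a
    // data race; the borrow checker rejects it, pushing you toward Arc<Mutex>.
    let hits = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let hits = Arc::clone(&hits);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    // The lock is the only path to the shared counter.
                    *hits.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    // Deterministically 4,000: the class of bug that's hard to see on a chart
    // can't be written in the first place.
    assert_eq!(*hits.lock().unwrap(), 4_000);
}
```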
Conor Bronsdon: 27:25
Are there particular lessons you feel you've learned through this technical transformation process, from integrating the Rust transformation with your observability practices, that other leaders or teams could learn from your approach?
Daniela Miao: 27:40
I think setting really concrete goals from the beginning. Observability is a whole science slash art in and of itself. There are many books on it, many experts on it, a ton of podcasts on it. One can get really carried away if you're passionate, and I'm certainly passionate. Same with Rust: there's a very vibrant community, a really awesome set of developers who are really passionate and could keep talking about it. But ultimately, you have to think about, for your business and for your product, what are you trying to achieve? What moves the business forward? So everything, even the technical projects, needs to be grounded in a business objective. And then you say, okay, what is necessary and what is sufficient to achieve that business objective, and that's what I need to invest in observability and in Rust, so that you're not working on a technical project in isolation. It's easy for engineers to get really deep down in the weeds, really excited about a project, and then you forget why you're doing this in the first place. Well, it's to achieve the business objective. So I think this is important as a leader. As co-founders, Khawaja and I do take the time to really share, as a company, where we're going. Sometimes it seems like, well, why do I care how many customers we have or how much money we've made this quarter? It matters. It all breaks down to having a viable and successful business. The feature you're building, even if it's some tactical Rust tool, is all contributing to the business. So that's something I'm still learning and, to be honest, sometimes struggling with, because I do enjoy the technical problems and I want to solve them.
Conor Bronsdon: 29:34
I mean, I think this is a lifetime learning opportunity for all of us as leaders, right? How do you align your team when you're making these significant technical shifts, or just on the day-to-day? I'd be curious to learn a bit about your current approach.
Daniela Miao: 29:50
So, one thing I mentioned is setting those concrete objectives and sharing business objectives and goals with the team. Something I'm learning to do more of, that I feel we did not practice enough before, is celebrating when those objectives and goals are actually achieved. For instance, you asked me whether the shift to Rust has been worth it, and I said I think we've gotten plenty of signal that it has, and I'm hoping to get even more in the future. We've hit amazing performance goals ever since our shift to Rust, things where we run the test and have to do a double take: wait, is this test correct? Is it actually hitting all of our components? Because this is ridiculously fast. We need to be proud of that, and we also need to tie it back to POCs with customers that we're winning. And, not to toot our own horn, but just to plug ourselves a little bit, we've been kicking butt in our POCs, working with our customers and having them be really impressed with our performance, because everything is really tuned and optimized throughout our stack now. When that happens, a lot of the time sales will celebrate, marketing will celebrate, the go-to-market team will celebrate, because they're working directly with the customer on the POCs. I think it's important to bring that back to the team and say, hey guys, it wasn't just a cool project to rewrite everything in Rust; this actually mattered and literally made a difference in the success or failure of this POC. So we're starting to do that more. It might seem very straightforward when I talk about it, but when you're a founder running all over the place, it can seem really obvious to you that of course this POC is a win. The engineers don't know that, and they're not necessarily making the connection to the Rust project they worked on months before. So I think that's definitely one way to continuously feed into that virtuous cycle, I would say, of wins and then investing in technical projects.
Conor Bronsdon: 32:07
Yeah, I think maybe our most used word on this podcast, besides software development, is probably communication, right? And I think you'll hear folks sometimes go, oh, another communication topic. But it really is so important, because what you're talking about is: how do we align our different teams around these shared goals and make sure they understand that they've played a part in these victories? Because great, sales sold a deal, marketing gets to attribute some revenue to the work they've done to get that deal to sales, but the product is really what, in the end, gets things across the line, and it can be really easy to get disconnected from that work, as you point out. So I do agree. I think leaders have a duty to their technical teams to share in that success and highlight when those successes are occurring. But I wonder, are you also doing anything to insert your devs into the go-to-market side so that they're highly aware of what's happening, or have them sit in on meetings, anything like that? I know that's an approach some folks take.
Daniela Miao: 33:21
Yeah, absolutely. I think especially for the folks who volunteer and are very interested in talking to customers. Not everyone wants to constantly be in meetings, which is...
Conor Bronsdon: 33:33
Totally.
Daniela Miao: 33:34
But there are definitely people who are more passionate about sitting in on those meetings, and we actually have a pretty good structure for this. For instance, you wouldn't want the ratio to be off; you don't want five people from Momento and only two developers from a potential customer. You probably don't want to bring engineers into the very first meeting, because you're doing discovery and initial intros; later on, when you're doing actual technical deep dives, those are good meetings to be in. Existing customers get a lot of contact. We have a 24-hour on-call rotation, and most of our engineers chat directly with our customers. We're at an early enough stage that we can do that, where they provide direct technical support in Slack to our customers. And then vice versa, right? One of the things I've been focused on is that, as a leader, you should do this as well. A lot of the time it's not just about the leaders communicating back. If there's a really specific technical integration, like recently we had this DynamoDB integration where the customer, literally during the POC, was saying, wow, this was so easy to integrate, it's like magic, everything just worked, and Momento Cache didn't take a whole lot of integration effort. The sales engineer was talking to me and complimenting the integration, and I said, hey, don't just talk to me about it, message the team that made the integration directly and tell them that, right? It can be really easy to forget these things and just celebrate in private, and we should always celebrate publicly.
Conor Bronsdon: 35:10
Yeah, I totally agree. I think it's important not only to talk through when we have challenges, which is something a lot of leaders already think of, like, oh, when something goes wrong, I want to make sure my team understands it so we don't make the mistake again, but also to make sure we know when we're celebrating, to make sure we know when things go right. Too often we only focus on the really big wins and not on the day-to-day. And I find it also decreases my team's agency and their sense of impact when I fail to celebrate their wins, whatever those wins might be. Because yes, we're all being paid to do a job, but people also want to feel they're being impactful in their work, and teams are more successful when they feel that impact, see that impact, and have it shared with them. So I love that you're highlighting that. Are there other strategies you're using to either build technical credibility or build trust with your team as you develop decision-making prowess and get folks on board with some of these larger transformations?
Daniela Miao: 36:13
I actually learned this from the previous startup I was at. I mentioned LightStep had great founders and a lot of great engineering leaders. One of them in particular, our VP of Engineering, really took the time to actually ask for opinions and feedback during one-on-ones. Again, it might seem like obvious advice, but too many one-on-ones, especially at a startup, end up just being project updates. I think it's really important to talk, quote unquote, strategy with your engineers, with your ICs, because, especially for us, we're a technical product and our customers are developers as well. It's important to ask your direct reports, your team, your engineers: hey, what do you care about? What are you excited about? What do you want to work on? More general questions beyond the day-to-day. Right now, at least for me, a lot of our roadmap gets formed from the feedback and input they give me. Of course, our customers give us the most direction on our roadmap and vision, but internally I actually get a lot of feedback from that. And it's a great trust-building mechanism as well, because people genuinely see that their thoughts and opinions are being fed into what's coming up, and I think that makes them feel heard, which is really important. That's actually the most important bit: just being heard. What they're saying doesn't necessarily have to make it onto the roadmap, because again, I think our customers' needs are most important, but as long as they feel you're actually listening and incorporating, that matters. So yeah, in my one-on-ones I've tried really, really hard. Sometimes engineers are so eager to have these conversations about projects, because they're heads-down coding, that they immediately come in and want to update me: look, Daniela, I got this thing working. And I'm like, that's great, maybe we can take five minutes to talk about it since you're so excited, but I do want to spend the rest of the time asking you a little bit about what's coming up and how things are going. So yeah, asking about product vision and product strategy with your ICs has been a very good and fruitful exercise for me.
Conor Bronsdon: 38:40
I absolutely agree. Before we wrap up, Daniela, do you have any other advice you'd want to share, particularly for folks who are thinking about taking the journey as a technical founder? We have a lot of folks in our audience who are either current engineering leaders at a large company or maybe another startup, who are thinking, hey, maybe I want to start a side product, or maybe I want to dive full in. Or maybe we even have senior devs and junior devs in the audience who are looking ahead and saying, I want to take on this leadership challenge. What would your advice be to those folks listening about taking this approach of being a technical co-founder or founder?
Daniela Miao: 39:15
I mean, first of all, I highly recommend it. It's been a really amazing growth journey for myself. It's pushed me to be uncomfortable in ways I don't think I've felt before, and it's been really great because it stretches me and forces me to grow. The other thing I'll say is, don't be afraid to rock the boat where you're currently working. If you want to make certain changes, just try it. Being a founder is not constrained to raising money and starting a company; you can be entrepreneurial in your current role as well, as a way to try it out. A lot of it comes down to innovation and not doing things the way they're, quote unquote, supposed to be done. And there are benefits, of course, to being a team player and doing what you're supposed to when you're working at a company; I'm not asking everyone to be a rebel. But think about ways to invent, think about ways to innovate, and share those outwardly with your peers. In terms of the founding journey itself, I absolutely recommend it. And if it seems like too much of a jump to go from working at a big company with a lot of support to being a founder, this is what I did: I joined an early-stage company to be early, to be almost part of the founding team, almost like a founder, and made it very clear to the founders that I had that aspiration. They included me in a lot of those conversations, and I think it was a great experience and a great setup for me personally, that step function, that springboard into actually founding.
Conor Bronsdon: 41:11
Thanks, Daniela. I'm sure our listeners will take a lot away from this conversation. And to your point, I think there is so much upside in being innovative and being a leader from whatever your seat may be. It doesn't always take positional authority to make an impact on an org, and I completely agree: whether you're an individual contributor, a leader of a team, or a founder yourself, you can make an impact on the org if you choose to. So thank you so much for joining us today. I really appreciate the time.
Daniela Miao: 41:39
Thank you, Conor, for having me. I had a lot of fun chatting, and I hope this is helpful for folks out there.
Conor Bronsdon: 41:45
Absolutely, it's been a ton of fun having you on. And as always, for those of you listening or watching us on YouTube, thanks for tuning in. Don't forget to subscribe to our Dev Interrupted newsletter on Substack for deeper dives into the topics we discussed today, and we'll see you all next time.
Daniela Miao: 41:59
Bye.