AI is the biggest hype cycle happening in tech right now, but how do you know what’s actually going to make an impact for your product and team vs. what’s just new and shiny?

This week, LinearB COO & Co-founder Dan Lines sits down with Louis Brandy, Member of Technical Staff at OpenAI and ex-VP of Engineering at Rockset. Louis shares his unique perspective on the evolution of AI, drawing on his early AI work at Meta and his current role at OpenAI following its acquisition of Rockset. He offers grounded insights into the realities of AI, separating fact from fiction in an industry often clouded by buzzwords and unrealistic expectations.

Listeners will learn about the practical applications of AI, the challenges and opportunities it presents, and how to see past the hype to find AI's real potential. Whether you're an AI enthusiast, a skeptic, or a professional looking to understand the true impact of AI on engineering teams, this episode offers an insightful look at one of the most talked-about topics in tech today.

Episode Highlights:

00:32 Louis Brandy's background with AI at Meta
04:31 The current AI hype cycle
13:09 How should engineering leaders think about AI and the pressure to use it?
17:58 How to know if you’re falling into the hype cycle
25:50 AI vs. human code
34:42 Real time when it comes to AI
38:36 What should an IC do about AI in their career path?



(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)

[00:00:00] **Louis Brandy:** We're always going to find ourselves attracted to some shiny thing at any given moment. Some of them are much more real than others. My big hot take on AI is that it's going to get radically better at things you don't expect. Two years from now, it's going to be shockingly good at something you did not expect it to be that good at that quickly.

But there will probably also always be ways that it's bad that are surprising. Like, you'll look at it and it'll be amazing at this thing, but it's going to fail in some interesting way that's maybe unexpected. So it's always going to be this lopsided creature that's getting better in various ways.

I have a feeling there'll be a superhuman intelligence that can solve all our problems, and we'll still be sitting around going, yeah, but look how dumb it is at this thing.

Are you looking to improve your engineering processes and align your efforts with business goals? LinearB has released the Essential Guide to Software Engineering Intelligence Platforms. This comprehensive guide will walk you through how SEI platforms provide visibility into your engineering operations, improve productivity, and

[00:01:00] forecast more accurately. Whether you're looking to adopt a new SEI platform or just want to enhance your current data practices, this guide covers everything you need, from evaluating platform capabilities to implementing solutions that drive continuous improvement. Head to the show notes to get your free copy of the Essential Guide to Software Engineering Intelligence Platforms today, and take the first steps towards smarter, data-driven engineering.

A quick note from the Dev Interrupted team: since we recorded this episode, Rockset has been acquired by OpenAI, and Louis is now a member of OpenAI's technical staff. Congratulations to the whole team at Rockset on the acquisition.

[00:01:35] **Dan Lines:** Hey, everyone. Welcome to Dev Interrupted. I'm your host, Dan Lines, LinearB COO. And today, I'm joined by Louis Brandy, VP of Engineering at Rockset. Welcome to Dev Interrupted, Louis.

[00:01:50] **Louis Brandy:** Uh, thank you for having me.

It's super exciting to be here.

[00:01:53] **Dan Lines:** Yeah, really. I think we're going to have a great conversation today.

[00:01:58] **Dan Lines:** I love the [00:02:00] topics that we have in place. We have the AI hype cycle, things around AI, a little bit about your background, that type of stuff. What I see here is you joined Rockset after spending more than 10 years at Meta. And one of the things you were working on,

and building, and I think even leading an engineering team around, was Meta's AI initiatives. Could you tell us a little bit about your background at Meta and then getting to Rockset?

[00:02:36] **Louis Brandy:** Yeah, so I want to be careful here because AI initiative at Meta is a very loaded term. But, um, I was doing AI and ML stuff early on.

And like, before it was cool. So, we're talking about hype cycles, and this is kind of cool because this is in the before time. Specifically, for people who know a lot about how AI has developed in the last 10 years, this is the pre-deep [00:03:00] learning time. Uh, there's still a lot of that stuff.

Like, there's a lot of classic machine learning, and there's a lot of use cases for that kind of stuff. I actually worked mostly on this stuff, doing a lot of, like, proto... I don't want to call it proto-AI. I mean, it was still AI, but it was different than it is now, and we built a lot of really important systems.

I mostly worked in the spam fighting and image classification kind of world, so this is actually quite relevant to what's happening right now, like, right as we speak, but this is before everything broke loose, so to speak. So this is before Facebook had what is known as FAIR and a lot of the other extensive and major investments into AI.

But yes, I was building all of that for a long time. We built a lot of cool systems in that era, many of which got replaced later by much bigger, more sophisticated AI as that research kept going.

[00:03:48] **Dan Lines:** Yeah, that's really cool. Tell us about the different stages, right? Cause you were there kind of early on, then something shifted, and [00:04:00] now you see AI everywhere and everyone's talking about it. Can you tell us the difference between these two stages that we're in?

[00:04:07] **Louis Brandy:** So like, my history even goes back before Facebook; I worked in face recognition and face detection.

But I've always been on the implementer, infra side, more so than the research. So like, my thing was always: how fast, how many frames a second can we make this thing run? How many dollars or watts does it need to run? That's kind of always been my home turf, like, the infra side of it. But what's interesting in how it developed was, I kind of watched it all happen. Like, a lot of the stuff that we were doing in those first few years at Facebook, as well as before that, all that stuff sort of vanished in favor of this deep learning type of stuff.

Um, where the idea was: you have these things that just scale really far with the amount of data. And a lot of the research before that, as I understand it, was of the form: you [00:05:00] built better models to make use of the data you had. It turns out that's actually the wrong approach, or, I mean, there are many situations where that's the right approach.

But there's at least this one category of model where, actually, no, it's actually just more and more data. Like, how far can this model scale with a huge enough data set, and can you throw enough data at it? And that's roughly what happened. And then these really big places were able to do lots of training on lots of data, and they started to build things that were far more powerful than a really sophisticated model trained on less data could ever hope to accomplish.

[00:05:31] **Dan Lines:** And that's really cool. I mean, I think that's what you're saying, where we're seeing it in consumer applications, ChatGPT, everyone and their mother knows about AI now in some way, whether it's a buzzword or not, everyone is experiencing it. And I think that's kind of what you're saying.

I mean, that was kind of the big change, with the big datasets and Google and all of that.

[00:05:56] **Dan Lines:** So, what is your [00:06:00] stance on the current AI hype cycle? Is it really hype? Is it not hype? Here to stay? Like, what's your viewpoint?

[00:06:09] **Louis Brandy:** So we did a conference not very long ago, and one of the questions I got asked was, is it all hype?

And I love that question because it's easy. Like the answer is, it's definitely not all hype. And I was really thankful they used the word all because you, you asked the harder question, right? Which is like, what do you think about it? You know, I think, look, I think that there are always going to be hype cycles.

We're always going to find ourselves attracted to some shiny thing at any given moment. Some of them are much more real than others. My big hot take on AI is that it's going to get radically better at things you don't expect. Two years from now, it's going to be shockingly good at something you did not expect it to be that good at that quickly.

But there will probably also always be ways that it's bad that are surprising. Like, you'll look at it and it'll be amazing at this thing, but it's going to fail in some interesting way that's maybe unexpected. So it's always going to be this [00:07:00] lopsided creature that's getting better in various ways.

I absolutely don't think it's all hype. I absolutely think there's an insane amount of hype too. And there's a lot of froth that you're going to have to sort through and figure out what's real and what sticks. So in my current role, we're building infra for AI. So in some sense, I have a good horizontal view of all the different kinds of things people are trying.

But even from where I sit, it's hard to tell what's real. You know, in other words, someone's spending a lot of money to build a thing; I don't know if they're making money yet. That's going to take years to kind of figure out if...


[00:07:34] **Dan Lines:** monetized. Yeah,

[00:07:35] **Louis Brandy:** Like, so how much of this is just fueled speculation versus real value, so to speak? But then again, on the flip side, I also get to see lots of really clever things people are doing that, at least on the surface, seem promising. And in some ways that's seductive, and it's dangerous: the fact that it seems promising, but maybe it isn't, is a [00:08:00] good way to lose a lot of, uh, time and money chasing something. Um, but so, no, what's my current take on it? I think it's going to change a lot. There are going to be false promises along the way; there are going to be things we think it can do that it won't. This is like a classic moment of disruption where we're gonna have to figure this out, and we have to be careful, to some degree.

[00:08:22] **Dan Lines:** Yeah, I really love how you summed it up. The thing that you said that I like most is, well, a few things, actually. One thing is, AI is maybe right now good at certain things that are a little unexpected, and at other things that you may think AI should just be able to do, it's not so good.

And with your background, you're kind of sitting, I think, at the infrastructure type area. So you get to see all of these trials and errors. There's a lot of money going into it. I don't know if it's like the dot-com boom, because I didn't [00:09:00] run a company in that era, but maybe it kind of feels that way, in the same sense of, okay, let's try AI for everything and see what sticks.

Let's put a bunch of money into it. The ones that hit are going to be, like, gajillion dollar companies, and the ones that don't, it's like, whatever, I'm going to invest in 20 of these and one of them will hit. But what I would ask you then, um, is: where do you think AI is doing well, like, useful right now? And do you also have an example on the other side, of something you think, oh, it should be good at this, but it actually stinks at it and it's not gonna do it?

[00:09:35] **Louis Brandy:** I mean, we could do it in reverse. Sorry, answering in reverse order, because I actually think we could have a whole philosophical discussion here, because with every single AI breakthrough that's ever happened, it's really trivial for a human to sit down and be like, look at this absolutely ridiculous thing that it's failing at.

Like, you can do that with ChatGPT right now. You can go in there, and I actually did this recently, where I was like, [00:10:00] hey, give me the three biggest, most expensive software bugs in history. And it lists all three of them, like the three most expensive. And I was like, put them in order. And it didn't put them in order.

Like, it had the right numbers. It just couldn't order them. And it's like, this is a really trivial mistake to make. I think it'll always be like that. It's making a mistake in some surprisingly trivial way because it doesn't think like we do. So something that's trivial to me and you is not going to be trivial to it.

And I have a feeling there'll be a superhuman intelligence that can solve all our problems, and we'll still be sitting around going, yeah, but look how dumb it is at this thing.

[00:10:39] **Dan Lines:** Right? Yeah. Something that is like so easy for all of us. Right. And you would say, oh, this like advanced intelligence, like AI, that should be nothing for it.

Yeah. And it's tripping up.

[00:10:50] **Louis Brandy:** Exactly. And what I think that means practically is that you can't really take its expertise for granted. Like, you could imagine a world in which a self-driving car is [00:11:00] provably better than humans, or an AI is provably better at diagnosis than a doctor.

However, it will make mistakes that any human will look at and go, no, whoa, whoa, whoa, that's crazy. So I guess the idea, in some sense, is that the combo is what will always, at least for the foreseeable future, be very, very powerful. Um, but I think that's going to happen everywhere.

So in some sense, it's horizontal. In other words, on every question, the AI is going to get really good, but also be really bad in some other way. So it's not like it's going to be good at these jobs and terrible at those jobs. It's going to be really good at significant aspects of everything and really bad at significant aspects of everything.

And it's actually hard to predict ahead of time how that's going to feel.

[00:11:51] **Dan Lines:** So you're describing it more, you said, horizontal, right? Yeah. And when you said that, what I [00:12:00] started to think about is, when I think about horizontal, does it mean AI could be useful for maybe every profession in some way, but still make the dumb mistakes in every profession too?

So for example, the obvious things I think AI is good at, you could say something like, write me an opening speech, or do a template, or write my book report, all that kind of stuff. Yeah, it can collect all the information and do really well. But do you see it the same way I'm seeing it, in that sense of horizontal: it could kind of help everybody, but kind of miss for everybody too?

[00:12:52] **Louis Brandy:** If it's going to end up being majorly disruptive, I think the most likely medium-term version is that it [00:13:00] disrupts a significant fraction of most of our jobs, as opposed to taking whole jobs.

[00:13:08] **Speaker 3:** Yeah,

[00:13:08] **Louis Brandy:** Yeah, it's one of the reasons why this is different, maybe in a material sense, from previous automation-type disruptions. Like, hey, we'll use a classic example: we built mechanical looms, so the weaver profession is in big trouble. This is a little different than that, because the disruption is to everyone, and it's actually hard to predict exactly how much of your job can become automated.

And like, certain parts of your job can just become a lot easier, and that gives you more time to do the other parts of your job. So exactly how that impacts things is super hard to predict. Um, but just to be clear, I think all this is still a little bit medium term. I don't think in the short term we're in super grave danger of any of these major disruptions.

But I think it's worth thinking about, so that, depending on where you're at in this equation, you're at least thinking about, like, if it comes to be, right? Maybe it won't, maybe it will, but if it does come to be, you will at least have put some intellectual effort into figuring out how it's going to impact the different parts of our jobs.

[00:14:10] **Dan Lines:** Yeah, well, I mean, that's a great intro and an opening. Let's turn to, I mean, you're an engineering leader, right? You're a VP of engineering. That's been your trade throughout your career, I think, what you've studied and specialized in. But also you have a lot of background in

infrastructure, like you said, for AI, even in the Meta days, when, like you said,

[00:14:35] **Dan Lines:** hey, it wasn't AI, it was like the precursor. But, you know, how should we think about this as engineering leaders? For those listening on this pod, it's almost like there's a pressure of, hey, how are you using AI?

What are you doing with it? Yeah, how should we think about it as engineering leaders?

[00:14:54] **Louis Brandy:** I love this question. I have a question you have to answer first, before I have an opinion, which is: [00:15:00] where are you in this space, right? I think there's a decision node right at the start, which is either I'm betting some significant fraction of my business on this idea, and how did I get here? Like, should I even do that?

Or you're not there, you're on the outside. As you said, there might be a pressure: a thing has just happened. Like, I'm an insurance company, and AI has just happened. Does that matter to me? Am I missing something? Am I about to get sideswiped in a way I don't appreciate or understand? Like, what's just happened?

And I think these two situations are different, but there are some things I would say are the same. It's funny, because this whole intro I gave was the pro side, like, hey, I think there's something very real about this hype cycle you should think about. Then there's the exact opposite. So if you ask me, the VP of engineering, I actually have a completely different opinion, which is: we have a whole bunch of software engineers.

I can assure you that if we said, hey, AI is super cool, go work on that, they would all run full speed. [00:16:00] Like, that's fun. It's exciting. I mean,

[00:16:02] **Dan Lines:** I'm an engineer, I want to do fun shit. Yeah, exactly. And in that situation, honestly, the fun part, like, I don't care, honestly, if it works out well or not.


like, I just want to be involved.

[00:16:13] **Louis Brandy:** Exactly.

Exactly. Like, okay, this is a much more boring podcast, but in Management 101, one of the things I always say is: any project you have lives in this two-dimensional space. One dimension is how shiny it is.

And one dimension is how impactful it is. And so, the truth is, shiny and impactful things are almost never real. Because if there was something that was truly shiny and impactful, we would have done it last half. Like, we wouldn't have waited till now to do it.

It's already done, in other words. Now, it's not always true, of course, but typically that's true. And by the way, if something is not shiny and not impactful, we're definitely not going to work on it. If nobody wants to do it and it has no impact, that's not going to exist.

So, everything kind of exists on this [00:17:00] front. And so, as an average rule (this is an average), the less shiny it is, the more impactful it is. That is the mathematical thing I just worked out. So, as a manager, you should be deeply distrustful of shiny things.

Like, oh, no, no, shiny, shiny's scary. Because you don't need me to pump us full of enthusiasm for shiny. That's the easy part, right? The hard part is to say, no, no, actually, the unit tests, that's the thing we should care about right now, not go solve all the AI problems in the world.

Yeah, let's get our test coverage in order so we can move fast. Right.

Right. And so this is the counterpoint from before, which is: when you're in these kind of frothy moments, you don't want to miss it, but you also don't want to get sucked away by it. Like, you don't want to lose half a team for half a year in the froth, right? Of the cycle.
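Louis's two-axis model is easy to caricature in code. The sketch below is purely illustrative (the project names and 0-10 scores are invented, and this is not a real prioritization tool): it ranks candidate projects by impact minus shininess, encoding his "distrust shiny" heuristic.

```python
def triage(projects):
    # Caricature of the heuristic: on average, the less shiny a project is,
    # the more impactful it tends to be, so penalize shininess when ranking.
    return sorted(projects, key=lambda p: p["impact"] - p["shiny"], reverse=True)

# Invented projects with made-up 0-10 scores for shininess and expected impact.
projects = [
    {"name": "rewrite everything with an LLM agent", "shiny": 10, "impact": 3},
    {"name": "fix flaky unit tests", "shiny": 1, "impact": 8},
    {"name": "time-boxed AI hack-week prototype", "shiny": 9, "impact": 5},
]
ranked = triage(projects)  # unglamorous test work rises to the top
```

The point is not the arithmetic but the direction of the bias: the team supplies its own enthusiasm for shiny work, so the manager's ranking function should lean the other way.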

[00:17:55] **Dan Lines:** Okay. So I think that gives us maybe a mental [00:18:00] model of how to think about it. And then for you, I'll put you on the spot, for you personally, as an engineering leader. You gave us a good mental model. Are there aspects of AI, or maybe it's not even fully AI, but things like Copilot, are there things where you're saying, you know what, I will spend effort to look in and experiment, or, no, I'm not going to do it?

[00:18:33] **Louis Brandy:** I think step one is you have to make a value judgment on this cycle. You have to predict the future. Like, we can choose, if you'd like, a previous hype cycle, which was blockchain. I think that was the most hypey. I remember that one.

Yeah, before AI. It may still be a thing. It's good to have two in your brain, because even if you're very pro-AI, it's fairly easy to be skeptical of blockchain, or maybe vice versa, depending [00:19:00] on your environment. What's the one before that? I don't know. I'm old enough to remember, like, Web 2.0. Like the Internet? Like the first one? Yeah, like the Internet. If we put some time in, we could probably think of some others. Like the

[00:19:13] **Dan Lines:** computer?

[00:19:14] **Louis Brandy:** The personal PC? I mean, I guess before that was mobile, everything had to be mobile. Mobile was the hype cycle before that.

## How can we know if we're falling into the hype cycle or if there's work to be done?

[00:19:24] **Louis Brandy:** Everything had to be mobile, right? Everything had to move onto the phones; that was maybe the one hype cycle before this. You have to kind of make a value judgment. And for example, it's very easy for me, in my position, to not be hyped up about blockchain. I build a database, that's my day job. So blockchain is not something that's going to be super impactful for me. I can make that call ahead of time.

AI is quite different, though, because it actually has two aspects that are interesting. One isn't just my product, like, can my product be made better, but also my developers, right? So you mentioned Copilot. So that's a whole extra layer to this. So, my rule, there's two ways to think about this.

And this is the way I [00:20:00] always do this. One is outside-in, like, market-in: what does the market want from this product, and is there a way AI can help? And then one is bottoms-up, right? And this is: can we come up with cool ideas for the product using AI, and/or can we use it internally to make ourselves more productive?

And we actually did both of these things, in a fairly straightforward way, which was time-boxed, sort of hack-style things. Like, we did a week and we said, everyone go nuts. We didn't actually limit it to AI. We didn't say, hey, do an AI hack or whatever.

We just said do it. AI was super frothy, so it was like, everyone go experiment and play with this. You can work on whatever you want, but we did basically a one-week go-nuts. And, you know, demo day is on Friday, so build something this week that you can show off by Friday. What's cool about something like that is, even if nothing happens, even if nothing comes to be (by the way, sometimes it does).

Like, there are very famous stories of stuff like that turning into entire [00:21:00] billion-dollar ideas.

[00:21:01] **Dan Lines:** Yeah. Like hackathon into billion dollar idea.

[00:21:03] **Louis Brandy:** Uh-huh. Mm-hmm. Yeah.

But even if it doesn't, something very important happens, which is: you're no longer operating at the first level of AI understanding.

You're two or three levels deep. Like, you'll see a demo and you'll say something like, man, that isn't going to work, but if it was better at A, B, and C, this suddenly gets interesting. And now you have a new understanding, and now you can pay a lot more attention to A, B, and C.

And also, by the way, the manager brain: you've time-boxed this. Like, you know, I dedicated a week or a month or whatever, right? But I'm not gonna just flush huge amounts of time.

[00:21:39] **Dan Lines:** Yeah, you've capped your initial investment.

[00:21:41] **Louis Brandy:** Exactly.

[00:21:42] **Dan Lines:** You did, did you do it for like a week?

[00:21:45] **Louis Brandy:** We did. So this is part of how we work anyway; we do this periodically, we just have a week and we do this. Um, this is already a thing we've built into the way we work at Rockset. Lots of good things have come out of this.

So again, we're building infra, [00:22:00] so for us it's a little bit easier; you're sort of building shovels rather than digging for gold. Um, and a lot of our existing infrastructure did come from early prototypes in this space. So for example, the first version of vector search within Rockset was built during that week.

And again, vector search is its own super exciting, shiny problem. People love that problem. And so it was pretty easy to get people to go implement cool vector search and get it working properly on our infrastructure. And then all of a sudden it's like, well, this actually fits. This is actually really good, you know. And so that eventually became a full work stream, and it was originally birthed in one of these hack weeks.
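At its core, vector search is nearest-neighbor lookup over embedding vectors. As a rough sketch of the idea only (the episode doesn't describe Rockset's actual implementation, and the document IDs and vectors below are toy values), a brute-force cosine-similarity search looks like this:

```python
import math

def cosine_similarity(a, b):
    # cos(a, b) = dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def vector_search(query, index, k=2):
    # Brute-force k-nearest-neighbor search over (doc_id, embedding) pairs.
    scored = [(doc_id, cosine_similarity(query, vec)) for doc_id, vec in index]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]

# Toy index; in practice the vectors come from an embedding model.
index = [
    ("doc-a", [0.9, 0.1, 0.0]),
    ("doc-b", [0.1, 0.9, 0.0]),
    ("doc-c", [0.8, 0.2, 0.1]),
]
results = vector_search([1.0, 0.0, 0.0], index)
```

Production engines replace the linear scan with approximate nearest-neighbor indexes (HNSW, IVF, and similar) so queries stay fast as the collection grows.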

[00:22:42] **Dan Lines:** I mean, even one takeaway is that a hack week can produce something of value. And then cap your investment and learn. I liked what you said: okay, now you're a few levels deeper, and maybe [00:23:00] you can have a more educated opinion of how much more, or not, to invest. That, I think, is a good takeaway.

[00:23:08] **Louis Brandy:** There's a lot of reasons to do this kind of a hack week thing. Like, it's just fun for morale, like as a team building thing.

It's just fun in general. But this one ulterior motive, I think, is actually really valuable: it's a really effective way to do a bunch of things, fail very safely, and learn a lot from failure, so to speak, in a way that feels awesome. It's a totally different thing to set up a project and, three months in, kill it because it's not going well.

That does not feel good. But having a hack week where you just build something crazy and you're like, oh, this is terrible, and then you erase it at the end, it feels great. And you learn a lot about the limitations. And so, for AI in particular, I was actually quite interested in: if I take our docs and I throw them into an LLM, how good is that robot at answering questions about our [00:24:00] docs?

And, going back to the very beginning, surprise: it's shockingly good, but also weirdly bad in certain ways. But it's really good to viscerally experience that directly, in a quick way.
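The docs experiment Louis describes is typically built as retrieval-augmented generation: chunk the docs, retrieve the chunks most relevant to a question, and pass them to the model as context. A minimal sketch of the retrieval half, using a naive word-overlap score in place of embeddings (the doc snippets are made up, and a real pipeline would then call an actual LLM with the retrieved context):

```python
def retrieve(question, chunks, k=1):
    # Score each docs chunk by how many of the question's words it contains,
    # then return the top-k chunks to prepend to the LLM prompt as context.
    query_words = set(question.lower().split())
    def overlap(chunk):
        return len(query_words & set(chunk.lower().split()))
    return sorted(chunks, key=overlap, reverse=True)[:k]

# Made-up documentation chunks for illustration.
chunks = [
    "rollups aggregate streaming data at ingest time",
    "vector search finds the nearest embeddings to a query vector",
    "api keys can be rotated from the console settings page",
]
context = retrieve("where do i rotate my api keys", chunks)
```

Word overlap misses synonyms, which is one concrete way such a bot ends up "shockingly good but weirdly bad": it answers well when the question reuses the docs' vocabulary and whiffs when it doesn't.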

[00:24:11] **Dan Lines:** Are you using AI in any of your software development practices, like, baked into your workflows? Are you invested in it?

[00:24:23] **Louis Brandy:** We turned on Copilot, like, Copilot-like things. We have people who are using it and trying it. As of this moment right now, I cannot say that it has been wildly successful.

[00:24:36] **Dan Lines:** Yeah, okay, interesting.

[00:24:37] **Louis Brandy:** Well, we have to define wild success. I mean, you know, but at the current moment, I don't think I would, um, call it necessary.

Like, I wouldn't say you need to go do this right now or you're behind. Um, however, I don't know how much longer that might be true. Like, I think [00:25:00] today it's not necessary, you don't have to go do it right now. It's unclear to me how much longer that can be true, especially as we get into these AI systems that start to use tools, in other words, they can write tests and run the tests, and you get into these kind of tool-using, agent-type

AI workflows. All of a sudden it may not be insane to have it, say, write a test suite for something. Again, I don't really know. I would say as of today we do not use it, at least not as a first-class citizen, in our development environment. No. Okay.

[00:25:41] **Dan Lines:** Yeah, I mean, that's awesome.

Like, thank you for sharing, because when I was saying the pressure, I was saying, you know, with some of the engineering leaders I speak with, there's almost like a, hey, what are you doing with AI? What are you doing with Copilot? Is it everywhere? And a lot of what we do, so [00:26:00] at LinearB, for our customers, we measure the impact.

Did Copilot actually improve the developer experience like you thought it might, or did it not, or is it somewhere in the middle? Um, this stuff isn't free, it costs money, right? So it's like, is your investment worth it? There are a lot of companies out there, I would say, experimenting with the use of it and wanting to know the impact.

And then the other thing that you said that is interesting is you do see some of this, like, uh, you said it with tooling, right? I think that's the terminology. It was like different tooling. Like, okay, maybe, uh, a PR, pull request review type AI, or like, I can analyze the code and say, here's the summary of what this is, or the description, or what you should look out for.

Um, and I think that is actually interesting. [00:27:00] Um, there are some companies using that. And the other thing I would say is, like, I don't even know if I would call it AI, but bots, let's say bots. Yeah. And the way that I define, at least, a bot in, like, the SDLC, like the workflow, is:

[00:27:16] **Dan Lines:** I see a lot of companies where 50 percent of the PRs are not made by humans, or like 25 percent or 70, like it ranges. And when you start getting into that situation, then I've seen the situation where it's like, the PR was not created by a human, and the first review was not done by a human.

Okay, now we're into this, like, that's interesting. How do we orchestrate this together? So let's go to your point: all of these, and tests are maybe lower impact, but the tests were not created by a human, and then the review of the test was not done by a human. What do you do to orchestrate that?

And, um, I, I see that [00:28:00] the, uh, industry experimenting with that, let's say.

[00:28:04] **Louis Brandy:** I mean, again, it's hard to have this conversation without things rapidly getting philosophical. But, like, there is a spectrum, right? I mean, at some point, having a linter that complains about your code having clear bugs in it is like a proto form of AI.

Like, it's not a huge jump, right? And we use a lot of that stuff, right? Like, I've used clang-tidy and stuff for years, right? To clean up code. And to be clear, when you work at the giant places like Facebook, there are robots everywhere. They're filing bugs.

They're commenting on changesets, they get in fights sometimes. Like, one will yell at the other one for doing something, and then the first one will, you know... there's all kinds of entertainment with the robots, uh, trying to make code changes. I'm actually kind of curious, then. I guess I have a question for you.

We can turn this around. Like, have you seen horizontal evidence that Copilot is [00:29:00] beyond the experimentation phase? Because, I mean, we're experimenting as well, but have you seen people using it extensively for, like, major impact?

[00:29:06] **Dan Lines:** I have seen larger companies rolling this out to 70 percent plus of the developers.

But coming from more of a top-down, like, CTO or VP saying, Hey, we are going to at least ensure that we could be ahead of the game in this area.

[00:29:32] **Louis Brandy:** Sure.

[00:29:32] **Dan Lines:** But I, but also with that same type of person coming with, I am not saying that this is going to revolutionize. I would like to measure and then report back to the business if that is a path forward for us.

That's the state that I see it in. So, okay, to summarize: most companies are experimenting and want to know the impact. That's what we're [00:30:00] helping with at LinearB. And that's what I see. That's my, I mean, that's,

[00:30:03] **Louis Brandy:** that's almost exactly what I would say

[00:30:04] **Dan Lines:** as well. Now, the impact of that, I think, ranges per... there's a variance, let's call it.

Like, what type of developer are you? What are you doing? What app are you building? Are you front end, back end? And so, I think that data is starting to come to light. Um, and then I would say, on the other side of it, besides Copilot, so that's, like, helping you create code, I have seen, and that's why I said it really explicitly, like, I said non-humans.

I didn't say necessarily AI is creating da da da, but I see a lot of non-human-generated code that is actually hurting how fast code gets released, because it's getting stuck in the review process and humans don't want to review all of this [00:31:00] non-human code that is rapidly being generated. Let's put it that way.

Alright, that squares with my experience. Yeah. So, I think it's a very interesting time. Very interesting. I could see a world where it's, like, rapidly exploding, but I do think people are cautious. They want to measure. So, I know we could probably do, like, a two-hour podcast on this.

I know you said you're more on, like, the infrastructure side with AI. Maybe tell us a few words, like, a little more of a background about that, but also if you have a comment on how, like you said a little bit about it, but, like, AI enablement in the product. Like, let's talk about products for a second.


[00:31:51] **Louis Brandy:** Yeah.

Yeah. So, at a really high level, there's this giant neural network somewhere, and it effectively operates on sequences of vectors, right? It iterates vectors through the layers of this network. And this is interesting because one of the things that can happen, and this is, you know, how some of this works, is you get this vector idea, right? Which is, like, I can turn a prompt into a vector, and then I can do vector lookups, vector search, um, to essentially retrieve other data. And you get this idea of retrieval-augmented generation, right? So I can fetch data, doing this vector search, that is relevant to the question that's been asked of this AI, and then I can provide the AI with that data to form a better answer.

This is retrieval-augmented generation, RAG. It's its own sub-hype-cycle within the larger hype cycle. And, from an infra perspective, this is a super interesting problem, because vector search is actually a very hard algorithmic problem, but it's very easy relative to the neural network stuff, especially computationally.

And so it's the only part that you really [00:33:00] have to mega-scale. And so you end up with an architecture that, again, is hard to predict, this is a super fast-moving space, but you end up with an architecture where, like, the neural network is the brain, to some first approximation, and the retrieval is the memory, so to speak. The CPU and the memory, if you prefer that analogy.

And they have very different properties that are very powerful. And one of the powers of the memory of this system is that you can insert data into it and remove data from it in a way that's easy and quick. You can't retrain an LLM at the same speed, but you can add and remove data from the database, so to speak.
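The retrieval step Louis is describing (turn the prompt into a vector, search for nearby vectors, hand what you find back to the model) can be sketched in a few lines of plain Python. This is a toy illustration, not any real system's implementation: the three-dimensional "embeddings" and document texts are hand-made stand-ins for what a real embedding model and vector index would provide.

```python
import math

# Toy document store: each entry pairs a text snippet with a hand-made
# "embedding". In a real system these vectors come from an embedding model.
DOCS = [
    ("The 2023 championship was won by the Denver Nuggets.", [0.9, 0.1, 0.0]),
    ("Vector databases index embeddings for fast similarity search.", [0.1, 0.9, 0.2]),
    ("RAG augments an LLM prompt with retrieved context.", [0.0, 0.3, 0.9]),
]

def cosine(a, b):
    # Cosine similarity: dot product normalized by vector lengths.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, k=1):
    # Rank documents by similarity to the query embedding, keep the top k.
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(question, query_vec):
    # The RAG move: prepend the retrieved context to the question
    # before handing the whole thing to the LLM.
    context = "\n".join(retrieve(query_vec))
    return f"Context:\n{context}\n\nQuestion: {question}"

# A query whose (made-up) embedding sits closest to the RAG document.
prompt = build_prompt("What is RAG?", [0.0, 0.2, 1.0])
print(prompt)
```

The point of the sketch is the division of labor: the cheap vector search picks the context, and only then does the expensive neural network see it.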


So teaching an LLM how to use a database, or Google, like, literally teaching it how to use Google to give you better answers, is very rapidly emerging as a very powerful enabler of a lot of these systems. And that brings one other radically important [00:34:00] infra requirement into the situation, which is: what are the latency requirements of this data?

So, in other words, we'll use ChatGPT as an example. If I asked ChatGPT about the top hundred songs in 1957, it doesn't need to go fetch anything. But if I ask it the score of the baseball game right now, the neural network doesn't know a single thing about the baseball game right now.

In order to answer that question, it has to go fetch something live, some kind of live piece of data. So this real-time component is actually really interesting, because it's really important for some questions, and it's also impossible for the heavy neural network to handle. And so this is where you get into this very interesting balance between, like, the heavy intelligence versus the rapid lookups, and then teaching the heavy intelligence to use the lookups effectively, which is this RAG-style thing.

And this is, like, this emerging architectural space where, [00:35:00] you know, we, Rockset, are heavily focused on this real-time component of things. Like, like
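The "teaching the intelligence to use the lookups" idea boils down to a dispatch loop: the model emits a structured tool call instead of an answer, the runtime executes it against live data, and the result goes back to the model. A minimal sketch, where the `get_live_score` function, the score data, and the JSON call format are all invented placeholders for whatever a real tool-using model emits:

```python
import json

# Hypothetical live data source. In a real deployment this would be a
# query against a real-time index, not an in-memory dict.
LIVE_SCORES = {"Dodgers vs Giants": "4-2, bottom of the 7th"}

def get_live_score(game: str) -> str:
    return LIVE_SCORES.get(game, "unknown game")

# Registry of tools the model is allowed to invoke.
TOOLS = {"get_live_score": get_live_score}

def run_tool_call(model_output: str) -> str:
    # The model, instead of answering directly, emits a structured tool
    # call; the runtime parses it, executes the named tool, and feeds
    # the result back for the model to phrase an answer.
    call = json.loads(model_output)
    fn = TOOLS[call["tool"]]
    return fn(**call["args"])

# What a tool-using model might emit for "what's the score right now?"
model_output = '{"tool": "get_live_score", "args": {"game": "Dodgers vs Giants"}}'
print(run_tool_call(model_output))  # 4-2, bottom of the 7th
```

The neural network never has to "know" the score; it only has to know which tool to call, which is exactly the heavy-intelligence-plus-rapid-lookup split Louis describes.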

[00:35:07] **Dan Lines:** what's, what happened, like... You're saying, did someone just get an RBI in baseball right now? That's, like, more real. Yeah. Okay.


[00:35:15] **Louis Brandy:** I mean, a better example would be, like, uh, real-time recommendations, for example. Uh, we have a customer called Whatnot. They do live buying and selling. It's like Twitch, it's like Twitch streams, but for auctions. So you can literally go find someone that's selling Marvel comic books right now and interact with that human live. That's a classic recommendation problem.

Like, Amazon does this: you go to Amazon, you say, products like this one, and they'll show you other products. But one of the reasons their problem is interesting is that it has this live component. Twitch has the same problem, by the way, which is, like, I can only recommend you channels that are live now. Otherwise, it's a pointless feature.

[00:35:53] **Dan Lines:** Yeah.

And so this, otherwise I missed the opportunity.

[00:35:56] **Louis Brandy:** Right. So the liveness piece of data is utterly [00:36:00] real-time, super real-time, and you have to incorporate that into your AI recommendation frameworks. And so this is the kind of thing that Rockset is designed to do.
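The liveness constraint in the Whatnot and Twitch examples amounts to filtering recommendation candidates on a rapidly changing attribute. A toy sketch, with invented channel data and a deliberately trivial similarity step (matching on topic) standing in for a real ranking model:

```python
from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    topic: str
    is_live: bool

# Hypothetical candidate set; in production this comes from a real-time
# index that's updated as streams start and stop.
CANDIDATES = [
    Channel("comics_corner", "marvel comics", True),
    Channel("card_breaks", "sports cards", False),
    Channel("vintage_toys", "marvel comics", True),
]

def recommend(topic: str) -> list[str]:
    # Classic similarity step (here: trivially match on topic), then the
    # real-time step: only surface channels that are live *right now*.
    return [c.name for c in CANDIDATES if c.topic == topic and c.is_live]

print(recommend("marvel comics"))  # ['comics_corner', 'vintage_toys']
```

The ranking half is a solved, batch-friendly ML problem; it's the `is_live` predicate that forces the data system to be real-time, because a stale answer here is a useless answer.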

## What is real time when it comes to AI?

[00:36:08] **Dan Lines:** Yeah. Okay. That's interesting. And maybe a dumb question, but I was thinking about it while you were talking: like, when you say real time, what is it? Is it, like...

Minutes, like, something happened in minutes? Does it extend all the way to an hour, or is it, like, seconds?

[00:36:27] **Louis Brandy:** So it's a perfect question. It's not a dumb question at all. It's actually the smartest question, because the phrase real time is one of those super overloaded terms. So depending on who you talk to, it can mean any of those things.

For us, it typically means on the order of minutes to seconds. For a large-scale data retrieval system, that's super fast. But obviously, if I was building, like, a kernel to fly a plane, you'd want microsecond, you know, latencies, not minutes. But yeah, so in our case, what we're talking about is typically in the minutes-to-seconds range.[00:37:00]

[00:37:00] **Dan Lines:** Yeah. Yeah. I'm thinking, like, uh, then, okay. Like, product usage, or even before we would go to product usage, just, like, very broadly, my engineering infrastructure. Like, I'm the VP of engineering. I need to make decisions on my infrastructure. I need to understand what my product does.

If I understand enough about what my product does, then, like, when would I make a decision to use this type of stuff, versus, oh, actually I don't need that style of real time? That's where my head was

[00:37:34] **Louis Brandy:** going. I mean, there is a fundamental trade-off between, like, how cold the data can be and how much it costs to keep it warm, basically.

Right. So if you're good with yesterday's data, and by the way, a lot of the world is, like, a lot of the world is fine with yesterday's data. Um, you can do it very cheaply.

[00:37:49] **Dan Lines:** 24 hours is not real time. In this case, it's like, yesterday's data is fine. Yeah,

[00:37:53] **Louis Brandy:** But it's also far less expensive, too, like, in a very fundamental way.

But, like, even going back to AI, again, there's this gamut of what this word even means. Like, recommendations is a classic ML problem. But even something simple, like, you want a chatbot. Imagine you're an airline and you want a chatbot, like, hey, what's the status of my flight, and it just gives you the answer as a chatbot.

That's a perfect example of a thing you cannot build without a, to some degree, real-time data system behind the LLM that says, Hey LLM, this is the status of that flight, generate some text to answer that question.
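That flight-status pattern, fetch a live fact, then let the LLM phrase it, can be sketched like this. The flight table and the omitted `llm` call are hypothetical placeholders; in production the lookup would hit a streaming-updated database:

```python
# Hypothetical real-time flight table; in production this is fed by
# streaming ingest and queried fresh on every request.
FLIGHT_STATUS = {"UA 123": "delayed, new departure 6:45 PM"}

def answer_flight_question(flight: str) -> str:
    # The neural network can't know this fact; it's fetched live and
    # injected into the prompt. The LLM's only job is the phrasing.
    status = FLIGHT_STATUS.get(flight)
    if status is None:
        return f"Sorry, I can't find flight {flight}."
    prompt = (
        f"The user asked about flight {flight}. "
        f"Live status: {status}. Answer conversationally."
    )
    # A real system would call the model here, e.g. llm(prompt).
    # For this sketch we return the grounded fact directly.
    return f"Flight {flight} is currently: {status}"

print(answer_flight_question("UA 123"))
```

The design point: the data system supplies freshness and the model supplies the human-language interface, and neither can do the other's job.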

[00:38:30] **Dan Lines:** Yep. Yeah, makes sense.

You would consider that AI? Like, if I'm getting an answer like that, is that AI? Well, AI, now it's like this buzzword. Now, AI to me, it's only AI if I'm impressed. That's a good, that's a good definition. It's like, if it impressed me and I was shocked that it did a good job, like, that's AI.

If it's not, then it's like, that's that old thing that we used to do.

[00:38:56] **Louis Brandy:** I think that people underestimate... I think most people, we'll use the term chatbot, but I actually think that denigrates the value of a human interface, a human-language interface to a thing. Because, like, I would love to be able to, I do this all the time already.

Like, Hey Siri, and ask Siri a question and get an answer. If you can make that materially better, like, Hey Siri, what's the status of my flight? Siri's talking to me. Okay, I've got to stop that. Um, but it'd be really cool to get answers to that that you could trust. And so I do think that, even though, yeah, it's just an LLM, like, I could have just as easily googled the status of my flight, of course, I do think there's a lot of places where having the LLM take in human questions and give human answers is actually a superior API for a particular, uh, question.

[00:39:53] **Dan Lines:** Yeah, makes sense. We kind of, I think we talked, okay, you're like the VP of engineering. We talked a [00:40:00] little bit like, okay, think about it as product too.

[00:40:02] **Dan Lines:** What about for the individual engineer or the individual developer, and maybe I'm thinking about my. Career or maybe I'm thinking about my productivity. Am I good enough? Like, where do you, do you have any advice if I'm a IC?

[00:40:22] **Louis Brandy:** So there's two things I think about here. One general piece of advice I always give people is, like, the best way to be successful in your career is to align your strengths and passions with the thing the business cares about. So, you know, you could love being a kernel hacker, but if you work at, like, you know, a web dev shop, they're not going to value your interests.

So, like, step one is aligning those things. And I said something that I skipped past that I shouldn't have, which is your strengths. A lot of people focus on their weaknesses, and I actually think that's a mistake. Like, when you're fairly [00:41:00] junior in your career, it's good to know what you're not so good at. But as you get more senior, it's really about what you are good at, and trying to make that your entire day, and trying to align that with the business.

And so when we talk about something like AI, it's exactly the same kind of idea, where, like, you may be super excited about AI, but if that, at the end of the day, doesn't align with the business, you're going to be the one pulling on the shiny thing, and a manager somewhere is going to be the one that's like, no, no, no, no. Come back from the shiny thing.

Don't come back from the shiny. On the flip side, if you find a place where that stuff does align, where there is a product that works in this space, where, where, where this stuff actually moves needles that matter, and you're super passionate about it, and you're good at it, like, that is where, like, you get all that stuff aligned, then, then everything goes, goes wild.

So, like, to some degree, like, when I think about this kind of stuff, like, I do think as a software engineer, you're always going to want to be messing with the new stuff. Like, always. You're always going to want to mess with it. Because that's just part of what you do. That's like part of what got you here.[00:42:00]

It'll always be part of your wiring. So kind of developing that internal governor of like, I've messed with this enough, this is probably not the right thing to be doing at the moment. Or, saying like, no, this is, this matters a lot, I'm going to go bet on this. And then finding a place where that aligns with the business, right?

Like, you've got to make a judgment on whether the way the wind is blowing is good or bad. And then you want to get into a place where that matters.

And I think that that's, like, the general feel of this. And as a general rule, again, same thing as before, it's also really great to know something and kind of know its limits, and say, like, no, we shouldn't do that. Like, sometimes you get to be the grownup in the room and be like, guys, this is too shiny.

We shouldn't go down this road.

[00:42:39] **Dan Lines:** Yeah.

Um, and I'll just add: the only way, or most of the time, the way you get to be the grownup in the room and say, like, this is too shiny, is to investigate it and know enough to say that. That's the key. Yes, yes.

[00:42:55] **Louis Brandy:** Yeah. Like we, like just be clear. We use the word like engineering leader a lot, but like it's, I'd [00:43:00] rather not be the manager that's doing that.

Almost always, it's much worse for it to be me than someone else. Like, I'd much rather the tech lead or somebody stand up and say, that path leads to madness. Right? Like, it's cool, like, hey, I dove

[00:43:14] **Dan Lines:** into this for, you know, on the weekend or a week or whatever, and like, here's my blunt opinion.


[00:43:22] **Louis Brandy:** And like, and if you want to prove me wrong, this is, this is the next level, the second or tertiary level of things that I think are, are bad, that are gonna be a problem. Like, if you wanna prove me wrong, that's go, go attack those problems and come back. Right? Like, and that's again where that, that investigation is really valuable.

[00:43:38] **Dan Lines:** Alright, well some great advice. And I think we have enough, like, follow up topics that we could do another fun pod, but we will wrap it up for today. So Louis, thank you so much for joining me. It's been a pleasure.

[00:43:53] **Louis Brandy:** Thank you so much for having me. It's been great.

[00:43:55] **Dan Lines:** And for you listeners, make sure you've subscribed [00:44:00] to our Dev Interrupted YouTube channel to watch this episode and tons of behind-the-scenes content.

Thank you everyone for listening. And again, Louis was awesome to have you on, man.

[00:44:11] **Louis Brandy:** Thank you for having me.