
Why enterprise AI lives or dies on applied research

By Elizabeth Lingg

"I always thought in academia the research is very hard, which it is... but it's surprising to see how much you have to adapt that research to really put it into production [and] work with multiple teams in multiple disciplines."

What does it take to transform a brilliant AI model from a research paper into a product customers can rely on? We're joined by Elizabeth Lingg, Director of Applied Research at Contextual AI (the team behind RAG), to explore the immense challenge of bridging the gap between the lab and the real world. Drawing on a career spanning Microsoft, Apple, and the startup world, Elizabeth details her journey from academic researcher to industry leader shipping production AI.

Elizabeth shares her expert approach to measuring AI impact, emphasizing the need to correlate "inner loop" metrics like accuracy with "outer loop" metrics like customer satisfaction and the crucial "vibe check." Learn why specialized, grounded AI is essential for the enterprise and how using multiple, diverse metrics is the key to avoiding model bias and sycophancy. She provides a framework for how research and engineering teams can collaborate effectively to turn innovative ideas into robust products. 

Show Notes

Transcript 

(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)

[00:00:00] Andrew Zigler: Today we're diving into a really cool topic.

[00:00:02] Andrew Zigler: We're exploring the world of specialized intelligence. And the challenges of bridging research and product development. And we're doing this with Elizabeth Lingg. She's the Director of Applied Research at Contextual AI. That's the team behind RAG. And ultimately, Elizabeth's impressive background spans major players like Microsoft and Apple.

[00:00:22] Andrew Zigler: She brings a unique scientific perspective to making AI work for real world applications. Elizabeth, welcome to our show.

[00:00:31] Elizabeth Lingg: Thank you very much. I'm so excited to chat.

[00:00:33] Andrew Zigler: We're excited to dive into your research background. And I wanted to start just by talking a little bit about your career, 'cause it's taken you through some impressive places that I just named, and it all started for you with a love of learning. What does it look like as an applied researcher on a software team?

[00:00:51] Elizabeth Lingg: Yeah, so I would say that I've always been passionate about math, um, AI, even as a kid. I read 2001: A Space [00:01:00] Odyssey, and I was so fascinated by the book and whether HAL had a software or hardware malfunction. Did he achieve singularity when he ejected the person from the spacecraft? What, what happens with intelligence, with AI? So, I've always loved AI and the research side. Uh, for the longest time, math, statistics, data, data science, it's always been a passion, but then how do you actually bring that into a product? How do you build something, something that works for, you know, real users, real customer data? So that basically is where my career has taken me: to actually apply the research, apply the ML models, and build real and exciting products.

[00:01:43] Andrew Zigler: So your journey starts by becoming a researcher, someone who's obsessed and great at science and math, and that lets you go, you know, deep on like an academic type of route of learning how to conduct research and collect great data. And then from there, how do you, how did you look to pivot those [00:02:00] research skills to become an engineering leader?

[00:02:03] Elizabeth Lingg: Sure. So I think just the fact that I joined industry after grad school at Stanford, I ended up, you know, working on different aspects of the code. So not just, say, clustering and similarity algorithms, but also information retrieval, building my own Lucene frameworks, and, and calling, uh, different, you know, APIs, um, external APIs and so forth, which brought me into, uh, a lot of this, uh, software side of things. Of course, for anyone with a computer science degree, you do learn the fundamentals in different areas. But even though my job title was on the ML side, I'm still touching all these different aspects of the system, and I'm working with the DevOps team, and I'm working to train a model on a, a server, on a specific compute. So I think what happened is once I joined the real world, I have to build these products, I have to make sure the customers are happy, then I start worrying about things like latency, right? [00:03:00] So even 16 years ago after grad school, latency was a thing. So I was working on governance, risk, and compliance and fraud detection and using AI for that. But even for that, you know, if it's too slow, the customer's gonna be unhappy. So that's, that's what brought me into the software side of things. And then honestly, it's been the best fit for me. I realized that even though my greatest passion is on the research, AI and, and math side, it's really good to have those solid computer science fundamentals as well, and at least have an understanding of code.

[00:03:32] Elizabeth Lingg: Of course, you could vibe code these days, but you, you have to have the understanding of the system. Um, having that full understanding I think is, is really valuable. And what I've noticed is the industry has shifted. Like when I first joined, you either picked: you're an AI researcher or you're a software engineer, you're a data scientist or you're a software engineer. But these days it's very common to have the full stack MLE, applied scientist, applied researcher, which works, works out great for me.

[00:03:59] Elizabeth Lingg: [00:04:00] I think it, it is important to have that full perspective.

[00:04:03] Andrew Zigler: Yeah, that's a great call out that you've made there, um, about the diversification of the skills and the baseline that folks are expected to operate at. We're hearing that from everybody we talk with. Uh, Lee Robinson, VP of Product at Vercel, echoed the same thing happening in web development, where you have like the full collapse of front end and backend into this full stack engineer.

[00:04:22] Andrew Zigler: That's a baseline now, that's table stakes as someone working in that environment, and you see the same thing happening. We talked with a guest from, uh, Google DeepMind who talked about how most engineers and how engineering works. It's a collapse of backend and front end. It's not that specialists go away, it's just that the expectation level that everyone's supposed to be operating at shifts.

[00:04:42] Andrew Zigler: And there was something else that you said in there too that I really like, I wanna double click on, and you said, when I entered the real world, when you went from doing the theoretical work to working in, in the practical, uh, application world, what was a big challenge you faced when you made that transition?

[00:04:57] Elizabeth Lingg: I think anyone who goes from the academic [00:05:00] world to more of a production environment has to learn about all the, basically all the failure cases that exist and, you know, inter-team dependencies. And to your point, um, I mean, yes, specialists still exist. They're great. I love specialists.

[00:05:17] Elizabeth Lingg: It hasn't obviously been my trend in, in my career, per se, but, um, it, it is the baseline. Even if you're a specialist, you do have to have that baseline understanding. And so I think what was shocking to me, just at first, once I was getting to know, um, how to build, you know, software and cloud software and applications and so forth, and eventually SaaS, is like, okay, there's so much that goes into it.

[00:05:46] Elizabeth Lingg: You know, I always thought in academia the research is, is very hard. It is, there's a very challenging aspect to it, but it's, it's surprising to see how much you have to adapt that research to really put it into production and work with multiple teams in multiple [00:06:00] disciplines.

[00:06:01] Andrew Zigler: Yeah. And, and, and especially when you work at a large company like Apple or Microsoft that has a bunch of teams, has a bunch of disciplines. It's a huge, spread-out organization. What was it like shifting from working in that kind of environment to being an applied researcher and working in a more startup-like environment?

[00:06:17] Elizabeth Lingg: So my trend over my career has actually been either startups or sort of the startups within the large companies. So I have been a part of, you know, several early stage startups or, like, startups that are quickly acquired by a big guy kind of thing. And I think what I love about both of those situations is the ability to really contribute to the research community, to be scrappy and move quickly. For example, at Microsoft, um, I was most recently working on the Copilot team and leading the language understanding for the Office Copilot. In that role, you know, we started with a hackathon and then eight months later [00:07:00] we had to deliver an entirely new product, which you think, um, would be easy at a large company, but you know, large companies also have their downsides, they have their processes.

[00:07:09] Andrew Zigler: Right.

[00:07:09] Elizabeth Lingg: It was really exciting getting there and getting to that large number of, of customers and large sales number and moving to GA in just eight months.

[00:07:18] Elizabeth Lingg: So I always love, both within the startup and the large company, trying something new, working on risky teams. And then I would say at startups, though, it's the best growth and learning. I see so many people, uh, especially in the AI community, attracted to startups because you wear so many hats, you touch so many things.

[00:07:35] Elizabeth Lingg: You learn a lot, you challenge yourself. And then certainly at large companies on the right team, there's a lot of learning and growth as well.

[00:07:43] Andrew Zigler: And when you're on those teams and you're, and you're working on applied research within an organization like that, how do you, how do you measure your impact and how productive y'all are as a team together?

[00:07:53] Elizabeth Lingg: So that's an interesting question. There's always measurement and metrics, which, as one knows, can be [00:08:00] biased, right? So you may measure accuracy, but you have one perspective; someone on the other side of the world or coming from a different culture may have a different definition of accuracy, or golden labels, or what is a representative data set. Is it the, the data on the internet? Probably not. Uh, we, we've read the Stochastic Parrots paper. But bottom line is, I would say, we do try to quantify measurements, uh, typically like accuracy and so forth. Or say for retrieval, you know, NDCG, MRR, um, recall, precision, all that kind of thing. And then with, like, uh, large language models, we measure groundedness, equivalence. Contextual has a really cool methodology called LMUnit, which I could tell you more about, um, which is like software engineering unit tests to verify criteria.

[00:08:52] Elizabeth Lingg: So we have all these, um, at Microsoft and, um, at startups, we do have these quantitative, and at large companies like Apple, we [00:09:00] do have quantitative measures, which we use, and we try to diversify those, those metrics. That's always my goal, to avoid bias: get perspectives from different people, aggregate different metrics together, try to balance out, you know, the bias of the data sets, have a holdout data set. That's one way to measure success. But the biggest measurement of customer success is always, like, I would say, always like customer utilization, customer satisfaction. So one thing we've worked on doing is correlating our, what we call our inner loop metrics with outer loop metrics. So how many queries per day, how often are people using the app? You can correlate that with, for example, you can see if there's a correlation with some of our internal metrics in terms of, of recall for retrieval. Can you apply a linear regression and do you see

[00:09:51] Elizabeth Lingg: that there's a correlation there. What features should be weighted higher or lower in order to determine how the inner loop influences the outer loop? So we do some of that [00:10:00] quantitative work. And then what's interesting with what I've seen, um, at certain companies is there's a lot of, like, vibe check and user experience. So even Apple I think is really good at that. Like Apple has not historically been, um, like the, you know, AI data company, but for example, they're really good at, like, building these really cool, engaging products and really beautiful products and making data beautiful and making AI, like, the excitement of using AI.

[00:10:30] Elizabeth Lingg: When I worked on the Apple Health team, for example, on your phone, on your watch,

[00:10:33] Andrew Zigler: Yeah.

[00:10:34] Elizabeth Lingg: I would say there's both the quantitative and the qualitative metrics and they both matter. And for both of those, you have to be very careful to avoid bias.
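To make the inner loop / outer loop idea above a bit more concrete, here is a minimal sketch of that kind of correlation analysis. The metric names, the numbers, and the use of scikit-learn's LinearRegression are illustrative assumptions for this post, not Contextual AI's actual pipeline or data.

```python
# Hypothetical sketch: relating offline "inner loop" metrics to an
# "outer loop" product metric. All names and numbers are made up.
import numpy as np
from sklearn.linear_model import LinearRegression

# One row per evaluation period (e.g. one week of a deployment).
# Inner-loop metrics: offline retrieval / answer quality.
inner_loop = np.array([
    # recall@5, MRR, groundedness
    [0.72, 0.61, 0.88],
    [0.75, 0.64, 0.90],
    [0.69, 0.58, 0.85],
    [0.80, 0.70, 0.93],
    [0.77, 0.66, 0.91],
])

# Outer-loop metric: e.g. average queries per active user per day.
outer_loop = np.array([4.1, 4.6, 3.8, 5.2, 4.9])

model = LinearRegression().fit(inner_loop, outer_loop)
print("R^2:", model.score(inner_loop, outer_loop))

# Coefficients hint at which inner-loop metrics track real usage most
# strongly, which is one way to decide what to weight higher or lower.
for name, coef in zip(["recall@5", "MRR", "groundedness"], model.coef_):
    print(f"{name}: {coef:+.2f}")
```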

[00:10:45] Andrew Zigler: It's really helpful to understand how you walk through the process of evaluating your own work and, and, and looking at it when it's actually applied at scale and inside of applications and those things that you look for, the signals from the end users, um, on is this good, is this bad? But also just [00:11:00] internally your methodology.

[00:11:01] Andrew Zigler: For making sure everything stays grounded, because that's why you're there. For those listening, you know, they, they might see a lot of things related with this kind of mentality with their own kind of mini startup groups and cultures within their orgs right now, 'cause everyone's experimenting with this kind of technology, um, and they're trying to be grounded and methodological about how they approach it.

[00:11:21] Andrew Zigler: Right. And I'm wondering from your perspective, you know, how could a leader foster that kind of culture of, of a grounded research and innovation at their own company?

[00:11:30] Elizabeth Lingg: So definitely at Contextual, for example, we're very much focused on groundedness and accuracy. The reason why is, for a lot of enterprise use cases, it's very important that you're grounded in the retrieved knowledge, in the specific knowledge, and there's not hallucinations based on the model's base knowledge. So that is, um, one thing you can foster: if the company's goal is enterprise customers who need to be grounded, who need to be accurate, and that's just such a big need, I think, [00:12:00] right now in the community and, um, in AI, then that's, that's something you should focus on, because that's what's important to your users. On the contrary, if you're, like, a consumer company, like artificial general intelligence, you know, the OpenAI, Anthropic, et cetera, who's looking at the consumer market versus enterprise, you might be more lenient on creativity versus accuracy, if that's just not as important to your users. So you do always have to be, like, customer first and keep your user base in mind. But even then, um, and this is something I learned across my career at different companies, you, you also just have to try to seek out the viewpoints and the external metrics, and try to understand from, like, diverse team members how to avoid bias. So, like, large companies, what's great is they have RAI, and there are even, you know, tools around this.

[00:12:52] Elizabeth Lingg: Like, um, there's models which will tell you if something is offensive or something you'll need to be careful about. [00:13:00] RAI, the Llama Guard type models especially, yeah. Yeah. So, and the, the, um, prompt shields and so forth. So there's, there's things you can do, whether you have the resources of a large company and have a whole team around it, or you're using one of the open source models. But, um, I would say it's important to just, like, try to have a broader perspective or make use of tools that can kind of give you a new sense of awareness. And then I would say definitely use, uh, multiple metrics. That's another thing. And then the final thing is, just beyond all that, don't forget the vibe check.

[00:13:33] Elizabeth Lingg: Don't forget the actual user experience. What is it really like to use it? even having humans red team it, um, tell you what they think. That's invaluable.

[00:13:43] Andrew Zigler: I, I like what you're calling out at the end, the, the vibe check aspect. It actually reminds me of the recent memo that, that came out from OpenAI about their own model releases around sycophancy, because they use that language, vibe check, as part of their review process that [00:14:00] maybe didn't, ultimately, pass the right test.

[00:14:02] Andrew Zigler: It was, uh, if you remember the memo, I'm not sure if you're familiar with it, it was, uh, very much about how the LLM responses you would get would say what you wanted to hear, as opposed to being grounded in truth or having a reasonable response. Having those safeguards. Are you familiar with that memo that they, that they dropped?

[00:14:20] Andrew Zigler: That's what I think of when I hear you describe that topic.

[00:14:23] Elizabeth Lingg: Yes, yes. But this is, uh, I mean, human preference based alignment is a whole research area. In fact, we have, in my company, we have a lot of researchers, um, who are collaborating with Stanford. And that's one of the dangers with RLHF and reinforcement learning and any kind of, uh, human preference alignment through thumbs up, thumbs down. Uh, people have their own bias. Like, they, they wanna be told what they want to hear, but that's also dangerous, right? So that's why you do have to balance it out. You don't wanna just rely on that, uh, sycophancy, like, or you wanna maybe have some vibe check to counter that. That's one thing. [00:15:00] Or you could use another metric.

[00:15:01] Elizabeth Lingg: Like I mentioned, we have a, a model that measures groundedness, to say how grounded the responses are in the retrieved knowledge, how accurate they are. So those are different methodologies you can use to balance it out, but that's why you need those multiple metrics, because you don't wanna just optimize for making the user happy. I, I know before I mentioned that was the ultimate metric, but it's not; their happiness is not just being told what they wanna hear. Like, enterprise users also want something accurate, and even consumer users, like, there's a good number of consumer users who will be angry if they're told something inaccurate,

[00:15:38] Andrew Zigler: Yeah.

[00:15:38] Elizabeth Lingg: so you have to have a broader perspective on that.
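As a rough way of picturing what a groundedness metric adds alongside thumbs-up/thumbs-down signals, here is a toy sketch that scores how many response sentences are supported by the retrieved passages via word overlap. Contextual AI's actual groundedness model is a trained model; this heuristic, its threshold, and the example data are purely illustrative assumptions.

```python
# Toy stand-in for a groundedness metric: fraction of response sentences
# with strong word overlap against at least one retrieved passage.
# A real system would use a trained model; this is only illustrative.
import re

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def groundedness_score(response: str, passages: list[str]) -> float:
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", response.strip()) if s]
    passage_tokens = [tokens(p) for p in passages]
    supported = 0
    for sentence in sentences:
        s_tok = tokens(sentence)
        if not s_tok:
            continue
        best = max((len(s_tok & p) / len(s_tok) for p in passage_tokens), default=0.0)
        if best >= 0.5:  # arbitrary "supported" threshold for the sketch
            supported += 1
    return supported / max(len(sentences), 1)

passages = ["The warranty covers manufacturing defects for 24 months."]
answer = ("The warranty covers manufacturing defects for 24 months. "
          "It also covers accidental damage.")
print(groundedness_score(answer, passages))  # 0.5: second sentence is unsupported
```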

[00:15:41] Andrew Zigler: Yeah, it, it absolutely depends on the reason for which you turn to the tool, right? It goes back to what you've rightfully called out about the usage of generalized models versus specialized models. Understanding what and why you need to be using the tool that you are, uh, for the stakes of your, of your company or your product or [00:16:00] what you're putting out there.

[00:16:01] Andrew Zigler: So in your kind of environment, you know, you spend a lot of time thinking about the specialized intelligence and how do we create it in a way that is grounded in the facts and truths of our org and our live data and, and the things that matter to that agent doing that workflow. What are some things that, that you see working in specialized intelligence that off-the-shelf models just can't do, that

[00:16:25] Andrew Zigler: maybe folks are missing if they never, never look deeper?

[00:16:29] Elizabeth Lingg: I would say what folks are missing with off-the-shelf models is, even with them, there's just a lot of, a lot of effort, even so, to make a specialized use case work. Like if you look at some of our customers working on, uh, technical documentation, code gen, not just general code gen but code gen for their libraries, their specialized use case, hardware configurations, customer support for electrical engineers, like really experts, or, [00:17:00] uh, you know, customers in the tax and legal space, like experts in, uh, different, uh, regions, for example, there's different tax laws. So out of the box, you could get a certain thing. But a lot of organizations, they have their own internal data.

[00:17:17] Elizabeth Lingg: They have their own internal knowledge base, right? And so I think what's missing is it's not so easy to just throw all that into context or set up a RAG platform and use the foundation models. It, it takes a lot of, you know, human effort and work. Like, you, you have to start data cleaning, uh, query rewriting, query generation. And so a lot of the value proposition of what my company is doing is making that a lot easier, to spin up these specialized agents very quickly by having all these components: document understanding, parsing, re-ranking, generation, retrieval models, et cetera, and the ability to specialize them, and also evaluate for groundedness and so forth.

[00:17:59] Elizabeth Lingg: [00:18:00] And so having that all together and having that acceleration within a product and a platform is definitely something that I think is much needed because otherwise you set up one use case, you spend a long time, and then this new use case comes, it's like back to ground zero. And I know there's a lot of companies doing that.

[00:18:18] Elizabeth Lingg: A lot of companies are just, they have this customer and then they have a bunch of folks, like, set it up, make it work, and then they're back to ground zero for customer number two, number three. It's still a good business model, but our goal is, is to make that much easier. And so I think what's missed there is not having the state-of-the-art components and modular components that you could use and quickly stitch something together.

[00:18:42] Elizabeth Lingg: And I think there's a big market for that.
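One way to picture the "modular components" point Elizabeth makes here is a pipeline where each stage (retrieval, re-ranking, generation) sits behind a small interface, so a domain-tuned retriever or re-ranker can be swapped in per use case without rebuilding everything from ground zero. The interfaces below are a generic sketch under assumed names, not Contextual AI's platform API.

```python
# Generic sketch of a modular RAG-style pipeline with swappable components.
# The interfaces and names here are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Chunk:
    doc_id: str
    text: str

class Retriever(Protocol):
    def retrieve(self, query: str, k: int) -> list[Chunk]: ...

class Reranker(Protocol):
    def rerank(self, query: str, chunks: list[Chunk]) -> list[Chunk]: ...

class Generator(Protocol):
    def generate(self, query: str, context: list[Chunk]) -> str: ...

def answer(query: str, retriever: Retriever, reranker: Reranker,
           generator: Generator, k: int = 20, top_n: int = 5) -> str:
    """Retrieve broadly, re-rank, then generate grounded in the top chunks.

    Each component can be specialized per use case (e.g. a tax-law re-ranker)
    without touching the rest of the pipeline.
    """
    candidates = retriever.retrieve(query, k=k)
    ranked = reranker.rerank(query, candidates)[:top_n]
    return generator.generate(query, context=ranked)
```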

[00:18:44] Andrew Zigler: It makes a lot of sense, and I, I put myself in the shoes of, like, our listeners too, and, and many of them find themselves, you know, uh, a, a leader at an engineering organization. They're experimenting with AI workflows, trying to help their engineers have a better [00:19:00] developer experience, be more productive, remove friction.

[00:19:03] Andrew Zigler: And Elizabeth, I know that your, your background includes working, you know, very closely with engineering teams and engineering managers. I'm wondering what advice you would give to those folks for how they should go about exploring this problem, because for many of them, the, the accessibility of starting with a specialized model approach, no matter how much institutional knowledge they're sitting on, might, it might be very daunting. But I'm just wondering if you see opportunities for them to, uh, ultimately work with off-the-shelf models along with their own data, or,

[00:19:37] Andrew Zigler: eventually moving towards the specialized intelligence approach, you've said how do they really, uh, you know, get started with the, the kind of tools that are available to them these days? You think?

[00:19:47] Elizabeth Lingg: So I mean, there's definitely, um, some options out there. And then, for example, in our case, we have a certain free trial period for use of certain APIs. So they [00:20:00] can go play around with different options like Contextual's, see how they, how they like it. But I would say, um, they can also just experiment with that versus out of the box.

[00:20:11] Elizabeth Lingg: I would say, honestly, out of the box, again, it's only gonna get you so far, and you're probably gonna have to dedicate a lot of, of resources to that. So that's an option. That is an option. But it depends on, you know, the size of your company, the number of use cases. But you, you can consider, you know, working with a company like Contextual. Um, again, you just have to weigh the pros and cons. For some people too, if it's a very simple use case, maybe accuracy isn't that important, maybe something out of the box will work great. And then in terms of working collaboratively as an AI leader with engineering leaders, yes, I've definitely done that.

[00:20:50] Elizabeth Lingg: And then I've also worked, even on the software engineering side and big data, um, myself, which I think is very valuable, again, to have that full stack, full perspective. [00:21:00] But I think it's very important to try to communicate and understand the other perspective as much as possible, and kind of explain where you're coming from, and work together with the engineering leader to say, hey, this solution has these pros and cons, this cost associated, this has this, here's the first sequence of experiments we should try, and let's work together, um, with the software engineering team and with the applied researcher, applied ML team, and come to a conclusion on, uh, what kind of roadmap, and then definitely starting with those initial small, scoped experiments.

[00:21:36] Elizabeth Lingg: In my opinion, that's the best way to go.

[00:21:38] Andrew Zigler: So starting with small, repeatable experiments that have a reason. You see, was it good, was it bad? You do the vibe check along the way, and ultimately, you know, you're working with a team of engineers, and, and you said, you, you spoke to, you know, you understand this very well as an engineering manager, someone who is an engineering leader, um, that, you know, ultimately everyone just wants to write good code, ship good software, [00:22:00] right?

[00:22:00] Andrew Zigler: And, and, and they want to try to use new tools to do that better, faster, and safer. So it's about trying to roll that out in kind of like pilot programs, and, and, and don't obviously overload your teams with tools that, you know, they're all trying at the same time, 'cause then it's hard to measure what's really working and what's not.

[00:22:16] Andrew Zigler: Right.

[00:22:17] Elizabeth Lingg: Right, and it, it does, especially these days with the fast pace, it does get a little challenging because, you know, there's the people from the research perspective like myself who more, uh, typically, uh, lean more to the side of, let's experiment, let's try this new technology, let's try this new model. But those folks need to be also grounded, you could say, in the fact that, hey, no, this model uses a crazy amount of memory and GPUs and the code is, is strung together. So that's why the partnership with engineering, and then even for myself, again, because I do have some perspective of both sides of things, trying to [00:23:00] help bridge the gap and collaborate and come to a common conclusion is helpful. It can be hard too because, uh, on the research team, it's more about, like, innovation, prototyping, not so much about writing production-ready code. So that's why really clearly, like, partnering with people who love to write that robust, product-ready, well-tested code is critical, and also just being flexible to some point.

[00:23:27] Elizabeth Lingg: I think that's what's helped me too, is just, um, having people on my team try, maybe experiment with some of the things that the other team is doing, or at least observe what they're doing. Oh, this is what it, what it's like to train a model. This is why, you know, this

[00:23:42] Andrew Zigler: Hmm.

[00:23:42] Elizabeth Lingg: doesn't work and you can't just fix it.

[00:23:44] Elizabeth Lingg: Or, oh, this is what it's like, right, to write some production code. Let me try to write some code in the system. And I found actually, 'cause I've had people on my team who are, like, very strong engineers or very strong researchers, and then more people in the middle. But when I had those people who [00:24:00] were kind of, like, on the opposite sides of things, those with, like, a great growth mindset who wanna learn, I found that when they kind of experience the other side of things, it, it also helps them to understand. The only trick there is, it has to be a balance, because for example, on my current team, I work with a, um, our platform software engineering team, and they're, like, really brilliant, brilliant engineers, and so

[00:24:25] Elizabeth Lingg: Sometimes my team, we do have engineering knowledge, so we can debug something, but you have to think, okay, how much are we going to do? And then how much are we gonna ask them for their help?

[00:24:34] Andrew Zigler: Right.

[00:24:35] Elizabeth Lingg: My strategy on that is, at least in good faith, put some time in, look at the log, send over a log, share what you've tried, share what you think the next steps are.

[00:24:45] Elizabeth Lingg: And then also the other team will feel, hey, this person actually tried to look into it. Though again, sometimes time doesn't permit that. It's like, hey, we need this now, there's an outage, like, help us. And then on the opposite side, like, hey, there's an accuracy issue in the [00:25:00] model, we need it fixed now, kind of thing.

[00:25:02] Andrew Zigler: That's a good call out about the idea of, like, as an engineering, uh, practitioner, you know, you can practice, you can, you can do it, you can apply it, you can try it yourself, uh, before you try to pull in other folks. And what you do in that case is you're creating this culture of learning, where folks in the organization are learning things outside of their immediate team scope and their responsibilities.

[00:25:21] Andrew Zigler: And I think that's ultimately one of the biggest, uh, benefits of having, like, an applied research team and, and folks who specialize in that, is, uh, they're able to connect all of those dots between the engineers and the silos in which they operate. Right. And, and in doing so, you build a culture of learning. And I, and I'm, and I'm curious from your perspective, what are opportunities you see now for engineering, maybe like new engineers or people who are trying to level up to become those engineering leaders, managers, uh, someone like yourself?

[00:25:49] Andrew Zigler: What, what, what kind of skills do you think are most important these days?

[00:25:54] Elizabeth Lingg: Yeah, so just to follow up on what you just said first, I think that is absolutely critical, to have that growth [00:26:00] mindset and that openness to help where needed and work together. But I also just wanted to say that that has its pitfalls too.

[00:26:07] Andrew Zigler: Mm.

[00:26:07] Elizabeth Lingg: That's my natural style, and I think collaborative teams do the best, but it's also, you have to be careful, because I have had people on my team working on projects completely outside their area of expertise. And it can be like, okay, how long is this person gonna spend on this for the good of the team and the company? There's different reasons for that. So you have to, you have to be careful there. Now, in terms of growth, being a, you know, applied researcher and having the perspective of both, like, software engineering and ML, or an MLE, um, and how you reach a leadership level there. I would say honestly, when you're really new, just be really good at your job, do your best, and then as you start to feel more comfortable, think about how you could start solving problems for more people. Bring people together, whatever your talent is. So your talent could be, you're really great at architecture, so you really take the lead there.[00:27:00]

[00:27:00] Elizabeth Lingg: Or your talent could be, you're really great at collaboration, so you focus on bringing everyone together, getting everyone to common ground in order to solve a problem. Your talent could be, you're really good at, like, you have strong skills in debugging. So then you could take the fact that you have those skills, plus you have ML background, and then be able to debug across the full stack, and then maybe teach others. And then I think for any job, it's all, like, sort of a matter of, like, do the job before you're promoted. But you should probably talk to your manager about how you could do that, or your mentor. Usually, if they're amenable to that, like, they wanna help you grow to what you wanna do.

[00:27:42] Elizabeth Lingg: If not, probably you should start like looking for somewhere where you can eventually grow.

[00:27:47] Andrew Zigler: Right.

[00:27:48] Elizabeth Lingg: One thing I do is I'll give people side projects. So a lot of people might wanna get into a specific research topic, and I'll be like, okay, well, just focus on delivering, but then 25% of your [00:28:00] time, see how you can apply this here. And then often that can help them, uh, level up. And then another thing is, I feel like in startups you really make your own opportunities. So startups,

[00:28:11] Andrew Zigler: Yep.

[00:28:11] Elizabeth Lingg: growing like crazy, there's opportunities available. So try not to think in terms of restrictions, like, I don't have this title, so I can't do this. Just think, okay, I'm good in this area, I think this can help. But within that, I mean, talk to your manager, talk to your team. Don't just go and, like, well, I'm just gonna come up with this new project and not get it approved. It can't be like that. But if it's a side project, most likely it will.

[00:28:35] Elizabeth Lingg: I would say also, I think, just being open-minded for those opportunities that come to you and just thinking if it's a good opportunity for growth within your company or externally.

[00:28:45] Andrew Zigler: Yeah, it's like understanding your impact and what your impact can be. I love how you called out how in a startup, you know, you make your own opportunities. I think that's completely true. Uh, folks who experience a lot of success and growth within a startup are the [00:29:00] ones that can market themselves and the impact they have on the day to day really well.

[00:29:05] Andrew Zigler: And, and that, and that requires obviously being deeply familiar with the problem that your company is solving and being really close to those critical conversations that are happening every day, whether you're a part of them or not. Like, just understanding how it's shaping, uh, and then thinking about how can my impact

[00:29:21] Andrew Zigler: tie back to that. And in, in doing so in a way where you don't tank your current expectations for what you're ex, what you were, what you are there to do, because it's great to be curious, it's great to want to grow, but if you're there to perform a specific function, you have to, you know, first and foremost make sure you fulfill your function.

[00:29:37] Andrew Zigler: 'cause your company is relying on you in that way. So it's like a, a good, a good bit of nuanced advice, but it really calls out how you should be focused on understanding, you know, what's the core thing that my company is trying to do right now, and what is available in my skillset, in my wheelhouse, to get us a little bit closer to that goal.

[00:29:57] Andrew Zigler: And then once you do it and you start working on it, it's about [00:30:00] explaining to people, like, this is what I tried, this is what worked, this is what didn't, this got me a little closer. We're all stumbling in the dark, trying to figure out, you know, where are the light switches so we can see everything, and this is, this is me trying to get that direction.

[00:30:13] Andrew Zigler: And so it's kind of, it's, it's helpful to understand it from your perspective too, as someone who bridges both, like, the theoretical and the practical world. Um, I can't think of a better perspective to draw upon to kind of connect those dots for us. So, I, I appreciate that. And you know, I kinda wanna round things out too, 'cause we've, we've covered a lot of really great ground here.

[00:30:31] Andrew Zigler: Um, about grounded research and about, uh, uh, generalized and specialized intelligence, but also about your career and how you've moved through these types of roles and in these kinds of environments to ultimately take the theoretical, research-minded approach and apply it into solutions, software, and applications that are shaping our world.

[00:30:48] Andrew Zigler: Uh, but before we wrap up, uh, where can our audience go to learn a little bit more about Contextual AI and, and what you do, Elizabeth?

[00:30:56] Elizabeth Lingg: So you can definitely go to our, our company [00:31:00] website. We also have pretty active social media on LinkedIn, Twitter as well. Just search for Contextual AI. And then, because as we mentioned, um, the leaders of the company are part of the original RAG team.

[00:31:14] Andrew Zigler: Right.

[00:31:15] Elizabeth Lingg: Our CEO, Douwe Kiela, is one of the co-authors and a professor at Stanford. He, um, he was interviewed recently, so yeah, feel free to check out anything about our company or send me a, a LinkedIn message. I do get quite a lot, but I'll, I'll try to, uh, reply if you have a question.

[00:31:32] Andrew Zigler: Yes, yes, please, please reach out to Elizabeth on LinkedIn. Uh, you know, you should definitely give us a shout out if, if you listen to this episode there. Dev Interrupted posts every week. We're gonna be sharing clips of this conversation with Elizabeth there as well. And we'd love to hear your thoughts on what we discussed today.

[00:31:47] Andrew Zigler: So, uh, for those listening, thanks for joining us this far. Be sure to rate and subscribe to the podcast if you haven't already, and we'll see you next week.

[00:31:55] Andrew Zigler: Thanks for joining us on Dev Interrupted.

[00:31:57] Elizabeth Lingg: Thank you.
