The common narrative suggests AI will make engineering leadership obsolete, but history - and the Industrial Revolution - suggests the opposite is true. Engineering executive Manoj Mohan joins the show live from ELC to argue that as code generation costs drop, the demand for high-level judgment and strategic oversight will only skyrocket. He breaks down why leaders must stop starting with models and start with customer pain points, utilizing his "3GF" framework to manage the risks.
Recorded live at the Engineering Leadership Conference.
Show Notes
Transcript
(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)
[00:00:00] Andrew Zigler: Welcome back to Dev Interrupted. I'm your host, Andrew Zigler, and today we're sitting down with Manoj Mohan, a seasoned engineering executive with experience leading large-scale software, data, and AI transformations at companies like Intuit, Meta, and Apple. Manoj is passionate about empowering engineers and simplifying complex infrastructure, while also maintaining a focus on responsible AI and developer experience.
[00:00:28] Andrew Zigler: And he's wildly optimistic about what the future of engineering holds, and we're really excited to sit down with him today. So Manoj, welcome to the show.
[00:00:36] Manoj Mohan: Thank you Andrew. Thank you for having me, and thank you for that warm introduction. Look forward to our discussion today.
[00:00:42] Andrew Zigler: Yeah, likewise. So, you know, we're here at the Engineering Leadership Conference, and you're here on a panel today talking about how AI is here to stay. Can you tell us a little bit more about that?
[00:00:52] Manoj Mohan: Yeah, first of all, super excited to be here at the ELC annual conference. It's super energizing to be here and, uh, you know, be part of [00:01:00] the LinearB team and everything. Um, so the conversation we are having in the panel discussion today is how engineering organizations are going to evolve in a post-AI era. So that's the panel discussion we are having.
[00:01:16] Manoj Mohan: I know it's kind of, you know, a top-of-mind concern for the engineering community and things like that. So that's the conversation, and I'm happy to shed some light on it if you want me to.
[00:01:26] Andrew Zigler: Yeah, let's park there for a minute and, and learn some more about that. So, you know, AI is here to stay. We talk about that a lot on the show and it opens up a lot of opportunities for teams and it shifts everyone into a new paradigm, a new way of thinking. So what challenges and opportunities do you see right now for engineering leaders?
[00:01:45] Manoj Mohan: Yeah, so let's start with the forest view, right? The common misconception that I hear from several folks is that as AI starts to generate more and more of the code, [00:02:00] engineering leadership and engineers are becoming less relevant, right? That seems to be the resonating theme that's concerning to most folks, and in my opinion, I think engineering leadership is more critical than ever before as of today.
[00:02:17] Manoj Mohan: And why do I say that? The reason for that is the onus of making the right judgment call, the onus of creativity, the onus of investing in the right big bets based on criticality. All of this has to come from engineering, from the rockstar engineers and the rockstar engineering leadership, right? And maybe an analogy that I can state here would be back in the 1800s,
[00:02:47] Manoj Mohan: when Industrial Revolution was happening, everybody was freaking out that these big machines are gonna take away all the jobs, gonna eliminate all the jobs. But if you look back [00:03:00] at history, what happened is just the opposite of it. These machines actually spun up so many new industries, be it aircraft, be it transportation, be it railways, that in turn created millions and millions of new jobs, right?
[00:03:15] Manoj Mohan: So every transformation that we have seen historically always looks like it's the end, or it's doom. But if you give it its own test of time, it's gonna translate into, you know, newer opportunities, the ability to do more meaningful work, if you just let it carve out its path. So that's how I see it. You know, I sense a lot of optimism with how AI is taking over all of the mundane work.
[00:03:46] Manoj Mohan: Right. And one other analogy I could use is, I see AI's ability to generate code as more along the lines of having an intern in an organization. Right now, [00:04:00] an engineering organization is constrained on capacity, and say you feed in a hundred new interns to that engineering organization.
[00:04:11] Manoj Mohan: Do you think it's going to magically solve all of those capacity constraints? Absolutely not. That's not gonna happen, right? If anything were to happen, it's gonna choke the organization even more. It's gonna create collaboration bottlenecks, it's gonna create innovation bottlenecks, it's gonna create dependency bottlenecks, and so on, right?
[00:04:31] Manoj Mohan: So, um, I don't think that AI writing code is going to eliminate engineers.
[00:04:41] Andrew Zigler: Yeah.
[00:04:41] Manoj Mohan: What's gonna happen is AI is doing all of the mundane work, and it's gonna free up engineers to be able to do more meaningful work. Right? It gives them more space to innovate, to be creative, and to go after bigger, bolder bets so that we [00:05:00] do more meaningful work in the longer run. So that's the optimism that, you know, I would love for everybody to think about and, uh, thrive upon.
[00:05:08] Andrew Zigler: Yeah, I like how you use the analogy of being back in the Industrial Revolution and factories being introduced. There have been a lot of transformative things that have changed industries that we've talked a little bit about before on Dev Interrupted. That's a great example. Another one is like when
[00:05:25] Andrew Zigler: the Macintosh came out and traditional artists kind of rose up against new computer-generated graphics. And the same thing for architects doing drafting on a computer versus doing it traditionally on, like, a drafting table in a room full of architects. So in each of those changes, like you said, it opened up a new world, and people had to step through a threshold.
[00:05:44] Andrew Zigler: It's a bit of an uncertain time of what's gonna be on the other side of the door, but ultimately it's more jobs and more opportunities and more ways for humans to have a higher impact on the work that they do. And it reminds me a bit of earlier this year, when [00:06:00] everyone for the first time Googled Jevons Paradox. Remember when DeepSeek came out and the low inference cost caused OpenAI to dip down for a day?
[00:06:10] Andrew Zigler: Because everyone thought, oh, if AI is so cheap that you can have it in all these different ways, then why does a company like OpenAI have its moat? But then what everyone realized is that when AI access became more available, it increased the demand for it even more. And so what happened is the consumption went up even higher, instead of the
[00:06:33] Andrew Zigler: benefit or, you know, the profit you gain from it going down. And that Jevons Paradox applies also to engineers and engineering jobs. When everyone is an engineer, when everyone is building software, when it's at everyone's fingertips and grandma can vibe code, the need for engineering knowledge and engineering expertise is higher than ever before.
[00:06:55] Andrew Zigler: You now have all sorts of industries and companies and places where [00:07:00] maybe before they had a technologist in-house. They're not a tech company, but they had a technologist. Now everyone within that company is a technologist in their own right. And that's, like, a really exciting time.
[00:07:11] Andrew Zigler: But it's also a time of a lot of work for people like you and me, you know, to talk about it here on Dev Interrupted, but also to take the stuff that we've learned, our foundational knowledge as engineers, and help train the world for this new era that we're in.
[00:07:25] Manoj Mohan: Yeah. Yeah, absolutely. Very well said. Um, you know, with every such transformation there is a sense of skepticism, and for folks who don't understand the granular details of what's happening, that skepticism just explodes into paranoia and things like that. Right? You know, the only way to approach any such skepticism is to drive more and more clarity,
[00:07:50] Manoj Mohan: um, you know, of inculcating some knowledge within yourself, and also trying to, you know, break down the barrier and, uh, [00:08:00] create some clarity around what is at stake and things like that.
[00:08:03] Andrew Zigler: Yeah. So if you're an engineering leader right now, what can you do to build that clarity within your team to get everyone aligned?
[00:08:09] Manoj Mohan: Great question. So I would suggest a couple of different things, and some of these things have worked at the companies I've worked with in the past. So, one: if you were to carve out all of your productivity or, um, contributions in a typical week or 40 hours of work, you try and identify where you end up losing the maximum amount of time
[00:08:35] Manoj Mohan: and getting the least productivity, right? It could be something like sending out meeting notes after meetings, which might be taxing for engineers and engineering leaders. It could be putting together or documenting a design, or vetting out a design architecture, or it could be being on incident calls when you have customer issues. [00:09:00] So you look at what those time-consuming activities are, in decreasing order of where your value add is minimal.
[00:09:08] Andrew Zigler: So identify the toil
[00:09:10] Manoj Mohan: Yeah, yeah. Identify the toil, or identify where you could amplify your productivity, and then look at how you kind of solve for it, either through a process, through automation, or with AI or agent workflows or whatever.
[00:09:27] Manoj Mohan: Right. I wouldn't prescribe AI as the only means to improve your productivity. Yeah. It's definitely one of the most significant means because it enables better automation. So looking along those lines has helped immensely. One example to quote is, um, you know, my team of engineers used to be stressed out.
[00:09:47] Manoj Mohan: Like, and this used to come out in top-of-mind conversations. They used to be stressed out about the on-call process. Um, and with the on-call process, they would have to go through [00:10:00] the stress and the burn and the churn of trying to figure out where the root cause of the issue is. And this would span sometimes, you know, two hours, sometimes three hours, because our mean time to detect would vary from, you know, 90 minutes to 180 minutes and so on.
[00:10:16] Manoj Mohan: Right? So that used to be a very stressful thing for the engineers. So what we did is we said we are gonna try and solve for it by investing in an experimental troubleshooting bot, and that troubleshooting bot will help identify the initial few nuggets that will give a starter for the engineer. So earlier, the engineer already had the stress of being on a customer incident call, but now, with the troubleshooting bot, they come to the incident call with nuggets of information, yeah, that they can look into. Right, right.
[00:10:55] Manoj Mohan: So that was a huge morale boost for engineers.
[00:10:59] Andrew Zigler: was like [00:11:00] a, that was the big pain for them. The big pain point, right? Because it's like starting out, that incident investigation from nothing is really hard. You know, you're repping through some logs, you're trying to piece together some information that you have. And so starting with a a best possible first guess from an assistant is a great way of reducing the toilet and stress.
[00:11:18] Manoj Mohan: And, obviously, we didn't get it right on the first attempt, and our strike rate of success of what it recommended was low to begin with. I think it was 12% or something. But by iterating through and learning, giving it feedback, we were able to notch it up several points higher, and now it's kind of become a default standard for every engineer on the incident calls.
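To make the iteration loop Manoj describes concrete, here is a minimal sketch in Python of how a team might track whether a triage bot's suggestions are actually helping and feed that signal back in. The class names, data shapes, and the example numbers are illustrative assumptions, not the system he built.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TriageSuggestion:
    incident_id: str
    hypothesis: str          # e.g. "error spike correlates with the 14:02 config change"
    sources: List[str]       # logs/dashboards the bot consulted
    accepted: bool = False   # set by the on-call engineer after the incident

@dataclass
class TriageFeedbackLog:
    """Records every suggestion so the bot's hit rate can be measured over time."""
    suggestions: List[TriageSuggestion] = field(default_factory=list)

    def record(self, suggestion: TriageSuggestion) -> None:
        self.suggestions.append(suggestion)

    def acceptance_rate(self) -> float:
        """Fraction of suggestions the on-call engineer marked as useful."""
        if not self.suggestions:
            return 0.0
        return sum(s.accepted for s in self.suggestions) / len(self.suggestions)

# Usage: after each incident the engineer marks whether the starter hypothesis helped;
# the rolling acceptance rate is the number you try to notch up, release over release
# (say, from a ~12% baseline toward something the team actually trusts).
log = TriageFeedbackLog()
s = TriageSuggestion("INC-1234", "5xx spike follows the payments-service deploy",
                     sources=["deploy-log", "error-dashboard"])
s.accepted = True
log.record(s)
print(f"current acceptance rate: {log.acceptance_rate():.0%}")
```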
[00:11:44] Andrew Zigler: Yeah, it has, like, the same or maybe a little more accuracy than a human giving their first guess. Yeah. Someone with domain knowledge. And I love that you pointed out that you had to iterate on it, right? You built the frame of the workflow first. You got it saying stuff when something wrong would [00:12:00] happen, but it didn't get all the pieces right.
[00:12:02] Andrew Zigler: But as you used it more and more, you found, oh, it's kind of falling into this trap, or, oh, it's making this wrong assumption. And then you can equip it with more and more knowledge. And in doing so, you kind of incrementally build this tool that you then get to keep around, and it actually has a lot of long-term value.
[00:12:18] Andrew Zigler: And I'm wondering when you iterate through that process. How do you maintain the quality and make sure that you don't take steps backwards? Like, did you all use evals, for example?
[00:12:29] Manoj Mohan: I was about to
[00:12:30] Andrew Zigler: Okay, amazing. Let's talk, let's talk about evals. How do you, how are you, how
[00:12:32] Manoj Mohan: So I'll, you know, I'll give a couple of flavors to it. So first and foremost, for anything where you leverage AI in the enterprise world, right, the enterprise world is kind of scary because there is compliance, there is legal, there is security, there's tons of things you gotta put together.
[00:12:51] Manoj Mohan: But I call it as the 3GF, factors like three great factors anytime you think about AI for [00:13:00] enterprise, I call it the 3GF approach. Okay? What it means is, um, you know, one, whatever AI enablement you're doing, you've got to ground it. That's the first G. You guard it and you gotta govern it. So when I say, when I talk about ground it, ground it is you gotta give the citation what
[00:13:22] Manoj Mohan: sources of sources did AI refer to in order to come up with that best recommendation? Right? Think of it as the system being transparent about what it's coming back with.
[00:13:34] Andrew Zigler: observability.
[00:13:35] Manoj Mohan: Some sort of observability. And that also enables your users to gain confidence on, you know, okay, this is the reason why AI is giving me this particular reason answer, right? but the lack of that transparency will mean that you're gonna lose the trust of users for every false recommendation. And then gaining that trust back is going to be a.
[00:13:57] Andrew Zigler: Impossible in some cases.
[00:13:59] Manoj Mohan: Right, so [00:14:00] that's the second part is guard. Guard it is privacy by design. Right? You're gonna have tons of information, be it sensitive data that you need to mask out.
[00:14:12] Manoj Mohan: Um, you might have information that should not make its way to the model and things like that. So you've gotta have those guard guardrails or guarding principles with a privacy first design approach. That's number two. And the third one is governance, right? And governance is where mostly evals comes into effect because.
[00:14:32] Manoj Mohan: whatever AI does in terms of coding, you measure it. Measure it as precisely, as accurately as possible, right? So you have to look at data drift metrics. You have to look at model drift metrics. You have to look at fairness metrics. You have to look at all sorts of critical metrics that will tell you, or that will [00:15:00] be the validation mechanism for you to say that, look,
[00:15:02] Manoj Mohan: for this being the problem statement, for this being the outcome from AI, this is absolutely the right approach. Right? So the three Gs, as I have explained, are ground it, guard it, govern it, and the three Fs are the outcomes: fast, faithful, and fair outcomes, right? So you take care of the three Gs and you get the three Fs, and that's exactly what you want.
[00:15:32] Manoj Mohan: You want a fair outcome that is, uh, rationalized for your product use cases across the cohort of users. You want it to be fast, uh, fast and accurate, and, you know, so that's how I look at it. So in the enterprise world, if you take care of the 3GF, I think you're better suited to increase the odds of adoption success with the outcomes.
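As a rough illustration of the three Gs, here is a minimal Python sketch of what a grounded, guarded, governed answer path could look like. The function names, the masking rule, and the drift and fairness checks are hypothetical simplifications of the kinds of controls Manoj describes, not a reference implementation.

```python
import re
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class GroundedAnswer:
    text: str
    citations: List[str]   # "ground it": every answer carries the sources it drew from

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def guard(prompt: str) -> str:
    """'Guard it': privacy-by-design step that masks sensitive data before it reaches the model."""
    return EMAIL.sub("[REDACTED_EMAIL]", prompt)

def govern(answers: List[GroundedAnswer],
           drift_check: Callable[[], float],
           fairness_check: Callable[[], float]) -> dict:
    """'Govern it': evals that run continuously, not just at launch."""
    return {
        "pct_with_citations": sum(bool(ans.citations) for ans in answers) / max(len(answers), 1),
        "data_drift_score": drift_check(),      # e.g. distribution shift vs. reference data
        "fairness_score": fairness_check(),     # e.g. outcome parity across user cohorts
    }

def answer(prompt: str, retrieve: Callable[[str], List[str]],
           generate: Callable[[str, List[str]], str]) -> GroundedAnswer:
    safe_prompt = guard(prompt)
    sources = retrieve(safe_prompt)             # retrieval provides the grounding material
    return GroundedAnswer(generate(safe_prompt, sources), citations=sources)

# Usage with stubbed-out retrieval and generation, just to show the shape of the pipeline:
a = answer("Why did billing fail for jane@example.com?",
           retrieve=lambda q: ["runbook:billing-retries"],
           generate=lambda q, src: f"Based on {src[0]}, retries were exhausted.")
print(a.citations)  # the "ground it" evidence surfaced to the user
```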
[00:15:58] Andrew Zigler: I love that framework. And [00:16:00] you know, you talked about the Fs and you talked about the Gs, but now I wanna talk about the S's, and so: skills. The skills that are important to developers right now, to engineers. And I'm wondering, what are the things that you think engineering leaders should be screening for and understanding from their teams as they build them,
[00:16:16] Andrew Zigler: In order to have teams that are really ready to grapple with everything
[00:16:18] Andrew Zigler: AI can do.
[00:16:19] Manoj Mohan: Yeah, yeah, great question. Um, the principle that I follow for myself and for everyone on my team is, um, you know, small incremental learnings done consistently day over day will lead you to being more knowledgeable, being more, um, empowered to leverage the latest and greatest tools.
[00:16:44] Manoj Mohan: And the reason I state it along those lines is the pace at which the technology, the AI stack, is evolving is blazing fast. Right? I think you're on day one and you could be [00:17:00] outdated by day 30, right? Yeah. So that's why I think: hold dear to your principles, learn incrementally, and apply the patterns, right? And leverage those patterns on, okay, what is a use case where I should leverage retrieval-augmented generation?
[00:17:19] Andrew Zigler: Yeah.
[00:17:21] Manoj Mohan: Where do I leverage vector data stores? Where should I leverage embeddings? When do I invest in an in-house foundational model that's custom to your organization, versus when can I leverage a, uh, you know, OpenAI foundational model? Right, right. So having a lay of the land, a high-level understanding of the forest view,
[00:17:43] Manoj Mohan: and being able to map it out to the relevant use cases, the relevant pain points or problems, and how you'll tackle it, I think is a great start. And that is also a lot to chew on for all of us. The way I would approach it is, um, consistently look at it in small pockets and try to kind of get a feel for it, get a sense for it.
[00:18:05] Manoj Mohan: Build trust, and, you know, there are tons of opportunities. I'm not gonna outline all the courses, the courses that are on YouTube, on Google. Yeah, there are plenty of them. You could lose track of them. So I'm not gonna list them out, but...
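For readers mapping those patterns to use cases, here is a bare-bones sketch of the retrieval-augmented generation pattern Manoj mentions: embed documents, store them, retrieve the nearest ones for a question, and pass them to the model as context. The in-memory store, the embedding function, and the generation callback are placeholders; swap in whichever provider or in-house model fits your constraints.

```python
import math
from typing import Callable, List, Tuple

Vector = List[float]

def cosine(a: Vector, b: Vector) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / (norm or 1.0)

class TinyVectorStore:
    """Stand-in for a real vector database: keeps (embedding, text) pairs in memory."""
    def __init__(self, embed: Callable[[str], Vector]):
        self.embed = embed
        self.rows: List[Tuple[Vector, str]] = []

    def add(self, text: str) -> None:
        self.rows.append((self.embed(text), text))

    def search(self, query: str, k: int = 3) -> List[str]:
        qv = self.embed(query)
        ranked = sorted(self.rows, key=lambda row: cosine(qv, row[0]), reverse=True)
        return [text for _, text in ranked[:k]]

def rag_answer(question: str, store: TinyVectorStore,
               generate: Callable[[str], str]) -> str:
    context = "\n".join(store.search(question))
    # Retrieval-augmented generation: the model answers from the retrieved context.
    return generate(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
```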
[00:18:16] Andrew Zigler: In fact, in many cases, the best thing you could do is just get your hands in it and mess with it, and do it, more than just read about it. And I love that you called out a daily practice, a daily incremental practice, 'cause I also think that is the key to getting better at these tools: just do a little bit every day, but also then reflect on your usage and what you learn from it.
[00:18:35] Andrew Zigler: And I also love to partner with the LLM to write artifacts of the process along the way: Markdown documents of what we talked about, what we aligned on, what was the spec, what were the tasks, what was the framework. And then getting all of that as, like, something you check into git, even, has become a really important part of my daily process.
[00:18:53] Andrew Zigler: Uh, so that daily practitioner, uh, view is really
[00:18:56] Manoj Mohan: And one more nugget I would say is, um, you know, [00:19:00] every week or weekend I look at meetups, right? I look at...
[00:19:04] Andrew Zigler: What are, what are people meeting about and talking about right now?
[00:19:07] Manoj Mohan: There are, there are at least two or three hackathons happening over weekends, uh, or
[00:19:14] Andrew Zigler: I did one last week. I know they're all the time now,
[00:19:16] Manoj Mohan: And we did one here at ELC. Yeah. You know? Yeah. So there are tons of hackathons happening. And if you feel like you need that initial starter pitch, right, and you wanna join forces with someone else, hackathons are a great place.
[00:19:30] Manoj Mohan: Meetups are a great place. You should just expose yourself. Um, and even as a beginner, you get that kickstart, and then you could just, you know, drive it on your own with a bunch of agents running on your laptop all day. Yeah.
[00:19:42] Andrew Zigler: exactly. And so on that same question, Manoj are, are you vibe coding?
[00:19:47] Manoj Mohan: I'm white coding,
[00:19:48] Andrew Zigler: Yeah. So what's that like for you? What's your process like?
[00:19:50] Manoj Mohan: I, I take a couple of different approaches. Uh, first one is I try and get OpenAI and Gemini [00:20:00] and, um Anthropic to write comprehensive prompts for a high level problem statement. Yeah, so I, you know, I it with, you know, ARA based, um, you know, comprehensive prompt specialist. I treat it like an expert.
[00:20:15] Manoj Mohan: I give it a high level problem statement. I go to Figma.
[00:20:19] Andrew Zigler: So you do a big thought exercise first.
[00:20:21] Manoj Mohan: I, I do a big thought exercise. I kind of, uh, you know, I know because I know I'm not, my thought process is not complete, but I know what that end outcome needs to look like. So I try to kind of paint that forest view, not at the hundred thousand feet level at which I'm thinking.
[00:20:37] Manoj Mohan: I try to break it down to 10,000 feet level and all of these tools, like I take nuggets of good things that it comes up with in its comprehensive prong. I put together all of it collectively to build one of the best prompts possible. And then I run it on deep research mode, you know, to get the next level of breakdown.
[00:20:57] Manoj Mohan: And then I modelize it. And then those [00:21:00] modules I do with Claude CLI, I like Claude CLI that's my favorite tool of the time. So, so I give it small, incremental modules to develop and, um, you know, and then it, uh, it tries to develop all of it. I consolidate all of that into a git repo, and then I tried deploying it on Vercel and things like that.
[00:21:19] Manoj Mohan: So, so that's been my approach. and, and,
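A rough sketch of the first step of that workflow, getting several models to draft a comprehensive prompt and then consolidating the best parts, might look like the Python below. The `ask` callables are hypothetical stand-ins for whatever OpenAI, Gemini, or Anthropic client you actually use, and the merge step is deliberately naive; in practice the curation Manoj describes is done by hand.

```python
from typing import Callable, Dict, List

# Hypothetical adapter type: wire each one to the real provider client you use.
AskFn = Callable[[str], str]

DRAFTING_INSTRUCTION = (
    "You are a prompt specialist. Turn this high-level problem statement into a "
    "comprehensive build prompt with modules, constraints, and acceptance criteria:\n\n"
)

def draft_prompts(problem: str, providers: Dict[str, AskFn]) -> Dict[str, str]:
    """Ask each model for its own comprehensive prompt for the same problem."""
    return {name: ask(DRAFTING_INSTRUCTION + problem) for name, ask in providers.items()}

def consolidate(drafts: Dict[str, str]) -> str:
    """Naive merge: keep every distinct non-empty line across drafts, preserving order."""
    kept: List[str] = []
    for draft in drafts.values():
        for line in draft.splitlines():
            if line.strip() and line not in kept:
                kept.append(line)
    return "\n".join(kept)

# From here, the consolidated prompt gets broken into modules, and each module is
# handed to a coding agent (e.g. the Claude CLI) one small increment at a time.
```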
[00:21:22] Andrew Zigler: rapidly building and trying it
[00:21:23] Manoj Mohan: yeah. Yeah, and, and again, I, you know, I, I don't think I'm doing enough justice because I'm, most of the time I'm running one agent or one problem statement on my laptop. I.
[00:21:33] Andrew Zigler: You feel, you feel lazy for not running a whole bunch of them in parallel,
[00:21:36] Manoj Mohan: could, I, I should be running like 10 Claude agents and 10 cursor agents and 10 anthropic agents on 10 different projects in parallel, because, you know, that's what, that's the age we're living in, so I would love to get there.
[00:21:49] Manoj Mohan: So I feel like I'm a, you know, I'm lazy to do that yet, so.
[00:21:52] Andrew Zigler: Well, I mean, in the world we're in, and the kind of role that you're putting yourself in in your process, you know, you are the [00:22:00] top-down person with the vision, and ultimately you're using the AI agents as a team, right? And so what it does is it moves the importance of the work away from what it used to be before,
[00:22:10] Andrew Zigler: like, oh, you know, get the POC down, get something up as fast as possible. Now, a lot of that work actually is understanding really clearly what you want, and then being able to put it into words, and then using that incrementally, like you said, to find the best parts of the process. And then once you have it in a modular way, capture it and bring it in, and then use that as a reference to create that next modular piece.
[00:22:34] Andrew Zigler: And you are able to piece it together slowly over time. But all of that builds out of your intent, your ability to express what you want in natural language. And so has that shifted the kind of skills that you've been building and using as an engineer?
[00:22:49] Manoj Mohan: Yeah. So, you know, part of it is also that there's some history to it, which is, I still treat the AI code gen capabilities like an intern. [00:23:00] I don't think it has earned the trust to become a junior engineer.
[00:23:03] Andrew Zigler: You gotta review all the code and you need to understand what it does.
[00:23:07] Manoj Mohan: So every time I have tried giving it a forest-view, macro, multi-dependency-level problem statement, it would falter big time. And once it starts to falter, if I go and tell it, even as precisely as possible, hey, you know what, this dependency between this particular module and this module is not working, I know it's a bug, I want you to fix it, and I'll tell it three times, four times, five times...
[00:23:35] Andrew Zigler: It.
[00:23:36] Manoj Mohan: It still wouldn't get it.
[00:23:38] Manoj Mohan: So that's why I still, you know, hold onto my theory that I still treat it like an intern. I would love to see it graduate to more of a junior engineer someday, and then maybe I would give it more of a forest-view problem statement and expect it to kind of go after solving it.
[00:23:55] Andrew Zigler: I'm curious, from your perspective as an engineering leader, how do you take that [00:24:00] practice, what you align all of your engineers to do in this AI engineering world, and how do you convey the impact and the adoption of that back to, like, your non-technical leaders and the folks that, you know, hear the boardroom-level conversations: we need more AI adoption, we need more throughput.
[00:24:18] Andrew Zigler: How do you balance those conversations? That way you don't burn out your devs, but you still take advantage of opportunities in the market.
[00:24:25] Manoj Mohan: So I have been somewhat fortunate to be associated with organizations where the leadership understands that AI is not a magic wand that can solve all the problems, you know, with a magic bullet, right? That's been my blessing, because that has avoided a lot of undue pressure. But I can see that pressure build up in several other enterprises, right?
[00:24:53] Manoj Mohan: So the principles I try and abide by, even for the startups that I advise and all that, is, [00:25:00] one: do not start with AI. Do not start with the model. Start with your pain point or product problem statement that you absolutely have to solve to create a mesmerizing experience for your customers.
[00:25:15] Andrew Zigler: start with the problem.
[00:25:16] Manoj Mohan: Right. So you are in the business of solving for your customers, creating a compelling value proposition. Now, what is that, right? And what are the North Star metrics that are truly gonna move that experience for your customer, right? If you have clarity on that aspect of it, then break down that North Star metric into more granular, quantifiable metrics.
[00:25:44] Manoj Mohan: And then look at which of those metrics you are gonna solve for in a more meaningful manner. Right? For example, let's say you have done all the funnel analysis around your product analytics. You know that a significant [00:26:00] fraction of users are dropping off at a particular portion of the workflow. Maybe that workflow is too hard for
[00:26:06] Manoj Mohan: the customer to understand, maybe the integration or the data is hard for the customer. Is there value add in embedding a chatbot that understands the context of that particular step within the workflow and identifying the top three recurring reasons? Are you able to surface three default options for the user saying, hey, are you stuck on this particular step?
[00:26:33] Manoj Mohan: Are you running into this particular error? If so, click here. And if you could create that magical experience, you have automatically created a value proposition, right? So that's been my philosophy: stay centered around your core pain point, and then look at avenues, with AI or without AI, for how you can move the needle for your customers in a compelling way.
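To ground that example, here is a small Python sketch of the analysis behind it: find the step where users drop out of an ordered funnel and surface the top recurring reasons recorded at that step, which is the context a step-aware chatbot would present back to the user. The step names, counts, and error reasons are made up for illustration.

```python
from collections import Counter
from typing import Dict, List, Tuple

def dropoff_by_step(step_counts: Dict[str, int], steps: List[str]) -> List[Tuple[str, float]]:
    """Share of users lost at each transition of an ordered funnel, worst first."""
    losses = []
    for prev, nxt in zip(steps, steps[1:]):
        entered = step_counts[prev]
        lost = (entered - step_counts[nxt]) / entered if entered else 0.0
        losses.append((nxt, lost))
    return sorted(losses, key=lambda x: x[1], reverse=True)

def top_reasons(error_events: List[Tuple[str, str]], step: str, k: int = 3) -> List[str]:
    """Top k recurring error/abandonment reasons logged at a given step."""
    return [r for r, _ in Counter(reason for s, reason in error_events if s == step).most_common(k)]

# Illustrative data: most users who reach "connect_bank" never reach "first_invoice".
counts = {"signup": 1000, "create_profile": 820, "connect_bank": 700, "first_invoice": 265}
steps = ["signup", "create_profile", "connect_bank", "first_invoice"]
errors = [("first_invoice", "bank auth timed out"), ("first_invoice", "unsupported bank"),
          ("first_invoice", "bank auth timed out"), ("first_invoice", "CSV import failed")]

worst_step, _ = dropoff_by_step(counts, steps)[0]
print(worst_step, top_reasons(errors, worst_step))
# A step-aware chatbot would surface those reasons as default options:
# "Are you stuck on this step? Are you running into 'bank auth timed out'? If so, click here."
```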
[00:26:58] Andrew Zigler: So you [00:27:00] find the problem that your customers have, and you solve it, and you use AI as a tool, as a catalyst, to make that change and to solve that problem, not just using AI for AI's sake. You know, when you have a hammer, everything looks like a nail is kind of the scenario we fall into.
[00:27:15] Andrew Zigler: And I really like how you highlighted the user intent, what they are coming to you for. As a company, as a software provider, as an organization, you're solving something for somebody. And it's important to understand what that solution is and what makes it valuable to that person.
[00:27:59] Andrew Zigler: That way you can build that better experience for them. But there's also something to be said about understanding intuitively their intent, what they're trying to accomplish when they come to you and use your products. And before AI, it can be hard to understand intent. Like you said, you look at the adoption or the usage of a tool or a workflow, and you see at some point there's a big drop-off, and maybe you have a lot of theories and you run a lot of tests and you figure things out.
[00:28:00] But there could also, then, in this world, be an opportunity for AI to be a part of understanding the user's intent and then using that to keep them embedded within the system, keep them from dropping off. We sat down with Lake Dye recently, who put this beautifully, about how intent was solved for with search: how, you know, the user searches for A, but they really mean C, and we serve them B, and then they click on D.
[00:28:25] Right. And ultimately D was what they wanted, but that was really far from what they said. So you had to really neatly understand their intent to step their way there. And I think AI is going to enable a lot of tools and software to have that same delightful experience that search has had for us.
[00:28:45] Andrew Zigler: And I think that's really exciting.
[00:28:46] Manoj Mohan: Yeah, absolutely. Absolutely. I think all of these tools, all of these, are options we gotta leverage. But the attempt should not be to leverage them for anything and everything that you're trying to do, right? [00:29:00] Uh, that decision making, that ability to invest in the right tool in the right area, is where the engineer brings the best value, right? AI is absolutely here to do all of your mundane code generation stuff, and it's essentially there to free up your engineers to go after the bigger, bolder, more compelling, more meaningful work items, so that we're able to serve our customers, our stakeholders, and improve the ecosystem for the better.
[00:29:31] Andrew Zigler: Wow. I mean, this has been an amazing deep dive on how you think about using this as a force multiplier, but also how you lead teams and are educating them, and everyone is upskilling right now. And I really like your optimistic perspective on how teams are gonna be utilizing these systems. And, um, you know, you're giving a panel today at ELC, and so there'll be more to follow from you as well.
[00:29:51] Andrew Zigler: But I'm curious, for our listeners and those that are tuned in today for our conversation, where can they go to learn more about Manoj and what you're working on?
[00:29:59] Manoj Mohan: Yeah, so I [00:30:00] post a newsletter about all of the stuff that I'm fascinated with. Um, most of the stuff I deal with is AI and engineering leadership and productivity gains and hacks and all of it. So that newsletter is on LinkedIn, and I also have a Substack, uh, you know, a Substack newsletter. Um, yeah. And...
[00:30:19] Andrew Zigler: Well, then, us too. We'll plug you in with our community, and we'll get those linked in our show notes. That way folks can go and check that out. And then I just really wanna thank you again for sitting down with us in the Dev Interrupted Dome. It's been really great chatting with you.
[00:30:31] Manoj Mohan: And thank you. Thank you. The pleasure is all mine. And, uh, I've always been a big, uh, fan of your podcast and all the amazing guests and the topics you cover. So, uh, so please continue the great work.
[00:30:42] Andrew Zigler: Thanks so much. It's great to sit down with community members. So thanks again, and thanks for tuning in to Dev Interrupted.



