“I've seen a lot of developers that chase after technologies as a solution to figure out how they can just integrate their stack with what pops up or is popular in the market. I would say that the shift shouldn't be chasing the technologies. The shift should be focusing on what is the problem that you're trying to solve.” - Dr. Maryam Ashoori

In this episode, Andrew Zigler sits down with Dr. Maryam Ashoori, Senior Director of Product Management for watsonx at IBM. They explore the evolving AI stack for enterprise and the growing skill gap challenging developers. Dr. Ashoori shares insights from a recent survey of 1,000 developers, highlighting the need for better tools and strategies to manage the expanding AI tool sprawl.

The conversation also explores the rise of AI agents, the potential of no-code AI development, and the future of software engineering in an AI-powered world.

But first, host Dan Lines (COO of LinearB) sets the stakes for engineering leaders everywhere: the future of technical work will evolve with agentic capabilities. Must we all become “AI managers” now? Check out the full episode below for the scoop!

Show Notes

Transcript 

Andrew Zigler: 0:06

Welcome to Dev Interrupted, everyone. I'm your host, Andrew Zigler. Joining me today, as you know, is fellow host Dan Lines, COO of LinearB. Always great to have you here, Dan.

Dan Lines: 0:18

Yeah. Awesome to be on. Thanks, man.

Andrew Zigler: 0:20

Of course. Uh, you know, listeners, we have a great episode in store for you today. It's featuring a conversation with Dr. Maryam Ashoori. She's the head of product for watsonx.ai at IBM. She had a lot of really great insights on how AI is reshaping engineering work, and I'm really excited to share them. But before we get there, you know, Dan, I know you've listened to Dr. Ashoori's conversation, and you have a lot of insights too, in your role at LinearB, about how AI is reshaping engineering work.

Dan Lines: 0:49

Yeah, absolutely. I would say I talk AI all the time. Maybe per week I'm meeting with 10 customers or prospects that are utilizing LinearB in some way, and we talk AI in probably 8 or 10 of those meetings. So, like, absolutely, I think it's top of mind for everybody.

Andrew Zigler: 1:10

Yeah, unavoidable, an unavoidable topic for sure. It's really kind of hard for it to not come up anywhere. Sometimes when you have a conversation and AI doesn't come up, then it kind of becomes a delight or a rarity these days, right?

Dan Lines: 1:21

haha

Andrew Zigler: 1:22

You know, let's dig into some of that, some of what Maryam is going to be talking about: particularly how AI is going to be impacting the developer experience and developer productivity, and how AI is ultimately shaping the skills that developers need to be picking up in order to be more effective and get their work done. And I wanted to get a sense of, you know, how does that line up with things that you see when you talk with engineering teams and you see how they're realigning?

Dan Lines: 1:48

First of all, like, you know, what she was saying was very interesting, but I can just give my perspective, and I'll give it from two different angles. One is, I'm talking to developer experience teams. So you've got developers and you've got DevEx teams. DevEx teams know that they should be, or need to be, using AI to gain productivity for the business. So that's kind of understood now: okay, that's my mission, I need to use this AI in some way, shape, or form for my developers. But also what I'm hearing is it's still fairly unknown or unproven. They're having a hard time saying whether it definitely is improving productivity or not. So for example, the last customer that I talked to about it was like two days ago. And they were like, yeah, we rolled out, you know, Copilot to the entire engineering organization, did a survey of the teams, asked people. And one team said, okay, I think it's effective, and most of the teams said, I don't know yet. And so I think that's what DevEx teams are facing. Kind of like, hey, I know I need to be using this stuff, I know it needs to drive productivity, and I'm unsure if it is or not. So that's one problem to solve. But I think the other thing about it, and I know that we'll get into it, is being more intentional and specific about how it's going to make you productive. From the developer side, I'm also hearing a range of things. Like, we rolled some out at LinearB. We rolled out our AI code review to our developers. Some of the feedback is amazing: hey, we need to do this AI code review everywhere, it's already finding bugs and stuff, we should keep it. And then other times you might hear, okay, it still needs a little bit of work, like it suggests something different or wrong, right? So it's kind of still a work in progress. The most advanced thing that I've seen is we had a developer that's actually managing a team of AI bots. So it's like, okay, now I'm a developer and I have three bots that are getting the story done or the issue done: I have one creating tasks, I have one writing code, I have one QA-ing. So I know that sounds a little bit advanced, but I think for developers there's a big range right now of where you fall. Are you commanding a team of bots? Are you still learning prompts and that kind of stuff? So, that's what I see in the field.

Andrew Zigler: 4:23

I completely resonate with that. That's what I hear from folks as well. The folks that are really getting ahead, that are utilizing this to get ahead in their productivity, they're splitting their work into a bunch of little agents, a bunch of little workflows. But most importantly, they're managing those outputs. They're in charge of the decisions and the information that goes into them, but they're not spending as much of the time doing, which is what, uh, you know, Dr. Ashoori really focuses on in our conversation: that shift from being the doer to the decision maker, the decision manager, and how AI kind of gives you that focus, you know?

Dan Lines: 4:57

I really like that part. I think she said decision makers versus doers. It actually made me think of something pretty interesting. Like, when I was developing, and I think it's still this way today, sometimes you have a good manager and sometimes you don't have a good manager. And there is kind of this age-old problem of companies or people taking their best developer and saying, you're the manager now because you are the best developer. You're now the manager and you're going to lead a team of developers. And oftentimes that goes really badly. It's because you're trying to take a great doer and make them a manager. And then also, between doers and managers, there's just this natural conflict.

Andrew Zigler: 5:42

Yeah, that's the classic engineering problem, right?

Dan Lines: 5:44

Right, but now, when I was seeing one of our developers at LinearB managing these bots, I thought to myself, okay, we have one of our best engineers, who would never want to actually be a manager, like manage all the people stuff, don't want to do that, actually managing bots really, really well. She was like, yeah, I'm the doer through these bots, I'm also the decision maker of what they're doing, and I'm getting a ton of work done. Maybe at like 2x, 3x, I don't know, maybe even 10x. And I'm building a complete app. So it's almost like, I'm just wondering, maybe you can combine the best of both worlds now within one human with many bots.

Andrew Zigler: 6:31

I think that's what we're going to see. The way that you're describing it, maybe someone who doesn't want to evolve into a manager role, they love being the doer. That's why you have the manager tracks and the staff tracks, and that's why those things are very separate in what they do, because of the mentality of how the folks within them approach their work. But maybe a lot of the issues that make it difficult to become a good manager don't exist when you're working with an AI agent or an AI workflow, just because of the lack of all of the social everything that's hanging around it, right? You can boil it down to something that's more straightforward and repeatable. So then suddenly you get folks that maybe don't have the traditional manager skills or the managerial output evolving into those skills, because they can do so and it meets them at their level.

Dan Lines: 7:21

Yeah, I know it's a little bit out there, but like, imagine one senior developer that's like, I'll just use our company, LinearB: yeah, I lead 18 bots. I lead 3 teams of 6, but they're all bots, so I work on three products in parallel, and they report to me. That could be, you know, the world that we're going to. Instead of like, hey, I lead a group of 18 people.

Andrew Zigler: 7:47

Right. And what becomes important then is managing all of that context. You kind of become the living context for all of those workflows. You keep it aligned with other coworkers, who also all have their own little army of workflows. I think it's going to totally change how companies will even start to structure themselves from the beginning. We haven't even seen really what the new structure of, you know, an AI-native tech startup might look like in a year or two from now, at least on the personnel side.

Dan Lines: 8:18

Yeah, pretty crazy to think about. Very cool though.

Andrew Zigler: 8:21

We're talking about using these tools, um, and about applying them to do wider-level decision making. Another really great thing that she called out is about not chasing after them per se, but really understanding your goals and what you're trying to go for. You know, what did you think of this?

Dan Lines: 8:37

You know, you and I just talked about something that's maybe in the future, and if it feels a little otherworldly, let's bring it down to earth, like, right now. I think she's absolutely right. From the customers that I'm talking to, and this is the classic mistake in engineering, it's: hey, I want to go try a new technology just to do so, but I don't know what output or outcome I'm actually looking for. That's what I'm hearing when people are like, yeah, I just deployed Copilot everywhere, now my business is asking me what it did for me, and I can't even answer that question. That's way, way different. And I'll just, again, use the LinearB example of saying: you know what, I measured and I see I have a problem in my code review process. I then went to my CTO and said, hey, look at this data. We have a problem in the way that we review code. It's causing a big bottleneck for us. I am now going to deploy LinearB AI Code Review, or whatever code review you're going to use, to solve that problem with AI and prove back to the business that this was a good move, because we gain productivity for all the engineers, because everyone's doing code review. That's two different worlds, right? One is, I'm just trying stuff and I'm kind of doing it willy-nilly and I'm hoping for a good result, but I can't even measure or say what I'm going for. And the other one's super pinpointed: I know the ROI I'm looking for, I know the output I need, I'm reporting it to the business, and I pick an AI tool that's really good at that thing. That's, I think, high maturity, high responsibility.

Andrew Zigler: 10:18

Yeah, it really speaks to, like, you know, when you have that tool adoption, you need to make sure that you're managing it well and picking things that have high impact, that are going to lead towards the outcomes you want, and that don't necessarily play into the hype. And so you have to understand up front what you're trying to pinpoint. That way you can pick up the right technology that's going to work for you and still be able to move quickly.

Dan Lines: 10:37

Exactly what we just said is what the best DevEx teams are doing. The developer experience teams that I'm working with that I think are leading edge are doing exactly that. They pinpoint exactly where they're going to help developers. They use AI or some type of automation or bot to do so. And then they report back on the outcome; they can track it. And even better, I can see, okay, they're starting to put rules and governance around it and making it, right in the workflow, really, really effective. That's what killer, elite DevEx looks like to me right now.

Andrew Zigler: 11:14

And when you talk about these elite teams that you see and all the great things that they're doing, it actually calls to mind a little bit of what Dr. Ashoori was saying about the lack of general expertise and skills around this technology still, for a lot of the industry, a lot of the market that she's exposed to. We're talking about IBM, so she has a really big surface area. She had a survey that she did, even, and we're going to talk about it in our interview with her, where she talked to over a thousand U.S. software developers and found that, you know, only about 25 percent of them actually felt like they had a basic level of AI comprehension, right? And these are folks that are actually building AI tools and implementing AI right now. Um, and so there's a really big skills gap that she's calling out.

Dan Lines: 12:02

Yeah, first of all, I think it's really smart and I think it makes sense. And I'm kind of asking myself, is this any different than any other technology or skill set gain? Like, yeah, of course, everyone already said AI is changing the world. That's the impact, but that's not different than: hey, I need to learn how to use AI just like any other technology skill that I need to learn. And I think the gap is real, and I'm sure there are probably already companies doing trainings on it, or on prompt engineering. I hear that terminology all the time. But basically, at the end of the day, it's: let me coach you on how to use this AI stuff to make yourself more productive and grow your career. I think it's the same as anything else.

Andrew Zigler: 12:45

Yeah. Ultimately it's just going to demand a new skill set and an understanding of what those skills need to be. So it goes back to everything that she said and what we're going to talk about in just a moment: all the focus that you can get from AI. And so after the break, we're bringing Dr. Maryam Ashoori on the show. You know, we recently discussed on Dev Interrupted how IBM is shattering market expectations around Gen AI. After IBM's Q4 earnings results, shares of the tech company jumped by 13 percent in just one day. And did you know that a staggering 80 percent of IBM's tech revenue actually comes from Consulting, talking one-on-one with software developers about their problems? And you're about to get a slice of that insight for free, right here, now, on Dev Interrupted. So you don't want to miss this one.

Ben Lloyd Pearson: 13:33

Let's talk about something that's been on every engineering leader's mind: how do you measure success beyond just DORA metrics? That's exactly what LinearB tackles in their latest guide on how top teams track developer experience beyond DORA. If you're relying on deployment frequency and lead time or cycle time, you're missing the full picture of how engineering actually drives business impact. This guide breaks down real-world metrics that matter, stuff like developer satisfaction, cognitive load, and engineering impact beyond just shipping fast. Because what's the point of going fast if you can't see where you're going? If you're serious about improving your team, go grab your free copy now. The link's in the show notes.

Andrew Zigler: 14:18

I'm delighted to be joined by Dr. Maryam Ashoori of IBM. Dr. Ashoori, welcome to the show.

Maryam Ashoori: 14:24

Thanks for having me, Andrew.

Andrew Zigler: 14:26

Of course. I want to start by sharing your awesome background with our audience. You're the Senior Director of Product Management for watsonx at IBM, where you're focusing on simplifying the AI development life cycle and expanding it for enterprise applications. Your background includes leading engineering at Lyft Bikes and Scooters, pioneering user experience design for AI and even quantum technologies at IBM Research, and you hold a PhD in systems design engineering and two master's degrees in artificial intelligence. You hold a board seat at Virtual Science Teachers, a nonprofit with the mission of making science learning equitable and providing free resources for teachers and students. And you're an adjunct professor at Waterloo. I really don't know how you find the time to do all of that. That's an incredible resume, and we're really appreciative of your time today.

Maryam Ashoori: 15:19

Thanks for having me.

Andrew Zigler: 15:21

Of course, as a former teacher, I'm really excited about your mission to make science education engaging and accessible, and today has lots in store with us, so I'm just going to jump right in. One of your biggest focuses is on simplifying the AI stack for the enterprise. Why is this so important, and what are the biggest hurdles that we're facing?

Maryam Ashoori: 15:44

Listen, there's been a lot of excitement in the market over the past 18 months around generative AI, but the market is overlooking the complexity of what it takes to deliver on that potential. Behind every intuitive application, there exists an immensely complicated AI stack that our developers must master, take advantage of, and harness to unlock that value, right? So because of that, we've been focusing and looking into how we can simplify this AI stack for our developers. This is an area that is evolving rapidly. There are many tools popping up every day, there is a lack of AI skills among our developers that we can talk about, and there's the question of what the role of IBM is in simplifying this AI stack. And we looked into our watsonx portfolio, starting with that, which is the core IBM AI engine for enterprises to overcome some of these existing development challenges.

Andrew Zigler: 16:43

Okay, so watsonx is focused on that entire tech stack and not just the application layer. And we certainly hear a lot, from how people are adopting it, that it kind of becomes a Jenga tower of systems and designs that combine, and you might add or remove things and you're afraid it's going to fall. And we're moving towards, uh, you know, starting to standardize this, but it's still very early days. So what does it mean that watsonx is focusing on that entire tech stack versus the application layer we might be thinking of most days?

Maryam Ashoori: 17:13

Yeah, if you think about a year ago, what excited the market about Gen AI was the idea of accelerated delivery and development of enterprise applications. Historically, with traditional ML, the AI lifecycle started from data. So basically, enterprises collect a bunch of data, and then a data scientist or ML engineer goes in, creates a model, they train the model, and then they hand it off to developers to incorporate that into their application. With generative AI, in theory, a developer in less than five minutes can use one of the LLMs that are hosted somewhere behind an API endpoint and bring those AI capabilities into their applications. So the impact and, um, potential of this technology is significant. That five-minute access to the model that is hosted behind an API gives you an aha moment, an initial wow factor that we saw the market yearning for, or looking for, when they were exploring Gen AI and investigating Gen AI. But the majority of the market at this point has moved from that exploration to production and scale, where the story is different. When you go to production, and at the scale of enterprise, optimization comes into play. What model are you using? What is the size of the model? The largest general-purpose model might be very powerful for a wide range of downstream use cases, but may not necessarily be the best option in terms of being cost effective. So if developers really want to capitalize on and optimize their investments in AI, they need to go down the stack, not just the application layer, to be able to gain control and optimize the stack. And because of this reason, we've been also looking into the entire stack, not just the application layer, to see what the points are where we can help, and make those available to the developers to help them optimize every layer and build up on them.
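That five-minute path from signup to working call looks roughly like this; a minimal sketch, assuming a hypothetical hosted endpoint, model name, and response shape rather than any specific provider's API:

```python
import os
import requests

# Hypothetical endpoint and model name, for illustration only.
API_URL = "https://llm.example.com/v1/generate"
API_KEY = os.environ["LLM_API_KEY"]  # assumes a key is set in the environment

def generate(prompt: str) -> str:
    """Call an LLM hosted behind an API endpoint and return its text output."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": "example-8b-instruct", "prompt": prompt, "max_tokens": 256},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["generated_text"]  # response field name is an assumption

if __name__ == "__main__":
    print(generate("Classify this support ticket as billing, bug, or feature request: ..."))
```

The optimization questions she raises, which model, what size, at what cost, all hide behind that one model parameter once you move past the aha moment.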

Andrew Zigler: 19:18

Great. So it's about using it as a tool at all different parts of the software development lifecycle and the process of providing good software at scale, and using AI as a tool and a lens to really get in there and solve those hard problems. You've done an exploration of the complexities of generative AI in the enterprise: you spoke with over a thousand developers and uncovered some really interesting challenges recently that you talked about in a LinkedIn post. Can you elaborate on some of these findings for our listeners?

Maryam Ashoori: 19:48

Yes, we recently surveyed about 1,000 developers in the U.S. who are charged with building AI applications for the enterprise. Survey participants were coming from a range of development roles, including application developers, software engineers, and data scientists. And what we heard from them was actually reinforcing the challenges that we've been also seeing in the market and hearing from enterprises. The top two I'd like to highlight and call out here: one is the rapidly evolving market. Like, on a daily basis, a piece of technology is popping up, a new tool, a new model. And then just pair this rapid excitement and speed of innovation with the skill level of the developers, which is really my second point, and that makes the whole application development an extremely complicated and frustrating experience for some of the developers. On one side, the complexity is expanding on a day-to-day basis. On the other side, if you look into who those developers are, there is a wide range of difference in terms of their generative AI skills. With the developers that we talked to, it's interesting to see that, I would say, only 25 percent of the app developers in our audience self-associated themselves with generative AI knowledge and expertise. 25 percent of application developers. That's it. That's a huge gap. And these application developers are all building AI applications for the enterprise. So just think about this skill set: the need for simplicity to understand the AI concepts, the expectations from the market to go down the stack, really optimize the stack, and deliver on the potential of generative AI puts a lot of burden on developers. And that's one of the highlights that we saw in the survey.

Andrew Zigler: 21:54

A lot of burden, a lot of pressure, and it puts them in this really difficult spot where they have to take advantage of the productivity gains or otherwise start exploring AI now. They can't afford to wait, but they lack maybe the understanding, or a way forward, or a way of taking that first step. And that comes to a gap in skills, and that might also come from a sense of being overwhelmed. You hit on a really good point that it's evolving every day. Every day you can go on Product Hunt and find a new litany of AI-powered tools that you could start exploring. And the implication of this is that there's a lot of tool sprawl happening right now for developers that are building any kind of application. It doesn't have to be AI-powered; it could be a tool that they've been using since the beginning of software development for them, and now suddenly it has AI capabilities. And harnessing and using those tools is very different from being able to apply them across your entire tech stack. And so how can developers effectively manage the tool sprawl that we're all experiencing right now? What's the strategy?

Maryam Ashoori: 22:59

Well, developers need to master a wide range of tools. The developers that we talked to, on average, are using between 5 and 15 tools to create AI applications. So, 5 to 15 tools that are evolving every day. And a small percentage of them stated that they are willing to spend more than two hours on learning a new tool. So: limited time, an evolving market, an increasing number of tools that are available for them to learn and figure out whether that's the tool that they want to integrate or not, and then the burden of integrating an evolving stack with their legacy systems. So that's what they are dealing with. Parallel to this, the developers that we talked to, focused on the enterprise, consistently stated the need for performance, flexibility, ease of use, and integration as some of the top essential qualities that they are expecting from AI development tools. But then if you look into enterprise tools, these four characteristics are the rarest to find, right? So: not available in the market, evolving, and a number of tools that are emerging every day. Because of that, we've been looking into our own stack, figuring out how to simplify it in the sense that we are offering these developers not just point solutions but an integrated stack, providing guidance in terms of what the available choices and options are at every level of the stack, some sort of guidance on where to pick what, and trying to abstract the development from the underlying AI frameworks and models. So, sort of hiding away the complexity of what's going on behind the scenes, but helping you pick the right model, with an opportunity to customize the solution that you need, which is really what the developers need.

Andrew Zigler: 25:03

Right, and that's really core to the enterprise problem as well, because you have to scale it, but it has to uniquely fit your organization like a glove, so that it's accurate, so it's compliant even in some cases, and you can actually count on it to help get your work done.

Maryam Ashoori: 25:19

When it comes to the enterprise, you have the scale that you mentioned, and you have the regulations, especially in highly regulated industries like finance, health care, insurance. You can't just put something out there; you need to be able to trace, or have transparency and traceability into, how it's performing, which makes it even more difficult. And last but not least are the complicated, often legacy systems that the enterprise has, which now need to be integrated with the modern AI stack.

Andrew Zigler: 25:48

Those legacy systems are, I think, the killer part of this equation: the old hulking machines that you're carrying from the past, those algorithms maybe written in a long-dead language that you only have a few experts for within your company now. Those are the things that you have to also reimagine when you think about reimagining the enterprise with AI, which is a very difficult obstacle to cross. And you mentioned that when developers look for tools, and this is kind of a universal thing for tool adoption, they're looking for ease of mastery. They want to get good at it, get better at it, and become an expert in it. They want performance and flexibility, the ease of use, and they want that integration, the openness to make it do what they want to do. And when you talk about AI, and how it's kind of like scalable thinking, the computational output of an idea, then when you combine it with those questions of, is it easy to master, is it performant, is it flexible, it becomes uniquely tied to the use cases. So everyone is learning, in these little silos, how to use AI for their specific use case. And it's about surfacing those universal truths in AI application development and bringing it into your tech stack. And so how do you expect the AI developer experience to change in the coming year?

Maryam Ashoori: 27:03

I would say the highlight of that is productivity. 99 percent of the developers that we talked to and surveyed used this technology, really coding assistants, in some capacity. So this is using AI for code generation, code explanation, even using AI for root cause analysis, figuring out the root causes of issues and bugs, and fixes, like recommendations on how to optimize the code. That's what we have been seeing the developers using. We were told by the developers, in the survey that we did, that it typically saves them three hours or more per day of typical development. Just imagine the efficiency that a developer or software engineer can take out of using coding assistant tools: they can expand the reach of what the application can do, because now, instead of spending the time figuring out the root cause of a bug, with fixes and recommendations made to them, they can just approve and move on, and spend that energy on thinking about how to optimize further, or add new features, or reach a wider audience with a better experience. So I would expect the coding assistants to continue evolving and, um, massively change the experience of AI developers. One thing that I keep thinking about, about how software engineers' roles and responsibilities are going to change in the future, is that software engineers that are using code assistants, that are using AI for code development, are going to replace software engineers that are not using AI today.

Andrew Zigler: 28:48

Yes. So you're hitting on a really good point about how software developers are using AI now as this cognitive partner, right? When creating code, when working with code, you pointed out a really key thing: asking it for a code review, or to explain what a chunk of code is doing, or otherwise to talk with you. And this is like the rubber-ducking experience brought to life. Engineers always have this famous metaphor of, you know, talking to the rubber duck: explain every line of your code to the duck, and if it makes sense out loud, then that's like one extra step of verifying your work. And in this case, the AI is kind of serving as that rubber duck. And it's talking back to you; it's making assessments and judgments in its feedback, from what it knows about, you know, your enterprise and what your outputs and impact and goals should be. And so what you're hitting on is a very good point: the engineers, and everyone who's using those skills, those are the skills of the future. The skill of an engineer isn't solely in code creation. It's also going to be being a mini manager of AI systems, of AI tools, and having the output of maybe even a small team when combining all of those things they're orchestrating together. So now we're talking about developers having to build skills and be like a manager for AI, in many senses. So how can an engineering team, or an individual engineer, stay ahead of that curve and adopt that tooling in a way that's going to remain relevant over time? Do you think that means they need to become like a manager, and think like a manager by default? How does it change the skill set out of the box?

Maryam Ashoori: 30:21

Well, it's a partnership, right? The example that I keep thinking about is: when I went to school, calculators were banned from school, because we were expected to learn how to do it ourselves. But then, over time, that evolved into how to effectively use a calculator to focus on problem solving versus how the math works behind the scenes. This is a similar metaphor that we can apply to the software engineers of today. So instead of actually doing it yourself, you can play the role of an architect to optimize the code. You can figure out what the effective technologies and pipelines are that you can pull in from different sources to optimize and streamline your own consumption workflow, right? So it's a level of efficiency that we're going to see in enterprise application development that can truly accelerate the reach of what we can do and what we can build upon AI technologies, and bring efficiencies to every single corner of the enterprise through that.

Andrew Zigler: 31:32

So it's about becoming an orchestrator of these tools, understanding the inputs and the desired outputs, and really kind of realigning how we think about how we do work as an engineer. Maybe the definition of what that work is, or the composition of it, changes to allow you to move at a very rapid pace, to be accelerated by AI.

Maryam Ashoori: 31:52

Yeah, one good example of that is agents. You know, there is a lot of excitement around agents these days. It's like 99 percent of the developers that I talk to are either exploring AI agents or building AI agents. So if you think about AI agents, it's a matter of: what are the tools that, as a software engineer, you can make available to this agent? And what are the access controls that you are going to delegate to this agent to take actions for you? One good example of that is code interpretation. So now the LLM can generate the code and interpret the code. Let's say it's SQL code: the agent can go query a database and grab the information. But at the same time, the software engineer is the one that decides the scope of what actions should be performed by this agent. Do I allow this agent to just query data, or is it also okay for it to edit the data or remove data points? Because we know there are lots of limitations associated with, um, LLMs: hallucination, lack of explainability, this is unsupervised learning, uh, lack of reasoning, limited planning capabilities. These capabilities are going to evolve in the future, but in the current state, the role of the software engineer is critical to define all those constraints to effectively utilize the agent for the use case that they expect.
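To make that access-control decision concrete, here is a minimal sketch, not any particular agent framework, of how an engineer might scope a SQL-generating agent to read-only actions; the run_agent_sql helper and the whitelist are illustrative assumptions:

```python
import sqlite3

# The engineer, not the model, decides the scope: this agent may only read.
ALLOWED_VERBS = {"SELECT"}  # widen to {"SELECT", "UPDATE", "DELETE"} only as a deliberate choice

def run_agent_sql(conn: sqlite3.Connection, generated_sql: str) -> list:
    """Execute LLM-generated SQL only if its leading keyword is whitelisted."""
    verb = generated_sql.lstrip().split(None, 1)[0].upper()
    if verb not in ALLOWED_VERBS:
        raise PermissionError(f"Agent may not run {verb} statements")
    return conn.execute(generated_sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada')")

print(run_agent_sql(conn, "SELECT name FROM users"))   # allowed: [('Ada',)]
# run_agent_sql(conn, "DELETE FROM users")             # raises PermissionError
```

A leading-keyword check is deliberately coarse; in production you would back it with database-level permissions too, but the point stands that the constraint lives with the engineer rather than the model.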

Andrew Zigler: 33:20

That's a really good way to drive our conversation towards what everyone's really talking about right now, which is this agentic AI experience in the enterprise. And it's caught like wildfire within a lot of communities talking about how they're going to be applying this to their workflows. And it often starts with a definition kind of discussion: what is an agent? When does an AI tool become an agent, versus something that I'm using ad hoc or on demand, like the conversational chatbots we've been used to? I'm wondering, how are watsonx.ai and your team approaching building and deploying AI agents at scale when we're still defining what even the basics, like trust or good outcomes, mean?

Maryam Ashoori: 34:03

Yeah, we've been focusing on two areas. One, targeting developers in terms of what they would need to manage their agentic AI lifecycle, all the way from creation of the agents to deployment of the agents to monitoring the agents' behaviors, right? And the two principles that we've been tracking throughout designing this lifecycle management are choices and simplicity. Simplicity, we talked about it, but choices is the essential element today for optimization, and we all know developers love choices, love customizations, not out of the box. So, at every single step of the stack, I'll give you an example. For agent creation, you're going to need a series of LLM models. They can come from open source; they're different sizes, different architectures, with different multi-modality capabilities, supporting agentic behavior. So you have access to a range of models. Then, one level higher than that, we have libraries and frameworks, AI libraries, like, for example, LangChain, LangGraph, LlamaIndex, CrewAI, IBM Bee. There are a series of agent frameworks that are evolving in the market, and developers need to mix and match and experiment at this point to figure out what is going to work for them in their situation, right? And then when you go up the stack, there's their application, and the simplicity of abstracting all of this from the lower parts of the stack. And in addition to all of this, they want the agentic workflows to be connected to external workflows: a series of tools like web search, a calculator, custom tools like a custom RAG pipeline as an agent. So we want them to be able to define custom tools, and also have access to the ecosystem of tools that are available. Bringing it all together on the build phase, we make it available to the developers, so they have options to mix and match to build their agent. And then once they build it, they make a decision, choices again, about where to deploy this agent: on multi-cloud, AWS, IBM Cloud, GCP, Microsoft, or on premises. So, total flexibility in terms of where to deploy that agent. And then they switch to: okay, now I have an agent that is deployed behind an endpoint, and I want to use it in production, in my application, but I need to monitor the behavior of the agent. They need to have transparency into what the agent is doing, the series of actions, and be able to trace that behavior. So observability is going to be a huge piece of agentic workflows. And that's really the end-to-end picture for developers that we have been focusing on. Parallel to this, there's a series of pre-built agents, like agents that are connected to, let's say, CRM systems or HR systems, or agents specialized for sales and marketing, that we want developers to be able to import into the platform: the developer grabs them and also customizes them. Again, back to not out of the box: I want to be able to customize it and go through the cycle. These are really the two major areas that we've been focusing on.
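Much of that mix-and-match build layer reduces to registering the tools an agent is allowed to call and tracing each call for observability. A framework-agnostic sketch of that idea, with hypothetical tool names and a stubbed web search, not the watsonx or LangChain API:

```python
from typing import Callable

# Registry of tools the engineer chooses to expose to the agent.
TOOLS: dict[str, Callable[[str], str]] = {}

def tool(name: str):
    """Decorator that registers a function as an agent-callable tool."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        TOOLS[name] = fn
        return fn
    return register

@tool("calculator")
def calculator(expression: str) -> str:
    # Toy arithmetic evaluator; a real system would use a safe math parser.
    return str(eval(expression, {"__builtins__": {}}, {}))

@tool("web_search")
def web_search(query: str) -> str:
    return f"(stub) top results for: {query}"  # stand-in for a real search tool

def run_step(tool_name: str, tool_input: str) -> str:
    """One act step: the model picks a tool; we execute it and trace the action."""
    if tool_name not in TOOLS:
        raise KeyError(f"Agent requested unregistered tool: {tool_name}")
    result = TOOLS[tool_name](tool_input)
    print(f"[trace] tool={tool_name} input={tool_input!r} -> {result!r}")
    return result

run_step("calculator", "3 * (2 + 5)")  # traced, returns '21'
```

The trace line is the seed of the observability she describes: every action an agent takes should leave a record you can inspect later.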

Andrew Zigler: 37:26

And for a team that maybe finds themselves with access to these tools, or starting to spin these up, what practical advice would you give somebody if they are put in a position of tying it to ROI right now? Like, experimentation is great and it could be wide open, but if you're put in a position of having to tie it back to a real, actionable, and impactful use case, how would you recommend they do that?

Maryam Ashoori: 37:48

I've seen a lot of developers that chase after technologies as a solution to figure out how they can just integrate their stack with what pops up or is popular in the market. I would say that the shift shouldn't be chasing the technologies. The shift should be focusing on what is the problem that you're trying to solve. Because then, once you understand your use case and the problem that you're trying to solve, you would be able to look into the market and differentiate between what is noise and what is actually applicable to you. So instead of mastering a series of tools to figure out which one is applicable to your case, you have a very good view of what you're looking for, and you can just do an initial assessment of where the market is going, or what's becoming available, through the lens of what's going to work for you.

Andrew Zigler: 38:38

So it's about resetting the conversation at the beginning. It's not about experimenting with AI for AI's sake. It's about identifying a problem core to your organization that, up until now, you haven't had the resources or the time or the skills or even the technical ability to solve, and seeing if perhaps AI would allow you to unlock that problem.

Maryam Ashoori: 38:59

Precisely.

Andrew Zigler: 39:01

Is there a role within this world that you think is going to be the most impacted? I think they're all going to be impacted in some way, but what role within software engineering do you think has the most change ahead of it?

Maryam Ashoori: 39:14

Well, I would say that the software engineer role, like the title, is not going to change. We're going to need software engineers, and we are all software engineers at heart. Just looking at my own career over the past 15, 20 years, the field has evolved, right? Like, we've been taking advantage of the evolving architectures. Obviously, the past two years have been insane, but even over the past decade it's been evolving. So it's going to continue to evolve. I would say that the major change is not to the title of software engineer; it's going to be to the day-to-day way of how they do their job, in terms of how they take advantage of AI to go through the whole lifecycle of code: code creation, code reuse, debugging, all the way to deployment, CI/CD pipelines, and efficiencies in monitoring, even using AI to monitor those systems. Like, when an IT incident happens, for example, now we can see that an LLM can generate a report of what the potential causes of that incident are, and generate a potential script that the developer can review and approve to run. So instead of the developer going through all of that, they free up their time to focus on something else. But this also means that the set of skills expected from software engineers is going to evolve. So perfecting the code may not be the skill that you go after, but rather having an ability to understand what perfect, optimized code looks like, and whether this optimization that the AI is suggesting makes sense or not. You see the difference? Instead of acting to perfect it, you have a vision of, like, sort of figuring out if that's the right move. Sort of going up the stack in terms of decision making. They are going to be decision makers versus doers.

Andrew Zigler: 41:19

Yes, this makes sense. And really, what I liked about what you said was: we all kind of become software engineers, in a way. Accessing and creating and working with technology like this has never been more accessible than it is today, and it will only continue to become more accessible. You're going to have situations and environments where somebody in middle school could sit down with an LLM and spit out an application that maybe, like, five years ago would have taken a team of 10, you know, adults to make, right? It goes back to how technology becomes more accessible over time. It's also about why people need to really understand how technology works, so that our world can be a safer and better place with the technology as it grows. It makes me think of even, like, the 80s or the 90s: think of the earliest people connecting to the internet. Those were categorical nerds, right? There was a very small subset of people connecting to the internet, and people wouldn't think about how that would transform the world now. Someone not in that world would never be like, oh, I'm not going to connect to the internet, I never need to learn how to do that, that's not what I do. But now the internet is ubiquitous, and anybody can access the internet from almost anything. It requires no skill, notably, to do so. It becomes at your fingertips. What we're experiencing, I think, with AI is a lot of those engineering skills and tools becoming at people's fingertips more broadly. So you can see it being applied in a lot more places, but the unique expertise and the human factor of what the developer brings to the experience is still going to be very much needed. And so that's kind of what I hear resonating from what you said. Um, and I like the idea that, you know, everyone kind of becomes software engineers in a

Maryam Ashoori: 42:56

And honestly, this is what I'm most excited about when I think about AI in the future. AI development is going to be accessible to basically everyone.

Andrew Zigler: 43:05

Oh, that's the future that I think excites me the most too: seeing how people who actually are not from a software development world start to imagine creating software that works for their unique situations or use cases, with a perspective that engineers before would have never had, maybe just by nature of even being an engineer. By opening the door, it allows a lot of different perspectives, but it also puts on everybody a greater burden to understand the technology and to make it safe and make it effective. That way we can have tools that, you know, make our world a better place, not just maybe a noisier place. Are there any particular projects on the horizon for you that are exciting or catch your interest?

Maryam Ashoori: 43:46

Yeah, so related to this, one of the exciting parts that I keep seeing emerging in the market is no-code interfaces for development. And back to our conversation about accessibility: with these no-code interfaces, not only are developers using them, but it's also opened up to business users. Now business users can go to a no-code RAG (retrieval-augmented generation) builder, which is one of the popular use cases for Gen AI, to create the whole pipeline. And then there is a no-code optimization approach on top of that, where we've been able to automatically optimize the pipeline. So I'm like, perfect: a business user can now go create the RAG pipeline and optimize it. Obviously, that technology is going to evolve, but we keep seeing how it's impacting the nature of the development tools that are available in the market today versus where it's going to go in the future.
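Under that no-code surface, a RAG pipeline boils down to retrieve-then-generate. A bare-bones sketch, using word overlap as a stand-in for real embedding search and a stubbed LLM call:

```python
DOCUMENTS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
    "Enterprise plans include dedicated onboarding and SSO.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (stand-in for embeddings)."""
    q = set(query.lower().split())
    ranked = sorted(DOCUMENTS, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    return f"(stub LLM answer grounded in: {prompt[:70]}...)"  # stand-in for a hosted model

def answer(question: str) -> str:
    """Retrieve supporting context, then ask the model to answer from it only."""
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

print(answer("What is the refund policy?"))
```

A no-code builder wires up exactly these two stages, plus the optimization pass she mentions (choice of embedder, chunking, retriever depth), behind a visual interface.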

Andrew Zigler: 44:42

I'm a big fan of that too, just general no-code, low-code applications, drag-and-drop builders for getting basic logic out the door. And I think we've seen, over the last many years, you know, non-engineering, non-tech folks embrace and use those tools to drive huge impact within their orgs. And so when you think about combining that with AI, I'm excited to see what the future of maybe non-engineers will build for our engineering world. But with that in mind, Dr. Ashoori, thank you so much for joining me. This has been a really fun conversation, exploring some of the potentials of the future, but also diving into some real, actionable, and qualitative results that you got from feedback from developers and on the tools and systems that you work on. And your work is only going to continue to evolve. So where can our audience go to learn more about you and to follow what you're working on?

Maryam Ashoori: 45:30

watsonx.ai. That's my product, targeting building, developing, deploying, and monitoring AI models. And you can follow me there: watsonx.ai.

Andrew Zigler: 45:42

Perfect. That's it for this week's episode. A huge thank-you again to Dr. Maryam Ashoori of IBM for joining us today and sharing her insights. And to our listeners, if today's conversation has sparked ideas or controversy, don't forget to subscribe to Dev Interrupted. Check out our Substack for deeper dives into these topics. And we'd also love to hear your thoughts and continue the discussion on socials. Maryam and I are both on LinkedIn and would love to hear your feedback on what we talked about today. And until next time, see you next week.