Each phase should have a deliverable. It might not be a specific product, and it's okay to be imperfect, but let's have a goal. And when we have a goal and a plan at each phase, then big foundational work might seem achievable. Although you cannot see ROI at phases one and two, I know I can give you that ROI back at phase six. In that case, you'll be more convinced than if I go to you and say, 'Hey, I don't really know how to get to phase six, but we need to do phase one.'
In this episode of Dev Interrupted, Conor Bronsdon is joined by Sanghamitra Goswami, Senior Director of Data Science and Machine Learning at PagerDuty. Sanghamitra shares her expertise in AI and data science, including how engineering teams can effectively leverage both within their organizations. She also explores the history and significance of LLMs, strategies for measuring success and ROI, and the importance of foundational data work. The conversation ends with a discussion about practical applications of AI at PagerDuty, including features designed to reduce noise and improve incident resolution.
Episode highlights:
00:56 Why are LLMs important for engineering teams to understand?
03:17 How should engineering leaders think about using AI in their products?
07:57 What sort of plan should engineering leaders have to get buy in for AI?
13:22 Are there ways to show ROI on an investment in AI?
15:08 How should we communicate with customers about AI in our products?
18:53 How can companies find a good use case for AI in their product?
Transcript:
(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)
[00:00:00] Sanghamitra Goswami: Each phase should have a deliverable.
It might not be a specific product or anything, and it's okay to be imperfect, but let's have a goal.
And when we have a goal and when we have a plan at each phase, then big foundational work might seem achievable. Although you cannot see an ROI at phases one and two, I know I can give you that ROI back at phase six, so this is very important.
In that case, you'll be more convinced than if I go to you and say, 'Hey, I don't really know how to get to phase six, but we need to do phase one.'
How can you drive developer productivity, lower costs, and deliver better products? On June 20th and 27th, LinearB is hosting a workshop that explores the software engineering intelligence category. We'll showcase how data-driven insights and innovative workflow automations can optimize your software delivery practices. At the end of the workshop, you'll leave with a complimentary Gartner market guide to software engineering [00:01:00] intelligence, and other resources to help you get started. Head to the show notes or linearb.io/events to sign up today.
[00:01:06] Conor Bronsdon: Welcome back to Dev Interrupted, I'm your co host Conor Bronsdon. Today I'm being joined by Sanghamitra Goswami, Senior Director of Data Science and Machine Learning at PagerDuty. Sanghamitra, thank you so much for joining me.
[00:01:18] Sanghamitra Goswami: Thank you, Conor, for inviting me.
[00:01:20] Conor Bronsdon: Honestly, it's my pleasure.
I think we could really benefit from your expertise as someone who has such a deep understanding of the approach that data scientists have taken to developing AI models. And, I mean, frankly, AI is all the rage right now, right? Everyone's talking about it. Everyone has opinions on what it is. And so it's important that we level set with the audience a bit, and have the opportunity here to pick the brain of a data science leader like you, and understand how engineering teams can translate this and leverage AI models, or LLMs in particular, within their org.
Why are LLMs important for engineering teams to understand?
[00:01:54] Conor Bronsdon: Let's maybe unravel some of the strategies that champion the role of data science, [00:02:00] machine learning, and AI teams, and help our audience understand how to navigate this emerging future and ever expanding landscape. Why don't we talk a bit about the history of LLMs, what they are. and why they're so important.
[00:02:12] Sanghamitra Goswami: You know,
Conor, it's been a crazy time now. Um, nine months back when ChildGBD came out, uh, PagerDuty leadership, they told me, Mitra, we need to do something with, uh, LLMs, and it, it's crazy the way the world is saying that, hey, we, we want to do LLM, let's have a feature that uses LLMs. So what are LLMs? Large language models, they leverage foundational machine learning, AI, deep learning.
Models to understand natural language and to give answers as so they can talk to us like a bot You look at the history of NLP, it's based on NLP, it's based on the NLP models that we have built out. If you look at the history of NLP, in 1948, Shannon and Weaver, the first paper came out. And, and, during that time, we didn't have the computer storage that is possible [00:03:00] now.
We couldn't really have a lot of computational power. So it was not possible to always run these large language models, because they require large amounts of text. However, from that start with Shannon and Weaver in 1948, if you fast forward, I would mention another milestone, where the transformer architecture got introduced; 'Attention Is All You Need' is the paper.
So if I look at these two milestones and how the landscape has changed, with the computational power and the GPUs, with everything in mind, now is the perfect time that we can all reap the benefits of years of research, computational power, and engineering. So this is the perfect time.
[00:03:52] Conor Bronsdon: Absolutely.
It's really interesting to think about how, even just a few years ago, before we realized that we could parallelize processes within AI model [00:04:00] development using GPUs, there just wasn't this speed of development that we've seen around AI models today. So I'd love to talk about how teams can actionably leverage AI or LLMs within their tooling.
How should engineering leaders think about using AI in their products?
[00:04:15] Conor Bronsdon: What would be your advice to engineering leaders who are thinking, Hey, I wanna start using AI to extend the capacity of our product. How should they kick off?
[00:04:24] Sanghamitra Goswami: I think, let's start with AI. Just AI. Someone wants to do AI with their teams. Okay, that is a huge challenge, because if you look at all the leaders in the industry,
how do we see if something is successful in the industry? We have to get some ROI with all our endeavors, right? And data science, AI, it's an experiment. So when we start building it, it is not always clear how this is going to show up. For example, with AI models, we try to build a model. We experiment with historical [00:05:00] data, and then we take the model out into the market.
And then it starts getting all this new data from our customers, and our model is giving answers live at runtime. So it's very difficult. It's an experiment that we are running. Leaders should be very careful to know what the risks of running these experiments are. That's number one.
Number two, they should be very careful about how they measure adoption, or how the results of these experiments are being used by end users. Once these two pillars are in place, I think it's easier for leaders to measure value and define ROI. Without these two, we can't really do AI. So these are the two main pillars, I would say, if you want to do AI.
Now there are other organizational challenges as well. For example, democratization of data. [00:06:00] Everybody talks about it, but it's a difficult job. The data is always in silos. There are different channels. If you look at marketing, there is social media, there are emails, there is YouTube or other social networks.
So many different channels, and you have to get the data together. If you look at logistics, there is carrier data on the ocean, on the road, in the air. It's just that if we want to measure any process, data is always in silos, and there is a huge effort needed on the side of the person who wants to do AI to do it successfully.
That's number one. And we have the architecture, we have the solution; we know that we need a data lake, or a source where all the data can be consolidated. But it's a huge [00:07:00] foundational effort that doesn't always show direct ROI. So as a leader in AI, you have to convince your leaders that, hey, I'm going to do this foundational effort.
However, you might not see a direct ROI right now. Executives talk about this flywheel effect; it's going to create that flywheel effect at some point. So that's the discussion they need to drive.
[00:07:28] Conor Bronsdon: Absolutely. I think it's really important for us to understand both the potential and the risks of AI. And I don't mean the risks of AGI, this world of, oh, crazy cyber-villain AI. That's fun to think about. Yeah, sure, we can talk about that. But more specifically to your business: there is a challenge where, if you don't put these foundations in place, there's major risk to how your business will present itself, whether that model will hallucinate.
And this is where it comes down to these foundational data science concepts you're talking about, like, [00:08:00] is my data siloed? Do we have the right training data? Is that training data validated? Are there issues with that data set that are going to cause long term issues? And when I've talked to other data science leaders like yourself, that is one of the things people really hone in on.
So I'm glad you bring up this foundational piece, because a lot of leaders are getting pushed by their board or their C-suite: oh, we need to get AI in our product. But if the data that you're feeding in to train the model isn't actually validated, and maybe peer reviewed or checked, there are major risks that you put in play.
[00:08:35] Sanghamitra Goswami: Yes.
Yes, absolutely. You need to know what your data can do. Without that, you know, garbage in, garbage out. You cannot save yourself even with an LLM. So,
yeah.
[00:08:46] Conor Bronsdon: Very well said, and I think it's challenging though for a lot of leaders to get buy in on that foundational work as you point out because there is an immediate ROI.
What sort of plan should engineering leaders have to get buy in for AI?
[00:08:55] Conor Bronsdon: So, how should engineering leaders start to [00:09:00] get that buy-in, and ensure they actually do all the steps needed to be successful?
[00:09:06] Sanghamitra Goswami: Yes. I think, we work in an agile world, so we should have a plan. We should have a plan with different phases. And I always say this to my team: each phase should have a deliverable.
It might not be a specific product or anything, but it should have a goal. It should have a deliverable. I don't know if you have read about Wabi-Sabi. It's a Japanese guiding principle. It talks about continuous development, and that imperfection is good. And I say this to my team: when we say let's do something, it is okay to be imperfect, but let's have a goal.
And when we have a goal and when we have a plan at each phase, then big foundational work might seem achievable. And that is very important. Say you are my boss, Conor, and I'm talking to you, and I'm [00:10:00] giving you a plan, and I'm saying, hey, at phases one and two I have a goal, and I know I can get to phase six. Although you cannot see an ROI at phases one and two, I know I can give you that ROI back at phase six. This is very important.
If I give you that plan, you'll be more convinced than if I go to you and say, 'Hey, I don't really know how to get to phase six, but we need to do phase one.'
[00:10:26] Conor Bronsdon: So if I'm an engineering leader who hasn't had deep experience with data science or AI, and I'm thinking about how to build this phased approach,
Um, what would be the general steps you would advise, or is there a resource where leaders can go in and say, Hey, let me look at, I don't know, a template to start applying to our specific use case?
[00:10:45] Sanghamitra Goswami: Yes, I think there are many. If you look on Google: how do you gather ROI for data science projects?
But I would say, before we do that, it is very important that in any organization there is a product counterpart to the data science engineering [00:11:00] manager. I believe there should be other people championing data science, to get buy-in, rather than the data scientists themselves.
So it is critical that you have a friend in the product organization because they can look at the product holistically from a top level view and they can help you. Go ahead.
[00:11:20] Conor Bronsdon: What would you say to people who are having trouble finding that champion or picking the right champion?
[00:11:26] Sanghamitra Goswami: Convince your boss, convince your boss that you need a product partner.
A single person or a group of data scientists can't always do everything. You need someone else, beyond your organization, who can champion for you. Sometimes an executive can play that role too, given some time and common goals. But I think it's critical that data science organizations have a product partner.
[00:11:50] Conor Bronsdon: And I'm sure this creates some translation challenges across the company, as you're trying to bring in these other stakeholders and get people to buy in, because you know you need the support, [00:12:00] but maybe the goals across those different organizations can be different. Yes. How would you try to solve that cross-organizational translation challenge, to get these champions?
[00:12:11] Sanghamitra Goswami: Well, I don't think the data scientists can solve it by themselves. What they can do is say, hey, I understand it's late in your roadmap; I can be ready on my part, and I can make it easier for you to understand and access what I'm developing. But I do think executives play a very important role here, because they need to drive alignment across different teams at the organization.
Let's say engineering team A and engineering team B both want to do data science, but they don't have time because their roadmaps are full of other projects. Whatever the data scientists do, it won't convince them, right? So the executives need to prioritize it, or a product partner who can look at it and say, hey, this data science feature, if we do it, will drive a much bigger [00:13:00] ROI than a small change or any other feature that we are taking out this year.
So we need executive alignment on roadmaps across teams, and also some other champions. But what data science organization leaders can do is think, okay, here are some benefits of empowering data scientists and data engineers so that they can write code well. I'm going off on a tangent here, because data scientists come from different backgrounds, and they are not always the best software engineers. So they need support from data engineers, and they need to productionize their code, to write production-level code. What the leader in the data science organization can do is make sure the organization is empowered to build something that is easily accessible and can be taken up by the engineering team, so the engineering team doesn't spend a lot [00:14:00] of time building it or understanding it.
[00:14:03] Conor Bronsdon: It's interesting because you're talking a lot about these change management concepts, frankly, of like, you know, getting organizational alignment, building up champions within the org, making sure you get buy in, so you can showcase that ROI, ensuring you have these phased rollouts and a clear goal for each step of your process.
Are there ways to show ROI on an investment in AI?
[00:14:20] Conor Bronsdon: What if you're having trouble getting that kind of buy in? Are there ways that, you know, data science or engineering teams can leverage currently available AI models or tooling to showcase the ROI and then create that buy in?
[00:14:36] Sanghamitra Goswami: Yes, there are many tools in the market that measure ROI, but it's a little bit difficult.
Once again, I think, it's an experiment. That's how I see data science. It depends on how much data your customers have. You know, some customers might be new. Some customers might not be storing data very [00:15:00] well. So the experiment that I have run on my end might not always be great when I run it with real-time data with all my customers.
So, having those risk factors figured out before release, or having a slow release so that you can talk to your customers and figure it out: Conor is a great customer, because we have five years of data, and our model is going to give very good results. Whereas Mitra might not be a good customer, because she has only six months of data.
Figuring out what fraction of your customers will get good results, and thinking about those risks ahead of time, makes a huge difference.
[00:15:41] Conor Bronsdon: I mean, we've talked about this some on the show before, about the importance for leaders of understanding the risks of even exciting opportunities.
Yes, and I think you bring up a good one, which is: it's really easy for us to over-exaggerate the impact of AI on a particular customer, or on a feature or product, where maybe the realistic truth is that it's going to [00:16:00] take time for it to develop, because you need that data integrity to actually build up, because you need more customer data.
How should we communicate with customers about AI in our products?
[00:16:06] Conor Bronsdon: How should you go about communicating with customers about what you're able to do with AI as you build your program?
[00:16:12] Sanghamitra Goswami: I think building trust with customers is key. At PagerDuty we have a process called Early Access, where our product is not fully built out, but we have a prototype, and we can ask our customers to use it and give us feedback.
I think that feedback is critical. They can tell us, hey, it's giving great results; they can tell us it's giving very bad results. Then we know, and we can improve. So this Early Access program is very useful.
[00:16:44] Conor Bronsdon: How are you leveraging AI at PagerDuty?
[00:16:47] Sanghamitra Goswami: We do a lot of AI. So we have five features, and when I say features, these are features which have different models in the backend.
So we have five AIOps [00:17:00] features, and AIOps, which used to be an add-on for our full incident response product, now is a separate BU. So we have a lot of features in AIOps, like noise reduction. I was just talking to someone who mentioned that it's always a problem when you have lots of alerts. And we are talking about a security camera, like a Google camera.
You keep on getting alerts, and then you are lost, right? The same thing happens when people use PagerDuty. People use PagerDuty when there is an incident and you are getting alerts. And if you get a lot of alerts, if you're inundated by alerts, you don't know which one to go for. So we have very good noise reduction algorithms, and we use AI to build those noise reduction algorithms.
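PagerDuty's actual noise reduction models aren't described in detail in this conversation, but the core idea of folding related alerts together can be sketched simply. The Python below is a hypothetical illustration, not PagerDuty's implementation: the time window, the text-similarity threshold, and the greedy grouping strategy are all assumptions made for the sake of the example.

```python
from difflib import SequenceMatcher


def group_alerts(alerts, window_seconds=300, similarity=0.75):
    """Greedy noise reduction sketch.

    `alerts` is a list of (epoch_seconds, message) tuples sorted by time.
    An alert is folded into an existing group when it arrives within
    `window_seconds` of that group's most recent alert AND its message is
    textually similar to the group's first message; otherwise it starts a
    new group. Returns a list of {"last_ts": float, "messages": [str]}.
    """
    groups = []
    for ts, msg in alerts:
        placed = False
        for g in groups:
            recent = ts - g["last_ts"] <= window_seconds
            similar = (
                SequenceMatcher(None, msg, g["messages"][0]).ratio() >= similarity
            )
            if recent and similar:
                g["messages"].append(msg)
                g["last_ts"] = ts
                placed = True
                break
        if not placed:
            groups.append({"last_ts": ts, "messages": [msg]})
    return groups
```

A responder would then be paged once per group rather than once per alert; a production system would use learned similarity rather than plain string matching, but the grouping shape is the same.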
Conor Bronsdon: That's, that's a super smart use case, because that kind of cognitive load really makes us start to tune out. I mean, I'm sure we've all been guilty of this, maybe with our email. Sometimes it's, ah, so many, okay, I just gotta get through these things. Yes. And it's so easy to miss something that might be important if you're not really staying on it. [00:18:00]
Uh, that's a great example of how you can leverage the power of AI to assist for your customers. Um, are there other ways that you see PagerDuty leveraging AI in the future?
Sanghamitra Goswami: Yes, we have root cause. So these are, I'm talking about, you know, non-LLM AIOps features. We have root cause, we have probable origin.
What we do with these features is, during the incident, during the triage process, we try to provide information to developers who are trying to figure out what has gone wrong and how we can resolve the incident faster. So we have a suite of features on that end. On the LLM side of things, we have three new features that are coming out.
These are our first GenAI features. We have a summarization use case. I think this is a very good use case, and, once again going off on a little tangent, I always say that if you want to do LLMs, find a good use case. And I think this is an awesome one. So, [00:19:00] during an incident, developers are trying to solve a problem, saying, okay, I'm resolving the incident.
But even during that phase, they have to update their stakeholders, or external companies, who are waiting for information about the incident that is going on. That's a very difficult job. Developers, who are always in the backend, need to write up an email and divide their attention between solving a problem and drafting an email.
So you give it to the generative AI platform, because now it can do it for you. And those conversations are already there in Slack, in Zoom, in Microsoft Teams. So why repeat it? Ask your generative AI model to write it for you.
[00:19:43] Conor Bronsdon: Smart.
[00:19:43] Sanghamitra Goswami: So I think this is a very, very good use case, and empowering to the developers who are using PagerDuty.
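The summarization workflow described above, collecting the responders' chat messages and asking a generative model for a stakeholder update, can be sketched roughly as follows. This is a hypothetical illustration, not PagerDuty's implementation: the `ChatMessage` shape, the prompt wording, and the `llm` stub are assumptions, with the real model call left as a pluggable placeholder.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ChatMessage:
    author: str
    timestamp: str
    text: str


def build_summary_prompt(incident_title: str, messages: List[ChatMessage]) -> str:
    """Flatten the responders' chat log into a single prompt asking an LLM
    for a short, stakeholder-facing status update."""
    transcript = "\n".join(
        f"[{m.timestamp}] {m.author}: {m.text}" for m in messages
    )
    return (
        f"Summarize the incident '{incident_title}' for non-technical "
        f"stakeholders in three sentences, covering impact, current status, "
        f"and next steps.\n\nChat log:\n{transcript}"
    )


def summarize_incident(
    incident_title: str,
    messages: List[ChatMessage],
    llm: Callable[[str], str] = lambda prompt: "(LLM summary here)",
) -> str:
    """Send the assembled prompt to whatever LLM provider you use.
    The default `llm` is a stub so the sketch stays self-contained."""
    return llm(build_summary_prompt(incident_title, messages))
```

In practice the chat log would be pulled from the Slack, Zoom, or Teams integration she mentions, and `llm` would wrap your provider's API call.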
How can companies find a good use case for AI in their product?
[00:19:51] Conor Bronsdon: And you mentioned this idea of ensuring you find the right use cases or good use cases. What's the approach that you should think people should take to that?
[00:19:59] Sanghamitra Goswami: [00:20:00] What is the problem that you are solving for? How would you provide relief to your customers or the end users? What would they find useful?
That's the key. And that's true for LLMs too. You want to find the use case, but is your use case solving the problem that is being asked by a lot of your customers?
[00:20:19] Conor Bronsdon: And that's just good business advice, period, right? Like, solve your customers' problems. The phrase 'make your beer taste better' has been popularized recently.
It's a great example.
[00:20:28] Sanghamitra Goswami: We say in PagerDuty: champion the customers.
[00:20:32] Conor Bronsdon: Great way to put it. I'd love to just get some more general thoughts from you about your viewpoint on AI in general, where the industry's going, the explosion of success here now that we have parallelized GPU models and we have years of them working. Obviously there's been an explosion in the public consciousness of the ability to leverage AI, and as you pointed out, this all goes back to old papers; this goes back to old sci-fi novels, frankly, where we talked [00:21:00] about these ideas.
Yes. And, and now we're seeing them come into reality. Uh, what are some of the things that you're excited about by the current AI revolution that's happening?
[00:21:10] Sanghamitra Goswami: I think I'm seeing very good use cases. One use case that I really loved: I am big on Instagram. I was looking at the photo editing capabilities, and you can just take out a person that you didn't like.
[00:21:25] Conor Bronsdon: So if you've got an ex boyfriend
or girlfriend or something like that, like, yeah.
[00:21:31] Sanghamitra Goswami: No, think about it. I want my picture in front of the Eiffel Tower, and I don't want anyone else in it. And I can do that now with AI. So I love it. I love some of the applications that are coming up. This is a fun one, but there are very useful ones too, if I look around.
Recently, when I was going through the Chicago airport, they did facial scanning, and they didn't actually scan my [00:22:00] boarding pass. Not an LLM, but still, it's so cool that I'm just walking and there is a machine scanning my face.
[00:22:07] Conor Bronsdon: I've got some privacy concerns, I have to admit, but it is very cool.
[00:22:10] Sanghamitra Goswami: Yeah. Yeah, it's just so cool. Yeah.
[00:22:12] Conor Bronsdon: Well, thank you so much for taking the time to chat with me today about this. It's been fascinating to dive into your thoughts about AI. Do you have any closing thoughts you'd like to share with the audience, about either how they should approach LLMs or what all of this change means?
[00:22:25] Sanghamitra Goswami: I would say: data science leaders, fight for space. You need to do more. Think of a good use case. Ask your executives for a product partner. Try to prove that the features you want to develop, the use case you are vouching for, the thing you want to build, is going to solve a customer problem.
And that it is needed. Write it up; I think writing is very useful. And give it out for people to consider; take feedback. Be vocal. Yeah, I would say that.
[00:22:58] Conor Bronsdon: Well said, Sanghamitra. [00:23:00] Thank you so much for coming on the show. It's been a distinct pleasure. If you're listening to this conversation, consider checking it out on YouTube.
We're here in the midst of an incredible LeadDev conference here in Oakland, and I think it would be a ton of fun for you to see us having this conversation live on the YouTube channel. So that's Dev Interrupted on YouTube; check it out. And once again, thanks for coming on the show.
[00:23:19] Sanghamitra Goswami: Thank you, Conor. Thanks for the invitation.