Google DeepMind's AI playbook for engineering at hyperspeed


By Philipp Schmid

"It has never been easier to learn how Git works and the Git commands, by having LLMs explain the concepts to you… It might sound scary now to become a software developer, but I think there never has been a better or easier time to get started."

What if the traditional engineering career path is being fundamentally rewritten by AI? 

We're joined by Philipp Schmid, Senior AI Developer Relations Engineer at Google DeepMind, to explore how artificial intelligence is not just a tool, but a force reshaping engineering roles, team dynamics, and the foundational methods of skill development. Philipp, with his background at Hugging Face and now at the cutting edge with Google DeepMind, offers a unique perspective on the rise of AI-native teams and engineers who learn faster, work more broadly, and drive innovation at an unprecedented scale.

Philipp offers an inside look at Google DeepMind's engine of AI innovation and breaks down the key differences between Google's flagship Gemini models and the versatile Gemma family of open models, detailing their distinct purposes. We also touch on exciting takeaways from the recent Google I/O event, including powerful new on-device capabilities and the mind-blowing text-to-video generation with Veo.

Finally, Philipp shares practical advice for engineers and their organizations on navigating this AI-driven landscape, emphasizing continuous learning, an adaptable mindset, and how to effectively leverage a diverse AI toolkit to thrive.

Transcript 

(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)

[00:00:06] Andrew Zigler: Welcome to Dev Interrupted. I'm your host, Andrew Zigler.

[00:00:09] Ben Lloyd Pearson: And I am your host, Ben Lloyd Pearson.

[00:00:12] Andrew Zigler: This week we're looking at how Cloudflare is using agentic AI, an interesting new partnership between AI and consumer culture, how developers really feel about their code assistants, and a special anniversary that only nerds can celebrate. Ben, what catches your eye?

[00:00:31] Ben Lloyd Pearson: Well, I think I'll save the celebration for last, so maybe let's start with the new partnership with AI.

[00:00:38] Andrew Zigler: Yeah, so there's been some interesting news in the last week of OpenAI partnering with, of all companies, Mattel, to create and design new products, without necessarily licensing the toy maker's intellectual property, a key detail. Uh, this isn't a partnership where Mattel is somehow giving training data to OpenAI, although I'm not sure how useful that would really [00:01:00] be.

[00:01:00] Andrew Zigler: But instead, this is a partnership to create an end consumer product: the creation of digital assistants based on Mattel characters, or games that are more interactive within the Mattel universe. Um, we're all pretty familiar with how Mattel has recently branched out into things like mobile games and even making movies. You might remember the Barbie movie that came out in recent years. And OpenAI has also been courting companies that help them enhance their product. So it's an interesting collaboration between two companies that you wouldn't necessarily expect to work together, and their first collaborative release is expected to come out later this year.

[00:01:33] Ben Lloyd Pearson: Yeah. You know, recently I've been thinking a lot about why it's software engineering that's getting all of the focus with AI disruption. We have all these companies, like the company that built Cursor, that are rumored to be generating half a billion dollars in revenue, which is pretty, pretty crazy.

[00:01:50] Ben Lloyd Pearson: But at the same time, like there are so many other fields out there that have way more potential for disruption, potentially even faster than software development. You know, Andrew, just think about the amount of [00:02:00] content that we produce here at Dev Interrupted, like

[00:02:02] Andrew Zigler: Mm-hmm.

[00:02:02] Ben Lloyd Pearson: been completely transformed by AI already.

[00:02:05] Ben Lloyd Pearson: Yet there aren't a whole lot of products that are coming out, and there's not a whole lot of focus on that market for some reason. So, you know, this story is a little bit outside of the typical beat here at Dev Interrupted. It's not about engineering or anything like that. And it's, you know, certainly not about producing higher volumes of code with AI.

[00:02:24] Ben Lloyd Pearson: But I think what's really happening here is that we are seeing all these stories about how AI companies are fighting in courts over intellectual property ownership. And, you know, I think the future is really gonna be built, at least in AI, by companies that have good training data.

[00:02:40] Ben Lloyd Pearson: Like, the better your training data is, the better your models are gonna be. If you have poor training data, your models are gonna reflect that. So I can easily imagine a future that we may be moving towards where there's new regulations, new laws around the use of intellectual property within generative AI.

[00:02:56] Ben Lloyd Pearson: I think what's happening is OpenAI is trying to [00:03:00] use IP to establish a moat. Like if they build these partnerships with third parties and get access to their commercial data, that actually starts to build a bit of a moat for them where they have high quality training data that they potentially are the only ones that are able to use.

[00:03:17] Ben Lloyd Pearson: So, you know, in the meantime, I might take a whirl and see if I can, you know, maybe work with ChatGPT to come up with some really cool car designs with my 4-year-old. 'Cause that does seem like a really fun application of this.

[00:03:28] Andrew Zigler: Were you a Hot Wheels kid, Ben?

[00:03:30] Ben Lloyd Pearson: You know, I was the kid with the gigantic box of cars that were collected through many generations of children.

[00:03:37] Andrew Zigler: I was also that kind of kid, but I had a lot of Hot Wheels and in fact, this is full circle for me because my very first computer was the Windows 98 Hot Wheels computer,

[00:03:46] Ben Lloyd Pearson: incredible.

[00:03:48] Andrew Zigler: I had all of the accessories. My dad got it for me for Christmas. It probably honestly sparked all of my downstream interest in technology and probably without having that Hot Wheels computer, I don't know if I'd be on [00:04:00] this.

[00:04:00] Andrew Zigler: podcast. So it's really fun to see it all come full circle. Hot Wheels and Barbie computers, and their users unite.

[00:04:07] Ben Lloyd Pearson: Yeah. I never would've thought that 8-year-old Andrew Zigler could make young Ben jealous. But here we are. I now want

[00:04:17] Andrew Zigler: it was

[00:04:17] Ben Lloyd Pearson: to go back in time.

[00:04:18] Andrew Zigler: really a formative experience having that

[00:04:21] Ben Lloyd Pearson: Yeah. Yeah.

[00:04:21] Andrew Zigler: Um, wow. All right. So, uh, next up, let's talk about an interesting thing I saw at Cloudflare in the last week. Uh, there was an article from Max Mitchell that dove into Cloudflare's Claude-generated commits, around using agentic AI to work on parts of their code base.

[00:04:40] Andrew Zigler: And it was really interesting on two levels. You have Cloudflare engaging in this really thoughtful practice of encouraging their engineers to directly commit the prompts they use to generate code, along with the code, into source control. This is a smart way of reducing the ambiguity of code ownership, and also [00:05:00] being able to communicate about what the tool is supposed to do or what the commit is achieving. It also allows a team to build best practices, because you're saving all of this great record right into your source control. Obviously your mileage is gonna vary. There's gonna be some cases where it doesn't make sense to put that kind of thing in the metadata. But, uh, this was a really interesting analysis of how Cloudflare used this tool, and there were things that stood out to me, and also to Max, who wrote this article, about how

[00:05:29] Andrew Zigler: there's some things where humans are still intervening, like moving certain files around, or doing some cleanup, or doing some renaming. There were some cases where the tool they were using, whether Cursor or otherwise, was trying to use like a bash command to just move a file.

[00:05:43] Andrew Zigler: Like there's sometimes, times where it's helpful to step in, and you see this in the commits, because that thought process of, oh, I would have been faster if I'd done this myself, is right in the commit. So a pretty, pretty cool exploration. I really loved, on top of it, [00:06:00] Max's, uh, really thoughtful personal journey.

[00:06:03] Andrew Zigler: Looking through all of those, commits himself and reflecting on what that meant about their development process. A really great article all around.
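The practice described here, committing the prompt alongside the generated code, can be sketched in a few Git commands. This is a hypothetical minimal illustration (the file name, commit message, and prompt are made up, and Cloudflare's actual workflow may differ): the prompt is recorded as a commit-message trailer so it travels with the code in source control.

```shell
#!/bin/sh
# Sketch: store the prompt used to generate a change as a Git commit
# trailer, so the "why" travels with the code in source control.
# Assumes git is installed; repo and file names are invented for illustration.
set -e

tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git config user.email dev@example.com
git config user.name Dev

echo 'console.log("hi")' > app.js
git add app.js

# The second -m creates a separate final paragraph, which git parses
# as a trailer block (key: value), just like Signed-off-by lines.
git commit -q -m "Add greeting script" \
  -m "Prompt: Write a one-line Node script that prints hi"

# Prints the recorded prompt value for the latest commit.
git log -1 --format='%(trailers:key=Prompt,valueonly)'
```

Because trailers are machine-readable, later tooling could filter or audit AI-assisted commits with the same `%(trailers:...)` pretty-format.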

[00:06:10] Ben Lloyd Pearson: I really love how the engineering culture as a whole, like software engineering culture, is really great at just being transparent and sharing ideas and trying to help each other build best practices. So, you know, I've been saying for a little while that now is the time to be experimenting and learning with AI.

[00:06:28] Ben Lloyd Pearson: There's not a whole lot of revolutionary productivity improvements happening yet, but there are a lot of people learning very interesting things about how to apply AI to software engineering. And the reality is that the tooling in this space is still extremely immature. The people who are building AI workflows right now are mostly still doing it through bespoke custom operations.

[00:06:52] Ben Lloyd Pearson: That's just the fact of reality today. And I love seeing these firsthand, or even secondhand, experiences [00:07:00] of AI, because there's still just so much to learn about applying AI to existing workflows and getting the most out of it.

[00:07:08] Ben Lloyd Pearson: But you know, Max made his points better than I ever could.

[00:07:11] Ben Lloyd Pearson: So, uh, you know, our audience really just needs to go to the show notes, go check out the article, go learn something about AI today from somebody who's in the trenches with it.

[00:07:20] Ben Lloyd Pearson: Yeah. So what's our next story, Andrew?

[00:07:21] Andrew Zigler: I love this article that came out from LeadDev last week. This one's written by Chantal Kani, and it actually features a quote from myself in it, as well as many folks in the developer productivity and developer experience field. But what I liked about it the most is that it built on a narrative that we've all been experiencing and talking about: that unchecked and goalless AI adoption doesn't help anybody, and it doesn't ultimately deliver on promises that people can't even seem to agree on in the first place. What I'm saying in all of this is that AI coding assistants and their adoption across different organizations is not just [00:08:00] like a one and done, zero to one

[00:08:02] Andrew Zigler: process. There's a lot of communication that has to happen to make that successful, and part of that is getting your engineers on board. So this is an article that looked at how devs don't necessarily feel more productive when AI assistants are just put in their hands. You know, it needs to be coupled with things like effective onboarding strategies, education, examples.

[00:08:23] Andrew Zigler: This Cloudflare example we just talked about, what a great vehicle for making your devs feel more productive with AI assistants, because you can point to this fantastic resource to show them how it works for your own organization. But there's a real disconnect around those kinds of examples at most organizations in software engineering, which is what this article looked at. And this kind of is something we've been talking about a lot on Dev Interrupted, which is why I really resonated with the final piece. It's about that, like, you know, velocity isn't everything. Just because you're going fast doesn't mean you're going to the right

[00:08:57] Andrew Zigler: Place. And we had an article that we [00:09:00] dropped last Thursday on Dev Interrupted. We've been

[00:09:02] Ben Lloyd Pearson: Yep.

[00:09:03] Andrew Zigler: sends recently. You should go check 'em out. And literally the article is called Faster Coding Isn't Enough. It talks about exactly this. So check out this article in the show notes from Chantal about AI coding assistants and how devs really feel about them.

[00:09:18] Ben Lloyd Pearson: Yeah. And that article was based on some research that we conducted, as well as a workshop that we held recently. We brought in experts from AWS, Atlassian, and ThoughtWorks, people who really know a lot about the forefront of AI, who came in and talked about the struggles that engineering teams are facing today.

[00:09:36] Ben Lloyd Pearson: And as a part of this work, we conducted a survey and asked participants about their AI usage, and we found that the vast majority of AI adoption has been centered around the coding process.

[00:09:47] Ben Lloyd Pearson: So we all know that software engineering is way more than coding, and you have to do things like planning, designing, architecting, testing, deploying, like all of that is a critical [00:10:00] component of the software delivery life cycle. And AI has hardly scratched the surface outside of the IDE. You know, think, think of the theory of constraints.

[00:10:08] Ben Lloyd Pearson: Like if all you do is generate more code, you're creating new bottlenecks, you're not necessarily producing more impact. And that's why you need to have complete visibility into where AI is being deployed, what it's impacting, and where your team is being slowed down. And this is, this is actually my favorite part of what we're doing here at Dev Interrupted right now.

[00:10:29] Ben Lloyd Pearson: Everyone is early on in their journey to AI driven development. And we get to be here to learn from these amazing, amazing experts within our community and share their knowledge with our listeners and our readers. So there's a lot of pressure right now to adopt AI to move faster. Don't put all that pressure on your developers.

[00:10:49] Ben Lloyd Pearson: It's our responsibility as leaders to be strategic about how we deploy AI and to demonstrate the positive impact that it's having on the organization without [00:11:00] overburdening developers with new challenges.

[00:11:02] Andrew Zigler: Yeah, well said.

[00:11:04] Ben Lloyd Pearson: So we've got one last fun story. So, uh, what, what do we have here, Andrew

[00:11:09] Andrew Zigler: Yeah, we're ending this on a fun note. So, congratulations on creating the one billionth repository on GitHub. Maybe we can, maybe we can edit in

[00:11:18] Ben Lloyd Pearson: Ta.

[00:11:19] Andrew Zigler: some, you know, some fanfare here. Uh, this is a fun article, or rather, a fun news tidbit, really, that made its way onto Hacker News. Someone created what ended up being the one billionth repository on the GitHub repository service. And the result was people swarmed the repository to congratulate the owner, and obviously whatever they actually had in mind for that repo probably got derailed. But it did bring to mind a bunch of fun discussions.

[00:11:49] Andrew Zigler: You should really go check them out in this comment thread, about engineers at places with, like, IT help desks or huge data lakes, places where they have a [00:12:00] gazillion of a certain kind of ID, and people trying to game certain systems to snag, you know, very recognizable or huge rounded numbers.

[00:12:09] Ben Lloyd Pearson: Yeah.

[00:12:10] Andrew Zigler: Uh, so Ben, have you ever gotten like the zillion ID on something?

[00:12:14] Ben Lloyd Pearson: Yeah. Well, this is a funny story, because the user who created this repo chose a very common swear word to name it. And, uh, it all took off when a GitHub employee just showed up and left a comment like, hey, congratulations on being the billionth repo on GitHub.

[00:12:30] Ben Lloyd Pearson: So, you know, first of all, rest in peace to that poor developer's GitHub profile, who is now, I imagine he's in the mountains in Nepal somewhere, trying to get away from it all, with all the attention that has suddenly come down.

[00:12:42] Ben Lloyd Pearson: But fortunately, like, we've covered some stories in the past where some developer goes viral and the developer masses descend on them with all sorts of negativity. This does not appear to be that case. There's been a lot of just lighthearted and positive humor around it.

[00:12:57] Ben Lloyd Pearson: So a lot of typical developer memes. Like, it's [00:13:00] a pretty funny thing. I would wager a bet that this particular developer figured out that this moment was coming and had some sort of script that was just waiting to capture that billionth spot. And if that's the case, like, kudos, he did it, good for you.

[00:13:15] Ben Lloyd Pearson: I wouldn't be surprised if there was a whole bunch of other developers out there who are disappointed 'cause their script did not capture the billionth spot.

[00:13:22] Andrew Zigler: I think this might have created some villains for sure. I could

[00:13:25] Ben Lloyd Pearson: imagine

[00:13:25] Andrew Zigler: there were a whole bunch of folks who were like, that number was supposed to

[00:13:28] Ben Lloyd Pearson: be mine. Yeah. Imagine being the billionth-and-one repo.

[00:13:32] Andrew Zigler: That's what I'm saying, there's someone. Also, imagine being the one before it. That's a lot of nines.

[00:13:38] Ben Lloyd Pearson: Well, I think it was the same developer too, if you look at the history, I think. I think they tried

[00:13:41] Andrew Zigler: Oh,

[00:13:42] Andrew Zigler: ok so now we're

[00:13:42] Andrew Zigler: getting

[00:13:43] Andrew Zigler: down

[00:13:43] Andrew Zigler: to the logs. All right, ladies and gentlemen, well, we're gonna get to the bottom of it and let you know. Uh.

[00:13:49] Ben Lloyd Pearson: yeah. Developing news. We'll, we'll follow up if anything else comes out. So Andrew, tell us about our guest today.

[00:13:54] Andrew Zigler: Yeah, in just a moment, I'm chatting with Philipp Schmid from Google DeepMind. He's an AI developer [00:14:00] relations expert, and even has expertise from Hugging Face as well. We're gonna get a closer look at Veo and Gemma, and learn about how startup culture works inside of a large org like Google.

[00:14:13] Ben Lloyd Pearson: If you're leading an engineering team in 2025, AI isn't optional. It's your next competitive edge. Dev Interrupted and LinearB are out with a new guide: The Six Trends Shaping the Future of AI-Driven Development. It breaks down how teams are evolving, moving from scattered tools to orchestrated AI systems.

[00:14:31] Ben Lloyd Pearson: Learn how teams are modernizing infrastructure and building trust in agentic workflows. It's packed with insights from our community of engineering leaders. This is your roadmap for building the AI driven team of tomorrow. Head to the show notes to find a link to the guide, or visit LinearB.io to learn more.

[00:14:51] Andrew Zigler: Today we're chatting with Philipp Schmid. He's a senior AI developer relations engineer at Google DeepMind, and his background also includes being a [00:15:00] technical lead at Hugging Face. Philipp's career path is a preview of what modern AI-native teams are starting to look like: product-focused engineers with AI-driven learning loops and a whole new way to think about scaling knowledge and productivity within your org. And today we're diving into how AI is fueling the need for these generalist engineers that maybe wear multiple hats, but flex the best muscles they have because of the skills available to them. And AI-assisted self-education is reshaping that whole process, including onboarding and skill growth.

[00:15:32] Andrew Zigler: And we're gonna dive into what some of that means for a small, startup-like environment like DeepMind within a big tech company like Google. It's gonna be really interesting. So Philipp, thanks so much for joining us on the show.

[00:15:43] Philipp Schmid: Thanks for having me.

[00:15:45] Andrew Zigler: So at the beginning here, I want to kind of set the scene for Google DeepMind. You know, it's a smaller, kind of startup-like organization within Google, and your team is building and advocating and educating, doing really whatever it takes to move fast [00:16:00] and scale adoption. And it's a new kind of engineering culture, where the contributors are expanding what they work on, and it's changing what success looks like.

[00:16:07] Andrew Zigler: You're always redefining it. So Philipp, help us have a glimpse at what it's like working in an environment like Google DeepMind within Google.

[00:16:16] Philipp Schmid: Yeah, I think it's, I would say, still hard to say that's like a startup. I mean, Google DeepMind grew a lot in recent years. There was this merge with Google Brain a couple of years ago, and I think now it's like a few thousand people. So it's not like 10 to 20 anymore.


[00:16:34] Philipp Schmid: But inside the scope of Google and Alphabet, it is really still much different, and much more action driven.

[00:16:42] Philipp Schmid: And I mean, you probably have seen that the last few months with Gemini and Gemma have been completely crazy, from shipping on a weekly basis to new models, new features everywhere. And since I joined in early February, we started with a DevRel team, really [00:17:00] trying to push towards helping developers, building a community, and really helping everyone kind of adopt the models. Because if you only have models available and nobody knows how to use them, or you don't have great documentation, you will not win at the end.

[00:17:14] Andrew Zigler: That's right. So, you know, you talked about it being small, and now it's huge, several thousand folks. That's a good callout. Like, Google DeepMind is big, right? And how fast did that growth happen? And what was it structured like?

[00:17:26] Philipp Schmid: So I joined in February, so I'm also still super new. I know that there was a refocus from Google as an org itself to really push, hey, AI is really something important, we need to all work together. That's when this kind of merge with Google Brain and DeepMind happened, and now everything AI related is basically under Google DeepMind.

[00:17:49] Philipp Schmid: It's from Google Gemini, uh, Google Gemma. There's a lot of research going on which might not be as publicly visible with generative AI. There's a lot of work with AlphaFold, which is coming out of [00:18:00] DeepMind. There's a lot of work, uh, with weather and machine learning there.

[00:18:03] Philipp Schmid: There's a lot of research happening. And of course, with generative AI becoming mainstream, we need to move towards more product-based approaches. And since then, uh, we have a more product-focused team. We have AI Studio as, like, a real developer environment.

[00:18:20] Philipp Schmid: And also now a DevRel team, which I'm part of, where we try to really help everyone build with the available DeepMind models.

[00:18:27] Andrew Zigler: So that seems like a next level, almost like a multidisciplinary approach to AI: applying it to use cases and figuring out how to productize it, but also figuring out what's gonna be the long-term impact. You touched on some things like proteins and such, places where you could use AI to create new medicine or solve life-changing problems, right? And as you experiment and try out these new use cases, it requires a certain kind of mentality as an engineer, not the traditional mentality of, you know, you write the code, you ship it, [00:19:00] but this new mentality has a lot of experimentation, and the lines blur

[00:19:05] Andrew Zigler: between roles. What does that look like for, um, someone who works as a developer relations engineer? Do you find yourself doing a lot of different multidisciplinary tasks?

[00:19:15] Philipp Schmid: Yeah, I think developer relations especially have a much different job than they might have had a few years ago. And if we look back even further, there were no developer relations engineers at all. And especially with all of the AI hype and AI evolution, software engineering is becoming much different than it used to be, right?

[00:19:35] Philipp Schmid: We have much more supportive tools, from special code editors like Cursor or Windsurf, which help us a lot to be much faster. We now have completely autonomous agents, with tools like OpenAI Codex, and there's Claude Code. So there's a lot more tooling for developers which wasn't there a few years ago.

[00:19:57] Philipp Schmid: So you need to adjust how [00:20:00] you develop software, and then also how fast you can ship. Software changed a lot. I mean, like, cloud is still getting popular, but I think now a small team or individual can ship and produce and create so much more than they could a few years ago.

[00:20:16] Philipp Schmid: And what we see, and what I see especially when talking to developers and to people, is that team sizes get much, much smaller, and many more people can develop things which they might not have been able to before. Right? Five years ago, when you looked into an enterprise company, you had very clear roles in software development.

[00:20:36] Philipp Schmid: You had backend engineers, frontend engineers, architects, designers, quality engineers, many, many different roles. And all of them kind of become now just, okay, you are a software engineer, and you need to do frontend, you need to do backend, you need to define, okay, how or where am I going to deploy my application?

[00:20:57] Philipp Schmid: And it's more that it's not that [00:21:00] we have AI software development; it's more that there's a new kind of software development. I think we used to call it full stack, but now everyone is kind of a full stack engineer with AI, because you get so much support. You can ship some tasks, or hand over some tasks to the AI, which you might not be experienced enough to do yourself.

[00:21:20] Philipp Schmid: Or where you think it might be easier for the AI to do, like some very simple UI applications, or just design, can now be done completely autonomously. And yeah, you have way more different tasks and roles, and maybe in a few years, or even a few months from now, we'll look back and think, yeah, software development kind of evolved even further to just being a manager, where you need to look after your different small agents or tasks: what are they doing, and where do I need to intervene?

[00:21:49] Philipp Schmid: Where do I need to help them? And then things become much more autonomous.

[00:21:55] Andrew Zigler: Yeah. That's what we're seeing when we talk with folks on this podcast, that exact [00:22:00] anecdote about engineers where, you know, the definition of full stack has now extended into everything. It's like beyond full stack, right? And what you get is an engineer who's responsible for the decisions behind the actions taken, more so than the output directly, which requires a different skillset, a different mindset going into the work that you do. So Philipp, what do you see from successful people who adopt this mentality? Is there something that they all have in common, or something that makes 'em stand out as being able to work in this kind of flexible way?

[00:22:31] Philipp Schmid: I think what really stands out is the people who are very open to adjusting how they work, very open to change, and really trying to use modern technologies. Because, I mean, it's the same as when the internet came: it will not go away. Like, AI will not go away. And yes, maybe now it's not

[00:22:50] Philipp Schmid: good enough, um, or I can code it myself faster, or I really like to go coding, but this will change. And you need to really be able to use [00:23:00] AI and to know how you can benefit from it, because otherwise someone else will come in and do the same work as you do, but maybe more productively or more efficiently, or do more of it.

[00:23:10] Philipp Schmid: Because they use AI. And what also really helped me, or helps me, is not only focusing on the technical aspects, right? Really thinking about, okay, what am I trying to solve here? What am I trying to build? And not thinking too much about, okay, am I going to use React or Vue for the implementation?

[00:23:30] Philipp Schmid: Because at the end, if I use some kind of code editor and it helps me build it, I can build a React application or a Vue application if I know how JavaScript works and how all of the things work. And then I just focus on building and solving the problems, rather than on the typical software engineering mindset.

[00:23:49] Philipp Schmid: Okay, I will use Postgres and a Java backend with Spring Boot and all of those things. And I think those become more and more shallow, because we really want to [00:24:00] solve something and build something, and I guess it will not matter that much if it's going to be written in Java or in Angular or whatever tool at the end.

[00:24:10] Andrew Zigler: Right. It also depends on like, you know, picking a framework that has a lot of, examples

[00:24:15] Philipp Schmid: Yep.

[00:24:15] Andrew Zigler: is also very strong, because then you have a lot of it in the training data. That's why you see things like React just going faster and faster. We talked with Lee Robinson at Vercel, and he was really calling out how, um, you're not really seeing as many specialists as you are generalists

[00:24:29] Philipp Schmid: Yep.

[00:24:29] Andrew Zigler: who can spread across all of those weak spots that they might have had before.

[00:24:32] Andrew Zigler: It doesn't matter if they pick Vue versus React, because they understand the principles that underlie it, and they understand what the end goal needs to do, and that's actually all they need to connect the dots. And so there's like two different work styles: you have, uh, using the agents to do the work and move faster, but then also using the agents to learn and build your own skills.

[00:24:51] Andrew Zigler: That way you can then be a better manager to your AI agents. And like, personally, I'm really drawn to how folks use tools [00:25:00] like Gemini to get up to speed really fast and almost have, like, a personal tutor. And Dev Interrupted was actually a guest at Atlassian Team, and there I listened to Ben Gomes, the SVP of Learning and Sustainability at Google, and he talked about using LLMs this way, in a Socratic approach of asking questions, going for deeper analysis with an LLM, and using it to teach yourself, not just by giving you the answers, but by asking you those critical thinking questions that you need to level up how you're looking at the problem. I'm wondering, you know, how do you see LLM-driven self-education changing the ways that engineers are learning new tools, especially, like yourself, in your role where you are creating a large surface area that developers are trying out?

[00:25:45] Philipp Schmid: No, I think it's super important. Like, what we see is the one side thinking that Cursor and all of the other AI tools will replace software engineers. And I'm more on the other side that yes, we will have a lot of autonomous stuff, but [00:26:00] Cursor will not solve some weird Stripe problems, or might not do proper Git stuff.

[00:26:06] Philipp Schmid: So we really need software developers for a very long time, at least for the next few years, from my point of view. And if you now get started with learning a language, trying to become a software developer, the easy path is to use Cursor and like copy-paste whatever code it does.

[00:26:21] Philipp Schmid: And if it doesn't work, like, try "fix it" five times. It is a bit like how it was 10, 15 years ago, where you just copied stuff from Stack Overflow, and if it didn't work, you copied the second answer, until you kind of had it solved. And I think the better approach is to really use Cursor and all of those AI tools, but more also to help teach you. I mean, it never has been easier to learn how Git works and the Git commands with having LLMs explain to you the concepts.

[00:26:49] Philipp Schmid: Having LLMs create you some kind of visual charts or graphs, um, you can create super simple, your own kind of tests, and check, okay, [00:27:00] how good am I? And then go into details if you really have no idea what's better, like, should I use a rebase or should I directly merge into main. This hasn't been possible before, unless you paid a lot of money for those different boot camps, and now you can really

[00:27:18] Philipp Schmid: do it on your own, at your own pace, in your own depth, or advance into something, and it's really cool. I think it might sound scary now to become a software developer, but I think there never has been a better or easier time to get started with software development. And especially if you learn not only how to use those tools but also how the concepts work behind the scenes, then you are one of those software engineers or developers who doesn't care if he

[00:27:46] Philipp Schmid: uses Vue or React or Next.js. You know all of the principles, you know how state management works, how the different updates happen inside the [00:28:00] HTML, and then you can just use it. And I think something which might not be talked about too much is also that it feels like it's much easier to ask something where you feel not secure with an LLM than with a real person, right?

[00:28:13] Philipp Schmid: If you are new at a job, you are a junior engineer, and you might not understand a certain concept, might not understand a certain library or how some code works. Asking your colleague or a senior engineer could make you feel insecure, or you might be worried that they think something bad about you.

[00:28:32] Philipp Schmid: And now with LLMs, it's not a person you are talking to, and it's really more of a protected space where you can learn at your own tempo and with your own style. And I think that's the best way that people learn, because everyone is very individual. Like, some people prefer practical examples, or listening to it, or seeing something, or reading a long text.

[00:28:54] Philipp Schmid: Having like bullet points, like everyone is like super individual.

[00:28:58] Andrew Zigler: Yeah, that's so [00:29:00] spot on. I recently onboarded for a role, you know, in the last year since AI's been on the scene. And the thing that was different this time is I joined and suddenly all the tools I used had an LLM or a chatbot or some kind of tooling within it where I could ask those silly questions without feeling like I'm bothering a senior person or like I'm embarrassing myself because I don't know this obscure thing. Having those tools helped me go way faster, but also build that foundation that helped me ask those smarter, higher-level questions of the people that I worked with. So that was like a force multiplier for me. I've really experienced that, of being able to ask questions of your tools and get feedback from them.

[00:29:38] Andrew Zigler: And so, you know, as engineering leaders are thinking about their own onboarding process, their own internal documentation and training, do you have any kind of suggestions for how somebody might really accelerate that and leave behind the stuff of the old, and really embrace these new learning styles?

[00:29:57] Philipp Schmid: I think it's mostly [00:30:00] really: think about where you are most interested and have your strengths and weaknesses, and really try to collaborate with the LLM from day one. Maybe you know about Git, but do you really know about Git? Can you ask an LLM to, like, collaboratively talk through it?

[00:30:19] Philipp Schmid: Like, maybe I know just about how to commit, create a branch, and merge, but I'm sure there are 50 more different Git commands I've never heard about. And especially if I prepare for a new role or a new project, you can do a lot of deep research, or just deep search, trying to use those different tools to create some kind of overview, and then go into depth on the topics where you think you don't have deep knowledge and really want to learn something new.

[00:30:49] Philipp Schmid: I mean like, it really is like using the tools every day is how you become very good.

[00:30:55] Andrew Zigler: Yeah. You mentioned quite a bit about using tools like Cursor and Windsurf [00:31:00] and these agentic coding tools. And so, like, if I were Philipp and I was sitting in front of my computer, what do you have open on a typical day? What's like your tool stack that you're using to learn and move really quickly?

[00:31:10] Philipp Schmid: Yeah, so I use a lot of AI Studio. I think normally I have 10 different tabs open with different conversations and chats. I use Cursor for my code editor, but also, like, I tried all of them, and even if I'm at Google, I keep always an eye on, okay, what is ChatGPT doing?

[00:31:26] Philipp Schmid: What is Claude doing? What is Mistral doing? Are there open source alternatives on Hugging Face? It's very important that you don't fall down into one specific hole and get stuck with it, because maybe for your specific use case, you might want to use ChatGPT because it has a better integration with the news websites you work with or the knowledge work you do.

[00:31:47] Philipp Schmid: Or maybe you want to use Perplexity because it's like the Google for you. And so it's always good to keep an eye on which tools are available and what is best for your [00:32:00] job and your use case. I mean, we have now, with Lovable, v0, Bolt.new, those new text-to-web-app kind of tools, which weren't there a year ago.

[00:32:11] Philipp Schmid: And especially if you work in software development and just stick to ChatGPT or GitHub Copilot or Cursor, you don't learn and see about those tools, but they're super good for getting started or trying to build something super quickly, super personalized.

[00:32:28] Philipp Schmid: You will not see and learn them if you don't try to keep yourself, uh, up to date on what the latest trends are. And it's also very important to understand, okay, where are we currently in terms of AI usage and adoption and quality, right? Like, if I'm just in ChatGPT, or maybe in Gemini, and I'm a software engineer and keep asking my typical Stack Overflow questions, like how can I reverse a list or how can I change the type, then I'm not benefiting from all of the advancements in AI.

[00:32:57] Andrew Zigler: Yeah, and within DeepMind, how do y'all [00:33:00] share information about best practices and using this tooling? 'cause everyone's experimenting and trying new things all the time.

[00:33:06] Philipp Schmid: Yeah, so very similar to every other big organization, different teams have different preferences and different communication styles. In our team, the DevRel team, which I'm part of, we really prefer asynchronous communication and written communication, because we are distributed all over the world.

[00:33:23] Philipp Schmid: I have colleagues in Australia and Japan, in the US. Uh, we are in Europe with you, so it's not that you are always online at the same time. So you will not have meetings with everyone, so the best way is to have written communication. We use chats a lot, and then of course Google Docs, or written docs, the easiest way to share knowledge, because people can take a look if they have time.

[00:33:47] Philipp Schmid: They can share comments, you can connect with each other, and it's searchable, of course.

[00:33:52] Andrew Zigler: Being distributed all over and working that async nature, the AI kinda becomes like a highway between all of you, all of

[00:33:57] Philipp Schmid: Yeah.

[00:33:58] Andrew Zigler: information that you're doing. We [00:34:00] experience that too. Our team is async, not quite as distributed, but I know a lot of our listeners, you know, they come from teams that are very distributed, and they're working the exact way you just described.

[00:34:07] Andrew Zigler: They're sharing those best practices, but generally in an async nature, which works pretty well for AI. AI goes at your speed, right? And I want to definitely dive into some of the things that have happened recently from the Google I/O announcement, 'cause there were some announcements and definitely things coming out around Gemini and Gemma.

[00:34:25] Andrew Zigler: And I wanted to ask you, Philipp, as someone who's really close to the event, what recent news from Google I/O stood out to you the most?

[00:34:32] Philipp Schmid: Yeah, I think we need to answer it in two ways. So like, I'm a developer, I'm a DevRel engineer, and I think if I take this hat on, I would say the most interesting update for me, of course, is the new Gemma model, uh, which is also now becoming like Gemini Nano, uh, which you can run on your local device basically everywhere, inside Chrome, hopefully soon.

[00:34:56] Philipp Schmid: And it supports text input, uh, image input, [00:35:00] audio input. So it's basically all of the modalities. And there's a super cool demo from I/O where a team built basically a local, um, Astra-like clone, where you had your Android phone, and Gemma controlled the Android phone and also saw and listened through the camera.

[00:35:17] Philipp Schmid: So basically they were like walking around with the phone and like asking questions about something they see and then also asked Gemma to like take notes and like book something and like Gemma basically interacted with your, with your Android phone, which was like super cool. And then like more on like the agent side of things.

[00:35:37] Philipp Schmid: So like, uh, with the new Gemini models, the 2.5 models, uh, there's a lot of work going into reasoning and thinking and really pushing the maximum we can get in terms of intelligence. And there are super cool new tools integrated into Gemini which make it much easier to build agents.

[00:35:54] Philipp Schmid: So we launched URL context, which basically, um, allows, uh, Gemini to [00:36:00] retrieve context and information from a website or URL you provide and use it to answer your questions. And the typical issue we have with LLMs is, hey, they don't learn new things after they're trained, and they're limited to what they have seen during training.

[00:36:15] Philipp Schmid: So if I ask an LLM, um, well, what is the latest React version? It will tell me the latest React version it has seen, right? And what everyone tries to fix is, okay, how can we extend it? Like, the URL context or the Google Search, uh, which we have natively integrated, basically give Gemini access to all of the internet and all of our knowledge, which makes it super, super easy to keep using it for all of the up-to-date new libraries. And LLMs become super good at coding, right?

[00:36:44] Philipp Schmid: But what if there's a new library? What if there's a new version? And with those native tools and integrations, you can still easily use LLMs which have a good coding understanding, but know about the new syntax, know about the new versions. [00:37:00] And then if I look at it as a normal guy who has seen the keynote:

[00:37:06] Philipp Schmid: Like for me, Veo 3 is completely crazy. Now that we can generate videos from a text prompt, which are not only images, but also generate sound, and you can prompt the model with the specific dialogue we want people to say, it's completely mind-blowing. Like I think after now

[00:37:29] Philipp Schmid: 48 hours almost. There are like so many videos already online created by the community, by creator, where you cannot tell what is real and what is not real.

[00:37:40] Andrew Zigler: That's really wild. I've been watching some of those videos and you're absolutely right that it's harder to tell the difference, especially with the audio. The more avenues in which it can create kinds of media, right? It's multimedia. The more convincing it becomes. So it's definitely a fascinating territory ahead as we figure out how to use and apply these tools and [00:38:00] what that might look like beyond just, you know, like a tech demo. But I wanted to click into something that you said about LLMs, like, on the browser and on the phone. This is a cool concept. I think this is actually kind of more novel to folks, because when people think of LLMs, or using something like ChatGPT, they think of like a supersized data center somewhere that's like powered by all, like, its own wind turbines or whatever.

[00:38:22] Andrew Zigler: Like you're thinking of like a huge dedicated, machine that is computing all this stuff. But then when you, when you say, oh, you could also have it on your phone, you could have it just kind of out in the wilds. What kind of opportunities do you see for like LLMs on the edge like that?

[00:38:37] Philipp Schmid: I think there are so many opportunities which we currently haven't even thought about. Like, the obvious opportunities are, hey, instead of sending your request to the server, which costs a lot of money, send it to your local model, which just costs the power. So ChatGPT, or like Gemini, they're like subscriptions, $20 a month.

[00:38:58] Philipp Schmid: Uh, you buy your phone, which is, [00:39:00] I don't know, a few hundred dollars. If you can run it on there, you can ask the same questions, because models really get so good now, even at the smaller scale, that the typical day-to-day interactions, where you ask, okay, can you tell me something about this flower?

[00:39:15] Philipp Schmid: That the model will like really precisely like, identify the flower and like tell you something or like ask something about like very generic question, like all of them work now and it's like very easy to like leverage what you have existing. But then there are like so many use cases where you might not even have like internet access.

[00:39:32] Philipp Schmid: So like in Germany, in Europe, it's, I would say, still not very good, uh, when you travel or when you go hiking or something else, that you have a connection. And if you want to interact with your phone, you might not be able to use your chat application, and people might say, yeah, that's not too bad, then I will use it later.

[00:39:51] Philipp Schmid: But like, I mean, it's the same with everything, right? We get used to it, and maybe you will not need it, but you get so used to it that it's kind of hard [00:40:00] for you to not use it. And what we see, and what has been seen in the Gemma video, is that sooner or later we will interact with our phones differently than we do today.

[00:40:10] Philipp Schmid: Right. If I could now say to my phone, hey, please book a restaurant for me and my friend James next Friday, try to find the one he mentioned in our messages, and not do it manually, that is something we might or will be able to do in a year from now, and it will make our lives so much easier.

[00:40:30] Philipp Schmid: Right. And you can argue that, yeah, you can do it already today with all of the manual work, but we want to do it automatically or more efficiently. And it's kind of the same with having models locally. And then of course the biggest point is that your data is not leaving your phone, right?

[00:40:46] Philipp Schmid: If you send something to ChatGPT, there is a chance that someone will like hack OpenAI, hack your account. Um, intercept in between. Or you are not allowed to like send data across the wire because you [00:41:00] work in like a, a restricted domain, like medical or something else. Or you need to know where your like data is located.

[00:41:06] Philipp Schmid: That's where like those open models and local models really can shine.

[00:41:10] Andrew Zigler: And I want to ask too, related to that, 'cause the uses for a person individually, there are so many. What's your opinion on how companies and engineering teams should be adopting and using models?

[00:41:21] Andrew Zigler: Do you think that typically just picking something off the shelf and using a standard foundation model provider is, uh, the usual way to go? Or do you see real opportunities for teams to be, maybe, hosting or fine tuning their own models locally?

[00:41:36] Philipp Schmid: Sadly, the easy answer is neither. It really depends on your use case and where you are in your AI journey. So, like, I am coming from Hugging Face, I have a very strong open source background, and there are so many places where open models make so much sense, especially in companies.

[00:41:56] Philipp Schmid: But at the beginning, normally, when you think about trying to [00:42:00] implement or solve something with AI, you should always go with the easiest and simplest way, the most effective and cheapest way. And all of those hosted foundation models are only an API call away, and you only pay for the tokens you basically use for sending requests and responses.

[00:42:16] Philipp Schmid: Right. And very similar to all of the other solutions we built in the past, we first need to understand, does it work? Whatever I'm thinking about using AI for, it can be some kind of detection from images in my manufacturing. Does it work with the current AI models? And once I kind of build a prototype, using maybe a hosted Gemini model, where I just need to create an API key in AI Studio, and then implement it and it works, then I can start thinking about, okay, does it scale?

[00:42:46] Philipp Schmid: What is my evaluation threshold I need to achieve to be able to say, okay, I need at least 80% accuracy, then it's worth it? And then I can start really walking down the road. Do I need to [00:43:00] always have the model available? Right? Models, hosted models, APIs can go down. What happens if the model is not available for an hour?

[00:43:08] Philipp Schmid: Would that be a problem for me? If yes, I need to start looking into hosting it myself or even running it locally, if it is possible, um, do I need to know where the data is processed? Um, then of course I can talk to those providers. Maybe they have like some special big enterprise deals where um, they can share more information about it.
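Philipp's availability question, what happens if the hosted API goes down for an hour, can be sketched as a thin fallback wrapper. Everything here is hypothetical: `flaky_hosted` and `local_gemma` are stand-ins for a real hosted Gemini call and a locally served open model.

```python
from typing import Callable

def generate_with_fallback(
    prompt: str,
    hosted: Callable[[str], str],
    local: Callable[[str], str],
) -> str:
    """Try the hosted model first; fall back to a local model on failure."""
    try:
        return hosted(prompt)
    except Exception:
        # Hosted API unavailable: degrade to the local model instead of failing.
        return local(prompt)

# Stub clients standing in for real model calls.
def flaky_hosted(prompt: str) -> str:
    raise TimeoutError("hosted API is down")

def local_gemma(prompt: str) -> str:
    return f"[local] {prompt}"

print(generate_with_fallback("hello", flaky_hosted, local_gemma))  # → [local] hello
```

In practice you would also log the failure and consider retry budgets, but the shape of the decision Philipp describes, hosted first, local as insurance, is this simple.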

[00:43:28] Philipp Schmid: If not, I can look into hosting or fine tuning it myself. At the current stage, um, the cost aspect I think gets harder to justify, right? A few months or years ago, a strong argument always was, hey, you can fine tune an open model, it'll be cheaper based on your use case, uh, for whatever you do. But currently, the speed is so fast on how

[00:43:51] Philipp Schmid: better AI models get and how cheap they are, that your internal team might not be fast enough to spin up all of the GPUs, collect all of [00:44:00] the training data, and fine tune it to then be cheaper than what's available there. There might be a threshold in terms of usage, when you scale up, where token-based usage gets more expensive than running your own GPUs, but

[00:44:12] Philipp Schmid: that moved much later than it used to be, like a few months or years ago, because all of these out-of-the-box foundation models are so good at everything. And really going down the cost route gets harder and harder, at least for an initial proof of concept. Of course, if you achieve 85% accuracy with like Gemini 2.0 Flash and you collect a lot of data, you clean it,

[00:44:38] Philipp Schmid: then it can make sense to fine tune Gemma to achieve 90%, if you really want to or need to go beyond 85%, because it might save some cost, right? If you only identify nine out of ten, and the tenth costs you like $10,000, then of course it's a different story. But the whole compute cost comparison gets harder and harder, because [00:45:00]

[00:45:00] Philipp Schmid: model iteration happens so fast, and with every new model iteration, the cost goes down. I think since GPT-4, the cost for the same level of intelligence decreased by almost 300x. So for GPT-4, you paid, I think, like $60 per 1 million tokens or something. And now with Gemini 2.0 Flash, you have like the same level of intelligence.

[00:45:21] Philipp Schmid: It's like 40 cents or something. So like the cost is going down so crazy and it will keep going down. At least that's what we are expecting. There are very, very good reasons why open models might be the right approach, but I would always go with what works, start with what works, what is the easiest to get implemented?

[00:45:38] Philipp Schmid: And then focus on evaluation. And once you have your evaluation, you can start looking at cost, and not only the model cost, like, what's really the total cost? How much does it change your use case if it's like 5% more accurate, or the responses are better? And then all of the security and compliance, of course, is a big, big point where open models shine. [00:46:00]
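The cost arithmetic in this answer can be made concrete in a few lines, using the per-million-token prices Philipp quotes from memory and a hypothetical monthly volume. Treat every number as illustrative, not official pricing.

```python
def token_cost(price_per_million_usd: float, tokens: int) -> float:
    """Dollar cost of a token volume at a given per-million-token price."""
    return price_per_million_usd * tokens / 1_000_000

gpt4_price = 60.00   # ~$60 per 1M tokens, the quoted launch-era GPT-4 price
flash_price = 0.40   # ~$0.40 per 1M tokens, the quoted Gemini 2.0 Flash price

monthly_tokens = 50_000_000  # hypothetical workload: 50M tokens per month
print(token_cost(gpt4_price, monthly_tokens))   # → 3000.0 dollars/month
print(token_cost(flash_price, monthly_tokens))  # roughly 20 dollars/month

# Break-even against the ~$500/month single-GPU ballpark mentioned later:
# how many tokens per month before self-hosting wins on raw price alone?
gpu_monthly_usd = 500.0
print(gpu_monthly_usd / flash_price * 1_000_000)  # ≈ 1.25 billion tokens/month
```

Note the quoted prices alone imply roughly a 150x drop; the "almost 300x" figure presumably folds in other factors. Either way, the break-even point for self-hosting has moved far out, which is exactly Philipp's argument.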

[00:46:00] Andrew Zigler: It's a really effective playbook for how to evaluate, when to use what, and it comes down to moving quickly and then understanding the constraints that you and your problem are operating within. So that's a really, that's a really useful playbook. And I, I wanna double click on something that you mentioned in there that maybe some of our listeners are wondering as well, is if you could explain for us or really break down the difference between Gemini and Gemma and, you know, Gemini, uh, Nano.

[00:46:25] Andrew Zigler: Like, what, what are the differences and how should people be looking at them?

[00:46:29] Philipp Schmid: Yeah, so Google Gemini, especially at Google DeepMind: Gemini is the foundation model out of DeepMind. We are currently at the model family 2.5, where there's Gemini 2.5 Flash, which is, uh, the most cost-effective model, that's why it's called Flash, and there's also 2.5 Pro, which is the most intelligent model we currently make.

[00:46:49] Philipp Schmid: And, uh, Gemma is done by a different research team at Google DeepMind, but as the name suggests, they're very closely related to each other, and the Gemma team works very [00:47:00] closely with the Gemini team to benefit from the research which is done. And Gemma is an open model, and it's really focused on local usage, single-GPU usage.

[00:47:11] Philipp Schmid: So Gemma 3, which was released in March, um, comes in four sizes, with 1B, 4B, 12B, and 27B, and 27B still fits on a single medium-sized GPU, which you can rent on Google Cloud for like $500 a month or something. And 1B or 4B, or even 12B, fit on mobile phones.

[00:47:33] Philipp Schmid: And the bigger you get, like the 27B, easily runs on a more modern MacBook Pro. So the focus for Gemma is really on enabling everyone to be able to run models locally, play with it, benefit from it. And Gemini is the hosted model, where compute constraints are much different.
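A quick back-of-the-envelope check on those sizes: weight memory is roughly parameter count times bits per weight. This sketch ignores the KV cache and activations and assumes 4-bit quantization, a common setting for running Gemma locally; the exact footprint depends on the runtime and quantization scheme.

```python
def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate GB needed just to hold the weights (no KV cache/activations)."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# The four Gemma 3 sizes at 4-bit quantization:
for size_b in (1, 4, 12, 27):
    print(f"{size_b}B → {weight_memory_gb(size_b, 4):.1f} GB")
# 27B at 4 bits needs ~13.5 GB for weights, which is consistent with it
# fitting on one mid-sized GPU or a modern MacBook Pro; at full 16-bit
# precision the same model would need ~54 GB.
```

This is why the smaller variants can plausibly run on phones while 27B needs a laptop- or workstation-class machine.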

[00:47:51] Philipp Schmid: Like, we can really focus on, okay, what is the best intelligence we can get in? Like, how can we make the model the most cost effective for the customer? That's why [00:48:00] I mentioned hosting might not be like the cheapest way in terms of use currently. But yeah, both teams work very close with each other.

[00:48:09] Philipp Schmid: Important to remember: Gemini is the hosted model you can access via an API, and Gemma is the model, uh, the open model, which you can fine tune yourself, which you can download, which you can run on your phone, which you can run on your computer, available to use without internet or with internet.

[00:48:24] Andrew Zigler: That was a great breakdown. I appreciate you giving us the scoop on that because I've been exploring and looking at the tools and there's so much available, and in this conversation I've learned a lot about how these types of AI tools are also getting used on the edge and on devices that maybe people

[00:48:38] Andrew Zigler: don't typically think an LLM will run on. And that opens up so many more use cases and opportunities for teams to build new solutions to make our world better. So, you know, Phil, I really appreciate you breaking that down for us. And Philipp, if folks wanted to follow up with you or learn more about the work that you're doing, where can they go to find out more about Philipp and DeepMind?

[00:48:58] Philipp Schmid: Yeah, so I'm on [00:49:00] social media. I'm on like Twitter and on like LinkedIn. Um, philschmid is my Twitter handle, also on GitHub. And yes, if you have questions related to Gemini, to Gemma, if you have feedback on where we can improve documentation or where we are missing examples, or are like just

[00:49:16] Philipp Schmid: curious about things, don't hesitate. We value all feedback. We are always happy to talk to you, to help you build with Gemini. That's the whole mission of the DeepMind DevRel team: really enable everyone, from senior to junior to even non-AI or non-software engineers, to be able to benefit from AI.

[00:49:36] Andrew Zigler: That's amazing. If you're listening to this and you're trying out these tools, anything from DeepMind, whether that's Gemini or Gemma or anything in between, and you're building new tech with this, we wanna hear from you. Message us, reach out to us with the things that you are hacking on. You can join our, uh, Substack as well and comment under the newsletter, where we're gonna give the scoop on Philipp and all the things we talked about today. If you're listening to this podcast on a podcast provider, leave a comment there, [00:50:00] join our conversation, and I'm gonna be sharing and reposting all the stuff that I see from our listeners that are building with your tool stack.

[00:50:06] Andrew Zigler: So definitely join the party, because we're talking every week about this groundbreaking tech, and it's really great to hear what our listeners are building. So thanks for joining us, and we'll see you next time on Dev Interrupted.
