Even if the technology is there, I think the bigger question is trust, right?
Trust comes from transparency. Full stop. And I continue to see a dramatic lack of transparency across the board in the way many companies handle what data goes into those models.
I think it's just critical that we push for transparency and trust, as it is in things like government and organizations. These are not new principles, and they are ones that will hold true no matter where you apply them.
What are the limitations of general large language models, and when should you evaluate more specialized models for your team’s most important use case?
This week, Conor Bronsdon sits down with Brandon Jung, Vice President of Ecosystem at Tabnine, to explore the difference between specialized models and LLMs. Brandon highlights how specialized models outperform LLMs when it comes to specific coding tasks, and how developers can leverage tailored solutions to improve developer productivity and code quality. The conversation covers the importance of data transparency, data origination, cost implications, and regulatory considerations such as the EU's AI Act.
Whether you're a developer looking to boost your productivity or an engineering leader evaluating solutions for your team, this episode offers important context on the next wave of AI solutions.
Topics:
- 00:31 Specialized models vs. LLMs
- 01:56 The problems with LLMs and data integrity
- 12:34 Why AGI is further away than we think
- 16:11 Evaluating the right models for your engineering team
- 23:42 Is AI code secure?
- 26:22 How to adjust to work with AI effectively
- 32:48 Training developers in the new AI world
Links:
- Brandon Jung on LinkedIn
- Brandon Jung (@brandoncjung) / X
- Tabnine (@tabnine) / X
- Tabnine AI code assistant | Private, personalized, protected
- Managing Bot-Generated PRs & Reducing Team Workload by 6%
Brandon Jung: 0:00
Even if the technology is there, I think the bigger question is trust, right? And trust comes from transparency. Full stop. And I continue to see a dramatic lack of transparency across the board in the way many companies handle what data goes into those models. And we see that again and again, with CTOs at OpenAI not even knowing what went into the model. So, if we can't say what goes into a model, that's not going to make people comfortable putting more trust, and more and more important things, into these models. I think it's just critical that we push for transparency and trust, as it is in, I don't know, things like government and organizations. Like, these are not new principles, and they are ones that will be true no matter where you apply them.
0:46
13% of all pull requests are bot-created today, and they are having a unique impact on your SDLC. LinearB's upcoming research exposes the effect bots are having on our teams' developer experience and productivity. Engineering orgs that created a system for managing bot-generated PRs were able to reduce their entire review load by over 6% while also making drastic improvements in their security and compliance posture. Want to learn how your team can manage bot-generated PRs and get early access to LinearB's data report? Head to the show notes to register for our upcoming workshop on September 24th or 25th.
Conor Bronsdon: 1:20
Hey everybody. Welcome back to Dev Interrupted. I'm your host, Conor Bronsdon. And today I'm delighted to be joined by Brandon Jung, Vice President of Ecosystem at Tabnine. Welcome to the show, Brandon.
Brandon Jung: 1:31
Conor, thanks so much for having us and look forward to it.
Conor Bronsdon: 1:34
Yeah, it's great to have you on the show. This year we've been getting different perspectives from engineering leaders like you on AI. You know, is AGI going to take over, or are specialized models the way to go? And with more than 1 million users leveraging Tabnine for AI-assisted coding, it seems, Brandon, that you're firmly on the side of specialized models. Why is that?
Brandon Jung: 2:13
We've always known that AI is good data in, good data out; bad data in, bad data out. From that aspect, that hasn't really changed just because it's generative. You put "generative" in front of "AI," and those basic principles still apply. So I think the data is going to continue to be a primary reason. And I think clearly cost is going to be something over time: as people learn and use these in different facets, areas that are much more specialized, the very, very large models become less and less useful. So I think both of those are going to play into it. Both the knowledge of the data and the cost of running the models will be the two that switch it towards specialized models, small specialized models versus very large ones.
Conor Bronsdon: 3:00
Let's drill into both of those, starting with that data transparency and data accuracy challenge that you mentioned. What problems do LLMs have when it comes to data integrity and transparency?
Brandon Jung: 3:14
Sure. So, LLMs, first off, just want lots of data. That's fundamentally the way they're set up; generally speaking, the rule of thumb is more data is better. And there's a high correlation between the size of the model and the amount of data you need to train it. At some point, as you're hitting the extraordinarily large models we're hitting now, there's just not enough data to train them. So now we're even getting a lot of ideas around synthetic data to feed into these really large models. So there's that aspect, right? The secondary aspect is in terms of what they output. Now, a generative AI model is by definition not going to give the exact same answer every single time, and it will occasionally have hallucinations. Anyone that says otherwise, that's just not how it currently works. And if someone magically solves that problem, well, good for them, but I would not see that coming. So if that's the case, then what data goes into the model is really important to what comes out. And that varies across different places. We've seen this from an image standpoint: what images you train on is what images you get out. That played out in early releases of a number of the models, which would have one bias, and then another bias because they put a filter on it. There's a process you're always working on there. But as far as it applies to code, I think the real questions coming through in our industry are questions of copyrighted data and questions of proprietary data. Is that in the model? Again, for some customers, not a problem. A good number of startups, probably not an issue. Large banks, government agencies, high-security companies? Probably pretty important that you know what might come out of that model, and that you have some level of understanding. So, as always, I guess the answer is it depends. I think there's legislation we can get into that might drive that even more towards the importance of knowing what's in those models. But time will tell over the next 12 months; it's going to be interesting.
Conor Bronsdon: 5:24
So I still want to talk about this cost element as well, but let's keep going on data transparency and the actual origination and accuracy of data. You mentioned legislation, which is something we don't talk a lot about on the show. What are you seeing happen on the regulatory side that could be impacting this?
Brandon Jung: 5:44
Sure. So, not surprisingly, we all now functionally think of GDPR as a standard that's required. GDPR naturally came out of the EU, and from a regulation standpoint the EU tends to be much more forward, first to market, however you want to look at it. And the aspect I think most people probably didn't pick up on was that in mid-March, the EU passed what is the Artificial Intelligence Act, or AI Act, and in it there are some very important aspects that say, specifically, you cannot have any copyrighted data in your model. That's going to prove super difficult for any of the large models, because every one of the large models already has that in place. So I think going forward, that's one we're just going to see driven, and I think Europe's going to drive it. For right or wrong, that's going to show up, and I think it's required to be implemented in 2025. So Europe is going to have a different stance, and usually that plays downstream into the US and the rest of the world.
Conor Bronsdon: 7:00
It makes total sense that Europe is driving this, as they, as you pointed out, typically do around these data and privacy regulations. Do you see specialized models, then, having a large advantage around tracking data origination and copyright versus these LLMs?
Brandon Jung: 7:18
Yeah, absolutely. I mean, it's an aspect we started with at Tabnine. We offer a bunch of models. So if you want to use a very large model and you're okay with that, great, go for it. You can use anything from the GPT models to Claude, to Cohere, to Llama, to Gemini. But all of those, we don't know what's in them. By definition, they're black boxes. And then Tabnine early on said, hey, based on our customers, we've got a model that is based on only fully permissive open source, and that you can audit, right? So you can see everything that went into it. That's a non-trivial effort. And to be honest with you, I think we've all just been lazy around the data. To be specific on the data: we've all been lazy in doing what we've known we need to do for data quality for a decade, two decades. This has always been the case, but the return for having really good data was, yes, important, but if you're not using it, people are like, okay, we won't solve it. When all of this feeds into a generative AI model, though, all of a sudden we've found the magic application that I think is going to force a lot of that data quality and cleanliness to really amp up. So, oddly, it's kind of the pull mechanism to get us all much better about our data quality.
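Tabnine hasn't published its exact pipeline, but as a rough sketch of what "only fully permissive open source, and auditable" implies in practice, license filtering might look like this. The repo metadata shape and the license allowlist here are assumptions for illustration, not Tabnine's actual process:

```python
# Hypothetical sketch: filtering a candidate training corpus down to
# fully permissive licenses so the result is auditable. The metadata
# shape and allowlist are assumptions, not Tabnine's real pipeline.
PERMISSIVE = {"MIT", "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause", "ISC", "Unlicense"}

def build_auditable_corpus(repos):
    """Keep only repos with a declared permissive SPDX license,
    recording the provenance of every repo that makes it in."""
    corpus, audit_log = [], []
    for repo in repos:
        license_id = repo.get("spdx_license")
        if license_id in PERMISSIVE:
            corpus.append(repo)
            audit_log.append({"repo": repo["name"], "license": license_id})
        # Anything unlicensed or copyleft is excluded outright: if you
        # can't name the license, you can't audit the model.
    return corpus, audit_log

corpus, log = build_auditable_corpus([
    {"name": "fast-json", "spdx_license": "MIT"},
    {"name": "gpl-widget", "spdx_license": "GPL-3.0-only"},
    {"name": "mystery-lib", "spdx_license": None},
])
print([entry["repo"] for entry in log])  # ['fast-json']
```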
Conor Bronsdon: 8:52
Yes, people are seeing it, and they want to understand why it's happening. They want to understand, oh, you know, how did this get in here? And if we can track that back, or understand data sourcing better, and origination, it can both improve transparency around where models can improve, while also dealing with some of these regulatory concerns that I think a lot of these models are going to be impacted by, as you point out.
Brandon Jung: 9:16
So, to double-click on that: I don't think it's a hallucination, per se, because everything will hallucinate. If you hallucinate, you're really creating something that doesn't so much exist. The bigger issue is going to be large chunks, in this case for us of code, that are coming from a copyrighted source, right? It's good to validate that, because everyone's like, hey, you're going to have hallucinations. But there's another issue there.
Conor Bronsdon: 9:36
That's an important clarification. Yeah, clear inaccuracies versus actually having copyright issues, to your point. And it brings up another thing, which is these other trade-offs LLMs are making when they take this large, get-all-the-use-cases-into-a-single-model approach and push for more and more training data. There are risks that come with that, and you mentioned cost as one of them. Can you expand on that a bit?
Brandon Jung: 10:09
Sure. So, the size of the model, particularly if you start going into models that are, say, multimodal. To be clear, multimodal is the notion of, I can do coding, and then I can do pictures, and I can do a lot of other things. The size of model that you're pushing that through is enormous. And what you find at that size of model, the very large models, is, well, the first thing we should all ask is: there's a huge amount of money going into training this, both acquiring the data and building the models, so alignment of incentives, or just understanding incentives, is the first question. Why would these very large companies be building these very large models? The short answer is super straightforward: it runs on their cloud infrastructure, right? And you can see this very clearly from the investments going in, whether it's OpenAI and Microsoft's investment there, the multiple investments into Anthropic and into Cohere, and Google, of course, having Gemini. The goal on these, of course, is to drive underlying consumption. That's the motivation, or I should say, it's why they're so interested in it. I'm not saying they would do it just because of that, but it drives a lot of consumption. So the natural question then is who has control of those, and where can you take them and use them? What goes into them is obviously a big question. And then we haven't seen the full cost land yet. An API call right now looks to be dramatically subsidized by all the providers in the gold rush of trying to get developers and others to use their APIs. And we've been down this path: Uber, Lyft. We've seen how this works when a huge amount of VC and capital goes into building something. It's a gold rush to figure out whose models get used, but naturally, once they get used, you have to pay for this, and these are continually updated; they are not a one-and-done option. So I think we just have to be real clear that those very large models, at some point, someone will have to pay for them. And that comes in one of two formats: either you're putting a bunch of consumption onto the cloud, and it's lock-in to whichever cloud's model you choose, or it becomes a cost you have to pay directly. So those big models eventually have to be paid for, and as people start saying, okay, now you've got to pay for it, now it has a cost associated with it, I think you're going to start seeing people moving towards specialized models. The other aspect we're seeing, why the big models get selected, is when I don't know what I'm going to use it for and I haven't actually defined my use cases. So in the initial gold rush it's, today I would like to write Shakespeare in the style of Cardi B, or maybe it's Cardi B in the style of Shakespeare, whichever you want. And then I want to go write some really good Python code. These are two very different worlds. Either you have to create models, and we have to get better about selecting models for use cases, which you're starting to see, but generally speaking people are still selecting which big model they want to use, not which focused use case they want. And I think, clearly, the place we're already seeing this is HuggingFace, right?
You're seeing HuggingFace get a phenomenal amount of traction, and that's because that's where this will shift: towards individual models for individual use cases.
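To make the "select models for use cases" direction concrete, a team's routing layer might look as simple as this sketch; the model names and routing rules are invented for illustration:

```python
# Sketch of use-case-based model routing: pick the model per task
# instead of defaulting to one giant model. Names are illustrative.
ROUTES = {
    "code_completion": "small-code-model",    # latency-critical, specialized
    "code_chat":       "medium-code-model",   # longer answers, still code-tuned
    "creative_prose":  "large-general-model", # Cardi B in the style of Shakespeare
}

def route(task: str) -> str:
    """Return the model for a task, refusing to guess on unknown ones."""
    try:
        return ROUTES[task]
    except KeyError:
        raise ValueError(f"No model selected for use case: {task!r}")

print(route("code_completion"))  # small-code-model
```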
Conor Bronsdon: 13:45
I think you'll see some proponents push back and say, well, you know, this arms race is all because we want to be the first ones to get to AGI, and artificial general intelligence is going to solve this problem with LLMs. There won't be these trade-offs. That's what we really need here. It's not about creating specialized models right now; the AGI will do that for us. What do you say to those folks?
Brandon Jung: 14:08
The way these function, I think we're just a long ways, a very long ways, off AGI. I'm aware that the initial interaction with generative AI has a tendency to feel like you are interacting with a human, so it does have some of those aspects to it. But first off, I just think we're a very long ways off from actually getting to that case. I also think it's reasonable to say that the first to AGI, it's a gold rush, and whoever gets there is a big win. Let's be super clear: the first to even get generative AI out at scale, OpenAI, has clearly grabbed the lion's share of that work. So I see the alignment and the view that that's useful. But it doesn't suggest to me, the way these operate, unless we get a stepwise function and something dramatically different than the way these models currently function, that for a lot of use cases you shouldn't just be using a smaller generative AI model. Even in that case, the cost of running it: the bet is the cost comes way down. I'm sure the cost will come down somewhat, but we're talking many orders of magnitude. And perhaps, even if the technology is there, I think the bigger question is trust, right? And trust comes from transparency. Full stop. And I continue to see a dramatic lack of transparency across the board in the way many companies handle what data goes into those models. And we see that again and again, with CTOs at OpenAI not even knowing what went into the model. So, if we can't say what goes into a model, that's not going to make people comfortable putting more trust, and more and more important things, into these models. I think it's just critical that we push for transparency and trust, as it is in, I don't know, things like government and organizations. Like, these are not new principles, and they are ones that will be true no matter where you apply them.
Conor Bronsdon: 16:11
Yeah, to your point, we're kind of at this max hype moment: AI is on the scene now, people are using it regularly, it's broken containment. But we also have this massive capital expenditure happening, and companies that, as you pointed out, are cloud providers can justify that. They're like, great, yeah, it costs us a bunch up front, but we're going to get to run this in our cloud for years and years; it's all going to be worth it in the end. But so many folks are kind of keeping up with the Joneses here, and there are going to be a lot of these models that don't necessarily work out. So it's important to start figuring out the right approach for your company. And as we talked about at the start of the show, Tabnine is obviously taking a very tailored approach, helping drive AI-assisted code, and I'd love to dive in a bit on these tailored use cases and how software engineering and dev leaders should be thinking about leveraging specific models or tools, having, I'll say, debunked this broader case for the moment. You know, maybe things will change. Maybe we'll see AGI here in a couple of years, and great, we can all just go use that. But for now, finding the right specialized model and understanding the use case that you have for it is really important. How do you think software engineering leaders, you know, the audience that listens to the show, and other developers should be thinking about that challenge of, one, figuring out their use case, and two, finding the right model?
Brandon Jung: 17:52
Almost every customer we talk to around the data, or around the generative-AI-for-code use case, is very, very interested, and very quickly gets the notion of, well, I do this in a specialized way. That could be as extreme as, I've got COBOL or FORTRAN, some language the rest of the world just hasn't seen. Or it could be, look, I like old crufty Java because we do it this way, but we're six versions back from Java, and this is how we operate. And many use cases in between. So the first piece that everyone should be doing, should have been doing, has always been supposed to be doing, is: what is your good data? What are your good repositories? Think of the idea of a golden repo: this is our best practices. You cannot replicate what you have not defined. Okay? You cannot replicate what you have not defined. Now, you can customize a model a bunch of different ways, right? It can go anywhere from, on the far end, fine-tuning a model. That's what you're going to do right now as the best practice for languages like FORTRAN or COBOL, but you have to have quite a bit of data to do that. Now, there's more than enough COBOL in most companies that are using it that that's actually doable. But then there are a lot of much more lightweight ways of doing it, with RAG, with vector databases, with graph databases. Those technologies allow things like what we'd think of as coaching or expert guidance, such that when you get a dev, the first thing they get is, oh, this is how we implement and do it at our company, right? And when we look at what they need, that's the biggest difference we're going to see between a general model trained on, you know, general GitHub code, and a model that is trained specifically for your use case or your company. A good amount of the initial adoption has been super general front-end web development, which, again, is going to look very standard across what you see on general GitHub. That's obviously the first place you're going to see it, but up until five years ago, the largest number of lines of code written every year was COBOL. There's a lot of code there that we need to either modernize, maintain, update, et cetera. And there is just no shortcut to knowing your data. So step one, like we said: know your data. From there, then the question is, you want to build a model, or customize a model, or wrap a RAG around those models. All are reasonable options.
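A minimal sketch of the lightweight "RAG over your golden repo" approach Brandon describes might look like this. TF-IDF stands in for a real vector database here, and the snippet store and prompt format are assumptions for illustration:

```python
# Minimal sketch of RAG over a "golden repo": retrieve your own
# best-practice snippets and prepend them to the model prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

GOLDEN_SNIPPETS = [
    "def get_user(session, user_id): return session.query(User).get(user_id)  # always go through the session layer",
    "log.info('retrying %s', url)  # structured logging, never print()",
    "with db.transaction(): ...  # all writes happen inside a transaction",
]

vectorizer = TfidfVectorizer()
snippet_matrix = vectorizer.fit_transform(GOLDEN_SNIPPETS)

def build_prompt(dev_request: str, k: int = 2) -> str:
    """Attach the k most relevant house-style examples to the request."""
    scores = cosine_similarity(vectorizer.transform([dev_request]), snippet_matrix)[0]
    top = scores.argsort()[::-1][:k]
    examples = "\n".join(GOLDEN_SNIPPETS[i] for i in top)
    return (
        "Follow these examples of how we implement things at our company:\n"
        f"{examples}\n\nNow complete: {dev_request}"
    )

# The resulting prompt goes to whichever code model you've selected.
print(build_prompt("write a function that loads a user from the database"))
```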
And then the other part you need to look at is: how do you get the right suggestion to the right user at the right time in the right place? Currently, there are a couple of different ways of thinking about this. The very large models, the chat-style models, are very good for a question and a long answer, a response that comes in, you know, three, four seconds. We're used to it: ask a question, oh, that's fantastic. That's a good part of it. But what we've actually found when we look at the way a developer writes is that they kind of write in two modes: discovery mode, where I don't know what I want to write, and the I'm-hammering-out-code, I'm-in-the-zone mode. If you want something to really help you down that path, it's got to be operating in, you know, 150 milliseconds sometimes. A large model can't do that. Now, maybe someone magically figures out how to do that, but let me just draw out the cost. Most of the ways people interact with models today are very large models, with relatively not a ton of tokens going back and forth. That's what drives the cost. If you're trying to run this at very high speed, think: every keystroke you add is sending out a whole new set of tokens and needs them all back. So you're going at a velocity that's dramatically higher, and that needs a different, very specialized model that can operate at a much, much higher speed. Right now, we mix those models to get you the fastest suggestion at the right time in the right place. So there's a lot more nuance, at least, and this will apply across other use cases, but coding in particular takes multiple models to make that work, based on different characteristics. All that being said: know your data, know your use case, know your risk tolerance. If you have high risk tolerance, hey, let Microsoft subsidize the heck out of what you're doing, because that's what they're doing, and go for it. I think that's a great idea. Alternatively, if that's not something your company's as comfortable with, or you need to alter it, then there are other options, and that might be a place Tabnine's a fit.
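To put rough numbers on that keystroke-velocity point: here's some back-of-the-envelope arithmetic. Every figure below, the price, the token counts, the usage volumes, is an illustrative assumption, not any vendor's real pricing:

```python
# Back-of-the-envelope cost comparison: chat-style usage vs. keystroke-
# speed completion against a large hosted model. All numbers are
# illustrative assumptions.
PRICE_PER_1K_TOKENS = 0.01  # assumed blended input+output price, USD

# Chat mode: one request per question, a few dozen questions a day.
chat_tokens_per_request = 2_000  # prompt plus a long-form answer
chat_requests_per_day = 40
chat_daily_cost = chat_requests_per_day * chat_tokens_per_request / 1_000 * PRICE_PER_1K_TOKENS

# Completion mode: every keystroke resends the surrounding context.
completion_context_tokens = 1_500  # file context resent each time
keystrokes_per_day = 20_000        # a busy day of coding
completion_daily_cost = keystrokes_per_day * completion_context_tokens / 1_000 * PRICE_PER_1K_TOKENS

print(f"chat:       ${chat_daily_cost:,.2f}/day")        # $0.80/day
print(f"completion: ${completion_daily_cost:,.2f}/day")  # $300.00/day
```

Under these assumptions the gap is two to three orders of magnitude per developer per day, which is the economic intuition behind pairing a small, fast completion model with a larger chat model rather than routing keystrokes to the big one.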
Conor Bronsdon: 22:39
That's a great description. And I think it's interesting to think through both that risk-factor piece and also, you know, what is your use case here? It really does depend. Are you doing front-end web development, or are you trying to do something that's more specialized, or are you just saying, hey, I just need to write a bunch of SQL code, because then maybe you can just use ChatGPT, right? It really depends what you are trying to get out of the model and how you're building your organization for the long term. And so I love that you're speaking to the specialization piece and also how much tailoring there is to be done here, because, as you highlighted when you mentioned graph databases and vector databases, there's real work in how we actually manage our data. It's so clear that if you just look broadly in media, I think you'd get a very small sliver of what you need to do to successfully run an AI program. But as we've explored on the show this last year, there are so many pieces to doing this right. It's not just finding the right tool. It's getting your data to the level it needs to be so that you can tailor that model to where you need it to be. It's having the right partner in your product org, so there's buy-in within the org about what you're actually going to get done with it. It's knowing your use case and getting whatever buy-in you need at the executive level to actually spend the money you need to stand things up. And then it's understanding that ROI may not be immediate. You may take time to tailor the model to what you need it to be, and you're looking at this long-term operational benefit. It really depends on the organization. And talking to folks like yourself has really convinced me that, at least in the next couple of years, tailored models are where things are going, because there is so much work to be done to make sure that you are getting things right. If you are a major enterprise, you can't afford to have the wrong data in there. You can't afford to have code that is being set up wrong and just passed forward. So understanding how the workflows around this work, and how you're actually passing code back and forth that's either been AI-assisted or fully AI-generated by agents, there's so much that goes into this. And it's really easy, I think, for some hype-focused folks to say, oh, well, AI is going to improve efficiency 30%, go, go, go, and not think about the work that has to be done to make this effective and actually sustainable long term.
Brandon Jung: 25:11
One other fun question we usually get is, does it write secure code? I swear that question comes up almost every time. It will write good code; I cannot guarantee it's secure. To which usually my first question is, do you have a good full pipeline in place, with all your security and everything? Are you following the best practices, the DORA metrics? If you're not familiar, the DORA metrics are, in my opinion, super good to benchmark against as a starting point for an engineering team. Adding generative AI into the coding space without having those in place is a recipe for complete disaster, because now you've just put more into the front end of your process, and it's not going to go well. So I see a lot of people adopting this before getting their software development all in place, and there are plenty of really good ways to go about doing that. But until that's operationally in place, high security, high velocity, high functioning, I think these tools in some cases will actually dramatically slow down an organization. It also somewhat obscures developers' skills if you don't really know them. Clearly it up-levels everyone; it is something that every developer is going to need to learn and adjust to and use. So I think those things are very clear. But it also may make it more difficult to tell which developers deeply understand the nuances of perhaps your most important, differentiated code. And so many of those best practices that have always been there, pair programming, et cetera, those are going to be critical to building up capabilities and building up teams.
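For listeners unfamiliar with the DORA metrics Brandon mentions, they are deployment frequency, lead time for changes, change failure rate, and time to restore service. A rough sketch of computing them from deployment records might look like this; the record shape is an assumption, and real pipelines pull this from CI/CD and incident tooling:

```python
# Rough sketch of the four DORA metrics from deployment records.
from datetime import datetime
from statistics import mean

deploys = [
    {"commit_at": datetime(2024, 9, 1, 9), "deployed_at": datetime(2024, 9, 1, 15),
     "caused_failure": False, "restored_at": None},
    {"commit_at": datetime(2024, 9, 2, 10), "deployed_at": datetime(2024, 9, 3, 11),
     "caused_failure": True, "restored_at": datetime(2024, 9, 3, 13)},
]

window_days = 7
deploy_frequency = len(deploys) / window_days  # deploys per day
lead_time_hours = mean(
    (d["deployed_at"] - d["commit_at"]).total_seconds() / 3600 for d in deploys
)
failures = [d for d in deploys if d["caused_failure"]]
change_failure_rate = len(failures) / len(deploys)
restore_hours = mean(
    (d["restored_at"] - d["deployed_at"]).total_seconds() / 3600 for d in failures
)

print(f"deploys/day: {deploy_frequency:.2f}, lead time: {lead_time_hours:.1f}h, "
      f"CFR: {change_failure_rate:.0%}, MTTR: {restore_hours:.1f}h")
```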
Conor Bronsdon: 26:58
Yeah, I've heard these models described as a great initial pair programmer for you to work with. But there's so much value to doing more within your team around that and how you construct your team. And I would be remiss if I didn't mention here that if you are listening to this and you need to get basic visibility into your engineering org, you want to get those DORA metrics, go to LinearB.io and sign up for free to get your DORA metrics for your organization of any size. You can start getting that visibility layer so that you can be more effective in your AI initiative. What are other ways that you think teams need to adjust, and maybe construct their internal infrastructure or processes differently, to effectively work towards having AI that is actually helping them code and delivering more value faster?
Brandon Jung: 27:51
So, from an adoption standpoint, that's adopting the same tools across the board. A lot of the way these tools were originally adopted, and in fairness still are for a number of companies, is that they're adopted by individual developers. An individual developer is like, I'm going to use this tool. Now, depending on what tool they're using, that tool may actually be exfiltrating your code into someone else's model, which is not what you're shooting for. So there's that aspect. But I think it's also important to set the best practices and have that be transparent: yes, we're going to adopt this tool; yes, here's how we're going to use it. It's going to change the way you go about development, and I think it's going to change what work gets done. We've seen some teams, for example, move away from what traditionally would be a testing team, where you do development and then you have a whole testing team. Testing is actually one of the fastest and best use cases, because you understand, when you wrote the code, what tests and corner cases you might need, but you're going to miss some. If, say, Tabnine or another tool throws out, here's seven unit tests for you, you probably thought of three, and you can quickly look at the other four and be like, yep, that's great, these are covering what I need. So I think there's going to be some reorientation of where work gets done. In the case of testing, I think some of that, ironically, actually shifts more left. So very large organizations that might have a testing organization, that testing...
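A hypothetical version of that "seven unit tests" exchange, to make it concrete; the function and every test here are invented for illustration:

```python
# You wrote the function; the assistant proposes the corner-case tests.
def chunk(items, size):
    """Split a list into consecutive chunks of at most `size` items."""
    if size <= 0:
        raise ValueError("size must be positive")
    return [items[i:i + size] for i in range(0, len(items), size)]

# The three tests you probably thought of yourself:
assert chunk([1, 2, 3, 4], 2) == [[1, 2], [3, 4]]
assert chunk([1, 2, 3], 2) == [[1, 2], [3]]
assert chunk([], 3) == []

# The four an assistant might add, covering corners you'd miss:
assert chunk([1, 2, 3], 10) == [[1, 2, 3]]  # size larger than input
assert chunk([1], 1) == [[1]]               # single element, size 1
try:
    chunk([1, 2], 0)                        # zero size must raise
    assert False, "expected ValueError"
except ValueError:
    pass
assert chunk(list(range(5)), 2) == [[0, 1], [2, 3], [4]]  # odd remainder
```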
Conor Bronsdon: 29:25
We're always shifting left. That's all we do.
Brandon Jung: 29:28
Yeah, we just put more and more on the developer. And as we put more and more on the developer, that requires a higher level of training coming in. So I think there's the shift-left part, and then I think the training will be important. Onboarding is going to be super, super critical. And then, as you can do super-specialized models in organizations, say large consulting organizations, I think what you're also going to see is this: in a large consulting organization, you're going to have a very, very strong developer, or a team of very strong developers, but they're going to move from project to project to project, right? They may have very strong development capabilities and ways of thinking about it, and now you're going to be able to expand that to some degree. Historically, that's been tied quite a bit, for example, to a language, and your opportunity to switch between a Java or a Python or, you know, COBOL was much harder, because there are a lot more concepts, and the nuance of the language requires more time to adopt. So I think two things will come up. One, I think you're going to see a really good developer, one who deeply understands development, the core pieces, the mathematics and everything behind development, being able to apply their capabilities across a broader scope. Great. Second, and this goes back to what we're seeing in terms of specialized models, if you are a large consulting company, clearly I need a different model for this customer versus this customer versus this customer, because I can't mix their data. But if I have those good models, I can now switch developers in much more quickly from project to project to project, and their productivity goes up dramatically. So I think for very strong developers, this really, really energizes them. It'll certainly help, probably, all developers. I will be a little concerned about new developers: does this allow you to code faster, but not have to wrestle through some of the hardest problems? I'm given, oh, there's a solution to a problem, but I don't understand why that's the best solution to the problem. I think this is an area we're just starting to wrestle with, and I think we'll see it in academia; hopefully it's going to really ask these questions in some depth. There's no doubt that an elegant solution coming in as super-tight, very dense code may solve the problem really well. But do I understand what the code is that I just accepted, right?
Conor Bronsdon: 31:57
As a bad dev myself, I'm already noticing that when I'm leveraging AI tools to help myself code, I'm going, oh, this worked. Do I know how it worked? A little bit. Yes.
Brandon Jung: 32:08
Right, I have the general idea. But it makes it hard to understand who knows what, right? So I'm submitting code and, for example, I don't know where that code came from. One area where this really will help is when you know that this code came from your best practices, that it was instantiated as, oh, this is how we write an algorithm. When the developer accepts that code because it was suggested as your company's best practice, that's great. Now it's been instantiated, but you also understand that that developer didn't, in and of themselves, go create it, right? They can really understand the workflow and can really execute it. But if I really have to optimize a trading algorithm, I need to know who has that concept. So that ties back into the DORA metrics, and tracking who did what and where it came from. And so there will be, I'm sure, a few times where, well, we all have imposter syndrome, and this is going to set a lot of developers up in a super hard spot of developer imposter syndrome: I got that done, but I'm not really sure how I got it done. And that's just culture. We're all going to have to get much better about being comfortable with the idea of, yes, this works, I'm not really sure exactly why, but can we make sure we know where it came from? So all of these come back together on the, hey, we know where it came from, it can be trusted. And if I reuse this a bunch of times, because it was in my model, I understand that I reused it a bunch of times. And if there's a bug, or, I don't know, Heartbleed, we could go down this path a bit, we know how that works. So we've wrestled with these things before. They're just coming at a scale and a velocity where we're going to have to have more compassion in engineering, we're going to have to track a few more things in engineering, we're going to have to all be more humble as we go through this, for sure.
Conor Bronsdon: 34:02
And what about that training aspect? So this is obviously a tool, or I guess all of these AI coding tools are, as you pointed out, much more effectively used and understood by devs who are more senior, who understand the workflows of the team, who are kind of ready to go already. And, you know, we have data origination things to figure out, we have ROI and alignment challenges for leadership, we have visibility needs, and this is really highlighting the need for increased visibility and understanding of the software engineering process, through things like software engineering intelligence platforms and other solutions. But what about that training aspect? What about the devs who are newer in their career, either, you know, listening to this show to try to improve, or on the team of a leader who's listening to the show? How do we help them be more effective and make sure that they are aligned to the approach the organization wants to take?
Brandon Jung: 35:04
Not ironically, but kind of cool: these tools are also pretty good at describing exactly what's going on in a code base. So when you see the code and go, what is this code doing?, they're very good at giving you a high-level view of what's going on. So I think at the 101, 201 level, these tools are super helpful: oh, okay, this is what that code is doing. It still isn't going to tie it back, and I'm not sure it can, to the fundamental reasons you wrote the code the way you did and what the trade-offs were. It's not going to be able to do that; there's still going to be a human aspect. So I think they're helpful, but that goes back a little to pair programming, and also to all of us, from a cultural standpoint, suggesting, hey, go use these tools to understand the code base that you're living in. Which is great, because we honestly know how it goes when someone joins a team: hey, go look through the code base. Right? Where do I start? How does this matter? These tools really help consolidate: okay, I understand the guardrails and the basics around what's going on in the code base. Awesome. Then it's going to be the senior devs. Hopefully what we can move towards is these tools enabling a lot of the basic questions to be quickly answered. If done well, then the vast majority of the conversations a junior dev has coming onto a team will be questions like, so why did we write this algorithm this way? What was the trade-off in doing it this way? I don't need to ask how it was structured. I don't need to ask where it was used. I don't need to understand the basics, the map; I've already been given the map. I just need to understand how to navigate through it. So, I see the map: why did we write it this way? I think that's the place we can all get much better. So, done well, I'm actually pretty encouraged that this allows the basic learning for a new person joining a team to get covered by the AI. Awesome, because we don't want our senior devs having to explain the basics, and right now they spend a lot of time doing that. And then we need to create space to wrestle through those harder problems, deeper questions, areas where your code is differentiated and makes you different from whoever you're competing with. I'm still very encouraged by it. I do think it shifts things a little bit in terms of how you run, say, a scrum. I think we're going to have to be very purposeful in defining what we're going to talk about, and making sure that we are humble: if you're new, humble that you don't understand; and if you're senior, asking good questions to validate that someone understands why it's operating the way it does. But that's just good people. We all need to do this better. So I look at it as a positive: we're going to spend more time on the things that are differentiating and matter, and less time on the things that are vanilla, like unit testing.
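A sketch of that "use the tools to map the code base" onboarding workflow might look like this; the model call is left as a stub, since the point is the workflow rather than any particular assistant's API:

```python
# Sketch of an onboarding helper: walk the code base and ask a model
# for a 101-level description of each module. The model call is a
# stub; wire it to whatever assistant your team has adopted.
from pathlib import Path

def ask_model(prompt: str) -> str:
    """Stub: replace with a call to your code assistant of choice."""
    return "<model summary here>"

def summarize_codebase(root: str, max_chars: int = 4000) -> dict:
    """Produce a per-file 'what is this code doing' map for a new hire."""
    summaries = {}
    for path in sorted(Path(root).rglob("*.py")):
        source = path.read_text(errors="ignore")[:max_chars]  # keep prompts bounded
        summaries[str(path)] = ask_model(
            f"In three sentences, what does this module do?\n\n{source}"
        )
    return summaries

# The new hire reads the map first, then asks the senior devs the
# questions the map can't answer: why it was written this way.
for name, summary in summarize_codebase("src").items():
    print(name, "->", summary)
```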
Conor Bronsdon: 37:56
Yeah, it sounds like you think engineering leaders who are working on setting up their AI initiatives and getting these tools really operationalized throughout their teams should also be thinking about how they construct their teams, and maybe constructing them differently, whether that's aligning senior engineers more tightly with junior engineers or having clear stages of growth. How would you think that through?
Brandon Jung: 38:22
First off, I think you're probably going to end up with different ratios, and I don't know what those are yet, but I think you're going to see a ratio adjustment through this process.
Conor Bronsdon: 38:31
Feels like it's already happening, but no one's settled on what the right answer is yet.
Brandon Jung: 38:35
Yes. There's that aspect that's going to come up, ratios. And the entire onboarding process is changing for a large number of people; it's one of the primary use cases we're seeing, the onboarding. How do I help onboard and get someone started? And then, creating large, complex engineering teams, we've kind of done this already, but the velocity at which we're creating code is increasing astronomically. I think the biggest risk isn't transferring understanding from a senior dev to a junior dev; that generally happens, hopefully, within a pretty well-scoped area for the junior dev. What I think we're going to see sometimes is a junior dev taking a really interesting, complex algorithm and introducing it to the code base, something that's not in your code base, and no one knew why it works. Right?
Conor Bronsdon: 39:22
What's our code review process here? How do we make sure it's...
Brandon Jung: 39:26
Yes, and I haven't seen a good answer on this one. Because if you have a senior dev writing a really complex, differentiated piece the way you do it, awesome. That can be understood, and you have someone there who should document and understand: this is why we do it, and why we went this route. But if a junior dev goes through and gets a suggestion that is super elegant, but doesn't fully understand why, and it gets introduced into your process or into your code, how do you troubleshoot that, right? Because you didn't really write it. It was probably written as some combination of pieces copied out of other code the model was trained on. So that's going to happen, and that's going to fall on code review. I think the code review process is where this is really going to get interesting. So, not an area that we as Tabnine directly play in, but the code review process: okay, before I put it into production, does this look like something we already have? Is this something completely new? What's the level of complexity of the code I'm about to implement? There are some really interesting, good tools that look at the complexity level of whatever code's going through and are starting to really flag that. Like, if this is high complexity, before we push it to production, do we know what it is? Right? And probably some correlation with the developer: so, I got a new junior dev, and they just brought in a super complex algorithm.
Conor Bronsdon: 40:45
Yeah,
Brandon Jung: 40:46
Either this junior dev is way stronger than we thought, or we just have a piece of code here that's super risky that we didn't know about before.
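A minimal sketch of the kind of complexity gate Brandon describes, using the radon library (`pip install radon`); the threshold, kept low here for demonstration, and the flagging policy are assumptions to tune for your own code base:

```python
# Minimal review-time complexity gate using radon's cyclomatic
# complexity analysis. Threshold and policy are assumptions.
from radon.complexity import cc_visit

THRESHOLD = 5  # deliberately low for this demo; tune for your repo

def flag_complex_code(source: str, filename: str) -> list:
    """Return review flags for any function whose complexity is high."""
    flags = []
    for block in cc_visit(source):
        if block.complexity > THRESHOLD:
            flags.append(
                f"{filename}:{block.lineno} {block.name} "
                f"(complexity {block.complexity}) -- do we know what this "
                "is before we push it to production?"
            )
    return flags

INCOMING = """
def gnarly(x):
    if x > 0:
        for i in range(x):
            if i % 2 and i % 3 and i % 5 and i % 7:
                x += i
    return x
"""
for flag in flag_complex_code(INCOMING, "incoming_change.py"):
    print(flag)
```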
Conor Bronsdon: 40:53
This is something we're trying to address through LinearB's platform with our SEI automations. So, basically focusing on that code review process and saying, okay: do some organizations want to have two reviewers mandated on a PR? Now, that can slow things down, so maybe that's not the right approach. But maybe for specific areas of the code base you want to have a code owner who is an expert review it and add comments, because you need that context added in advance. Or maybe you want to have auto-generated labels that mark whether this was AI-assisted code coming into the code base, because then you can track back later and say, oh, okay, this was partially built by an AI, or even fully built by an AI agent. It makes tracking back, if something goes wrong or there are concerns, a lot easier. But it's definitely not solved yet. It's an area where we're very much experimenting with different approaches. Should a senior dev always be doing code review? Well, that creates a lot of challenges for those senior devs. Do they want to be doing that much code review? Probably not. How do you adjust your team mix, to your point earlier, to adapt to that? But I think there's a lot more coming on the workflow front. And for folks who want to check out LinearB's Workflow Automations, there are some solutions there to track the impact of genAI code on your code base and better understand it. But I don't know that I've seen a good solution yet for, once it's already in your code base, how you bring things back to origin. Like, okay, great, maybe it has a label on the PR that you can go look at and say, okay, I know this was assisted by, you know, a Tabnine model or a Copilot model. What do you do to unpack that? That's a really challenging, gnarly problem that I don't think we're solving yet.
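A sketch of the auto-labeling idea against GitHub's REST API; the "AI-Assisted:" commit trailer is a made-up team convention, not a standard, and this isn't how LinearB's automations are actually implemented:

```python
# Sketch: label a PR "ai-assisted" when a commit declares a trailer.
# The trailer convention is hypothetical; the endpoints are GitHub's
# standard REST API (labels are set via the issues endpoint, which
# covers PRs too).
import os
import requests

API = "https://api.github.com"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def label_if_ai_assisted(owner: str, repo: str, pr_number: int) -> None:
    """Label a PR 'ai-assisted' if any commit carries the trailer."""
    commits = requests.get(
        f"{API}/repos/{owner}/{repo}/pulls/{pr_number}/commits",
        headers=HEADERS,
    ).json()
    if any("AI-Assisted: true" in c["commit"]["message"] for c in commits):
        requests.post(
            f"{API}/repos/{owner}/{repo}/issues/{pr_number}/labels",
            headers=HEADERS,
            json={"labels": ["ai-assisted"]},
        )

label_if_ai_assisted("your-org", "your-repo", 123)  # hypothetical PR
```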
Brandon Jung: 42:50
True. It's a trade-off. Everything's a trade-off.
Conor Bronsdon: 42:52
Back to that data origination...
Brandon Jung: 42:53
Yeah, back to data origination. But there is a trade-off to that data origination, which is that there are some really good ideas that may only exist if you pull in all of those corpuses, more context, a whole bunch of other interesting programming options, a bunch of other code. I think there's some amazing Java code that Oracle's written. There are trade-offs to that, right? Yes, they write great Java code; I don't think anyone's going to question that. But is that something I can have in my code base? And the answer is, yes and no. So it's not as simple as, it'd be nice if you tightened it all down; you are also trying, as a company, to get that innovation. So, optionality: being able, as much as you can, to have your cake and eat it too on this right now, I think, is super valuable. Being able, where appropriate, to tap into those very large models that have pulled data from everywhere, where you can take that risk, is not a bad idea. Where that's not an option, you have to lock it down. Yeah, I think both are there.
Conor Bronsdon: 43:57
We're at a very exciting but challenging phase, where we're really figuring out how to leverage these tools and what the right processes are. And, to your point earlier, it's forcing us to get better at some of these best practices that maybe we've been a little lazy about at times. So I'm excited for the future of it. I know you are as well, and I can't wait to see what you and Tabnine do. Brandon, it's been a real pleasure having you on the show today. Thank you so much for joining me here on Dev Interrupted.
Brandon Jung: 44:23
Thank you so much for having us, and having me. I appreciate it. It's been fantastic, and it's given me things to think about as well.
Conor Bronsdon: 44:29
Do you have any closing thoughts around the future of AI software development, or software development in general that you want to share with our audience?
Brandon Jung: 44:36
I'm excited about it. So, other than, hey, I am definitely excited: invest the time to understand it. And again, know your data. There's no two ways about this; we've known it forever. When we talk about that piece, that's the hard work. It's like, I want to go to the beach and be in great shape. Awesome: start six months before, because you're going to need to be in the gym. We all know how that works. This is no different. So I think that's the part everyone needs to be doing. And even if you're using these tools today, there's never a bad time to get a good understanding of where that data lives and what its quality is. So that's the last one: hey, get good at your data. It's the lifeblood of what you've got.
Conor Bronsdon: 45:17
Fantastic. That's a great note to end it on. And listeners, you can read more about this conversation or see it on YouTube. Check out our Substack to dive a lot deeper into everything happening with AI, the tooling around it, and how to construct your team the right way. We're so excited to continue to feature Brandon, Tabnine, and the work they're doing throughout our content. And, yeah, Brandon, thanks so much for coming on the show today. Really enjoyed it. See you soon.
Brandon Jung: 45:44
Likewise. Thank you.