The world is what we make it. Tech - and AI - follow the same principles.
On this week’s episode of Dev Interrupted, we sit down with Christina Entcheva, a Director of Engineering at GitHub, to unravel the link between the values we hold and the things we build. We delve into how AI applications mirror our values, intentionally or not, and how this can lead to surprising outcomes, no matter how benevolent our intentions.
Christina also shares practical advice for engineering leaders on how to take and provide constructive feedback, dismantle information silos, and infuse your values into the product development process.
- (0:00) Accelerate State of DevOps survey
- (1:27) GitHub's secret sauce
- (5:05) Being transparent with your dev team
- (11:30) Providing constructive feedback
- (17:45) Generative AI & art
- (22:40) Dismantling information silos
- (26:50) What devs should be excited about
(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)
Conor Bronsdon: Welcome back to Dev Interrupted at Lead Dev New York. I'm very excited to be joined by Christina Entcheva. She is the director of software engineering at GitHub. And I wanna start things off with a bit of a spicy question. What is the secret sauce at GitHub?
Christina Entcheva: Hello I'm very happy to be here.
Thank you for having me. Like you said, my name is Christina. I'm a director of engineering at GitHub. Very happy to be here. And I love that spicy question, so let's get right into it. There are many things that I think contribute to the unique situation at GitHub, but I really think our secret sauce is our culture.
I know that's like probably super cliche to say, I think in this case it's true. I do think that GitHub culture is a little bit different than your kind of everyday company. We bias very heavily towards asynchronous communication. We bias very heavily towards working out in the open. So it's really important for leaders especially, but for everybody to show their work, show it early.
Sharing WIPs (works in progress), whether that be documents, drafts, or PRs, as early as possible is something we really strive towards consistently. And with PRs, documents, and any other artifacts of information, having that written artifact is really important. It allows for asynchronous communication: people can absorb context at the time that's best for them.
And it frees us from the stranglehold of meetings as a conduit of information, which just doesn't scale. It's not incredibly unique; other companies do this too. But doing it well is really hard, and doing it consistently is hard.
And certainly, working with folks who might not be accustomed to that type of culture, it can be a transition. But it pays dividends over and over again. Meetings as a conduit for information just do not scale. We're an asynchronous remote company, so we have a workforce that's all over the world.
It just literally doesn't work.
Conor Bronsdon: You need to create focus time, right? You have big problems to solve, and meetings cut into that focus time. It's a tax on communication, so if you can solve that other ways, I love that approach. What attracted you and led you to end up at GitHub, and in this quite unique, special culture?
Christina Entcheva: Yeah, that's a good question. I've been at GitHub just under a year now; I started in May of 2022. Prior to that I was a director of engineering at Thoughtbot, which is a software consultancy in the Ruby on Rails space.
And the culture at Thoughtbot is actually very similar, very biased towards public asynchronous communication. That was actually one of the big factors. I had been at Thoughtbot over five years by the time I left, and I had been in the director of engineering role for about 10 months. I was happy there, things were good.
I just I felt the pull of GitHub, the challenge kinda, someone reached out to me and they kind of nerd sniped me and yeah. And, one thing led to another, but one thing I was extremely motivated by is that same culture, right? I, like transparency is my top value as a leader.
So I definitely saw that in GitHub, in that, like I said, showing-your-work-early type of culture. And that was a big thing that pulled me over. And then also solving problems at scale, right? GitHub is at a certain scale where certain challenges come up that you might not get in a smaller company, and that's just really motivating and exciting to work on.
Conor Bronsdon: I wanna zero in on that transparency piece. You said it's your top value as a leader. Can you explain for the audience why you think it's so important and how you put it into practice with your teams?
Christina Entcheva: So there are multiple facets to transparency, and it's hard not to think about the current macroeconomic climate we're in right now. During this time, I think there are certainly inflection points where there are opportunities for leaders to be straightforward, honest, and compassionate about what's going on and bring folks into the fold.
And I think it's important to take those opportunities. Another big area where this comes into play as an eng leader, all the time, is in giving feedback to your teams: laterally, down, up, all around. I am just so passionate about radical candor, and I love to receive critical feedback, or constructive feedback.
I think feedback is a gift, and I'm so thankful every time someone tells me something I could have done better. And I try to bring that to my other interactions and be honest with folks. It's hard, it's uncomfortable to say, hey, you did that thing and it didn't really land; here's what I observed, and maybe it would be better if you did it this other way. It's extremely uncomfortable. But I do think ultimately it's kindness. I do that for the people I care about the most: I'm gonna give them the constructive feedback because I want them to get better.
And I want folks to do the same for me. I think transparency plays into that.
Conor Bronsdon: How do you approach it with team members who maybe have trouble hearing that critical feedback in an unvarnished fashion?
Christina Entcheva: Certainly when diving into that realm, it's important to be aware of your context and try to suss out whether that person is open to that feedback in the first place.
So before I give any constructive feedback, I would set the playing field and be like, hey, I have some feedback for you. What would be the best venue for you to hear it? Or, certainly, if I know that about someone ahead of time, that's even better.
Do they prefer to hop on a Zoom call and hear it in real time? Do they prefer it written, and need some time to process before we meet in person? So I try to make sure that the person is in the space to receive the feedback, first of all. And how do I deal with someone who doesn't take it on board?
I think it depends on the situation, right? If it's a peer of mine who I'm giving feedback to because I think it's something they could benefit from, but they're not directly in my reporting chain, it's a take-it-or-leave-it situation. If you don't agree or you don't wanna hear it, that's totally fine. Like, you do you.
If someone is in my reporting structure somehow, and it's tied to improving how we work as a company, then it is important for the person to hear it, and there are different ways you might connect that type of feedback. I try to connect it to business impact.
I try to connect it to impact on a person, insofar as that's relevant. I try to help the person see it outside of their own self, see the impact that it has outside of them. I think that helps.
Conor Bronsdon: Sounds like you're understanding their motivation too, where it's, okay, some people are really motivated by personal improvement.
Maybe they just wanna hear that feedback. They're like, yeah, great, I wanna accomplish and grow. Whereas others may take that harder sometimes. But if you frame it, to your point, in the context of, hey, it's challenging for other team members when we approach things this way, and I saw you took that approach with X, Y, Z project.
It's an easier frame for them: oh, I'm impacting the team, let me approach this differently. And then you initiate that conversation. Is that kind of what I'm hearing?
Christina Entcheva: Absolutely, yeah. Motivation completely plays into it. And I love what you said about like how it impacts the rest of the team.
Yeah. For myself, and I imagine lots of other folks, engineering is like a team sport, right? So we're super cognizant of how our actions affect others. And I do think that can resonate with folks.
Conor Bronsdon: I'll say it's been a challenge I've had to learn as a leader myself, where, as I manage teams, I'm like personally very motivated by feedback, right?
Like I have that sense of, oh, I need to fix these things, let me help solve it. And I care about how it's impacting the team, of course, but I have that intrinsic motivation around: give me the critical feedback, I need to know it. And it was an adjustment for me, realizing that other people take that on differently.
And it's also different across cultures, right? In the United States, as an example, people tend to default to giving more positives before a negative to cushion feedback, which is different in a lot of other cultures. And sometimes the feedback is implied rather than explicit.
So I appreciate this approach you're taking with radical candor, but I could see it being challenging, with GitHub's very distributed global workforce, to give feedback one way versus another in certain countries. How do you adjust depending on where your team's sitting?
Christina Entcheva: Yeah, I think that's a great point about culture.
There's this book, The Culture Map. I haven't actually read it yet.
Conor Bronsdon: Kelly Vaughn recommended that book to me on this show a while back, and I've been reading it.
Christina Entcheva: Yeah, I should read it. I think it has a ton of great insights, but you're absolutely right that, culturally, there are many different approaches to this type of thing.
And I do think Americans have a bit of a stereotype of being a little in-your-face, whereas other cultures are much more nuanced. What I would say for me: regardless of someone's physical location, certainly culture is a factor, but working to establish an environment of psychological safety is step one.
If I don't trust someone, I'm not gonna give them constructive feedback, and I'm gonna take their constructive feedback differently than if we had established a foundation of trust. I think regardless of location, it's important to start with that, which is obviously easier said than done.
Conor Bronsdon: What's your approach to establishing that psychological safety and that trust?
Christina Entcheva: The big thing for me, honestly, is modeling behavior consistently. I talk about transparency, working out in the open, asynchronous communication, and caring for people, obviously. I am a people manager, so caring for them in small and big ways.
Celebrating them, showing up for them, giving them opportunities, setting them up for new ones. I just try to model that behavior consistently, and over time, I think people take that in.
Conor Bronsdon: Yeah, exactly. Do you actively elicit feedback from your team members, saying, hey, I love giving feedback, but I need it as well? And if so, how do you approach that?
Christina Entcheva: I try to, yeah. Not every single one-on-one, but fairly regularly, I leave space for: what could I be doing better? Which is a more generic version of that question. It's hard to actually get a real answer to that in real time. I don't know if other people have asked you, hey, what can I be doing better?
When people ask me, I'm like, I don't really know. Or if I do know, I might not want to say it in that moment, right? So it's a tricky one. I often ask folks more specific questions. Like, I'll run a meeting or a project and I'll be like, hey, how do you think that went? What could I have done better? After the fact, which is a little bit of a retro approach.
And with all of these things, even earlier when I was talking about giving other folks constructive feedback: I don't wanna say it's not personal, but it's not personal.
There are many reasons why folks do things that we might not see as optimal. Some things might be outside of people's control. So I don't think of any of it, when folks give me feedback or I give them feedback, as personal blame. It's more, hey, let me bring this into your focus, into your context, something you might not be aware of.
Yeah. And that might help you like look at the situation differently.
Conor Bronsdon: And it's a good way to understand where that approach is coming from for them. Maybe it's a learned behavior from a previous role where they were trying to protect themselves, or maybe they're going through something or distracted by another project and just weren't putting their full attention to it.
Or it could be that they misunderstood the requirements or direction they were given and just needed more context. So there's a lot of reasons, and 15 more I haven't mentioned here. So I think that's a really apt way to think about it. I'd love to zero in on this thread that I see within your work, because I know you also work as a mentor, trying to enable people to grow.
And I can see that kind of idea of wanting to grow new leaders, grow new engineers is really important to you. Can you tell me a bit about your mentorship work?
Christina Entcheva: Sure. Yeah. I came to software engineering from a non-traditional path. I went to a bootcamp, Flatiron School, and then I transitioned to an apprenticeship at Thoughtbot where I stayed for five years.
I am passionate about getting all sorts of folks into software engineering, whether through a non-traditional path or the traditional path. So for a couple of years now, I have been working with a group called Emergent Works, a nonprofit organization that teaches coding and digital literacy skills to formerly incarcerated and justice-involved folks.
It's been an incredible experience. I feel really passionate about changing the face of tech and building an on-ramp for folks that look different than us.
Conor Bronsdon: And we also just need to, right? We need more engineers, so many more devs across the board.
We're not gonna find them just through college pathways, and we're certainly not gonna find the diversity of thought that really can enhance what's happening. So I think that's a fantastic initiative. What have you learned from that process of being involved with these justice-involved individuals and helping mentor and encourage them?
Christina Entcheva: Yeah. So again, Emergent Works is just an incredible organization. The important thing there, again, I think is to start from a space of psychological safety. You're bringing in people who are very different in lots of ways, and justice-involved folks are marginalized in a lot of ways.
So it's important to build those bonds. What I have learned through working with those folks is that they are very adept at learning coding, and they are great product thinkers. The program that I did wrapped up in February 2020, to give you an idea of how close we came up against Covid.
Not that it's necessarily surprising, but it was just refreshing to see it come to life. This isn't charity by any means; these people have so much to offer. It's just that we need to...
Conor Bronsdon: They need a chance to get into the industry and do that.
Christina Entcheva: Exactly.
Yeah. Give folks a chance just like everybody else gets. So yeah, it's been really rewarding.
Conor Bronsdon: That's really cool. And I also just wanna ask you about something I saw as we were doing research for this episode. You're involved with something called the School for Poetic Computation, I believe.
Christina Entcheva: Yeah.
School for Poetic Computation. It's a school in New York. I enrolled in one of their intensives pretty much at the same time as I did the Emergent Works mentorship, so again, 2020. I got the opportunity to be in person, which was lovely. And School for Poetic Computation is pretty much what it sounds like.
It's an experimental computer and art school. For me personally, I'm also an artist; I've had an art practice pretty much my whole life, and the intersection of art and technology is just so fun and exciting to me. The motto for School for Poetic Computation is "less demo, more poetry," which I love.
And there are many facets to what they do, lots of really creative technical projects. The cohort I was part of was called Code Societies, and it was really looking at technology through a social and political lens. I think it was a really nice accompaniment to the Emergent Works mentorship, and I just learned so much in that relatively short amount of time that continues to be relevant every day.
Conor Bronsdon: So I have to ask you, as someone who's gone through this program and is an artist yourself: what are your thoughts on generative AI and how it's changing society? And I can see my producers shaking their heads that I asked this question.
Christina Entcheva: Yeah. I'm so glad you went there. That's actually exactly where I was gonna take it.
Fantastic. Alright, we're on the same page. A lot of the learning from that cohort, which was straight-up academic, a lot of reading, a lot of dense reading, continues to be relevant today, like I said. Generative AI imagery: it's a spicy topic, right?
So here's what I would say more broadly about AI. The AI wave is here. I think we all know it; there's no avoiding it. It's gonna be a revolution. Tools like Stable Diffusion and large language models are available to consumers at a greater rate than ever before, and ChatGPT is gaining a foothold in people's lives very quickly.
It is very exciting; there's a lot to be excited about. As an artist, am I saying, hey, don't use generative AI, don't use Stable Diffusion? I'm not. At the same time, I think there are things to be wary about, and there are potential problems and challenges.
I encourage software engineers, but everybody really, to just get educated about the space. From an engineering perspective, get educated about how AI works. In the grand scheme of things, it's not actually that complicated; I think it sounds more complicated than it is.
Get a little bit of education about how it works, and then get the historical foundation: What have been some of the problems with this type of technology in the past? How are models trained? What are some of the problems that currently exist? What are current applications that might be a little bit problematic?
In Code Societies at School for Poetic Computation, we read Ruha Benjamin, Safiya Noble, Simone Browne. These are amazing academic thinkers who've done a ton of original research in this space. I could talk about this for hours, so I won't get into the details.
It's important to understand, historically and currently, some of the challenges and some of the problems, and to understand that we impart our values into the systems we build, whether we mean to or not. Often, with our best intentions, with intentions to be benevolent, we actually end up doing the opposite.
So keep that in mind, and make sure to build these types of products with the most diverse group of people you can find. I don't think it's realistic to say don't do AI. But if you are going to do AI: know the history, know how it works, and pursue it with a diverse group of people.
I think you're gonna have a better product at the end of it.
Conor Bronsdon: How would you start a primer around the history of AI and what it means for folks who maybe are just starting to get into the subject?
Christina Entcheva: Yeah. Like I said, those three writers were very influential to me.
Safiya Noble writes about Google as a search engine, right? Now we're really getting into the spicy takes, but I'm excited. Google is like the world's source of information, but Google is not a search engine; Google is an advertising company.
And people forget that, and they forget some of the incentives that go into showing you what's on page one and page two. The types of results people get differ depending on your own history and the types of searches that you do. So again, Safiya Noble, I think that's a good place to start.
And so much of these biases are just in the very foundation of seeking information, and you can see how you might find yourself in a bubble. Myself included; I live in my own bubble. We all do. You're literally in a bubble right now.
Conor Bronsdon: I think that's an interesting way to phrase it, because inherently we don't think about pulling back that curtain. We say, oh, I use Google, it's a tool I use that's helpful. But as I'm Googling something, am I thinking about what drove me to this search result? What are Google's incentives behind it? Am I aware that the SEO industry exists? Of course; I know optimizing those results is something people think about.
But it's a great point, because so many of these systems that help run our day-to-day lives are massive databases for machine learning. Frankly, they're the basis behind a lot of these AI tools; they're learning about us constantly and creating algorithms that show different pieces of content to us depending on who we are.
And we talk a lot about it in general, like, oh, the algorithm showed me this, or this popped up. But actually peeling back why that happened is something I don't think is being discussed enough, even now as this conversation continues.
Christina Entcheva: Yeah, absolutely. This was a while back, but people were talking about the different Facebook feeds that someone on the left versus someone on the right might have.
And the reality is that the results we get and the information we consume underpin our understanding of the world and our reality. One thing that keeps me up at night is: is my understanding of the world flawed and incomplete? Am I wrong?
If I'm wrong, I don't wanna be wrong.
Conor Bronsdon: The information silos that are happening. There are so many studies showing that the way we have shaped the internet is creating these massive information silos that people fall into, and it's hard to get out of them. You're exposed to different information that, to your point, shapes your view of the world and reality.
And I think you're seeing it in the massive drift between viewpoints; the poles keep increasing, as in, oh, I'm all the way at the North Pole or the South Pole, depending on the topic. So it's a fascinating area for a lot of study and exploration, and I think it needs a lot of innovation to try to improve the problem.
Christina Entcheva: Yeah. And us, as software engineers, as software leaders, we are positioned to impact this, right? This is a big reason why I got into tech. There's this book, Program or Be Programmed by Douglas Rushkoff, that I read about a decade ago.
Essentially it says tech has an agenda, right? So learn to program if you wanna have a say, if you wanna have a seat at the table. I wanna have a seat at the table, and we're in a position to impact this either way, for good or for bad. So at a baseline, being aware that tech is not neutral, I think, is a good place to start.
Conor Bronsdon: What are other systemic risks or challenges within the industry today?
Christina Entcheva: That's a really good question. Systemic risks or challenges in the industry? I'm trying to unwind us from where we're at today, and I don't know how far I can unwind. Another one that honestly comes to mind is just what tech workers look like, what our tech workforce looks like.
Conor Bronsdon: Lack of diversity in the workforce, lack of diversity informing those models and informing decision-making.
Christina Entcheva: And again, this isn't charity; the lens is not charity. There are multiple studies showing that diverse groups of people create more successful products. So for those for whom the business case resonates, there's a clear business case for having diverse teams. Having different experiences and backgrounds risk-proofs your business in a way that being in an echo chamber doesn't.
I think we've all been in that room where it's just an echo chamber: you're great, no, you're great. Having a diverse group of ideas and experiences helps you identify those risks.
Conor Bronsdon: Exactly. You need that critical feedback. You need to understand the edge cases that maybe are harder to see based only on your personal experience.
I think that's a really resonant point that we maybe don't talk about enough, as far as why this matters and why we need to future-proof our industry by diversifying the thought processes that are being brought in.
Christina Entcheva: Yeah, absolutely. And the population, certainly in the US but around the world, is starting to look different.
So even as we talk about who's in the minority and who's in the majority, those things are changing. So there's also a very compelling business case.
Conor Bronsdon: If you wanna build a product that really sells in Latin America or China or somewhere else, you probably need to not just have those engineers in the US; you maybe need a team in China or a team in Latin America.
And you can see this with a lot of companies, from the US in particular, that have tried to go into Chinese markets and been undercut by Chinese competitors who understand the culture and the needs of that population better. And we can talk about intellectual property theft, all that; that's a conversation as well. But there's a clear lack of understanding of the market for a lot of companies when they make these giant leaps, if they don't invest in diversifying their workforce, building teams, and getting that culture into their DNA as a company.
Christina Entcheva: Absolutely. Yeah. I love that.
One thing that has been on my mind, because y'all primed me for it a little bit, is to talk about what developers should be excited about now. I do think it's actually a little bit related to what we're talking about.
Conor Bronsdon: Yeah. It's kinda the opposite end in some ways. Like we're talking about systemic risks and challenges. Let's bring it back. What should they be excited about?
Christina Entcheva: Yeah. I do think it's related in that, like in our industry, like so much is changing all the time.
There's always something hot and exciting. I think the thing that is most exciting to me is solving customer problems, right? Solving user problems. And you were talking about this earlier, right? What is a problem that's not solved today? What's a problem that maybe hasn't even been identified?
I think of AI, Rust, Go, all of these technologies: they're really just tools to get to a goal. And I think the most exciting goal for me is solving customer problems. That, I think, is the north star that developers should be excited about.
I think sometimes we lose sight of that and get excited about the thing just in order to do the thing.
Conor Bronsdon: I'm like, oh, this is really cool and I can do this. And then I'm like, is this relevant to the user base we're trying to serve?
Christina Entcheva: Me too, exactly. Like, I'm excited about Rust; right now I'm starting to learn Rust a little bit.
And I've certainly been in a space where it's just, where can I use Rust? Let's use Rust. But maybe we shouldn't lead with that. Maybe it's, where is Rust really the most appropriate tool for the problem we have to solve? And certainly with some of my software consulting work at Thoughtbot, I've seen many different companies, many different ways of working, many different technology stacks, and you really wanna make sure your technology stack is optimizing for the problem you're trying to solve.
Not all tech is interchangeable, and you just wanna make sure that you're focused on the problem first.
Conor Bronsdon: Fantastic. I think that's a great note to end on. Christina, I've really enjoyed this conversation. It's been wonderful to dive into AI and the approach you're taking with teams, and I think there are a lot of nuggets in here that leaders, and people working to build toward that kind of leadership, can take away.
Do you have any closing thoughts you wanna share?
Christina Entcheva: Thank you so much for having me. It's been great to be here. Check out GitHub if you're not on GitHub yet, and check out Emergent Works. Thanks so much.
Conor Bronsdon: Definitely, shout out Emergent Works. That's fantastic stuff. And if you're listening and you enjoyed this conversation, check out our YouTube.
You can watch us in this dome we're in and maybe see the visual. It's fun.
A 3-part Summer Workshop Series for Engineering Executives
Engineering executives, register now for LinearB's 3-part workshop series designed to improve your team's business outcomes. Learn the three essential steps used by elite software engineering organizations to decrease cycle time by 47% on average and deliver better results: Benchmark, Automate, and Improve.
Don't miss this opportunity to take your team to the next level - save your seat today.