The hidden costs of pre-computing data


By Elliot Marx

Is your engineering team wasting budget and sacrificing latency by pre-computing data that most users never see? Chalk co-founder Elliot Marx joins Andrew Zigler to explain why the future of AI relies on real-time pipelines rather than traditional storage. They dive into solving compute challenges for major fintechs, the value of incrementalism, and Elliot's thoughts on why strong fundamental problem-solving skills still beat specific language expertise in the age of AI assistants.

Recorded live at the Engineering Leadership Conference.


Transcript 

(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)

[00:00:00] Andrew Zigler: Welcome back to Dev Interrupted. I'm your host, Andrew Zigler, and today we're sitting down with Elliot Marx, the co-founder of Chalk. Elliot began his career at Affirm, where he built early risk and credit data infrastructure, and he also co-founded Haven Money, which was acquired by Credit Karma to power its banking products.

[00:00:19] Andrew Zigler: And today we're talking about his company Chalk and how it's tackling the AI and machine learning compute layer. Elliot, welcome to Dev Interrupted.

[00:00:27] Elliot Marx: Thank you so much for having me. 

[00:00:29] Andrew Zigler: Yeah, we're, we're really excited to sit down with you and I just wanna go ahead and jump into it. You know, we're here on site at the Engineering Leadership Conference and everyone here has been talking about AI and how it's transforming their engineering organizations.

[00:00:41] Andrew Zigler: And you're working on this problem at Chalk. Can you tell us how y'all are entering the field of engineering and tackling how folks are building and using AI to ship software?

[00:00:52] Elliot Marx: Yeah,

[00:00:53] Elliot Marx: absolutely. So, uh, Chalk is building real-time data pipelines, and this is really important as you have all these new models doing [00:01:00] inference right at the edge. Um, so I think the traditional paradigm has been to do a lot of pre-compute, with something like a Databricks or a Snowflake, where for huge data sets you wanna pre-compute a lot of data.

[00:01:10] Elliot Marx: They're wonderful products, but if you wanna fetch data really at the moment that you're doing some inference, especially if some of that data is expensive, especially if you need to do a lot of compute at that time, that's kind of where we come into play.

[00:01:21] Andrew Zigler: Your background touches on financial stuff, so it's regulated, you know, very hard to innovate within that space, because you think about privacy and security and a lot of compliance. Right. And so is that something that y'all are tackling as part of Chalk? How do you plug into the infrastructures of enterprises that are working with sensitive data?

[00:01:39] Elliot Marx: Absolutely. So I got my start in FinTech, as you know, at Affirm, where we were doing transaction-based underwriting. So at the time, when someone put something in their cart, we wanted to underwrite them for credit. It wasn't like an overnight process where you have a revolving line of credit. It was a decision based on the item that you have in your cart, and that really informed a lot of what we do [00:02:00] today, which is: how can you actually pull the data at that moment, as opposed to pre-computing something, because the data just wasn't available.

[00:02:06] Andrew Zigler: Yeah.

[00:02:07] Elliot Marx: I think that, you know, these days we work with lots and lots of FinTech companies. So, um, we power about a third of the world's debit card transactions, for example. We work with Socure, Persona, and Alloy, kind of like the top three anti-fraud platforms. And we process, I don't know, we've probably processed every single social security number in the world.

[00:02:25] Elliot Marx: So we really work with very, very sensitive data, and I think for a lot of those companies, we're kind of an opportunity to provide more visibility into the decision-making process and introduce the governance that they wanna see.

[00:02:36] Andrew Zigler: Yeah.

[00:02:36] Andrew Zigler: And so how do you get your engineers aligned around this privacy and security? It can definitely be a bit of quicksand to move through; there are a lot of pitfalls involved. So how do you equip and build an engineering organization to innovate within those kinds of constraints?

[00:02:56] Elliot Marx: Absolutely. Well, we're building a query planner, which is kind of a fun, it's like a [00:03:00] database engine. It looks like the internals of a Postgres or of a Snowflake. And a lot of what we're doing is tracking how the data is flowing through these query plans from a technical perspective. And so you get the engineers motivated to do this because it's a fun problem to solve, to figure out, um, how to build a new database engine and how to track the data as it goes through.

[00:03:18] Elliot Marx: From a deployment perspective, we deploy most of our software into the clouds of our customers, and this is great for them because they don't have to worry about their data going to a new place or someone new being able to see it. We're going into FedRAMP deployments, we're in fully air-gapped deployments, and it's easy to make that work and fit with any compliance regime when it's in a cloud that you're already operating in.

[00:03:38] Andrew Zigler: So what does it mean to be building a compute layer for other teams and software to build into? Because I think for a lot of folks, the need for AI compute and the costs involved with it is a new conundrum that a lot of CTOs are grappling with in conversations with their CFOs and their boards: justifying the [00:04:00] expenses of how much they're putting into generative AI and their AI workflows, and then tracking the outcomes and the impact of that.

[00:04:09] Andrew Zigler: I'm curious, like, what does it mean to be building in that space, and what are the kinds of questions that engineering leaders ask when they come to you for the first time?

[00:04:16] Elliot Marx: Yeah, absolutely.

[00:04:18] Elliot Marx: I, I mean this stuff can be wildly expensive. Um, and computing all this data can be wildly expensive, and I think that's something we haven't really like thought about.

[00:04:25] Andrew Zigler: Yeah, we're still grappling with it a bit, right? Yeah.

[00:04:28] Elliot Marx: Um, we actually just put out a case study with one of our partners, um, Whatnot.

[00:04:31] Elliot Marx: They're like the largest live streaming marketplace in the United States. And, um, they were previously kind of computing all these predictions for, um, like, users and what they might want to view that day. Um, and then they were throwing away about, like, 90% of it. So one of the reasons for, uh, moving stuff to real time is, kind of counterintuitively, to save money, because you only compute on the data that you need to compute on, as opposed to kind of computing on everything.
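The tradeoff Elliot describes here can be sketched with some back-of-the-envelope arithmetic. All of the figures below are illustrative assumptions, not Whatnot's actual numbers:

```python
# Illustrative cost comparison: pre-computing predictions for every user
# versus computing on demand only for the users who actually show up.
# Every figure here is a made-up assumption for the sake of the sketch.

total_users = 1_000_000
daily_active_fraction = 0.10        # ~90% of precomputed work never viewed
cost_per_prediction = 0.0001        # dollars of compute per prediction

precompute_daily_cost = total_users * cost_per_prediction
on_demand_daily_cost = total_users * daily_active_fraction * cost_per_prediction

savings = 1 - on_demand_daily_cost / precompute_daily_cost
print(f"precompute: ${precompute_daily_cost:.0f}/day, "
      f"on-demand: ${on_demand_daily_cost:.0f}/day, "
      f"saving {savings:.0%}")
```

The catch, of course, is that on-demand compute moves the cost from storage to latency: the savings only materialize if the real-time path is fast enough, which is the part Chalk is selling.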

[00:04:59] Andrew Zigler: Or double [00:05:00] computing in some cases,

[00:05:01] Elliot Marx: Absolutely. I mean, the other time you see this pop up is when you have to get data that has a cost associated with it. So sometimes it's, like, you know, a compute cost that you're paying because you're running an expensive model, but sometimes it's literally dollars and cents that you're paying to a vendor because you're asking them to evaluate something.

[00:05:18] Andrew Zigler: Yeah.

[00:05:18] Elliot Marx: Um, as an example of that: we work with Socure, for example, like the largest, uh, to my knowledge, anti-fraud provider in the United States. And they run, you know, a hugely complex set of data that they process in order to make a decision. And banks and credit unions and fintechs are sending them a packet of information.

[00:05:36] Elliot Marx: Like, here's an email, here's a social security number, here's an address. Like, is it real? Should we let it through? Um, and they couldn't possibly pre-compute on, like, all possible combinations of emails and phone numbers and social security numbers, because it would take, you know, more than the lifetime of the universe to compute. Maybe it's just, like, totally crazy to compute over that. Um, [00:06:00] so they have to compute their data on the fly. And I think that's, you know, also a kind of cost-based reason. Um, so I think you'll see a lot of compute in this new paradigm shifting to the time that you're making the inference call, as opposed to trying to pre-compute everything, in part to save money.
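The combinatorial argument can be made concrete with rough numbers. The magnitudes below are purely hypothetical, not Socure's real inputs:

```python
# Why pre-computing every identity packet is infeasible: even with a
# generous throughput assumption, the combination space dwarfs the age
# of the universe. All magnitudes below are rough assumptions.

emails = 10**9             # order-of-magnitude distinct email addresses
phones = 10**10            # possible 10-digit phone numbers
ssns = 10**9               # possible 9-digit social security numbers

combinations = emails * phones * ssns          # 1e28 candidate packets

evals_per_second = 10**6                       # generous cluster throughput
years_needed = combinations / evals_per_second / (365 * 24 * 3600)

universe_age_years = 1.38e10
print(f"{years_needed:.1e} years needed vs. {universe_age_years:.1e} "
      "years since the Big Bang")
```

Even shaving several orders of magnitude off any of these estimates leaves pre-computation hopeless, which is why the only viable design is evaluating each packet at request time.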

[00:06:15] Andrew Zigler: Yeah. And you know, I'm curious too, as part of that: there's the when you do it, and there's also how fast you get the response. How do you think about latency at Chalk, and what are the things that you find most important about it?

[00:06:28] Elliot Marx: Yeah, it's totally critical. Um, you know, there are certain areas where, like, the latest gen AI stuff just cannot go. Um, like, a lot of the recsys systems, they wanna see, like, P99s at, like, 20 milliseconds or something like that. And if you think about running, you know, anything through a chat completion model, there's just absolutely no way.

[00:06:46] Andrew Zigler: right.

[00:06:47] Elliot Marx: Um, so what you're still seeing is, like, you know, XGBoost, LightGBM, kind of carrying the day, um, and needing to compute data really, really fast. Part of where we come into play, and, like, my philosophy around how data science [00:07:00] and machine learning engineering should work, is that SQL and Python are the languages of data science.

[00:07:05] Elliot Marx: So we wanna meet people where they're at with that, and then just do everything we can to run those fast. Um, and that includes taking people's Python code, looking at the abstract syntax tree, or the AST, and turning it into SQL-style expressions that we can evaluate, so we can run Python many orders of magnitude faster than just actually running Python, by treating Python as a language and giving it a new vectorized interpreter.
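A toy version of the AST trick Elliot describes might look like this. Everything here, including the `vectorize` helper and the `risk_score` example, is an illustrative sketch, not Chalk's actual implementation, which compiles to SQL-style expressions in a purpose-built engine:

```python
import ast

import numpy as np

# Take a scalar Python function's AST, pull out its return expression,
# and re-evaluate that expression over whole NumPy arrays, so the
# arithmetic runs vectorized instead of once per row.

SOURCE = """
def risk_score(amount, balance):
    return amount / (balance + 1.0)
"""

def vectorize(source: str):
    func = ast.parse(source).body[0]   # the FunctionDef node
    ret = next(n for n in ast.walk(func) if isinstance(n, ast.Return))
    expr = ast.Expression(body=ret.value)
    ast.fix_missing_locations(expr)
    code = compile(expr, "<vectorized>", "eval")
    arg_names = [a.arg for a in func.args.args]

    def batched(*arrays):
        # NumPy broadcasting applies /, +, etc. element-wise in C
        return eval(code, {}, dict(zip(arg_names, arrays)))

    return batched

fast_score = vectorize(SOURCE)
scores = fast_score(np.array([50.0, 200.0]), np.array([99.0, 0.0]))
print(scores.tolist())   # [0.5, 200.0] in one vectorized pass
```

The user still writes ordinary per-row Python; the speedup comes from evaluating the same expression tree against whole columns at once rather than interpreting Python bytecode row by row.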

[00:07:30] Andrew Zigler: And so, you know, in that world where maybe people start with an idea, a data scientist opens up a Python notebook and starts messing around with some data sets, they start cooking, right? And then you realize, like, oh, this is something that's gonna scale. That's really where Chalk then comes in: it's the platform that allows them to take that research, that work, that incremental bit that's been done by an engineer, and then take it to production, to scale, and put it at the edge for the masses.

[00:07:55] Elliot Marx: I think, like, traditionally you write this stuff twice. Like, you do it once, just like you said, in a notebook, [00:08:00] and then you go talk to the engineering team and you say, hey, I had this really good idea, it worked out great. Like, can we reimplement it? Yeah. Here's my demo. Like, let's go. Uh, and then you either log it in production for a really long time, or you ship it and kind of hope for the best.

[00:08:12] Elliot Marx: But sometimes it's a little different. Like, when I was at Affirm, we had this bug we called the Affirm money bug. Um, we started, uh, doing inference on cents and doing training on dollars. And, um, so when we went to prod, we were just so confused about the amount of money we were underwriting people for, that we ended up saying yes to everyone who applied for a loan.

[00:08:34] Elliot Marx: Um, and because of that we lost, like, a ton of money. And this was, like, the early days,

[00:08:37] Andrew Zigler: Right.

[00:08:38] Elliot Marx: more than 10 years ago. Yeah. Um, and we wanna help people avoid that type of problem. We want you to just write it once in, like, the canonical language for you, whether that's SQL, Python, DataFrames, um, and then have it be performant enough when you want to take it to production so that you don't have to rewrite it.

[00:08:54] Elliot Marx: 'cause that sucks and no one wants to do that. Um, but also so that it's exactly the same as the way that you did it in that [00:09:00] demo.

[00:09:00] Andrew Zigler: I'm curious too, because this is a very fascinating and high-impact problem that you solve. And it means a lot to your customers and to their end users to make sure that they have the best experience possible, and it's accurate and it's fast. And, um, within Chalk, how do you foster a customer-centric attitude within your engineers and bring them closer to those problems?

[00:09:25] Elliot Marx: We have a Slack channel with every company we work with, um, which is a little overwhelming, uh, but it brings us really, really close to our customers. Um, they kind of set our roadmap, uh, and that's how we figure out what we're building. It's never, like, confusing what we should be doing, because we're always being told so much by the people we work with what to do.

[00:09:48] Elliot Marx: Um, I think that to build a really great company, too... I mean, sometimes it doesn't work this way. Sometimes you're, like, the origin of, you know, Figma or of [00:10:00] Snowflake, where people go in a room and quietly, for like three years, make something. It comes out and it's like, look, we made this rendering thing in C++ for the web, with, like, our own font engine, and, like, okay.

[00:10:11] Elliot Marx: I mean, totally, that worked. Or, you know, Snowflake was a similar story with, like, ex-Oracle people. I think the way you build a really, really great company, though, is you're very incremental. We believe in incrementalism: get something out there that works for somebody, that does something, and then make it better after you've got it in somebody's hands.

[00:10:28] Elliot Marx: 'cause you don't really know if it's good or not. Like you have to be told whether or not it's good.

[00:10:31] Andrew Zigler: We've been talking about

[00:10:32] Andrew Zigler: that a lot here, that iterative approach of build it and then get it in front of the customer, put it in their hands as fast as possible, as soon as possible, get their honest, brutal feedback, and then go back to the lab, reiterate, and then do the same thing. And that incremental building is the secret to success.

[00:10:46] Andrew Zigler: Definitely. And how do you, for your engineers, um, break down silos within their different specializations and the things that they focus on and get everyone aligned around solving the same problems?

[00:10:58] Elliot Marx: Yeah. You know, I would [00:11:00] say, I told people you can work on whatever you want. Uh, it needs to be loosely aligned with the direction the business is going. Yeah. But I think it matters a lot more that someone has, like, a big vector of progress in their preferred direction that projects down onto the direction the company wants to go, than it matters that someone doesn't deviate at all from what the company's doing.

[00:11:19] Elliot Marx: Um, so we really encourage people to be excited about the things they're working on, and if they're not excited about it, to find something else. Um, we have, like, all the biggest and coolest systems problems that I think there are out there. Like, one of the folks we're working with told us they have 1% the scale of Facebook, and, you know, our team's like 55 people.

[00:11:42] Elliot Marx: Or 60 people. So you can come and work on really massive problems, um, but have huge ownership over them. Like, the ratio of engineers to scale at Chalk is just so much better than at Facebook. Um, and so I think it's hard to [00:12:00] find these types of systems problems at startups.

[00:12:04] Elliot Marx: It's easy to find 'em if you go to Google or you go to Facebook or whatever it is. But, um, then you're relegated to, like, you know, the Windows Vista compatibility layer for the, you know, X, Y, Z thing.

[00:12:17] Andrew Zigler: Yeah. And as an AI company building an AI tool for folks to build their own AI solutions on top of, how do you foster this kind of, like, AI-native incubator mentality around constantly experimenting with these new tools? And are there practices that you've been doing internally to help foster that curiosity?

[00:12:37] Elliot Marx: I mean, we give people credit cards and then we say, like, try some tools out.

[00:12:41] Andrew Zigler: Okay.

[00:12:43] Andrew Zigler: Oh, okay. So you hook 'em up with the access to go and buy that Claude Code.

[00:12:48] Elliot Marx: Yeah. Like, we have corporate accounts where it makes sense, or there's, like, discounts or, you know, privacy or IP protections. But, um, yeah, absolutely, anyone's allowed to try out whatever they want.

[00:13:01] Elliot Marx: And I think what's interesting is I see different people try different things, and I don't think right now there's, like, an obvious standard way to do stuff with AI tools to make yourself more productive. And I think what's right for one person might not be right for another person. Yeah. Like, Claude Code might be right for someone.

[00:13:17] Elliot Marx: Cursor might be right for someone. Like, plain GitHub Copilot is what's right for me. Um, like, you can pick whatever you want, and I think until we get to a place where it's, like, so obvious that you have to use these tools... yeah, we'll just keep experimenting.

[00:13:32] Andrew Zigler: Yeah. And as somebody in your position, how do you think the role of a technical leader is going to evolve in the next year? What are the skills that you think are most important for them to develop?

[00:13:42] Elliot Marx: I don't think they're gonna change. Like, the skills that are most important are listening to people at your own company and at places you work with, um, and prioritizing. It's so obvious that so many things would be good. I think what's hard is saying no to doing [00:14:00] certain types of problems or projects.

[00:14:02] Elliot Marx: And I don't think that's gonna change 'cause AI is here. I think it's only gonna be more complicated, because it's so easy to make, like, a v0 of something that you've gotta decide to, like, cull even more opportunities than before. But I think it's the same skill set of listening, and using that listening to prioritize your work.

[00:14:18] Andrew Zigler: What about the skill sets for the engineers that you would hire into an organization? What are the things that are gonna be most important for them to build?

[00:14:25] Elliot Marx: We just want smart people. Um, I don't really care that you've, like, written C++. Like, we write most of our code in C++ and Rust. Um, like, do a lot of people come in knowing C++? Like, I don't know. Some. Like, all the Citadel people we work with have done a lot of that before, but it's not particularly important. I mean, that's like such a teachable thing.

[00:14:42] Elliot Marx: Yeah.

[00:14:43] Elliot Marx: Um, I think, you know, it's harder to like get the fundamentals. And I think having like, you know, a background in like math or like CS theory or like systems programming or something like that is like probably gonna take people farther than understanding like, oh, I'm like a [00:15:00] great prompter. It just feels like a learnable skill to me.

[00:15:03] Elliot Marx: Or like, I'm great at writing rust or something. It's like, cool. Amazing bonus points. But yeah, like do you wanna learn and, and are you like, capable of learning?

[00:15:11] Andrew Zigler: And so how do you, how would you screen for that?

[00:15:14] Elliot Marx: We try to ask really practical questions. Like, um, I mean, we ask a compilers question that's based on, I mean, a problem we've solved, like a little mini version with little unit tests, and you kind of walk through it. And I guess, I think part of an interview is that you want to be selling the people who are gonna pass the interview on, like, working there.

[00:15:33] Elliot Marx: And I think part of what we have to offer is really fun problems to work on, genuinely. So we want our interviews to be reflective of the work, 'cause we think that gives the people we're talking to a great opportunity to figure out if this is something that they would like.

[00:15:48] Andrew Zigler: And since you're working on this infra layer, there's a lot of opportunities for you to be part of a larger ecosystem and conversation, and I'm curious if Chalk is doing any kind of open source work or if there's things that y'all have kind of rotated into as, as being [00:16:00] part of that ecosystem.

[00:16:00] Elliot Marx: Yeah, absolutely. So we're big contributors to a project called Velox, uh, by Facebook. Um, it's our query execution kernel. Um, it's, like, how we do joins and all of that. Uh, we spoke at their conference, um, this summer and talked about a bunch of our work there. And we're gonna be open sourcing, um, a really big new project, um, later this fall that I'm super excited to talk about.

[00:16:21] Andrew Zigler: Is it a secret?

[00:16:22] Elliot Marx: A little bit. It won't be secret for long.

[00:16:25] Andrew Zigler: Then we'll have to get the scoop on that in a follow-up episode so folks can check it out for sure. And you know, I'm curious, where has your own passion for open source come from?

[00:16:34] Elliot Marx: I mean, I've just benefited so much from open source. Um, I mean, writing software is, like, my job a little bit, but it's also my main hobby. Um, I love writing software. Um, and, uh, we all benefit so much from open source that, um, you know, you want to be able to do something similar too.

[00:16:55] Elliot Marx: I want my company to keep going on and like making great software and for that, like we need to like have a business.

[00:16:59] Andrew Zigler: You, like, [00:17:00] build a legacy that way.

[00:17:00] Elliot Marx: Yeah, open source is really enduring. Um, and I think it feels good to be able to put something out there. And I also just love the, like, competitiveness of it.

[00:17:10] Elliot Marx: It's, it's such a pure thing where you put it out there and like anyone can evaluate it and say if it's good or, or not.

[00:17:15] Andrew Zigler: Yeah. No, completely. I think that there's, like, a competitiveness: who's gonna make the best cutting-edge tool? We all push ourselves to ship these amazing things and put 'em in everyone's hands, and then it lifts everyone up; it raises all of the ships. And I love that you're, like, a coder at heart.

[00:17:31] Andrew Zigler: You know, you love building software. So I've asked this of everyone who's sat down in the seat, and I'll ask it of you: are you vibe coding?

[00:17:37] Elliot Marx: I'm not really vibe coding. Uh, okay, I've tried out Claude. I'm done with trying new editors. So, like, I've been writing in IntelliJ for 20 years or something. Like, I'm gonna die writing in IntelliJ. So that's it for me. I'm done. Um, that really leaves, like, okay, something that runs in the terminal or something that runs in IntelliJ.

[00:17:58] Elliot Marx: So IntelliJ has, [00:18:00] like, full-line completion local models that aren't terribly good. I use GitHub Copilot, but really, I love it for, like, error messages and stuff like that. And then I use Claude Code for, kind of, like... I really don't like writing front end, and so anytime I have to touch a front-end project, I try to Claude Code it, and then I, you know, ask somebody to review it for me.

[00:18:23] Elliot Marx: Yeah.

[00:18:24] Andrew Zigler: So do you think that, um, these tools are a way to help elevate engineers and their abilities as a generalist to not get stuck and to be able to

[00:18:33] Elliot Marx: absolutely. I think it's amazing for understanding too, like if you're looking at some code and you don't understand it and you want an explanation, like, that's so fabulous.

[00:18:40] Andrew Zigler: Yeah, just ask it. One of my favorite things now is, you know, you go to, like, a new repo, or you find something open source on GitHub and it's really cool, and you're like, oh, this kind of makes sense, I can maybe figure it out. And then maybe you'll download it locally, and then you just open it up in the IDE, and now you can just ask the IDE, like, what is this?

[00:18:58] Elliot Marx: Oh, it's like the best [00:19:00] search tool in the world, right? It's the best search tool we've ever made. And so if you have a problem that you would traditionally, like, Google, or look up on Stack Overflow or something, I think now the obvious answer is, like, oh, you just ask, you know, one of the best models, like an Opus or whatever it is: oh, hey, what do you think of this? And there you go.

[00:19:19] Andrew Zigler: So what do you think is next on the horizon for chalk?

[00:19:22] Elliot Marx: Yeah, absolutely. So that open source project I mentioned is a really big deal for us. Um, and I think that's gonna be, um, I don't know, fun, to just be able to put something out there and kind of get a community behind it. So I'm, like, so excited about that. Um, we have a SQL interpreter on top of Chalk.

So, like, under the hood there's SQL, but, um, we haven't exposed it as a way to query; we've always done it through, like, API clients. Um, so putting SQL on top of us kind of puts us more in the, I don't know... we always say we want to be a next-generation Databricks. Like, that's our aim. You know, people sometimes talk about it as a feature store.

It's, like, a pretty bad business and not, like, a super interesting tool. It's kind of just a cache [00:20:00] on top of a Redis or DynamoDB. Um, I think the query planner we're building is where stuff is really amazing. And this, putting SQL on top of Chalk, kind of positions us, like, you know, a little more credibly.

Like, oh yeah, I see why this is like Databricks now.

[00:20:14] Andrew Zigler: So, yeah, so you see yourself exactly as that, a building block that's gonna fit into these larger puzzles and solutions that people are building in the future.

[00:20:21] Elliot Marx: Yeah. We, we work with a lot of really scaled companies. Like these are companies that have raised like a couple hundred million dollars or they're like Fortune 500

[00:20:29] Andrew Zigler: they're traditional enterprises that can't move fast and they need to plug into something like chalk to take advantage of the market.

[00:20:34] Elliot Marx: Yeah. Or they're like big fast moving startups, but these kinds of companies are like, they were successful before us with their like data stuff.

[00:20:40] Elliot Marx: Right. It's not like we showed up and like, oh, we can finally do our

[00:20:44] Andrew Zigler: Right, exactly.

[00:20:44] Elliot Marx: Right. They just wanna be able to move faster. They wanna process data more quickly. They wanna run new types of workloads that they weren't able to do at inference time before. Um, and so we need to interop with everyone's existing systems really well.

[00:20:56] Elliot Marx: Like, we don't wanna invent a new storage format. I think that's, like, crazy. Iceberg is [00:21:00] great. We should all be using Iceberg. Um, and, like, we're not trying to innovate on storage formats. We're trying to innovate on how fast you can process that data, and interop, and federate even SQL queries to, like, other systems.

[00:21:12] Andrew Zigler: Amazing. You know, Elliot, it's been incredible to have you. You're in our iconic Dev Interrupted dome. I mean, you're in the bubble. You're in the hot seat. And for those listening, you know, definitely go check out the YouTube channel, 'cause we are literally sitting in a dome together in the real world, chatting about AI and how it's transforming organizations.

[00:21:28] Andrew Zigler: And Elliot, I'm really thankful for you coming down here today and sitting down with us here on Dev Interrupted. Really excited to follow your story at Chalk. And where can people go to learn more about you and what you're building?

[00:21:38] Elliot Marx: Yeah, please go to, uh, chalk.ai. We'd love to hear from you.

[00:21:41] Andrew Zigler: Amazing. Well, we're, we're gonna share that with our listeners and thanks again for joining us and to our listeners, thanks for listening to Dev Interrupted.
