Practical AI – Episode #267

Private, open source chat UIs

with Danny Avila of LibreChat


We recently gathered some Practical AI listeners for a live webinar with Danny from LibreChat to discuss the future of private, open source chat UIs. During the discussion we hear about the motivations behind LibreChat, why enterprise users are hosting their own chat UIs, and how Danny (and the LibreChat community) is creating amazing features (like RAG and plugins).



Fly.io – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io and check out the speedrun in their docs.

Notes & Links



1 00:00 Welcome to Practical AI
2 00:43 Practical AI webinar
3 01:33 AI pushback
4 02:45 Danny LibreChat
5 05:57 Own your own data
6 08:04 Large-scale applications
7 09:42 LibreChat showcase
8 13:04 Switching models in threads
9 16:47 RAG performance
10 17:53 Multi-modality
11 19:24 Prediction Guard use case
12 25:03 LLM evaluation tools
13 26:25 Plugins?
14 28:16 LibreChat community
15 29:52 Organizations adopting LibreChat?
16 30:54 Factuality checks
17 32:37 Integrating Flowise & crewAI
18 34:23 Future of LibreChat
19 36:45 Thanks for joining us!
20 37:38 Outro




Play the audio to listen along while you enjoy the transcript. 🎧

Welcome to our next Practical AI webinar. This is our second webinar or live event that we’ve done, Chris. We’ve always done the podcast prerecorded, and it was fun last time to talk about text-to-SQL live, and now to have another chance to do a live webinar. Are you enjoying these?

I am enjoying them. I think we need to do these more often. They’re a lot of fun.

Yeah. And today – so I’ll kind of frame the conversation for today, but first let me also welcome Danny from LibreChat. He’s joined us to talk about the topic today. Thanks for joining, Danny.

Yeah, of course. It’s an honor to be here.

Yeah. Well, thank you for everything you’re doing on the LibreChat project and in the community. Today, as those that are in the webinar know, and maybe if you’re listening later, we’re going to be talking about crafting the next generation of AI chat interfaces… And just to kind of frame the setup for this - we’ve talked about this many times, and still frequently, even this week and last week, I hear people saying “Oh, my company doesn’t let me use the ChatGPT interface.” Or even literally today, like three hours ago, I was in a meeting with a number of customers, and the question was “How do I get a chat interface that allows me to switch between models and try different things with different models? Like, if I want to try LLaMA 3 or something like that, are there ways for me to do that?” Are you still encountering that as well?

Literally every day. And I’m very sensitive to it. You may recall that I was getting after teachers online, and a teacher pointed out that “Hey, this is not our choice. It’s the school system.” So I’ve been super-sensitive to this ever since then. And we have challenges… I’m hoping today can put a big dent in that one.

Yeah, great. Well, I’m super-happy that we have Danny with us, because this is what Danny has devoted a huge amount of energy to with LibreChat - both in terms of providing an open source chat interface, and providing a chat interface that allows you to plug in different AI systems, whether that be OpenAI or many others, closed source and open source types of models and systems, and even providing functionality related to RAG, and plugins, and other cool stuff. So I’m going to pass it over to Danny now and let him share a little bit about the background of LibreChat - what it is, how he views the need for private or open chat interfaces, what they’re trying to accomplish with LibreChat, and how they see that fitting into the industry. So over to you, Danny; looking forward to this.

Yeah, absolutely. Part of the original idea was kind of inspired by a ChatGPT leak. I don’t know if you remember this, but there was someone whose messages were being seen by a different user…

You know, the user was from Poland or Russia, and someone woke up one day and all their messages were in Polish. That surprised me, but it really planted the seed for what I wanted to see. I thought that was just a basic thing to overlook, and I started crafting from that impetus. And yeah, I think it’s inherently completely private, with the flexibility of having remote stuff in there too, which I think is important…

But yeah, it honestly started off as a learning experience. ChatGPT had just come out and rocked everyone’s world, so I was like “Wow, I really want to learn how this interface works. What are they doing here?” And I had just started learning about UIs in general, and web development… So it really interested me. And it was such a huge tool in my learning as this thing was being built out - but also, there was that need right away, because I posted it on GitHub, and the next day I had six stars, and I was blown away. I was like “What? Six stars?!”

“That’s great. Who’s looking at this?”

Yeah, I was already getting picked up by search algorithms, or GitHub’s algorithms, and I was just totally blown away by that, and totally motivated. A lot of people started commenting right away on what they wanted to see… And I think having access to these tools sooner rather than later is going to be a huge thing, no matter what your team size is or what your business is. And obviously, you have to tread carefully with the privacy side, but I think I’ve built something battle-tested at this point - and thankfully, it’s as much, if not more so, a contribution from the people using it, who have really helped me along the way.

[00:05:57.29] You mentioned the initial problem that you saw, that motivated you to go down this rabbit hole, which was seeing others’ messages - which is definitely a piece of it, in terms of how an application like this manages state, and data, and that sort of thing… What did you find going down that rabbit hole? Is there a way you can categorize the main things that people have come to find useful about having their own chat interface, rather than one provided by a model provider? Data, and privacy… But what are those main features or things that are on people’s minds when they’re thinking about having their own interface?

I think for me too it’s just owning your own data. And data is like the new commodity. It’s so valuable, even to these big AI companies. They’re constantly releasing their own interfaces, that are cutting edge, I might add, but they’re also looking to collect data. And I think a trend I want to see in tech, and especially from the open source world, is just owning your own data, that stays between you and these large language models and your company. And you really have that luxury through this app. So that’s a big driver for me, and I think that’s a big component for a lot of people.

Also, as you’re learning from these things, I think it’s so valuable to kind of categorize and piece through certain conversations you’ve had in the past. And that’s why one of the main features that’s been a mainstay since the very beginning was being able to search your messages. And to this day, it’s not a feature on ChatGPT or many other interfaces. So I think it’s very interesting to see that play out, but I also know that for a lot of people, just that one simple feature gets them on board.

I’m curious, as we talk about the applicability of this for, for instance, large corporations… I work for one of those large corporations, which built its own interface some time back; and others will have as well. But going forward, that takes a lot of maintenance, a lot of concern. If you were in front of, say, a chief digital and AI officer or a CTO for a large corporation that may have already created its own, what would be your pitch for saying “Come over to LibreChat because of X”? How would you convince the Fortune 500 companies out there that this is the way to go, rather than continuing to invest on their own, given the sunk costs of whatever investment they’ve already made?

Number one, it’s completely open source, it’s got a lot of contributions, and there’s nothing being hidden in terms of its interoperability. Number two, it’s highly configurable with any kind of intranet network you might want, or it can be completely sealed and even work with large language models without needing to hit some kind of remote service - it can run entirely over local connections. It really just depends on the admin’s level of expertise in connecting all these things. And even just using the default Docker Compose, you can spin up something that’s only available to you; if you’re using insecure default variables and things like that, it will warn you right away. So yeah, I think those are the top things I’d say to try to convince someone.

Well, I think people are eager to see a chat interface. We’ve been talking about them… So we’ll kind of let you go from here and show what you want to show, and I’m sure we’ll have some questions and thoughts as you’re showing these things, and… Yeah, if you could talk through kind of the demo, and what’s on your mind as you’re thinking about the different features that you’re showing.

[00:10:04.28] All look good?

All looks good.

Great. So yeah, I’m running this locally, and I’m using Ollama; I have it hosted on my computer, so I’ll just write “Hi there” and it should – it usually takes a second to load up.

And Ollama, for those that aren’t familiar - do you want to describe that just a second?

I guess it’s hard to find a specific term for what it does, but basically it helps you manage local large language models. It helps you pull down their latest build files, and then it helps with the prompt drafting process, and serves them on an API. So it just makes them really accessible wherever you can run them.
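To make that concrete, here is a minimal sketch of talking to Ollama's local API, which by default listens on localhost:11434 and streams newline-delimited JSON from `/api/chat`. The helper functions build the request body and reassemble the streamed fragments; the actual HTTP call (commented out) requires a running Ollama instance, so a simulated stream is used for illustration.

```python
import json

# Ollama serves a local HTTP API (default: localhost:11434). A chat request
# names a model and a list of messages; the streamed response is a sequence
# of newline-delimited JSON objects, each carrying a content fragment.

def build_chat_request(model, messages):
    """Build the JSON body for Ollama's /api/chat endpoint."""
    return json.dumps({"model": model, "messages": messages, "stream": True})

def collect_stream(lines):
    """Join the content fragments from a streamed chat response."""
    parts = []
    for line in lines:
        if not line.strip():
            continue
        chunk = json.loads(line)
        parts.append(chunk.get("message", {}).get("content", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

# Sending it for real (requires Ollama running locally):
#   body = build_chat_request("llama3", [{"role": "user", "content": "Hi there"}])
#   urllib.request.urlopen("http://localhost:11434/api/chat", data=body.encode())

# Simulated stream, standing in for the HTTP response:
stream = [
    '{"message": {"content": "Hi"}, "done": false}',
    '{"message": {"content": " there!"}, "done": true}',
]
print(collect_stream(stream))  # Hi there!
```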

[unintelligible 00:10:46.26] and now that it replied, the next couple of replies should be a little quicker. But basically, this interface should look pretty familiar to a lot of people - unabashedly taking a lot of inspiration from ChatGPT. There are just a couple of core things… Like I mentioned, the message search - if I search here, it’s already picking this up. It’s a previous conversation I had; I was testing some file there. And aside from that, going back to this - this is kind of segmented, but it’s for people who have a need to set more custom parameters, or even just set instructions here and make sure it generates what they want to see… So I’ll say “Make sure to write code in Markdown.” And I’ll say “Write me a recursive Python function.” Yeah, so it’s doing the job. Whether or not it needed my instruction depends… But it had it, and so it steered it right away to use Markdown, which gets rendered like this.

It looks beautiful. And it even has like the copy code, and the nice sort of Edit button, Copy, all that stuff that one would expect.

And really, it’s pretty simple too, and I like that simplicity. I’ve seen a lot of interfaces kind of get lost in the technical side of it. And I’m sure that has an audience, and those are great interfaces for certain technical things… But something about this is just immediately accessible, I think. And of course, we mentioned that we could switch AI providers - I heard someone recently call this the “Gotta catch ‘em all” Pokémon of AI. But this is just a little showcase of all the different ones we can use. I personally like Groq, just because of its speed… And it’s blazing fast. This is running LLaMA 3 70B.

And this was like also a switch… So in the interface that you’re showing, you’re having a conversation… It also was a switch between Ollama and Groq in the same thread. Am I understanding that right?

Yeah, correct.

That’s awesome. Yeah, so you can kind of – and does that message thread sort of history carry through to the different models, I guess?

Yeah. So that’s where the database comes in, just keeping track of the conversation… Not just the back and forth, but also any changes you might make. So if I make changes here on the fly, that will get recorded with the conversation state.
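The conversation-state idea Danny describes can be sketched as a message tree: each message records its parent and the model that produced it, so a thread can switch providers mid-conversation and edits can fork branches without losing the old ones. This is an illustrative schema, not LibreChat's actual database layout.

```python
import itertools

_ids = itertools.count(1)

class Conversation:
    """Toy message store: each message links to its parent and notes
    which model produced it, so provider switches and edits are tracked."""

    def __init__(self):
        self.messages = {}  # id -> message dict

    def add(self, role, text, model=None, parent=None):
        mid = next(_ids)
        self.messages[mid] = {"id": mid, "role": role, "text": text,
                              "model": model, "parent": parent}
        return mid

    def thread(self, leaf):
        """Walk parent links from a leaf back to the root, oldest first."""
        out = []
        while leaf is not None:
            msg = self.messages[leaf]
            out.append(msg)
            leaf = msg["parent"]
        return list(reversed(out))

convo = Conversation()
u1 = convo.add("user", "Hi there")
a1 = convo.add("assistant", "Hello!", model="ollama/llama3", parent=u1)
u2 = convo.add("user", "Write a function", parent=a1)
# Same thread, different provider for the next reply:
a2 = convo.add("assistant", "def f(): ...", model="groq/llama3-70b", parent=u2)

models = [m["model"] for m in convo.thread(a2) if m["role"] == "assistant"]
print(models)  # ['ollama/llama3', 'groq/llama3-70b']
```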

Yeah, I love how you can stay in context of the problem that you’re trying to solve, and yet still optimize against different models, in terms of what they’re better and worse, on the fly, without it kind of taking over and becoming the primary concern. Very nice.

[00:14:03.11] Thanks. Yeah, and that brings up something - really good user feedback I got, along the lines of: there could be a smart router that kind of knows, or that you could even preconfigure beforehand, like “Which is the best AI for this sort of task?”, and it just switches for you. So maybe that’s something down the line; I’m still drafting it in my head… But of course, as these things evolve, there’s, like we mentioned, RAG… A lot of people have the expectation for files to work with these things. And of course, LibreChat supports that. And here I just dropped in a CSV. I’ll say “Tell me about the sales.” Yeah, and this was just mock data I made up, so it’s about [unintelligible 00:14:51.25] different sales data.

So it was able to look at that and just kind of give some context about it. And of course, I could switch the model just as before. So I switched to Cohere, and it didn’t give us as good of a response as GPT-4, but it was able to see the file and kind of work with it. And even the file processing - that’s all based off a local RAG solution. So it’s using a local vector database and a local server that’s just dedicated to the files… And yeah, that’s one of the things there.
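The local RAG flow described here can be shown in miniature: chunk the file, embed each chunk, and at query time return the nearest chunks to stuff into the prompt. A real setup uses a learned embedding model and a vector database; the bag-of-words "embedding" below is a toy stand-in to keep the sketch self-contained.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words count vector (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(chunks, query, k=1):
    """Rank stored chunks by similarity to the query; return the top k."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)
    return ranked[:k]

# Chunks as they might come from an uploaded CSV or document:
chunks = [
    "Q1 sales totaled 120 units in the north region",
    "the office moved to a new building in March",
    "Q2 sales dropped to 95 units after the price change",
]
print(retrieve(chunks, "tell me about the sales", k=2))
```

The retrieved chunks would then be placed into the model's prompt as context, which is the "naive RAG" pattern discussed a little further on.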

One of the things I’m particularly excited about is agents and agent workflows. Of course, OpenAI recently made a solution of their own, with Assistants… And I think people are still discovering the capabilities of this, but it’s exciting for me - not just working with what AI companies have as cutting edge, but also as inspiration for the open source side, because I think they model this really well, and it’s giving me ideas of “How can we do this with Ollama? How can we structure something like this with the latest Meta AI models?”

I have a test prompt here, and this is how I generated my sales data from before… But I’ll add something here: “Finally, output the file you created.” So this can not just write code, but execute it within OpenAI’s sandbox. So that’s really great for things like data analysis, and for generating mock data like this, too. Yeah, so it might take some time…

While that’s running, there’s a question… Did you test your local RAG system against others, like OpenAI’s? So maybe there’s some interest in the performance of that RAG system with a variety of models, and locally versus those built into closed systems.

I did, definitely - with OpenAI’s especially, because they have a RAG system through Assistants… And to be honest, right now it’s not doing anything too special. It’s what they call naive RAG. But I’ve found that even with naive RAG, if you have a really good prompt, you can really get something effective almost across the board, with any LLM. I tested several different iterations of that prompt. And with OpenAI’s solution it’s really a black box. You can’t even see the prompt that’s being generated, so I’m not sure how to steer it better…

Yeah, transparency is an issue in terms of – well, I guess it’s nice when things go right, and it’s sort of automagical… But then when things aren’t going right, you really wish you could understand a bit more.


So where are you in terms of your multimodal chat story? How far along are you in terms of what you’re trying to get to?

[00:18:03.01] One of my main goals right now is to offer even more access controls and configuration over the interface experience admins want to create. So for example, I understand that, especially the first time someone logs in here, they might not know “Oh, what are all these models? I mean, I recognize Google, I guess…” They might need a few more clues, or they might not even think to click here for the model. So I really want to see an update - I’m actively working on this - where there’s just one dropdown, and you get a bit more info on what you’re selecting and what it’s good at, like “Okay, this one can search the internet, and I need that for this task.” And also, just being able to control which users can access what. Because that’s a pretty big need, especially in the enterprise setting.
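The combination of capability metadata and per-user access control Danny describes could look something like the sketch below. The registry structure, group names, and capability tags are all hypothetical, purely to show the shape of such a feature.

```python
# Hypothetical model registry: each model lists its capabilities and the
# user groups allowed to select it (not LibreChat's actual config format).
MODELS = {
    "gpt-4": {"caps": {"vision", "tools"}, "groups": {"engineering"}},
    "llama3": {"caps": {"chat"}, "groups": {"everyone", "engineering"}},
}

def visible_models(user_groups):
    """Models a user may select, given the groups they belong to."""
    return sorted(name for name, m in MODELS.items()
                  if m["groups"] & set(user_groups))

def can_use(user_groups, model, capability):
    """True if the user may access the model and it has the capability."""
    m = MODELS.get(model)
    return bool(m and m["groups"] & set(user_groups) and capability in m["caps"])

print(visible_models(["everyone"]))                 # ['llama3']
print(can_use(["engineering"], "gpt-4", "vision"))  # True
```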

And in terms of multimodality, in the sense of AI being able to work with different formats - I think down the pipeline we’ll see integrations with video, but right now we’re handling vision with images… And that’s been a huge help for me.

So we started exploring LibreChat at Prediction Guard because a bunch of our customers who are using Prediction Guard wanted a private chat interface. Prediction Guard itself is a platform that allows you to run large language models in a private, secure environment, with safeguards around them for things like factuality, toxicity, prompt injections, and a bunch of other things. And so our customers are all those privacy-focused, security-conscious customers who are maybe running Prediction Guard on their own infrastructure and want a private chat interface for the models that they’re hosting with Prediction Guard, or who want an interface that’s not a closed one for usage of our models. And so here, what you can see is we’ve taken LibreChat - which, again, Danny mentioned is open source - and we’ve been able to bring it into our branding… And we have Prediction Guard here, where you can set your API key and use Prediction Guard running on top of our platform. And because it’s open source, because it’s transparent, we were able to take this and also integrate our own flair into it.

An engineer from our team, Ed, worked together with Danny - so thanks for that - and we were able to integrate some of these checks, like toxicity, and integrate our various models into the mix. So still, kind of like Danny was showing in terms of running here - I’m running with Neural-Chat 7B; this is running in a privacy-conserving setup in Intel’s AI cloud, on Gaudi 2 infrastructure. So it’s a very unique setup that we’ve optimized… And we’re able to connect to our own model, use this really slick interface, which is LibreChat, just branded a bit with our colors and logos and that sort of thing… But also, we can integrate the unique features of our take on an AI system, right? So let’s say I’m really concerned - because I’m using an open model that doesn’t have some of the guardrails around it like closed source models do - I can go into the config here and turn on a toxicity filter, to make sure that the model isn’t cursing me out, or giving me any sort of stuff that I don’t want to see. And so here you can see we have a little toxicity score… Thankfully, it wasn’t very toxic this time around. So continuing… Similar to what Danny was showing, but again, our own take on that, with our models, and the safeguards around them.
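A post-generation toxicity gate of the kind just demonstrated can be sketched as follows. The scorer here is a deliberately crude keyword heuristic, a toy stand-in for the classifier model a system like Prediction Guard would actually use; the `generate` callable is a stub for any model backend.

```python
# Toy blocklist standing in for a real toxicity classifier.
BLOCKLIST = {"idiot", "stupid", "hate"}

def toxicity_score(text):
    """Toy score in [0, 1]: fraction of words found on the blocklist."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in BLOCKLIST for w in words) / len(words)

def guarded_reply(generate, prompt, threshold=0.2):
    """Generate a reply, score it, and withhold it above the threshold."""
    reply = generate(prompt)
    score = toxicity_score(reply)
    if score > threshold:
        return {"text": "[response withheld]", "toxicity": score}
    return {"text": reply, "toxicity": score}

# Stub model for illustration:
polite = guarded_reply(lambda p: "Happy to help with that.", "hi")
print(polite["toxicity"])  # 0.0
```

The interface always attaches the score, mirroring the "little toxicity score" shown alongside each response in the demo.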

[00:22:13.08] One cool thing that we’ve found really useful is that a lot of our customers want an interface like this, but they also want it authenticated, to fit their system setup… So we’ve integrated - we’re a G Suite company, so we’ve integrated Google login here… And it’s only our org that can log in, so the Prediction Guard org, and now I’m authenticated. Here’s my chat, like Danny mentioned, that is private and searchable…

So yeah, this has been a really amazing thing for us, where we’ve been able to take and build on the great open source stuff that Danny has built at LibreChat, and create something that works really well for our customers and for our setup. So before I leave and stop screen sharing - I saw that there was a question earlier on about translation with language models. A lot of what we’ve been showing is English; some providers like OpenAI say their models will do other languages, but sometimes that doesn’t always work out.

So we have a translate endpoint in our API, and we’ve done a bit of testing with large language model translation versus standard translation systems like Google Translate, and Bing Translate, and others… Or even other models, like NLLB, No Language Left Behind, from Meta. And with our translate endpoint, you can send a translation request and then actually get the results along with a score. We’re using COMET scoring, which is a way to score translations… And I think the question was how well large language models translate and are able to chat in different languages, versus machine translating with a commercial translation system.

So what we’ve seen in scoring both commercial translation systems and large language models is that some large language models, depending on the language - like, if you’re going into Hindi with OpenAI - might give you a good translation, or one comparable to Google Translate, a small amount of the time, like 5% to 10%. But mostly, the commercial translation systems are generally better. And definitely, as you go down the longer tail of languages, it gets worse and worse. Even chatting in Mandarin, a lot of models don’t do so well, even though that’s kind of the next highest represented language in the datasets out there. So yeah, it’s definitely a mixed bag there. I don’t know if Danny or Chris have a comment on that before we go to other questions, but…
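Comparing translation systems like this requires an automatic score. COMET, mentioned above, is a learned metric; as a self-contained stand-in, here is a simple character n-gram F-score in the spirit of chrF, measuring overlap between a candidate translation and a reference.

```python
def ngrams(text, n=3):
    """Character n-grams of a whitespace-normalized, lowercased string."""
    text = " ".join(text.lower().split())
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def char_f(candidate, reference, n=3):
    """F-score over character n-gram overlap, in [0, 1]."""
    c, r = ngrams(candidate, n), ngrams(reference, n)
    if not c or not r:
        return 0.0
    overlap = len(c & r)
    prec, rec = overlap / len(c), overlap / len(r)
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

reference = "the sky is blue today"
print(char_f("the sky is blue today", reference))  # 1.0 (exact match)
print(round(char_f("a sky is blue now", reference), 2))
```

Scoring each system's output against the same reference, as sketched here, is what lets you say one system beats another "5% to 10% of the time".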

I’m good.

So some other questions on the LibreChat side… “Are you building tools for LLM evaluation, since you have all the comparison models out there?” I think they’re imagining “Oh, I can switch between models easily in this interface - how does that help me, in an interactive way, evaluate the performance of different models?” But there’s probably a non-interactive version of that as well, I guess…

Maybe towards your roadmap… You were describing later the idea of automatically switching to optimize… So this would be an incremental step in that direction.

[00:25:36.00] Yeah, absolutely. Back to data ownership - I just think it’s absolutely crucial to have some kind of pipeline built for evaluation as well, especially if you’re really into fine-tuning your own models. And really, it’s crazy to think about, but even the data that we generate just casually with these large language models - if it’s a very capable model, it’s almost like a goldmine for the next model. And having your own ownership of that, not just some cloud service… I definitely want to start it off simple, being able to thumbs up, thumbs down, but then integrate complex evaluation tools like you guys already have, with the toxicity score, or the translation rating… I think that’s awesome.
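Starting simple, as Danny describes, a thumbs up/down pipeline is just a feedback log you own, aggregated per model. This is an illustrative sketch of the idea, not an existing LibreChat feature.

```python
from collections import defaultdict

class FeedbackLog:
    """Per-model thumbs up/down store, kept in your own database so the
    feedback can later feed fine-tuning or richer evaluation."""

    def __init__(self):
        self.votes = defaultdict(lambda: {"up": 0, "down": 0})

    def rate(self, model, thumbs_up):
        self.votes[model]["up" if thumbs_up else "down"] += 1

    def approval(self, model):
        """Fraction of positive ratings, or None if the model has no votes."""
        v = self.votes[model]
        total = v["up"] + v["down"]
        return v["up"] / total if total else None

log = FeedbackLog()
log.rate("llama3-70b", True)
log.rate("llama3-70b", True)
log.rate("llama3-70b", False)
print(log.approval("llama3-70b"))  # 2/3 approval
```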

Cool. Could you talk a little bit – this isn’t one of the questions, but I saw on your documentation a discussion of plugins. Could you talk about that a little bit, like what that means in the context of LibreChat?

So they are inspired by ChatGPT’s use of plugins, and really what the AI services now refer to as tools, or functions… And really, it’s just a way to be able to interact with some algorithm or API that’s already programmed there, where you’re letting the model decide the inputs and then interpret the outputs. And I’m using it in the plugin system, where you can make requests to DALL-E or Stable Diffusion for image generation, you can search arXiv papers, things like that.

But really, the plugin system specifically - it’s almost a year old now, which is crazy… I actually developed it before OpenAI had these functions in their API. So in the process, I learned pretty deeply how these LLMs were understanding certain tokens a little better, for things like formatting. And now we have such a rich environment for getting JSON-only responses, or being able to use tools with Anthropic… So I’ve got a lot of things planned there, where I want to see that tool environment really grow. And also, for people who are building on top of LibreChat, I want to see better documentation and a better developer experience for adding those extra tools - where “This is a tool only my company can see”, and you can just pop it in real quick.
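The tool/plugin pattern being described reduces to a small dispatch loop: the model emits a structured (e.g. JSON) tool call, the app runs the named function with those arguments, and the result is handed back for the model to interpret. The "model output" and tool implementations below are stubs, purely illustrative.

```python
import json

# Registered tools the model may call (stub implementations):
TOOLS = {
    "search_arxiv": lambda query: f"3 papers found for '{query}'",
    "generate_image": lambda prompt: f"image bytes for '{prompt}'",
}

def run_tool_call(model_output):
    """Parse a JSON tool call emitted by the model and execute it.
    The tool's return value would be fed back to the model as context."""
    call = json.loads(model_output)  # e.g. {"tool": ..., "args": {...}}
    fn = TOOLS[call["tool"]]
    return fn(**call["args"])

# What a model constrained to JSON output might emit:
stub_model_output = '{"tool": "search_arxiv", "args": {"query": "RAG"}}'
print(run_tool_call(stub_model_output))  # 3 papers found for 'RAG'
```

The "rich environment for JSON-only responses" mentioned above is exactly what makes the `json.loads` step reliable in practice.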

I’m looking at your GitHub, and I noticed 117 contributors listed there. As the community that’s built up around this has evolved - going from a sole developer in the beginning to now having a group of people actively contributing at some level - how has that changed the project, and changed how you’re spending your time to fulfill the expectations of so many people, and all the folks they’re serving in turn?

Yeah, it’s been amazing. I’ve learned so much in the process. I think I need to be conservative with my estimates on getting things done, so I can address contributions and things like that. And that is definitely a thing I want to keep - if anything, devote even more time to - because some people are making really great things. I’ll even shout out Marco, who is constantly contributing things; there are things he gets to so much quicker than I do, that I don’t quite find time to review, but it’s sitting there, and it’s great, it’s already working, and I want to dig into the weeds a little bit… But also, I think that’s really what’s helped the project explode - there’s such an openness to what people want to see in it.

I just had someone today say that it was their first open source contribution, and I thought “That’s really cool” - seeing people learning in the process. And I was there, too; I was always kind of daunted by contributing to anything… So just seeing people step into the water - I definitely want to foster that more.

[00:29:53.03] I’m curious, as kind of a follow-up to that - has there been a point where you’ve seen adoption occurring, and as part of that adoption… Not that you have favorite children, so to speak - and I understand that you’re super-happy for every organization out there. But has there been a moment where you’ve seen some organization that you might be super-familiar with adopt it, and kind of went “Holy mackerel, I can’t believe they’re using my stuff”? Has there been a moment like that for you?

Oh, yeah, for sure. [laughs] I caught wind of Mistral using the app just to prototype their chat interface. That’s the only one I know for sure. But there have been people within Microsoft who are kind of helping people prototype their own interfaces and things like that, and that, to me - I step back and I’m just kind of blown away.

The big boys of this space, definitely.

In our system, in the way that we’ve kind of customized LibreChat here, we use this model-based factuality score, which is actually factual consistency between reference text and text out of an LLM. So you can do a factuality check between two different pieces of text to get a score, that would show kind of factual consistency between the two… Which - that’s kind of the most relevant thing for most LLM use cases, because many people are using RAG, or they have internal company data that represents a source of truth.

So in LibreChat here, we’re working on the integration with the RAG piece, which would be a cool integration there… But for now, we just have this sort of factuality context, so facts that shouldn’t be violated… And I could put something here and turn on the factuality check, and then ask a question. So the fact I put in was that the sky was green. I could ask “What color is the sky?” And then I think Neural Chat will actually respond factually, but I’ll do the check against the gold standard information that I put in, which is actually that the sky is green… And you can see that I get out a factuality score which ranges from zero to one, and in this case it’s very low, because I put in that information about the sky being green.
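The factuality check demonstrated here compares model output against reference facts and returns a score in [0, 1]. The real check is model-based; the toy version below only matches crude "<subject> is <attribute>" claims, purely to show the shape of the interface, where a "sky is green" reference drives the score for a "sky is blue" answer to zero.

```python
import re

def extract_claims(text):
    """Very crude claim extraction: all '<word> is <word>' pairs."""
    return {(s.lower(), a.lower().rstrip(".!?"))
            for s, a in re.findall(r"(\w+) is (\w+)", text)}

def factuality(reference, answer):
    """Fraction of the answer's checkable claims that agree with the
    reference facts (toy stand-in for a model-based consistency check)."""
    ref, ans = extract_claims(reference), extract_claims(answer)
    contested = {s for s, _ in ref} & {s for s, _ in ans}
    if not contested:
        return 1.0  # nothing in the answer to check against
    agree = sum((s, a) in ref for s, a in ans if s in contested)
    return agree / sum(1 for s, _ in ans if s in contested)

print(factuality("The sky is green.", "The sky is blue."))   # 0.0
print(factuality("The sky is green.", "The sky is green."))  # 1.0
```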

So yeah, that’s the sort of interesting way – you know, I’m so thankful for this project being open source and being customizable, because this is the kind of cool stuff that people are enabling within their own chat interfaces that we’re working with, and it’s awesome to have a robust system that works well in that way.

Looks like there’s another question… “Danny, how feasible - and something that you would venture yourself into - is combining LibreChat with such frameworks as Flowise or crewAI?” I don’t know how to say that. I don’t know what that is.

I think it’s crewAI.

crewAI. There we go. Yeah. I don’t know either of those things, but…

Yeah, I’m familiar with both. I think Flowise is really great, giving you that flow-based, no-code user interface logic, where you kind of put all the pieces together. I see that being integrated much sooner than crewAI, which is more of an agent orchestration framework. With Flowise, I think it could serve as kind of another backend. Just like you see so many endpoints, as I like to call them - Mistral, Google, OpenAI, and so forth - I could see it being easily integrated like that. I don’t really want to reinvent the wheel with something like that, because they’ve done such a great job… I just want to be able to handle the integrations. Because obviously, it’s not going to be everyone’s need.

[00:33:55.13] And for crewAI, I definitely have a lot of ideas there. I’m trying to establish kind of like a framework for agents first, and then potentially get into agent orchestration, where agents are talking to each other, and things like this. But we’re not quite there yet. We’ve got the Open AI side shaping up, but we want to see some open source integrations there.

Awesome. Well, I think this is a good question to draw near to a close on. It’s something asked in the webinar chat, but also something we usually ask people that we’re talking to on the podcast. You’re following and plugged into all of these different things happening in the AI ecosystem - things even I’m learning about today, even though I’d think we’d be plugged into most of what we hear about; there’s just so much going on. Looking at that landscape and how innovation is happening - how people are using your interface, but also more widely the things you’re seeing people do in the open source space or otherwise - where do you see all this going? How do you see the future of LibreChat? And is there something you’re particularly interested in seeing develop in the AI space in the coming year?

Yeah, I think I’ve kind of been hinting at this already, but… It’s the future I want to see, and I feel like a lot of people in tech want to see it too - the open source future, where these large language models are getting so good every day, and there’s a lot more time and money invested in being able to host these things from a consumer-grade computer… And I think catering to that is probably going to be the direction of my project, and many similar projects, because it still blows my mind that I can use something like LLaMA 3, where a year ago I might have thought “Oh, this is two years away.” And I really think that’s the direction, both on the high level and the low level… And I think it’s part of the reason the project’s really taken off - these things are so accessible, and we don’t have to pay SaaS subscription money just to send a message to an AI.

Yeah, that’s awesome. I think that’s a future that at least Chris and I are looking forward to, and I’m sure many on the webinar.

You mean I get my wallet back at some point? [laughter]

Well, I’m sure there’ll be other things to pay for… A new AI PC or something to run all your local models…

That’s right.

Cool. Well, thank you so much, Danny, for taking time. Thank you to everyone that joined the webinar. This was a ton of fun. We’re going to be doing another one of these webinars very soon. The next one I think will be around multimodal AI, and some practical, hands-on instruction in how to create things like multimodal RAG systems, or kind of search over images and videos… And so that’s going to be a ton of fun, so keep on the watch for that. That’s going to be a fun one. Until then - yeah, looking forward to seeing you next time, Chris, on the podcast.

Absolutely. Thank you, Danny. Thanks to everyone who joined us today.

Yeah, you guys are awesome. Thanks for having me.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
