Practical AI – Episode #215

AI search at You.com

with Bryan McCann, co-founder & CTO of You.com


Neural search and chat-based search are all the rage right now. However, You.com has been innovating in this space since long before ChatGPT. In this episode, Bryan McCann from You.com shares insights related to our mental model of Large Language Model (LLM) interactions and practical tips related to integrating LLMs into production systems.



Fastly - Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com.

Fly.io - The home of Changelog.com. Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io and check out the speedrun in their docs.

Notes & Links



1 00:00 Welcome to Practical AI
2 00:42 Bryan McCann
3 01:47 What is You.com?
4 06:21 What's different in these new algorithms
5 09:28 How has the public view changed
6 11:53 Will this change search engines?
7 15:19 How will You.com enhance tooling?
8 17:41 You.com and multi-modality
9 21:17 AI tools for the next generation
10 26:28 Any wisdom worth sharing?
11 29:49 Our future relationship with models
12 34:11 Practical tips for practitioners
13 38:24 How to get started on You.com
14 41:06 Outro





Welcome to another episode of Practical AI. This is Daniel Whitenack. I’m a data scientist with SIL International, and I’m joined as always by my co-host, Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris?

Doing well, Daniel. What are we in search of today? What’s the topic coming up?

That’s a good one. Well, we’ve been talking about ChatGPT, and people using it for search and other things, but we’ve got the real powerhouse with us today, Bryan McCann, who is co-founder and CTO at You.com, which is an AI search engine. How’re you doing, Bryan?

I’m fantastic. I’m so excited to be here. I just got finished watching an old episode last night with Demetrios, and laughing about all the MLOps stuff I’ve learned myself at You.com over the past couple of years.

[laughs] Yeah, there’s no shortage of cringe moments in the MLOps journey… But yeah, that was a good one, for sure. Maybe just taking like an initial step back at like AI search engine… How did you come upon this idea that there needed to be a new type of search engine, and in particular one that involves some type of AI within it? How did this idea come to shape, and what is You.com? Maybe we can start there.

Great. Yeah. I can tell you all about it. It depends on how far back you really want to go, but… I’ll start back when my co-founder and I were at Salesforce, doing research in natural language processing. My co-founder and our CEO, Richard Socher, was the chief scientist there, and we worked for quite a few years on exactly the kind of technologies that are getting kind of more mass adoption today, like these large language models. When I started out, it was a challenge to get them to do anything. And oftentimes people would question why anybody worked on language models. It was starting to almost become hard to get any publications about them, because they were supposed to be just a theoretical exercise of some kind.

But then over the course of several years in our research, and first transfer learning, and contextualized word vectors, and then pushing multitask learning, and unified approaches to natural language processing, we saw what was happening, and eventually we ran some pretty fun experiments even with authors and writers around collaborations with generative AI, like GPT tools today. And the first moment was that it was just kind of fun and inspiring; it was starting to work. And seeing that, I think both of us started thinking about what was really gonna happen in the next few years. And there’s two ways you go. All of this NLP stuff becomes as good as it is today, and it just goes into making Facebook ads better, or something like that, or Instagram ads, and all that understanding that we work on doesn’t really go into something that I was super-stoked about at the time.

[03:58] I came from a philosophy background, I got into all the natural language stuff, because I was interested in meaning and understanding what meaning was, which took me into the analytic philosophy direction, and focusing on language… So for all of that, to then just channel into, like, telling my little sisters something better on Instagram was not really what I was hoping for. That was the first thing there. And then the second was, I think, after we had both seen much of the research community adopt this direction, which was really not popular when I was just starting out - this was controversial, it was even against software engineering principles. Engineers at various research groups at companies would actually decry some of what we were advocating for, because - well, how can you disentangle and understand where problems are in the model if you’re training on all this data? It was not exactly neat and tidy from a software engineering principle perspective. But after four or five years, everybody was doing it. And so we felt like it was time to start looking at an area where maybe people felt similarly about the likelihood of it changing much, and search was one of those areas where it was very much like the original time in NLP where we were like “Oh, we should do it this way.” And there was a lot of people saying “No, that sounds like a bad idea”, and they come around to it eventually.

Now, with search, I can tell you over the last couple years - yeah, lots of people ask me, “Why the heck would you start a search engine?” But we saw a lot of these technological advances coming, and we saw that there was this inflection point coming. We wanted to be on the other side of research, and kind of directing and channeling that into a better way to do it. And search was really the gateway of the internet for that. It’s for so many people the place they go, that they then find the rest of the internet, they find information, and it becomes like a key point for people, where this technology and understanding them can then help them in different ways.

So with You.com we wanted to really found the company on three values, of like trust, facts and kindness, and leverage this technology to make search more about serving you, and understanding you, rather than just monetizing your attention.

And from your perspective - I mean, people probably think that at least algorithms have played a role… Maybe generally people think “Oh, there’s sophisticated algorithms behind search.” Now people are talking about like AI-driven search, neural search, semantic search… Could you help us parse out what’s fundamentally different about the things people are talking about now, when they’re referring to that sort of AI and neural search, as opposed to what might have been going on all along? It isn’t like those were dumb algorithms, but they’re not in the same sort of class as this other type.

Yeah, for sure. I think five years ago the word “generative AI” wasn’t really on our radar. Now, we still thought there’s AI involved in search, it just wasn’t the kind that we’re seeing today. So AI has been trying to understand this for a long time. What’s happening now is the algorithms, or the kind of AI that we’re using, the neural networks we’re using, are much better at understanding the context of what we’re trying to say. I think this is one of the key underlying features of what we’re seeing.

So when you type to it, or you’re talking to it, one of the dominant threads has been understanding context. We started out with training word vectors in NLP. So if people are familiar with word vectors, every single word or token has a vector that’s associated with it. And that was pretty much all the context we had. And then we started looking at sentences as a whole to take into consideration as context. And now these things are reading as much of the internet as they can get their hands on, adding datasets, supervised training data on top of all the unsupervised training data… And with that comes this more nuanced understanding.
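As a concrete aside for readers, the progression Bryan describes - from one fixed vector per word to context-dependent representations - can be sketched in a few lines of Python. The vectors below are invented toy values, not output from any real embedding model:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Static word vectors: one vector per token, regardless of the sentence.
# (Toy 3-d values, invented purely for illustration.)
static = {
    "bank":  [0.9, 0.4, 0.1],
    "river": [0.1, 0.9, 0.2],
    "money": [0.95, 0.3, 0.05],
}

# A contextual model produces a different vector for "bank" depending on
# the surrounding words; we fake two such context-dependent vectors here.
contextual_bank = {
    "near_river": [0.2, 0.85, 0.15],
    "near_money": [0.9, 0.35, 0.1],
}

# Static "bank" is stuck near its most frequent (financial) sense...
print(cosine(static["bank"], static["money"]))   # high similarity
print(cosine(static["bank"], static["river"]))   # lower similarity

# ...while a context-dependent vector can move toward the river sense.
print(cosine(contextual_bank["near_river"], static["river"]))  # high similarity
```

The point of the toy: with a single static vector, ambiguous words like "bank" conflate all their senses, which is exactly the limitation that sentence-level and internet-scale contextual training addressed.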

Every parameter that we’re adding as these models are getting bigger is recording some subtlety of how we use language, right? Just mimicking our behavior, and picking up on those patterns. So the first is understanding context, and then the second is the generative aspect of it. So there’s taking in a piece of text from you, and understanding what that means, but then there’s producing text. And I think that’s been the part that’s really, really exciting for people.

Those have been really important for us at You.com, and building a search engine differently… But now with YouChat, for example, these generated responses are really opening up a different way of serving users, that’s totally in line with what we were planning for as well. Because we really wanted to move search from being just about finding blue links to ideally replacing every blue link in as many cases as possible with the thing you’d actually want to do, or the information you’d actually need. And these generative models have essentially memorized a lot of the information on the other side of those links, so it makes it a lot easier for you to access it, and it can spit it back out at you.

I was just gonna say, as kind of a follow-up to that, kind of talking about the world before and the world after… I mean, obviously, the big news thing that’s changed the public’s perception lately was ChatGPT and the public kind of becoming aware of that. And you guys have been out there, leading the way all along, for years at this point. How has that public perception changed? How has that changed You.com and YouChat? Aside from just the technology considerations, how has that changed the way you guys are approaching your business in the marketplace with that public perception change at large? It’s a different world from six months ago.

Oh, it’s fantastic. It is. It absolutely is. I think so many of the things that we had started to build into You.com… You know, we had some generative writing tools, we had image generation tools; we call them apps inside You.com, because it’s a platform for developers to build these apps into it. And what we saw happen was the door kind of opened to do some of these newer things, with more acceptance from a much wider portion of the population.

I think a lot of people had expectations about what search was, and what search should do for them. And even though we were at the forefront of that, releasing these things, that kind of moment last fall, really, when it started going viral, is when everybody kind of dropped those expectations and said, “Hey, what is this new technology that could be doing something like search for us in a very different way?” And so with that, YouChat has been super-popular, and it’s becoming more and more popular as part of You.com. Right now we have kind of a more default, normal search experience, but then you can also use the conversational approach… And that’s really picking up a ton of traction, and it’s clear there’s a lot of use cases that it just serves users better for.

So while you all were talking, I was asking YouChat “How can AI be integrated into search?” and at least you’re consistent with YouChat. The answer is “AI can be integrated in search in a variety of ways. For example, AI can be used to provide more accurate search results by understanding user intent in the context of their search query.” So there you go.

There you go.

[11:53] As I just mentioned, I have YouChat on my phone, I’ve been playing with it… And really, I don’t even know if I would consider it playing, so much as using it. And I think one of the things - people are realizing, “Yeah, it’s fun to generate a new Eminem rap song about AI or something in ChatGPT”, but people are starting to think about these like interfaces as tools that can, like you said, give them the content that they’re really after, without them having to follow a bunch of links.

Now, the search industry in general I think has been – there’s money to be made by pointing people to links, right? And ads, like promoting links. From your perspective, how does this shift in people thinking now about like a chat interface, which isn’t driven by these links, influence maybe like the industry at large, and maybe some responses that we’ll see across the industry? Because that’s kind of the bread and butter of how everything works in search, right?

Absolutely. Yeah. It’s a big question right now. And I think it’s one that’s exciting to see evolve over the next - however long. But from our perspective, what we’ve been trying to set up with You.com from pretty much the start - and we released it publicly last fall as well - is this more open platform approach to search, where partners, content creators, developers, whoever it is that owns what’s on the other side of the link has an active role and a clear way to monetize and benefit from anything that these language models are generating.

So for example, a partner can come into You.com and create an app in a couple hours, if they’ve got an API, or they can just give us the data and we’ll create a search API for them, and support the infrastructure… And then we’ll show an app that either allows people to interact with their product, kind of like a “try before you buy” way, or the information from their site, but they own that space. So it’s not like more traditional search engines, where any monetization that happens on their website is their monetization, and traffic has to get to that other place for those people to monetize. At You.com, the app itself is considered theirs, and any monetization that happens is theirs at that point. So it’s kind of flipping the script, in a way, like you said. That’s the biggest shift that we can see happen.

And we can also see a lot more people moving towards paying for some of these tools as productivity tools, and kind of tools that empower them more, rather than just a tool that connects them to different things, which they’re very used to having free. I imagine what’s going to come out of it will be some combination of the two, but there will be more and more of a shift towards providers being closely linked to that content, even if the content is less clearly attributable through a language model. But it’s something that’s - it’s going to be interesting to see how all the different industries adapt, or try not to adapt, and try to keep things the way they are.

So you were talking a little bit about the idea of using it for tooling, and some of the things now, and that got me – as you were kind of describing that before the break, it got me wondering, can you talk about some examples of how YouChat and the technologies underlying that, and the algorithms might be used to enhance tooling? What are some of the things that you’re thinking about when you’re laying awake at night, thinking about what’s next? Where do I want to go with this?

[15:46] Yeah, great question. Back in the summer, and before that, we’d been working mostly on what we called YouCode, and we were starting to bring in a lot more developer resources and generative AI specifically for generating code on behalf of developers. And now with these more conversational interfaces as well, you see people going to them a lot for writing code, even debugging code, debugging code that the AI generates for them already… We see a lot of people going and just saying, “This is what I have in my fridge. What can I make with this?” A much broader set of questions. But then the conversational interface allows you to gather some context over the course of the conversation, in a much more seamless way, until you can get a satisfying answer or response.

But yeah, lots of marketing, lots of students as well. We had an application called YouWrite, that’s still inside You.com, and that’s been really popular with students, for example, because it can come up in the chat sometimes… Like, you can ask it to do some things, but getting the language model nudged in the right direction is still something that’s challenging for people who don’t quite understand how to do it. And so these productivity tools usually involve a little bit more of a touch from our side to make it really useful for a particular niche.

I just want to note that going to the refrigerator and saying “This is what I have” is going to be really, really useful for me.

Yeah. For me it’s like “Nothing in there. What do you do?”

Yeah, I was gonna say, it’s pretty sparse in my fridge right now, so I don’t know that there’s a good answer to that regardless… Like, go to the grocery store.

My wife makes fun of me. She’s like “Put something together” and I’m like “What?” So that’s a good use case, actually.

Yeah. So you’re talking about this idea of like a thread of conversation, which people learn about some topic through, or the thread provides context for a response or an answer… But I’m also intrigued - like, some of the unique things about YouChat in particular, and You.com sort of more generally, is a little bit more holistic view of multiple modes, or multimodal approaches, where - hey, I’m not always just getting a text blob, right? Sometimes I want a text blob, and sometimes I actually don’t. Like, if I’m asking about the weather, maybe I want a little graph or a little card telling me about the weather. So could you explain a little bit maybe how you all are thinking about multimodality in terms of these sorts of natural language interfaces? Maybe both in terms of the outputs, but potentially also in terms of the inputs, and how you’re thinking about merging those technologies and those inputs together?

Yeah, great question. It is the future. So much I think of what we’re learning from language - some of it is starting to make its way into image generation, right? There’s many new tools that have come out. But all these other modalities as well. In You.com in particular, YouChat has access to the kind of more traditional search engine underneath it. So it actually uses that, more or less; it’s a little metaphorical, but it knows how to understand your intent, and it can go out and ask for what kind of information it needs from different sources. And then it can also interact with all of these apps that we’ve created in the open platform.

So if you are looking for weather, it can go and say, “Oh, give me the chart for the weather.” And over time - it’s a little bit of a contrived example, but I would want it to look at that data and be able to answer any question you want about that data. Maybe run its own Python code, doing statistics over that data, if you really wanted that. It should be able to do all of these things. Same goes for finance - if you ask about a particular stock. YouChat is not going to necessarily tell you in text about all the things that you would want to know; you know, the volume, and high, and all those things.

[20:01] It’s going to show you a nice application there, and then over time, we’re enabling YouChat to use that data more and more, ground its responses in that data, in the same way that it’s currently grounding and citing search results right now, to try to lessen the effect of hallucination, which has been like a really kind of widely known problem with some of these generative models, especially in the research days. It was one of the most frustrating aspects of these models. And they’re getting better, and they’re especially better when you ground them in other kinds of data, like our open platform apps, or the search engine results themselves. So we’re going to use that to make it better and better. And I mentioned writing Python code for itself - we want it to be able to pretty much do anything that you’d want to do on the internet. Like, that’s where this kind of technology can go - kind of realizing the promise of some of those early personal assistant things like Siri, and Cortana. You don’t want to just say, “Oh, tell me about this thing.” You want to be able to have it do things for you. And we want to keep moving more and more towards that vision of kind of - we call it like a “do engine”, instead of just a search engine.
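The grounding pattern Bryan describes - retrieving data first and conditioning the model’s response on it, with citations - is what practitioners now often call retrieval-augmented generation. Here is a minimal sketch of just the prompt-assembly step, using hypothetical retrieved snippets; none of this reflects You.com’s actual API:

```python
def build_grounded_prompt(question, snippets):
    """Assemble a prompt that asks the model to answer only from the
    numbered sources, so its response can cite them and be checked."""
    sources = "\n".join(
        f"[{i + 1}] {s['title']}: {s['text']}" for i, s in enumerate(snippets)
    )
    return (
        "Answer the question using ONLY the sources below. "
        "Cite sources inline as [n]. If the sources are insufficient, say so.\n\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Hypothetical retrieved snippets, standing in for real search results.
snippets = [
    {"title": "NOAA forecast", "text": "Rain expected Tuesday in Austin."},
    {"title": "Local news", "text": "Highs near 60F midweek."},
]

prompt = build_grounded_prompt("Will it rain in Austin on Tuesday?", snippets)
# `prompt` would then be sent to whichever language model you use;
# the numbered citations make the answer auditable against the sources.
```

Restricting the model to numbered sources is what makes citation - and later verification - possible, which is the "grounding" Bryan contrasts with free-running generation.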

Very inspiring what you’re saying. As you’re looking at the world going forward, and you’re trying to think about like getting those capabilities out into all the places, whether it be something as mundane as you’re in the kitchen, as we talked about jokingly, or whether you’re getting out into vehicles that are either cloud-connected, or on the edge or anything, or maybe even something - another popular topic out there, to throw buzzwords out, with things like Metaverse and stuff… How do you get this capability out into all those use cases in a very practical and functional way for people to start taking advantage of it? Because you have almost unlimited potential in terms of this generative capability that we’re on the forefront of. I have a 10-year-old daughter - I’m imagining, another 10 years out, she’s gonna really have a tremendous college experience, very different from us, because she’s gonna have all these new tools. How are you looking at trying to get this technology into all the places that really affect people’s lives going forward?

It’s very likely that 10 years from now the way that people interact with the internet, and the information that is out there - they’re gonna find it hard to imagine how we did it without such strong language understanding at that point. Because language is this really natural interface for us, right? Like, you can talk about doing pretty much anything. So if something could be on the other side and be that other you out there, kind of doing those things a little bit for you, on your behalf, just the way you would - it’s probably something we can’t really conceive of yet; we can’t really wrap our minds around it. But getting there - you know, there’s traditional ways to do it. Like, we have mobile browser apps. And I think people understand how those work, like Chrome or Safari… We have like a You browser on iOS and on Android. I think desktop browsers are another natural one… But pretty much anywhere you might type in text could eventually become an interface for you to interact with these things. And if you can type text into it, you can also speak to it, and have speech-to-text take care of that for you, if you want to do that.

Any interface where you’re using language, or could use language to communicate with something, is an opportunity, I think, for the next generation of search, and chat, and do engines like You.com. That’s the forefront – you know, I have some sci-fi thoughts, paths we could go down… I don’t know about you guys, but I have like an inner voice. Like, I can hear what I’m thinking as text, more or less.

You should definitely go there, because this is what these conversations are all about.

[24:02] Yeah. So not everybody has this. Some people are surprised to learn that as well… But some people can’t see images in their head, for example. Some people don’t have an inner monologue. I’ve, for many years now, since I’ve been working on these things, referred to my inner monologue as my like own inner language model. Right? It kind of even predicts a little bit what you’re going to say next. That’s how you complete people’s sentences, and anticipate things.

You know, so I don’t have any aspirations to work on this… I think there are a lot of things to think about. But in the long run, that’s kind of your language interface, too. Like, what if these things were hooked up to that? If you’re into the neural interface stuff; that’s maybe around – we’re very far from it, but what if your inner monologue could also be supplemented by these things, and your own thoughts, and thought processes? Yeah, not on our roadmap, but…

I get that. I like that thought though, because we’ve come to a point, and I think everyone is coming to this point where things that would have been like “Well, that’s way out there…”, people are starting to kind of go “That’s an interesting idea.” Kind of seeing how fast things have ramped up in recent times. And I think it’s pushing imagination out there at large in terms of what might be coming.

Yeah, it’s really inspiring. At some point, when we were making some of these earlier language models, we were working on our version of like a GPT-2 size model. We called it CTRL. And someone I was working with read at some point a poem that this model had generated, and they were legitimately touched. They were like “Whoa, I actually really like that poem.” They didn’t like poems before. And then we spent weeks in our off time talking about poetry, and trying to find poets they liked, and things like that.

So even in the kind of simple moments, this opening I think that you’re talking about, this dropping of expectations about search, what technology can really do for us - it’s changing the way people think about it. It’s changing people’s lives, in some ways. It’s like inspiring them, getting them to be more creative… And going back to Daniel’s earlier question, when you combine different modalities, images, and vision, and text, and just thinking about what you could do with your own thoughts, if you could actualize them and realize them more easily - I don’t know, that’s a cool journey to be on.

So Bryan, it’s super-interesting to think about where this could be headed… And I’ve had similar experiences to what you talked about a second ago, where it’s like, I kind of have a mental block in this scenario, and I go to one of these chat interfaces, and even if it’s just to unblock myself… Like, I start chatting, and then it sort of jumpstarts my mind in a new direction, or something. That’s very intriguing to me.

Now, you’ve been interacting with these models quite a bit over time, and have probably dealt with – you’ve already talked about things like grounding, and hallucination, and the sort of power of the knowledge embedded in these models, that they’ve memorized, and things that people have talked about more recently; you’ve been kind of at the forefront of thinking about these things. So I’m wondering, now that the whole world is talking about all of these things, if you have any sort of wisdom you would like to impart, in terms of either like these topics that people are concerned about, in terms of grounding, and hallucination, or like harmful outputs, or on the other side, like the ways that this is –

I think people were concerned that this is like an automation of our life, but really, people are getting such a benefit from it as like an assistive technology. So the fact that you’ve spent a longer time thinking about these things that many of us have just been hit in the face with - any wisdom or thoughts on that, that you’ve kind of started to develop as your own kind of mental model of these things?

[28:12] Oh, yeah. Yeah, for sure. We should remember that it wasn’t very long ago that these models would just repeat themselves over and over again, and did nothing useful.

And there’s two ways to remember that. One, they’ve gotten a lot better, and as long as we keep going the way we’ve gone, they’re gonna get a lot better. And two is they’re just tools. They’re still just tools, and there’s a lot of things we don’t understand about them. But I think I would suggest trying our best as a community not to anthropomorphize them too much, and think of them as these other people. The chat interface, in particular, gets our mind ready to be talking to a person; even just the UI and the layout of it, and things like that. It looks like we’re talking to a person more, rather than a box where you type in keywords. And we’ve all been trained to do that for 20 years, or something like that. Remember, texting people - it’s this way, and that’s people. People, AI. People, AI. Now it’s like “Oh, what is this thing?” And try to just keep at the forefront of your mind that it is a tool. It’s an algorithm. There’s like a computer out there, behind the scenes somewhere, doing this stuff. And keep the awareness that sometimes it might be helpful for you to let yourself slip into a conversational flow with it as if it’s a person, and if that’s helpful, that opens up inspiration and things like that… But then don’t get too caught up in it. Remember that it’s there for you.

I’ve got a question there that you’re making me think of as you say that, because what you’ve just said really applies to my personal experience. So having grown up with computers - I’m in my early 50s, so decades of computers… In the past year, my relationship with technology has changed. I have always used it for automation, and kind of productivity, whether it be code, or whether it be other tools that are out there… The thing that’s changed dramatically is that I place a very high premium on creativity. And creativity is something that has historically, prior to the last few years, been the domain of humans, and we always expected creativity to continue to kind of be the more human thing, versus computers… But that’s kind of been flipped around. And so we’re seeing tremendous capability, assistive techniques - to use the word assistive that Daniel did a moment ago - and in non-AI parts of my life I’m using these AI tools for creative purposes. And so where do you see that going? Because that surprised me, to be able to go to chat and get inspiration, and bring my own creativity to it, and then have in turn extra creativity that is algorithm-based enhancing that.

So we end up with a product that is creativity that is both human and automation together, that are generating this thing… Which is really cool. And I’m using it to teach children, and stuff like that, and in other parts of my life. And so how do you see that kind of relationship going forward? …you know, when you were kind of talking a little bit about the Sci-Fi influences and stuff like that… And as we’re looking at these models, which are most definitely, as you said, not entities of themselves, not people, but they’re going to get much more powerful in the fairly near future… Where does that go? What does that mean? How does the relationship look going forward between us and those systems?

Yeah. I mean, I fully intend to just get more creative myself. Over the years I’ve been playing with these models, and I think it might help to have been playing with them for so long, seeing them get better and better, and still seeing all the places that they still fail me, I suppose… But for me, it’s fully incorporated. It’s just that you think of it as this different thing, right? That it’s almost competitive with you, on your creativity. But going 10 years in the future, the next generation, like we were talking about before - I doubt they’re gonna think about it that way. They’re gonna think about it like you think about a normal search engine.

[32:19] There’s actually a study on people who think they know things, but really, they just know how to search for it, and then they feel like they knew the thing… There’s like this appendage experience where we merge with it, somehow, subconsciously. Something like that will probably happen, where this will just feel like part of your creativity. And I would encourage us to also develop the technology that way too, so that it continues the narrative as such that it’s built for us, to enhance us, so that we’re more creative. I feel more creative using it. I don’t feel like it’s creative and I’m not.

Me too.

I feel like it’s giving me access to data, and a data distribution, in like a really nice, condensed way, more or less, and I can kind of test the waters. I see what all the other people are doing, in some way, some portion of the population, whatever the data is representing. And then I can choose to either do that, or do something else, and change it. And that’s actually really cool from a creative perspective, because sometimes you might be in your own little vacuum or echo chamber, or whatever it is; you think you’re doing something really novel and cool, and it turns out a million other people have done that kind of thing. Now you see - it doesn’t look like it, but what you’re seeing with the language model is what everybody else said. Not everybody… A lot of people. Enough people for the language model to say that. So you’re getting a little bit of a measurement of what’s going on out there, that you couldn’t get before. And if you use that as another input for yourself, I think the way that, you know, humans and chess algorithms are better together, we will continue being better together with these creativity tools, these productivity tools, and we’ll just learn to be better at whatever we want to do. And hopefully, it’ll just free us up to think about the things that we want to think about.

Yeah. And I'm also kind of wondering, selfishly – we've been talking about how you've had the benefit of working with these models for a very long time, getting an intuition for some of these things – how to think about them in the capacity that they play, how to think about the UIs around them… But you've also been on the technology side, at the forefront of integrating these things into your systems, asking questions like "Hey, there's output from this model. Should we show this to a user?" or "There are these references that we could inject as context into this model. Do we do that?" A lot of practitioners are wrestling with those things too, now that – oh, I can go get LangChain, pull it down, integrate a bunch of things together, and create this workflow.

So on the practitioner level – I guess more so than the perception level… For all of those listeners out there who are jumping into this, and starting to deal with some of these issues around integrating language models into their own applications, do you have any wisdom you'd like to impart on that practical side of things – how to go from the concept of integrating generative AI for a particular domain and a particular problem, to actually realizing that in an application that's useful for people?

[35:45] Yeah. And this goes back to some of the other tips I'd provide for people interacting with them, too. For developers and practitioners of these things: do try to ground the responses as much as you can. Unless the product you're giving people is specifically a "say whatever you want" language model – if people are actually looking for real information from you, just know that your users are also seeing that conversational interface and everything I've just described, which is making them feel like it's a little more human than it is. And it might even be their first time seeing one of these things and how well it works… So keep that stuff in mind, try to anticipate the hallucinations, try to ground things, and try to provide clear attribution as much as possible…
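Bryan's advice here – ground the responses and provide clear attribution – can be sketched as a small retrieval-augmented prompting step. This is a minimal illustration, not You.com's actual pipeline; the `retrieve` stub and the prompt template are assumptions:

```python
# Minimal sketch of grounding an LLM answer in retrieved sources,
# with numbered sources so the model can cite them inline.

def retrieve(query: str) -> list[dict]:
    # Stand-in for a real search/retrieval backend.
    return [
        {"id": 1, "url": "https://example.com/a", "text": "Snippet about the topic."},
        {"id": 2, "url": "https://example.com/b", "text": "Another relevant snippet."},
    ]

def build_grounded_prompt(query: str, sources: list[dict]) -> str:
    # Number each source so answers can carry attributions like [1], [2].
    context = "\n".join(f"[{s['id']}] {s['text']}" for s in sources)
    return (
        "Answer using ONLY the numbered sources below. "
        "Cite sources inline like [1]. If the sources do not contain "
        "the answer, say you don't know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

sources = retrieve("what is neural search?")
prompt = build_grounded_prompt("what is neural search?", sources)
print(prompt)
```

The "say you don't know" instruction and the explicit citation format are two cheap ways to anticipate hallucinations; the citations also give users something to verify, which matters precisely because the conversational interface makes the model feel more trustworthy than it is.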

And then I would say switch back and forth as much as you can between those moments of surprise and wonder in yourself, and let your expectations drop a little bit, so you can think about what you can do with these things. But then once you implement something, be very skeptical and critical of it, as much as you possibly can, like you would with any other engineering system, any other research project or experiment you're running… Go get some numbers. Get close to the data, because it's very foreign to a lot of people right now. And the only way to do that is to go use it a lot. Not just as a user, but as a practitioner.

So you're gonna see that prompts matter a lot, in ways that you sometimes will and sometimes won't anticipate. So start getting your hands dirty, and embrace the whole process… It's quite fun, because it feels like you get a lot farther a lot more quickly than you're used to. You can kind of dream up a use case and it just kind of works… So celebrate that, right? I love that moment where you're like "Oh my gosh, this kind of works." But then immediately go back and be like "Okay, wait a second… What if we evaluated this properly? What if we put all these conditions around it? Let's find all the places where it doesn't work, where it's gonna let people down." And with You.com as a search engine turning into a "do engine", the goal is to make it impossible for you to fail in what you're trying to do, more or less. So try to embrace that mindset here, too…
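The "go get some numbers" advice can be made concrete with a tiny regression-style eval loop: pair each prompt with a cheap programmatic check, and count failures instead of eyeballing outputs. This is a hedged sketch; `call_model` is a placeholder for whatever model or API you actually use:

```python
# Tiny regression-style eval loop for an LLM-backed feature.

def call_model(prompt: str) -> str:
    # Stand-in for a real model call; swap in your own API here.
    return "Paris is the capital of France."

# Each case pairs a prompt with a cheap programmatic check on the output.
eval_cases = [
    {"prompt": "Capital of France?",
     "check": lambda out: "Paris" in out},
    {"prompt": "Capital of France, one word only.",
     "check": lambda out: len(out.split()) == 1},
]

results = [case["check"](call_model(case["prompt"])) for case in eval_cases]
passed = sum(results)
# With the stub above, the second case fails: the canned answer is six
# words, so the "one word only" instruction is not followed.
print(f"{passed}/{len(eval_cases)} checks passed")
```

Even a dozen cases like this, rerun on every prompt tweak, surfaces the places where a prompt change quietly breaks something – exactly the "find where it's gonna let people down" step described above.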

It's a new tool; you have to use it. If you've been a Kubernetes person or a Helm charts person – you're not gonna just know all those things. This is familiar-looking, because it's your language, but treat it almost as if it's a foreign language, and take a little bit of that perspective.

Yeah, I love that. That's a really good perspective to bring into this. As we wrap up and get to the end here, I'm wondering – we've talked a lot about You.com, we've talked a lot about YouChat… If you were to encourage listeners – maybe they haven't interacted with You.com yet, or YouChat – what are a few things they can do, a few places they can visit to get started and kind of understand the experience, and maybe a few things to try that would highlight some of the things we've talked about?

Yeah, so there's you.com. You can go there; if you're on mobile, we do have an app, and I do recommend that, because it provides a better experience on mobile, especially for the chat. Now, if you go to you.com, I'd also encourage you to look for the Chat tab, or go to you.com/chat, because that's where a lot of this new, exciting stuff is showing up.

I'd also love to just chat with people directly… And we have a Discord. So if you go to you.com, it should be pretty easy to find and join the community; there's a link to it. And yeah, come talk to us about your use cases. So far, people love it for writing essays, and emails to professors, and code, and recipes… Lots of people just like asking about themselves, even though the model doesn't always know exactly… That's like a fun use case that blends hallucination with truth. But look for those citations. Like I said, treat it like any other thing. Look for the citations, look for grounding, and follow up with us on our Discord. We're pretty much there all the time, and you can talk to us directly about it.

Yeah, that’s awesome. Well, I have been enjoying interacting with YouChat and other things, and I’m just really happy that we had the chance to have you on the show, Bryan, and learn a little bit more about what you’re doing, and also your really insightful perspective from working with these large language models for so long… So thank you for taking time and joining us, and we’d love to have you back on the show in a year, to think about – I’m sure next year AI search is going to look way different than we expect, because six months down the road it always looks different… But yeah, thank you for your innovation and your insights.

Thank you so much for having me. It was a real pleasure to meet you… And yes, I intend to make it very different in a year. So I’ll be stoked to come back and see how much of what we said holds true, and whether we all think of these things as ourselves at that point. I don’t know, we’ll see what happens.

Sounds good.

Yeah. Thank you.

Yeah. Thanks, Bryan. Bye.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
