Practical AI – Episode #250

Open source, on-disk vector search with LanceDB

featuring Chang She, CEO and co-founder of LanceDB


Prashanth Rao mentioned LanceDB as a standout amongst the many vector DB options in episode #234. Now, Chang She (co-founder and CEO of LanceDB) joins us to talk through the specifics of their open source, on-disk, embedded vector search offering. We talk about how their unique columnar database structure enables serverless deployments and drastic savings (without performance hits) at scale. This one is super practical, so don’t miss it!


Sponsors

Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com

Fly.io – The home of Changelog.com. Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.

Typesense – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!


Chapters

1. 00:07 Welcome to Practical AI (00:36)
2. 00:43 Chang She (01:23)
3. 02:06 Origins of LanceDB (05:51)
4. 07:57 Top workflow vs infrastructure (02:07)
5. 10:04 The demand for Gen AI (03:03)
6. 13:07 What does embedded mean? (02:20)
7. 15:27 Integrating steps A-Z (04:10)
8. 19:36 Structure of LanceDB (02:33)
9. 22:09 Generalities of data structure (01:34)
10. 23:43 Rust integration (03:24)
11. 27:07 Future language support (01:28)
12. 28:34 Real life use cases (06:37)
13. 35:11 Autonomous use cases (02:31)
14. 37:42 Exciting developments (02:33)
15. 40:15 Goodbye (00:42)
16. 41:05 Outro (00:45)

Transcript


Play the audio to listen along while you enjoy the transcript. 🎧

Welcome to another episode of Practical AI. This is Daniel Whitenack. I am CEO and founder at Prediction Guard, and I’m joined as always by my co-host, Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris?

Doing good today. How’s it going, Daniel?

Oh, it’s going great. We were just remarking before actually starting the recording that one of the great things about doing these episodes is that we get the excuse to bring on the show the coolest open source tooling and other projects that I’m using day to day and get the chance to interact with, and one of those is LanceDB. And we’re really excited today to have with us Chang She, who is the CEO and co-founder at LanceDB. Welcome.

Thanks. Hey, guys. Super-excited to be here. Thanks for having me on.

Yeah, yeah. Well, first off, congrats on all your success. I was scrolling through LinkedIn and saw a video of LanceDB up on the NASDAQ screen in Times Square… So that was cool to see. That must mean good things, I’m assuming.

Yeah, it was possible via Brex and also Essence VC… So a big thanks goes out to them.

Cool. Cool. Yeah. Well, I mentioned I’ve had a chance to look through some of what you’re doing and actually use it day to day; actually, that was a result of a previous episode, that was I think titled “Vector databases beyond the hype” with Prashanth. I think the question that we asked him was like oh, there’s all these vector databases, you’ve compared all of them… What are some of the things that stand out, or some of the vector databases that stand out in terms of what they’re doing technically, or how they’re approaching things… And one of them he called out was LanceDB. I think in particular he was talking about kind of on-disk index stuff. And so I’m sure we’ll get into that and a little bit more, but that’s how I got into it, so I recommend listeners maybe go back and get some context from that episode. But as we get into things, could you maybe give us a little bit of a picture as to how LanceDB came about? I know there’s a lot of hyped vector database stuff out there, and people might not realize how these things were developed, how they came about, what the motivation was… And so if you could just give us a little bit of a sense of that, at least for LanceDB.

Yeah, absolutely. And first, I wanted to also give a big shout-out to Prashanth as well. As you were saying, there’s a lot of hype and noise in this area, there are a lot of different choices… And for users and developers who are building generative AI tooling and applications, it’s always kind of confusing which one is good, and should you listen to the marketing from one tool versus another… So it’s great to see someone with an engineering background, who can write so well, to actually take the time and just try out a ton of different tools and interview a bunch of different companies, and come to his own conclusions. I’m super-happy and excited that he’s a fan of LanceDB, and we hope to make that better for him, and also all of our users.

So, back to LanceDB - so we started the company two years ago at this point, and we didn’t start it out as a vector database company, actually… Because I think if you kind of remember, ChatGPT is barely one year old.

Yeah. The dawn of AI. [laughs]

Yes, exactly. And so the original motivation was actually serving companies building computer vision, and building new data infrastructure for computer vision. So I had been working in this space for a long time; I’ve been building data and machine learning tooling for almost two decades at this point. I started out my career as a financial quant, and then became involved in Python open source. I was one of the original co-authors of the Pandas library. And that really got me excited about open source, about Python and building tools for data scientists, and machine learning engineers.

And so at the time - this was in 2020 and 2021 - what I observed was at the company I was working for, Tubi TV… It was a streaming company. So we dealt with both machine learning problems for tabular data, and also for unstructured data, like images, and video assets, and things like that. And what I had noticed was that anytime a project touched this multimodal data for AI, from images, to the text for let’s say subtitles, or summaries, to the poster images, these projects always took a lot longer, they were much harder to maintain, and it was difficult to actually put into production.

At the same time - so my co-founder, Lei, who I had met during my days at Cloudera - he was working at Cruise, and dealing with the same issues. And so we put our heads together and our conclusion was that it’s not the top application or workflow layer or orchestration layer that’s the problem, it’s the underlying data infrastructure.

[06:04] If you look at what’s been out there, like - you know, Parquet and ORC have been around, and they’ve been great for tabular data, but they really suck for managing unstructured data. And so we essentially said “Hey, what would it take to build a single source of truth where we can toss in the tabular data plus the unstructured data, and give much better performance at a much lower total cost of ownership, and an easier foundation to build on top of for companies dealing with a lot of vision data?”

So this comes in handy when you want to explore your large vision datasets, for, let’s say, autonomous driving; it comes in really handy for things like recommender systems, and things like that. So we started out building out that storage layer in the open source. And that took about a year’s worth of effort to really get to a shape that is usable, kind of like Parquet, or ORC, and the other formats and tools in this space. And that was when generative AI really burst onto the scene and became a revolutionary technology.

And what happened at the time was we had originally built in a vector index for our computer vision users to say “Hey, let’s deduplicate a bunch of images”, or “Let’s find the most relevant samples for training, for active learning”, and things like that. And it was that open source community that discovered it and said “Hey, this can be really good for generative AI as well.” That’s when we separated out another repo to say “Hey, this is a vector database.” It’s much easier to communicate with the community that way than to say “Hey, you’re looking for vector search? Use this columnar format.” And so that’s how we got onto this path.

Quick question for you… It’s really a follow-up to something you said a couple of moments ago as we were going through that… When you were talking about going through the analysis on the top workflow versus infrastructure, and you said y’all concluded infrastructure - you kind of went on past that, but I was wondering, how did y’all come to that determination? …for those of us who are not deeply into that thought process, I was wondering where your head was at when you were doing that.

Yeah, it wasn’t an easy decision or conclusion. Thinking back, it was kind of – so it was like 2022… It initially seemed pretty crazy when we first came upon it. If you think about it, it’s like, why would you make a new data format in 2022? Parquet has been working so well. And I think it was really observing the pain on our own teams, and also we went out and interviewed a lot of folks managing unstructured data… And so for them, it was - one, data was split into many different places. The metadata might be managed in Parquet, and the raw assets are just dumped onto local hard drives or S3… And then you might have other tabular data managed in other systems. And they would always talk about how painful it is to stitch everything together, and manage it all together. And some of the outcomes are – it’s really hard to maintain those datasets in production. You have a Parquet dataset that has the metadata, and then links to S3 or something like that, to all the images… And then somebody moves that S3 directory, or something like that, and now all of your datasets are broken. Or we would interview folks and ask “Hey, what are you doing to explore your visual datasets?” And they’re like “Well, I use a MacBook, and there’s this app on Mac called Finder. And if you single-click on a folder, it shows you a bunch of thumbnails. It’s this horrible way to actually work with your data”, but it was because it was so hard to manage all of that data that machine learning engineers and researchers were stuck with these subpar tools.

[10:03] You mentioned this transition of thinking from some of the original use cases that you were talking about with computer vision, to this world of generative AI that we’re living in now… My impression from an outsider’s perspective is that LanceDB has positioned itself very well to serve these kinds of generative AI use cases, which I’m sure we’ll talk about in a lot more detail later on… I’m wondering, from your perspective, how has that overwhelming demand for the generative AI use case changed your mindset and direction as a company, as an open source project and tooling, and all of that? And what are you targeting as the use cases moving forward, I guess?

I think certainly generative AI has brought in a lot of different changes and new thinking. One was the focus around use cases of semantic search, and just retrieval in general. I think with the advent of generative AI, retrieval becomes much more important and ubiquitous. For us, what that means is increased investment in getting the index to work really well and be really scalable, making that data management piece work really well as well, and integrating with frameworks for RAG, for agents, and for generative AI in particular.

When we started out, inevitably we were dealing with multi-terabyte to petabyte-scale vision datasets, and things like that… And we’re still dealing with a lot of that. But for generative AI, I think there was a renewed focus on ease of use. Because a lot of users are coming in who don’t have years of experience in data engineering or machine learning engineering, and what they’re looking for is an easy to use and easy to install package, that doesn’t require you to be an expert in any of these underlying technologies.

We also spent some effort on – well, that was sort of the motivation behind us making LanceDB, the vector database, one, open source, and two, embedded. Because we felt there were lots of options on the market that require you to figure out “Okay, what is the instance I need? How many instances do I need? What type? Okay, now I have to shard the data”, and so on. And coming from that data background, what I had been working with a lot is things like SQLite, or DuckDB, that just run as part of your application code, and would just talk to files that live anywhere. And it was super-easy to install and use. So that’s what gave us the inspiration to make an embedded vector database.

You had just gotten into this idea of embeddings – or, sorry, embedded databases, which… Well, embeddings are related, but that’s another topic. But the idea that LanceDB is embedded; you mentioned DuckDB and other things that operate in the same sphere… I’m wondering, for those that are trying to position LanceDB’s vector database tooling within the wider ecosystem of vector databases, and plugins to other databases that support vector search… Could you explain a little bit about what it means that LanceDB is embedded? What does that mean practically for the user? Maybe people aren’t familiar with that term quite as much… And are there other general ways that you would differentiate LanceDB’s tooling, and the database, versus some other things out there?

[14:06] So I love geeking out about these topics… So at the very bottom layer, in terms of technology, I think there’s a couple of things that fundamentally set LanceDB apart. One, as you mentioned, is the fact that it’s embedded, or it runs in process. I think we are one of two that can run in process in Python, and we’re the only one in JavaScript that runs in process. Number two is the fact that we have a totally new storage layer through Lance’s columnar format. And what this allows us to do is add data management features on top of the index. And then number three is the fact that the indices, the vector indices and others in LanceDB, are disk-based, rather than memory-based, so that it allows us to separate compute and storage, and it allows us to scale up a lot better.

So those are kind of the big value propositions that these technological choices bring to users of LanceDB. Number one, ease of use; number two, hyper scalability; number three, cost effectiveness; and then number four, the ability to manage all of your data together - not just the vectors, but also, if you think about it, the metadata, and the raw assets, whether they’re images, text, or videos.

Could you kind of describe a typical use case of a developer doing this, where you’re taking those features that distinguish LanceDB from other possibilities, other competition, and just talk about what that workflow looks like - or if there is a major one, or a couple - and get it very grounded, so somebody that’s listening can understand how they’re going to do it from A to Z when they’re integrating LanceDB into their workflow?

So there’s a couple of prototypical workflows that we see from our users. I think at the smaller scale for LanceDB you’re installing it via pip, or npm, or something like that. And in general, you get some input data that comes in as a Pandas data frame, or maybe a Polars data frame. And then you interface with an embedding model. You can do that yourself, or you can actually configure the LanceDB table to say “Hey, use OpenAI embeddings”, or “Hey, use these Hugging Face embeddings.” LanceDB can actually take care of all that. So that’s a pretty quick data frame to LanceDB, and then you can search it, and then that comes out as data frames or Python [unintelligible 00:16:38.07] or things like that, that plugs into the rest of your workflow, that are likely [unintelligible 00:16:44.09] So that’s number one.
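In Python, that small-scale flow looks roughly like the sketch below. This is a minimal illustration; the database path, table name, data, and query vector are all made up, and the embedding-model configuration Chang mentions is left out in favor of precomputed toy vectors.

```python
import lancedb
import pandas as pd

# Connect to (or create) an on-disk database - no server to run.
db = lancedb.connect("./my-lancedb")

# Input arrives as a plain DataFrame; the vectors here are precomputed
# toy values (you could instead configure an embedding model on the table).
df = pd.DataFrame({
    "text": ["hello world", "goodbye world"],
    "vector": [[0.1, 0.2], [0.3, 0.4]],
})
tbl = db.create_table("docs", data=df)

# Search results come back as a DataFrame, ready for the rest of the workflow.
results = tbl.search([0.1, 0.2]).limit(5).to_pandas()
print(results)
```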

And then number two is really these large-scale use cases, where some of our users have anywhere from like 100 million to multiple billions of vectors in one table. And that’s a much bigger production deployment. Typically, what makes LanceDB stand out in that area is one, it’s very easy for them to process the data using a distributed engine like Spark. And they can write concurrently, and get that done really quickly.

I think we’re one of the few that offers GPU acceleration in terms of indexing… So even for those really large datasets, you can index pretty quickly. And then number three is, because we’re able to actually separate the compute and storage, even at that large vector size, you don’t really need that many query nodes. You can actually just have one or two fairly average, commodity query nodes that run on your storage of choice, depending on what latency requirements you have, and then just have a very simple architecture.

For these types of architectures, the query nodes are stateless, and they don’t need to talk to each other. So when you need to scale up, or when a node drops out and it has to come back in, there’s no leader election, there’s no coordination; it really lowers the complexity of that whole stack.
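As a rough sketch of the index-building step mentioned above, LanceDB’s Python API exposes an ANN index build on a table; the `accelerator` argument reflects the GPU-accelerated indexing Chang describes. This assumes a table `tbl` of real, high-dimensional embeddings, the parameter values are purely illustrative, and the exact signature may differ between releases.

```python
# Build a disk-based IVF-PQ vector index over the table.
# Partition and sub-vector counts are illustrative; tune them to your data.
tbl.create_index(
    metric="cosine",
    num_partitions=256,
    num_sub_vectors=96,
    accelerator="cuda",  # optional GPU-accelerated index build
)
```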

[18:12] So another great example of this kind of architecture and the benefits that it brings is Neon, the Neon database. I think Nikita, who’s the founder, recently had a good Twitter thread about the difference between Neon and other databases… And he called it shared data versus shared nothing architecture. And I think that’s also what we kind of strive to deliver in LanceDB versus other vector databases.

Yeah… I know one of the things that I really enjoyed in trying out a lot of things with LanceDB is I can pull up a Colab Notebook and try out – like, I can import LanceDB, I can import a subset of the kind of data that I’m working with, it all runs fine, I don’t have to set up some client-server type of scenario… And then when people ask “Well, how are you going to push this out to larger scale?”, the appeal of just saying “Hey, well, we can just throw up this LanceDB database on S3 and then connect to it” - that’s a very appealing thing for people, because also those storage layers are available everywhere, from on-prem to cloud, to whatever scenarios you’re working with… So it’s very, very flexible for people.
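As a hedged sketch of that “database on S3” pattern (the bucket path and table name are hypothetical), pointing LanceDB at object storage looks the same as pointing it at a local directory:

```python
import lancedb

# The database is just files on object storage; any stateless worker
# (a Lambda function, a notebook, a container) can connect and query it.
db = lancedb.connect("s3://my-bucket/lancedb")  # hypothetical bucket path
tbl = db.open_table("docs")
hits = tbl.search([0.1, 0.2]).limit(10).to_pandas()
```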

Could you explain a little bit – because this is something… Like, I’ve been asked a couple times, but – so this is my selfish question, because I have you on the line, so you’re helping me with my own day to day work… But when I’m talking to some people, clients that I’m working with, and I say “Oh, we can just throw this up on S3, and then access it”, usually their question is something like “Well–” Because they have in their mind a database that has a compute node, and somehow the performance of queries into the database is tied to the sizing of that compute node, and maybe how that’s clustered or sharded across the database… And then there’s this idea of “Oh, I’m just gonna have even just a lambda function that connects to S3 and does a query.” In some ways this breaks things in people’s minds, and so a lot of times their question is “How does that work? How can a query to this large amount of data be efficient when the data is just sitting there in S3, or in another place?” So could you help me with my answer, I guess, is what I’m asking?

Yeah, absolutely. So this goes back to what we talked about earlier, with separation of compute and storage… And if you’ve been steeped in data warehousing/data engineering land, this has been a big arc of data warehouse innovation in the past decade, by allowing us to scale up the storage versus the compute separately. This is the thing that makes these systems seem magical, where you can process huge amounts of data on what seems pretty commodity or pretty weak compute.

The analogy that I like to make with this situation is kind of like – a lot of us are familiar with, let’s say, DuckDB demos or videos. And you could see instances where DuckDB is processing hundreds of gigabytes of data on just a laptop, in a very short amount of time. And they’re able to spit out results almost interactively. And there are companies, from MotherDuck to a new company called [unintelligible 00:21:49.02], that are looking to essentially distribute DuckDB queries on AWS Lambdas. It’s basically the same thing. It’s all about the separation of compute and storage. And that’s only possible if you have the right underlying data architecture for storing vectors and the data itself.

And just for someone that is not a database developer, can you describe the generalities of that data structure that enables such a thing?

[22:21] Yeah, so it’s three things. One is the columnar format. So typically, from gen AI to machine learning you can have very wide tables, but typically a single query only needs a couple of columns. So a columnar format allows you to only have to fetch and look at a very small subset of that data. Number two is that the columnar format needs to be paired with an index, like the vector index in this particular scenario… And that vector index, in order to give this separation of compute and storage, has to be based on disk. So you have to store the data on disk, not force the user to hold everything in memory, and then be able to access that very quickly. And then number three is how to connect that index with the columnar format. So a columnar format like Parquet does not give you the ability to do fast random access. So even if you had that good index, using Parquet you would not be able to get interactive performance in terms of queries. And it’s only by having a new columnar format like Lance, which can give you fast random access and fast scans, that you can successfully put these two together and deliver on this. So those are the three big pillars in our data architecture that make this possible.
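A small illustration of how the first pillar shows up at the API level, continuing the earlier sketch (column names are made up): a query can project just the columns it needs, so only a narrow slice of a wide table is ever read.

```python
# The vector index narrows the rows; the columnar format means
# only the projected columns are fetched from storage.
results = (
    tbl.search([0.1, 0.2])
       .select(["text"])  # skip wide columns such as raw image bytes
       .limit(10)
       .to_pandas()
)
```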

While we were talking here I was going through your repo and stuff on GitHub, and I was surprised at something that – kind of prompting the next question. It looks like you’re really addressing a wide range of different types of needs. There’s obviously Python, as you would expect, but you have JavaScript, and then I was delighted to discover that there’s a Rust client in there, which is - when I’m not doing AI-specific things, most of the time that’s my language of choice these days. Could you talk a little bit about two things - the broader picture, what you’re trying to achieve, how you choose what languages to support, and how you’re getting there… And then, if you’ll scratch my itch, what is your intention with that Rust client? Is it ready? What does it do? …just because I’m fascinated with that. Sorry.

Yeah, absolutely. I love talking about Rust. The Rust package is actually not a client – the core of both the data format and the vector database is actually in Rust. So the Rust crate that we have is actually the database, or the embedded database. And it’s the same thing with JavaScript, for example; it’s not just a client, but an embedded database in JavaScript, and that is actually built on top of the Rust crate. Kind of like you have in Polars, or something like that - you have a Rust core, and then you connect that into JavaScript.

So we had actually started out in 2022 writing in C++, because Parquet is written in C++… Serious data people and database people write in C++, right?

Until they find Rust, of course.

Right. And it was sort of a hack project during Christmas time at the end of 2022 - a hack project for a customer, actually - where we had to partially reimplement the read path for the Lance format. What we found was just so good that we decided to actually rewrite everything in Rust. I think the biggest thing was we were a lot more productive; we rewrote roughly six months of solid C++ development in about three weeks with Rust. And this was us learning Rust as beginners, as we went along. A lot of that initial Rust code has since been rewritten over the past year, but it just made us feel a lot more productive.

[26:18] And then number two is the safety that Rust offers you has been amazing. With C++, every release just didn’t have a good feeling. It was almost like “Where’s that next segfault going to come from?” Whereas with Rust, we felt very confident making multiple releases per week, with major features, and we did not see anywhere near the issues that we saw with C++. So everything has been really great.

I know that Rust has become really popular now, actually, even with vector databases. Qdrant I think is Rust; and Pinecone - they’re not open source, but they’ve publicly said that they’ve written their whole stack in Rust as well.

One more question from me along the same line before I let it go, because we’ve hit that sweet spot that I love… Do you think - and this is not specific to LanceDB, but based on what you’re saying, clearly you’re thinking ahead on these things… As we go forward and you see both the AI applications and the different types of workflows and infrastructures becoming broader and more supportive, the multi-language aspect of getting out of only Python, for instance - do you foresee that as a convergence, where language agnosticism develops in this space as it has in other areas of computer science? Or do you think that we’ll still kind of be locked in on the current sets of infrastructure and tooling, very Python-oriented, for the indefinite future? What is your thinking along those lines?

So I think generative AI definitely changes the picture, and I think there’s a very large TypeScript/JavaScript community that has been brought into the arena to build AI tools. I think this is also an underserved segment, where – it’s not just vector databases, but data tooling in general lags far behind in JavaScript/TypeScript land, versus Python. And I think there’s a real opportunity for the open source community to create good tools for this part of the community as well.

I want to hear about some of the actual use cases that you’ve seen people implement with LanceDB… Maybe ones that stand out, like “Oh, this was cool because -” whatever it was; they use it at scale, or it fits a very typical generative AI use case, or whatever… And then maybe something that surprised you, in terms of “Oh, I didn’t –” Always when you put a project out into the world, there are these things where “Oh, I really didn’t expect people to be using it that way… But yeah, that makes sense.” So can you think of anything that fits into one or both of those categories?

The use cases for LanceDB in the community that I see fall into three or four large buckets. One is of course generative AI, RAG, and things like that. And there - it’s not so much the use of LanceDB that I think is really cool, but the applications that people build with it that are really cool and amazing. And a lot of the applications people build that really take advantage of LanceDB are things where you need RAG to be very agile, and you need it to be really tightly bundled with your application. You can call this RAG from anywhere, and have it return pretty quickly, without too much complexity.

[30:03] And so this is where I see a lot of folks, from your standard chatbots and chat-with-documentation, to things like productivity tools, where they build things that help people organize their daily schedules, to much more high-stakes things in production, like code generation, or healthcare, legal, and things like that.

And so there I think you typically see vector dataset sizes from the tens of thousands up to single-digit millions of vectors. So production means you really scale up both the number of datasets that you have, and the number of vectors that you have…

And one of the cool things that I’ve seen, that takes advantage of LanceDB and the Lance format uniquely, is a code analysis tool that analyzes your GitHub repository and plugs it into a RAG customer success sort of tool. And what they want to be able to do is query the state of the database - like “Today, versus yesterday, versus a week ago” - to say “Hey, was this issue fixed or not?” and “What’s still outstanding?” LanceDB uniquely gives you the ability to version your table, and also do time travel. Any vector database can do “Give me the 10 most similar things to this input.” Uniquely, what LanceDB gives you the ability to say is “Give me the 10 most similar as of yesterday, or as of a week ago.” And we do that automatically for you.
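A minimal sketch of that time-travel idea, assuming LanceDB’s table-versioning calls (`list_versions`, `checkout`, `checkout_latest`); exact names and return shapes may differ across releases, and the query vector is illustrative:

```python
# Every write produces a new table version automatically.
versions = tbl.list_versions()

# Check out an older version and run the same similarity query
# "as of" that point in time.
tbl.checkout(versions[0]["version"])
past_hits = tbl.search([0.1, 0.2]).limit(10).to_pandas()

# Return to the latest version before writing again.
tbl.checkout_latest()
```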

And then I think the other big buckets are eCommerce, and search and recommender engines. This is the traditional use case for vector databases… And there you tend to see much bigger single datasets - say, “I want to store item embeddings.” Maybe that’s up to a couple of million, up to 10 million, and it could get up to hundreds of millions. You don’t have as many tables, but you have potentially very large tables.

And then of course, the last bucket is computer vision - AI-native computer vision; either generative computer vision, or things like autonomous vehicles, and things like that. And there’s a whole combination of more complicated use cases that enable active learning, deduplication, and things like that… And the thing that is very unique about the use of LanceDB there is companies that are managing all of their training data in LanceDB and the Lance format as well. So you can use the vector database to find the most interesting samples, and then you can actually use the tooling on top of the format to essentially keep your GPU utilization high, and keep your GPU fed very quickly during training, or if you’re fine-tuning, or if you’re running evals, and things like that.

Yeah, so cool. One of the things that has been most fun for me recently is this combination of an LLM, LanceDB, and DuckDB, where you can create these really cool – so if I’m using an open LLM that can generate SQL queries or something, but I have all of these different SQL tables… What we’re doing is putting descriptions of the SQL fields and tables in LanceDB, and on the fly matching and pulling those to generate a prompt, which goes to the LLM to generate the SQL code, which is executed with DuckDB. And this gives you that really nice natural-language-query-to-your-data type of scenario, which has been really fun to play with.
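A hedged sketch of the pattern Daniel describes: the `schema_docs` table, its `description` column, and the `generate_sql` helper are hypothetical stand-ins for your own schema store and LLM wrapper.

```python
import duckdb
import lancedb

db = lancedb.connect("./my-lancedb")
# Hypothetical table holding natural-language descriptions of SQL tables/fields.
schema_docs = db.open_table("schema_docs")

def answer(question: str, question_vector: list) -> duckdb.DuckDBPyRelation:
    # 1. Retrieve the schema descriptions most relevant to the question.
    context = schema_docs.search(question_vector).limit(5).to_pandas()
    notes = "\n".join(context["description"])

    # 2. Ask the LLM for SQL; generate_sql is a hypothetical wrapper
    #    around whichever model you use.
    sql = generate_sql(f"Schema notes:\n{notes}\n\nQuestion: {question}\nSQL:")

    # 3. Execute the generated SQL locally with DuckDB.
    return duckdb.sql(sql)
```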

[33:56] That’s really good to hear. Actually, sorry to interrupt… Because you kind of [unintelligible 00:33:58.27] So one of the things that’s really cool about DuckDB is its extension mechanism. I think they’ve also published an extension framework for Rust-based extensions. So we have a basic integration going there, and I think in the new year what you can expect from us is that we’re going to spend a little bit more time making that integration richer, meaning our goal is for you to be able to write a DuckDB UDF to do vector search, and then the results come back as a DuckDB table, where you can then run additional DuckDB queries on top of that. And the same thing with Polars, right? And the goal is to essentially make it so that the vector database is no longer a thing that you even have to think about. People are generally more familiar with DuckDB or Polars as the tool that stitches together the workflow. So we just want to make that feel even smoother, and more transparent.
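For what already works today, one bridge between the two is Arrow: a LanceDB table can be exported with `to_arrow()`, and DuckDB’s replacement scan can query the resulting Arrow object by variable name. A minimal sketch, continuing the earlier snippets:

```python
import duckdb

# Export the LanceDB table as an Arrow table; DuckDB's replacement
# scan picks up the `docs` variable by name inside the SQL text.
docs = tbl.to_arrow()
sample = duckdb.sql("SELECT text FROM docs LIMIT 5").to_df()
```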

A couple of moments ago when you were talking about the use cases, you were talking about autonomous vehicles and stuff… And I was wondering if we could pull that thread a little bit more. It seems like it is a fantastic –

Chris loves drones.

Yeah, I love drones, and I love things that are not near data centers. I love things that are off on the edge, whether it be for inference or training, where you may not have all the things that we’re so spoiled with by our cloud providers out there… And it seems like there are many types of opportunities to use that. What’s your thinking around that? Have you seen any use cases? Any ideas for the future in that kind of autonomous, on-the-edge world?

Yeah, definitely. So some of our users are robotics or device companies, where they either collect data and write it as Lance on the edge, or they collect data as, let’s say, protobuf or something like that, and send it off to be converted into Lance for analytics, vector search, and so on and so forth. I think in this world - you’re going to know it better than me, but what I see is, one, the data is super-complicated. Especially with, let’s say, vehicle types of use cases: you’re getting visual data from the cameras, point clouds from the LiDARs, time series data from the sensor readings over time… And then you’ve got manual input data from the auditors and the drivers sitting in the car. You’re also getting metadata about the car, the weather, the geography, and all of that.

So being able to manage and query all of that together I think will be super-important for robotics, and vehicles, and any company putting things out into the physical world that generate data. And I think that – yeah, I mean, it’s a really hard problem, but the potential is huge… Because for AI, we’re going from this era of very canned question & answer to much more freeform question & answer… But it’s still a little bit passive. You’re asking it for information. What’s really exciting would be to marry these generalized AI capabilities with a drone, or a robot, or something that can go out and be active in the real world.

That gets me super-excited about what’s to come. I’m wondering, as we close out here - it’s been a fascinating discussion - could you take a moment and make a few observations about what is exciting from your perspective right now in this practical AI space? …because that’s where you’re living… What excites you about whatever it is - the next six months, the next year - and what do you think is coming as this tooling rolls out further and further, and people learn to apply it better and better? What’s exciting for you?

[38:19] That’s a great question. I think there are lots of things that hold a lot of promise in the next 6 to 12 months. One is this explosion of information retrieval tools. We already see a lot of companies adding generative AI in customer success management, and documentation, and things like that. So I think we’ll see a lot of applications providing value that can also be personalized. Not just ChatGPT-style answers, but actually personalized to their own data, or their own cases, or things like that.

And then number two is I see a lot of successes in very domain-specific agents that are able to dive deep into legal, or healthcare, or some domain very specifically, and build things that seem magical, whether it’s compliance, or driving better outcomes, or creating things that would democratize a lot of these very deep-expertise types of domains…

And then, I think a little bit further out, generalized low-code to no-code tools for you to build very sophisticated applications using generative AI through code generation, and, let’s say, creative interfaces, and things like that. So those are things I think we’ll see delivered in the short term.

And then, personally, I love games, and I’m actually super-excited about what generative AI brings to gaming. We talked about open world, and things like that… And this can be really open, where you could just get lost for a long, long time in a generative world.

It’s awesome. Thank you so much for taking time to talk with us… And please pass on my thanks to the LanceDB team for making me look good in my day job by giving me great, great tools, that work really well. I appreciate what you all are doing. And yeah, I’m just looking forward to seeing what comes over the coming months. And I encourage our listeners to check out the show notes, follow the links to LanceDB, try it out… It only takes a few minutes. And we hope to talk to you again soon. Thanks so much.

Thank you, Daniel. Thank you, Chris. It was super-fun talking with you guys. And if you have any feedback, please let us know; we hope to make it look even better in the new year.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
