Practical AI – Episode #216

Explainable AI that is accessible for all humans

with Beth Rudden, CEO at Bast.ai


We are seeing an explosion of AI apps that are (at their core) a thin UI on top of calls to OpenAI generative models. What risks are associated with this sort of approach to AI integration, and are explainability and accountability achievable in chat-based assistants?

Beth Rudden of Bast.ai has been thinking about this topic for some time and has developed an ontological approach to creating conversational AI. We hear more about that approach and related work in this episode.


Sponsors

Fly.io – The home of Changelog.com. Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.

Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com.

Typesense – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!

Notes & Links


Chapters

1. 00:00 Welcome to Practical AI (00:42)
2. 00:42 Beth Rudden (06:48)
3. 07:30 Bringing in ontology (08:47)
4. 16:27 Don't infer consciousness (05:51)
5. 22:18 Dealing with bias (03:41)
6. 25:59 How to create access (05:35)
7. 31:35 Using AI responsibly (06:57)
8. 38:31 Implementing NLG to more modalities (03:42)
9. 42:14 Will AI make you learn better? (02:05)
10. 44:19 Wrap up (00:23)
11. 44:50 Outro (00:45)

Transcript



Welcome to another episode of Practical AI. This is Daniel Whitenack. I’m a data scientist with SIL International, and I’m joined as always by my co-host, Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris?

Doing well. How are you today, Daniel?

Oh man, so much better than last week. I was sick last week when we were supposed to record, so sorry for the skipped week… But I’m happy to be back here, and with a super-relevant topic around AI systems and explainability – the delivery of AI systems that are explainable. We have with us today Beth Rudden, who is CEO at Bast.ai. Welcome, Beth.

Thank you for having me.

Yeah, we were just talking before the show about this craziness that we’re experiencing around the hype of these AI systems, which maybe are just like a web page that connects to OpenAI… But you were talking about how you’ve been thinking for quite a while about explainability, and accountability of AI systems… I’m wondering if maybe you could start out by just giving us a little bit about the journey you took to landing in that space. How did you get interested in those topics, specifically?

I think that a really good place to start is understanding that in 2012, the Harvard Business Review said that data scientist would be the sexiest job of the 21st century…

Yes… [laughter] Which is why some of us on the call are data scientists…

So I was working at IBM, and I was a pleaser, so I went after the information architect certification. But I had a couple of friends who were mathematicians and statisticians, and really data engineers, and DevOps people, and software engineers… And they didn’t really want to be an information architect or an IT architect, so we rinsed and repeated the experiential certification to be able to say, “Well, how can we make sure that we’re making data scientists that actually know how to use data in the scientific method to solve business problems?” And so we started that, and it took us about six, seven years; it’s accredited through The Open Group, anybody can get it… And it’s hard, because depending on your level – one, two or three – you actually have to submit several projects that say “Here’s how I took my problem and put it into a hypothesis that could be tested, then here’s how I negotiated with my business stakeholder, and here’s how I showed my results”, and as you get further and further along, “here’s how I integrated my model into production.” And I think that’s when things get a little crazy, and people are like “Wait, what? How do I do that? Here’s my Jupyter Notebook. Isn’t that great?”

But I’ve been looking at how to deliver AI – doing a lot more on the linguistics or the semantic side – for probably about 15-20 years. And if you look at how the NLP work really goes – you know, a lot of people are like “Oh, hey, I pulled down this thing from spaCy. I can write NLP, right? I just call this a cosine similarity kind of model. I’m good to go.” And I was an archaeologist in the late ‘90s; that’s what I actually got my degree in. I did Greek and Latin, and spent some time in Italy… And when you’re learning languages, you’re learning declensions, and etymology, and stemming and lemmatization and tokenization and all of the text pre-processing, so I was always the squishy human data scientist. I was the one studying languages and doing semantics. Then I think it was 2015 when Andrew Ng was like “Oh hey, if we use GPUs, or graphics processing units, we can process all this unstructured data really, really fast against these statistical models.” And so a lot of people, I think, forgot about entity extraction, and ontologies, and the Semantic Web… And I use OWL – the Web Ontology Language, for formal knowledge graphs – and I hopefully am not speaking Greek to a lot of people, but I’m really looking at the language side, as opposed to the machine learning side.
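To make those pre-processing terms concrete, here’s a minimal sketch using spaCy – one popular open-source library for this kind of work, not necessarily what Beth’s stack uses:

```python
# Tokenization, lemmatization, and a simple cosine-similarity model,
# sketched with spaCy. Assumes: pip install spacy
# and: python -m spacy download en_core_web_md
import spacy

nlp = spacy.load("en_core_web_md")  # the medium model ships with word vectors

doc = nlp("The archaeologists were studying ancient languages.")

# One pass gives you tokens, lemmas, and part-of-speech tags
for token in doc:
    print(token.text, token.lemma_, token.pos_)

# The "cosine similarity kind of model": Doc.similarity computes
# cosine similarity over averaged word vectors
other = nlp("Researchers examined old texts.")
print(doc.similarity(other))
```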

And that understanding of semantics has really put me in a great position now, because all of the statistical models… NLP has three – I kind of chunk them into three things. NLU, natural language understanding, NLC, which is classification, which is your prediction, and then NLG, which is your generation. And prior to having access to GPT, we were generating language old-school. And it’s super-hard. I mean, it’s really, really hard. If you think of like pronouns, and trying to say “John was the guy who we’ll refer to him, and then he took the notch or the wrench from Joe” etc, etc. I mean, it’s just so complicated.

[06:08] But now that we have a really good natural language generator, I’m kicking butt, because I have the semantic side, so I have really, really high NLU, or natural language understanding… Because how an ontology works - and I try to remind people of this; ontology is the study of the nature of your reality based on the language that you use. So when you use language, I’m understanding your reality. And so I map that into a knowledge graph, and then take that language to understand whatever you’re going to say against it. So I have a map. So when the natural language generator generates all the variation that I need in order to put into my machine learning models, and then I have a map to make sure that I understand what kind of things you actually want to talk about, I can create conversational AI that has lineage and provenance, that has the source of where you’re starting from, and then I use all of the GPT generators to effectively generate the fluff or the syntax around whatever entities that I extract about whatever you want to talk about. If you guys could help me break that down into better English… That’s what I’ve been trying to do for the last couple of months.
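As a rough illustration of mapping language into a formal knowledge graph, here’s a toy sketch in rdflib (a common Python library for OWL-style graphs); the namespace, entities, and provenance strings are invented for the example, not Bast.ai’s actual schema:

```python
# A tiny knowledge graph: a concept hierarchy plus provenance,
# so every entity can point back to where it came from.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/book/")  # hypothetical namespace
g = Graph()

# Concept hierarchy: Character is a kind of Entity
g.add((EX.Character, RDFS.subClassOf, EX.Entity))
g.add((EX.Cliff, RDF.type, EX.Character))

# Provenance: where in the source text this entity was found
g.add((EX.Cliff, EX.mentionedIn, Literal("paragraph 3, page 12")))

# The graph is the "map": questions get matched against known entities
for s, src in g.query(
    "SELECT ?s ?src WHERE { ?s <http://example.org/book/mentionedIn> ?src }"
):
    print(s, src)
```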

That’s awesome. I would love to – so one reference point in my ontology of this space is people sort of talking, and we even talked about this in our last episode with Bryan McCann from You.com about grounding, right? Like, some people are thinking about, “Okay, I can generate text, or response to a user query, by looking at some external knowledge, and grounding my response in that sort of context.” So one way you could do that is to say, “Okay, I’m gonna go, I’m gonna pull an article and insert a paragraph from that article in a prompt, a natural language prompt to a language model, and that’s the way I’m inserting knowledge into my response.” Here you have this concept of a knowledge graph, and entities, and like ontology… Would you consider that grounding, or is it slightly different in terms of what you’re talking about with this ontology, and kind of bringing that external knowledge into the generation?
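For reference, the grounding pattern Daniel describes – pull a passage, splice it into the prompt – reduces to something like this sketch, where generate() stands in for whatever language model API is being called:

```python
# Grounding by prompt insertion: the model is asked to answer only
# from the retrieved passage, not from its training data.
def build_grounded_prompt(question: str, passage: str) -> str:
    return (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context: {passage}\n\n"
        f"Question: {question}\nAnswer:"
    )

# An illustrative retrieved paragraph (invented for the example)
passage = "Stoll eventually contacted the CIA about the intrusion."
prompt = build_grounded_prompt(
    "Did Cliff reach out to the CIA or the NSA?", passage
)
# response = generate(prompt)  # hypothetical LLM call
print(prompt)
```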

The nice thing about software is that it’s multi-directional. So, about how the conversational AI product works – and just a quick note here: we named our company, or I named my company, Bast.ai, after the Egyptian cat goddess. And we build conversational AI technology, so we build CATs.

And we really wanted to distinguish from bots. And having the word CAT makes a lot of sense, at least in my mind, because I’ve been overriding the bot. But the idea is that the conversational AI technology includes a data pipeline. So bring your own data – let me take a book that you have written, and then the CAT, through the data pipeline, ingests that book and puts the entities into the ontology; that can be done both supervised and unsupervised. I am a big proponent – and I hope we can get into this a little bit – of the idea that AI is to augment human beings; it’s really to help us understand. I’m always asking, “Why can’t we carry a pocket brain like we carry a pocket GPS?”

[09:44] So the CAT ingests the book, and then those entities go into the knowledge graph, in a concept hierarchy. And they carry “Well, it was from this paragraph, on this page, in the book.” So each entity carries the actual provenance of where it came from, and its place within the concept hierarchy. Then, when you’re interacting with the CAT through the conversational interface, the CAT will be able to respond using those entities, and so it will show you where it got its response from. And it’s predicting the response based on your question, because I have that high degree of NLU. So I take your question, do text pre-processing, and match it against the entities in that ontology. Or if it’s not in that ontology, we have a series of orchestration steps to send it to a couple of different places that we had to create.
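A bare-bones sketch of the data shape Beth describes – entities that carry their own provenance, matched against a pre-processed question. The class, fields, and page numbers here are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    concept: str     # where it sits in the concept hierarchy
    book: str
    page: int        # provenance back to the source text
    paragraph: int

ontology = {
    "cliff": Entity("Cliff", "Character", "The Cuckoo's Egg", 12, 3),
    "cia": Entity("CIA", "Organization", "The Cuckoo's Egg", 57, 1),
}

def lookup(question: str) -> list[Entity]:
    """Pre-process the question and match it against known entities."""
    tokens = question.lower().replace("?", "").split()
    return [ontology[t] for t in tokens if t in ontology]

for e in lookup("Did Cliff reach out to the CIA or the NSA?"):
    print(f"{e.name} -> {e.book}, p.{e.page}, para {e.paragraph}")
```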

The way that we handle toxicity is something that I’d love to talk about too, just because I think the way that we’re handling it is very, very elegant and fun. And the idea is that we wanted to have fully-explainable AI. We wanted to show people how they could ingest a paragraph, and then be able to converse with the AI to understand how that paragraph is being understood in relation to what the person is asking for.

Yeah, and maybe just to give an example of this… I love the way that you frame this in terms of reaching out to an ontology that’s hierarchical, and you can kind of ground citations in that as well… So just to give an example, let’s say that I had a book, maybe it’s a novel… I’m reading right now – I don’t know if you’ve ever read it – The Cuckoo’s Egg by Cliff Stoll. It’s about a hacker at the Lawrence Berkeley Labs way back in the day; it was really interesting. Anyway, let’s say I have that book, and I put it through this data pipeline, so I’ve got my ontology, and I’ve forgotten – “Oh, did Cliff reach out to the NSA, or was it the CIA?” And I ask the question, “Did Cliff reach out to the CIA or the NSA?” What would happen next – how would the processing of that query differ in that example, as compared to other ways of handling this?

So in that case we would have the direct path, where we could say “Okay, well, Cliff is a character in the book, and it was the CIA, not the NSA”, so I know that for a fact in the ontology. So then I could do it two ways, where I could answer you directly – and I don’t know if you’d call it cheating a little bit, but we have a corpus. So anything that’s really, really easily answerable, we stick into OpenSearch, which is a fork of Elasticsearch. Then we just pull that, give you the answer; we know with 100% certainty that it was the CIA. Done.
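That “easy answers” path might look roughly like this with the opensearch-py client; the index name and fields are invented, and a running OpenSearch cluster is assumed:

```python
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

# Pre-computed Q&A pairs live in an index; a match query pulls
# the closest stored question and its known-good answer.
resp = client.search(
    index="known-answers",  # hypothetical index
    body={"query": {"match": {
        "question": "Did Cliff contact the CIA or the NSA?"
    }}},
)
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["answer"])
```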

And all of these scores – we have about 50(ish) different models, depending on how you count them, that run through our orchestration system. Each one of those models has scores, you have targets, configurability, you can expose different hyperparameters, etc. Or we could take the Cliff entity and the CIA and the NSA, and we can do prompt engineering – like, if you wanted to say “Give me Cliff’s reply, and tell me, was it the CIA or the NSA?”, you could go ask it to generate a response for you.
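One illustrative way to orchestrate several scored handlers with per-handler thresholds – a sketch of the general pattern, not Bast.ai’s actual system:

```python
def route(question, handlers):
    """handlers: list of (name, score_fn, answer_fn, threshold) tuples."""
    candidates = []
    for name, score_fn, answer_fn, threshold in handlers:
        score = score_fn(question)
        if score >= threshold:  # each handler has its own configured target
            candidates.append((score, name, answer_fn))
    if not candidates:
        return "Sorry, that's outside what I can answer."
    score, name, answer_fn = max(candidates, key=lambda c: c[0])
    return f"[{name}, score={score:.2f}] {answer_fn(question)}"

handlers = [
    ("corpus-lookup",
     lambda q: 0.95 if "cia" in q.lower() else 0.0,
     lambda q: "It was the CIA.", 0.80),
    ("generative",
     lambda q: 0.50,
     lambda q: "(generated reply)", 0.40),
]
print(route("Was it the CIA or the NSA?", handlers))
```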

And one of the things that we’re really playing around with is conversations should be interactive. So we want the CAT to also engage the person. So we could say, “Oh, Cliff was part of the CIA, and you wanted me to generate a response that Cliff would give. Here you go. And what else would you like me to do? Would you like me to generate some books that Cliff would have written in the style of Cliff?” So you can kind of really start to do that engagement, too.

[14:07] And I use the words lineage and provenance, but it’s really attribution. And when you start to attribute things to the right source system, everything changes. Anytime I show some of the CATs to people, one of their first responses is “Let me put my own data in it.” And that’s exactly what I want to instigate with humans – not having the black box algorithm do the answering; just have the black box algorithm do the generating. And I know that a lot of people are super-excited about using these models. I would really caution about creating them… Because generative AI is just going to generate based on syntax, not based on understanding. And I think that that’s the biggest thing that I want people to hear. There’s no sentience, there’s no sapience, there’s no consciousness – and I think that that’s all a distraction from the amount of compute that these models really take.

So I’m like “Can we make it like a utility, and everybody uses it sort of like a dictionary, and then we’re good to go? Or like a thesaurus…” And I really do think that when you’re using the generative transformer to generate transformations – that’s the big difference that I’m trying to get people to see. With the ontology as the map – and actually GPT gave me this analogy; it’s very good at analogies and metaphors because of how it’s built, and the clustering, and all of the things that happen behind the scenes – if you own a toy store and you use an ontology with our conversational AI, with our CATs, you can ask that CAT about any toy in your toy store, and the ontology will tell you about any toy in your toy store. If you ask the CAT about a toy that’s not in your toy store, it will tell you it’s not in your toy store. If you do the same thing with ChatGPT, it’s going to tell you about a toy that doesn’t exist.
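The toy store analogy fits in a few lines: answer from the ontology when the entity is known, refuse honestly when it isn’t. A minimal sketch:

```python
# With an ontology as the map, unknown entities get a refusal
# instead of an invented answer.
toy_ontology = {
    "red wagon": "Classic steel wagon, aisle 3.",
    "robot kit": "Build-your-own robot, aisle 7.",
}

def ask_cat(toy: str) -> str:
    if toy in toy_ontology:
        return toy_ontology[toy]                        # grounded answer
    return f"Sorry, '{toy}' isn't in this toy store."   # honest refusal

print(ask_cat("robot kit"))
print(ask_cat("moon pony"))  # an ungrounded model might describe this anyway
```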

So Beth, that was a fantastic explanation, and I’m learning a lot. Daniel is quite the expert in this area himself, but I’m not. But as we were coming into the break, you made a point that has been weighing on me for the last moment or two as you were finishing up… As a user interacting with this remarkable capability that’s really taken all of our attention this year, you pointed out that it’s really important for that user not to infer intelligence, not to infer consciousness in it. I would argue that for the typical user out there – someone who’s not in the AI space, and doesn’t have an understanding of these models – that’s a really hard ask. And definitely, with this year, with GPT everything – GPT-4 has been out this last week as we’re recording this… I’m talking to a lot of people, and I think they’re really struggling with that. So you gave that direction, but I think that’s easy for us; I think it’s a tall order for people not in the space. And a lot of our audience are people coming into this and learning about it… Can you provide some guidance on how you keep that separate, what it means, and how you should use this new capability as people are now engaging? Because it’s changing the way we’re all operating day to day. Even non-technical people who have never really done any AI are now going to ChatGPT, and we’re really at a moment where people need to understand how to appropriately engage this brand new technology.

[18:06] I think that it’s a combination of things, but I think that TL;DR, go out and use it, and use it as much as possible, and ask it about yourself, ask it about things that you know. And the way to really understand how something works - and remember, we’re in the realm of cognitive science. That binds philosophy, psychology and computer science. And I think that really when you understand that it is a generative transformer, it generates transformations, it does not have the understanding to have consciousness or sentience, it doesn’t understand what you’re saying. It’s a stochastic parrot; it mimics language. So it mimics what you’re saying based on the syntactical rules of that language. And it’s incredibly good, because it’s been fed a huge amount of data.

So if I’m having a conversation at work with somebody who’s not a technologist… They might be in a marketing department, or something like that. How does that change how they’re engaging in terms of how they should be thinking about it? Because I would argue that that’s a tall order to actually get – you can say that to someone and they’ll say…

Right. Yeah, ask it about a product that you are not selling, that you’re not marketing. Ask it about something you know, and you know to be true, and ask it to say, “I would like you to market a blue tomato. We have blue tomatoes. Blue tomatoes grow on trees. Could you market that for me?” And it will give you marketing for blue tomatoes that grow on trees.

And so I really want people to come from a space of abundance, not scarcity. I really want people to think about what they have right now, and what every human being has is their own experience. And what this AI has been trained on is a very small number of people’s experience – those who have been on the internet, writing on the internet. And so my opinion is that every single human being is already impacted by AI, and they should be using it. And they should be using it the way I used it to help my daughter come up with an analogy for reciprocals. I asked it to come up with a good metaphor to explain what an ontology does. There are so many different things that you can use it for, and I really think the best thing people can do is go out and use it and ask it about things they know. Many authors are like “Oh, so it says I wrote 10 books, not 4. Hah!”

People are starting to see that it’s going to generate the next proper noun, the next predicate. [laughs] The next syntactically correct word – and I’m doing it in a sentence, but it’s so smart, because it has been fed so much data… But here’s a myth. You know, I talked a lot technically about using ontologies, and knowledge graphs, and concept hierarchies, and all these things. But here you go – all of what I just talked about can run on something like the very first iPhone. The myth is that you need big data and big honking machines to create AI; you do not. And I would ask, who does that myth serve? If everyone could understand how to use the data that they have – data that is special to them, that they understand… Like, if you take your grandfather’s journals and put them into an AI so that you can have a conversation with your grandfather – this is what the technology is enabling us to do, and we want it to be distributed to every human on Earth, because every human on Earth is being impacted by AI.

[22:17] Yeah. And I don’t know if you can talk at all… One of the things - I love how you brought out this element of people being able to bring their own data to the table, kind of combined with the fact of them being able to run this maybe even on their own hardware; how shocking would that be? And I love that also because I think that there’s a real concern that I’ve been having over time of just how Western and English-focused most of this conversational AI is. And the fact is that we come to the NLP table with these biases that say like “Oh, wouldn’t it be great if every language community in the world had a translated version of Wikipedia?” And that concept makes sense to us. But the reality is some language communities, they don’t want that. It’s explicitly not how they use their language. And they want to use their language in a community setting, maybe for storytelling, or maybe for whatever it is, and they would rather bring a different kind of data to the table. So I think that that also helps in this regard.

I don’t know if you’ve also seen like in the ontology space, or the knowledge graph space, how do you think about, I guess, bias, and the availability of data… Because that’s a big topic with these large language models, is if you’re just using them for generation, they come loaded with what they’re trained with, right?

You know, I was stunned at how quickly the models became able to statistically generate language. We used to make fun of natural language processing statistically generating language – it’s such an oxymoron in so many ways.

There’s actually a really great article by Karen Hao about the Māori people and what they are doing with artificial intelligence, and I’d love to link that, just because it was from MIT Technology Review, and it was a fantastic piece, and it really speaks to what you were saying. As far as bias – again, I’m going to go back to cognitive science. Philosophy: how do you know what you know? Psychology: how do you make sure that you are not using your powers to manipulate humans? Seriously, just put some ethics there. And then computer science. So when you’re talking about bias – there’s the Cognitive Bias Codex, and it’s like 188 cognitive biases and counting… And one of the best ways that I approached this when I was still at IBM – I started the Trustworthy AI Center of Excellence, and many of my peers are still there; they’re so strong, and what they’re doing is amazing. But what I wanted to do with bias is we did some modeling on the Titanic. We did some predictions on whether somebody was going to live or die based on their class, in order to show the social bias of the time… Because the person in steerage would never have gotten a lifeboat. And I use that explicitly to talk about bias. And what you said about the very small – you didn’t use the word “homogeneous”, but the Western kind of culture that we’re codifying into this AI – we have got to have a wider variance; we have got to have more diversity. And that’s why we really need to give everyone the ability to build their own, without having to build their own generative model.

Could you talk a little bit about how to do that? Because this is a topic that we’ve talked about in different ways over a number of episodes… It’s very hard to get it out there. It is definitely not an equal world.

[26:10] That’s right. Access.

Yeah. Can you talk a little bit about access, and how you create that, and how that becomes possible?

Well, shameless plug – I’m looking for funding and investment… [laughs] But I think the answer is the ability to use the combination of knowledge graphs, and semantics, and being able to access these generative models… And one of the things that I did with the orchestration – and the reason that I used the corpus-driven approach and dump a bunch of stuff into OpenSearch – is to make it small enough and accessible to as many people as possible. So use the access to the generation to generate all the variants that you need, but eventually you’re done. You don’t need any more, and you have that stored in a corpus, so you can access it as much as you want.

And so it’s really about “How do we use the generative AI more as a utility, that everybody uses?”, instead of creating - I call them cheese graters, because that’s how I think of the generative model, is they cheese-grate the text, and then they sort of glue it back together, or stitch it back together with duct tape, or whatever… But it’s codifying so much of our Western notion of ideas.

If you go to Aboriginal societies, their construct of time is entirely different. Like, if you’re facing the West, or the North, or the East, their concept of time is different. And that’s expressed in their language. So to think that we have created a generative model that can encompass all of our world is not correct. We can do so much more if we have a wide variance. Have you guys heard of the diversity prediction theorem, or the wisdom of the crowd?

To me, it’s the secret of the universe. Like, the wider your variance, the more standard your mean. Like, the closer to truth that we want to get to, the more diverse human neocortexes that we need to get there. So I’m a big proponent of - you know, to really answer your question, Chris, we need to make the generative model something that is as accessible… And OpenAI has done a really great job of like making it accessible to a wide range of people… But I was talking to my parents today, and they were like “We don’t even know how to access that. But I think I went to Google and it might have done something, because it gave me a weird response, so I shut it down, and then I tried to go to the other thing.” I was like, “Oh. Good.” So we need to be better about really making sure that people understand that they are accessing just a generative model. That’s it.

I think that’s one of the challenges I see – we’re here in this AI community, and we’re a tiny little slice of that in this episode as we talk… But at the same time, I participate in other communities that are not technical at all. And the other participants in those communities are not technical, and I think that’s the challenge: trying to do exactly what you said, with people who not only don’t have access, but don’t even know it exists, in a lot of cases.

Yeah. I used to say, and a friend of mine reminded me of this a long time ago, but there’s no hand waving in math. So if somebody is not explaining how they got to the prediction, or how the model works, or saying it’s proprietary, or shoving a bunch of data into a neural net to have it guess [unintelligible 00:29:50.15] engineering, they’re probably hitting the Easy button. They’re not doing the work. And I come by that honestly, because I think that people need to understand there’s no hand waving in math. We need to stop thinking that just because it’s AI, or just because it’s statistics… You’ve heard the Mark Twain quote…

[30:14] Is it a statistics quote? I probably should have…

“Lies, damned lies, and statistics…” [laughter]

Oh, yeah… Yeah, that’s right.

Yes, yes…

So it’s just statistics – 2013-ish, 10 years ago, goodness… Jennifer Golbeck, a social scientist, gets on the TED stage and tells the world that she can statistically predict whether you have done drugs based on five of your likes on Facebook. And I was like “Hallelujah! Everybody understands. Everybody sees it”, right? Everybody understands that we can now predict these things statistically if we have enough data. And no – I still don’t think that people get that. We need to teach it; like, I taught my kids probability through poker. We can teach this to people so that they understand that it’s only statistically accurate to a certain probability. So if it’s 97% accurate, what does the 3% look like? What’s your test reliability for that 3%? Is that 3% going to give you the same answer every single time? And if not, it’s not science.
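Beth’s “what does 3% look like?” question is worth making concrete: at a fixed 97% accuracy, the absolute number of wrong answers scales with volume, and without test reliability you don’t know whether it’s the same 3% each run. A quick back-of-the-envelope:

```python
accuracy = 0.97
for daily_queries in (1_000, 100_000, 10_000_000):
    wrong = daily_queries * (1 - accuracy)
    print(f"{daily_queries:>12,} queries/day -> ~{wrong:>9,.0f} wrong answers/day")
# 1,000 queries leave ~30 wrong answers; 10 million leave ~300,000.
```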

I loved so much about this conversation, and one of the things that I was thinking about in my own context is my own tendency to not give users enough credit. One of the things that happens when we anthropomorphize AI, and talk about it in these different ways – there’s a tendency to maybe think it’s always right, or to talk about it as more intelligent than it really is… But I’ve also found – whether it’s family members in my own life who aren’t involved in the AI world and are using ChatGPT – a lot of times they interact with it more responsibly, I feel, than some colleagues in the AI world. My brother-in-law, Jack – I don’t know if you’re listening; hey, out there… We were talking last night over tacos, and he had used ChatGPT to write up some speech or something that he was giving at work… And I’m like “Oh, so you wrote that with ChatGPT.” And he’s like “Yeah, I use it, but I don’t just have it generate it for me… I’ll type as fast as I can and then have it rephrase what I wrote into something good, something grammatically correct.” And I’m like “Wow – yeah, go for it. That’s really good. That’s awesome”, because that’s a great way to use it… Or I’m thinking of teams that we work with in India, in my day job. I was talking to someone, and some people would say, “Oh, we can’t just output machine translations, because they won’t post-edit them, and make them good, or look for corrections.” And in fact, the translation teams we’re working with in India know it’s a machine translation; they’re just happy they don’t have to type as much, because typing is really difficult in their language. They’re fine to post-edit it.

So yeah, I’m wondering if you’ve seen this as well, and if you have any recommendations, specifically after working with users in conversational interfaces, which can seem kind of like human-like. It’s like you’re having a conversation. How do you set up an interface? How do you set up a system such that it produces useful behavior, and like promotes the right type of usage?

[33:53] I started playing with very early versions of GPT, and so we strung them up in Slack… And we did that on purpose, because we didn’t want to deal with identity access management, and all the other stuff… And Slack’s a great interface for plugging things in. But it’s also really something that – I was the anthropologist, and when we installed Slack in the largest enterprise, in IBM, I watched the people going, “Oh my gosh, what’s the protocol?” And really, when you deal with really senior executives, they’re like “Wait, this is persistent. This is kept forever. What are we doing?”

How do I respond to this GIF?

[laughs] That’s right, that’s right. So I had a lot of what you would call training in trying to get people to understand this new modality of communication. So we were playing around with bots, and we wanted the bots to talk to each other. And so we use the CATs now to test out what we’re doing. And we talked a little bit earlier about like setting up an iFrame or a web page that’s just like strung up to OpenAI, and you can ask it any question you want. But if that AI doesn’t give the answer that you like, and it causes your customer to not trust you, that’s a big deal. So you really want it to be tested. And so we use the CATs to test the CATs, or the CATs to make kittens, or the CATs to test the kittens, or the CATs to test the intents, or… You know, and this is the joy of having some of this automated.

Back in the day, when we used to do like conversational AI, and you would do either dialogue flow, or corpus-driven, it was always the IT group that had to do, like, “Give me 100 variations of how somebody would say this. Give me 1,000 variations of how somebody would say that. Give me 16 synonyms for this. Give me 17 synonyms for that.” So we’re using it all the time, because again, it’s great at generating variations.

One of the ways that I used it with teachers and students – I get to work with this amazing university, Maryville University, that is truly transforming education, and they’re doing such amazing things with my friend Phil Komarny, who used to be SVP of innovation at Salesforce; we’re doing great things there. And I got to do a fantastic workshop with all of the teachers in the faculty. They came to my session really kind of skeptical, and they left my session going, “Oh my gosh, I now understand how to use this.” What I did is I had ChatGPT list out 50 things that a teacher does all day. Then I had ChatGPT list out 50 things that a student does all day. And then I had the teachers and the students – I had 138 people on a Miro board, working together for 15 minutes – and I’m like “What are you going to eliminate? What are you going to raise up? What are you going to reduce? And what are you going to create?” So give people an understanding of what the technology does, and then the messy middle between the skills that you have and the title or the role that you have is: what do you do all day? Like, what do people do all day? That’s Richard Scarry, 1968. But when you’re doing that, you’re like “Wait a second…” Gosh, when I was a mom of younger kids – I guess I’m still a mom, even though my child is 18, or one of them is… Anyway, I digress. I would sit there and be like “Oh my gosh, my head is hurting. What do I do for dinner? ChatGPT, give me a recipe with chicken and broccoli.”

It can be so useful for so many people to just generate what the idea needs, especially when you’re tired, or you’re exhausted, or… I definitely wouldn’t want to fire your entire marketing team, because you want to keep it on cue, but you really can use it to really augment your business, and augment what you’re doing. Just keep it in the realm of fiction, and creativity, and those kinds of things. And we haven’t even talked about some of the art and the creative expression…

[38:13] And then I’m a huge fan of – especially for people who don’t code – I’m like “Go ahead, program in it. Get it to render some code for Graphviz, so that you can see a visualization.” And code is great, because it either works or it doesn’t.
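As an example of that Graphviz idea, here’s a tiny graph built with the Python graphviz package (assumed installed, along with Graphviz itself); printing the DOT source lets you paste it into any Graphviz viewer:

```python
from graphviz import Digraph

# A toy concept hierarchy, like the ones discussed earlier
dot = Digraph(comment="Toy concept hierarchy")
dot.edge("Entity", "Character")
dot.edge("Entity", "Organization")
dot.edge("Character", "Cliff")
dot.edge("Organization", "CIA")

print(dot.source)  # the DOT text itself
# dot.render("hierarchy", format="png")  # writes hierarchy.png to disk
```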

We’ve talked a lot about this sort of idea of knowledge graphs and ontologies as a reference that’s domain-specific and known in combination with generative models. Do you think there’s a parallel in the sort of image, vision, audio space, where –

I hope so.

I imagine groups are like “Hey, I need to generate a new design for a webpage”, or “I need to fill this empty room with furniture. And I could generate a couch, but it’d be really nice if I knew that this couch existed, and I could order it online, and it’s an actual couch that exists. Because otherwise, I could sell this design to my client, and now I can’t source the couch for my room.” What are your thoughts on that, in terms of maybe extending some of your ideas about combining knowledge graphs with generative models to more modalities than just text?

We’re in the realm of – most human beings can’t see the difference between 4k and 8k. But you ask any artist to kind of look at – and I’ve done a ton of work, just playing around, too… It’s kind of off, and you don’t know why. But it’ll get better and better and better. So I think that what I hope will become more valuable is attribution, and AI that can give actual attribution. So you could do that with anything visual, you could do that with video, you can do that with any sort of – as you were doing, kind of like a design, or shopping…

I was once having a conversation – and again, this is the geek in me… But I’m like “An AI can produce its own architecture and its own architectural diagrams, or its own ERD.” And that’s where I think we should be going, is for the AI to explain how it works in and of itself. That’s where we’re going to potentially get to “Cogito, ergo sum.” But I think that it would be really cool if AI can start to think in terms of that three-dimensionality. And I think that if you can get the AI to design how it’s functioning and how it’s working within itself, that’s going to be far more valuable, and again, far more trusted, because that’s something that you’re going to want to actually build a relationship with.

[41:02] The big problem that I see with the generative transformers that are pre-trained right now is that they’re pre-trained on data that was harvested without people’s consent, which means that that data was potentially put there – you know, I thought it was my job to lie to all of the search engines for at least the last 10 years… [laughter] So how good is that data? And my hypothesis – and I’ve played it out many times – is that when people interact with my CATs, where it’s AI that they trust, AI where they know where the data comes from, it’s an entirely different experience… You know, back to your initial question, Chris – when you’re interacting with something and you don’t know how it works, and you don’t know where the data comes from… You’re being told that it’s fed data, and you’re like “What?!” It’s such a different experience when you can interact with something you trust.

Yeah. It does change your perception coming into an interface knowing “Oh, this is – I’m searching against my company’s knowledge base”, or “I’m searching against a PDF of this book”, or whatever it is. As we kind of draw to a close here, I’d love to hear maybe – you’ve talked a lot about things that you’re actively working on, and that’s all really exciting… As you look to the future, what are you most excited about in terms of maybe what’s going to be possible in the coming year, that we aren’t quite there yet, but that is really on your mind? It could be something that you’re actively working on, or just something in the industry as a whole… As a positive, kind of close-out, what are you most excited about in terms of like trends that are happening, or things that you’re working on, or thinking about?

I have a couple. I do think that with the technology that I have, and with an ontology, I can take a paragraph of your text and understand something about your existing mental model. So if I can relate new information to your existing mental model… Like, if I found out you live in Indiana, and I say something about Indiana, that not only makes you trust me, it lets me give you new information in a way that reduces your period of disequilibrium. We can make people learn faster, and that’s truly exciting to me. And I think that if human beings had trusted, evidence-based, tested artificial intelligence, it would open up this entire world of visual thinkers.

And shout-out to Temple Grandin and her new book, Visual Thinking… All about how we’ve been living in a verbally-dominant society. Well, guess what? Words just became very cheap, very much of a commodity. So the engineers, the tradesmen, the plumbers, the people who are doing the things with their hands, the artists, the fuzzies - this is our time. And I think that you’re opening up an entirely new market for everyone to be able to create. And that to me is super-exciting.

That’s awesome. Yeah, I think that’s a great way to close out, and a great thought. And I now know my next book after I finish The Cuckoo’s Egg. So thank you so much for joining us, Beth. It’s been a real pleasure to talk through things. I recommend people check the show notes for the links we’ll include there, and I hope to chat with you again soon, Beth. Thanks so much.

You’re welcome. Thank you.

