Practical AI – Episode #228

From ML to AI to Generative AI

Get Fully-Connected with Chris & Daniel


Chris and Daniel take a step back to look at how generative AI fits into the wider landscape of ML/AI and data science. They talk through the differences in how one approaches “traditional” supervised learning and how practitioners are approaching generative AI based solutions (such as those using Midjourney or GPT family models). Finally, they talk through the risk and compliance implications of generative AI, which was in the news this week in the EU.

Featuring

Daniel Whitenack & Chris Benson

Sponsors

Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com

Fly.io – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.

Changelog News – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today.

Notes & Links


Chapters

1 00:00 Welcome to Practical AI
2 00:43 Fully Connected
3 02:38 What is AI today?
4 05:54 A big AI misconception
5 14:18 Generative AI
6 16:48 Foundation models
7 18:22 Generative models
8 26:20 Sponsor: Changelog News
9 27:45 AI and the destruction of mankind
10 33:26 AI pilots
11 41:42 Fundamentally transforming humans
12 45:54 Outro

Transcript



Play the audio to listen along while you enjoy the transcript. 🎧

Welcome to another Fully Connected episode of Practical AI. This is where Chris and I keep you fully connected with everything that’s happening in the AI community. We’ll take some time to discuss the latest AI news and dig into some learning resources to help you level up your machine learning game. I’m Daniel Whitenack, I’m a data scientist and founder at Prediction Guard, and I’m joined by Chris Benson, an AI strategist. How are you doing, Chris?

Doing very well. How’s it going today, Daniel?

Oh, it’s going great. I got back late in the night a couple of days ago from my time in San Francisco, where we had an in-person podcast meetup, kind of a collab with the Latent Space podcast, which is an awesome AI podcast, if you haven’t heard of it… But yeah, it was a really great time. I was a little bit tired yesterday, but I feel recovered today, which is good, because one of our favorite other podcasts, the MLOps community, is having their LLMs in Production part two conference today and tomorrow. So by the time this podcast goes out, it will have passed, but I think they’ll post the talks on YouTube, and all of that. So make sure to check out those talks. There’s a lot of really good ones there.

Yeah, I’m sure there are. I’ve been learning a lot from them.

Yeah, yeah. What a great community. I’ve joined their Slack, and I’m chatting with people about how they’re deploying models and all that stuff, which is fun. Another thing happened today, Chris… I was at a co-working space here in town - shout-out to Matchbox Co-working; I know a few people there listen to this show… But I ran into a friend, Tanya, and she’s been listening to recent episodes of the show, and she made a really good point, which is that we haven’t taken time, for a while anyway, to stop and say, “In this moment that we’re in now, when we say AI, what do we mean?” Like, what is AI now, today?

That’s a really good point. Thank you, Tanya, if I got the name right. The last time that we kind of talked about what it means, there was no such thing as generative AI, for instance.

Yeah, yeah. Definitely not in the way or at least in the term –

The way we’re using it now.

The way the term is used now, yeah. So I think that brings up a good point, Chris… What is generative AI? We can maybe talk about that. But maybe first we should talk about what was AI or machine learning prior to generative AI. That sort of machine learning and AI is still in existence, of course, and being used all throughout industry. But there’s a difference between that and generative AI. In my mind, if you want to think about – actually, this would be true of both kinds of AI. So if you think about AI in general, or machine learning in general, the way I think about it at its most simple form is a data transformation. And you put some type of data into one of these “models”, and you get some other data out. It’s like a software function, essentially.

Now, of course, there’s a lot going on within that function, but at its basic core, an AI model or a machine learning model is something that takes in one form of data and outputs other data; like speech in and text out, or translation - text in one language in, and text in another language out.

There’s a really old-fashioned term that applies here: it’s a filter. Software developers who have been around for a while might know about creating filters. And it’s just an incredibly sophisticated filter, in that you put one thing in, you get a different thing out, and it’s all about the relationship between the two.

Yeah. And that kind of brings in one level of a mental model into how we think about these things. We’re gonna put something into them, we’re gonna get something out. Now, obviously, these models are different than other software functions or filters that people have written in the past, right? And the key difference I think that I share with people, at least when they’re forming their own mental model around these things, is that in kind of normal, quote-unquote software engineering - I don’t know if we have normal software engineering anymore, but in normal software engineering you have a function, and the engineer, the programmer, writes all of the logic of that function, and determines what parameters should be used where. Like, “I’m going to accept two numbers, and then I’m going to add those things together, and output the result.” And that is a data transformation, but the logic is completely programmed by the programmer.
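To make that contrast concrete before the learned version comes up below, here is a minimal sketch (plain Python, purely illustrative) of the kind of data transformation where a human writes every bit of the logic:

```python
def add(a: float, b: float) -> float:
    # A hand-written "data transformation": two numbers in, one number out.
    # Every bit of the logic comes from the programmer; nothing is learned from data.
    return a + b
```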

It has to all come as an original thought out of a programmer’s head.

[05:36] Correct. And there could be some flexibility, I guess would be the right way to put it… I mean, software is flexible in general. I could have a function that adds two numbers together, and I could add any two numbers together. It doesn’t have to be one and two, it could be 42, and 17, or something like that. However, in a machine learning or AI model, which does one of these transformations, there’s still an element – this is maybe a misconception that people have. There’s still an element of that software function…

Absolutely.

…that is written by humans. It is structured by humans. Have you found that to be a misconception?

I do. I think people who are not in this space as intimately as this audience is tend to think of it as – I mean, they won’t admit to it, but they think of it as magic a little bit. I get into a lot of business conversations where I could take out the business words, put in the word “magic”, and the conversation would still work. So it’s a little bit instructive in terms of how people are perceiving it.

Yeah, it’s almost like there is software, but the bit that’s the model is just totally - it manifests itself out of the computer.

In reality, what happens is there’s a thing called an architecture, and all that means is that you just have code that’s written, that does certain things within your function or within your data transformation. That might be adding numbers together, or averaging things, or multiplying different numbers in various ways… And all of those things are combined or structured by a human programmer - often researchers, in this case, come up with a model architecture. Some people might have heard of BERT, or GPT. These architectures are a form of software function, but they have missing pieces in them. Those missing pieces are called parameters.

So one example I give sometimes is - let’s say that we wanted to write one of these machine learning functions to classify pictures of cats or dogs. I could have a very simple model architecture which says “If the percentage of red in the image is greater than x, classify it as a cat. If not, classify it as a dog.” Now, I haven’t said what x is… So how do I set this parameter that’s a gap in my machine learning model, the most simple of machine learning models? Well, what I can do is take a bunch of examples of cats and dogs, try a whole bunch of different x’es, and whichever one gives me the best result - in other words, whichever one classifies those the best - I choose that as my parameter. And this is what, at a much larger scale, we call training; people might have heard of that. Now, the models that are used these days don’t have one parameter, like my simple cat/dog model; they have billions of parameters that are set.

Just to add in one little thing - that training process is based on an algorithm, which is just a fairly simple math problem that you iterate through over and over again, comparing your results to what you’re targeting, what you’re trying to get to… There’s an error there, and you’re trying to drive that error down. So when people talk about training AI and there’s this kind of mystique associated with it - there’s no mystique really there. It’s just running an algorithm over and over and over again, until you get a more accurate, less error-prone answer. It’s as simple as that.

Yeah, in that sense it’s kind of a brute-force implementation of trial and error. Now, not totally brute force, because trial and error would require you to try every option or every combination, which for a 6 billion parameter model would take the life of the Universe or something to explore… But of course, people have devoted much of their life to optimizing these types of problems. And so it is highly optimized, but at its core, like you’re talking about, you’re trying to reduce an error, or what’s called a loss, and find those optimized parameters to perform a task.

[10:06] So in the case of dog and cat classification, you’d have a bunch of images which are labeled as dog or cat, you would feed those into the model with a bunch of different combinations of these parameters, and then the winning one that reduces the error or the loss would be your set of ideal parameters, which then you can use to classify new images, which don’t have a label yet. So I don’t know if this image is a dog or a cat. I’m going to put that in, and then I can classify that, and that’s what’s called the inference process.
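As a rough sketch of that toy example - hypothetical helper names and toy data, not anyone’s real pipeline - “training” the one-parameter cat/dog model is just trying candidate values of x against labeled examples and keeping the one with the fewest errors, and “inference” is then applying that chosen x to a new, unlabeled image:

```python
def redness(image) -> float:
    # Stand-in "feature": fraction of red in the image's average (R, G, B) values.
    r, g, b = image
    return r / (r + g + b)

def classify(image, x: float) -> str:
    # The "architecture": one rule with one missing piece, the parameter x.
    return "cat" if redness(image) > x else "dog"

def train(labeled_images, candidates):
    # "Training" as brute-force trial and error: try each candidate x,
    # count how many labeled examples it gets wrong, keep the best one.
    best_x, best_errors = None, float("inf")
    for x in candidates:
        errors = sum(1 for img, label in labeled_images if classify(img, x) != label)
        if errors < best_errors:
            best_x, best_errors = x, errors
    return best_x

# Toy labeled data: (average RGB values, label)
data = [((200, 80, 60), "cat"), ((90, 110, 100), "dog"), ((180, 90, 70), "cat")]
x = train(data, candidates=[i / 100 for i in range(1, 100)])
print(classify((170, 95, 80), x))  # inference on a new, unlabeled image
```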

So it’s two steps - there’s a training process, and an inference process. And this is generally what’s called supervised learning, which means that you have labeled, gold-standard examples. And this, I would say, dominated the AI scene, and still dominates much of what’s done in industry. I think people also have the misconception that “Oh, supervised learning is so 2016.”

No, it’s still the vast majority of what’s deployed out there, and what people are actually using in real life. I’m totally guessing, but I would say at least 95% of what is out there in industry is that; and that might be a conservative guess.

Yeah, yeah. So this is still the dominant frame of thinking about machine learning and AI, at least across industry… That has shifted a bit, though. At least for me, my mindset started shifting probably around 2019-2020, when some of these so-called self-supervised models started coming out… The idea being that there was maybe a first shift, and then a second shift that I’ve seen.

So there was an era of data science where - supervised learning, gather your dataset, train your model with those examples, and you have your supervised machine learning model. Well, people gradually learned that if we make our models bigger, and we expose them to enough data for a particular mode, let’s say text, or images - well, if I have a large model that’s been trained to recognize 17 different things in images, I might have a use case where I want to recognize an 18th thing, or maybe three different things. Well, that model already has embedded in it the capability to find really good features of images, and do image classification based on those features. And so I don’t have to retrain a whole model from scratch; what I take is that large model that’s been trained on a lot of images, and I do a process called fine-tuning or transfer learning, to then do this new task. So I saw this first shift - I’m gonna call it a first shift; I mean, this is something people have talked about…

You’ve just coined a phrase; you know that, don’t you?

Yeah. So this would be a shift from thinking purely about supervised learning, training from scratch, with your own data, into this realm of Google trains a big model for image detection, and I take that and I fine-tune it for my own purposes. I’m not starting from scratch, I don’t need as much data… And I would say also, this framework dominates a lot of what’s happening in industry right now. So there’s NLP use cases for this where maybe you have a model that’s trained to translate from English to Arabic…

NLP being natural language processing.

Natural language processing. And you want to translate, though, to an Arabic vernacular; you would take that parent - what’s called a parent model, or a base model, or more recently a foundation model - and then fine-tune it to this new scenario where your task is slightly different.
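For a sense of what that fine-tuning / transfer learning step can look like in code, here is a minimal PyTorch/torchvision sketch, assuming a hypothetical image task with three new classes; the actual training loop on your own labeled data would follow:

```python
import torch.nn as nn
from torchvision import models

# Start from a model pretrained on a large image dataset (the "parent" /
# foundation model), instead of training from scratch.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor; its parameters already encode
# general-purpose image features learned from lots of data.
for param in model.parameters():
    param.requires_grad = False

# Swap in a new classification head for our task - here, a hypothetical
# 3-class problem. Only this small layer is trained on our (smaller) dataset.
model.fc = nn.Linear(model.fc.in_features, 3)
```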

[14:17] Okay, Chris, that brings us to our next wave, or change in the landscape of AI - we already talked about this sort of move from purely supervised learning to fine-tuning from a large parent model… And now we’re in this wave of generative AI, which is the first wave of AI that has really hit public perception so widely.

Yes. It’s been the game-changing thing for the public. They’ve been hearing about AI in the media, they’ve been loosely aware of it, but they suddenly had some tools that were powerful and placed directly into their hands. And that has made a huge difference. Those tools came out late last year, I guess, but this year really, 2023, has been the year where the public’s perception of AI has substantially changed.

Yeah. And these models, these large models - like those used in the GPT family of models, or open access ones like LLaMA or Falcon that people might be seeing, or the image-based ones like Stable Diffusion or DALL-E - all of these still fit this model of a data transformation or a filter; you put some type of data in, you get some type of data out. At least for some of these models, there are some fundamental differences in how they’re trained - remember that training process we talked about. But there’s also quite a big difference in how they’re being used. In my mind, that’s almost the bigger shift in terms of how people are thinking about using these models.

It used to be that when you had one of these parent or foundation models, that parent or foundation model wasn’t that useful in and of itself. So you have the base BERT model, or something like that - there are some use cases for that model specifically, but the real power is that you can fine-tune that model downstream, with your own data, for a specific task. So instead of having a general model, you train a machine translation-specific model, or a sentiment analysis-specific model, on your own data.

Before we move on from there, just to address for a moment for those who are not familiar with foundation models - the value in doing what Daniel was just describing is in the fact that much of the training that occurs in a model is very resource-intensive and very time-consuming, and is not specific to your problem. And so you can train a model to maybe 90%, 95% of what you want, maybe even farther, and there’s a huge investment there. But it’s that last little bit where you have many, many, many, many use cases that you can fine-tune it for. And so if you can start by having somebody else, like a big cloud provider, do the first giant chunk of training, then you can take that almost-done model and customize it to your need, as can thousands and thousands of other people with different use cases. So you’re transferring the training cost to a large organization that does that anyway. And that’s the value. So you can buy into a large model much easier. I just wanted to clarify that in case anyone wasn’t intimately familiar with foundation models.

[17:59] Yeah, yeah. And that’s part of the reason why the large tech companies - Google and Facebook (or Meta), OpenAI, etc. - are the ones that have dominated the production of these models, because they have a lot of resources available to them… Although there are some exceptions to that rule as well.

If we think now towards generative AI, like I mentioned, there’s still this concept of one type of data in, another type of data out. And there’s still this concept of foundation or base model. I think the real shift, although there is some shift in how these large models are being trained, which is - we do have an episode about reinforcement learning from human feedback… So maybe if people are interested more in the details of that sort of training process and how it’s different, you can look back at that episode.

But I think maybe a more significant shift in the distinction of these generative models from previous waves of models is that people now view these foundation models that are being produced these days as useful in and of themselves, without any further fine-tuning… Although sometimes people do use fine-tuning later on. And they’re generative, because the way people are thinking about using these models is by putting a sequence of information in, and getting a completion of that information out. That’s what we mean by generative. So I have some sequence of things in, and the next thing that should come out, the completion is what is “generated”. So that doesn’t necessarily have to be text, at least in how people think about these models. It could be, you know, you start out playing a few notes on your piano, and then the model generates the next bar of music, or something. Or it could be text, like autocomplete; I put in text, and then out comes the completion of that text.
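As a tiny, hedged illustration of that “sequence in, completion out” pattern - using a small open model from the Hugging Face transformers library purely as a stand-in for the much larger GPT-style models being discussed, and a made-up prompt:

```python
from transformers import pipeline

# A small open model as a stand-in; the same "sequence in, completion out"
# pattern applies to much larger GPT-style models.
generator = pipeline("text-generation", model="gpt2")

prompt = "The three most important checks before deploying a model are"
completion = generator(prompt, max_new_tokens=40)[0]["generated_text"]
print(completion)  # the prompt plus the model's generated continuation
```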

Yeah, it can really – when you think about it, it can be any kind of structured information sequence over time. We see these generative images, we see it in music, we’re seeing text, obviously… And there may be other paradigms to come in terms of how people approach different ways of looking at information. That’s a big topic of interest right now - kind of turning things on their side and asking, could you do that? And I think that’s the point right there: not just the baseline text and image and music and such, but what are the other information streams this approach could be applied to? …because it’s already been game-changing in terms of productivity output from what we’ve just talked about. But that may just be the tip of the iceberg of what’s to come. And we’ll get there as we – I’ll hand it back over to you before we go too far.

Yeah, I think that’s a really good point, Chris, because I’ve in recent days been telling people how I’ve had to rebuild my intuition a little bit as a data scientist, because my knee-jerk reaction as a data scientist is to gather some data and train a model; maybe a generalization, but not so far off from the truth. But now, with these models, I can solve a lot of the problems that I need to solve without doing any training at all, but doing this sort of engineering and processing around the information that goes into a generative model, so it produces the right thing out. So some of this – we can give some examples maybe of generative models and how this works out in practice.

[22:01] Maybe I want to generate an image, a lifestyle image for a product, something like that. I could take the product description, I could take some other elements, like some instructions, and form that into what’s called a prompt - input to a model like Stable Diffusion, DALL-E, Midjourney, something like that - and say “Generate an image for this product”, and you inject the product description, “and make it black and white, set in New York, photorealistic”, something like that.

So you can see I am constructing a prompt where I expect the completion of that prompt, or the thing generated out of it, to be that sort of image that’s grounded in the product description. So that’s one example where you would insert that, and you would actually get that image out. I’ve done this with my wife’s products, and it works quite well.

Of course, you could also do that with text. Let’s stick with the marketing example. Maybe I want an ad now to go with my lifestyle image that I’m going to run on Facebook, and so I could use a model, like one of the GPT models from OpenAI, or I could use Cohere, or I could use the Falcon model that was introduced recently, which is a large language model, so it’s a type of generative model… And I could put in a prompt to say “Hey, here’s my product description, and I want to run a sale, something like this. Generate a good Facebook post for me, or a good Instagram post.” And the output of that, the completion or the generation out of that is what’s going to come out. And now I have an image and I have ad copy for that.

But as we’ve mentioned, that doesn’t have to be what we limit ourselves to. There’s music generation models now, and you can describe the mood that you want to put behind maybe a video corresponding to that ad, and generate music out. And maybe I want to convert the image to a video. I could generate video content out of a prompt, and add that in. So you can start to see how chaining all of these things together, multiple calls to these types of models can produce really magical output… And that I think is what’s dominating this current wave of AI that we’re in.
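A rough sketch of that kind of chaining, under heavy assumptions - the product description, model names, and prompt templates are just stand-ins (a small open text model via transformers and Stable Diffusion via the diffusers library), not a recommended recipe:

```python
from transformers import pipeline
from diffusers import StableDiffusionPipeline

product = "Hand-poured soy candle with cedar and vanilla scent"  # hypothetical product

# Step 1: build a prompt around the product description and ask a text model
# for ad copy (small open model used here purely as a stand-in).
text_model = pipeline("text-generation", model="gpt2")
ad_prompt = f"Write a short, upbeat Instagram post for this product: {product}\nPost:"
ad_copy = text_model(ad_prompt, max_new_tokens=60)[0]["generated_text"]

# Step 2: inject the same product description into an image prompt and
# generate a lifestyle image with a diffusion model.
image_model = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image_prompt = f"{product}, lifestyle product photo, black and white, set in New York, photorealistic"
image = image_model(image_prompt).images[0]
image.save("lifestyle.png")

print(ad_copy)  # the ad copy and image can now be combined into the final post
```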

It is, and we’ve barely touched on the use cases… Because I think it’s only limited right now by imagination. My friend Brett Siegel likes to spend his weekends exploring exactly these ideas. And a couple of weeks ago he was saying, “Hey, look, I generated a professional-quality PowerPoint presentation that is indistinguishable from what a PowerPoint professional, with graphics and everything, was able to do.” He did that entirely out of - I believe it was the GPT-4 model, with ChatGPT. And he was like “Yeah, I did this in a matter of minutes. I was able to generate the code which would create the PowerPoint, and for every slide, I gave it a single topic that I cared about, or I’d give it a whole section of topics and tell it to create the slides.” And it was amazing; it was better than most people could have done.

Now, that was his weekend project, which is great. But if you look at that one use case, think of the number of human hours in businesses all over the world that go into generating presentations, and documentation… And by the time he was done with his brief weekend project, he could do something that would have previously taken him a week of work time. Once the process was in place, he could do it in a few minutes, and that was it. So if that becomes one of a million use cases that people are starting to do all over the world, that turns into real money in industry; in all industries.

And so that’s just one, which is, I think, representative of why the technology is so amazingly powerful. So if you multiply that times as many things as your imagination can come up with, then yes, we have a technology now that we’ve barely tapped into, and which will have an immense impact, whether you think it’s positive or negative, on the world around us.

Break: [26:23]

Well, Chris, I think that was at least a good foundation for foundation models. And hopefully, Tanya, if you’re out there, let me know at the co-working space if that was helpful. But I think it hopefully will be helpful for more than just you. There’s a lot of people wrestling with how to think about these sorts of models, and how we should interact with them, and all of those things… And that really brings us to maybe our next noteworthy news trend that’s happening right now, which is that these generative models can do a lot of things, and certain of those things are viewed - either for real, legitimate reasons or for not-so-legitimate reasons - as extremely risky. And I would say there are both legitimate and non-legitimate reasons. But yeah, a lot of people view these things as risky - and I’m not necessarily talking about automating away jobs, which is maybe another topic that we’ve talked about on the show before, but actual risk associated with running these models…

Risk to humanity’s survival I think is what we’re –

Yeah, yeah.

Because that’s the context that people tend to talk about it in.

Yeah. What are the views, or what are the things that are hitting your desk on that front, Chris?

So people are debating that topic on whether these - admittedly, I think everyone agrees that these are incredibly powerful capabilities. But do they constitute the ability to kind of automate an autonomous risk to us in some form? A lot of times you’ll see people arguing for and against on various specific issues, but the thing that I’ve noticed the most is that they’re not always talking about the same thing. I’ll have two people arguing two sides of the point that I’m watching, but they’re not really talking apples to apples on the two sides. And hearing many such arguments in the last few months, I’ve actually dramatically changed my own perception, and I haven’t heard anyone – I’ll throw it out in a minute, but I haven’t heard anyone say quite the same as what I’ll propose in a few moments, which has to do with that kind of miscommunication, kind of talking past each other.

[30:03] Yeah. I think my strategy has mostly been - although I think these are legitimate things to consider, mostly my response has been to put my head down and build things that I think are useful and practical. And I haven’t necessarily given a lot more time to thinking about the end of humanity as we know it.

So I’ve probably put more time into that part of it than you have…

I think so.

So I think people focus on the wrong thing on this topic - they focus on whether the existing generative models, as we’re now calling them all the time, are leading us into kind of artificial general intelligence, AGI. Whether it’s aware, whether it’s conscious, and whether it would have an intent to attack. And I think that that completely misses the point. I’ll take two seconds and argue both sides for a second. If you want to argue against current technology being a risk to humanity, then you’re kind of pointing and saying, “Clearly, these models are not conscious, and they are not intelligent in the sense of having a broader awareness of the world around them, and having their own motivations, and such.” And so the people that are arguing that side scoff at the very suggestion that a model could threaten humanity. And within the context of that set of arguments, I think they’re absolutely right.

But there’s also another side to it, which is actually where I am kind of migrating a little bit in my own personal thinking… And that is “What if it doesn’t take AGI to be a threat to humanity?” What if the threat can arise from the fact that – for one thing, we have humans in the mix; we have humans with motivations, that create models, and have specific things that they’re trying to achieve… And so you can take the power of models, and you can shape them in certain ways to address different tasks. And so it might be, possibly, that if there is a danger to humanity - which I don’t know, I’m just speculating… But if there is, it’s shaping a bunch of models that by themselves can do one task really well, or as the generative ones, they can give you sets of things… But you combine them in ways with software and with human intent to do damage. And so that’s what I’m more concerned about, is not that the models will awake, and suddenly become conscious, and decide that they don’t like me very much, and they want to get me; I think that I’m a lot more worried about humans orchestrating a bunch of powerful tools, and maybe automating those tools in such a way that the tool keeps going. It doesn’t take constant intervention. That’s the type of thing that I would actually give a little bit of credence to in my own personal thinking in terms of - when I say that, meaning not that it’s happening, but meaning that it’s worthy of consideration. And when we talk about things like AI ethics and such, that’s where I would focus, is that there are external concerns that don’t require AGI and don’t require consciousness to achieve some really bad outcomes. What do you think of that?

Yeah, so I can give a concrete example that I think fits within your –

[33:32] It’s even in your domain of expertise. Let’s say that we have a large, expensive and dangerous piece of equipment like an airplane or a helicopter. And there is obviously a vast amount of manuals and documentation about the maintenance of that, and the operation of it, and the safety around it, etc, etc. So there could be a case - and this would not even involve a bad actor - where we put a “Chat with your docs” interface on top of all of these manuals, and the operation information, and the maintenance information, and all that. Again, these models are essentially generating output that’s probable; they don’t know anything about, like you’re saying, reality, or intent, or anything like that. There is no knowledge there, it’s just completion. So they could complete someone’s request saying, “Well, how should I fix this issue with my airplane or helicopter?” and the model could say, “Well, just take that part off. It’s a throwaway part. It doesn’t matter”, based on the text that it’s seeing. And that could be a significantly life-endangering decision if the maintenance technician, or whoever it is, actually trusts that as fact. Now, you could also imagine bad actors getting into that scenario, and modifying information such that it would generate dangerous information, or something like that. So I think that’s a concrete example that would endanger lives, but does not involve AI becoming sentient, or something.
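To make the failure mode concrete, here is a deliberately naive sketch of such a “chat with your docs” flow - the retrieval helper, manual snippets, and model are all hypothetical stand-ins, not a real maintenance system; the point is simply that what comes back is a probable-sounding completion, not a verified procedure:

```python
from transformers import pipeline

# Stand-in "retrieval": a real system would search the maintenance manuals
# for relevant passages; here we just do a crude keyword match.
def retrieve(question, manuals):
    words = question.lower().split()
    return "\n".join(m for m in manuals if any(w in m.lower() for w in words))

manuals = [
    "Inspect the tail rotor gearbox every 100 flight hours.",
    "The access panel fastener torque is 25 in-lbs.",
]  # hypothetical manual snippets

generator = pipeline("text-generation", model="gpt2")  # stand-in model

question = "How should I fix this vibration in the tail rotor?"
prompt = (
    f"Maintenance manual excerpts:\n{retrieve(question, manuals)}\n\n"
    f"Question: {question}\nAnswer:"
)

# What comes back is a probable-sounding completion, not a verified procedure.
# Without grounding checks or human review, a confident-but-wrong answer here
# is exactly the safety risk being discussed.
answer = generator(prompt, max_new_tokens=60)[0]["generated_text"]
print(answer)
```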

Yeah. I think that there are many, many use cases you could create along those lines. The thing that I also think people lose sight of is that this is evolving so fast. So the capabilities that we’re talking about today - if you look back two years, it’s come a long, long way in two years. And two years from now, I’m expecting it to have gone at least that far, if not more. And so it’s a moving target in terms of what those capabilities are, which means that the risk profiles associated with what we’re talking about will also change. There may be a time when some more research comes about, things are released, and there is more of a sense of understanding, which is a different thing from consciousness. And I’ve heard that debated recently by some fairly significant figures in the AI world - whether completion is evolving to understanding. And I don’t know the answer to that, but it would not surprise me if it evolves at some point to that level, and beyond. So we have to be conscious of the risk profile changing as we’re trying to identify where things are going.

To your use case - I still feel, actually, and I know that probably most of the listeners will not agree with me on this… But I feel very comfortable with modern AI models flying aircraft, personally. And I think in many cases - and I say this as a pilot - they are far better than the humans doing the same. Because you can train the model to have, essentially, a million hours’ equivalent of experience, whereas a great human pilot might have a 10,000-hour equivalent experience level.

So one of the things I think – I’m gonna throw this out as a point to address… I think it becomes very hard for AI ethics not to be outrun. AI ethics has always been chasing the development cycle. That’s been one of the problems - how do you catch it up, to get the decisioning in there early enough to matter… But we’re also seeing the development cycle speeding up. And I’ve had some conversations with people lately about “Is it possible to catch up there, given the evolving state over time?” So there might be a whole AI ethics show we can have at some point in the future about how you address that quagmire.

[37:49] Yeah. And there was actually a news article or a development this week related to exactly what we’re talking about. So regulators and governments are trying to catch up with the state of generative AI; generally not keeping up, I would say, but this week - I’m just looking at this New York Times article… Europeans take a major step towards regulating AI. So “The European Union took an important step on Wednesday towards passing what would be one of the first major laws to regulate artificial intelligence, a potential model for policymakers around the world as they grapple with how to put guardrails on the rapidly-developing technology.”

So there was this regulation that is taking another step towards passing. And if you look through the article - and I haven’t read the full regulation, but looked at a few links - this is really focused on uses of AI that are seen as risky. One that’s cited in the article is the use of AI to automate processes around utilities - water, and electricity, and all of that - which, if it fails, has vast consequences for large populations of people. So there are these sorts of risky scenarios.

I think one point that you have made before, Chris, in relation to the autopilot things and other things like that, which is worth mentioning here, is also thinking about the fact that humans are fallible. So whether we’re talking – machine translation gets a bad name for producing really terrible output in certain cases… Well, I’ve worked in that industry and I know it’s very possible for humans to produce translations that are very, very poor as well. So a question is, for the task you’re considering, I think it’s good to balance both. It is good to think about the risk, because there is risk; like, in a manufacturing plant, or in an aircraft, or in a utility, or wherever it is, there’s risk associated with something going poorly. But also you have to think about, “Well, what is the risk, and how do I test this AI or automated system?” And “What is the risk and how do I test these human operators?” And in reality, one could be safer than the other, and it might not be the one that you would expect from the start.

There’s an emotionalism that drives all these topics… And keeping in mind that it is evolving, I will argue with anyone that there is a point in time in the future where - going back to the airplane flying - the models for those particular tasks, the AI models, are so good that statistically they’re making many orders of magnitude fewer errors than very experienced human pilots. They don’t get tired. They’ve seen every weather condition in the models, they can navigate through all sorts of stuff… And if I was going to take my family on a transatlantic flight, there’s a point in time where a rational person who’s not driven by fear and emotion is going to say, “Yes, statistically I’m much more likely to arrive safely at my destination with my family in the AI-driven airplane.” So we can debate when that happens, but I don’t think it’s terribly rational to say “That’s never going to happen. I’d always prefer the human”, because I don’t think the statistics will substantiate that.

Yeah. I think there’s also – I’ve heard a risk proposed, maybe more so over the past months than I heard in the past, which isn’t really about AI automating jobs away, but it’s a risk of how this technology transforms humans, and the things that they do fundamentally. So pilots, I’m sure, like to fly. Right?

I love it.

[42:09] Yup, exactly. So if an AI is better than you, and a regulator, a government regulator comes along and says, “Okay, well, it’s no longer safe for humans to fly, Chris. No license for you.” That’s kind of a bummer, right? I mean, it might be the safe option, but it is a bummer. And it also falls into this area of like content generation. I’ve talked to journalists, and other people, and they are like “Hey, maybe an AI–” I think some of those people are actually saying “Maybe an AI can do as good a job in certain cases, or a better job than human writers in producing certain types of content.” But isn’t it a shame that we’re going to lose our ability to – if that’s no longer needed, how’s that going to shape how humans write into the future?

We are going to change – the nature of humanity will change with this. And it doesn’t take AGI to do that. That’s what I’m kind of getting at. What we do with AI versus what we don’t do with AI is going to fundamentally change how we self-identify. And I most certainly will eventually lose my license to AI. That will happen at some point, because putting me in a plane in the air, no matter how good I am, will become too big a risk - an unacceptable risk. But that will also happen with automobiles, at some point. At some point, you will go to an amusement park to drive a car, or an amusement area to fly a plane, much like we go to amusement parks now to ride rollercoasters. Because there’s a point in the future - we can debate when it is - where the technology is so good, it will not make sense to put a human who might have a crash and kill people into the mix. That will happen someday.

So I’ll finish my comment by saying – I have a daughter who is 11. In her lifetime, assuming she lives out her life, the nature of what it means to be human, and to live with AI, will dramatically change our self-identification. That’s a big statement, but I’m quite positive that will be the case.

Yeah. And I think, closing on a positive note, there’s a lot of benefit that we’re seeing, and we will work through some of these things. For the people that are listening to this podcast, Practical AI - we’ve talked about the mental model of how these things operate… I’d encourage people to get hands-on with these models. They’re not going to be malicious against you, as we’ve talked about. They don’t have any sentience or consciousness. So get hands-on with these models, and develop practical tooling around them. That’s what I think is needed; we can dive into this topic, develop practical tooling that can help us move forward and create applications that really help our customers, delight our customers, and help those around the world in various ways. So yeah, I’d encourage people to get hands-on and get involved.

These powerful tools are part of what it means to be human now.

Yeah, yeah, for sure. Well, I mean, we’ve all been cyborgs for some amount of time carrying around cell phones… So it shouldn’t surprise people that things are advancing. But I don’t have my Vision Pro yet from Apple, so we’ll see how that develops. But yeah. Alright, Chris. It’s been fun.

Good conversation, Daniel. Thanks.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
