Practical AI – Episode #163

Eliminate AI failures

with Yaron Singer, CEO of Robust Intelligence


We have all seen how AI models fail, sometimes in spectacular ways. Yaron Singer joins us in this episode to discuss model vulnerabilities and automatic prevention of bad outcomes. By separating concerns and creating a “firewall” around your AI models, it’s possible to secure your AI workflows and prevent model failure.

Featuring

Sponsors

Me, Myself, and AI – A podcast on artificial intelligence and business produced by MIT Sloan Management Review and Boston Consulting Group. Each episode, Sam Ransbotham and Sheervin Khodabandeh talk to AI leaders from organizations like Nasdaq, Spotify, Starbucks, and IKEA. Me, Myself, and AI is available wherever you get your podcasts. Just search Me, Myself, and AI.

Changelog++ – You love our content and you want to take it to the next level by showing your support. We’ll take you closer to the metal with no ads, extended episodes, outtakes, bonus content, a deep discount in our merch store (soon), and more to come. Let’s do this!

Fastly – Compute@Edge free for 3 months — plus up to $100k a month in credit for an additional 6 months. Fastly’s Edge cloud network and modern approach to serverless computing allows you to deploy and run complex logic at the edge with unparalleled security and blazing fast computational speed. Head to fastly.com/podcast to take advantage of this limited time promotion!

LaunchDarkly – Fundamentally change how you deliver software. Innovate faster, deploy fearlessly, and make each release a masterpiece.


Transcript


Play the audio to listen along while you enjoy the transcript. 🎧

Welcome to another episode of Practical AI. This is Daniel Whitenack. I’m a data scientist with SIL International, and I’m joined as always by my co-host, Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris?

Doing very well. How are you today, Daniel?

I’m doing well. I think this episode will release in the new year, but there’s a lot of year-end stuff happening to tie up loose ends and project-plan for next year… So all good things, but yeah, trying to do all that stuff…

The end-of-year time-warp here, even though everyone’s hearing it after the fact.

Right, exactly. I’m pretty excited. I know that you and I have talked many different times on the podcast about where AI models can fail or go wrong, either in terms of data or in terms of their behavior… But maybe we haven’t talked as much about ways to mitigate that problem practically, and how people are approaching that. And we’re really excited today, because we have Yaron Singer with us, who is CEO of Robust Intelligence. Of course, they know all about how AI models fail, and how to do something about it… So welcome to the show, Yaron.

Yeah, great to be here. Thanks for having me, Daniel and Chris. I’m a big fan. I think I’ve listened to pretty much all the episodes, so it’s great to be here.

[04:03] Oh, wow. That’s great. Well, it’s wonderful to have you on the show. Could you maybe just start out by getting us up to speed, maybe for those that are out there that aren’t really aware of the different ways in which AI models might fail, and the sort of risks associated with that? Could you give us a little bit of an intro to maybe some of the highlights from that, either things you’ve seen or things in the community that kind of motivated you to think about this problem?

Absolutely. You know, when we say that AI models fail, what do we mean? Failure can come in all of these different shapes and forms… So maybe we start with some examples that a lot of us in the AI community are familiar with. One very famous example was the Microsoft chatbot, where Microsoft developed this AI chatbot that was a really great technology. The idea was for this chatbot to mimic actual human conversation, and the way that they did that is they actually used data from Twitter to train that bot. And when people figured that out, they fed all sorts of weird, different things to the chatbot, to the point where they realized that they could actually make this chatbot racist. So in the end what you got, pretty quickly - I think it was like 36 hours after it was released - was this chatbot that was spitting out all these awful racial slurs, and Microsoft ended up shutting it down.

It’s just one example where you have this really elite technology, and once it falls into the wrong hands, it can obviously be led astray and manipulated in all these kinds of weird ways that we never expected or intended. So there are all these sorts of ways that AI models fail.

Even more recently, some of us have read about the Zillow example. If you went on Wired on June 15th, there was this beautiful headline about how Zillow was gonna improve its predictive pricing on housing using sophisticated neural networks. But then if you read Wired again in November, it explained to you why Zillow failed with its AI-based pricing. Zillow ended up letting go of something like 25% of its employees, and it had huge economic implications for the company. And basically, there the failure came because of changing conditions. Specifically, it was because of the pandemic. So the models were trained using old data, and then they were applied to a world that was experiencing a pandemic, and that data was very different. This is what we call distributional drift. And we saw a failure of the AI models there.

So these are just two famous examples, one of them quite recent, of totally different ways in which AI systems can completely fail, for different reasons. Yeah, so those are some examples… And the implications are obvious. Either we get very bad reputational damage, we take on serious risk, or we’re putting people’s safety at risk… So those are big problems that we’re experiencing.
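To make the distributional drift idea concrete, here is a minimal sketch of how one might flag drift between training data and live production data using a two-sample Kolmogorov-Smirnov test. This is purely illustrative - the file names, column handling, and threshold are assumptions, not anything Zillow or Robust Intelligence actually uses.

```python
# Hypothetical sketch (not Robust Intelligence's product): flag distributional
# drift between training data and live data with a two-sample
# Kolmogorov-Smirnov test per numeric feature. Column names, file names and
# the alpha threshold are illustrative assumptions.
import pandas as pd
from scipy.stats import ks_2samp

def detect_drift(train_df: pd.DataFrame, live_df: pd.DataFrame, alpha: float = 0.01) -> dict:
    """Return the numeric features whose live distribution differs from training."""
    drifted = {}
    for col in train_df.select_dtypes(include="number").columns:
        if col not in live_df.columns:
            continue
        result = ks_2samp(train_df[col].dropna(), live_df[col].dropna())
        if result.pvalue < alpha:  # the two samples look significantly different
            drifted[col] = {"ks_stat": result.statistic, "p_value": result.pvalue}
    return drifted

# Usage (assumed files): compare pre-pandemic training features with last week's data.
# train = pd.read_csv("training_features.csv")
# live = pd.read_csv("last_week_features.csv")
# print(detect_drift(train, live))
```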

I definitely see on the one hand you’ve got these behavioral problems of models, where depending on what data you feed them or train them on or update them with, you kind of get this non-ideal behavior in one way or another. I’m curious about your perspective, now sort of being in this field and seeing how clients are using AI models maybe more and more over time, what is the – from your perspective, is there an increasing risk in the way in which people are using models, like the tasks that they’re applying models to in business use cases? The chatbot one is interesting, and it’s bad PR for Microsoft, but are people starting to maybe apply AI models in maybe more risky business use cases, versus more toy problems or research type of settings?

[08:33] That’s a great question. I think in general, regardless of one organization or another, we are in a world that is adopting algorithmic decision-making that is completely based on AI. And this world is adopting this sort of automated decision-making at an exponential pace. Some examples have been out there for a while, that we know, but it goes for things as simple as AI models being used to make insurance decisions. For home insurance and car insurance, but also for health insurance. And when these AI models have these different failure points, that has huge ramifications.

Same with lending. AI is used a lot in lending and deciding who gets a house loan, who doesn’t get a house loan, who gets a car loan, who doesn’t, and how much they have to pay… And we also see this in things like predictive policing, where police departments across the country are using AI models to basically decide on where they’re gonna be putting more forces. And all these things - I think the intentions are good, people wanna make good decisions, and they wanna do this in a fair way, and automate that process somehow. But at the same time, with everything that we know about AI and the nature of the discipline, the risk is enormous.

I’ve got a question about that… As we’ve talked about some of these use cases and you kind of maybe divide them up a little bit on the Microsoft side, there’s failure that is intentional in nature. There’s sort of these - for lack of a better word - adversaries out there, and they’re taking advantage of the weakness of a model, and seeing what they could do with a chatbot… And then you have these kind of environmental influences such as the Zillow thing, where the world changed out from under you; it was no one person’s intention to do that, but nonetheless, it had the same effect. Does the intentional and the unintentional - does that matter as you’re dealing with these failure situations? Does it change the way that you approach the problem? Or does that intention or lack thereof, just because it happened - is that a factor in things?

That’s a very good question… Like, does it matter whether the AI failures are due to an adversary that’s trying to create these failure modes, or they just happen because of changing natural conditions or whatnot. So I think the answer to that is yes and no. It matters and it does not matter. And that’s the approach that we have in our company.

In our company, the part where we think it does not matter is where - basically, we put all these things under a category of risk. What we look at is we abstract the root cause of the risk away - we sort of say, “Well, it doesn’t really matter what has caused that model to fail.” The important thing is that the model does not fail, for whatever reason. So in that sense, we take an approach where we’re agnostic to whether it was an adversary that fooled the model, whether it was the pandemic that changed the conditions, whether somebody really intended to misfeed the model, to put in racial slurs, or it’s just something that for whatever reason was picked up from the internet.

[12:06] So somehow the root cause does not really matter. What’s important is that we’re somehow able to reduce it from a technical perspective, to kind of understand it and be able to protect the models from it.

Now, yes, it does matter in the sense of the algorithms that we end up using to protect models from one kind of failure versus another. So the algorithms that you would use to protect the model from what we call distributional drift - changing conditions due to a pandemic, say - are different from the algorithms that we use to protect the model from being handed adversarial input.

On that point, are you seeing, as you’re engaging with clients and companies, are you seeing – I guess two questions. One is what is their perception of the main category of risk in those two cases, the adversarial or the sort of data drift and distributional change types of things? And what maybe from your perspective is the reality, I guess? I guess maybe some companies have naturally a higher risk vector for adversarial parties coming against them, just because of what they do, or whatever…

Absolutely.

I don’t know if you have any thought on that…

Yeah, I think that’s a really interesting question. I think that it really depends on the company… But not only the company; even different teams within a company can have different concerns. You can imagine that you have a company where they have a team, and that team is responsible for fraud detection. So the team that’s dealing with fraud detection in the company - they are constantly dealing with adversarial input, and they very much care about protecting their AI from the threat vectors of adversarial input.

The same company can have another team, and that team is dealing with forecasting. Forecasting of different events. And forecasting different events - they don’t think about adversarial input. They worry about rainy days, changing – you know, if we’re thinking about maybe like a company that’s doing ride sharing, or things like that. So they care about the continuously changing conditions and how that affects the models and the predictions.

And if you wanna do this well, you wanna be able to build a system that protects models from both of these types of cases - the cases where there are adversaries that are really just trying to change and manipulate financial transactions, as well as the threat vector of just changing conditions, with no adversary in place, but changing conditions that can change the predictions of the models.

So as I was looking through some kind of updated best practices out there… Actually, I think what triggered this was - you know, Chris and I spoke recently on a show about the OpenAI API, because it’s now sort of generally available… And I was reading through a bunch of their documentation in terms of what they’re thinking about in terms of risk, and best practices in terms of using the API to maintain safety and security and privacy and all of these things.

One of the things that they highlighted in there was this sort of automation and human-in-the-loop element, where if you’re planning on creating an application - for example with the OpenAI API - it’s considered much higher risk if it’s going to automate something and there’s not going to be any sort of human in the loop to review things.

As someone who kind of works to manage and mitigate the risk in these types of scenarios, how do you view – I guess as the industry gets more sophisticated in handling the risk, are we going to be able to automate risk more? Or maybe is it that we deploy our models and we have monitoring infrastructure that helps us know when things are going wrong, but it’s still automated? I’m wondering how you view this kind of shifting over time and view this need for humans in the loop in terms of the output of models?

I think this is really important and really interesting, and I think that what you’re saying from OpenAI just echoes what we’re seeing from leading companies and platforms in general. When you look at the state of the art, it’s all going towards automation. So if you’re using Databricks today, you can already – in Databricks notebooks you have the option of retraining your models automatically. And in general, that’s where the world is going.

I think that within – I wanna be careful with my predictions here, but we’re a few years away from basically having most of the retraining tasks being done automatically, meaning without the human in the loop. And that’s generally where I think AI is going. If you wanna think about where AI is gonna be three years from now, look at where AI was five or seven years ago. If you think about where we were seven years ago - it predates scikit-learn, it predates TensorFlow, it predates PyTorch… Where if you heard about AI and you thought, “Oh, maybe that can have an advantage for my business or for my organization”, you would basically need to hire a Ph.D. from some top school to code up an SVM in C. That’s what they would do. The thought of that today seems very – we’re laughing about that, but that’s really how it used to be just in 2014-2015. There’s been a lot of automation since then, and I think that where we’re going is towards more and more of this automation… Especially when - you know, if you meet a company now and that company has just a handful of models, talk to that company again a year from now; they’ll probably have hundreds of models. And if you meet a company now that has hundreds of models, a year from now they’ll probably have thousands of models. So in order to do that at scale, it’s all automation.

So that’s our perspective and philosophy in the company - the world of AI is going towards automation. And as it does, we’re trying to really think and understand: how do we ensure that an organization that is using AI, especially in an automated way, is really making sure that it’s not taking on any risk, that it really eliminates all the risk that AI has - especially assuming that it does automate the retraining of models, and all of that?

I think automation definitely presents an additional dimension of risk, and it’s really important to understand that risk that we’re taking on, and be thoughtful about it.

[20:02] I’ve got a follow-up question to that. As Daniel was asking that last one, it popped to mind and you started addressing it already, and I wanted to explore it a little bit… And that is, if you’re thinking about this journey from kind of where we are now, having human-in-the-loop on lots of critical tasks in lots of industries, especially for retraining, and things… But if we agree that we’re generally moving toward full automation, there’s risks associated with both of those scenarios, and there’s also risks associated with that kind of in-between, where it’s human/automation collaboration - in the industry I’m in it’s often called human-machine teaming, that kind of concept… Both of those poles, plus the journey in the middle, all have a discrete set of risks associated with them. How do you see that? How do you look at those risk sets, and how do you decide what matters? How do you evaluate that? Because that’s a really tough question that I end up talking about with folks who are grappling with this - “Well, there’s problems with having humans.” We’re not perfect. We screw up, and we sometimes talk about it in the sense of having a human bring safety… But not always. And there are other times you can argue the other way. What’s your perspective on that?

Yeah, there’s an inevitable trade-off, as you were saying. I’m of the opinion that the goal of any technology is to basically support humans, not replace them. On the one hand, we wanna support human decision-making; wherever it can help automate and de-bias human decision-making, we wanna support that, and not replace it. But at the same time, we want to make sure that we’re not replacing human judgment.

I think that the challenge that we have in our world - I don’t know that we will have that choice… Meaning that we don’t have a person sitting in every critical decision junction, and making that critical human judgment. It doesn’t scale. And that’s the biggest challenge that we have.

So with that in mind, what we need to do is make sure that – it’s never gonna be perfect, but we need to know that we’re making every possible effort to make it as safe and risk-free as possible.

That’s a very thoughtful answer, thank you.

Yeah. This whole time I’ve sort of been thinking through these broader issues, and thinking through my own use cases… And as Chris knows, eventually I always get to the point where I’m like, okay, I understand the point here, like “What can we do practically to address these things and mitigate them?” I’m wondering if you could maybe walk us through your journey to Robust Intelligence in terms of how you kind of came to, I guess, understand what you wanted to focus on in terms of what you wanted to build and offer to the community… Because there are so many problems and so much to address. Obviously, you have to focus on something to start with. So how did you get to that point and how did things get started?

So my journey into this has been - like for a lot of AI practitioners, I think - it started in academia. I was a Ph.D. student at Berkeley, and then I worked at Google, and then I spent 7 or 8 years at Harvard. I’m a professor of computer science and applied math at Harvard, and basically, what I’ve been working on at Harvard is exactly this topic - the vulnerability, the sensitivity and the failure modes of machine learning models.

So a little bit before my time at Google, and then while I was at Google, I worked on machine learning models, and basically algorithmic decision-making that is based on machine learning models. And when you do that and you start to do the theory behind it and you try to prove theorems, you start realizing that we have very little foundation for algorithmic decision-making given machine learning input. And there’s a good reason why we have very little theoretical understanding - it’s because the situation is pretty terrible, to be honest. And that’s kind of what I started studying. Specifically, I’ve been looking at whether we can make good algorithmic decisions given imperfect machine learning models, and the answer is, mathematically, no. And this has been my focus at Harvard.

[24:11] So when you say “terrible”, you mean terrible in the sense that you can’t get to a point where you can prove, in many cases, that algorithmic decision-making is a good idea.

Yes, exactly. You actually prove it mathematically. We have mathematical definitions of what it means to learn and what it means to make good decisions, and we have very rigorous models and statements… And when you use these rigorous – and these are the same rigorous models that make machine learning work. When you try to apply a little bit more complex decision-making on top of these results from machine learning models, it all starts to break.

So we’ve been proving these theorems about how much data you would need in order to make what we call good decisions, and it turns out that the amount of data is exponential in the dimension of the input, which is really bad. And we study the sensitivity of models to very small errors and very small failures, and again, even infinitesimally small errors in models can lead to errors that are arbitrarily bad. It’s quite horrible and quite bad, and I’ve actually spent some time in my academic career trying to convince people of this, giving talks about an inconvenient truth about algorithms in the era of machine learning. If you look that title up, you’ll see a bunch of lectures coming up, and seminars across –

I’m sure it produces awkward conversations at conferences.

Absolutely. Absolutely, it does… Especially in a time when machine learning is on the rise and on the boom, and your department is gung-ho about hiring more people in machine learning, and now they have this professor who kind of proves all these weird theorems about why it’s not working.

I’m sitting in the same department as Les Valiant, who created the foundations of machine learning… Who’s been, by the way, the most receptive person to this sort of criticism and whatnot, and he’s been so supportive.

Some papers took like 3-4 years to get published, but they got published, and then they got very good recognition. We started from the theory about the impossibilities, and then we moved to algorithms… My group focused on noise-robust algorithms for these types of problems, focusing on what it is that we can do.

As we’ve kind of proved all these impossibility theorems, and you then focus on what it is that algorithms would be able to do - the very big idea that came up was decoupling. What we realized is that one needs to basically decouple the part about model building from model security, or model safety. Meaning, if you try to just train a model that is gonna be robust to, let’s say, adversarial input - we talked about that - I think the best known result, and I think that result is now from like 2-3 years ago… It turns out that if you wanna make your model more robust to adversarial input just by retraining it, in image classification you’re gonna take the accuracy of the model from 98% to 37% in order to get any sort of reasonable robustness… Which is a trade-off that is just unacceptable, right? If you’re gonna come up to a company and tell them, “Hey, you know how your model has 98% accuracy? Well, in order to make it robust to the 0.001% of input that is bad, I’m gonna take that model’s accuracy to 37%.”

Would there be an analogy there to the software world’s separation of concerns?

Absolutely.

Okay. For any listeners that are not software people - you want to address the problem you’re trying to address, but at the same time the ancillary things like security, which are very important, also need to be addressed; you address them separately, to maximize both.

[27:47] Absolutely. Whenever I talk to people about this for the first time, I tell them, “Look, there are two considerations. One is the mathematical consideration. And mathematically, if you wanted to make your model ‘robust’ to adversarial input by retraining it, it means that you’re gonna take the accuracy from something like 98% to 37%. It’s just a mathematical fact. Now there’s another aspect of it, which is the product aspect, or the engineering aspect. And the engineering aspect is exactly what you’re talking about, Chris, where if you’re an engineer that’s building a system, you probably should not be the one who’s also responsible for protecting that system. We can all imagine the nightmare if every time we wrote software, we’d also have to write the antivirus and the firewall for that software.”

In software, we’ve seen such tremendous success exactly because of this decoupling, and layering of different systems and components that know how to work together in almost like this agnostic way. And I think that what we’re trying to do in the company at Robust Intelligence is mimic that kind of decoupling.

Specifically, what we do is we build an AI firewall. An AI firewall is a piece of software that wraps around an AI model to protect it from making mistakes. It’s one line of code that you add, and that line of code basically stands between the data and the model; once the data comes in, it monitors it, tests it, and can even correct it, so that a bad data point does not fool the model, does not cause the model to make that sort of mistake or bad prediction. And that’s this decoupling process, where we’re not trying to build a better model; all we’re trying to do is basically catch bad data. So it reduces it to a much, much simpler task.
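As a rough illustration of the decoupling Yaron describes, here is a minimal sketch of the wrapper pattern: a “firewall” object that sits between incoming data and the model and only passes along inputs that survive a set of checks. The class name and the example check are hypothetical, not Robust Intelligence’s actual API.

```python
# Minimal sketch of the "firewall as a wrapper" idea: validate incoming data
# before it ever reaches the model. The class and the example check are
# illustrative placeholders, not Robust Intelligence's actual API.
from typing import Any, Callable, Dict, List

Check = Callable[[Dict[str, Any]], Dict[str, Any]]

class ModelFirewall:
    def __init__(self, model: Any, checks: List[Check]):
        self.model = model    # any object exposing .predict(features)
        self.checks = checks  # each check returns a (possibly corrected) row or raises

    def predict(self, row: Dict[str, Any]) -> Any:
        for check in self.checks:
            row = check(row)  # data is validated/corrected before the model sees it
        return self.model.predict(row)

def no_missing_values(row: Dict[str, Any]) -> Dict[str, Any]:
    # Example check: refuse rows with missing fields instead of letting the
    # model silently produce a prediction from bad input.
    missing = [key for key, value in row.items() if value is None]
    if missing:
        raise ValueError(f"missing values for: {missing}")
    return row

# Usage (assumed model object):
# firewall = ModelFirewall(model=my_model, checks=[no_missing_values])
# firewall.predict({"age": 36, "income": 85_000})
```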

I definitely resonated with your example around the firewall and antivirus. I know in my own building software career any time I’ve felt like I start poking holes in a firewall to open up ports and configure things, I start feeling extremely uncomfortable, because I have no idea what I’m doing… So that definitely resonates with me.

We talked about this AI firewall component kind of wrapping around the model, between the data and the model. In that firewall, are you looking at out-of-distribution data that’s coming in, or maybe particular – are there other ways that… Some data is more particularly risky than others, so I guess my question is “How do you know, as data comes in, if it’s risky data or if it’s not risky data?”

Yes, that’s a good question. So what we do is basically we test the models. We have a process of stress-testing the models, and we do that either implicitly – if you just install the AI firewall, if you just put the line of code, we basically do that in the background, we do that implicitly. Or sometimes, you know, there are companies where we start from stress-testing, and then we kind of graduate to an AI firewall.

Stress testing - what that basically means for us is we run a series of tests on the model. You know, some of the tests can be “How does the model respond to distributional drift? How does the model respond to unseen categoricals?” Just all these sort of different scenarios and different inputs, and then we’re measuring the response of the models to all these bad things that could happen. And as we’re measuring them, we’re getting a sense of – we’re basically training our own AI firewall.

So by understanding how different input can affect the model in different ways, when new input comes in, we know whether that input is going to lead to some sort of prediction error, some sort of prediction change.

Let me give you a silly example. Suppose you have some sort of AI model, and maybe that AI model is trying to predict whether somebody’s gonna earn above $100,000 next year. Whenever we take that input, we check all the different features, and we see how changes in different features are gonna affect the prediction of the model. So maybe, for example, we look at how age affects the prediction of the model. And now suppose that data comes in and somebody has accidentally replaced age with the year of birth. So an age of maybe 36 was changed into 1985, or 1986, or whatever.

[32:22] That’s basically some sort of human error, an error in feeding in the data, and by that point the AI firewall is trained and it knows that a change in this feature can really affect the model prediction. It also understands what the right distribution of age is - it understands that age needs to be something between, say, 10 and 90, because that’s the data it has been seeing. So it understands that whenever it sees a big number, like 1985, there’s something wrong here. And then what it can do is alert, and it can prevent that mistake from happening by even replacing 1985 with the mode of the distribution, or something like that, to output a better prediction.
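A toy version of that age example might look like the sketch below, which learns a plausible range from the training distribution and falls back to the mode when an input lands far outside it. The quantile thresholds and function names are assumptions for illustration.

```python
# Toy version of the age example: learn what "plausible" looks like from the
# training distribution, then repair inputs that land far outside it by
# falling back to the mode. Quantile thresholds and names are assumptions.
import pandas as pd

def learn_feature_guard(train_values: pd.Series):
    low, high = train_values.quantile(0.001), train_values.quantile(0.999)
    fallback = train_values.mode().iloc[0]  # e.g. the most common age seen in training
    def guard(value):
        if value < low or value > high:     # e.g. age accidentally sent as 1985
            return fallback, True           # repaired value, plus a "was corrected" flag
        return value, False
    return guard

# Usage (assumed column name):
# ages = pd.read_csv("training_features.csv")["age"]
# guard_age = learn_feature_guard(ages)
# guard_age(36)    # -> (36, False)
# guard_age(1985)  # -> (<mode of training ages>, True)
```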

I’m curious… It’s an interesting approach that you have here. If you are already building your models and deploying them into production, whatever industry you’re in, and you have your MLOps pipeline in place, and on the software side your DevOps or DevSecOps is kind of evolving into place - how do you integrate this together? How do you take what you’re talking about and integrate it into your existing pipeline, so that you gain the benefit of what you’re describing, and yet not kind of break your approach, so to speak, in terms of how you’re already doing that? How does all that work together?

Great. So our approach is that the best integration is no integration, and that’s why we have… One way to integrate the product - we call this light integration… It’s where basically we’re not integrating with the model at all; all we do is just take prediction logs. So if you have a model that’s running, and you store a prediction log somewhere, meaning that you have input, and the output of the model, and that’s stored in some sort of CSV file, then basically our product just sits there and runs a CI/CD process, and just continuously reads that CSV file whenever you dump in a new log file. And it just reads it, and that’s it. And it just continuously tests it; we call this continuous testing. So we continuously test the model, without it ever being in the critical path, with basically zero integration.

So it literally is in production; it’s a two-hour integration with Kubernetes… Because again, it doesn’t stand on the critical path or anything like that. When we’re doing something like the AI firewall, this is where we’re integrating on the – again, it’s a single line of code that we’re integrating on the actual model server, and that involves some libraries, and things like that… But again, it uses that same principle where ultimately it throws data, in the form of prediction logs, in the background, so that it doesn’t stand on the critical path of anything in the system.

So that’s something that’s really important, and a lot of it we do – we do it for customers on-premise, because having your data leave the organization is a huge pain. It’s sensitive, there’s a lot of compliance involved, and things like that… So on-premise is actually something that is very important.
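A bare-bones version of the “light integration” described above could be a scheduled job that re-reads the latest prediction log and re-runs a small test suite, staying off the model’s critical path. The directory layout, column names, and the example check are assumptions, not the actual product.

```python
# Bare-bones sketch of "continuous testing" over prediction logs: a scheduled
# CI job re-reads the latest log dump and re-runs a small suite of checks,
# never sitting in the model's critical path. The directory layout, column
# names and thresholds are illustrative assumptions.
import glob
import pandas as pd

def latest_prediction_log(log_dir: str = "prediction_logs") -> pd.DataFrame:
    files = sorted(glob.glob(f"{log_dir}/*.csv"))
    return pd.read_csv(files[-1])  # model inputs plus outputs, one row per prediction

def test_positive_rate_is_stable(log: pd.DataFrame, expected: float = 0.12, tol: float = 0.05) -> None:
    # Example check: the share of positive predictions shouldn't swing wildly
    # between log dumps; a big jump often signals drift or a broken upstream feed.
    rate = (log["prediction"] == 1).mean()
    assert abs(rate - expected) < tol, f"positive prediction rate drifted to {rate:.2%}"

if __name__ == "__main__":
    log = latest_prediction_log()
    test_positive_rate_is_stable(log)
    print("continuous tests passed")
```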

I wanna follow-up a little bit on what you’re talking about, sort of this continuous testing I think is what you called it, which is a really cool idea that I like… And probably - I don’t know if that’s less scary to AI people, that we’re monitoring, or something… But I like that idea of continuous testing.

[35:47] I’m wondering – you kind of gave the simple example of the distribution of age in the model, or something like that… And I’m thinking, there are likely these sensitive categories, like personal details such as age, or race, or whatever it is… You were giving examples of the police scenarios, and that sort of thing… These might be categories in which variation in that category should necessarily produce invariance in the output, at least if you’re monitoring those well. What’s your perspective on that in terms of how to approach those sensitive categories, and is that something you can program, both in terms of variation and maybe things that should be invariant with change?

So basically, in our paradigm one of the basic building blocks is the building block of tests. Basically, you discover whatever it is that you test for. If you’re not gonna test for bias, you’re not gonna discover bias. But if this is something that you care about - and I think a lot of practitioners and organizations should very much care about bias - then you test for it, and wherever it exists, you find it. And that’s actually a suite of tests that we have embedded in the product that is very important. So what we do is we automatically test the model. We just automatically go through all the different categories, and we test whether there’s bias in prediction, whether there’s bias in accuracy, whether there’s bias in false positives, false negatives… All these different things across different categories. And that’s when people discover all these biases that they had in the model that they never knew about. Some of it is protected categories, and some of it is just other categories where they found out that they were not training their models on the right dataset, or that they should do different sampling of the data in order to make sure that the model’s performance is not biased.

That’s cool.

I think testing for bias is such a critical thing that basically all AI practitioners should do.
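For listeners who want to try a bias check along the lines Yaron describes, here is a small sketch that compares false positive and false negative rates across groups in a labeled prediction log. The column names are assumptions for illustration.

```python
# Small sketch of per-group bias testing: compare false positive and false
# negative rates across a sensitive category in a labeled prediction log.
# Column names ("label", "prediction", the group column) are assumptions.
import pandas as pd

def error_rates_by_group(df: pd.DataFrame, group_col: str,
                         label_col: str = "label", pred_col: str = "prediction") -> pd.DataFrame:
    rows = []
    for group, g in df.groupby(group_col):
        negatives = g[g[label_col] == 0]  # true negatives + false positives
        positives = g[g[label_col] == 1]  # true positives + false negatives
        rows.append({
            group_col: group,
            "false_positive_rate": (negatives[pred_col] == 1).mean(),
            "false_negative_rate": (positives[pred_col] == 0).mean(),
            "n": len(g),
        })
    return pd.DataFrame(rows)

# Usage (assumed file and column):
# log = pd.read_csv("labeled_predictions.csv")
# print(error_rates_by_group(log, group_col="age_band"))
```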

This has been a fascinating explanation, and definitely a topic that we have not really gotten into at any point in any previous episode. As we wind up, where do you see the future going, both with Robust Intelligence as your enterprise, as your company that is doing things, but also within the larger world of robust intelligence - the larger effort in the industry to drive forward, and the evolution? We won’t hold you to any predictions, but if you were to make some predictions on where you think it will go, or where you would like to see it go, I’d love to hear that.

Sure. So let’s start with the things that are not interesting, right? We all know that AI is eating the world. We’re gonna be hard-pressed to find an organization that is not adopting AI in a serious way in just a couple of years, I think. Okay, that’s not interesting, so let’s put that aside. But I think when it comes to our little part of the world, when it comes to AI risk, we think that within just a few years - there are two things. I think that any organization that uses AI in a way that can affect people is gonna have to go through some sort of stress-testing by a third party. I think that’s gonna be mandatory. I don’t think that it’s gonna be up to the director of data science in that company. That’s just gonna be regulation. That’s number one.

And number two, also on regulation and best practice, I think just a few years from now, in the same way, I think we’re gonna be hard-pressed to find a company that is not protecting its models with an AI firewall. Not necessarily Robust Intelligence… I don’t know if any other companies are making AI firewalls, but I think within a few years – when we’re gonna have this conversation again three years from now, we’ll go back and remind ourselves, “Hey, do you remember that three years ago companies were actually deploying AI models without any AI firewall? How crazy was that?” I think that’s what we’re gonna find three years from now. That’s where we’re going, in my part of the world.

And you should consider that an invitation, by the way, for three years out to come back and we’ll have that conversation.

The date is gonna be easy to remember, because I think that we’re like the podcast of the new year, right? So as we go into 2025, it’ll be good to check where we were with our predictions.

Absolutely.

Yeah, for sure. And hopefully sooner than that as well. It’s been an absolute pleasure to talk through these things with you. As for myself, and I’m guessing Chris as well, we appreciate your work in this space, and pushing these ideas forward and being that voice that has occasional awkward conversations at AI conferences. It’s much needed, and we appreciate your perspective, so… Thanks for joining us.

Awesome, guys. Thank you.

