Practical AI – Episode #244

Government regulation of AI has arrived

get Fully-Connected with Daniel & Chris

All Episodes

On Monday, October 30, 2023, the U.S. White House issued its Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Two days later, a policy paper was issued by the U.K. government entitled The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023. It was signed by 29 countries, including the United States and China, the global leaders in AI research.

In this Fully Connected episode, Daniel and Chris parse the details and highlight key takeaways from these documents, especially the extensive and detailed executive order, which has the force of law in the United States.

Featuring

Sponsors

Traceroute Podcast – Listen and follow Season 3 of Traceroute starting November 2 on Apple, Spotify, or wherever you get your podcasts!

Changelog News – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today.

Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com

Notes & Links

đź“ť Edit Notes

Chapters

1 00:08 Welcome to Practical AI
2 00:35 Sponsor: Traceroute
3 02:22 Executive order on AI
4 05:18 How our government works
5 08:09 Bletchley Declaration
6 10:33 A focus on safety
7 12:54 Who will enforce it?
8 14:03 Priority in government
9 15:40 Sponsor: Changelog News
10 16:52 Call to AI devs
11 21:20 Tiptoeing the line
12 27:26 Setting the standard
13 30:15 Biological Materials?
14 33:43 Labeling AI content
15 40:10 World's intent
16 41:18 Growing opportunity
17 44:16 Outro

Transcript

đź“ť Edit Transcript

Changelog

Play the audio to listen along while you enjoy the transcript. 🎧

Welcome to another episode of Practical AI. This is Daniel Whitenack. I am a data scientist and founder at Prediction Guard, and I’m joined as always by my co-host, Chris Benson, who is a tech strategist at Lockheed Martin. Today is a Fully Connected episode with just the two of us, where we’re trying to keep you updated with everything that’s happening in the AI community, and maybe learn some things ourselves that help us level up our own understanding of these topics, and yours as well. So yeah, how are you doing, Chris? Are you keeping fully connected?

I’m definitely fully connected, and there’s been a lot to fully connect to this week. There was a bit of homework going into this episode here… So it’s been interesting. We’ve got a lot to talk about today.

Yeah, there’s a lot happening in the world. Normally, in these episodes - or at least it feels like in recent times in these episodes there’s been a lot of updates on new models, and other things like that… And that’s still happening. So things like Mistral, and other models have come out… But I think the interesting thing maybe that I’ve seen people talking about this week in particular circles back to government interactions with the AI community, and in particular the White House here in the US, the White House, the President’s executive order on Safe, Secure and Trustworthy Artificial Intelligence, which is kind of timed interestingly with other things as well… But yeah, I know that you’re very in tune with the public sector, Chris. Are you seeing a lot of discussion of this in your circles?

Yeah, I would say that we’re actually – as we parse through it, and we’ll talk about the different sections and stuff, I would say a lot of the stuff that would affect my day job in the defense and intelligence world, we’re already kind of doing; not kind of doing, we’re already doing a lot of that stuff. And so there’s a lot of specifics in this executive order, but it’s not starting a new process for us in that world. It’s something we’ve been working on for quite a lot, for years.

So it’s interesting… I was pleasantly surprised with this, because we’ve talked many times on this podcast about the fact that how long it’s taken for governments to start getting a beat on these AI issues, and what does regulation mean, and who’s going to participate, and how you’re going to do it, and all this kind of stuff… We’ve been saying that for years. And finally, on Monday of this week, as we’re recording, Monday, October 30th 2023, we got this executive order issued. And then I believe we got the Bletchley Declaration issued as well later in the week. Do you want to talk about that a little bit?

Yeah, sure. It might be useful – I know we have a wide range of listeners, and if I’m being honest, even myself… We had a friend that just became a US citizen the other week, and…

Congratulations.

Yeah, talking through things with him as like – of course, it’s fresh in his mind, but all of these ways about how our government works, and the various ways in which things can be legally enacted can be quite confusing. So maybe before we jump into things, let’s maybe just touch on an executive order, what might that imply, and how it may be different than certain things that have preceded it… Because there’s been statements on AI, and government thinking about AI in the past here in the US… But from your perspective, what makes an executive order maybe different than some of the things that we’ve seen in the past?

[00:06:13.18] Sure. So noting that I am most certainly not a constitutional attorney or any such thing, just a dude who likes AI…

I would still vote for you, Chris.

Oh, that’s really nice. [laughter] But an executive order, in short - and I’m sure if we have listeners that say I’m slightly off, they can correct us on this… But the President of the United States can issue an executive order, which is a legal device which essentially has the effect of law. It can be overridden a couple of different ways. The US Congress can override it by passing an actual law. So if an executive order is in conflict with a law that is passed in Congress, that law in Congress will trump that. And in addition, an executive order, I believe, can be the US Supreme Court can also override it on a constitutional basis. But unless one of those two things happen, my understanding is executive orders otherwise have the effect for all practical purposes of law in the United States of America.

Yeah. And apparently, these actions are “the most sweeping actions ever taken to protect Americans from the potential risks of AI systems.” It’s funny how the use of adjectives in government has come quite interesting over the years… But this is the most sweeping actions ever taken to protect Americans from the potential risks of AI systems. There are some interesting ones, and I think what might be interesting about at least a couple of these is the way that they might influence the AI industry in the US in particular, but also some ways that government agencies and other entities might become involved in the AI world. So I guess before we jump into the specifics, you also mentioned the Bletchley decision… What’s that, for our listeners? How might that relate to things going on?

Sure. So there was a summit, it was called the AI Safety Summit, that took place on November first and second of 2023, which was just a couple of days ago this week. They issued what they call a policy paper, which was the Bletchley declarations by countries attending the AI safety summit on that date. And it’s fairly short, it’s a few paragraphs long, and my understanding, not being a legal mind, is that it would not be binding in any way legally… But there’s a number of countries listed, there’s a couple of dozen that attended, including the United States, United Kingdom and a lot of other countries around the world, that basically said - the short of it is “We’re acknowledging AI safety is important to us all, it’s by definition an international concern, and that the way to deal with these concerns going forward is for us to all work together, and share information and such as that”, without reading the whole thing to the audience on the show, which I don’t think we have time for.

Yeah. We will link these things in the show notes.

Absolutely. It’s a good thing. I agree with everything they said, and it’s a good kind of a kumbaya, saying we need to work together… But otherwise it’s just saying “Hey, let’s go do these things.” That’s very important, because it is indeed an international concern. I am definitely applauding that. I’ve been more focused a little bit on the executive order, simply because it’s binding, having the effect of law… And we’ll talk about the details, but it gets quite specific in the executive order itself. There’s a facts sheet that kind of gives you a high level, and it doesn’t give a lot of the detail. And I read the fact sheet first, and I was a little bit – I was like “Okay, that’s all great, fluffy stuff”, but when I read the executive order afterwards, it gets down to who’s responsible for what, how long they have to do it, and what they have to do it, and there’s a bunch of specific results expected or standards applied that clearly were people from the AI community, that were very, very knowledgeable. So it wasn’t a political-only group of people that did this. They obviously had expertise available to them. So I ended up being more impressed than I expected with what they came up with.

[00:10:32.12] I haven’t done all of the research, and I don’t know if it’s even published anywhere, about who was involved in the process of developing this… But we can go through some of the high-level things, and there’s a few really helpful… So there’s the facts sheet that you mentioned, there’s also some good articles that I’ve been looking at over the past days, that give some summary information that might be relevant for people as well; the MIT Tech Review has an article, and some other ones… If we don’t dive into the specific wording quite yet - and I just looked at kind of the high level what people are saying about it, what’s standing out to them about it… One thing that I see really driven home is the real focus on safety, and in particular safety and security and the real focus on kind of like labeling and watermarking the output of AI systems, which I think, if I’m understanding right, is framed within the executive order as a safety thing in terms of protecting our citizens from potentially fraudulent or harmful material that might come out of AI models, as well as giving, whether it’s entities related to defense or education or whatever, the ability to identify and discriminate between AI-generated assets or content or text, and non-AI-generated, or I guess human-generated…

We’re talking about a specific point here in it, but it takes both sides of that. It both issues the directive that in essence you must identify AI-generated content, so that you can’t have deceptive representation there. But it also issues the appropriate government agencies to come up with mechanisms by which they may detect people who are not subscribing to that directive, or a foreign – obviously, there are all the other countries in the world, and they are not held by our executive order… So the means to detect all that. But I also would say that that’s not a new idea to the DOD and intelligence community.

And am I understanding this right, Chris, that the executive order would put the obligation on certain government entities to figure out how to enforce this mandate on the other organizations, companies, entities, teams that are within their jurisdiction? Is that a good way to put it, or am I misunderstanding?

Yeah, it leaves a lot undone. And at some point, we probably should jump in and kind of hit the highlights on what they were, but it basically puts the burden on various agency leaders to come – it’ll outline what they have to accomplish, and when it must be accomplished by, and in some cases what the output of that is, but it doesn’t tell them how to accomplish all that. And I have yet to see any place where it ascribed any budget to any of those items. So there are strengths… It was good thinking in many areas, but the dollars that would go into making some of these things happen I did not see assigned. So positives and negatives.

[00:14:04.11] When you read this, when you see this coming through and how things have trended over the recent years with respect to the government’s thinking on AI, is this taking us up from a level too of priority in government and action up to 10? Or how do you see this as escalating the kind of involvement from the government in the AI industry in terms of practical things that we’ll see in, let’s say, the coming year?

Oh, I would say that they don’t have a choice, given that it’s now in an executive order, and it directs them with timelines and specifics on that. If I was leading a federal agency that was being called out to do this, I would probably be scrambling and trying to figure out who I had, and who I could get, and where I was going to pull the money from to accomplish those things… And I’m hoping - maybe a listener knows there’s budgets ascribed that we haven’t heard about yet, and that would be good news… They would have to juggle a little bit in terms of their priorities to make some of these things happen. But I think it’s good. It forces it right up to the top of our list of things to – that the government could happen. And I don’t think it was going to happen from industry alone. We’ve watched for years as commentators on the industry every week, we’ve seen individual companies kind of do their own thing, they kind of compete with each other, because there’s a bit of a marketing tinge to AI safety as well… But nothing that has ascribed the entire industry universally. And so this will clearly do that with external standards.

Okay, Chris, let’s dive into some of these specifics, which I think are really quite interesting. The one that I thought was probably most interesting to me, although I think the others have both good and interesting implications - as my wife and her business would say, “There’s wins and opportunities.” But I think one of the ones that stood out to me was - I’ll just read the wording here, and we can talk about it and some of the wider implications… But one of the things is a requirement that developers - which is interesting to me, because I’m an AI developer, I guess - a requirement that developers of the most powerful AI systems share their safety test results and other critical information with the US government. How does that strike you? What are your thoughts?

Depending on the model and the specifics of that, it will often be the US Department of Commerce that’s receiving those, in some cases it will be military or intelligence, depending on the nature of what the concern is and what the model can do… But that will have to be created; it doesn’t exist today, to the best of my knowledge. If I was leading commerce, I’d be going like “Okay, how are we going to receive this information and store it?” Because they’re probably going to be getting quite a lot of data coming at them with the amount of development in this area.

[00:18:19.23] That’s one of the things that certainly here in the US we’re going to have to learn how to do… Because as of now, once the timeline – I don’t have in front of me how long they have to put that into place, but after a certain number of days, it will be required by law.

If we’re specific on the thing that would be required by law, if I’m understanding this right, it would be someone that is developing a new – I don’t know if you’ve seen this, maybe I just missed it… I’m not sure if this would be new in the sense of training from scratch, or new in the sense of a fine-tuned model… But those that would release large models, like in recent weeks we’ve seen Mistral, and LLaMA 2, and Zephyr, and all these models coming out… The models that are significantly large, like that, foundation models that are significantly large, as those are released, after they’re released, before they’re released - maybe we can talk about that - that the people that are producing those models, the teams, the developers that are producing those models perform some sort of red teaming to probe the models in terms of potentially harmful outputs, and kind of behavioral tests, and perform risk assessments that would be gathered in some sort of coherent way and shared with the US government.

One of the interesting pieces of this part of the executive order in particular, if I’m understanding right, is that it falls under the authority of the Defense Production Act…

Correct.

…which I think in addition to the executive order status might bring this point maybe a little bit higher up in terms of firm legal footing. Again, not an expert in that thing, but that’s my understanding from what I’ve read.

Yeah. And as a non-expert, but the Defense Production Act… Listeners may recognize that, because they’ve been hearing about it a lot over the last year, because it’s been a point in the Russian invasion of Ukraine, it has been repeatedly cited as a mechanism that the US government can use to increase production to support that effort, our allies, the Ukrainians. And so you may have heard that before. And from that I know, as a non-attorney, non-legal mind, that it gives the US government broad powers on requiring commercial companies in the United States, or that we do business with, to meet a certain set of criteria, and there are things that they can be told, “You must go do this, because it’s in the interest of our national security.” And so it’s a fairly sweeping thing, is my understanding. Since it has been referenced here, I agree with you; you not only have the executive order, but you also have reference to a fairly strong point of law from Congress.

One of the interesting takes that I saw on this… This is just a blog post that I’ll link in the show notes, but the comment in the blog post was that this might be one of the things that would kind of lead to a firming up of the players within the AI market as we see it… Because not only is now there a computational data infrastructure burden on those who want to produce these large models, but now there’s more of a regulatory burden that would actually be an additional kind of step here in terms of being a player in the foundation model space. Now –

[00:22:04.24] There’s irony to that, by the way. They explicitly note they’re trying to address equity in the executive order, but by adding the regulatory burden, that will be exclusionary.

Correct. Yeah, you don’t get anything for free, that’s for sure. So this will be interesting… It’s hard for me to think that the progress will slow very quickly around releases of these models. What is very interesting to me is how they will end up deciding - like, if I pull down LLaMA 2, and use a small dataset that I have access to fine-tune it, and then I release that as another foundation model that people can use… So I am modifying the weights, right? At what point between there and training from scratch, which - even large models I don’t think often are… They might start from a starting point with their weights in certain cases… But yeah, at what point in there are you really a developer of a significantly large model? Because both of these models are large… When is it adaptation? When is it fine-tuning? When is it training or releasing, or building, as the executive order says? So yeah, that that’s all kind of mushy in my mind, I think.

Yeah, they do attempt to kind of address that. There’s a section, 4.2, “Ensuring safe and reliable AI”, and they talk about a timeframe… And this is one of those that calls out the Defense Production Act specifically. And within 90 days there’s a set of things that are being required… Which is a very short timeline, when you think about it. Part of that is they have a set of criteria, which is certainly not comprehensive. To your point just a moment ago, it’s better than I expected, but it’s not entirely sufficient. There’s a lot of nuanced questions, as you just raised.

I noticed that one of those, they talked about the quantity of computing power used for training, and they have picked, interestingly, 10 to the 26th integer or floating point operations for kind of as a general point of computation; it’s like a threshold, and if you’re above that, you’re kind of in that large range that they are particularly focusing on. They have reduced that down to 10 to the 23rd integer or floating point operations for models that are based primarily for biological sequencing, and things like that. They have some other aspects in, but they have an interesting threshold that they call out in the executive order.

So you know exactly what’s going to happen here, Chris, is that whatever that number ends up being – so it’s just like in… So the town where my dad grew up in Kentucky is a – it actually still is, I believe, a dry county, meaning in the US this is where you can’t actually purchase packaged liquor in a liquor store. Of course, what happens then is just on the edges of the county you have these, at the sign where the county ends, there’s like 14 liquor stores, right? We’ll see something similar here, right? We’ve been choosing our numbers of parameters and such, 7 billion, 13 billion for various reasons for our models… What’s going to happen is people are going to get really, really good at training models under that threshold, which –

10 to the 25th will be the magic number going forward.

[00:25:46.20] Yeah, which - you know, I think it could actually have a… Even though that’s kind of gaming the system, it could actually have a really nice effect that instead of us trying to always think about more data, bigger model as the way to incrementally improve, this does put a burden on those that want to operate at the lower level under the threshold of regulation… The burden to say “Hey, what if we’re creative, either in our model architecture, or the way we train it, or the way that we fine-tune, or whatever that ends up being, to actually do more with less?” Which I think overall would be a good thing. And Academia is already thinking about these things with things like the [unintelligible 00:26:31.24] workshop, and other things like that. So I think that could actually have a follow-on effect that’s quite positive for the model landscape.

You’ll have a set of players that remain out of necessity – they must play up in that area, because the true LLM range isn’t gonna go away. But that will also be dominated by large players who are already doing regulatory stuff anyway. Maybe they weren’t in this, but they’re accustomed to that. It is exclusionary to those large – you know, things like cloud providers, and such; they’ll be doing that. But you’ll probably have a whole range of mid-sized players that are below the Amazons, Googles, and Microsofts of the world, and the Open AI is of the world, that will play in that, just below that, and build foundational models. Yeah, as you said, it’ll be interesting to see what kind of innovations come from there.

And one other question that I have coming out of that is “How do I know that I hit the 10 to the 26th?” And how do you go – along with this sort of restrictions or legal implications throughout the executive order, this kind of naturally brings up a lot of questions… Like, they also talk about watermarking, which we can talk about here in a second… But the general thought is like whether you’re talking about this computational power, red teaming, behavioral tests, watermarks, labeling, you need standards and tools and tests to help you ensure that you can do these things. Like, how do I know when I hit that threshold? How do I watermark things? etc. etc. And so one of the other things that’s drawn out right away is the developing of standards, tools and tests to ensure that AI systems are safe, secure, and trustworthy.

And this specifically calls out the National Institute of Standards and Technology, or NIST, that you might have heard of before, because they have one of the most precise clocks to help keep the standard of time, and the most precise weights –

[unintelligible 00:28:38.27]

Yeah, the most precise, like “This is exactly what a kilogram is…” And yeah, actually, in my undergrad when I was doing research, I did research at NIST with one of our collaborators… So we were theoretical, they were experimental, and I think mostly all I succeeded in doing was spilling a bunch of carbon nanotubes on the floor. I’m not very good at experiment. But that’s what they’re experts in, minus the occasional intern that spills carbon nanotubes on the floor. But they’re specifically called out to help or set the rigorous standards for what’s phrased in the executive order, extensive red team testing to ensure safety before public release. What are your thoughts on this, Chris?

This goes back to something I mentioned earlier… There’s a lot of figuring out the how that’s undetermined. So you have clearly some bright AI minds that helped construct the executive order, but they’ve left wide open what that means, and what is red teaming, what is red teaming trying – they hit some things that red teaming should be trying to do at a very high level, but it’s up to NIST and the Department of Commerce to come up with what the specifics are on that. And I think we’re all gonna [unintelligible 00:29:58.08] I think the key thing that I would take away from that is that this executive order is the first of many things to follow over the next year from various agencies, as they are trying to fulfill the executive order’s intent.

[00:30:12.15] Well, Chris, the next thing I see in here is biological materials, which I don’t necessarily think about that much, even though I’m made up of biological materials; I guess I don’t consider my own biological self very much. But they talk about protecting against the risks of using AI to engineer dangerous biological materials. What is a dangerous biological material? I guess a bioweapon, is that what we’re talking about here?

It would be bioweapons. And it could be something that we’ve all heard about; when we were in the height of COVID, there was all the theories about whether or not it had been created in a lab in China, or elsewhere, and such as that… And so it could be a weapon by design, it could just be a virus, it could be lots of different things. There are a lot of international laws and domestic laws against these things, but we also have actors around the world who don’t necessarily subscribe to the same values. And so it’s still something that the intelligence and defense communities, both the US and our allies, spend a lot of time thinking about how to address and defend against… Though we follow those laws, our adversaries may not. So yeah…

Yeah. People might be wondering, “Well, how might you practically think about protecting against the development of dangerous biological materials with AI?” And we’ve had previous shows, we can try to find them and link them in the show notes, about using AI to find new drugs, or something like that.

Well, a lot of those projects, whether it be those kind of pharma-related projects, or academic projects related to biology, and AI, or AI and life sciences sort of overlap, a lot of those do have some sort of federal funding behind them, whether that be NIH, or NSF, these sorts of grants… And so one of the things that’s called out here is “Hey, if you want a grant, if you want our money, then you have to agree to establish these standards XYZ”, which to my understanding are not specified in this executive order, but it’s saying “We will create the standards, that will be standards and requirements to receive federal funding for biological research with AI”, or I don’t know, that’s probably also to be determined, how to categorize that.

And we’re talking a lot about biological, but they don’t just address biological in it. There’s kind of some special stuff on biological, but then they also address what they refer to repeatedly as CBRN, which is short for chemical, biological, radiological or nuclear weapons. And so the kind of civilian research on the biological side, but there’s also the military side under the CBRN acronym that they’re addressing on those. And there’s a lot of concern expressed throughout the executive order about all of those being enhanced by AI in terms of finding solutions where you’re using models, how do you handle those, both domestically under this law, and how do we direct agencies to help keep us safe from adversaries that might not respect that.

[00:33:43.05] Yeah. There were two things that I was seeing in the kind of news and commentary on this that were standing out. One we’ve already talked about, which is related to the requirement for the “most powerful AI systems” to share their safety test results. The other one that stood out or it seemed to stand out to many people was the protections that are put in place for establishing ways to detect and label AI-generated content. So this would be images that are generated from text to image, or text to video systems, or audio that’s maybe synthesized, which is continually getting better, or voice clones, that sort of thing… Or also text that would fall into this category too around misinformation and that sort of thing, that you might want to filter out. Or maybe if you’re one of those teachers that want to prevent your students from using ChatGPT to generate their essay, then finding ways to detect AI-generated content and enforcing those in certain contexts. That’s kind of my general reading of this stuff… And I think what I was seeing was there’s a good bit of positive response to this, even from many in the AI community, that recognize “Yeah, this is an important piece of what we will need to do moving into the future, in terms of having to label things, and needing to be able to discriminate between these things”, but also the recognition from those in the AI community that this is still very much a topic of research, which is not figured out yet.

Totally. And I think that’s one of the – it’s interesting, I think that will be a big impact, because that will affect so many industries that are not necessarily ready for… You know, they’ve kind of said “Oh, we can make some money, we can generate content”, and we’ve all been seeing that online, but it’s coming from industries where they have not had the burden of responsibility for it. I think certainly all of us that are in the AI world have used different models to generate text, and stuff, and it started the first time as kind of cool, but then you realize “Wow, this is an amazing business capability.” But now it’s an amazing business capability with a fairly significant responsibility attached to it. It will be interesting… Things like the marketing and branding industries, which I once upon a time I was in, will have to figure out a way to do that and still serve their clients in that particular industry. Because if you just have everything as AI-generated content, that will affect how people perceive the content you’ve just generated trying to satisfy your clients in that particular industry. So there’s a lot of nuance that’s very industry-specific, that’s going to have to happen for that.

Yeah. And I hope that many out there recognize that we need to figure out ways to label this generated content and track it, even if it’s only for practical purposes of like, hey, more and more of this content is gonna get out there, and I don’t want to necessarily always be training my next AI system on AI-generated content; maybe I want human content. But there’s a recognition that it is an active area of research, and there’s also a gray area here, right? So if I have ChatGPT write me a cool blog post, and then I take that out and I modify a few things. And then I put a paragraph back in and have it rephrase, and then I take it out, and then I edit some more things… This is often a very dynamic process, and I think for safety and security and trustworthy AI systems, we would want that kind of back and forth with a human. But it’s not always the scenario where it’s simply human-generated content, or it’s simply AI-generated content.

[00:37:54.17] This does get very mushy, even in automated systems where there’s humans post-editing machine translations, or there’s humans reviewing analysis that’s been generated out of a sequel table, or… I don’t know, there’s all sorts of scenarios here where there’s a lot of gray area. And maybe that’s not the focus of this statement; it might be more these scenarios where you’d want to essentially create a factory of misinformation that’s just pumping out things to Twitter or X… That’s maybe more within what they’re talking about.

I think that working through all of those nuances, and all these different industries, and… I do the same thing - I write a lot of stuff, and I’ll write, and I’ll put what I’ve written into one or more models, and I’ll see what it comes back with, and I’ll choose, and I’ll take part of this, and part of that… I think a lot of people are doing that. I think that this is going to have to – a lot of this will be settled through litigation… So I think the executive order has given a tremendous boost to the AI litigation industry that has been flowering over the last few years… I think we’ll see far more of these nuanced cases, these gray areas decided in court in the next few years.

I have mixed feelings about the fact that it is beyond ability to handle all these cases given the short timelines. I’m glad to see short timelines, instead of many years to get there; especially if they’re unfunded, it will be interesting to see kind of what they come up with. If you’re a department head and you have 90 days to come up with a solution that the executive order requires of you, you probably will not have solutions for all of these things. So we have some interesting times ahead of us, certainly.

Yeah. And I hope that there’s involvement from leaders in the space, large and small. So smaller companies that are really innovating in some of these things, and larger kind of staples of the industry, like Hugging Face and others, that would pour into those things… But all of that will require some sort of minimal exchange of money, even if it’s just to buy people’s time to spend on this, because there’s so many things to work on.

It’ll be really interesting… Now that the burden has been placed on American agencies, and by extension the American people in their industries to comply with all these things, it’ll be interesting to see… You know, we talked at the beginning about the Bletchley declaration, and that intent… And all of these other countries will presumably come out with their own versions of this. And some will be very similar, some may branch out in different ways, based on the values and laws of their own countries… But it will be – to see how this works out, and there will also be some countries that refuse to subscribe to this whatsoever. And not only will they not contribute to this, they may be working very specifically against it, and in turn we’ll have to have very good capabilities for detecting when any of these cases that are within this purview of AI safety and security are being violated by others to an effect that is not good for us. It’s late 2023; I suspect through the end of the decade it will just be absolutely fascinating on how we start sorting through these issues.

[00:41:16.23] Yeah. And to maybe on a slightly positive note, for those of us that are working in day to day in this industry, we are the developers of some of these AI systems… We could look at this and say “Oh, there’s all these various intricacies and such that need to be worked out…” But I do think that there’s encouragement here, in the sense that - hey, some kind of general guidance, firming up of standards, help in kind of understanding how we might behaviorally test, or red team, or assess the risks associated with our models… I think that’s a really encouraging thing in many respects, especially for the vast number of AI developers out there that do actually want their systems to be safe, secure and trustworthy.

Yes, there’s a likely minority of developers out there that are trying to be nefarious and malicious even in what they’re doing with AI, as there will always be with any sort of technology, but I think most of us want to build safe and secure and trustworthy AI systems. And even if you’re doing really good in one of those categories, like you’ve got your red teaming down, there may be other things that come out through these processes with NIST, or the watermarking tooling, or other things that - it’s hard to be an expert in all those things.

So hopefully, as more of this rolls into action, there is money put behind some of it to not only put guardrails around what we can and can’t do, which might be how some people might take this, but actually to give us tools that will enable us to do more, because we know that we’re following good practices and best practices, and we’re being safe and secure. And of course - yeah, there will always be a need for research beyond that, but… Yeah, I think it’s encouraging in that sense.

I totally second everything that you’ve just said. This is an opportunity. There are huge business opportunities in helping people get through regulatory. And we’ve seen that in other industries. So this has come about, we’re hitting regulation in AI for real. Every other time regulation has come out, there’s been whole industries born that helped get through that, and services that make it much easier than it seems today, as we’re first reading what is to come… So I also would encourage everyone to try to embrace it. We do need it for safety; the dangers are real. And let’s do it for ourselves, our children and our larger community. So absolutely, let’s go make this thing a good thing.

Yeah. Alright, Chris, that’s a great way to end, and I look forward to talking to you more in the future weeks about increasingly safe, secure and trustworthy AI.

Absolutely.

Changelog

Our transcripts are open source on GitHub. Improvements are welcome. đź’š

Player art
  0:00 / 0:00