Elham Tabassi, the Chief AI Advisor at the U.S. National Institute of Standards & Technology (NIST), joins Chris for an enlightening discussion about the path towards trustworthy AI. Together they explore NIST's "AI Risk Management Framework" (AI RMF) within the context of the White House's "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence".
Elham Tabassi: Absolutely. In answering your question, if I can just go back from the release of the AI-RMF in January 2023 to the release of the executive order at the end of October, October 30th, 2023… So the AI-RMF was released in January 2023. In March of that year we released the AI Resource Center. This is a one-stop shop of knowledge, data, and tools for AI risk management. It houses the AI-RMF and its playbook in an interactive, searchable, filterable manner. And by the way, the AI Resource Center is definitely a work in progress, and we want to keep adding to it - additional capabilities such as a standards hub and a repository for metrics. We want it to be really a one-stop shop for all of the information, but also a place for engagement across the different experts.
Just to give a little bit of context - ChatGPT was released in November 2022, a couple of months before the release of the AI-RMF. And GPT-4 was released in March 2023, shortly after the release of the AI-RMF. So in response to all of these new developments and advancements in the technology, in June of 2023 we put together a generative AI public working group, where more than 2,000 volunteers helped us study and understand the risks of generative AI.
And then in October, as you said, we received our latest assignment, the Executive Order on Safe, Secure, and Trustworthy AI. This executive order really builds on the foundational work that we have been doing, from the AI-RMF [unintelligible 00:23:28.05] resource center to the generative AI public working group, and supercharged our effort to cultivate trust in AI, mostly by giving us some tight timelines for the things to deliver.
[00:23:42.17] The EO specifically directed NIST to develop evaluation, red-teaming, safety, and cybersecurity guidelines, facilitate the development of consensus-based standards, and provide testing environments for the evaluation of AI systems. All of these guidelines and infrastructure, true to the nature of NIST, will be a voluntary resource for use by the AI community to support trustworthy development and responsible use of AI. We approached delivering on the EO the same way that we do all of our work: going to the community. We put a request for information out to receive input; based on the input that we received, we put draft documents out for public comment. Based on the comments that we received, we developed the final documents, and we were very pleased that all of them were released by the July 26th deadline that the EO had given us.
A quick overview of the things that we put out… One of them was a document on a profile of the AI-RMF for generative AI. As for the document number - at NIST we like to refer to everything with a number - that document is NIST AI 600-1.
It's a cross-sectoral profile, a companion resource to the AI Risk Management Framework. Based on the discussions that we had in the generative AI public working group and the responses to the RFI and other input that we received, I think one main contribution of that document, if I want to summarize it, is its description of the risks that are novel to or exacerbated by generative AI technologies. These risks span CBRN information or capabilities - eased access to or synthesis of materially nefarious information that can lead to design capabilities for CBRN - confabulation; dangerous, violent, or hateful content; data privacy risks… Let me remember the rest. Environmental impacts; bias; human-AI configuration; information integrity; information security; intellectual property; degrading or abusive content; and the concept of the value chain and component integration…
With generative AI we are moving away from the binary deployer/developer set of actors and dynamics; now we have upstream third-party components, including data, that are part of this value chain. So one of the things we're doing in continuing that work is working with the community to get a better understanding of the technology stack - the AI stack, if you will - and the roles of the different AI actors involved, so we can do better risk management.
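To make the taxonomy above concrete, here is a minimal sketch of how a team might encode the generative AI risk categories and value-chain roles discussed here into a simple risk register. The category and actor names are drawn from the conversation (and paraphrase NIST AI 600-1); the `Component` structure, its fields, and the example data are illustrative assumptions, not part of any NIST publication.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class GenAIRisk(Enum):
    """Risks novel to or exacerbated by generative AI (names
    paraphrased from the NIST AI 600-1 discussion above)."""
    CBRN_INFORMATION_OR_CAPABILITIES = auto()
    CONFABULATION = auto()
    DANGEROUS_VIOLENT_OR_HATEFUL_CONTENT = auto()
    DATA_PRIVACY = auto()
    ENVIRONMENTAL_IMPACTS = auto()
    BIAS = auto()
    HUMAN_AI_CONFIGURATION = auto()
    INFORMATION_INTEGRITY = auto()
    INFORMATION_SECURITY = auto()
    INTELLECTUAL_PROPERTY = auto()
    DEGRADING_OR_ABUSIVE_CONTENT = auto()
    VALUE_CHAIN_AND_COMPONENT_INTEGRATION = auto()


class Actor(Enum):
    """Value-chain roles beyond the binary developer/deployer split."""
    DEVELOPER = auto()
    DEPLOYER = auto()
    THIRD_PARTY_COMPONENT_PROVIDER = auto()  # e.g. an upstream model supplier
    DATA_PROVIDER = auto()


@dataclass
class Component:
    """One element of a generative AI system's value chain, tagged
    with the responsible actor and the risks identified for it.
    (Hypothetical structure for illustration only.)"""
    name: str
    actor: Actor
    risks: set[GenAIRisk] = field(default_factory=set)


# Illustrative usage: tagging an upstream dataset with identified risks.
training_data = Component(
    name="licensed-web-corpus",
    actor=Actor.DATA_PROVIDER,
    risks={GenAIRisk.DATA_PRIVACY, GenAIRisk.INTELLECTUAL_PROPERTY},
)

for risk_name in sorted(r.name for r in training_data.risks):
    print(f"{training_data.name}: {risk_name}")
```

Tagging each upstream component with its actor role reflects the shift Tabassi describes: risk management that follows the whole value chain rather than stopping at a single developer or deployer.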