Elham Tabassi, the Chief AI Advisor at the U.S. National Institute of Standards & Technology (NIST), joins Chris for an enlightening discussion about the path towards trustworthy AI. Together they explore NIST’s “AI Risk Management Framework” (AI RMF) within the context of the White House’s “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”.
Elham Tabassi: The first thing I will say is that you don’t need to implement all of the recommendations in the AI-RMF to have complete risk management. So our recommendation is to start by reading the AI-RMF. It’s not a very long document - I think it’s about 30 to 35 pages. So get a holistic understanding of it. And then check out the playbook in the AI Resource Center. The AI-RMF is organized into four high-level functions; each function is divided into categories, and then subcategories, in a sort of granular approach. For each function - govern, say - we give recommendations on what to do, and then for each of those we get into a little bit more granular recommendations.
The playbook, for each of the subcategories - there are about 70 subcategories in the AI-RMF, I think - provides suggested actions, informative documents that you can go read to get more information, and also suggestions about transparency and documentation for implementation of that subcategory.
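The functions → categories → subcategories hierarchy Tabassi describes could be sketched as a simple nested data structure. The four function names (GOVERN, MAP, MEASURE, MANAGE) come from the AI RMF itself; every category and subcategory label below is a hypothetical placeholder, not actual NIST text:

```python
# Minimal sketch of the AI RMF hierarchy: four functions, each divided
# into categories, each divided into subcategories. Function names are
# from the AI RMF; all category/subcategory IDs here are placeholders.
ai_rmf = {
    "GOVERN": {
        "GOVERN 1": ["GOVERN 1.1", "GOVERN 1.2"],
        "GOVERN 2": ["GOVERN 2.1"],
    },
    "MAP": {
        "MAP 1": ["MAP 1.1", "MAP 1.2"],
    },
    "MEASURE": {
        "MEASURE 1": ["MEASURE 1.1"],
    },
    "MANAGE": {
        "MANAGE 1": ["MANAGE 1.1"],
    },
}

def all_subcategories(framework):
    """Flatten the hierarchy into a list of subcategory IDs."""
    return [
        sub
        for categories in framework.values()
        for subs in categories.values()
        for sub in subs
    ]

print(all_subcategories(ai_rmf))
```

In the real framework the flattened list would contain the roughly 70 subcategories Tabassi mentions, each of which the playbook pairs with suggested actions, informative references, and documentation guidance.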
So we often suggest that, to get a better understanding of the AI-RMF, you spend some time in the playbook to get a sense of the type of things that can be done. And then, based on the use case - based on exactly what you want to do - start with a simple, small number of recommendations in the AI-RMF and begin implementing those. The govern or map functions are useful starting points.
[00:38:09.08] Govern provides recommendations about the setup that you need for successful risk management, so it can give an organization ideas about the resources that are needed and the teams that need to do this, so they can align it with their own resources and teams. And the map function, as we discussed, gives recommendations for getting a better understanding of the context and getting answers to what needs to be measured.
I will also add that there is no prescribed order for doing the functions - govern, map, measure, manage. It depends on the use case; it depends on what needs to be done. The starting point can be the recommendations of any of the functions. We usually recommend starting with govern and map, and then with as few of the subcategories or recommendations as the resources and expertise of the entity allow them to implement. Of course, prioritize in terms of your own risk management.
And then the last thing I’ll add is to be mindful that risk management is not a one-time practice where you do it once and say “Okay, I’m done with my risk management.” With AI systems there’s data drift, model drift, and these newer models can change based on interactions with the users and with the environment… So we suggest continual monitoring and risk management. I think one of the recommendations in map or govern is to come up with a cadence for repeating the risk assessments.
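The “cadence” idea above could be sketched as a simple due-date check: given a chosen reassessment interval, flag systems whose last risk assessment has gone stale. The quarterly interval and the dates below are illustrative assumptions, not anything NIST prescribes:

```python
# Hedged sketch of a reassessment cadence for continual risk management.
# The 90-day interval is a hypothetical choice, not a NIST requirement.
from datetime import date, timedelta

REASSESS_EVERY = timedelta(days=90)  # e.g. a quarterly cadence

def reassessment_due(last_assessed: date, today: date) -> bool:
    """Flag a system whose last risk assessment is older than the
    chosen cadence (relevant when data or model drift may have
    changed the system's behavior since the last assessment)."""
    return today - last_assessed >= REASSESS_EVERY

print(reassessment_due(date(2024, 1, 1), date(2024, 6, 1)))   # True
print(reassessment_due(date(2024, 5, 15), date(2024, 6, 1)))  # False
```

In practice the trigger need not be purely calendar-based: a significant model update or observed drift could also restart the clock.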
So those would be my recommendations. Another thing that I would say - I mentioned the AI Resource Center, I mentioned the playbook… In the AI-RMF we also talk about profiles. I keep emphasizing the context of use, and the importance of context in AI system development, deployment, and risk management… At the same time, the AI-RMF by design tries to be sector-agnostic and technology-agnostic. We try to come up with the foundations - the common set of practices that one needs to be aware of and that are suggested for risk management. But we also have a section on AI profiles, and recommendations on building verticals. These profiles are instantiations of the AI-RMF for a particular use case, domain of use, or technology domain, so that each of the subcategories can be slanted toward, or aligned with, that use case. So there can be a profile of the AI-RMF for the example that I used, medical image recognition; or you can imagine a profile of the AI-RMF for the financial sector. That’s something that we have been asked to work on with the community.
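The idea of a profile as an instantiation of the sector-agnostic AI-RMF could be sketched as an overlay: the baseline guidance applies wherever the profile is silent, and the profile overrides the subcategories it specifically addresses. All IDs and guidance strings below are hypothetical illustrations, not actual NIST content:

```python
# Hedged sketch: a "profile" overlays use-case-specific guidance on a
# sector-agnostic baseline. All IDs and text are placeholders.
base_guidance = {
    "MAP 1.1": "Understand the context of use.",
    "MEASURE 1.1": "Identify appropriate metrics.",
}

# Hypothetical profile for the medical image recognition example.
medical_imaging_profile = {
    "MEASURE 1.1": "Identify metrics suited to medical image "
                   "recognition, e.g. sensitivity and specificity "
                   "on clinical data.",
}

def apply_profile(base, profile):
    """Keep the baseline where the profile is silent; override the
    subcategories the profile specifically addresses."""
    merged = dict(base)
    merged.update(profile)
    return merged

guidance = apply_profile(base_guidance, medical_imaging_profile)
print(guidance["MAP 1.1"])      # baseline retained
print(guidance["MEASURE 1.1"])  # profile override
```

A financial-sector profile would work the same way, overriding a different slice of the same baseline.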
That was a very long intro to say that there are a couple of profiles posted on the AI Resource Center. One is the one that the Department of Labor did for inclusive hiring; another is one the Department of State did for human rights in AI… So those can give some sort of window into, or idea about, where organizations can start.
In addition to the profiles, we have also posted a few use cases - and we will post more - showing how different organizations are using the AI-RMF, which can hopefully serve as more practical examples of how to use it.