The Artificial Intelligence Risk Management Framework (AI RMF 1.0) is intended to guide organizations that design, develop, deploy, or use AI systems. Developed by the U.S. Department of Commerce's National Institute of Standards and Technology (NIST), the AI RMF helps organizations manage the many risks of AI technologies.

NIST developed the AI RMF at the direction of Congress and in close collaboration with both the private and public sectors. It is intended to adapt to the AI landscape as technologies continue to develop, and to be used by organizations in varying degrees and capacities so that society can benefit from AI technologies while also being protected from their potential harms.

Compared with traditional software, AI poses a distinct set of risks. AI systems are trained on data that can change over time, sometimes significantly and unexpectedly, affecting system behavior in ways that can be difficult to understand. These systems are also "socio-technical" in nature, meaning they are influenced by societal dynamics and human behavior. AI risks can emerge from the complex interplay of these technical and societal factors, affecting people's lives in situations ranging from their experiences with online chatbots to the outcomes of job and loan applications.
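One concrete example of this risk is data drift, where the statistical properties of the inputs a deployed system sees diverge from its training data. The following sketch is purely illustrative and not part of the AI RMF itself: it uses SciPy's two-sample Kolmogorov-Smirnov test to flag such a shift, and the feature values and the 0.05 significance threshold are hypothetical choices for this example.

```python
# Illustrative sketch of detecting data drift between training data and
# live production data. Not part of the AI RMF; the feature and the 0.05
# significance threshold are hypothetical choices for this example.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)

# Stand-in for a feature column seen at training time.
training_income = rng.normal(loc=50_000, scale=10_000, size=5_000)

# Stand-in for the same feature in production, where the distribution
# has shifted upward -- the kind of silent change described above.
production_income = rng.normal(loc=58_000, scale=12_000, size=5_000)

# Two-sample Kolmogorov-Smirnov test: a small p-value means the two
# samples are unlikely to come from the same distribution.
statistic, p_value = ks_2samp(training_income, production_income)

if p_value < 0.05:
    print(f"Drift detected (KS statistic={statistic:.3f}, p={p_value:.2e}); "
          "review or retraining may be warranted.")
else:
    print("No significant drift detected.")
```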

The AI RMF is divided into two parts. The first part discusses how organizations can frame the risks related to AI and outlines the characteristics of trustworthy AI systems. The second part, the core of the framework, describes four functions (govern, map, measure, and manage) that help organizations address the risks of AI systems in practice.
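As a purely illustrative sketch of how an organization might track its activities against these four functions, the following Python example models them as an enumeration attached to a simple action register. The example actions and the chatbot scenario are hypothetical and are not drawn from the framework's own text.

```python
# Hypothetical sketch of tracking risk-management activities against the
# AI RMF's four core functions. The example actions are illustrative only.
from dataclasses import dataclass
from enum import Enum

class CoreFunction(Enum):
    GOVERN = "govern"    # cultivate an organization-wide risk culture
    MAP = "map"          # establish context and identify risks
    MEASURE = "measure"  # analyze, assess, and track identified risks
    MANAGE = "manage"    # prioritize and act on measured risks

@dataclass
class RiskAction:
    function: CoreFunction
    description: str
    completed: bool = False

# Example register of actions for a hypothetical chatbot deployment.
register = [
    RiskAction(CoreFunction.GOVERN, "Assign accountability for AI risk decisions"),
    RiskAction(CoreFunction.MAP, "Document intended use and affected groups"),
    RiskAction(CoreFunction.MEASURE, "Evaluate output quality on a held-out test set"),
    RiskAction(CoreFunction.MANAGE, "Define a rollback plan for harmful behavior"),
]

for action in register:
    status = "done" if action.completed else "open"
    print(f"[{action.function.value:>7}] {status}: {action.description}")
```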

The full framework is available on NIST's website.