The National Institute of Standards and Technology (NIST) has released a new draft document, Artificial Intelligence and User Trust (NISTIR 8332), which is open for public comment until July 30, 2021. The document aims to stimulate discussion about how humans trust artificial intelligence (AI) systems, and it outlines nine factors that contribute to an individual's potential trust in an AI platform.
“Many factors get incorporated into our decisions about trust,” said Brian Stanton, a psychologist who co-authored the draft document with NIST computer scientist Ted Jensen. “It’s how the user thinks and feels about the system and perceives the risks involved in using it.”
The report contributes to the broader NIST effort to advance trustworthy AI systems. This latest publication focuses on understanding how humans experience trust as they use, or are affected by, AI systems.