Security Magazine

NIST proposes approach to reduce risk of bias in artificial intelligence

June 24, 2021

In an effort to counter the often pernicious effects of bias in artificial intelligence (AI) — effects that can damage people's lives and public trust in AI — the National Institute of Standards and Technology (NIST) is advancing an approach for identifying and managing these biases, and is requesting the public's help in improving it.

NIST outlines the approach in A Proposal for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270), a new publication that forms part of the agency's broader effort to support the development of trustworthy and responsible AI. NIST is accepting comments on the document until Aug. 5, 2021, and the authors will use the public's responses to help shape the agenda of several collaborative virtual events NIST will hold in the coming months. This series of events is intended to engage the stakeholder community and gather feedback and recommendations for mitigating the risk of bias in AI.

“Managing the risk of bias in AI is a critical part of developing trustworthy AI systems, but the path to achieving this remains unclear,” said NIST’s Reva Schwartz, one of the report’s authors. “We want to engage the community in developing voluntary, consensus-based standards for managing AI bias and reducing the risk of harmful outcomes that it can cause.” 

NIST contributes to the research, standards, and data required to realize the full promise of AI as an enabler of American innovation across industry and economic sectors. Working with the AI community, NIST seeks to identify the technical requirements needed to cultivate trust that AI systems are accurate and reliable, safe and secure, explainable, and free from bias. A key, but still insufficiently defined, building block of trustworthiness is freedom from bias in AI-based products and systems. That bias can be purposeful or inadvertent. By hosting discussions and conducting research, NIST is helping to move the community closer to agreement on understanding and measuring bias in AI systems.

AI has become a transformative technology as it can often make sense of information more quickly and consistently than humans can. AI now plays a role in everything from disease diagnosis to the digital assistants on our smartphones. But as AI’s applications have grown, so has our realization that its results can be thrown off by biases in the data it is fed — data that captures the real world incompletely or inaccurately.   

The approach the authors propose for managing bias involves a conscientious effort to identify and manage bias at different points in an AI system’s lifecycle, from initial conception to design to release. The goal is to involve stakeholders from many groups both within and outside of the technology sector, allowing perspectives that traditionally have not been heard.  
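Measuring bias at a given lifecycle stage requires a concrete metric. As an illustration only — this metric and the loan-approval scenario below are not drawn from NIST SP 1270 — a minimal sketch of one widely used fairness measure, the demographic parity difference, shows the kind of quantitative check a framework like this could standardize:

```python
# Illustrative sketch: "demographic parity difference" is the gap in
# positive-outcome rates between two demographic groups. A value of 0.0
# means both groups receive positive outcomes at the same rate; larger
# values indicate greater disparity. This is a common fairness metric,
# offered here only as an example of how bias might be quantified.

def positive_rate(outcomes):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical loan-approval decisions (1 = approved) for two groups:
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6 of 8 approved (0.75)
group_b = [1, 0, 0, 1, 0, 0, 0, 1]  # 3 of 8 approved (0.375)

print(demographic_parity_difference(group_a, group_b))  # 0.375
```

A single number like this cannot capture every form of bias — which is precisely why the NIST proposal emphasizes examining systems in the social context where they are actually deployed, not just against one metric.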

“We want to bring together the community of AI developers of course, but we also want to involve psychologists, sociologists, legal experts and people from marginalized communities,” said NIST’s Elham Tabassi, a member of the National AI Research Resource Task Force. “We would like perspective from people whom AI affects, both from those who create AI systems and also those who are not directly involved in its creation.” 

The NIST authors’ preparatory research involved a literature survey that included peer-reviewed journals, books and popular news media, as well as industry reports and presentations. It revealed that bias can creep into AI systems at all stages of their development, often in ways that differ depending on the purpose of the AI and the social context in which people use it.  

“An AI tool is often developed for one purpose, but then it gets used in other very different contexts,” Schwartz said. “Many AI applications also have been insufficiently tested, or not tested at all in the context for which they are intended. All these factors can allow bias to go undetected.” 

Because the team members recognize that they do not have all the answers, Schwartz said that it was important to get public feedback — especially from people outside the developer community who do not ordinarily participate in technical discussions.  

“We know that bias is prevalent throughout the AI lifecycle,” Schwartz said. “Not knowing where your model is biased, or presuming that there is no bias, would be dangerous. Determining methods for identifying and managing it is a vital next step.” 

Comments on the proposed approach can be submitted by Aug. 5, 2021, by downloading and completing the template form and sending it to ai-bias@list.nist.gov. More information on the collaborative event series will be posted on the NIST website.

KEYWORDS: artificial intelligence (AI), NIST, public safety, risk management
