Synthetic Identities Are Redefining Trust in Biometric Systems

Most people assume they can recognize a real face when they see one, but in practice that is not always the case. A recent report published in the National Library of Medicine’s PubMed Central database found that human accuracy in identifying deepfakes can fall below 25 percent under certain conditions. As a result, the burden of verification is shifting away from the individual and toward the systems designed to validate identity.
These systems have long relied on the same basic process: capture a biometric input, compare it to a stored record and confirm a match. While that model supports a broad range of use cases across government and enterprise environments, from border control to digital service access, it’s built on the assumption that the input being evaluated originates from a legitimate, physical source.
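To make that model concrete, here is a minimal Python sketch of the capture-compare-confirm loop. The feature vectors, the cosine `similarity` function and the 0.85 threshold are all illustrative assumptions, not values drawn from any standard or product:

```python
# Minimal sketch of the classic verification model: capture an input,
# compare it to a stored template, confirm a match. Hypothetical names.
from dataclasses import dataclass

@dataclass
class BiometricTemplate:
    subject_id: str
    embedding: list[float]  # feature vector extracted at enrollment

def similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def verify(captured: list[float], stored: BiometricTemplate,
           threshold: float = 0.85) -> bool:
    # Implicit assumption: `captured` came from a live, physical capture.
    # Nothing below tests that assumption.
    return similarity(captured, stored.embedding) >= threshold
```

Nothing in `verify` examines where `captured` came from, which is precisely the assumption that generative AI now strains.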
Advances in generative AI are challenging that assumption. As synthetic media becomes more realistic and more widely available, systems are increasingly being asked to evaluate inputs that may not come from a camera or sensor. This shift is showing up in how identity standards develop, raising new questions about how verification systems validate the data they receive.
Standards Begin to Address Synthetic Inputs
In response to these emerging risks, identity frameworks are now accounting for inputs that may not come from a live, physical capture. The National Institute of Standards and Technology (NIST) recently updated its biometric data exchange standard, SP 500-290e4, marking its first revision since 2016. The standard defines how biometric data is formatted and shared across systems used in areas such as law enforcement, border security and identity verification.
One notable update is the formal classification of synthetic and morphed facial images as non-biometric content. This distinction signals that AI-generated images must be handled differently from traditional biometric data within verification workflows.
Additional guidance in NIST SP 800-63-4 outlines expectations for identity systems, including considerations for detecting machine-generated content and countering emerging attack methods such as injection attacks.
Together, these updates signal a wider recognition that identity systems must account for inputs that were not part of earlier design assumptions. However, updating standards is only the first step in translating these changes into effective system controls.
Standards define how data must be handled, but they don’t determine how systems operate. In practice, a system may confirm that an image matches a stored record without evaluating whether the input itself is authentic.
As synthetic media becomes more accessible, this creates a point of friction between how identity is defined in standards and how it is processed in practice. The classification of synthetic images as non-biometric content reinforces that not all inputs can be treated equally.
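One way to picture that distinction is a gate that classifies inputs before any matching occurs. In this hypothetical sketch, `detect_synthetic` is a placeholder for whatever detector a deployment chooses, not a reference to any real product or standard API:

```python
# Hedged sketch of gating inputs before matching, mirroring the standard's
# distinction: synthetic or morphed images are non-biometric content.
from enum import Enum

class InputClass(Enum):
    BIOMETRIC = "biometric"            # live, physical capture
    NON_BIOMETRIC = "non-biometric"    # synthetic or morphed content

def detect_synthetic(image_bytes: bytes) -> bool:
    """Stub: a real deployment would call a synthetic-media detector here."""
    return False  # stand-in only; always treats input as authentic

def classify_input(image_bytes: bytes) -> InputClass:
    # Non-biometric content should never be treated as a biometric sample.
    if detect_synthetic(image_bytes):
        return InputClass.NON_BIOMETRIC
    return InputClass.BIOMETRIC
```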
As these gaps persist, attackers are quickly adapting, introducing new threats that outpace legacy detection methods.
Evolving Threat Models and Detection Approaches
Controls such as liveness detection and presentation attack detection (PAD) were designed to address spoofing at the point of capture. They remain effective in situations where a user interacts directly with a camera or sensor.
However, not all inputs enter the system that way. Injection attacks, for example, involve inserting digital content directly into a verification pipeline rather than capturing it through a device. In those cases, front-end controls may never be triggered.
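The gap can be sketched in a few lines. In this hypothetical example, the capture path runs PAD before returning a frame, while a back-end endpoint that accepts arbitrary bytes never invokes that check at all:

```python
# Illustrative sketch of the two ingress paths. PAD and liveness run on the
# capture path; an injection attack submits bytes straight to the back end
# and bypasses them entirely. All names here are hypothetical.

def presentation_attack_detection(frame: bytes) -> bool:
    """Stub PAD/liveness check; only runs when a sensor captured the frame."""
    return True  # stand-in: assume a genuine capture passes

def capture_path() -> bytes:
    """Legitimate path: sensor capture, then front-end controls."""
    frame = b"raw sensor frame"
    if not presentation_attack_detection(frame):
        raise ValueError("presentation attack suspected")
    return frame

def backend_verification(image: bytes) -> None:
    """If this endpoint accepts arbitrary bytes, injected synthetic media
    arrives here without ever triggering the capture-path checks above."""
    ...  # matching would run here regardless of the image's origin
```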
Industry guidance has started to reflect this evolution. Programs such as the FIDO Alliance Face Verification Program now test for deepfake and spoofing resistance, while government-led efforts like the Department of Homeland Security’s Remote Identity Validation Rally (DHS RIVR) are exploring how systems perform against both presentation attacks and more complex scenarios.
Considerations for Government and Regulated Environments
Government systems and regulated environments rely on biometric standards to support interoperability and security across agencies and jurisdictions. To ensure these systems remain effective against synthetic identities, agencies should:
- Evaluate detection capabilities across contexts: Assess whether existing tools remain effective across different operational environments.
- Strengthen end-to-end workflows: Review how verification is structured, including how inputs are validated before and during processing (see the sketch after this list).
- Align systems with evolving standards: Ensure system controls reflect updates like NIST SP 500-290e4 to maintain consistency while adjusting to new technologies and threats.
- Design for constrained environments: Support identity verification in field operations or low-connectivity settings where centralized systems may not be accessible.
- Leverage independent evaluation programs: Use third-party testing programs to assess how identity systems perform against real-world threats such as deepfakes and injection attacks.
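As a rough illustration of the second item above, the following sketch assumes a workflow where provenance and synthetic-media checks run before the matcher. Every function is a hypothetical stand-in for a deployment-specific component, not a prescribed architecture:

```python
# A minimal end-to-end sketch, assuming inputs are validated before
# matching rather than trusting capture-side checks alone.

def validate_provenance(channel: str) -> bool:
    """Reject inputs that arrive outside trusted capture channels."""
    return channel == "attested-device"  # stub policy, not a real standard

def screen_for_synthetic(image: bytes) -> bool:
    """Stub synthetic-media screen; a real detector would go here."""
    return False  # stand-in only

def match(image: bytes, stored_template: bytes) -> bool:
    """Stub biometric comparison; placeholder logic only."""
    return image == stored_template

def verify_identity(image: bytes, channel: str, stored_template: bytes) -> bool:
    # Input checks run before matching: a match alone is not a pass.
    if not validate_provenance(channel):
        return False  # likely injection; never reaches the matcher
    if screen_for_synthetic(image):
        return False  # non-biometric content is handled separately
    return match(image, stored_template)
```

The design point is simply ordering: input validation happens before the matcher ever runs, so a convincing synthetic image fails early rather than scoring a match.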
Together, these considerations emphasize the need to evaluate identity verification beyond individual controls.
Rethinking Verification Workflows
These developments point to a broader shift in how identity verification is approached.
Standards for identity verification must evolve to address how inputs are introduced, processed and validated before matching occurs. Agencies, in turn, need to reassess where their systems assume inputs are valid. Controls that focus only on capture may leave gaps as synthetic media becomes easier to generate and insert into digital systems.
Strengthening identity workflows does not require abandoning existing models, but it does require expanding them. Establishing confidence in the input itself is becoming just as important as confirming the identity.