Last year, in an attempt to address the growing problem of fraud targeting taxpayers, the Internal Revenue Service (IRS) signed a two-year, $86 million contract with identity verification company ID.me to provide facial recognition services for users accessing IRS services online. ID.me’s solution was rolled out in stages starting in November 2021, accompanied by a single, brief press release from the IRS that was met with indifference by the public.

Their apathy didn’t last long. By January, the IRS’s implementation of ID.me’s solution received widespread criticism from advocacy groups and bipartisan legislators who maintained that forcing taxpayers to provide biometric data was an invasion of privacy. Specifically, critics took issue with the fact that the rollout did not allow taxpayers to opt in; the IRS was effectively forcing people to use a system for verifying and authenticating their identity without giving them an alternative method for doing so.

The controversy came to a head on Feb. 7, when the IRS announced it would “transition away from” ID.me’s system. Although the statement seemed to indicate the agency would abandon the engagement altogether, the IRS announced two weeks later that the company’s solution would continue to be offered as one of several authentication options, albeit with significant changes. To begin with, users would no longer be required to submit a selfie or biometric data. Instead, taxpayers who opted in to verification by ID.me would have their identity confirmed through a video call with a live agent. In addition, ID.me agreed to destroy any biometric data it had already collected.

While ID.me avoided a complete split with the IRS, there’s no telling how deeply or for how long the debacle will hinder prospects for the company and its technology. Already, plans to deploy ID.me’s technology in several states are receiving pushback from activists and constituents, and lawmakers in D.C. are pushing for other government agencies that use ID.me to find alternatives.

The IRS was in the process of rolling out the technology to combat Stolen Identity Refund Fraud (SIRF), its most pervasive type of fraud. In the 2013 tax filing season, over 5 million tax returns were filed using stolen identities, totaling $30 billion in refunds. But the IRS failed to effectively communicate these eye-popping statistics as the basis for their decision to require facial verification. They also neglected to clarify to taxpayers that the selfies collected in the authentication process would be stored in ID.me’s cloud databases, not the government’s. Disclosing this information at the outset and providing an opt-out option is imperative to preserving user trust. And the government should have been offered the choice to store this information in its own cloud-based databases.

It was also reported that ID.me would check the collected selfies against other selfies in its database. The purpose was to flag faces that had been previously submitted with different identity information, a form of one-to-many (1:many) matching typically associated with surveillance by law enforcement. The IRS and ID.me made a critical mistake in failing to disclose their use of 1:many facial recognition. Critics equate 1:many facial recognition with “Big Brother”-type privacy violations, a viewpoint that resonates with many Americans in today’s fraught social and political climate. Regardless of how accurate this depiction is, the IRS should have known better than to spring ID.me’s 1:many facial recognition technology on taxpayers without allowing them the choice to opt in.

The path the IRS chose may have been a case of “too much, too soon.” Presumably, the IRS and ID.me went with 1:many matching to try to verify users’ identities AND detect fraudulent activity. Some of the blowback could have been avoided had the IRS and ID.me instead focused on the former, using one-to-one (1:1) facial biometric matching. 1:1 matching verifies that an individual is who they claim to be, a claim made by the individual performing the action, under their own control and consent. This would be a more viable option for government agencies and other groups that need to be particularly sensitive to privacy rights and public perception.
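The distinction between the two modes can be made concrete with a minimal sketch. This is an illustrative toy, not a description of ID.me's actual system: the use of cosine similarity over face embeddings, the 0.8 threshold, and all names here are assumptions for demonstration. 1:1 verification compares a new selfie only against the template the user enrolled; 1:many identification searches it against every face on file.

```python
# Toy contrast of 1:1 verification vs. 1:many identification.
# Embeddings, threshold, and function names are illustrative assumptions,
# not details of any real vendor's system.
import numpy as np

THRESHOLD = 0.8  # similarity cutoff; real systems tune this carefully


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def verify_1_to_1(probe: np.ndarray, enrolled: np.ndarray) -> bool:
    """1:1 matching: compare the selfie only against the single template
    the user themselves enrolled -- answers 'am I who I claim to be?'"""
    return cosine(probe, enrolled) >= THRESHOLD


def search_1_to_many(probe: np.ndarray, database: dict) -> list:
    """1:many matching: search the selfie against every stored face --
    answers 'has this face appeared under other identities?'"""
    return [user_id for user_id, emb in database.items()
            if cosine(probe, emb) >= THRESHOLD]
```

The privacy profiles differ accordingly: 1:1 verification only ever touches the template the user consented to enroll, while the 1:many search necessarily scans everyone else's biometric data on every lookup, which is what drew the "surveillance" comparison.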

If large government agencies such as the IRS don’t understand the technology they procure, there’s a good chance they’ll fumble both the deployment of the solution and the crucial outreach that accompanies it, missteps that will just end up reinforcing the public’s mistrust. Had the IRS fully understood the identity authentication technology they selected, they might have foreseen the controversy and executed an information campaign leading up to the rollout. A robust communications plan touching on ethical concerns, user experience, education, and transparency might have helped to dispel the public’s concerns about facial recognition. 

What can the IRS and other government agencies learn from this?

To protect government services and the data of its citizens from identity fraud and cyberattacks, the White House recently issued mandates for federal government agencies and contractors to implement zero-trust network access strategies that adopt new methods of user authentication. That edict might be tenable for these groups, which are accustomed to strict security measures, but the general public should have the right to choose how they access government services, and accommodating those choices must be part of the government’s policies. In requiring the use of facial authentication without providing other options, the IRS was trying to eliminate, in one sweeping move, a vulnerability that fraudsters had exploited with abandon for years. But especially in today’s fractious environment, the government can’t operate on an “all-or-nothing” basis. People need to be given choices about how they verify their identities online, and when, how and why the government uses their likeness. Private companies provide opt-outs and other privacy options, and the government should do the same.

Had the IRS given users an alternative to ID.me’s facial recognition authentication and executed a decent communications strategy, they may have been surprised by the number of taxpayers who would have been happy to go the ID.me route. Even with just a few lines of text on the login screen explaining how the technology works and why the IRS is using it ($30 billion in fraudulent refunds!), the opt-in rate, while not 100%, would likely have been high. A November 2020 AARP survey of 9,000 Americans over the age of 17 found that 90% had encountered a fraud attempt in the preceding year. And the Federal Trade Commission (FTC) estimates that repairing the damage caused by having your identity stolen takes an average of 200 hours of work over six months. Ask a victim of fraud whether they would prefer to endure that ordeal again or use biometric authentication. I have no doubt which one they’d choose.

Finally, many of the same people who complained about the IRS using ID.me are also fiscal conservatives who bristle at the idea of government waste. Would they be swayed knowing that this technology could protect billions of dollars in tax revenue that would otherwise be lost to criminals?

There are government initiatives that some people will never get on board with, no matter how clear the risks and costs of inaction. But more foresight, care and respect for the end user, coupled with an information campaign reflecting all of those things, might have turned the IRS/ID.me partnership into a positive gamechanger for all stakeholders. The IRS may eventually release figures on the percentage of taxpayers who opt in to the ID.me solution, but those results will almost certainly be lower than they might have been had the implementation been better managed. As it stands, it seems that the IRS and ID.me have learned little from the tumult of the past few months. Though the IRS opted to continue offering ID.me as an authentication option, there has been no accompanying information campaign or improvement to the interface. The implementation continues to be a source of confusion and frustration for taxpayers.