The U.S. Department of the Treasury released a report stating that artificial intelligence (AI) is contributing to an increase in financial fraud. According to the agency, AI allows fraudsters to mimic a target's trusted contacts in speech or video, convincing the target to give the fraudster access to financial accounts or information.

While larger financial firms typically have the resources to use AI as a defense, smaller firms often do not. Even organizations that can deploy AI report that doing so is challenging, since adopting AI technology as a defense can require collaboration across multiple teams and business units, including technology, legal and compliance. Because of these challenges, many financial firms are slow to deploy AI technology as a defense.

“The largest barrier for smaller financial institutions in utilizing AI for fraud detection is not model creation but with quality and consistent (standardized) fraud data,” says Narayana Pappu, CEO at Zendata. He suggests that entities like financial institutions can act as nodes to aggregate fraud data, and that data standardization and quality assessment would be a ripe opportunity for a startup to offer as a service. Techniques such as differential privacy can be used to share information between financial institutions without exposing individual customer data, a concern that might otherwise prevent smaller financial institutions from sharing information with their peers.
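To illustrate the differential-privacy idea Pappu mentions, the sketch below uses the Laplace mechanism, a standard way to release an aggregate statistic (here, a count of flagged accounts) with a formal privacy guarantee. The function names, the epsilon value, and the scenario are illustrative assumptions, not anything prescribed by the Treasury report or by Zendata:

```python
import math
import random


def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy.

    Adding Laplace(sensitivity / epsilon) noise bounds how much the
    presence or absence of any single customer can shift the output
    distribution, so the shared number reveals the aggregate trend
    without exposing any individual record.
    """
    return true_count + laplace_noise(sensitivity / epsilon)


# Hypothetical example: a bank shares roughly how many of its accounts
# matched a fraud pattern, without revealing any specific customer.
flagged_accounts = 137
shared_value = private_count(flagged_accounts, epsilon=0.5)
```

Smaller epsilon means stronger privacy but noisier shared values, so participating institutions would need to agree on a budget that keeps the aggregated fraud signal useful.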

The report also found that, within the financial sector, there is a lack of consistency in how AI is defined. This lack of clarity may be harming financial organizations, regulators and clients, so the report recommends the creation and adoption of a common AI lexicon.

Marcus Fowler, CEO of Darktrace Federal, says, “As outlined in the U.S. Department of the Treasury’s latest report, the increasing adoption of AI poses both increasing opportunities and increasing risk for organizations. The tools used by attackers and defenders — and the digital environments that need to be defended — are constantly changing and increasingly complex. Specifically, the use of AI among attackers is still in its infancy and while we don’t know exactly how it will evolve, we know it is already lowering the barrier to entry for attackers to deploy sophisticated techniques, faster and at scale. It will take a growing arsenal of defensive AI to effectively protect organizations in the age of offensive AI. Luckily, defensive AI has been protecting against sophisticated threat actors and tools for years.

“Financial services organizations have historically been a top target for threat actors, given the very nature of their operations. In response, these organizations often have the most advanced and sophisticated cybersecurity programs, with many starting to leverage AI for cybersecurity years ago, according to the report. AI represents the greatest advancement in truly augmenting our cyber workforce, and these organizations serve as an excellent example of how AI can be effectively applied to security operations to increase agility and harden defenses against novel threats. We encourage these organizations to facilitate open conversations around their successes and failures deploying AI to help other organizations across sectors accelerate their adoption of AI for cybersecurity.

“Public and private sector cooperation and partnership will be crucial to achieving AI safety globally. Initiatives like the U.S. Department of the Treasury’s report are instrumental in helping organizations move even faster to realize the positive opportunities and benefits of AI. This report serves as a conversation starter for all organizations — not just financial services — to think about their own adoption and approach to AI and how they can align AI efforts with broader cybersecurity goals and business initiatives.”