As if the challenges of safeguarding public health, national security, infrastructure, personnel and cyber and physical assets during a global pandemic were not enough, security professionals now must manage another form of contagion of perhaps graver long-term concern: a pandemic of viral mis- and disinformation that challenges the stability and future of businesses and economies themselves.  

Disinformation is not a modern-day phenomenon. Governments, lobbying groups, issue advocates and political campaigns have long relied on disinformation as a tool for exploitation and control. What has changed is the ease with which disinformation can be generated and disseminated. Advances in technology allow for the increasingly seamless manipulation or fabrication of video and audio, while the pervasiveness of social media enables false information to be swiftly amplified and propagated by responsive audiences. On Twitter, a single post can now send an entire economy, military or local community into chaos.

Consider these recent examples:

After Nike and H&M expressed concern about China’s use of forced Uighur labor in cotton production, China retaliated by claiming that apparel sold by those brands contains “dyes or harmful substances [that] may be absorbed by the body through the skin, mouth, etc. and endanger health.”

Activists circulated a fake press release in which a Portuguese oil company purportedly pledged to abandon operations in northern Mozambique, embrace a “100 percent renewable future,” compensate local communities for the disruption it had caused and create thousands of “new decent jobs.”

Chase Bank faced enormous blowback when Lt. Gen. Michael Flynn took to social media to report that the bank was canceling his credit cards because he posed a “reputational risk” to the banking giant. Flynn’s blistering response and the support he received from his followers prompted Chase to retract the decision and claim it had been an error.

All three cases occurred in the first nine months of 2021. Welcome to the Disinformation Age, where false narratives wielded by nation states, business competitors, extremists and other actors pose an enormous threat to corporations and economies. Existential in nature, disinformation could be the most underappreciated threat that businesses face today, and Chief Security Officers (CSOs) need to take a lead role in assembling corporate defenses and responses. The way citizens, the workforce and, in particular, consumers view facts, define certainty and classify information no longer adheres to traditional rules. These shifts have made it easier for domestic and foreign bad actors to exploit and amplify information to sow discord, push foreign nations’ policy agendas, stoke alarm and ultimately undermine confidence and public trust in corporate America and the core institutions of our democracy.

The new virality of disinformation

The expression, “A lie can travel halfway around the world while the truth is still putting on its boots,” has become a pre-social media relic. Lies now circumnavigate the globe, ricochet incessantly, intermesh inextricably with truth, seed institutional distrust, torpedo brand value, destroy corporate reputations and queue up to repeat the process. And that’s all before truth finds its socks.

Propaganda, misinformation, disinformation and fake news have existed throughout history, and business has long been a target. But yesterday’s threats are almost quaint compared to those that businesses face today. Coke, Disney, McDonald’s and other global brands have long battled nagging rumors and hoaxes. Fact-checking website Snopes, for example, includes a section of rumors about Coke that go back decades, including the canard that a tooth, nail, penny or other hard object will dissolve in a glass of Coke overnight.

Turbocharged by the Internet, social media, mobile computing, cloud infrastructure, globalization, cultural and political polarization, widespread institutional distrust, a global pandemic, resource scarcity and other factors, disinformation today is far more strategic and damaging to companies and economies than ever before.

One of the most prominent recent cases of disinformation targeted e-retailer Wayfair during the summer of 2020. Conspiracy theorists noticed that certain high-end cabinets sold by Wayfair bore female first names and steep price tags and, from those details and other random tidbits, concluded that the cabinets were being used to traffic girls. Worse, a bug in a Russian search engine caused queries for a Wayfair item’s SKU number to return a picture of a young woman, which seemed to confirm the theory. Wayfair’s CEO was bombarded with hate mail and the company drew significant negative attention from the episode.

The power of the meme

In its 2009 report Memetic Warfare, the U.S. Defense Advanced Research Projects Agency (DARPA) declared that memes have the power to change individual and group values and behavior, enhance dysfunctional cultures or subcultures and act as a contagion. At first glance, it is astounding that a simple meme could have such an effect; after all, memes are typically used to share opinions, express satire, offer wry commentary and deliver clever anecdotes.

In this context, memes are image macros: illustrations that quickly convey humor or opinion on social media. Sharing a meme not only amuses a group but also defines it. Memes such as Pepe the Frog have been appropriated as symbols by racist and conspiracist extremist movements, including QAnon, the Boogaloo movement and the Proud Boys, to instill fear and terror. The New Zealand white supremacist terrorist who slaughtered dozens of Muslim worshippers encouraged others in his manifesto to “create memes, post memes, spread memes… Memes have done more for the ethno-nationalist movement than any manifesto.”

Memes now regularly target businesses, especially if they go out on a political or social limb. The social justice-minded ice cream company Ben & Jerry’s was targeted by a meme that read “Ben & Jerry’s unveils new antifa-inspired flavors.” The graphic shows two containers of ice cream: one flavor is “Blood of People You Disagree With,” and the other is “Vegan Coconut Milk Shake (With Bits of Concrete).” The humor embedded in the meme strengthens its critical message.

Another meme in circulation targets Chick-fil-A’s ownership. Next to an image of a man eating a sandwich above the corporate logo, a caption reads: “Funny... The chicken tasted better before I knew it was basted in hate and homophobia.”

The rise of Disinformation-as-a-Service

Disinformation-as-a-Service has commodified fake news. An industry has emerged in which anyone can easily pay to smear a person, organization or institution. A 2020 report issued by the Programme for Democracy & Technology at Oxford University identified 65 companies offering disinformation services. In May 2021, The New York Times reported that social media influencers in France and Germany received proposals from a shadowy source to impugn Pfizer’s COVID-19 vaccine.

In a report published in July 2021, the Network Contagion Research Institute (NCRI), a not-for-profit that identifies misinformation and disinformation, released its analysis of Russian disinformation attacks against Pfizer, Moderna and Johnson & Johnson, attacks intended to boost the fortunes of Russia’s homegrown Sputnik V vaccine. NCRI identified more than 4 million articles published between January 2020 and July 2021 mentioning American pharmaceutical companies involved in COVID-19 vaccine production. “More than half a million are from known disinformation sources, and the content generated from known disinformation outlets generates the most engagement,” the authors conclude. Additionally, the report notes that sources connected to Russia, as well as nonstate disinformation outlets, generate the articles most frequently cited by other articles: a contagion of propaganda and lies.

The CSO’s role

Traditionally, CSOs have had a limited role in protecting the brand, focusing on counterfeits, black and gray market activity, diversion and similar issues. Marketing officers and brand managers have held the reins, allied with the corporate legal team. But the emergent threat has outstripped yesterday’s prevention, detection and response measures. The parallel with cybersecurity is instructive: fifteen or twenty years ago, IT handled cyber threats without consulting the CSO, and CSOs in turn had little understanding of the consequences of not sharing information or data with IT. Successful cyberattacks laid bare the cost of that corporate siloing.

Nevertheless, one can fairly ask whether CSOs can absorb yet another obligation. Their duties cover not only physical security, but often loss prevention, cybersecurity, business continuity and crisis management, health and safety, investigations, white collar crime, travel risk management and more. To carry out these duties, CSOs already partner with many C-suite professionals, business unit leaders, and other executives. The COVID-19 pandemic has piled an even greater burden on already encumbered and, in some cases, undervalued security departments, adding temperature screening, social distance monitoring, sanitization, mask enforcement and other roles.

And yet CSOs may be best suited to take on this hydra-headed threat to the brand. Brand is one of any organization’s two most valuable assets, the other being its people. Enron, Lincoln Savings and Loan, Lehman Brothers, WorldCom and Theranos are examples of companies whose brand value evaporated almost overnight for reasons other than market dynamics. Those failures, however, originated from within, mainly through fraudulent activity. Today’s threat is different: adversaries, some very sophisticated, are targeting businesses from both the inside and the outside.

Specifically, CSOs should consider fostering unconventional partnerships with Chief Marketing Officers and Chief Brand Officers, in addition to legal, finance, HR, IT and other relevant departments. Security professionals have the experience and skill set to identify, prevent and respond to attacks on brand and reputation. For example, some security departments collaborate with their legal and PR teams to provide accurate information on COVID-19, vaccination, face mask use, the effects of 5G and so on.

To deal with metastasizing disinformation, predictably disseminated via social media, security departments need to ingest and analyze vast amounts of data with the help of artificial intelligence and natural language processing. For example, over the summer of 2021, large retailers learned that citizens in Stuart, Florida, were doxing local government officials who had voted to approve the construction of a new Costco store. Retailers could use that information to prevent blowback against their own stores and staff.
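At a very small scale, the sketch below illustrates what that kind of triage can look like: a simple filter that scores collected posts for brand mentions plus risk-related terms so analysts review the highest-risk items first. It is illustrative only; the post data, brand names ("ExampleCorp") and threat keywords are hypothetical, and a production pipeline would rely on social listening platforms and trained NLP models rather than keyword matching.

```python
import re
from dataclasses import dataclass

# Hypothetical, illustrative watchlists; real programs would maintain far
# richer term lists and use trained NLP models instead of keyword matching.
BRAND_TERMS = {"examplecorp"}                          # assumed brand name
THREAT_TERMS = {"boycott", "dox", "protest", "fraud",  # assumed risk signals
                "trafficking", "conspiracy"}

@dataclass
class Post:
    post_id: str
    text: str

def tokenize(text: str) -> set[str]:
    """Lowercase a post and split it into simple word tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))

def risk_score(post: Post) -> int:
    """Score a post: a brand mention is required, each threat term adds one."""
    tokens = tokenize(post.text)
    if not tokens & BRAND_TERMS:
        return 0  # ignore posts that never mention the brand
    return len(tokens & THREAT_TERMS)

def triage(posts: list[Post], threshold: int = 1) -> list[Post]:
    """Return brand-related posts at or above the threshold, riskiest first."""
    flagged = [p for p in posts if risk_score(p) >= threshold]
    return sorted(flagged, key=risk_score, reverse=True)

if __name__ == "__main__":
    sample = [
        Post("1", "ExampleCorp cabinets are part of a trafficking conspiracy!"),
        Post("2", "Just bought a great couch from ExampleCorp."),
        Post("3", "Time to boycott ExampleCorp and dox its executives."),
    ]
    for post in triage(sample):
        print(post.post_id, risk_score(post), post.text)
```

The point of the sketch is the workflow, not the scoring logic: continuous collection, automated filtering and a ranked queue that keeps analysts focused on the handful of posts that could actually harm the brand.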

Data ingestion and analysis can also identify inappropriate placements of algorithm-driven advertising on websites. In 2017, AT&T, Verizon, L’Oréal and Johnson & Johnson had to pull ads from YouTube when they appeared alongside videos promoting terrorism, antisemitism and homophobia.
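A similarly simple cross-reference can surface placements that need to be pulled. The sketch below is again purely illustrative: it assumes the team has exported a placement report and maintains a blocklist of content categories flagged by earlier monitoring, both of which are hypothetical examples rather than any particular ad platform’s data format.

```python
from dataclasses import dataclass

# Hypothetical blocklist of content categories flagged by earlier review.
BLOCKED_CATEGORIES = {"extremism", "hate speech", "conspiracy"}

@dataclass
class Placement:
    ad_id: str
    channel: str
    category: str  # category assigned by the ad platform or a classifier

def placements_to_pull(placements: list[Placement]) -> list[Placement]:
    """Return placements whose content category appears on the blocklist."""
    return [p for p in placements if p.category.lower() in BLOCKED_CATEGORIES]

if __name__ == "__main__":
    report = [
        Placement("ad-001", "CookingWeekly", "food"),
        Placement("ad-002", "FringeNewsNow", "conspiracy"),
    ]
    for p in placements_to_pull(report):
        print(f"Pull {p.ad_id}: placed on {p.channel} ({p.category})")
```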

Artificial intelligence can also be used to unmask deepfakes and expose targeted impersonations. The potential for deepfakes to destroy brand value is enormous; it is all but inevitable that a deepfake will emerge in which a CEO appears to say something abhorrent, troubling or reprehensible. Audio spoofing has already been used for monetary gain: in 2019, criminals used AI-based software to impersonate the voice of the chief executive of a German parent company, urging the CEO of its British subsidiary, an energy firm, to immediately transfer €220,000 to a specific account.

Security professionals should contemplate a leadership role in helping organizations identify sources of disinformation that target their brand — or take aim at their industry, partners, boards, employees, executives, customers or even competitors — and contribute to the response. While marketing staff should take the lead with counter-narratives and the legal department may have a role in seeking redress in the courts, security professionals have singular investigative skills and the contacts with law enforcement to locate and shut down (or at least divert) disinformation purveyors.

As the experts in risk management, security professionals should address disinformation as they would other serious threats to the enterprise. They should identify potential adversaries, issues and political stances associated with the business and its leadership, controversial business partners or spokespeople, areas of operation with sensitive issues, potentially questionable business practices and so on. Many brands aim to show their “authenticity” by associating themselves with causes such as abortion/right to life, sustainability, gun ownership, immigration, geopolitics and marriage equality. Security should have a prominent role in fending off the inevitable political and, more concerning, terrorist threats to company staff and executives.

The challenge is especially acute for businesses in the United States and Canada, whose very existence relies on consumer trust. Yet consumer confidence dwindles with every new uncorroborated rumor, every groundless conspiracy theory, every fragment of muddled fact intended to persuade citizens that government, corporations and other institutions, both private and public, are not to be trusted.

We’re in a new era in which targeted disinformation attacks by everyone from fake-news generators, pundits and issue advocates to business competitors and nation state actors are overtaking so-called traditional threats. When the Global Situation Room surveyed 50 former U.S. ambassadors about the top worldwide risks to corporations, disinformation eclipsed issues such as human rights, poverty and climate change. Adversarial social media activity ranked as the fifth most significant threat to organizations in Kroll’s Global Fraud and Risk Report 2019-2020, tied with internal fraud and coming in above IP theft, counterfeiting and corruption.

As disinformation persists, corporations must address these deceptions in a more informed and systematic way. But first they need to recognize, acknowledge and heed what was once benign marketplace ‘chatter.’