In late August 2020, a doctored video emerged that appeared to show now President-elect Joe Biden falling asleep during a TV interview. The video was then shared by the Trump administration’s White House Deputy Chief of Staff for Communications, Dan Scavino, and The Wayne Dupree Show co-host Brian Smyth.

It subsequently emerged that the video did not show Joe Biden: his image had been superimposed over footage of singer Harry Belafonte, who had fallen asleep while waiting for a TV interview. By the time Twitter labeled it “manipulated media,” however, the video had reached hundreds of thousands of viewers and added to the “sleepy Joe” narrative.

It is not a stretch in the current information environment to imagine a scenario in which a CEO or board member of a large multinational company suffers a similar deepfake incident. Such an incident could send a company’s share price plummeting.

The risk of a disinformation attack against an enterprise or large organization is increasingly serious and needs to be included in companies’ risk and contingency planning.

Defining terms

In 2018, Now8news published a false story claiming that Coca-Cola’s Dasani brand was contaminated with parasites and that the Food and Drug Administration had accordingly shut down its manufacturing facility.

The story prompted a swift response from Coca-Cola, which stated that the story was “false and inflammatory information on a hoax news website.”

Given that information operations are a relatively new field, there remains some confusion about the difference between misinformation and disinformation. The two can broadly be defined as:

Misinformation: the sharing of inaccurate information, regardless of intent.

Disinformation: the deliberate sharing of inaccurate information in order to spread a false narrative.

In the Coca-Cola case, an accurate categorization would require knowing how the news outlet sourced the article and its motivation for publishing it. If the article was published for nefarious purposes, it would qualify as a disinformation operation.

Assessing the disinformation threat environment

The rapid spread of social media as a provider of news – and the lack of fact-checking that accompanies it – has enabled nefarious actors to weaponize information for their own ends, potentially with serious consequences for companies’ brands, reputations and security.

The tradecraft behind disinformation campaigns is now relatively well known and typically involves one or more of the following tactics: brigading, sock puppets, deepfakes, botnets, and content farms.
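To make these tactics more concrete, the minimal sketch below illustrates one crude signal analysts look for when spotting brigading or sock-puppet activity: near-identical messages posted by many distinct accounts within a short time window. The post data, thresholds, and function name are assumptions for illustration, not any platform’s actual detection method.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical post records: (account, text, timestamp)
posts = [
    ("acct_01", "Dasani is full of parasites, the FDA shut the plant down!", datetime(2018, 3, 1, 9, 0)),
    ("acct_02", "Dasani is full of parasites, the FDA shut the plant down!", datetime(2018, 3, 1, 9, 2)),
    ("acct_03", "Dasani is full of parasites, the FDA shut the plant down!", datetime(2018, 3, 1, 9, 3)),
    ("acct_04", "Enjoying a quiet morning coffee.", datetime(2018, 3, 1, 9, 5)),
]

WINDOW = timedelta(minutes=10)   # assumed coordination window
MIN_ACCOUNTS = 3                 # assumed threshold for "coordinated"

def flag_coordinated(posts, window=WINDOW, min_accounts=MIN_ACCOUNTS):
    """Cluster near-identical posts and flag clusters pushed by many
    distinct accounts within a short window -- a crude brigading signal."""
    clusters = defaultdict(list)
    for account, text, ts in posts:
        clusters[text.strip().lower()].append((account, ts))
    flagged = []
    for text, entries in clusters.items():
        entries.sort(key=lambda e: e[1])
        accounts = {a for a, _ in entries}
        span = entries[-1][1] - entries[0][1]
        if len(accounts) >= min_accounts and span <= window:
            flagged.append((text, sorted(accounts), span))
    return flagged

for text, accounts, span in flag_coordinated(posts):
    print(f"Possible brigading: {len(accounts)} accounts in {span}: {text!r}")
```

Real detection pipelines would combine many such signals – account age, follower graphs, posting cadence – but even simple clustering like this shows why coordinated campaigns leave statistical fingerprints.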

The risk and impact of a disinformation operation will depend on the actor and the incident. But the issue is particularly troublesome because a campaign can be staged without expending significant resources and with limited technical skill.

Threat actor: Nation states

Nation states such as China and Russia could conceivably conduct information operations against multinational companies with the aim of damaging their reputations or lowering their share prices.

The capability of nation states such as Russia to launch disinformation operations is significant. In Russia, media outlets such as Russia Today often reflect the government’s aims and are a potential channel for disinformation. The government is also reportedly connected to the Internet Research Agency, which has conducted strategic, targeted internet troll campaigns.

A coordinated and carefully executed disinformation campaign by a capable actor could have potentially devastating consequences for a company, even in the event of a quick and coordinated response by the targeted organization.

A worrying example emerged in January 2019, when a video purportedly showed a Tesla “self-driving vehicle” hitting a robot prototype at an electronics show. The video soon went viral and led to damaging headlines such as “self-driving Tesla kills robot.”

But it soon emerged that the video was staged: a Russian company had fabricated it as a publicity stunt.

In the coming months, it is increasingly likely that a national government or activist group – perhaps one involved in a trade dispute or other geopolitical issue – will launch a disinformation campaign against a large multinational company to damage or destroy its reputation.

Threat actor: Business competitors

A more likely scenario for many companies is that business competitors would coordinate and direct operations to damage their reputation. The impact of such a campaign would be all the more significant given how quickly information can spread.

It is certainly not a stretch to imagine rival companies promoting fake news about a competitor – whether reporting a fabricated misdeed by a CEO, producing a cheap-fake photo of a board member in a compromising situation, or spreading falsehoods about an organization’s financial performance.

Such stories could quickly spread among a company’s clients or shareholders and carry significant ramifications.

Meanwhile, Giorgio Patrini, the CEO of Deeptrace Labs, has warned over the past year that his firm is aware of the real-time use of deepfakes over webcam, which could enable competitors to attend confidential meetings and then distribute variations of the content online.

Threat actor: Disgruntled former employees

Disgruntled former employees also increasingly have online avenues to retaliate against the companies they left and cause damage.

A strategic Glassdoor post, shared across various social media platforms, can affect a company’s reputation and its ability to recruit new employees in competitive industries. Other variations could include a former employee publishing a tell-all but factually inaccurate op-ed on LinkedIn, or a social media post that distorts internal issues such as the treatment of women and minorities.

An ounce of prevention is worth a pound of cure

Preparing for a disinformation campaign should follow the same planning process used to manage other strategic risks.

The issue is often more challenging for companies because of uncertainty around where managing and responding to disinformation campaigns fits in the organizational chart, given that it straddles reputational, physical and cyber risk.

But an important first step is for a company to identify its own risk profile and how likely it is to experience a disinformation operation.

For instance, a company that is attached to controversial or high-profile political issues, has high-profile employees, or is involved in areas such as climate change, weapons supply, or active support of a presidential candidate faces a higher risk of a disinformation attack.
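One lightweight way to begin that self-assessment is to encode the risk factors above as a weighted checklist. The sketch below is illustrative only; the factor weights and the threshold for an “elevated” rating are assumptions, not an established scoring standard.

```python
# Illustrative risk-profile checklist -- the factors mirror those named
# above; the weights and threshold are assumptions for this sketch.
RISK_FACTORS = {
    "controversial_political_issues": 3,
    "high_profile_employees": 2,
    "climate_change_exposure": 2,
    "weapons_supply": 3,
    "active_candidate_support": 3,
}

def disinformation_risk_score(profile: dict) -> int:
    """Sum the weights of the factors that apply to the company."""
    return sum(w for factor, w in RISK_FACTORS.items() if profile.get(factor))

company = {
    "controversial_political_issues": True,
    "high_profile_employees": True,
    "weapons_supply": False,
}
score = disinformation_risk_score(company)
print("risk score:", score)     # 5 with the assumed weights
print("elevated:", score >= 5)  # assumed threshold for an elevated rating
```

The value of even a crude score like this is that it forces a company to state explicitly which exposures it believes raise its risk, and to revisit those assumptions as its profile changes.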

Companies – like governments – will then benefit from red-teaming scenarios of how disinformation campaigns could affect them and how they should respond.

This would likely entail preparing detailed plans covering who would manage an incident, which senior executives would be involved in the response, pre-established contacts at media organizations, and messaging for shareholders, employees and external news outlets.
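A simple way to keep such a plan exercisable is to encode it as data rather than prose, so it can be version-controlled and rehearsed in drills. The sketch below is a hypothetical illustration; every role, contact, and template path is a placeholder.

```python
# Minimal sketch of a disinformation response playbook as data.
# All names, contacts, and paths are placeholder assumptions.
PLAYBOOK = {
    "incident_manager": "Head of Corporate Communications",
    "response_team": ["CEO", "General Counsel", "CISO", "Head of HR"],
    "media_contacts": ["newsdesk@major-outlet.example"],
    "messaging_templates": {
        "shareholders": "templates/holding_statement_shareholders.md",
        "employees": "templates/internal_faq.md",
        "news_outlets": "templates/press_statement.md",
    },
}

def message_template(audience: str) -> str:
    """Return the pre-approved template path for a given audience."""
    return PLAYBOOK["messaging_templates"][audience]

print(message_template("shareholders"))
```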

Ultimately, as the 2020 presidential election campaign has shown, information campaigns are only likely to become more pervasive in the months and years to come. Given the lack of regulation of social media companies, a range of actors will adopt disinformation tactics to harm businesses.