The word “unprecedented” has been used generously over the past four years amid a pandemic, a tumultuous 2020 election and severe geopolitical tension. This summer, however, truly deserves the term: the world has seen two massive security and IT events, one presidential candidate nearly assassinated, and another dropping out of the race mere months before election day. These events are certainly what one would classify as unprecedented, and they serve as a hotbed for misinformation, disinformation or malinformation (MDM) to run rampant.
The prevalence of altered and synthetic material on social media makes it more difficult than ever for users to discern fact from fiction, especially as AI-generated video and audio become more advanced and commonplace. Threat actors know this, and past performance shows they will exploit it. The 2023 MGM hack, carried out through sophisticated vishing, is a prime example.
Threat actors are using advances in generative AI to launch large-scale influence operations that target social media users, broadening the reach of false information. This makes accountability on social media platforms all the more important: personal accountability and literacy about the “facts” presented online, and the responsibility of social media companies to properly identify and remove MDM on their platforms.
How does MDM persist, and can it be stopped?
MDM gets stronger as technology advances, and the barrier to entry for these tools is lowering every day. Unfortunately, deepfakes and generative AI overwhelmingly favor those with malicious intent over those using them for benign purposes. This summer alone, we’ve seen countless cases of rampant MDM, including:
- Donald Trump’s attempted assassination: Countless theories have swirled online about the motives behind the assassination attempt on Trump as well as the actions of the Secret Service. Circulated photos show Trump with his ear uninjured, and others show Secret Service agents smiling after the attempt, all of which have been debunked. Conspiracy theories also run the gamut of the political spectrum, suggesting that either Trump or Biden themselves were behind the attack, among other rabbit-hole theories.
- Joe Biden dropping out of the election race: Following Biden’s exit from the 2024 presidential election, disinformation spread almost immediately about Kamala Harris and who would fill his spot on the Democratic ticket. False claims about Harris include that she is not American and that she is lying about her heritage, among other falsehoods.
- The Microsoft/CrowdStrike global IT outage: Following the global IT outage caused by a CrowdStrike software update on July 19, conspiracy theories tying the event to the 2024 election have been sensationalized online, with some claiming that the outage was a “hoax” or was done intentionally to steal the election for the Democrats. The outage also resurfaced allegations from Trump and others in his camp that CrowdStrike lied about the Russian hack of the DNC in 2016, allegations that have been proven false.
These events happened within nine days of each other. When it comes to MDM, chaos begets chaos, and it has a way of compounding, especially when events happen in quick succession.
MDM can’t be stopped, and it will only get worse as technology advances. But it’s not hopeless. Social media companies are becoming more empowered to combat MDM on their platforms, and they are taking critical steps to keep their users informed as best they can so those users can draw their own conclusions based in fact.
The responsibility of social media companies in combating MDM
These pivotal moments occurred at an important time for social media companies in the age of MDM. In June, the Supreme Court ruled in favor of the Biden administration, finding that the federal government was not censoring or suppressing conservative points of view when it worked with social media companies to remove misinformation. At its core, this case highlights what is often believed to be a blurry line between censorship and MDM, when in reality the distinction is quite clear.
This ruling solidifies the long-standing and ever-important relationship between the government and social media companies to share certified pieces of truth and to correct intentionally false content. Especially in a politically charged climate and an election year, it is more important than ever for social media companies to take an active role in discrediting false information on their platforms that seeks to undermine the political process. As much as social media users have a personal responsibility to vet the information they consume to the best of their ability, technological advances mean that voices and likenesses are easily and often manipulated. Placing the sole burden on users to discern fact from fiction is therefore unrealistic and ends up doing more harm than good.
The events of 2024 will impact MDM for years to come
As technology that generates synthetic video, audio and photos becomes more accessible than ever, it is increasingly difficult to distinguish fact from fiction online. This will only continue, and likely more rapidly than we can predict, as tools become more efficient, more realistic and, unfortunately, more attractive to malicious actors. As a result, the importance of intervention and awareness of this technology’s effects on MDM cannot be overstated.
2024 is a rare moment in history when several major world events are happening at once: geopolitical conflicts unfolding in parallel, more than 60 elections taking place globally, and a deluge of sophisticated cybersecurity attacks with worldwide implications. It’s a perfect circumstance for MDM to spread rapidly, which makes collaboration among the public sector, the private sector and social media users to safeguard certified sources of information especially vital.