The COVID-19 pandemic has spawned an infodemic, a vast and complicated mix of information, misinformation and disinformation.
In this environment, false narratives – that the virus was “planned,” that it originated as a bioweapon, that COVID-19 symptoms are caused by 5G wireless communications technology – have spread like wildfire across social media and other communication platforms. Some of these bogus narratives play a role in disinformation campaigns.
The notion of disinformation often brings to mind easy-to-spot propaganda peddled by totalitarian states, but the reality is much more complex. Though disinformation does serve an agenda, it is often camouflaged in facts and advanced by innocent and often well-meaning individuals.
As a researcher who studies how communications technologies are used during crises, I’ve found that this mix of information types makes it difficult for people, including those who build and run online platforms, to distinguish an organic rumor from an organized disinformation campaign. And this challenge is not getting any easier as efforts to understand and respond to COVID-19 get caught up in the political machinations of this year’s presidential election.
Rumors, Misinformation and Disinformation
Rumors are, and have always been, common during crisis events. Crises are often accompanied by uncertainty about the event and anxiety about its impacts and how people should respond. People naturally want to resolve that uncertainty and anxiety, and often attempt to do so through collective sensemaking. It’s a process of coming together to gather information and theorize about the unfolding event. Rumors are a natural byproduct.
Rumors aren’t necessarily bad. But the same conditions that produce rumors also make people vulnerable to disinformation, which is more insidious. Unlike rumors and misinformation, which may or may not be intentional, disinformation is false or misleading information spread for a particular objective, often a political or financial aim.
Disinformation has its roots in the practice of dezinformatsiya used by the Soviet Union’s intelligence agencies to attempt to change how people understood and interpreted events in the world. It’s useful to think of disinformation not as a single piece of information or even a single narrative, but as a campaign, a set of actions and narratives produced and spread to deceive for political purpose.
Lawrence Martin-Bittman, a former Soviet intelligence officer who defected from what was then Czechoslovakia and later became a professor of disinformation, described how effective disinformation campaigns are often built around a true or plausible core. They exploit existing biases, divisions and inconsistencies in a targeted group or society. And they often employ “unwitting agents” to spread their content and advance their objectives.
Regardless of the perpetrator, disinformation functions on multiple levels and scales. While a single disinformation campaign may have a specific objective – for instance, changing public opinion about a political candidate or policy – pervasive disinformation works at a more profound level to undermine democratic societies.
The Case of the ‘Plandemic’ Video
Distinguishing between unintentional misinformation and intentional disinformation is a critical challenge. Intent is often hard to infer, especially in online spaces where the original source of information can be obscured. In addition, disinformation can be spread by people who believe it to be true. And unintentional misinformation can be strategically amplified as part of a disinformation campaign. Definitions and distinctions get messy, fast.
Consider the case of the “Plandemic” video that blazed across social media platforms in May 2020. The video contained a range of false claims and conspiracy theories about COVID-19. Problematically, it advocated against wearing masks, claiming they would “activate” the virus, and laid the foundations for eventual refusal of a COVID-19 vaccine.
Though many of these false narratives had emerged elsewhere online, the “Plandemic” video brought them together in a single, slickly produced 26-minute video. Before being removed by the platforms for containing harmful medical misinformation, the video propagated widely on Facebook and received millions of YouTube views.
As it spread, it was actively promoted and amplified by public groups on Facebook and networked communities on Twitter associated with the anti-vaccine movement, the QAnon conspiracy theory community and pro-Trump political activism.
But was this a case of misinformation or disinformation? The answer lies in understanding how – and inferring a little about why – the video went viral.
The video’s protagonist was Dr. Judy Mikovits, a discredited scientist who had previously advocated for several false theories in the medical domain – for example, claiming that vaccines cause autism. In the lead-up to the video’s release, she was promoting a new book, which featured many of the narratives that appeared in the “Plandemic” video.
One of those narratives was an accusation against Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases. At the time, Fauci was a focus of criticism for promoting social distancing measures that some conservatives viewed as harmful to the economy. Public comments from Mikovits and her associates suggest that damaging Fauci’s reputation was a specific goal of their campaign.
In the weeks leading up to the release of the “Plandemic” video, a concerted effort to lift Mikovits’ profile took shape across several social media platforms. A new Twitter account was started in her name, quickly accumulating thousands of followers. She appeared in interviews with hyperpartisan news outlets such as The Epoch Times and True Pundit. Back on Twitter, Mikovits greeted her new followers with the message: “Soon, Dr Fauci, everyone will know who you ‘really are’.”
More recently, Sinclair Broadcast Group, which owns or operates 191 local television stations across the country, had planned to air an interview with Mikovits in which she reiterated the central claims in “Plandemic.” In airing this program, Sinclair would have used the cover and credibility of local news to expose new audiences to these false – and potentially dangerous – narratives. The company is reconsidering its decision after receiving criticism; however, the interview was reportedly posted for a time on the company’s website and was aired by one station.
This background suggests that Mikovits and her collaborators had several objectives beyond simply sharing her misinformed theories about COVID-19, including financial, political and reputational motives. However, it is also possible that Mikovits is a sincere believer in the information she was sharing, as were millions of people who shared and retweeted her content online.
What’s Ahead
In the United States, as COVID-19 blurs into the presidential election, we’re likely to see disinformation campaigns continue to be employed for political, financial and reputational gain. Domestic activist groups will use these techniques to produce and spread false and misleading narratives about the disease – and about the election. Foreign agents will attempt to join the conversation, often by infiltrating existing groups and trying to steer them toward their goals.
For example, there will likely be attempts to use the threat of COVID-19 to frighten people away from the polls. Along with those direct attacks on election integrity, there are likely to also be indirect effects – on people’s perceptions of election integrity – from both sincere activists and agents of disinformation campaigns.
Efforts to shape attitudes and policies around voting are already in motion. These include work to draw attention to voter suppression and attempts to frame mail-in voting as vulnerable to fraud. Some of this rhetoric stems from sincere criticism meant to inspire action to strengthen electoral systems. Other narratives, such as unsupported claims of “voter fraud,” seem to serve the primary aim of undermining trust in those systems.
History teaches that this blending of activism and active measures, of foreign and domestic actors, and of witting and unwitting agents, is nothing new. And certainly the difficulty of distinguishing between these is not made any easier in the connected era. But better understanding these intersections can help researchers, journalists, communications platform designers, policymakers and society at large develop strategies for mitigating the impacts of disinformation during this challenging moment.
ABOUT THE AUTHOR
Kate Starbird is an Associate Professor in the Department of Human Centered Design & Engineering (HCDE) and Director of the Emerging Capacities of Mass Participation (emCOMP) Laboratory. She is also adjunct faculty in the Paul G. Allen School of Computer Science & Engineering and the Information School and a data science fellow at the eScience Institute. Kate’s research is situated within human-computer interaction (HCI) and the emerging field of crisis informatics — the study of how information-communication technologies (ICTs) are used during crisis events. Her research examines how people use social media to seek, share, and make sense of information after natural disasters (such as earthquakes and hurricanes) and man-made disasters (such as acts of terrorism and mass shooting events). More recently, her work has shifted to focus on the spread of disinformation in this context. Kate’s research touches on broader questions about the intersection of technology and society—including the vast potential for online social platforms to empower people to work together to solve problems, as well as salient concerns related to abuse and manipulation of and through these platforms and the consequent erosion of trust in information.
This article is courtesy of The Conversation