In the ongoing battle against harmful misinformation, the efforts of major platforms often come under scrutiny. Meta (formerly Facebook) made headlines in late 2020 for taking steps to curb COVID-19 vaccine misinformation, but a recent study published in Science Advances questions how effective those measures were. It finds that while Meta's content removals did reduce the overall volume of anti-vaccine content on the platform, they may have merely shifted engagement elsewhere rather than reducing it outright.
Researchers gathered data from CrowdTangle, tracking content from public pages and groups categorized as "pro-" and "anti-"vaccine sources. Their findings suggest that anti-vaccine influencers have become adept at evading enforcement on Facebook, exploiting the platform's built-in content amplification features and cross-platform networks. They adapt constantly, altering keywords, using euphemisms, and steering their followers to newer, less regulated groups or platforms.
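To make the methodology concrete, here is a minimal sketch of what pulling posts from a tracked list of pages and tallying engagement might look like against CrowdTangle's public API (which Meta has since shut down). The `/posts` endpoint, parameters, and response fields follow CrowdTangle's published documentation; `API_TOKEN` and `LIST_ID` are placeholders, the date windows are illustrative, and none of this reflects the study authors' actual code or categorization scheme.

```python
import requests

API_TOKEN = "YOUR_CROWDTANGLE_TOKEN"  # placeholder credential
LIST_ID = "12345"  # hypothetical saved list of vaccine-related pages/groups


def fetch_posts(start_date: str, end_date: str, count: int = 100) -> list[dict]:
    """Fetch public posts from the tracked list within a date window."""
    resp = requests.get(
        "https://api.crowdtangle.com/posts",
        params={
            "token": API_TOKEN,
            "listIds": LIST_ID,
            "startDate": start_date,
            "endDate": end_date,
            "count": count,
            "sortBy": "date",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]["posts"]


def total_engagement(posts: list[dict]) -> int:
    """Sum likes, shares, and comments from each post's 'actual' statistics block."""
    total = 0
    for post in posts:
        stats = post.get("statistics", {}).get("actual", {})
        total += (
            stats.get("likeCount", 0)
            + stats.get("shareCount", 0)
            + stats.get("commentCount", 0)
        )
    return total


# Illustrative comparison of engagement before and after a policy change,
# echoing (but not replicating) the study's pre/post framing.
pre = total_engagement(fetch_posts("2020-11-01", "2020-12-03"))
post = total_engagement(fetch_posts("2020-12-03", "2021-01-04"))
print(f"Engagement before: {pre}, after: {post}")
```

A comparison like this is deliberately crude; the study's actual analysis accounted for trends over time rather than simple before/after totals.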
The study highlights that anti-vaccine content is agile enough to outpace policy shifts at multiple levels. When one anti-vaccine page is removed, interconnected groups run by the same influencers can easily step in. This structure also helps members of banned groups find new spaces or platforms more accepting of their content. Anti-vaccine influencers actively encourage engagement, asking for likes and shares to boost their visibility.
David Broniatowski, one of the study’s authors and an associate professor of engineering at George Washington University, emphasized the persistent demand for this content. Even when specific articles or posts are removed, the study found that engagement remains largely unchanged, demonstrating that the audience still seeks such information.
Furthermore, when mainstream platforms like Twitter, YouTube, and Facebook began clamping down on anti-vaccine content, researchers observed an increase in users redirecting to alternative social media platforms like BitChute, Rumble, and Gab, which often cater to far-right and white supremacist communities facing bans on mainstream sites.
This study underscores the interconnected nature of misinformation, revealing that it’s not isolated but part of a broader network that includes conspiracy theories, such as QAnon, and political movements. Addressing this issue, the researchers suggest, requires rethinking the entire information ecosystem rather than relying solely on takedowns and account bans.
While the study doesn’t propose specific policy recommendations, it suggests a potential approach: treating Facebook’s infrastructure more like a regulated building, guided by scientific and safety-informed codes, rather than an open mic night with a code of conduct for performers.
Broniatowski suggests convening stakeholders from platforms, civil society organizations, and government entities to engage in a consensus-building discussion aimed at creating a safer and more enjoyable online experience.
However, it’s important to note that this study provides only a limited snapshot of the ecosystem, focusing on public Facebook pages and groups strongly associated with vaccine-related keywords. It does not cover the many private and hidden groups, or the public-facing pages that use coded language to discuss these topics discreetly.
The study’s data covers the period from November 2020 to February 2022, and the fight against COVID-19 vaccine misinformation has continued to evolve since then. Other platforms, like TikTok, have gained popularity for spreading such content, and Facebook has adjusted its rules on COVID-19 misinformation in certain regions.
In a related development, Meta-owned Threads is reportedly blocking some pandemic-related search terms, which raises questions about the future of misinformation control in online spaces.