
Fake News, Real Harm: How Digital Verification Can Put a Stop to Social Media Bots

As the COVID-19 pandemic continues to claim lives worldwide, a related contagion is spreading like wildfire through social media: fake news. A recent study published in the Journal of Medical Internet Research found that nearly 45% of tweets about the coronavirus were posted and shared by bot accounts. Tragically, the fake news shared through social media has real-world consequences, even sparking the first wave of the anti-mask movement. To prevent bots from continuing to wreak havoc, social media companies need to go beyond the existing algorithms designed to detect bots and invest in phone-centric technology to verify users.

Thanks to the advent of new features like Facebook News, where users can read, share, and comment on personally curated articles, a growing number of people are turning to social media for their news. More than 50% of US adults get their news from social media. Unfortunately, because social media websites have been slow to invest in fact-checking and bot detection, their platforms have bred a toxic atmosphere where misinformation is constantly disseminated among their users. So what are bots?

Bots are programmed by bad actors, including internet trolls and Russian hackers, to establish fake profiles using public information and generate posts spreading misinformation, rumors, or slander intended to sway public opinion. To legitimize their posts, bad actors will establish “bot farms” that create and manage thousands of fake accounts to inflate follower numbers and increase the likelihood of trending. Because of their sheer number (nearly 15% of accounts on apps like Twitter), bots can exert significant influence over the spread of fake news and political polarization.

Social media companies have taken steps to end the era of misinformation on their platforms by improving their bot detection technology. Facebook removed more than 4.5 billion bot accounts in 2020, Twitter challenged more than 10 million accounts responsible for manipulating information regarding COVID-19, and Instagram recently introduced a new method of reducing bot activity with the aid of algorithms designed to flag behavior deemed “inauthentic.” Indicators of inauthentic behavior include signs of automation, location spoofing, and coordinated activity with other fake accounts. If an account is flagged for inauthenticity, Instagram will require the account holder to verify their information, storing the account ID and deleting it within 30 days of the review’s completion. Unfortunately, while social media companies are making some strides, bots continue to proliferate, and fake news continues to spread.
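To make the three indicator families concrete, here is a minimal, hypothetical sketch of how a platform might combine them into a single inauthenticity score. The thresholds, weights, and the `AccountActivity` fields are all illustrative assumptions, not Instagram's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    posts_per_hour: float        # sustained posting rate
    reported_locations: set      # distinct regions seen in one short window
    shared_content_hashes: set   # hashes of this account's posted content
    network_content_hashes: set  # hashes pushed by a cluster of related accounts

def inauthenticity_score(a: AccountActivity) -> float:
    """Combine the three indicator families into a 0-1 score (illustrative weights)."""
    score = 0.0
    # Sign of automation: humans rarely sustain many posts per hour for long.
    if a.posts_per_hour > 5:
        score += 0.4
    # Location spoofing: logins from several distant regions in a single window.
    if len(a.reported_locations) > 2:
        score += 0.3
    # Coordinated activity: heavy overlap with content pushed by other accounts.
    overlap = len(a.shared_content_hashes & a.network_content_hashes)
    if a.shared_content_hashes and overlap / len(a.shared_content_hashes) > 0.5:
        score += 0.3
    return score

suspect = AccountActivity(
    posts_per_hour=12.0,
    reported_locations={"US", "RU", "BR"},
    shared_content_hashes={"h1", "h2", "h3", "h4"},
    network_content_hashes={"h1", "h2", "h3"},
)
print(inauthenticity_score(suspect))  # flags all three indicator families
```

A real system would learn these weights from labeled data rather than hard-code them, which is part of why, as discussed below, detection keeps falling behind bot development.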

Although Facebook, Instagram, and Twitter are finally demonstrating some progress in preventing malicious bot activity with improved bot detection algorithms, they are still miles behind bot development technology. A research study conducted by Harvard’s Berkman Klein Center for Internet and Society found, for instance, that the gold-standard bot detection algorithm continues to get worse at filtering bots over time. As automated bots become increasingly sophisticated, they can emulate human-like behavior by piecing together a complex profile from online information, conversing with other people, and liking and commenting on people’s posts. Today, bots are so sophisticated that even humans struggle to determine which accounts are real and which are bots. To succeed in the war against misinformation, social media companies must pivot from bot detection to bot prevention.

One way to prevent the spread of misinformation is to prevent bots from creating accounts in the first place by harnessing the power of phone-centric technology. Social media companies should consider investing in technology that verifies the ownership of an account through tokenized information linked to a user’s mobile phone. By focusing on bot prevention, social media companies can prevent not just the spread of fake news but also the spread of COVID-19.
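As a rough illustration of what tokenized, phone-linked verification could look like, the sketch below binds an account to a verified phone number with an HMAC token, so sign-ins can be checked without storing the raw number next to the account. This is a hypothetical design for illustration only, not Prove's actual method; the function names, the key handling, and the phone numbers are all invented:

```python
import hmac
import hashlib
import secrets

# Hypothetical server-side secret; a real deployment would keep this in an HSM.
SERVER_KEY = secrets.token_bytes(32)

def issue_phone_token(phone_number: str, account_id: str) -> str:
    """Bind an account to a phone number verified at sign-up via an HMAC token."""
    msg = f"{phone_number}:{account_id}".encode()
    return hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()

def verify_phone_token(phone_number: str, account_id: str, token: str) -> bool:
    """Recompute the token at sign-in; a bot farm without access to the
    phone number it claimed at registration fails this check."""
    expected = issue_phone_token(phone_number, account_id)
    return hmac.compare_digest(expected, token)

tok = issue_phone_token("+15551230000", "acct-42")
print(verify_phone_token("+15551230000", "acct-42", tok))  # True
print(verify_phone_token("+15551230000", "acct-99", tok))  # False: token is bound to one account
```

Because each account requires possession of a distinct, carrier-verified phone number, creating thousands of fake accounts becomes far more expensive than scripting new email addresses.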

To learn about Prove’s identity solutions and how to accelerate revenue while mitigating fraud, schedule a demo today.

