Fake News, Real Harm: How Digital Verification Can Put a Stop to Social Media Bots

Prove
September 24, 2021

As the COVID-19 pandemic continues to claim lives worldwide, another related contagion is spreading like wildfire through social media: fake news. A recent study published in the Journal of Medical Internet Research found that nearly 45% of tweets about the coronavirus were posted and shared by bot accounts. The fake news shared through social media has tragic real-world consequences, even helping to spark the first wave of the anti-mask movement. To prevent bots from continuing to wreak havoc, social media companies need to go beyond the existing algorithms designed to detect bots and invest in phone-centric technology to verify users.

Thanks to the advent of new features like Facebook News, where users can read, share, and comment on personally curated articles, a growing number of people are turning to social media for their news; more than 50% of US adults now get their news this way. Unfortunately, because social media platforms have been slow to invest in fact-checking and bot detection, they have bred a toxic atmosphere where misinformation is constantly disseminated among their users. So what exactly are bots?

Bots are automated accounts programmed by bad actors, including internet trolls and Russian hackers, to establish fake profiles using publicly available information and to generate posts that spread misinformation, rumors, or slander intended to sway public opinion. To legitimize these posts, bad actors set up “bot farms” that create and manage thousands of fake accounts to inflate follower counts and increase the likelihood of trending. Because of their sheer number (nearly 15% of accounts on apps like Twitter), bots can exert significant influence over the spread of fake news and political polarization.

Social media companies have taken steps to end the era of misinformation on their platforms by improving their bot detection technology. Facebook removed more than 4.5 billion bot accounts in 2020, Twitter challenged more than 10 million accounts responsible for manipulating information about COVID-19, and Instagram recently introduced a new method of reducing bot activity with the aid of algorithms designed to flag behavior deemed “inauthentic.” Indicators of inauthentic behavior include signs of automation, location spoofing, and coordinated activity with other fake accounts. If an account is flagged, Instagram requires the account holder to verify their information, storing the submitted ID and deleting it within 30 days of completing the review. Unfortunately, while social media companies are making some strides, bots continue to proliferate, and fake news continues to spread.
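To make those three indicators concrete, here is a minimal, hypothetical sketch of how a platform might score an account against them. The field names, thresholds, and the looks_inauthentic helper are illustrative assumptions, not Instagram’s actual detection logic.

```python
from dataclasses import dataclass


@dataclass
class AccountActivity:
    """Simplified, hypothetical view of one account's recent behavior."""
    posts_per_hour: float          # posting rate; very high rates suggest automation
    profile_location: str          # location claimed in the profile
    network_location: str          # location inferred from network signals
    shared_content_overlap: float  # fraction of posts identical to other flagged accounts


def looks_inauthentic(activity: AccountActivity,
                      max_posts_per_hour: float = 30.0,
                      overlap_threshold: float = 0.8) -> bool:
    """Flag an account for a verification challenge when simple heuristics trip.

    Checks the three indicators named in the text: signs of automation,
    location spoofing, and coordinated activity. Thresholds are arbitrary examples.
    """
    automated = activity.posts_per_hour > max_posts_per_hour
    location_spoofed = activity.profile_location != activity.network_location
    coordinated = activity.shared_content_overlap > overlap_threshold
    return automated or location_spoofed or coordinated
```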

Although Facebook, Instagram, and Twitter are finally demonstrating some progress in preventing malicious bot activity with improved bot detection algorithms, they are still miles behind bot development technology. A research study conducted by Harvard’s Berkman Klein Center for Internet and Society found, for instance, that the gold-standard bot detection algorithm has grown worse at filtering out bots over time. As automated bots become more sophisticated, they can emulate human behavior by piecing together complex profiles from online information, conversing with other people, and liking and commenting on their posts. Today, bots are so sophisticated that even people struggle to determine which accounts are real and which are bots. To succeed in the war against misinformation, social media companies must pivot from bot detection to bot prevention.

One way to stop the spread of misinformation is to keep bots from creating accounts in the first place by harnessing the power of phone-centric technology. Social media companies should consider investing in technology that verifies the ownership of an account through tokenized information linked to a user’s mobile phone. By focusing on bot prevention, social media companies can prevent not just the spread of fake news but also the spread of COVID-19.
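The article does not spell out how a phone-centric check would work; as one illustrative assumption, the sketch below gates account creation on proof of possession of a mobile number via a one-time code. The send_sms and create_account helpers are hypothetical stand-ins for an SMS gateway and the platform’s signup logic, and a production system could rely on carrier-level signals instead of SMS.

```python
import secrets

# In-memory store of pending challenges, keyed by phone number (demo only;
# a real deployment would use an expiring server-side store).
pending_challenges: dict[str, str] = {}


def send_sms(phone_number: str, message: str) -> None:
    """Hypothetical stand-in for an SMS gateway call."""
    print(f"SMS to {phone_number}: {message}")


def create_account(phone_number: str) -> None:
    """Hypothetical stand-in for the platform's real account-creation logic."""
    print(f"Account created for verified number {phone_number}")


def start_phone_verification(phone_number: str) -> None:
    """Generate a one-time code bound to the phone number and deliver it."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    pending_challenges[phone_number] = code
    send_sms(phone_number, f"Your verification code is {code}")


def complete_signup(phone_number: str, submitted_code: str) -> bool:
    """Create the account only if the user proves possession of the phone."""
    expected = pending_challenges.pop(phone_number, None)
    if expected is None or not secrets.compare_digest(expected, submitted_code):
        return False  # bots that cannot pass the possession check get no account
    create_account(phone_number)
    return True
```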

To learn about Prove’s identity solutions and how to accelerate revenue while mitigating fraud, schedule a demo today.

