Social Media Spam Bots and Fake Engagement

Social Media Article
20 mins

When social media posts are awash with fake engagement, it becomes very hard to tell what we're getting right and what we're getting wrong in our social media strategy. This artificial activity, often generated by automated accounts, skews social data and hinders our ability to understand genuine audience reactions. It might seem like a good thing on the surface, and who could complain about all those easily earned likes and comments? But we would much rather have a modest number of great, genuine engagements.

This article explores the modern landscape of social media bots, which have evolved from simple spam algorithms into sophisticated AI-driven entities. 

What Are Social Media Bots and Why Do They Exist?

At their core, social media bots are algorithms linked to user profiles, designed to automatically generate engagement with other people's content. This activity exists to earn the bot operators followers, sales leads, and ultimately, money. This fundamental motivation has remained constant, driving the creation of ever more convincing automated profiles.

Bots will often specifically target content or users with some connection to whatever it is they are ultimately trying to sell. So Target Internet's page is a magnet for B2B and digital marketing bots, whilst singletons might find themselves targeted by bots posing as remarkably amorous social users with supermodel looks. While it is clear many users can be fooled, recent data on exactly how many mistake bots for real people is unavailable; the most relevant major study on public bot awareness, from the Pew Research Center, dates back to 2018.

The Scale of the Problem: How Many Bots Are Really Out There?

Quantifying the bot population is a persistent challenge, and the picture has become more complex since early estimates, such as one from 2015 suggesting 7% of Twitter profiles were bots. More recent figures vary wildly depending on the source. In June 2022, Twitter's official estimate (the company was later renamed X Corp) stated that fewer than 5% of its monetisable daily active users were false or spam accounts. Elon Musk publicly disputed this around the same time, suggesting the figure could be 20% or even 'much higher'.

Other platforms offer their own data. In the final quarter of 2024, Meta reported that fake accounts made up an estimated 3% of Facebook's worldwide monthly active users, and that it had taken action against 1.4 billion fake accounts in that period alone. Compiling a complete, independent picture is difficult, as restricted access to platform APIs since 2022 has severely hampered the ability of university researchers to conduct new, large-scale studies across major platforms like X, Instagram, and TikTok.

The Evolution of Bots: From Simple Algorithms to Sophisticated AI

The modern bot is a far cry from its simpler predecessors. The rise of generative AI has led to the creation of highly convincing and evasive automated accounts. These new bots can create multimodal content and mimic human behaviour so effectively that they represent a new frontier in the fight against fake engagement.

The scale of this technological arms race is stark. A 2024 study revealed that all eight major social platforms tested failed to detect and prevent the operation of these advanced, AI-created bots during experiments. Furthermore, other research from 2024 found that even commercial anti-bot services could be evaded with alarming frequency, with evasion rates against two popular services measured at 44.56% and 52.93%. This highlights a significant gap between the capabilities of the latest bots and current enforcement mechanisms.

The Real-World Impact of Misinformation and Influence Campaigns

Beyond skewing engagement metrics for marketers, sophisticated bots are now key players in spreading misinformation and conducting coordinated influence campaigns. Their impact is felt in everything from politics to geopolitics.

For instance, a May 2024 analysis of the U.S. election found that within its sample, 15% of X accounts praising Donald Trump and criticising Joe Biden were identified as fake. In the same analysis, 7% of accounts praising Joe Biden and criticising Donald Trump were also found to be fake. This tactic is also prevalent in global events. During the 2023 Israel-Hamas conflict, numerous investigative reports from organisations like NBC News and ProPublica documented widespread, coordinated bot activity used to amplify certain narratives and spread disinformation, illustrating their potent role in shaping public discourse.

The Fight Against Bots: Detection, Regulation, and Platform Action

In response to this growing threat, a multi-faceted fightback is underway. Platforms are implementing new policies to improve transparency and enforcement. In 2024, X began testing 'automated account labels' to identify bots and also conducted a large-scale purge of accounts violating its rules against platform manipulation and spam.

On the regulatory front, the European Commission adopted new guidelines under the Digital Services Act (DSA) in April 2024. These recommend that large online platforms implement measures such as the clear labelling of AI-generated content to mitigate systemic risks, including those from the automated exploitation of services. Alongside this, the market for enterprise-grade detection tools is maturing, with firms like Forrester evaluating advanced bot management software that offers sophisticated defences for businesses.

Conclusion: Navigating a Landscape of Fake Engagement in 2025

The battle against social media bots has intensified, transforming from a simple spam problem into a complex struggle against AI-driven deception. For marketers, this means that vigilance and sophisticated analytics are more critical than ever to distinguish real engagement from the fake noise that pollutes our data.

As bots continue to influence everything from marketing metrics to political outcomes, understanding this evolving threat is essential for navigating the digital world responsibly and effectively. While the total global economic cost of bot-driven social media fraud remains unquantified for 2025, the broader cost of cybercrime is projected to be trillions of dollars, highlighting the high stakes of this ongoing digital conflict.


Citations:

axios.com (2022): https://www.axios.com/2022/05/17/elon-musk-twitter-deal-spam-accounts

disinfocode.eu: https://disinfocode.eu/reports/facebook/5/text

jmir.org (2024): https://www.jmir.org/2024/1/e56651/

arxiv.org: https://arxiv.org/abs/2409.18931

arxiv.org: https://arxiv.org/abs/2406.07647

propublica.org: https://www.propublica.org/article/x-verified-accounts-misinformation-israel-gaza

help.x.com: https://help.x.com/en/rules-and-policies/profile-labels

eur-lex.europa.eu: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52024XC03014

cybersecurityventures.com: https://www.cybersecurityventures.com/
