Wednesday morning, the hashtag #TwitterLockOut started trending after a number of far-right Twitter users woke up and angrily discovered they had lost thousands of followers.
A few weeks ago, many famous (and not-so-famous) Twitter users also suddenly lost hundreds of thousands of followers.
These accounts didn’t become uninteresting or offensive overnight. Rather, the far-right Twitter users had a significant percentage of followers that Twitter identified as Russian bots and purged overnight. The others were customers of Devumi, a “social media marketing” company whose sketchy tactics are being investigated by the New York State Attorney General’s Office; The New York Times had lambasted the company only days before people noticed their follower counts drop.
The Times’s explosive exposé was the culmination of an almost year-long investigation into the shadowy social media tactics of Devumi, which sells the services of an army of bots to amplify its customers’ social media profiles. The Times’s investigation found that at least 55,000 of Devumi’s Twitter bots use information from accounts linked to real people in a new type of identity theft (I covered how to spot these clone-bots in a previous blog post).
Twitter forbids the purchase of followers, likes, retweets, etc., as well as impersonation in a “misleading or deceptive manner.” And the social media platform ramped up its efforts to tackle spam after it was discovered that bots run by the Kremlin-linked Internet Research Agency were used to sway the results of the 2016 presidential election. However, Twitter’s reporting mechanisms for human users who identify bots continue to be lacking.
Twitter users are split over how the social media platform should handle bots. Many alt-right Twitter users denied that their deleted followers were bots, while others, like Mark Cuban, have gone so far as to suggest that Twitter require a real name and a real person behind every account.
There are many reasons why Twitter doesn’t prevent the creation of bots. Not all bots are created equal, and automated accounts can actually be beneficial to the Twitter community (all of the news outlets you follow on Twitter are at least partially automated). But in the Fake News Era, more human Twitter users have become sophisticated at identifying harmful bots—like the ones posing as real humans to sell Twitter followers or spread propaganda.
So how should Twitter battle bot influence? There should be an official registry where bot-like accounts can be flagged, so that users who don’t want to spend their time determining whether every account on their timeline is linked to a real person can check at a glance, and so that Twitter’s spam-determination policies become more transparent.
Independent Twitter users have already started doing the work: Robhat Labs provides a browser plug-in called Botcheck.me that lets users check whether an account shows propaganda-bot-like patterns and report suspicious accounts they find. Twitter Audit is a program created by two users that estimates the percentage of “fake” followers of any user. Both programs allow you to further investigate the credibility of different accounts on the platform.
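Under the hood, tools like these score accounts against behavioral heuristics. The sketch below is a toy illustration of that general idea, not Botcheck.me’s actual model; every feature, threshold, and weight here is an assumption made for the example.

```python
# Toy bot-likelihood scorer. The heuristics are illustrative assumptions,
# not the real logic of Botcheck.me or Twitter Audit.
from dataclasses import dataclass


@dataclass
class Account:
    tweets_per_day: float   # average posting rate
    followers: int
    following: int
    has_default_avatar: bool
    account_age_days: int


def bot_likelihood_score(acct: Account) -> float:
    """Return a crude 0..1 score; higher means more bot-like."""
    score = 0.0
    if acct.tweets_per_day > 50:        # humans rarely sustain this pace
        score += 0.35
    if acct.has_default_avatar:         # never set a profile photo
        score += 0.15
    if acct.following > 0 and acct.followers / acct.following < 0.01:
        score += 0.25                   # follows thousands, followed by few
    if acct.account_age_days < 30:      # freshly created account
        score += 0.25
    return min(score, 1.0)


suspect = Account(tweets_per_day=120, followers=12, following=4000,
                  has_default_avatar=True, account_age_days=10)
print(bot_likelihood_score(suspect))  # high score -> worth flagging for review
```

Real detectors use far richer signals (tweet timing, text similarity, network structure), but even simple heuristics like these catch the crudest propaganda accounts.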
Unlike Twitter’s current policy—which allows users to report spam and impersonation accounts for the company to investigate—an official suspected-bot registry would allow users to see which accounts exhibited bot-like behaviors, rather than relying on their own sleuthing. Users with experience identifying harmful bots would be able to see which accounts Twitter was investigating, and could more easily follow up to see whether those accounts had been removed from the platform.
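To make the idea concrete, here is a minimal sketch of what one entry in such a suspected-bot registry might hold. No such Twitter feature exists, so every field name, status, and function here is hypothetical.

```python
# Hypothetical public suspected-bot registry; all names are assumptions,
# not a real Twitter API.
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    FLAGGED = "flagged"              # reported by users, not yet reviewed
    UNDER_REVIEW = "under_review"    # Twitter is investigating
    SUSPENDED = "suspended"          # removed from the platform
    CLEARED = "cleared"              # reviewed and found legitimate


@dataclass
class RegistryEntry:
    handle: str
    behaviors: list[str] = field(default_factory=list)  # e.g. "copied profile photo"
    report_count: int = 0
    status: Status = Status.FLAGGED


registry: dict[str, RegistryEntry] = {}


def flag_account(handle: str, behavior: str) -> RegistryEntry:
    """Record a user report; repeat reports raise the count, and the
    entry stays publicly visible instead of vanishing into a queue."""
    entry = registry.setdefault(handle, RegistryEntry(handle))
    if behavior not in entry.behaviors:
        entry.behaviors.append(behavior)
    entry.report_count += 1
    return entry


flag_account("@clone_bot_123", "copied profile photo")
entry = flag_account("@clone_bot_123", "retweets in bursts")
print(entry.report_count, entry.status.value)  # prints "2 flagged"
```

The key design point is that entries persist through every status, so experienced bot-hunters could follow a flagged account from report to resolution rather than reporting into a black box.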
Conservative Twitter’s uproar over losing followers could have been avoided if this fantasy registry existed, because a registry would ensure transparency. People charged with crimes are put on public trial before being proven innocent or guilty; why can’t the same be true for bots that spread deception online?
Kremlin-linked groups are not the only organizations that run disinformation campaigns on Twitter, so an official Twitter bot registry would not only list Kremlin-linked bots. The purpose of such a registry would be to let more users see whether Twitter accounts exhibited bot-like behaviors, even if they’re not trained to recognize such accounts themselves.
Twitter is deep into its war against the bots, and the 2016 election proved that it’s not just the platform’s credibility that is at stake. How Twitter chooses to change its policies related to bots will likely affect how companies, politicians, media, and influencers tweet moving forward.