If you use Twitter frequently, you may have noticed how some accounts just don't seem legit.
Maybe they tweet at an extremely unusual rate. Perhaps they constantly use politically charged language, or have a strangely large number of followers for who they are (or claim to be).
If you've ever thought an account was a bot, you're not alone. Bot accounts spreading politically charged propaganda have become a common nuisance on social media platforms, especially Twitter. Some of these bot accounts are easy to spot; others aren't, and those can fuel problems like the propagation of fake news and vitriolic online behavior.
Enter Rohan Phadte and Ash Bhat, two University of California at Berkeley students collectively known as Robhat Labs. The pair created an artificial intelligence known as Bot Check that can determine with high confidence whether an account exhibits propaganda-bot-like behavior.
"As these bot networks are constantly retweeting these same tweets, you kind of get this feeling of 'Oh, other people have already validated this being retweeted so often and it's so popular, it's like nearly trending.' So you kind of get this feeling of 'Oh, it's probably real,'" Phadte told Circa.
Bot Check's main purpose is to help counter these assumptions. Twitter bot accounts often exhibit autonomous or semi-managed behavior that promotes other bot accounts and their tweets. For example, it's very common to see one bot account encourage followers to follow another bot account, which may in fact be part of the same network. By building a large enough group of bot accounts, an individual, state or organization could easily push fake news and propaganda to millions of users, to the point where it starts trending.
Bot Check harnesses artificial intelligence to determine which accounts are run by humans and which appear to be automated. Phadte and Bhat started by examining traits exhibited by botlike accounts, such as an extremely high tweet frequency and other behaviors humans rarely display. They then fed the AI thousands of "high-confidence" accounts as examples of bots, while using verified Twitter profiles as examples of accounts run by humans. The process is known as machine learning: the more accounts the AI ingests, the more it learns, and the more it learns, the better the results. Using the Bot Check website or Chrome extension, users can test any Twitter account, and the AI will determine whether it exhibits the behaviors it has learned to associate with bots.
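To make the idea concrete, here is a minimal sketch of the kind of supervised classifier the passage describes: behavioral features from known bots and known humans are fed to a model that then scores a new account. The features, the synthetic data, and the choice of logistic regression are all illustrative assumptions; Robhat Labs has not published Bot Check's actual architecture.

```python
# Toy sketch of a bot/human classifier (assumed features, not Bot Check's real model).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_accounts(n, bot):
    # Two illustrative behavioral features: tweets per day and retweet ratio.
    if bot:
        tweets_per_day = rng.normal(200, 50, n)   # bots post at extreme rates
        retweet_ratio = rng.normal(0.9, 0.05, n)  # and mostly amplify others
    else:
        tweets_per_day = rng.normal(10, 5, n)
        retweet_ratio = rng.normal(0.3, 0.1, n)
    return np.column_stack([tweets_per_day, retweet_ratio])

# Labeled training set: 1 = "high-confidence" bot, 0 = verified human.
X = np.vstack([make_accounts(500, bot=True), make_accounts(500, bot=False)])
y = np.array([1] * 500 + [0] * 500)

model = LogisticRegression(max_iter=1000).fit(X, y)

# A hypothetical account tweeting 250 times a day, almost all retweets:
suspect = np.array([[250, 0.95]])
verdict = model.predict(suspect)[0]
score = model.predict_proba(suspect)[0, 1]
print(verdict, round(score, 2))
```

With cleanly separated synthetic data like this, the suspect account is classified as a bot; real Twitter behavior is far noisier, which is why the feedback mechanism described below matters.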
To improve accuracy, Robhat Labs also incorporated a "disagree" button so human users can provide feedback on the results.
"If you ever use the extension, one of the most prominent buttons is the disagree button," Bhat told me. "And we actually use this to train our model."
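The quote above describes a human-in-the-loop correction step: when a user disputes a verdict, the flipped label becomes a new training example. A minimal sketch of that idea, with hypothetical names and storage (the real Bot Check pipeline is not public):

```python
# Illustrative "disagree" feedback queue; names and structure are assumptions.
from dataclasses import dataclass, field

@dataclass
class FeedbackQueue:
    """Collects disagree clicks so corrected labels can be folded into retraining."""
    corrections: list = field(default_factory=list)

    def disagree(self, account_id: str, model_said_bot: bool):
        # The user disputes the verdict, so the training label is the opposite.
        self.corrections.append((account_id, not model_said_bot))

queue = FeedbackQueue()
queue.disagree("suspicious_acct_1", model_said_bot=True)   # user says: human
queue.disagree("quiet_acct_2", model_said_bot=False)       # user says: bot
print(queue.corrections)
```

Each correction is exactly the kind of labeled example the original training set was built from, which is how user feedback can "train the model" as Bhat describes.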
Feedback from users has helped Robhat Labs cut its false positive rate by 1.5 percent, which is remarkable for a model that already operates at around a 94.5 percent success rate.
Robhat Labs noted that the political propaganda issue affects the entire political spectrum. While a majority of the bots they queried push right-wing material, Bhat said, a significant portion also sits on the left. He added that they can't conclusively determine whether political propaganda bots skew left or right based on the data so far.
The bots take on different characteristics depending on their politics. A bot promoting right-wing material with #MAGA might claim to be a mother of four, a patriot and a supporter of Donald Trump, for example, while a bot on the left using #ImpeachTrump might describe itself as a member of the LGBT community or a progressive.
"I feel like they are just looking at stereotypes and just trying to implement all of those," said Bhat.
Social media companies like Twitter and Facebook have been criticized for failing to properly deal with the political propaganda bot problem. Some have taken measures to make it clearer to users where information comes from, but Phadte and Bhat have yet to hear anything from Twitter about their product. The apprehension is somewhat understandable, given the potential consequences of calling out users for spreading political propaganda. Many of the accounts used as bots are compromised and once belonged to real people. The issue becomes even more complicated with politics thrown into the mix.
Phadte and Bhat also acknowledged that their product isn't perfect, considering no model can be one hundred percent accurate. That said, Bot Check at least gives users a useful tool going forward. You can put it to the test yourself at the organization's website.