Researchers have uncovered a huge botnet that mimics legitimate accounts on Twitter to spread a cryptocurrency “giveaway” scam.
As reported by ITPro, the discovery was made during a research effort by Duo Security that looked at 88 million Twitter accounts from May to July and used machine learning to identify bots, malicious or otherwise, on the social media platform.
Notably, the team found a single network of over 15,000 bots, organized in a three-tiered structure, that spread the fake cryptocurrency giveaway and evolved over time to evade detection.
The Duo team described how the botnet works in a paper to be presented at the 2018 Black Hat cybersecurity event on Wednesday.
Typically, they write, the bots first create a spoofed (or copycat) account that mimics a genuine cryptocurrency-related account, copying its name and profile picture.
To spread the fake giveaway scam, the bots then reply to tweets posted by the legitimate account with a link designed to entice Twitter users into the scam.
Adding to the complexity, many spoofed accounts followed what the researchers termed “hub accounts”, which they suspect are followed “in an effort to appear legitimate”.
The botnet also employed “amplification bots” – other fake accounts used to give “likes” to scam tweets “to artificially inflate the tweet’s popularity [and] make the cryptocurrency scam appear legitimate.”
The paper states:
“[Searching for connected bots] resulted in a 3 tiered botnet structure consisting of the scam publishing bots, the hub accounts (if any) the bots were following, and the amplification bots that like each created tweet. The mapping shows that the amplification bots like tweets from both clusters, binding them together.”
Intriguingly, these connections allowed the team to link the bots together in a way “that can result in the unraveling of the entire botnet.”
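The “binding” the paper describes amounts to a connected-components problem: once an amplification bot is observed liking tweets from two separate scam clusters, those clusters collapse into one component. A minimal sketch of that idea, using entirely hypothetical account names and like-edges (this is not Duo’s code or data):

```python
from collections import defaultdict, deque

# Hypothetical like-edges: amplification bot -> scam bots whose tweets it liked.
likes = {
    "amp1": ["scam_a", "scam_b"],   # amp1 ties scam_a and scam_b together
    "amp2": ["scam_b", "scam_c"],   # amp2 bridges scam_b's cluster to scam_c
    "amp3": ["scam_d"],             # amp3/scam_d stay a separate component
}

def connected_botnets(likes):
    """Group accounts into connected components over the like-graph."""
    graph = defaultdict(set)
    for amp, targets in likes.items():
        for t in targets:
            graph[amp].add(t)
            graph[t].add(amp)
    seen, components = set(), []
    for node in graph:
        if node in seen:
            continue
        comp, queue = set(), deque([node])
        while queue:  # breadth-first walk of one component
            n = queue.popleft()
            if n in comp:
                continue
            comp.add(n)
            queue.extend(graph[n] - comp)
        seen |= comp
        components.append(comp)
    return components
```

With the sample edges above, `connected_botnets(likes)` yields two components: one containing amp1, amp2 and scam_a–scam_c (bound together by shared likes), and a second containing only amp3 and scam_d. Following every such edge outward is what lets a single discovered bot unravel the whole network.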
While Twitter has been making moves to clamp down on such cryptocurrency scams, Duo writes in its conclusion that the work shows that botnets are still active and can be discovered by “straightforward analysis.”
“We don’t consider the problem solved,” they said.
Going forward, Duo plans to open source the techniques described in the paper in the hope that new techniques can be developed to identify malicious bots, and help “keep Twitter and other social networks a place for healthy online discussion and community.”
New Open Source Tools Help Find Large Twitter Botnets
Duo Security has created open source tools and disclosed techniques that can be useful in identifying automated Twitter accounts, which are often used for malicious purposes.
The trusted access solutions provider, which Cisco recently agreed to acquire for $2.35 billion, has collected and studied 88 million Twitter accounts and over half a billion tweets. Based on this data, which the company says is one of the largest random datasets of Twitter accounts analyzed to date, researchers were able to create algorithms for differentiating humans from bots.
The dataset, collected using Twitter’s API, includes profile name, tweet and follower count, avatar, bio, content of tweets, and social network connections.
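As a rough illustration of what one row of such a dataset might look like, the sketch below flattens a Twitter API v1.1-style user object into the per-account fields the article lists. The output field names are assumptions for illustration; the input keys (`statuses_count`, `followers_count`, etc.) follow Twitter's documented v1.1 user object, but Duo's actual schema is not given in the article:

```python
def to_dataset_row(user):
    """Flatten a Twitter API v1.1-style user object into the
    per-account fields the article lists (output names assumed)."""
    return {
        "profile_name": user["name"],
        "screen_name": user["screen_name"],
        "tweet_count": user["statuses_count"],
        "follower_count": user["followers_count"],
        "avatar_url": user["profile_image_url_https"],
        "bio": user["description"],
    }
```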
Researchers created their tools and techniques for identifying bots based on 20 unique account characteristics, including the number of digits in a screen name, followers/following ratio, number of tweets and likes relative to the account’s age, number of users mentioned in a tweet, number of tweets with the same content, percentage of tweets with URLs, time between tweets, and average hours tweeted per day.
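Several of those characteristics are simple to compute from an account record. The sketch below shows a few of them; the function, field names, and sample values are illustrative assumptions, not Duo's implementation:

```python
from datetime import datetime, timezone

def account_features(acct, now):
    """Compute a few of the per-account heuristics the paper describes
    (illustrative only; field names are assumptions)."""
    age_days = max((now - acct["created_at"]).days, 1)   # avoid division by zero
    following = max(acct["following_count"], 1)
    return {
        # e.g. "crypt0_giveaway99" contains three digits
        "digits_in_screen_name": sum(c.isdigit() for c in acct["screen_name"]),
        # bots often follow many accounts but attract few followers
        "follower_following_ratio": acct["follower_count"] / following,
        # tweets and likes relative to account age
        "tweets_per_day": acct["tweet_count"] / age_days,
        "likes_per_day": acct["like_count"] / age_days,
    }

acct = {
    "screen_name": "crypt0_giveaway99",
    "created_at": datetime(2018, 1, 1, tzinfo=timezone.utc),
    "follower_count": 12,
    "following_count": 4000,
    "tweet_count": 9000,
    "like_count": 0,
}
features = account_features(acct, datetime(2018, 8, 8, tzinfo=timezone.utc))
```

For this hypothetical account, the features alone are suggestive: a screen name padded with digits, a follower/following ratio near zero, and dozens of tweets per day from a months-old account. A classifier trained on many such feature vectors can separate this profile from typical human behavior.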
Tests conducted by experts led to the discovery of a sophisticated cryptocurrency-related scam botnet powered by at least 15,000 bots. These accounts were designed to use deceptive behaviors to avoid automatic detection, while attempting to obtain money from users by spoofing cryptocurrency exchanges, celebrities and news organizations.
Duo Security informed Twitter of its findings. The social media giant says it’s aware of the problem and claims it’s proactively implementing mechanisms to detect problematic accounts.
“Spam and certain forms of automation are against Twitter’s rules. In many cases, spammy content is hidden on Twitter on the basis of automated detections. When spammy content is hidden on Twitter from areas like search and conversations, that may not affect its availability via the API. This means certain types of spam may be visible via Twitter’s API even if it is not visible on Twitter itself. Less than 5% of Twitter accounts are spam-related,” Twitter said.
Duo Security has published a 46-page research paper describing its findings and techniques.
The company will release its tools as open source on August 8 at the Black Hat conference in Las Vegas.
“Malicious bot detection and prevention is a cat-and-mouse game,” explained Duo Principal R&D Engineer Jordan Wright. “We anticipate that enlisting the help of the research community will enable discovery of new and improving techniques for tracking bots. However, this is a more complex problem than many realize, and as our paper shows, there is still work to be done.”