Twitter is cracking down on fake accounts as it prepares for the midterm elections in the US.
The social media firm issued several updates on its ‘elections integrity work’ late Monday, including new guidelines for the kinds of behavior it’s taking action against.
What’s more, the firm says it has already deleted 50 profiles believed to be posing as members of the Republican party.
‘As platform manipulation tactics continue to evolve, we are updating and expanding our rules to better reflect how we identify fake accounts, and what types of inauthentic activity violate our guidelines,’ Twitter wrote in a blog post.
‘We now may remove fake accounts engaged in a variety of emergent, malicious behaviors.’
The firm went on to outline some factors it’s keeping an eye on when determining whether or not an account might be fake.
It now considers the use of stock or stolen avatar photos, stolen or copied profile bios and intentionally misleading profile information, like a user’s location, as suspicious.
The firm is also encouraging political candidates to use two-factor authentication, an extra layer of security required to log into an account, but it is not requiring officials to use it.
Many politicians have already turned on two-factor authentication, Twitter said.
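Two-factor login codes of the kind Twitter supports are commonly generated with the time-based one-time password (TOTP) algorithm from RFC 6238; the article does not describe Twitter's internals, so the following is only a minimal illustrative sketch of how such a code is derived from a shared secret, using Python's standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password (RFC 6238 sketch).

    `secret_b32` is a base32-encoded secret shared between the
    service and the user's authenticator app (hypothetical example
    value below; not a real credential).
    """
    key = base64.b32decode(secret_b32)
    # Number of 30-second intervals since the Unix epoch.
    counter = int(time.time()) // interval
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset taken from the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example with a well-known demo secret; the code changes every 30 seconds.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code depends only on the shared secret and the current time, the server can compute the same value independently and compare, which is what makes the second factor resistant to password theft alone.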
Twitter said it’s also banning any posts that contain hacked materials.
It previously banned users who made hacking threats, but now that’s expanded to prohibit the distribution of hacked information ‘that contains personally identifiable information, may put people in imminent harm or danger, or contains trade secrets.’
However, users are still allowed to share commentary, such as news articles, about a hack or hacked materials.
Finally, the firm is beefing up its ‘enforcement approach’ to bring the hammer down on accounts that ‘deliberately mimic or are intended to replace accounts we have previously suspended for violating our rules.’
It follows Twitter’s recent announcement that it had taken down 770 accounts for spreading misinformation from Iran.
‘Since our initial suspensions…we have continued our investigation, further building our understanding of these networks,’ Twitter’s safety division wrote in a tweet.
‘In addition, we suspended an additional 486 accounts for violating the policies outlined last week.
‘That brings the total suspended to 770,’ they continued.
Most of the suspended accounts are believed to be from Iran. Fewer than 100 were located in the US and they tweeted about 867 times, were followed by 1,268 accounts and had joined Twitter less than a year ago.
As with much of the content discovered from fake accounts, they were sharing content meant to sow division between Americans, often touching on inflammatory political issues.
Interestingly, unlike other disinformation campaigns that have involved far-right themes, the content removed by Twitter involved anti-Trump rhetoric.
For example, one post states ‘The exact moment America stopped being “great”,’ and shows a photo of President Donald Trump being sworn in.
Another one of the identified accounts claimed the FBI was blackmailing Trump.
‘We identified one advertiser from the newly suspended set that ran $30 in ads in 2017,’ Twitter’s safety unit noted.
‘Those ads did not target the US and the billing address was located outside of Iran.
‘We remain engaged with law enforcement and our peer companies on the issue,’ they continued.
The move comes after Twitter announced last week that it had suspended 284 accounts from its platform, also for engaging in ‘coordinated manipulation.’
Twitter, Google, Facebook and others face growing scrutiny from users, experts and lawmakers to ramp up security efforts ahead of the 2018 midterm elections.
All of them have struggled to deal with the spread of false information and fake accounts on their platforms.
Twitter CEO Jack Dorsey recently admitted in an interview with CNN that the company hasn’t ‘figured out’ fake news yet, noting that policing false information is a difficult task.
Additionally, Dorsey is set to testify before a U.S. House of Representatives committee on Sept. 5 about how Twitter monitors and polices content.
WHAT ARE TWITTER’S POLICIES?
Graphic violence and adult content
The company does not allow people to post graphic violence.
This could be any form of gory media related to death, serious injury, violence, or surgical procedures.
Adult content – that includes media that is pornographic and/or may be intended to cause sexual arousal – is also banned.
Some form of graphic violence and adult content is allowed in Tweets marked as containing sensitive media.
However, these images are not allowed in profile or header images.
Twitter may sometimes require users to remove excessively graphic violence out of respect for the deceased and their families.
The platform may not be used to further illegal activities.
Users are not allowed to use badges, including but not limited to the ‘promoted’ or ‘verified’ Twitter badges, unless provided by Twitter.
Accounts using unauthorised badges as part of their profile photos, header photos, display names, or in any way that falsely implies affiliation with Twitter or authorisation from Twitter to display these badges, may be suspended.
Users may not buy or sell Twitter usernames.
Username squatting, when people take the name of a trademarked company or a celebrity, is not allowed.
Twitter also has the right to remove accounts that are inactive for more than six months.
Context matters when evaluating abusive behaviour and determining appropriate enforcement actions.
Factors Twitter may take into consideration include whether the behaviour is targeted at an individual; whether the report has been filed by the target of the abuse or a bystander; and whether the behaviour is newsworthy and in the legitimate public interest.
Users may not make specific threats of violence or wish for the serious physical harm, death, or disease of an individual or group of people.
This includes, but is not limited to, threatening or promoting terrorism.
Users may not promote or encourage suicide or self-harm. Users may not promote child sexual exploitation.
Users may not direct abuse at someone by sending unwanted sexual content, objectifying them in a sexually explicit manner, or otherwise engaging in sexual misconduct.
Users may not use hateful images or symbols in their profile image or profile header.
Users may not publish or post other people’s private information without their express authorisation and permission.
Users may not post or share intimate photos or videos of someone that were produced or distributed without their consent.
Users may not threaten to expose someone’s private information or intimate media.