Following the unpleasant trolling & abuse directed at Caroline Criado-Perez in the past few days, there has been much talk of how Twitter deals with abusive posts. An online petition has been created asking Twitter to add a “Report Abuse” button which is quick and easy to use.
There are one or two problems with this campaign, which I’d like to discuss below.
The button already exists
This is obviously the biggest problem with this campaign. A “Report Abuse” button already exists, and it clearly hasn’t prevented these abusive tweets directed at Caroline Criado-Perez.
The free speech issue
I need to be clear here. I don’t think anybody is arguing that being able to direct abuse and threats at an individual is in any way protected free speech, but one has to be very careful that any system put in place to control abusive messages does not also end up impacting free speech as a form of “collateral damage”.
This isn’t a fantastical notion. Section 127 of the Communications Act was originally created to deal with abusive messages sent to telephone operators. Up until 2011, the go-to precedent for the use of this law was a case dealing with racial abuse. However, in the past few years, this law has been used to successfully prosecute people for making jokes, and in at least one case for making political statements against the war in Afghanistan.
With this in mind, we have to be careful that any measures we pursue in order to control this sort of unpleasant abuse and oppression online, do not themselves become tools of abuse and oppression. An easy to use “Report Abuse” control with an automated response would be open to abuse by all and sundry, and is unlikely to yield the results the petitioners are hoping for.
Proportionality and Scalability
If Twitter makes reporting abuse too easy, this won’t lead to a rise in quality abuse reports; it will simply result in a general rise in all abuse reports, including junk reports such as malicious attempts to get user accounts suspended, and “outraged of Tunbridge Wells” type complaints. Bear in mind that Twitter also needs to deal with live, real-time, credible threats to a person, cases of child grooming, and the posting of illegal images. Twitter needs a system for triaging abuse complaints, such that it deals with urgent and serious real-world threats first. One way of doing so is to make sure that abuse reports require a degree of human input on the part of the reporter. This article, purportedly from a Facebook moderator, gives an idea of the kind of problems created by unwarranted abuse reports. I’m sure nobody would like to think that an active child grooming session, or the posting of a stolen selfie, was not dealt with by Twitter because moderation staff were too busy dealing with idiotic trolls instead.
The Education Problem
There are a whole host of reasons why trolls are a routine feature of daily life on social media. Twitter is a platform which gives everybody a voice. The price of that is that the bellends get a voice too. I’m sure we would all agree that there’s nothing in principle wrong with the idea of giving everybody a voice; the issue is that some people’s heads are filled with rubbish, and when they get a voice, they vomit that rubbish all over the internet.
Stopping people saying those things won’t stop them thinking them. It won’t make the world a more pleasant place. It won’t address the underlying problems, and in trying to silence those voices, we create a framework in which we can expect to silence voices which we don’t want to hear. In my view, that’s quite a dangerous place to be intellectually.
We need to be addressing people’s underlying attitudes, and teaching people to treat each other with respect and understanding online. Social media can be dehumanising and disinhibiting. This can bring the worst out in people. Any measures we take need to educate users and seek to change underlying attitudes & behaviours.
What can we do?
Twitter’s “block” button is a symbolic representation of Twitter’s ethos: users have the right to say whatever they like, but you don’t have to listen to it if you don’t want to. It seems to me that this approach might be the best one to take in dealing with the kinds of problems Caroline Criado-Perez suffered this week. Here are a couple of things Twitter could consider doing.
Account lockdown “panic mode”.
Suggested by the ever-brilliant @flayman – this system would allow a user to put their account into a temporary “panic mode” state, in which they would no longer see @ messages from anybody other than their followers or followees. This would have the effect that troll messages would get no response, making such attacks ineffective and likely to subside quickly.
If “panic mode” were implemented at platform level, rather than just in the Twitter client, users attempting to post an @ mention addressed to a locked-down account could have their posts rejected, and/or Twitter could log and monitor posts to the “panic mode” account for abuse handling purposes.
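The platform-level version of the idea can be sketched in a few lines. This is purely illustrative — the `User` class, field names, and the logging behaviour are my assumptions, not anything from Twitter’s actual systems:

```python
# Hypothetical sketch of platform-level "panic mode" mention filtering.
# All names here are illustrative assumptions, not Twitter's real API.

class User:
    def __init__(self, name, panic_mode=False):
        self.name = name
        self.panic_mode = panic_mode
        self.followers = set()   # names of accounts following this user
        self.followees = set()   # names of accounts this user follows
        self.abuse_log = []      # rejected mentions, kept for abuse handling

def should_deliver_mention(sender, recipient):
    """Decide whether an @ mention from sender reaches the recipient."""
    if not recipient.panic_mode:
        return True
    # In panic mode, only followers and followees get through.
    if sender.name in recipient.followers | recipient.followees:
        return True
    # Rejected mentions are logged rather than delivered, so moderators
    # can still review an attack after the fact.
    recipient.abuse_log.append(sender.name)
    return False
```

The key design point is that rejection happens server-side, so the troll gets no signal that their message was even seen, while the log preserves evidence for moderation.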
Block rate controls
Twitter can already send users to “Twitter jail” for posting too many posts in a short space of time. It may be possible for Twitter to identify trends in the number / rate / ratio of blocks a user account receives, and translate this into a flagging process which allows accounts to either be publicly flagged as possible troll accounts, or manually checked for abuse by moderators.
Edit: Troll flagging / karma
My suggestion is that the number of blocks which an account receives could be used to set a troll “karma” score for that account. If (say) over the last 30 days:
( number of times the account has been blocked / number of tweets from the account ) > [n]
… the user’s account is flagged as a possible troll account. To prevent gaming of this system, the platform would filter out large sudden spikes in blocks over a certain number.
Users would then be able to choose whether they wish to receive mentions from troll flagged accounts. In this way, one can make trolling on Twitter a much less effective and rewarding activity without affecting the openness of the platform.
This would operate much like Twitter jail: it would be rate based and inherently temporary, ensuring that users can’t set up and operate accounts expressly for the purpose of harassing other users.
Any other ideas?
This is a live debate. There are many avenues unexplored here. I would only ask that we proceed with caution and don’t leap to a kneejerk reaction which is either ineffective, counterproductive, illiberal, or worse still, all three at once.