This pandemic has exposed both the good and the bad of social media: we have witnessed its power to generate thousands, if not millions, of signatures for petitions in less than a day. From pressing issues such as police brutality to animal rights, this reach has forced corporations and governments to respond. And yet, in a time as fragile as this, we have also witnessed its capacity to incite violence, to stir hatred and to take people’s lives.
Social media platforms need to do more to protect their users. That has been the message in response to a surge in hate crimes reported by football players on Twitter.
One strategy in response to this is the use of more artificial intelligence to link anonymous profiles to their ‘true life’ identities. Jonathan Hirshler, the chief executive of the data science company Signify, said in an interview with Sky Sports: “Our technology can scan millions of pieces of content and identify the most abusive using publicly available data. We pair this with our open-source investigation capability and in more cases than not, we can also verify the ‘true life’ identities of prolific abusers who hide their real profiles”.
The company has previously used its technology to investigate abuse against Arsenal footballer Granit Xhaka in December 2020, revealing that 1,374 of the 2,004 posts directed at him were marked as potentially abusive. Notably, this study also revealed that the top three most abusive accounts were also the most prolific. This says a lot about social media as a platform where negativity towards others is rewarded with followers and likes.
In a sport that currently has no openly gay men playing in any league in the world, this statistic doesn’t come as a surprise when considering the levels of abuse online. After Hector Bellerin came out in support of Arsenal’s LGBT+ supporters’ group, Signify also found an ‘inordinate amount of targeted homophobic abuse’ directed towards him. Examples like these show how social media platforms are not safe spaces for people to reveal anything about their identities that anonymous individuals may latch onto.
Whilst many of these messages have been reported, some remain online to this day and many of their authors’ ‘true identities’ remain undiscovered. This has prompted calls for an end to anonymity on social networks, giving platforms a means of identifying users if they break the law. This would allow artificial intelligence technology to detect offenders much faster and make prosecution possible. Doing so would deter more people from committing these hate crimes and therefore create safer, less negative spaces online.
However, this anonymity is vital for people who wish to express their feelings, voice political opinions, or potentially escape abusive environments. People in certain professions – teachers, for example – also require a level of anonymity. So in the same way that there is a balance between the good and the bad of social media, there needs to be a balance between private and public that ensures those who require anonymity are provided it. Some have suggested requiring accounts to be verified before they are created, or barring new accounts from tweeting for 14 days.
But there is still no answer from social media platforms to the question of online abuse.