The (de)regulation of social media: freedom of expression or a free pass for hate speech?

The importance of accountability on social media

Rachel Spratley

Copywriter, JWI Global


Since the turn of the twenty-first century, the debate around the regulation, moderation and censorship of social media content has been fierce and fraught.

And now, as the fallout from Elon Musk’s tumultuous takeover of Twitter rumbles on, more questions have been raised about the role of social media platforms in society. Where does their duty lie - to protect freedom of speech, or to stem the rise of potentially harmful and offensive content? Where should the line between government legislation and corporate culpability be drawn? And what does this mean for digital marketers?


The argument against regulation

Freedom of speech is a fiercely protected concept. It is preserved in the United Nations’ Universal Declaration of Human Rights and enshrined in law by democracies around the world.

And, from coordinating social activism initiatives such as Black Lives Matter, to the anti-regime protests in Iran, social media has played a pivotal role in raising awareness of socio-political issues. It gives a voice to the disenfranchised and has thus far proved a powerful tool in the global fight against systemic oppression. 

However, it can be argued that regulation poses a threat to social media’s capabilities as a force for positive change. As it stands, there is no universal ‘social media code of conduct’ setting out clear, transnational rules for regulation. In the absence of one, governments around the world are taking steps to regulate content at a national level, such as the United Kingdom’s new Online Safety Bill. However, as seen last year in India, where the ‘Digital Media Ethics Code’ was allegedly used to suppress online support for the farmers’ protests, this approach poses a danger to civil liberties. ‘Regulation’ can all too easily become a euphemism for ‘censorship’.

When not subject to heavy-handed and partisan regulation, social media platforms also give users access to sources of information beyond the mainstream media, which is vulnerable to political bias. Even ‘big tech’ itself is now coming under fire for questionable moderation practices. This month, it was revealed that Facebook’s ‘cross-check’ moderation policy appears to shield high-profile Facebook and Instagram users from more rigorous checks, whilst journalists and civil society organisations have ‘less clear pathways’ to access the programme.

The increasing popularity of new and alternative social media platforms, such as Mastodon, as well as growing discourse around Web3 - a decentralised iteration of the World Wide Web which would reduce the power of ‘big tech’ - demonstrates an appetite for platforms and networks that allow a global flow of news, views and dialogue based on individual liberty and personal accountability. This stands in contrast to a series of centralised, politically skewed platforms where regulation is susceptible to corporate greed and political influence.

By banning certain voices and opinions, social media regulation protects its users from harmful content in the short term. However, it’s arguable that, in the long term, it leads to a more polarised global community. Upon suspending former U.S. president Donald Trump from Twitter in the wake of the January 6 Capitol attack, then-Twitter CEO Jack Dorsey tweeted his own regret:

‘Having to take these actions [suspending accounts] fragment the public conversation. They divide us. They limit the potential for clarification, redemption, and learning. And sets a precedent I feel is dangerous: the power an individual or corporation has over a part of the global public conversation.’


The popularity of ‘alt-tech’ platforms surged following Trump’s suspension from mainstream social media. Some, such as Parler, were temporarily forced offline due to their use by insurrectionists during the attack. However, they remain popular amongst users who felt disenfranchised by ‘big tech’ and sought new communities deeper in the fringes of the internet. In creating ‘safe spaces’ for the ‘marginalised’ to voice their opinions, these platforms often become far-right echo chambers in which extreme and offensive views go unchallenged.

By giving voices at each end of the socio-political spectrum a space for truly open discourse, social media networks are able to foster productive debate that builds a more balanced global digital community. 

The argument for regulation

However, it is also arguable that the deregulation of social media would create a far more hostile and dangerous atmosphere, both online and offline. Without clear rules on what is and isn’t acceptable, social media risks enabling dialogues and behaviours which, in the offline world, would constitute hate crimes. 

In October 2022, Elon Musk completed his rocky takeover of Twitter. The multi-billionaire’s acquisition of the platform had been a cause for concern amongst certain groups since the process began back in April 2022, and, upon completion of the deal, Musk stated that he would allow ‘any speech on Twitter that didn’t violate the law’, though hate speech would rank lower in the platform’s algorithms. Yet while Musk’s time thus far at Twitter’s helm has seen the suspension of controversial high-profile figures such as Kanye West, it has also seen the accounts of former U.S. president Donald Trump, prominent neo-Nazis, and white supremacists reinstated.

Furthermore, research from the Centre for Countering Digital Hate suggests that hate speech has already risen dramatically during Musk’s tenure. The report says that ‘the n-word was used 30,546 times [on Twitter] from November 18th to 24th [2022], the week leading up to Musk’s claims. That’s 260% more than the weekly average for 2022. During that week, a slur for gay men rose 91%; a slur for transgender people rose 63%; a slur for Jews rose 12%; and a slur for Latinos rose 64%’. As a result, advertisers - who are responsible for approximately 90% of Twitter’s revenue - are embarking on a mass exodus in search of platforms which better align with their brand values. Regulation is therefore a vital tool for protecting all users - not just the young and vulnerable - from extreme and harmful content, which not only spreads misinformation but also amplifies views counterproductive to the creation of a more cohesive and harmonious world.

It is also important to interrogate how ‘free’ any platform that promises freedom of expression truly is. Recent research from consumer rights group Public Citizen suggests that Trump’s own ‘free speech’ platform, Truth Social, is blocking content on certain topics, and that it features more restrictive terms of service than any other major social media platform. And of course, even the most ardent defenders of free speech aren’t so keen when the joke’s on them. Just days after Musk tweeted “comedy is now legal on Twitter”, the platform suspended the accounts of those who parodied its new owner. In this respect, not only would social media deregulation be irresponsible, but - for as long as human influence is present in the governance of these platforms - it may not even be truly possible.

Acting with accountability 

There is an argument that a human rights-based framework for the international regulation of social media would help to create a safer environment for all, whilst protecting individual freedom of expression. However, whilst governments around the world individually grapple with the balance between safeguarding and censorship, there is still a long way to go before a transnational rulebook can be established.

And in today’s world, customers expect brands to stand for something - no matter which sector they operate in. When it comes to brand activism, more than half of users want more from brands than corporate statements. They want to see them actively engaged with goals and initiatives. And half of consumers want brands to use social media to demonstrate their social justice commitments. So, both B2B and B2C brands stand to gain from exercising their freedom of expression and standing up for what they believe in.

However, what is clear is that concise and consistent social media regulation not only minimises the chances of widespread misinformation, but also protects younger and more vulnerable users and reduces the potential for polarisation and conflict. In doing so, regulation creates safer environments in both the online and offline worlds.

Ultimately, irrespective of the steps taken by governments and the platforms themselves, brands and individuals must hold themselves accountable for the content that they publish and engage with on these networks. Freedom of speech cannot exist without real-world consequences when content is deliberately fabricated to cause harm or stir discord. It is up to brands to help protect their followers and customers from harmful content and misinformation, to take responsibility for what they choose to share on social media, and to lay the framework for more constructive, informed digital discourse.

Guest Author



Rachel Spratley is a copywriter for JWI Global. During her career, she has worked with a range of brands - from the pioneers of barefoot luxury travel to the next big names in fintech - to hone their tone of voice, develop standout creative campaigns and to tell brand stories that inform and inspire.
