The ubiquity of social media platforms is raising increasing concern within the UK government, prompting urgent calls for greater scrutiny of technology companies - this time over child protection. In recent weeks, the chief executive of the National Health Service (NHS), Simon Stevens, the health secretary, Jeremy Hunt, and the culture secretary, Matt Hancock, have all spoken out about the need to protect children from the alleged harmful effects of social media platforms on mental health.
The fact that an overtly pro-tech Conservative Secretary of State for Digital, Culture, Media and Sport is focusing on restricting mobile phones and social media for young children shows how far the debate around technology in the UK has shifted in recent years. Ever greater emphasis is being placed on the responsibility of the platforms to make the internet a safer place for children by tackling online bullying, self-harm, eating disorders, and anxiety and depression. However, this raises three difficult policy questions.
First, do tech companies have the means to stamp out online bullying? While some companies such as Apple and Facebook are experimenting with new tools to tackle smartphone and social media addiction, these initial measures are unlikely to be effective in tackling other issues that affect children’s mental health, such as online bullying. Even if social media companies were to deploy their most sophisticated algorithms and employ tens of thousands of human moderators to monitor online content for bullying, mistakes would still happen, and regulators would need to afford tech companies a significant margin of error for missed incidents or for non-offending content mistakenly removed. The subjectivity of what constitutes bullying also raises practical questions about how social media companies can fully police this behaviour, as well as questions of proportionality.
Second, would this require legal change? Moving to a more interventionist approach in the UK would imply amending the EU’s E-Commerce Directive (ECD), the cornerstone of European internet regulation. The ECD exempts digital companies from liability for content that is hosted on their platforms but uploaded by others. Matt Hancock argued earlier this year that Brexit could be an opportunity for Britain to impose new regulations on platforms to strike a balance between innovation and liability, and specifically suggested amending the ECD. The UK would not be the first to question the ECD approach - Paris, Berlin and Brussels have all done so - but a post-Brexit UK would have scope to revise its liability framework.
Third, how far does this spill over into other issues? If the UK decides to amend the ECD to allow exemptions for online bullying, why not also for other legitimate public policy issues such as hate speech, fake news, IP infringement or the sale of unsafe products? This would widen the debate from children’s mental health to the governance of the platform economy.
Ultimately, the appetite of Hancock and the British government to intervene in this way will have to be weighed against the government’s liberal stance on tech regulation and its desire to court inward tech investment as the UK’s exit from the EU looms ever closer. Ministers may conclude that going further than the EU by implementing stricter liability rules on platforms could undermine Britain’s efforts in this regard. But for the tech platforms, the debate reinforces the fact that they will have to improve their self-regulation or face the prospect of being regulated.
[Figure: Counselling sessions delivered by the NSPCC’s Childline to children affected by cyberbullying. Source: Not Alone Anymore: Childline annual review 2016/17]
The views expressed in this note can be attributed to the named author(s) only.