Britain is seeking to make social media executives personally liable for harmful content, and to shut down offending platforms, under a government action plan published Monday.
Legislation to that effect could be passed in the coming months.
The proposals were drawn up after consultations with social media chiefs such as Facebook's Mark Zuckerberg, and have faced little resistance from other platforms that have also been blamed for allowing hate and abuse to spread.
Some of the proposals have sparked concerns from free speech groups.
“What we’re proposing today is that companies that deal with user-generated content should take greater responsibility for keeping those users safe,” culture and media minister Jeremy Wright told BBC radio.
“These are world-leading proposals. No one in the world has done this before.”
The proposed regulations would see social media companies accept “duty of care” obligations that require them to identify and remove “online harms”.
Those that fail would first be issued warnings and then hit with progressively more punitive sanctions.
The government paper suggests that these include “the creation of new liability (civil fines or extended to criminal liability) for individual senior managers”.
The most serious sanction would see internet service providers block non-compliant websites and apps.
“This would only be considered as an option of last resort and deploying such an option would be a decision for the independent regulator alone,” the plan says.
The regulations would apply only in Britain and should have no immediate impact on users elsewhere in the world.
But they may prompt other governments to take notice and follow suit.
Wright suggested the fines would be substantial.
He noted that those available to European Union authorities implementing the General Data Protection Regulation (GDPR) rules on data privacy reached “up to four percent of a company’s turnover”.
“We think we should be looking at something comparable here,” Wright told BBC television.
Wright's office is navigating a minefield of problems in regulating an industry that largely operates outside the bounds of existing legislation, and whose harms are open to interpretation and in many cases remain undefined.
A joint letter sent by media executives to the British government in February stressed that legislation must be "technically possible to implement in practice... (and) be targeted at specific harms".
The government paper lists both “harms with a clear legal definition” and “harms with a less clear legal definition”.
The first include terrorist activity and a range of cyber-stalking and hate crimes.
The second includes disinformation and "violent content" more broadly.
Which category a given harm falls into would be determined by a new regulator, whose operations would be funded by the social media companies themselves.
– Proportionate response –
The social media boom was born in the spirit of a libertarian Silicon Valley ethos of innovation and non-interference from government.
But the industry is now facing a litany of dangers that range from the spread of state propaganda to promotion of teen suicide and — most graphically last month — the live broadcast of the slaying of 50 Muslim worshippers in New Zealand.
The techUK industry lobby group admitted Monday that platforms’ attempts at self-regulation have fallen short.
But it also urged the government to avoid “creating discrepancies in law between the online and offline worlds”.
“Government have said that this will apply to all user-generated content (and) not only to big tech firms,” techUK policy chief Vinous Ali said.
“If we are going to throw the net out so widely we need to make sure the proposed regulator takes a risk-based and proportionate response.”
Britain’s Article 19 free speech group also warned that the proposed legislation “could violate individuals’ rights to freedom of expression and privacy”.