UK ‘Online Harms’ Bill: at Best Naive, at Worst Authoritarian Censorship and a Cynical ‘Money-Grab’


Yesterday, the UK parliament discussed a proposed ‘Online Harms’ Bill set to be voted on in 2021. The bill proposes granting the Office of Communications (OFCOM) powers to issue enormous fines, alongside powers to block online access to services deemed to be failing to protect vulnerable people.

OFCOM would be granted powers to fine a company up to either 10% of its annual global turnover or £18 million (whichever is higher) if the company is deemed to be failing in its ‘duty of care’ to protect ‘vulnerable users’. Supporters of the bill believe it will aid in “protecting children and vulnerable people” from content that is perceived as harmful.

The bill would apply to any company that hosts user-generated content accessible to UK viewers, with the exception of comment sections of news websites and small business product or service reviews. 

This could result in technology companies receiving ludicrously large fines alongside requirements to publish audits if standards are not met. For example, based on their most recent earnings reports, Facebook could face fines of up to $7.1 billion and YouTube’s owner Google up to $16.1 billion.
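As a rough sketch, the fine ceiling described above, the greater of 10% of annual global turnover or a flat £18 million, can be expressed as a simple calculation (the turnover figures below are illustrative inputs, not official company data):

```python
def max_fine_gbp(annual_global_turnover_gbp: float) -> float:
    """Maximum fine under the proposed bill: the greater of
    10% of annual global turnover or a flat £18 million."""
    return max(0.10 * annual_global_turnover_gbp, 18_000_000)

# A firm with £100m turnover: 10% is only £10m, so the £18m floor applies.
max_fine_gbp(100_000_000)     # 18_000_000
# A firm with £50bn turnover: the 10% figure dominates.
max_fine_gbp(50_000_000_000)  # 5_000_000_000
```

This is why the headline figures scale so dramatically for the largest platforms: for any company with turnover above £180 million, the 10% term dominates the flat minimum.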

The bill does not introduce criminal prosecutions for senior executives as proposed by children’s charities such as the NSPCC and as introduced in a comparable bill in Ireland.

The proposed bill introduces a novel element to pre-existing legislation. Technology companies are to be legally compelled to regulate content that is currently considered “legal but harmful”, such as images depicting self-harm or promoting eating disorders.

The main proponent of the bill, the Secretary of State for Digital, Culture, Media and Sport, Oliver Dowden, reassured critics that the proposed bill should not prevent adult users from accessing legal content.

Dowden states the proposed legislation seeks only to remove content portraying child abuse or self-harm, extremism, and fake news. Although these are certainly admirable causes, there are many legitimate concerns with the proposed legislation.

In his defence of the bill, he states that technology companies will not be the ones defining what is fake news, as this will be done by OFCOM. The bill will also give content creators the ability to appeal to OFCOM if they feel their content has been removed unjustly.

‘Illegal Is Not Working, Make It Double-Illegal’

The bill demands that technology firms take action against child abuse imagery and extremist content, which are already illegal irrespective of whether this bill is passed into law. Therefore, the proposed legislation does not appear to aid in the removal of this content; it merely introduces greater fines for inadvertently hosting it.

It is unclear whether increasing fines to such a large figure will provide any tangible improvement in a company’s ability to accurately remove this content. As manual moderation is simply not possible at the scale of hundreds of millions of posts every day on the larger websites, companies will have to rely on algorithms instead.

Algorithmic moderation has been heavily criticised as a blunt tool that too often removes legitimate, law-abiding content. To avoid the extremely large fines in the proposed bill, companies may well have to make these tools even blunter, as the cost of leaving such content visible will become far higher.

A video on YouTube discussing the problems of Islamic extremism may well be removed, as an algorithm might be unable to distinguish it from actual extremist content. There are innumerable examples of this already occurring. Expanding this approach further may well exacerbate users’ current frustrations to the point where they abandon these sites altogether.
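As a hypothetical illustration of why automated filters are blunt (this is not any platform’s actual system), consider a naive keyword-based moderator: it cannot tell a critique of extremism from extremist content, because both contain the same trigger words.

```python
# Hypothetical keyword filter: flags any text containing a blocked term,
# with no understanding of context or intent.
BLOCKED_TERMS = {"extremism", "extremist"}

def is_flagged(text: str) -> bool:
    words = {word.strip(".,!?").lower() for word in text.split()}
    return not BLOCKED_TERMS.isdisjoint(words)

is_flagged("A documentary examining the causes of extremism")  # True
is_flagged("Recruiting for our extremist movement")            # True
is_flagged("A video about gardening")                          # False
```

Real moderation systems are far more sophisticated than this, but the underlying failure mode is the same: the larger the financial penalty for a false negative, the more a platform is incentivised to tolerate false positives like the first example.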

The bill received much criticism in the parliamentary discussion for failing to target online scams or fraud, which many believe to be some of the most serious issues with current online communication. This criticism was levelled even by those who otherwise supported the bill, suggesting that this may be a significant oversight by the government. 

It could be argued that the bill deliberately sets standards that are impossible to meet. Given the sheer volume of user-generated content on popular media sites, no tech company can feasibly police all of it, whether manually or with imperfect algorithmic moderation. On this view, the bill simply uses these issues as a pretext to extract a large source of revenue in fines from companies that many allege otherwise pay little tax to the state.

Legal But Harmful: A Dangerously Vague Category

The discussion of fake news in parliament focused exclusively on anti-vaccination content, but it raises questions about what would and would not qualify as fake news. Who is a legitimate enough authority to define what is fake? The bill proposes that OFCOM will make these judgements, but it is unclear on what basis, which may be cause for concern.

The precedent set by enforcing the removal of “legal but harmful” content is also concerning. Although many companies have pre-existing policies to remove such content voluntarily, they would now be legally obligated to do so, despite the content itself being legal.

The examples given of “legal but harmful” content, such as images depicting self-harm or promoting eating disorders, often touch on legitimate concerns. However, the scope of this new obligation is far too broad: what counts as ‘harmful’ is highly subjective.

Should the government even be concerned with shielding its citizens from content that is psychologically rather than physically harmful? Would content educating the public about these harms be removed too? It is as yet unclear.

End of Internet Privacy

Perhaps the most egregious proposal of all is to mandate that companies monitor end-to-end encrypted conversations. This provision demands that tech firms police encrypted content shared privately on their platforms. End-to-end encryption ensures that only the communicating parties hold the keys to a conversation, so no third party, not even the platform relaying the messages, can read the data while it is being transferred from one user to another.

Currently, end-to-end encryption is widely used to ensure online privacy, with WhatsApp among the best-known services to use it. However, this bill would see the end of internet privacy, with companies given strong financial incentives to monitor all private communications without the monitored individuals’ awareness.
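As a toy sketch of the principle (a one-time pad, not the actual protocol WhatsApp uses, which is the far more sophisticated Signal protocol): the platform relays only ciphertext, and without the key shared between the two users it cannot recover the message. Monitoring the content therefore requires breaking or bypassing the encryption itself.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte."""
    return bytes(a ^ b for a, b in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # known only to the two endpoints

ciphertext = xor_bytes(message, key)   # all the platform ever sees
decrypted = xor_bytes(ciphertext, key)
assert decrypted == message  # only key-holders can read the message
```

The point of the sketch is structural: if the relay never holds the key, there is nothing for it to “monitor” except unreadable bytes, which is precisely why critics argue the bill’s demand is incompatible with end-to-end encryption as currently deployed.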

The concerns about the bill’s infringement on internet privacy were also voiced by Alistair Carmichael, a Liberal Democrat, who pointed out the dangerous precedent of invading internet privacy set by the bill.

The aforementioned main proponent of the bill, Oliver Dowden, acknowledged the concern by stating that he understood the value of anonymity online but sought to remove it in some key areas where he believed it was causing the greatest harm. 

Conclusions

As the bill is set for debate at an indeterminate date in 2021, it is unclear which aspects of the proposed bill will be kept and which will be changed by the time it is voted on in parliament.

The proposed bill has been drafted to appear to address legitimate concerns, but what is actually being demanded of technology companies is neither feasible nor practical. There is little evidence to suggest that increasing fines or placing legal obligations on companies to remove ‘legal but harmful’ content will make them any more able to accurately remove ‘dangerous’ content. Equally, granting OFCOM greater powers to define what is ‘fake’ and what is ‘real’ news crosses into dangerous territory, where a government agency starts, in effect, defining what is true and what is not.

Limiting internet anonymity is perhaps the most harmful aspect of the bill, as it orders companies to monitor private conversations. The only potentially positive change the bill proposes is allowing those whose lawful content is unjustly removed to appeal to OFCOM for redress. However, even the most charitable view of this bill is that it takes a naive and heavy-handed approach to safeguarding vulnerable people online. Hopefully the criticism of the worst elements of this bill will be taken into account by the time it is put before the House of Commons in 2021.
