
Ovum view

Summary

Regulators around the world are developing new approaches to tackling harmful online content, but none has so far established a regulatory framework that addresses a broad range of harms. On April 8, 2019, the UK government released the Online Harms White Paper, which aims to be the first to achieve this. The proposed regulatory framework sets out clear standards to help companies tackle harm caused by online content. The government will establish a new statutory "duty of care" to make companies take more responsibility for the safety of their users, and compliance with this duty will be overseen and enforced by an independent regulator. The white paper is open for consultation until July 1, 2019.

Regulators must strike a balance between ensuring the safety of users and protecting freedom of expression

On April 8, 2019, the UK government confirmed that it will introduce a mandatory "duty of care" for social media and tech firms. It has published the Online Harms White Paper, which proposes new online safety laws. The paper is under consultation until July 1, 2019, and comprises legislative and nonlegislative measures that will make companies more responsible for their users' safety online – a responsibility from which they have so far been largely exempt. It is crucial, though, to strike the right balance between ensuring the safety of users and protecting freedom of expression. Introducing too many restrictions can, in fact, be detrimental to users.

The aim is for the UK to become the safest place in the world to be online, and the white paper is a world first in attempting to ensure that giant tech companies no longer allow unacceptable activities to continue on their platforms. The emergence of online platforms has delivered many social and economic benefits to users, while also posing several challenges to areas such as consumer protection, competition, illegal/harmful content, and privacy. Understandably, this has attracted the interest of regulators around the world, and they have started to identify relevant issues and attempt to devise measures to tackle these. There is already a range of regulatory and voluntary initiatives aimed at addressing harmful content in the UK, but these have not gone far or fast enough – or been consistent enough between different companies – to keep users safe online.

As the digital environment continues to develop, it is becoming increasingly important that the industry takes greater responsibility for the content on its networks. Regulation governing illegal activity does already exist in the UK; however, it can at times be difficult to effectively transpose such legislation to the platform environment. Under the new regime, online companies must take reasonable steps to keep users safe and tackle illegal or harmful activity. Compliance with this duty of care obligation will be overseen by an independent regulator, which will have enforcement powers such as the ability to impose fines or block noncompliant services. The regulator will have the power to require annual transparency reports from companies outlining the prevalence of harmful content on their platforms and what measures they are taking to address this. These reports will be published online by the regulator. The regulator will also have powers to require additional information, including information about the impact of algorithms in selecting content for users, and to ensure that companies proactively report on harms.

Rather than targeting specific harms, the priority will be the systems and processes that companies must put in place to ensure compliance. This creates a degree of flexibility in the regulation and helps make it future-proof. The evolving nature of platforms means approaches need this kind of flexibility to be successful. Companies will be held to account for tackling a broad range of online harms that fall into two categories:

  • illegal activity, such as activity that threatens national security or the physical safety of children, including terrorist propaganda, material inciting violence, and online disinformation designed to undermine democratic values and principles
  • harmful content that is not necessarily illegal but is used to harass, bully, or intimidate people.

The consultation will look at the online services in scope of the regulatory framework; options for appointing an independent regulatory body to implement, oversee, and enforce the new regulatory framework; the enforcement powers of the regulatory body; the redress mechanisms for online users; and measures to ensure regulation is targeted and proportionate for the industry. The white paper proposes that the regulatory framework be applied to companies that allow users to share or discover user-generated content or interact with each other online. These services are offered by a range of companies including social media platforms, file-hosting sites, public discussion forums, messaging services, and search engines.

There had been calls for online companies to be treated as publishers and held legally responsible in the same way for the content on their sites. However, it would be unworkable to make them directly liable for everything on their sites, as this would result in too many restrictions for users. Instead, the UK government's new rules are essentially designed to address the unintended consequences of the platform model.

Singapore, Australia, and Germany have already attempted approaches similar to the UK's to tackle harmful content, but none has been as broad in coverage, and some measures have been reactive. The Australian government, for example, is planning a bill that will oblige only social media operators to quickly remove violent videos and photos from their platforms. This follows the March 15, 2019, shootings at two mosques in New Zealand. The alleged Australian gunman is believed to have live-streamed the shootings on Facebook, and the video was shared on YouTube, Twitter, and other social media platforms. The new law would introduce penalties for platforms that fail to quickly remove material showing terrorism or other forms of violence. They could face fines of 10% of their annual revenue, and executives could receive up to three years in prison. The operators could also face fines if they fail to notify police once they are aware that their service is being used to broadcast extreme violence taking place in Australia. The Australian government has also asked Japan, as host, to put tighter regulation of social media on the agenda for the G20 summit, which will be held in Osaka in June 2019. Discussions around this topic are therefore likely to heat up over the coming months and rise up the agenda for many more regulators.

Appendix

Further reading

The Regulatory Environment for Platforms, TE0007-001003 (March 2016)

OTT Regulation Tracker: 2H18, GLB005-000105 (January 2019)

Author

Sarah McBride, Analyst, Regulation

sarah.mcbride@ovum.com
