Ovum view

Summary

On July 2, 2019, the German Federal Office of Justice (BfJ) imposed a fine of €2m ($2.26m) on Facebook for failing to list all the complaints it received about illegal content on its platform in its transparency report for the first half of 2018. Facebook is accused of reporting complaints only selectively, and its underreporting of hate speech complaints is a clear violation of Germany's transparency law, known as the NetzDG. However, Facebook may choose to appeal the decision.

Several other countries have looked to introduce laws similar to the strict NetzDG to protect users from harmful online content

The NetzDG, which came into effect on January 1, 2018, requires all platforms with more than 2 million users, including Facebook, Google, and Twitter, to remove posts that contain hate speech or incite violence within 24 hours or face fines of up to €50m ($56.5m). Social media companies are also required to file transparency reports on their progress every six months. However, professional networks such as LinkedIn and Xing are expressly excluded from the law, as are messaging services such as WhatsApp. The NetzDG is among the strictest online hate speech laws in the world, outlawing hateful or violence-inciting speech directed at persons or groups on the grounds of religion or ethnicity.

The law has its critics: some have branded it a censorship law, while several large platforms have complained that it lacks clarity. Another criticism is that companies simply delete offending posts without informing the authorities, meaning that the users who post them go unpunished (unless a user threatens to commit a violent crime or posts child pornography, in which case the company is obliged to pass the details on to the authorities).

One of the aims of the law is to make it easier for users to report violations, and the BfJ has made an online form available for this purpose. Google has created its own online form for reporting content, while Twitter has added an option to its existing report function specifying that a post "comes under the NetzDG." Facebook has created a far more complex system, which is currently independent of its other reporting options: users must find a specific webpage, take a screenshot of the offending post, and select which of 20 offenses they believe the post commits. The BfJ has criticized Facebook for overcomplicating the process; it is far easier for users to flag posts they consider to violate the site's softer community standards than posts they consider to violate the NetzDG.

Facebook is accused of including only NetzDG complaints in its report to the BfJ, rather than all the complaints received via the platform's usual flagging channel, with the result that the company cited just 1,704 complaints for the first half of 2018. This figure is far below the 215,000 complaints YouTube reported for the full year and the 265,000 Twitter reported. It is understandable that this huge discrepancy resulted in a fine from the BfJ for publishing incomplete data.

As Ovum's OTT Regulation Tracker: 1H19 shows, this is not the first time this year that large platforms such as Facebook have come under fire over hate speech. Several other EU countries have looked to introduce laws similar to Germany's. The Irish government has launched a public consultation on the regulation of harmful content, aiming to introduce new online safety laws that would apply to platforms such as Facebook; it intends to give a regulator the powers necessary to ensure that harmful content can be removed quickly from online platforms. In France, the government is debating legislation that would give a new regulator the power to fine tech companies up to 4% of their global revenues if they do not do enough to remove hateful content from their networks. In the UK, the government has proposed a "duty of care" for platforms to make companies take more responsibility for the safety of their users. These moves are likely to be just the start: over the coming months and years, many more countries are expected to introduce their own laws.

Appendix

Further reading

OTT Regulation Tracker: 1H19, GLB005-000168 (July 2019)

"Ireland attempts to regulate harmful online content without impacting freedom of speech," GLB005-000166 (June 2019)

"In a world first, social media platforms are to be subject to a legal 'duty of care' in the UK," GLB005-000149 (April 2019)

Author

Sarah McBride, Analyst, Regulation

sarah.mcbride@ovum.com
