Back in September 2017, the European Commission (EC) provided initial guidance on how to proactively detect, remove, and prevent the reappearance of illegal content that incites hatred, violence, and terrorism online, suggesting that internet companies need to invest in automatic detection technologies. It promised to monitor progress in tackling illegal material online and to assess whether additional measures were needed. On March 1, 2018, the EC finally released a recommendation setting out operational measures for companies to take in tackling illegal content online.
Though not mandatory, the EC's recommendations should still encourage a faster response to illegal online content
The EC has in the past indicated the possibility of introducing legislative measures to complement the existing regulatory framework for tackling illegal online content. In the meantime, however, on March 1, 2018, it released a recommendation setting out operational measures to be taken by companies. These measures apply to all forms of illegal content, including terrorist material, incitement to hatred and violence, child sexual abuse material, counterfeit products, and copyright infringement. Crucially though, the recommendations are not mandatory, so whether ISPs will voluntarily adopt them remains to be seen. It seems the EC still prefers to encourage platforms to self-regulate, so it has once again avoided threatening internet companies with fines or legal action.
Recommendations include clearer and more transparent rules for notifying and taking down illegal content, such as fast-tracked procedures for "trusted flaggers" and the ability for content providers to contest decisions. The EC has suggested introducing proactive technologies to detect and remove illegal material automatically to speed up the process. Safeguards such as human verification would need to be put in place to ensure that decisions to automatically remove content are accurate and in line with data protection and freedom of expression rights. The EC is encouraging the industry to share best practice and cooperate more to support small companies, which have limited resources to implement these recommendations. The measures also suggest that ISPs should work more closely with law enforcement authorities when a criminal offence has been committed.

In addition, the commission has recommended more specific provisions for online terrorist material, including a commitment to removing all such content within one hour of its referral, introducing more proactive and automated detection measures, and creating fast-track procedures for referrals. The EC has previously been reluctant to impose any timeframes on the removal of illegal content, so the one-hour rule for terrorist material is certainly a considerable step forward.
Up until now, illegal online material in the EU has been largely tackled through a combination of binding and non-binding measures. Various voluntary initiatives, such as the EU Internet Forum on terrorist content online, the Code of Conduct on Countering Illegal Hate Speech Online, and the Memorandum of Understanding on the Sale of Counterfeit Goods, have helped ensure illegal content is removed – internet companies are now removing approximately 70% of illegal hate speech notified to them. However, more still needs to be done, and in particular, ISPs need to respond much faster to referrals, which these recommendations should start to address.
The EC plans to launch a public consultation in 2Q18 and will continue to monitor the actions taken in response to the recommendation. To do this effectively, the commission has asked member states and ISPs to submit relevant information on terrorist material (including referrals and their follow-up) within three months, and on other illegal content within six months.
Sarah McBride, Analyst, Regulation