Policy Briefing, December 2021
This briefing draws on discussion at the Internet Commission’s EU Policy Roundtable in October 2021, which brought together leading Brussels policymakers to explore the new regulatory obligations of the EU Digital Services Act (DSA) and to review the opportunities and challenges for corporate accountability. [1]
Opacity and information asymmetry have made it difficult to fight illegal content online, including hate speech, CSAM and fraud. Regulators and law enforcement struggle to identify perpetrators, lack insight into content moderation systems, and have incomplete evidence about the scale of illegal content. By obliging companies to undertake risk assessments and demonstrate the steps they are taking, the DSA aims to increase transparency and accountability in the field of content moderation.
Alongside making sure illegal content is removed, a parallel objective is to ensure that legal content remains online. Overzealous notices and automatic takedowns have led to over-blocking, in some cases creating a chilling effect on freedom of expression and access to information. It is important to ensure that removals can be challenged and that complaint and redress mechanisms are fair. It may be necessary to incorporate a human rights impact assessment, including children’s right to participate in an age-appropriate online environment [2].
The DSA aims to increase oversight of algorithms, including recommendation systems and malicious advertising practices that push extreme content and amplify misinformation. Auditing will play a key role in this, as will the continued evolution of co-regulatory mechanisms such as the EU Code of Practice on Disinformation, which was strengthened in 2021 [3].
Such mechanisms may require greater scrutiny, with a wider range of actors from academia, civil society and media being invited to engage with them.
Implementation issues have plagued GDPR enforcement, with EU privacy complaints piling up at the Irish Data Protection Commission in Dublin. Member States have therefore moved to agree to oversight by the European Commission rather than by the authorities in Ireland [4].
Even so, securing the right resources and access to expertise will be crucial both for regulators and for in-house compliance teams. An independent enforcement authority might be needed, and it is not clear whether any existing body has the diverse range of competencies required to tackle the full scope of the DSA.
To change behaviour effectively, and to ensure regulation is future-proof, softer measures may need to be strengthened. The DSA obliges very large platforms to develop codes of conduct, which would turn self-regulatory instruments into co-regulatory ones, but will this be enough? An open and inclusive multi-stakeholder approach may be needed to prevent the development of such codes and norms from side-lining the democratic process. Aligning codes of practice and evaluation frameworks across jurisdictions should enable platforms to comply with multiple regulatory requirements more easily. Voluntary schemes, initiatives and reporting may be the most effective way to prevent companies from starting on an antisocial trajectory in the first place. Even so, fines may be necessary to encourage companies to uphold the standards that they themselves have established.
The DSA requires very large online platforms to conduct risk assessments and demonstrate mitigation measures. Further debate is needed about how this assessment works and who decides which risks are to be assessed. Who sets the audit questions? Presumably not the organisations that are themselves being audited. Framed well, DSA audits can be consistent with a risk-based approach, allowing regulators to focus on less developed platforms while amplifying successful and innovative practices in a “risk plus benefits” model. But assembling an auditor community that can hold the largest Internet companies to account will require significant capacity building.
The identification and independent evaluation of trust and safety practices both benefits from and supports the development of best-practice benchmarks. Such benchmarks can inform standards development and give direction to organisations seeking to improve. They can also evidence and support the ethical business cultures that differentiate the world’s best organisations. As part of a multi-stakeholder collaboration, the Internet Commission is developing a taxonomy of the technical, AI and human practices that digital organisations use to achieve ethical outcomes and counter online harms.
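For illustration only, the sketch below shows how such a taxonomy might be represented in machine-readable form, with practices classified along the technical, AI and human dimensions mentioned above. The schema, category names and example entries are hypothetical assumptions, not the Internet Commission’s actual framework.

    from dataclasses import dataclass, field
    from enum import Enum

    class PracticeType(Enum):
        # Hypothetical top-level split, mirroring the briefing's wording.
        TECHNICAL = "technical"  # e.g. hash-matching, URL blocklists
        AI = "ai"                # e.g. machine-learning classifiers
        HUMAN = "human"          # e.g. moderator review, user appeals

    @dataclass
    class Practice:
        # One taxonomy entry; the fields here are illustrative assumptions.
        name: str
        practice_type: PracticeType
        harms_addressed: list = field(default_factory=list)
        ethical_outcomes: list = field(default_factory=list)

    taxonomy = [
        Practice(
            name="Hash-matching of known CSAM",
            practice_type=PracticeType.TECHNICAL,
            harms_addressed=["CSAM"],
            ethical_outcomes=["removal of illegal content"],
        ),
        Practice(
            name="Human review of appealed takedowns",
            practice_type=PracticeType.HUMAN,
            harms_addressed=["over-blocking"],
            ethical_outcomes=["fair redress", "freedom of expression"],
        ),
    ]

    # Group practice names by type, as a benchmark report might.
    by_type = {}
    for p in taxonomy:
        by_type.setdefault(p.practice_type.value, []).append(p.name)
    print(by_type)

A structure of this kind would let benchmark reports be generated consistently across organisations, which is one way the alignment of evaluation frameworks discussed above might be supported in practice.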
Sources:
[1] Internet Commission EU Policy Roundtable: http://inetco.org/eurt21
[2] Digital Futures Commission: https://bit.ly/3rIUWV7
[3] Code of Practice on Disinformation: https://bit.ly/3lLff0v
[4] Politico: https://politi.co/3D4g76h
Want to learn more? We’re happy to help.