Policy Briefing, December 2021
The statutory regulator, Ofcom, could seek access to the algorithms organisations use to promote and amplify online content, to test whether this is being done in a responsible way. Under the current proposal, advertising is unregulated; this inconsistency should be ironed out, as leaving paid content outside the regime would allow bad actors to continue spreading misinformation or other harmful content through it.
The most sensitive area of the debate may concern the scope of harmful but not illegal content. Organisations will be held to account by Ofcom for delivering on the terms of service that they themselves define. They will be expected to spot harmful content as effectively as they reasonably can and to take adequate action in response. It may be unreasonable to expect every piece of harmful content to be removed immediately, but organisations might fail in their duty of care if they actively promote harmful content, or if they fail to notice and stop its amplification on their platforms. In relation to adults’ online safety, the proposed core duty is
This approach may protect freedom of expression and allow ethical organisations to lead the way, including in tackling harms to society such as COVID disinformation.
Actions intended to prevent harm can have unintended impacts. For example, the use of filtering and blocking to create safe environments for children and young people might also limit their developmental opportunities online. And although the roll-out of end-to-end encryption is viewed by many as an essential component of open societies and markets, it may also hide criminal activity and present obstacles to law enforcement.
Codes of practice are likely to be central to the new regime and will set expectations for services in scope. Ofcom will prepare specific codes relating to terrorism and child sexual exploitation and abuse. It will also propose one or more codes relating to:
These will likely be the subject of debate during the parliamentary process, especially as regards misinformation and disinformation and the concept of harms to society as opposed to harms to individuals. Notably, the bill gives organisations the flexibility to adopt alternative measures in line with their own risk assessments where they can justify doing so: this may be helpful where codes remain undefined or where they conflict with approaches taken in other jurisdictions.
Anonymity may exacerbate bullying, harassment, and intimidation and facilitate online behaviours which are harmful to individuals and to democracy. It is an established feature of some online environments and can be problematic, but it may not be the root cause of much harm. It can also afford protection to minority voices, such as LGBT+ communities, and can therefore support diversity. But victims of abuse often find it hard to identify who is behind attacks so that the police can act. Service providers will therefore have to get better at dealing with abuse, or insist that people put their names to what they say. Verification of users could be a key step in building healthier online cultures, and a digital identity service may be an important enabler that the government could put in place.
Ofcom regards improving the culture of the technology sector as central to its new role. Gathering evidence and shining a light on how effective company practices are is likely to be one of Ofcom’s biggest contributions in the first period of the regulatory regime. It sees organisational capacities and capabilities as critical, and does not underestimate the challenge of gathering the right information and applying the right skills and expertise to understand organisational systems and cultures. There may be a risk that competition rules deter companies from sharing information and cooperating on the protection of users. Specific guidance or, if necessary, exceptions to competition rules could facilitate cooperation on online safety, including seeking the views of children and young people.
Business organisations are at the front line of digital responsibility because they operate the Internet and have the capacity to act. They should be encouraged to take the lead and differentiate themselves through digital responsibility and positive social impact, enabled by ethical business cultures.
New approaches to evidence and oversight will be needed to navigate legitimate commercial confidentiality and significant information asymmetry. We envisage a role for trusted brokers, independent of business and government, that can support smart regulation and help organisations to demonstrate digital responsibility across multiple jurisdictions.
International and multi-stakeholder collaboration is vital to foster trust, safety and freedom online. It should focus on understanding how everyday harms are driven by the complex strategies, systems and processes that underpin the Internet, and seek to reveal how these drivers produce the symptoms people experience.
Key challenges include algorithmic accountability, the treatment of online content deemed harmful but not illegal, and the potential unintended consequences of filtering and encryption technologies.
Important opportunities include codes of practice, digital identity, and ethical business cultures.