The Internet Commission (as part of the Trust Alliance Group) welcomes the opportunity to respond to the call for inputs regarding the proposed application of Ireland’s First Binding Online Safety Code for Video-Sharing Platform Services.
In our response we have provided:
Trust Alliance Group is a not-for-profit private limited company established in 2002 which runs a range of discrete national Alternative Dispute Resolution (ADR) schemes across different sectors, including the Ofgem-approved Energy Ombudsman and the Communications Ombudsman, approved by Ofcom.
Our purpose is to build, maintain and restore trust and confidence between consumers and businesses, and we are developing diverse capabilities and expertise in a range of areas, including digital alternative dispute resolution and case management technology.
The Internet Commission – a non-profit organisation which promotes ethical business practice to counter online harms, protect privacy and freedom of expression, and increase platform accountability – was acquired by the Trust Alliance Group in 2022.
The Internet Commission offers:
The Internet Commission currently works at the intersection of digital safety and complaints handling: the EU Digital Services Act’s Article 21 provision, which introduces out-of-court dispute settlement bodies into the user redress process.
Our comments on this consultation draw on our experience of evaluating global online service providers across different online services. Our insight comes from careful study of the procedures, resources, governance and organisational culture driving user-generated content (UGC) moderation. Our research has explored critical challenges faced by service providers such as:
Specifically, we share evidence from our evaluation of a diverse cohort of online services, including two dating service providers, a gaming service provider, a live-streaming gaming service provider, a news services organisation, and a children’s social media service provider. We retain a focus on procedural accountability: consumer outcomes, particularly for vulnerable communities, are best served by ensuring that processes and procedures are evaluated, and we use this information to identify emerging trends and issues. Being proactive in this fast-moving space is key, and our approach allows us to flex with market requirements.
Our independent evaluation looks “under the hood” at the processes, culture and technology that shape content moderation, and offers industry benchmarks both UK-wide and internationally.
Moreover, in light of emerging legislation and its implementation across the globe, we continue to support businesses that aim to go beyond regulatory compliance and promote best practice, driving a race to the top. By setting these standards, we enable companies to demonstrate their commitment to protecting their customers.
We are uniquely positioned to support video-sharing platform service providers and their users where providers intend to use mediation by an independent mediator to resolve disputes arising from user complaints about action taken, or not taken, in response to the Code.
We would welcome the opportunity to further explore our work and findings with the Coimisiún at any time.
We agree with the proposal to include user-generated content that is indissociable from user-generated videos in the definition of content. On one level, the form of harm differs between, for instance, audiovisual content and the caption or comments attached to it: a video may provide visual aids that make, for example, incitement to hatred more vivid and accessible to its target audience.
However, two examples justify the inclusion:
We believe it is important that providers offer measures to cater for the often-indistinguishable nature of these harms.
In relation to the terms and conditions obligations, concerning restrictions on the upload of content, it is pragmatic to highlight the difference between children and the general public. Treating all uploaded content at the threshold which should apply to children would constitute a restriction of the freedom of expression under Article 10 of the European Convention on Human Rights. It is noted that the explanation of Section 3.4.1 utilises principles from the case law of the European Court of Human Rights to differentiate between content which does (or does not) contribute to civic discourse.
Regarding the two protective measures for providers sharing pornographic content, challenges remain in preventing users from circumventing such systems. As the major pornographic websites currently use an easily avoidable age checkbox, or age-gating system, a regulatory tightening of this system across the industry is commendable. Our Accountability Report 2.0 offers some insights on age verification. In the context of dating platforms, our evidence demonstrates that this works well where additional data is shared by the user and can be cross-referenced against the age originally provided.
The Code’s requirements on the reporting and flagging of content are well constructed and designed to positively impact users.
In our previous response, we referenced extensive data gathered through our Accountability Reports. Part of this work focused on an organisation’s reduction in response times to flagged content (over one year) as indicative of improvement, and we support the Coimisiún’s requirement that VSPS providers set and meet targets for timeliness of response. We also recognise the value of proportionality and the determination not to be overly prescriptive, accounting for the breadth and variety of services and complaints.
However, we would recommend that the Coimisiún set out definitions and/or processes for determining the categorisation of types of harms and associated complaints, against which targets can be set. In this way, a number of benefits may be obtained. For example:
During one of our assessment cycles, we noted that there were significant differences in the time taken to respond to, for example, two categories of flags concerned with harm experienced by children – the names of which indicated little distinction. It would be our recommendation that the Coimisiún provide the guidance set out above to mitigate the risk of technicalities getting in the way of effective risk mitigation for the most vulnerable users. This would not undermine the flexibility of the approach of setting targets for response times, but rather strengthen the ability of the Coimisiún to assess its effectiveness while producing better outcomes for users.
We would also argue that consistency and a standard baseline of understanding on the back end should lead to greater uniformity on the front end – positively impacting user experience by ensuring that users are not obstructed or put off from reporting by an unfamiliar set of reporting types or processes. This goes hand in hand with the Coimisiún’s user-friendliness provisions, particularly concerning the use of default options for distinct kinds of harmful or illegal content and/or harmful audiovisual commercial communications on the service. This feature would be a positive step toward standardisation of reporting experiences for users, which can be expected to enhance usability and support the development of user competency when navigating such tools.
We are encouraged by the inclusion of the requirement that services a) tailor their notifications appropriately for different forms of harmful or illegal content and/or harmful audiovisual commercial communications; and b) state the reasons they believe the content is harmful or illegal content and/or harmful audiovisual commercial communications. These requirements chime with some of the evidence the Internet Commission submitted to the Call for Inputs.
The tailoring of a notification is vital to ensuring a sense of mutual accountability and trust between user and service, as is the furnishing of reasons. The latter is vital in empowering users to be sufficiently informed to follow a complaint to its conclusion, understanding what assessment is being undertaken, what outcomes they might expect and how they can appeal an outcome.
We restate our view that vital to the effective functioning of a reporting system is its integration with enforcement and appeals systems. Our reporting shows that service providers often assemble their flagging, reporting, moderation, oversight, and appeals systems in a piecemeal way: building as and when they have the resources or where there is pressing demand. This leads to fragmentation between policies, procedures and systems, and a lack of clarity on the journey of a complaint as it is progressed. Users need to be able to understand what activity causes a particular enforcement action – to understand where they went wrong – and be able to appeal if necessary. This also has impacts for moderation staff, who must spend time checking across two systems to validate an appeal. A disconnected approach can, and often does, lead to questionable or erroneous moderation decisions.
It would also be of great benefit to users (and the emotional resilience of moderation staff) if expectations were placed on services to effectively signpost users to mental health support, where needed. We are aware of at least one service provider that has partnered with a mental health service to provide this support. In this instance, users may text the name of the organisation to the mental health service provider to be connected with a counsellor immediately. This is an example of best practice and while this level of support may not be achievable for smaller services, we would recommend that the Coimisiún incorporate mental health signposting as a measure under the requirements for notifications to users.
We agree with requirements to establish and operate age verification systems for users of video-sharing platforms with respect to content which may impair the physical, mental, or moral development of minors. We also believe that the Coimisiún’s approach in not prescribing the method to be used - instead, requiring its effectiveness - is a positive one that leaves room for innovation, without allowing providers to evade their duties.
In the text, the Coimisiún notes that, “mere self-declaration of age is not regarded as an effective age verification technique,” despite self-declaration being listed under Sections 11.16-11.21 of the Code. To align with other regulatory texts referring to such techniques, and to avoid confusion, it would be more appropriate to group the list of techniques under ‘age assurance measures,’ which can include both estimating and verifying.
The Coimisiún also proposes the use of age estimation techniques to ‘verify’ a self-declaration of age through behavioural or biometric analysis. There are a number of issues with this:
On this last point, we stress the importance of ensuring that effective and robust age verification is conducted in line with data protection requirements, in particular where it relates to Article 8 and parental consent. We would suggest that the Coimisiún consider the significance of the links between its age verification measures and its parental control measures. Empowering parents to reliably verify parental responsibility online is key to enabling them to provide consent for the processing of their child’s data for age assurance purposes, in line with GDPR, and to ensure the appropriate adult has access to the parental control measures put forward in the Code.
As set out in Section 1 of our response, we believe we are well-placed to discuss the complaints requirements in the Code. Our views in response to this question are also applicable to question 20 on the ‘reporting of complaints’.
Appeals and complaints handling is an indispensable facet of online safety, as we outlined in our (pre-DSA) Accountability Report 2.0 in 2022. One of our key findings was that the way organisations communicate moderation decisions, apologise for incorrect decisions and build transparent appeals processes illustrates the extent to which ethical considerations are embedded in their operations. One of our partner companies (a livestreaming platform) implemented an effective approach to complaints and redress by establishing an apology mechanism for users found, via the appeals process, to have been wrongfully banned. Communicating with users in this way promotes a shared sense of accountability. In pursuit of further transparency, users were also sent (pre-written) emails concerning the progress of their appeal. This will soon be supplemented with a dashboard for appeals, containing suspension-specific updates.
Our report also highlighted one of our organisations falling short regarding their complaints and appeals process. This organisation had not integrated its enforcement and appeals system, meaning users could not connect an appeal with a specific enforcement action and moderation staff were forced to check across two systems to validate the appeal. This disconnected approach had negative impacts for both users (who struggled to appeal enforcement action) and moderators (who were subjected to laborious tasks which slowed response times) - creating the risk that questionable and incorrect moderation decisions would go unchallenged. This point demonstrates the importance of an accessible, easy-to-use system which does not deter users from challenging a platform’s decision simply because to do so is an onerous task.
We are increasingly of the view that user access to impartial Digital Dispute Resolution is the missing piece of the puzzle with regard to making online experiences safer. Provision for the establishment of such a service is made under Article 21 of the European Union’s Digital Services Act.
Our experience operating Alternative Dispute Resolution services in energy and communications markets leads us to believe that access to such a provision in the digital space could offer:
While regulation, guidance and oversight can set the standards by which the market should operate, first-hand evidence of actual user experience will not be captured and consumers will remain unable to challenge final decisions made by providers, even if they are incorrect. For context, the Energy Ombudsman upholds consumer complaints approximately 70% of the time – showing that, even in a highly regulated market, erroneous decisions are made.
TAG is developing our thinking and evidence base with regard to the provision of Digital Dispute Resolution, and we will share this with the Coimisiún at the earliest opportunity.
As has been highlighted at both European and Irish level, media literacy is a key tool in the overall objective of enhancing the safety of the digital space, particularly regarding online harms on digital platforms. Our Accountability Report 2.0 contained several notes on the theme of user empowerment and enhancing media literacy as a means of empowering users. As outlined in the Code, a focus on media literacy helps provide the foundation upon which companies can take this approach in a form adjusted to their platform’s unique features. Further, given the lacuna in European law (due to constitutional limitations) and Irish law regarding mandatory media literacy measures, we agree that it is appropriate to “provide high-level obligations, elaborated by statutory guidance materials, and there should be a requirement to be transparent about the actions taken and their impact”.