Social media, misinformation, and harmful algorithms: Science, Innovation and Technology Committee publishes report

The Science, Innovation and Technology Committee has published its report into social media, misinformation, and harmful algorithms, concluding that the UK’s online safety regime has “major holes” and is “already out of date”.

The Committee launched its inquiry in the wake of the riots in the UK last summer, and the Report is damning in its conclusions about social media’s role in those events, finding that “social media business models incentivise the spread of content that is damaging and dangerous, and did so in a manner that endangered public safety in the hours and days following the Southport murders”.

The Committee rejects the view that social media companies are platforms rather than publishers, arguing that this leads to an abdication of responsibility when it is the companies – and in particular their recommendation algorithms – that are curating and amplifying content. Whilst recognising that “this is a complex area of law and that defining social media companies as publishers would have major consequences”, the Report calls on the Government to set out its position on the matter, and for Ofcom to be empowered to fine social media companies up to 10% of their worldwide revenue where they fail to take appropriate measures to address significant harm on their platforms. It also argues that the Government should compel social media companies to give users a ‘right to reset’, which would remove all data stored by recommender algorithms, and to embed tools within their systems to “identify and algorithmically deprioritise fact-checked misleading content, or content that cites unreliable sources, where it has the potential to cause significant harm”.

The digital advertising industry also comes under criticism, as the Committee argues that it is neither willing nor able to self-regulate effectively and calls on the Government to create a new arm’s-length body to “regulate and scrutinise the process of digital advertising, covering the complex and opaque automated supply chain that allows for the monetisation of harmful and misleading content”.

Another subject touched upon in the Report is one that was hotly debated in the run-up to the passage of the Online Safety Act: legal but harmful content. Ultimately, efforts to regulate this material did not make their way onto the statute book, largely as a result of concerns about free expression, but the Report recommends that the Government should “introduce duties for platforms to undertake risk assessments and reporting requirements on legal but harmful content such as potentially harmful misinformation, with a focus on the role of recommender algorithms in its spread”.

Finally, the Committee laments the inability of the Online Safety Act 2023 to address the harms associated with generative AI and calls on the Government to “pass legislation that covers generative AI platforms, bringing them in line with other online services that pose a high risk of producing or spreading illegal or harmful content”.

The Report is available to read in full on the Committee’s website.