Committee launches inquiry into effects of social media algorithms, Generative AI and harmful online content

The Science, Innovation and Technology Committee has launched an inquiry to investigate the links between social media algorithms, generative AI, and the spread of harmful and false content online.

The inquiry is a response to the demonstrations and riots earlier this summer which, in the words of the Committee, “are believed to have been partially driven by false claims spread on social media platforms about the killing of three children in Southport”. The inquiry will consider “the role of false claims, spread via profit-driven social media algorithms, in the summer riots. It will also investigate the effectiveness of current and proposed regulation for these technologies, including the Online Safety Act, and what further measures might be needed”.

The Committee has launched a Call for Evidence to assist the inquiry’s work, welcoming submissions on the following questions:

    1. To what extent do the business models of social media companies, search engines and others encourage the spread of harmful content, and contribute to wider social harms?
    2. How do social media companies and search engines use algorithms to rank content, how does this reflect their business models, and how does it play into the spread of misinformation, disinformation and harmful content?
    3. What role do generative artificial intelligence (AI) and large language models (LLMs) play in the creation and spread of misinformation, disinformation and harmful content?
    4. What role did social media algorithms play in the riots that took place in the UK in summer 2024?
    5. How effective is the UK’s regulatory and legislative framework on tackling these issues?
    6. How effective will the Online Safety Act be in combatting harmful social media content?
    7. What more should be done to combat potentially harmful social media and AI content?
    8. What role do Ofcom and the National Security Online Information Team play in preventing the spread of harmful and false content online?
    9. Which bodies should be held accountable for the spread of misinformation, disinformation and harmful content as a result of social media and search engines’ use of algorithms and AI?

Commenting on the launch of the inquiry, the Chair of the Committee, Chi Onwurah MP, said: “We shouldn’t accept the spread of false and harmful content as part and parcel of using social media. It’s vital that lessons are learnt, and we ensure it doesn’t fuel riots and violence on our streets again. This is an important opportunity to investigate to what extent social media companies and search engines encourage the spread of harmful and false content online. As part of this, we’ll examine how these companies use algorithms to rank content, and whether their business models encourage the spread of content that can mislead and harm us. We’ll look at how effective the UK’s regulations and legislation are in combatting content like this – weighing up the balance with freedom of speech – and at who is accountable”.

The deadline for responding to the Call for Evidence is 18 December 2024; more information is available on the Committee’s inquiry webpage.