UK MPs plan to summon Elon Musk to testify about the role of X (formerly Twitter) in spreading disinformation. The testimony would form part of a parliamentary inquiry into the UK riots and the rise of false and harmful AI-generated content, The Guardian has revealed. Senior executives from Meta, the parent company of Facebook and Instagram, and from TikTok are also expected to face questioning as part of the Commons science and technology select committee’s inquiry into social media.
The hearings, set to begin next year, reflect growing concern that the UK’s online safety laws are lagging behind rapidly advancing technologies. Platforms like X are under scrutiny for their role in facilitating the spread of misleading and harmful content. The inquiry will focus on generative AI, which contributed to the dissemination of false images inciting Islamophobic protests following the tragic Southport schoolgirl killings in August. Additionally, MPs will examine how Silicon Valley’s business models encourage the spread of harmful content.
Chi Onwurah, Labour chair of the select committee, expressed her interest in questioning Musk directly. “He has strong views on freedom of expression versus disinformation. It’s critical to understand how he reconciles these views,” she said. Musk, who owns X, was notably absent from the UK government’s international investment summit in September. Onwurah added, “I’d like to make up for that by inviting him to attend our inquiry.”
Former Labour minister Peter Mandelson, expected to become the next UK ambassador to Washington, also urged an end to the strained relations between Musk and the UK government. “He is a technological and commercial phenomenon. Ignoring him would be unwise,” Mandelson stated during a podcast interview. Musk, however, has been vocal in his criticism of the Labour government, recently comparing changes to inheritance tax policies to “Stalinist” actions.
X did not respond to inquiries about whether Musk would testify, though his attendance appears unlikely. The world’s richest man is reportedly preparing for a senior role in the Trump administration and, during the Southport-related riots, claimed that “civil war is inevitable” in the UK.
The inquiry also comes amid widespread dissatisfaction with X, which has prompted millions of users to migrate to Bluesky. Many users are leaving X over misinformation, the reinstatement of controversial figures such as Tommy Robinson and Andrew Tate, and new terms of service allowing user data to be used to train AI models. While Prime Minister Keir Starmer has “no plans” to join Bluesky, the government emphasized the need for platforms that enable broad communication with the public.
During the riots, misinformation naming a Muslim asylum seeker as the alleged attacker in the Southport killings went viral, with accounts boasting over 100,000 followers amplifying the false claims. Ofcom, the UK communications regulator, noted that some platforms were used to spread hate, provoke violence, and incite attacks on religious groups and asylum accommodations.
Next month, Ofcom is set to publish new rules under the Online Safety Act, requiring platforms to prevent the spread of illegal material and to mitigate the risk of content that provokes violence, stirs up hatred, or spreads harmful false communications. Companies must remove illegal content and address these safety risks proactively.
The inquiry will also examine AI’s impact on search engines such as Google. Google’s AI-generated overviews were recently found to be spreading false, racist claims about African countries, in violation of the company’s own policies. Such incidents underscore the urgency of addressing AI-generated disinformation and holding platforms accountable.