Exclusive

May 29, 2025

Last updated: May 30, 2025
The growing fear of superhuman AI

Academics warn of extinction, activists call for international regulation, and skeptics abandon generative tools. A global backlash is taking shape

“Malevolent superintelligence can kill everyone,” warns Roman Yampolskiy, Professor of Computer Science at the University of Louisville. “The development of advanced artificial intelligence is extremely dangerous. We must do everything we can to prevent it from becoming a reality.”

In books such as AI: Unexplainable, Unpredictable, Uncontrollable (CRC Press, 2024), Yampolskiy explores the nature of intelligence, consciousness, values, and knowledge. But what happens when an algorithm can outthink humans?

Yampolskiy isn’t referring to today’s commercial chatbots like Gemini, ChatGPT, or Perplexity. “Subhuman-level AI is a useful tool, but it’s critical not to confuse tools with agents – or subhuman intelligence with superhuman intelligence.”

Artificial general intelligence (AGI) refers to a hypothetical stage in machine learning development in which an AI system can match or exceed human cognitive abilities across any task. A 2023 survey by AI Impacts asked 2,778 researchers when they expected AGI to arrive. On average, they predicted high-level machine intelligence could be achieved by 2040.

Concern over this prospect led nearly 1,500 technology leaders – including Elon Musk – to sign an open letter in March 2023 calling for a six-month pause in AI development. “Currently, it doesn’t look like anyone is pausing, but we saw the same dynamic with tobacco and oil companies,” says Yampolskiy, who supported the letter written by the Future of Life Institute.

This call to action inspired the non-profit Pause AI to organize protests worldwide, with demonstrations taking place in London, Berlin, Stockholm, Kinshasa and Melbourne in February 2025. The activists demand international regulation to prevent companies from building systems that could overpower humankind and potentially lead to extinction.

“Our dominance has benefited some species, like dogs,” says Tom Bibby, Head of Communication at Pause AI. “But for others, like pigs, it means suffering, simply because they offer us resources we want and can’t fight back.” If tech giants create a more intelligent species, he warns, “we must ensure it shares our values and acts in our interest. Otherwise, we risk ending up like the pigs.”

While Pause AI is vocal about risks like job displacement and disinformation driven by generative algorithms, the organization does not call for a total ban on AI. “We’re not asking to limit small companies from training models,” explains Bibby, “especially in fields like medicine, where AI can be incredibly valuable in detecting cancer and diagnosing diseases.” He also acknowledges using ChatGPT for tasks like brainstorming and coding.

A more radical stance is taken by Stop AI, a U.S.-based group whose manifesto calls for the destruction of “any existing general-purpose AI model,” including GPT-4, and demands a total ban on AI-generated images, video, audio, and text. On February 22, three protesters were arrested in San Francisco for blocking the doors of OpenAI, demanding the company be “shut down” to prevent an incoming “apocalypse.”

During the protest, some activists accused OpenAI of involvement in the death of Suchir Balaji, a whistleblower who had alleged that the corporation violated copyright laws to train its large language models. Balaji died by suicide in November 2024. However, these accusations are part of a conspiracy theory and are not supported by evidence.

Not all criticism of AI is based on fears of corporate misconduct or existential threats. Take Francesco, an Italian PhD student in Economics: “Some people devote themselves to study, and our role is to generate ideas we can respond to ourselves.” He describes life as a researcher as a privilege. “Using AI for this purpose doesn’t quite live up to the intellectual responsibility it entails.” For Francesco, it’s not just a matter of pride: “AI is already widely used in writing. If it also starts playing a role in peer review, we risk a gradual sidelining of human intellect in the production of knowledge.”

Read also: Simulated Human: Creativity in the Age of Artificial Intelligence