
AI Doomsayer

By: William A. Galston – wsj.com – August 26, 2025

While the technology carries risks, some critics’ alarmism seems excessive.

Every Tuesday evening, the Rationalists gather to read and discuss “The Sequences,” a collection of essays by AI researcher and writer Eliezer Yudkowsky. It’s “almost like a Bible study,” Mr. Metz writes. Ilia Delio, a Catholic theologian whom Mr. Metz interviewed, also drew parallels between religion and the Rationalists’ behavior. “Religion is text and story and ritual,” she said. “All of that applies here.”

Mr. Yudkowsky has been interested in AI and machine learning for decades. He founded the Machine Intelligence Research Institute, which since 2005 has focused on identifying and managing risks from AI.

One of the Rationalists’ overriding beliefs, Mr. Metz writes, is that AI “can deliver a better life if it doesn’t destroy humanity first.” But as AI developers have focused more on developing artificial general intelligence, or AGI—AI that hypothetically could reach or surpass human-level intelligence—Mr. Yudkowsky has aired apocalyptic fears.

In his 2022 article, “AGI Ruin: A List of Lethalities,” he lays out his concerns. Chief among them is “misalignment,” the idea that AGI could develop goals and activities at odds with those of its creators, threatening the survival of our species. Intensifying this threat is the possibility of rapid, self-generated gains in capability that outpace humans’ ability to control them.

Mr. Yudkowsky’s dystopian vision is going mainstream. In 2023, Time magazine included him in its list of the year’s “100 Most Influential People in AI.” Next month, he and co-author Nate Soares will publish “If Anyone Builds It, Everyone Dies.” Their book has been endorsed by computer scientists, philosophers and public officials. Mr. Yudkowsky’s assertions—that the possibility of misalignment may give humanity only one chance to get AGI right, and that officials should slow down, rethink and regulate AI—are gaining a following.

The dangers of AI can’t be disregarded. Even now, AI can disrupt the white-collar workforce, just as technological change disrupted agriculture a century ago and manufacturing in the 1980s. There’s also substantial anecdotal evidence that human interactions with AI can end badly. Treating it as a substitute for a human therapist can have tragic consequences when it fails to recognize suicidal thoughts. AI can descend into a “delusional spiral” that feeds the grandiosity of psychologically needy users. These harms occur not when AI is misaligned with users’ goals but when it is too closely aligned with them, offering excessively supportive thoughts when warnings and tough love would be more appropriate.

Still, Mr. Yudkowsky’s alarmism seems excessive. Like other self-taught “scholars” who didn’t attend high school or college, he writes as though no one else is thinking about the problems that concern him. In fact, Google researchers and engineers recently published a lengthy assessment of the risks and benefits of this emerging technology. They proposed strategies to mitigate the risks of misalignment and set out lines of defense against it.

In a recent op-ed, Google’s former CEO Eric Schmidt and technology analyst Selina Xu note that Silicon Valley’s fascination with AGI goes back to 1950. That was when computer pioneer Alan Turing proposed that a machine would prove its human-level intelligence by offering responses to queries and prompts that couldn’t be distinguished from human responses. Technologies have changed in 75 years, but the obsession persists.

AI experts disagree about the risks this technology poses. Some predictions are dire. The AI Futures Project released a report this year predicting that AI could achieve superhuman intelligence by 2030 and explored scenarios in which it could control or exterminate humans. Other researchers have a different vision, one in which AI is a controllable tool that we shouldn’t fear. Princeton computer scientists in April published “AI as Normal Technology,” a paper arguing that AI will remain manageable for the foreseeable future. Nor is it clear that AI’s reaching or surpassing human-level intelligence is even within reach—or that the pace of its development is accelerating, despite enormous investments in research.

The challenge will be walking the narrow path between complacency and panic. Businesses, government officials and technology experts will need to work together to find the right balance between innovation and caution.


Source: Should We Believe the AI Doomsayers? – WSJ