The chance of a “full-out Eliezer Yudkowsky doom scenario” came up in a discussion between hosts David Hoffman and Ryan Sean Adams and Christiano, who directs the nonprofit Alignment Research Center and previously led the language model alignment team at OpenAI. Yudkowsky has been called an “AI doomer” for his warnings about the technology’s possible dangers, which stretch back more than 20 years.
Christiano said his views diverge from Yudkowsky’s on the expected pace of technological change. “Eliezer is into this speedy transformation once you develop AI,” he said. “I don’t hold such an extreme view on that. I tend to imagine something more like a year’s transition from AI systems that are a pretty big deal, to kind of accelerating change, followed by further acceleration, et cetera. Once you have that perspective, I think a lot of things might sort of feel like AI problems because they arise right away after you build AI,” he continued.
“Overall, maybe you’re getting more up to a 50/50 chance of doom shortly after you have AI systems that are human level,” he said. With those remarks, Christiano adds his voice to a growing chorus of people alarmed by the pace of AI development.
A recent open letter signed by a number of AI professionals urged a six-month pause on the development of advanced AI. Christiano did not immediately respond to Insider’s request for comment, which was sent outside regular business hours.