They’re Creating the AI God/Demon: A Third Of AI Researchers Think AI Could Cause “Catastrophic” Outcomes On Par With Nuclear War This Century

“Miracles can be perplexing at first, and artificial intelligence is a very new miracle. ‘We’re creating God,’ the former Google Chief Business Officer Mo Gawdat recently told an interviewer. ‘We’re summoning the demon,’ Elon Musk said a few years ago, in a talk at MIT. In Silicon Valley, good and evil can look much alike, but on the matter of artificial intelligence, the distinction hardly matters. Either way, an encounter with the superhuman is at hand.” Of God and Machines, Stephen Marche, The Atlantic, 9/10/22

James Felton, IFLScience, 9/22/22

“Is it a good sign when 36 percent of a field think it might end in catastrophe?”

A survey of scientists and researchers working in artificial intelligence (AI) has found that around a third of them believe it could cause a catastrophe on par with all-out nuclear war. 

The survey was given to researchers who had co-authored at least two computational linguistics publications between 2019 and 2022. It aimed to discover industry views on controversial topics surrounding AI and artificial general intelligence (AGI) – the ability of an AI to think like a human – plus the impact that people in the field believe AI will have on society at large. The results are published in a preprint paper that has not yet undergone peer review.

AGI, as the paper notes, is a controversial topic in the field. There are big differences in opinion on whether we are advancing towards it, whether it is something we should be aiming towards at all, and what would happen when humanity gets there. 

“The community in aggregate knows that it’s a controversial issue, and now (courtesy of this survey) we can know that we know that it’s controversial,” the team wrote in their research. Among the (pretty split) findings: 58 percent of respondents agreed that AGI should be an important concern for natural language processing (NLP) research, while 57 percent agreed that recent research had driven us towards AGI.

Where it gets interesting is how AI researchers believe that AGI will affect the world at large.

“73 percent of respondents agree that labor automation from AI could plausibly lead to revolutionary societal change in this century, on at least the scale of the Industrial Revolution,” the researchers wrote of their survey.

Meanwhile, a non-trivial 36 percent of respondents agreed that it is plausible that AI could produce catastrophic outcomes in this century, “on the level of all-out nuclear war”. 

It’s not the most reassuring thing when a significant proportion of a field believes its own work could lead to humanity’s destruction. However, in the feedback section, some respondents objected to the phrasing of “all-out nuclear war”, writing that they “would agree with less extreme phrasings of the question”.

“This suggests that our result of 36% is an underestimate of respondents who are seriously concerned about negative impacts of AI systems,” the team wrote.

Though (perhaps with good reason) wary of the potential catastrophic consequences of AGI, researchers overwhelmingly agreed that natural language processing has “a positive overall impact on the world, both up to the present day (89 percent) and going into the future (87 percent).”

“While the views are anticorrelated, a substantial minority of 23 percent of respondents agreed with both Q6-2 [that AGI could be catastrophic on par with an all-out nuclear war] and Q3-4 [that NLP has an overall positive impact on the world],” the researchers wrote, “suggesting that they may believe NLP’s potential for positive impact is so great that it even outweighs plausible threats to civilization.”

Among other findings were that 74 percent of AI researchers believe that the private sector is too heavily influencing the field, and that 60 percent believe the carbon footprint of training large models should be a major concern for NLP researchers.

The paper is published on the preprint server arXiv.

1 thought on this article

  1. AI would be dangerous if it were allowed to do anything beyond providing rapid information to the user.
    For example, making decisions would require human morality. AI decision-making would be based only on mathematical percentages, because it’s only a machine. The consequences of that could be dire, like a roll of the dice.
    As for NLP, I have no idea how that works. I would need to see a demonstration.
    The idea of “carbon footprints” is a farce based on non-existent global “warming”. This alone shows how much their intelligence is lowered by false conclusions based on false beliefs. In this case, perhaps AI would really help them face the facts (lol!).
