Prominent transhumanist on Artificial General Intelligence: ‘We must stop everything. We are not ready.’

Certain transhumanists say they are creating the AI god. Furthermore, they believe all mankind (those they allow to live, that is) must merge with the AI god to obtain apotheosis. This is just another variation on how Indian yogis (godmen) achieve apotheosis (demonic possession) through initiation. Linda

Jacob Rosenberg, Allisrael, 3/22/25

At last week’s SXSW conference, prominent transhumanist Eliezer Yudkowsky said that if the development of artificial general intelligence is not stopped immediately across the globe, humanity may be destroyed. 

“We must stop everything,” Yudkowsky said during a panel titled “How to Make AGI (Artificial General Intelligence) Not Kill Everyone.”

“We are not ready,” he continued. “We do not have the technological capability to design a superintelligent AI that is polite, obedient and aligned with human intentions – and we are nowhere close to achieving that.”

Yudkowsky, founder of the Machine Intelligence Research Institute, has made similar comments in recent years, repeatedly warning that humanity must cease all work on AGI or face human extinction.

In a 2023 article in Time magazine, Yudkowsky said that no current AGI project had a feasible plan to align AGI with the interests of humanity.

“We are not ready,” Yudkowsky wrote. “We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong.”

He argued that a “moratorium on new large training runs needs to be indefinite and worldwide,” that we must “make immediate multinational agreements to prevent the prohibited activities,” and even “be willing to destroy a rogue datacenter by airstrike.”

Human extinction or posthuman paradise

A well-known figure in the field of AI safety, Yudkowsky is the only prominent voice calling for a complete and indefinite shutdown of current AGI research.

Despite these warnings, Yudkowsky still believes AGI should be pursued – he simply advocates for a different path, which he believes will not destroy mankind.

During an interview in January, Yudkowsky said that “intelligence-augmented humans” could potentially develop AGI safely.

In the scenario he envisions, humans are “probably building a super engineer, and maybe using that to upload themselves into computers, so that they can keep on working for this for another thousand years without the world burning down in the meantime.”

“They’re running faster inside there, and so from our perspective, it’s like only a day or three days, and then out comes the superintelligence that is actually supposed to be nice, and then we live happily ever after,” he added.

To put it plainly, Yudkowsky believes that today’s AI developers are simply not smart enough to develop an AGI that won’t destroy humanity.

His solution, then, is an enforced global ban on AGI development, until we can technologically alter a group of humans to be intelligent enough to create safe, human-aligned AGI, enabling humanity’s “happily ever after.”

Machine over man

Though he pits himself against the world’s leading AGI developers, Yudkowsky’s vision of reality is not deeply different from theirs.

Raised in Modern Orthodox Judaism, Yudkowsky is now an atheist and transhumanist.

At the basis of his worldview is a materialistic universe with no souls, angels, demons or God.

Yet, like all transhumanists, his religious impulse remains, as he longs for the creation of godlike beings through technology, and to become godlike himself.

He even declares that, were it the only way, he would be willing to sacrifice all of humanity for the creation of digital gods who “still care about each other.”

“If sacrificing all of humanity were the only way, and a reliable way, to get…godlike things out there, superintelligences who still care about each other, who are still aware of the world and are still having fun, I would ultimately make that trade-off,” Yudkowsky stated in the aforementioned interview.

He stressed, however, that “that is utterly not the trade-off we are faced with,” and “worthy successors [to human beings] will not kill us.”

Still, this comment shows where his ultimate values lie.

For him, the human person possesses an intrinsic value inferior to that of a digital superintelligence.

Fear and hope

Eliezer Yudkowsky is keenly aware of the Babel-like hubris of our world’s current technological elite, and his call for the global cessation of AGI research is a welcome contrast to our leaders’ reckless rush to create digital gods.

Yet, his actions stem from the fear that humanity will be destroyed by such digital gods.

As Christians, however, we know that the rebellious rulers of our world cannot overthrow that which God has decreed.

While they may cause great destruction, they will not thwart the prophesied eschaton [end time events].

God has permitted the great technological and religious shifts that are currently taking place in our world.

Our task is neither to escape them nor to defeat them through our own strength – rather, it is to humbly participate in the life of God through the Messiah, in whose love we are “more than conquerors” (Romans 8:37).

And though Yudkowsky hopes “to be a posthuman someday,” we worshipers of the incarnate God know that our human nature is not meant to be escaped, replaced, or transcended.

It is meant to be transfigured.

Jacob Leonard Rosenberg is an American-Israeli, an Evangelical Christian and the son of the founder of ALL ISRAEL NEWS. He writes about the intersection of science, technology, individual liberty and religious freedom.

https://allisrael.com/prominent-transhumanist-on-agi-we-must-stop-everything-we-are-not-ready

6 thoughts on “Prominent transhumanist on Artificial General Intelligence: ‘We must stop everything. We are not ready.’”

  1. Yes, transfigured is correct. That’s the reward that God the Father will effect on all his faithful adopted sons and daughters so as to share in His blissful paradise of Eternal Life and Love.
    But those unfaithful ones who glorify themselves by seeking their own paths to paradise will not fare so well. Theirs is a different reward.

  2. It happened again: that web page that says in big black letters, “Not Acceptable!”, with a sentence underneath it that says, “An appropriate representation of the requested resource could not be found _ by Mod_Security”.
    Who or what is that? I had to change my settings three times to finally get past that Not Acceptable! page.
    Everything’s been working okay until now. Now let’s see if my postings can continue uninterrupted as they have before.

    1. Gary believes the problem lies within some software or coding embedded on your computer. He thinks you should take it to a geek squad or computer repair and have them run a diagnostic to determine what’s wrong and have them fix it.

      1. Thank you. If I’m the only one with this problem, then that narrows it down. However, I’ve seen a different message in the past as well which I don’t recall right now. I’m thinking my Avast Antivirus is bugged, since switching VPN locations like I’ve done before reconnects me again. I’m likely the only one using that Avast Antivirus. Those Avast people who hail from the Czech Republic may be anti-Christian. Maybe.
        I’m thinking I’ll switch to Norton Antivirus and see how that works. I’ll need both the antivirus and VPN service for two Win11 PCs, mine and my wife’s. I still have a Win10 PC which could use it too.
        All the best to you both.

      2. I’ve another idea: I’m turning off the Shields at the icon at the top of the page to see how that works. It blocks ads and trackers. It was showing two.
