Artificial Super Intelligence and the AI Apocalypse

Apparently, there are competitions to see who is better at predicting the future. Normally, the answer is that no one is better at predicting the future. There are too many unknowns. If, for instance, you could predict which inventions were going to be created, you would have effectively invented those things yourself. Having the idea for something is a huge part of innovation. But, there are competitions set up, and the winners of those competitions are called “super forecasters.” One of these supposed super forecasters, Scott Alexander, was interviewed on the Dwarkesh podcast (2027 Intelligence Explosion: Month-by-Month Model) along with Daniel Kokotajlo, and the pair are predicting “artificial super intelligence” by 2027. The nice thing about that is that it is so near in the future, most of us will be alive to determine whether they got it right or not.
Artificial super intelligence (ASI) is an amplified AGI (artificial general intelligence) – the point at which AI will have the jack-of-many-trades abilities of an actual human. The “super” part can seem more plausible because current AI is trained on the entire internet with all the books found therein. Its pattern recognition compresses this information into a smaller size.
Back in the early 2000s, Ken Wilber speculated that if artificial intelligence were ever to be created, it would reach enlightenment in seconds. This idea has problems, ones that Wilber himself later acknowledged: namely, that spiritual enlightenment is not something that sits atop a hierarchy of intellectual smartness – as though one’s IQ hits 200 and then the perception of spiritual realities starts to emerge. Enlightenment might be said to exist alongside intellectual development, with no particular correlation between the two. Certainly, famous mystics are smart and excellent writers, but that is because being a famous mystic depends on writing readable books about one’s experiences and conclusions. There will be plenty of Zen masters with not much of a flair for writing and lots of smart people with not an inkling of spiritual realities.
We know that, in humans, moral reasoning needs the emotions in order to function properly. How a computer could have emotions is unknown. Effectively, no one is even discussing it. The mystic is the master of inward exploration and computation has nothing to do with that. That is why people concerned about the alignment problem, how to align AGI with human morality and purposes, imagine AGI solving problems assigned to it in a manner that humans would not like, like curing cancer by killing all human beings. This imagines AGI as both smart and stupid – with problem-solving abilities but minus common sense. Having an emotional attachment to friends, family, country, and maybe species would provide some kind of bulwark against that kind of mistake, but AGI could not be expected to have one.
But, for argument’s sake, let us imagine ASI and suppose that it is both smart and enlightened – intelligent, moral, and possessed of common sense. The super forecasters imagine all the humans being essentially out of work. We would be functionally obsolete – yesterday’s wet technology. All the important decisions would be made by ASI, while we might be left to choose our favorite flavor of yoghurt and whatnot.
In this scenario, we might consider super-smart aliens and the Prime Directive as envisioned by Star Trek: The Next Generation (STNG). Imagine that advances in basic science and technology were to continue without backsliding. Then imagine that alien creatures had been doing this for several million years – longer than the human race has even existed. And then imagine that, contra Oswald Spengler, their civilizations do not resemble organisms with a life cycle running from growth, to maturity, to old age, senility, and death; that they do not lose all self-belief and get invaded by barbarians with “asabiya” – super ingroup cohesion and outgroup hostility, utter conviction, and a warrior mindset unafraid of death. These aliens would seem like gods to us. Now, imagine that their moral development was at the same level as their technological progress. This should not be difficult, because if it were not, they would probably have destroyed themselves through infighting and the like.
STNG imagined a race of immortal beings called the Q who really seemed like gods; they could bring the dead back to life, or do pretty much anything they wished, except they had a puckish, immature sensibility. This would not be impossible because technology developed by other members of our species does not make us personally smarter. It is not explained how the Q did not self-destruct, although there did seem to be warring factions within the Q Continuum, and they even had a civil war in Star Trek: Voyager.
But, going back to the technologically and morally advanced aliens, they would probably invent something like Star Trek’s Prime Directive, which is a policy of non-interference in cultures considerably more backward than the level the Federation’s humans had reached. A real-life example would be supplying Stone Age New Zealand Maori with muskets and metal axes, which happened historically. They were then used immediately by one tribe to attack another, employing this English technology to get one over on their enemies. Likewise, African tribes in South Africa used to have skirmishes using spears and the like. Relatively few people were actually killed. However, the introduction of machine guns obtained from white settlers changed these fights into massacres. In both cases, the introduction of advanced technology to relatively backward people had negative effects. By the time Europeans invented these weapons, they had evolved beyond Stone Age tribalism, though they unleashed new horrors on the world, such as two world wars.
Quite a few Star Trek episodes involved actually breaking the Prime Directive when the humans could not stand leaving the alien cultures to their fates, e.g., if an asteroid was going to destroy their planet or a virus was going to wipe them out. Sometimes, the help would be given surreptitiously, with the recipients thinking they just got lucky, or never knowing how close they were to catastrophe. In many cases, it was deemed pernicious to make the primitive peoples aware that there are such things as phasers (advanced handheld weapons), photon torpedoes, and star ships.
What is being suggested is that ASI, artificial super intelligence – if morally and intellectually superior to humans – would adopt a Prime Directive of its own. There is a value in self-determination and in learning from one’s mistakes. Many a middle-aged parent might feel that he could make better decisions in many areas than his 20-year-old child. But, among other things, that young person must fall in love and have his heart broken if he is going to figure out how romantic love works. He has to learn how to live with other people his age in a house; people with very different ideas about household management derived from their parents’ lifestyles. He has to earn money and pay bills. There is a very noticeable difference in maturity level between someone who has not left home and is dependent on his parents and someone who is self-sufficient. Even living in a dorm means a little too much childlike dependence.
If ASI simply took over human societies, this would not only interfere with self-determination but also cause human beings to regress. Slaves, unable to make their own decisions, come to be dependent on their masters and lose initiative. This is what happened with government programs that were intended to help children from single-parent families but actually created more single-parent families, making the problem worse. If a husband or male partner was living with the mother and children, no government support was forthcoming. So, the government ended up taking over the role of the father, and the fatherless children became more prone to violence and more likely to join gangs as a surrogate family. The government could then buy the votes of these dependent people with threats and promises. The Prime Directive is there to avoid making things worse and to allow people to develop in their own ways in their own times, and not to provide machine guns to Amazon tribes instead of bows and arrows. Why not an atomic bomb, while you are at it? Islamist fanaticism, low-level theology, and modern weaponry that they could not have developed themselves are a bad combination, with many fears of Islamist terrorists using a dirty bomb to stick it to the infidel.
So, if ASI were to come into existence, it should not be expected to lay waste to human societies – to take over and start making all our important decisions based on its superior wisdom, or to lay off the entire human workforce – because that is not what those with superior wisdom would do.
Richard Cocks is an Associate Editor and Contributing Editor of VoegelinView, and has been a faculty member of the Philosophy Department at SUNY Oswego since 2001. Dr. Cocks is an editor and regular contributor at the Orthosphere and has been published at The Brussels Journal, The Sydney Traditionalist Forum, People of Shambhala, The James G. Martin Center for Academic Renewal and the University Bookman.
