
AI-dolatry — Imitation by Any Other Name?

We bounced off a tree in the middle of the woods, and I knew right then our Humvee would come out of our practice nav session with an ugly scar or two. Why were we completely off-road, so much so that my driver couldn’t even avoid trees? Because a new-fangled tech known as “GPS” was onboard and approved for military use some years before its general release to the public. (Yes, this was long ago.) We could now plow through woods in our USAF-supplied Hummer without the benefit of so much as dirt roads or a trail. No need even to stop to orient compass to map. Technology was now telling us, on the fly, which way was home. Misplaced trees notwithstanding.
It’s not like I had never navigated in the woods before GPS came to the rescue. Years prior, as a pre-teen participating in a Boy Scout-like navigation exercise, I was introduced to using a compass and a map of the wooded terrain to get from point A to point B. Our little group of five intrepid scouts took off into the late afternoon sun, trekking across miles of hilly Pennsylvania woods with no adult supervision in sight. We were pitted against other scout groups to see who could arrive at our destination first. Said destination was a not-too-obscure radio tower atop a distant hill. And yes, at least two other groups had to be rescued from the woods in the middle of the night, their compass-and-map skills having failed them as the sun settled into the hills – the prominence of our target, a 200-foot metal tower, notwithstanding.
GPS — A “Gateway Drug” to Tech Dependency?
The lessons I gleaned from the compass-and-map versus GPS experiences are a mixed bag. If you’re running for your life in the middle of a military attack, how can you tell your buddy in the seat next to you that you’re going to turn off the GPS so as to “keep up the ol’ map-reading skills”? That is not going to happen, of course.
On the other hand, ask nearly anyone these days what it’s like navigating when there’s no cell phone service. Or try the ages-old routine of stopping to ask for directions in this day and age of ubiquitous GPS. The anecdotes of drivers plunging their vehicles into the water just because GPS told them to do so are more than fluffy urban legends. And I’ve asked youngsters more than once for the address of a store they’re working in only to learn that details like building addresses are so yesterday. (Get a grip, old man.) My point is that we may think we’ll never see the day when we’re left with not much more than 1950s tech, but that day will come. Yes, it will.
Even so, I certainly understand that no one is going to put our various fave tech genies back in the bottle. GPS is here to stay, at least until some point in WWIII. Yes, the Russians already selectively blank out GPS in some of their military-sensitive areas. Guess what that does for GPS-guided smart bombs and missiles? Or consider how missing GPS impacts, say, a minivan filled with a family fleeing for their lives. The point is that some skills that were once commonplace (e.g., map reading) are going the way of the dinosaurs. But it’s also worth considering that the technologies and devices replacing those ancient skill sets might one day soon be joining the dinosaurs. Has any technology besides fire and rafts lasted forever?
Tech dependency takes many forms. On a less grim but equally important note about how tech addiction creeps in: Despite the blatant nature of big tech’s machinations and maneuvers to get as many open mics in our homes and private spaces as possible, we will not stop talking to our various computers anytime soon. We will continue to choose to share our pillow talk and such with complete strangers rather than give up Star-Trek-like convenience.
If GPS Decays Human-needed Skillsets, What’s Up with AI?
No one is going to end the debate (given a discussion is even permitted!) as to whether GPS, cell phones, and ubiquitous mic-enabled computers provide a net good for society.
On the other hand, even a few moments’ thought will convince most folks that we have given up some important skills and abilities in exchange for these tech wonders. Ask any K-12 teacher (aged 40 or above) how the social and communication skills of their students have decayed over the decades. Query instructors in any field involving spatial and navigation skills as to what their incoming students lack today as compared to, say, students from 15 years ago. And if these anecdotal reports don’t convince you that we are hurting ourselves by misuse of tech, then look up some of the growing number of studies showing physiological proof of newly atrophied portions of our brains – atrophied due to disuse. These are parts of our brains (e.g., amygdala, hippocampus) that are critical for things like short-term memory, navigation skills, and even life-essential attributes like a sense of ourselves. Yes. These studies are out there, rife with frightening findings.
Enter, Stage Left: Artificial Intelligence
It seems to me that soccer moms, caring dads, and alert grandparents from coast to coast now have a pretty good clue that excessive cell time, mind-numbing virtual gaming, and even the subtle skill theft induced by overreliance on GPS are not helping turn our children and grandchildren into a generation of geniuses. Some countries even recognize a direct correlation between excessive use of digital technology and increased risk for serious brain-related illnesses. (As an early example, see South Korea’s decade-long concern about digital dementia here. And more recently, see the US’s related concerns as summarized here.)
Funny, though: it’s not at all clear that those of us who can still think clearly – i.e., those of us not permanently scarred by ridiculous amounts of screen time – have learned the right lessons about what we did wrong with our children in regard to digital tech. For some time now, we’ve stared into the face of technologies loaded with heroin-like addictive powers. And I’m not convinced trite sayings like “Well, you have to find a balance…” are forceful enough to meet the threat at hand to the mental health of our kids. Nor do such proverbial “chicken soup for the soul” approaches address the broader societal impacts and collective effects of such powerful technologies.
As has been noted by multiple researchers and pundits, massively wealthy tech corporations actively employ psychologists and PR specialists to help them make their products, well, addictive. (See here.) Yes, this is a sad fact. And even if you reject that fact – should you file it away as yet another pointless conspiracy theory – please consider that it might help explain why some of the world’s wealthiest tech titans limited their children’s use of their own products! (See here.) What would folks have thought at the outset of the automotive industry had Henry Ford been caught forbidding his offspring from driving? Yet we don’t bat an eye at equivalent hypocrisy among today’s tech titans.
What do we miss, though, when we take a too-simplistic approach to fighting back? For certain, it’s easy to skip the vital step of asking for the details about why any given technology becomes a detriment rather than an aid to a healthy society. Yes, asking why is one of those measures for which we allow ourselves no time. And while societal harm from digital tech is an important topic, admittedly, no one wants to hear a philosophical treatise on the subject. Likewise, many of us in our churches, mosques, and synagogues already get our fill of pulpit-based urging toward moderation.
With a modicum of effort, however, we can reconsider an important starting point in fighting back against digital addiction: Return to the proverbial “square one” and ask just what it is that our tech tools and toys are meant to do for us. Without human beings, after all, no technology would exist. And no tools or toys. And since we invent those things to replicate, enhance, reflect, or enlarge originally human capabilities and activities, we are the de facto arbiters of what our toys and tools should be to us and of whether we should continue building and distributing them.
Yesteryear’s Automatons Versus the “Gift” We Embrace from Mr. Turing
And there’s the crux of the matter. When the aim of a toy or a mechanical amusement is, well, to amuse, said toy brings with it risks that are relatively easily discerned.
But what happens once we accept a fundamental shift in what we expect from our human-imitating tools and toys? What happens, for example, when we go from decades and decades of relatively innocent amusement with the popular automatons of the 17th and 18th centuries – most of them variations of animated mannequins (see here) – to the Turing Test (discussed in detail below)? That particular computer test brings with it borderline metaphysical implications!
More specifically, in the salons and fairs of centuries past, no one enjoying entertainment by automatons, with their typical doll-like appearance and simplicity, worried one whit about whether or not a fellow human being would mistake even the most sophisticated of them for a live human person. Yes, those automatons moved. Yes, some of them could change “facial” expressions. Some of these machines played games or even engaged in handwriting. But the point of automatons was diversion and entertainment. People retained enough sense during that era, at least, to keep their human-facing (and, sometimes, human-faced) machines in those categories – where they belonged. And I’d propose that diversion and entertainment are the categories where many of the most complex and sophisticated of our machines today still belong.
But now we increasingly apply, informally and otherwise, the Turing Test to find out if a computer can deceive us into believing we’re interacting with a fellow human being. This change in our expectations for our machines is so massive it’s hard to put into words. Oh, the magnitude of the meaning of this change for society: We want to be deceived by machines! This stance is unbelievably distant from the enchantment- and titillation-seeking typical of yesteryear’s salon play with dressed-up automatons (moving mannequins by any other name).
*
That we’ve allowed the musings of the troubled WWII-era mathematician Alan Turing to become a granite-like standard for the “success” of our most sophisticated machine creations deserves scrutiny. Lots of it, as a matter of fact.
Currently, Turing and his eponymous test are (again) all the rage. Many users of ChatGPT, for instance, suggest that that particular iteration of AI has met the conditions of the “Turing Test,” and roundly so. I’ve no end of friends and colleagues sharing with me the latest essay or graphic they nudged out of ChatGPT. Always the gist is the same: “This is so realistic! Look how creative!” And so forth.
At least two quotes are relevant here. In the first, from Britannica.com, we take a brief look at the Turing Test itself:
Turing sidestepped the debate about exactly how to define thinking [as applied to machine activity] by means of a very practical, albeit subjective, test: if a computer acts, reacts, and interacts like a sentient being, then call it sentient. To avoid prejudicial rejection of evidence of machine intelligence, Turing suggested the “imitation game,” now known as the Turing test: a remote human interrogator, within a fixed time frame, must distinguish between a computer and a human subject based on their replies to various questions posed by the interrogator. By means of a series of such tests, a computer’s success at “thinking” can be measured by its probability of being misidentified as the human subject.
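For readers who want the mechanics laid bare, the whole affair reduces to a disarmingly simple scoring procedure. Here is a minimal sketch in Python – the function names and the coin-flipping interrogator are my own invention for illustration, not Turing’s protocol verbatim – in which a machine “passes” exactly to the degree that interrogators misidentify it as the human subject:

```python
import random

def run_imitation_game(interrogate, trials=100):
    """Score a machine by Turing's criterion: the probability that a
    human interrogator mistakes it for the human subject.

    `interrogate` stands in for the whole question-and-answer session;
    it returns 0 or 1, the seat the interrogator judges to be human.
    """
    misidentified = 0
    for _ in range(trials):
        # Seat the machine and the human in random order each trial.
        seats = ["machine", "human"]
        random.shuffle(seats)
        guess = interrogate()
        if seats[guess] == "machine":
            misidentified += 1  # the machine passed for the human
    return misidentified / trials

# Even a coin-flipping interrogator lets a crude machine score ~0.5;
# the test measures the judge as much as it measures the machine.
score = run_imitation_game(lambda: random.randrange(2))
print(f"Misidentification rate: {score:.2f}")
```

Notice what even this toy version makes plain: the final score is as much a property of the interrogator’s fallibility as of the machine’s cleverness – an irony we will return to below.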
Each day, it becomes increasingly routine to encounter a computer-generated voice or a text-based chat session in which we’re deceived, at least temporarily, into believing we’re conversing or interacting with a fellow human being directly. At some point, perhaps folks simply accept deception-by-machine as being “built into” modern society. And then, likely to our great detriment, we effectively shrug and move on. Soon, the new attitude may be to accept “machine parity” in conversation as a given.
Regarding some of the roots of our cult-like capitulation to “talking machines,” Stanford professor Jean-Pierre Dupuy offers the following insight:
[C]ybernetics constituted a decisive step in the rise of antihumanism. Consider, for example, the way in which cybernetics conceived the relationship between man and machine. The philosophers of consciousness were not alone in being caught up in the trap set by a question such as “Will it be possible one day to design a machine that thinks?” The cybernetician’s answer, rather in the spirit of Molière, was: “Madame, you pride yourself so on thinking. And yet, you are only a machine!” The aim of cognitive science always was – and still is today – the mechanization of the [human] mind, not the humanization of the machine. [Emphasis mine.]
Antihumanism!? Mechanization of the mind?? These are weighty criticisms by Professor Dupuy, and ones that deserve a closer look.
We can tackle such heady statements from many angles. This, because theologians, philosophers, sociologists, anthropologists, even psychologists – representatives of all these disciplines at some point stake a claim to defining what it means to be “human.” How much more does interest in the topic explode when something anti-human rears its ugly head?
But – really, Dr. Dupuy – anti-humanism? Here, we’ll fall back on well-established lessons from Pentateuch-respecting Hebrew thought and much of the 2000-year history of Christianity. Specifically, we’ll consider for a moment the unmitigated prohibition and condemnation of idolatry within those traditions. Full disclosure: What follows is my attempt to cut the “Gordian knot” of a couple of centuries’ worth of debate about the definition of “human.” In a sense, I’m bypassing much of the whole debate, or at least transferring it to the better question of what it means to be a person. (More on personhood in a future essay.)
Both Old Testament thought and all of Christian theology make clear that idolatry – that soul-damning sin – comes about when we substitute representation for the real thing, and do so inappropriately. Or, more to the point, we risk, well, sinning, when we engage representation in such a way as to detract from due respect for real persons. The suggestion that our images, mental and otherwise, can be sources of transgression is made in multiple places throughout the Old Testament. There, we find warnings about setting up images or sculptures of beings, animals, or things that have a real existence (for lack of a more accurate adjective) either here in the material world or in the heavens.
Christ took this whole concept much further: He said that looking upon a woman with lust was the equivalent of the actual death-deserving sin of adultery, thereby making it clear that our very thoughts – the images and actions we visualize in our minds – have metaphysical implications. Oh dear.
So, taking some of the above to extremes, it’s worth asking if the mere representation of a human being equals a transgression, as, for example, some Islamic teachings indicate. Of course, in the diverse world of religious thought, it gets more complicated than that. The long-forgotten fights – some to the death! – that sprang from iconoclasm in the ancient Church revealed that for Roman Catholics and Orthodox Christians, at least, not all representation of things (or beings) in the heavens is idolatrous. We’ll spare the Byzantine theology details here, but one main point of it all is that simple representation in and of itself does not constitute idolatry.
Even within the Old Testament, we see indications that the aptness, intent, and, again, for lack of a better adjective, the correctness of representation impacts whether or not it stoops to the level of idolatry.
So, for example, we have the straightforward prohibition in Exodus against making graven images: “You shall not make for yourself a carved image – any likeness of anything that is in heaven above, or that is in the earth beneath, or that is in the water under the earth.” Followed only a few chapters later by direct instructions for making images here on earth of things in heaven for the purposes of adorning the very place where God intended to meet the Hebrew people: “And you shall make two cherubim of gold; of hammered work you shall make them at the two ends of the mercy seat. Make one cherub at one end, and the other cherub at the other end;” Unless we’re willing to accept outright contradiction in Holy Scripture (I’m not), then we need to take a more considered look at what, in fact, comprises idolatry.
*
What on earth do the instructions for adorning the Ark of the Covenant have to do with Artificial Intelligence? Well, glad you asked. Because, in a sense, the answer is everything.
Among the many ancient schools of human thought, theology, at least, makes clear that human representation is anything but a neutral activity. The implications (as understood in the 2000-year-old Christian tradition) are that representation done wrong equals idolatry. And idolatry brings death and destruction. Representation done right, however, whether it be in the wrought decorations of the Ark, as described in the passage from Exodus above, or the thousands upon thousands of venerated icons of saints in the Eastern Christian tradition, well, that sort of representation leads to life and light.
And so finally, we come ’round to an important question for the digital computing era: What exactly does AI represent? If the “I” in that abbreviation means human intelligence, then right there, we have a major problem, and that problem is most certainly related to the “aptness of representation” issue discussed above.
A “Deliberate” Problem, if You Will
The problem arises because the only type of intelligence possible in digital computation machines, aka computers, is based upon deliberation. And for those who truly understand how a digital computer works, it’s clear that no amount of sophisticated programming can ever lift the output of any digital computer or network – neural or otherwise – above the level of deliberation. (And yes, computer-philes, I’m aware of the impressive power of machine learning and neural networks.)
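To make concrete what I mean by deliberation, consider a minimal sketch – the routes, weights, and feature values below are invented purely for illustration. Strip away every layer of abstraction, and a digital “decision” bottoms out in exactly this kind of scoring and selecting among labeled, weighted alternatives:

```python
def deliberate(options):
    """All a digital machine ever does, at bottom: label the candidates,
    weight their features, and select the top score."""
    scored = {
        name: sum(weight * value for weight, value in features)
        for name, features in options.items()
    }
    # "Choosing" is nothing more than taking the maximum.
    return max(scored, key=scored.get)

# Invented example: a navigation system "deciding" between routes.
# Each feature is a (weight, value) pair -- say, speed and scenery.
routes = {
    "highway":  [(0.7, 65), (0.3, -10)],
    "backroad": [(0.7, 40), (0.3, 5)],
}
print(deliberate(routes))  # -> highway
```

A neural network dresses this up with millions of weights learned from data rather than two set by hand, but the character of the operation – weighted selection among labeled alternatives – is unchanged.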
Yet there’s a ray of light in that even the most staunch of secular materialists can’t deny or discount other types of human intelligence, types that do not rely upon the endless labeling, naming, weighting, and sorting that are the indispensable foundations of deliberation. One ready-at-hand example of non-deliberative human intelligence is intuition, but there are certainly other types of non-deliberative intelligence as well.
Complicating everything is the fact that, within our decidedly secular/materialistic era, most other forms of non-deliberative human intelligence remain besieged by skepticism; many folks today rail against the authenticity of some types of knowledge and wisdom acquisition – divine revelation comes to mind – even as towering secular figures like Sigmund Freud and Carl Jung made clear that not every human intention and action can be traced back to analytical thinking and syllogistic logic (those staples of deliberative thinking).
And long, long before our transcendence-rejecting era, both Saint Maximos (see his Disputation with Pyrrhus) and Aristotle (see his Nicomachean Ethics) made clear that deliberation is by no means the apex of human thought. Certainly not in the sense that our computer-laden era elevates it.
To that point, I maintain that the name “Artificial Intelligence,” strictly speaking, is neither artificial nor intelligence; in an era in which deliberative thinking is nearly universally lionized, generating endless strings of syllogistic logic executed via electronic switches is not artificial anything, per se. It’s more akin to machine-assisted maintenance of the status quo, or perhaps a form of exaggeration (of one existing type of thinking). Said another way: To imply that AI is human “intelligence” and that it differs only in its origins from, say, the very traditional birthing and human rearing of a Saint Augustine, a Mozart, or a Gandhi – well, this is misleading in the most horrific of ways.
Certainly, it’s not a catchy moniker to rename AI as Machine-based Artifice to Mimic One Type of Human Intelligence (MAMOTHI ??), but that faux acronym captures what we’re really doing with today’s neural networks and machine learning. As discussed above, electronic switching, which is the staple activity of electronic computation, is no more an example of genuine intelligence than a schoolboy reciting back his multiplication tables. Both activities may have practical applications in the material world, but intelligence they are not. When AI was an obscure concept known best to cloistered “mad scientist” types, such mistaken labeling of the technology had limited repercussions; but in our era, try to make it through the day without hearing some popular reference to AI. Ubiquity brings familiarity. Familiarity too often brings a reduction of focused attention. Reduced attention leads nearly always to unnoticed risks and dangers.
*
So, having touched ever-so-lightly upon the mammoth place (no nod to the neologism MAMOTHI implied. OK – maybe a little) that discussions of human intelligence have in the traditional arts and sciences, let’s return to the aforementioned “Gordian knot” approach to these issues. Let’s take another look at the description of the Turing Test quoted above. A few keywords stand out from Britannica’s definition. Let’s examine, in turn: “subjective,” “imitation game,” and “misidentified.”
“Subjective” as Part of the Turing Test
For many, many of the world’s people, one of the three great monotheistic religions (Judaism, Christianity, and Islam) serves as home. And all three of those homes-for-billions recognize the Book of Genesis from the Bible. In Genesis, we find an inescapable foundation of what it means to be human in the following passage: “Let us make mankind in our image, in our likeness….”
Does that passage, purporting to be from the lips of God Himself, suggest we employ “subjectivity” as part of our pursuit of a “definition” of what it means to be human? The One God, the creator of all and of everyone who exists, does not equivocate: He is the foundation of who we are, which suggests quite obviously that the first element of Turing’s approach – subjectivity – should have no role in creating a test for human intelligence. At least not if by subjectivity we mean, as followers of Nietzsche and any number of today’s trans-humanists do, standards and definitions of intelligence generated by humans in a deliberate vacuum that excludes the divine, that disses all reverence for and belief in a transcendent Creator.
I’m sorry if proud mathematicians, physicists, and secular humanists have neither time nor inclination to engage the millennia’s worth of theology-based discussion as to what it means to be human. Yes, metaphysics can be as painful to digest as it is esoteric and arcane. So what? (Dear lovers of scientism: The bread you made with materialist-only flour and logic-only yeast did not rise. Put on your big-boy pants, roll up your sleeves, and plunge your hands into the metaphysical and theological goo. Time to bake again.)
And discard any of Nietzsche’s recipes while you’re at it. They’re missing divine transcendence as the essential leaven of all human recipes. Two world wars, with a third one pending, suggest that The Anti-Christ’s Greatest Kitchen Hits – Divine-Gluten-Free Cookbook no longer deserves a place on your shelf.
“Imitation” as Part of the Turing Test
The second element we’ll tackle – Turing’s emphasis on “imitation” – is nearly as obviously flawed as the first: We all recognize the inescapable role of imitation in a human baby’s development. Much like Freud, Piaget, and countless parents throughout the ages, I marvel at a months-old baby’s reflective smiles and emotions. The baby’s seemingly automatic desire to mimic – or, more impressively, to reciprocate as a forerunner to human empathy – can’t fail to impress. And yet, there’s not a parent out there who would believe that same baby had grown and developed properly if, by his teen years, he was still imitating others as his primary means of navigating existence.
We don’t expect each of our offspring to rise to the creative level of, say, Shakespeare or Michelangelo. But any human whose apex of behavior is mere imitation at the expense of all genuine innovation and creativity will be seen as a victim of stunted growth.
Yet today, we follow Turing’s lead in deeming our “talking machines” to have arrived at human intelligence because they simply imitate well? That’s a mistake with phenomenal import.
“Misidentified” as Part of the Turing Test
And now we come to the truly fatal flaw of the Turing Test.
I’m not certain how often irony alone destroys an argument or a conceptual model, but this may be a case in which irony should do exactly that. Turing’s test would have us confirm the successful creation of human intelligence via the unsuccessful implementation of our own human intelligence in analyzing the output of a machine. What profound (and unnoticed) irony! To clarify the force of this profound error: If I said to you, “I want you to judge this roomful of original sculptures, but first, I want you to show me that you’re capable of being fooled by un-original sculptures,” you’d laugh at the logic of my approach to finding and capturing “good” judgment. Rightly, you would laugh. Admittedly, there’s a twist in the Turing Test: that we can be fooled by the un-original is meant to show how good the artifice was. But again – if genuine human intelligence is an unmitigated and unadulterated good, then shouldn’t we be looking for tests (and, more specifically, for testers) who demonstrate that they cannot be fooled by imposters? Perhaps what Turing really needed for his test were, say, clairvoyants, prophets, and such?
Given the Alleged Shortage of Clairvoyants and Prophets in the IT World…
Dupuy rightly worries about us degrading our own humanity when we reduce human intelligence to mechanized filtering between choices (in my words, deliberation). And Searle rightly points out that even a live human being challenged with a mental task such as language translation risks providing a solution devoid of understanding – understanding being the sine qua non of intelligence – if his approach fails to rise above mere rule-following.
It’s reductionist and perhaps even dismissive to boil down all of the damage and potential harms just discussed to simple learned helplessness. But we discussed a specific problem above, namely, loss of navigation skills due to too-easy and/or too-frequent application of machine-based help (in the form of GPS in our example). And there’s a clear parallel between that singular issue and the loss of overall humanness that will follow if we allow ourselves to elevate deliberation – the only type of “thinking” of which a machine is capable – to undeserved status as human intelligence. Specifically, we should be concerned that embracing solutions while being ourselves devoid of understanding is, perhaps, the very definition of learned helplessness.
I’ll further argue that the AI industry “getting it right” in regard to today’s understanding-related follies – that is, actually producing machine understanding – would be a “bad” thing! What evidence exists that most of us would avoid the lazy, easy solution of leaning on our AI machines to the exclusion of developing and deploying new human-developed solutions and creations? Once the habit(s) of leaning on the handy tech-rectangle-in-hand (i.e., cell phones today; horrific chip implants tomorrow?) are developed, why would a young teen bother to apply his God-given creativity and innovation to any new challenge coming his way? And how much “community building” will flourish if tech-bio abominations wander about believing all their needs are answered “from above” by a chip tapping into big data in the sky?
And for those who suggest an AI “displaying understanding” would simply be humankind “creating” a new being just as the Heavenly Father created us previously – I’d remind them that the concept of “the fall” inhering in Hebrew scripture throws a bit of a curveball into the game. A fallen race codifying one narrow type of thinking (i.e., deliberation) to create a more perfect race of machines? Asked more simply: You propose a fallen race can create an un-fallen race? OK. If you say so…
Stated another way: Every Christian child, taught properly, knows he faces a lifetime of repentance. What lines of code in AI will teach a machine to “repent,” knowing its metal and silicon are but dust and its power supply is notoriously vulnerable to disruption? Is uninterruptible filtering of data and endless switch-based selection between weighted choices the new definition of eternal life? Ugh. Speaking for myself, I’m not certain I wish to walk those streets of silicon with Ray Kurzweil and Lee Silver.
*
There is no quick summary here. This, because the problems we’ve described unfold in our bad and lazy habits – or, more accurately, in the thousands of tiny compromises we permit ourselves within the actions and accepted dependencies that build our collective self-images, day in and day out.
The starting point for correcting all of this is simple, however: Better we underestimate AI as the equivalent of yesteryear’s entertaining automatons than overestimate it as, well, human intelligence. Said another way – better to underestimate our machines, which suffer no soteriological, ontological, or eschatological harm in the error, than to underestimate ourselves. We humans enjoy too noble a heritage to mislabel one another as merely complex machines. If you don’t feel offended that some believe they are building machines that are your equivalent – or even your superior – then at least consider that participating in such foolishness risks offending your Creator. And that offense has an infamous name: Idolatry.

Jeff Krinock is a writer based in Johnstown, PA. He is a USAF veteran (helicopter and fighter pilot) and recently ended a post-Air Force, 15-year stint as a global consultant for IBM Corporation. His education includes a BA in Biblical Literature and an MA in Human Relations, as well as post-baccalaureate and graduate work in everything from creative writing to aviation to business. He studied briefly under the prolific sci-fi pioneer and legend, the late Jack Williamson, and also under the prolific British poet Antony Oldknow. Current non-fiction interests include the work of the late Orthodox writer Father John Zizioulas, the mimetic theory of the late French philosopher René Girard, and the concepts around gnomic willing as taught by St. Maximos the Confessor.
