
Responsibly and Realistically Sentient

“We still have judgement here, that we but teach
Bloody instructions, which being taught return
To plague the inventor.” ~ Macbeth

 

The “mythology of technology” is how Charles Hugh Smith frames our collective problem in the opening sentence of his recent article on artificial intelligence (AI).
Mythology is the sometime creator of kings. I can hardly think of a more apt word to summarize the lenses through which the tech-enthralled West looks at the machines we’ve placed as our masters.
Many before Smith have warned us. Some, like the late Joseph Weizenbaum of MIT, let a bit of the genie out of the bottle before attempting to slam the bottle shut again. At the opening of our digital computer era, Weizenbaum created the world-shaking ELIZA program, which got everyone from his secretary to Trekkie fans enthused about talking to and with a computer program.
One day, observing his secretary’s delusional wish for privacy as she “conversed” with his ELIZA program, Weizenbaum saw immediately (as so few after him have been able to see) that men like him – folks who create machines that mimic human reasoning and then parrot back mere representations of human speech and conversation – were tempting men to foolishness. Foolishness so profound that it carries spiritual implications for human society.
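ELIZA’s core trick is easier to demystify than its reputation suggests. Below is a toy reconstruction in Python – mine, not Weizenbaum’s 1966 original, which was written in MAD-SLIP – showing that the whole “conversation” reduces to pattern matching plus pronoun reflection:

    import re

    # A toy ELIZA (my reconstruction, not Weizenbaum's original code):
    # match a pattern, flip the pronouns, and hand the speaker's own
    # words back as a "response."
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

    RULES = [
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
        (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
    ]

    def reflect(fragment):
        return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

    def respond(text):
        for pattern, template in RULES:
            match = pattern.search(text)
            if match:
                return template.format(reflect(match.group(1)))
        return "Please go on."  # the all-purpose fallback

    print(respond("I feel nobody listens to my ideas"))
    # -> Why do you feel nobody listens to your ideas?

Feed it “I feel nobody listens to my ideas” and back comes “Why do you feel nobody listens to your ideas?” – a mirror, not a mind.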
Idolatry takes many varied forms. Perhaps Weizenbaum, like me, even saw in our reactions to artificial intelligence a rough parallel to the lust and twisted image-worship induced by photo- and video-based pornography. Those mistakes begin with a misplaced emphasis on the image over the real; the wish to create from our fallen imaginations originates in our mortal, hormone-addled, fear-riddled, and mood-dependent brains, not in the mind and creativity of the Almighty. Both human images and human words are representations of more fundamental aspects of our humanity. Symbols in place of the real. Or in place of forms, perhaps, if you’re a fan of Plato.
Is it not possible that our misuse and misunderstanding of cleverly filtered and re-compiled words could, like visual pornography, twist something fundamentally good and beautiful until what’s left is a sort of human lust for satisfaction from symbols, a fruitless quest for fulfillment that can only ever come from an encounter with that which is real?
Returning for a moment to the insights of Charles Hugh Smith, who stated:
If we pull aside the [technology] mythology’s curtain, we find that AI mimics human intelligence, and this mimicry is so enthralling that we take it as evidence of actual intelligence. But mimicry of intelligence isn’t intelligence, and so while AI mimicry is a powerful tool, it isn’t intelligent.
Smith’s insight is powerful. Yet in quoting him, I feel something perhaps akin to the frustration God Himself must have felt with Moses and the recalcitrant gold-worshiping Israelites in the wilderness. As it happens, multiple figures from academia have been prodding us to avoid AI-based idol worship for quite some time. To no avail whatsoever. We remain as deaf to wise warnings and guidance as those dancing at the feet of their desert idols were to Moses when he descended from Sinai and the presence of God.
To wit, years before Smith’s insights about “AI as mimicry,” another titanic figure from academia, Berkeley’s John Searle, prodded folks to see the (AI) emperor’s state of undress just as Weizenbaum had done years before him. Professor Searle invented the now-famous thought experiment that’s come to be known as the “Chinese Room Argument.” Searle posited a man sealed in a room with a comprehensive rulebook for manipulating Chinese symbols. Handed passages written in Chinese, and given enough time, the man could follow the rulebook’s instructions and pass back written replies so apt that the Chinese speakers outside would swear they were corresponding with a fluent human being – all without his having the slightest genuine comprehension of a single character he copied onto the page.
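The mechanism Searle described can be shrunk, for illustration, to a few lines of Python (my toy, not Searle’s own): reduce the rulebook to a lookup table, and a flawless exchange proceeds with zero comprehension on the occupant’s part:

    # A toy version of the room (my illustration, not Searle's): the
    # rulebook becomes a lookup table, and following it flawlessly
    # requires no grasp of what the symbols mean.
    RULEBOOK = {
        "天是什么颜色？": "天是蓝色的。",  # "What color is the sky?" -> "The sky is blue."
        "你今天好吗？":   "我今天很好。",  # "How are you today?" -> "I am fine today."
    }

    def the_room(symbols_in):
        # The occupant matches incoming symbols against the book and
        # copies out whatever the rule dictates; understanding of
        # Chinese never enters into it.
        return RULEBOOK.get(symbols_in, "请再说一遍。")  # "Please say that again."

    print(the_room("天是什么颜色？"))  # prints: 天是蓝色的。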
Searle’s thought experiment didn’t even deal directly with machines, per se.  His point was that even human beings who choose a path of merely following rules for thinking can, at times, operate devoid of genuine understanding.  How much more so, then, must this be true of the “talking machines” we’re now creating – even the most sophisticated among them?
*
I worked for years and years on projects involving IBM’s impressive Watson technology.  Yes, that Watson.  The one that can win the highly language-dependent TV game show known as Jeopardy.  But my own realization about the unavoidable machine-like nature of AI came not from my years rubbing virtual elbows with Watson.
No, Watson’s Jeopardy-level verbal prowess was not my source of illumination. My own light came when I ordered from eBay a copy of one of my favorite childhood toys: a simple plastic computer named Dr. Nim. The eponymous word nim, of course, refers to a simple game based on mathematics.
The rules of nim are easy enough: each player in turn takes 1, 2, or 3 marbles from a pile of 12 (actually, the pile can contain any multiple of 4). By tracking the marbles in groups of four, you can ensure that you take the last marble – and thereby win – every time. How? Whoever must move when exactly four marbles remain will lose the game. If Player One takes one of the four remaining marbles, Player Two takes three, leaving no marble for Player One to take (who, by the rules of nim, thereby loses). If instead Player One takes two marbles, Player Two takes two, leaving Player One the marble-less loser once again. The same math applies if Player One takes three. In every case, Player Two simply answers so that each pair of turns removes exactly four marbles.
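For the technically inclined, the entire winning strategy fits in a few lines. Here is a minimal sketch in Python (my own illustration of the arithmetic above, not anything shipped with the toy): the responding player simply keeps each pair of turns summing to four:

    import random

    # A minimal sketch of the nim strategy above: from any pile that is
    # a multiple of 4, the responding player wins by always taking
    # (4 - whatever the opponent just took).
    def winning_reply(opponent_take):
        return 4 - opponent_take       # 1 -> 3, 2 -> 2, 3 -> 1

    pile = 12                          # any multiple of 4 works
    while pile > 0:
        p1 = random.randint(1, 3)      # Player One: any legal move
        pile -= p1
        p2 = winning_reply(p1)         # Player Two: the "Dr. Nim" move
        pile -= p2
        print(f"P1 takes {p1}, P2 takes {p2}; {pile} remain")
    # Each round removes exactly 4 marbles, so Player Two always takes
    # the last one, and Player One, facing an empty pile, loses every time.

Run it and Player Two takes the last marble on every playthrough, no matter how Player One squirms. That tiny reply function is, in essence, all the “intellect” the toy needs.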
The designer of the Dr. Nim plastic computer simply took the rules of nim and, via a few plastic switches (mechanical switches, that is), created a machine that could “play” nim, the ancient child’s game, to perfection.
As a child I traversed the usual timeline of new-toy emotions with Dr. Nim and its firetruck-red plastic frame: initial fascination, followed by “wowing” the adults in the room with demos, followed too quickly by the inevitable feeling of “Yep, that was fun for a while. But is that all that piece of plastic can do?” My engineer father, however, analyzed the toy and concluded that it was a little piece of educational genius. As happens so often in life, decades later I realize just how correct my Dad was on that point.
Digital computing – the backbone of AI – relies without fail on quantification-based deliberation. The word deliberation, of course, has been circumscribed to some degree by no less a historical figure than Aristotle in his Nicomachean Ethics. Whatever else we may say about the human act of deliberating, and whatever adjustments to our understanding were made by, for example, Hume or Kant, deliberation as humans know it involves a seemingly endless weighing of alternative courses of action. And, of course, one cannot weigh what one cannot in some way quantify, nor sort what one cannot label. This has massive implications in a society that elevates deliberation to an undeserved spot as the highest form and expression of human sentience. Repeating, for emphasis: undeserved spot.
*
Dr. Nim, the plastic toy computer – unlike every digital computer today – had all its innards, all its switches and toggles, made large and garishly visible.  By design, of course.  Dr. Nim, the toy, was meant to allow young people to see and understand exactly how a “computer” computes.  Deliberation, as it turns out, can be as observable as it is sometimes pedantic and uninspired.  (But that’s another point for another essay.)
As an adult who had by that point programmed demonstrations of speech-recognition software, I found that returning to such a fundamental form of computer was eye-opening in a unique way. Dr. Nim, the toy, boasts finger-sized switches? And these housed in a red plastic frame we deign to label a computer? By contrast, the more miniaturized the millions upon millions of switches in an electronic computer become, the less visible and less accessible they are. And – seemingly by default – the more mysterious. Further, in our society, the goal of anthropomorphizing as many aspects of electronic computers as possible remains firmly in place (thank you so much for popularizing that goal, Spock and Kirk).
So, putting those two trends together – i.e., hiding the machine’s inner workings and anthropomorphizing its outputs and actions – results, obviously to me at least, in fertile ground for idol making. Hide the electro-mechanical working parts via miniaturization, elevate clever mimicry so that such a secondary human action as imitation becomes primary in our awareness, and what you have left is a potential cult composed of those who worship smoke and mirrors. Nearly literally so.
*
There’s another children’s game out there – this one even more basic than my beloved Dr. Nim – that shines an even brighter light on just how simple our computing machines really are. In fact, my pejorative, simple, does not begin to convey my disdain (bordering on contempt) for the disparity between how computers are constructed and the human emotion and energy we lay at their silicon and copper feet. (Yes, I’m unveiling the lede a bit late: in short, to me, computer-worship has unleashed nothing short of havoc on society.)
But returning to illuminating children’s games: as far back as 1961, some researchers were already working on machine learning – the current backbone of artificial intelligence. One of them was Donald Michie (1923–2007). To demonstrate as clearly and simply as possible the actual mechanism and action of machine learning in progress, Michie created a game known by the spirited acronym MENACE (Machine Educable Noughts And Crosses Engine).
The logic “program” that makes MENACE work unfolds to the players as each Tic-tac-toe move is made, bit by bit (no pun intended). Without going into too much detail – detail best explained in the video linked above – I’ll just say that it is in the playing of MENACE that changes are made to its “program.” Those “runtime” changes end up improving MENACE’s approach. In creating MENACE – a sort of machine that seemingly adjusts its behavior in response to stimuli – Michie displayed a deep understanding of machine learning; a somewhat lost lesson, however, is that human activity is still required for the MENACE “machine” to improve. Notwithstanding its completely unconventional structure – if a collection of matchboxes can be deemed a structure – MENACE “computes” every bit as much as any state-of-the-art electronic computer. Yet, just like any Cray or Watson or HAL (pick your iconic name), MENACE does nothing, creates nothing, builds nothing, and displays nothing without its human creators.
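Still, the matchbox mechanism is simple enough to sketch. Here is a rough Python rendering of the idea – mine, with no board-symmetry reduction and illustrative bead and reward numbers, so not Michie’s exact design: every board position MENACE has seen becomes a “matchbox” of beads weighting its possible moves, and “learning” is nothing more than adding or removing beads after each game:

    import random

    # A toy MENACE (my sketch, not Michie's exact design): each seen
    # board state is a "matchbox"; its beads weight the legal moves.
    boxes = {}
    WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(b):
        for i, j, k in WIN_LINES:
            if b[i] != ' ' and b[i] == b[j] == b[k]:
                return b[i]
        return None

    def menace_move(board):
        state = tuple(board)
        if state not in boxes:  # a new matchbox: 3 beads per legal move
            boxes[state] = {i: 3 for i, c in enumerate(board) if c == ' '}
        moves, beads = zip(*boxes[state].items())
        return state, random.choices(moves, weights=beads)[0]

    def reinforce(history, delta):
        # The entire "learning" step: adjust beads for the moves played.
        # (Keeping at least one bead is my simplification; a real
        # matchbox can empty out, forcing MENACE to "resign.")
        for state, move in history:
            boxes[state][move] = max(1, boxes[state][move] + delta)

    def play_one_game():
        board, history = [' '] * 9, []
        for turn in range(9):
            if turn % 2 == 0:              # MENACE plays X
                state, move = menace_move(board)
                history.append((state, move))
                board[move] = 'X'
            else:                          # a random stand-in plays O
                board[random.choice([i for i, c in enumerate(board) if c == ' '])] = 'O'
            w = winner(board)
            if w:
                reinforce(history, +3 if w == 'X' else -1)
                return w
        reinforce(history, +1)             # draw earns a small reward
        return 'draw'

    results = [play_one_game() for _ in range(3000)]
    print("X wins, first 500 games:", results[:500].count('X'))
    print("X wins, last 500 games: ", results[-500:].count('X'))

Played against a random opponent, the win count in later games should drift upward over the early games – behavior-shaping in plain sight, with every “insight” supplied by the humans who chose the rewards.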
Does the MENACE machine “learn”? That is, of course, the question. And it’s a question that depends, without the slightest doubt, upon the agreed-upon definition of “learn.” Even without clearing the philosophical air with key definitions, however, we can say that no machine yet made comes close to fulfilling a purpose sans human involvement. Nor (I assert, unequivocally) will any human ever create a machine capable of meaningful behavior devoid of human involvement. That’s because when it comes to machinery, humans generate the meaning, the goals and ends, the behaviors, and the capacity to modify actions in response to stimuli – all of the above, and then some. To say that a machine can generate meaning is a bit like saying the sun creates our understanding of light. Don’t confuse powerful effect with the essence of a thing – an ancient mistake if ever there was one.
The point I’m making about expecting sentience to arise from machine learning is somewhat akin to expecting a word or a number to generate and carry meaning in and of itself. I don’t bring the number “7” to a party and introduce it (Sesame Street gags and themes notwithstanding). Nor do I treat a single word as having or generating purpose or intention (black magic and mantras notwithstanding). The symbols we use are representative, and what they represent is generated by, through, and among human beings. No number of machine switches, no matter how sophisticated their arrangement, will change that fact or create meaning where no human being denoted meaning previously. Hence, no machine will self-generate meaning, sentience, or even genuine innovation.
*
The question, “Where do we go from here?” looms. Answers do not. Clearly, waiting for some learned tech “insider” to sound the alarm and pull us away from the feet of our golden calf won’t work, because as we saw with Weizenbaum and Searle – and, to a lesser extent, even with the warnings issued by Gates and Hawking – warnings about the dangers of AI fall on the deaf ears of today’s public.
Perhaps even more worrisome is our collective penchant for converting warnings about the nature of AI into concerns about how certain greedy, avaricious (or, far more likely, evil) men will implement and utilize AI. We hear phrases like, “AI will soon control the world and might make a decision to kill all humans when asked to end cancer.” What? WHAT?? I’m to believe that a series of non-sentient electronic switches will somehow, without human help or intervention, take over our infrastructures, our pharmaceutical industry, our agriculture, our waterways, and our highways? I’m to assume that with no decision from generals and admirals, some series of silicon components will take over our nuclear launch codes and facilities? The golden calf of AI has no feet or hands. Unless we willingly lend that calf ours.
So the goal, for me anyway, is to convince today’s and tomorrow’s generals, admirals, pharmacists, billionaires, presidents, and congressmen that machines are machines. They are not sentient, and they are certainly not, in and of themselves, evil in such a way that they can “end the world” without the considerable help of, well, evil and/or misguided human beings. Aaron fashioned the original golden calf. Thousands upon thousands of us are in the process of fashioning the new, silicon-based golden calf.
Acknowledge that fact.  Acknowledge our collective roles.  Then turn 180 on your heels, walk away from the silicon, and live to fight, work, and play another day.  In other words, be sentient.  Responsibly and realistically sentient.  Something no calf will ever do or be.
Jeff Krinock is a writer based in Johnstown, PA. He is a USAF veteran (helicopter and fighter pilot) and recently ended a post-Air Force, 15-year stint as a global consultant for IBM Corporation. His education includes a BA in Biblical Literature and an MA in Human Relations, as well as post-baccalaureate and graduate work in everything from creative writing to aviation to business. He studied briefly under the prolific sci-fi pioneer and legend, the late Jack Williamson, and also under the prolific British poet Antony Oldknow. Current non-fiction interests include the work of the late Orthodox theologian Metropolitan John Zizioulas, the mimetic theory of the late French thinker René Girard, and the concepts around gnomic willing as taught by St. Maximos the Confessor.
