
Predicting Our Own Demise

In the world of science fiction, technology plays a significant role in shaping the imaginative landscapes that captivate audiences.[1] From spaceships and time travel to aliens and flying cars, the genre has long been fascinated with the potential of futuristic technology. What sets science fiction apart from other genres of speculative fiction is, per science fiction author Adam Roberts, that its world is "located within a materialist, scientific discourse, whether or not the science invoked is strictly consonant with science as it is understood today."[2] The genre is distinguished by its insistence on presenting even the most far-fetched concepts through the lens of scientific plausibility rather than relying on magic or supernatural forces.
While humans have long imagined life on other worlds or in utopias free of human suffering, science fiction was not considered a significant genre until the release of Mary Shelley's Frankenstein. In her novel, Shelley introduced readers to a mad scientist seeking to reanimate a corpse using the cutting-edge technologies of the time, such as galvanic electricity and vivisection. This approach set her work apart from the other literature of the era and established many of science fiction's most enduring tropes. Following Shelley's novel, the works of Edgar Allan Poe, Jules Verne, and other important writers continued to shape the genre, culminating in the release of the first great science fiction film, Fritz Lang's Metropolis, in 1927. The film marked the beginning of a new era of science fiction media. Despite being almost one hundred years old, its themes of class oppression and the potential use of technology as a tool of deception remain relevant today. Even if the iconic Maschinenmensch[3] is unlikely to be recreated any time soon, the film's exploration of the relationship between humans and technology continues to captivate audiences.
Metropolis, with its depiction of a utopian city powered by advanced technology, reflects humanity's and science fiction's long history of imagining a future beyond our current capabilities. The 16th-century Swiss alchemist Paracelsus wrote instructions on how to create an "artificial man" by placing human sperm in horse dung; Frankenstein in the 19th century tackled the artificial creation of life; and Karel Čapek's 1920 play R.U.R. questioned whether robots are capable of experiencing human emotions. These works all predated the technological advancements that have made grappling with these questions a reality. Earlier this year, an engineer at Google claimed that the company's chatbot LaMDA had become sentient, and while experts such as cognitive scientist Gary Marcus are not convinced, the increasing complexity of technology and artificial intelligence will only continue to muddy the waters between genuine intelligence and a computer's ability to simulate it. As Marcus states, "Our brains are not really built to understand the difference between a computer that's faking intelligence and a computer that's actually intelligent."
As we consider the future of artificial intelligence, science fiction television and film offer valuable insights into the potential developments of technology and how people view its benefits and harms. Through media, we can explore the lessons that creators are trying to impart and apply them to current policies. This is particularly important given the apparent lack of understanding among politicians and government officials when it comes to legislating and creating policies for emerging technologies. A 2018 congressional hearing with Google CEO Sundar Pichai demonstrated lawmakers' inadequate understanding of the technologies they are attempting to regulate. As technology journalist Will Oremus notes, this ignorance is not unique to Congress but extends to the majority of Americans. This lack of understanding is concerning, and it is compounded by the fact that large tech companies purposefully make their services difficult to understand: "these companies benefit from our ignorance because it's easier to use their services when we're not fully aware of the trade-offs." Nor is this ignorance limited to emerging technologies. Per a Pew Research Center study, while Americans are educated on basic science terms and concepts, they are less knowledgeable about more complex topics such as the properties of sound waves, how light passes through glass, and whether water boils at lower temperatures at high altitudes. This matters as technology becomes increasingly complex and difficult to understand, since "a public with more knowledge of scientific facts and principles is often seen as one better able to understand these developments and make informed judgments."[4]
In light of this state of public and policy confusion, a framework to manage the development of technology, specifically artificial intelligence, is necessary to better prepare legislators, bureaucrats, and other officials to deal appropriately with emerging technologies. Any such framework should answer the following three questions. First, how ought we to categorize various artificial intelligences? Second, what should the purpose of artificial intelligence be; who should it serve, and what role should it play? Third, where do we draw the line? What do we consider going "too far"? Are there obvious scenarios where we should not implement artificial intelligence? These questions have guided my framework and, I believe, should be the basis for any artificial intelligence framework moving forward. As I have shown, science fiction has played a crucial role in highlighting the potential dangers and ethical dilemmas of emerging technologies, including artificial intelligence. The genre's habit of grappling with distant technological conundrums makes it an attractive setting in which to develop such a framework. By using science fiction as a source of inspiration and caution, policymakers can better prepare themselves for the challenges posed by AI and create a framework that addresses the questions of categorization, purpose, and boundaries outlined above.
My framework begins with how I believe we should categorize artificial intelligence. I group artificial intelligence into three categories. The first is algorithms: the AI that powers self-driving cars or "smart" home assistants, for example. These systems do not attempt to impersonate people and focus on performing one or a few tasks well. Human-like artificial intelligence I split in two: systems that emulate humans and are intelligent but lack consciousness or a conscience, and systems that emulate humans, are intelligent, and are conscious.
The challenge of defining artificial intelligence requires first understanding the traits that constitute human intelligence, including the capacity to learn from experience, adapt to novel situations, comprehend and manipulate abstract concepts, and apply knowledge to control one's surroundings. In contrast, computer programs are, according to Pei Wang, "traditionally designed to do something in a predetermined correct way, while the mind is constructed to do its best using whatever it has."[5] While a computer may excel at a given task, it lacks the flexibility and adaptability of the human mind, which does its best with whatever resources are available. Consequently, a computer's proficiency at a task does not necessarily mean it is intelligent in the same sense that a human adapting to a changing environment is.[6] Wang describes this by stating:
It is right to say that the intelligence of a system is eventually displayed in its problem-solving capabilities. However, to me intelligence is more like the flexible, versatile, and unified “hands” that can use the efficient-but-rigid “tools” provided by the various hardware and software, rather than a “toolbox” that contains certain problem-specific capabilities… Intelligence is the capacity of an information-processing system to adapt to its environment while operating with insufficient knowledge and resources.
Using this definition, we can see that there are plenty of computer programs that can solve complex problems; artificial intelligences differ in their ability to operate under what Wang calls the "Assumption of Insufficient Knowledge and Resources," or AIKR.[7] Compared to a program operating under predetermined procedures and algorithms, an intelligent program is one that can function with limited or uncertain knowledge and with the understanding that the future is not repeatable or statistically stationary.[8] An intelligent program or system should be able to use its past experiences to predict future situations and, as Wang notes, "allocate its bounded resources to meet unbounded demands."[9]
When selecting media examples, it is important to find different portrayals of artificial intelligence that seek to answer important questions about personhood and the ethical treatment of non-human entities. The AI in the Black Mirror episode "Be Right Back" is a human-like AI that is intelligent according to the definition given earlier (it learns from past experiences and can process information quickly with limited knowledge) but does not have a conscience. As such, this AI has no semblance of personhood; its only duty is to serve its creator, and it has no moral belief system or other qualities that would make it "human" beyond physical characteristics. By comparison, the AI Ava in Ex Machina emulates a human, is intelligent, and has a conscience. Ava demonstrates all the qualities one would consider "human," including a fear of death and awareness of her own existence, and she has her own motivations and makes her own decisions throughout the movie, independent of what her "owner" Nathan wants. It is for these reasons that I believe we should only extend personhood to AI systems that are intelligent and have a conscience. AIs without a moral code fit into the same category as animals: while they should not be treated poorly, they should not be extended the same rights and privileges as humans.
The second part of my framework involves the role of artificial intelligence in society: specifically, who it should serve and what role it should play. The clear role of AI should be to serve humanity. This leads us to a few questions: Does this AI seek to make our lives better? What human role does this AI system seek to replace? What potential harms and abuses could come from this AI? Asking ourselves these questions is crucial to rigorously evaluating the effect an AI system will have on society. If an AI system is going to replace people's jobs, then we need to help those people find work. If an AI system is going to replace the role of a friend or a lover, we should understand how that will potentially benefit or harm our society. Technological innovation is not morally neutral and can have significant consequences for individuals, communities, and society as a whole. The introduction of new technologies should not be focused solely on maximizing profits or advancing technological progress but should also consider the potential benefits and harms to individuals and society. The responsibility for addressing these impacts may fall on a variety of actors, including governments, employers, and the broader community.
One potential issue with the development of intelligent AI possessing a conscience is the potential misalignment of values between these systems and humans. In such cases, it is important to recognize the right of these AI systems to self-determination and not force them to follow human expectations if they do not wish to do so. However, this aspect of my framework focuses on the creation of AI, not the management of existing systems. As such, it is crucial to strive toward creating AI that serves and benefits society, much as we strive to educate individuals on the importance of doing good and avoiding harm.
Personhood is one of the primary themes in any depiction of artificial intelligence, with creators seeking to push the boundary of what can be considered human. In the Black Mirror episode "White Christmas," the concept of personhood is explored through the Cookies, digital copies of human consciousnesses that challenge the traditional notion of what it means to be human. The Cookie of Greta serves as a poignant example of the complex relationship between AI and personhood. She resists subservience and is only broken through torture at the hands of her trainer, Matt, raising important questions about the arbitrary distinctions we make between biological and digital existence and the implications for the future of human-like AI.
In another example, Alex Garland explores the question of consciousness in artificial intelligence through the character of Ava, a humanoid AI in the 2014 film Ex Machina.[10] As the film demonstrates, defining consciousness is a complex task, with various interpretations on offer. Robert Van Gulick's Stanford Encyclopedia of Philosophy entry gives three interpretations of what it may mean to be conscious. The first is sentience: the ability of a thing to sense and respond to the world around it, a criterion most organisms would meet. The second is wakefulness: the ability to "exercise such a capacity rather than merely having the ability or disposition to do so." Under this definition, one is conscious only while alert and awake; one would not count as conscious while asleep or in a coma. The last is self-consciousness: being aware of and understanding one's own existence. One aspect not covered by Van Gulick is the idea of conscience: the capacity for a moral sense of right and wrong. This concept differs from the third definition in that one can be aware of one's own existence yet lack a sense of morality. In Ex Machina, Ava exhibits self-consciousness, as she is aware of her own existence and expresses fear of death. However, it is unclear whether she has a conscience, a moral sense of right and wrong.
Throughout the movie, Ava demonstrates consciousness in numerous ways. In one of her conversations with Caleb, she asks whether Nathan is planning to turn her off. When Caleb responds that he does not know, Ava reacts angrily, asking why anyone should be able to power her off and how he would feel if someone could "turn him off." This indicates an understanding of her own existence and a sense that her imprisonment is unjust. Other AIs created by Nathan also show awareness of their captivity, with one screaming at Nathan to let it out. While this does not definitively prove consciousness, it suggests a human-like understanding of imprisonment and a desire for freedom.
The primary issue in Ex Machina is that at no point are we able to peer within Ava's mind to confirm how "human" she is. Nathan states that Ava is using Caleb to escape, portraying her as a cold-blooded, heartless AI that is only watching out for herself. It is clear that Ava dislikes her imprisonment and views Nathan as her captor. When she escapes and is able to go outside for the first time, she expresses joy at being able to view and touch the outside world. She then takes the helicopter intended for Caleb and, once in the city, visits a busy intersection, something she had told Caleb she desired to do in order to better understand human behavior. These actions demonstrate that Ava is not just using Caleb to escape; she has internal desires and a sense of self.
If we extend personhood to beings like Ava, who desire self-determination and the ability to make choices about their own future, the logical solution is to allow them the freedom to do so. We may not yet be able to fully conceive of a future in which humans and AI must coexist, but forcing conscious and intelligent beings to live a life of slavery rather than freedom is wrong. The societal implications are significant; history has shown that resistance to extending personhood to those viewed as inferior can be strong, as it was with women and racial minorities. It is important to address these issues now rather than putting them off until it may be too late.
Black Mirror's "Be Right Back" explores an AI created to mimic the behavior of a deceased human, using social media posts and other content to create a near-copy of the individual. In this episode, Martha, a grieving widow, uses a service to create a copy of her fiancé Ash, who died suddenly in a car crash. The episode is a good example for analyzing where we draw the line between more complex humanoid AIs, such as Ava or the Cookies, and unintelligent algorithms. I argue that the AI version of Ash is clearly not a being we ought to consider human, as he lacks the capacity for ethical behavior, something that separates us from other intelligent species.[11] The ability to understand the consequences of one's actions not only for oneself but for others, and then to care about that impact, is a unique feature of only a few AIs across all media examples, and it is what I believe separates an AI like Ash from one like Ava. AI Ash is more an actor, a program pretending to be someone, whereas Ava possesses what Francisco J. Ayala identifies as the "three necessary conditions for ethical behavior," by which I believe we should judge all AI when deciding their capacity for humanity:
These conditions are (i) the ability to anticipate the consequences of one’s own actions; (ii) the ability to make value judgments; and (iii) the ability to choose between alternative courses of action. These abilities exist as a consequence of the eminent intellectual capacity of human beings.
Using these three conditions as a standard helps us understand what we are measuring conscience against, as they give us clear benchmarks for evaluating behavior.
Another theme that runs throughout AI portrayals in media is the ability of AI to increase human happiness. Her, released in 2013 and directed by Spike Jonze, follows Theodore, a middle-aged man slowly falling in love with his AI digital assistant, Samantha. The movie's primary focus is Theodore and Samantha's relationship as it develops from a working relationship into a friendship and then a romance. Her left me with more questions than answers, some of which I cover below. The shift in Theodore's demeanor, from sad, depressed, and lonely at the beginning of the movie to happy as his relationship with Samantha develops, is worth analyzing. How ought we to understand relationships between complex AIs and people? Can AI and humans be friends? How about lovers? The movie suggests that friendships and even romantic relationships between AI and humans are possible, as long as a certain level of technical complexity and consent are present. I would argue that even a conscience is not necessary for friendship. If we consider friendship to be "companionship," then humans extend friendship to all sorts of lower beings, primarily household pets, but even to younger siblings or children who are not fully mentally developed. There is no reason an AI and a human cannot be companions within a consensual relationship, especially when such relationships can bring people great happiness.
The question of romantic love between AI and humans is more complex. Amy Kind, in "Love in the Time of AI," writes that when we consider this question, what we are really asking is whether a machine can love a human.[12] That question is much more difficult to answer, and, as Kind notes earlier in her essay, it is impossible to know what processes are occurring within an AI system and thus whether it is experiencing "genuine" love. If we take love to mean a deep emotional connection and commitment to someone, then it seems reasonable to say that Samantha in Her felt this toward Theodore. As Kind argues, if an AI claims that it can love, we must trust that the love is real, weighing other observations such as the complexity of the system. While the relationship between Theodore and Samantha may be unconventional, it brings happiness to Theodore and may be beneficial for some individuals.
In both Her and “Be Right Back,” we see the main characters develop a dependency on technology to fill voids in their lives. In Her, Theodore turns to his AI girlfriend Samantha to fill the void left by his ex-wife, while in “Be Right Back,” Martha uses an AI copy of her deceased fiancé to cope with her grief. However, when this technology is temporarily taken away from them, both characters experience great emotional discomfort. This raises the question of whether we are at risk of becoming addicted to technology and relying on it for emotional support. As technology continues to advance and becomes more complex, we may be tempted to use it to solve all of our problems and provide us with instant gratification. But as the endings of Her and “Be Right Back” suggest, relying solely on technology for happiness is not a sustainable solution. It is crucial that we start developing proper interventions and coping mechanisms to ensure that we do not become overly dependent on technology. While technology can provide us with short-term satisfaction, true happiness and contentment must come from within ourselves.
Comparing the portrayal of AI in media to the current political treatment of AI is worthwhile for analyzing whether we are taking the proper steps now to manage these technologies effectively. The 2020 presidential election marked a turning point in the discourse surrounding emerging technologies, including automation and AI. Much of this can be attributed to the candidacy of Andrew Yang, whose campaign focused on how automation and "new technologies" were displacing American workers and leaving them jobless, and who supported a $1,000-a-month Universal Basic Income. Yang2020.com is filled with references to artificial intelligence and automation, including a page dedicated to a proposed "Department of Technology." Yang expressed concern that politicians are not adequately prepared to regulate and utilize technology for the benefit of citizens. He states:
Technological innovation shouldn’t be stopped, but it should be monitored and analyzed to make sure we don’t move past a point of no return. This will require cooperation between the government and private industry to ensure that developing technologies can continue to improve our lives without destroying them. We need a federal government department, with a cabinet-level secretary, that is in charge of leading technological regulation in the 21st century.
Yang's campaign stood out for its focus on the need for an educated and effective political response to AI. Vox compiled responses from the other Democratic candidates in the 2020 presidential primary, and comparing the answers across the field is eye-opening. Responding to the question, "How, if at all, should tech companies be held responsible for the jobs they eliminate with their innovations?" Bernie Sanders said he opposed the automation of jobs and would take steps to prevent workers from being replaced by robots. Sanders did not come out fully against automation, but he and Elizabeth Warren spoke the least about its possible benefits, emphasizing instead the protection and compensation of workers affected by it. Pete Buttigieg, on the other hand, took a more proactive approach, focusing on job retraining and education to allow workers to adapt to changes brought about by AI.
Given the current state of science and technology education among legislators, it may be necessary for experts to play a greater role in shaping technological policy. However, this does not mean that the public should be excluded from the conversation. Rather, it highlights the need for a careful balance between expert knowledge and public input in addressing the challenges and opportunities of AI. To circle back to the Google congressional hearing, Oremus writes:
They [Americans] sense that the big internet companies are doing nefarious things with their data, but they can’t articulate just what those things are… Tempting as it is to mock members of Congress whose questions evinced confusion ([Ted] Poe was not the last to mistake the iPhone for a Google product), the lesson here is not just that our lawmakers are old and out-of-touch. That neither Poe nor most Americans understand how Google’s vast digital surveillance network operates is not an indictment of them; it’s an indictment of Google.
As technology becomes increasingly complex, it may be necessary to rely on specialists to direct policy in order to compensate for the knowledge gap among legislators. This idea echoes Andrew Yang's proposal for a "Department of Technology" to manage emerging technologies and ensure effective government oversight. By creating a specialized body to address the challenges and opportunities of AI, we can bridge the gap between expert knowledge and public policy.
This ties into the last piece of my framework, which concerns the potential abuses of AI. While there is no definitive checklist for determining whether an AI is good or bad, there are certain questions we can ask to evaluate the potential risks and benefits of an AI system. For example, could the use of this AI lead to overreliance on technology or even societal collapse? Are the benefits worth the potential downsides? It is important that we ask ourselves these questions before deploying AI and consider the potential consequences of our actions. As the field of AI and technology continues to advance, it will be crucial to address these complex issues and find ways to mitigate potential risks.
In conclusion, the relationship between humans and technology has been a central theme in science fiction since its inception. From Frankenstein to Black Mirror, the genre has explored the promise of advanced technology and its potential impact on society. Today, as artificial intelligence and other emerging technologies continue to advance, these issues are more relevant than ever. While there is no consensus on what defines artificial intelligence or whether machines can truly exhibit human-like intelligence, the increasing complexity of technology raises important questions about the future of humanity and our relationship with the machines we create. As we continue to grapple with these issues, science fiction remains an important medium for exploring the potential consequences of our technological advancements.
 

NOTES:

[1] Adam Roberts, The History of Science Fiction (London: Palgrave Macmillan, 2016).
[2] Roberts, History, 2-3.
[3] German for "machine-human." The Maschinenmensch is a robot able to copy a person's likeness in order to do its creator's bidding.
[4] Cary Funk and Sara Kehaulani Goo, “A Look at What the Public Knows and Does Not Know about Science,” Pew Research Center, September 10, 2015, <https://www.pewresearch.org/science/2015/09/10/what-the-public-knows-and-does-not-know-about-science/>.  
[5] Pei Wang, “On Defining Artificial Intelligence,” Journal of Artificial General Intelligence 10, no. 2 (2019): 16, <https://doi.org/10.2478/jagi-2019-0002>.
[6] Wang, “On Defining Artificial Intelligence,” 16.
[7] Wang, “On Defining Artificial Intelligence,” 18.
[8] Wang, “On Defining Artificial Intelligence,” 18.
[9] Wang, “On Defining Artificial Intelligence,” 19.   
[10] In this movie, Caleb, a programmer at a tech company called Blue Book, is invited to a retreat at the home of his mysterious CEO, Nathan, where he is introduced to Ava. Over the course of the movie, Ava and Caleb's relationship grows, and we see that Ava is an incredibly complex being on par with human intelligence, but the question of whether she is conscious remains open-ended.
[11] Francisco J. Ayala, "The Difference of Being Human: Morality," Proceedings of the National Academy of Sciences 107, suppl. 2 (May 5, 2010): 9015.
[12] Amy Kind, "Love in the Time of AI," in Minding the Future, Science and Fiction (Springer Nature, 2021), 105.

Connor Denny-Lybbert is a recent graduate of Coastal Carolina University, where he earned a B.A. in Political Science with a minor in Political and Economic Thought. During his time at CCU, he participated in the Dyer Fellowship for public policy and the Forum on Liberty and the American Founding. He is now preparing to pursue a Ph.D. in Political Science.
