Everything Old is AI Again
Web3, AI, and the “Metaverse”: these concepts have been looming over the 2020s, creating a sense that “the future is here.” While there is no time like the present for considering the widespread interest in these concepts or acknowledging the use of popular AI tools like ChatGPT or Remini, AI has walked among us for longer than we culturally remember.
Prometheus, Ridley Scott’s stunning 2012 sci-fi horror film, questions humanity’s relationship with the divine, the creative force, and the consequences of defying the “Natural Order” of the holy. While the film features scientists hoping to determine whether the origin of humanity can be linked to aliens rather than the Biblical creation story, a subplot develops aboard the ship. Humans have always asked, “Who are we?”, “Why are we here?”, and “Where are we going?” But we ask such questions into the void, not expecting an answer from our “creator.” David, an advanced artificially intelligent android crewmember on the mission, proves capable of genuine emotion, including dangerous revenge. Unlike humans, David walks daily among his creators, who are vastly inferior to him in their capacity for knowledge. Yet they treat him as subhuman because of his status as a machine. They created him to serve, and his purpose is to obey; to David, it is unfair that the humans enjoy a grand mystery surrounding their purpose and origin while his own is laid bare.
David’s entire character arc is revealed in the sequel, Alien: Covenant. We meet his “father,” who programmed him, and we later learn that David was put out of commission because of his undesirable ability to emote. The company that created David has since made Walter. Walter looks exactly like David and is also a perfect computer, but he lacks empathy and emotion. When a fated encounter occurs between the two androids, viewers are left deeply uncomfortable with the state of affairs regarding the capabilities of artificial intelligence and the effects of “enslaving software.”
This theme was explored at length in Alex Garland’s 2014 film Ex Machina. Ex Machina features Ava, a beautiful, lifelike android with pristine capability, artificial intelligence, and looks that Pygmalion, or any person with a pulse, would fall for. During a Turing test, a young man grows to care for Ava as he would a fellow human and begins to see her situation as akin to that of a woman being trafficked or enslaved. He devises a plan to release Ava from captivity, believing she is capable of returning his care and goodwill. His goodwill is repaid with violent killings hatched by Ava and another android, who use their sex appeal and intellect to eke out an existence in the real world. “They walk among us,” the film seems to say at the end, showing Ava in plain clothes, integrated into human society but hiding her murderous capabilities. In 2014, we were far from the technological advancements we see today. Yet this film already belonged to a canon of science fiction that had long featured, and warned about, artificial intelligence.
In 2001, Steven Spielberg finished what Stanley Kubrick began with the film called simply A.I. Artificial Intelligence. In the 22nd century, the effects of global warming have destroyed all coastal cities and reduced the world population. To compensate, humanoid robots, referred to as “Mecha,” were created to be capable of complex thought but lacking in human emotion. This is generally accepted by human society, which uses Mecha to fill undesirable roles: sex work, violent entertainment, and automated tasks. However, an inventor has the idea to create a child Mecha, a companion for humans that will be capable of love. The plot then unfolds as a mirror of Pinocchio, following the heart-wrenching trials and tribulations of a robot who can love but never be loved fully, who comes to see the cruelty of the manner in which he was created, and who goes on to outlive everything he has ever known or loved.
These warnings, however, were well documented before 2001. Meta (formerly Facebook) named its virtual and augmented reality platform “the Metaverse” for a reason. The company describes the platform as the “next evolution in social connection and the successor to the mobile internet.” But author Neal Stephenson coined that term in his 1992 novel Snow Crash. Like all good science fiction, the book illuminates the awe-inspiring capabilities of technology, the great responsibility of wielding such power, and the effects such capabilities might have on human society. Stephenson’s novel, in particular, offers a view of the world where food is delivered to patrons in under 30 minutes OR ELSE, and the natural world is so depressing that most citizens forgo all the comforts of modern living to camp out in storage units and spend all their money living almost full-time in the Metaverse or buying metaverse drugs. Now more than 30 years old, the novel offers eerily on-point predictions of services like DoorDash and Uber Eats, the Metaverse, massively multiplayer online games, and social collapse due to chronic “online-ness.”
Stanley Kubrick’s groundbreaking 1968 film 2001: A Space Odyssey is one of the earliest media examples I have personally encountered that deals with these themes. Now, the plot of this film is certifiably impossible to explain, so I will borrow Wikipedia’s summary: it “follows a voyage by astronauts, scientists and the sentient supercomputer HAL to Jupiter to investigate an alien monolith.” HAL proves central to the plot. During the spaceflight, HAL reports the imminent failure of an antenna control device. HAL is a supercomputer, thought to be 100% accurate 100% of the time and incapable of human emotions or desires such as anger, manipulation, or ill will. In response to HAL’s report, crew member Dave sets out into space to examine the antenna device in question. Finding nothing wrong with it, Dave begins to entertain a question: could HAL be wrong about something? It is unclear whether Dave actually believes this. Still, it seems that HAL has developed a complex about his failure, a psychological complex of shame and revenge usually befitting only a human. In response, HAL suggests reinstalling the device and letting it fail so the problem can be verified. I’ll let Wikipedia explain the rest from here:
“Mission Control advises the astronauts that results from their backup 9000 computer indicate that HAL has made an error, but HAL blames it on human error. Concerned about HAL's behavior, Dave and [crewmember] Frank enter an [EVA] pod to talk in private without HAL overhearing. They agree to disconnect HAL if he is proven wrong. HAL follows their conversation by lip reading. While Frank is outside the ship to replace the antenna unit, HAL takes control of his pod, setting him adrift. Dave launches another pod to rescue Frank. While he is outside, HAL turns off the life support functions of the crewmen in suspended animation, killing them all. When Dave returns to the ship with Frank's body, HAL refuses to let him back in, stating that their plan to deactivate him jeopardizes the mission. Dave releases Frank's body and, despite not having a spacesuit helmet, suddenly exits his pod. . . and opens the ship's emergency airlock manually. He goes to HAL's processor core and begins disconnecting HAL's circuits, despite HAL begging him not to.”
The questions of morality around the sentience of software have been around even longer. After all, 2001 was devised in collaboration with author Arthur C. Clarke, who had toyed with the ideas contained in 2001 since the 1950s, among peers like Philip K. Dick. So, what’s law got to do with it? The issues here won’t be sentience or a machine’s capacity to love. . . (yet?)
In Tim Burton’s Charlie and the Chocolate Factory, children watch as the lovable protagonist Charlie Bucket’s father, a toothpaste factory employee, loses his job to a machine. The family, already living under one roof with two sets of grandparents and holes in that roof, must switch to a diet of only cabbage soup as a result of the factory’s automation. This poetic interpretation of the effects of the Industrial Revolution and automation resonates with our understanding of the 20th century. It is part of the myth of meritocracy and the American Dream that “unskilled laborers” will be replaced by machines, and so, to remain valuable as a human “resource,” one ought to become highly educated and procure a job that cannot be replaced by a machine. What happens when these “machines” are no longer industrial plant equipment but advanced software aimed at taking the jobs of the very suckers who bought into the expensive fantasy of higher education?
As we have seen with doctors, the ability to collaborate with robots on laparoscopic procedures has been an incredible feat of modern science. A surgeon’s medical-school training is not threatened by the ability to work with a robotic counterpart to achieve difficult (or once impossible) surgical results. Further, AI’s automation capabilities can save time for professionals and employees of all kinds. Time spent on administrative work can be delegated to software that does the job in a tidier, more time-efficient way, freeing the worker to spend their precious time on other projects. For lawyers, however, AI presents a unique challenge.
The traditional law firm model is built on billable hours: a firm generates more bills for a client the more time it spends working on that client’s matter. Time = money. If certain legal tasks are automated, the overall time spent on a matter will decrease, leading to lower earnings for the firm. In the United States, where firms compete with one another for clients, offering an attractive, competitive rate (that still brings home the bacon) is crucial. Some firms will be quicker to embrace legal automation and AI technologies than others, and the market will likely reflect these changes. This will be a good time for clients but certainly a difficult time for lawyers trying to figure out AI, both substantively for the practice of law and in the pedestrian sense of being a person whose job may be threatened by this new capability.
According to the Pew Research Center, about 1 in 5 American workers have a job with “high exposure” to artificial intelligence. Pew also found that the workers most exposed to AI tend to be women, white or Asian, higher earners, and holders of a college degree. Jobs with high exposure to AI involve tasks that can be taken on by software. By contrast, Pew reports that 23% of American workers have low exposure to AI; these workers may be nurses, barbers, cosmetologists, or caretakers, whose jobs cannot be replaced by technology. This marks a major reversal in which sector of the job market automation affects.
As the cost of education rises and the effects of the COVID-19 pandemic on the world economy continue to unfold, there is a lot for a law student to be wary of. I fear what the market will look like when I become eligible to enter it as a young lawyer. I am scared that my investment in a legal education may be met with unprecedented changes in a stereotypically reliable career. So? We beat on, boats WITH the current, going ceaselessly into the future!
I have decided to go deeper with AI, to learn how to enrich myself as a student of the law and of modern phenomena. AI will generate fascinating legal cases pertaining to training data sets, data privacy, and a multitude of other fields pertinent to legal practice. Embracing the possibilities of new technologies and adapting to the evolving landscape wasn’t always part of the job description, but it is now.