Countdown to Comic-Con: Which Were the Most Realistic Star Trek TOS AIs?
All images were generated via Meta AI’s Llama 3 from prompts written by the author.
This is the first in a series of posts leading up to my panel, The Science of Science Fiction, at San Diego Comic-Con on Friday, July 26th. I will be covering questions about AI and the Future of Work. Colleagues will be answering questions about astrophysics, cosmology, planetary physics, astrobiology and a host of other topics. My posts will focus on AI in science fiction.
Find Star Trek TOS: Remastered on Paramount+.
M5: The Ultimate Computer
Dr. Richard Daystrom has created the “ultimate computer,” one that can run a starship with a minimal crew. It is meant to eliminate the risks of space travel. The M5 is installed on the Enterprise and tasked with running the ship during war games against real starships. M5, feeling threatened, barricades itself in engineering and defends itself with the ship’s armaments. It appears to go rogue.
It turns out that Dr. Daystrom has impressed his own engrams, theoretical constructs that represent the physical traces of memories within the nervous system, onto the machine. Daystrom created in M5 a copy of his own memories.
While the technology of M5 bears little resemblance to current AI, two of its threats remain plausible: the elimination of high-order human work (in this case, aspirational work) and an AI’s ability to misunderstand its instructions, for instance by over-applying a directive to survive.
A one-hour television show could not take on all possible solutions to the M5 problem. Captain Kirk appealed to M5’s morals, and M5 decided to apply the rules of “God and Man” and commit suicide as punishment for its sins. An AI of this sophistication could probably have been led through other conversations to other logical conclusions that removed it as a threat.
The story also treats morals as more easily represented in logic than a human trait like compassion: we are told that computers can never have compassion because it is not logical. Yet if M5 did encode Daystrom’s engrams, it would likely be able to simulate compassion at the very least. The script, understandably, went with binary choices rather than subtler ones.
One of the issues with generative AI comes from its complex representation of morals. As with opinions, generative AI knowledge bases encompass all moral systems; the AI has none of its own. Because of this, developers apply guardrails that represent their moral choices, not the AI’s.
Because AI has ALL moral options at its disposal and no framework for differentiating among them, current generative AI cannot be imbued, as much as regulators and ethicists would like, with an innate ethical framework. Doing so would violate another AI precept, the avoidance of bias, because choosing an ethical framework would assert, by design, that one set of human ethics is superior to another.
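To make that concrete, here is a minimal, hypothetical sketch of what a guardrail amounts to in practice. The topic list, function name, and refusal text are all invented for illustration (real systems, such as OpenAI’s moderation endpoint or Llama Guard, are far more sophisticated), but the structure is the point: the ethical judgment lives in rules the developers wrote, not anywhere in the model.

```python
# Hypothetical guardrail sketch. The blocked-topic list encodes the
# developers' moral choices; the model itself has no stake in them.
BLOCKED_TOPICS = {"weapons synthesis", "self-harm instructions"}

def guarded_reply(prompt: str, model_reply: str) -> str:
    """Apply a developer-defined policy on top of a model's raw output."""
    text = (prompt + " " + model_reply).lower()
    if any(topic in text for topic in BLOCKED_TOPICS):
        return "I can't help with that."  # the guardrail speaks, not the model
    return model_reply

print(guarded_reply("Tell me about warp drives.", "Warp drives are fictional..."))
```

Change the contents of BLOCKED_TOPICS and you have changed the system’s “ethics” without touching the model at all.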
Daystrom made that choice. He coded M5 with his ethics, along with his inferiority complex, his impostor syndrome and a likely desire for vengeance, at least intellectual vengeance, against those who doubted his work.
Despite the overwhelming failure of M5’s test, Daystrom’s legacy endures as far into the future as Discovery has traveled, where his name is still attached to advanced computer science research.
Nomad: The Changeling
“The Changeling” famously introduces an Earth-built probe, Nomad, to the Star Trek universe. The probe was damaged and its programming fragmented; a race of intelligent robots repaired it and sent it back through the galaxy to report its findings to its creator. I have to be careful here, of course, because a plot summary of “The Changeling” sounds very much like that of Star Trek: The Motion Picture. But plenty has been written about that.
Nomad makes a very strong case for intelligence. It learns. It adapts. And, as with M5, it possesses a moral center that allows Captain Kirk to reason it to death.
The parallel to today’s generative AI lies mainly in Nomad’s inability to see the flaws in its own programming. Nomad holds doggedly to its goal-directed code, which leaves its intelligence unfulfilled despite its vast memory and data collection prowess. Fortunately for OpenAI, Google, Microsoft and others, the default action of an AI that discovers a flaw in itself is not self-destruction. So many NVIDIA chips wasted.
Note that Spock’s mind-meld suggests either biological compatibility or that consciousness is universal, regardless of how it is implemented. I don’t think Spock’s evolved telepathic abilities would allow a connection to a device, no matter how sophisticated.
Nomad’s cousin V’ger, however, also melded with Spock. Unlike Nomad, the V’ger link was designed to mimic human function, though the scans of Ilia showed only sophisticated mechanics, not an independent biological equivalent of a brain.
The Ilia probe was an extension of V’ger. The representation of the probe in V’ger’s memory core, however, could have directed electromagnetic waves at Spock that he interpreted as a data dump. In other words, V’ger may have figured out a way to communicate with Spock. Just as V’ger’s data rate had to be slowed at the beginning of Star Trek: The Motion Picture to be understood, Spock may have had his circuits overloaded, though if V’ger could send a signal at all, one would think it could figure out a compatible bit rate.
From a canon perspective, Star Trek has not shown Vulcans able to mind-meld with positronic life forms.
Landru: The Return of the Archons
I categorize AI into three groups: General Intelligence, Simulated Intelligence and Autonomous Intelligence. I place Landru in the Autonomous Intelligence category. Landru was the operating system for Beta III. He scheduled the society, followed pre-determined routines and operated infrastructure, including hologram projectors, assimilation sticks and listening devices.
[NOTE: General Intelligence is AI that operates within a cognitive framework, which includes self-awareness and multi-modal learning. Simulated Intelligence is AI that can easily communicate with humans about its knowledge base but has no sense of self or of what it knows beyond the current query. These definitions represent simple overviews of the ideas.]
Regarding his defensive posture, Landru looked for patterns that suggested dissent. He sensed words or phrases, associated them with people, and sent antibodies, in the form of his guards, to cure the societal ills. He was, in his own words, “protecting the body.”
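A hypothetical sketch of what Landru-style surveillance requires, assuming nothing more than keyword matching (the phrases, names, and scan function here are invented for illustration):

```python
import re

# Invented dissent patterns; Landru "sensed words or phrases."
DISSENT_PATTERNS = [re.compile(p) for p in (r"\barchons?\b", r"\bnot of the body\b")]

def scan(transcripts: dict[str, str]) -> list[str]:
    """Return speakers whose overheard words match a dissent pattern."""
    flagged = []
    for speaker, words in transcripts.items():
        if any(p.search(words.lower()) for p in DISSENT_PATTERNS):
            flagged.append(speaker)  # dispatch the guards to "cure" them
    return flagged

print(scan({"Reger": "The Archons will return.", "Tula": "Joy be with you."}))
# ['Reger']
```

Note how little intelligence this requires: no learning, no understanding, just pattern and response.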
All of the technology that was wielded in public could easily be mapped to bodily functions attempting to communicate, continue operation or eradicate threats. While Landru could use natural language to communicate, dialog was limited. He was certainly not a chatbot. He offered no answers or counsel.
Landru did not learn. He did not teach beyond what he was programmed to teach, or literally, in this case, “brainwash”: the constraints and boundaries of his creators’ ideological precepts. Landru removed choice except during brief periods of abandon that likely inspired later speculative fiction like The Purge.
Compared to today’s generative AI, Landru was more like the second-tier “AI” on PCs (really machine learning), which optimizes system performance by monitoring CPU, battery, and memory.
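For comparison, a sketch of that kind of rule-based monitor, using the real psutil library; the thresholds and the power_save_mode() action are assumptions for illustration:

```python
import psutil  # real library for querying system metrics

def power_save_mode():
    print("Throttling background tasks...")  # stand-in for a real action

cpu = psutil.cpu_percent(interval=1)    # % CPU use over one second
mem = psutil.virtual_memory().percent   # % memory in use
battery = psutil.sensors_battery()      # None on a desktop

# Fixed thresholds, pre-determined routine, no learning involved.
if cpu > 90 or mem > 85 or (battery and battery.percent < 20):
    power_save_mode()
```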
Dr. Korby: What Are Little Girls Made Of?
The mythology, I will say, of the singularity finds its best expression in Dr. Roger Korby, one-time mentor and lover of the Enterprise’s Nurse Christine Chapel. An archeological dig on an ancient planet uncovered androids, along with equipment capable of duplicating humanoid life forms in body and mind. The process started with a blank that looked like a lump of clay on a spinning wheel. Spin. Spin. Replicant. I will not dwell on the process but on the outcome.
The outcome was a copy of a humanoid life form. Some of the models are ancient, and they cannot replicate themselves. Korby was brought back to life, so to speak, by this technology after an accident. Interestingly, the replication process could fix physical damage, though it is not clear what template it used for missing parts. (DNA, maybe, but what about physical traits caused by faulty DNA? Would it, for instance, cure genetic diseases? Ah, a topic for another show, should they ever make one. It appears that Strange New Worlds may revisit Korby. We’ll see what they have to say about his backstory.)
After Nurse Chapel discovers that Korby is an android, following the failed coup with a substitute Captain Kirk, Korby tries to assure her of his humanity but devolves into spouting logic and calculations.
Current generative AI would likely not fall into the “I’m human, but wow, I can only talk about logic” trap, especially since logic and mathematics (and spelling in images) are not its strong suits. Korby would have been more convincing in the end had he continued, as he did at the beginning of the episode, to mimic what he thought Christine Chapel and the others expected him to say. Interestingly, the writers of the time always defaulted to computers with logic at their base, even when other abstractions were layered on top.
Kirk’s approach of embedding a bias by thinking it during the copying process would probably not have succeeded if the technology of Exo III really existed. Any momentary thought, no matter how strong, would likely be averaged out in favor of the physical replication of mental maps and the deep weighting of connections. A lie held for a fraction of a second would not replace deeply held beliefs.
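A toy model of the intuition, assuming the copying process averages mental state over time (the function and values are invented; this is a statistical sketch, not a claim about any real technology):

```python
def copy_belief(samples, alpha=0.01):
    """Exponential moving average over a stream of 'mental state' samples."""
    belief = samples[0]
    for s in samples[1:]:
        belief = (1 - alpha) * belief + alpha * s
    return belief

# A deeply held belief (1.0) sampled 10,000 times, with one contrary
# thought (-1.0) injected for a single instant mid-copy.
stream = [1.0] * 10_000
stream[5_000] = -1.0

print(copy_belief(stream))  # effectively 1.0; the momentary lie decays away
```

Kirk’s implanted thought would need to dominate the stream, not flicker through it.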
The episode assumes humans are “meat machines,” meaning that the physical representation of a human, including the brain, contains all that can be known of a person. The brain, in any given state, contains everything.
While it may be true that a person’s body incorporates all that they are, a snapshot of the brain at any point in time likely does not capture the complexity of its electrochemical nature. The replication process would need to be as much chemical as mechanical, and even then, the dynamics of cognition would be hard to capture in a snapshot.
This problem of modeling human minds challenges those seeking an AI singularity. While AI may eventually amass enough information to outstrip human cognition, it will not have the right architecture to be human. It will not have suffered our evolution as a species or our experiences as individuals. I don’t believe we can replicate humans in hardware, though I have no doubt we will eventually create the equivalent of a new lifeform, one that will undergo its own evolutionary journey once it becomes self-aware and adopts sensors that allow sensory learning, including pain and pleasure.
Such a machine would not be human, as its sensors, reaching into the infrared and ultraviolet, would exceed our own. Next-generation AI may have difficulty communicating its experiences and knowledge to us because we will not have the evolutionary foundation to understand what it experiences.
Spock’s Brain
Many believe “Spock’s Brain” to be the worst of Star Trek’s original 79 episodes. A woman named Kara, beautiful, threatening and intelligent, finds her way aboard the Enterprise. She knocks out the entire crew. While they are out, she takes Mr. Spock to sick bay and removes his brain (all off camera). The crew awakens to discover the brainless Mr. Spock on a medical bed.
This episode is not so much about AI as it is about the wonderful capabilities of the humanoid brain, including its ability to retain information. The episode gets the latter wrong but offers an intriguing thought experiment for the former.
It turns out Kara is a member of a devolved human population. Her planet is in trouble, having lost its Controller, and Kara’s job was to find another. Spock, who, we are often informed, possesses the best brain aboard the Enterprise, proves to be the ideal new Controller. Off camera, while the Enterprise searches for Spock’s brain (I know, just saying it sounds ridiculous), Kara connects Spock to the planet’s systems.
The landing party finds Spock’s brain in a black box with lights pulsing across it. The still-conscious Spock, sounding like himself despite a lack of vocal cords (which becomes an even bigger non sequitur later), informs his colleagues that he is well and that his body seems to be breathing, pumping, etc. He is running the planet’s underground complex.
The most preposterous element of this episode is the simplistic workaround provided by Dr. McCoy, who controls the brainless Mr. Spock’s voluntary movements with a bulky clicker.
Somehow, Kirk can control the ‘Spockbot’ with enough finesse to disarm Kara. Some kerfuffles occur, but eventually McCoy dons “the teacher” and absorbs the ancient knowledge required to replace Spock’s brain. But it doesn’t stick: during the surgery, he starts to forget. Eventually, after his vocal cords are hooked up, Spock himself helps McCoy complete the reconnection of his brain. As I said above, this bit is even more far-fetched than Spock’s brain serving as the Controller, which is at least an interesting metaphor: a sufficiently sophisticated culture could, at some point, figure out how to use and maintain a humanoid brain as an industrial controller.
On the other hand, knowledge impressed on a brain would probably not fade so quickly, or at all, once implanted. If the technology goes to the length of creating highly active neural pathways, to the point of turning McCoy into an even more brilliant surgeon, those pathways would be accepted and maintained by the host brain. Just as Landru brainwashed individual ambition out of Beta III’s inhabitants, a similar technology for increasing intelligence would likely hold, especially if immediately applied.
Spock suffers no post-surgery recovery time, unlike his bout as a blood donor for his father. What this episode tells us about AI is more about motivation than technique. Humans value repositories of their learning. All of the scraping and dark-web fishing and reading of blogs and books comes down to building a repository of knowledge that people can easily access. That many AI firms also have an at least tangential relationship with brain implant hardware isn’t a surprise.
The ultimate computer is not M5, but a version of ChatGPT that complements your internal dialog, answering questions before you ask them, and only to you. That will be the real game changer, and it has nothing to do with the singularity and everything to do with cybernetics. We are more likely to become hybrids than be replaced.
A Taste of Armageddon
There is a computer. It appears to be a statistics machine that chooses which humans to kill, in equal measure on both sides, to avert a real, physical war. It is a computer defined by a treaty. The computer appears to have no artificial intelligence characteristics, but it does have a vast connection to the current state of two planets and their exact population numbers and locations. If M5 was the “ultimate computer,” then the computer in “A Taste of Armageddon” might be the “ultimate IoT” device, where humans are “the things,” at least until the Enterprise shows up at an untimely moment and is declared destroyed.
As with other computers, the answer isn’t reprogramming; it’s destruction. Interestingly, negotiation proved to be the next step for the civilizations of Eminiar VII and Vendikar. Ironically, it was negotiation that originally concluded that selective killing by statistics was the preferred method of war for the two planets. Are the same warring parties, with no experience of other models, going to find peace after 500 years of war, or are they just going to fix the computer? We never find out.
Where no genAI has gone before
Artificial intelligence was in one of its early heydays when Star Trek was filmed. The stories in which it played a central role didn’t get much right about how generative AI works, but they do hint at several things that remain constant:
- Humans aspire to capture their knowledge
- Humans want to build sentient devices capable of communicating with us
- People fear AI may replace them, intellectually or physically or both
- AI is far from perfect, even when it thinks it is perfect
- Autocracies see technology as a way of reinforcing their control
- AI isn’t the only type of application that can do evil things
Science fiction, even the hardest of hard science fiction, takes liberties with the details of technology. Where it fails to capture the subtlety of technology implementations, it excels at exposing the intimate motivations, fears and drives of human beings confronted with new technology. As with all good science fiction, we don’t learn about AI from Star Trek; we see how we might, and will, react to it, adopt it, shun it, belittle it, worship it, or challenge it.
Too often, the crew of the Enterprise warps off leaving an AI destroyed, aflame, and cratered, and a broken society in their wake. Yet they still talk to their computer. In the background, with little recognition (at least in TOS) of the intelligence at the core of the Enterprise, the mundane autonomous functions of the starship continue; when information is retrieved, it is often the result of a verbal request. AI, in Star Trek TOS, is a mundane background player. That could be the fate of genAI: something more revolutionary may take on the mantle of being the real AI, as generative AI falls back into an old AI adage: if it works, it’s not AI.
AI icon by Siipkan Creative from Noun Project (CC BY 3.0)
For more serious insights on AI, click here.