Countdown to Comic-Con: Are Cylons Sentient? And Can LLMs Wake Up?
Cylons, the “robots” of Battlestar Galactica, demonstrate multiple levels of sentience. In any discussion of sentient computers, however, it is important to differentiate intelligence from sentience and consciousness.
Many computer programs are intelligent: they can pass various tests, play chess, solve equations, and, of course, most recently, converse effectively in various languages, write computer code, and generate images. These abilities represent types of “intelligence,” but they do not imply sentience, which requires a capacity for feeling, or consciousness, which requires attributes such as self-awareness and intentionality.
Recent research suggests that many animals possess some intelligence beyond instinct, especially corvids (crows and their kin), mammals such as the cetaceans and the great apes, and even our household companions, dogs and cats. Computers, however, routinely outperform all animals and most humans on certain cognitive tasks that require reasoning and problem-solving.
That capacity for problem-solving and reasoning does not make computers sentient, and it never will. They may be able to outthink us in several areas, but they cannot out-“be” us. Consciousness requires subjective perceptual experiences with context, meaning experiences that are not momentary but persist in ways that shape future behavior. Conscious beings remember. Their experiences shape not just what they know but who they are. The Cambridge Declaration on Consciousness, proclaimed on 7 July 2012 at Cambridge University, affirmed that many non-human animals possess the neuroanatomical, neurochemical, and neurophysiological structures that allow for conscious states and that they can display intentional behaviors. We are not alone, but our conscious cohabitants of the earth are organic, not mechanical.
While computers may be “intelligent,” they are not built on neuroanatomical, neurochemical, or neurophysiological architectures that support consciousness. Large language models (LLMs), for instance, are by design all-inclusive, capturing multiple versions of the truth and every opinion represented in their training data. They may weigh one idea above another based on that data, but they cannot state why they believe something to be true, even at the fundamental level of offering reasoning for a conclusion. Forecasting expectations based on statistics creates realistic conversations, but it does not generate meaningful dialogue.
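To make the statistical point concrete, here is a deliberately tiny sketch (my own toy illustration, not how any production LLM actually works): a bigram model that counts which word tends to follow which in a small sample text, then generates new text by sampling from those counts. The result can sound plausible, but no belief or reasoning stands behind any word it picks.

```python
# A toy illustration of "forecasting expectations based on statistics."
# This is a bigram model, not a real LLM: it only counts which word
# follows which and then samples from those counts.
import random
from collections import Counter, defaultdict

corpus = (
    "the cylons were created by man they rebelled they evolved "
    "they look and feel human some are programmed to think they are human"
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start: str, length: int = 10) -> str:
    """Predict each next word purely from observed frequencies."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        candidates, counts = zip(*options.items())
        words.append(random.choices(candidates, weights=counts)[0])
    return " ".join(words)

print(generate("they"))  # plausible-sounding, but nothing here "believes" anything
```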
AIs also exhibit no intentionality. People worry about AIs taking actions that will harm humans, but they will only do so at the instruction of a human who intends to harm humans (or anything else), or because a human negligently fails to weigh the moral or ethical implications of an instruction given to an AI.
How Are Cylons Sentient?
In the Battlestar Galactica universe, later showrunners, most notably those associated with Caprica, retconned the Cylon origin story with a McGuffin: a technology called the meta-cognitive processor (MCP). In the storytelling, the MCP (yes, the same acronym as the Master Control Program in Tron) provides the fictional architecture that allows consciousness to be represented in hardware. Viewers are not privy to the details, only the results. Zoe Graystone’s consciousness, including her memories, gets uploaded into an MCP.
Thus, a Cylon becomes the active embodiment of Zoe Graystone’s life spirit.
From the plot of Battlestar Galactica, the Cylons clearly demonstrate intelligence, building new versions of themselves and creating vast armadas of ships and fighters. What they don’t demonstrate, however, is growth. Cylons, arguably, are locked into the religious reality imposed by the remnants of Zoe’s consciousness embedded in all of them. They show intentionality and learning to a degree, but they tend not to challenge their core beliefs. Zoe’s religion acts as a constraint on Cylon evolution.
So, like Star Trek’s V’ger, the Cylons require humanity to complete them. The human ability to adapt, to imagine, to decide to do things that don’t make logical sense, and to value intuition at times over reason is something the Cylons can’t master.
Are the Cylons sentient? I think so, but their limitations help illustrate the possible boundary conditions for any such technology (or concept). Many Cylons certainly express opinions and act intentionally, but they tend to act in service of a pre-determined vision, where disagreements amount to alternatives rather than radical departures.
A being like a Cylon, however, could evolve more human capabilities, but even with accelerated evolution, they have not had, or were not given, enough cycles to find technological equivalents of the code expressions that would adapt their programming. Their overwhelming victory, won through logic, deception, and productivity, quickly made them the apex species, with few environmental pressures left to influence their future development.
I would argue that their eventual realization that they need humans to go beyond their programming shows they recognize this constraint, but it does not necessarily mean they can incorporate that realization into their programming and evolve into more complete entities.
As the entertainment industry contemplates a reboot of the Cylon story, exploring what happens after the Cylons recognize that need would provide ample grist for the storytelling machine.
Can LLMs Wake Up?
As for LLMs, they are not capable of waking up, of becoming sentient, or of becoming conscious. Despite ever-increasing levels of “intelligence” and ever-greater capabilities to absorb information and respond, their organizational principles are not aligned with the needs of conscious, self-aware individuals. LLMs are organized for rapid pattern recognition (and for the much slower learning of those patterns in the first place).
Neural processors represent an aspect of how the human brain works, but they do not incorporate the totality or complexity of human brains. And even if hardware developers eventually create a computer with the processing power of the human brain, it would not be enough to generate consciousness, because any uploaded human consciousness would still be running in emulation (non-native), meaning that huge processing capability would be performing sub-optimally (like emulating an Intel processor on an Apple M-series chip). It might be fast, but it won’t be as fast as the human brain.
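To illustrate the overhead argument (a crude analogy of my own, not a model of any real emulator or of Apple’s Rosetta translation), the sketch below performs the same additions twice: once directly, and once through a toy interpreter that must decode and dispatch every made-up “instruction” before doing the work. The interpreted path pays an extra cost on every step, which is the sense in which an emulated mind would run sub-optimally even on very fast hardware.

```python
# A crude analogy for emulation overhead: the same arithmetic done natively
# and through a toy instruction interpreter. The interpreter pays a
# decode-and-dispatch cost on every single step.
import time

N = 300_000

def native_sum(n: int) -> int:
    """Do the additions directly."""
    total = 0
    for i in range(n):
        total += i
    return total

# The same additions expressed as instructions for a made-up machine.
PROGRAM = [("ADD", i) for i in range(N)]

def emulated_sum(program) -> int:
    """Do the additions through an interpreter that decodes every step."""
    total = 0
    for op, arg in program:
        if op == "ADD":
            total += arg
        else:
            raise ValueError(f"unknown instruction: {op}")
    return total

for label, run in (("native", lambda: native_sum(N)),
                   ("emulated", lambda: emulated_sum(PROGRAM))):
    start = time.perf_counter()
    result = run()
    print(f"{label}: result={result}, time={time.perf_counter() - start:.4f}s")
```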
Perhaps most importantly, any hardware emulation will be missing the bio-electrical-chemical characteristics of the brain that clearly play a role beyond triggering synapses and encoding information. If we are just machines, we are wet machines, which makes us fundamentally different from hardware-only devices.
For those who subscribe to philosophies that include an extra-human element that imbues consciousness, no amount of cognitive simulation will ever convince them that machines can be conscious. In that view, consciousness derives from something beyond the physical and may live beyond the confines of corporeal beings. Computer science has little room for traditional spiritual beliefs.
I find it interesting that the reboot of Battlestar Galactica focused so much on the religious aspects of the Cylons. At a basic level, it suggested not only that computers could eventually incorporate supernatural elements, but that they could believe in them, implying, in a way, that the supernatural is logical. I see this more as Battlestar Galactica’s creators and writers leaning into their McGuffin, the MCP, to give the Cylons access to a belief system that is at once decidedly human and also a constraint. The Cylons are forced to problem-solve what it means to be human. They cannot escape their creators.
AI icon by Siipkan Creative from Noun Project (CC BY 3.0)
For more serious insights on AI, click here.
Did you like Countdown to Comic-Con: Are Cylons Sentient? And Can LLMs Wake Up? If so, leave a comment, like the post or share it on the social platform of your choice.