AI and Ethics: Do we need a new ethics to deal with autonomous technology?
The Open Letter to the UN (AUTONOMOUS WEAPONS: AN OPEN LETTER FROM AI & ROBOTICS RESEARCHERS) suggested we need a new ethics framework to deal with AI. Perhaps we should understand how to apply the frameworks that already exist, and how all moral frameworks tend to fail in the face of power and threat.
Revisiting The Golden Rule
One of my favorite religious “jokes” goes something like this: A young woman walks up to a Rabbi, a Buddhist Monk, and a Muslim Imam and asks each of them, “What is the core teaching of your religion?” The Rabbi answers, “That which is hateful to you, do not do to your fellow. That is the whole Torah; the rest is the explanation; go and learn it.” The Monk answers, “Treat people the way you’d like to be treated.” The Imam says, “Whoever wishes to be delivered from the fire and to enter Paradise should treat people as he wishes to be treated.”
The young woman replies, “But those are all the same thing.”
The Buddhist Monk turns to her and says, “After a few thousand years you learn something.”
The basic ideas required to guard against the misuse of technology already exist in most religions. Technologists should adopt the ubiquitous “golden rule” as a first-order premise. They should always ask not only whether we can do something, but whether we should, and under what constraints. How might our invention harm others, and how can we avoid this harm?
As Rabbi Aaron Meyer of Temple De Hirsch Sinai commented when discussing an earlier draft of this post: “the golden rule is the tip of the shared morality iceberg.”
Invention and inherited morality
When inventors create new things, they initially look only to the good side of the technology. Huo yao, the Chinese “fire potion” we know as gunpowder, was a byproduct of a search for a life-extending elixir. Within two hundred years, the Chinese military was using it for fire arrows. Despite the continued influence of Buddhism in China, the military of the Song Dynasty used the new technology without adhering to the precept: “Treat people the way you’d like to be treated.”
In the modern era, the United States created the atomic bomb. Now many countries own atomic weapons caches. At their core, all nations with atomic weapons claim a moral code that either demands not using the weapons except in the most dire of circumstances or affirms their right, within their own moral framework, to use any weapon any way they wish. Regardless of the reasoning, all nations see themselves as moral, and they justify permission to use nuclear weapons through some path of moral and political logic.
The most recent technology to evolve into military use involves pattern recognition, artificial reasoning, machine learning, big data, and, when incorporated into robots, the ability to act in the physical world. We can now create weapon systems capable of carrying out human-assigned goals without further human intervention once deployed.
That phrase, without further human intervention, proves important because humans set all high-level goals in these systems. The goal is a human invention, even if the action to achieve it executes from native learning, awareness, and rules within the autonomous weapon. Autonomous weapons do not set their own top-level goals. Autonomous weapons do not turn themselves on, nor do they act without human guidance.
In many action films, when a military unit finds its back against a wall, someone in authority will yell, “Defend this spot at all costs.” A group of human soldiers brings to that order their own moral judgments, along with rules of engagement that will likely force them to hesitate before carrying the order to its ultimate conclusion. Captured Nazi soldiers at the end of World War II discovered that “just following orders” was not a viable defense in the face of war-time atrocities.
The goal to “defend this spot at all costs,” inserted into an AI, is what people like Elon Musk and Bill Gates worry about. An AI ordered to defend something at all costs may not include enough context to make what humans would define as a moral choice. Taken literally, the AI could decide to pick up and throw soldiers at oncoming combatants. An AI could decide that fellow troops were getting in the way and kill them to provide more degrees of freedom for firing options. An AI could decide the order included authorization to deploy a tactical nuclear device to head off calculated future threats a few miles out.
AI could include programming that forces compliance with the rules of engagement. That AI would likely follow those rules with more consistency than most humans. The rules of engagement, however, may make the machine appear less effective. If so, some future military leader could decide they want purity of action from their autonomous weapons. Once deployed, military leaders may not want their AIs to over-think the situation, but to simply execute to win.
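To make that idea concrete, here is a minimal, purely hypothetical sketch of what rules of engagement encoded as a software gate might look like. Everything in it (the Target class, the ENGAGEMENT_RULES list, the may_engage function) is an invented name for illustration; it does not describe any real weapons system.

```python
# Hypothetical sketch only: a toy rules-of-engagement gate, not any real
# weapons-control system. Target, ENGAGEMENT_RULES, and may_engage are
# invented names for illustration.
from dataclasses import dataclass


@dataclass
class Target:
    is_combatant: bool
    near_civilians: bool
    inside_authorized_zone: bool


# Each rule returns True when engaging this target would comply with it.
ENGAGEMENT_RULES = [
    lambda t: t.is_combatant,            # engage combatants only
    lambda t: not t.near_civilians,      # avoid collateral harm
    lambda t: t.inside_authorized_zone,  # stay within the authorized area
]


def may_engage(target: Target) -> bool:
    """Permit engagement only if every encoded rule of engagement holds."""
    return all(rule(target) for rule in ENGAGEMENT_RULES)


# The constraint works, but it is also trivially removable: delete one rule
# and the gate still runs, only less constrained -- the "purity of action"
# temptation described above.
print(may_engage(Target(is_combatant=True, near_civilians=True,
                        inside_authorized_zone=True)))  # prints False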
There is also the special case of those who are religious but, when confronted by actions taken in the name of another religion or ideology that does not easily mesh with their own, justify moving their moral line in the sand because they believe themselves more right than the other. A religious political leader might say: “My religion or personal God is right, and what I’m doing defends His rightness.” They put in play the caveats necessary to rationalize their moral ambiguities. “If I’m crossing a moral line, I’m doing it in the name of my God, and therefore what I’m doing is okay.”
Perception and belief
AIs not coded with ethics will not infer ethics from their data. The data itself may actually cause an algorithm to make amoral inferences. As Dartmouth College’s Hany Farid pointed out on NPR’s Science Friday (Do Predictive Algorithms Have A Place In Public Policy?), algorithms act on the biases in data because we don’t recognize the bias in the data. To researchers, “color-blind” criminal data should be the fairest data set. They found, however, that removing race as an input still reinforces the bias: older white people are predicted not to be criminals, while most young black men are.
Number of prior convictions is correlated to race. There are asymmetries in our society with the frequency of arrest, prosecution, and conviction based on race. So number of convictions is a proxy for race. So even though it looks like the algorithm should be race-blind, they are not necessarily race-blind. —Hany Farid
Machines interpret data using the logic provided—and their results reflect data that may prove far from impartial. When developers simplify data to create a more tractable problem space, they eliminate context and ambiguity humans might call upon in edge cases to make a moral judgment. When presented with a bigger problem, humans tend to want more data, not less.
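A small synthetic sketch (the numbers are invented, not real criminal-justice data) makes Farid’s point concrete: a rule that never sees race can still reproduce racial disparities when a “race-blind” feature such as prior convictions is itself shaped by unequal arrest and conviction rates.

```python
# Synthetic illustration only: the numbers are invented, not real
# criminal-justice data. It shows how a "race-blind" feature (prior
# convictions) can act as a proxy when the data behind it reflects
# unequal arrest and conviction rates.
import random

random.seed(0)


def make_person(group: str) -> dict:
    # Assumed asymmetry: group B is convicted more often for the same behavior.
    conviction_rate = 0.2 if group == "A" else 0.5
    priors = sum(random.random() < conviction_rate for _ in range(4))
    return {"group": group, "prior_convictions": priors}


people = [make_person("A") for _ in range(1000)] + \
         [make_person("B") for _ in range(1000)]

# A "race-blind" risk rule: flag anyone with 2+ prior convictions as high risk.
for person in people:
    person["high_risk"] = person["prior_convictions"] >= 2

for group in ("A", "B"):
    members = [p for p in people if p["group"] == group]
    flagged = sum(p["high_risk"] for p in members)
    print(f"Group {group}: {flagged / len(members):.0%} flagged high risk")

# Group B is flagged far more often even though the rule never sees "group".
```

The same dynamic holds for a trained model: any input correlated with a protected attribute can quietly reintroduce the bias the modeler tried to remove.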
In black-box situations, where the machine’s logic proves impenetrable to human understanding, there is no moral behavior waiting to be unmasked. The training sets aimed at solving a specific problem only receive data associated with that problem. There are currently no meta-processes that permit a machine learning algorithm applied, for instance, to facial recognition to call upon an ethical framework to pass judgment on its findings.
Why the need to invent a new framework for AI and ethics?
The need to suggest a new ethical and moral framework reflects the entrepreneurial impulse to fix problems with new solutions. If the military needs new technology, they reason, and that technology should behave ethically, then we need to legislate compliance and build new moral frameworks into the code.
Technology leaders are not calling for constraints on AI and robotics research, but on the application of AI to weapons.
The stories of gunpowder and nuclear weapons suggest that even when a moral framework exists, human behavior will rationalize around it for some other purpose or goal.
Creative inventors often reflect a general ambivalence toward religion even if they adhere to the moral compasses that pervade society. Their ambivalence leads them to distrust the ability of others to follow the precepts of ancient moral principles.
And they aren’t wrong. Although almost all religions include a “golden rule” that offers a concise summary of the religion’s moral guidance, many people in power don’t follow religious doctrine even if they subscribe to one. They do not perceive that the “golden rule” applies to them or their mission, even if they apply it in interpersonal situations. People who adopt an adaptive morality will have few qualms about abandoning the rule when it does not suit their goals. They are also likely to abandon any new framework, even if they sign it, should they feel threatened or perceive a political advantage in doing so.
While the Open Letter to the UN (AUTONOMOUS WEAPONS: AN OPEN LETTER FROM AI & ROBOTICS RESEARCHERS) may well increase dialog about the threats of military grade AI deployed in autonomous systems, it does not suggest that makers of these systems simply adopt existing moral precepts, probably because of ambivalence toward religion and a general distrust of it. To weapons makers, the issue is always one of reflecting buyer need. If the manufacturer’s code includes too many constraints, their machines may appear crippled and therefore unsellable to some segment of the weapons-buying market.
Despite the plethora of moral guidance from traditional, progressive and conservative religions, humanity shows little willingness to adopt a more holistic and inclusive worldview that would eliminate the need for war and therefore the need for military grade AI. While scholars may reach consensus on basic moral precepts that describe equivalences and moral parity between the religions, religion itself remains a point of contention for many. This reinforces ambivalence for many who already question the value of religion and its codes of conduct—further fueling the entrepreneurial drive to “fix the problem.”
The truth is, creating a new framework will not increase compliance, and it will never prove as pervasive or as persuasive as existing “golden rules” and the moral and ethical foundations of those rules.
Moral ambiguity and national security
That factions in the world continue to escalate the arms race through successive generations of technologies is perhaps the best proof that no appeal to a higher power, religious or political, will solve the problem of arms proliferation or the use of arms in a way contrary to global well-being. In various places over the last several years we have seen the use of chemical weapons against civilians despite global accords against them.
That Germany, France, the United Kingdom, the United States, and Austria-Hungary all used lethal gas during World War I demonstrates the ambiguity of the moral line when a threatened country believes in its rightness. It acts for the good of the world, so its survival outweighs the moral cost of eliminating its enemy. As lax adherence to the Geneva Protocol and other treaties by various political factions throughout the world points out, trust in the “golden rule,” even a “golden rule” with specifics and punishments written by modern leaders for modern leaders, does not constrain all bad actors.
Even one government, oligarch, or despot using a banned weapon and getting away with it invalidates the effectiveness of a treaty at its core. Those who adhere to the “golden rule” as a principle would not use such weapons even if a treaty did not exist.
Moreover, a treaty designed to implore those with ambiguous moral frameworks to adopt a “golden rule” of sorts in a specific instance, with specific personal or political consequences should they fail to comply, often does little but spur their desire to circumvent the treaty’s provisions. Without the punishment, the threat means little; even with the punishments, those in power often continue to do harm despite the consequences to their citizens. Enforcement through punishment contradicts the spirit of the “golden rule,” which works best when voluntarily embraced and followed.
Accepting the rules
Creating a new set of ethical rules for AI will do little but make those who set rule-making as their goal feel better. To create a framework for ethically guided military AI, we need not invent anything new, save a knowledge representation and execution framework capable of reflecting the thousands of years of moral insight already available. We can make machines that will follow the “golden rule” as part of their programming. It takes only one rogue actor, however, to invalidate the rules and spawn a moral cold war.
The technology dignitaries who seek to create and legislate a new morality through their Open Letter should spend their time learning how to apply existing moral frameworks to technology. Perhaps time spent understanding the pervasiveness of malware and curbing its amoral roots, or understanding the social impact of big data-driven advertising business models will prove a more practical near-term application of moral social engineering. The results of that work can help inform how to constrain any unscrupulous use of AI in the future. Such work may only confirm that any “golden rule,” ancient or modern, acts only as inspiration for ethical behavior, not code for its enforcement.