Cyberethics in the 21st Century: The Reign of the Machines
Our relationships with our inventions undermine our values
In the fourth century BCE, Plato noted that Socrates was fond of invoking the Delphic maxim “Know thyself.”
The statement in effect signaled that what followed would be, in its very essence, both personal and human. That emphasis on self-awareness became a foundation for the identity of the individual, and it set the tone for the development of relationships between individuals, and between the individual and society at large, in much of Western civilization for the next 2,000 years.
The advent of the Space Age and the technological revolution it represented fostered a shift in the fundamentals of social interaction, reflected in the movement of the ethical system at the base of Western culture from an individual-based worldview toward a global perspective. Perhaps the clearest example was the exclamation of Apollo 8’s Bill Anders in 1968 as the craft emerged from the far side of the moon and the crew became the first humans to see an Earthrise: “Oh, my God! Look at that picture over there! There’s the Earth coming up. Wow, is that pretty!” Mankind’s view of its home and its place in the universe was instantly altered, all thanks to technology. With this new perspective, the ethical underpinnings of society acquired an expanded scale against which to be measured … one infinite in extent.
Prior to the 18th century, most social, cultural, and commercial interaction could be described as stemming from a man-man interface. In other words, people related to each other face to face, whether shopping, voting, or even fighting in war. Ethical and value systems reflected this relationship as society developed concepts of right and wrong, good and evil, and refined them against the common experience of mankind. Technology was used to implement these values and was unquestionably under the control of human beings. Whether that technology was a sword or a printing press, the human element had to be present for it to function, and ethical behavior remained part of the system because a human being decided whether to set the technology in motion.
During the Industrial Revolution of the 18th and 19th centuries, society’s socioeconomic structure began transitioning to what can be termed a man-machine interface, in which technology became an extension of man in his activities. Whether in textile mills or steel manufacture, machines were developed to perform tasks with an eye to efficiency and, of course, profitability. The human operator became an integral part of the process regardless of whether he or she had any say in initiating it.
To that extent, the man-machine interface diminished the importance of the individual. Into the 20th century, the factory system and the automobile were extensions of humans, enabling them to perform tasks and to extend themselves well beyond what had hitherto been possible. This process accelerated dramatically with the Cold War and its demands for technological superiority over potential threats to global survival. At the same time, the human cost of a society that treated efficiency as a primary value became a subject of ethical debate.
It is in this context that the Space Age wrought its magic on society and created the machine-machine interface, in which the requirement for human action steadily shrank as machines became able to communicate with each other and to accomplish more tasks, and more complicated ones at that. With humanity’s role diminished, the ethical and value systems that had hitherto defined human society likewise appeared to diminish. The humanity that had driven the functioning of our system has been steadily replaced by machine logic, and the system we know as Western civilization may have been deprived of one of the bases of its validity and may well need redefinition.
Both chronologically and philosophically, the issue for society now is one of cyberethics: the tension between the ethical and legal systems developed to serve humanity from ancient times to the present and the ability of computer-driven technology to operate outside those conventions with almost no limits.
* * *
As time and technology progressed beyond Apollo and the space shuttle, toward the close of the 20th century there was another shift, almost imperceptible yet undeniable, to a machine-man interface, in which machines are programmed to perform increasingly complex tasks formerly done by human beings, and to perform them independently. As humans depend more and more upon (and defer to) the ability of their creations to relieve them of responsibility for decisions, man has tended to become the extension of the machine he created, rather than vice versa.
A common example of this problem is the use of an ATM to obtain cash from a bank account. People customarily depend upon the machine to supply the money on demand … but what if it does not? What if the machine fails to dispense money yet prints out a receipt that says it did? What can the human do if the bank refuses to refund the money?
In a lawsuit on this issue, the bank supplied a computer printout showing that the ATM had dispensed the money. The problem was how to test the truth of the printout when it was the machine testifying and not a real person. The case was dismissed, because accepting the machine’s word as unimpeachable would have been tantamount to elevating the machine above human beings, contrary to the Western tradition.

The Space Shuttle Columbia disaster provides a catastrophic example of this human deference to the wisdom of the machine in the performance of its assigned tasks. The crew was unable to counteract the computer-driven response to the overheating of one side of the spacecraft, which inexorably altered the craft’s attitude until it was no longer aerodynamic. At the speed of reentry, the structural systems simply failed, and the crew was lost.
At its simplest, the issue is the propriety of human reliance on the wisdom of the machine. From an ethical perspective, unquestioning reliance on a human creation, which by definition is flawed at some level, probably is not a terribly good idea. Put another way, “Is your car smarter than you are?” A driver tempted to answer, “Of course not!” should first consider that some cars will refuse to start if they do not recognize the driver’s fingerprint or voice, or if the driver has had too much to drink. Similarly, if the car indicates to the driver that it is too close to another car (which, by the way, is also warning its driver that the two are too close), does that relieve the driver of the responsibility to look around at the traffic? At its root, however, is the more important question: Does the machine feel, know, or even care? Can it?
Perhaps nowhere in our society does this collision between the ethics that have defined mankind’s behavior for thousands of years and the inability of the machine to “feel” take place more frequently than in the practice of medicine. For example, some microsurgeries that are routinely performed today, such as certain neurological repairs in the reattachment of severed limbs, did not exist 10 years ago. The societal value of such procedures, and of the technology that makes them possible, is beyond question in the cyberethical context.
At the same time, technology has allowed the artificial prolongation of physical existence beyond what would otherwise have been possible, regardless of brain function or other “quality of life” aspects of humanity. In the legal context of decisions to terminate a patient’s technological support, the so-called “right to die,” cyberethics bears on prolongation simply because technology allows it. Should physical existence be extended indefinitely contrary to the patient’s wishes, as in the case of Terri Schiavo some years ago? To that extent, the question remains whether merely prolonging physical existence is an inherently valid societal value to be supported technologically.
* * *
At one time in Dallas County, Texas, the Medical-Legal Emergency Assistance Project existed to deal with a patient taken to an emergency room in a life-or-death situation yet unable to communicate consent to medical treatment. Under the project, the treating physician could obtain from a judge a court order allowing treatment to proceed. Of course, denial of the order was also a possibility, in which case the patient would likely die. The burden of the decision rested on the judge involved, and the decision had to be made on very limited yet real-time and urgent information, much of which came from machines attached to the patient.
Put another way, the process depended solely upon human decision-making, regardless of what technology might ultimately be employed. Ironically, the decision rested in the hands of a person who was not a medically trained specialist but one more likely to be a product of the liberal arts — the judge. It was part of my duties as a judge to decide whether to grant or deny the order. The ethical issue arises both from the source of the information and from the fact that, while machines can extend human existence under certain circumstances, the decision itself is characteristically based upon intangible factors utterly external to the machine. Those factors could include the judge’s education, whether in the liberal arts or engineering, as well as the judge’s personal background. For example, if the situation involved a complicated childbirth, the decision could be conditioned by religious or philosophical views of which the patient had little knowledge.
On one such occasion I received a call involving a 35-year-old man who had arrived at the emergency room with liver failure, failing kidneys, and AIDS. He was going in and out of consciousness at the time of the call, and the physicians wanted an order permitting them to apply both heroic surgical measures and an open-ended application of machines to sustain him. They admitted that he would likely soon lapse into a permanent coma. I told them to call me back when he went into a coma that, according to at least one psychiatrist and one neurologist, was irreversible, and I went to supper with my family. While we were at dinner, the cashier came to our table with notice of a phone call. The physicians reported that the patient had gone into an irreversible coma; even if he came out of it, he would be a vegetable and would not live long because of his other problems. I told the physicians to make him comfortable and to let him go.
While some may disagree with the decision, at least it can be said that it retained its humanity, however flawed, and did not abdicate to a mechanical thought process or, worse, to a machine. On a personal level, consider that when a family member is comatose, on life support, and could remain so indefinitely, the decision as to when, or even whether, to terminate the mechanical aid is an ethical one based upon the value system of the decision-maker, not of a preprogrammed machine.
* * *
In modern medicine, human beings increasingly defer to robots that allow impaired people to perform normal human tasks, and life-support machines are the prime example. At present, these machines exist to provide information to the human decision-makers. That said, it is not beyond imagination that an algorithm could be developed whereby, should the machine detect a potentially terminal condition, it simply shuts off the support as a proactive, societally prevalidated decision.
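To make the hazard concrete, here is a minimal sketch of what such a prevalidated shutoff rule might look like. Everything in it is hypothetical and invented for illustration; no actual life-support device is claimed to work this way.

```python
# Hypothetical sketch of a "societally prevalidated" shutoff rule.
# Every name, field, and criterion here is invented for illustration.

from dataclasses import dataclass

@dataclass
class MonitorReading:
    condition_potentially_terminal: bool  # inferred by the machine alone
    prevalidated_criteria_met: bool       # policy fixed long before this patient

def support_should_continue(reading: MonitorReading) -> bool:
    """Return False to shut off support; no human appears anywhere in the rule."""
    if reading.condition_potentially_terminal and reading.prevalidated_criteria_met:
        return False  # the machine acts proactively, exactly as imagined above
    return True
```

The point of the sketch is what is absent: no physician, no family member, and no judge appears anywhere in the control flow.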
The prevalidation decision could be the product of some sort of cost-benefit analysis that, in effect, says, “Society cannot afford this person.” This hints that the definition of the machine-man interface may be in the process of being expanded by what is sometimes referred to as transhumanism. The word itself implies that human beings as unaugmented organisms may have entered a period of obsolescence. In its most basic manifestations, transhumanism is expressed by the use of scientific devices, whether chemical or mechanical, to extend to extraordinary extremes what would otherwise be normal human capabilities.
Rather than abdicate trustingly to machines in this process, our society needs to consider Isaac Asimov’s First Law of Robotics: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” This formulation clearly implies that there is a machine-man interface between the robot and the human being in which the human being is viewed as the primary component of the system.
In 2009, professors Robin Murphy and David D. Woods, both experts in human-robot systems, formulated the Three Laws of Responsible Robotics. Their first law states, “A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.” While this at least injects the concept of ethics into the decision-making process, there remains the issue of what the machine might be capable of doing independently, for example, making a life-or-death decision in the absence of close supervision by a human being. This issue of control over the device, and the extent of that control by the human being, remains at the heart of the debate about the transhuman phase of the machine-man interface.
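That residual gap can be made visible in code. The following is a minimal sketch, with audit fields invented for illustration, of the Murphy-Woods first law rendered as a gate on deployment rather than on the robot’s own conduct:

```python
# A sketch of Murphy and Woods' first law as a precondition on deployment.
# The audit fields are hypothetical, invented only to illustrate the idea.

from dataclasses import dataclass

@dataclass
class WorkSystemAudit:
    meets_legal_safety_standards: bool
    meets_professional_ethics_standards: bool

def may_deploy(audit: WorkSystemAudit) -> bool:
    """A human may not deploy the robot unless the work system passes the audit."""
    return (audit.meets_legal_safety_standards
            and audit.meets_professional_ethics_standards)

# Note what the gate leaves open: once deployed, nothing here constrains
# the robot from making a life-or-death decision without human supervision.
```

The law constrains the human act of deployment; it is silent about the running machine.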
Technologies to improve human physiology and intellect are unfolding rapidly and with increasing sophistication. One example of the impact of transhumanism is the experimental use of emerging psychotropic drugs to relieve the symptoms of PTSD in military personnel. While this is not technology in the traditional sense, psychotropic drugs are a good example of cyberethics in the applied world. These drugs are keyed to the portions of the brain that are affected, thus acting in the transhuman context. The result is that the intellectually stunting effect of PTSD is artificially diminished, and the humanity of the patient is “liberated.”
Kevin Warwick, a British engineer and cybernetics expert, earned the nickname Captain Cyborg for installing in his arm a “telepathy chip” that allowed his nervous system to communicate wirelessly with a robotic hand, which moved as his brain dictated. Warwick is known for his pioneering work on direct interfaces between computer systems and the human nervous system. Consider the life-altering benefits of restoring or improving functionality to an extremity by employing this technology. Such advances undoubtedly have the potential to assist humans and extend lives. At some point, perhaps soon, the science will exist to support this level of transhumanism.
* * *
It takes little imagination to conceive of a cyborg designed to encase the body of a Stephen Hawking, operated wirelessly and indefinitely by his brain and able to “telepathically” operate other machines as well. Imagine this technology applied to space travel, reducing or eliminating concerns about astronaut survival. Indeed, it might already be possible to design a spacecraft that would not only protect the pilot but operate as an extension of the astronaut’s mind, to the extent that man and machine enjoy a certain symbiosis during their voyages. Training for such missions would be quite different from what has gone before, with role distinctions in the man-machine-man interface severely blurred.
The machine might be able to perform the tasks assigned to it by the human brain, but what might the machine do if, for whatever reason, the brain were to cease directing it? Does society’s readiness to let such machine logic achieve once humanly impossible objectives pose a danger to our value system? This goes beyond the benefits that transhumanism could bestow and reduces human individuality to irrelevance. If the machine has no algorithm that draws that ethical line, then it should not be making the ultimate decision; allowing it to do so is as irresponsible as our creating it in the first place. To that extent, as Anaïs Nin pointed out, “We do not see things as they are, we see things as we are.”
It is clear today that, in the words of the old television series The Six Million Dollar Man, we can physically rebuild someone “better, stronger, faster” than before. But should we? At what point does the machine no longer serve as an extension of the person, and the person become an extension of the machine? Does the ability of the machine to extend the limits of human physical existence and capabilities affect a person’s humanity in such a way as to violate the First Law, whether Asimov’s or that of Responsible Robotics? Until the machine that man is capable of creating to extend his or her individuality can “know itself,” or at least “feel,” whether it should be allowed to operate independently will remain at the center of the cyberethical problem. Just as Clemenceau said that war is too important to be left to the generals, how the machine-man interface evolves from here might be too important to be left merely to those who can build the Six Million Dollar Man.
The concept of cyberethics was originally articulated by the author in the paper “The Terminator Missed a Chip!: Cyberethics,” presented at the International Astronautical Congress of 1995 in Oslo, Norway, and originally published by the American Institute of Aeronautics and Astronautics Inc.