In his classic dystopian novel “1984,” George Orwell describes a world polluted with trashy newspapers, cheap novels, and pornographic films. In this world even the music is “composed entirely by mechanical means on a special kind of kaleidoscope known as a versificator.”
Our own world is swimming in a similar moral cesspool, and Orwell didn’t have to use much of his prescience to predict such a place. But the detail about machine-made music is astonishing. Our own 1984 was indeed awash in synthesized songs and mechanical devices, and computers have been increasingly tapped to make tunes. In most cases these are nothing more than tools, often used by ingenious artists to great effect.
But recently Orwell’s forecast hit even closer to home. In October 2016 several artists collaborated with IBM’s Watson computer to actually write a song. The computer suddenly moved from tool to partner. The artificial intelligence (AI) system analyzed years of written data, song lyrics, and chord progressions to contribute to the final product. The song itself (“Not Easy”) is an enjoyable bit of pop craftsmanship, but where might this lead us?
From Making Machines to Respecting Their Dignity
Beloved British writer G.K. Chesterton noted that it is a decadent culture that employs professionals to fight, dance, and rule for it. So what are we to say about a society that enlists computers to help with something as personal as writing our songs? How will such a society view these computers? More importantly, how will we view ourselves in relation to the machines? We can look to an entry-level college textbook for an inkling.
In her book “Think: Critical Thinking and Logic Skills for Everyday Life,” Judith A. Boss defines anthropocentrism as “a belief that humans are the central or most significant entities of the universe [that] can blind people to the capabilities of other animals.” The text then claims the belief that artificial intelligence will never match human intelligence is birthed from the same blinding anthropocentrism. Boss expounds, “when we argue that an intelligent computer or robot/android is not conscious or is not capable of intentions or enjoyment because we cannot prove it, we are committing the fallacy of ignorance.”
Scientist Herbert A. Simon (considered a father of AI) carries this view to its logical conclusion. Not only does he think machines already have emotions; he also grants them human worth. It’s said that “he maintains that the belief that intelligent computers are not thinking and conscious is based on a prejudice against AI, just as people at one time were genuinely convinced that women and individuals of African descent were not really capable of rational thought.”
Yes. He said prejudice. The use of that word must be noted because it places a moral stigma, on the level of racism and sexism, on anyone who believes humans have more inherent worth than AI computers. The statement is also patently insensitive to women and minorities who have actually experienced real discrimination. Yet this is not a fringe view held by a Twitter troll or a blogger in a basement; it comes from a Nobel Prize winner who already believes computers have emotions.
How to Prioritize a Computer Over a Human Baby
Since consciousness or self-awareness is key for both Boss and Simon in determining the worth of computers, how might such a worldview lead us to view humans who are not self-aware? In his controversial work “Practical Ethics,” noted utilitarian and Princeton University professor Peter Singer writes, “Human babies are not born self-aware, or capable of grasping that they exist over time. They are not persons…the life of a newborn is of less value than the life of a pig, a dog, or a chimpanzee.”
Knowing he believes such a thing, might he also place higher value on a machine that exhibits self-awareness than on an infant who does not? One struggles to see why not, especially considering he once said of humans incorporating AI into their lives, “If it does make better decisions and leads to a more peaceful world with less suffering, maybe that is better — even if it is less human.”
We are living in a world where college texts chide students for their anthropocentrism, Nobel Prize winners cry prejudice over AI discrimination, and a Princeton professor like Singer freely admits that he values a sentient pig more than he does a newborn human. Jesus commands us to care for “the least of these,” but swimming in a warped worldview can drown our efforts to obey. We must guard against this type of utilitarian thinking so artificially intelligent machines are not considered more valuable than those on the fringes of society: the infants, the infirm, the disabled, the elderly.
This Is Nothing New to Human Thought
It wouldn’t be the first time a utilitarian worldview harmed such vulnerable humans. The Greeks considered sick people inferior, and even Plato recommended in “The Republic” that society send the sick away. The Nazis viewed the physically and mentally handicapped as a societal burden, and as early as 1933 passed a law under which roughly 350,000 people deemed likely to produce “inferior” children were forcibly sterilized.
In the Far East, China’s one-child policy was based on the mindset that such a law would be of pragmatic good for the state. The birth control movement in 1920s America had roots in utilitarian theory as well. Writing in The New York Times, Planned Parenthood founder Margaret Sanger said birth control “means the release and cultivation of the better elements in our society, and the gradual suppression, elimination and eventual extirpation of defective stocks – those human weeds which threaten the blooming of the finest flowers of American civilization.”
In contrast to such stark utilitarianism stands the broad wisdom of the Judeo-Christian worldview undergirded by a belief in the worth of the individual as a unique creation made in God’s image. Anything less than such a creed will leave us floundering in the face of the coming artificial age.
We should celebrate scientific advances, perhaps even many uses of AI, but to be truly ethical we must never forget that computers aren’t human. We should guard against a day when a stale and stupefied mindset has so warped our culture that people who provide no practical use to society are deemed less important than our useful machines. Instinctively, we should know human beings have intrinsic worth. Any parent knows a child has immeasurable value before he or she can do anything.
Healthy societies throughout recorded history have respected their elders and cared for their young long before and long after they can “contribute.” Embedded deep in our race, some might say from the very beginning, is the memory that humans are inherently worth more than any machine, no matter what they have done, or haven’t done, for us lately.
Orwell’s “1984” speaks of one of those mechanically made melodies “haunting London.” One can imagine its automated unoriginality hanging in the air like a fog. Those familiar with the novel may also recall the one thing that made such a song bearable. It was the voice of a poor woman who sang it while laboring at her laundry. It was her voice that made it pleasing. Her voice that made it music. Her voice that made it human.