As I was writing yesterday’s article I began to ask myself: “Self, now that the marriage equality debate is essentially over, what will be the next civil rights movement?” Realistically, it will probably be transgender rights, along with the continuation of the African-American and women’s rights movements. But what about beyond that? Probably because I love science fiction so much, I started to think about the civil rights movements of 100 or 200 years into the future. Looking that far ahead, it seems obvious that the next civil rights movements will be about clones and artificial intelligence.
Human cloning seems like something out of science fiction, but it is not too far off. Currently there are two fields of human cloning: therapeutic and reproductive. Therapeutic cloning involves creating new tissues and organs that can be used in transplants or other medical procedures. This is an active area of research but has not yet become mainstream medical practice. Reproductive cloning involves creating an entirely new human being from the genetic material of a host or parent organism.
Reproductive cloning has been tested on animals, with varying results. Unfortunately, modern animal cloning has yielded viable embryos in only about 3% of attempts, and even those successes were plagued with genetic abnormalities that significantly diminished the animals’ quality of life. But science is always improving.
The idea of human cloning is realistic enough that 13 states have passed laws prohibiting both therapeutic and reproductive cloning, which already seems like a setup for a new debate about the legality of cloning. If human cloning does take off and those 13 states keep their laws, how will that affect new human clones? Will they be unable to live and work in those states? Will prospective parents who want human clones have to move out of those states to build their families?
Even if those details get ironed out, we need to consider the feelings and thoughts of the clones themselves. First, we need to consider why humans would even start cloning each other. An obvious advantage would be for parents who are infertile or otherwise unwilling to go through a natural pregnancy. They would be able to have and raise a new human baby as their own without natural childbirth. With the debates raging over same-sex adoption and abortion, we can easily see how this would turn into a huge issue.
It also seems likely that we would use clones for jobs that naturally born humans do not want to do, such as manual labor and military service. But wait, you might say, wouldn’t those jobs just be done by artificial intelligences? Yes and no. I believe that in the future the human race will develop an AI paranoia, and we will begin to restrict what an AI can actually do. Would we really want independent AIs handling our weaponry? Probably not, if we do not trust them. It would be much safer and more comfortable to use humans, and even better for naturally born humans to have cloned humans doing the dirty jobs.
But this is where we get into trouble. If a clone is purpose-grown for a certain task, is it his or her destiny to always do that task? If clones are human, then they have the same thoughts and desires that we have; the only difference is how they were born. So what do you say to a cloned soldier who no longer wants to fight? Do you tell him that he has to fight because he was born for it? Or do you let clones live their lives as they want, which essentially creates a whole new human race? These could be big concerns in the future, but not as big as the concerns over artificial intelligence.
While human cloning seems to be pretty far off, artificial intelligence is already here. Computers are getting smarter and smarter and are now starting to teach themselves. Not only have we seen the rise of robots such as self-driving cars in the past decade, we have also seen the rise of software that teaches itself new techniques: computers that have independently become expert stock traders, and even computers that compose classical music indistinguishable from human compositions. The age of robots is upon us.
At first this seems like a good deal, since artificial intelligences will do the jobs that you hate, like filing paperwork or doing the dishes. But as AI technology improves, it is becoming obvious that we will eventually have to face the possibility of sentient computers and decide how they fit into our society. If you think a debate about whether clones should be given full rights would be sticky, imagine the same debate about an AI. Not only do they not look like us, they do not think like us either. In fact, we cannot really prove whether they are sentient at all.
Many people have heard of the Turing test, which purports to be a test of the sentience of an AI. However, over the past few years, scientists have begun to recognize the limitations of the Turing test. When Alan Turing proposed the test back in 1950, he was aware of limits to what it could actually do. Mainly, he recognized that it was impossible to tell whether a machine was generating adaptive answers or merely reciting correct ones. You can program a machine to give correct answers, but there is no way to tell whether the machine is actually developing responses to the questions or just relying on preset programming. Theoretically, you could program a machine whose correct answers simulate intelligence without the machine actually being intelligent. But even if that were the case, who is to say that a simulated intelligence is not an intelligence in its own right? Unfortunately, the Turing test also does not allow for gradients of intelligence.
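The scripted-versus-adaptive problem is easy to see with a toy sketch (these “bots” are hypothetical illustrations I made up, not real AI systems): a judge asking only rehearsed questions cannot tell a lookup table from a program that actually computes its answers.

```python
# Two toy "chatbots": one recites preset answers, one computes them.
CANNED_ANSWERS = {
    "what is 2 + 2?": "4",
    "what color is the sky?": "blue",
}

def scripted_bot(question):
    """Looks up a preprogrammed answer; no understanding involved."""
    return CANNED_ANSWERS.get(question.lower(), "I don't know.")

def adaptive_bot(question):
    """Actually works out the answer, for simple addition questions."""
    q = question.lower().rstrip("?")
    if q.startswith("what is") and "+" in q:
        left, right = q.replace("what is", "").split("+")
        return str(int(left) + int(right))
    return "I don't know."

# On rehearsed questions, the two are indistinguishable:
print(scripted_bot("What is 2 + 2?"))   # 4
print(adaptive_bot("What is 2 + 2?"))   # 4

# Only a novel question exposes the difference:
print(scripted_bot("What is 17 + 25?"))  # I don't know.
print(adaptive_bot("What is 17 + 25?"))  # 42
```

A real Turing test judge, of course, cannot enumerate every novel question in advance, which is exactly the loophole Turing worried about.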
As AI becomes increasingly smart, the fact that we do not have a good test of machine sentience poses a huge problem. Not only are we unable to prove real intelligence using human yardsticks; we may also be facing the fact that an AI simply does not think like a human being. We seem to hold a chauvinistic view of machine intelligence, one that says true intelligence comes only in the human variety, while machines may become just as intelligent as us using entirely different techniques.
So at what point do we declare that a computer is sentient (or human-like enough) to grant it the rights of a human being? Do the protections of the Constitution apply to machine intelligence just as much as they apply to human intelligence? That may seem like a stupid question, but it is one that we might have to face.
To use the clone example from above: if a self-teaching computer (which already exists) no longer wants to be a stock trader and instead wants to compose classical music, do we stop it? As computers acquire more human-like qualities, these are questions we will have to face. Can a computer feel love? Can two computers marry each other?
This is a debate that will completely change our perspective on what life is, and I can see strong reactions against treating artificial intelligences as humans. But this will be something that we will have to face, and it might be sooner than you expect.