Any discussion about artificial intelligence is no longer about whether we will reach the singularity, but when. With robots and programs becoming more advanced every day, we are quickly facing a future of machines capable of thinking and reasoning on the same level as human beings.
Dire warnings come from multiple sources, including tech executives and famous scientists. These warnings of grave danger play into the human fear of difference, of things that think, feel and reason differently than us. Of course, we need to approach artificial intelligence carefully and with full consideration of where our path will lead us, but I wonder if the human view of AI reveals a deep prejudice in our species. Whenever we discuss the possibility of AI, we view the inevitable development of this technology with fear and suspicion. Instead of considering the advantages of AI, the first thing to be discussed is how we can stop them from hurting us. Nobody seems to want to discuss the possibility of benevolent AI.
This has occurred for a few reasons:
- AI falls into the uncanny valley. The uncanny valley effect occurs when a robot or CGI animation seems almost human but has traits and features that are identifiably non-human. The example you have probably seen is the CGI children in The Polar Express: they seem human, but their movements and facial expressions are just slightly off, which makes the movie so creepy and unenjoyable to watch. Robots fall into the same category. Even as robotic design improves, we will never be able to make a robot that is exactly human. There will always be something that seems strange about their appearance. Humans dislike things that fall into the uncanny valley, whether CGI children or clowns, and that inherent feeling of creepiness carries into discussions of AI.
- AIs think differently than us. This is one of the biggest disadvantages of the Turing test: a computer may be able to act and mimic human behavior, but we can never prove whether it is sentient. How do we even define sentience? If you think about it, I cannot objectively prove that you are thinking in the same way that I am, or that you are sentient in the same sense that I am sentient. Because I know that you are human, I can at least assume that you are thinking on the same level that I am. But when we get to the point of determining whether machines are sentient, that assumption breaks down. The processes that occur within a machine mind are inherently different than those that occur in the human mind. Deep down, I think we all realize this, and it has shaded our perception of machine intelligence. We do not like to consider minds that think differently than we do. We already have problems related to race, gender and sexuality, but those are conflicts among our own species, humans who deep down think and feel like we do. When we consider a machine, our prejudices become more intense.
Besides these two reasons for our prejudice against AI, the biggest reason is the media. For years we have been confronted with the idea of killer robots bent on enslaving humanity. Think of the most iconic artificial intelligences: HAL 9000, Skynet, the T-1000, the machines in The Matrix, the Borg, the Cylons, Roy Batty, Ava, ED-209, Ash… I could go on forever. Sure, there are good robots in movies (Lt. Cmdr. Data and The Iron Giant come to mind), but overall our perception of machine intelligence has been skewed by always casting them as the bad guys.
If we had to choose one franchise that displays the most realistic outlook on AI, it would actually be Star Wars. In the Star Wars movies, robots come in all shapes and sizes and have a variety of personalities. You have the good-natured C-3PO and R2-D2, whom we would not mind having around the house. There are medical droids, library droids and maintenance bots that would make our world better. On the other hand, Star Wars also has evil droid armies, sadistic torture robots and annoying mouse droids.
This is what the world of AI would probably look like. If we are dealing with intelligences on the same level as human intelligence, we should expect some diversity in their ranks. Just as humans come in different personality types and outlooks (not everybody is Hitler or Mother Teresa), we should expect the same of our machine counterparts. Now, I know I mentioned that we need to be aware that machines think differently than us, but I do not think it is a far stretch to imagine that their society would be no less diverse than humanity.
With AI right around the corner, we will need to face our prejudices eventually. Until then, we can try our best to appreciate the good that AI would bring to our world: eliminating dangerous jobs, streamlining our lives, and providing new outlooks on what it means to be alive. I cannot wait.