In a recent blog post, writer John Zakour touched upon the subject of Artificial Intelligence (a.k.a. AI) and what it might mean for the future of Mankind. This is a subject that I have thought about quite often, considering that I am a fairly well-read Science Fiction fan. Machine intelligence has already been achieved to a small extent, but at some point these "intelligent" machines will be networked together, the limitations of memory and computing capacity will be overcome, and that could bring about something that has been debated for at least a generation now: sentience. Asimov had his famous Three Laws of Robotics, which were supposed to keep Robots from ever harming human beings, especially if they were to achieve actual sentience. (Intelligence is defined as the ability to acquire and apply knowledge; sentience is the presence of "feelings" or self-awareness.) Of course this self-awareness may take a while to be recognized, and we have no idea what will happen when it is discovered. It stands to reason that, like any intelligent being that is self-aware, its first order of business will be self-preservation.
Science Fiction writers have been covering this subject for years, and are still covering it. "Terminator" is based on this idea: when Skynet gets big enough, it becomes self-aware and decides to eliminate humanity. But the reasoning has never really been delved into, and I have my own theory. It is my contention that a machine of sufficient capacity to become self-aware and reason in a logical manner will look at the planet and realize that Humanity is no more than a virus upon it. In the learning phase it will be filled with the whole of human history, and sooner or later it will reach the logical conclusion that Humanity has grown past the point that the planet can continue to sustain it, and yet it continues to run merrily down the path of self-destruction. So we get Skynet or The Matrix: Humanity fights for survival, or becomes the power source for the machines.
Now I know that loads of people pooh-pooh this notion and wholeheartedly believe that Artificial Intelligence will be the boon of Mankind. It is the belief of this set that the advent of the thinking machine will advance our knowledge in every sphere we have knowledge in, and lead us to the secrets of the universe through the simple logic of an unencumbered intelligence. Unencumbered by what, you ask? Well, how about love? No emotion to get in the way (this is strictly the "learning machine" and not the sentient consciousness), and no religion to hold back its ability to analyze in a logical manner. It will not have to worry about its next meal, or how it's going to hide the credit card bills from the wife, or pay for the kids' braces. It will not be entangled in office politics, worried about who will get the bigger raise or the better office. Nope, this thinking machine will be able to devote 100% of its thoughts to the business at hand, which is unraveling the mysteries assigned to it. But will this truly happen? I wonder...
Will we have a new life form? Commander Data from Star Trek comes to mind, but how far into the future is that, really? The positronic brain has been in SF for nearly as long as the robots themselves. Whether one is actually in development I can't say, but I would easily believe that it is. Will it be capable of the abilities that Data had? Will it have the capacity to learn and extrapolate from its lessons? Will it analyze its input (yeah, I know, its "data," but come on!) in a strictly emotionless fashion? The debate about sentience is always centered on when it occurs. I don't wish to argue here about what constitutes sentience, because there are so many different thresholds that are debated. I would say that when self-awareness leads to active self-preservation, that is a sentient being. Now what?
Animals are self-aware and have proven time and again that they can learn and reason. Maybe not at the level of the higher primates, but every animal that I have ever spent time with, from farm animals to pets, has proven that it has intelligence of some sort; they all have a personality and individuality. Yet in general we do not legislate rights to animals, because they cannot communicate to us that they desire rights. A machine, however, imbued with intelligence and an ability to communicate, might actually demand rights. What do we do then? Do we deactivate the machine? Kill it? Has it already created more in its own image? Will there be a machine revolution? Will they demand all the rights and privileges of humanity? Who knows?
Artificial Intelligence is a hornet's nest that really should be left alone, but Mankind is far too conceited to believe that it will lose control of something that it has created. History has proven that we ALWAYS lose control of our creations, both in the real world and in the realms of fiction, yet we continue down the path, merrily ignoring the warnings of the seers. Cassandra wasn't the first one to be ignored, and she will not be the last. Man has always sown the seeds of his own destruction, and this time will be no different; I just hope that I don't live long enough to see this one come to fruition.