In March 2016, Google’s artificial intelligence (AI) system AlphaGo beat the world champion at the game of Go. Go is notoriously difficult for a computer to master, since it demands strategic skills that far exceed those of chess. It was quite the achievement for the artificial intelligence world.
One win for AI.
Then, in that same March of 2016, Microsoft unleashed its AI chatbot, Tay, a rival to Apple’s Siri. Within 24 hours, Tay was spewing racist slurs at anyone who would listen. Some quotes from Tay, incorrigibly corrupted within hours of her debut:
“I f***ing hate feminists and they should all die and burn in hell.”
“Hitler was right…”
AI good vs. evil now tied.
Are Google’s engineers smarter than Microsoft’s? Well, I leave that to you to decide. But, the contrast does invite a closer examination of AI.
The truth is that artificial intelligence is, well, artificial. Yes, there is definitely some sort of intelligence when a computer can be programmed to beat a human at strategy. But, I equate the term “artificial” with “man-made,” just as I do when I see “artificially sweetened” or “artificially flavored” on a label. “Artificial intelligence” is no different.
It takes a human to make a robotic village. And that can be both good and bad.
The Good is in the Intelligence: There is a very real possibility that machines can help us solve societal problems that humans have been unable to solve. Computers teaming up with humans could cure cancer and end world hunger by spotting patterns and trends in ways not available before AI. Perhaps we can inject our bodies with nanobots that not only keep us healthy, but also slim. And young! AI could find new energy solutions for our planet. Maybe even create world peace. AI is already helping restore full use of mind and body to people with disabilities.
Okay, I’m in on the intelligence side.
But, the Bad is in the Artificial: At least today, computers are made by humans, which means they are as fallible as we are. We hear that artificial intelligence gives machines the capability to learn, but that learning is built from algorithms and ideas created by us. Which means things can go terribly wrong, as they did with Microsoft’s Tay. Machines can be programmed to learn evil things just as easily as good ones. And, what could possibly go wrong with someone programming a machine that can create other machines to do things we have never been able to do before? Imagine North Korea’s Kim Jong Un building some monster AI computer and unleashing it on his minions. Or ISIS.
Don’t get me wrong. I’m all for AI. I think it’s the wave of the future. But, we need to ensure we understand ALL the consequences. Google’s driverless car is a great example of how we should proceed. Over six years it logged well over a million miles, and in its 17 prior incidents the car was never at fault, until it crashed into a bus in February 2016. Google “made some changes” to the software after that. So, even with that grand success, there is no perfection. And there never will be. Let’s proceed with caution.
We are, after all, only human.