“In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.” (George Dyson)
In his introduction to Google’s Zeitgeist Americas 2012 conference on Monday, October 15th, the company’s Executive Chairman, Eric Schmidt, said (min. 7:23):
But the real breakthrough for people like us, sort of the well-to-do in the information sense of well-to-do, will be that there’s a whole new generation of robots coming along. And these robots will represent us and do gesture recognition. You’ll send your robot — I don’t like to stay at night. I’ll send my robot out to go to the party and they can represent me. (Eric Schmidt, “The World Around Us”)
Hearing his words, I could not help thinking of Eric’s robot dancing and drinking with his wife at that party and, later, who knows? In the film Multiplicity, Doug Kinney is a construction contractor who never has enough time for his wife and family. With the help of a geneticist, he manages to create several “xerox” Dougs. The clones seem like the perfect solution until they begin to take over his home, his job, and his bed. In Eric’s case, his fortune is probably a much more tempting prize.
I must admit that the idea of sending my own robot to a night party I don’t care a fig about is absolutely tempting, but I am not quite sure I would trust my robot. And not because I am as rich as Eric Schmidt. Call me paranoid, but Eric’s remark also reminded me of a much graver reflection on the dark side of advanced technologies: Bill Joy’s “Why the future doesn’t need us“, a well-known essay published by Wired in 2000.
The paper is a terrific warning about the potential consequences of allowing our technologies –machines– to gain ever more relevance. It starts with a long quote from Ray Kurzweil’s “The Age of Spiritual Machines“, which contains a long passage by Neo-Luddite Ted Kaczynski –the Unabomber:
If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave […] the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions
Although the paper is great, Bill Joy’s answer –relinquishment– is not. Just because we can do something, ought we to do it? Taking the development of nuclear weapons as a cautionary tale, Bill sought to find ethical boundaries for our pursuit of scientific and technological knowledge:
If we could agree, as a species, what we wanted, where we were headed, and why, then we would make our future much less dangerous – then we might understand what we can and should relinquish
In my humble opinion, relinquishment is at most a second best: reducing our technological possibilities will reduce risks, but it will also reduce the maximum potential rewards attainable in theory. And what if somehow, some day, that maximum becomes a critical survival factor? Furthermore, I think that relinquishment is an unachievable solution in practice. “If we could agree” is the key phrase here: if we could agree, as a species, what we wanted, where we were headed, and why, then we would not need to relinquish: a better agreement would surely be possible.
I think I will go to the party after all, and pay the price of being human: weak and curious. Because the truth is that I am longing to meet Eric’s wife’s own robot, and see what happens then.
Featured Image: Isaac Asimov, “I, Robot”, First Edition Cover