The Presidency and the 21st Century’s Big Threat: GNR
by N. M. GUARIGLIA
June 27, 2012
Recently, I had the pleasure of meeting one of the chiefs of cyber-security at Raytheon, the defense technology firm. The subject turned to cyber-warfare. "When ones and zeroes are involved," he said, "offense will find a way to win." Encryption defenses are important and will improve in due time, but to have true information security, one may have to go off the grid. The Raytheon executive said, "When we go from mega-, giga-, and terabytes to peta-, exa-, and zetta-, we will be entering a brave new world of the infinitesimally small."
When one considers the future of this century, there are several existential threats. The possibility of great-power war is one (with China or Russia, perhaps). This isn't that likely, I believe, due to the enduring Cold War principles of mutually assured destruction. Another threat is the possibility of nuclear terrorism or biological terrorism (and the incomprehensible aftermath).
There is another threat, however, that national candidates for president almost never talk about: "GNR," for Genetics (biotechnology), Nanotechnology (quantum science), and Robotics (Artificial Intelligence, or A.I.). As Ray Kurzweil explains, GNR is riding the wave of information technology and its exponential growth. If you were to take 30 steps linearly, you'd be at 30; if you were to take 30 steps exponentially, doubling each time, you'd be past a billion. This exponential principle is exemplified by Moore's Law, and its projected culmination has come to be called the Singularity: the scientifically foreseeable point in the near-to-medium future at which human beings will have created technological intelligences so intelligent (billions of times more capable than today's strongest computers, or the human brain) and so subatomic (as small to an apple as an apple is to Earth) that we will have created nothing less than "nano-gods."
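Kurzweil's 30-steps comparison is easy to verify: adding one per step gives 30, while doubling per step gives 2^30. A minimal sketch:

```python
# Linear vs. exponential growth over 30 "steps":
# adding 1 each step versus doubling each step.
linear = 0
exponential = 1
for _ in range(30):
    linear += 1        # linear: one unit per step
    exponential *= 2   # exponential: doubling per step

print(linear)       # 30
print(exponential)  # 1073741824 -- just over a billion
```

The gap between the two is the entire argument: thirty doublings take you from one to more than a billion.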
As "brain-builder" Hugo de Garis explains in one of the more intelligent interviews available on the subject, these "gods" will then enter our minds, probably by way of eye drops or another noninvasive route.
The MIT professor Seth Lloyd has estimated the maximum computational capacity allowed by the laws of physics. He shows that a kilogram of matter can perform at most roughly 2E/(πℏ) operations per second: twice its energy (E = mc²) divided by pi times the reduced Planck constant, which works out to about 5 × 10^50 operations per second. According to Kurzweil's interpretation of this calculation, even inanimate matter, such as a rock, coated with Artificial Intelligence will theoretically have the computational capacity of five trillion-trillion human civilizations. In other words, our computers will be able to perform the human-brain equivalent of all human thought over the last 10,000 years in roughly one ten-thousandth of a nanosecond. And if exponential trends are even remotely accurate, this will be possible around mid-century.
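Lloyd's bound and Kurzweil's comparisons can be checked with back-of-the-envelope arithmetic. The brain figure below (about 10^16 calculations per second) and the population figure (about 10^10 humans) are Kurzweil's working assumptions, not measured facts:

```python
import math

# Physical constants
c    = 2.998e8     # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J*s

# Lloyd's limit for one kilogram of matter: 2E / (pi * hbar)
E = 1.0 * c**2                            # E = mc^2, joules in 1 kg
ops_per_sec = 2 * E / (math.pi * hbar)    # ~5.4e50 operations/second

# Kurzweil's comparison figures (his assumptions)
brain_cps = 1e16                          # calculations/sec per human brain
civilization_cps = brain_cps * 1e10       # ~10 billion humans

print(f"{ops_per_sec:.1e} ops/s")                             # ~5.4e+50
print(f"{ops_per_sec / civilization_cps:.1e} civilizations")  # ~5e+24

# All human thought over 10,000 years, replayed at Lloyd's limit
thought_ops = civilization_cps * 10_000 * 365.25 * 24 * 3600
print(f"{thought_ops / ops_per_sec:.1e} seconds")             # ~6e-14 s
```

The ratio of 5 × 10^24 is the "five trillion-trillion civilizations" figure, and 6 × 10^-14 seconds is on the order of one ten-thousandth of a nanosecond, matching the claims in the text.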
There is clearly promise in this, but there is also great peril. It is a deeply philosophical discussion. A person either comprehends this trajectory, and prepares for it, or puts it out of his or her mind. The implications are enormous. Will this transcendence expedite our evolution, or will it destroy our individuality, our liberty, and our humanness? Could either the users of these technologies, or those who would prevent them, turn tyrannical? Who will guard the guardians? Will attempts to control and regulate these technologies accomplish precisely the dystopia we may fear the technologies themselves will create? Will we merge with these intelligences, or will they be distinct entities? Does the future need us at all?
There seems to be a sense amongst humanity that something big is right around the corner, something unequivocal. Collectively, we've taken to apocalyptic assumptions. Nearly half of Americans think the Rapture will happen by mid-century. This "event," however, won't be found in the Mayan Calendar, but rather in Sagan's Cosmic Calendar. It won't be coming out of the clouds, but rather into our brains. Something along the lines of the Singularity is where we are going. Exponential trends haven't stopped for more than a half-century and there is no reason to believe they will stop anytime soon.
Information is power. It is, as Ramez Naam says, an infinite resource on a finite planet. As free people, we should encourage the dissemination of information technologies under one condition: that our security and liberty are not endangered. In the future, the government may assume undue authority and force information companies into subservience for authoritarian reasons. Or these companies, trying to avoid total subservience and to destroy their competition without competing, may preemptively give the government what it wants. This is not free-market capitalism, nor is it humanism. It is a form of fascism.
This will be the most consequential century in the history of life on Earth. Technology is man's greatest invention. It is a fine servant, but a most dangerous master. We should neither concede its control to a central authority nor allow ourselves to become dependent on it, for either way we will have sullied both human integrity and individual liberty. The next president, to his surprise, will likely have to address the potentialities of transhumanism, both good and bad.
"We have Makers and Breakers in the world," Vernor Vinge once said. "The Makers have created so many wonderful things, and it's easy for the Breakers to use them. The Breakers have all sorts of motives, including just the motive of breaking it-because it's so beautiful and people are so happy with it."
Contributing Editor N.M. Guariglia is an essayist who writes on Islam and Middle Eastern geopolitics.