A Transcendental Argument

As the Hollywood blockbuster Transcendence was released on April 18 (with not very good ratings, by the way), Stephen Hawking reflected on the future of Artificial Intelligence and Humanity. AI research is alive and kicking, and recent landmarks such as Google’s self-driving cars or IBM’s Watson winning at Jeopardy! will probably pale against what the coming decades will bring:

… there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. (Stephen Hawking: ‘Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?’)

Stephen Hawking thinks that success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. Advances in technology, hugely beneficial though they are, render us vulnerable in new ways. As our powers expand, so will the scale of their potential consequences, intended and unintended, positive and negative.

Existential risks are those that threaten the entire future of humanity. Humanity has so far survived what we might call natural existential risks for hundreds of thousands of years: storms, volcanoes, earthquakes, asteroids, and pestilence. It is therefore prima facie unlikely that any of them will do us in within the next hundred years. However, our species is introducing entirely new kinds of existential risk.

Most of the biggest existential risks now seem linked to potential technological breakthroughs: advanced forms of biotechnology, molecular nanotechnology, or machine intelligence that might be developed in the decades ahead. Our track record of survival in the presence of nuclear weaponry spans just a few decades, and we have no track record at all with what is coming. According to Martin Rees, we can’t be too sanguine about the ability of governments to cope if disaster strikes:

Our interconnected world depends on elaborate networks: electric power grids, air traffic control, international finance, just-in-time delivery and so forth. Unless these are highly resilient, their manifest benefits could be outweighed by catastrophic (albeit rare) breakdowns cascading through the system. Pandemics could spread at the speed of jet aircraft, causing maximal havoc in the shambolic but burgeoning megacities of the developing world. Social media could spread psychic contagion – rumours and panic – literally at the speed of light.

Malign or foolhardy individuals or small groups have far more power and leverage than in the past. Concern about cyber-attack, by criminals or by hostile nations, is rising sharply. Advances in synthetic biology, likewise, offer huge potential for medicine and agriculture – but they amplify the risk of bioerror or bioterror. And last year some researchers who’d shown that it was surprisingly easy to make an influenza virus both virulent and transmissible were pressured to redact some details of their publication. (The Conversation: “Astronomer Royal on science, environment and the future”)

Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside a few non-profit institutes. Despite their importance, anthropogenic existential risks remain poorly understood. It is striking how little academic attention these issues have received compared with other topics that are arguably far less important (see the following figure from Nick Bostrom’s paper on existential risks).

Existential Risks Papers
Academic prioritization. Number of academic papers on various topics, as they appear in Nick Bostrom’s “Existential Risk Prevention as Global Priority”

Yet the question is worth exploring, and not just because the stakes are high: after all, if we go extinct, insurance companies will have no beneficiaries to compensate. The question is also worth exploring from a purely ontological point of view. Statistically, we seem to be an anomaly, as Enrico Fermi realized in 1950: the fact that no alien civilization has contacted us yet suggests that at least one of the steps from dead matter to interstellar civilization must be exceedingly unlikely. The reason might be that the emergence of intelligent life is extremely rare; more worryingly, it might be that intelligent life emerges frequently but subsequently fails to survive for long. This bottleneck in the emergence of intelligent civilizations is referred to as the Great Filter.
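To see why a silent sky is informative, it helps to make the argument quantitative. Here is a minimal back-of-the-envelope sketch in Python; every number in it is a made-up placeholder rather than an estimate, and the point is only the structure of the argument: the expected number of visible civilizations is a product of per-step probabilities, so an empty galaxy implies that at least one factor in that product is tiny.

```python
# Toy sketch of the Great Filter argument (illustrative numbers only, not estimates).
# The chance that a given planet produces an interstellar civilization is the
# product of the probabilities of clearing each step; if the galaxy holds billions
# of candidate planets yet we see no one, at least one factor must be tiny.

candidate_planets = 1e10  # hypothetical count of habitable-zone planets in the galaxy

steps = {
    "abiogenesis": 1e-3,          # dead matter -> simple life
    "complex_life": 1e-2,         # simple -> complex (multicellular) life
    "intelligence": 1e-2,         # complex life -> intelligent life
    "technological_civilization": 1e-1,
    "long_term_survival": 1e-6,   # a possible filter still ahead of us
}

p_civilization = 1.0
for p in steps.values():
    p_civilization *= p

expected_civilizations = candidate_planets * p_civilization
print(f"Expected visible civilizations: {expected_civilizations:.3g}")
# With these made-up numbers the expectation is about 1e-4, i.e. an empty-looking sky.
# Remove the tiny 'long_term_survival' factor and the same calculation predicts a
# crowded galaxy, which is why our silence hints that some step is a Great Filter.
```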

We don’t know whether we already cleared the filter in humanity’s past or whether it still awaits us in the future. Now let me tell you why you should pay more attention to the current search for habitable planets. Every new discovery of an Earth-like planet in the habitable zone makes it less plausible that there are simply no planets aside from Earth that might support life. And if one of those planets were found teeming with intelligent life, that would be really bad news for humanity: the more complex the life we find elsewhere, the more of the early steps we can rule out as the filter, and the higher the likelihood that the Great Filter still lies ahead of us, somewhere between our current technological stage and a lasting interstellar civilization.
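The same reasoning can be phrased as a Bayesian update. The following toy sketch assumes a uniform prior over where the single filter sits and an arbitrary likelihood penalty of 0.01 for any step that a discovered planet has demonstrably cleared; both numbers are assumptions made purely for illustration, not part of anyone’s published model.

```python
# Hedged sketch (hypothetical priors) of why finding alien life pushes the
# Great Filter into our future. We put a uniform prior over which single step
# is the filter, then condition on observing another planet that has already
# cleared the first k steps: a filter located among those steps becomes far
# less likely, so the posterior mass moves to the later steps.

STEPS = ["abiogenesis", "complex_life", "intelligence",
         "technology", "interstellar_survival"]

def posterior_filter_location(observed_stage: str, prior=None):
    """Return P(filter is at each step | another planet reached observed_stage)."""
    prior = prior or {s: 1.0 / len(STEPS) for s in STEPS}  # uniform prior (assumption)
    cleared = STEPS[: STEPS.index(observed_stage) + 1]
    # Crude likelihood: a planet clears a step easily unless that step is the filter.
    likelihood = {s: (0.01 if s in cleared else 1.0) for s in STEPS}
    unnorm = {s: prior[s] * likelihood[s] for s in STEPS}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

# Discovering a planet with intelligent life concentrates the posterior on the
# steps we have not yet passed ourselves -- bad news for our long-term odds.
print(posterior_filter_location("intelligence"))
```

With these placeholder numbers, observing a planet that reached intelligence moves almost all of the posterior probability onto the steps humanity itself has yet to pass.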

____________________

Featured Image: Eric Mazas, Is there anybody out there!
