In this interesting essay, George Dyson puts the current cyber-security hysteria into context. Here follows my own idiosyncratic and interested reading:
Once a technology is available, it will be used. Moreover, to a large extent we are forced to use it, and to explore the whole space of possibilities it opens. Absolute control is not feasible, and relinquishment is not an option. Dyson mentions the voluntary restriction on the use of chemical and biological weapons as a “good example” to follow. I have to disagree: just look at what is happening in Syria right now.
Therefore, what the NSA is doing, with the connivance of Google, Yahoo, Microsoft and the like, is both unavoidable and the logical consequence of what Alan Turing was getting at in 1939 when he wondered “how far it is possible to eliminate intuition, and leave only ingenuity.” It might be the only available way to keep progressing in an increasingly sophisticated world, given our limited “built-in” capabilities.
The ultimate goal of surveillance and intelligence analysis is to learn not only what is being said, and what is being done, but what is being thought. This goal appears within reach at last:
If Google has taught us anything, it is that if you simply capture enough links, over time, you can establish meaning, follow ideas, and reconstruct someone’s thoughts.
And though it is reasonable to wonder whether a machine could ever know what anyone thinks, the fact is that, for all practical purposes, “a reasonable guess” is all you need:
A machine does not need to know what one thinks—no more than one person ever really knows what another person thinks. A reasonable guess at what you are thinking is good enough.
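To see how little is required for such a guess, here is a minimal sketch in Python. It assumes nothing more than a log of captured links; every name, resource and topic tag below is invented for illustration, and real systems are of course vastly more sophisticated. Even naive frequency counting over the links produces a ranked “reasonable guess” at someone’s interests:

```python
from collections import Counter

# Hypothetical captured metadata: who accessed which resource.
# All names, resources and topic tags are invented for illustration.
captured_links = [
    ("alice", "crypto-mailing-list"),
    ("alice", "pgp-howto"),
    ("alice", "chess-forum"),
    ("alice", "key-exchange-paper"),
    ("bob", "chess-forum"),
    ("bob", "opening-theory-blog"),
]

# A crude analyst's model mapping resources to topics (not ground truth).
topics = {
    "crypto-mailing-list": "cryptography",
    "pgp-howto": "cryptography",
    "key-exchange-paper": "cryptography",
    "chess-forum": "chess",
    "opening-theory-blog": "chess",
}

def guess_interests(person):
    """Rank a 'reasonable guess' at a person's interests: the relative
    frequency of each topic among the links captured for them."""
    counts = Counter(topics[r] for p, r in captured_links if p == person)
    total = sum(counts.values())
    return [(topic, n / total) for topic, n in counts.most_common()]

print(guess_interests("alice"))  # [('cryptography', 0.75), ('chess', 0.25)]
```

Note the asymmetry: alice never said what she was interested in; the guess emerges from the captured links alone.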
Knowing someone’s thoughts, however, is not enough for absolute control, and for the same reason that Hilbert’s programme failed. Just as no consistent formal system can decide every statement it can express, no security apparatus can sort dangerous ideas from creative ones in advance (emphasis and link added):
It will never be entirely possible to systematically distinguish truly dangerous ideas from good ones that appear suspicious, without trying them out. Any formal system that is granted (or assumes) the absolute power to protect itself against dangerous ideas will of necessity also be defensive against original and creative thoughts. And, for both human beings individually and for human society collectively, that will be our loss. This is the fatal flaw in the ideal of a security state.
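The allusion to Hilbert deserves a line of formal backup. Hilbert’s programme was ended by Gödel’s incompleteness theorems: any consistent, effectively axiomatized theory strong enough for arithmetic leaves some sentences undecidable, and in particular cannot prove its own consistency. In the standard textbook formulation (mine, not Dyson’s):

```latex
% Gödel's incompleteness theorems, the results that ended Hilbert's programme.
% T is any consistent, effectively axiomatized theory extending arithmetic.
\text{(First)}\quad \exists\, G_T :\; T \nvdash G_T \ \text{ and } \ T \nvdash \lnot G_T
\qquad
\text{(Second)}\quad T \nvdash \mathrm{Con}(T)
```

Substitute “dangerous ideas” for undecidable sentences and you get exactly the flaw Dyson describes.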
The key question is to what extent it is necessary to keep today’s main spying programs secret:
There will always be illicit spying, but it should be kept within reasonable bounds. It is disturbing if laws had to be broken to conduct the PRISM surveillance program; but if laws didn’t have to be broken, that is worse.
A secret program can be brought into the open, to the benefit of all, without necessarily being brought to a halt.
Very likely, we will never be able to design a system that protects privacy and freedom of thought while keeping all the benefits and promises of big data. Therefore, the only thing we can ask for, at least in the short term, is a bit of transparency and a lot of common sense.
We face a fundamental decision about whether, and when, human or machine intelligence is given the upper hand, and we should make that decision in a more democratic way. Dyson ends by invoking Eisenhower’s farewell address:
A vital element in keeping the peace is our military establishment. Our arms must be mighty, ready for instant action. (…) We must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military-industrial complex. (…) Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite. (Eisenhower’s Farewell Address to the Nation)
In summary:
Yes, we need big data, and big algorithms—but beware.