Science is likely moving at a slower pace than it should, weighed down by the sheer volume of what we already know. The exponential growth in the number of papers makes it increasingly difficult for researchers to keep track of all the publications relevant to their work, let alone to connect disparate strands of research.
In 2014 there were more than 2000 maths papers posted to the online repository arXiv.org each month, more than in any other discipline, and the rate is increasing. “If you have too many new results that keep appearing, many just go unnoticed.” (New Scientist, “Our number’s up: Machines will do maths we’ll never understand”)
Our society has become much better at generating new information than analysing what it already has.
One way to cope with information overload in scientific research is to develop algorithms able to mine the scientific literature, something that is already possible and already happening in several areas. Computers are used routinely to look for new drugs and treatments for neglected diseases and to assist in proving mathematical theorems, and some even claim automated invention will speed up technological progress:
Human inventors who learn to leverage computer-automated innovation will leapfrog peers who continue to invent the old-fashioned way.
Algorithm-led discovery raises the possibility of results that no human being can ever understand. The first major computer-assisted proof, a proof of the four-colour theorem, was published 40 years ago and immediately sparked a row. Mathematicians were reluctant to accept it because no one could verify all of its intermediate steps. And even if someone claimed to, should we trust him or her? What if there was an error in the software code?
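To make the verification problem concrete, here is a toy sketch (not the machinery of the actual 1976 proof, which checked roughly two thousand "reducible configurations"): a function that checks whether a proposed colouring of a map's adjacency graph is proper, meaning no two adjacent regions share a colour. The graph and colouring below are invented for illustration. A machine can run such checks millions of times; a human auditing each one by hand cannot.

```python
# Toy illustration of machine-checkable colouring verification.
# This is NOT the four-colour theorem's proof procedure, only a sketch of
# the kind of mechanical check that humans find tedious at scale.

def is_proper_colouring(edges, colouring):
    """Return True if no edge joins two vertices of the same colour."""
    return all(colouring[u] != colouring[v] for u, v in edges)

# A small planar graph: hub vertex 0 surrounded by the cycle 1-2-3-4.
edges = [(0, 1), (0, 2), (0, 3), (0, 4),
         (1, 2), (2, 3), (3, 4), (4, 1)]
colouring = {0: "red", 1: "green", 2: "blue", 3: "green", 4: "blue"}

print(is_proper_colouring(edges, colouring))  # True: no conflict on any edge
```

Checking one colouring is trivial; the difficulty the mathematicians objected to was trusting thousands of such checks, plus the program performing them, without reading any of it.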
To this day, no one has come up with a more elegant, insightful proof. So we’re left in the unsettling position of knowing that the four-color theorem is true but still not knowing why. (Steven Strogatz, “The End of Insight”)
Things can only get worse:
Last year Alexei Lisitsa and Boris Konev at the University of Liverpool, UK, published a computer-assisted proof so long that it totalled 13 gigabytes, roughly the size of Wikipedia. Each line of the proof is readable, but for anyone to go through the entire result would take several tedious lifetimes. (New Scientist, “Our number’s up: Machines will do maths we’ll never understand”)
Steven Strogatz thinks that insight is becoming impossible, at least at the frontiers of mathematics, and Vladimir Voevodsky glimpses a mathematical realm beyond human skills. (See figure below.)
It is very difficult at the present to go into the high levels of complexity and abstraction, because it just doesn’t fit into our heads very well. (Ibid.)
Proofs that no human can follow suggest that there might be answers to fundamental questions too complicated for us to grasp, answers that only machines can provide. We pride ourselves on our ability to understand our universe. Whatever its complexity, we believe that we will be able to write down the ultimate equations articulating a theory of everything. But what if our intuition is wrong? What if there are hard limits to our ability to understand the laws of nature?
Doron Zeilberger of Rutgers University in Newark, New Jersey, thinks there will even come a time when human mathematicians will no longer be able to contribute. “For the next hundred years humans will still be needed as coaches to guide computers,” he says. But after that? “They could still do it as an intellectual sport, and play each other like human chess players still do today, even though they are much inferior to machines.” (Ibid.)
And the most intriguing question: What does this all imply for the meaning of truth? Is it possible for something to be true but not understandable? Is truth like the sound of that lonely falling tree? (1)
(1) This thought experiment is used to motivate a philosophical inquiry: Can something exist without being perceived?