Machine learning deals with various kinds of statistical problems, such as pattern recognition, regression, and clustering, all of which share a common goal that can be roughly stated as: “Approximate a target concept given a bounded amount of data about it.”
Understanding what is learnable is a fundamental goal of machine learning. A team of researchers has linked(1) this question to the continuum hypothesis:
There is no set whose cardinality is strictly between that of the integers and the real numbers.
Building on work by Kurt Gödel, who showed in 1940 that the continuum hypothesis cannot be refuted from the standard axioms of set theory, Paul Cohen proved in 1963 that it cannot be proved from them either; the hypothesis is thus independent of standard mathematics. Using this result, the team has shown that the question of ‘learnability’ can be undecidable or, more specifically, that there are certain “learning” scenarios whose learnability can be neither proved nor refuted using the standard axioms of mathematics.
The mathematical foundations of machine learning play a key role in the development of the field. They improve our understanding and provide tools for designing new learning paradigms. The advantages of mathematics, however, sometimes come at a cost. Gödel and Cohen showed, in a nutshell, that not everything is provable. Here we show that machine learning shares this fate. We describe simple scenarios where learnability can be neither proved nor refuted using the standard axioms of mathematics. Our proof is based on the fact that the continuum hypothesis cannot be proved nor refuted. We show that, in some cases, a solution to the ‘estimating the maximum’ problem is equivalent to the continuum hypothesis. The main idea is to prove an equivalence between learnability and compression.
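To make the ‘estimating the maximum’ (EMX) task concrete, here is a toy Python sketch; the function names and the tiny finite distribution are illustrative assumptions, not from the paper. A learner sees finitely many draws from an unknown distribution and must output a set that captures most of the probability mass. Returning the sample itself is the natural compression-style strategy the equivalence alludes to. The undecidability result concerns infinite families of sets over an interval, which this finite toy cannot exhibit; the sketch only illustrates the task.

```python
import random

def emx_learner(sample):
    """Toy EMX learner: given finitely many draws from an unknown
    distribution, output a finite set intended to capture most of
    the distribution's mass. Returning the sample itself is the
    natural compression-style strategy (illustrative only)."""
    return set(sample)

def mass_captured(predicted_set, fresh_draws):
    """Empirically estimate how much probability mass the predicted
    set captures, using fresh draws from the same distribution."""
    hits = sum(1 for x in fresh_draws if x in predicted_set)
    return hits / len(fresh_draws)

# Example: a distribution concentrated on three points of [0, 1].
random.seed(0)
support = [0.1, 0.2, 0.3]
draws = [random.choice(support) for _ in range(60)]
predicted = emx_learner(draws[:10])          # learn from 10 draws
print(mass_captured(predicted, draws[10:]))  # coverage on held-out draws
```

For a distribution this concentrated, a small sample almost surely covers the whole support, so the held-out coverage is typically near 1; the paper's question is whether such guarantees are even provable for richer set families.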
Although it is not yet clear whether this result will have much impact on practice, it is yet another “heavyweight result on the limits of our knowledge,” and it came as a surprise to the team, who did not expect this phenomenon to show up in a relatively simple problem in machine learning.
Featured Image: Kurt Gödel by AK Rockefeller