
Do you remember Chris Anderson’s prediction about “The End of Theory”? It was 2008 and big data was all the rage.
Now a new algorithm, devised by physicist Hong Qin at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory, applies machine learning to observational data on the planetary orbits of Mercury, Venus, Earth, Mars, Ceres, and Jupiter, similar to what Kepler inherited from Tycho Brahe in 1601. But instead of inferring a set of continuous differential equations, Qin bets on discrete field theory. It seems he was inspired in part by Nick Bostrom's simulation hypothesis: if we live in a simulation, Qin reasons, our world has to be discrete.
But more interesting than the mathematical details is the way Anderson's prediction lurks like a ghost in the machine of scientific discovery, and what it means for our capability to actually "understand" reality. Qin's approach raises questions about the nature of science:
Usually in physics, you make observations, create a theory based on those observations, and then use that theory to predict new observations. What I’m doing is replacing this process with a type of black box that can produce accurate predictions without using a traditional theory or law.
Essentially, I bypassed all the fundamental ingredients of physics. I go directly from data to data (…) There is no law of physics in the middle.
This is the abstract:
A method for machine learning and serving of discrete field theories in physics is developed. The learning algorithm trains a discrete field theory from a set of observational data on a spacetime lattice, and the serving algorithm uses the learned discrete field theory to predict new observations of the field for new boundary and initial conditions. The approach of learning discrete field theories overcomes the difficulties associated with learning continuous theories by artificial intelligence. The serving algorithm of discrete field theories belongs to the family of structure-preserving geometric algorithms, which have been proven to be superior to the conventional algorithms based on discretization of differential equations. The effectiveness of the method and algorithms developed is demonstrated using the examples of nonlinear oscillations and the Kepler problem. In particular, the learning algorithm learns a discrete field theory from a set of data of planetary orbits similar to what Kepler inherited from Tycho Brahe in 1601, and the serving algorithm correctly predicts other planetary orbits, including parabolic and hyperbolic escaping orbits, of the solar system without learning or knowing Newton’s laws of motion and universal gravitation. The proposed algorithms are expected to be applicable when the effects of special relativity and general relativity are important.
Qin, H. (2020). Machine learning and serving of discrete field theories. Scientific Reports 10, 19329.
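The paper's machinery is heavier than a blog post can carry, but the learn-then-serve loop is easy to caricature. Below is a minimal Python sketch, under assumptions of my own, of that loop for a one-dimensional nonlinear oscillator (one of the paper's test cases): fit the parameters of a discrete Euler–Lagrange relation to trajectory data on a time lattice, then iterate that same relation to predict new trajectories. The polynomial ansatz for the potential, the step size, and all names here are illustrative, not Qin's actual code.

```python
import numpy as np

# --- Synthetic "observations" of a nonlinear oscillator ------------------
# Hidden truth: V(q) = q^2/2 + 0.3*q^4/4, so V'(q) = q + 0.3*q^3.
h = 0.05                                     # time-lattice spacing

def true_grad_V(q):
    return q + 0.3 * q**3

def serve(q0, q1, grad_V, n):
    """Iterate the discrete Euler-Lagrange relation
       q_{k+1} = 2*q_k - q_{k-1} - h^2 * V'(q_k)
    (a variational, structure-preserving update)."""
    traj = [q0, q1]
    for _ in range(n - 2):
        traj.append(2 * traj[-1] - traj[-2] - h**2 * grad_V(traj[-1]))
    return np.array(traj)

q_obs = serve(1.0, 1.0, true_grad_V, 400)    # the "Tycho Brahe" data

# --- Learning: fit a discrete theory to the lattice data -----------------
# Ansatz V'(q; theta) = theta0*q + theta1*q^3; minimize the discrete
# Euler-Lagrange residual  q_{k+1} - 2*q_k + q_{k-1} + h^2*V'(q_k)
# by least squares (the residual is linear in theta).
qm, qc, qp = q_obs[:-2], q_obs[1:-1], q_obs[2:]
A = h**2 * np.stack([qc, qc**3], axis=1)
b = -(qp - 2 * qc + qm)
theta, *_ = np.linalg.lstsq(A, b, rcond=None)
print("learned theta:", theta)               # recovers ~ [1.0, 0.3]

# --- Serving: predict a new orbit from unseen initial conditions ---------
learned_grad_V = lambda q: theta[0] * q + theta[1] * q**3
q_new = serve(0.2, 0.25, learned_grad_V, 400)
```

Even in this toy, the property the abstract insists on survives: the serving step is not a generic ODE solver but the learned discrete relation itself, so it is structure-preserving by construction.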
____________________
Image: Chingraph, in "Where should we draw the line between rejecting and embracing black box AI?"
Very interesting.
But I fail to see the point of the discussion. Of course, using just data, it should be possible to find a framework that encompasses all the data. That results in a model that can be used to make predictions.
But this is nothing new. This is how models are made.
In fact, mathematics is a sort of "machine" or "procedure" that helps find models for explaining reality. It has proved to be very useful, and for that reason it has been accepted since the 15th century.
Computers (AI) are just another tool for devising models; they are just a tool for doing mathematics or, rather, for finding patterns.
But, of course, the machine does not understand anything. Similarly, nobody would claim that "mathematics" understands the problem; it is just a tool for devising and using the model.
Models can be so difficult for humans to understand that, frequently, scientists do not really understand them, even if they are used to make predictions (e.g., quantum physics). I would even argue that Newton's gravitation is too complex to understand without differential calculus.
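To make that last point concrete: the inverse-square law fits in one line, but extracting Kepler's ellipses from it means solving a second-order differential equation, e.g. for a small body orbiting a mass $M$:

$$\ddot{\mathbf r}(t) = -\,\frac{GM}{|\mathbf r(t)|^{3}}\,\mathbf r(t)$$

Stating the law is easy; "understanding" the orbits it implies required inventing calculus.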
There is no simple answer, but we typically think we understand when there is a model (equations) that allows us to grasp (or think we grasp) the whole picture. The black-box approach is when what we have is a (dark) model: we may ask and receive an answer, but we only see that question-answer pair. Today's most salient debate is probably the one about neural nets. We feed them data, and they configure themselves following a perfectly clear algorithm, yet without us being able to visualize or understand the result of that training.
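To see that contrast in miniature, here is a deliberately tiny sketch (every detail is illustrative, not a reference implementation): a one-hidden-layer network fitted by plain gradient descent. The update rule is completely transparent; the trained weight matrices, which are the "model", are not readable as any law.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X)                              # the data the net must fit

# One hidden tanh layer; the update rule below is fully transparent.
W1, b1 = rng.normal(0, 1.0, (1, 32)), np.zeros(32)
W2, b2 = rng.normal(0, 0.3, (32, 1)), np.zeros(1)
lr = 0.05
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)               # forward pass
    pred = H @ W2 + b2
    err = pred - y                         # gradient of squared error
    gW2, gb2 = H.T @ err / len(X), err.mean(0)
    dH = (err @ W2.T) * (1 - H**2)         # backprop through tanh
    gW1, gb1 = X.T @ dH / len(X), dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# The net now approximates sin(x), yet W1 and W2 encode no readable law.
print(np.tanh(np.array([[1.0]]) @ W1 + b1) @ W2 + b2)  # roughly sin(1) = 0.84
```

Inspecting W1 and W2 afterwards tells a human essentially nothing; that opacity, not the training rule, is what the "black box" label points at.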