
Thermodynamics of computation is a subfield of physics that explores what the fundamental laws of physics say about the relationship between energy and computation. It has important implications for the absolute minimum amount of energy required to perform computations.
In a paper published in the American Physical Society's Physical Review Research, Santa Fe Institute researchers Artemy Kolchinsky and David Wolpert combine techniques from algorithmic information theory and stochastic thermodynamics to analyze the thermodynamics of Turing machines (1).
Wolpert and Kolchinsky’s work shows that:
The energy required by a computation depends on how much more compressible the output of the computation is than the input. “To stretch a Shakespeare analogy, imagine a Turing machine reads in the entire works of Shakespeare, and then outputs a single sonnet,” explains Kolchinsky. “The output has a much shorter compression than the input. Any physical process that carries out that computation would, relatively speaking, require a lot of energy.”
SFI, Thermodynamics of computation: A quest to find the cost of running a Turing machine
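
As a purely illustrative sketch of this idea (not the paper's actual formalism, which is stated in terms of Kolmogorov complexity, an uncomputable quantity), one can approximate compressibility with an ordinary compressor such as Python's zlib. The texts and figures below are made up for the example:

```python
import zlib

def approx_complexity(data: bytes) -> int:
    # Length of the zlib-compressed data: a crude stand-in for
    # algorithmic (Kolmogorov) complexity, which cannot be computed exactly.
    return len(zlib.compress(data, 9))

# Toy "computation": a long, repetitive input is mapped to a short output.
works_of_shakespeare = ("To be, or not to be, that is the question. " * 2000).encode()
single_sonnet = ("Shall I compare thee to a summer's day? " * 4).encode()

k_in = approx_complexity(works_of_shakespeare)
k_out = approx_complexity(single_sonnet)

# The larger the drop in (approximate) complexity from input to output,
# the higher the minimum energy cost suggested by this line of reasoning.
print(f"approx. complexity of input : {k_in} bytes")
print(f"approx. complexity of output: {k_out} bytes")
print(f"drop in complexity          : {k_in - k_out} bytes")
```

A real compressor only gives an upper bound on algorithmic complexity, so this is at best a qualitative picture of the input/output asymmetry the quote describes.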

“Our results point to new kinds of relationships between energy and computation,” says Kolchinsky. “This broadens our understanding of the connection between contemporary physics and information, which is one of the most exciting research areas in physics.”
I agree. If computation is a process somehow related to learning, understanding or knowing, to revealing the truth if such a thing exists at all, and if the amount of computation is constrained by physics, then the amount of truth we will eventually be able to reveal is also limited. Will we ever know the whole truth about the universe we live in and use to compute it? I have no idea, but I would guess the answer is NO.
____________________
(1) Kolchinsky, A., and Wolpert, D.H. (2020). Thermodynamic costs of Turing machines. Phys. Rev. Research 2, 033312.
Featured Image: Wikimedia Commons
Very good title for the post.
The question that puzzles me is why we are able to know anything at all. How can generalizations of complex processes be obtained with very limited energy, even if those generalizations are no more than approximations? And yet those approximations can, sometimes, be very good.
I share your opinion that knowing the whole truth is likely to be impossible, but the question is how to approximate it with a workable solution.