- There is a programming language that is reversible: Janus [1]. You could write a (lossless) data compression algorithm in this language, and if run in reverse this would uncompress. In theory you could do all types of computation, but the "output" (when run forward) would need to contain the old state. With reversible computing, there is no erased information. Landauer's principle links information theory with thermodynamics: putting order to things necessarily produces heat (ordering something locally requires "disorder" somewhere else). That is why Toffoli gates are so efficient: if the process is inherently reversible, less heat needs to be produced. Arguably, heat is not "just" disorder: it is a way to preserve the information in the system. The universe is just one gigantic reversible computation. And so, if we all live in a simulation, maybe the simulation is written in Janus?
[1] https://en.wikipedia.org/wiki/Janus_(time-reversible_computi...
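To make the forward/backward idea concrete, here is a minimal sketch (in Python, not Janus, and without Janus's syntax constraints) of a lossless run-length encoder whose inverse recovers the input exactly, so "running it backwards" is decompression:

```python
# Illustrative sketch only: a lossless run-length codec where the
# reverse direction undoes the forward direction with no lost information.

def rle_encode(s):
    """Forward direction: 'aaab' -> [('a', 3), ('b', 1)]."""
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def rle_decode(runs):
    """Reverse direction: exactly inverts rle_encode."""
    return "".join(ch * n for ch, n in runs)

data = "aaabbbbcc"
assert rle_decode(rle_encode(data)) == data  # lossless round trip
```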
- That sounds interesting. I just checked out the examples in the Haskell Janus implementation: https://github.com/mbudde/jana
- > The universe is just one gigantic reversible computation.
Assuming that the Many-Worlds interpretation is true.
- Actually this is true whichever interpretation you take, give or take some knowledge around black holes. I think Hawking actually proved that this is true regardless of how black holes work, due to Hawking radiation.
- > Actually this is true whichever interpretation you take
In the Copenhagen interpretation the collapse of the wave function explicitly violates unitarity (and thus reversibility).
- (This is way beyond my area of expertise so excuse me that this might be a stupid idea.)
I assume the following happens: while a (small) subsystem is in a "pure state" (in quantum coherence), no information flows out of this subsystem. Then, when measuring, information flows out and other information flows in, which disturbs the pure state. This collapse of the wave function (quantum decoherence) looks irreversible for all practical purposes, but technically it could still be reversible; it's just that the subsystem (that is in coherence) got much, much larger. Sure, for all practical purposes it's then irreversible, but for us most of physics looks irreversible anyway (e.g. black holes).
- The problem is that the larger subsystem includes an observer in a superposition of states of observing different measured values. And we never observe this. Copenhagen interpretation doesn't deal with this at all. It just states this empirical fact.
- So if I understand correctly, you are saying the observer doesn't feel like he is in a superposition (multiple states at once). Sure: I agree that observers never experience being in a superposition.
But I don't think that necessarily means we live in a Many-Worlds universe. I rather think that we don't have enough knowledge in this area. Assuming we live in a simulation, an alternative explanation would be that unlikely branches are not further simulated to save energy. And in this case, superposition is just branch prediction :-)
- You just need unitarity.
- Unitarity means that information (about quantum states) is not lost, despite it appearing otherwise after a measurement. The Many-Worlds interpretation seems to be the simplest way to explain where this information has gone.
- Could you really do general compression in this language? I was under the impression that the output is always the same size or larger than the input.
- Yes you can do compression, if the text is compressible. The playground [1] has a "run length encoding" example.
Maybe you meant sorting. You can implement sorting algorithms, as long as you store the information which entries were swapped. (To "unsort" the entries when running in reverse). So, an array that is already sorted doesn't need much additional information; one that is unsorted will require a lot of "undo" space. I think this is the easiest example to see the relation between reversible computing and thermodynamics: in thermodynamics, to bring "order" to a system requires "unorder" (heat) somewhere else.
There are also examples for encryption / decryption, but I find compression and sorting more interesting.
- You could package all your data into a zip using this language but you would also have a worthless stretch of memory seemingly filled with noise / things you’re not interested in.
- Why do you think so? The code example shows that you can do RLE (run length encoding) without noise / additional space. I'm pretty sure you can do zip as well. It would just be very hard to implement, but it wouldn't necessarily require that the output contains noise.
- Hmm. As a physicist my intuition is that information-preserving transformations are unitary (unitary transformations are 1-to-1). If a compression algorithm is going to yield a bit string (the zip file, for example) shorter than the original it can't be 1-to-1. So it must yield the zip file and some other stuff to make up for the space saved by the compression.
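The 1-to-1 point can be checked with a quick pigeonhole count (a trivial Python sketch, nothing from the language itself):

```python
# Pigeonhole argument: there are more n-bit strings than strings
# shorter than n bits, so no lossless compressor can shrink every
# input -- the "space saved" must show up as other output state.

n = 8
inputs = 2 ** n                          # 256 distinct 8-bit strings
shorter = sum(2 ** k for k in range(n))  # 255 strings of length < 8
assert inputs > shorter                  # no injection into shorter strings
```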
- > any Boolean function can be computed reversibly
This only holds because the system returns its raw (a, b) inputs unchanged. It doesn't seem like a useful property. Of course we can "reverse any function" if we store the inputs! Reversing here just means yielding back the inputs.
- > Reversing here just means yielding back the inputs.
Not quite I think - the example gate they give has (a,b,c) as input and it doesn't return c. So it's not yielding all the inputs back.
Furthermore: if you always returned all the inputs, and also computed other values, the outputs from the gates would be strictly increasing in size, so you wouldn't be able to use a finite set of gates to build a computer of arbitrary size.
- c is the constant 1. You don't need to store that.
> Furthermore [this doesn't scale].
Precisely.
https://en.wikipedia.org/wiki/Toffoli_gate has better details.
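The two claims in this subthread (the gate is its own inverse, and fixing c = 1 yields NAND) are easy to check exhaustively; here is a small Python sketch:

```python
# The Toffoli (CCNOT) gate: flips c iff a and b are both 1.
# It is its own inverse, and with c fixed to 1 the third output
# equals NAND(a, b), which makes it universal for classical logic.

from itertools import product

def toffoli(a, b, c):
    return a, b, c ^ (a & b)

for a, b, c in product((0, 1), repeat=3):
    assert toffoli(*toffoli(a, b, c)) == (a, b, c)  # self-inverse

for a, b in product((0, 1), repeat=2):
    assert toffoli(a, b, 1)[2] == 1 - (a & b)       # NAND via c = 1
```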
- On the topic of scaling, reversible computations are more energy efficient than non-reversible ones, see also the OP. Outputting the original inputs might seem silly and wasteful superficially but if you discarded them (as "heat"), you'd just be back to building a non-reversible, likely much less efficient gate.
- c is an actual input variable, not a constant. You can set it to the constant 1, which the article does in one of the examples, but this isn't mandatory.
- c is computable from (a, b) regardless.
- > We’re far from the ultimate physical limit, but reversibility still provides practical efficiency gains today.
I'd love to hear more about this. Where it's used today and how big are the gains?
- So Toffoli gates can serve as a substrate for NAND, and NAND is computationally universal. What happens when I build a hash function out of Toffoli-backed NAND gates? Can I run it in reverse and generate pre-images?
- Your circuit will contain auxiliary output bits. That way it's reversible knowing all output bits, but irreversible knowing only the actual hash output.
Quite similar to how SHA3/Keccak is built from a reversible permutation, but becomes irreversible once the output is truncated.
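A toy version of that point (Python; the permutation here is made up for illustration and is nothing like Keccak's): a reversible map made irreversible by truncating its output.

```python
# A bijection on 8-bit values, then truncation. Fully reversible when
# all output bits are kept; many preimages per output once truncated.

def permute(x):
    x = (x * 5 + 3) % 256                 # affine map, invertible mod 256
    return ((x << 3) | (x >> 5)) & 0xFF   # 3-bit rotation, also invertible

full = {permute(x) for x in range(256)}
assert len(full) == 256                   # bijective: fully reversible

truncated = {}
for x in range(256):
    truncated.setdefault(permute(x) & 0x0F, []).append(x)  # keep 4 bits
collisions = [v for v in truncated.values() if len(v) > 1]
assert collisions                         # preimages are no longer unique
```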
- These 3-way gates also seem to be part of the recent algorithmic advances in quantum computing that have put useful applications in reach sooner than previously expected. It reminds me of how AI made a lot of its real gains with math optimizations like dropping to NVFP4. It all seems like we have been feeling our way in the dark but are just starting to touch things.
- *Toffoli
- I'm always surprised at the high frequency of major typos in titles from HN posts
- Sounds like a type of pasta
- Or a fine Prosecco:
- Cellular Automata Machines: A New Environment for Modeling, by Tommaso Toffoli and Norman Margolus
https://donhopkins.com/home/cam-book.pdf
CAM6 Demo:
https://www.youtube.com/watch?v=LyLMHxRNuck
Demo of Don Hopkins' CAM6 Cellular Automata Machine simulator.
Live App: https://donhopkins.com/home/CAM6
Github Repo: https://github.com/SimHacker/CAM6
JavaScript Source Code: https://github.com/SimHacker/CAM6/blob/master/javascript/CAM...
Comments from the code:
// This code originally started life as a CAM6 simulator written in C
// and Forth, based on the original CAM6 hardware and compatible with
// the brilliant Forth software developed by Toffoli and Margolus. But
// then it took on a life of its own (not to mention a lot of other CA
// rules), and evolved into supporting many other cellular automata
// rules and image processing effects. Eventually it was translated to
// C++ and Python, and then more recently it has finally been
// rewritten from the ground up in JavaScript.
//
// The CAM6 hardware and Forth software for defining rules and
// orchestrating simulations is thoroughly described in this wonderful
// book by Tommaso Toffoli and Norman Margolus of MIT.
//
// Cellular Automata Machines: A New Environment for Modeling
// Published April 1987 by MIT Press. ISBN: 9780262200608.
// https://mitpress.mit.edu/9780262526319/cellular-automata-machines/
- It’s Toffoli (two f one l)
- OK, the Landauer limit defines the minimum energy for a bit flip, but I don't see how a Toffoli gate would require less energy for a bit flip, let alone come into the region of the Landauer limit. Could someone with more knowledge enlighten us (or at least me)?
- The Landauer limit defines minimum energy for a bit *erasure*.
A reversible gate doesn't involve any such erasure and therefore Landauer's principle doesn't apply to it.
What will happen in practice if you do an entirely reversible computation is that you end up with the data you care about and a giant pile of scratch memory that you're going to need to zero out if you ever want to reuse it. Or perhaps you rewind the computation all the way back to the beginning to unscratch the scratch memory but you're going to at least need to pay to copy the output somewhere.
- IANAP, but my understanding is that the Landauer limit defines the minimum energy of forcing an unknown bit into a known state. Physics as we know it is fully reversible at the microscale: every possible state has exactly one ancestor state. An irreversible process (that is, one that would force two macroscopically distinguishable states into a single one) is only possible if we conduct the "unknownness", a.k.a. entropy, away from our computer, i.e. generate heat. Toffoli gates are reversible, and therefore in theory you can implement them in a way that is not subject to the Landauer limit.
Obviously, implementing one as a CMOS gate wouldn't be enough. Reversible gates would be very different. AFAIR they need to have a fan-out of one - you can't just wire an output to two inputs without losing reversibility.
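For scale, the limit itself is k_B · T · ln(2) per erased bit; a back-of-envelope calculation (Python, standard constants):

```python
# Landauer bound: minimum energy dissipated per *erased* bit.
# At room temperature this is ~2.87e-21 J, many orders of magnitude
# below what a present-day CMOS switching event dissipates.

import math

k_B = 1.380649e-23             # Boltzmann constant, J/K
T = 300.0                      # room temperature, K
E_min = k_B * T * math.log(2)  # energy per erased bit, in joules
assert 2.8e-21 < E_min < 2.9e-21
```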
- For example, all of quantum computing is reversible, and you really don't want the qubits to interact with (and hence exchange energy with) the outside. So if you ignore all the supporting apparatus, in theory it could work without spending energy. Toffoli gates can be used/realized in quantum computers.
- Michael Frank himself (the subject of the IEEE article "Reversible computing escapes the lab") showed up for the HN discussion and answered questions here:
Reversible computing escapes the lab (ieee.org)
https://spectrum.ieee.org/reversible-computing
>Michael Frank has spent his career as an academic researcher working over three decades in a very peculiar niche of computer engineering. According to Frank, that peculiar niche’s time has finally come. “I decided earlier this year that it was the right time to try to commercialize this stuff,” Frank says. In July 2024, he left his position as a senior engineering scientist at Sandia National Laboratories to join a startup, U.S. and U.K.-based Vaire Computing.
https://news.ycombinator.com/item?id=42660606
mikepfrank on Jan 14, 2025:
Hi, someone pointed me at your comment, so I thought I'd reply. First, the circuit techniques that aren't reversible aren't truly, fully adiabatic either -- they're only quasi-adiabatic. In fact, if you strictly follow the switching rules required for fully adiabatic operation, then (ignoring leakage) you cannot erase information -- none of the allowed operations achieve that.
Second, to say reversible operation "only saves an extra 20%" over quasi-adiabatic techniques is misleading. Suppose a given quasi-adiabatic technique saves 79% of the energy, and a fully adiabatic, reversible version saves you "an extra 20%" -- well, then now that's 99%. But, if you're dissipating 1% of the energy of a conventional circuit, and the quasi-adiabatic technique is dissipating 21%, that's 21x more energy efficient! And so you can achieve 21x greater performance within a given power budget.
Next, to say "resistive losses dominate the losses" is also misleading. The resistive losses scale down arbitrarily as the transition time is increased. We can actually operate adiabatic circuits all the way down to the regime where resistive losses are about as low as the losses due to leakage. The max energy savings factor is on the order of the square root of the on/off ratio of the devices.
Regarding "adiabatic circuits can typically only provide an order of magnitude power savings" -- this isn't true for reversible CMOS! Also, "power" is not even the right number to look at -- you want to look at power per unit performance, or in other words energy per operation. Reducing operating frequency reduces the power of conventional CMOS, but does not directly reduce energy per operation or improve energy efficiency. (It can allow you to indirectly reduce it though, by using a lower switching voltage.)
You are correct that adiabatic circuits can benefit from frequency scaling more than traditional CMOS -- since lowering the frequency actually directly lowers energy dissipation per operation in adiabatic circuits. The specific 4000x number (which includes some benefits from scaling) comes from the analysis outlined in this talk -- see links below - but we have also confirmed energy savings of about this magnitude in detailed (Cadence/Spectre) simulations of test circuits in various processes. Of course, in practice the energy savings is limited by the resonator Q value. And a switched-capacitor design (like a stepped voltage supply) would do much worse, due to the energy required to control the switches.
https://www.sandia.gov/app/uploads/sites/210/2022/06/SRC-tal... https://www.youtube.com/watch?v=vALCJJs9Dtw
Happy to answer any questions.
[...many questions and answers...]
He also popped in on this discussion:
Reversible computing with mechanical links and pivots (tennysontbardwell.com)
https://tennysontbardwell.com/blog/2025/04/30/mechanical-com...
- https://news.ycombinator.com/item?id=16007128
Reversible Computing (2016) [video] (youtube.com)
https://www.youtube.com/watch?v=rVmZTGeIwnc
A modern computer makes billions of calculations per second. The calculations have a "forward direction". For example, if the result of x + y is 4, you cannot "compute backwards" and find out what x and y are equal to.
But calculations can be reversible. For instance one could say that x is 3, and then have enough information to run the calculation backwards. This is particularly interesting because physics dictates that computers based on reversible calculations use less energy than ones based on non-reversible calculations.
Lecturer: Postdoc Holger Bock Axelsen from the Department of Computer Science, University of Copenhagen
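The lecture's x + y example can be written out directly (a trivial Python sketch): the sum alone is not invertible, but carrying x through makes the map a bijection.

```python
# (x, y) -> (x, x + y) is reversible: keeping x in the output lets us
# run the addition backwards, unlike knowing only that x + y == 4.

def add_fwd(x, y):
    return x, x + y      # forward: keep x, emit the sum

def add_bwd(x, s):
    return x, s - x      # backward: recover y from the sum

assert add_fwd(3, 1) == (3, 4)
assert add_bwd(*add_fwd(3, 1)) == (3, 1)  # round trip restores inputs
```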
- https://news.ycombinator.com/item?id=42701524
DonHopkins on March 30, 2023, on: Pause Giant AI Experiments: An Open Letter
Tipler's Omega Point cosmology:
https://en.wikipedia.org/wiki/Frank_J._Tipler#The_Omega_Poin...
>The Omega Point cosmology
>The Omega Point is a term Tipler uses to describe a cosmological state in the distant proper-time future of the universe.[6] He claims that this point is required to exist due to the laws of physics. According to him, it is required, for the known laws of physics to be consistent, that intelligent life take over all matter in the universe and eventually force its collapse. During that collapse, the computational capacity of the universe diverges to infinity, and environments emulated with that computational capacity last for an infinite duration as the universe attains a cosmological singularity. This singularity is Tipler's Omega Point.[7] With computational resources diverging to infinity, Tipler states that a society in the far future would be able to resurrect the dead by emulating alternative universes.[8] Tipler identifies the Omega Point with God, since, in his view, the Omega Point has all the properties of God claimed by most traditional religions.[8][9]
>Tipler's argument of the omega point being required by the laws of physics is a more recent development that arose after the publication of his 1994 book The Physics of Immortality. In that book (and in papers he had published up to that time), Tipler had offered the Omega Point cosmology as a hypothesis, while still claiming to confine the analysis to the known laws of physics.[10]
>Tipler, along with co-author physicist John D. Barrow, defined the "final anthropic principle" (FAP) in their 1986 book The Anthropic Cosmological Principle as a generalization of the anthropic principle:
>Intelligent information-processing must come into existence in the Universe, and, once it comes into existence, will never die out.[11]
>One paraphrasing of Tipler's argument for FAP runs as follows: For the universe to physically exist, it must contain living observers. Our universe obviously exists. There must be an "Omega Point" that sustains life forever.[12]
>Tipler purportedly used Dyson's eternal intelligence hypothesis to back up his arguments.
Cellular Automata Machines: A New Environment for Modeling:
https://news.ycombinator.com/item?id=30735397
>It's also very useful for understanding other massively distributed locally interacting parallel systems, epidemiology, economics, morphogenesis (reaction-diffusion systems, like how a fertilized egg divides and specializes into an organism), GPU programming and optimization, neural networks and machine learning, information and chaos theory, and physics itself.
>I've discussed the book and the code I wrote based on it with Norm Margolus, one of the authors, and he mentioned that he really likes rules that are based on simulating physics, and also thinks reversible cellular automata rules are extremely important (and energy efficient in a big way, in how they relate to physics and thermodynamics).
>The book has interesting sections about physical simulations like spin glasses (Ising Spin model of the magnetic state of atoms of solid matter), and reversible billiard ball simulations (like deterministic reversible "smoke and mirrors" with clouds of moving particles bouncing off of pinball bumpers and each other).
Spin Glass:
https://en.wikipedia.org/wiki/Spin_glass
>In condensed matter physics, a spin glass is a magnetic state characterized by randomness, besides cooperative behavior in freezing of spins at a temperature called 'freezing temperature' Tf. Magnetic spins are, roughly speaking, the orientation of the north and south magnetic poles in three-dimensional space. In ferromagnetic solids, component atoms' magnetic spins all align in the same direction. Spin glass when contrasted with a ferromagnet is defined as "disordered" magnetic state in which spins are aligned randomly or not with a regular pattern and the couplings too are random.
Billiard Ball Computer:
https://en.wikipedia.org/wiki/Billiard-ball_computer
>A billiard-ball computer, a type of conservative logic circuit, is an idealized model of a reversible mechanical computer based on Newtonian dynamics, proposed in 1982 by Edward Fredkin and Tommaso Toffoli. Instead of using electronic signals like a conventional computer, it relies on the motion of spherical billiard balls in a friction-free environment made of buffers against which the balls bounce perfectly. It was devised to investigate the relation between computation and reversible processes in physics.
Reversible Cellular Automata:
https://en.wikipedia.org/wiki/Reversible_cellular_automaton
>A reversible cellular automaton is a cellular automaton in which every configuration has a unique predecessor. That is, it is a regular grid of cells, each containing a state drawn from a finite set of states, with a rule for updating all cells simultaneously based on the states of their neighbors, such that the previous state of any cell before an update can be determined uniquely from the updated states of all the cells. The time-reversed dynamics of a reversible cellular automaton can always be described by another cellular automaton rule, possibly on a much larger neighborhood.
>[...] Reversible cellular automata form a natural model of reversible computing, a technology that could lead to ultra-low-power computing devices. Quantum cellular automata, one way of performing computations using the principles of quantum mechanics, are often required to be reversible. Additionally, many problems in physical modeling, such as the motion of particles in an ideal gas or the Ising model of alignment of magnetic charges, are naturally reversible and can be simulated by reversible cellular automata.
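The "unique predecessor" property quoted above is easy to demonstrate with a second-order CA (a Python sketch; the local rule here is arbitrary, since next = rule(present) XOR past is reversible for any rule):

```python
# Second-order reversible CA: each new generation is a function of the
# present generation XORed with the past one, so the past can always be
# recovered as rule(present) XOR next -- every configuration has a
# unique predecessor.

def local_rule(present):
    n = len(present)
    # arbitrary choice: XOR of left and right neighbors, wrapping around
    return [present[(i - 1) % n] ^ present[(i + 1) % n] for i in range(n)]

def step(past, present):
    nxt = [r ^ p for r, p in zip(local_rule(present), past)]
    return present, nxt          # new (past, present) pair

def step_back(present, nxt):
    past = [r ^ x for r, x in zip(local_rule(present), nxt)]
    return past, present

g0, g1 = [0, 1, 0, 0], [0, 0, 1, 0]
a, b = g0, g1
for _ in range(5):
    a, b = step(a, b)            # run forward
for _ in range(5):
    a, b = step_back(a, b)       # run backward
assert (a, b) == (g0, g1)        # initial generations recovered exactly
```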
Theory of Self-Reproducing Automata: John von Neumann's Quantum Mechanical Universal Constructors:
https://news.ycombinator.com/item?id=22738268
[...] Third, the probabilistic quantum mechanical kind, which could mutate and model evolutionary processes, and rip holes in the space-time continuum, which he unfortunately (or fortunately, for the sake of humanity) didn't have time to fully explore before his tragic death.
>p. 99 of "Theory of Self-Reproducing Automata":
>Von Neumann had been interested in the applications of probability theory throughout his career; his work on the foundations of quantum mechanics and his theory of games are examples. When he became interested in automata, it was natural for him to apply probability theory here also. The Third Lecture of Part I of the present work is devoted to this subject. His "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components" is the first work on probabilistic automata, that is, automata in which the transitions between states are probabilistic rather than deterministic. Whenever he discussed self-reproduction, he mentioned mutations, which are random changes of elements (cf. p. 86 above and Sec. 1.7.4.2 below). In Section 1.1.2.1 above and Section 1.8 below he posed the problems of modeling evolutionary processes in the framework of automata theory, of quantizing natural selection, and of explaining how highly efficient, complex, powerful automata can evolve from inefficient, simple, weak automata. A complete solution to these problems would give us a probabilistic model of self-reproduction and evolution. [9]
[9] For some related work, see J. H. Holland, "Outline for a Logical Theory of Adaptive Systems", and "Concerning Efficient Adaptive Systems".
https://www.deepdyve.com/lp/association-for-computing-machin...
https://deepblue.lib.umich.edu/bitstream/handle/2027.42/5578...