- Deep networks these days can have hundreds of layers. Any non-digital system is going to pick up noise and gain variation at each stage, and those errors compound across the depth of the network. It might be possible to work around this with threshold circuits.
The problem then becomes training. The algorithm of choice is backpropagation, which requires computing derivatives across the whole network. Training an analog system directly would mean perturbing each parameter in turn and re-running the batch to estimate the slope numerically. That's impractical for large networks, since training usually takes on the order of a billion update steps.
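To see why that perturbation approach scales badly, here's a toy sketch (the network, sizes, and step size are all illustrative, not from any real analog system): estimating the gradient by nudging each parameter costs one extra loss evaluation per scalar parameter, so an N-parameter model needs N+1 evaluations for a single update, versus one forward and one backward pass with backpropagation.

```python
import numpy as np

def forward(params, x):
    # Toy stand-in for an "analog" network: two linear stages with a tanh.
    w1, w2 = params
    return np.tanh(x @ w1) @ w2

def loss(params, x, y):
    return np.mean((forward(params, x) - y) ** 2)

def finite_diff_grad(params, x, y, eps=1e-4):
    # Forward-difference gradient: one extra loss evaluation per
    # scalar parameter, i.e. O(N) passes for N parameters.
    base = loss(params, x, y)
    grads, n_evals = [], 1  # 1 for the baseline evaluation
    for p in params:
        g = np.zeros_like(p)
        for idx in np.ndindex(p.shape):
            old = p[idx]
            p[idx] = old + eps            # perturb one parameter
            g[idx] = (loss(params, x, y) - base) / eps
            p[idx] = old                  # restore it
            n_evals += 1
        grads.append(g)
    return grads, n_evals

rng = np.random.default_rng(0)
params = [rng.normal(size=(8, 16)), rng.normal(size=(16, 1))]
x, y = rng.normal(size=(32, 8)), rng.normal(size=(32, 1))
grads, n_evals = finite_diff_grad(params, x, y)
print(n_evals)  # 8*16 + 16*1 + 1 = 145 evaluations for ONE update
```

Even this tiny 144-parameter toy needs 145 loss evaluations per step; scale that to billions of parameters times a billion updates and the cost is hopeless, which is why analog-training research looks at in-situ or backprop-like schemes instead.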
- The term I've heard for this sort of thing is "Physical Neural Networks," or PNNs. My impression is that one of the big things holding them back is manufacturing tolerance: because we can't fabricate components to perfect precision, you can't train a single model once and reuse it the way you can with digital logic. Even if you get close, every individual circuit needs some amount of tuning. And we haven't worked out great ways to train them in the first place.
There's a lot of research going on in this space though, because yeah, nature can solve certain mathematical problems more efficiently than digital systems.
There's a decent review article that came out recently: https://www.nature.com/articles/s41586-025-09384-2 or https://arxiv.org/html/2406.03372v1