3 points by liamdgray 13 hours ago | 4 comments
- Abstract: "Large Language Models (LLMs) exhibit remarkable capabilities but suffer from apparent precision loss, reframed here as information spreading. This reframing shifts the problem from computational precision to an information-theoretic communication issue. We address the K:V and V:K memory problem in LLMs by introducing HDRAM (Holographically Defined Random Access Memory), a symbolic memory framework treating transformer latent space as a spread-spectrum channel. Built upon hypertokens, structured symbolic codes integrating classical error-correcting codes (ECC), holographic computing, and quantum-inspired search, HDRAM recovers distributed information through principled despreading. These phase-coherent memory addresses enable efficient key-value operations and Grover-style search in latent space. By combining ECC grammar with compressed sensing and Krylov subspace alignment, HDRAM significantly improves associative retrieval without architectural changes, demonstrating how Classical-Holographic-Quantum-inspired (CHQ) principles can fortify transformer architectures."
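- For readers unfamiliar with the spread-spectrum framing the abstract gestures at, here is a minimal toy sketch of the core idea: bind each value to a pseudo-random code derived from its key, superpose everything into one trace vector, and recover a value by correlating ("despreading") with the key's code. This is an illustrative assumption-laden sketch, not the paper's HDRAM implementation; the dimension, the hash-based code construction, and the correlation decoder are all my stand-ins for the structured ECC codewords and compressed-sensing/Krylov recovery the authors describe.

```python
import zlib
import numpy as np

d = 4096  # latent dimension (illustrative assumption)
rng = np.random.default_rng(0)

def spreading_code(key: str) -> np.ndarray:
    # Derive a deterministic pseudo-random +/-1 "chip sequence" from the key.
    # A real system would use structured ECC codewords, not a raw hash.
    seed = zlib.crc32(key.encode())
    return np.random.default_rng(seed).choice([-1.0, 1.0], size=d)

def write(memory: np.ndarray, key: str, value: np.ndarray) -> np.ndarray:
    # "Spread": bind the value elementwise to the key's code, then
    # superpose it onto the shared memory trace.
    return memory + spreading_code(key) * value

def read(memory: np.ndarray, key: str) -> np.ndarray:
    # "Despread": rebinding with the same +/-1 code makes the target value
    # coherent while the other entries stay pseudo-random crosstalk.
    return spreading_code(key) * memory

# Store two values in one superposed trace, then recover one of them.
v1, v2 = rng.standard_normal(d), rng.standard_normal(d)
mem = np.zeros(d)
mem = write(mem, "alpha", v1)
mem = write(mem, "beta", v2)

est = read(mem, "alpha")
cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print("cosine(est, v1) =", cos(est, v1))  # high: target recovered coherently
print("cosine(est, v2) =", cos(est, v2))  # near zero: distractor stays noise
```

  With two stored entries, the recovered vector's cosine with the target is roughly 0.7 and near zero with the distractor; the crosstalk grows with the number of superposed entries, which is the capacity problem the paper's ECC and despreading machinery is aimed at.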
- I ran across this paper because the recent "subliminal learning" results reminded me of holography. So I asked o4-mini-high to explore potential relationships. It led me to this. https://chatgpt.com/share/688f863d-1ec0-800f-a0ce-c93b649a45...
- author here, lmk if you have any questions!
- Pretty gross snake oil.