Building the science of predictive systems for AI safety

Author

Adam Shai

Published

July 16, 2024

Abstract
What computational structure are we building into large language models when we train them on next-token prediction? Here, we present evidence that this structure is given by the meta-dynamics of belief updating over hidden states of the data-generating process. Leveraging the theory of optimal prediction from Computational Mechanics, we anticipate and then find that belief states are linearly represented in the residual stream of transformers, even in cases where the predicted belief state geometry has highly nontrivial fractal structure. Furthermore, we demonstrate that the inferred belief states contain information about the entire future, beyond the local next-token prediction that the transformers are explicitly trained on. We will then zoom out to quickly mention a number of fun and interesting topics that Computational Mechanics touches on, and follow the conversation wherever it goes.
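
To make the abstract's central object concrete, below is a minimal sketch of belief updating over the hidden states of a simple hidden Markov process (the "mixed-state presentation" from Computational Mechanics). The 3-state, 3-symbol process and its parameter values are illustrative assumptions loosely modeled on the Mess3 example discussed in this line of work, not the exact construction; the point is the Bayesian update rule $b' \propto b\,T^{(s)}$, whose iterates trace out the belief state geometry on the probability simplex.

```python
import numpy as np

def make_process(x=0.05, a=0.85):
    """Labeled transition matrices T[s][i, j] = P(emit s, move i -> j | state i).

    Illustrative 3-state, 3-symbol HMM; parameter values are assumptions,
    not the exact Mess3 parameters.
    """
    b = (1.0 - a) / 2.0          # off-diagonal emission probability
    stay = 1.0 - 2.0 * x         # probability of keeping the same hidden state
    T = np.zeros((3, 3, 3))
    for s in range(3):
        for i in range(3):
            for j in range(3):
                emit = a if s == i else b      # bias toward emitting the current state's symbol
                move = stay if j == i else x   # mostly remain in the current hidden state
                T[s, i, j] = emit * move
    return T

def update_belief(belief, T_s):
    """Bayesian belief update after observing symbol s: b' proportional to b @ T^(s)."""
    unnorm = belief @ T_s
    return unnorm / unnorm.sum()

rng = np.random.default_rng(0)
T = make_process()

# Sample a long symbol sequence and track the optimal observer's belief
# over hidden states after each observation.
state = 0
belief = np.full(3, 1.0 / 3.0)   # start from the uniform prior
beliefs = []
for _ in range(50_000):
    joint = T[:, state, :].ravel()            # joint distribution over (symbol, next state)
    idx = rng.choice(9, p=joint / joint.sum())
    s, state = divmod(idx, 3)
    belief = update_belief(belief, T[s])
    beliefs.append(belief)
beliefs = np.array(beliefs)

# Project the probability simplex onto the plane; for suitable parameters the
# resulting point cloud has fractal structure, analogous to the Mess3 example.
corners = np.array([[0.0, 1.0], [-np.sqrt(3) / 2, -0.5], [np.sqrt(3) / 2, -0.5]])
xy = beliefs @ corners
```

The claim that belief states are *linearly* represented can be tested with a simple regression probe: fit an affine map from residual-stream activations to the ground-truth belief states and compare the geometry of the predictions to the simplex picture above. In this sketch, `acts` is a hypothetical `(n_tokens, d_model)` array of residual-stream activations aligned token-by-token with `beliefs`.

```python
def fit_linear_probe(acts, beliefs):
    """Least-squares affine probe from activations to belief states."""
    X = np.hstack([acts, np.ones((len(acts), 1))])   # append a bias column
    W, *_ = np.linalg.lstsq(X, beliefs, rcond=None)
    return W  # predicted beliefs are X @ W
```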