
**AIXI** is a mathematical formalism for a hypothetical (super)intelligence, developed by Marcus Hutter (2005, 2007). AIXI is not computable, and so does not serve as a design for a real-world AI, but is considered a valuable theoretical illustration with both positive and negative aspects (things AIXI would be able to do and things it arguably couldn't do).

AIXI can be viewed as the border between AI problems that would be 'simple' to solve using unlimited computing power and problems which are structurally 'complicated'.

Hutter (2007) describes AIXI as a combination of decision theory and algorithmic information theory: "Decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental prior probability distribution is known. Solomonoff's theory of universal induction formally solves the problem of sequence prediction for unknown prior distribution. We combine both ideas and get a parameterless theory of universal Artificial Intelligence."
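In one common presentation of this combination (following Hutter's papers; notation varies slightly between sources), AIXI's action choice at cycle k is a single expectimax expression over all programs q for a universal Turing machine U, with ℓ(q) the length of q:

```latex
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
  \big[\, r_k + \cdots + r_m \,\big]
  \sum_{q \,:\; U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here the inner sum weights every program consistent with the observed percept sequence by the universal prior 2^(-ℓ(q)), and the alternating max/sum structure is the expectimax over future actions and percepts up to the horizon m.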

To do this, AIXI guesses at a probability distribution for its environment, using Solomonoff induction, a formalization of Occam's razor: simpler computations are more likely *a priori* to describe the environment than more complex ones. This probability distribution is then Bayes-updated by how well each model fits the evidence (more precisely, by throwing out all computations which have not exactly fit the environmental data so far, which for technical reasons is roughly equivalent). AIXI then calculates the expected reward of each action it might choose, weighting the likelihood of possible environments as above. It chooses the best action by extrapolating its actions out to its future time horizon recursively, under the assumption that at each future step it will again choose the best possible action by the same procedure.
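The procedure above can be sketched in miniature. This is an illustrative toy only: real AIXI sums over *all* programs for a universal Turing machine and is uncomputable, whereas here the model class is a small hand-picked list of deterministic environments, and Solomonoff's 2^(-length) prior becomes an explicit per-model complexity weight. All names below are hypothetical, not Hutter's notation.

```python
from typing import Callable, Dict, List, Tuple

# An environment maps the full action history to (observation, reward).
Env = Callable[[Tuple[int, ...]], Tuple[int, float]]

def expectimax(models: List[Tuple[float, Env]], acts: Tuple[int, ...],
               actions: List[int], horizon: int) -> Tuple[int, float]:
    """Return (best action, expected total reward) over the remaining horizon.

    Conditioning on each possible percept keeps only the models that
    predicted it: the "throw out inconsistent computations" step.
    """
    best_a, best_v = actions[0], float("-inf")
    for a in actions:
        h = acts + (a,)
        total_w = sum(w for w, _ in models)
        # Group surviving models by the percept they predict after action a.
        outcomes: Dict[Tuple[int, float], List[Tuple[float, Env]]] = {}
        for w, env in models:
            outcomes.setdefault(env(h), []).append((w, env))
        val = 0.0
        for (_, r), consistent in outcomes.items():
            p = sum(w for w, _ in consistent) / total_w
            future = 0.0
            if horizon > 1:
                # Recurse assuming the agent again acts optimally later on.
                _, future = expectimax(consistent, h, actions, horizon - 1)
            val += p * (r + future)
        if val > best_v:
            best_a, best_v = a, val
    return best_a, best_v

# Two rival explanations of the world; the simpler one dominates the prior.
simple = lambda h: (0, 1.0 if h[-1] == 1 else 0.0)   # complexity ~1 bit
tricky = lambda h: (0, 1.0 if h[-1] == 0 else 0.0)   # complexity ~3 bits
models = [(2.0 ** -1, simple), (2.0 ** -3, tricky)]

action, value = expectimax(models, (), actions=[0, 1], horizon=1)
# The agent bets on the simpler model and picks action 1.
```

With these weights, the "simple" model receives prior probability 0.8 and the "tricky" model 0.2, so the expected reward of action 1 is 0.8 versus 0.2 for action 0, and the agent picks action 1.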

The agent's intelligence is defined by its expected reward across all environments, weighting their likelihood by their complexity.
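One formalization of this idea is Legg and Hutter's universal intelligence measure (notation varies across papers), where E is a class of computable environments, K(μ) is the Kolmogorov complexity of environment μ, and V_μ^π is the expected total reward of policy π in μ:

```latex
\Upsilon(\pi) \;:=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```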

- R.V. Yampolskiy, J. Fox (2012) Artificial General Intelligence and the Human Mental Model. In Amnon H. Eden, Johnny Søraker, James H. Moor, Eric Steinhart (Eds.), The Singularity Hypothesis. The Frontiers Collection. London: Springer.
- M. Hutter (2007) Universal Algorithmic Intelligence: A mathematical top->down approach. In Goertzel & Pennachin (eds.), Artificial General Intelligence, 227-287. Berlin: Springer.
- M. Hutter (2005) Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Berlin: Springer.
- J. Veness, K.S. Ng, M. Hutter, W. Uther and D. Silver (2011) A Monte-Carlo AIXI Approximation, *Journal of Artificial Intelligence Research*, 40, 95-142.
