
From Wikipedia:

AIXI ['ai̯k͡siː] is a theoretical mathematical formalism for artificial general intelligence. It combines Solomonoff induction with sequential decision theory. AIXI was first proposed by Marcus Hutter in 2000[1], and its main results are proved in Hutter's 2005 book Universal Artificial Intelligence.[2]

Although AIXI itself is non-computable, computable approximations of it are possible, such as AIXItl. Finding approximations to AIXI could be an objective path toward solving AI.
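For concreteness, AIXI's action-selection rule (as given by Hutter) makes this combination explicit: expectimax planning up to a horizon $m$, with the unknown environment replaced by a Solomonoff-style mixture over all programs $q$ for a universal Turing machine $U$, each weighted by $2^{-\ell(q)}$:

$$a_t \;=\; \arg\max_{a_t}\sum_{o_t r_t}\;\cdots\;\max_{a_m}\sum_{o_m r_m}\big[r_t+\cdots+r_m\big]\sum_{q\,:\,U(q,\,a_1\ldots a_m)\,=\,o_1 r_1\ldots o_m r_m} 2^{-\ell(q)}$$

Here $a$, $o$, and $r$ denote actions, observations, and rewards; the incomputability comes from the sum over all programs $q$.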

Is AIXI really a big deal in artificial general intelligence research? Can it be thought of as a central concept for the field? If so, why don't we have more publications on this subject (or maybe we have and I'm not aware of them)?

nbro
rcpinto

4 Answers


"Current artificial intelligence research" is a pretty broad field. From where I sit, in a mostly CS realm, people are focused on narrow intelligence that can do economically relevant work on narrow tasks. (That is, predicting when components will fail, predicting which ads a user will click on, and so on.)

For those sorts of tools, the generality of a formalism like AIXI is a weakness instead of a strength. You don't need to take an AI that could in theory compute anything, and then slowly train it to focus on what you want, when you could just directly shape a tool that is the mirror of your task.

I'm not as familiar with AGI research itself, but my impression is that AIXI is, to some extent, the simplest idea that could work: it takes all of the hard parts and pushes them into computation, so it's 'just an engineering challenge.' (This is the bit about 'finding approximations to AIXI.') The question then becomes: is starting at AIXI and trying to approximate downward a more or less fruitful research path than starting with something small and functional and trying to build up?

My impression is the latter is much more common, but again, I only see a small corner of this space.

Matthew Gray
  • You're not actually addressing the questions in the [current post](https://ai.stackexchange.com/q/145/2444). The first question is "is AIXI really a big deal in _artificial general intelligence research_?". The question strictly asks about the importance of AIXI in **AGI** _research_; it does not ask whether you think other specific tools are better suited to the corresponding tasks than narrowed approximations of AGI models. Another question in the post is: "why don't we have more publications on this subject?" Your post does not answer it. – nbro Sep 14 '17 at 16:23

Is AIXI really a big deal in artificial general intelligence research?

Yes, it is a great theoretical contribution to AGI. AFAIK, it is the most serious attempt to build a theoretical framework or foundation for AGI. Similar works are Schmidhuber's Gödel Machines and the SOAR architecture.

AIXI is an abstract and non-anthropomorphic framework for AGI that builds on top of the reinforcement learning field, but without a few of its usual assumptions (e.g., without the Markov and ergodicity assumptions, which guarantee that the agent can easily recover from any mistakes it made in the past). Even though some optimality properties of AIXI have been proved, it is (Turing) uncomputable (it cannot be run on a computer), so it is of very limited practical usefulness. Nonetheless, in Hutter's book Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability (2005), where several properties of AIXI are rigorously proved, a computable but intractable version of AIXI, AIXItl, is also described. Furthermore, the paper A Monte Carlo AIXI Approximation (2009), by Joel Veness et al., introduces a computable and tractable approximation of AIXI. So, there have been some attempts to make AIXI practically useful.
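As a rough illustration of the spirit of these approximations (not the actual MC-AIXI algorithm, which uses context-tree weighting and UCT search), here is a toy Python sketch: a Bayesian mixture over a small hand-coded model class stands in for the universal prior, and plain Monte Carlo rollouts stand in for the planner. All class and variable names are made up for this example.

```python
import random

class BernoulliModel:
    """Predicts reward 1 with probability p, regardless of the action."""
    def __init__(self, p):
        self.p = p
    def prob(self, action, reward):
        return self.p if reward == 1 else 1.0 - self.p

class ActionModel:
    """Predicts reward 1 (with high probability) iff a fixed 'good' action is played."""
    def __init__(self, good_action):
        self.good_action = good_action
    def prob(self, action, reward):
        hit = 1 if action == self.good_action else 0
        return 0.9 if reward == hit else 0.1

class MixtureAgent:
    def __init__(self, models, actions, horizon=5, rollouts=200):
        self.models = models
        # A uniform prior stands in for the 2^-length(q) Solomonoff prior.
        self.weights = [1.0 / len(models)] * len(models)
        self.actions = actions
        self.horizon = horizon
        self.rollouts = rollouts

    def update(self, action, reward):
        # Bayesian posterior update of the mixture weights.
        new = [w * m.prob(action, reward) for w, m in zip(self.weights, self.models)]
        total = sum(new)
        self.weights = [w / total for w in new]

    def sample_reward(self, action):
        # Sample a model from the posterior, then a reward from that model.
        m = random.choices(self.models, weights=self.weights)[0]
        return 1 if random.random() < m.prob(action, 1) else 0

    def act(self):
        # Estimate each action's return by Monte Carlo rollouts
        # (a crude stand-in for the UCT planner in MC-AIXI).
        best_action, best_value = None, float("-inf")
        for a in self.actions:
            total = 0
            for _ in range(self.rollouts):
                total += self.sample_reward(a)
                for _ in range(self.horizon - 1):
                    total += self.sample_reward(random.choice(self.actions))
            value = total / self.rollouts
            if value > best_value:
                best_action, best_value = a, value
        return best_action

# Tiny demo: the true environment deterministically rewards action 1.
agent = MixtureAgent(
    models=[BernoulliModel(0.5), ActionModel(0), ActionModel(1)],
    actions=[0, 1],
)
for step in range(20):
    a = agent.act()
    r = 1 if a == 1 else 0
    agent.update(a, r)
print("posterior weights:", agent.weights)  # mass concentrates on ActionModel(1)
```

After a few interaction steps the posterior mass concentrates on the model that matches the true environment; scaling this mixture-plus-planning loop up to a universal model class is what the real approximations attempt.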

The article What is AIXI? — An Introduction to General Reinforcement Learning (2015), by Jan Leike, who is one of the contributors to the development and evolution of the AIXI framework, gives a gentle introduction to the AIXI agent. See also The AIXI Architecture at the Stanford Encyclopedia of Philosophy for a possibly gentler introduction to AIXI.

Can it be thought of as a central concept for the field?

Yes, the introduction of AIXI and the related research have contributed to the evolution of the AGI field. There have been several discussions and published papers since its introduction in 2000 by Hutter in the paper A Theory of Universal Artificial Intelligence based on Algorithmic Complexity.

See e.g. section 7, "Examples of Superintelligences", of the paper Artificial General Intelligence and the Human Mental Model (2012), by Roman V. Yampolskiy and Joshua Fox. See also https://wiki.lesswrong.com/wiki/AIXI, which discusses a few problems related to AIXI that need to be solved or possibly avoided in future AGI frameworks. Furthermore, see also this and this article.

If so, why don't we have more publications on this subject (or maybe we have and I'm not aware of them)?

There have been several publications, mainly by Marcus Hutter and associated researchers. You can see Marcus Hutter's publications on the following web page: http://www.hutter1.net/official/publ.htm.

If you are interested in contributing to this theory, there are several ways. If you are mathematically well educated, you can attempt to solve some of the open problems described here (which are also listed in Hutter's 2005 book mentioned above). Furthermore, you can contribute new approximations, or improvements to existing approximations, of the AIXI agent. Finally, you can build a new AGI framework that avoids the problems associated with the AIXI framework. See also the projects promoted by Hutter. Before attempting to introduce a new framework (provided you are capable of it), it may be a good idea to also take into account e.g. Gödel Machines and related work.

I think that this theory has not attracted more people probably because it is highly technical and mathematical (so it is not easy to understand unless you have a solid background in reinforcement learning, probability theory, etc.). I also think that most people (in the AI community) are not interested in theories, and are mainly guided by practical and useful results.

nickw
nbro

AIXI is really a conceptual framework. All the hard work of actually compressing the environment still remains.
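To make "compressing the environment" concrete, here is a minimal, hedged Python sketch of the compression/prediction link that AIXI relies on, using zlib as a crude, computable stand-in for Kolmogorov complexity (the function names are illustrative, not from any real AIXI implementation):

```python
import zlib

def compressed_length(data: bytes) -> int:
    # Length of the zlib-compressed data: a computable (and very crude)
    # stand-in for Kolmogorov complexity.
    return len(zlib.compress(data, 9))

def predict_next(history: str, alphabet: str) -> str:
    # Predict the symbol whose concatenation with the history compresses
    # best: a shorter code corresponds to higher algorithmic probability.
    return min(alphabet, key=lambda c: compressed_length((history + c).encode()))

history = "abab" * 50
print(predict_next(history, "ab"))  # -> 'a': the patterned continuation never compresses worse
```

The toy is trivial; the point is how much is hidden in `compressed_length`. Replacing zlib with a compressor that actually captures the structure of rich environments is precisely the hard work that remains.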

To further discuss the question raised in Matthew Gray's answer: given our current limited ability to represent complex environments, it seems to me that it doesn't make a lot of practical difference whether you start with AIXI as defining the 'top' of the system and work down (e.g., via supposedly generalized compression methods), or start at the 'bottom' and try to solve problems in a single domain via domain-specific methods that (you hope) can subsequently be abstracted to provide cross-domain compression.

NietzscheanAI
  • The second paragraph is only your opinion. You give no argument or explanation for why you think that. For me, "given our current limited level of ability to represent complex environments" is definitely not a sufficient explanation or argument. – nbro Sep 14 '17 at 16:26
  • @nbro To quote a famous AI researcher: "We have yet to represent *even a single concept* on a computer", certainly not with the kind of malleability that comes naturally to humans. Thus, in practice, it's difficult to determine the utility of AIXI, because we don't have a strong notion of the kinds of representations it needs to manipulate, or how it could usefully manipulate them. – NietzscheanAI Sep 14 '17 at 20:03

AIXI is important: the reinforcement learning we already see is a smaller, classical version of the full intractable model, which would require quantum computation to come to fruition.