Asymptotic non-learnability of universal agents with computable horizon functions
- Source :
- Theoretical Computer Science, Feb. 2013, Vol. 473, p. 149-156. 8 p.
- Publication Year :
- 2013
Abstract
- Finding a universal artificially intelligent agent is an old dream of AI scientists. Solomonoff Induction was one big step towards this goal, giving a universal solution to the general problem of sequence prediction by defining a universal prior distribution. Hutter defined the AIXI model, which extends the latter to the reinforcement learning framework, where almost all, if not all, AI problems can be formulated. However, new difficulties arise because the agent is now active, whereas it is only passive in the sequence-prediction case. This makes proving AIXI’s optimality difficult. In fact, we prove that the current definition of AIXI can sometimes be suboptimal in a certain sense, but that this behavior is still the most rational one, hence emphasizing the difficulty of universal reinforcement learning. [Copyright © Elsevier]
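For context, the abstract's references to a "universal prior distribution" and to the AIXI model correspond to standard definitions from Solomonoff's and Hutter's work; the LaTeX sketch below states them in the usual notation (U a universal monotone Turing machine, \ell(p) the length of program p, m_k the planning horizon at cycle k) and is background, not material quoted from this record:

    % Solomonoff's universal prior: weight each program p by 2^{-\ell(p)},
    % summing over programs whose output begins with the string x.
    M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

    % AIXI's expectimax action selection at cycle k with horizon m_k:
    % maximize expected cumulative reward under the universal mixture over
    % all environments q consistent with the interaction history.
    a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_{m_k}} \sum_{o_{m_k} r_{m_k}}
      \bigl[\, r_k + \cdots + r_{m_k} \,\bigr]
      \sum_{q \,:\, U(q,\, a_{1:m_k}) = o_1 r_1 \ldots o_{m_k} r_{m_k}} 2^{-\ell(q)}

The paper's subject is computable horizon functions, so the dependence of the horizon m_k on the cycle k is exactly the quantity at stake in the suboptimality result.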
Details
- Language :
- English
- ISSN :
- 0304-3975
- Volume :
- 473
- Database :
- Academic Search Index
- Journal :
- Theoretical Computer Science
- Publication Type :
- Academic Journal
- Accession Number :
- 85282376
- Full Text :
- https://doi.org/10.1016/j.tcs.2012.10.014