Item description for Universal Artificial Intelligence: Sequential Decisions Based On Algorithmic Probability by Marcus Hutter
This book presents sequential decision theory from a novel algorithmic information theory perspective. While the former is suited for active agents in known environments, the latter is suited for passive prediction in unknown environments. The book introduces these two different ideas and removes their limitations by unifying them into one parameter-free theory of an optimal reinforcement learning agent embedded in an unknown environment. Most AI problems can easily be formulated within this theory, reducing the conceptual problems to purely computational ones. Considered problem classes include sequence prediction, strategic games, function minimization, and reinforcement and supervised learning. The discussion includes formal definitions of intelligence order relations, the horizon problem, and relations to other approaches. One intention of this book is to excite a broader AI audience about abstract algorithmic information theory concepts, and conversely to inform theorists about exciting applications to AI.
Est. Packaging Dimensions: Length: 0.75" Width: 6.25" Height: 9.25" Weight: 1.4 lbs.
Release Date Nov 29, 2004
ISBN 3540221395 ISBN13 9783540221395
More About Marcus Hutter
Marcus Hutter received his master's in computer science in 1992 at the Technical University of Munich, Germany. After his PhD in theoretical particle physics he developed algorithms at a medical software company for 5 years. For the past four years he has been working as a researcher at the AI institute IDSIA in Lugano, Switzerland. His current interests are centered around reinforcement learning, algorithmic information theory and statistics, universal induction schemes, adaptive control theory, and related areas.
IDSIA (Istituto Dalle Molle di Studi sull'Intelligenza Artificiale) is a non-profit research institute for artificial intelligence, affiliated with both the University of Lugano and SUPSI. It focuses on machine learning (artificial neural networks, reinforcement learning), optimal universal artificial intelligence and optimal rational agents, operations research, complexity theory, and robotics. In Business Week's "X-Lab Survey," IDSIA was ranked fourth in the category "Computer Science - Biologically Inspired," after much larger institutions. IDSIA also ranked in the top 10 of the broader category "Artificial Intelligence."
Reviews - What do customers think about Universal Artificial Intelligence: Sequential Decisions Based On Algorithmic Probability?
A gem under a pile of unnecessary mathematical obfuscation Jan 14, 2008
This is probably the most rigorous attempt to formalize AI. The book succeeds in presenting state-of-the-art AI theory from a technical point of view, but it neglects intuition and is difficult for the novice to read, and is thus inaccessible to a wider audience. I will try to explain the ideas in a less technical manner in this review.
The main idea of the book is to combine classical control theory concepts with Bayesian inference and algorithmic information theory. The author avoids struggling with anthropocentric aspects of intelligence (which are subject to a fierce debate that is sometimes of a metaphysical nature) by defining intelligent agents as utility-maximizing systems. The core ideas are, in a nutshell:
1) Goal: Build a system with an I/O stream interfaced with an environment, where inputs are observations and outputs are actions, that optimizes some cumulative reward function over the observations. Two ingredients are necessary: model the a priori unknown environment and solve for the reward-maximizing actions.
2) Model: This is a probability distribution over future observations conditioned on the past (actions and observations). Instead of using any particular domain-specific model, he uses a weighted average of "all" models. By "all models", the set of all mechanically calculable models is meant, i.e. the set of all algorithmically approximable probability models.
3) Policy: Given the model, all possible futures can be simulated (up to a predefined horizon) by trying out all possible interaction paths. Essentially, a huge decision tree is constructed. Having this information, it is easy to solve for the best policy. Just pick at each step the action that promises the highest expected future reward. This is done using Bellman's optimality equations.
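A toy sketch can make the model-plus-planner structure above concrete. The following is emphatically not AIXI itself (whose model class is the set of all semicomputable environments and which is incomputable), but a hypothetical three-model Bayesian mixture planned over with a short expectimax recursion; all model names and numbers are illustrative:

```python
# Toy sketch of "mixture model + expectimax planning" (not AIXI itself).

ACTIONS = (0, 1)
PERCEPTS = (0, 1)  # the percept doubles as the reward here: 0 or 1

# Each "model" maps an action to P(percept = 1). In AIXI the class would be
# all semicomputable environments; here it is just three hand-picked rules.
MODELS = {
    "copy":   lambda a: 0.9 if a == 1 else 0.1,   # rewards action 1
    "invert": lambda a: 0.1 if a == 1 else 0.9,   # rewards action 0
    "noise":  lambda a: 0.5,                      # ignores the action
}

def mixture_prob(weights, action, percept):
    """Probability of a percept under the weighted mixture of models."""
    p1 = sum(w * MODELS[m](action) for m, w in weights.items())
    return p1 if percept == 1 else 1.0 - p1

def expectimax(weights, horizon):
    """Expected reward-sum over `horizon` steps: maximize over actions,
    average over percepts under the mixture, Bayes-update, recurse."""
    if horizon == 0:
        return 0.0, None
    best_value, best_action = float("-inf"), None
    for a in ACTIONS:
        value = 0.0
        for o in PERCEPTS:
            p = mixture_prob(weights, a, o)
            if p == 0.0:
                continue
            # Bayes' rule: reweight each model by its likelihood of (a, o).
            posterior = {m: w * (MODELS[m](a) if o == 1 else 1 - MODELS[m](a))
                         for m, w in weights.items()}
            z = sum(posterior.values())
            posterior = {m: w / z for m, w in posterior.items()}
            future, _ = expectimax(posterior, horizon - 1)
            value += p * (o + future)   # reward now + expected future reward
        if value > best_value:
            best_value, best_action = value, a
    return best_value, best_action

prior = {m: 1 / 3 for m in MODELS}          # uniform prior over the class
value, action = expectimax(prior, horizon=3)
print(action, round(value, 3))
```

The recursion builds exactly the decision tree described in point 3, alternating a max over actions with an expectation over percepts; the Bayes update inside the loop is what lets later actions exploit what earlier percepts reveal.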
Why does this theoretically work? If the environment is equal to one of the sub-models in the mixture, then the combined model's posterior estimation converges to the environment. The model is updated step by step using Bayes' rule. Since the model becomes more accurate, the policy based on it converges to the optimum. This is certainly very impressive. Algorithmic information theory is the main tool to derive the theory. Most convergence results depend on the complexities of the models.
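The convergence argument can be illustrated with a minimal hypothetical example: when the true environment is a member of the model class, Bayes' rule concentrates the mixture weight on it. Everything below (the Bernoulli class, the parameters) is the reviewer's toy illustration, not from the book:

```python
import random

random.seed(0)

# A Bernoulli "environment" sits inside a finite model class; the posterior
# weight of the true model approaches 1 as observations accumulate.
TRUE_P = 0.8                       # environment: P(bit = 1)
model_class = [0.2, 0.5, 0.8]      # candidate parameters (contains the truth)
weights = [1 / 3] * 3              # uniform prior over the class

for _ in range(200):
    bit = 1 if random.random() < TRUE_P else 0
    # Bayes' rule: multiply each weight by that model's likelihood, renormalize.
    likelihoods = [p if bit == 1 else 1 - p for p in model_class]
    weights = [w * l for w, l in zip(weights, likelihoods)]
    z = sum(weights)
    weights = [w / z for w in weights]

print([round(w, 4) for w in weights])  # weight on the true model dominates
```

In the book's setting the weights start at roughly 2^-K(model), so simpler environments get more prior mass, and the convergence bounds depend on the complexity of the true environment.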
Does it work in practice? Unfortunately, the presented solution cannot be implemented in practice since it is incomputable. Even worse, there is at the moment no principled way to downscale his approach (and make it practical), since we don't know how to simplify (a) the mixture model and (b) the computation of the policy. The author makes these points very clear in his book. I believe that these are the main challenges for future AI research.
The PROs: This is the first time I have seen a unified, formal and mathematically sound presentation of artificial intelligence. The proposed theoretical solution provides invaluable insight into the nature of learning and acting, hidden even in very subtle details of his approach and his equations. Whereas you might feel that classical AI or commonplace machine learning theory looks like a patchwork of interesting concepts and methods, here (almost) everything fits nicely together into a coherent and elegant solution. Once you have studied and understood this book (which has taken years in my case), it is very difficult to go back to the traditional approaches to AI.
The CONTRAs: I do, however, have some critiques of this book. Hutter is a brilliant mathematician and sharp thinker. Unfortunately his writing style is very formal and he sometimes neglects intuition. The book introduces difficult notation (although some of it pays off in the long run) that ends up obfuscating simple ideas. The overly mathematical style has certainly not helped the spread of the proposal.
To summarize, this book represents a giant leap in the theory of AI. If you have advanced mathematical training and enough patience to study it, then this book is for you. For the more practically oriented researcher in AI, I recommend waiting until a more user-friendly version of this book is published.
Axiomatic Artificial Intelligence Theories May 24, 2007
Hutter's book is the most recent attempt to put artificial intelligence on a firm mathematical footing. (For an earlier effort see, for instance, Theory of Problem Solving: An Approach to Artificial Intelligence, Ranan Banerji, Elsevier, 1969) If successful, such a foundation would permit us to elaborate and explore intelligence by applying the formal methods of mathematics (e.g., theorem proving).
Hutter starts from Werbos' definition of intelligence: "a system to handle all of the calculations from crude inputs through to overt actions in an adaptive way so as to maximize some measure of performance over time" which seems reasonable. (P. J. Werbos, IEEE Trans. Systems, Man, and Cybernetics, 1987, pg 7)
Finding all of the proper axioms for such a mathematical theory of intelligence is still an open and difficult problem, however. Hutter places great stock in Occam's razor. But there is experimental evidence that Occam's razor is incorrect. (The Myth of Simplicity, M. Bunge, Prentice-Hall, 1963) See also Machine Learning, Tom Mitchell, McGraw-Hill, 1997, pp. 65-66. Rather than saying that nature IS simple, I believe it is more correct to say that we are forced to approximate nature with simple models because "our" (both human and AI) memory and processing power are limited.
I am also unsure that we should assume a scalar utility. In Theory of Games and Economic Behavior (Princeton U. Press, 1944, pg 19-20) von Neumann and Morgenstern said: "We have conceded that one may doubt whether a person can always decide which of two alternatives...he prefers...It leads to what may be described as a many-dimensional vector concept of utility." Vector utility (value pluralism) has been employed in AI in my Asa H system (Trans. Kansas Academy of Science, vol. 109, # 3/4, pg 159, 2006)
I suppose, then, that I object to the word "Universal" in Hutter's title. I think that he is exploring only one kind of intelligence and that there are others.
Very ambitious project. Sep 27, 2005
This book differs from most books on the theoretical formulations of artificial intelligence in that it attempts to give a more rigorous accounting of machine learning and to rank machines according to their intelligence. To accomplish this ranking, the author introduces a concept called `universal artificial intelligence,' which is constructed in the context of algorithmic information theory. In fact, the book could be considered to be a formulation of artificial intelligence from the standpoint of algorithmic information theory, and it is strongly dependent on such notions as Kolmogorov complexity, the Solomonoff universal prior, Martin-Löf random sequences and Occam's razor. These are all straightforward mathematical concepts to work with; the only issue for researchers is their efficacy in giving a useful notion of machine intelligence.
The author begins the book with a "short tour" of what will be discussed, and this serves as helpful motivation for the reader. The reader is expected to have a background in algorithmic information theory, but the author does give a brief review of it in chapter two. In addition, a background in sequential decision theory and control theory would allow a deeper appreciation of the author's approach. In chapter four, he even gives a dictionary that maps concepts in artificial intelligence to those in control theory. For example, an `agent' in AI is a `controller' in control theory, a `belief state' in AI is an `information state' in control theory, and `temporal difference learning' in AI is `dynamic programming' or `value/policy iteration' in control theory. Most interestingly, this mapping illustrates the idea that notions of learning, exploration, and adaptation that one views as "intelligent" can be given interpretations that one does not normally view as intelligent. The re-interpretation of `intelligent' concepts as `unintelligent' ones is typical in the history of AI and is no doubt responsible for the belief that machine intelligence has not yet been achieved.
The author's formulations are very dependent on the notion of Occam's razor with its emphasis on simple explanations. The measurement of complexity that is used in algorithmic information theory is that of Kolmogorov complexity, which one can use to measure the a priori plausibility of a particular string of symbols. The author, though, wants to use the `Solomonoff universal prior', which is defined as the probability that the output of a universal Turing machine starts with the string when presented with fair coin tosses on the input tape. As the author points out, this quantity is however not a probability measure, but only a `semimeasure', since it is not normalized to 1, but he shows how to bound it by expressions involving the Kolmogorov complexity.
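In standard notation (the reviewer's gloss, following common algorithmic information theory conventions rather than quoting the book), the Solomonoff prior of a finite string x is

$$ M(x) \;:=\; \sum_{p\,:\,U(p)=x*} 2^{-\ell(p)}, $$

the total probability that a universal monotone Turing machine U outputs something beginning with x when fed a fair-coin-flip program p of length ℓ(p). Since the shortest program for x contributes one term of the sum, M(x) ≥ 2^{-K(x)} up to the choice of complexity variant and additive constants; and M is only a semimeasure because U may fail to halt, so the probabilities need not sum to 1.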
The author also makes use of the agent model, but where now the agent is assumed to be acting in a probabilistic environment, with which it interacts in a series of cycles. In the k-th cycle, the agent performs an action, which then results in a perception, and the (k+1)-th cycle then begins. The goal of the agent is to maximize future rewards, which are provided by the environment. The author then studies the case where the probability distribution of the environment is known, in order to motivate the notion of a `universal algorithmic agent (AIXI).' This type of agent does not attempt to learn the true probability distribution of the environment, but instead replaces it by a generalized universal prior that converges to it. This prior is a generalization of the Solomonoff universal prior and involves taking a weighted sum over all environments (programs) that give a certain output given the history of a particular sequence presented to it. The AIXI system is uniquely defined by the universal prior and the relation specifying its outputs. The author is careful to point out that the output relation is dependent on the lifespan or initial horizon of the agent. Other than this dependence the AIXI machine is a system that does not have any adjustable parameters.
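Schematically, the AIXI action in cycle k (with horizon m) maximizes expected reward-to-go under the universal mixture ξ; this is the reviewer's paraphrase of the defining equation, with details of the conditioning suppressed:

$$ a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \bigl(r_k + \cdots + r_m\bigr)\, \xi(o_1 r_1 \ldots o_m r_m \mid a_1 \ldots a_m), \qquad \xi \;=\; \sum_{\nu} 2^{-K(\nu)}\, \nu, $$

where the sum defining ξ runs over the environments ν in the model class, weighted by their complexity K(ν). The alternation of max (over the agent's actions) and expectation (over percepts and rewards) is the expectimax structure, and the horizon m is the one adjustable choice mentioned above.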
The author's approach is very ambitious, for he attempts to define when an agent or machine could be considered to be `universally optimal.' Such a machine would be able to find the solution to any problem (with the assumption that it is indeed solvable) and be able to learn any task (with the assumption that it is learnable). The process or program by which the machine does this is `optimal' in the sense that no other program can solve or learn significantly faster than it can. The machine is `universal' in that it is independent of the true environment, and thus can function in any domain. This means that a universal optimal machine could perform financial time series prediction as well as discover and prove new results in mathematics, and do so better than any other machine. The notion of a universally optimal machine is useful in the author's view since it allows the construction of an `intelligence order relation' on the "policies" of a machine. A policy is thought of as a program that takes information and delivers it to the environment. A policy p is `more intelligent' than a policy p' if p delivers a higher expected reward than p'.
The author is aware that his constructions need justification from current practices in AI if they are to be useful. He therefore gives several examples dealing with game playing, sequence prediction, function minimization, and reinforcement and supervised learning as evidence of the power of his approach. These examples are all interesting in the abstract, but if his approach is to be fruitful in practice it is imperative that he give explicit recommendations on how to construct a policy that would allow a machine to be as universal and optimal (realistically) as he defines it (formally) in the book. Even more problematic though would be the awesome task of checking (proving) whether a policy is indeed universally optimal. This might be even more difficult than the actual construction of the policy itself.
The State of the Art as it Exists Today Mar 31, 2005
Artificial Intelligence has proven to be one of those elusive holy grails of computing. Playing chess (very, very well) has proven possible, while driving a car or surviving in the wilderness is a long, long way from possible. Even the definition of these problems has proven impossible.
This book first makes the assumption that unlimited computational resources are available, and then proceeds to develop a universal theory of decision making to derive a rational reinforcement learning agent.
Even this approach is incomputable and impossible to implement. Chapter 7 presents a modified approach that will reduce the computational requirements, although they remain huge.
Chapter 8 summarizes the assumptions, problems, limitations, performance of this approach, and concludes with some less technical remarks on various philosophical issues.
This is a highly theoretical book that describes the state of the art in AI approaches. Each chapter concludes with a series of problems which vary from "Very Easy, solvable from the top of your head" to "If you can solve this you should publish it in the professional literature."
This is the state of the art as it exists today.
Theoretical universal AI Feb 28, 2005
Solomonoff's famous inference model solves the inductive learning problem in a universal and provably very powerful way. Many methods from statistics (maximum likelihood, maximum entropy, minimum description length...) can be shown to be special cases of the model described by Solomonoff. However Solomonoff Induction has two significant shortcomings: Firstly it is not computable, and secondly it only deals with passive environments. Although many problems can be formulated in terms of sequence prediction (for example categorisation), in AI in general an agent must be able to deal with an active environment where the agent's decisions affect the future state of the environment.
In essence, the AIXI model, the main topic of this book, is an extension of Solomonoff induction to this much more general space of active environments. Although the model itself is very simple (it is really just Solomonoff's model with an expectimax tree added to examine the potential consequences of the agent's actions), the resulting analysis is more difficult than in the passive case. While optimality can be shown in certain senses, the powerful convergence bounds that Solomonoff induction enjoys now appear difficult to establish.
Like Solomonoff induction, AIXI also suffers from computability problems. In one of the final sections a modified version of AIXI is presented which is shown to be computable and optimal in some sense. Practically this algorithm would be much too slow, but it is a clear step away from purely abstract models toward ones which can in theory be implemented.
For anybody interested in universal theories of artificial intelligence this book is a must. The presentation is quite technical in places and thus the reader should have some understanding of theoretical computer science, statistics and Kolmogorov complexity.