The basic problem in AI is to create a model for how the mind works.

The basic problem for the mind is to create models of the world.

Therefore, the basic problem in AI is to create models of how models are made. In other words, we've got to figure out how to predict the world based on our limited knowledge of it. The basic problem is: "How do we make models?"

If there is a universal grand solution to AI, it will be a universal way of making models and applying them to form predictions and to act on those predictions. There have already been claims to this accomplishment. Two such claims are AIXI (see "A Gentle Introduction to the Universal Algorithmic Agent AIXI", Marcus Hutter, 17 January 2003) and OOPS (see "The New AI: General and Sound and Relevant for Physics", Jürgen Schmidhuber, November 2003). Both make the following assumptions: a model is essentially a Turing machine program, and a model should reproduce the data perfectly. Apart from that, the two are very different.

AIXI uses the "length prior": it assumes that the shortest Turing machine program that can reproduce the data exactly is the best model. Because of this, AIXI requires an infinite amount of time to find this program; it cannot know whether there exists some very short program that merely takes a very long time to compute its output unless it waits for all possible programs (every combination of ones and zeros, each executed as if it were a program) to run their course.
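To make the length-prior search concrete, here is a toy sketch in Python. It is emphatically not AIXI: the four-instruction bit language below is invented for illustration, and because every program in it halts, the enumeration actually terminates. Real Turing machine programs carry no such guarantee, which is exactly the incomputability described above.

```python
from itertools import product

def run(program):
    """Interpret a bitstring two bits at a time:
    00 -> emit '0', 01 -> emit '1', 10 -> re-emit everything so far,
    11 -> halt.  Every program in this toy language halts; real Turing
    machine programs need not, which is why AIXI's search never ends."""
    out = ""
    for i in range(0, len(program) - 1, 2):
        op = program[i:i + 2]
        if op == "00":
            out += "0"
        elif op == "01":
            out += "1"
        elif op == "10":
            out += out          # re-emit (double) the output so far
        else:                   # "11" halts
            break
    return out

def shortest_program(data, max_len=12):
    # Length prior: try every program in order of increasing length and
    # return the first (hence shortest) one that reproduces the data.
    for n in range(2, max_len + 1, 2):
        for bits in product("01", repeat=n):
            candidate = "".join(bits)
            if run(candidate) == data:
                return candidate
    return None

print(shortest_program("0000"))  # -> "000010": emit 0, emit 0, re-emit "00"
```

Note that the 6-bit program "000010" beats the naive 8-bit "00000000", which is the length prior's whole point: compressed programs win.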

OOPS uses the "speed prior" instead: it assumes that the best model is the program that reproduces the data fastest, judged by computation time rather than code length. It is easy to see that there is a loose correspondence between the two; shorter code will generally take less time to execute. But they are vastly different. The speed prior makes the optimal model much easier to find: we run all possible programs in parallel, starting with "1" and "0", then branching "1" into "10" and "11" and branching "0" into "01" and "00", et cetera. As we grow models in this way, some branches fall behind in producing data and some speed ahead; as we execute the branches, the programs jump around their instruction sets (the ones and zeros) and emit data whenever the instructions say to. Whenever a branch produces output that conflicts with the data, that branch gets pruned. (Remember, both AIXI and OOPS require models to conform exactly to the data.) The branch that reaches the finish line first, having reconstructed the entire dataset from its program, wins: it is selected out of the bunch as the proper model. This may take a while if the dataset is large, but we are still much better off than with AIXI.

(Note: there is more to OOPS than the speed prior. It is also a framework for creating small programs that get integrated into larger programs, which get integrated into still larger programs, et cetera. "OOPS" stands for "Optimal Ordered Problem Solver", which reflects the idea of using the speed prior first on small problems, then on larger and larger ones, integrating the smaller solution-programs as possible substructures of the larger programs; to abbreviate the actual method extremely, they become additional options besides "1" and "0". So the more OOPS has learned, the larger and more sophisticated its programs become. But we will ignore this additional structure, because it is irrelevant to the discussion.)
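The branch-and-prune race described above can be sketched in the same toy spirit. Again, this is an invented miniature, not the actual OOPS; the "instruction set" is pairs of bits (00 emits '0', 01 emits '1', 10 re-emits everything emitted so far, 11 halts), chosen so that extending a program by one instruction and executing that instruction are the same step.

```python
def race(data, max_len=20):
    """Grow all programs in parallel, breadth-first, pruning any branch
    whose output conflicts with the data; the first branch to reproduce
    the whole dataset wins."""
    branches = [("", "")]            # each branch is (program, output)
    while branches:
        grown = []
        for prog, out in branches:
            # Branch each program into its four possible next instructions.
            for op in ("00", "01", "10", "11"):
                p = prog + op
                if op == "00":
                    o = out + "0"
                elif op == "01":
                    o = out + "1"
                elif op == "10":
                    o = out + out    # re-emit everything so far
                else:                # "11" halts this branch
                    o = out
                if not data.startswith(o):
                    continue         # prune: output conflicts with the data
                if o == data:
                    return p         # first to the finish line wins
                if op != "11" and len(p) < max_len:
                    grown.append((p, o))
        branches = grown
    return None

print(race("0000"))  # -> "000010"
```

One caveat about the toy: every instruction here costs exactly one step, so running time and code length coincide and the race finds the shortest program too. In a Turing-complete language the two can diverge, which is where the speed prior and the length prior part ways.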

Before I try to point out problems in these particular methods, let's examine some general ideas. Both of these methods are general models of the model-making process-- not models of the world, but rather models of models of the world. Which one is right? Does it make any sense to ask the question?

Basically, what the question asks is "Which one produces better models?" The one that produces better models is obviously the better one, and if it actually produces better models than any other possible model of models, it is the best, and can be called the True Model of Models-- the universal solution to AI.

Also, we've got to ignore the fact that AIXI takes an extremely long time to calculate. This only proves that if we don't have sufficient resources, the best we can do is OOPS. AIXI might still be the Ultimate Model of Models; OOPS would merely be the easy-to-calculate alternative.

So, can we figure out which is better? Which matches the way the world works most closely? Which will make better models of our world?

This is actually a very strange question. Which one approaches the "real" model of the world most quickly? Which one will more easily happen upon the Ultimate Truth Behind Everything? Both AIXI and OOPS look for the program that will generate the universe-- the first using the length prior, the second using the speed prior.

The length prior assumes that the Ultimate Truth is a short program. The speed prior assumes that it's a fast program. Or, to put it another way, the length prior assumes that the universe was generated randomly, but favoring the short; the speed prior assumes it was generated randomly, but favoring the quick.
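To see how the two assumptions can disagree in practice, here is a small numerical sketch. All the numbers are invented, and the speed-prior formula is a crude stand-in (close in spirit to Levin's Kt complexity, 2^-(length + log2(time)), rather than Schmidhuber's exact definition):

```python
# Three hypothetical programs that all reproduce the same data; the
# lengths (bits) and runtimes (steps) are made up for illustration.
candidates = {
    "short_but_slow": (8, 10**6),    # 8 bits, a million steps
    "medium":         (16, 10**3),   # 16 bits, a thousand steps
    "long_but_fast":  (24, 10),      # 24 bits, ten steps
}

def length_weight(length, time):
    return 2.0 ** -length            # length prior: time is ignored

def speed_weight(length, time):
    return 2.0 ** -length / time     # crude speed prior: fast gets a boost

best_by_length = max(candidates, key=lambda n: length_weight(*candidates[n]))
best_by_speed = max(candidates, key=lambda n: speed_weight(*candidates[n]))
print(best_by_length)  # -> short_but_slow
print(best_by_speed)   # -> medium: the two priors crown different winners
```

The point of the sketch is only that the two priors can rank the very same candidates differently, so "which prior is right?" is a real question, not a matter of taste.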

Now, we don't know the ultimate program for the universe. But we know a few things about it-- essentially, we know that it generally seems to follow some laws of physics that we've accumulated. So from that, we can start to look at which prior is better. What we've got to do is think of these laws as generated randomly, and try to figure out how they were generated-- we've got to try to come up with a pattern behind the physical laws.

Now, to do this in an objective way, we've got to have a scientific way of distinguishing between good and bad patterns. We can't go around suggesting models for the physical laws at random; we've got to have some criteria... a model that tells us which patterns are right and wrong... a model to help us make models...

But wait! This means that to judge between different models of how to make models, we've got to already have a model for making models!! We can't decide which one is better unless we decide which one is better!

And that ends my narrative. For now.

## Thursday, October 12, 2006
