Why Turing Machines Aren't the Most General Type of Model
I realized recently that, while it seems reasonable to take "any computer program" as the most general kind of model of the world (particularly for computer-based artificial intelligence), it certainly isn't the most general kind. Computer programs can represent only computable models. A more general class is that of definable models.
An example of something that's definable but not computable: the set of all numbers that can be printed out by a program less than 100 kb in size. Why can't this be computed? You might think at first that you could, in principle, generate every program less than 100 kb in size, run them all, and record the outputs. But you can't, because of the "halting problem": there is no general way of knowing how long a program will take to run, or whether it will ever stop. Many of the programs you generate will contain infinite loops, and so will never produce any output (or will keep producing output forever; let's say we discard the output in that case, because we don't know what to do with infinitely long numbers). So, while you might expect the process merely to take a long time, it will actually take forever: you will never finish running the programs, so you will never get the completed list of numbers. And if you stop early, there's always the chance that you're missing a number or two, since there is no general way to distinguish a program stuck in an infinite loop from one that is just taking its time.
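To make the obstacle concrete, here is a minimal sketch in Python of the naive enumeration strategy. The toy instruction set, the function names (run, printable_numbers), and the step budget are all my own illustrative assumptions, not anything from the original argument; the point is only that any cutoff we pick can wrongly give up on a slow-but-halting program, and no finite cutoff ever suffices.

    from itertools import product

    # Toy instruction set (made up for illustration):
    #   '+'  increment the accumulator
    #   '*'  double the accumulator
    #   'p'  print the accumulator as the program's output and halt
    #   'b'  jump back to the start of the program (this is how loops,
    #        and therefore non-halting programs, arise)
    ALPHABET = "+*pb"

    def run(prog, max_steps):
        """Interpret a toy program; return its output, or None if it has not
        halted within max_steps steps (it might still halt later -- we can't know)."""
        acc, pc, steps = 0, 0, 0
        while pc < len(prog):
            if steps >= max_steps:
                return None      # budget exhausted: undecided, not proven non-halting
            op = prog[pc]
            steps += 1
            if op == "+":
                acc += 1
            elif op == "*":
                acc *= 2
            elif op == "p":
                return acc       # halted with output
            elif op == "b":
                pc = -1          # loop back to the beginning
            pc += 1
        return None              # ran off the end without printing anything

    def printable_numbers(max_size, max_steps):
        """Naively approximate 'all numbers printable by a program of at most
        max_size instructions'. The max_steps cutoff is the problem: any program
        we give up on might still have halted with a new number eventually."""
        outputs = set()
        for size in range(1, max_size + 1):
            for prog in map("".join, product(ALPHABET, repeat=size)):
                result = run(prog, max_steps)
                if result is not None:
                    outputs.add(result)
        return outputs

    print(sorted(printable_numbers(max_size=4, max_steps=100)))

Raising max_steps shrinks what the cutoff misses but never eliminates it: the set is perfectly well defined, yet no procedure of this shape ever finishes computing it.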
So, why is this important? Why do we want an AI to be able to make models of the world that are definable but not computable? Because physicists and mathematicians do it all the time; it is evidently a basic human faculty, necessary for understanding the universe we live in (unless you think physicists and mathematicians have been somehow warped by their disciplines into creating these actually meaningless models).
One point of irony: because of the halting problem, most AI theories that restrict the AI's domain of models to the computable are themselves uncomputable. Weighing "every computable model" against the evidence means quantifying over all programs, which no program can do exactly, so such theories must be approximated in practice. This means that although a human can understand the general guiding principle behind the design, any AI based on such a theory would be incapable of comprehending the concepts behind its own construction!
Does this mean that computers are fundamentally unable to think at a human level? Actually, no. An AI that could use any model representable in formal logic, rather than only models representable by Turing machines, would be able to reason about any definable model. It would be able to understand both the principles behind a Turing-machine-based AI and those behind itself. (Serious anti-AI folks would invoke Gödel's Theorem here, though, and claim that any formal system is incapable of fully comprehending itself. This is true, but it also appears to apply to humans, so it isn't as concerning.)
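As a rough illustration of what "representable in formal logic" buys you, the example set from earlier can be written down as an ordinary logical definition even though no program can compute it. The notation here, including the Halts and Out predicates, is my own shorthand, not anything from the original post:

    S = \{\, n \in \mathbb{N} \;:\; \exists p \,(\, |p| < 100\,\mathrm{kb} \;\wedge\; \mathrm{Halts}(p) \;\wedge\; \mathrm{Out}(p) = n \,)\,\}

A logic-based reasoner can state this definition, prove things about S, and use it inside larger models without ever being able to enumerate S, which is exactly the extra generality being pointed at.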
Tuesday, October 09, 2007