The Importance of Uncomputable Models
Inspired by this blog.
I have mentioned most of these thoughts before, but I am writing them up in one post as a cohesive argument. I will argue that uncomputable models are important. I am not arguing that people think in ways that computers cannot, but rather that people and computers alike can benefit from using models that have "holes" which represent meaningful questions that can't be answered definitively by running a computation.
The world that we live in consists of a bunch of macro-objects. What are macro-objects? Roughly speaking, they are coherent structures of micro-objects, which recur in both time and space.
A macro-object "emerges" from the micro-dynamics. It is a structure which persists, propagating itself into the future, like a wave in water.
A spherical bubble is a macro-object, because slight warps in the sphere get evened out over time. (Until it pops.)
Similarly for a water droplet, a planet, or a star.
An atom is an emergent object (though not macro-level) because the positive and negative charges of the electrons and protons enter into a stable relationship.
A grasshopper is a stable entity because its metabolism maintains a homeostasis which allows it to live for a fair amount of time.
Similarly for all living organisms.
And, so on.
The key idea here is that all of these objects are in a convergent state: a state that (within limits) the object always returns to after being pushed away from it.
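To make "convergent state" concrete, here is a minimal sketch in Python. The dynamics are a toy I am inventing for illustration (not a model of any particular object): a state that drifts back toward a fixed point after being perturbed, which is the sense of convergence I mean.

```python
# A toy "convergent" dynamic: the state is pulled back toward a fixed point,
# so small disturbances get evened out over time (like the bubble's warps).

def step(x, target=1.0, damping=0.1):
    """One update of a made-up dynamic that pulls the state toward `target`."""
    return x + damping * (target - x)

x = 1.0             # start in the convergent state
x += 0.5            # perturb it (poke the bubble)
for _ in range(100):
    x = step(x)     # the dynamics damp the disturbance

print(round(x, 6))  # back near 1.0: the structure has reasserted itself
```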
Now, to logical foundations. The halting problem is unsolvable: there is no algorithm which can tell you which computations do and do not halt. But suppose we could run our computer for an infinite amount of time to get the answer. A machine that can do this is called a hypercomputer. Then we could solve the halting problem by waiting to see whether the computation in question halted; so hypercomputations can answer many questions that regular computations cannot. However, we've got a new type of problem. Some computations flip back and forth between answers infinitely often, so that when we run them for an infinite amount of time, the output is indeterminate. The result of the hypercomputation is then undefined, or "nonconvergent".
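A rough finite stand-in for that picture (just an illustrative sketch, with made-up functions): each function below reports its "current answer" at stage n, and the hypercomputer reads off whatever the answers settle to in the limit. The first settles, so its limit is defined; the second flips forever, so the corresponding hypercomputation is nonconvergent.

```python
# Two "stage n" answer sequences. A hypercomputation's output is the limit
# of such a sequence -- defined only if the answers eventually stop changing.

def settles(n):
    return 0 if n < 10 else 1   # changes its mind finitely often: limit is 1

def flips(n):
    return n % 2                # changes its mind at every stage: no limit

for stage in range(15):
    print(stage, settles(stage), flips(stage))
```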
Asking whether a hypercomputation converges is analogous to asking whether a normal computation halts. In a specific sense, it is twice as hard: if we solved the halting problem for normal computations, made a little magic box that gives the correct answer to halting questions, and connected that box to an ordinary computer, we would have a machine equivalent to a hypercomputer. Asking whether the programs of this new machine halt is equivalent to asking whether hypercomputations converge.
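Here is a sketch of that magic-box construction, with the obvious caveat that the oracle cannot actually be implemented by any algorithm (that impossibility is the whole point); the function names are mine, for illustration only.

```python
# The "magic box": an oracle for the ordinary halting problem. No algorithm
# can implement it, so the body just signals that it is hypothetical.

def halting_oracle(program_source: str, input_data: str) -> bool:
    """Reports whether `program_source` halts on `input_data` (by magic)."""
    raise NotImplementedError("uncomputable; supplied by assumption only")

def oracle_machine(program_source: str) -> int:
    """An ordinary, computable loop that is allowed to consult the box.
    Asking whether *this* machine halts is the second-level question:
    it is the analogue of asking whether a hypercomputation converges."""
    n = 0
    while not halting_oracle(program_source, str(n)):
        n += 1      # search for an input on which the given program halts
    return n
```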
So, halting is uncomputable, but convergence is doubly so!
Yet, I have argued that convergence is all around us. On an everyday basis, we deal with convergent states as if they were solid entities. So, I argue, we are viewing the world through an uncomputable model.
Mathematically, one should expect reasoning about convergence to be quite hard (assuming, as I do, that human reasoning is computable). Everyday reasoning is not "quite hard" in this way. We mitigate the full force of the uncomputability of our models with many coping strategies; we mainly reason under the assumption of convergence (for structures that have converged in the past), rather than attempting to question this assumption. We have to learn when things converge and fail to converge. Yet, even so, using uncomputable models is easier than trying to apply computable models to the problem. Asking whether a structure is generally convergent is a very useful abbreviation, approximately summing up a lot of questions about the state at a given time.
Also, it is important to admit that the mathematically pure concept of convergence is not quite what we are interested in. In practical situations, we are interested in whether something is quickly convergent. This is not uncomputable; however, it can be more expensive to check than reasoning abstractly about convergence. So (and this is probably the weakest point in my argument) I think it is worthwhile to keep reasoning about the uncomputable models.
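For what it's worth, a sketch of that computable-but-costly check might look like the following; the horizon and tolerance are arbitrary placeholders, and `step` is the toy dynamic from the earlier sketch.

```python
# Bounded check for "quick" convergence: perturb the state and simulate for a
# fixed budget of steps. Computable, but potentially expensive, and silent
# about anything beyond the horizon.

def quickly_convergent(step, state, nudge, horizon=1000, tol=1e-6):
    """Return True if a nudged copy of `state` rejoins the unperturbed
    trajectory within `horizon` steps (to within `tol`)."""
    reference, perturbed = state, state + nudge
    for _ in range(horizon):
        reference = step(reference)
        perturbed = step(perturbed)
        if abs(perturbed - reference) < tol:
            return True     # the disturbance died out quickly enough
    return False            # inconclusive within the budget

# e.g. quickly_convergent(step, 1.0, 0.5) -> True for the toy dynamic above
```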
Another interesting point is that, much of the time, while we have a fairly good concept of the emergent convergent objects we deal with day to day, we do not have such a good idea of the underlying dynamic. This means that, in practice, we do not ask too many convergence-hard questions. Often, we think we already have those answers, and instead we ask what sort of underlying structure might give rise to them.
PS--
I am being somewhat messy here, because "convergent" in the case of hypercomputation does not mean quite the same thing as "convergent" in the case of emergent objects. For one thing, convergence of entities has to do with correcting for disturbances from the external environment, while convergence for hypercomputations does not. I think this difference does not harm the argument. As I see it, emergent-convergence is a more general problem, having hypercomputation-convergence as a subproblem.
Thursday, June 18, 2009
I like your viewpoint of convergence as the way to think of physical "objectness"; it is a very thought-provoking way to think about the "ordered chaos" of the world... although I'm not quite sure exactly what you're getting at when you talk about applying uncomputable models.
Next thing we know, you'll be suggesting the usefulness of real numbers!