Some Questions
1. Where does our knowledge about nonhalting programs come from?
An arithmetical logic is doomed to incompleteness because no logic can capture all the facts about nonhalting programs. However, in practice, we humans use logics that capture some of these facts. Which ones, and why? Well, mostly, we justify what we know about nonhalting programs via a special proof method called mathematical induction. The correctness of arguments by mathematical induction is fairly obvious (once the method of argument has been properly explained to a person, anyway); but why? What justification can be provided? I've read a justification based on infinitary logic, but, interesting as it is, formalizing that argument in first-order logic would just rely on mathematical induction again. So how do we humans know mathematical induction is right?
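To be concrete about what is being justified: the principle in question is the usual first-order induction schema, with one instance for each arithmetical property P:

    (P(0) \land \forall n\,(P(n) \rightarrow P(n+1))) \rightarrow \forall n\, P(n)

The puzzle is why we accept every instance of this schema as true of the natural numbers, rather than merely as a convenient axiom.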
I've argued in the past that the best way to deal with the halting problem is to (1) assume a program doesn't halt, and (2) run it for a really long time, revising the assumption if it turns out that it does halt. This general idea could also account for us humans having a certain amount of inborn knowledge about the halting problem: evolution weeds out those with wrong inborn assumptions (which lead to wrong results), but favors those with the right assumptions (which lead to correct reasoning). But, is this really where we get mathematical induction? Is it just something that happens to work, and got embedded in our genes? That doesn't seem right! It seems totally implausible!
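To make the proposal concrete, here is a minimal Python sketch of that heuristic. The representation of programs as generator functions and the particular step budget are illustrative choices of mine, not part of the argument:

    def halts_guess(program, steps=1_000_000):
        # Start from the default assumption "does not halt"; revise it only
        # if the program is actually observed to halt within the budget.
        run = program()  # program is a generator function: one yield per step
        for _ in range(steps):
            try:
                next(run)
            except StopIteration:
                return True   # observed to halt, so revise the assumption
        return False          # default assumption stands (and may be wrong)

    def halting_program():    # halts after ten steps
        for _ in range(10):
            yield

    def looping_program():    # never halts
        while True:
            yield

    print(halts_guess(halting_program))   # True
    print(halts_guess(looping_program))   # False, by the default assumption

The point of the last line is that the guess is fallible in exactly one direction: "halts" answers are certain, while "does not halt" answers remain revisable.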
A second source of our knowledge about the halting problem is our seeming knowledge about the consistency of formal systems. Goedel's 2nd incompleteness theorem shows that no (sufficiently powerful) formal system can prove its own consistency. But, as humans, we reason that if the system is correct it must be consistent, since the truth is not inconsistent. The formal systems agree with us on the first point (they "believe" their own formulas, so implicitly endorse their own correctness), but cannot reason from that to their own consistency.
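For reference, the theorem in one standard form: if T is a consistent, recursively axiomatizable theory that includes Peano arithmetic, then

    T \nvdash \mathrm{Con}(T)

where Con(T) is the arithmetical sentence saying that no T-proof ends in a contradiction.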
This is equivalent to knowledge about halting, because what we are saying is that a program that proves theorems until it arrives at an inconsistency will never halt. So, utilizing our human insight, we can add that fact to a logical system to increase its knowledge about halting! Call the first system S1, and the second S2. Since we believed S1 to be consistent, and S2 differs only in that it contains more true information, we will believe S2 to be consistent. If we add *that* fact, we can form a system S3. We know S3 is consistent, so we can record that knowledge to get S4. And so on. In this way, a human appears to have an infinite supply of true information concerning the halting problem, even over and above our knowledge coming from mathematical induction (which we can suppose was encoded in S1 already).
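The equivalence is easy to make vivid if we abstract a formal system down to nothing but an enumeration of its theorems. The toy "systems" below are my own, purely for illustration:

    CONTRADICTION = "False"

    def search_for_contradiction(theorems):
        # Halt as soon as a contradiction shows up in the enumeration.
        # Saying "the system is consistent" is exactly saying that this
        # search never halts.
        for count, theorem in enumerate(theorems):
            if theorem == CONTRADICTION:
                return count

    def toy_inconsistent_system():
        yield "0 = 0"
        yield "0 + 1 = 1"
        yield CONTRADICTION      # derives a contradiction eventually

    def toy_consistent_system():
        n = 0
        while True:              # never yields a contradiction
            yield f"{n} = {n}"
            n += 1

    print(search_for_contradiction(toy_inconsistent_system()))   # halts: 2
    # search_for_contradiction(toy_consistent_system())          # never halts

In this picture, passing from S1 to S2 means adding the sentence "the search over S1's theorems never halts" as a new axiom.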
So, we have two mysterious sources of information about the halting problem. Also, it seems that these two sources can be linked, because the truth is we can do somewhat better than simply saying "the axioms of arithmetic are obviously true, and therefore consistent". We can instead look to the proof by Gentzen. This proof relies on a method of argument that is stronger than normal mathematical induction, called transfinite induction; Gentzen's version uses induction up to the ordinal epsilon-zero. Basically, this extends normal induction to fairly small infinities (I think the relevant class in general is the computable ordinals?). It seems that this principle extends to proving the consistency of more powerful logics as well: we can prove their consistency by using some variant of transfinite induction, so long as we pick an ordinal that the system itself cannot prove to be well-ordered. (If anyone has a better reference about this, I'd love to hear about it.)
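For concreteness, transfinite induction along a well-ordering \prec of the natural numbers is the schema

    \forall \alpha\,(\forall \beta \prec \alpha\; P(\beta) \rightarrow P(\alpha)) \rightarrow \forall \alpha\, P(\alpha)

Gentzen's proof applies this with \prec a computable ordering of order type \epsilon_0 (the limit of \omega, \omega^{\omega}, \omega^{\omega^{\omega}}, ...), an ordering that Peano arithmetic can define but cannot prove the schema for.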
So, this is how it looks: we seem to have an uncanny ability to list larger and larger infinities, while any formal system is limited to some ceiling. Furthermore, it seems we somehow justify applying mathematical induction to any such infinity, yielding more truths about halting than any particular formal system can muster.
This situation is very strange. First, the process of listing larger infinities seems well defined, yet we can't define it (because when we do, we can automatically come up with a larger infinity by saying "the first one not reachable by that method"). Second, mathematical induction seems justifiable; it seems different from the random (truly unknowable) facts about halting that "cause" Goedelian incompleteness. But if it is different, then where is the difference?
2. A concept of 'procedure' could be grounded in our own thought process, but what grounds the mathematical notion of 'set', especially 'uncountable set'?
I have mentioned this before, of course. It continues to puzzle me. Uncountable sets are hugely infinite, so it seems really, really obvious that we cannot have any actual examples to ground our reference to them.
The elementary facts about uncountable sets are proven by defining them in terms of powersets. The powerset P of a set S is defined as the set of all subsets of S. The powerset of the set of integers is uncountable. However, there is a problem: when we try to give an actual definition, "all subsets" is hard to properly define. For finite sets, it is easy; but for infinite sets, it tends to collapse to "all referenceable subsets". Mathematicians know that there are far more subsets that cannot be referenced than subsets that can be referenced, so this is no good.
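For reference, the standard definition and the standard argument:

    \mathcal{P}(S) = \{\, T : T \subseteq S \,\}

Cantor's diagonal argument shows that no function f : S \to \mathcal{P}(S) is onto, because the set D = \{\, x \in S : x \notin f(x) \,\} differs from every f(x); hence \mathcal{P}(\mathbb{N}) is uncountable. The trouble above can be put the same way: there are only countably many finite descriptions, so "all describable subsets of the integers" picks out only countably many subsets, while the powerset is supposed to contain uncountably many.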