Some Comments on the Previous Post

One very interesting thing about what I talked about last time-- it provides a surprisingly practical purpose for detailed talk about infinities. The bigger the infinities you know how to talk about, the more logical systems you can prove consistent, giving you more information about which Turing machines never halt. But this is very strange! These Turing machines are not being run for so long that they enter the realm of higher infinities. They either halt in finite time, or keep looping for one infinity (the smallest infinity, sometimes called omega). Why, then, does talk of greater and greater infinities become useful?

The straightforward answer is to recount how that information is used to prove the consistency (and therefore non-halting-ness) of a system. First, we assign a mathematical semantics to the system: a meaning for the system-states. Second, we show that the initial meanings are true. Third, we show that if one state is true, then the transition rules guarantee that the next state is true as well; in other words, we show that all manipulations are truth-preserving. This is the step where infinities come in. Although no infinities are involved in the actual states and transitions, the semantics may include infinities, so our reasoning may rely on facts about infinities. This sounds silly, but it creeps in more readily than you might expect. If the logic that we are proving consistent uses no infinity, then we must use a single infinity (omega) when proving it consistent; if it uses a small infinity, we must use a larger one to prove it consistent; and so on.
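The three-step argument can be sketched in miniature. Here is a toy transition system (a hypothetical stand-in for a real proof system) where the "semantics" assigns each state the claim "this state is even"; the initial meaning is true, every transition preserves truth, and so the "bad" state can never be reached:

```python
# A minimal sketch of the soundness argument, using a toy system.
# Step 1: assign a semantics -- the meaning of a state is "it is even".
# Step 2: show the initial meaning is true.
# Step 3: show transitions are truth-preserving.

def initial_state():
    return 0                       # step 2: "0 is even" is true

def transition(state):
    return state + 2               # step 3: evenness is preserved

def meaning_holds(state):
    return state % 2 == 0          # step 1: the assigned semantics

# A finite run standing in for the induction over all transitions:
state = initial_state()
for _ in range(1000):
    assert meaning_holds(state)    # truth is preserved at every step
    state = transition(state)
```

The real work in a genuine consistency proof is step 3, where the semantics (and hence the infinities it mentions) enters the reasoning.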

Utterly strange! Why does this method of proving consistency work? We're somehow making the proof easier by treating facts about finite stuff as if they were facts about infinite, unreachable stuff.

So, it seems that self-reference (trying to prove oneself consistent) would give grounding for as many infinities as one might desire. OK, but, how do we actually get them? A normal logic is stuck with just the infinities it has, even when it starts to reason about itself.

## Wednesday, September 24, 2008

## Monday, September 22, 2008

1. Where does our knowledge about nonhalting programs come from?

An arithmetical logic is doomed to incompleteness because no logic can capture all the facts about nonhalting programs. However, in practice, we humans use logics that capture some of these facts. Which ones, and why? Well, mostly, we justify what we know about nonhalting programs via a special proof method called mathematical induction. The correctness of arguments by mathematical induction is fairly obvious (once the method of argument has been properly explained to a person, anyway); but why? What justification can be provided? I've read a justification based on infinitary logic, but while interesting, the easiest way to formalize the argument in first-order logic would just rely on mathematical induction again. So how do we humans know mathematical induction is right?

I've argued in the past that the best way to deal with the halting problem is to (1) assume a program doesn't halt, and (2) run it for a really long time, revising the assumption if it turns out that it does halt. This general idea could also account for us humans having a certain amount of inborn knowledge about the halting problem: evolution weeds out those with wrong inborn assumptions (which lead to wrong results), but favors those with the right assumptions (which lead to correct reasoning). But, is this really where we get mathematical induction? Is it just something that happens to work, and got embedded in our genes? That doesn't seem right! It seems totally implausible!
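The assume-and-revise idea above can be sketched directly. The sketch below assumes a hypothetical one-step interpreter `step` that returns the next state, or `None` once the program has halted; the guesser defaults to "does not halt" and revises only when the run finishes within its budget:

```python
# Hedged sketch of the "assume non-halting, revise on evidence" heuristic.
# `step` is a hypothetical one-step interpreter: it returns the next
# state, or None when the program has halted.

def guess_halts(state, step, budget=10_000):
    """Provisionally answer 'does not halt'; revise if it halts in time."""
    for _ in range(budget):
        state = step(state)
        if state is None:
            return True            # revised: it actually halts
    return False                   # defeasible default: assume non-halting

# A program that halts after a few steps, modeled as a countdown:
dec = lambda n: None if n == 0 else n - 1
assert guess_halts(5, dec) is True

# A trivially looping program never triggers a revision:
same = lambda n: n
assert guess_halts(0, same) is False
```

The answer `False` here is an assumption, not knowledge -- which is exactly the point of the revisable-belief framing.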

A second source of our knowledge about the halting problem is our seeming knowledge about the consistency of formal systems. Goedel's 2nd incompleteness theorem shows that any (sufficiently powerful) formal system cannot prove its own consistency. But, as humans, we reason that if the system is correct it must be consistent, since the truth is not inconsistent. The formal systems agree with us on the first point (they "believe" their own formulas, so implicitly support their own correctness), but cannot reason from that to their own consistency.

This is equivalent to knowledge about halting, because what we are saying is that a program that proves theorems until it arrives at an inconsistency will never halt. So, utilizing our human insight, we can add that fact to a logical system to increase its knowledge about halting! Call the first system S1, and the second S2. Since we believed that S1 was consistent, and S2 differs only in that it contains more true information, we will believe S2 to be consistent. If we add *that* fact, we can form a system S3. We know S3 is consistent, so we can record that knowledge to get S4. And so on. In this way, a human appears to have an infinite supply of true information concerning the halting problem, even over and above our knowledge coming from mathematical induction (which we can suppose was encoded in S1 already).
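The tower of systems can be sketched symbolically. In this toy model (an illustration, not a formalization), a theory is a set of axiom labels, and each step adds a string standing in for the consistency statement of the previous theory:

```python
# Hedged sketch of the S1, S2, S3, ... tower. Theories are modeled as
# frozen sets of axiom labels; "Con(...)" strings stand in for the
# actual formal consistency statements.

def next_theory(theory):
    """Extend a theory with the assertion of its own consistency."""
    return theory | {f"Con({', '.join(sorted(theory))})"}

S1 = frozenset({"PA"})             # some base system, e.g. arithmetic
S2 = next_theory(S1)               # PA + Con(PA)
S3 = next_theory(S2)               # previous system + its consistency

assert "Con(PA)" in S2
# S3 contains a consistency statement that itself mentions Con(PA):
assert any(ax.startswith("Con(") and "Con(PA)" in ax for ax in S3)
```

Each step is trivial to write down; the mystery in the post is why we are entitled to *believe* each new consistency statement.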

So, we have two mysterious sources of information about the halting problem. Also, it seems that these two sources can be linked, because the truth is we can do somewhat better than simply saying "the axioms of arithmetic are obviously true, and therefore consistent". We can instead look to the proof by Gentzen. This proof relies on a method of argument that is stronger than normal mathematical induction, called transfinite induction. Basically, this extends normal induction to fairly small infinities: Gentzen's proof inducts up to the countable ordinal epsilon-zero (and the relevant class in general seems to be the computable ordinals). It seems that this principle extends to proving the consistency of more powerful logics as well: we can prove their consistency by using some variant of transfinite induction, so long as we pick infinities larger than the system itself can refer to. (If anyone has a better reference about this, I'd love to hear about it.)
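The ordinals below epsilon-zero are "computable" in a concrete sense: they have finite notations that a program can compare. A minimal sketch, using Cantor normal form -- a notation is a tuple of exponents (themselves notations) sorted non-increasingly, so `(a, b, ...)` denotes omega^a + omega^b + ...:

```python
# Hedged sketch: ordinal notations below epsilon_0 in Cantor normal
# form, the kind of computable well-ordering Gentzen's proof inducts
# over. The comparison below assumes notations are in normal form
# (exponents listed in non-increasing order).

ZERO = ()                          # the empty sum is 0
ONE = (ZERO,)                      # omega^0 = 1
OMEGA = (ONE,)                     # omega^1 = omega

def less(a, b):
    """Compare two normal-form notations, as ordinals."""
    for x, y in zip(a, b):
        if less(x, y):
            return True
        if less(y, x):
            return False
    return len(a) < len(b)         # shared prefix: the shorter sum is smaller

assert less(ZERO, ONE)
assert less(ONE, OMEGA)
assert less(OMEGA, (OMEGA,))       # omega < omega^omega
assert not less(OMEGA, OMEGA)
```

Transfinite induction up to epsilon-zero then amounts to: any property that cannot fail at a `less`-minimal place holds everywhere -- a statement about these finite notations, even though it "speaks of" infinities.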

So, this is how it looks: we seem to have an uncanny ability to list larger and larger infinities, while any formal system is limited to some ceiling. Furthermore, it seems we somehow justify applying mathematical induction to any such infinity, yielding more truths about halting than any particular formal system can muster.

This situation is very strange. First, the process of listing larger infinities seems well defined, yet we can't define it (because when we do, we can automatically come up with a larger infinity by saying "the first one not reachable by that method"). Second, mathematical induction seems justifiable; it seems different from the random (truly unknowable) facts about halting that "cause" Goedelian incompleteness. But if it is different, then where is the difference?

2. A concept of 'procedure' could be grounded in our own thought process, but what grounds the mathematical notion of 'set', especially 'uncountable set'?

I have mentioned this before, of course. It continues to puzzle me. Uncountable sets are hugely infinite, so it seems really really obvious that we cannot have any actual examples to ground our reference to them.

The elementary facts about uncountable sets are proven by defining them in terms of powersets. The powerset P of a set S is defined as the set of all subsets of S. The powerset of the set of integers is uncountable. However, there is a problem: when we try to give an actual definition, "all subsets" is hard to properly define. For finite sets, it is easy; but for infinite sets, it tends to collapse to "all referencable subsets". Mathematicians know that there are far more subsets that cannot be referenced than subsets that can be referenced, so this is no good.
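The finite case really is easy, which is part of what makes the contrast sharp. A minimal sketch of the finite powerset:

```python
# Hedged sketch: the powerset of a finite set, built by explicit
# enumeration -- exactly the kind of definition that is unavailable
# for infinite sets.
from itertools import chain, combinations

def powerset(s):
    """All subsets of a finite set, as frozensets."""
    items = list(s)
    return {frozenset(c)
            for c in chain.from_iterable(
                combinations(items, r) for r in range(len(items) + 1))}

assert len(powerset({1, 2, 3})) == 2 ** 3
assert frozenset() in powerset({1, 2, 3})
```

No analogous program exists for an infinite set: a program can only ever name countably many subsets, while the powerset of the integers is uncountable -- which is the collapse to "all referencable subsets" described above.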

## Wednesday, September 03, 2008

I have thought of an amusing and elegant variation of my grounded arithmetical logic (a variation on standard lambda logic, which I discussed here, here, and almost everywhere in between).

The logic supposedly gets its grounding from the possession of a black box that executes programs represented as lambda terms-- the idea being that they literally are programs, so cannot fail to be grounded (despite the fact that no logical description of the concept of computation is complete, which makes it sound impossible to represent logically).

But, this idea is odd; it sounds as if the logic is getting grounding from an external black box that it is hooked to. What is needed is an embodiment of procedural-ness. But, there is a natural source of this from within the logic: deduction is a procedural task, which is itself Turing-complete; so, it can serve as a representation for arbitrary computations, just as well as lambda-reduction can.

So, the idea is to ground the logic in its own deduction system. This is not a radically new idea; provability logic is a well-established field of study. But, in any case, the idea is useful here if we don't want the logic's groundedness to rely on a lambda-evaluator that is outside of the logic itself.

This is satisfying for another reason, as well. I talked previously, somewhat, about the "reason" that my version of lambda logic is more semantically capable than plain first-order logic: it is able to represent concepts with a closed-world assumption in addition to the standard open-world assumption. What this means is that the logic can define a predicate by stating some rules that hold for the entity, and then adding "that's all". Statements that don't follow from the rules are automatically false. This concept offers an explanation for where unprovably true statements come from: the "that's all" operator has a meaning that cannot possibly be covered by an associated proof method. (Or, at least not one that terminates in finite time.) So it is the origin of much mathematical mystery.

Unfortunately, it cannot be the source of all mysteries; it allows us to talk about arithmetical truths, but not set-theoretical ones.

Anyway. The second nice feature of a logic grounded in its own deduction system is that it allows the "that's all" concept to be put directly to use. We can define a predicate by saying it is true if and only if it is provably true via a given set of statements.

This construction would have some interesting behavior. The first thing to realize is the all-encompassing reach of the closed-world assumption. The construction should not be used, for example, when defining some property of bicycles: if we discover a new bicycle, we will be unable to describe it with that property. It is better suited for defining properties of sets that can be completely pre-specified, like the natural numbers. We will not happen upon any new numbers, so we are safe. To apply predicates to bicycles while still using the closed-world construction, we would need to do something fancy such as define a class of all possible bicycle-types, define a closed-world predicate on that, and then apply the predicate indirectly to actual bicycles by classifying them as instances of the abstract bicycle-types.
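The "true if and only if derivable from the given rules" construction is a least-fixed-point computation, and for a pre-specified domain it can be sketched directly. The example below (a toy illustration, with the domain cut off at a finite bound so the computation terminates) defines the naturals by the rules "0 is a number" and "the successor of a number is a number", closed off with "that's all":

```python
# Hedged sketch of a closed-world ("that's all") definition: the
# predicate holds exactly when derivable from the given rules.

def closed_world(base, rule, universe):
    """Least set containing `base` and closed under `rule`, within `universe`."""
    derived = set(base)
    changed = True
    while changed:
        new = {rule(x) for x in derived} & universe
        changed = not new <= derived
        derived |= new
    return derived

universe = set(range(100))                      # finite cutoff, so this halts
nat = closed_world({0}, lambda n: n + 1, universe)

assert 57 in nat                   # derivable from the rules, hence true
assert -3 not in nat               # not derivable, hence false by "that's all"
```

The bicycle problem from above shows up concretely: if the `universe` ever gains a new element after the fixpoint is computed, the predicate is simply wrong about it, which is why the construction suits fully pre-specified domains like the numbers.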

The strength of the system could be increased even further by adding truth-predicates via one of the methods in this post, for example... but set theory would still not be grounded in the system.
