So.
More logic stuff.
I've spoken in the past about using probabilistic justifications (or, when simplicity is desired, nonmonotonic justifications) for mathematical belief. This provides at least some account of how a system can meaningfully refer to uncomputable concepts such as the natural numbers. (Calling the natural numbers "uncomputable" is sure to raise a few eyebrows. What I mean is that there are undecidable propositions concerning the natural numbers.) The natural question is: how much can be probabilistically justified?
To classify a single statement as computably decidable, probabilistically decidable, or neither is not quite accurate: any one statement (or its negation) could simply be added as an axiom, and thus made decidable (or, more realistically, something else might be added as an axiom, rendering the statement decidable). Similarly, a statement might be made probabilistically decidable with the addition of an axiom that causes it to be equivalent to some probabilistically decidable statement. (Most trivially, the axiom "A <-> B", where "A" is a probabilistically justifiable statement and "B" was previously not. Such an axiom might seem contrived, but it is true if A and B are both true!)
Still, it is convenient to talk in this way, and it can be made meaningful. One way is by fixing the axioms we're dealing with; so, for example, we might be working just with Robinson arithmetic or Peano arithmetic. (I would prefer Robinson arithmetic, since I would like to leave the epistemic status of mathematical induction open, whereas Peano arithmetic assumes it to be true.) But this is not the only possibility. For now, I prefer something like the following:
- Basic functions of arithmetic are computable. These vary from axiomatization to axiomatization. At one extreme, we could provide axioms giving the behavior of all the common functions on natural numbers; at the other extreme, we could have no functions in our language, instead having relations sufficient to specify the structure. Most common is to take only the successor function as basic, and define addition, multiplication, et cetera from it. These differences may cause trouble for me (i.e., result in non-equivalent definitions), but I'll ignore that for now...
- Basic predicates and relations are computably decidable. Again, there is some variation about which relations are taken as basic, and which are taken as defined. Usually equality is taken as basic, but I've heard of greater-than being taken instead.
- Statements whose truth is a function of a finite number of computably decidable statements are computably decidable. This lets us form larger decidable statements with the usual Boolean connectives: and, or, and not. It also allows bounded quantifiers to be used, such as "for all numbers less than 100..." (see the sketch after this list).
- Statements whose truth is a function of an infinite number of computably decidable statements are probabilistically decidable if a reasonable probabilistic decision process is guaranteed to converge to the correct value.
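To make the "finite truth function" clause concrete, here is a minimal sketch (my own illustration, not anything from the literature): a statement built out of decidable predicates using Boolean connectives and bounded quantifiers can be decided by a loop that is guaranteed to terminate.

```python
# A minimal sketch: statements built from computably decidable predicates using
# Boolean connectives and *bounded* quantifiers stay computably decidable,
# because they can be checked by a terminating loop.
# Example statement: "every even n with 4 <= n < 100 is a sum of two primes."

def is_prime(n: int) -> bool:
    """A basic decidable predicate."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

def is_sum_of_two_primes(n: int) -> bool:
    """A bounded existential quantifier: 'there exists p < n with p and n - p prime'."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n))

def bounded_statement() -> bool:
    """A bounded universal quantifier over a finite range: still a finite check."""
    return all(is_sum_of_two_primes(n) for n in range(4, 100, 2))

print(bounded_statement())  # True, and the check finishes after finitely many steps
```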
It might also be convenient to define "computably justifiable", meaning that if a statement is true, it is computably decidable. (For example, "there exists a place in the decimal expansion of pi at which the digit 9 is repeated 24 times" is verifiable if it is true (we can simply calculate the digits of pi until we find the spot), but if it is false, it is not.) "Computably falsifiable" similarly means that a statement can be falsified finitely if it is indeed false. "Probabilistically justifiable" would mean that a statement is probabilistically decidable if it is true, and "probabilistically falsifiable" would mean that it is probabilistically decidable if it is false.
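Here is what "computably justifiable" looks like procedurally. This is my own sketch, and I've swapped in an arbitrary existential statement with a decidable matrix rather than streaming the digits of pi, but the shape is the same: the search halts exactly when the statement is true.

```python
# A minimal sketch of "computably justifiable" (verifiable if true): an unbounded
# search that halts, with a witness, exactly when an existential statement is true.
# If the statement is false, the search simply runs forever -- no refutation.

from itertools import count
from math import factorial, isqrt

def find_witness(p):
    """Semi-decision procedure for 'there exists n such that p(n)'."""
    for n in count():
        if p(n):
            return n  # halts iff the statement is true

# Illustrative statement: "there exists n > 1 such that n! + 1 is a perfect square."
def p(n: int) -> bool:
    if n <= 1:
        return False
    m = factorial(n) + 1
    r = isqrt(m)
    return r * r == m

print(find_witness(p))  # prints 4, since 4! + 1 = 25 = 5**2
```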
The basics are all set up, so let's get to the main show.
The question is: what statements are probabilistically justifiable?
I think I have mentioned before that the delta-2 statements in the arithmetical hierarchy are definitely probabilistically decidable, since they are limit-computable. For statements not in delta-2, there is no guarantee that a probabilistic decision process will converge to the correct value (it might not converge, or worse, it might converge to the wrong value). Still, it is an interesting question whether *some* of these converge, and whether applying probabilistic methods is still better than guessing randomly (that is, whether more of them converge correctly than converge incorrectly). But that is not the question for today...
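For concreteness, here is a minimal sketch of what limit-computability buys us, using a universal (pi-1) statement as the example. Nothing here is specific to my proposal; it's just the standard trial-and-error picture.

```python
# A minimal sketch of limit-computability: a guesser for "for all n, p(n)" whose
# guesses converge to the correct truth value, even though no finite stage ever
# announces that the answer is settled. This is the sense in which delta-2
# statements admit a convergent (probabilistic-style) decision process.

def limit_guesses(p, stages: int):
    guess = True  # start by guessing the universal statement is true
    for s in range(stages):
        if guess and not p(s):
            guess = False  # a counterexample flips the guess, permanently
        yield guess

# Example: "every number is less than 10" -- false, and the guesses settle on False.
print(list(limit_guesses(lambda n: n < 10, 15)))
```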
Previously, I have restricted this question to statements in arithmetic. This seems sensible. However, lately I've been looking at the concept of proof-theoretic strength. Roughly, one logic is considered stronger than another if it proves everything the other proves, plus more. This brought my attention to the idea of probabilistically justifying statements in higher logics based on their relevance to arithmetic. Could set theory, for example, be justified by its usefulness in describing true statements of arithmetic?
This led to a question: are there any statements that can be made which purely assert things about arithmetic, but cannot be formulated in the language of first-order arithmetic? Obviously arithmetic can assert any single thing about arithmetic, so the question is whether all combinations of arithmetic statements that can be asserted by some logic can be asserted in arithmetic. ("Combinations of statements" refers to the same thing that "truth functions of statements" referred to in the earlier definitions. Arithmetic can obviously specify any finite truth-functional combination; the challenge comes when we examine infinite truth-functional combinations, such as those that are probabilistically justifiable.)
More formally:
Definition: Logic B is "effectively complete" with respect to logic A if and only if, for every computably enumerable set S of statements in logic A, there exists a statement in logic B which implies exactly the closure of S under the deductive consequence relation of B. (We need to close the set because we can't penalize logic B for being unable to imply x&y without automatically implying y, for example.)
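To have it on the page in symbols (my own notation, nothing standard): writing $L_A$ for the statements of logic A, $\vdash_B$ for deduction in logic B, and $\mathrm{Cn}_B$ for closure under $\vdash_B$, the definition reads

$$B \text{ is effectively complete w.r.t. } A \;\iff\; \text{for every c.e. } S \subseteq L_A,\ \exists \varphi \in L_B:\ \{\psi : \varphi \vdash_B \psi\} = \mathrm{Cn}_B(S).$$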
Conjecture: Arithmetic is effectively complete with respect to itself.
Whether this is true or false, there are some interesting consequences. I'll note some of them in both directions, but keep in mind that (since only one direction is true) half of the consequences I mention will be false.
If the conjecture is false, then even if we are suspicious of mathematical entities other than numbers, it seems that we'll have good reason to want a more mathematically expressive language than first-order arithmetic. The notion of "effective completeness" as I defined it is fairly weak; it does not seem at all implausible to say that if we care about a domain, we should want a logic which is effectively complete with respect to it. So, if arithmetic is not, it seems like another logic is needed. Furthermore, these new statements may have some hope of probabilistic justification.
This does not necessarily justify set theory. It most certainly doesn't justify a Platonist attitude towards sets (i.e., believing in sets as actual entities rather than mere symbol-manipulation). Still, it starts the climb towards abstraction, and that's a slippery slope. (Ha ha, mixed metaphor.)
If the conjecture is true, on the other hand, that means first-order arithmetic can state anything that can be effectively stated. (That qualification, "effectively", means: arithmetic can state any combination of statements that some computer program could generate in a (possibly nonterminating) sequence.) This would mean that any higher logic, in its capacity to refer to combinations of arithmetic statements, is unnecessary; in some sense it is already hiding within arithmetic, so it doesn't need to be invented separately. Any higher-logic statement that could be justified probabilistically via its relationship to arithmetic could actually be interpreted as some statement in arithmetic: once we found the sentence's interpretation, we could not distinguish between the two in terms of their meaning.
So which is true?
I spent quite a while pondering it, and went down a few wrong paths trying to find the answer, but I won't waste time describing those. Eventually, I realized that I had an example right in front of me of a sentence that is never actually stated in the language of first-order arithmetic, yet asserts an effectively enumerable set of such statements: mathematical induction!
Mathematical induction is always axiomatized in Peano arithmetic with the aid of an axiom schema. An axiom schema is an axiom with a hole in it, into which is supposed to be placed every possible statement that can be made in the system. Here is the induction schema:
"If X[0] and X[n]->X[n+1] for all n, then X[n] for all n."
In other words, if we can prove that X holds for zero, and that whenever X holds for a number it holds for the next number, then we have proved that X holds for all numbers.
"X" is the hole into which any particular statement that can be made about a number is to be dropped. This technique, making a statement about all statements, isn't actually avaliable to us in first-order arithmetic. Officially, the axiom schema is a stand-in for an infinite number of axioms.
So, you see, I realized that if everyone uses an axiom schema, it's probably because we can't summarize that infinite collection in a single statement of arithmetic. Furthermore, I realized that someone had probably proven that fact by now. So I looked it up, and it's true. My conjecture was false.
This means that higher logics are necessary, in a fairly strong sense. Which logics? Well, adding the truth predicate for arithmetic (i.e., jumping to the metalanguage for arithmetic) gets us effective completeness with respect to first-order arithmetic. If we really were only concerned with arithmetic, we could stop there. But this new language will (I suspect) still not be effectively complete with respect to itself. So we could add a third layer to the cake, using a truth predicate for the second logic to form a meta-meta-language that would be effectively complete for it... probably, we can continue in this manner through the entire Tarski hierarchy of truth predicates. But instead, we could tackle head-on the problem of creating a language that is effectively complete with respect to itself.
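Roughly, and glossing over coding details, the reason the truth predicate is enough: any computably enumerable set of arithmetic sentences is $W_e$, the output set of the $e$-th program, and membership in $W_e$ is itself definable by an arithmetic formula. So the single metalanguage sentence

$$\forall x\,\big(x \in W_e \rightarrow \mathrm{True}(x)\big)$$

asserts every sentence in $W_e$ at once, which is the heart of what effective completeness demands.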
Such a thing might be impossible, I admit... it might lead to paradox in the same way that a language containing its own truth predicate can.
My idea is that, in addition to some standard stock of machinery (conjunction, negation, a domain such as arithmetic to talk about...), we add a Turing-complete representation for constructing statement-enumerators. A statement-enumerator would be treated as the combined assertion of all the statements it generates. These generators would be capable of generating other generator-sentences (otherwise this would just be another metalanguage).
A generator would be true if all its generated sentences were.
Sure enough, we can construct a generator that generates only its own negation. Assuming classical truth values, paradox ensues. So, we need to allow some sentences to be neither true nor false...
Same old game.
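For concreteness, here is the diagonal construction as a little sketch (my own rendering, with sentences as plain Python objects rather than anything formal):

```python
# A minimal sketch of the self-defeating generator: a generator-sentence whose
# only generated sentence is its own negation. A Gen holds a thunk that
# enumerates the sentences it asserts.

class Not:
    def __init__(self, sentence):
        self.sentence = sentence

class Gen:
    def __init__(self, enumerate_sentences):
        self.enumerate_sentences = enumerate_sentences  # thunk returning an iterable

def diagonal():
    # Self-reference in the spirit of the recursion theorem: the generator's
    # program gets hold of the generator itself and yields its negation.
    g = Gen(lambda: [Not(g)])
    return g

g = diagonal()
(only,) = g.enumerate_sentences()
print(only.sentence is g)  # True: g asserts exactly "not g"

# If g is true, everything it generates is true, so "not g" is true: contradiction.
# If g is false, something it generates is false, so "not g" is false, i.e. g is true.
```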
Edit-
Actually, this scheme might fail only because I'm overstepping what is needed for effective completeness. Specifically, the stipulation that a generator is true if all its generated sentences are true seems convenient, but is not necessary.