Friday, April 10, 2009

Enumerating Infinity

What are we doing when we list infinities? Particularly, I take ordinal infinities as my example.

The most natural ordinals, the ones people will tend to stay within if you give them the basic definitions and ask them to invent bigger and bigger ordinals, are the recursive ordinals. In fact, people will tend to stay within what I'd call the "primitive recursive" ordinals:

infinity, infinity + 1, infinity + 2, .... infinity + infinity (= 2*infinity), 3*infinity, 4*infinity, ... infinity*infinity (= infinity^2), infinity^3... infinity^infinity (=infinity^^2), infinity^(infinity^infinity) (=infinity^^3), infinity^(infinity^(infinity^infinity)) (=infinity^^4) ...... infinity^^infinity ..... infinity^^^infinity ..... infinity^^^^infinity .....

The above uses up-arrow notation.
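
For finite arguments, up-arrow notation is just Knuth's hyperoperation sequence; here is a quick Python sketch, just to pin the notation down (the function name and base cases are my own choices for illustration):

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow: a followed by n arrows and then b, for finite a, b."""
    if n == 1:
        return a ** b          # one arrow is ordinary exponentiation
    if b == 0:
        return 1               # base case of the hyperoperation recursion
    # n arrows applied to b is (n-1) arrows applied to the (b-1)-arrow result
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# infinity^^3 in the list above corresponds, for a finite base, to e.g. 3^^3:
print(up_arrow(3, 2, 3))       # 3^(3^3) = 7625597484987
```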

Enumerating computable ordinals involves finding halting computations, such as the numerical operations used above. Sticking infinity into a computable function basically stands for the infinite series produced by sticking finite numbers in that same location. So just "infinity" stands for:

1, 2, 3, 4, 5, ....

We can define + as the concatenation of two different orderings, which leaves the internals of the orderings untouched but specifies that all elements from the second will be larger than any elements from the first. So, "infinity + 1" stands for:

1, 2, 3, 4, .... A.

Here, "A" is a pseudo-number larger than all the finite numbers. (In other words, X is the first infinite number.)

"infinity + infinity":

1, 2, 3, ... A, A+1, A+2, A+3 ...

Here, there are essentially two infinite series: the one that starts with 1, and the one that starts with A.

"infinity + infinity + infinity":

1, 2, 3, ... A, A+1, A+2, ... B, B+1, B+2 ...

Now there are three series: the ones starting with 1, A, and B.

"infinity * infinity":

1, 2, 3, ... A, A+1, A+2 ... B, B+1 ... C, C+1 ... D, D+1 ... ...

Now we have an infinite number of series.
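
One way to see that this is still a computable ordering: code each element as a pair (series, position), so that (0, j) is the j-th finite number, (1, j) is A+j, (2, j) is B+j, and so on, and compare pairs lexicographically. A quick sketch (the pair encoding is mine):

```python
# An element of "infinity * infinity" as a pair (series, position):
# (0, j) is the j-th finite number, (1, j) is A+j, (2, j) is B+j, and so on.
# Lexicographic comparison of pairs is a computable well-ordering of type
# omega*omega: the ordering is infinite, but each comparison is a finite check.

def omega_squared_less(p, q):
    return p < q        # Python tuple comparison is exactly lexicographic order

print(omega_squared_less((0, 1000000), (1, 0)))   # True: every finite number is below A
print(omega_squared_less((2, 5), (2, 7)))         # True: B+5 is below B+7
```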

And so on. We can imagine each computable function as a method of specifying an infinite tree. The tree is "computable" because we can generate any finite portion of it using the computable function. Roughly speaking, the more tangled, structured, and multifarious the branching of the tree, the larger the ordinal. But if we just generate computations at random, they may not halt; so we've got to put in as much tangledness as we can without putting in so much that things become ill-defined. This can potentially draw on any knowledge we've got about the halting problem.

If this were all there was to enumerating infinities, then I would say that something like probabilistic mathematical truth explains what it is we're doing. However, mathematicians (especially logicians) talk about ordinals that are not computable. These include Kleene's O, the first uncomputable ordinal (strictly speaking, O is Kleene's system of notations for the computable ordinals; the ordinal itself is the Church-Kleene ordinal, the order type of all the computable ordinals), as well as many higher countable ordinals; the first uncountable ordinal, which is of course just the ordering of all countable ordinals; and so on.

To make the initial jump outside of the recursive ordinals, to Kleene's O, we need to make an interesting sort of move: we admit our ignorance. We give up the hope of being able to enumerate every ordinal on the way, and make do with saying "if the function halts, then the ordinal is well-defined". Since we will never have all the information about which functions halt, we'll always be somewhat ignorant of which of these ordinals are well-defined. Yet we look at their ordering as a mathematical entity, and start talking about it.
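
Concretely, the most we can ever do with a candidate notation is run the underlying computation for a while: if it halts, we know the ordinal is well-defined; otherwise we simply remain ignorant. A toy sketch of that situation (modelling "programs" as Python generators that yield once per step is my own convention here):

```python
# We can never decide halting in general; the best we can do is run a candidate
# computation for a bounded number of steps and report either "it halted, so
# the ordinal it names is well-defined" or "don't know yet".

def check_well_defined(program, step_budget):
    steps = program()
    for _ in range(step_budget):
        try:
            next(steps)
        except StopIteration:
            return "well-defined (the computation halted)"
    return "unknown (budget exhausted; we stay ignorant)"

def halts_quickly():        # a computation that stops after a few steps
    for _ in range(10):
        yield

def loops_forever():        # a computation that never stops
    while True:
        yield

print(check_well_defined(halts_quickly, 100))   # well-defined
print(check_well_defined(loops_forever, 100))   # unknown
```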

This gives the impression that we'll need to give up more and more if we want to climb higher in the ordinals. But how much can we give up before we've given everything up?

I propose that, in general, the process of listing ordinals is a process of deciding which things are well-defined. If we give that up, we've given up too much.

Here, "well-defined" means "having a description on some level of Tarski's hierarchy of truth", or alternatively, "having a description on some level of my hierarchy of nonsense".

Of course, this is circular. Those hierarchies are both defined in terms of ordinals, so defining ordinals in terms of them appears unproductive. However, it is not completely unproductive. Let's take Tarski's hierarchy as the working example. Let 0 represent first-order logic, 1 the theory of truth for that (specifically, the theory with enough axioms to be equivalent in strength to Peano Arithmetic), 2 the theory of truth-in-1, and so on.

The thing I want to note here is that the ordinal assigned to a level in the hierarchy is far lower than the largest ordinals whose existence can be proven in that theory. Suppose I lay down axioms for Tarski's hierarchy in terms of ordinals, and then lay down axioms for ordinals which require definability in Tarski's hierarchy. It seems that I'll get a large number of ordinals in this manner. If I start out believing that the ordinal 1 is well-defined, then I'll believe all the ordinals proven well-defined by Peano Arithmetic are well-defined. That is a rather large number of ordinals. Since I believe in them, I'll believe in all the levels of the Tarski hierarchy corresponding to them... lots of levels! This gives me many more ordinals to believe in, which gives me many more levels to believe in, and so on.
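
The shape of this feedback loop is easy to sketch, even though the real "provable at level n" relation is nothing we could actually compute. In the toy version below, ordinals_provable_at is a made-up stand-in (a fast-growing function) just so the closure process can be run at all:

```python
# Toy sketch of the feedback between believed ordinals and Tarski levels.
# ordinals_provable_at is a made-up stand-in: the real relation ("the ordinals
# this level's theory proves well-defined") is not computable.

def ordinals_provable_at(level):
    # Pretend level n proves the ordinals 0 .. 2**n well-defined.
    return set(range(2 ** level + 1))

def close_beliefs(iterations=3):
    believed_ordinals = {1}                        # start out believing ordinal 1 is well-defined
    for _ in range(iterations):
        believed_levels = set(believed_ordinals)   # each believed ordinal indexes a level I accept
        new_ordinals = set()
        for level in believed_levels:              # each accepted level proves more ordinals
            new_ordinals |= ordinals_provable_at(level)
        if new_ordinals <= believed_ordinals:      # fixed point: nothing new to believe
            break
        believed_ordinals |= new_ordinals
    return believed_ordinals

print(max(close_beliefs()))   # the beliefs snowball: more ordinals, more levels, more ordinals...
```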

Of course, this stops somewhere (in the same sense that counting up stops somewhere...). It will only imply the existence of so many ordinals, assuming that it is a consistent theory. Furthermore, if it is consistent, then I can name an ordinal whose existence it does not imply: the ordering of all the ordinals the system talks about. Let's call this the "outside ordinal" for the system. (This is a bit trickier to specify than it looks at first, however. We can't just say "the ordering of all ordinals the system will consider well-defined". The system will have gaps in its knowledge. For example, it will prove a bunch of recursive ordinals to be well-defined, but not all of them; it then jumps to Kleene's O, because it can talk about the set of well-defined recursive ordinals. Even more clearly: the system might be able to prove that the first uncountable ordinal is well-defined, but it will not be able to prove that all ordinals below it are well-defined... there are uncountably many of them!)

The main precaution that must be taken is to prevent the system from taking "the ordering over all ordinals" to be an ordinal. This is like me saying "Consider the set of all well-defined ordinals. Consider the ordering on these as an ordinal, Q. Take Q + 1..." This is not allowed!

OK. Given that, let's think about what happens when we add probabilistic justification to the system. We can think of the system as (eventually) knowing the truth about the halting problem (for any particular instance). This means that it is (eventually) correct in its judgements about well-definedness for computable ordinals. Thanks to the feedback effect of the system, this will mean that it is (eventually) correct in its judgements concerning a whole lot of ordinals. All of them? No: just as there are ordinal notations for which the halting problem determines well-definedness, there are ordinal notations for which the convergence problem determines well-definedness. (For a definition of the convergence problem, see here.) Still, this is an interesting class of ordinals.

So how could one go even further? Well, perhaps we could consider the "outside ordinal" for the version of the system that knows all the halting truths...
