Gödel incompleteness is not the full measure of the incompleteness of formal systems.
To begin this argument, I need to talk about Skolem's Paradox, an intriguing result concerning the foundation of mathematics.
Skolem's Paradox shows that the logical constructions of set theory will always be insufficient, in a way slightly worse than Gödel's incompleteness: any consistent logical axiomatization of set theory has a countable model, even though the theory itself proves that uncountable sets exist. The problem is that any logical characterization of set theory fails to characterize sets completely. The same goes for many other entities. Any attempt at a logical description of the notion "Turing machine" will similarly fail; so will "number", "connectedness", "higher-order predicate", and "logic" itself, to name a few. None of these can be fully described logically.
The basis for this claim lies in the fact that these entities cannot be described in first-order logic, which (if you haven't heard of it) is a restricted logic that doesn't fall prey to Gödel's incompleteness: its deduction rules are complete (this is Gödel's completeness theorem), essentially because it is not powerful enough to represent its own rules and so avoids self-reference. First-order logic is often taught as if it were Logic, period. In fact, some have argued that this is the truth: after all, what is a logic without a complete set of deduction rules? However, first-order logic is too weak to describe the majority of mathematical entities.
Nonetheless, mathematicians try. What Skolem did was essentially call them on the error, using a nice theorem he and Löwenheim had developed, now known as the Löwenheim-Skolem theorem. In fact, the same error is committed *whenever* a logic attempts to be anything more than first-order: all you're really doing is trying to make first-order logic talk about higher-order entities. (In a sense, this is *why* Gödel's theorem is true; that is, it's *why* the deduction rules are incomplete: they can't ever really do anything more than first-order logic.)
But this leaves a big gaping hole. How DO we reason about mathematical objects? If logical constructions fail, what are we doing? Math SEEMS logical. The easy way out is to act like humans have a magical ability to talk about these things, while any formal construction is doomed to fail. Luckily, this is not the only way out.
But the other way out is *much* more complicated! Sorry.
The other way begins with the notion of "limit computation". This is an idea from theoretical computer science which (sort of) allows a computer to calculate things that it cannot normally calculate. The prime example of something a computer cannot calculate is the halting problem: deciding, for an arbitrary program, whether that program eventually halts or runs forever.
The halting problem is very much like Gödel's theorem; the two seem to me to be the same result developed in different settings. Gödel's theorem says that there will be truths about formal logic that formal logic cannot deduce, while the halting problem shows that there will be facts about computer programs that computer programs are unable to calculate.
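To see why no program can decide halting, the classic diagonal argument can be sketched in a few lines of Python. The names here (`make_spoiler`, `would_halt`) are my own inventions for illustration: `would_halt` stands in for any claimed halting decider, and the point is that we can always build a program that does the opposite of whatever the decider predicts about it.

```python
def make_spoiler(would_halt):
    """Given a claimed halting decider `would_halt(program)`, build a
    program that does the opposite of whatever the decider predicts."""
    def spoiler():
        if would_halt(spoiler):
            while True:   # predicted to halt, so loop forever
                pass
        # predicted to run forever, so halt immediately
    return spoiler

# A decider that always predicts "runs forever" is refuted: the spoiler
# consults it, hears "runs forever", and promptly halts.
pessimist = lambda program: False
make_spoiler(pessimist)()  # returns immediately, contradicting the prediction
```

Whatever decider you plug in, the spoiler built from it is a program on which that decider is wrong, so no correct decider can exist.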
Perhaps it seems odd that we can have a mathematically well-defined value but be unable to compute it, just as it seems odd that we have well-defined mathematical entities with no proper logical characterization. The intriguing thing, though, is that the halting problem is "limit-computable".
A limit-computer is a computer that is allowed to revise its output. The program never halts; instead, the output "converges" to the correct answer.
This convergence occurs in a finite amount of time, but the problem is that we don't know for sure *when* it has occurred; so to get guaranteed results, we've got to wait forever. But giving up the sureness of finite-time computation allows us to construct programs that do what others cannot.
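As a toy sketch of this in Python (with generators standing in for programs: a "program" halts when its generator is exhausted), the halting problem becomes limit-computable. The guessing scheme is the obvious one: guess "doesn't halt", and revise to "halts" if the simulation ever finishes.

```python
from itertools import count, islice

def limit_halts(program):
    """Limit-compute whether `program` (a function returning an iterator)
    halts. Yields an infinite stream of revisable guesses that converges
    to the correct answer -- though we never know *when* it has converged."""
    steps = iter(program())
    halted = False
    while True:
        if not halted:
            try:
                next(steps)       # simulate one more step
            except StopIteration:
                halted = True     # it halted: revise the guess
        yield halted

def halts_after_three():
    yield from range(3)

print(list(islice(limit_halts(halts_after_three), 6)))  # [False]*3, then True
print(list(islice(limit_halts(count), 6)))              # False forever
```

If the program halts, the guess stream flips to True at some finite stage and never changes again; if it runs forever, the stream is correct from the start. Either way it converges, but an onlooker at any finite stage can't distinguish "still False because it never halts" from "still False because it hasn't halted *yet*".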
So, translate this back to logic. We can increase the number of entities we can reason about by allowing facts to be revised: we may make a wrong inference, so long as any wrong inferences will be corrected along the way. This is called a trial-and-error predicate.
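For instance (another toy Python sketch), the universally quantified claim "f(n) = 0 for every n" cannot be decided by inspecting finitely many values, but it is a trial-and-error predicate: assert it tentatively, and retract the assertion if a counterexample ever turns up.

```python
from itertools import islice

def all_zero(f):
    """Trial-and-error predicate for 'f(n) == 0 for all n': an infinite
    stream of revisable verdicts. A wrong verdict of True is eventually
    corrected; a verdict of False is never retracted."""
    verdict = True
    n = 0
    while True:
        if verdict and f(n) != 0:
            verdict = False   # counterexample found: revise
        yield verdict
        n += 1

print(list(islice(all_zero(lambda n: 0), 4)))       # stays True
print(list(islice(all_zero(lambda n: n == 5), 8)))  # True until n = 5, then False
```

This is exactly the logical shape of a fallible inference: the predicate is asserted on the evidence so far, and any wrong assertion is guaranteed to be corrected at some finite stage.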
But there's more! :)
Just as no halting program can solve the halting problem, no converging program can solve the "convergence problem". Just as we can ask if a normal program halts, we can ask if a limit-program converges to an answer or keeps revising its answer forever.
But just as we can solve the halting problem by resorting to limit-computers, we can solve the convergence problem by resorting to an augmented limit-computer that has access to a second limit-computer, meaning it can run as many limit-computations as it likes. Equivalently, we can give the computer as many infinities of time as it needs to converge, rather than just one (which again amounts to letting it run as many limit-computations as it likes). In fact, we can give a computer larger and larger infinities of computation time, resulting in the ability to compute more and more things.
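Here is a toy Python sketch of that one-level-up structure. Convergence ("the guess stream eventually stops changing") is limit-computable *relative to* an oracle answering "does this stream change again after position n?". That oracle is uncomputable in general; below it is stubbed out for a tiny class of stream descriptions, purely to exhibit the shape of the construction. Every name here is my own invention, not standard terminology.

```python
from itertools import islice

def changes_after(desc, n):
    """Stub oracle for 'does the described stream change after position n?'.
    Only works for two toy stream descriptions; in general this question is
    uncomputable, which is exactly why a plain limit-computer can't do this."""
    if desc[0] == "alternating":          # 0, 1, 0, 1, ... changes forever
        return True
    _, prefix, final = desc               # finite prefix, then constant
    seq = prefix + [final]
    for i in range(len(seq) - 1, 0, -1):
        if seq[i] != seq[i - 1]:
            return n < i                  # i is the last change position
    return False

def limit_converges(desc):
    """Limit-compute, relative to the oracle, whether the described guess
    stream converges: guess 'diverges', and revise once the oracle confirms
    the stream never changes past some point."""
    guess = False
    n = 0
    while True:
        if not guess and not changes_after(desc, n):
            guess = True
        yield guess
        n += 1

settles = ("eventually_constant", [0, 1, 0], 1)
print(list(islice(limit_converges(settles), 6)))           # revises to True
print(list(islice(limit_converges(("alternating",)), 6)))  # stays False
```

Notice the mirror-image of the original halting construction: one level up, "halts" has become "converges", and the extra power is hidden entirely inside the oracle call.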
The question arises: if we give the computer "as large an infinity as it likes", can we compute any mathematically well-defined value? I do not know the answer. But it *sounds* reasonable...
If we're willing to grant this wild assumption, then we can again transfer this development to the logical domain. Essentially, we allow trial-and-error predicates to be defined in terms of other trial-and-error predicates. This gives up the guarantee that all fallible inferences will be eventually corrected (unless by "eventually" we mean "after arbitrarily large infinities of time have passed").
Why is all this in a blog about AI?
Well, if I'm right, then the "magical" quality that humans possess and formal systems do not is the ability to make fallible inferences. An AI based on infallible logic would be unable to understand mathematics, but an AI that included a good fallible reasoning system would be able to. Perhaps this comes automatically with any good learning algorithm, but perhaps not; perhaps only learning systems with very specific properties are sufficient. This needs further research! One avenue is "nonmonotonic logic", which is very similar to the logic I'm proposing.
However, standard nonmonotonic logic doesn't have quite as much machinery as I want... I suspect it is equivalent to ordinary limit-computation, rather than to the forms of computation involving larger infinities.
But that's enough speculation for today.