Monday, July 21, 2008

New Grounding Results

In an earlier post I detailed a different (and hopefully better) grounding requirement for an AI logic. I've now come up with a logic that satisfies that requirement, based on the speculations in that post.

This logic is nicely simple. All I do is add lambda calculus to first-order logic. The lambda calculus serves as a new way of notating terms, playing the role that function symbols normally play. The difference is that lambda calculus is able to uniquely define computations, while ordinary function symbols cannot.
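To make the contrast concrete (a schematic illustration, in notation of my own choosing): a function symbol is constrained only by whatever axioms we assert about it, while a lambda term carries its computation with it.

    double          -- a function symbol; the axiom forall x. double(x) = plus(x, x)
                    -- constrains it only as far as the axioms reach
    (λx. plus x x)  -- a lambda term; it is the doubling computation itself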

The reason first-order logic cannot uniquely define computations is that it cannot rule out nonstandard interpretations. Let's say we try to define some method of computation (such as lambda calculus, Turing machines, etc.). Determining the next step in the computation is always very simple; it wouldn't be a very useful definition of computation otherwise. And since a computation is nothing but a series of these simple steps, it might seem like all we need to do is define what happens in a single step. Once we do that, the logic knows what to do next at each stage, and can carry any particular computation from start to finish without error.
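To see just how simple a single step is, here is a minimal sketch in Python (my own illustration, not part of the logic itself): a one-step normal-order reducer for the untyped lambda calculus, using de Bruijn indices to sidestep variable capture.

    from dataclasses import dataclass
    from typing import Optional, Union

    # Untyped lambda-calculus terms, with de Bruijn indices for variables.
    @dataclass(frozen=True)
    class Var:
        n: int                      # index of the binder, counting outward

    @dataclass(frozen=True)
    class Lam:
        body: "Term"

    @dataclass(frozen=True)
    class App:
        f: "Term"
        a: "Term"

    Term = Union[Var, Lam, App]

    def shift(d: int, c: int, t: Term) -> Term:
        """Add d to every variable index at or above the cutoff c."""
        if isinstance(t, Var):
            return Var(t.n + d) if t.n >= c else t
        if isinstance(t, Lam):
            return Lam(shift(d, c + 1, t.body))
        return App(shift(d, c, t.f), shift(d, c, t.a))

    def subst(j: int, s: Term, t: Term) -> Term:
        """Replace variable j in t with s, avoiding capture."""
        if isinstance(t, Var):
            return s if t.n == j else t
        if isinstance(t, Lam):
            return Lam(subst(j + 1, shift(1, 0, s), t.body))
        return App(subst(j, s, t.f), subst(j, s, t.a))

    def step(t: Term) -> Optional[Term]:
        """Perform one normal-order reduction step; None if t is a normal form."""
        if isinstance(t, App):
            if isinstance(t.f, Lam):    # a beta-redex: (λx. body) arg
                return shift(-1, 0, subst(0, shift(1, 0, t.a), t.f.body))
            f2 = step(t.f)
            if f2 is not None:
                return App(f2, t.a)
            a2 = step(t.a)
            if a2 is not None:
                return App(t.f, a2)
        if isinstance(t, Lam):
            b2 = step(t.body)
            if b2 is not None:
                return Lam(b2)
        return None

Each call to step is elementary; a whole computation is nothing more than iterating it.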

The problem here is that we are only specifying what the computation does do, not what it doesn't. In other words, the first-order logic will carry out the computation correctly, but it will not "know" that the steps it is taking are the only steps the computation takes, or that the result it gets is the only result of that computation. This may sound inane, but first-order logic makes no assumptions. It only knows what we tell it.
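For instance (a schematic illustration with made-up constant symbols), suppose we assert the positive step facts

    Step(s0, s1).   Step(s1, s2).   Step(s2, s3).   ...

An interpretation that satisfies all of these and also makes Step(s1, junk) true violates nothing we have said; the positive facts alone can never contradict the extra transition.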

The problem is that, with first-order logic alone, there is no way to say what we need to say here. If there were an "and that's all" operator, we could list the positive facts about computations and then say "that's all." The logic would then know that any computational step not derivable from those basic steps never happens. But there is no such operator. I can tell the logic various things about what the computation does not do, but I will never finish the list.
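Continuing the schematic example, the negative facts we would need form an endless list:

    ¬Step(s1, s0).   ¬Step(s1, s1).   ¬Step(s1, s3).   ¬Step(s1, junk).   ...

and that is only for the state s1.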

Another way of putting this is that we can positively characterize computation, but we cannot negatively characterize it. Why not? First-order logic is complete: the deduction rules can derive every logical consequence of the statements we make. If we could state all the negative facts, the deduction rules could find all of their consequences; in particular, for every computation that never halts, the statement that it never halts would be provable, and we could find the proof by mechanically enumerating all proofs. But we can already detect halting computations by simply running them. Running a computation while simultaneously searching for a proof that it never halts would decide the halting problem, and we know that is impossible. So a complete negative characterization must be impossible.

In a sense, the reason we can't characterize computations negatively is that first-order logic makes an open-world assumption: if the deduction rules prove a statement neither true nor false, it could be either. This is as opposed to a closed-world assumption, under which any statement not proven true must be false.
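A toy contrast in Python (with made-up predicate and constant names):

    # The only fact we have asserted.
    facts = {("Step", "s0", "s1")}

    def holds_closed_world(atom):
        # Closed-world assumption: anything not derivable (here, not listed)
        # is taken to be false.
        return atom in facts

    print(holds_closed_world(("Step", "s0", "s1")))   # True
    print(holds_closed_world(("Step", "s0", "s2")))   # False under a closed world;
                                                      # merely undetermined under an open one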

So how does inserting lambda calculus into the mess help?

Lambda calculus is sufficiently closed to solve the problem. While a first-order theory can be added to without changing the meaning of what's already been stated, a program (stated in lambda calculus) cannot be altered freely. If we modify it, we simply have a different program on our hands. This means we can completely characterize many things that first-order logic alone cannot.
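Reusing the step function sketched earlier, a closed term's reduction trace is completely determined, and there is no way to "add" an extra step without writing down a different term:

    # (λx. x x)(λy. y) reduces deterministically to λy. y.
    t = App(Lam(App(Var(0), Var(0))), Lam(Var(0)))
    while t is not None:
        print(t)        # each state of the trace is forced by the previous one
        t = step(t)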

Fleshing out some more details of the logic:

-The deduction system should be a mix of a regular first-order deduction system and a set of rules for the normal manipulation of the lambda calculus, as opposed to axioms describing how to manipulate the lambda calculus. Axioms would be first-order statements, and so would do us no good with respect to my grounding requirement: they would not negatively characterize anything. The rules do negatively characterize, because they cannot be added to. (This is a petty distinction in a way; in practice it wouldn't matter much, since nobody would feed a running system extra facts about lambda-calculus manipulation. But they could, and the system would accept those facts, whereas they cannot add new rules without changing the system itself. The sketch below illustrates the difference.)
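A toy contrast in Python (my own caricature, not the actual system): axioms are data the system consumes, while rules are code the system is.

    # Axioms live in a mutable collection: anyone can add another formula,
    # and the system will accept it without complaint.
    axioms = {"forall x. Equals(x, x)"}
    axioms.add("Reduces(app(lam(x, x), a), a)")    # a user-supplied reduction fact

    # A rule lives in the system's own code: it behaves the same way no matter
    # what is added to the axiom set, and cannot be extended from inside.
    def beta_rule(t):
        return step(t)    # the one-step reducer from the earlier sketch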

-There is a question of how much to tell the first-order logic about its embedded lambda calculus. We could tell it nothing. But it seems practical for the system to have explicit first-order knowledge about the lambda terms, so that it can deduce facts about their manipulation (perhaps deducing the results of whole classes of calculations more quickly than it could by running each one). However, I have already said that we could keep telling the system more and more facts about computation and never finish. So where do we stop? I have two ideas here. First, it seems reasonable to tell the system only the positive facts about the lambda calculus, since the negative ones are the ones that cause trouble; a few basic negative facts could be added at whim, if desired. This is very flexible. Second, we do not need to limit ourselves to first-order logic. We have lambda calculus at our disposal, and we can use it! In addition to placing the rules of manipulation outside the system, as deduction rules, we could encode them within the system, using lambda calculus. This would allow the system to "know the deduction rules" even though they cannot be completely characterized in first-order logic.
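A sketch of the first idea, reusing the Python model above (the predicate names StepsTo and NormalForm are my own invention): the system's positive first-order knowledge can be generated by actually running terms.

    def positive_facts(t, limit=100):
        """Produce positive reduction facts by running a term for a while."""
        facts = []
        for _ in range(limit):
            t2 = step(t)
            if t2 is None:
                facts.append(("NormalForm", t))    # t is its own result
                break
            facts.append(("StepsTo", t, t2))       # one observed reduction step
            t = t2
        return facts

Nothing here asserts that these are the only steps; that guarantee stays in the rules (and, per the second idea, in lambda terms that encode the rules themselves).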
