## Tuesday, July 29, 2008

What's Up?

Last time, I gave a fairly simple logic that satisfies me in its ability to refer to mathematical concepts in what mathematicians call the Arithmetical Hierarchy. Fun. But perhaps you think something is fishy: it isn't too hard to design a deduction system that uses the logic I've described, but it doesn't seem able to do anything that a normal logic engine can't. What's up?

Perhaps you're thinking that I'm being a bit philosophical, worrying about something that doesn't actually matter to people trying to implement an AI system. The issue sounds pretty abstract: "Is the logic grounded? Does it refer?"

Well, anyway, here's why it actually matters.

The logic doesn't do much by itself. But it can be conveniently augmented with a nonmonotonic reasoning system like the one I talked about previously. What's nice is that the nonmonotonic theory is automatically specified by the structure of the first-order-logic-plus-lambda-calculus theory, which automatically takes care of some issues that were troubling me. (If I wanted to make that nonmonotonic logic look like a normal nonmonotonic logic, I would need to add some strange control operators specifying a nesting of nonmonotonicity corresponding to the arithmetical levels. Maybe that is OK. But it gets worse if you ask what happens when someone comes along and arranges those nesting operators in a way that creates recursion.)
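To make the idea of treating a quantified statement nonmonotonically concrete, here is a minimal sketch, assuming the simplest possible scheme: believe a universal (Pi_1) statement by default, and retract the belief if a bounded search turns up a counterexample. This is my own toy illustration, not the actual procedure from the earlier post.

```python
def believe_forall(phi, bound):
    """Defeasibly evaluate 'forall n. phi(n)'.

    Returns (belief, counterexample). The belief is nonmonotonic: it
    only reflects the evidence gathered up to `bound`, and a larger
    search may later force a retraction.
    """
    for n in range(bound):
        if not phi(n):
            return (False, n)   # retracted: a counterexample was found
    return (True, None)         # held by default, pending further search

# "Every number is less than 5": believed with a shallow search,
# retracted once the search reaches 5.
shallow = believe_forall(lambda n: n < 5, bound=3)
deep = believe_forall(lambda n: n < 5, bound=10)
```

Nesting such defaults, a default assumption about a statement that is itself held by default, is where the level-by-level nesting (and the worry about recursion among the nesting operators) comes from.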

A computable predicate is a sentence containing unbound terms, halting lambda terms, and predicates that can be decided immediately (such as equality for numbers given in a standardized notation); boolean combinations of such predicates are also computable. When we add quantifiers to these statements, we can easily determine where they fall in the arithmetical hierarchy (although we cannot guarantee that this is the lowest level, since there could be a simpler way of writing the same statement, and that version could fall at a lower level). This allows us to use the previously defined procedure to reason nonmonotonically about all such statements.
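The classification step above is mechanical. Here is a small sketch of it, under the assumption (mine, for illustration) that the sentence has been put in prenex form with a computable matrix, so that its quantifier prefix can be represented as a string of 'E' (exists) and 'A' (forall):

```python
def classify(prefix):
    """Place a prenex sentence in the arithmetical hierarchy by its
    quantifier prefix, e.g. classify("AE") -> "Pi_2".

    Assumes the quantifier-free matrix is computable (Delta_0). The
    result is only an upper bound: an equivalent rewriting of the same
    sentence may land at a lower level.
    """
    if not prefix:
        return "Delta_0"  # no unbounded quantifiers: directly computable
    # Count the blocks of like quantifiers; each alternation adds a level.
    blocks = 1
    for prev, cur in zip(prefix, prefix[1:]):
        if cur != prev:
            blocks += 1
    # A leading existential gives Sigma_n, a leading universal gives Pi_n.
    return ("Sigma_%d" if prefix[0] == "E" else "Pi_%d") % blocks
```

For example, `classify("EEA")` yields `"Sigma_2"`: the two leading existentials count as one block, and the trailing universal adds a second.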

Another way that this union of lambda calculus and first-order logic could matter would be in inductive systems. If a system learns models based on the combined logic, it might be able to usefully differentiate between models that would be equivalent otherwise. As I discussed in the last post, the difference is that the semantics of lambda calculus assumes a closed world in which no new rules or propositions will be added, while the first-order world is fundamentally open. A theory that uses lambdas is therefore strictly more specific than the pseudo-equivalent first-order theory that is able to make all the same positive deductions; the first-order theory fails to rule out some things that the theory with lambda can. How can we take advantage of this?

Well. Via probability theory, a more specific theory should make the actual data more likely, by ruling out a greater number of alternative possibilities. To get this to work for us, we've got to actually enforce the open-world-ness of pure first-order logic when we assign probabilities. This is not typically something people worry much about, and it is easy to see it as a bit philosophical; I might be fixing a problem that people do not have, because in practice they go ahead and make closed-world assumptions where they need to. But I am describing a formal way of looking at those closed-world assumptions, and that seems useful.
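The likelihood argument can be seen in a toy setting. The setup below is my own illustration, not anything from the post: possible worlds are subsets of three atoms, the open theory only requires the atom 'a' to hold, and the closed theory additionally rules out everything it cannot derive, leaving a single model. With a uniform distribution over each theory's models, the more specific theory assigns the observed world a higher probability.

```python
from fractions import Fraction
from itertools import chain, combinations

# All possible worlds: every subset of three atoms.
atoms = ['a', 'b', 'c']
worlds = [frozenset(s) for s in
          chain.from_iterable(combinations(atoms, r)
                              for r in range(len(atoms) + 1))]

open_models = [w for w in worlds if 'a' in w]  # open theory: 4 models
closed_models = [frozenset({'a'})]             # closed theory: 1 model

# Uniform distribution over each theory's models; the observed world
# is the one where only 'a' holds.
observed = frozenset({'a'})
p_open = Fraction(1, len(open_models))      # 1/4
p_closed = Fraction(1, len(closed_models))  # 1/1
```

The closed theory makes the same positive deductions, but by ruling out the worlds containing 'b' or 'c' it concentrates its probability mass, so the observed data counts as stronger evidence for it.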