Monday, November 24, 2008

An Interesting Mailing List

I recently joined the everything list. It looks to be some good philosophical fun. The list practice is to submit a "joining post" that reviews one's intellectual background. I am replicating mine here.

----------------------------------

Hi everyone!

My name is Abram Demski. My interest, when it comes to this list, is:
what is the correct logic, the logic that can refer to (and reason
about) any mathematical structure? The logic that can define
everything definable? If every possible universe exists, then of
course we've got to ask which universes are possible. As someone
mentioned recently, a sensible approach is to take the logically
consistent ones. So, I'm asking: in what logic?

I am also interested in issues of specifying a probability
distribution over these possibilities, and what such a probability
distribution really means. Again, there was some recent discussion on
this... I was very tempted to comment, but I wanted to lurk a while to
get the feel of the group before posting my joining post.

Following is my view on what the big questions are when it comes to
specifying the correct logic.

The first two big puzzles are presented to us by Godel's
incompleteness theorem and Tarski's undefinability theorem. The way I
see it, Godel's theorem presents a "little" puzzle, which points us in
the direction of the "big" puzzle presented by Tarski's theorem.

http://en.wikipedia.org/wiki/Godel%27s_theorem
http://en.wikipedia.org/wiki/Tarski%27s_undefinability_theorem

The little puzzle is this: Godel's theorem tells us that any
sufficiently strong logic does not have a complete set of deduction
rules; the axioms will fail to capture all truths about the logical
entities we're trying to talk about. But if these entities cannot be
completely axiomatized, then in what sense are they well-defined? How is
logic logical, if it is doomed to be incompletely specified? One way
out here is to say that numbers (which happen to be the logical
entities that Godel showed were doomed to incompleteness, though of
course the incompleteness theorem has since been generalized to other
domains) really are incompletely specified: the axioms are incomplete
in that they fail to prove every sentence about numbers either true or
false, but they are complete in that the ones they miss are in some
sense actually not specified by our notion of number. I don't like
this answer, because it is equivalent to saying that the halting
problem really has no answer in the cases where it is undecidable.

http://en.wikipedia.org/wiki/Halting_problem

Instead, I prefer to say that while decidable facts correspond to
finite computations, undecidable facts simply correspond to infinite
computations; so, there is still a well-defined procedure for deciding
them, it simply takes too long for us to complete. For the case of
number theory, this can be formalized with the arithmetical hierarchy:

http://en.wikipedia.org/wiki/Arithmetical_hierarchy

Essentially, each new quantifier amounts to a potentially infinite
number of cases we need to check. There are similar hierarchies for
broader domains:

http://en.wikipedia.org/wiki/Hyperarithmetical_hierarchy
http://en.wikipedia.org/wiki/Analytical_hierarchy
http://en.wikipedia.org/wiki/Projective_hierarchy
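As a toy illustration (my own, not anything from the references above), the bottom of this hierarchy can be mimicked in a few lines of Python: a Sigma-1 statement ("there exists n such that P(n)") can be verified by a finite search but never refuted by one, and a Pi-1 statement is the mirror image:

```python
from typing import Callable, Optional

def verify_sigma1(p: Callable[[int], bool], bound: int) -> Optional[bool]:
    """Try to settle 'exists n: p(n)' by checking finitely many cases.

    A Sigma-1 statement is verifiable (a witness settles it for good)
    but not refutable by finite search: exhausting `bound` cases
    proves nothing, so we return None rather than False."""
    for n in range(bound):
        if p(n):
            return True   # witness found: the statement is settled
    return None           # no verdict; the search could go on forever

def verify_pi1(p: Callable[[int], bool], bound: int) -> Optional[bool]:
    """Dually, 'for all n: p(n)' is refutable but not verifiable."""
    for n in range(bound):
        if not p(n):
            return False  # counterexample found: settled for good
    return None           # no verdict from finitely many confirmations
```

Each additional quantifier alternation wraps another such potentially infinite search around the previous one, which is why climbing the hierarchy demands ever longer (infinite) computations.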

This brings us to the "big" puzzle. To specify a logic that can refer to
any structure I want, I need to define the largest of these
hierarchies: a hierarchy that includes all truths of mathematics.
Unfortunately, Tarski's undefinability theorem presents a roadblock to
this project: If I can use logic L to define a hierarchy H, then H
will necessarily fail to include all truths of L. To describe the
hierarchy of truths for L, I will always need a more powerful language
L+1. Tarski proved this under some broad assumptions; since Tarski's
theorem completely blocks my project, it appears I need to examine
these assumptions and reject some of them.

I am, of course, not the first to pursue such a goal. There is an
abundant literature on theories of truth. From what I've seen, the
important potential solutions are Kripke's fixed-points, revision
theories, and paraconsistent theories:

http://en.wikipedia.org/wiki/Saul_Kripke#Truth
http://plato.stanford.edu/entries/truth-revision/
http://en.wikipedia.org/wiki/Paraconsistent_logic

All of these solutions create reference gaps: they define a language L
that can talk about all of its truths, and therefore could construct
its own hierarchy in one sense, but in addition to simple true and
false more complicated truth-states are admitted that the language
cannot properly refer to. For Kripke's theory, we are unable to talk
about the sentences that are neither-true-nor-false. For revision
theories, we are unable to talk about which sentences have unstable
truth values or multiple stable truth values. In paraconsistent logic,
we are able to refer to sentences that are both-true-and-false, but we
can't state within the language that a statement is *only* true or
*only* false (to my knowledge; paraconsistent theory is not my strong
suit). So using these three theories, if we want a hierarchy that
defines all the truth value *combinations* within L, we're still out
of luck.


As I said, I'm also interested in the notion of probability. I
disagree with Solomonoff's universal distribution
(http://en.wikipedia.org/wiki/Ray_Solomonoff), because it assumes that
the universe is computable. I cannot say whether the universe we
actually live in is computable or not; however, I argue that,
regardless, an uncomputable universe is at least conceivable, even if
it has low credibility. So, a universal probability distribution
should include that possibility.

I also want to know exactly what it means to measure a probability. I
think use of subjective probabilities is OK; a probability can reflect
a state of belief. But, I think the reason that this is an effective
way of reasoning is because these subjective probabilities tend to
converge to the "true" probabilities as we gain experience. It seems
to me that this "true probability" needs to be a frequency. It also
seems to me that this would be meaningful even in universes that
actually happened to have totally deterministic physics-- so by a
"true probability" I don't mean to imply a physically random outcome,
though I don't mean to rule it out either (like uncomputable
universes, I think it should be admitted as possible).

Well, I think that is about it. For now.

Tuesday, November 11, 2008

Fixing Naive Lambda Logic

In this old post, I explained why naive lambda logic is paradoxical, and explained how to fix it. The fix that I suggested restricted lambda statements to represent finite computations, which is really what they are for anyway. However, in this recent post, I suggest that many naive, inconsistent logics (such as the naive lambda logic) are actually referentially complete: they can state any mathematically meaningful idea. The problem is that they also can state some meaningless things.

So, it is interesting to consider how one might fix naive lambda logic in a way that keeps the meaningful infinite stuff while removing the nonsense.

To do that, though, a definition of meaningfulness is needed. As I said in the more recent post, a statement's meaning needs to come ultimately from a basic level of meaningful statements (such as statements about sensory data). A meaningful statement should either be on this base level, or its meaning should be derived purely from other meaningful statements.

In the lambda logic, it is natural to think of a term as meaningful if it evaluates to something. A statement, then, is meaningful if its truth relies only on the value of meaningful terms. To make this precise, we've got to specify the infinite build-up process by which we assign meaning. For each operation that forms a new sentence, we've got to define how meaning is carried over. One of these rules, for example, will be:

If statement X is meaningful and statement Y is meaningful, then the statement X and Y is meaningful.

We need a rule like this for every operation: "and", "or", "not", quantifiers, lambda, and the lambda-value relation. That may sound simple, but complications quickly build up. "X and Y" is meaningful when both arguments are meaningful. But what about "or"? It seems like "X or Y" could be meaningfully true if X is true but Y is meaningless. But if we want this to work, then it would also be sensible to allow "not Y" to be meaningfully true when Y is meaningless. (That behavior of "not" would make the behavior of "or" that I mentioned consistent with classical logic.) But that equates meaningless with false, which seems wrong.
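A quick sketch of the two options in Python, with None standing in for "meaningless" (the permissive "or" is just Kleene's strong three-valued disjunction; the function names are my own):

```python
# Truth values: True, False, or None for "meaningless".
MEANINGLESS = None

def strict_and(x, y):
    """'X and Y' is meaningful only when both arguments are:
    a meaningless argument poisons the whole conjunction."""
    if x is MEANINGLESS or y is MEANINGLESS:
        return MEANINGLESS
    return x and y

def kleene_or(x, y):
    """The permissive option from the post: 'X or Y' counts as
    meaningfully true when one disjunct is true, even if the other
    is meaningless (Kleene's strong three-valued disjunction)."""
    if x is True or y is True:
        return True
    if x is MEANINGLESS or y is MEANINGLESS:
        return MEANINGLESS
    return False
```

Note that kleene_or(False, MEANINGLESS) stays meaningless; getting a classical answer there is exactly the move that would force "not Y" to be true for meaningless Y, collapsing meaningless into false.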

Another problem arises with the treatment of quantifiers. Do quantifiers range over all possible terms, or only meaningful ones? It makes a difference!

There are many different places we can run to get standard solutions to these problems: free logic, the revision theory of truth, fixed-point theories, and others.

A third problem, perhaps worse, arises from the concept "not meaningful". For a concept to be meaningful seems straightforward: it should be built up in a meaningful way from other meaningful statements. But trouble presents itself when we discuss non-meaning.

Imagine the base-level facts as ground from which trees are growing. The trees, of course, are the meaningful statements that can be built up. Meaningless statements would be branches hanging in midair, attempting to grow from themselves, or from other meaningless statements. (Meaningful statements can also have self-reference, but have an attachment to the ground somewhere.)

Now, when we consider the concept "meaningless", we see some weird stuff happening: somehow the paradoxical branches that grow from nothing are able to support meaningful branches, such as the statement '"This sentence is false" is meaningless'. Or, even stranger, consider "This sentence is meaningless". It appears to be meaningful but false. Or, consider "This sentence is either false or meaningless". If it is true, it is false or meaningless; if it is false, it is true; if it is meaningless, then it is true. It looks like the only way to deal with it is to say that it is meaningless to ask which of the categories it is assigned to: it is meaningless to talk about its meaningfulness or meaninglessness.

To handle cases like these requires a sort of back-and-forth between meaningful and meaningless. We can't just grow meaning from the ground up and then declare the rest meaningless; in declaring things meaningless we allow more meaningful statements, so we've got to go back and add them in. That in turn might change the set of meaningless statements, and so on. If in doing this we are changing our assessments of various statements (going back and forth between "meaningful" and "meaningless"), then we are doing something similar to what the revision theory of truth recommends. On the other hand, I like the idea of marking things "definitely meaningful" and "definitely meaningless". A back-and-forth would still be needed, but all decisions would be final.
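The "grow from the ground" half of this process is easy to sketch. The toy Python below (the representation is my own invention) marks a statement meaningful once everything its meaning rests on is meaningful, and calls the rest meaningless. It deliberately omits the back-and-forth step, which is exactly the hard part:

```python
def classify(deps):
    """deps: statement -> set of statements its meaning rests on.
    Base-level statements have an empty dependency set.

    Grow 'meaningful' monotonically from the ground; whatever is
    never reached (branches hanging in midair) counts meaningless."""
    meaningful = set()
    changed = True
    while changed:
        changed = False
        for s, d in deps.items():
            if s not in meaningful and d <= meaningful:
                meaningful.add(s)
                changed = True
    meaningless = set(deps) - meaningful
    return meaningful, meaningless

# The liar depends only on itself, so it never gets grounded:
m, ml = classify({"base": set(), "a": {"base"}, "liar": {"liar"}})
```

The back-and-forth would then re-examine statements *about* members of the meaningless set (like '"liar" is meaningless'), add the ones that are now grounded, and repeat.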

Anyway. Suppose we resolve all of those issues. Another interesting issue comes up: infinite lambda statements.

An infinite lambda statement could directly represent the mathematical entities that I want the system to be able to reason about. For example, an arbitrary real number would be any function from natural numbers to the integers 0 through 9 (if we want decimal notation), represented by a finite or infinite lambda statement. (Note: the calculation itself could always be fixed to halt in finite time, despite the lambda statement being infinite.) The interesting thing is that if the logic has been satisfactorily rigged up, it will be in some sense as if infinite lambda statements were allowed, even though they aren't.
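For the finitely-representable case, this picture is easy to make concrete. A hedged sketch in Python, treating a rational number as a finitely-given rule that yields each decimal digit in finite time (the function names are mine, for illustration):

```python
from fractions import Fraction

def digit_stream(q: Fraction):
    """A real number as a function from naturals to digits 0-9.
    Here the rule is finitely given; an 'infinite lambda statement'
    would be the analogous object with no finite presentation. Either
    way, each individual digit is computed in finite time."""
    def digit(n: int) -> int:
        # n-th decimal digit of the fractional part of q
        return (q.numerator * 10 ** (n + 1) // q.denominator) % 10
    return digit

third = digit_stream(Fraction(1, 3))   # third(0), third(1), ... are all 3
```

An arbitrary real would be any such digit function, representable or not; the hope in the post is that quantifying over the unrepresentable ones still makes sense even though only the finitely-given ones can appear as terms.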

This suggests that we need to be careful of the semantics, and therefore of how nonmonotonic reasoning is used. Are the quantifiers interpreted as ranging over all actually-representable lambda terms, or are they also ranging over the unrepresentable infinite ones? If the logic is to talk about unrepresentables properly, the quantifiers will have to range over them. But then nonmonotonic reasoning will not converge to the correct answers: it will converge to answers that hold for the finite terms only. This will sometimes be correct for the infinite case, but not always. The matter seems complicated, and I'm not yet sure how to deal with it.

Monday, November 10, 2008

How to Reason with Four-valued Definitional Set Theory

1. There is a "ground level" composed of statements in first-order logic that can be reasoned about in the usual manner.
2. The axiom of set comprehension is interpreted as allowing arbitrary definitions of sets (rather than asserting the actual existence of sets). A general logic of definitions would allow us to define any sort of object with any sort of definition, and reason about it from there; in this case, though, I only am worried about defining sets.
3. The definition of a set S can be "for all x: x is an element of S if and only if [statement]" for any statement (in the full language, not just the first-order part).
4. If "x is an element of S" can be derived from the definition of S plus any relevant information on x, then it is so. Similarly for "x is not an element of S". "Relevant information" here means the definition of x if x is a defined set, plus all information existing at the first-order level. Notice that the definition of S might imply both "x is an element of S" and "x is not an element of S", or neither of these.

That's it! Unless I've missed something. But, anyway, one thing that is very specifically not included is a way for contradictions (or anything) to leak over from the level of defined objects to the "ground level" of normal logic.

Oh, I suppose this should be added:

5. Statements of the full language can be reasoned about with some sort of intuitionistic paraconsistent logic.

This should allow compound statements to be constructed; for example, if two different statements about sets are true by definition, then "statement1 and statement2" should be treated as a true sentence. I think an OK way of doing this would be to allow the Gentzen-style [Edit: oh, I meant Prawitz-style!] introduction/elimination rules for and, or, forall, and exists, skipping over the rules applying to implication and negation. But, don't take my word for it!
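Rule 4 can be sketched as a four-valued membership check. In this toy Python the pro and con derivations are handed in as predicates (a real system would derive them from the definition of S itself, so this is only an illustration of the four truth-states):

```python
def membership(pro, con, x):
    """Four-valued membership for a defined set S.

    `pro` stands for 'the definition derives: x is an element of S';
    `con` for 'the definition derives: x is not an element of S'.
    An inconsistent definition can yield both, an incomplete one
    neither -- and neither outcome leaks into the ground level."""
    t = pro(x)
    f = con(x)
    if t and f:
        return "both"
    if t:
        return "true"
    if f:
        return "false"
    return "neither"

# A well-behaved definition gives classical answers:
status = membership(lambda x: x % 2 == 0, lambda x: x % 2 == 1, 4)  # "true"
```

Russell's set would come out "both" on itself, and a definition that derives nothing about some x comes out "neither"; the four values just record which derivations succeed.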

Now, the system is definitely not referentially complete. It lacks the ability to refer to its own possible truth-value combinations. I'd need to add a nonmonotonic operator allowing the logic to conclude that a statement was not true-by-definition and not false-by-definition, conclusions which would of course gain credibility as attempts to prove the statement true/false fail. This gets somewhat confusing, because false-by-definition is distinct from not true-by-definition. Anyway, soon I'd be adding in the full machinery of a logic of "implies". So, the four-valued set theory doesn't seem like a complete foundation. Still, it is interesting!

Saturday, November 08, 2008

The Lure of Paraconsistency

A paraconsistent logic is a logic that allows some contradictions. To make this somewhat bearable, paraconsistent logics find various ways of stopping the rule of explosion: "from a contradiction, anything follows". The rule of explosion holds in both classical logic and intuitionistic logic, and makes inconsistent theories very uninteresting: there is only one of them, and in it all things are both true and false. Any theory that turns out to be inconsistent is logically equivalent to this boring theory.
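For concreteness, here is how explosion fails in one standard paraconsistent logic, Priest's LP, where a sentence can take the value "both" and counts as accepted (designated) when it is at least both. A brute-force check over valuations finds a countermodel (the encoding of values as numbers is my own convenience):

```python
# Values: 1 = true, 0.5 = both, 0 = false; designated = {1, 0.5}.
def neg(a):
    return 1 - a

def conj(a, b):
    return min(a, b)

def designated(a):
    return a >= 0.5

def explosion_valid():
    """Does 'A and not-A' entail an arbitrary C in LP?
    Valid iff every valuation with designated premises
    has a designated conclusion."""
    for a in (0, 0.5, 1):
        for c in (0, 0.5, 1):
            if designated(conj(a, neg(a))) and not designated(c):
                return False  # countermodel: premise holds, C fails
    return True
```

The countermodel is A = both, C = false: the contradiction is accepted, but nothing forces C along with it, so the damage stays contained.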

So, contradictions are not as bad in paraconsistent logics-- the explosion is contained, so it's not like everything falls apart. We might still be irked, and ask how this counts as "logic" if it is explicitly contradictory, but at least we can do interesting formal work without worry.

For example, it would be possible to construct paraconsistent versions of naive set theory, my naive lambda calculus, and naive theories of truth. In the context of my most recent posts, it is interesting to consider a naive theory of logical implication: collapse all of the different levels of implication (the systems I named one, two, infinity, A(infinity, infinity)...) into a single implication relation that can talk about any level.

Now, the temptation: these "naive" foundational theories appear to be referentially complete! Frege was able to use naive set theory to construct basic mathematics, and indeed it doesn't seem as if there is any bound to what can be constructed in it. On my end, a naive theory of implication obviously contains all the levels I might construct: the system one is the fragment in which implication is only wrapped around first-order statements, two is the fragment in which implications are additionally allowed to be wrapped around statements in one, infinity is the fragment in which we can use any of one, two, three.... infinity^infinity allows us to use one, two, three,... infinity+1, infinity+2, infinity+3...

The problem, of course, is that in addition to allowing all meaningful statements to enter, we apparently let some meaningless statements through as well. But, at the moment, I know of no other way to let in all the meaningful ones! So paraconsistent logic is looking pretty good at the moment.

Essentially, it seems, a meaningful statement is any statement constructable in a naive theory that happens to have a truth value that relies on other meaningful statements. Those other meaningful statements gain their meaning from yet other meaningful statements, and so on down the line until we reach good solid first-order facts that everything relies on. But, the path down to basic statements can be arbitrarily complicated; so, it is impossible to construct a logic that contains all of the meaningful statements and none of the meaningless ones, because we can't know ahead of time which is which for every single case.

I found an argument that paraconsistent logic isn't the only way to preserve naive set theory (free version), but it apparently is only hinting at the possibility, not providing a concrete proposal. Actually, I've made some relevant speculations myself... in this post, towards the end, I talk about a "logic of definitions". Such a logic would be four-valued: true, false, both, neither. A definition can easily be incomplete (rendering some statements neither true nor false), but it can just as easily be inconsistent (rendering some statements both true and false). This is suited particularly well to the way the comprehension principle works in naive set theory; essentially, we can read the naive comprehension principle as stating that any set that has a definition, exists. The trouble comes from the fact that some such definitions are contradictory!

This seems nice; just alter naive set theory to use a four-valued logic and there you go, you've got your foundational logic that can do anything we might want to do. But I'm not about to claim that... first off, I haven't even begun to define how the four-valued logic would work. Second, that scheme omits the extra grounding that nonmonotonic methods seem somewhat capable of providing; I would want to look into that omission... Third, the non-classical manipulations provided by the four-valued logic may not be sufficiently similar to classical manipulations to justify much of classical mathematics. That would be a big warning sign. So, generally, the idea needs to be worked out in much more detail before it can be judged. (But, it looks like there is some work on four-valued logics... here, here, here... haven't read them yet.)


But, anyway, the naive theories (and the idea of using paraconsistent logic to make them workable) are quite valuable in that they provide a glimpse into the possible, showing that it is not utterly crazy to ask for a logic that can define any infinity a human can define... we just might have to give up consistency.

Tuesday, November 04, 2008

Normative Grounding

At the end of the last post, I mentioned a normative definition of grounding: a concept is grounded in a system if the system captures everything worth capturing about the way we reason about that concept. Perhaps "grounding" isn't the best term for this, but whatever you call it, this is an important criterion. This principle should also cut the other way: if something is not worth capturing, it should not be in the system. At the end of the last post I listed a few ways in which the system fails the first test: things that we do (to our advantage) that the system doesn't capture. But the system also has problems in the other direction: it does things that it has no good reason to do.

The expanded notion of meaning that I've made such a big deal about, which allows concepts that are meaningful in terms of other concepts but not in terms of base-level data, seems like a normative stretch. So what if a concept would be useful if we knew it to be true/false? The fact remains that it is useless, since we cannot know one way or the other!

Yet, we silly humans seem to use such concepts. We are (it seems) even able to prove or disprove a few of them, so that we do know them to be true or false. How and why?

The arithmetical hierarchy is defined in terms of a base-level class of computable predicates. However, thanks to the halting problem, it is impossible to tell which predicates are computable and which aren't. So, we never know for certain which level a statement is on. In terms of the formalism I've described, I suppose this would reflect our ability to disprove some statements of the form "true(...)" or "implies(...,...)"; for example, if we can show that a statement is a logical contradiction, then we know that it is not a tautology. (Do I need to add that to the list of manipulation rules?? Is it somehow justifiable if the system defines "true(X)" as "there exists a proof of X"??) So, some proof-searches are provably nonhalting, without resorting to mathematical induction... meaning some of the higher-level statements that appear undecidable will turn out to be decidable, and some that appear un-limit-decidable will turn out to be limit-decidable after all. Since we can't tell which ones will do this ahead of time, there may be normative justification for keeping them all...

Sunday, November 02, 2008

Back to the Ground

In the previous post, I described a way of constructing arbitrarily powerful logics. This description does not amount to an algorithm-- it invokes something human mathematicians do (listing larger infinities), but doesn't provide a way to implement that on a computer. Worse, it is known that by the standard definition of algorithm, such an algorithm does not exist; I can only hope that it does exist as a limit-computable algorithm. (Note: I just found out that the term "super-recursive algorithm" that I used last time can refer to algorithms decidedly not implementable on normal computer hardware... so I guess there is not a single term that refers to exactly those super-recursive algorithms that are implementable on standard hardware. "Limit-computable" is closer...) But, I think this is likely, since I think humans use such an algorithm.

Anyway. What I did not describe was a way of "grounding" each new level. I have previously discussed at length an idea for grounding "two", based on non-monotonic logic and an understanding of meaning within the network of beliefs rather than only in terms of the outside world. But, what about grounding the higher operators? With no manipulation rules, they are quite pointless!

Actually, the operators are not exactly the problem. If we wrap two statements in an "implies" statement, the truth/falsehood of the result relies on no more than the validity of the deduction from the first statement to the second. In other words, in some sense we aren't adding anything, we're just making explicit something we already do. But then something strange happens: we're suddenly able to wrap quantifiers around the statement. This becomes problematic. At that stage, we've got to create the manipulation rules that the next stage will wrap up in an explicit operator.

Actually, the same problem applies for the system two. I haven't specifically talked about how one statement in two might be derived from another.

But, really, the use of the "implies" operator is not essential: I could just as well have used a "true" operator that took one statement. "A implies B" is true in the metalogic when "A -> B" is provably true in the base logic, with "->" representing truth-functional implication.

Reducing "implies" to "true" in this way changes the appearance of the whole system greatly-- it makes obvious the direct connection to Tarski's Undefinability Theorem and the hierarchy of truth predicates that it was originally taken to entail.

So, the manipulation rules for "true" are sufficient to account for the manipulation of "implies". First, we can adopt the famous T-schema:

something <=> true(something)

This does not lead to paradox because there is indeed a hierarchy established, rather than a single "true".
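A minimal sketch of the resulting hierarchy in Python (the encoding is mine, for illustration): each truth predicate applies only to sentences strictly below its level, so the T-schema holds level by level while liar-style self-application is blocked:

```python
class Sentence:
    """A sentence tagged with its position in the Tarskian hierarchy."""
    def __init__(self, level: int, holds: bool):
        self.level = level   # 0 = base logic, 1 = first metalevel, ...
        self.holds = holds   # the sentence's truth value

def true_at(n: int, s: Sentence) -> bool:
    """The level-n truth predicate: defined only for sentences
    strictly below level n. A sentence can never apply a truth
    predicate to itself, which is what blocks the liar."""
    if s.level >= n:
        raise ValueError(
            "true_%d cannot apply to a level-%d sentence" % (n, s.level))
    return s.holds

snow = Sentence(0, True)
t1 = Sentence(1, true_at(1, snow))   # "true_1(snow...)" lives at level 1
# true_at(1, t1) would raise: the T-schema holds per level, not globally
```

The price, as the post says, is that no single predicate in the system speaks about all the levels at once.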

Second, boolean combinations and sentences with quantifiers are subject to the typical deduction rules.

Third, quantifiers are additionally subject to fallible rules: existential quantifiers are fallibly refuted by (infallibly) refuting many instances, and universal quantifiers are fallibly affirmed by (infallibly) affirming many instances. (One may wish to allow fallible evidence as well, but there is no normative justification for doing so: the resulting beliefs will never converge to the truth!)
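This third rule is essentially limit computation: the guesses for a universal statement converge to the truth, and a refutation, once found, is infallible. A small sketch of the guess sequence (my own encoding):

```python
def fallible_universal(p, stages):
    """Fallible affirmation of 'for all n: p(n)'.

    At stage k we have checked instances 0..k-1 and guess True if no
    counterexample has appeared. The guess sequence converges: to
    False forever after a counterexample shows up (an infallible
    refutation, never revised), and to True forever if there is none."""
    guess = True
    history = []
    for k in range(stages):
        if guess and not p(k):
            guess = False
        history.append(guess)
    return history
```

Fallibly refuting an existential is the mirror image: guess False until a witness appears. Allowing merely fallible evidence about individual instances would break the convergence guarantee, which is the normative point made above.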

Fourth, "true(something)" is to be interpreted as "exists(affirmation of something in a lower logical system)", so that it is fallibly refutable like other existential statements. ("Affirmation" means proof if the lower system is one, limit-proof if the system is two, and so on. (So, in general, a proof of finite or infinite length.) Different logical primitives could be used in order to let this definition of "true" actually occur within the system; in particular, we could introduce more and more powerful quantifiers rather than more and more powerful truth-operators.)

So, do these definitions ground the system's mathematical knowledge (assuming we've also supplied the system with a method of enumerating infinities)? Well, it's hard to say, isn't it? "Grounding" is a philosophical issue. We need a concrete definition. Try this: a system's knowledge of X is grounded iff it reasons about X in the way that a human who knew about X would reason. More precisely, it captures everything worth capturing about the human way of reasoning. The answer to the question still isn't obvious; I do not have at my disposal a list of all the things worth capturing about human mathematical reasoning. But, I do have a partial list of things that I think are worth capturing...

-first-order reasoning
-limit-computable fallible reasoning
-arithmetical meaning
-set-theoretic meaning
-mathematical induction
-transfinite induction

The theory above does not totally encompass all of limit-computation (it uses sigma-1 and pi-1 but not delta-2!). It does not entirely explain set-theoretical intuition (it includes "grounding" for uncountable entities, since it (hypothetically) enumerates any infinity a mathematician might; but, it doesn't support direct talk about sets). It doesn't support mathematical induction or transfinite mathematical induction.

So, still work to be done!