Provable Truths
The picture I've mostly been using for the idea of knowable truths goes something like this. Quantifier-free statements are computable: we can tell whether they are true or false. Statements that reduce to a normal form containing only existential quantifiers are computably verifiable (if one is true, we can know this by eventually finding an example) but only probabilistically falsifiable (if it is false, we will never know that for sure, though we can suspect it after trying for a long time to find an example and failing). Statements similarly containing only universal quantifiers are falsifiable but only probabilistically verifiable. Statements whose normal form universally asserts an existential statement are probabilistically falsifiable, and statements asserting the existence of something satisfying a universal are probabilistically verifiable. Nothing else appears to be knowable in any significant sense.
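To make the asymmetry concrete, here is a minimal Python sketch; the predicate P below (checking whether x squared is 25) is just a stand-in for any decidable, quantifier-free test, and the search bound is my own device for cutting off an otherwise endless hunt.

```python
# A minimal sketch of the verification/falsification asymmetry, assuming some
# decidable property P of natural numbers (the one below is a placeholder).

from itertools import count

def P(x):
    # Hypothetical decidable predicate; stands in for any quantifier-free test.
    return x * x == 25

def verify_exists(P, bound=None):
    """Search for a witness to 'there exists x such that P(x)'.

    If the statement is true, the search eventually halts with a witness.
    If it is false, the unbounded search never halts; cutting it off at
    `bound` only gives probabilistic grounds for suspecting falsity.
    """
    for x in count():
        if bound is not None and x >= bound:
            return None          # no witness found so far: suspicion, not proof
        if P(x):
            return x             # witness found: the statement is verified

def falsify_forall(P, bound=None):
    """Dually, 'for all x, P(x)' is falsified by finding a counterexample."""
    return verify_exists(lambda x: not P(x), bound)

print(verify_exists(P))                         # halts with 5: verified
print(falsify_forall(lambda x: x < 100, 1000))  # halts with 100: falsified
```

The point is just that a successful search settles the existential conclusively, while an unsuccessful bounded search settles nothing.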
Nonetheless, even Robinson arithmetic gives us more definite truths than this scheme does. Statements like "for all x, x=x" can be verified, not just falsified. Should this be looked at as probabilistic knowledge? The view becomes understandable if we think of equality as a black box about which we know nothing: we just feed it numbers and accept what we get out. If not the axioms of equality, at least the first-order tautologies seem knowable: perhaps "for all x, x=x" is probabilistic knowledge of the behavior of equality, but "for all x, x=x if x=x" is clear and knowable... right? Well, this too could be questioned, by claiming that the truth functions are likewise to be treated as black boxes.
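A small sketch of the black-box reading (the oracle eq and the sampling scheme here are my own placeholders): the language can only query equality at particular numbers, so the universal claim accumulates support without ever being settled.

```python
# Equality treated as a black box we can only query pointwise.
# From the object-language point of view, nothing about eq's internal
# structure is available; only its outputs on particular inputs.

import random

def eq(x, y):
    return x == y   # hidden implementation; the language only sees query results

def sample_support(n_trials=10_000):
    """Probabilistic support for 'for all x, x = x': try many instances.

    Each trial that comes back True raises confidence, but no finite number
    of trials verifies the universal statement outright.
    """
    for _ in range(n_trials):
        x = random.randrange(10**9)
        if not eq(x, x):
            return False          # a counterexample would falsify it conclusively
    return True                   # merely "no counterexample found yet"

print(sample_support())
```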
Why would we take this view? Perhaps we claim that knowledge of the actual structure of equality, and similarly of the actual structure of truth functions, should be represented at the metalanguage level. Then, "for all x, x=x" would be something we could prove in the metalanguage, thanks to our additional information about equality, but not in the language itself. I think there is something intuitively appealing about this, despite the restrictive nature of the claim.
The base logic would contain rules for reducing equality statements to true/false, and similarly rules for reducing a truth function whose arguments have already been reduced to true or false. A statement that reduces to "true" can then be concluded. In addition to these rules, there would be a rule allowing existential quantifications to be concluded from examples; this would be a fairly complicated process, since we may pull out any set of identical pieces and replace them with a variable. Once an existential statement has been concluded, it can reduce to "true" in larger expressions. Universal quantifiers can be defined from existentials as usual, which allows us to conclude their negations by finding counterexamples (but does not allow us to affirm any universals).
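A rough sketch of what such a base logic could look like, under conventions I'm assuming purely for illustration: formulas as nested tuples, connectives reducing only when their arguments are fully evaluated, and an existential introduction rule that (simplifying) replaces every occurrence of the chosen piece rather than an arbitrary subset of them.

```python
# Sketch of the base logic: equality and the connectives reduce only when
# their arguments have definite values, and existentials are concluded from
# verified closed instances. Representation choices here are assumptions.

def reduce_step(phi, concluded):
    """Reduce a formula using only already-available 'black box' outputs."""
    if isinstance(phi, bool):
        return phi
    if phi in concluded:                      # previously concluded existentials
        return True
    op, *args = phi
    if op == "=" and all(isinstance(a, int) for a in args):
        return args[0] == args[1]             # equality reduced to true/false
    args = [reduce_step(a, concluded) if isinstance(a, (tuple, bool)) else a
            for a in args]
    if op == "not" and isinstance(args[0], bool):
        return not args[0]
    if op == "and" and all(isinstance(a, bool) for a in args):
        return args[0] and args[1]
    if op == "or" and all(isinstance(a, bool) for a in args):
        return args[0] or args[1]
    return (op, *args)                        # stuck: not enough information yet

def replace(phi, piece, var):
    """Replace every occurrence of an identical piece with a variable.

    (The real rule is more complicated: any subset of the identical pieces
    could be replaced; this sketch replaces all of them.)
    """
    if phi == piece:
        return var
    if isinstance(phi, tuple):
        return tuple(replace(a, piece, var) for a in phi)
    return phi

def exists_intro(instance, piece, var, concluded):
    """From a closed instance that reduces to true, conclude an existential."""
    if reduce_step(instance, concluded) is True:
        concluded.add(("exists", var, replace(instance, piece, var)))

# Example: from the instance 3 = 3 we may conclude (exists x, x = x),
# which can then reduce to true inside larger expressions.
concluded = set()
exists_intro(("=", 3, 3), 3, "x", concluded)
print(concluded)   # {('exists', 'x', ('=', 'x', 'x'))}
print(reduce_step(("or", ("exists", "x", ("=", "x", "x")), ("=", 1, 2)),
                  concluded))   # True
```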
How could a metalanguage be constructed along similar lines, but yielding the first-order truths we're used to? I'm not sure. What's needed most is the ability to see that even when we don't have all the information needed to calculate a truth function, the missing information would yield the same answer no matter how it came out; for example, whatever value "x=x" turns out to have, "x=x if x=x" comes out true.
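One way to picture that missing ability, sketched here with made-up names: a metalanguage check that a truth function gives the same answer under every possible assignment to its still-unknown inputs.

```python
# Check that a truth function's value does not depend on its unknown inputs,
# by trying every way of filling them in. Names here are illustrative only.

from itertools import product

def same_answer_regardless(truth_function, n_unknowns):
    """True if the truth function yields one value under all assignments."""
    values = {truth_function(*assignment)
              for assignment in product([False, True], repeat=n_unknowns)}
    return len(values) == 1

# "x=x if x=x" comes out true whether the black-box equality returns
# true or false, so the metalanguage could conclude it without more information.
print(same_answer_regardless(lambda p: (not p) or p, 1))   # True
# By contrast, a bare "x=x" does depend on what the black box says.
print(same_answer_regardless(lambda p: p, 1))              # False
```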
Wednesday, April 08, 2009