New Grounding Criteria
In this post I talk about the need for a logic's semantics to be in some sense determined by its deduction rules. If this isn't the case, then the logic is not suitable for artificial intelligence: the system's behavior will not reflect the intended meaning of the symbols. In other words, the logic's meaning needs to be "grounded" by its use.
Unfortunately, although I presented some ideas (and added more in this post), I didn't come up with a real definition of the necessary relationship between syntax and semantics.
The general idea that I had was to ground a statement by specifying a sensible procedure for determining the truth of the statement. Due to Gödel's incompleteness theorem, no such procedure can be correct in all cases, so I set about attempting to define approximately correct methods.
But now I am questioning that approach. I've decided on a different (still somewhat ambiguous) grounding requirement. Rather than requiring that the system know what to do to get a statement, I could require that it know what to do with the statement once it has it. In other words, a statement that is not meaningful on its own (in terms of manipulating the world) may be meaningful in terms of other statements (it helps manipulate meaningful statements).
Again, this covers the arithmetical hierarchy (see the previous post). We start with the computable predicates, which are well-grounded. If I have a grounded predicate, I can make a new grounded predicate by adding a universal quantifier over one of its variables. If I knew some instance of the new predicate to be true, I would be able to conclude that an infinite number of instances of the old predicate were true (namely, all instances obtainable by putting some number in the spot currently occupied by the universally quantified variable). Existential quantifications have a less direct grounding: if we knew an existential statement to be true, we could conclude the falsehood of a universal statement concerning the opposite predicate (meaning "there exists X for which P(X)" is grounded in "it is not the case that for all X, not P(X)").
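The grounding chain above can be sketched in code. This is only an illustrative toy (all function names and the sample predicate are my own invention, not anything from the post): a computable predicate is grounded directly by computation, a universal statement is grounded by the instances it licenses, and an existential statement is grounded as the negation of a universal.

```python
def p(x: int) -> bool:
    """A computable (decidable) predicate: grounded directly, since we can
    just run the computation on any given number."""
    return x * x >= x  # holds for every integer

def instances_of_universal(pred, numbers):
    """If we accept 'for all x, pred(x)', we are licensed to conclude each
    instance pred(n) -- the universal is grounded in what it lets us infer
    about already-grounded statements."""
    return [pred(n) for n in numbers]

def exists_via_universal(pred, numbers):
    """'there exists x with pred(x)' grounded as the negation of
    'for all x, not pred(x)' -- checked here over a finite sample only."""
    return not all(not pred(n) for n in numbers)

# Accepting the universal statement commits us to every instance:
print(instances_of_universal(p, range(5)))  # [True, True, True, True, True]
print(exists_via_universal(p, range(5)))    # True
```

Of course the finite sample here is exactly what makes the quantified cases only indirectly grounded: no finite run settles the genuinely infinite claim.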
Because we know what would be true if some statement were true, we can attempt to use the scientific method to test the truth of statements. This is essentially what I was doing with my previous attempts. However, as I mentioned then, such methods will not even eventually converge to the right answer (for the tough cases); they will keep flipping back and forth, and even the frequencies of such flipping are meaningless (otherwise we could use them to decide). Nonetheless, human nature makes us want to try, I think...
Obviously I have a bit of work to do. For example... while this provides some grounding for arithmetical statements, does it provide enough to really fix their desired meaning? (This is particularly unclear for the existential statements.) Also, what else can be characterized like this? Is this method only useful for the arithmetical truths?