Friday, August 21, 2009

Climbing the Mountain

Undefinability is a harsh reality.

Any solution that one offers is still vulnerable to the attack: full self-reference seems impossible. Regardless of how clever you think you've been, if you propose a solution, I can point to a structure that can't be defined within that system: the system itself. If that's not the case, then the system is inconsistent. (And even then, even if we allow inconsistent systems, I know of no system which could be said to fully describe itself.)
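
To make the attack precise: this is essentially Tarski's undefinability theorem. Roughly, and assuming the system in question can encode enough arithmetic to talk about its own sentences:

    \text{No consistent such theory } T \text{ has a formula } \mathrm{True}(x) \text{ satisfying } \quad
    T \vdash \mathrm{True}(\ulcorner\varphi\urcorner) \leftrightarrow \varphi \quad \text{for every sentence } \varphi \text{ of } T.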

What this all suggests is that human beings (along with a broad range of sentient entities) are fundamentally incapable of articulating our own inner logic. Why? For the same reason that a powerset is larger than a set: if we could fully refer to ourselves, our logic would be "larger than itself" (loosely speaking).
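
The powerset comparison is just Cantor's diagonal argument, and the same diagonal move drives the undefinability results; spelled out:

    \text{Suppose } f : S \to \mathcal{P}(S) \text{ were onto, and let } D = \{\, x \in S : x \notin f(x) \,\}.
    \text{Then } D = f(d) \text{ for some } d \in S, \text{ so } d \in D \iff d \notin f(d) = D, \text{ a contradiction.}
    \text{Hence } |\mathcal{P}(S)| > |S|.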

It comes down to a problem of wanting to be able to form sentences that mean anything we might want. If we're fully capable, then we can construct a sentence that is true precisely when it is not true... and the system falls apart.

Kripke's fixed point theory offers a nice fix: with some specific machinery, we're able to call these sentences "undefined". But now we can't refer properly to this idea, "undefined". So we've got a complete theory of truth (one might say), but we're still stuck with regard to undefinability.
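
For readers who haven't seen the machinery, here is the rough shape of Kripke's least-fixed-point construction (my compressed paraphrase, using the strong Kleene valuation): the truth predicate gets a partial interpretation, an extension E and an anti-extension A, built up in stages,

    (E_0, A_0) = (\varnothing, \varnothing), \qquad (E_{\alpha+1}, A_{\alpha+1}) = \Phi(E_\alpha, A_\alpha), \qquad \text{unions at limit ordinals,}

where the jump \Phi adds the sentences that come out true (respectively false) given the current partial interpretation. Since \Phi is monotone, a least fixed point exists; the liar sentence never enters E or A, and that is the precise sense in which it is "undefined".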

So, it looks like we've got to accept it: we can't find a mind (artificial or otherwise) that fulfills the imperative "Know Thyself". The self remains shrouded in mystery. What's a self-respecting AI engineer to do? Am I forced to always design minds with less power of logical reference than my own, because I could not comprehend a design that was truly up to human capability? Are artificial intelligences doomed to be fancy calculators, lacking "understanding" because they will always have a weaker logical structure?

First, no. That doesn't really follow. It's quite possible to use an algorithm that can in principle learn anything: evolution. For example, one could build an artificial mind that held an initially simple program within it, mutated the recently run areas of code when punishment occurred, and strengthened recently run code against mutation when rewarded. Or, a little more sophisticated, one could implement Schmidhuber's Success Story algorithm, which always and only keeps apparently beneficial mutations, is capable of learning what and when to mutate, can learn to delay reward, and has other desirable features. And yet again, we could try William Pearson's design, which sets up an artificial economy of agents which can co-evolve to produce the desired behavior. With this style of approach, there is no worry of fundamental limitation: such systems can learn the correct logic if it exists (it just might take quite a while!). The worry, rather, is that these approaches do not take full advantage of the data at hand. There is no guarantee that they will perform better given more processing power and memory, either. In short, they are not a model of rationality.
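
To make the first (and simplest) of these concrete, here is a minimal sketch in Python. The representation and all names are mine, invented for illustration: the "program" is just a table of action weights, where recently used rows are mutated on punishment and hardened against mutation on reward.

    import random

    class MutatingAgent:
        """Toy sketch: mutate recently used rules when punished,
        protect them against mutation when rewarded."""

        def __init__(self, n_obs, n_act, decay=0.9):
            self.weights = [[random.random() for _ in range(n_act)]
                            for _ in range(n_obs)]
            self.recency = [0.0] * n_obs    # how recently each rule fired
            self.hardness = [1.0] * n_obs   # resistance to mutation
            self.decay = decay

        def act(self, obs):
            # decay all recency markers, then mark this rule as just used
            self.recency = [r * self.decay for r in self.recency]
            self.recency[obs] = 1.0
            # sample an action in proportion to the rule's weights
            row = self.weights[obs]
            return random.choices(range(len(row)), weights=row)[0]

        def feedback(self, reward):
            for obs, recent in enumerate(self.recency):
                if recent == 0.0:
                    continue                # rule never used yet
                if reward < 0:
                    # punishment: perturb recently used rules, scaled down
                    # by how "hardened" they are
                    rate = recent / self.hardness[obs]
                    self.weights[obs] = [max(0.01, w + rate * random.gauss(0, 0.5))
                                         for w in self.weights[obs]]
                else:
                    # reward: harden recently used rules against future mutation
                    self.hardness[obs] += recent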

This could be taken as an indication that studying logic and rationality is not directly relevant to AI, but I would not agree with such an argument. For one thing, it is possible to derive a model of rationality from such approaches. If they work, there is a reason. The techniques each essentially provide some way of evaluating how a particular program of behavior is doing, together with a technique of searching through the possible behaviors. One could consider the space of all possible programs that might have generated the behavior so far, rather than the single program that actually did. One then takes the best program from that space, or perhaps a weighted vote. Obviously there will be some details to fill in (which is to say that such models of rationality don't just follow directly from the evolutionary algorithms employed), but the general approach is clear... such a system would take an infinite amount of processing power to compute, so one would need to use approximations; the more computing power given, the closer the approximation could be. All the data at hand is now being used, because the system now has the ability to go back and re-think details of the past, asking if particular sensory patterns might have been clues warning of a punishment, et cetera.
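
As a toy rendering of that idea (all names are mine, and the space of "programs" here is just every lookup table over a two-symbol observation space, with a crude simplicity prior standing in for program length):

    import itertools

    OBS, ACTS = (0, 1), (0, 1)

    def all_tables():
        # every function from observations to actions stands in for "every program"
        for outputs in itertools.product(ACTS, repeat=len(OBS)):
            yield dict(zip(OBS, outputs))

    def consistent(table, history):
        # history holds (obs, action_taken, reward) triples; keep the tables that
        # would have taken the rewarded actions and avoided the punished ones
        return all((table[o] == a) == (r > 0) for o, a, r in history)

    def vote(history, obs):
        # simplicity-weighted vote among all programs consistent with the past
        weights = {a: 0.0 for a in ACTS}
        for table in all_tables():
            if consistent(table, history):
                complexity = len(set(table.values()))  # crude stand-in for length
                weights[table[obs]] += 2.0 ** -complexity
        return max(weights, key=weights.get)

    print(vote([(0, 1, +1), (1, 0, -1)], obs=1))  # votes for action 1

The real version would enumerate programs in a universal language, which is exactly why it needs infinite processing power and can only be approximated.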

So why not accept such models of rationality? I have two reasons... first, they are purely reinforcement-learning-based. Agents based on these models can be driven only by pleasure and pain. There is no ability to consider external, unobserved objects; everything consists of patterns of directly observed sensation. Second, even if one is OK with purely reward-based systems, it is not clear that these are optimal. The evaluation criteria for the programs are not totally clear. There needs to be some assumption that punishment and reward are associated with recently taken actions, and recently executed code, but it cannot be too strong... The Success Story approach looks at things in terms of modifying a basic policy, and a modification is held responsible for all reward and punishment after the point at which it is made. The simple mutation-based scheme I described instead would use some decaying recent-use marker to decide responsibility. William Pearson suggests dividing time up into large sections, and splitting up the total goodness of the section as the payment for the programs that were in charge for that time. Each of these will result in different models of rationality.
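
To see how differently responsibility gets assigned, here are toy renderings of the three schemes side by side (my simplifications for illustration, not the original algorithms; "events" records when each unit of code ran, and each function returns credit per unit):

    from collections import defaultdict

    # events: list of (time, unit), meaning "unit ran at time t", sorted by time
    # rewards: list of (time, value)

    def decaying_recency(events, rewards, decay=0.9):
        # a unit's share of a reward decays with how long ago it last ran
        credit = defaultdict(float)
        for t_r, r in rewards:
            last = {}
            for t, unit in events:
                if t <= t_r:
                    last[unit] = t
            for unit, t_u in last.items():
                credit[unit] += r * decay ** (t_r - t_u)
        return dict(credit)

    def modification_onward(events, rewards):
        # in the Success Story spirit: a unit answers for all reward after it first ran
        credit, first = defaultdict(float), {}
        for t, unit in events:
            first.setdefault(unit, t)
        for t_r, r in rewards:
            for unit, t_u in first.items():
                if t_u <= t_r:
                    credit[unit] += r
        return dict(credit)

    def section_split(events, rewards, section=10):
        # in Pearson's spirit: each section's total reward is split among its active units
        credit = defaultdict(float)
        for t_r, r in rewards:
            active = {u for t, u in events if t // section == t_r // section}
            for unit in active:
                credit[unit] += r / max(len(active), 1)
        return dict(credit)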

So, I desire an approach which contains explicit talk of an outside world, so that one can state goals in such language, and furthermore can apply utility theory to evaluate actions toward those goals in an optimal way. But, that takes me back to the problem: which explicit logic do I use? Am I doomed to only understand logics less powerful than my own internal logic, and hence, to create AI systems limited by such logics?
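
(For reference, the evaluation I have in mind is just the standard expected-utility rule; the important difference is that the outcomes o range over states of the hypothesized external world rather than over raw reward signals:

    a^{*} \;=\; \operatorname*{arg\,max}_{a} \; \sum_{o} P(o \mid a)\, U(o).)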

One way out which I'm currently thinking about is this: a system may be initially self-ignorant, but may learn more about itself over time. This idea came from the thought that if I was shown the correct logic, I could refer to its truth predicate as an external thing, and so appear to have greater logical power than it, without really causing a problem. Furthermore, it seems I could learn about it over time, perhaps eventually gaining more referential power.

In understanding one's own logic, one becomes more powerful, and again does not understand one's own logic. The "correct logic", then, could be imagined to be the (unreachable) result of an infinite amount of self-study. But can we properly refer to such a limit? If so, it seems we've got an interesting situation on our hands, since we'd be able to talk about the truth predicate of a language more referentially powerful than any other... Does the limit in fact exist?
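
One way to start formalizing this is as a Tarski-style hierarchy of languages (only a first sketch of the idea, not a worked-out proposal):

    \mathcal{L}_0 = \text{the base language}, \qquad
    \mathcal{L}_{n+1} = \mathcal{L}_n \cup \{\mathrm{True}_n\}, \text{ where } \mathrm{True}_n \text{ applies to sentences of } \mathcal{L}_n, \qquad
    \mathcal{L}_\omega = \bigcup_{n<\omega} \mathcal{L}_n .

The question is then whether the agent can meaningfully refer to \mathcal{L}_\omega itself, since any predicate \mathrm{True}_\omega for it simply restarts the hierarchy at \omega + 1.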

I need to formalize this idea to evaluate it further.

4 comments:

  1. The Australian philosopher Colin Leslie Dean has shown that mathematics and science end in meaninglessness, i.e. self-contradiction

    http://gamahucherpress.yellowgum.com/books/philosophy/Absurd_math_science4.pdf

    he has also shown Gödel's first incompleteness theorem is meaningless, as Gödel can't tell us what makes mathematical statements true

    http://gamahucherpress.yellowgum.com/books/philosophy/GODEL5.pdf

    and Dean has shown Gödel's second incompleteness theorem likewise ends in meaninglessness

    http://gamahucherpress.yellowgum.com/books/philosophy/GODEL5.pdf

    GODEL IS SELF-CONTRADICTORY
    But here is a contradiction: Gödel must prove that a system cannot be proven to be consistent based upon the premise that the logic he uses must be consistent. If the logic he uses is

  2. Heh, thanks for referencing my theories. I'm not sure what my thoughts were the last time we communicated on this subject. Here is a current overview.

    The first layer (the ecology/economy) is not a theory of rationality, just a method of selecting between different programs and systems of programs. So yes, it is incomplete as far as AI goes. Also, random evolution is not very good for computer programs, so only more complex methods of introducing variance will flourish. Complex methods can flourish because they can add code to the programs they create, which passes some of the reward those programs get back to them.

    The first experiment I would like to perform is to seed the system with populations of agents looking for probabilistic relations between variables (both external inputs and processed data), and then other agents that attempt to use these relations for logical inference and prediction of the world and itself, and also for a degree of self-programming.

    Another layer of agents on top of this would look for grammars in data and attempt to find patterns that could be translated into statements (this would be necessary for human-style communication).

    There are other layers of agents needed, probably. I'll try and follow your blog in the future. I'm getting back into thinking about things. I really need to make an economic system to start experimenting with, but I'm being too perfectionist.

  3. You coded one for your thesis, right? It's not good enough? :)

    Right now, I have little faith in the economic learning model-- in particular, I am not sure that complex models of reproduction-with-variation (such as you mention) will be much more effective at getting money than simplistic random mutation. Intuitively, I feel your system would have a very strong tendency to lock in prematurely to the first marginally better strategy it finds. (This is not even guaranteed to be a local optimum, because there is no guarantee that the system will keep exploring.) Given that, it's unclear to me why the economic model should be expected to be a good "operating system" within which to build more complicated structures.

    Where are you located these days?

  4. Unfortunately not. I've got theoretical problems with that system. I need to make one where agents can't sabotage the working of other agents without either paying a cost or being easy to spot, so that Tit-for-Tat and game-theoretic concerns come into play.

    My original system couldn't be run in real time, as there was no way for programs to know which other programs were using processing power, memory bandwidth, or energy (in a mobile system); only memory and virtual processing power were accounted for. All this meant that it would be evolutionarily stable for deleterious agents to hog those resources so that agents "in charge" would get less positive feedback, leading to sub-optimal results. I've got to find a system where such things are impossible (and the policing doesn't take too much overhead).

    Random changes are generally pretty bad. In my dissertation the random changer stopped changing pretty quickly (offspring were more likely to be bad, so it was better to keep the program the same).

    My rough desiderata for an "operating system" are:

    1) As much of the internals of the system as possible must be allowed to adapt to the problems the system faces. This includes allowing the methods of adaptation to adapt.
    2) As attempted adaptations may be negative for the system, there must be a way for the system to weed out bad adaptations, no matter where they end up.

    Are there any papers or blog posts on these subjects that I should read but have missed?

    I'm not convinced you want to guarantee that a system will keep on exploring, to be honest. I don't want a robot that tries sticking forks in plug sockets to see what happens, for example. Or the coding equivalent of doing the same.

    I'm not anywhere interesting, just London. I gave up on these things for a while, but I'm just getting back into them. I'm thinking about creating a simple robotic arm and a simple economy to play around with it. Although if there are better systems in the works, I'm interested.
