I've figured out that, although the system can represent arbitrary models, it can't learn all of them. The reason has to do with being unable to restrict a recursive definition. Because of the way recursive definitions are learned, the system is unable to entertain imaginary alternative structures and test for their occurrence in the data (as it can for non-nested patterns). The basic reason is that nested patterns emerge from the system, being represented only implicitly through non-nested patterns. They aren't represented explicitly, so they can't be fiddled around with. A nested pattern is defined by looping back onto its defining context, not by looping back on an actual definition (as discussed before); if there is no context to hold it, it can't be defined. Why this is bad: since the system is unable to come up with imaginary nested patterns, it can't (on its own) imagine arbitrary calculations "off to the side" (what I called "algebraic manipulations" before), which are a key feature of Turing-complete learning. (Or, at least, I have come to that conclusion. It's possible that I'm wrong, and the system as stated can learn Turing-complete worlds after all.)
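As a loose illustration of why explicit recursion matters here (a toy example of my own, not the system's notation): any fixed stock of non-nested templates covers nesting only up to some bound, whereas a rule that loops back on an actual definition covers every depth.

```python
# A toy illustration (my own, not the system's notation): fixed, non-nested
# templates cover nesting only up to a bound, while an explicitly recursive
# definition covers arbitrary depth.

def matches_recursive(s):
    """Explicitly recursive rule: S -> '' | '(' S ')'."""
    if s == "":
        return True
    return s.startswith("(") and s.endswith(")") and matches_recursive(s[1:-1])

def flat_templates(max_depth):
    """Non-nested stand-ins: one literal template per depth, up to a fixed bound."""
    return {"(" * d + ")" * d for d in range(max_depth + 1)}

if __name__ == "__main__":
    templates = flat_templates(3)            # a bounded, non-recursive approximation
    for depth in range(6):
        s = "(" * depth + ")" * depth
        print(depth, s in templates, matches_recursive(s))
    # the template set fails beyond depth 3; the recursive rule never does
```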
I've figured out a way to make the system capable of Turing-complete learning, although it's not particularly pleasing. Two modifications must be made. First, aggregate definitions must be able to contain alternatives. This is an explicit version of what happens when multiple items occur in a particular context. Second, definitions must be able to be explicitly recursive, containing the object being defined; this is an explicit version of what happens when an object is its own context.
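To make the two modifications concrete, here is a minimal sketch in Python. The tuple notation ('seq', ...), ('alt', ...), ('ref', name), the DEFS dictionary, and the little matcher are all my own placeholders, not the actual representation. The point is only that an 'alt' node is the explicit counterpart of "several things occur in the same context", and a ('ref', ...) back to the definition's own name is the explicit counterpart of "an object serving as its own context".

```python
# A minimal sketch, in improvised notation, of the two proposed modifications:
# aggregate definitions that may contain alternatives ('alt') and definitions
# that may refer to themselves by name ('ref').

# A definition is a literal token, ('seq', part, ...), ('alt', option, ...),
# or ('ref', name) pointing back into the dictionary of definitions.
DEFS = {
    # nested parentheses: either empty, or '(' followed by the pattern itself and ')'
    "nest": ("alt",
             ("seq",),                                # empty alternative
             ("seq", "(", ("ref", "nest"), ")")),     # explicitly recursive alternative
}

def match(defn, tokens, defs=DEFS):
    """Return the set of prefix lengths of `tokens` that can be consumed by `defn`."""
    if isinstance(defn, str):                         # literal token
        return {1} if tokens[:1] == [defn] else set()
    kind, *body = defn
    if kind == "ref":
        return match(defs[body[0]], tokens, defs)
    if kind == "alt":
        return set().union(*(match(opt, tokens, defs) for opt in body)) if body else set()
    # 'seq': thread the reachable positions through each part in order
    positions = {0}
    for part in body:
        positions = {p + q for p in positions for q in match(part, tokens[p:], defs)}
    return positions

if __name__ == "__main__":
    for s in ["", "()", "(())", "(()"]:
        toks = list(s)
        print(repr(s), len(toks) in match(DEFS["nest"], toks))
```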
Interestingly, we might require that these two things only occur together: a recursive call can only occur inside an alternative, and an alternative can only occur if it contains a recursive call. The first of these will almost always be true anyway (otherwise every instance of the object being defined would be infinitely long; this could only hold for concepts such as "string of text", and the system doesn't really need to come up with the concept of an infinite string of text, although it seems human). The second restriction does not bar Turing-complete learning (as far as I know), so it doesn't change the representational power of the system; but it would probably still hurt the system's power in practice.
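Continuing in the same improvised notation, a checker for this pair of restrictions might look like the following sketch (my own assumptions; it only considers direct self-reference, not recursion through a chain of other definitions).

```python
# A sketch of the proposed restriction: a definition may refer to itself only
# inside an alternative, and an alternative is only allowed when at least one
# of its options refers back to the definition.

def contains_ref(defn, name):
    if isinstance(defn, str):
        return False
    kind, *body = defn
    if kind == "ref":
        return body[0] == name
    return any(contains_ref(part, name) for part in body)

def obeys_restriction(name, defn, inside_alt=False):
    if isinstance(defn, str):
        return True
    kind, *body = defn
    if kind == "ref":
        # a recursive call is only legal inside an alternative
        return body[0] != name or inside_alt
    if kind == "alt":
        # an alternative must contain a recursive call somewhere in its options
        return contains_ref(defn, name) and all(
            obeys_restriction(name, opt, inside_alt=True) for opt in body)
    return all(obeys_restriction(name, part, inside_alt) for part in body)

if __name__ == "__main__":
    ok = ("alt", ("seq",), ("seq", "(", ("ref", "nest"), ")"))
    bad = ("seq", "(", ("ref", "nest"), ")")   # recursion outside any alternative
    print(obeys_restriction("nest", ok))       # True
    print(obeys_restriction("nest", bad))      # False
```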
There are a few variations on this theme. First, if we're allowing alternatives, then why not allow denials, too? This seems like it would make a good deal of difference for some purposes and not much for others. It would make the system more difficult to compute: rather than identifying an aggregate when we see all of its parts, we would now need to worry about seeing a lack of certain things. This is more troublesome than it sounds, because in many cases the system only knows what is, not what isn't. (In imagined cases, of course; this is how I visualize the system, but that doesn't say much about what it will be like in practice.) A denial of an abstract entity requires us not just to fail to find an instance of it currently instantiated in the location in question, but to find no instance after instantiating all possible entities in that location. Since the system is now supposed to be Turing-complete, this runs into the halting problem. Ideally the system should be able to guess after looking for a good amount of time: "It doesn't seem as if there is an instance there, although I can't be sure."
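The "guess after looking for a while" idea could be as simple as a budgeted search with a three-valued answer. In this sketch the candidate enumerator is a made-up stand-in (strings over a toy alphabet); the real system would enumerate possible instantiations of entities in the location in question.

```python
# A rough sketch of checking a denial by bounded search: look for any instance
# up to a fixed budget and answer "unsure" (None) when the budget runs out.
from itertools import product

def candidate_instances(alphabet="()", max_len=12):
    """Invented enumerator: all strings over the alphabet, shortest first."""
    for n in range(max_len + 1):
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)

def denial_holds(predicate, budget=5_000):
    """True = no instance found in a finite space, False = instance found, None = unsure."""
    examined = 0
    for cand in candidate_instances():
        if predicate(cand):
            return False            # found an instance, so the denial fails
        examined += 1
        if examined >= budget:
            return None             # ran out of budget: "can't be sure"
    return True                     # exhausted the (finite) candidate space

if __name__ == "__main__":
    print(denial_holds(lambda s: s == "(()"))    # False: an instance exists
    print(denial_holds(lambda s: len(s) > 50))   # None: budget exhausted first
```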
Another variation: Most aggregates with explicit alternatives or self-reference should come from a recognized implicit alternative or self-reference. Can I change this "most" to "all"? I think the following version works: "All aggregate classes with explicit alternatives or self-reference either reflect aggregate classes with implicit alternatives or self-reference, or are formed by reducing the number of alternatives in such a class." The problem with the implicit alternatives is that they are too all-encompassing, embracing anything that occurs in the defining context. Making such alternatives explicit and then reducing the number of alternatives fixes this. Is that good enough? More thought is needed.
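For what it's worth, the "make the alternatives explicit, then reduce them" step is easy to state in the same improvised tuple notation; the example options below are invented.

```python
# A tiny sketch of reducing an explicit alternative class to a subset of its options.

def reduce_alternatives(alt_defn, keep):
    """Return a copy of an ('alt', ...) definition restricted to the option indices in `keep`."""
    kind, *options = alt_defn
    assert kind == "alt"
    kept = tuple(opt for i, opt in enumerate(options) if i in keep)
    return ("alt",) + kept

if __name__ == "__main__":
    # everything ever seen in some defining context, made explicit...
    seen_in_context = ("alt", "cat", "dog", ("seq", "(", ("ref", "nest"), ")"))
    # ...then narrowed down to the options the learner actually wants to keep
    print(reduce_alternatives(seen_in_context, keep={2}))
```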
Tuesday, June 12, 2007