tag:blogger.com,1999:blog-284176472017-09-08T16:52:08.062-07:00Artificial IntelligenceAbram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.comBlogger86125tag:blogger.com,1999:blog-28417647.post-17455328630835579712010-01-18T15:16:00.001-08:002010-07-30T21:39:41.838-07:00<span style="font-size: 180%;">Moving<span style="font-size: 100%;"> </span></span><br /><br /><br /><br /><span style="font-size: small;">I've been considering starting over with this blog thing for a while, so I thought I'd put up a post warning the few people who follow this blog. My reason for the move is that I feel much of what I have said on this blog is, well, foolish given what I've learned. At the same time, my current beliefs are not that far off what I have said, making it hard to correct what I've said without long explanations. In addition to this, there is a great deal that I have not written, which I should have; and what I have written, I have not organized in any fashion.<br /><br />So, I feel that it is best to start from the top and explain my beliefs, goals, intuitions, and so on.</span><br /><br /><br /><br /><span style="font-size: small;">I'll be starting on another Google blog, <a href="http://dragonlogic-ai.blogspot.com/">The Logic of Thought</a>. I realise, the name may sound a bit pretentious-- I do not claim to have the answer. Still, that seems like a fair label for the question I am asking.</span><br /><span style="font-size: small;"><br /></span><br /><span style="font-size: small;">Edit-- I changed the name to "In Search of a Logic of Thought."</span><br /><span style="font-size: small;">Edit again-- Now it's "In Search of Logic." 
</span><span style="font-size: small;">Ok, last name change I promise.</span><span style="font-size: small;"> :)<br /></span>Abram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com2tag:blogger.com,1999:blog-28417647.post-85667318493480908092009-08-21T10:50:00.000-07:002009-08-23T12:22:00.072-07:00<span style="font-size:180%;">Climbing the Mountain</span><br /><br />Undefinability is a harsh reality.<br /><br />Any solution that one offers is still vulnerable to the attack: full self-reference seems impossible. Regardless of how clever you think you've been, <span style="font-style: italic;">if</span> you propose a solution, I can point to a structure that can't be defined within that system: the system itself. If that's not the case, then the system is inconsistent. (And even <span style="font-style: italic;">then</span>, even if we <span style="font-style: italic;">allow</span> inconsistent systems, I know of no system which could be said to fully describe itself.)<br /><br />What this all suggests is that human beings (along with a broad range of sentient entities) are <span style="font-style: italic;">fundamentally</span> incapable of articulating our own inner logic. Why? For the same reason that a powerset is larger than a set: if we could fully refer to ourselves, our logic would be "larger than itself" (loosely speaking).<br /><br />It comes down to a problem of wanting to be able to form sentences that mean anything we might want. If we're fully capable, then we can construct a sentence that is true precisely when it is not true... and the system falls apart.<br /><br />Kripke's fixed point theory offers a nice fix: with some specific machinery, we're able to call these sentences "undefined". But now we can't refer properly to this idea, "undefined". 
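The fixed-point idea can be illustrated with a toy sketch (the sentence encoding and the three-valued evaluator below are my own simplification, not Kripke's actual construction): start with every truth-attribution undefined, iterate, and watch ordinary sentences settle to true while the liar stays undefined.

```python
# Toy Kripke-style fixed point. Sentences are ("atom",), ("not", s),
# or ("T", name), where ("T", name) reads "the sentence called `name` is true".
sentences = {
    "snow": ("atom",),               # a plain true sentence
    "t_snow": ("T", "snow"),         # "'snow' is true"
    "liar": ("not", ("T", "liar")),  # "this sentence is not true"
}

def evaluate(s, interp):
    """Three-valued evaluation: True, False, or None ('undefined')."""
    if s[0] == "atom":
        return True
    if s[0] == "T":
        return interp[s[1]]
    if s[0] == "not":
        v = evaluate(s[1], interp)
        return None if v is None else not v

# Begin with every truth-attribution undefined, and iterate the
# "jump" until nothing changes.
interp = {name: None for name in sentences}
while True:
    new = {name: evaluate(s, interp) for name, s in sentences.items()}
    if new == interp:
        break
    interp = new
```

At the fixed point, "snow" and "'snow' is true" both come out true, while the liar is stuck at "undefined", which is exactly the status Kripke's theory assigns it.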
So we've got a complete theory of truth (one might say), but we're still stuck with regard to undefinability.<br /><br />So, it looks like we've got to accept it: we can't find a mind (artificial or otherwise) that fulfills the imperative "Know Thyself". The self remains shrouded in mystery. What's a self-respecting AI engineer to do? Am I forced to always design minds with less power of logical reference than my own, because I could not comprehend a design that was truly up to human capability? Are artificial intelligences doomed to be fancy calculators, lacking "understanding" because they will always have a weaker logical structure?<br /><br />First, no. That doesn't really follow. It's quite possible to use an algorithm that can <span style="font-style: italic;">in principle</span> learn anything: evolution. For example, one could build an artificial mind that held an initially simple program within it, mutated the recently run areas of code when punishment occurred, and strengthened recently run code against mutation when rewarded. Or, a little more sophisticated, one could implement Schmidhuber's Success Story algorithm, which always and only keeps apparently beneficial mutations, is capable of learning what and when to mutate, can learn to delay reward, and has other desirable features. And yet again, we could try William Pearson's design, which sets up an artificial economy of agents which can co-evolve to produce the desired behavior. With these styles of approaches, there is not a worry of fundamental limitation: such systems can learn the correct logic if it exists (it just might take quite a while!). The worry, rather, is that these approaches do not take full advantage of the data at hand. There is no guarantee that they will perform better given more processing power and memory, either. 
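A minimal sketch of the simple mutation-based scheme just described (the toy task, the constants, and the protection rule are all invented for illustration, not a serious proposal): the "program" is a lookup table of responses, recently run entries are mutated on punishment and protected on reward.

```python
import random

random.seed(0)
N = 20
target = [random.randint(0, 1) for _ in range(N)]  # hidden "correct" behavior
policy = [random.randint(0, 1) for _ in range(N)]  # the evolving program
protection = [0.0] * N                             # resistance to mutation

for step in range(5000):
    s = random.randrange(N)     # a situation comes up
    action = policy[s]          # entry s is the "recently run code"
    if action == target[s]:
        # reward: strengthen recently run code against mutation
        protection[s] = min(1.0, protection[s] + 0.3)
    else:
        # punishment: mutate the recently run code (unless protected)
        if random.random() > protection[s]:
            policy[s] = 1 - policy[s]
        protection[s] *= 0.5

accuracy = sum(p == t for p, t in zip(policy, target)) / N
```

Even this crude loop reliably finds the hidden behavior on the toy task, which is the point: nothing limits what such a process can learn in principle, but nothing about it is a model of rational use of the evidence either.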
In short, they are not a model of rationality.<br /><br />This could be taken as an indication that studying logic and rationality is not directly relevant to AI, but I would not agree with such an argument. For one thing, it <span style="font-style: italic;">is</span> possible to derive a model of rationality from such approaches. If they work, there is a reason. The techniques each essentially provide some way of evaluating how a particular program of behavior is doing, together with a technique of searching through the possible behaviors. One could consider the space of all possible programs that might have generated the behavior so far, rather than the single program that actually did. One then takes the best program from that space, or perhaps a weighted vote. Obviously there will be some details to fill in (which is to say that such models of rationality don't just follow <span style="font-style: italic;">directly</span> from the evolutionary algorithms employed), but the general approach is clear... such a system would take an infinite amount of processing power to compute, so one would need to use approximations; the more computing power given, the closer the approximation could be. All the data at hand is now being used, because the system now has the ability to go back and re-think details of the past, asking if particular sensory patterns might have been clues warning of a punishment, et cetera.<br /><br />So why not accept such models of rationality? I have two reasons... first, they are purely reinforcement-learning-based. Agents based on these models can be driven only by pleasure and pain. There is no ability to consider external, unobserved objects; everything consists of patterns of directly observed sensation. Second, even if one is OK with purely reward-based systems, it is not clear that these are optimal. The evaluation criteria for the programs are not totally clear. 
There needs to be some assumption that punishment and reward are associated with recently taken actions, and recently executed code, but it cannot be too strong... The Success Story approach looks at things in terms of modifying a basic policy, and a modification is held responsible for all reward and punishment after the point at which it is made. The simple mutation-based scheme I described instead would use some decaying recent-use marker to decide responsibility. William Pearson suggests dividing time up into large sections, and splitting up the total goodness of the section as the payment for the programs that were in charge for that time. Each of these will result in different models of rationality.<br /><br />So, I desire an approach which contains explicit talk of an outside world, so that one can state goals in such language, and furthermore can apply utility theory to evaluate actions toward those goals in an optimal way. But, that takes me back to the problem: which explicit logic do I use? Am I doomed to only understand logics less powerful than my own internal logic, and hence, to create AI systems limited by such logics?<br /><br />One way out which I'm currently thinking about is this: a system may be initially self-ignorant, but may learn more about itself over time. This idea came from the thought that if I was shown the correct logic, I could refer to its truth predicate as an <span style="font-style: italic;">external</span> thing, and so <span style="font-style: italic;">appear</span> to have greater logical power than it, without really causing a problem. Furthermore, it seems I could learn about it over time, perhaps eventually gaining more referential power.<br /><br />In understanding one's own logic, one becomes more powerful, and again does not understand one's own logic. The "correct logic", then, could be imagined to be the (unreachable) result of an infinite amount of self-study. But can we properly refer to such a limit? 
If so, it seems we've got an interesting situation on our hands, since we'd be able to talk about the truth predicate of a language more referentially powerful than any other... Does the limit in fact exist?<br /><br />I need to formalize this idea to evaluate it further.Abram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com4tag:blogger.com,1999:blog-28417647.post-88977885472498441802009-06-18T12:26:00.000-07:002009-06-18T14:32:55.111-07:00<span style="font-size:180%;">The Importance of Uncomputable Models</span><br /><br />Inspired by <a href="http://supermodelling.net/?p=130">this blog</a>.<br /><br />I have mentioned most of these thoughts before, but I am writing them up in one post as a cohesive argument. I will argue that uncomputable models are important. I am not arguing that people think in ways that computers cannot, but rather that people and computers alike can benefit from using models that have "holes" which represent meaningful questions that can't be answered definitively by running a computation.<br /><br />The world that we live in consists of a bunch of macro-objects. What are macro-objects? Roughly speaking, they are coherent structures of micro-objects, which recur in both time and space.<br /><br />A macro-object "emerges" from the micro-dynamics. It is a structure which persists, propagating itself into the future, like a wave in water.<br /><br />A spherical bubble is a macro-object, because slight warps in the sphere get evened out over time. 
(Until it pops.)<br /><br />Similarly, a water droplet, a planet, and a star.<br /><br />An atom is an emergent object (though not macro-level) because the positive and negative charges of the electrons and protons enter into a stable relationship.<br /><br />A grasshopper is a stable entity because its metabolism maintains a homeostasis which allows it to live for a fair amount of time.<br /><br />Similarly for all living organisms.<br /><br />And, so on.<br /><br />The key idea here is that all of these objects are in a <span style="font-style: italic;">convergent</span> state: a state to which (within limits) the object always returns after wandering away.<br /><br />Now, to logical foundations. The halting problem is unsolvable; there is no computable algorithm which can tell you which computations do and do not halt. But, suppose we could run our computer an infinite amount of time to get the answer. A machine that can do this is called a hypercomputer. Then, we could solve the halting problem by waiting to see if the computation in question halted; so, we can use hypercomputations to answer many questions that regular computations cannot answer. However, we've got a new type of problem. Some computations will flip back and forth between answers infinitely often, so that when we run them an infinite amount of time, the output is indeterminate. The result of the hypercomputation is then undefined, or "nonconvergent".<br /><br />Asking whether a hypercomputation converges is analogous to asking whether a normal computation halts. In a specific sense, it is twice as hard: if we solved the halting problem for normal computations, and made a little magic box that can give the correct answer for halting questions, and connected that to an ordinary computer, then we have a machine equivalent to a hypercomputer. 
Asking whether the programs of the new machine halt is actually equivalent to asking if hypercomputations converge.<br /><br />So, halting is uncomputable, but convergence is doubly so!<br /><br />Yet, I have argued that convergence is all around us. On an everyday basis, we deal with convergent states as if they were solid entities. So, I argue, we are viewing the world through an uncomputable model.<br /><br />Mathematically, one should expect reasoning about convergence to be quite hard (assuming, as I do, that human reasoning is computable). Everyday reasoning is not "quite hard" in this way. We mitigate the full force of the uncomputability of our models with many coping strategies; we mainly reason under the <span style="font-style: italic;">assumption</span> of convergence (for structures that have converged in the past), rather than attempting to question this assumption. We have to <span style="font-style: italic;">learn</span> when things converge and fail to converge. Yet, even so, using uncomputable models is easier than trying to apply computable models to the problem. Asking whether a structure is generally convergent is a <span style="font-style: italic;">very</span> useful abbreviation, approximately summing up a lot of questions about the state at a given time.<br /><br />Also, it is important to admit that the mathematically pure concept of convergence is not quite what we are interested in. In practical situations, we are interested in whether something is <span style="font-style: italic;">quickly</span> convergent. This is not uncomputable; however, it can be more expensive to check than reasoning abstractly about convergence. 
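A small illustration of the gap (the window test below is an invented stand-in for "reasoning under the assumption of convergence"; any such finite check is only a guess, since only the whole infinite trail settles the question):

```python
def run(step, start, n):
    """Iterate a process n times, recording the trail of outputs."""
    x, trail = start, []
    for _ in range(n):
        x = step(x)
        trail.append(x)
    return trail

# A process whose output settles down (Newton's iteration for sqrt(2)):
settles = run(lambda x: (x + 2.0 / x) / 2.0, 1.0, 50)

# A process that flips between answers forever:
flips = run(lambda x: 1 - x, 0, 50)

def looks_convergent(trail, window=5):
    """Finite-horizon guess: did the last few outputs agree?"""
    tail = trail[-window:]
    return all(abs(a - tail[0]) < 1e-9 for a in tail)
```

The checker gets both of these right, but only because we chose a long enough run; a hypercomputer could decree the second process "nonconvergent" outright, which is exactly the doubly-uncomputable judgment in question.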
So (and this is probably the weakest point in my argument) I think it is worthwhile to keep reasoning about the uncomputable models.<br /><br />Another interesting point is that, much of the time, while we have a fairly good concept of the emergent convergent objects we deal with day to day, we do not have such a good idea of the underlying dynamic. This means that, in practice, we do not ask <span style="font-style: italic;">too</span> many convergence-hard questions. Often, we think we already have those answers, and instead we ask what sort of underlying structure might give rise to them.<br /><br />PS--<br /><br />I am being somewhat messy here, because "convergent" in the case of hypercomputation does not mean quite the same thing as "convergent" in the case of emergent objects. For one thing, convergence of entities has to do with correcting for disturbances from the external environment, while convergence for hypercomputations does not. I think this difference does not harm the argument. As I see it, emergent-convergence is a more general problem, having hypercomputation-convergence as a subproblem.Abram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com2tag:blogger.com,1999:blog-28417647.post-21410336640213441232009-06-14T14:03:00.000-07:002009-06-15T15:01:04.223-07:00<span style="font-size:180%;">What They Don't Tell You About Type Checking</span><br /><a href="http://en.wikipedia.org/wiki/Typed_lambda_calculus"><br />Typed lambda calculus</a> is not Turing-complete.<br /><br />There. I said it.<br /><br />More specifically, <a href="http://en.wikipedia.org/wiki/Simply_typed_lambda_calculus">simply typed lambda calculus</a> is not Turing-complete, and neither are any variants that are both <a href="http://en.wikipedia.org/wiki/Normalization_property_%28lambda-calculus%29">strongly normalizing</a> and have decidable type-checking. This is because programs that the type-checker verifies are guaranteed to compute a result. 
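To see the guarantee in action, here is a minimal type checker for the simply typed lambda calculus (the term and type encodings are my own). Self-application, the engine of nontermination in the untyped calculus, is rejected for every argument type, which is why every checked program terminates:

```python
# Terms: ("var", name) | ("lam", name, argtype, body) | ("app", f, a)
# Types: "o" (a single base type) | ("->", t_arg, t_result)

class IllTyped(Exception):
    pass

def typecheck(term, env=None):
    """Return the type of a simply typed lambda term, or raise IllTyped."""
    env = env or {}
    tag = term[0]
    if tag == "var":
        if term[1] not in env:
            raise IllTyped("unbound variable: " + term[1])
        return env[term[1]]
    if tag == "lam":
        _, name, argtype, body = term
        return ("->", argtype, typecheck(body, {**env, name: argtype}))
    if tag == "app":
        ftype = typecheck(term[1], env)
        atype = typecheck(term[2], env)
        if ftype[0] != "->" or ftype[1] != atype:
            raise IllTyped("cannot apply %r to %r" % (ftype, atype))
        return ftype[2]

# The identity function checks: (lambda x:o. x) has type o -> o
assert typecheck(("lam", "x", "o", ("var", "x"))) == ("->", "o", "o")

# Self-application (lambda x:T. x x) is rejected for every T we try,
# because T would have to equal ("->", T, ...), an infinite type:
for t in ["o", ("->", "o", "o"), ("->", ("->", "o", "o"), "o")]:
    try:
        typecheck(("lam", "x", t, ("app", ("var", "x"), ("var", "x"))))
        raise AssertionError("self-application should not typecheck")
    except IllTyped:
        pass
```

Since the fixed-point combinators all pass through self-application, no looping program survives the checker, and the language cannot be Turing-complete.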
If such a type-checker allowed a Turing-complete set of programs, it would be a solution to the <a href="http://en.wikipedia.org/wiki/Halting_problem">halting problem</a>!<br /><br />Really, I should have put two and two together earlier on this one. I suppose this is what comes of picking lambda calculus up by reading diverse things in diverse places rather than learning it from one authoritative source.<br /><br />What this indicates for me is that, at least in many cases, the point of allowing more and more sophisticated type systems is to get closer to a Turing-complete system. <span style="font-style: italic;">That</span> is why people add things like <a href="http://en.wikipedia.org/wiki/Polymorphism_%28computer_science%29#Parametric_polymorphism">parametric polymorphism</a>, <a href="http://en.wikipedia.org/wiki/Dependent_type_theory">dependent types</a>, and <a href="http://en.wikipedia.org/wiki/Kind_%28type_theory%29">kinds</a>. When we add these to typed lambda calculus, it doesn't just get "more expressive" in the sense that a high-level programming language is more expressive than machine code; it is literally able to do things that a simpler type system could not.<br /><br />This doesn't mean that strongly typed programming languages are not Turing-complete. Typically the type-checkers for these will <span style="font-style: italic;">not</span> guarantee that the program contains no infinite loops. So, one must be careful to figure out exactly what one is dealing with.Abram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com0tag:blogger.com,1999:blog-28417647.post-85747469198507295632009-06-10T12:56:00.000-07:002009-06-10T14:57:11.055-07:00<span style="font-size:180%;">Sets and Truth</span><br /><br />In this post, I will explain a way to create a theory of sets, given a theory of truth. 
These two foundational issues are typically treated as separate matters, so asking for a particular relationship to hold between a theory of truth and a set theory adds some interesting constraints to the situation.<br /><br />(Part of this post is an edited version of an email I sent originally to Randall Holmes.)<br /><br />Claim: A set is essentially a sentence with a hole in it. Set membership is characterized by the truth/falsehood of the sentences when we fill the holes.<br /><br />The major justification for this way of understanding sets is the way we use the comprehension principle to talk about sets. The comprehension principle is what allows us to say, in a set theory, that if we can describe the membership requirements for a set then that set exists. For example, I can describe the requirement "it must be a prime number that is over 5 digits long", so the set of prime numbers over five digits long exists. (The comprehension principle with no restrictions leads to naive set theory, however, which is paradoxical.)<br /><br />This view is not far from the way Frege explained sets, as I understand it. However, he distinguished the set as the <span style="font-style: italic;">extension</span> of the sentence-with-hole; meaning, the things for which it is true.<br /><br />So, suppose we've got a logic with enough machinery to represent computable functions, and furthermore we've got a description of the language in itself (ie, Godel-numbering-or-something-equivalent). Furthermore, we've got some theory of truth. Then, via the claim, we can already talk about sets, even though they haven't been purposefully introduced. 
In particular, "x is an element of y" is<br /><div id=":12n" class="ii gt"> interpreted as:<br /><br />"When "x" is used to fill in the partial sentence Y, the result is true"<br /><br />where "x" is the <span style="font-style: italic;">name</span> of the term x (Godel-numbering-or-equivalent, again, for those who are familiar with such things), and Y is the sentence-with-hole corresponding to the set y.<br /><br />The truth predicate is needed here in order to assert the result of the substitution. With the naive theory of truth, it is <span style="font-style: italic;">always</span> meaningful to apply the truth predicate to a sentence. So, the naive theory of truth gives us the naive theory of sets, in which set-membership is meaningful for any set we can describe. Of course, this is inconsistent under classical logic.<br /><br />So, what I'm saying is: if the claim is accepted, then the set theory is pinned down completely by the theory of truth. The naive theory of truth gives us the naive theory of sets. A Tarski-hierarchy theory of truth gives us something vaguely resembling type theory. Kripke's theory of truth gives us a theory in which all sets exist, but not all membership evaluations are meaningful. In particular, Russell's set "all sets that do not contain themselves" exists. We can meaningfully say that any set of integers is in Russell's set, and that the set of all sets (which exists) is not. The paradoxical situation, in which we ask if Russell's set is a member of itself, is simply meaningless.<br /><br />So far, so good. But, there is the issue of extensionality to deal with. The axiom of extensionality is taken as a very basic fact of set theory, one that not even nonstandard set theories consider rejecting. Given the above discussion, however, the axiom of extensionality would be false. Two different sentences-with-holes can be logically equivalent, and so have the same extension. 
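The claim can be sketched by treating each sentence-with-hole as a predicate, with membership as application and set-equality as equivalence under every slot-filler (a finite test domain stands in for real quantification; the predicates are invented examples):

```python
# A "set" is a sentence with a hole: here, a Python predicate.
def is_multiple_of_six(n):
    return n % 6 == 0                   # "_ is a multiple of six"

def div_by_two_and_three(n):
    return n % 2 == 0 and n % 3 == 0    # "_ is divisible by 2 and by 3"

def member(x, sentence_with_hole):
    # "x is an element of y": filling y's hole with x yields a truth
    return sentence_with_hole(x)

def extensionally_equal(a, b, domain):
    # set-equality as logical equivalence under every slot-filler,
    # not as syntactic identity of the two sentences
    return all(member(x, a) == member(x, b) for x in domain)

# Different sentences, same extension:
assert is_multiple_of_six is not div_by_two_and_three
assert extensionally_equal(is_multiple_of_six, div_by_two_and_three, range(100))
```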
For example, "_ is an even prime number" and "_ added to itself equals 4" are the same set, but they are different sentences.<br /><br />My solution here is to interpret the notion of set-equality as being a notion of logical equivalence between sentences-with-holes, rather than one of syntactic equivalence. In other words, "x=y" for two sets x and y needs to be interpreted as saying that X and Y mutually imply each other given any slot-filler, rather than just as saying X=Y. But this is doable within the language, since all we need to do is quantify over the slot-filler-names.<br /><br />This can be thought of as my way of interpreting Frege's concept of the "extension" of a sentence-with-hole. Rather than being a separate entity, the extension is a "view" of the sentence: the sentence up-to-equivalence.<br /></div>Abram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com1tag:blogger.com,1999:blog-28417647.post-80289749280358809192009-04-27T13:13:00.000-07:002009-04-27T13:14:22.671-07:00<span style="font-size:180%;">Thoughts on Guiding Inference</span><br /><br />One of the big rifts between the way things are described in the theoretical foundations of mathematics and the way things are actually done in real mathematics is that real mathematicians are constantly defining new objects to study. Perhaps someone sees a particular combination of properties pop up a few times, so they give that combination a name and start studying it directly. Perhaps someone decides a particular restriction of a given problem will be easier to solve, and so gives that a name and starts studying it. Perhaps someone wants to know how many X need to be combined to make Y, and they call the number the "X number" of Y, and start studying the properties of it as X and Y are varied. 
And so on.<br /><br />In fact, even when studying the foundations of mathematics, such names are often snuck in as "unofficial notation" that is thought of as abbreviating the official notation. Unofficial notation makes axioms and other formulae easier to read.<br /><br />Why do we do this? I can think of two explanations. First, one can explain this in the same way "named entities" are explained in other domains: it is compressive to name common entities. This means it takes up less room in memory, and also as a side benefit it usually makes the world easier to predict. This is an important explanation, but not the focus of this post. The second explanation is that naming things is a means of inference control.<br /><br />When we name things, what we proceed to do is to determine some basic properties of the new entities. (This seems true in non-mathematical discourse as well, but perhaps not as clear-cut.) We come up with properties of the new entities, and look for ways of calculating those properties. We come up with theorems that hold for the entities. We look for existence and uniqueness proofs. And so on. What I'm arguing is that all of this activity is basically focused on helping later inference.<br /><br />One particular phenomenon serves as a good illustration of this: the behavior of mathematicians when studying sequences of numbers that emerge from particular mathematical entities. The prime numbers; the number of groups of each finite size; the number of graphs of a given size; the Fibonacci sequence. The big thing to try to do with a sequence is to "find the pattern". What could this mean? If the sequence is mathematically well-defined, don't we know the pattern already? A mathematician will find the first few values of a sequence, and then study them looking for relationships. 
Often what is sought is a recursive definition of the sequence: a function that calculates the next number based on the immediately previous number, or several of the previous numbers, or perhaps all of the previous numbers. If we've already got a recursive form of the sequence, we might try to find a "closed form" version. My argument here is that all of this behavior is explained by the desire to make later reasoning as expedient as possible. Finding the recursive form is not merely a sort of curiosity, like wondering if it is possible to keep a spider alive in a paper cup; rather, the recursive form is a faster way of calculating the sequence than the method that follows directly from the definition. Similarly, the closed form will usually be even faster.<br /><br />So, what I'm saying essentially is that a reasoning system should (when it isn't busy doing other things) be looking for nice entities related to the task at hand, and nice quick ways of calculating stuff about those entities. What "nice entity" means is based on two (sometimes conflicting) motivations: the entity should be compressive (meaning it should help make descriptions take up less space), and it should be tractable (meaning reasoning about it should be quick).<br /><br />How should the search for good ways of calculating be carried out? By the same methods as general inference. The system should be able to apply good reasoning methods that it finds back onto the task of looking for good reasoning methods. Of course, for this to work very well, the system needs to be able to have fairly good reasoning methods to begin with.<br /><br />What form should the problem-solving methods that the system finds take?<br /><br />Definitely they should be able to take on the form of typical algorithms: closed-form expressions, recursive expressions, and generally any algorithm. Definitely these should be associated with the necessary criteria for application to an object (the criteria that guarantee correct results). 
Probably they would also be associated with provable upper bounds on runtime, so that the method chosen for a particular case (where multiple methods might apply) would be the one with the lowest time-estimate for that case. (Problems might arise for difficult-to-calculate time bounds; perhaps estimators would be required to be linear time.)<br /><br />Some extra features could probably improve upon this basic setup:<br /><br />-Allow possibly-non-terminating algorithms. This would make "search for a proof" count among the methods (as the last resort), which strikes me as elegant.<br /><br />-Allow learned expected time bounds<br /><br />-Allow "soft" problem-solving strategies (of possibly various sorts) to be learned; ie, heuristicsAbram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com0tag:blogger.com,1999:blog-28417647.post-455696168687764092009-04-27T12:31:00.000-07:002009-05-18T20:09:46.934-07:00<span style="font-size:180%;">Correction</span><br /><br />Last time, I wrote about the apparent limitations of logical systems arising purely from a formal axiomatization of the Tarski hierarchy. I said:<br /><br /><blockquote>So, if this gets us a lot of math, what is missing?<br /><br />The obvious answer is "a truth predicate for that system". This doesn't lend much insight, though.</blockquote><br /><br />I should have thought before I spoke. A more powerful language cannot be constructed by adding a truth predicate. The semantics of the language is that it should be referring to the Tarski hierarchy! A truth predicate over such a language would need to be a truth predicate over the entire Tarski hierarchy. Such a truth predicate would apparently correspond to an ordinal above all ordinals. Not good!<br /><br />As a side note: I've been looking at Quine's "New Foundations", and in that system, it is possible to talk about the ordering of all ordinals. Surprisingly, this ordinal does not cause problems. 
So, maybe in New Foundations the above move would be OK. With a naive view of the ordinals, though, it is not.<br /><br />Keeping the naive view, it seems like I should deny the possibility of enriching the language by adding a truth predicate. Does that mean that I should say that the language is maximally rich? I don't think so. I suspect the situation is more like what happens with Kripke's language that contains its own truth predicate: the language can be expanded in other, new directions.<br /><br />[edit- some of the text was garbled as posted. I may have lost a paragraph or so.]Abram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com0tag:blogger.com,1999:blog-28417647.post-1350035078495812072009-04-25T13:28:00.000-07:002009-04-25T16:08:42.125-07:00<span style="font-size:180%;">Limitations</span><br /><br />(Analysis of the system proposed in <a href="http://dragonlogic-ai.blogspot.com/2009/04/enumerating-infinity-what-are-we-doing.html">this post</a>)<br /><br />The idea of creating axiom systems describing Tarski's hierarchy in terms of ordinals, and ordinals in terms of Tarski's hierarchy, was discussed previously. It seems that the circularity would help prove the existence of lots of ordinals (and lots of Tarskian levels), hopefully without leading to paradox. So, if this gets us a lot of math, what is missing?<br /><br />The obvious answer is "a truth predicate for that system". This doesn't lend much insight, though.<br /><br />My bet is that we don't get anything uncountable by such a method. Where would it come from? We're constructing truth-predicates for countable languages, and (initially) countable sets of countable languages. Sure, the set of all countable ordinals is uncountable. 
But there is no reason to believe that we get the set of all countable ordinals!<br /><br />My feeling is that if I studied more about the hyperarithmetical hierarchy, there would be an obvious mapping between some portion of it and the system(s) under consideration.<br /><br />In some ways, the notion of "uncountable" seems to come from nowhere. All the formal constructions for "real number" and related entities seem to rely at some point on some other uncountable set. It's a bit like the (obviously illicit) method of defining a really big ordinal: "The ordering of all ordinals that can be defined without reference to this ordinal".<br /><br />Yet I do feel that the notion is meaningful! So, where might it come from?<br /><br />I have several ideas, none of them entirely convincing.<br /><br />One idea is that uncountable sets can be motivated by considering the space of possible sequences of input from the external environment, assuming a countably infinite amount of time. I've seen similar ideas elsewhere. One might counter that all structures die eventually, so all sequences of input in real-world situations are finite; this makes the set into a mere countable infinity. On the other hand, one might say that this is merely the case in practice; the idea of an infinitely long-lived structure is still sound, and even physically possible (if not probable). But even so, I don't feel like this is a really complete justification.<br /><br />Another attempt would be to claim that we need to allow for the possibility of infinitely long statements, despite not being able to actually make and manipulate such statements. (Universal statements and such may abbreviate infinitely many claims, but they are not literally infinitely long.) 
This idea is motivated by the following consideration: a nice way of getting a set theory from a non-set-theoretic foundational logic is to think of a set as a statement with an unfilled slot into which entity-names can be put to make complete statements. Claiming that a set contains an element is thought of as claiming that the statement is true of that object. At first, this might seem to fully justify naive set theory: a set exists for each statement-with-hole. But, this can only work if the theory contains its own truth predicate, so that we can make arbitrary talk about whether a statement is true when we fill a slot with a given element. The amount of set theory that gets captured by this method depends basically on the extent to which the theory is capable of self-reference; the naive theory of truth corresponds to naive set theory.<br /><br />This is interesting by itself, but the point I'm making here is that if we want to have an uncountable number of sets (for example if we believe in the uncountable powerset of the real numbers), then we'll want a logic that acts as if infinite statements exist. What this means is an interesting question; we can't actually use these infinite statements, so what difference do they make to the logic?<br /><br />One difference is that I don't automatically have a way of interpreting talk about Turing machines and numbers and other equivalent things anymore. I was justifying this referential power via talk about statements: they provide an immediate example of such entities. If we posit infinite statements that are "out there", somewhere in our set of sentences, we lose that quick way of grounding the idea of "finite". (This could be taken to show that such a method of grounding is not very real in the first place. :P)<br /><br />Semantically, then, we think of the base-language as an infinitary logic, rather than regular first-order logic. 
The language formed by adding the first truth predicate is then thought of as already containing talk about uncountable sets. (This changes the axioms that we'd be justified in using to manipulate the truth predicate.)<br /><br />All in all, I think this direction is mathematically interesting, but I'm not sure that it is really a route to justify uncountables.<br /><br />A relevant question is: why do mathematicians think that uncountables exist? The proof is given by taking the powerset of some countably infinite set, which is defined as the set of all subsets of the countably infinite set. It's then shown that no function exists that maps the powerset onto the countable set. This can be done even in systems that do not really have any uncountable sets: the set of subsets of a countably infinite set will map onto the original set, but not by a function within the system. So from inside the system, it will look as if there are uncountables.<br /><br />If this explains what mathematicians are doing, then mathematicians are being fooled into thinking there are real uncountables... but how can I say that? I'm just being fooled into thinking that there is a difference, a "real" and "not real", right?<br /><br />I think it is plausible that this weirdness would go away, or at least change significantly, in a logic that resolves other foundational problems. We might have a much better story for why the function that would map the uncountable set onto the countable set doesn't exist, so that it becomes implausible to claim that it exists from the outside but not from the inside. 
(But would that make the uncountables "real"?)Abram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com0tag:blogger.com,1999:blog-28417647.post-25475337177176770942009-04-25T13:21:00.000-07:002009-04-25T13:24:50.901-07:00<span style="font-size:180%;">Some musings on interpretations</span><br /><br />(Originally posted to <a href="http://groups.google.com/group/one-logic">this logic mailing list I'm trying to get started</a>. All are welcome! Except if you know nothing about logic and aren't willing to learn. Then you're not welcome. Sorry.)<br /><p>An interpretation is a translation of a statement from one logic into<br />another. I was looking at interpretations recently because they appear<br />to be a way to show that semantically stronger logics (specifically,<br />logics formed by adding a truth predicate) really are stronger in<br />practice. The stronger logic can interpret the weaker, but the weaker<br />cannot interpret the stronger. (This is one type of proof-theoretic<br />strength.)<br /></p><p>Normally, one does not allow just *any* interpretation... for example,<br />we wouldn't want a universal quantifier in the first logic to be<br />interpreted as an existential one in the new logic. It is typical (so<br />far as I have seen) to require the logical connectives to remain<br />unchanged, and only allow minimal changes to the quantifiers (such as<br />translating them to be restricted quantifiers).<br /></p><p>Yet, all we care about is structure in these interpretations. For<br />example, if we're interpreting talk about numbers into a logic that<br />talks about sets, we don't really care if the resulting translated<br />sentences don't have anything to do with sizes of sets-- all we're<br />supposed to worry about is relationships. For example, our new<br />translated notion of "addition" should tell us that "5" and "6" make<br />"11". 
(Usually when people do construct interpretations, of course,<br />they try to find ones that make some intuitive sense.)<br /></p><p>So if all we care about is structure, why constrain the way logical<br />connectives and quantifiers are translated? What happens if we get rid<br />of these constraints?<br /></p><p>Well... we can't get rid of *all* the constraints: we still need to<br />capture what we mean by "interpretation". Not every function from one<br />language to the other counts! The way I see it, we want to keep the<br />following minimal constraint:<br /></p><p>C1: If A |- B in L1, then f(A) |- f(B) in L2<br /></p><p>Where f(X) is the translation function, "X |- Y" means Y can be proven<br />from X, L1 is the language being translated, L2 is the language being<br />translated into, and A and B are statements in L1.<br /></p><p>I've also thought of the following:<br /></p><p>C2: A |- B in L1 if and only if f(A) |- f(B) in L2<br /></p><p>But, that is not much like the standard notion of interpretation.<br />Essentially it means that the interpretation into L2 adds no extra<br />knowledge about the entities in L1. But, if we're looking for strong<br />languages, extra knowledge is a good thing (as long as it is true<br />knowledge). (One could argue that this justification doesn't apply if<br />we're only worrying about structure, though, I think. Specifically, if<br />when we say "structure" we mean not just structures of what's provable<br />but also structures of what isn't, we should take C2 as the proper<br />constraint. I'll proceed with C1 since it is the more standard-looking<br />constraint.)<br /></p><p>Anyway. 
What now happens to the earlier assertion, that semantically<br />stronger languages are also proof-theoretically stronger, because they<br />can interpret more logics?<br /></p><p>Answer: plain first-order logic can interpret any logic L with a<br />computable notion of provability.<br /></p><p>Proof: Arbitrarily declare that some of the function symbols represent<br />the operations necessary to build up statements in L (for example, one<br />function might be chosen for each character in the alphabet of L, so<br />that composing those functions in a particular sequence would<br />represent writing down characters in that sequence). For an<br />L-statement X, call the first-order version of X built just from these<br />function symbols h(X). Write down some statements describing<br />L-provability. Writing down the rules of L like this is possible<br />because first-order logic is Turing-complete. Take the conjunction of<br />the statements about L and call it R (for "rules"). The interpretation<br />of an L-statement X will simply be R->h(X), where "->" is material<br />implication. This will satisfy C1 because wherever L proves A from B,<br />first-order logic will prove that if the rules of L-provability hold,<br />then A is L-provable from B. (This proof is still pretty messy, but<br />nonetheless I'm guessing I'm going into more detail here than needed,<br />so I'll leave it messy for now.)<br /></p><p>What does this mean? To me this means that proof-theoretic strength is<br />not a very justifiable way of enticing people over to the side of<br />logics with strong semantics. First-order logic is in some sense<br />proof-theoretically all-powerful (if we allow arbitrary<br />interpretations, and if we're dealing with computable provability). 
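<p>The quoting construction in the proof above can be sketched concretely. Below is a minimal Python sketch of the map h (one unary function symbol per character of L's alphabet) and the translation f(X) = R-&gt;h(X); the nested-tuple encoding of first-order terms and the placeholder name "rules_of_L" are my own assumptions, not part of the original construction.</p>

```python
# Sketch: first-order terms as nested tuples. One unary function
# symbol 'f_<c>' per character <c> of L's alphabet, plus a constant
# 'end' marking the empty string.

def h(statement):
    """Quote an L-statement as a closed first-order term."""
    term = ('end',)
    for ch in reversed(statement):
        term = ('f_' + ch, term)   # apply one character's function symbol
    return term

def f(statement, rules='rules_of_L'):
    """The interpretation: f(X) = R -> h(X), with R the conjoined rules."""
    return ('implies', rules, h(statement))
```

<p>For instance, h("AB") comes out as ('f_A', ('f_B', ('end',))): composing the character functions in sequence writes the statement down, just as described above.</p>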
If<br />someone is set in their path of using just first-order logic, I cannot<br />convince them just by talking about first-order truth; first-order<br />logic doesn't have a strong enough semantics to talk about first-order<br />truth, but they can interpret what I am saying by just listening to<br />the computable inference rules of my more-powerful logic. Every time I<br />say X, they can interpret me as meaning "The rules I've laid out imply<br />X". They will then be able to assert that first-order reasoning can<br />justify all the reasoning I'm doing, without even needing any new<br />axioms.<br /></p>I'll then try to argue that I'm actually asserting X, not just<br />asserting that X follows from the rules I've laid out... but if<br />they're *really* applying the interpretation properly, they'll tell me<br />that I'm making a meaningless distinction, since they'll think<br />R->(R->h(X)) is the same as R->h(X). (If they make this reasoning<br />explicit, though, I have them: I can assert that I'm not saying<br />R->h(X), I'm saying h(X).)Abram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com0tag:blogger.com,1999:blog-28417647.post-85080380235789439092009-04-17T09:21:00.000-07:002009-04-17T09:21:00.769-07:00<span style="font-size:180%;">Truth and Nonsense</span><br /><br />Continues <a href="http://dragonlogic-ai.blogspot.com/2009/03/after-having-thought-about-it-i-am.html">this</a> post.<br /><br />I think now that it is relatively straightforward to establish a correspondence between the Tarski hierarchy of truth and my hierarchy of nonsense.<br /><br />Basically, the two hierarchies diverge thanks to two different notions of the correct way to add a "truth" predicate to a base language. The Tarski hierarchy adds a metalanguage that only talks about truth in the base language. The nonsense hierarchy instead prefers Kripke's method, in which we construct a metalanguage that contains its own truth predicate. 
Both approaches can be thought of as constructing a metalanguage on top of a base language, but the Tarski hierarchy <span style="font-style: italic;">keeps doing so</span>, resulting in a hierarchy of truth, whereas the Kripke fixed-point construction cannot be iterated-- doing so adds nothing more. To continue upwards in the Kripke construction, we proceed in a different direction, adding <span style="font-style: italic;">nonsense</span> predicates.<br /><br />When we use the Kripke truth construction, we can clearly interpret the first Tarski iteration: all the truth-sentences that talk only about truth of base-language statements will be there, provided we have enough machinery to interpret the restricted quantifiers. (Details here will depend on the exact construction.) The semantics assigns them the same truth values; none of these sentences will come up undefined. (I'm talking about semantic interpretations, not proof-theoretic ones... again, details need to be worked out.) The second iteration of Tarskian truth will similarly be inside the Kripke construction; since the first iteration gets definite truth values, the second does too. So it goes for as long as the Kripke construction can interpret the restricted quantifiers; that is, for as long as the characteristics of a particular level of the Tarski hierarchy are definable given the tools that the Kripke construction has at its disposal. For example, if these tools can only define computable structures, I'd suppose that the Kripke construction would interpret the portions of the Tarski hierarchy corresponding to the computable ordinals. (That's just a guess. More research required!)<br /><br />In any case, given a particular amount of expressiveness in the base language, the Kripke construction will add a definite amount of expressiveness, corresponding to climbing a particular number of Tarski-hierarchy steps. 
(Probably this is known; I imagine someone has researched the semantic expressiveness of the Kripke least-fixed-point...) So what happens when we add in more nonsense predicates? Well, adding in nonsense predicates basically allows us to climb that same number of levels again; each nonsense predicate plays the role of allowing us to talk fully about the semantic structure of the construction-so-far (the role that the <span style="font-style: italic;">truth</span> predicate plays in the Tarski hierarchy). This can be thought of as adding that amount of structure to the base language. Then, the Kripke truth construction can do its work on that increased amount of structure. So, we jump up the same number of steps on the Tarski hierarchy for every nonsense predicate added.<br /><br />Eventually, since the amount of structure added by the truth predicate is always fixed, the scene will be dominated by the hierarchical structure added by the nonsense predicates. Still, it seems clear that each level will correspond in a definite way to a level on the Tarski hierarchy. The nonsense hierarchy merely forces one to make larger jumps at a time.Abram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com0tag:blogger.com,1999:blog-28417647.post-51045643354209419142009-04-10T11:31:00.000-07:002009-04-24T14:38:18.320-07:00<span style="font-size:180%;">Enumerating Infinity</span><br /><br />What are we doing when we list infinities? Particularly, I take <a href="http://www.google.com/search?q=ordinal+site:http://en.wikipedia.org/wiki+-inurl:%22User:%22+-inurl:Talk:+-inurl:%22User_talk:%22+-inurl:%22Template:%22+-inurl:%22Template_talk:%22&btnI=I%27m+Feeling+Lucky">ordinal infinities</a> as my example.<br /><br />The most natural ordinals, which people will tend to stay within if you ask them to invent bigger and bigger ordinals and give them the basic definitions, are the <a href="http://en.wikipedia.org/wiki/Recursive_ordinal">recursive ordinals</a>. 
In fact, people will tend to stay within what I'd call the "primitive recursive" ordinals:<br /><br />infinity, infinity + 1, infinity + 2, .... infinity + infinity (= infinity*2), infinity*3, infinity*4, ... infinity*infinity (= infinity^2), infinity^3... infinity^infinity (=infinity^^2), infinity^(infinity^infinity) (=infinity^^3), infinity^(infinity^(infinity^infinity)) (=infinity^^4) ...... infinity^^infinity ..... infinity^^^infinity ..... infinity^^^^infinity .....<br /><br />The above uses <a href="http://en.wikipedia.org/wiki/Knuth%27s_up-arrow_notation">up-arrow notation</a>.<br /><br />Enumerating computable ordinals involves finding halting computations, such as the numerical operations used above. Sticking infinity into a computable function basically stands for the infinite <span style="font-style: italic;">series</span> produced by sticking <span style="font-style: italic;">finite</span> numbers in that same location. So just "infinity" stands for:<br /><br />1, 2, 3, 4, 5, ....<br /><br />We can define + as the concatenation of two different orderings, which leaves the internals of the orderings untouched but specifies that all elements from the second will be larger than any elements from the first. So, "infinity + 1" stands for:<br /><br />1, 2, 3, 4, .... A.<br /><br />Here, "A" is a pseudo-number larger than all the finite numbers. (In other words, A is the first infinite number.)<br /><br />"infinity + infinity":<br /><br />1, 2, 3, ... A, A+1, A+2, A+3 ...<br /><br />Here, there are essentially two infinite series, the one that starts with 1, and the one that starts with A.<br /><br />"infinity + infinity + infinity":<br /><br />1, 2, 3, ... A, A+1, A+2, ... B, B+1, B+2 ...<br /><br />Now there are 3 series: 1, A, and B.<br /><br />"infinity * infinity":<br /><br />1, 2, 3, ... A, A+1, A+2 ... B, B+1 ... C, C+1 ... D, D+1 ... ...<br /><br />Now we have an infinite number of series.<br /><br />And so on. 
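For finite arguments, the up-arrow operations used in the list above can be computed directly. A minimal Python sketch of Knuth's standard recursion (one arrow is exponentiation; each extra arrow iterates the previous operation):<br /><br />

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow: a followed by n arrows, then b.

    One arrow is exponentiation; each additional arrow applies the
    previous operation b times. Grows far too fast to compute for
    anything but tiny arguments.
    """
    if n == 1:
        return a ** b
    if b == 0:
        return 1          # the empty iteration, by convention
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))
```

So "infinity^^3" above stands for the series up_arrow(1, 2, 3), up_arrow(2, 2, 3), up_arrow(3, 2, 3), ..., i.e. k^(k^k) for finite k.<br /><br />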
We can imagine each computable function as a method of specifying an infinite tree. The tree is "computable" because we can generate any finite portion of the tree using the computable function. Approximately speaking, the more tangled and structured and multifarious the branching of the tree, the larger the ordinal. But if we try just generating computations at random, then they may not halt; so we've got to try to put as much tangledness in as we can without putting in <span style="font-style: italic;">so</span> much tangledness that things become ill-defined. This can potentially draw on any knowledge we've got about the <a href="http://en.wikipedia.org/wiki/Halting_problem">halting problem</a>.<br /><br />If this were all there was to enumerating infinities, then I would say that something like probabilistic <a href="http://dragonlogic-ai.blogspot.com/2008/06/basic-hyperlogic-my-search-for-proper.html">mathematical truth</a> explains what it is we're doing. However, mathematicians (especially logicians) talk about ordinals that are not computable. These include <a href="http://en.wikipedia.org/wiki/Kleene%27s_O">Kleene's O</a>, the first uncomputable ordinal (which makes it the ordering on all computable ordinals), as well as many <a href="http://en.wikipedia.org/wiki/Large_countable_ordinal#Beyond_recursive_ordinals">higher countable ordinals</a>; the first uncountable ordinal, which is of course just the ordering of all countable ordinals; and so on.<br /><br />To make the initial jump outside of the recursive ordinals, to Kleene's O, we need to make an interesting sort of move: we admit our ignorance. We give up the hope of being able to enumerate every ordinal on the way, and make do with saying "if the function halts, then the ordinal is well-defined". Since we never will have all the information about which functions halt, we'll always be somewhat ignorant of which of these ordinals are well-defined. 
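This "admit our ignorance" move has a concrete computational face: halting is only semi-decidable, so a checker can confirm that a notation is well-defined by witnessing a halt, but can never confirm the opposite. A toy Python sketch (the step-function setup and names are my own illustration):<br /><br />

```python
def well_defined_so_far(step, start, budget):
    """Run a step function until it halts or the budget runs out.

    Returns True if the computation halts within `budget` steps
    (witnessing well-definedness), or None for "still ignorant":
    no finite budget can ever establish non-halting.
    """
    state = start
    for _ in range(budget):
        state = step(state)
        if state is None:   # convention: None means the machine halted
            return True
    return None

# A computation that halts by counting down to zero:
countdown = lambda n: None if n == 0 else n - 1
```

Here well_defined_so_far(countdown, 3, 10) witnesses a halt, while with budget 2 it returns None; raising the budget is all we can ever do.<br /><br />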
Yet, we look at their ordering as a mathematical entity, and start talking about it.<br /><br />This gives the impression that we'll need to give up more and more if we want to climb higher in the ordinals. But how much can we give up before we've given everything up?<br /><br />I propose that, in general, the process of listing ordinals is a process of deciding which things are <span style="font-style: italic;">well-defined</span>. If we give <span style="font-style: italic;">that</span> up, we've given up too much.<br /><br />Here, "well-defined" means "having a description on some level of Tarski's hierarchy of truth", or alternatively, "having a description on some level of my hierarchy of nonsense".<br /><br />Of course, this is circular. Those hierarchies are both defined in terms of ordinals, so defining ordinals in terms of them appears unproductive. However, it is not <span style="font-style: italic;">completely</span> unproductive. Let's take Tarski's hierarchy as the working example. Let 0 represent first-order logic, 1 represent the theory of truth for that (specifically, the theory with enough axioms to be equivalent in strength to Peano Arithmetic), 2 represent the theory of truth-in-1, and so on.<br /><br />The thing I want to note here is that the ordinal assigned to a level in the hierarchy is far lower than the largest ordinals whose existence can be proven in that theory. Suppose I lay down axioms for Tarski's hierarchy in terms of ordinals, and then lay down axioms for ordinals which require definability in Tarski's hierarchy. It seems that I'll get a large number of ordinals in this manner. If I start out believing that the ordinal 1 is well-defined, then I'll believe all the ordinals proven well-defined by Peano arithmetic are well-defined. That is a rather large number of ordinals. Since I believe in them, I'll believe in all the levels of the Tarski hierarchy corresponding to them... lots of levels! 
This gives me many more ordinals to believe in, which gives me many more levels to believe in, and so on.<br /><br />Of course, this stops somewhere (in the same sense that counting up stops somewhere...). It will only imply the existence of so many ordinals, assuming that it is a consistent theory. Furthermore, if it is consistent, then I can name an ordinal whose existence it does not imply: the ordering of all the ordinals the system talks about. Let's call this the "outside ordinal" for the system. (This is a bit trickier to specify than it looks at first, however. We can't just say "the ordering of all ordinals the system will consider well-defined". The system will have gaps in its knowledge. For example, it will prove a bunch of recursive ordinals to be well-defined, but not all of them; it then jumps to Kleene's O, because it can <span style="font-style: italic;">talk about</span> the set of well-defined recursive ordinals. Even more clearly: the system might be able to prove that the first uncountable ordinal is well-defined, but it will not be able to prove that all ordinals below this are well-defined... there are uncountably many of them!)<br /><br />The main precaution that must be taken is to prevent the system from taking "the ordering over all ordinals" to be an ordinal. This is like me saying "Consider the set of all well-defined ordinals. Consider the ordering on these as an ordinal, Q. Take Q + 1..." This is not allowed!<br /><br />OK. Given that, let's think about what happens when we add probabilistic justification to the system. We can think of the system as (eventually) knowing the truth about the halting problem (for any particular instance). This means that it is (eventually) correct in its judgements about well-definedness for computable ordinals. Thanks to the feedback effect of the system, this will mean that it is (eventually) correct in its judgements concerning a whole lot of ordinals. All of them? 
No: just as there are ordinal notations for which the halting problem determines well-definedness, there are ordinal notations for which the convergence problem determines well-definedness. (For a definition of the convergence problem, see <a href="http://www.idsia.ch/%7Ejuergen/kolmogorov.html">here</a>.) Still, this is an interesting class of ordinals.<br /><br />So how could one go even further? Well, perhaps we could consider the "outside ordinal" for the version of the system that knows all the halting truths...Abram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com0tag:blogger.com,1999:blog-28417647.post-88802985789864662852009-04-08T12:36:00.000-07:002009-04-11T11:39:09.497-07:00<span style="font-size:180%;">Some Numbers</span><br /><br />Continues <a href="http://dragonlogic-ai.blogspot.com/2009/04/risk-estimate-4-probability-estimates.html">Risk Estimate</a>.<br /><br />I am generally disappointed by the lack of detailed tracking of the progress of computing power... I do not know where to find projections based on large amounts of up-to-date data. The most often cited projection is Kurzweil's, based on data from up to 2000; for example, <a href="http://images.google.com/imgres?imgurl=http://brent.kearneys.ca/wp-content/uploads/2009/02/ss08_exponential_growth_large.jpg&imgrefurl=http://brent.kearneys.ca/2009/02/14/the-singularity-summit-2008/&usg=__c3EBL4Ep1vpTVtvYcTWgmKes6Pk=&h=870&w=1020&sz=141&hl=en&start=14&um=1&tbnid=DGCdHNouYbYr5M:&tbnh=128&tbnw=150&prev=/images%3Fq%3Dkurzweil%2Bexponential%26hl%3Den%26client%3Dfirefox-a%26rls%3Dcom.ubuntu:en-US:unofficial%26sa%3DN%26um%3D1">here</a>. 
I found one that may be <a href="http://images.google.com/imgres?imgurl=http://havemacwillblog.com/wordpress/wp-content/gallery/article-diagrams/compint.gif&imgrefurl=http://havemacwillblog.com/2009/02/02/what-will-happen-when-computers-exceed-our-inteligence/&usg=__rbLvACKNmakUZHS-1ExWG_60gWg=&h=380&w=530&sz=23&hl=en&start=21&um=1&tbnid=_i-kBd56wtNHgM:&tbnh=95&tbnw=132&prev=/images%3Fq%3Dkurzweil%2Bexponential%26ndsp%3D18%26hl%3Den%26client%3Dfirefox-a%26rls%3Dcom.ubuntu:en-US:unofficial%26sa%3DN%26start%3D18%26um%3D1">more recent</a>, but it does not show data points, just a curve. These curves are based on what $1000 gets you. It is also interesting to note the <a href="http://images.google.com/imgres?imgurl=http://www.vetta.org/VettaPics/SuperComputerPower.jpg&imgrefurl=http://www.vetta.org/2006/09/moore-power/&usg=__yTEEebWVASQ4un8DUuGrcR6XCV4=&h=429&w=589&sz=41&hl=en&start=56&um=1&tbnid=kunKimWbdQhJ8M:&tbnh=98&tbnw=135&prev=/images%3Fq%3Dkurzweil%2Bexponential%26ndsp%3D18%26hl%3Den%26client%3Dfirefox-a%26rls%3Dcom.ubuntu:en-US:unofficial%26sa%3DN%26start%3D54%26um%3D1">supercomputing curve</a>.<br /><br />Moore himself believes that the exponential trend will stop in <a href="http://news.cnet.com/8301-10784_3-9780752-7.html">10 to 15 years</a>, but he does not take into account 3D chips (by his own admission) or other possible ways around the halting spot. I think it is safe to allow a few more years based just on that... but since the halting spot is a point of contention, for now I'll just give the analysis based on the raw exponential curve.<br /><br />By the curve, computers reach human-level processing power between 2020 and 2030. If we leave the curve at exponential, rather than going super-exponential as Kurzweil suggests we should, the rate looks to be around 5 orders of magnitude for every 10 to 20 years. (I know that is a big range, but I don't have much data to go on.) 
So if a computer is 1 person in 2025, it is 10 people in 2040, and 100 people in 2055.<br /><br />What does it mean for a computer to be as smart as 10 or 100 people? A first approximation would be to say that it would be able to accomplish as much intellectual activity as a group of 10 or 100 people that were completely unified in their aim, and could communicate with each other continuously. But even this estimate is low, because there is a large amount of redundancy in 10 such people. It is hard to estimate how much, but roughly we could say that only 1 visual cortex is needed, and the other 9 people could be using that processing power for something else; only 1 motor cortex is needed, so the other 9 could be using that processing power for something else; and so on. This might (roughly) double the amount of thinking power, after the first person (who needs their motor and visual cortex intact).<br /><br />So how safe is this? I'd say as soon as we get roughly human-level AI we've got a significant risk of the AI deciding it would be awesome to "get loose" and use spare computer power stolen from the internet. My estimate could be greatly improved here, but there are about 7 billion people in the world, and more will be here by 2020, so assuming 1/8th have computers on the internet (that is where the estimate is shaky) we're talking a 1-billion-fold increase in computing power as soon as an AI is able to "get loose". That assumes that the 1 billion computers on the net are roughly equivalent in power to the 1 that the AI started on (in line with the idea that we're estimating based on what $1000 of computing power is). By my earlier estimate, an AI as smart as 1 person becomes as smart as 2 billion. But once we go distributed, the advantages I was talking about go away; the massive redundancy becomes necessary. So, back to 1 billion. 
(The idea that "smartness" merely doubles when we get rid of the inefficiencies of distributed existence is blatantly silly for the case of billions of people... oh well.)<br /><br />So, could the world defend itself against the intellectual effort of 1 billion virtual people? We've got 8 billion on our side (by that point)... plus we start out with a good-sized advantage in terms of control of resources. How much could 1 billion hackers acquire on short notice?<br /><br />For now, I'll give that a 50% chance of extinction assuming bad intent on the AI's part, and assuming it is able to get on the internet, and assuming it is able to create a virus of some kind (or use some other scheme) to get a fair chunk of the internet's computing power. I'll give 50% probability to those other two as well... making 12.5% probability of extreme badness given evil intent. So the question is, how probable is bad intent in this scenario?<br /><br />By the way, this estimate puts current computers at around the level of a mouse. Do the best current AIs achieve this? Seems doubtful, I'd say. Mice are pretty smart. They accomplish a fair amount of visual recognition, and furthermore, they are able to put it to good use. (The second part is what we have the least ability to model, I think... goal-oriented systems that can flexibly use highly structured sensory info.) So, by the model so far, AI progress will more probably be sudden than gradual... someone will put together an algorithm capable of taking full advantage of the hardware, and things will change.<br /><br />I'd say that might happen in anywhere between 5 and 20 years. The outcomes if it happens in 5 years are very different from those if it happens in 20. If it happens in 5 years, I'd say good results are fairly certain. 10 years, and there is more concern. 20 years and Very Bad Things have fair chances, maybe as high as my 10% "halt everything" level.<br /><br />Let's take that arbitrary statement and mathematize it... 
5 to 10 years = .1% chance of Really Bad Things, 10 to 15 = 1%, 15 to 20 = 10%.<br /><br />Giving each option a 1/3 probability, we have around 3.7%.<br /><br />But, giving my assumptions, the best way to reduce risk appears to be trying to find a good algorithm quickly (favoring open research). At some point between 2015 and 2020, the risk goes beyond 1% (which I arbitrarily label as the point of "real concern") and the strategy should turn towards making sure the first good-enough algorithm is also endowed with a safe goal system.<br /><br />It should be obvious that this estimate is an <span style="font-style: italic;">extremely</span> rough one. More later?<br /><br />---[edit]---<br /><br />One of the most obvious factors not considered here is the chance that the brain is badly-designed enough that a human level AI could be run on current PCs. The probability of this is nonzero, but if the brain were really so inefficient (= 5 orders of magnitude of inefficiency), I would expect that human AI programmers would already be outperforming it. The idea that current AIs are not as smart as mice despite having roughly as much hardware suggests that brains are fairly well-engineered. (The statement "roughly as much hardware" here needs to be made more specific. 
However, as long as it is inside 5 orders of magnitude, the argument makes some sense.)Abram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com1tag:blogger.com,1999:blog-28417647.post-9301721773723837202009-04-08T09:51:00.000-07:002009-04-08T09:51:00.817-07:00<span style="font-size:180%;">Provable Truths</span><br /><br />The picture I've been using mostly for the idea of knowable truths goes something like this: quantifier-free statements are computable (we can tell if they are true or false); statements that can be reduced to a normal form containing only existential quantifiers are computably verifiable (if they are true, we can know this by eventually finding an example) but only probabilistically falsifiable (if they are false, we'll never know it for sure, but we can suspect it because we've tried for a long time to find an example and failed); statements similarly containing only universal quantifiers are falsifiable but only probabilistically verifiable; statements whose normal form universally asserts an existential statement are probabilistically falsifiable; statements asserting the existence of something satisfying a universal are probabilistically verifiable. Nothing else appears to be knowable in any significant sense.<br /><br />Nonetheless, even Robinson arithmetic gives us more definite truths than this scheme does. Statements like "For all x, x=x" can be verified, not just falsified. Should this be looked at as probabilistic knowledge? This view becomes understandable if we think of equality as a black box, which we know nothing about. We just feed it numbers, and accept what we get out. If not the axioms of equality, at least the first-order tautologies seem knowable: perhaps "for all x, x=x" is probabilistic knowledge of the behavior of equality, but "for all x, x=x if x=x" is clear and knowable... right? 
Well, this too could be questioned by claiming that the truth functions are also to be treated as black boxes.<br /><br />Why would we take this view? Perhaps we claim that knowledge of the actual structure of equality, and similarly of the actual structure of truth functions, should be represented at the metalanguage level. Then, "for all x, x=x" would be something we could prove in the metalanguage, thanks to our additional information about equality, but not in the language itself. I think there is something intuitively appealing about this, despite the restrictive nature of the claim.<br /><br />The base logic would contain rules for reducing equality statements to true/false, and similarly rules that reduced a truth-function whose arguments were already true or false. A statement that reduces to "true" can then be concluded. In addition to these rules, there would be a rule that allowed existential quantifications to be concluded from examples; this would be a fairly complicated process, as we can pull out any set of identical pieces and replace them with a variable. Once an existential statement has been concluded, it can reduce to "true" in larger expressions. Universal quantifiers can be defined from existentials as usual, which allows us to conclude their negation by finding counterexamples (but does not allow us to <span style="font-style: italic;">affirm</span> any universals).<br /><br />How could a metalanguage be constructed along similar lines, but yielding the first-order truths we're used to? Not sure. 
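As a toy sketch of the base logic's verification and falsification rules (this is my own illustrative construction: a bounded search over the naturals stands in for "trying for a long time to find an example"):

```python
# Toy sketch of the base logic described above: an existential statement
# can be affirmed by finding a witness, while a universal statement can
# only be refuted by a counterexample, never affirmed. The search bound
# stands in for "trying for a long time".
def verify_exists(pred, bound=1000):
    """Affirm an existential claim by searching for a witness."""
    for x in range(bound):
        if pred(x):
            return True    # definite: an example was found
    return None            # unknown: failure to find one proves nothing

def refute_forall(pred, bound=1000):
    """Refute a universal claim by searching for a counterexample."""
    for x in range(bound):
        if not pred(x):
            return False   # definite: a counterexample was found
    return None            # unknown: the universal is never affirmed
```

So `verify_exists(lambda x: x*x == 49)` returns `True` by exhibiting the witness 7, while `refute_forall(lambda x: x == x)` can only ever return `None`: universals are falsifiable but never affirmed, matching the rules above.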
What's needed most is the ability to see that even though we don't have all the information needed to calculate a truth-function, any way of filling in the missing information would yield the same answer.Abram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com0tag:blogger.com,1999:blog-28417647.post-65609826165930492132009-04-06T09:47:00.000-07:002009-04-06T13:38:49.883-07:00<span style="font-size:180%;">Risk Estimate</span><br /><br />Four probability estimates seem essential to me.<br /><br /><ol><li>A projection of the growth of computing power. This is of course extensively discussed, so it is at least somewhat reasonable to use standard predictions.<br /></li><li>Estimation of probable AI intelligence levels given different hardware levels. There is a lot of hidden complication here. First, how do we want to measure intelligence? How should we define intelligence level? The most relevant way of doing this is to estimate real-world problem solving capabilities (in each important area). Second, what we want isn't just a fixed maximum intelligence level, but rather a curve over time; this captures information about intelligence explosions due to fast learning or recursive self-improvement. Third, ideally we don't just want one probable curve for each hardware level but rather a probability distribution over such curves.</li><li>Estimation of the probability of unfriendly behavior, given various classes of probable goal functions that AI programmers might come up with. Of course we'll also want to assign a probability to each of these goal-classes. The probability of unfriendly behavior depends mainly on the intelligence level reached (which depends in turn on the hardware level). 
An AI that is only somewhat smarter than humans will more probably have incentive to be nice to humans, for the same reasons that humans have incentive to be nice to each other.</li><li>Estimation of human ability to respond to various intelligence levels if an AI of that level turns out to be unfriendly.</li></ol>Chaining all these estimates together would result in a fair estimate of AI risk.Abram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com0tag:blogger.com,1999:blog-28417647.post-9071550989726367422009-04-02T09:30:00.000-07:002009-04-06T06:51:55.987-07:00<span style="font-size:180%;">2 Futures</span><br /><br />The question of AI ethics is not merely a matter of estimating the probability of Really Bad Things; it is a matter of weighing the probability of Really Bad Things against the probability of Really Good Things.<br /><br />This makes some of the stuff in the previous post less relevant (though not completely irrelevant). The major question becomes: in various scenarios, what is the ratio of good to bad?<br /><br />When thinking about these issues, I find myself going back and forth between two concepts of the future:<br /><br /><ol><li>AIs increase in intelligence slowly. For a long time, AIs remain at human level or anyway human-understandable level. There is no miracle of recursive self-improvement; you get out what you put in.</li><li>The first AI smart enough to begin recursive self-improvement quickly goes beyond the human-understandable level of intelligence, and has its way with the future.</li></ol>I think of the first as "the normal-world outcome" and the second as "the science-fiction outcome". A good argument could be made for reversing the labels. 
The first future is much more like what is normally written about in scifi; AIs typically need to be at the human-understandable level in order for humans to write about them, and furthermore AI is usually a side-part of the story (so admitting the possibility of explosive recursive improvement would get in the way of the plot). Thus, scifi has reason to unrealistically limit AI success.<br /><br />I have argued already that the first outcome would be safer than the second; the opportunity for Really Good Things is less, but this (I would think) is outweighed by the far reduced probability of Really Bad Things. (In scenario 1, sociopathic AIs would occur, but they would be of roughly human level and so could be contained; by the time AIs were too powerful, the art of creating ethical AIs would be fairly well perfected. Also, even if an unethical AI was created, there would be a safety net of nearly-as-smart AIs to contain it.)<br /><br />Of course, there isn't freedom to choose between the two outcomes, for the most part. Either recursive self-improvement (aka RSI) is a powerful possibility, or it isn't. To some extent, the outcome is dependent on whether the algorithmic research outpaces hardware developments, or vice versa (as discussed last time). The "race between software and hardware" mindset would have it that the best strategy for avoiding Really Bad Things is to make algorithmic progress as quickly as possible-- which favors open research, et cetera. But even if algorithms outpace hardware, there could still be an RSI tipping-point: suddenly the hardware is good enough to support RSI, and outcome 2 occurs. So this isn't well-justified. What is needed is a solid analysis of RSI.<br /><br /><a href="http://www.mattmahoney.net">Matt Mahoney</a> offers an argument that <a href="http://www.mattmahoney.net/rsi.pdf">RSI is inherently slow</a>. 
However, this model is limited by the (purposeful) restriction to strict <span style="font-style: italic;">self</span>-improvement; in particular, the exclusion of learning. Since explosive RSI-like learning capabilities are essentially as dangerous, this isn't a <span style="font-style: italic;">particularly</span> helpful model. Shane Legg makes <a href="http://www.vetta.org/documents/IDSIA-12-06-1.pdf">similar claims</a> for learning. As it happens, I <a href="http://www.mail-archive.com/agi@v2.listbox.com/msg13113.html">disagree</a> with those claims:<br /><blockquote><br /><pre>3 days ago Matt Mahoney referenced a paper by Shane Legg, supposedly<br />formally proving this point:<br /><br /><a rel="nofollow" href="http://www.vetta.org/documents/IDSIA-12-06-1.pdf">http://www.vetta.org/documents/IDSIA-12-06-1.pdf</a><br /><br />I read it, and must say that I disagree with the interpretations<br />provided for the theorems. Specifically, one conclusion is that<br />because programs of high Kolmogorov complexity are required if we want<br />to guarantee the ability to learn sequences of comparably high<br />Kolmogorov complexity, AI needs to be an experimental science. So,<br />Shane Legg is assuming that highly complex programs are difficult to<br />invent. But there is an easy counterexample to this, which also<br />addresses your above point:<br /><br />Given is T, the amount of computation time the algorithm is given<br />between sensory-inputs. Sensory inputs can ideally be thought of as<br />coming in at the rate of 1 bit per T cpu cycles (fitting with the<br />framework in the paper, which has data come in 1 bit at a time),<br />although in practice it would probably come in batches. Each time<br />period T:<br />--add the new input to a memory of all data that's come in so far<br />--Treat the memory as the output of a computer program in some<br />specific language. 
Run the program backwards, inferring everything<br />that can be inferred about its structure. A zero or one can only be<br />printed by particular basic print statements. It is impossible to know<br />for certain where conditional statements are, where loops are, and so<br />on, but at least the space of possibilities is well defined (since we<br />know which programming language we've chosen). Every time a choice<br />like this occurs, we split the simulation, so that we will quickly be<br />running a very large number of programs backwards.<br />--Whenever we get a complete program from this process, we need to run<br />it forwards (again, simulating it in parallel with everything else<br />that is going on). We record what it predicts as the NEXT data, along<br />with the program's length (because we will be treating shorter<br />programs as better models, and trusting what they tell us more<br />strongly than we trust longer programs).<br />--Because there are so many things going on at once, this will run<br />VERY slowly; however, we will simply terminate the process at time T<br />and take the best prediction we have at that point. (If we hadn't<br />gotten any yet, let's just say we predict 0.)<br /><br />A more sophisticated version of that alg was presented at the AGI<br />conference in this paper:<br /><br /><a rel="nofollow" href="http://www.agiri.org/docs/ComputationalApproximation.pdf">http://www.agiri.org/docs/ComputationalApproximation.pdf</a><br /><br />The algorithm will be able to learn any program, if given enough time!<br /><br />NOW, why did Shane Legg's paper say that such a thing was impossible?<br />Well, in the formalism of the paper, the above "algorithm" cheated: it<br />isn't an algorithm at all! Fun, huh?<br /><br />The reason is because I parameterized it in terms of that number T.<br />So, technically, it is a class of algorithms; we get a specific<br />algorithm by choosing a T-value. 
If we choose a very large T-value,<br />the algorithm could be very "complex", in terms of Kolmogorov<br />complexity. However, it will not be complex to humans, since it will<br />just be another instance of the same general algorithm! In fact, it<br />would just be the same algorithm given more time to do its job.<br /><br />So, on those grounds, I disagree with Shane Legg: it is still possible<br />to find algorithms analytically, despite the found algorithms being<br />"of high complexity". Or, to rephrase, there are simple algorithms of<br />high Kolmogorov complexity, and those algorithms do the job that can't<br />be done by algorithms of low Kolmogorov complexity.<br /></pre></blockquote>(As an aside, the next paragraph in my email uses a mathematically flawed argument: "<span style="font-style: italic;">Once we have the ordering, I can define the game as: create the lowest video sequence not definable in set theory.</span>" What I was trying to do was diagonalize set theory, to define a set-theoretically undefinable video sequence. The right way to do this is not to take the "lowest" sequence not definable by set theory. Instead: put an ordering on set-theoretical definitions of sequences. For the Nth frame of our non-definable video sequence, look at the Nth set-theoretic video sequence; if the screen is all white in that sequence, take our sequence to be all black on that frame; otherwise, take it to be all white. :D)<br /><br />Still, all I'm claiming is that we can construct an algorithm that will perform better if it is given more time. 
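To make the quoted scheme concrete at a toy scale, here is a sketch under a heavy simplification: instead of enumerating arbitrary programs and running them backwards, it enumerates repeating bit-patterns (a tiny, decidable stand-in for "some specific language"), shortest-first, with the pattern-length bound playing the role of the time budget T. Everything here is my own illustrative construction, not code from the email or the papers.

```python
# Drastically simplified stand-in for the email's scheme: enumerate
# repeating bit-patterns shortest-first, and predict the next bit using
# the shortest pattern that reproduces the history so far. max_len plays
# the role of the time budget T: a bigger budget lets the predictor
# learn more complex sequences.
def predict_next(history, max_len=8):
    for length in range(1, max_len + 1):          # shorter models first
        for p in range(2 ** length):
            pattern = [(p >> i) & 1 for i in range(length)]
            if all(bit == pattern[i % length] for i, bit in enumerate(history)):
                return pattern[len(history) % length]
    return 0  # as in the email: predict 0 if no model is found in budget
```

On `[0, 1, 0, 1, 0, 1]` the shortest fitting model is the length-2 pattern `[0, 1]`, so the prediction is `0`; a sequence whose shortest generating pattern exceeds `max_len` falls back to the default prediction, which mirrors the T-dependence discussed above.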
That provides a sort of counterexample, but not one that is explosive (until the point where an AI gets the ability to improve its own hardware).<br /><br />[Edit: In the comments, Shane Legg points out that this doesn't really provide a counterexample. We can construct an algorithm that will do better given more time, but that does not mean that for every sequence we might want to learn, there is some amount of processing time that will allow the algorithm to converge correctly. In fact, that's false; there will be sequences on which the algorithm can't converge, for any fixed amount of time we allow it. Shane also corrects me on the conclusions he draws: he does not, in fact, conclude that "AI must be an experimental science".]<br /><br />Eliezer makes a <a href="http://www.overcomingbias.com/2008/05/faster-than-ein.html">vivid argument</a> for the possibility of explosively fast learning. Here is the key mathematical bit:<br /><br /><p></p><blockquote><p>I occasionally run into people who say something like, "There's a theoretical limit on how much you can deduce about the outside world, given a finite amount of sensory data."</p> <p>Yes. There is. The theoretical limit is that every time you see 1 additional bit, it cannot be expected to eliminate more than half of the remaining hypotheses (half the remaining probability mass, rather). And that a redundant message, cannot convey more information than the compressed version of itself. Nor can a bit convey any information about a quantity, with which it has correlation <em>exactly zero,</em> across the probable worlds you imagine.</p> <p>But nothing I've depicted this human civilization doing, even <em>begins</em> to approach the theoretical limits set by the formalism of Solomonoff induction. 
It doesn't approach the picture you could get if you could search through <em>every single computable hypothesis</em>, weighted by their simplicity, and do Bayesian updates on <em>all</em> of them.</p></blockquote><p></p><br />He does, however, admit reservations about the possibility of this power being manifested in the physical universe:<br /><br /><p></p><blockquote><p>No one - not even a Bayesian superintelligence - will ever come remotely close to making efficient use of their sensory information...</p> <p>...is what I would like to say, but I don't trust my ability to set limits on the abilities of Bayesian superintelligences.</p> <p>(Though I'd bet money on it, if there were some way to judge the bet. Just not at very extreme odds.)</p></blockquote><p></p><br /><br />But, of course, an entity (say) halfway between humans and a Bayesian Superintelligence is still rather dangerous.Abram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com7tag:blogger.com,1999:blog-28417647.post-11314549205003451462009-03-27T15:48:00.000-07:002009-03-29T15:22:44.731-07:00<span style="font-size:180%;">AI Moral Issues</span><br /><br />I recently had a discussion with <a href="http://yudkowsky.net/">Eliezer Yudkowsky</a>. I contacted him mainly because I thought from what I knew of his thinking that he would be interested in foundational logical issues, yet I saw no technical details of this sort in his many online writings (except for a <a href="http://yudkowsky.net/rational/lobs-theorem">cartoon guide to Löb's Theorem</a>). I quickly learned the reason for this. Eliezer believes that there is <a href="http://yudkowsky.net/singularity/ai-risk">great risk</a> associated with AI, as I already knew. What I had not guessed (perhaps I should have) was that he therefore considers technical details too dangerous to release.<br /><br />My natural disposition is to favor open research. So, I am very reluctant to accept this position. 
Of course, that is not in itself an argument! If I want to reject the conclusion, I need an actual reason-- but furthermore I should search just as hard for arguments in the other direction, lest I bias my judgement.<br /><br />(That second part is hard... although, it is made easier by the large body of literature Eliezer himself has produced.)<br /><br />The key question is the actual probability of Very Bad Things happening. If the chance of disaster thanks to AI research is above, say, .1%, we should be somewhat concerned. If the chances are above 1%, we should be really concerned. If the chances are above 10%, we should put a hold on technological development altogether until we can find a better solution. (<a href="http://aipanic.com/">AI panic</a> puts the current probability at 26.3%!)<br /><br />There are two issues here: the probability that AIs will become sufficiently powerful to pose a serious threat, and the probability that they would then proceed to do bad things. These topics are large enough <span style="font-style: italic;">individually</span> to merit a long discussion, but I'll try to jot down my thoughts.<br /><br /><span style="font-size:130%;">AI Power</span><br /><br />From what I've read (not from my recent discussion with him), Eliezer's main argument for the first point is that AIs will, because they are software, have a far greater ability to self-improve than humans have. Thus, if they start out with approximately human-level intelligence, they could end up far smarter than us just by thinking for a while. "Thinking for a while" could mean a few days, or a few years-- the risk is still there, so long as the AIs would eventually reach superhuman intelligence.<br /><br />Personally, I think the human mind has a fairly high ability to self-improve. Sure, we can't modify our neural substrate directly, but it can equally be said that an AI can't modify its silicon. 
Our software could in principle be equally malleable; and that possibility, combined with the assumption that self-improvement is highly effective, would <span style="font-style: italic;">suggest</span> that evolution would have hit on such a solution.<br /><br />However, Eliezer likes to say that evolution is not very smart. This is certainly true. So, I don't really know how to assign a probability to evolution having hit upon a comparatively flexible form of recursive self-improvement. Thus, I need to take into account more evidence.<br /><br /><ul><li>I can't "examine my own source code". An AI could. (If humans could do this, AI would be an easy problem!)<br /></li><li>I can learn new ways of thinking. I can even consciously examine them, and decide to accept/reject them.</li><li>I can't wish habits away.</li><li>An AI could run detailed simulations of parts of itself to test improved algorithms.</li><li>I can run mental simulations of myself to some extent (and do so).<br /></li></ul>Tentative conclusion: An AI could be a better self-improver than I, but it would be a difference in degree, not a difference in kind.<br /><br />This conclusion is neither here nor there in terms of the question at hand.<br /><br />Other arguments to the effect that AIs could gain enough power are ready at hand, however. The main one is that if a particular amount of computing power can create human-level intelligence, it seems pretty obvious that larger amounts of computing power will create larger amounts of intelligence. It also seems like people would willingly cross this threshold, even competing to create the largest AI brain. Even if self-improvement turns out to be a total flop, this would create superhuman intelligences. 
If one such intelligence decided to grab more hardware (for example using a virus to grab computing power from the internet) the amount of computing power available to it would probably become rather large rather fast.<br /><br />(All of this assumes human-level AI is possible in the first place: a notable assumption, but one I will not examine for the moment.)<br /><br />The argument from superhuman intelligence to superhuman power is fairly straightforward, though perhaps not 100% certain. The AI could hack into things, accumulate large amounts of money through careful investment, buy a private army of robots... more probably it would come up with a much cleverer plan. How smart, exactly, does it need to get in order for this to be a danger? Estimates range from the level of a smart human (because a smart human can already be dangerous) to the intelligence of all humans combined (because that is what the AI would be up against). For the really bad scenarios to occur, it would seem, the AI needs to be capable of major scientific innovation; innovation on the order of at least a team of scientists. (I must admit that, here and now, a single scientist could potentially release a deadly disease from a lab-- this involves no innovation, since the diseases are already there. But, this is because a select few scientists have special access to these diseases. An AI <span style="font-style: italic;">might</span> get such access, but that doesn't seem especially probable until a time when AIs are all around and being given all sorts of other responsibilities... at which point, if the AIs are actually unfriendly, the disease is only one of many worries.)<br /><br />One question remains: is there enough computing power around <span style="font-style: italic;">currently</span> to cause concern? This is something that <span style="font-style: italic;">did</span> come up in the conversation. 
If current machines are not risky, then it could be possible today to hit upon the right AI design, yet not achieve human-level intelligence using it. Personally, I think this would be ideal. Such a scenario, with AIs gradually increasing in intelligence as the hardware increased in capability, would give humans time to experiment with AI technology, and also consider its consequences. (Indeed, some argue that this is already the situation: that the fundamental algorithms are already known, and the hardware just needs to catch up. I don't agree, although I can't deny that a large amount of progress has already been made.)<br /><br />Eliezer argued that human-level intelligence on modern-day machines was plausible, because evolution is not a good engineer, so human-level intelligence may require far less hardware than the human brain provides. Estimates based on the brain's computing power vary quite widely, because it is not at all clear what in the brain constitutes useful computation and what does not. Low estimates, so far as I am aware, put the brain's computing power near to today's largest supercomputer. High estimates can basically go as far as one likes, claiming that chemistry or even quantum physics needs to be simulated in order to capture what is happening to create intelligence.<br /><br />Of course, the internet is vastly more powerful than a single machine. But the risk of an AI escaping to the internet does not seem very high until that AI is at least near human level <span style="font-style: italic;">pre</span>-escape. So, what is the probability that current machines could be human-level with the current algorithm?<br /><br />My faith in evolution's engineering capabilities is somewhat higher than Eliezer's. Specifically, Eliezer is (from what I've read) quite fond of the study of <a href="http://en.wikipedia.org/wiki/List_of_cognitive_biases">cognitive bias</a> that has become a popular subfield of psychology. 
While I enjoy Eliezer's writings on rationality, which explicate many of these biases, I am reluctant to call them design flaws. <span style="font-style: italic;">Upon reflection</span>, there are better ways of doing things, and explicating these better ways is an important project. But my best guess is that each cognitive bias we have is there for a reason, essentially because it makes for a good pre-reflection guess. So, rather than design flaws, I see the cognitive biases as clever engineering tricks. (I do not know <span style="font-style: italic;">exactly</span> how far from Eliezer's way of thinking this falls.) This is merely a default assumption; if I studied the cognitive bias literature longer, attempted to come up with explanations for each bias, and failed, I might change my mind. But, for now, I am not comfortable with assuming large amounts of mental inefficiency... although I admit I <span style="font-style: italic;">do</span> have to postulate <span style="font-style: italic;">some</span>. On the other hand, I also have to postulate a fairly high amount of inefficiency to human-made AI, because it is a hard problem.<br /><br />So, again, this leads neither here nor there.<br /><br />But, the <span style="font-style: italic;">real</span> question is not whether <span style="font-style: italic;">today</span>'s computers are human level; the more critical question is quite a bit more complicated. Essentially:<br /><br /><span style="font-style: italic;">Will there be a critical advance in software that occurs at a time when the existing hardware is enough to create an AI hazard, or will software advances come before hardware advances, such that humanity has sufficient time to get used to the implications and plan ahead?</span><br /><br />Again, a difficult question to answer. Yet it is a really important question!<br /><br />A historical view will of course show a fairly good match-up between amount of processing power and results. 
<a href="http://www.transhumanist.com/volume1/moravec.htm">This old paper</a> begins with such an account geared towards computer vision. Yet, there are real advances in algorithms happening, and they will continue to happen. A small but striking example of sudden improvement in algorithms is provided by <a href="http://www.cs.cmu.edu/%7Eavrim/graphplan.html">Graphplan</a>, an algorithm which changed the AI planning landscape in 1997. Of course, today, the algorithms are even better. So, hardware clearly isn't everything.<br /><br />A proper estimate would involve a serious analysis of the pace of advance in computing-- how probable is it that Moore's law will keep its pace, speed up, slow down, et cetera-- and likewise an analysis of progress in AI algorithmics. But, equally important is the other side of the question; "<span style="font-style: italic;">such that humanity has sufficient time to get used to the implications and plan ahead</span>". How much hope is there of this, assuming that the software is available before the hardware?<br /><br />I've said that I think this is the ideal outcome-- that people have a while to first get used to near-human-level AI, then human-level, then superhuman level. Part of why I think this is that, in this scenario, there would probably be many superhuman AIs rather than just one. I think this would improve the situation greatly, but the reasons are more a topic for the section on whether an AI would in fact do bad things. In terms of AI power, the situation is not as persuasive. It seems perfectly possible that a society that became used to the presence of AIs would give them various sorts of power without thinking too hard, or perhaps even thinking that AIs in power were safer than humans in power.<br /><br />The thing is, they <span style="font-style: italic;">might</span> be right-- depending on how well-designed the AIs were. 
Which brings us to:<br /><br /><span style="font-size:130%;">AI Ethics</span><br /><br /><span style="font-style: italic;">If an AI gained sufficient power, would it destroy humanity?</span><br /><br />Of course, that depends on many variables. The real question is:<br /><br />Given various scenarios for an AI of sufficient power being created, what would that AI do?<br /><br />The major scenarios under consideration:<br /><br /><ul><li>Many powerful AIs (and many not-so-powerful AIs) are developed as part of an ongoing, incremental process open to essentially all of humankind<br /> </li><li>A single powerful AI is developed suddenly, by a single organization, as a result of a similarly open process<br /> </li><li>A powerful AI is developed by a single organization, but as a result of a closed process designed to minimize the risk of AI methods falling into the wrong hands</li></ul> Eliezer argues that the second scenario is not as safe as the third. Suppose the added effort to make a friendly powerful AI as opposed to just-any-powerful-AI is 1 year. Then, in an open process, an organization not very concerned with friendliness will be able to create an AI 1 year before an organization concerned with friendliness.<br /><br />This, of course, depends on the idea that it is easier to create an unfriendly AI than a friendly one. Eliezer has written at length on this. The key concept is that a mind can have any particular goal; that there is no system of ethics that will be universally accepted by any sufficiently intelligent mind, because we can quite literally program a mind that has the exact opposite set of values for any given set. (Just reverse the sign on the utility function.)<br /><br />The argument I gave to Eliezer for universal ethics is essentially the same as a view once argued by <a href="http://transhumangoodness.blogspot.com/2007/10/road-to-universal-ethics-universal.html">Roko</a>. (Roko has since been convinced that Eliezer is correct.) 
He calls it the theory of "instrumental values". The idea is that most rational agents will value truth, science, technology, creativity, and several other key items. It is <span style="font-style: italic;">possible</span> to contrive utility functions that will not value these things, but most will. Therefore, a future created by a bad AI will not be devoid of value <a href="http://www.overcomingbias.com/2009/01/fragile-value.html">as Eliezer argues</a>; rather, unless it has a really weird utility function, it will look a lot like the future that a good AI would create.<br /><br />This is a topic I want to go into a lot of detail on, and if the first part of the post hadn't ended up being so long, I probably would. Instead, I'll blog more about it later...<br /><br />For now, it is important to observe (as Eliezer did in our conversation) that regardless of far-off future similarities, a good AI and a bad AI have an immediate, critical difference: a bad AI will very probably consider humans a waste of resources, and do away with us.<br /><br />Similarly, a badly designed but well-intentioned AI will very probably result in a future devoid of purpose for humans. Maximizing happiness will justify forced drugging. Maximizing a more complicated value based on what people actually consider good may easily result in a locking-in of currently enjoyed hobbies, art forms, et cetera. Maximizing instrumental values might easily lead to the destruction of humans.<br /><br />The terrible, horrible dilemma (it seems to me) is that once a single AI gains power, be it good or bad, the single utility function that such an AI is programmed with becomes completely locked in. Any flaws in the utility function, no matter how small they may seem, will be locked in forever as well.<br /><br />There are various ways of getting around this, to some extent. 
One approach I've been tossing around in my head for a while is that an AI should be uncertain about its own goals, similar to human uncertainty about what is ethical. This is entirely admissible within a Bayesian formalism. What, then, would the AI take as evidence concerning ethics? I visualize it something like this: the AI would have a fair amount of (hand-programmed) knowledge about what sorts of things are <span style="font-style: italic;">probably</span> ethical, and it would search for simple rules that would meet these criteria. A better theory would be one that fit better to the patchwork of preconceptions about ethics. Preconceptions would include things like "what a human considers ethical, is more probably ethical..." "what a human, given large amounts of time to reflect, would consider ethical, is more probably ethical..." as well as simpler statements like "killing an unwilling victim is unethical" or "creating pain is unethical", each held with high probability. A few different Three-Laws style systems could "fight it out", so to speak.<br /><br /><a href="http://www.overcomingbias.com/2009/01/free-to-optimize.html">Eliezer suggests</a> a different sort of solution: an AI should behave in a highly lawful manner, setting definite rules and consequences, rather than merely doing whatever it takes to do "good" as defined by humanity. He's suggesting this as a solution to a somewhat different problem, but it applies about as well here. An AI that booted up, calculated a good set of laws for utopia, set up physical mechanisms to enforce those laws, and then shut off, would <span style="font-style: italic;">not</span> lock the future into a single utility function. It would of course give it a huge push in a particular direction, but that is quite different. 
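To make the goal-uncertainty idea above slightly more concrete, here is a minimal sketch; the situations, theories, and confidence numbers are all invented for illustration. Candidate ethical theories are scored, Bayes-style, by how well their verdicts fit the hand-programmed patchwork of preconceptions.

```python
# Minimal sketch (invented rules and numbers, purely illustrative) of an
# AI that is Bayesian-uncertain about its own goals: each candidate
# ethical theory gives verdicts on situations, and is scored by how well
# those verdicts fit hand-programmed preconceptions of the form
# "X is ethical/unethical with probability p".
preconceptions = [                 # (situation, judged_ethical, confidence)
    ("kill_unwilling_victim", False, 0.99),
    ("create_pain",           False, 0.90),
    ("keep_promise",          True,  0.80),
]

candidate_theories = {
    "never_harm":    {"kill_unwilling_victim": False, "create_pain": False, "keep_promise": True},
    "pure_hedonism": {"kill_unwilling_victim": False, "create_pain": False, "keep_promise": False},
}

def ethical_posterior(theories, evidence):
    # Likelihood of a theory = product of agreement probabilities with
    # each preconception; normalize to get a posterior over theories.
    scores = {}
    for name, verdicts in theories.items():
        likelihood = 1.0
        for situation, judged, confidence in evidence:
            agrees = verdicts[situation] == judged
            likelihood *= confidence if agrees else (1 - confidence)
        scores[name] = likelihood
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}
```

With these made-up numbers, `never_harm` comes out four times as probable as `pure_hedonism`; adding more preconceptions is how the candidate systems "fight it out". To return to the lawful-AI proposal: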
It is purposefully leaving the future open, because an open future is a plus according to (at least) the majority of humans.<br /><br />The third option that I can think of is one I've already mentioned: have several powerful AIs rather than one. This still carries a large risk. 20 AIs that decide humans are useless are just as bad as 1 AI that decides humans are useless. However, 20 AIs with well-intentioned-but-wrong utility functions are probably much better than 1, so long as they all have <span style="font-style: italic;">different</span> well-intentioned utility functions.<br /><br />The AIs would probably have incentive to enforce a balance of power. If one AI becomes obviously more powerful than the others, the others have incentive to gang up on it, because that one pursuing its utility function to the utmost is probably far worse for the others than whatever the group consensus is. That consensus should be something favoring humans, since the individual goals are all random variations of that theme... if we look at all the goals, and ask what they have in common, favoring humanity should be the answer.<br /><br />Of course, that result isn't entirely certain. First, the average of many mistaken goals is not necessarily a good goal. Second, the average is not necessarily the sort of compromise that would result. Third, once a compromise has been agreed upon, the AIs might (rather than maintaining their standoff) all rewrite their utility functions to reflect the consensus, and effectively merge. (This would be to avoid any future defectors; the utility of stopping other possible defectors might be higher than the utility of keeping your ability to defect by keeping your utility function, thus making it rational to agree to a group rewrite.) In this case, the lock-in that I'm afraid of would happen anyway (although we'd be locked in to a probably-less-terrible utility function). Fourth, the situation might not result in a standoff in the first place.
Even with several AIs to begin with, one could gain an upper hand.<br /><br />Friendly AI (as Eliezer calls it) is a hard question. But, it is not the question at hand. The question at hand is:<br /><br /><span style="font-style: italic;">Which is better, an open research process or a closed one?</span><br /><br />I've touched on a few of the factors, but I haven't come close to a definite answer. A proper answer requires an examination of the friendliness issue; an analysis of the curve of technology's growth (especially as it relates to computing power); an examination of what sort of theoretical advances could create a powerful AI, and in particular how suddenly they could occur; an idea of AI's future place in society (sub-human, human-level, and superhuman AI), which requires a socio-economic theory of what we will do with AI; and a theory of AI psychology, mapping the space of possible minds (focusing on which minds are friendly to humans).<br /><br />I'll try to address each of these issues more closely in the next few posts.Abram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com5tag:blogger.com,1999:blog-28417647.post-62298772511634398442009-03-22T10:48:00.000-07:002009-03-22T11:55:43.403-07:00After having thought about it, I am becoming more convinced that the correct approach to the issues mentioned in these <a href="http://dragonlogic-ai.blogspot.com/2009/03/more-on-last-times-subject-system-i.html">two</a> <a href="http://dragonlogic-ai.blogspot.com/2009/03/ok.html">posts</a> is to accept the line of reasoning that deduces an enumerator's truth from the truth of all its consequences, thus (as mentioned) inviting paradox.<br /><br />Consider the original stated purpose of my investigation: to examine which statements are probabilistically justifiable.
To probabilistically accept one of the enumerators simply because (so far) all of its implications appear to be true would be a violation of the semantics, unless the truth value of an enumerator<span style="font-style: italic;"> really was</span> just the conjunction of the enumerated statements. So to satisfy my original purpose, such a thing is needed.<br /><br />As I said, paradox can be avoided by allowing enumerators to be neither true nor false. In particular, it is natural to suppose a construction similar to Kripke's least-fixed-point theory of truth: an enumerator is "ungrounded" if it implies some chain of enumerators which never bottoms out to actual statements; ungrounded enumerators are neither true nor false.<br /><br />The problem is that we will now want to refer to the ungrounded-ness of those sentences, since it appears to be a rather important property. For this we need to augment the language. This can be done in multiple ways, but it will ultimately lead to a new reference failure. And filling in <span style="font-style: italic;">that</span> hole will lead to yet another hole to fill. In general I would deal with this by using my <a href="http://dragonlogic-ai.blogspot.com/2008/12/general-theory-well-i-have-been.html">theory that almost works</a>. This entails building an infinite hierarchy of truth values which starts:<br /><br /><span style="font-style: italic;">True, False, Meaningless1, Meaningless2, Meaningless3, ... MeaninglessInfinity, MeaninglessInfinity+1...</span><br /><br />I am generally interested in investigating whether this hierarchy is equal in power to the usual Tarski hierarchy, namely,<br /><br /><span style="font-style: italic;">True1, True2, True3, ... TrueInfinity, TrueInfinity+1, ...</span><br /><br />The difference basically springs out of the use of a loopy truth predicate (a truth predicate that can apply truly to sentences which contain the <span style="font-style: italic;">same</span> truth predicate).
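<br /><br />The grounded/ungrounded idea can be made concrete with a small sketch, under invented conventions: each enumerator maps to the list of things it implies, which are either base statements (grounded by definition) or further enumerators. A least-fixed-point iteration in the spirit of Kripke's construction then marks exactly the grounded enumerators:

```python
# Toy model of grounding; all names are invented for illustration.
# An enumerator is grounded once everything it implies is grounded;
# whatever is never marked is ungrounded (neither true nor false).
IMPLIES = {
    'E1': ['p', 'q'],   # implies only base statements: bottoms out
    'E2': ['p', 'E1'],  # bottoms out via E1
    'E3': ['E4'],       # E3 and E4 imply each other: never bottoms out
    'E4': ['E3'],
}
BASE = {'p', 'q'}  # ordinary statements, grounded by definition

def grounded(implies, base):
    done = set()
    changed = True
    while changed:  # iterate up to the least fixed point
        changed = False
        for e, items in implies.items():
            if e not in done and all(i in base or i in done for i in items):
                done.add(e)
                changed = True
    return done

# grounded(IMPLIES, BASE) == {'E1', 'E2'}
```

No amount of iteration grounds E3 or E4, which is the cue for assigning them a third status rather than true or false.<br /><br />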
Asking for a single truth predicate appears to force the existence of infinitely many meaninglessness predicates. Is a loopy truth predicate automatically more powerful? Not obviously so: the loopy truth predicate will have a maximal amount of mathematical structure that it implies (the least fixed point), which does not even include uncountable entities. The Tarski hierarchy will continue into the uncountables, and further. (I should note that that statement is not universally accepted... but if someone suggests that the Tarski hierarchy should stop at some well-defined level, can't I just say, OK, let's make a truth predicate for that level? Why not keep going? And we don't actually need an uncountable number of truth predicates: we need an ordinal notation that can refer to uncountable ordinals, and we just make the truth predicate take an ordinal as an argument.)<br /><br />So the question is: is there an isomorphism between the mathematical structures captured by the Tarski hierarchy of truth, and my hierarchy of nonsense?Abram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com1tag:blogger.com,1999:blog-28417647.post-17214534080425204712009-03-19T10:35:00.000-07:002009-03-22T10:48:10.024-07:00<span style="font-weight: bold;font-size:130%;" >Maximally Structured Theory</span><br /><br />The proof-theoretic concept of a theory's "strength" has little to do with the sorts of considerations in the previous two posts. Proof theory normally worries about how much a system <span style="font-style: italic;">proves</span>, while I've been worried about how much a system can <span style="font-style: italic;">say</span>.<br /><br />The sort of proof-theoretic strength I've read about (I admit there are at least two, and I don't know much about the other one, known as <a href="http://en.wikipedia.org/wiki/Proof-theoretic_ordinal">ordinal strength</a>) is defined in terms of interpretations.
One system can be interpreted in another if, roughly, we can "translate" the provable statements in one into provable statements in the other, and the translation preserves the basic logical operators (so and -> and, or -> or...). There is no requirement that the translation of an unprovable statement be likewise unprovable, which means that the logic we're translating into may add more provable statements. This is, of course, the point: of two systems A and B, B is stronger than A if B interprets A but A does not interpret B. This indicates that B proves more statements than A.<br /><br />Now suppose that system C is also stronger than A. It may be the case that system C is not comparable to system B: neither can interpret the other, so there is no way to say that one is more powerful than the other. We can think of this as a disagreement between the two systems; both prove more than A proves, but B might prove some statement where C instead proves the negation of that statement.<br /><br />Is one system right and the other system wrong? Well, if both systems are supposed to be talking about the same mathematical entity, then yes. (Or they could both be wrong!) But if not, we can instead think of the different systems as describing separate mathematical entities. It is then natural (in my opinion) to assert that the <span style="font-style: italic;">proper</span> foundation of mathematics should interpret <span style="font-style: italic;">both</span>. This would mean that it contained both structures within it.<br /><br />I am somewhat surprised that this is not the current approach. Instead, people seem to be concerned with whether the axiom of choice is actually true or not. I suppose this is my first deviation from the classical 'platonist' view :).<br /><br />So, suppose we restrict ourselves to finite first-order theories.
How much structure do we get?<br /><br />First, just because we restrict ourselves to finite theories doesn't mean we can't interpret an infinite theory. Peano arithmetic, which I've been talking about for the last two posts, is a perfect example; it has an infinite number of axioms, but they are computably enumerable, so we can create an axiom system that enumerates them by allowing the use of extra function symbols.<br /><br />Second, we can interpret far more than first-order theories; higher-order theories can be interpreted as first-order theories with restricted quantifiers.<br /><br />What we <span style="font-style: italic;">can't</span> interpret is a <span style="font-style: italic;">fundamentally</span> infinite theory, such as... well, the truths of arithmetic. No finite theory will interpret the infinite theory containing all those truths. (Notice, now I'm being Platonist, and claiming that there actually is a fact of the matter.)<br /><br />What will the maximal theory look like, the theory that can interpret any others? Will it be finite?<br /><br />Let's first construct an infinite theory that interprets all finite theories. We can't interpret inconsistent theories without being inconsistent, so let's throw those out... then, we enumerate all the consistent theories (in order of size, let's say, using some other rule to break ties), but using a <span style="font-style: italic;">new</span> set of function symbols for each theory, so that they don't interfere with each other. (Each finite theory can only use a finite set of function symbols, so there is no trouble here.) Using different function symbols allows us to simply take the union, forming an infinite theory.<br /><br />This infinite theory cannot be converted into a finite form, unfortunately, because the enumeration is not computable: we can't simply distinguish between theories that are consistent and theories that are not.<br /><br />Now...
if we take a logic which lacks the rule of explosion, i.e., a paraconsistent logic, we wouldn't need to worry so much about avoiding inconsistent theories. The inconsistent theories would be isolated, and wouldn't "infect" the others. In this case we could computably enumerate all theories (still making sure to take different function names for each), and therefore we could make a <span style="font-style: italic;">finite theory </span>which performed the enumeration. (The finite theory would use only a finite number of function symbols, but it would provide an interpretation for an infinite number of them, via some encoding.)<br /><br />This provides an interesting argument in favor of paraconsistent logic. But, not necessarily a definitive one. It seems to me that one could claim that the paraconsistent logic is not really <span style="font-style: italic;">interpreting</span> the classical theories, since it is putting them into a logic with a nonclassical semantics. Perhaps one could define what the paraconsistent logic is really doing, and construct a classical theory that does that much. I don't know.<br /><br />[edit]<br />postscript--<br /><br />Don't think that it ends here, with one infinite (unspecifiable) theory to rule over all finite theories. We can talk about other such infinite theories, such as the theory containing all the truths of first-order arithmetic, the theory containing all the truths of second-order arithmetic, et cetera, the truths of set theory... (Actually, I don't think just "the truths of set theory" is well specified. "The truths of set theory under the iterative conception of set" is one way of being more specific.) So, we'll want to talk about whether these infinite theories can interpret each other, and hence we'll have an increasing chain of "maximally structured" systems with respect to particular reference sets...
the maximally structured theory with respect to finite theories is countably infinite; so is the set of truths of first-order arithmetic; the max. struct. theory w.r.t. countably infinite theories will be uncountably infinite; and so it goes.<br /><br />Alternatively, the paraconsistent version of this hierarchy appears to stop right away, with a finite theory that interprets all finite theories. This might be taken as a strength or a weakness...Abram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com0tag:blogger.com,1999:blog-28417647.post-29551502530669382042009-03-18T18:23:00.000-07:002009-03-18T19:36:23.000-07:00<span style="font-weight: bold;font-size:130%;" >More on Last Time's Subject</span><br /><br />The system I suggested last time, at the end, can actually be stated in a much simpler (more standard) way.<br /><br />Rather than adding in a totally new notation for representing enumerations of sentences in a Turing-complete manner, all we <span style="font-style: italic;">really</span> need is to allow the use of arbitrary function symbols when making assertions. This sounds trivial! However (as the considerations from last time show) it makes a big difference in the actual behavior of the system.<br /><br />Added function symbols are a standard part of the machinery of first-order logic, so in some ways it seems only natural to allow their use in first-order arithmetic. However, they are not normally considered a part of that language; functions in first-order arithmetic must be <span style="font-style: italic;">constructed</span> rather than <span style="font-style: italic;">defined</span>.<br /><br />Considering functions part of the language obviously proves no new theorems about arithmetic, because we don't add any axioms that include the function symbols. The only thing that changes is what we can say. So what <span style="font-style: italic;">can</span> we say?<br /><br />Well, it allows us to say a whole lot.
With function symbols, we can go as far as defining a truth predicate for arithmetic! This means that we can build the metalanguage for arithmetic. We can also go further, building the metametalanguage (the language containing the truth predicate for the metalanguage), et cetera. However, don't be fooled; there is an important difference between <span style="font-style: italic;">being able to build it</span> and <span style="font-style: italic;">having it as part of the system</span>. Normally when we talk about a language containing the truth predicate we mean that we've already got the rules to make it work in the axioms, not that we can add them if we like.<br /><br />Adding function symbols to arithmetic follows the advice of the edit to the previous post, namely, we don't include any claim that the added statements are true just because all the sentences of arithmetic that they imply are true. This is essentially because the logical system is treating the function-symbols as <span style="font-style: italic;">actual things</span>, not just as notational conveniences. The logic thinks there is a truth of the matter about how "F" behaves, even before we add axioms to define its behavior. It has no idea what that truth <span style="font-style: italic;">is</span>, but it assumes that the truth exists. So even if a particular function-based statement implies only true statements about arithmetic, and so looks totally un-risky, the logic will still think that it implies a bunch of risky statements about the function symbols involved.<br /><br />It would be interesting to develop a semantics for the function-symbols that did not result in this conclusion... however, as mentioned in the previous post, in that direction there is a strong risk of paradox.
Some sort of non-classical system of truth values will almost certainly be needed.Abram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com0tag:blogger.com,1999:blog-28417647.post-89230747928571092372009-03-15T13:44:00.000-07:002009-03-18T09:44:24.287-07:00OK.<br /><br />So.<br /><br />More logic stuff.<br /><br />I've spoken in the past about using probabilistic justifications (or, when simplicity is desired, nonmonotonic justifications) for mathematical belief. This provides at least some account of how a system can meaningfully refer to uncomputable concepts such as the natural numbers. (Calling the natural numbers "uncomputable" is sure to raise a few eyebrows. What I mean is that there are undecidable propositions concerning the natural numbers.) The natural question is: how much can be probabilistically justified?<br /><br />To classify one statement as computably decidable, or probabilistically decidable, or neither, is not quite accurate; any one statement or its negation could be simply added as an axiom, and thus made decidable (or more realistically, something else might be added as an axiom, rendering the statement decidable). Similarly, a statement might be made probabilistically decidable with the addition of an axiom that caused it to be equivalent to some probabilistically decidable statement. (Most trivially, the axiom "A <-> B" where "A" is a probabilistically justifiable statement and "B" was previously not. Such an axiom might seem contrived, but it is true if A and B are both true!)<br /><br />Still, it is convenient to talk in this way, and can be made meaningful. One way is by fixing the axioms we're dealing with; so, for example, we might be working just with Robinson arithmetic or Peano arithmetic. (I would prefer Robinson arithmetic, since I would like to leave the epistemic status of mathematical induction open, but Peano arithmetic assumes it to be true.) But this is not the only possibility.
For now, I prefer something like the following:<br /><br /><ul><li><span style="font-style: italic;">Basic functions of arithmetic are computable.</span> These vary from axiomatization to axiomatization. At one extreme, we could provide axioms giving the behavior of all the common functions on natural numbers; at the other extreme, we could have no functions in our language, instead having relations sufficient to specify the structure. Most common is to take only the successor function as basic, and define addition, multiplication, et cetera from it. These differences <span style="font-style: italic;">may</span> cause trouble for me (i.e., result in non-equivalent definitions), but I'll ignore that for now...</li><li><span style="font-style: italic;">Basic predicates and relations are computably decidable.</span> Again, there is some variation about which relations are taken as basic, and which are taken as defined. Usually equality is taken as basic, but I've heard of greater-than being taken instead.</li><li><span style="font-style: italic;">Statements whose truth is a function of a finite number of computably decidable statements are computably decidable</span>. This lets us form larger decidable statements with the normal boolean functions <span style="font-weight: bold;">and</span>, <span style="font-weight: bold;">or</span>, and <span style="font-weight: bold;">not</span>. It also allows bound quantifiers to be used, such as "For all numbers less than 100..."</li><li style="font-style: italic;">Statements whose truth is a function of an infinite number of computably decidable statements are probabilistically decidable if any reasonable probabilistic decision process is guaranteed to converge.</li></ul>A "reasonable probabilistic decision process" is intended to be something like the procedure described in <a href="http://dragonlogic-ai.blogspot.com/2008/06/basic-hyperlogic-my-search-for-proper.html">this post</a>.
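<br /><br />The levels in this list can be sketched in code. This is only an illustration with invented examples: Python predicates stand in for decidable properties of a number, and "limit_guess" is a crude stand-in for a probabilistic decision process, not the actual procedure from the linked post:

```python
def bounded_forall(pred, bound):
    """'For all n < bound, pred(n)': a truth function of finitely many
    decidable statements, hence itself computably decidable."""
    return all(pred(n) for n in range(bound))

def limit_guess(pred, steps):
    """Best current guess at 'for all n, pred(n)' after `steps` checks:
    a truth function of infinitely many decidable statements. The guess
    converges (stays True forever if the statement is true, eventually
    flips to False if not), but at no finite stage do we know it has
    converged -- justification, not decision."""
    return all(pred(n) for n in range(steps))

bounded_forall(lambda n: n < 100, 100)                   # True, and settled
limit_guess(lambda n: n % 2 == 0 or n % 2 == 1, 10**4)   # True at every stage
limit_guess(lambda n: n < 500, 10**4)                    # False once n = 500 is seen
```

A universal statement like the last two examples is computably falsifiable (a counterexample settles it finitely) but not computably justifiable, which is exactly where the limit procedure earns its keep.<br /><br />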
Obviously I need to make that a bit better defined, but it should be good enough for now.<br /><br />It might also be convenient to define "computably justifiable", meaning that <span style="font-style: italic;">if</span> a statement is true, it is computably decidable. (For example, "there exists a place in the decimal expansion of pi at which the number 9 is repeated 24 times" is verifiable if it is true (we can simply calculate the digits of pi until we find the spot), but if it is false, it is not.) "Computably falsifiable" similarly means that a statement can be falsified finitely if it is indeed false. "Probabilistically justifiable" would mean a statement was probabilistically decidable if it was true, and "probabilistically falsifiable" would mean that it was probabilistically decidable if false.<br /><br />The basics are all set up, so let's get to the main show.<br /><br />The question is: what statements are probabilistically justifiable?<br /><br />I think I have mentioned before that the delta-2 statements on the arithmetic hierarchy are definitely probabilistically decidable, since they are limit-computable. For any statements not in delta-2, there is no guarantee that they will converge to the correct value (they might not converge, or worse, they might converge to the wrong value). Still, it is an interesting question whether <span style="font-style: italic;">some</span> of these converge, and whether applying probabilistic methods is still better than guessing randomly (if there are more that converge correctly than converge incorrectly). But that is not the question for today...<br /><br />Previously, I have restricted this question to statements in arithmetic. This seems sensible. However, lately I've been looking at the concept of proof-theoretic strength. Roughly, a logic is considered stronger if it proves the same truths plus more. This brought my attention to the idea of probabilistically justifying statements in higher logics based on their relevance to arithmetic.
Could set theory, for example, be justified by its usefulness in describing true statements of arithmetic?<br /><br />This led to a question: <span style="font-style: italic;">are</span> there any statements that can be made, which purely assert things about arithmetic, but cannot be formulated in the language of first-order arithmetic? Obviously arithmetic can assert any <span style="font-style: italic;">single</span> thing about arithmetic, so the question is whether all <span style="font-style: italic;">combinations</span> of arithmetic statements that can be asserted by some logic can be asserted in arithmetic. ("Combinations of statements" refers to the same thing that "truth functions of statements" referred to in the earlier definition. Now, obviously, arithmetic can specify any finite truth-functional statement; so the challenge comes when we examine infinite truth-functional statements, such as those that are probabilistically justifiable.)<br /><br />More formally:<br /><br />Definition: <span style="font-style: italic;">Logic B is "<span style="font-weight: bold;">effectively complete</span>" with respect to logic A if and only if for every computably enumerable set S of statements in logic A, there exists a statement in logic B which implies exactly the closure of S under the deductive consequence relation of B.</span> (We need to close the set, because we can't penalize logic B for not being able to imply x&y without automatically implying y, for example.)<br /><br />Conjecture: <span style="font-style: italic;">Arithmetic is effectively complete with respect to itself.</span><br /><br />Whether this is true or false, there are some interesting consequences.
I'll note some of them in both directions, but keep in mind that (since only one direction is true) half of the consequences I mention will be false.<br /><br />If the conjecture is false, then even if we are suspicious of mathematical entities other than numbers, it seems that we'll have good reason to want a more mathematically expressive language than first-order arithmetic. The idea of "effective completeness" as I defined it is fairly weak; it does not seem at all implausible to say that if we care about a domain, we should want a logic which is effectively complete with respect to it. So, if arithmetic is <span style="font-style: italic;">not</span>, it seems like another is needed. <span style="font-style: italic;">Furthermore</span>, these new statements may have the hope of probabilistic justification.<br /><br />This does not necessarily justify set theory. It most certainly doesn't justify a Platonist attitude towards sets (i.e., believing in sets as actual entities rather than mere symbol-manipulation). Still, it starts the climb towards abstraction, and that's a slippery slope. (ha ha, mixed metaphor.)<br /><br />If the statement is <span style="font-style: italic;">true</span>, on the other hand, that means first-order arithmetic <span style="font-style: italic;">can</span> state anything that can be effectively stated. (That qualification, "effective", means: arithmetic can state any combination of statements that some computer program could generate in a (possibly nonterminating) sequence.) This would mean that any higher logic, in its capacity to refer to combinations of arithmetic statements, is unnecessary; in some sense it is already hiding within arithmetic, so it doesn't need to be invented separately.
Any higher-logic statement that could be justified probabilistically via its relationship to arithmetic could actually be interpreted <span style="font-style: italic;">as</span> some statement in arithmetic: once we found the sentence's interpretation, we could not distinguish between the two in terms of their meaning.<br /><br />So which is true?<br /><br />I spent quite a while pondering it, and went down a few wrong paths trying to find the answer, but I won't waste time describing those. Eventually, I realized that I had an example right in front of me of a sentence that is <span style="font-style: italic;">not</span> ever stated in the language of first-order arithmetic, yet asserts an effectively enumerable set of such statements: mathematical induction!<br /><br />Mathematical induction is always axiomatized in Peano arithmetic via the aid of an <a href="http://en.wikipedia.org/wiki/Axiom_schema">axiom schema</a>. An axiom schema is an axiom with a hole in it, into which is supposed to be placed every possible statement that can be made in the system. Here is the induction schema:<br /><br />"If X[0] and X[n]->X[n+1] for all n, then X[n] for all n."<br /><br />In other words, if we can prove that X holds for zero, and that whenever X holds for a number it holds for the next number, then we have proved that X holds for all numbers.<br /><br />"X" is the hole into which any particular statement that can be made about a number is to be dropped. This technique, making a statement about all statements, isn't actually available to us in first-order arithmetic. Officially, the axiom schema is a stand-in for an infinite number of axioms.<br /><br />So, you see, if everyone uses an axiom schema, it's probably because we <span style="font-style: italic;">can't summarize</span> that infinite statement in the language of arithmetic alone. Furthermore, I realized that someone had probably proven that fact by now. So I looked it up, and it's true.
My conjecture was false.<br /><br />This means that higher logics are necessary, in a fairly strong sense. Which logics? Well, adding the truth predicate for arithmetic (i.e., jumping to the metalanguage for arithmetic) gets us effective completeness for first-order arithmetic statements. If we really were only concerned with arithmetic, we would stop there. But this new language will (I suspect) still not be effectively complete with respect to itself. So we could add a <span style="font-style: italic;">third</span> layer to the cake, using a truth predicate for the <span style="font-style: italic;">second</span> logic to form a meta-meta-language that would be effectively complete for <span style="font-style: italic;">it</span>... probably, we can continue in this manner for the entire Tarski hierarchy of truth predicates. But instead, we could tackle head-on the problem of creating a language that is effectively complete with respect to itself.<br /><br />Such a thing might be impossible, I admit... it might lead to paradox in the same way that a language containing its own truth predicate can.<br /><br />My idea is that, in addition to some stock of standard machinery (conjunction, negation, a domain such as arithmetic to talk about...), we add a Turing-complete representation for constructing statement-enumerators. A statement-enumerator would be treated as the combined assertion of all statements it generated. These generators would be capable of generating other generator-sentences (otherwise it would just be another metalanguage).<br /><br />A generator would be true if all its generated sentences were.<br /><br />Sure enough, we can construct a generator that generates only its own negation. Assuming classical truth values, paradox ensues.
So, we need to allow some sentences to be neither true nor false...<br /><br />Same old game.<br /><br /><span style="font-weight: bold;">Edit-</span><br /><br />Actually, this scheme might fail only because I'm overstepping what is needed for effective completeness. Specifically, the sentence "<span style="font-style: italic;">A generator would be true if all its generated sentences were</span>" seems convenient, but is not necessary.Abram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com0tag:blogger.com,1999:blog-28417647.post-13479796761865284662009-02-09T14:03:00.000-08:002009-05-19T14:19:52.859-07:00<span style="font-size:180%;">Compression</span><br /><br />As promised in the previous post, I'll describe what I'm currently working on.<br /><br />Really, this is a continuation of <a href="http://dragonlogic-ai.blogspot.com/2008/06/something-cool-i-ran-into-some.html">this post</a>. The ideas I outlined there have the same basic shape, but have evolved a good deal.<br /><br />The basic idea: Any Turing-complete representation can be "run backwards" to generate the search space of all (and only) the programs that generate the desired piece of data. This approach promises to be much more effective than a genetic programming solution (for the special case to which it can be applied). A possible immediate application of this is data compression, which is formally equivalent to looking for short programs that output a desired piece of data (or, it is if we include the decompression program when measuring the size of the file, as is often required in <a href="http://mailcom.com/challenge/">compression</a> <a href="http://prize.hutter1.net/">contests</a> to prevent cheating).<br /><br />Realizing that compression was the application area I should be focusing on helped simplify my design a great deal. There may be a theoretical connection between compression and prediction, but converting between the two is not computationally simple.
Converting compression to prediction can be done in two ways (so far as I can see):<br /><br />-generate futures and then compress them to see how probable they are (shorter = more probable)<br />-generate (short) compressed representations of the future and decompress them to see what they actually mean<br /><br />The first method can use the "running backwards" trick, trimming the search space. The second cannot, so it may generate meaningless futures (compressed representations that do not actually decompress to anything). However, the second has the advantage of starting with short (probable) representations; the first will go through many improbable possibilities. I can think of many little tricks one could try to improve and/or combine the two methods, but my point is, it is a hard problem. I was attempting to tackle this problem for most of last semester. Eventually, I may try again. But for now, compression is a good testbed for the basic algorithm I'm working on.<br /><br />What about the other direction: can we convert from prediction to compression in an effective manner? It turns out that the answer is yes. There is an awesome algorithm called <a href="http://en.wikipedia.org/wiki/Arithmetic_coding">arithmetic coding</a> that does just that. This is the basis of the <a href="http://en.wikipedia.org/wiki/PAQ">PAQ</a> compression algorithm, the current leader in the Hutter Prize and Calgary Challenge (both linked to earlier).<br /><br />However, this strategy inherently takes the same amount of time for decompression as it takes for compression. PAQ and similar methods need to form predictions as they decompress in the same way that they form predictions as they compress. This is what takes the majority of the time. 
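The prediction-to-compression direction can be made concrete with a minimal arithmetic-coding sketch (my own toy illustration using a fixed symbol distribution and exact fractions; real coders like PAQ use adaptive predictions and bit-level arithmetic): each message is narrowed down to a subinterval of [0,1), and the interval's width determines the code length.

```python
from fractions import Fraction

# Fixed "predictions": a static distribution over three symbols.
PROBS = {'a': Fraction(1, 2), 'b': Fraction(1, 4), 'c': Fraction(1, 4)}
CUM, acc = {}, Fraction(0)
for sym in sorted(PROBS):       # cumulative probability below each symbol
    CUM[sym] = acc
    acc += PROBS[sym]

def encode(msg):
    """Narrow [0,1) down to the subinterval identifying msg."""
    lo, width = Fraction(0), Fraction(1)
    for sym in msg:
        lo += width * CUM[sym]
        width *= PROBS[sym]
    return lo, width            # any number in [lo, lo+width) encodes msg

def decode(x, length):
    """Recover the message from any point inside its interval."""
    out = []
    for _ in range(length):
        for sym in sorted(PROBS):
            if CUM[sym] <= x < CUM[sym] + PROBS[sym]:
                out.append(sym)
                x = (x - CUM[sym]) / PROBS[sym]
                break
    return ''.join(out)

lo, width = encode("abca")      # width = 1/64, i.e. a 6-bit code
```

Here `decode(lo, 4)` recovers `"abca"`, and the code length, -log2(width) = 6 bits, is exactly the sum of the per-symbol -log2 probabilities (1+2+2+1): better predictions mean wider intervals and shorter codes.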
More "direct" methods, like the one I am working on, take far less time for decompression: compression is a process of searching for the shortest representation of a file, while decompression simply involves interpreting that short representation.<br /><br />The Hutter prize places a time limit on decompression, but not on compression; so in that way, my method has the upper hand. In theory I could let my algorithm search for months, and the resulting file might decompress in minutes. (A rough estimate would be that the decompress time is the logarithm of the compress time.)<br /><br />By the way, these compression contests are far from practical in nature. Real applications of compression prefer compression methods that work fairly fast for both compression and decompression. The Hutter prize is designed to increase interest in the theoretical connection between prediction and compression, as it relates to artificial intelligence. Thus, it has a very loose time restriction, allowing for "smart" compression methods such as PAQ that take a long hard look at the data.<br /><br />But, enough about contests. Time for some more details concerning the algorithm itself.<br /><br />The major design decision is the choice of Turing-complete representation. The algorithm actually depends critically on this. Representations vary greatly in their properties; specifically, how well a heuristic can guide the search. The major heuristic to consider is simply size: if a move decreases size immediately, it is somewhat more probable that it decreases size in the long run, too.<br /><br />The "classic" choice would be to just go with Turing machines. Heuristics on pure Turing machines, however, happen to be <span style="font-style: italic;">terrible</span>. There are a very small number of options for backwards-moves, and usually none of them immediately change the size of the current representation.
We are running blind!<br /><br />For most of last semester, I was working on a version that would use lambda calculus as the representation. Lambda calculus is much better; in particular, it allows us to immediately take advantage of any repetitions that we recognize. Noticing repetitions is the most direct way to apply the shortness heuristic; it is the method used by <a href="http://sequitur.info/">sequitur</a>, one of the algorithms inspiring my current direction.<br /><br />Lambda calculus can compress repetitions using "abstraction". However, an abstraction needs to actually be put somewhere when it is made. There is no convenient heuristic for where to put it (at least, not that I've thought of). For example, if I see a string "abcabcabc", I could either abstract at the global level, which would allow me to compress any other occurrence of "abc" into the same abstraction, or at the local level, which might allow me to compress other local groupings of three repetitions. Also, I could place the abstraction anywhere in between, if I thought some of the context was important. "Anywhere in between" is a rather large search space. This and other issues led me away from lambda calculus as the representation of choice.<br /><a href="http://en.wikipedia.org/wiki/Formal_grammar"><br />Generative grammars</a> are another option. These appeal to me, partly because they are the formalism chosen by sequitur. Like lambda calculus, they offer a similar way of abbreviating repetitions. With grammars, the choice is what to rewrite them to, rather than where to place their abstraction. Sequitur can be seen as the special case of always rewriting repetitions to a single symbol, which represents the sequence that had repeated. More sophisticated types of patterns can be represented by allowing more complicated rewrites. Heuristically, simpler rewrites can be preferred.<br /><br />Unfortunately, there is a serious problem with this representation.
Unlike lambda expressions, grammars are not necessarily <span style="font-style: italic;">confluent</span>, which means that when we use the grammar to generate data we may get a different file than intended. It is possible to specify a convention telling us how to apply the grammar to get a file (for example, "always expand the first applicable rule going left-right, deciding ties by using the shorter of the rules"). However, a restriction on forward-execution must be mirrored by a corresponding restriction on backwards-execution, which means the search during the compression phase is more constrained. In many cases it is good news when we can constrain a search more; but in this case, it hurts the ability of the heuristic to guide us. (It seems there is a fairly direct connection here: restricting the branching factor leads to a worse heuristic unless we're very careful.)<br /><br />I also spent some time creating a set of restrictions that ensure confluence, but although these systems salvaged much of the situation, they were not as good as the option I finally settled on: combinators.<br /><br />Combinators differ from lambda expressions in that functions are given names and have external definitions, whereas in lambda expressions the function definitions are in-place via abstraction. Minimally, one can use just two combinators, S and K, for a Turing-complete formalism. (If you want to know the definitions of S and K, <a href="http://en.wikipedia.org/wiki/SKI_combinator_calculus">look them up</a>.) However, for my purposes, it makes sense to invent new combinators as needed.<br /><br />The idea behind a heuristic for combinators is to not only look for exact repetition, but also repetition of pieces with holes. Not only can we notice multiple instances of "abc", but also "a_c".
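This hole-repetition heuristic can be sketched in a few lines (a toy illustration of my own; the real search would score candidate combinators rather than just count patterns): scan a string for repeated fixed-length chunks, and also for variants with one position replaced by a hole.

```python
from collections import Counter

def hole_patterns(s, length=3):
    """Count exact length-3 chunks and one-hole variants ('_' marks the hole)."""
    counts = Counter()
    for i in range(len(s) - length + 1):
        chunk = s[i:i + length]
        counts[chunk] += 1                           # exact repetition
        for j in range(length):                      # repetition with a hole
            counts[chunk[:j] + '_' + chunk[j + 1:]] += 1
    return counts

c = hole_patterns("abcadcabc")
# c["abc"] == 2: exact repetition, the kind sequitur finds.
# c["a_c"] == 3: "abc", "adc", "abc" all match the holed pattern,
# suggesting a combinator that takes the middle symbol as an argument.
```

The hole pattern scores higher than any exact pattern here, which is exactly the signal for proposing a one-argument combinator.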
Repetitions with holes suggest combinators that take hole-fillers as arguments.<br /><br />Turing-completeness is assured by the fact that the system could invent S and K, although it probably won't. (For one thing, S and K are not the only set of operators that allow expressive completeness. For another, they are too "abstract" to be suggested directly by data... the heuristic will not give them a high score. Although, they are short, so their score should not be <span style="font-style: italic;">too</span> low...)<br /><br />That, then, is the basic idea. There are many directions for expansion, which might help the algorithm (and might not).<br /><br />-More complicated functions<br />--functions that take sequences as arguments, not just single terms (this is technically possible without expanding the formalism, via a trivial combinator A (x,y) -> xy; however, some work could be done to get the heuristic to look for this sort of thing, essentially by keeping an eye out for repeated patterns with variable-sized holes rather than single-slot holes)<br />--probabilistic functions (this may or may not provide a compression advantage, since we've got to keep track of the extra info to make things deterministic anyway, but even if it creates no advantage it might make things more human-readable)<br />--functions with more complicated definitions, such as if-then statements<br /><br />-more sophisticated heuristic & more sophisticated search<br />--best-first vs semi-random<br />--learning<br />---memoize "reduced forms" of commonly-encountered structures?<br />---learn which heuristic info to tabulate?<br />---learn sophisticated search strategies?Abram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com0tag:blogger.com,1999:blog-28417647.post-91639388239800501622008-12-21T21:25:00.000-08:002008-12-22T23:35:11.388-08:00<span style="font-size:180%;">Back to AI</span><br /><br />OK, here we go.<br /><br />AI is full of many fairly well-developed
methods of "propositional" learning: probabilistic, fuzzy, and otherwise fancified versions of stuff we can do with boolean algebra but no quantifiers. In other words, current AI is fairly good at manipulating systems that have a fixed number of variables, but not so good at arbitrary logical structures.<br /><br />A simple illustration of the two categories:<br /><br />Propositional Version:<br />I give the AI data on 3,000 patients, which includes information about which of 20 common symptoms each patient had, and which of 10 diseases those patients were ultimately determined to be suffering from. The AI learns to predict disease from symptoms.<br /><br />Predicate Version:<br />I give the AI data on 3,000 patients, with logical descriptions of symptoms and logical descriptions of diseases. Again, the AI is to learn to predict diseases from symptoms.<br /><br />In the first case, the AI is learning a mapping from 20 variables (the twenty symptoms) to 10 variables (the diseases). In the second case, the AI is learning a mapping from one infinite space (possible logical descriptions of symptoms) to another (possible logical descriptions of diseases).<br /><br />There are some borderline cases. For example, if we are learning to map arbitrary real numbers to arbitrary real numbers, the space is infinite, and we *could* treat the real numbers as logical entities rather than merely continuous variables, but this is almost never done; so, it is propositional. (Treating the real numbers as logical entities would mean worrying about interesting exact numbers like 1, 2, 3, pi, e, 1/2, and so on. For propositional methods, 1 is not any more special than 1.002.) It gets fuzzier... suppose we are learning about integers rather than real numbers. We could adapt the same propositional strategies we used in the real-number case, restricting them to integers. It might work well.
But with integers there is a greater chance that the exact number matters, and a greater chance that the number should be treated as a logical entity. Perhaps a pattern relies on one of the integers being even. A propositional method will not see this, and will need to learn from the data which integers the pattern applies to and which it doesn't. So, it will only be able to extend the pattern to cases it has seen before. If this sort of pattern is likely, more sophisticated methods become critical. Yet, we're still working with a fixed number of variables. The more sophisticated methods become <span style="font-style: italic;">really</span> critical when a given problem instance can contain a totally new variable, yet one that patterns from old variables might apply to. And then they become <span style="font-style: italic;">really really</span> critical when problem instances can't even be totally separated, because they all fit into one big structure of logical relationships...<br /><br />The transition from the first type of AI to the second is now taking place, largely under the banner of "probabilistic relational models". Hooray!<br /><br />(I've labeled the second type "predicate" to distinguish it from "propositional". This way of naming the two types is not uncommon, but many people use "relational" to describe the second class, instead. Another convenient term is "logical".)<br /><br />Anyway, I'm going to outline one way of carrying over the progress that's been made with propositional models to the higher level... this is an idea I've been sort of thinking about on the back burner, partly as a way of making an inherently parallel learning method. By no means do I think this is the only way of going forward, and in fact I'll probably be working in a different direction myself for at least a little while... 
maybe I'll write about that later.<br /><br />So, how can we use propositional algorithms in the wider domain (whatever we choose to call it)?<br /><br />I described propositional models as models having no quantifiers. Really, though, a universal quantifier over all the variables is taken for granted in the propositional setting. If a propositional method learns that a particular variable is "2 or 3", it doesn't mean that for some one case it is either two or three; it means for <span style="font-style: italic;">any</span> case it is either two or three. This single quantifier gives us some ability to do relational-style learning.<br /><br />Suppose we want to learn a probabilistic model for pictures. We could treat every pixel as a variable, and learn correlations between them from the dataset. This approach would require a very large dataset before any interesting patterns could be found. It would need to re-learn every object in every possible location, because it would have no way of generalizing from one location to another. A smarter way to apply propositional methods to the problem would be to treat a small square, say 10 by 10, as a single "instance"; we learn correlations between variables in such squares. With this approach, we might be able to get something interesting from even one image. There is a cost, of course: where before it was impossible to generalize from one location in the picture to another, now it is impossible to <span style="font-style: italic;">not</span> do so. For example, the method would not notice that an egg appears in the top right corner of each picture; it will simply notice that an egg is a common object. This can be helped by adding the location information as an extra variable.<br /><br />More generally, for any structured data, this approach can be used to learn local structure. For linear data such as text, we can use a fixed number of letters rather than a fixed number of pixels.
For tree-structured data (such as computer program code, HTML...), we can use a branching segment. But, we can go even further: there is a totally general mapping that will handle <span style="font-style: italic;">any</span> logical structure. Any data can be represented in the form of predicate statements: a series of entities, each of which can have properties and relationships with other entities. Just as we can select a random 10 by 10 area of a picture and ask what the colors are, we can select 10 entities at random and ask what their properties and relationships are. Let's call this the "universal mapping".<br /><br />The universal mapping allows us to learn logical structure, but it does have some serious limitations. Suppose once again that we're looking at linearly structured data such as text, but in predicate calculus form, and with the universal mapping. We have a bunch of entities ordered by a next/previous relation, and with a "letter" property that distinguishes each entity as 'a', 'b', 'c', ... 'A', 'B', 'C', ... 'space', 'comma', 'period', et cetera. Now suppose that we sample entities at random. If we've got much text, it will be very unlikely that we'll pick letters that occur next to each other. We're learning a correct distribution, technically, but we're not looking at the "interesting" part of it (the line). The algorithm could eventually learn what it was supposed to, but it would take too much time, and besides that it would not be in a directly usable form (since the distribution we want is embedded in a larger, useless distribution). If we used a special-purpose linear mapping that did not sample so randomly, we'd be much better off.<br /><br />OK. Anyway, that is the "state of the art" so to speak. These are all basically a type of markov model. So, what is my new suggestion?<br /><br />Well, first, let's look at one more relevant detail of the current propositional algorithms. Many of them are hierarchical in form.
This means that rather than finding complex patterns directly in the set of variables, the algorithm creates additional variables, and finds simpler relationships. Simpler relationships that include additional entities amount to the more complex relationships that we could have looked for in the first place; but the process is easier to manage by abstracting it in this way. This abstraction is iterated: second and third levels of hidden variables can be created to manage increasingly complex relationships, which treat the lower levels exactly as the first level treats the visible variables.<br /><br />Two recent successful examples are <a href="http://www.scholarpedia.org/article/Deep_belief_networks">Hinton's deep belief nets</a> and <a href="http://www.numenta.com/">Numenta's Hierarchical Temporal Memory</a>.<br /><br />So in the propositional domain, we can talk about hidden variables. In the predicate domain, we can also talk about hidden variables, but we can add hidden relations and even hidden entities to the list. For propositional methods, hidden variables are just an efficiency issue. In the predicate domain, it is very different: hidden stuff dramatically increases the representational power (and therefore the model search space).<br /><br />Starting simple with hidden variables: if each entity is allowed to possess hidden variables, then the modeling power has changed from <a href="http://en.wikipedia.org/wiki/Markov_chain">markov-model</a> to <a href="http://en.wikipedia.org/wiki/Hidden_Markov_model">hidden-markov-model</a>.<br /><br />Adding hidden entities and relations is enough to boost the representational power up to <a href="http://en.wikipedia.org/wiki/Turing_complete">turing-completeness</a>: any computable pattern can be represented.<br /><br />So, OK, enough background. Now, the question is, how can we best use the power of existing propositional methods to search a turing-complete model space?
Here is the idea I've been pondering recently...<br /><br />For the moment, let's say we're working with linearly structured data, for simplicity. One turing-complete formalism is the <a href="http://en.wikipedia.org/wiki/Cellular_automata">cellular automaton</a>. These bear some structural similarity to a hierarchical network of variables. The major difference is that all cells are the same, meaning that all variables are interacting with their neighbors in exactly the same manner. My basic idea is to semi-smoothly integrate deep belief network learning with cellular automaton learning. For the linearly structured data, then, the system would be searching for two-dimensional automata. Intuitively, learning a cellular automaton of hidden variables is similar to learning the physical laws that hold sway equally at any point in space. We can see directly how those physical laws work for the visible space, and we generalize, assuming that there are nearby hidden spaces that partially explain the visible space, and farther-off hidden spaces, and so on, but that all follow the same physical laws.<br /><br />Well, that was nearly 20 paragraphs of introduction for 1 paragraph of an idea... but, that introductory material was interesting in itself, and it will be useful to refer to in future posts. So, more later...Abram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com0tag:blogger.com,1999:blog-28417647.post-89756881446261582782008-12-05T19:51:00.000-08:002008-12-05T21:55:27.306-08:00<span style="font-size:180%;">A General Theory</span><br /><br />Well, I have been climbing the ladder of infinities for a while now on this blog. Soon I must get back to how this all applies to AI... but not quite yet.<br /><br />Here is a theory that almost works.<br /><br />Start with first-order logic. This can be your favorite non-classical variation if you wish; intuitionistic, paraconsistent, relevant, whatever you want. 
There is only one requirement: it needs to be strong enough to be Turing-complete (more specifically, logical consequence should be completely enumerable but not co-enumerable, thanks to the good old halting problem). Call this the <span style="font-weight: bold;">base language</span>.<br /><br />Add to this your favorite theory of truth. For maximum effectiveness, it should include the infinite hierarchy of truth that I suggested in <a href="http://dragonlogic-ai.blogspot.com/2008/10/progress-after-pondering-previous-post.html">this post</a>. This is not a difficult requirement: either a revision theory of truth or a fixed-point theory will do, as well as many less-well-known theories, I'm sure... and anyway, the requirement is not totally necessary, as I'll attempt to make clear. Anyway, call this language the <span style="font-weight: bold;">language of truth</span>. The theory of truth that you choose will assign actual truth-values to the statements in this language.<br /><br />Now we have what we consider a solid theory of truth. But, as I pointed out in the message I copied in the <a href="http://dragonlogic-ai.blogspot.com/2008/11/interesting-mailing-list-i-recently.html">previous post</a>, all such theories appear to have referential gaps: some statements will have a status not nameable within the theory, which we can name and reason about as people standing outside of the theory. The type of referential gap will depend on the type of theory of truth that was used. In the most common case, the gap will be sentences that are not assigned either "true" or "false". The theory of truth will be able to state that a sentence is true, or false, but not that it is neither. Defenders of such theories of truth will attempt to claim that we really can't say that; for example, one way of arguing is saying that such sentences have undefined truth values, but we could later add logical conventions that ascribe true or false to them.
However, such arguments are self-defeating: the argument needs to refer to the class of sentences that are in this intermediate state, so it generally must invent a label for them (such as "undefined"). This name is precisely the reference gap of the theory, and cannot be stated inside the theory.<br /><br />So, the next step is to add to the language whatever labels we need in order to fill the gap(s) that exist in the theory of truth. I call this stage a <span style="font-style: italic;">theory of meaning</span>, because I think it is important to point out that the theory of truth is not incomplete just because of the gaps; it may be a complete and valid theory of truth, it just is not a complete theory of logical/mathematical reference. Call the new gap-filling language the <span style="font-weight: bold;">first language of meaning</span>. I will generally pretend that there is only one gap, as in the simple case. Call this gap <span style="font-weight: bold;">1-meaningless</span>. (The idea doesn't seem hard to modify to approaches that create multiple gaps.)<br /><br />Assigning truth values to this language can generally be accomplished by relying on the original theory of truth that we chose. First, label the 1-meaningless sentences as such. Second, modify the theory of truth to act upon 3 truth values rather than the usual 2. This will involve some decisions, but as you can see I am not too concerned with details in this post. Generally, we'll have to decide things like whether it is 1-meaningless or just false to claim that a 1-meaningless sentence is true. Once we've made these decisions, we simply apply the method.<br /><br />This, of course, creates another gap; having 3 truth values rather than 2 is not so radical as to change the result there. Call the new gap <span style="font-weight: bold;">2-meaningless</span>, and call the language that includes it the <span style="font-weight: bold;">second language of meaning</span>.
Assign truth-values to this language in the same way, by expanding the method to include 4 truth values.<br /><br />By now you get the idea. We define 5-meaningless, 6-meaningless, and so on. And if you read the first post I mentioned (<a href="http://dragonlogic-ai.blogspot.com/2008/10/progress-after-pondering-previous-post.html">this post</a>), then you'll probably also realize that I want to similarly define infinity-meaningless, inf+1-meaningless, inf+inf-meaningless, and so on. More specifically, I want to define a type of meaninglessness corresponding to <span style="font-style: italic;">every ordinal number</span>. As I hinted at the beginning, this rather large hierarchy should smooth out most differences in referential power of the methods involved; so, a really weak initial theory of truth should still do the trick in the end, gaining maximal referential power after unstateably many iterations.<br /><br />Now for the pleasant surprise. Once I've done this, I can prove that there is <span style="font-style: italic;">no referential gap left</span>. If I had a final gap, it would correspond to an ordinal number larger than all other ordinal numbers (including itself)! This cannot be, so the theory is safe, almost as if it were protected by a magic charm.<br /><br />For a few weeks now I've been satisfied with this conclusion. But, early on, I realized that I didn't know what the logic should say about a statement like "This sentence is either false or some type of meaningless". A day or so ago I realized what was going on. Each new language can refer to any combination of the truth-states from the levels under it, but obviously there is no top level (since there is no ordinal larger than all others), so we don't have permission to refer to any combination of values from any level we want; we are only allowed to refer to combinations that have some upper bound. 
The phrase "some type of meaningless" has no upper bound; it attempts to refer to the entire unbounded list.<br /><br />There is some legitimate mathematical tradition that could be used to justify this limitation of the logic. One is not supposed to tamper with all ordinals at once. So, I could simply say that it's OK to stop here. Of course, I'm using the same sort of self-defeating argument that is so common in this area... in order to say that we can't refer to the list of all ordinals, I am doing exactly that.<br /><br />Another way out would be to allow the newly-problematic statements to be both true and false, using some paraconsistent logic. This is similarly justified by the motto "one is not supposed to tamper with all ordinals at once". The taboo surrounding the list of all ordinals arises, of course, from the fact that contradictions follow quite quickly from reasoning about it. So, I could somewhat legitimately argue that it is literally an inconsistent yet well-defined mathematical entity.<br /><br />However, this does not give me maximal expressive power... I will end up wanting to invent new notation to fill reference-gaps in the paraconsistent theory, and on we go.<br /><br />So, a more direct approach would be to allow unrestricted statements about ordinals, and then do the same thing we've been doing... assign these statements truth values in the obvious ways, then apply an expanded theory of truth to add a truth predicate to that language, then call anything that isn't assigned a value "super-meaningless", expand the theory of truth to give that a predicate, invent 2-super-meaningless, 3-super-meaningless, inf-, and the whole ordinal hierarchy again. Then what? Well, we'll have the same sort of gap for this second ordinal hierarchy. So by doing the whole thing <span style="font-style: italic;">again</span>, we can create super-super-meaningless, super-super-super-meaningless, and infinite versions with any ordinal number of supers. What next?
Well, we'll have the same problem again...<br /><br />But notice what I'm doing...<br /><br />All of this is quite seriously illegal if we take the notion of ordinal number seriously, because I'm simply constructing a hierarchy of ordinals <span style="font-style: italic;">larger than all ordinals</span>. An ordinal cannot be larger than all ordinals! And even less can there be a whole hierarchy up there...<br /><br />This demonstrates the force of the two limitative arguments (either "no, you seriously cannot refer to those things" or "sure, go ahead, but you'll derive contradictions"). Even though there really really seems to be a solid next step that can be taken, it is either a brick wall or a gaping ravine...<br /><br />So, like I said, it seems to be a theory that almost works.Abram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com1tag:blogger.com,1999:blog-28417647.post-14204134723299560172008-11-24T10:55:00.000-08:002008-11-24T11:00:46.705-08:00<span style="font-size:180%;">An Interesting Mailing List</span><br /><br />I recently joined the <a href="http://www.weidai.com/everything.html">everything list</a>. It looks to be some good philosophical fun. The list practice is to submit a "joining post" that reviews intellectual background. I am replicating mine here.<br /><br />----------------------------------<br /><br />Hi everyone!<br /><br />My name is Abram Demski. My interest, when it comes to this list, is:<br />what is the correct logic, the logic that can refer to (and reason<br />about) any mathematical structure? The logic that can define<br />everything definable? If every possible universe exists, then of<br />course we've got to ask which universes are possible. As someone<br />mentioned recently, a sensible approach is to take the logically<br />consistent ones. 
So, I'm asking: in what logic?<br /><br />I am also interested in issues of specifying a probability<br />distribution over these possibilities, and what such a probability<br />distribution really means. Again there was some recent discussion on<br />this... I was very tempted to comment, but I wanted to lurk a while to<br />get the idea of the group before posting my join post.<br /><br />Following is my view on what the big questions are when it comes to<br />specifying the correct logic.<br /><br />The first two big puzzles are presented to us by Godel's<br />incompleteness theorem and Tarski's undefinability theorem. The way I<br />see it, Godel's theorem presents a "little" puzzle, which points us in<br />the direction of the "big" puzzle presented by Tarski's theorem.<br /><br /><a href="http://en.wikipedia.org/wiki/Godel%27s_theorem" target="_blank">http://en.wikipedia.org/wiki/<wbr>Godel%27s_theorem</a><br /><a href="http://en.wikipedia.org/wiki/Tarski%27s_undefinability_theorem" target="_blank">http://en.wikipedia.org/wiki/<wbr>Tarski%27s_undefinability_<wbr>theorem</a><br /><br />The little puzzle is this: Godel's theorem tells us that any<br />sufficiently strong logic does not have a complete set of deduction<br />rules; the axioms will fail to capture all truths about the logical<br />entities we're trying to talk about. But if these entities cannot be<br />completely axiomatized, then in what sense are they well-defined? How is<br />logic logical, if it is doomed to be incompletely specified?
One way<br />out here is to say that numbers (which happen to be the logical<br />entities that Godel showed were doomed to incompleteness, though of<br />course the incompleteness theorem has since been generalized to other<br />domains) really are incompletely specified: the axioms are incomplete<br />in that they fail to prove every sentence about numbers either true or<br />false, but they are complete in that the ones they miss are in some<br />sense actually not specified by our notion of number. I don't like<br />this answer, because it is equivalent to saying that the halting<br />problem really has no answer in the cases where it is undecidable.<br /><br /><a href="http://en.wikipedia.org/wiki/Halting_problem" target="_blank">http://en.wikipedia.org/wiki/<wbr>Halting_problem</a><br /><br />Instead, I prefer to say that while decidable facts correspond to<br />finite computations, undecidable facts simply correspond to infinite<br />computations; so, there is still a well-defined procedure for deciding<br />them; it simply takes too long for us to complete. For the case of<br />number theory, this can be formalized with the arithmetical hierarchy:<br /><br /><a href="http://en.wikipedia.org/wiki/Arithmetical_hierarchy" target="_blank">http://en.wikipedia.org/wiki/<wbr>Arithmetical_hierarchy</a><br /><br />Essentially, each new quantifier amounts to a potentially infinite<br />number of cases we need to check. There are similar hierarchies for<br />broader domains:<br /><br /><a href="http://en.wikipedia.org/wiki/Hyperarithmetical_hierarchy" target="_blank">http://en.wikipedia.org/wiki/<wbr>Hyperarithmetical_hierarchy</a><br /><a href="http://en.wikipedia.org/wiki/Analytical_hierarchy" target="_blank">http://en.wikipedia.org/wiki/<wbr>Analytical_hierarchy</a><br /><a href="http://en.wikipedia.org/wiki/Projective_hierarchy" target="_blank">http://en.wikipedia.org/wiki/<wbr>Projective_hierarchy</a><br /><br />This brings us to the "big" puzzle. 
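Before moving on: the quantifier-as-infinite-search picture can be made concrete with a small sketch (Python, purely illustrative; the function names, predicates, and search bound are invented for the example). A Sigma-1 sentence "there exists n such that P(n)" is semi-decidable: enumerating candidates confirms it in finite time if it is true, but no finite search can refute it. A Pi-1 sentence "for all n, P(n)" is the mirror image.

```python
from itertools import count

def exists_n(p, bound=None):
    """Semi-decide 'there exists n with p(n)' by enumerating witnesses.
    A witness settles the sentence positively in finite time; exhausting
    the search bound leaves it undecided (None), never refuted."""
    for n in count():
        if bound is not None and n >= bound:
            return None  # search exhausted: undecided, not false
        if p(n):
            return True

def forall_n(p, bound=None):
    """Semi-refute 'for all n, p(n)' by hunting for a counterexample.
    A counterexample settles it negatively; no finite search confirms it."""
    for n in count():
        if bound is not None and n >= bound:
            return None  # no counterexample yet: undecided, not proved
        if not p(n):
            return False

print(exists_n(lambda n: n * n == 49, bound=1000))  # True (witness n = 7)
print(forall_n(lambda n: n % 2 == 0, bound=1000))   # False (counterexample n = 1)
```

Each further quantifier alternation stacks another such unbounded search on top of the last, which is the sense in which these hierarchies measure "how infinite" a truth is.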
To specify the logic and refer to<br />any structure I want, I need to define the largest of these<br />hierarchies: a hierarchy that includes all truths of mathematics.<br />Unfortunately, Tarski's undefinability theorem presents a roadblock to<br />this project: If I can use logic L to define a hierarchy H, then H<br />will necessarily fail to include all truths of L. To describe the<br />hierarchy of truths for L, I will always need a more powerful language<br />L+1. Tarski proved this under some broad assumptions; since Tarski's<br />theorem completely blocks my project, it appears I need to examine<br />these assumptions and reject some of them.<br /><br />I am, of course, not the first to pursue such a goal. There is an<br />abundant literature on theories of truth. From what I've seen, the<br />important potential solutions are Kripke's fixed-points, revision<br />theories, and paraconsistent theories:<br /><br /><a href="http://en.wikipedia.org/wiki/Saul_Kripke#Truth" target="_blank">http://en.wikipedia.org/wiki/<wbr>Saul_Kripke#Truth</a><br /><a href="http://plato.stanford.edu/entries/truth-revision/" target="_blank">http://plato.stanford.edu/<wbr>entries/truth-revision/</a><br /><a href="http://en.wikipedia.org/wiki/Paraconsistent_logic" target="_blank">http://en.wikipedia.org/wiki/<wbr>Paraconsistent_logic</a><br /><br />All of these solutions create reference gaps: they define a language L<br />that can talk about all of its truths, and therefore could construct<br />its own hierarchy in one sense, but in addition to simple true and<br />false, more complicated truth-states are admitted that the language<br />cannot properly refer to. For Kripke's theory, we are unable to talk<br />about the sentences that are neither-true-nor-false. For revision<br />theories, we are unable to talk about which sentences have unstable<br />truth values or multiple stable truth values. 
In paraconsistent logic,<br />we are able to refer to sentences that are both-true-and-false, but we<br />can't state within the language that a statement is *only* true or<br />*only* false (to my knowledge; paraconsistent theory is not my strong<br />suit). So using these three theories, if we want a hierarchy that<br />defines all the truth value *combinations* within L, we're still out<br />of luck.<br /><br /><br />As I said, I'm also interested in the notion of probability. I<br />disagree with Solomonoff's universal distribution<br />(<a href="http://en.wikipedia.org/wiki/Ray_Solomonoff" target="_blank">http://en.wikipedia.org/wiki/<wbr>Ray_Solomonoff</a>), because it assumes that<br />the universe is computable. I cannot say whether the universe we<br />actually live in is computable or not; however, I argue that,<br />regardless, an uncomputable universe is at least conceivable, even if<br />it has low credibility. So, a universal probability distribution<br />should include that possibility.<br /><br />I also want to know exactly what it means to measure a probability. I<br />think the use of subjective probabilities is OK; a probability can reflect<br />a state of belief. But, I think the reason this is an effective<br />way of reasoning is that these subjective probabilities tend to<br />converge to the "true" probabilities as we gain experience. It seems<br />to me that this "true probability" needs to be a frequency. It also<br />seems to me that this would be meaningful even in universes that<br />actually happened to have totally deterministic physics-- so by a<br />"true probability" I don't mean to imply a physically random outcome,<br />though I don't mean to rule it out either (like uncomputable<br />universes, I think it should be admitted as possible).<br /><br />Well, I think that is about it. 
For now.<br /><span style="color:#888888;"><br /></span>Abram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com0tag:blogger.com,1999:blog-28417647.post-34572410447286220142008-11-11T06:56:00.000-08:002008-11-15T09:54:30.859-08:00<span style="font-size:180%;">Fixing Naive Lambda Logic</span><br /><br />In <a href="http://dragonlogic-ai.blogspot.com/2008/08/paradox-i-discovered-paradox-in-version.html">this old post</a>, I explained why naive lambda logic is paradoxical, and showed how to fix it. The fix that I suggested restricted lambda statements to represent finite computations, which is really what they are for anyway. However, in <a href="http://dragonlogic-ai.blogspot.com/2008/11/lure-of-paraconsistency-paraconsistent.html">this recent post</a>, I suggest that many naive, inconsistent logics (such as the naive lambda logic) are actually referentially complete: they can state any mathematically meaningful idea. The problem is that they can also state some meaningless things.<br /><br />So, it is interesting to consider how one might fix naive lambda logic in a way that keeps the meaningful infinite stuff while removing the nonsense.<br /><br />To do that, though, a definition of meaningfulness is needed. As I said in the more recent post, a statement's meaning needs to come ultimately from a basic level of meaningful statements (such as statements about sensory data). A meaningful statement should either be on this base level, or its meaning should be derived purely from other meaningful statements.<br /><br />In the lambda logic, it is natural to think of a term as meaningful if it evaluates to something. A statement, then, is meaningful if its truth relies only on the value of meaningful terms. To make this precise, we've got to specify the infinite build-up process by which we assign meaning. For each operation that forms a new sentence, we've got to define how meaning is carried over. 
One of these rules, for example, will be:<br /><br /><span style="font-style: italic;">If statement <span style="font-weight: bold;">X</span> is meaningful and statement <span style="font-weight: bold;">Y</span> is meaningful, then the statement <span style="font-weight: bold;">X and Y</span> is meaningful.</span><br /><br />We need a rule like this for every operation: <span style="font-weight: bold;">and</span>, <span style="font-weight: bold;">or</span>, <span style="font-weight: bold;">not</span>, quantifiers, lambda, and the lambda-value relation. That may sound simple, but complications quickly build up. <span style="font-weight: bold;">And</span> is meaningful when its arguments are meaningful. But what about <span style="font-weight: bold;">or</span>? It seems like <span style="font-weight: bold;">X or Y</span> could be meaningfully true if <span style="font-weight: bold;">X</span> is true but <span style="font-weight: bold;">Y</span> is meaningless. But if we want this to work, then it would also be sensible to allow <span style="font-weight: bold;">not Y</span> to be meaningfully true when <span style="font-weight: bold;">Y</span> is meaningless. (That behavior of <span style="font-weight: bold;">not</span> would make the behavior of <span style="font-weight: bold;">or</span> that I mentioned consistent with classical logic.) But that equates <span style="font-style: italic;">meaningless</span> with <span style="font-style: italic;">false</span>, which seems wrong.<br /><br />Another problem arises with the treatment of quantifiers. Do quantifiers range over all possible terms, or only meaningful ones? 
It makes a difference!<br /><br />There are many places we can turn to for standard solutions to these problems: <a href="http://en.wikipedia.org/wiki/Free_logic">free logic</a>, <a href="http://www.science.uva.nl/%7Eseop/entries/truth-revision/">the revision theory of truth</a>, fixed-point theories, and others.<br /><br />A third problem, perhaps worse, arises from the concept "not meaningful". For a concept to be meaningful seems straightforward: it should be built up in a meaningful way from other meaningful statements. But trouble presents itself when we discuss non-meaning.<br /><br />Imagine the base-level facts as ground from which trees are growing. The trees, of course, are the meaningful statements that can be built up. Meaningless statements would be branches hanging in midair, attempting to grow from themselves, or from other meaningless statements. (Meaningful statements can also have self-reference, but have an attachment to the ground somewhere.)<br /><br />Now, when we consider the concept "meaningless", we see some weird stuff happening: somehow the paradoxical branches that grow from nothing are able to support meaningful branches, such as the statement <span style="font-style: italic;">"This sentence is false" is meaningless</span>. Or, even stranger, consider "This sentence is meaningless". It appears to be meaningful but false. Or, consider "This sentence is either false or meaningless". If it is true, it is false or meaningless; if it is false, it is true; if it is meaningless, then it is true. It looks like the only way to deal with it is to say that it is meaningless to ask which of the categories it is assigned to: it is meaningless to talk about its meaningfulness or meaninglessness.<br /><br />Handling cases like these requires a sort of back-and-forth between meaningful and meaningless. 
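The ground-up growth of meaning can be seen in miniature with a fixed-point-style evaluation in the spirit of Kripke's construction (a toy Python sketch, not the real theory: sentences are modeled as functions of a partial truth assignment, and the three example sentences and strong-Kleene connectives are my own simplification).

```python
# Strong-Kleene connectives over three values: True, False, None (undefined).
def k_not(a):
    return None if a is None else not a

def k_and(a, b):
    if a is False or b is False:
        return False        # a false conjunct settles the conjunction
    if a is True and b is True:
        return True
    return None             # otherwise still undefined

# Each sentence is a function from the current partial assignment to a
# truth value; self- and cross-reference happen by looking names up.
sentences = {
    "ground": lambda v: True,                      # a base-level fact
    "conj":   lambda v: k_and(v["ground"], True),  # grounded via 'ground'
    "liar":   lambda v: k_not(v["liar"]),          # 'this sentence is false'
}

# Start with everything undefined and re-apply the evaluation rules
# until nothing changes: grounded sentences settle, the liar never does.
v = {name: None for name in sentences}
changed = True
while changed:
    new = {name: f(v) for name, f in sentences.items()}
    changed = new != v
    v = new

print(v)  # {'ground': True, 'conj': True, 'liar': None}
```

The liar simply never leaves the undefined gap, and, as the examples in the text show, the language itself has no safe way to say "that sentence is undefined" from the inside.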
We can't just grow meaning from the ground up and then declare the rest meaningless; in declaring things meaningless we allow more meaningful statements, so we've got to go back and add them in. That in turn might change the set of meaningless statements, and so on. If in doing this we are changing our assessments of various statements (going back and forth between "meaningful" and "meaningless"), then we are doing something similar to what the revision theory of truth recommends. On the other hand, I like the idea of marking things "definitely meaningful" and "definitely meaningless". A back-and-forth would still be needed, but all decisions would be final.<br /><br />Anyway. Suppose we resolve all of those issues. Another interesting issue comes up: infinite lambda statements.<br /><br />An infinite lambda statement could directly represent the mathematical entities that I want the system to be able to reason about. For example, an arbitrary real number would be any function from natural numbers to the integers 0 through 9 (if we want decimal notation), represented by a finite or infinite lambda statement. (Note: the calculation itself could always be fixed to halt in finite time, despite the lambda statement being infinite.) The interesting thing is that if the logic has been satisfactorily rigged up, it will be in some sense <span style="font-style: italic;">as if</span> infinite lambda statements were allowed, even though they aren't.<br /><br />This suggests that we need to be careful of the semantics, and therefore of how nonmonotonic reasoning is used. Are the quantifiers interpreted as ranging over all actually-representable lambda terms, or are they also ranging over the unrepresentable infinite ones? If the logic is to talk about unrepresentables properly, the quantifiers will have to range over them. But then nonmonotonic reasoning will not converge to the correct answers: it will converge to answers that hold for the finite terms only. 
This will sometimes be correct for the infinite case, but not always. The matter seems complicated, and I'm not yet sure how to deal with it.Abram Demskihttps://plus.google.com/111568410659864255951noreply@blogger.com0
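As a footnote to the real-number representation described in the post: the function-from-naturals-to-digits idea can be sketched directly (a toy Python illustration rather than a lambda term; `real_from_fraction` is an invented name). Each digit is produced by a finite computation, even though the expansion as a whole is infinite.

```python
def real_from_fraction(p, q):
    """Represent the real p/q (with 0 <= p < q) as a digit function:
    digit(n) is the n-th decimal digit, computed by n steps of long
    division -- a finite computation for each n, though the full
    expansion never ends."""
    def digit(n):
        rem = p
        for _ in range(n):
            rem = (rem * 10) % q   # advance the long division n steps
        return (rem * 10) // q     # next digit of the expansion
    return digit

one_seventh = real_from_fraction(1, 7)
print([one_seventh(n) for n in range(6)])  # [1, 4, 2, 8, 5, 7]
```

A rational number like this yields an eventually periodic digit function; an arbitrary real is simply any function of this shape, periodic or not, which is what an infinite lambda statement would be needed to denote.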