Wednesday, April 08, 2009

Some Numbers

This continues the earlier Risk Estimate.

I am generally disappointed by the lack of detailed tracking of the progress of computing power... I do not know where to find projections based on large amounts of up-to-date data. The most frequently cited projection is Kurzweil's, based on data up through 2000 (for example, here). I found one that may be more recent, but it does not show data points, just a curve. These curves are based on what $1000 of computing buys you. It is also interesting to note the supercomputing curve.

Moore himself believes that the exponential trend will stop in 10 to 15 years, but by his own admission he does not take into account 3D chips or other possible ways around the halting spot. I think it is safe to allow a few more years based on that alone... but since the halting spot is a point of contention, for now I'll just give the analysis based on the raw exponential curve.

By the curve, computers reach human-level processing power between 2020 and 2030. If we leave the curve exponential, rather than going super-exponential as Kurzweil suggests we should, the rate looks to be around 5 orders of magnitude every 10 to 20 years. (I know that is a big range, but I don't have much data to go on.) So if a computer is 1 person in 2025, it is 10 people in 2040 and 100 people in 2055.
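
To make that projection concrete, here is a minimal Python sketch. The human-level year and the pace are just assumptions pulled from the figures above (human level around 2025, and roughly one order of magnitude per 15 years, which is what the 2025/2040/2055 sequence implies), so the output is only as good as those guesses.

```python
# Back-of-envelope sketch of the projection above: person-equivalents of
# $1000 of computing, assuming human level around 2025 and roughly one
# order of magnitude of growth every 15 years thereafter (both are guesses).

HUMAN_LEVEL_YEAR = 2025            # assumed midpoint of the 2020-2030 range
YEARS_PER_ORDER_OF_MAGNITUDE = 15  # assumed pace; the real range is wide

def person_equivalents(year):
    """Rough person-equivalents of a $1000 computer in a given year."""
    exponent = (year - HUMAN_LEVEL_YEAR) / YEARS_PER_ORDER_OF_MAGNITUDE
    return 10 ** exponent

for year in (2025, 2040, 2055):
    print(year, round(person_equivalents(year)))   # 1, 10, 100
```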

What does it mean for a computer to be as smart as 10 or 100 people? A first approximation would be to say that it could accomplish as much intellectual activity as a group of 10 or 100 people who were completely unified in their aims and could communicate with each other continuously. But even this estimate is low, because there is a large amount of redundancy in 10 such people. It is hard to estimate how much, but roughly we could say that only 1 visual cortex is needed, so the other 9 people could use that processing power for something else; only 1 motor cortex is needed, so the other 9 could use that processing power for something else; and so on. This might (roughly) double the amount of thinking power after the first person (who needs their motor and visual cortex intact).
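
If I take that "roughly double after the first person" literally, the effective capacity of N fully unified minds comes out to something like 1 + 2(N - 1). A tiny sketch, just to make my reading of that assumption explicit (the formula is my own rough rendering of the paragraph above, not anything precise):

```python
# Toy version of the redundancy argument: the first mind keeps its full
# sensory/motor machinery, and each additional mind reuses that freed-up
# capacity, roughly doubling its contribution.

def effective_people(n):
    """Rough person-equivalents of n fully unified human-level minds."""
    return 1 + 2 * (n - 1) if n >= 1 else 0

print(effective_people(10))    # 19 -- versus 10 people working separately
print(effective_people(100))   # 199
```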

So how safe is this? I'd say as soon as we get roughly human-level AI, we've got a significant risk of the AI deciding it would be awesome to "get loose" and use spare computing power stolen from the internet. My estimate could be greatly improved here, but there are about 7 billion people in the world, and more will be here by 2020, so assuming 1/8 of them have computers on the internet (that is where the estimate is shaky), we're talking about a 1-billion-fold increase in computing power as soon as an AI is able to "get loose". That assumes the 1 billion computers on the net are roughly equivalent in power to the one the AI started on (in line with the idea that we're estimating based on what $1000 of computing power buys). By my earlier estimate, an AI as smart as 1 person becomes as smart as 2 billion. But once we go distributed, the advantages I was talking about go away; the massive redundancy becomes necessary. So, back to 1 billion. (The idea that "smartness" merely doubles when we get rid of the inefficiencies of distributed existence is blatantly silly for the case of billions of people... oh well.)
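
Spelling the "get loose" numbers out (the population figure, the 1/8 online fraction, and the doubling bonus are all just the rough guesses above):

```python
# Rough scale of the "get loose" scenario, using the guesses above:
# ~7-8 billion people, about 1/8 of them with an internet-connected computer
# roughly as powerful as the machine the AI starts on.

population = 8e9           # rough world population by the time this matters
fraction_online = 1 / 8    # the shakiest guess in the whole estimate

computers = population * fraction_online
print(f"{computers:.1e} computers")                 # about 1e9

# Naively applying the earlier "doubling" bonus for unified minds:
print(f"{2 * computers:.1e} person-equivalents")    # about 2e9

# ...but distributed operation brings the redundancy back, so call it ~1 billion.
```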

So, could the world defend itself against the intellectual effort of 1 billion virtual people? We've got 8 billion on our side (by that point)... plus we start out with a good-sized advantage in terms of control of resources. How much could 1 billion hackers acquire on short notice?

For now, I'll give that a 50% chance of extinction, assuming bad intent on the AI's part, assuming it is able to get on the internet, and assuming it is able to create a virus of some kind (or use some other scheme) to grab a fair chunk of the internet's computing power. I'll give 50% probability to each of those other two as well... making a 12.5% probability of extreme badness given evil intent. So the question is, how probable is bad intent in this scenario?
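
For the record, that 12.5% is just the three 50% guesses multiplied together (the variable names below are mine):

```python
# "Extreme badness given evil intent" is three independent 50% guesses
# multiplied together; change any one of them and the result scales directly.

p_escape  = 0.5   # the AI manages to get on the internet
p_botnet  = 0.5   # it grabs a fair chunk of the internet's computing power
p_we_lose = 0.5   # the world fails to defend itself at that point

p_extreme_badness_given_evil = p_escape * p_botnet * p_we_lose
print(p_extreme_badness_given_evil)   # 0.125
```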

By the way, this estimate puts current computers at around the level of a mouse. Do the best current AIs achieve this? Seems doubtful, I'd say. Mice are pretty smart. They accomplish a fair amount of visual recognition, and furthermore, they are able to put it to good use. (The second part is what we have the least ability to model, I think... goal-oriented systems that can flexibly use highly structured sensory information.) So, by the model so far, AI progress will more probably be sudden than gradual... someone will put together an algorithm capable of taking full advantage of the hardware, and things will change.

I'd say that might happen anywhere between 5 and 20 years from now. The outcomes if it happens in 5 years are very different from those if it happens in 20. If it happens in 5 years, I'd say good results are fairly certain. 10 years, and there is more concern. 20 years, and Very Bad Things have fair chances, maybe as high as my 10% "halt everything" level.

Let's take that arbitrary statement and mathematize it... 5 to 10 years = 0.1% chance of Really Bad Things, 10 to 15 = 1%, 15 to 20 = 10%.

Giving each of those three ranges a 1/3 probability, we get around 3.7% overall.
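
Written out, that's just an equal-weight average of the three guesses above:

```python
# Equal-weight average of the three arrival-window guesses.
p_bad = {"5-10 years": 0.001, "10-15 years": 0.01, "15-20 years": 0.10}

p_overall = sum(p_bad.values()) / len(p_bad)
print(f"{p_overall:.3f}")   # 0.037, i.e. about 3.7%
```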

But, given my assumptions, the best way to reduce risk appears to be trying to find a good algorithm quickly (favoring open research). At some point between 2015 and 2020, the risk goes beyond 1% (which I arbitrarily label as the point of "real concern"), and the strategy should turn towards making sure the first good-enough algorithm is also endowed with a safe goal system.

It should be obvious that this estimate is an extremely rough one. More later?

---[edit]---

One of the most obvious factors not considered here is the chance that the brain is badly designed enough that a human-level AI could be run on current PCs. The probability of this is nonzero, but if the brain were really so inefficient (i.e., 5 orders of magnitude of inefficiency), I would expect that human AI programmers would already be outperforming it. The idea that current AIs are not as smart as mice, despite having roughly as much hardware, suggests that brains are fairly well-engineered. (The statement "roughly as much hardware" here needs to be made more specific. However, as long as it is within 5 orders of magnitude, the argument makes some sense.)

1 comment:

  1. Nice post! I'm considering your last paragraph/edit. I think we can safely say that the human brain is not as efficient as it could be. You say 5 orders of magnitude is unlikely, which it might be, but I wonder how we could get a better grasp on the actual order-of-magnitude difference between brain architecture and the best architecture when it comes to optimizers. I have two quick thoughts:

    1) Humans are barely optimizers: the brain has a whole bunch of different goals all over the place, all of which are, as a whole, very very crudely approximating some perfect gene-propagation algorithm. Maybe a coherent explicit goal with an architecture that matches it would be vastly more efficient.

    2) Serializability: brains do massive parallelization of algorithms and processors do massive serialization of algorithms. This might sound weird, but maybe serialization is ultimately much better for efficient computing than parallelization. The brain is warped by 50Hz neurons and needs to cheat/cache rather profoundly to deal with that (or so I would guess).
