Sunday, May 21, 2006

Classification of AI

This classification system is somewhat based on cybernetics, somewhat based on the terminology of the AI field, and somewhat based on my own metaphysics. The elements are presented essentially in order of importance.

The idea is this: although the details of an AI differ, all artificial intelligence must accomplish one, several, or all of a few basic tasks. We can classify artificial intelligences by what requirements they fulfill; this tells us how "complete" they are.

I will detail the main elements by listing and then subdividing them. Each level could presumably be subdivided further.

The Two Elements:

1. Knowledge-seeking
2. Goal-seeking

A simple organism starts out with only direct reaction: it metabolizes food, and performs other acts of homeostasis. Eventually, simple knowledge with direct response is incorporated; various things start to count as pain and pleasure, and a simple awareness develops. Bacteria will often know to go towards food, for example. The knowledge and the goal are not learned by an individual, but learned by evolution and kept in "genetic memory".

Eventually, more complexity arises. We can characterize a mind as consisting of intertwined knowledge-base and goal-base.

Knowledge is applied to goal-oriented behavior. Without knowledge, a goal is useless; without a goal, knowledge is useless. Both need the other. These are the two complementary elements of mind.


The Four Elements:

1. Induction
2. Deduction

3. Emotion
4. Planning

Both knowledge and goals must be first created and then expanded.

Knowledge creation is called induction. It is pattern finding; it deals with identifying known objects and learning new ones.

Knowledge expansion is called deduction. It is logic; it deals with extrapolating further from patterns found in data.

Goal creation is called emotion. We create goals based on different sorts of pain and pleasure.

Goal expansion is called planning. It essentially harnesses logic to control the environment.


Induction is my chief aim. Deduction closely resembles the cold, hard logic that computers already do so well. A computer is able to use a model that we input; but to make a new model? That's something that still needs work. Induction is what interfaces directly with the outside world in the form of the senses. It deals with data. An AI that dealt only with induction would take data in and spit out an analysis. It would perform mathematical regressions, statistical analysis, or the like. Or perhaps it would be more advanced, finding more general types of patterns as well. But, essentially, it would only provide the user with a model of the data. Neural nets generally fall under this category as well, although they can also be used for pain/pleasure learning.
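As a minimal sketch of such an induction-only analyzer (the function name and the toy data are invented for illustration), consider fitting a line to raw data points: data goes in, and the only output is a model of that data.

```python
def fit_line(points):
    """Induce a linear model y = a*x + b from raw (x, y) data
    using ordinary least squares (closed-form solution)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Data in, model out: the "analysis" is the fitted pattern.
model = fit_line([(0, 1), (1, 3), (2, 5), (3, 7)])
print(model)  # (2.0, 1.0): the induced pattern y = 2x + 1
```

The program never acts on the model; it only hands the user a pattern found in the data, which is exactly the limit of a pure inducer.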

Deduction, however, is still something of an issue. There is not one version of "logic", despite the impression people may get. There are many versions of formal logic. Some have hidden contradictions, and there is no way to know which until the contradiction is found. On the other hand, a computer can do most acts of deduction rather well. A program that only performs deduction is rather common. A calculator is one. However, much more advanced types of deduction can also be carried out. Besides advanced mathematics, we can input models of the world and get various results. These models may be formulated in various logical languages. Bayesian networks are popular now. Formal logic is arguably more general.
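A one-function sketch of this kind of deduction, assuming a toy world model written as if-then rules (the rules here are invented): we input a model plus some facts, and the program extrapolates everything that follows.

```python
def forward_chain(facts, rules):
    """Deduce: repeatedly apply if-then rules (premises -> conclusion)
    to a set of known facts until nothing new follows."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# A toy world model: input facts plus rules, output everything entailed.
rules = [({"rain"}, "wet_ground"),
         ({"wet_ground", "freezing"}, "ice")]
print(forward_chain({"rain", "freezing"}, rules))  # adds "wet_ground", then "ice"
```

This is propositional forward chaining, one of the simplest logical languages; Bayesian networks or richer formal logics would replace the rule format, but the role in the system is the same.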

Goal creation is not strictly necessary for an A.I.; usually, the goal for the A.I. will be set. Even biological organisms do not really need goal creation; the simple, single goal of maximizing pleasure would do well enough. However, it is expedient to create new goals. We "like" or "dislike" things originally because they give us pleasure in some way, but we eventually disconnect the liking/disliking from the original reason. This gives us ethics, in a way. We decide that certain things are "good" and "bad", rather than simply to our advantage or against it. Our ultimate ideas of good and bad replace our sense of pain and pleasure. An AI based on emotion would probably be based on behaviorism; it would act according to the rewards and punishments received in the past. It might also go one level higher, adding fight-or-flight response and other emotions.
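A behaviorist goal-learner of this kind might look something like the following sketch, where the actions and the reward signal are hypothetical stand-ins for pleasure and pain: the agent acts according to the rewards and punishments it has received in the past.

```python
import random

def behaviorist_policy(trials, actions, reward):
    """Learn to prefer actions from reward/punishment alone:
    track the average reward of each action, then act greedily."""
    totals = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    for _ in range(trials):
        a = random.choice(actions)   # explore by trying actions
        totals[a] += reward(a)       # experience pleasure or pain
        counts[a] += 1
    # The learned "liking" for each action is its average past reward.
    return max(actions, key=lambda a: totals[a] / max(counts[a], 1))

# Hypothetical reward signal: "eat" is pleasurable, "flee" mildly costly.
reward = {"eat": 1.0, "flee": -0.2, "wait": 0.0}
print(behaviorist_policy(200, list(reward), lambda a: reward[a]))
```

With this deterministic reward signal the agent settles on "eat" almost surely; the point is that the preference is detached from any hard-wired goal and learned purely from experienced reward, which is the "emotion" element in miniature.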

Goal expansion carries out the whims of the emotions. It is heavily based in deduction, but may use more particular search strategies or logical languages. It is difficult to make an AI with only goal-elements, but planning can be the focus of an AI, and has been the focus of many.
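A minimal sketch of goal expansion as search, in an invented micro-world where the "actions" are arithmetic moves: given a goal handed down (here, reach the number 10), the planner harnesses logic to find a sequence of actions that achieves it.

```python
from collections import deque

def plan(start, goal, moves):
    """Goal expansion as search: find a sequence of actions that
    transforms `start` into `goal`, using breadth-first search."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for action, result in moves(state):
            if result not in seen:
                seen.add(result)
                frontier.append((result, path + [action]))
    return None  # no plan achieves the goal

# Hypothetical micro-world: a number we can increment or double.
def moves(n):
    return [("inc", n + 1), ("double", n * 2)]

print(plan(1, 10, moves))  # ['inc', 'double', 'inc', 'double']
```

Breadth-first search is the crudest of the "more particular search strategies"; real planners swap in heuristics or logical languages, but the shape stays the same: a goal in, a sequence of actions out.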


These could possibly be split up further; splitting up "induction" further, in fact, is the subject of the first post to this blog. Also, one might wish to add "data" and "actions" as elements, because the types of data and the types of action vary from AI to AI. One AI may be in a fully self-animating robot, while another may only have text output. One may read medical data, while another may read books. We get a chart something like this:

data -> { -> (induction -> deduction) -> (emotion -> planning) -> } -> actions


But there is one more element that we humans have: honesty. We have a natural tendency to say what we think. Talking is not generally a goal-oriented behavior; we do not plan everything that we say according to what it will get us, but default to the truth. We CAN say things because we decide doing so will accomplish something, but our automatic action is simply to say what's on our mind.

We view language not like just any data, but like data that talks about the world. Most inputs are merely part of the world. But language is a part of the world that represents another part. For an AI to truly use language, it must have this view.

It could possibly develop naturally, without hard-wiring; an intelligence could learn that what people say can often be used to predict the world. Similarly, an AI could learn to tell the truth by default as part of goal-creation; if it has to communicate its needs regularly, truth-telling could develop as a separate goal. However, it could also be programmed in specifically, in addition to or in place of other goal-elements. A calculator is a perfect example of a truth-speaker. It is an excellent conversationalist: it always knows just what to say; it always has a response (even if it's ERROR: DIVISION BY ZERO). Any deducer could have a similar conversational model, deducing and reporting facts from the things the other person says.

"It's sunny out today."

"There must not be many clouds."

"No. It rained last night. I guess that used them up."

"If the rainwater evaporates, there could be cloud formation."

"I saw three porcupines dead on the roadside."

"At that rate, the local population will be depleted by 2006."

Such an AI could have no goals whatever. All it needs to do is reason, and decide what facts are most relevant. On the other hand, one might want both goals and a truth-saying default. This would be possible as well, though much more difficult.
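Such a goal-less truth-speaker can be sketched in a few lines; the rules and canned phrasing below are invented, but the loop is just "take the statement as a fact, deduce, and report":

```python
def truthful_reply(statement, rules, known):
    """A goal-less conversationalist: accept what was said as a fact,
    deduce what follows, and report the first new conclusion."""
    known.add(statement)
    for premises, conclusion in rules:
        if conclusion not in known and premises <= known:
            known.add(conclusion)
            return "Then " + conclusion.replace("_", " ") + "."
    return "I have nothing to add."  # it always has *some* response

rules = [({"sunny"}, "few_clouds"),
         ({"rain", "evaporation"}, "cloud_formation")]
known = set()
print(truthful_reply("sunny", rules, known))  # "Then few clouds."
print(truthful_reply("rain", rules, known))   # "I have nothing to add."
```

Nothing here is a goal or a reward; the reply is simply whatever the deducer finds. Deciding which of several new conclusions is *most relevant* is the genuinely hard part, and this sketch dodges it by reporting the first one found.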


Oh, and one more: copycat learning. This also may be an important element in an intelligence. Whereas truth-saying is mainly deduction, copycat learning relies mainly on induction. Many chatbot AIs use the copycat strategy to mimic human speech.
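A copycat speaker can be sketched as a word-transition imitator (a simple Markov chain); the corpus here is a toy stand-in. The learning step is pure induction over observed speech, and the speaking step just replays the induced pattern.

```python
import random
from collections import defaultdict

def learn_chains(corpus):
    """Copycat learning: induce word-to-word transitions from examples."""
    chains = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            chains[a].append(b)
    return chains

def mimic(chains, start, length=5):
    """Generate speech by imitating the observed transitions."""
    words = [start]
    while len(words) < length and words[-1] in chains:
        words.append(random.choice(chains[words[-1]]))
    return " ".join(words)

corpus = ["the cat sat", "the dog sat", "the cat ran"]
chains = learn_chains(corpus)
print(mimic(chains, "the"))  # e.g. "the cat sat"
```

Chatbots built this way say plausible-sounding things without any model of truth at all, which is exactly what separates the copycat strategy from the truth-speaker above.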


Most importantly, we have interplay. These separate elements are not really so separate. Each element is constantly active. Each depends on the others. The ultimate behavior of the AI emerges as a synthesis of many elemental activities.
