Q. Why bother stating philosophical presuppositions? Why not just get on with the AI?
A. AI shares concerns with metaphysics, epistemology, philosophy of mind, and other branches of philosophy. This is because AI concerns the creation of an artificial mind. However, AI has to treat these questions in more detail than philosophers customarily consider relevant.
In principle, an evolutionary approach need not involve philosophical presuppositions. However, many putative evolutionary approaches are crippled by impoverished philosophical assumptions. For example, such systems often admit only patterns in appearances and cannot even represent the reality behind the appearances. [McCarthy ] presents a challenge to learning systems to learn the reality behind appearance.
AI research not based on stated philosophical presuppositions usually turns out to be based on unstated philosophical presuppositions. These are often so wrong as to interfere with developing intelligent systems.
That it should be possible to make machines as intelligent as humans involves some philosophical premises, although the possibility is probably accepted by a majority of philosophers. The way we propose to build intelligent machines, i.e. via logical AI, makes more presuppositions, some of which may be new.
This chapter concentrates on stating the presuppositions and their relations to AI without much philosophical argument. A later chapter presents arguments and discusses other opinions.
A robot also needs to believe that the world exists independently of itself. Science tells us that humans evolved in a world which formerly did not contain humans. Given this, it is odd to regard the world as a human construct. It is even more odd to program a robot to regard the world as its own construct. What the robot believes about the world in general doesn't arise for the limited robots of today, because the languages they are programmed to use can't express assertions about the world in general. This limits what they can learn or can be told--and hence what we can get them to do for us.
In every case, we try to design the robot so that what it believes about the world is as accurate as possible, though not usually as detailed as possible. Debugging and improving the robot includes detecting false beliefs about the world and changing the way it acquires information so as to maximize the correspondence between what it believes and the facts of the world. The terms the robot uses to refer to entities need to correspond to those entities so that its sentences will express facts about them. We have in mind both material objects and other entities, e.g. plans.
Already this involves a philosophical presupposition, namely the correspondence theory of truth. AI also needs a correspondence theory of reference, i.e. the view that a mental structure can refer to an external object and can be judged by the accuracy of the reference.
As with science, a robot's theories are tested experimentally, but the concepts robots use are often not defined in terms of experiments. Their properties are partially axiomatized, and some axioms relate terms to observations.
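As a rough illustration of partial axiomatization (the particular axioms and predicate names here are ours, invented for the example rather than taken from any specific formalization), a robot's concept of fragility might be constrained by axioms such as

   ∀x (glass(x) → fragile(x))
   ∀x (fragile(x) ∧ struck-hard(x) → breaks(x)),

where being struck and breaking are observable events. Neither axiom gives an if-and-only-if definition of fragile; together they merely constrain the term and relate it to observations, and further axioms can be added as the robot learns more.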
The important consequence of the correspondence theory is that when we design robots, we need to keep in mind the relation between appearance (the information coming through the robot's sensors) and reality. Only in certain simple cases, e.g. the position in a chess game, does the robot have sufficient access to reality for this distinction to be ignored.
Some robots react directly to their inputs without memory or inferences. It is our scientific (i.e. not philosophical) contention that these are inadequate for human-level intelligence, because the world contains too many important entities that cannot be observed directly.
A robot that reasons about the acquisition of information must itself be aware of these relations. In order that a robot should not always believe what it sees with its own eyes, it must distinguish between appearance and reality. [McCarthy ] presents a challenge problem requiring the discovery of reality behind appearance.
From Socrates on, philosophers have found many inadequacies in common sense usage, e.g. in common sense notions of the meanings of words. The corrections are often elaborations, making distinctions that are blurred in common sense usage. Unfortunately, there is no end to philosophical elaboration, and the theories become very complex. However, some of the elaborations seem essential to avoid confusion in some circumstances. Here's a candidate for the way out of the maze.
Robots will need both the simplest common sense usages and the ability to tolerate elaborations when required. For this we have proposed two notions: contexts as formal objects [McCarthy 1993, McCarthy and Buvac 1997] and elaboration tolerance [McCarthy 1999].
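To give a rough sketch of the first notion, the basic formula of the context formalism of [McCarthy 1993] is ist(c, p), asserting that the proposition p is true in the context c. For example,

   ist(context-of("Sherlock Holmes stories"), "Holmes is a detective")

is itself asserted in some outer context, and lifting axioms relate what holds in one context to what holds in another. Elaboration tolerance is then the requirement that such a formalization can be extended to take new distinctions and phenomena into account without being rewritten from scratch.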
To use this information, the English (or its logical equivalent) is just as essential as the formula itself, and common sense knowledge of the world is needed to make the measurements required to use or verify the formula.
We often consider several related concepts where others have tried to get by with one. Suppose a man sees a dog. Is seeing a relation between the man and the dog, or a relation between the man and an appearance of a dog? Some purport to refute calling seeing a relation between the man and the dog by pointing out that the man may actually be seeing a hologram or a picture of a dog. AI needs the relation between the man and the appearance of a dog, the relation between the man and the dog, and also the relation between dogs and their appearances. None of these is the most fundamental.
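A minimal sketch of the three relations, with relation names of our own choosing rather than any standard ones:

   sees(m, d)             the man m sees the dog d
   appears-to(a, m)       the appearance a is present to the man m
   appearance-of(a, d)    a is an appearance of the dog d

A theory connecting them might say that, under normal conditions, sees(m, d) holds when some a satisfies both appearance-of(a, d) and appears-to(a, m), whereas a hologram supplies an a for which appears-to(a, m) holds although no dog d satisfies appearance-of(a, d).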
To a child, all kinds are natural kinds, i.e. kinds about which the child is ready to learn more. The idea of a concept having an if-and-only-if definition comes later, perhaps at ages 10-13. Taking that further, natural kind seems to be a context-relative notion. Thus some part of income tax law is a natural kind to me, whereas it might have an if-and-only-if definition for an expert.
Curiously enough, many of the notions studied in philosophy are not natural kinds, e.g. proposition, meaning, necessity. When they are regarded as natural kinds, fruitless arguments about what they really are take place. AI needs these concepts but must be able to work with limited notions of them.
Our emphasis on the first-class character of approximate entities may be new. It means that we can quantify over approximate entities and also express how an entity is approximate. An article on approximate theories and approximate entities is forthcoming.
We discuss our choices and those of robots by considering non-deterministic approximations to a deterministic world, or at least to a world more deterministic than the approximation requires. The philosophical name for this view is compatibilism. I think compatibilism is a prerequisite for AI research to reach human-level intelligence.
In practice, regarding an observed system as having choices is necessary whenever a human or robot knows more about the relation of the system to its environment than about what goes on within the system. This is discussed in [McCarthy 1996].
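A rough sketch of such an approximation, in generic situation-calculus notation rather than the specific formalism of [McCarthy 1996]: the approximating theory may axiomatize

   Result(a, s) = the situation that results when the observed system does action a in situation s

for the several actions a the system is regarded as able to choose among, while saying nothing about the internal mechanism that determines which action is actually done. The underlying world may be entirely deterministic; the non-determinism lies only in the approximation.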
Confusion about this is the basis of Searle's Chinese room fallacy [Searle 1984]. The man in the hypothetical Chinese room is interpreting the software of a Chinese personality. Interpreting a program does not require having the knowledge possessed by that program. This would be obvious if people could interpret other personalities at a practical speed, but Chinese room software interpreted by an unaided human would run at only a tiny fraction of the speed of an actual Chinese speaker.
If one settles for a Chinese conversation on the level of Eliza [Weizenbaum 1965], then, according to Weizenbaum (1999, personal communication), the program can be hand-simulated with reasonable performance.