Philosophy has a more direct relation to artificial intelligence than it has to other sciences. Both subjects require the formalization of common sense knowledge and repair of its deficiencies. Since a robot with general intelligence requires some general view of the world, deficiencies in the programmers' introspection of their own world-views can result in operational weaknesses in the program. Thus many programs, including Winograd's SHRDLU, regard the history of their world as a sequence of situations each of which is produced by an event occurring in a previous situation of the sequence. To handle concurrent events, such programs must be rebuilt and not just provided with more facts.
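The situation-based view such programs embody can be made concrete in a few lines. The sketch below is illustrative only: the representation of situations as snapshots and events as updates is invented here and corresponds to no particular program.

```python
# A minimal sketch of the ``history as a sequence of situations'' view:
# each event maps one situation to its successor, so a history is just
# the fold of an event list over an initial situation.

def result(event, situation):
    """Return the situation produced by one event; situations are immutable snapshots."""
    new = dict(situation)
    new.update(event)          # an event here is simply a dict of changes
    return new

def history(initial, events):
    """The whole history: the initial situation plus each successor in turn."""
    situations = [initial]
    for e in events:
        situations.append(result(e, situations[-1]))
    return situations

s0 = {"on_table": "blockA", "holding": None}
h = history(s0, [{"holding": "blockA"},
                 {"holding": None, "on": ("blockA", "blockB")}])
```

Note that the scheme has no place for two events occurring in the same situation; accommodating concurrency would require changing `history` itself, not merely supplying more facts.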
This section is organized as a collection of disconnected remarks some of which have a direct technical character, while others concern the general structure of knowledge of the world. Some of them simply give sophisticated justifications for some things that programmers are inclined to do anyway, so some people may regard them as superfluous.
1. Building a view of the world into the structure of a program does not in itself give the program the ability to state the view explicitly. Thus, none of the programs that presuppose history as a sequence of situations can make the assertion ``History is a sequence of situations''. Indeed, for a human to make his presuppositions explicit is often beyond his individual capabilities, and the sciences of psychology and philosophy still have unsolved problems in doing so.
2. Common sense requires scientific formulation. Both AI and philosophy require it, and philosophy might even be regarded as an attempt to make common sense into a science.
3. AI and philosophy both suffer from the following dilemma. Both need precise formalizations, but the fundamental structure of the world has not yet been discovered, so imprecise and even inconsistent formulations need to be used. If the imprecision merely concerned the values to be given to numerical constants, there wouldn't be great difficulty, but there is a need to use theories that are known to be grossly wrong in general, confining their use to the domains where they are valid. The above-mentioned history-as-a-sequence-of-situations is such a theory. The sense in which this theory approximates a more sophisticated theory hasn't been examined.
4. (McCarthy 1979a) discusses the need to use concepts that are meaningful only in an approximate theory. Relative to a Cartesian product co-ordinatization of situations, counterfactual sentences of the form ``If co-ordinate x had the value c and the other co-ordinates retained their values, then p would be true'' can be meaningful. Thus, within a suitable theory, the assertion ``The skier wouldn't have fallen if he had put his weight on his downhill ski'' is meaningful and perhaps true, but it is hard to give it meaning as a statement about the world of atoms and wave functions, because it is not clear what different wave functions are specified by ``if he had put his weight on his downhill ski''. We need an AI formalism that can use such statements but can go beyond them to the next level of approximation when possible and necessary. I now think that circumscription is a tool that will allow drawing conclusions from a given approximate theory for use in given circumstances without a total commitment to the theory.
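The co-ordinate-wise reading of such counterfactuals can be exhibited directly. The following sketch is purely illustrative: the state space, the `falls` predicate, and the values are invented for the example and are not part of any serious theory of skiing.

```python
from typing import NamedTuple

# A ``Cartesian product co-ordinatization'': a situation is a tuple of
# named co-ordinates.  The counterfactual "if co-ordinate x had value c
# and the others kept their values, p would hold" is then just the
# evaluation of p on the tuple with that one co-ordinate replaced.

class SkierState(NamedTuple):
    weight_on: str   # "uphill" or "downhill" ski
    speed: float

def falls(s: SkierState) -> bool:
    # A toy approximate theory: weight on the uphill ski causes a fall.
    return s.weight_on == "uphill"

def counterfactual(state, coord, value, p):
    """Evaluate p on state with one co-ordinate replaced, the rest fixed."""
    return p(state._replace(**{coord: value}))

actual = SkierState(weight_on="uphill", speed=20.0)
# "He wouldn't have fallen if he had put his weight on his downhill ski":
would_not_fall = not counterfactual(actual, "weight_on", "downhill", falls)
```

The counterfactual is meaningful only relative to the chosen co-ordinatization; nothing here picks out which wave functions correspond to replacing one co-ordinate, which is exactly the point of the remark.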
5. One can imagine constructing programs either as empiricists or as realists. An empiricist program would build only theories connecting its sense data with its actions. A realist program would try to find facts about a world that existed independently of the program and would not suppose that the only reality is what might somehow interact with the program.
I favor building realist programs with the following example in mind. It has been shown that the two-dimensional Life cellular automaton is universal as a computer and as a constructor. Therefore, there could be configurations of Life cells acting as self-reproducing computers with sensory and motor capabilities with respect to the rest of the Life plane. The program in such a computer could study the physics of its world by making theories and experiments to test them and might eventually come up with the theory that its fundamental physics is that of the Life cellular automaton.
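The Life rule itself takes only a few lines to state; the universality results mentioned above rest on nothing more than repeated application of this update. A minimal sketch, using a sparse-set representation chosen for brevity:

```python
import collections

def step(live):
    """One Life generation: live is a set of (x, y) cells; returns the next set."""
    # For every cell adjacent to a live cell, count its live neighbours.
    neighbours = collections.Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider, the best-known pattern that propagates across the plane.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):          # after 4 generations the glider reappears,
    g = step(g)             # shifted one cell diagonally
```

The glider's diagonal drift is exactly the kind of regularity a Life-world physicist could discover by experiment without ever being told the underlying rule.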
We can test our theories of epistemology and common sense reasoning by asking if they would permit the Life-world computer to conclude, on the basis of experiments, that its physics was that of Life. If our epistemology isn't adequate for such a simple universe, it surely isn't good enough for our much more complicated universe. This example is one of the reasons for preferring to build realist rather than empiricist programs. The empiricist program, if it were smart enough, would only end up with a statement that ``my experiences are best organized as if there were a Life cellular automaton and events isomorphic to my thoughts occurred in a certain subconfiguration of it''. Thus it would get a result equivalent to that of the realist program but more complicated and with less certainty.
More generally, we can imagine a metaphilosophy that has the same relation to philosophy that metamathematics has to mathematics. Metaphilosophy would study mathematical systems consisting of an ``epistemologist'' seeking knowledge in accordance with the epistemology to be tested and interacting with a ``world''. It would study what information about the world a given philosophy would obtain. This would depend also on the structure of the world and the ``epistemologist's'' opportunities to interact.
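A toy system of this kind is easy to exhibit. Everything below is invented for illustration: the world's hidden law, the candidate theories, and the probes the epistemologist is permitted to make.

```python
# The world hides a rule; the epistemologist may only perform experiments
# (choose an input, observe an output) and must decide which of its
# candidate theories survives the evidence.

def make_world(hidden_rule):
    """The world answers experiments but never reveals hidden_rule itself."""
    return lambda x: hidden_rule(x)

def epistemologist(experiment, candidate_theories, probes):
    """Discard every theory falsified by some experiment the agent can perform."""
    return [t for t in candidate_theories
            if all(t(x) == experiment(x) for x in probes)]

world = make_world(lambda x: x * 2)                 # the world's actual law
theories = [lambda x: x + x, lambda x: x ** 2, lambda x: x + 2]
surviving = epistemologist(world, theories, probes=range(5))
```

Even this trivial case shows the dependence noted above: if the probes are restricted to 0 and 2, the theory `x ** 2` also survives, so what a given epistemology can conclude depends on its opportunities to interact with the world.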
AI could benefit from building some very simple systems of this kind, and so might philosophy.
McCarthy, J. and Hayes, P. J. (1969). Some philosophical problems from the standpoint of artificial intelligence. In Meltzer, B. and Michie, D. (eds.), Machine Intelligence 4, pp. 463-502. Edinburgh: Edinburgh University Press. (Reprinted in B. L. Webber and N. J. Nilsson (eds.), Readings in Artificial Intelligence, Tioga, 1981, pp. 431-450; also in M. L. Ginsberg (ed.), Readings in Nonmonotonic Reasoning, Morgan Kaufmann, 1987, pp. 26-45; also in (McCarthy 1990).)
McCarthy, J. (1977). Minimal inference--a new way of jumping to conclusions. (Published under the title ``Circumscription--a form of nonmonotonic reasoning'', Artificial Intelligence, Vol. 13, Nos. 1-2, April 1980. Reprinted in B. L. Webber and N. J. Nilsson (eds.), Readings in Artificial Intelligence, Tioga, 1981, pp. 466-472; also in M. L. Ginsberg (ed.), Readings in Nonmonotonic Reasoning, Morgan Kaufmann, 1987, pp. 145-152; also in (McCarthy 1990).)
McCarthy, J. (1979a). Ascribing mental qualities to machines. In Ringle, M. (ed.), Philosophical Perspectives in Artificial Intelligence. Humanities Press. (Reprinted in (McCarthy 1990).)
McCarthy, J. (1979b). First order theories of individual concepts and propositions. In Hayes, J. E., Michie, D. and Mikulich, L. I. (eds.), Machine Intelligence 9. Ellis Horwood. (Reprinted in (McCarthy 1990).)
McCarthy, J. (1990). Formalizing Common Sense. Ablex.
Moore, R. C. (1977). Reasoning about knowledge and action. 1977 IJCAI Proceedings.