Artificial Intelligence: The Very Idea. By JOHN HAUGELAND.
The MIT Press, Cambridge, MA, 1985. xii + 290 pp. $14.95.
ISBN 0-262-08153-9. A Bradford Book.
Alas, John Haugeland has got the Very Idea wrong and made a few other important errors. Nevertheless, this is an excellent book because of the number of things he has got right, his fair-mindedness and his excellent explanations of the connections between AI and older philosophical issues.
His first error is regarding AI as a branch of biology, whereas it's really a branch of computer science---somewhat related to a branch of biology. As a branch of computer science, AI concerns how a machine should decide how to achieve goals under certain conditions of information and computational resources. In this respect it's like linear programming. Indeed if achieving goals always amounted to finding the maximum of a linear function given a collection of linear inequality constraints, then AI would be included in linear programming. However, the problems we want machines to solve are often quite different.
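For readers who want the analogy spelled out, the standard form of a linear programming problem---stated here only as a reminder, not something taken from the book---is
$$\max_x\; c^T x \quad \mbox{subject to} \quad Ax \le b,\ x \ge 0,$$
and the point is that the goals AI must serve rarely present themselves in so tidy a form.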
The key question in describing AI is characterizing the problems that require intelligence to solve and the methods available for solving them. For example, the roles of pattern matching, search and learning from experience need to be discussed. Haugeland doesn't attempt any general discussion of this, although many of his examples are relevant.
Haugeland's second mistake is to omit discussing mathematical logic as a way of representing the machine's information about the world and the consequences of action in the world. Using logic is what gives AI a chance to match the modularity of human representation of information, e.g. the fact that we can use information provided by someone long since dead who had no idea how it was going to be used.
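A schoolbook illustration of this modularity (my example, not Haugeland's): the general rule might have been written down by Aristotle and the particular fact by a modern observer, neither anticipating the question, yet a machine holding both in logical form can combine them:
$$\forall x\,(\mathit{Human}(x) \supset \mathit{Mortal}(x)),\qquad \mathit{Human}(\mathit{Socrates}) \;\vdash\; \mathit{Mortal}(\mathit{Socrates}).$$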
The third mistake is to omit discussing expert systems. AI has resulted in a certain technology that has both capabilities and limitations that represent the current state of AI as a science. He justifies this omission by remarking that expert systems have no psychological pretensions. I suppose this dismissal is a consequence of regarding AI as biology rather than computer science.
Moreover, it seems to me that GOFAI (his abbreviation of ``good old-fashioned AI'') doesn't rest on the theory that intelligence is computation, an assertion whose vagueness makes me nervous. The theory is that intelligent behavior can be realized computationally. The extent to which human intelligence is realized digitally is a matter for psychologists and physiologists. For example, hormones may intervene in human thought processes in an analog way and may have chemical roles beyond communication, e.g. the same substance may digest food and signal that a person is full.
Here are some of the things he has got right.
First of all, Haugeland has got right the polarization between the scoffers and the boosters of AI---the self-assurance of both sides about the main philosophical issue. The scoffers say it's ridiculous---``like imagining that your car (really) hates you''---while the boosters hold that it's only a matter of time until we understand intelligence well enough to program it. This reviewer is a booster and accepts Haugeland's characterization subject to some qualifications not important enough to mention.
Second, he's right about the abstractness of the AI approach to intelligence. We consider it inessential whether the intelligence is implemented by electronics or by neurochemical mechanisms or even by a person manipulating pieces of paper according to rules he can follow but whose purpose he doesn't understand.
The discussion of the relation between arguments about the possibility of AI and philosophical arguments going back to Aristotle, Hobbes, Descartes, Leibniz and Hume is perhaps the main content of the book. It shows that many issues raised by these philosophers are alive today in an entirely different technological context. However, it's hard to trace any influence of this philosophy on present AI thought or even to argue that reading Hobbes would be helpful. What people are trying to do today is almost entirely determined by their experience with modern computing facilities rather than by old arguments, however inspired.
Haugeland doesn't discuss very much the influence of AI on philosophical thought except to acknowledge its existence. There is much more about that in Aaron Sloman's {\it The Computer Revolution in Philosophy}, although Sloman's arguments don't seem to convince many of his former colleagues in philosophy.
This is the end of the review proper. However, it had to be shortened for publication, and a lot of the material in the following notes couldn't be incorporated. The notes are rough, and you shouldn't read them unless you are very interested.
John Haugeland, a philosopher, has got the ``very idea'' essentially correct in this determinedly non-technical book. Unfortunately, discussing the philosophy of AI non-technically imposes limitations as severe as those of a non-technical discussion of the philosophy of mathematics or quantum mechanics. Besides that, he omits to discuss the use of mathematical logic in AI---which could be discussed non-technically to a considerable extent. We begin with the positive content of the book, after which we will discuss the limitations of the non-technical approach which characterizes almost all writing about AI by philosophers, even in the professional philosophical literature.
Haugeland perhaps ignores Tarskian semantics---perhaps in both senses.
p. 98 Chaitin
p. 112 GOFAI doesn't rest on the theory that intelligence is computation. The theory is that intelligent behavior can be realized computationally. The extent to which human intelligence is realized digitally is a matter for psychologists and physiologists. For example, the chemistry of hormones may intervene in human thought processes in an analog way.
The book is misleadingly non-technical.
Logic is ignored.
The state of AI technology.
The idea of AI doesn't actually depend on whether human thinking is essentially computational, although I think it is substantially true. Suppose that an important part of thinking is analog. The most plausible hypothesis in that direction is that the quantitative amounts of the different hormones that are released determine certain decisions. Then we might be prepared to supplement the digital computations representing reasoning by a digital simulation of the important analog processes. This would work unless these processes were too extensive to be economically simulated.
Artificial intelligence is a science under development. It has substantial conceptual problems. Under these conditions it is not an easy task to summarize the field for the layman---or even for the practitioner. Maybe it's as though someone tried to summarize atomic physics in 1910.
Physical symbol hypothesis
How is meaning possible in a physical/mechanical universe?
An example of how philosophy gets itself entangled.
The discussion of meaning would benefit from inclusion of Tarskian model-theoretic semantics.
Perhaps the book's biggest weakness is that it gives little picture of AI as a research activity. AI researchers only rarely ask what intelligence is, while they spend most of their time asking how computers can be made to do something in particular.
This is illustrated by a problem that has been unsolved since the 1950s. Arthur Samuel (19xx,19xx) wrote programs for playing checkers that learned to optimize the coefficients of the linear polynomial that evaluated positions, e.g. it learned the best weights to be ascribed to the numbers of kings and single men, control of the center and the back rows and other functions of position discussed in the books about checkers. It replayed master games and adjusted its coefficients to predict the moves considered good. However, checker books also contain information that can't readily be fitted into a position evaluation function.
For example, a king can hold two single men of the opposite side against the edge of the board so that neither can advance without being captured. If the opponent allows this to persist until both sides king their remaining men, the side holding the other's two singles will outnumber the opponent by one on the rest of the board, and this suffices to force exchanges and win. Samuel's program ``would like'' to advance the two singles, but the learned evaluation function doesn't give it a high priority and the actual disaster may be 30 moves in the future---too far for lookahead. It wouldn't be difficult to modify the program to take this specific phenomenon into account, but humans learn such things on the fly.
Easier than making a program that can learn this king-holds-two-singles stratagem should be making a program that can be told about it. Whoever tells the computer about the stratagem should not have to know the details of the program. Otherwise, we have an analogy to education by brain surgery.
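To make the difficulty concrete, here is a minimal sketch---with hypothetical feature names and a crude update rule, not Samuel's actual code---of a learned linear position evaluation of the kind described above. The king-holds-two-singles stratagem has no natural expression as one of these features, because its payoff lies many moves ahead and is not a simple function of such counts.
\begin{verbatim}
# Sketch of a linear position evaluator with learned coefficients.
# Feature names and the update rule are illustrative, not Samuel's.

FEATURES = ["kings", "single_men", "center_control", "back_row_control"]

def evaluate(position, weights):
    """Value of a position = weighted sum of its feature values."""
    return sum(weights[f] * position[f] for f in FEATURES)

def adjust(weights, position, master_value, rate=0.01):
    """Nudge the coefficients so the evaluation better predicts the
    value implied by a master move (a crude gradient step)."""
    error = master_value - evaluate(position, weights)
    return {f: weights[f] + rate * error * position[f] for f in FEATURES}

# Example on made-up numbers:
w = {f: 0.0 for f in FEATURES}
p = {"kings": 1, "single_men": -1, "center_control": 2, "back_row_control": 0}
w = adjust(w, p, master_value=0.5)
\end{verbatim}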
A related unsolved problem is how to make programs, specifically game playing programs, decompose a situation into subsituations that can be analyzed separately and whose interaction is subsequently analyzed. Humans do this all the time, but it seems that quite good checkers and chess can be played without it---taking advantage of the computer's high speed. It is essential in the Japanese game of go, so there are no good go programs yet.
How does this book differ from the usual ``Philosophy of X''?
Quotes:
p.4 According to a central tradition in Western philosophy, thinking (intellection) essentially is rational manipulation of mental symbols (viz., ideas).
p.5 Artificial Intelligence in this sense (as a branch of cognitive science) is the only kind we will discuss. For instance, we will pay no attention to commercial ventures (so-called ``expert systems'', etc.) that make no pretense of developing or applying psychological principles. We also won't consider whether computers might have some alien or inhuman kind of intellect (like Martians or squids?). My own hunch, in fact, is that anthropomorphic prejudice, ``human chauvinism,'' is built into our very concept of intelligence. This concept, of course, could still apply to all manner of creatures; the point is merely that it's the only concept we have---if we escaped our ``prejudice,'' we wouldn't know what we were talking about.
p.5 [a muddle] But the lesson goes deeper: if Artificial Intelligence really has little to do with computer technology and much more to do with abstract principles of mental organization, then the distinctions among AI, psychology, and even philosophy of mind seem to melt away. One can study those basic principles using tools and techniques from computer science, or with the methods of experimental psychology, or in traditional philosophical terms---but it's the same subject in each case.
p.39 On the other hand, if the manipulator does not pay attention to the meanings, then the manipulations can't be instances of reasoning---because what's reasonable depends crucially on what the symbols mean.
Does a calculator pay attention to the meanings? What about the HP-28c that has commutativity as an explicit rule? Paying attention to the meanings leads to the infinite regress of Achilles and the Tortoise with modus ponens. At some point the logic must be built into the machinery.
p.93 Artificial Intelligence embraces this fundamental approach---to the point, indeed, of arguing that thoughts are symbolic. But that doesn't imply that people think in English (or any ``language'' very similar to English); AI maintains only that thinking is ``like'' talking in the more abstract sense that it occurs in a symbolic system---probably incorporating a mode/content distinction. Such a system could be vastly different from ordinary languages (in richness and subtlety or whatever) and still have the crucial abstract features of compositionality and arbitrary basic meanings.
p. 100 ``... because chess is not an interpreted system'' Capablanca at age three succeeded in interpreting chess (from watching his father play).
p.112 GOFAI as a branch of cognitive science, rests on a particular theory of intelligence and thought---essentially Hobbes's idea that ratiocination is computation. We should appreciate that this thesis is empirically substantive; it is not trivial or tautological, but actually says something about the world that could be false.
No, it's more like an open mathematical conjecture.
p. 120 And that's exactly the sort of fact that inclines us to suppose that people understand their thoughts, whereas paper and floating magnets are utterly oblivious.
Perhaps understanding one's thoughts is like metamathematics; it can be done to any level, but at the highest level to which it is actually done, there are thoughts that are not understood, because to understand them one would have to ascend to another level, and one hasn't taken the time.
p.174 [What is said today about the 1950s language translators is, I suspect, a legend. I mean about their motivations and what they thought they could get away with. Also about the utility of their output.]
[Cybernetics may not be so bad if used in a symbolic domain. Haugeland is right that its mathematical stability conditions provide only a few metaphors of limited applicability]. GPS is cybernetics.
p.194 [There's a lot more to be learned from microworlds---even from chess]. Drosophila.
p. 7 Why IQ is irrelevant
Hobbes?
relating AI to traditional philosophical issues
p.98 et seq
arguments from cryptography
p. 208 reasonable treatment of the frame problem
p. 210 - some good remarks on dogs vs. trees as patterns.
p. 216 - a mathematical theory of belief has to have its 0 and 1.
p. 242 - simulation as a way of understanding other systems is often neither necessary nor sufficient. He will never do X. He will try to achieve his goals. Universal statements about behavior.
p. 244 - The importance of knowledge representation was discussed in my 1959 paper {\it Programs with Common Sense}.
p. 251 - GPS was a hypothesis about intelligence.
Mention Sloman.
The Turing test
The Turing test shouldn't be used as a criterion for artificial intelligence, and I don't think that Turing intended it that way. Instead it serves primarily as a criterion for whether it is worthwhile talking further with a philosopher of mind, e.g. a philosopher who publishes in {\it Mind}. Whoever won't admit that a machine that can pretend to be a person is intelligent has a non-empirical criterion for intelligence, and it's difficult to see how a scientific discussion with him can be continued.
However, if one actually wants to set up the test, it must be done carefully. Otherwise, it may be possible to fool the interrogator easily. The interrogator should be sophisticated about AI and know what is currently considered difficult for machines. Of course, the interrogator should know what task he is undertaking. Otherwise some people may think they are dealing with a person when interacting with a quite ordinary expert system.
Some puzzles of the philosophy of mind can be made to assume their characteristic form for systems simple enough to understand completely. We regard the question ``Does a pocket calculator add numbers?'' as entirely analogous to the question ``Does a human or AI program manipulate ideas?''.
Does the fact that 7 + 8 = 15 make the calculator print 15 when the user hits the keys 7, 8, = in succession? This is analogous to the question of whether thoughts have a causal role in humans or are merely an epiphenomenon. The latter question is no more likely to be fruitful than the former.
Does a calculator ``pay attention to the meaning of numbers''? p. 39
the very idea is wrong, but
cs vs. biology
travelling example
Samuel example
skates over the top
How does the information situation in which intelligent behavior is required differ from that of conventional forms of optimization and control? The full answer isn't known, so we must make do with some characterizations and some examples.
AI is relevant when the information situation is open-ended, when it must be possible to add new ideas at any time.
Years ago, when I was more naive, I hoped to get some useful work from philosophers in developing artificial intelligence (AI). After all, AI is concerned with some of the same epistemological problems philosophers have been working on for 2,000 years. For example, we both face the problem of reconciling making choices with determinism. Thus a computer program has to consider its options, i.e. to regard itself as having a free choice to make, even though it is a deterministic machine and may very well know it. It gradually became apparent that AI was unlikely to get this help from philosophers---at least from those of the present generation.
Haugeland's book doesn't actually offer any help, but it offers more understanding than previous efforts by philosophers except perhaps Aaron Sloman's {\it The Computer Revolution in Philosophy}. The latter makes big claims for the effect AI should have on philosophical thinking. I mainly agree with the claims but have to admit that Sloman can't yet sufficiently substantiate them to have much influence on his fellow philosophers.
Both the understanding and the potential help are considerably vitiated by the fact that Haugeland has got the Very Idea of AI wrong. He treats it as a branch of biology, i.e. as concerned with how people think, rather than as a branch of computer science, concerned with how goals can be achieved under certain complex conditions of information and capability for action. The biology and the computer science are related by the fact that certain problems require certain methods whether the solver is human or a computer program.
AI is a branch of computer science just as is linear programming. Indeed if all goals presented themselves as minimizing linear functionals subject to linear inequality constraints, then AI would be included in linear programming. However, the world is more complex than that, and so is the information available when a goal is undertaken and that can be made available by information seeking actions. Therefore, intelligent behavior requires many abilities. There are more than I can list here---indeed more than AI has so far identified---but the following have been extensively studied.
1. To recognize patterns in phenomena, i.e. to assign values to variables in patterns according to the way the pattern matches some aspect of the phenomenon.
2. To search a space of abstract objects for one that fulfills a desired condition, e.g. to search for a strategy of action that achieves a goal, e.g. for a move that wins in chess or a lemma that helps prove a theorem. (A sketch of this ability follows the list.)
3. To represent information about the world in a way that allows new information to be acquired without knowing how it is going to be used. Information about how information can be obtained must also be represented.
4. To infer from facts the answers to questions.
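As a sketch of ability 2---an illustration of mine, not anything from the book---here is a generic breadth-first search over a space of candidate objects for one that fulfills a given condition; the winning chess move and the helpful lemma above are instances of the ``candidate''.
\begin{verbatim}
# Generic search of a space of abstract objects for one that
# fulfills a desired condition (illustrative sketch only).
from collections import deque

def search(start, successors, satisfies):
    """Breadth-first search for an object satisfying the condition."""
    frontier, seen = deque([start]), {start}
    while frontier:
        candidate = frontier.popleft()
        if satisfies(candidate):
            return candidate
        for nxt in successors(candidate):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None

# Toy use: the smallest multiple of 7 whose digits sum to 20.
print(search(7, lambda n: [n + 7], lambda n: sum(map(int, str(n))) == 20))
\end{verbatim}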
Haugeland is mainly concerned with whether AI is philosophically ok as a scientific subject. He doesn't reach a definite conclusion. From this point of view, Hobbes and Leibniz are AI's philosophical founders, because both of them wanted to regard thinking as calculation with physical representations of symbols, and Leibniz even wanted to do it by machine. However, Descartes with his dualism and the idealists raised various difficulties, and if they were right, it's hard to see how AI would be possible. The entire discussion can be carried out on a lofty plane not requiring much identification of specific intellectual abilities or the specific problems that AI is currently researching.
Within the limitations he implicitly sets for himself, Haugeland does a marvelous job.