The review editors asked me to say what I think the obstacles are to human-level AI by the logic route and why I think they can be overcome. If anyone could make a complete list of the obstacles, this would be a major step towards overcoming them. What I can actually do is much more tentative.
Workers in logic-based AI hope to reach human-level intelligence in a logic-based system. Such a system would, as proposed in [McCarthy, 1959], represent what it knew about the world in general, about the particular situation and about its goals by sentences in logic. Other data structures, e.g. for representing pictures, would be present together with programs for creating them, manipulating them and deriving sentences that describe them. The program would perform actions that it inferred were appropriate for achieving its goals.
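To make the architecture concrete, here is a minimal sketch in Python of an agent whose knowledge of the world, situation and goals are all sentences (ground atoms and Horn rules) and whose actions are whatever it infers to be appropriate. Every predicate and rule in it is invented for the illustration; it is a sketch of the idea, not a reconstruction of the 1959 proposal.

```python
# A toy logic-based agent: knowledge of the world, the current situation,
# and goals are all represented as sentences, and the agent performs the
# actions it infers to be appropriate. All predicates here are invented.

facts = {("at", "robot", "door"), ("locked", "door"), ("has", "robot", "key")}

# Horn rules: (premises, conclusion), read as "premises imply conclusion".
rules = [
    ([("locked", "door"), ("has", "robot", "key")], ("can", "robot", "unlock")),
    ([("can", "robot", "unlock"), ("at", "robot", "door")], ("do", "unlock", "door")),
]

def forward_chain(facts, rules):
    """Apply the rules repeatedly until no new sentences can be inferred."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if all(p in derived for p in premises) and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Act on every inferred "do" sentence.
for sentence in forward_chain(facts, rules):
    if sentence[0] == "do":
        print("action:", sentence[1:])  # -> action: ('unlock', 'door')
```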
Logic-based AI is the most ambitious approach to AI, because it proposes to understand the common sense world well enough to express what is required for successful action in formulas. Other approaches to AI do not require this. Those based on neural nets, for example, hope that a net can be made to learn human-level capability without the people who design the original net knowing much about the world in which their creation learns. Maybe this will work, but then they may have an intelligent machine and still not understand how it works. This prospect seems to appeal to some people.
Common sense knowledge and reasoning is at the core of AI, because a human or an intelligent machine always starts from a situation in which the information available to it has a common sense character. Mathematical models of the traditional kind are embedded in common sense. This was not obvious, and many scientists supposed that the development of mathematical theories would obviate the need for common sense terminology in scientific work. Here are two quotations that express this attitude.
One service mathematics has rendered to the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled `discarded nonsense'. --E. T. Bell
All philosophers, of every school, imagine that causation is one of the fundamental axioms or postulates of science, yet, oddly enough, in advanced sciences such as gravitational astronomy, the word `cause' never occurs ... The law of causality, I believe, like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm. --B. Russell, ``On the Notion of Cause'', Proceedings of the Aristotelian Society, 13 (1913), pp. 1-26.
The ``Nemesis'' theory of the mass extinctions holds that our sun has a companion star that every 26 million years comes close enough to disrupt the Oort cloud of comets, some of which then come into the inner solar system and bombard the earth, causing extinctions. The Nemesis theory involves gravitational astronomy, but it doesn't propose a precise orbit for the star Nemesis, still less orbits for the comets in the Oort cloud. Therefore, the theory is formulated in terms of the common sense notion of causality.
It was natural for Russell and Bell to be pleased that mathematical laws had become available for certain phenomena that had previously been treated only informally. However, they were interested in a hypothetical information situation in which the scientist has full knowledge of an initial configuration, e.g. in celestial mechanics, and needs to predict the future. It was only when people began to work on AI that it became clear that general intelligence requires machines that can handle the common sense information situation, in which concepts like ``causes'' are appropriate. Even then it took 20 years before it became apparent that nonmonotonic reasoning both could be and had to be formalized.
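What makes the reasoning ``nonmonotonic'' is that adding a premise can force the retraction of a conclusion, which never happens in ordinary logic. The following toy sketch, using an invented birds-fly default rather than any particular formalism, shows a default conclusion being withdrawn when the knowledge base grows.

```python
# Nonmonotonic reasoning in miniature: a conclusion drawn by default is
# retracted when new information arrives. (Invented toy example; not a
# rendering of any particular formalism such as circumscription.)

def flies(kb, bird):
    """Default rule: a bird flies unless the kb marks it as abnormal."""
    return (bird, "bird") in kb and (bird, "abnormal") not in kb

kb = {("tweety", "bird")}
print(flies(kb, "tweety"))      # True: by default, birds fly

kb.add(("tweety", "abnormal"))  # learn that Tweety is, say, a penguin
print(flies(kb, "tweety"))      # False: the default conclusion is withdrawn
```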
Making a logic-based human-level program requires enough progress on at least the following problems:
One of these problems is introspection, a system's ability to observe and reason about its own mental state. Unfortunately, too many people have concentrated on self-referential sentences. It's a cute subject, but not relevant to human introspection or to the kinds of introspection we will have to make computers do.
Dreyfus asks why anyone should believe all this can be done. It seems as good a bet as any other difficult scientific problem. Recently progress has become more rapid, and many people have entered the field of logical AI in the last 15 years. Besides those whose papers I referenced, these include Raymond Reiter, Leora Morgenstern, Donald Perlis, Ernest Davis, Murray Shanahan, David Etherington, Yoav Shoham, Fangzhen Lin, Sarit Kraus, Matthew Ginsberg, Douglas Lenat, R. V. Guha, Hector Levesque, Jack Minker, Tom Costello, Erik Sandewall, Kurt Konolige and many others. There aren't just a few ``die-hards''.
However, reaching human-level AI is not a problem within engineering range of solution. Very likely, fundamental scientific discoveries are still to come.