PROF. Y. BAR-HILLEL: Dr. McCarthy's paper belongs in the Journal of Half-Baked Ideas, the creation of which was recently proposed by Dr. I. J. Good. Dr. McCarthy will probably be the first to admit this. Before he goes on to bake his ideas fully, it might be well to give him some advice and raise some objections. He himself mentions some possible objections, but I do not think that he treats them with the full consideration they deserve; there are others he does not mention.
For lack of time, I shall not go into the first part of his paper, although I think that it contains a lot of highly unclear philosophical, or pseudo-philosophical, assumptions. I shall rather spend my time commenting on the example he works out in his paper at some length. Before I start, let me voice my protest against the general assumption of Dr. McCarthy -- slightly caricatured -- that a machine, if only its program is specified with a sufficient degree of carelessness, will be able to carry out satisfactorily even rather difficult tasks.
Consider the assumption that the relation he designates by ``at'' is transitive (page ). Since he takes both ``at(I,desk)'' and ``at(desk,home)'' as premises, I presume -- though this is never made quite clear -- that ``at'' means something like being-a-physical-part-of or in-the-immediate-spatial-neighborhood-of. But then the relation is clearly not transitive. If A is in the immediate spatial neighborhood of B and B in the immediate spatial neighborhood of C, then A need not be in the immediate spatial neighborhood of C. Otherwise everything would turn out to be in the immediate spatial neighborhood of everything, which is surely not Dr. McCarthy's intention. Of course, starting from false premises one can still arrive at right conclusions. We do such things quite often, and a machine could do it. But it would probably be bad advice to allow a machine to do such things consistently.
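Prof. Bar-Hillel's point can be made concrete with a toy model. The sketch below is my own illustration, from neither paper: it reads ``in the immediate spatial neighborhood of'' as ``within distance 1'' of points on a line (both the threshold and the geometry are assumptions made purely for illustration), and shows the relation failing transitivity.

```python
# Illustrative sketch: model "in the immediate spatial neighborhood of"
# as "within distance 1" on a line. Threshold and geometry are
# assumptions chosen only to exhibit the failure of transitivity.

def near(a: float, b: float, radius: float = 1.0) -> bool:
    """True if a is in the immediate neighborhood of b."""
    return abs(a - b) <= radius

A, B, C = 0.0, 1.0, 2.0

print(near(A, B))  # True:  A is near B
print(near(B, C))  # True:  B is near C
print(near(A, C))  # False: A is 2 units from C, so transitivity fails
```

Were the relation forced to be transitive, chains of neighborhoods of any length would collapse into one, which is exactly the absurdity Prof. Bar-Hillel describes.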
Many of the other 23 steps in Dr. McCarthy's argument are equally or more questionable, but I don't think we should spend our time showing this in detail. My major question is the following: On page McCarthy states that a machine which has a competence of human order in finding its way around will have almost all the premises of the argument stored in its memory. I am at a complete loss to understand the point of this remark. If Dr. McCarthy wants to say no more than that a machine, in order to behave like a human being, must have the knowledge of a human being, then this is surely not a very important remark to make. But if not, what was the intention of this remark?
The decisive question of how a machine, even assuming that it somehow has countless millions of facts stored in its memory, will be able to pick out those facts which will serve as premises for its deduction is promised treatment in another paper -- which is quite right for a half-baked idea.
It sounds rather incredible that the machine could have arrived at its conclusion -- which, in plain English, is ``Walk from your desk to your car!'' -- by sound deduction. This conclusion surely could not possibly follow from the premises in any serious sense. Might it not be occasionally cheaper to call a taxi and have it take you over to the airport? Couldn't you decide to cancel your flight or to do a hundred other things? I don't think it would be wise to develop a programming language so powerful as to make a machine arrive at the conclusion Dr. McCarthy apparently intends it to make.
Let me also point out that in the example the time factor has never been mentioned, probably for the sake of simplicity. But clearly this factor is here so important that it could not possibly be disregarded without distorting the whole argument. Does not the solution depend, among thousands of other things, also upon the time of my being at my desk, the time at which I have to be at the airport, the distance from the airport, the speed of my car, etc.?
To make the argument deductively sound, its complexity will have to be increased by many orders of magnitude. So long as this is not realized, any discussion of machines able to perform the deductive -- and inductive! -- operations necessary for treating problems of the kind brought forward by Dr. McCarthy is totally pointless. The gap between Dr. McCarthy's general programme (with which I have little quarrel, after discounting its ``philosophical'' features) and its execution even in such a simple case as the one discussed seems to me so enormous that much more has to be done to persuade me that even the first step in bridging this gap has already been taken.
DR. O. G. SELFRIDGE: I have a question which I think applies to this. It seems to me that in much of that work, the old absolutist Prof. Bar-Hillel has really put his finger on something; he is really worried about the deduction actually made. He seemed really to worry that that system is not consistent, and he made a remark that conclusions should not be drawn from false premises. In my experience those are the only conclusions that have ever been drawn. I have never yet heard of someone drawing correct conclusions from correct premises. I mean this seriously. This, I think, is Dr. Minsky's point this morning. What this leads to is that the notion of deductive logic being something sitting there sacred which you can borrow for particularly sacred uses and producing inviolable results is a lot of nonsense. Deductive logic is inferred as much as anything else. Most women have never inferred it, but they get along pretty well, marrying happy husbands, raising happy children, without ever using deductive logic at all. My feeling is that my criticism of Dr. McCarthy is the other way. He assumes deductive logic, whereas in fact that is something to be concocted.
There is another important point which I think Prof. Bar-Hillel ignores in this: the criticism of the programme should not be as to whether it is logically consistent, but only whether Dr. McCarthy will be able to wave it around saying ``this in fact works the way I want it''. Dr. McCarthy would be the first to admit that his programme is not now working, so it has to be changed. Then can you make the changes in the programme to make it work? That has nothing to do with logic. Can he amend it in such a way that it includes the logic as well as the little details of the programme? Can he manage in such a way that it works the way he wants? He said at the beginning of his talk that when he makes an arbitrary change in the programme it will usually not work -- evidence, to me at least, that small changes in his programme will not obviously make the programme work, though some might even improve it. His next point is whether he can make small changes that in fact make it work. That is what we do not know yet.
PROF. Y. BAR-HILLEL: May I ask whether you could thrash this out with Dr. McCarthy? It was my impression that Dr. McCarthy's advice taker was meant to be able, among other things, to arrive at a certain conclusion from appropriate premises by faultless deductive reasoning. If this is still his programme, then I think your defence is totally beside the point.
DR. O. G. SELFRIDGE: I am not defending his programme, I am only defending him.
DR. J. McCARTHY: Are you using the word `programme' in the technical sense of a bunch of cards or in the sense of a project that you get money for?
PROF. Y. BAR-HILLEL: When I uttered my doubts that a machine working under the programme outlined by Dr. McCarthy would be able to do what he expects it to do, I was using `programme' in the technical sense.
DR. O. G. SELFRIDGE: In that case your criticisms are not so much philosophical as technical.
PROF. Y. BAR-HILLEL: They are purely technical. I said that I shall not make any philosophical criticisms, for lack of time.
DR. O. G. SELFRIDGE: A technical objection does not make ideas half-baked.
PROF. Y. BAR-HILLEL: A deductive argument, where you have first to find out what are the relevant premises, is something which many humans are not always able to carry out successfully. I do not see the slightest reason to believe that at present machines should be able to perform things that humans find trouble in doing. I do not think there could possibly exist a programme which would, given any problem, divide all facts in the universe into those which are and those which are not relevant for that problem. Developing such a programme seems to me by orders of magnitude more difficult than, say, the Newell-Simon problem of developing a heuristic for deduction in the propositional calculus. This cavalier way of jumping over orders of magnitude only tends to becloud the issue and throw doubt on ways of thinking for which I have a great deal of respect. By developing a powerful programming language you may have paved the way for the first step in solving problems of the kind treated in your example, but the claim of being well on the way towards their solution is a gross exaggeration. This was the major point of my objections.
DR. J. McCARTHY (in reply): Prof. Bar-Hillel has correctly observed that my paper is based on unstated philosophical assumptions, although what he means by ``pseudo-philosophical'' is unclear. Whenever we program a computer to learn from experience we build into the program a sort of epistemology. It might be argued that this epistemology should be made explicit before one writes the program, but epistemology is in a foggier state than computer programming, even in the present half-baked state of the latter. I hope that once we have succeeded in making computer programs reason about the world, we will be able to reformulate epistemology as a branch of applied mathematics no more mysterious or controversial than physics.
On re-reading my paper I can't see how Prof. Bar-Hillel could see in it a proposal to specify a computer program carelessly. Since other people have proposed this as a device for achieving ``creativity'', I can only conclude that he has some other paper in mind.
In his criticism of my use of the symbol ``at'', Prof. Bar-Hillel seems to have misunderstood the intent of the example. First, I was not trying to formalize the sentence form ``A is at B'' as it is used in English; ``at'' was merely intended to serve as a convenient mnemonic for the relation between a place and a sub-place. Second, I was not proposing a practical problem for the program to solve, but rather an example intended to allow us to think about the kinds of reasoning involved and how a machine may be made to perform them.
Prof. Bar-Hillel's major point concerns my statement that the premises listed could be assumed to be in memory. The intention of this statement is to explain why I have not included formalizations of statements like ``it is possible to drive from my home to the airport'' among my premises. If there were n known places in the county there would be n(n-1)/2 such sentences and, since we are quite sure that we do not have each of them in our memories, it would be cheating to allow the machine to start with them.
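The combinatorial point can be checked with a short sketch. This is my own illustration, and the counting convention -- one drivability statement per unordered pair of places, on the assumption that drivability is symmetric -- is itself an assumption made for illustration:

```python
# Count the "it is possible to drive from X to Y" statements needed
# if every unordered pair of the n known places gets one statement.
# (One statement per unordered pair is an illustrative assumption.)
from itertools import combinations

def drivability_statements(places):
    return [f"it is possible to drive from {a} to {b}"
            for a, b in combinations(places, 2)]

places = ["home", "airport", "station", "harbour", "market"]
stmts = drivability_statements(places)
print(len(stmts))  # 10, i.e. n(n-1)/2 for n = 5
```

The count grows quadratically -- for a county of a hundred places it is already 4,950 -- which is why storing every such sentence in advance would be cheating.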
The rest of Prof. Bar-Hillel's criticisms concern ways in which the model mentioned does not reflect the real world; I have already explained that this was not my intention. He is certainly right that the complexity of the model will have to be increased for it to deal with practical problems. What we disagree on is my contention that the conceptual difficulties arise at the present level of complexity and that solving them will allow us to increase the complexity of the model easily.
With regard to the discussion between Prof. Bar-Hillel and Oliver Selfridge -- the logic is intended to be faultless although its premises cannot be guaranteed. The intended conclusion is ``do(go(desk,car,walking))'' -- not, of course, ``at(I,airport)''. The model oversimplifies but is not intended to oversimplify to the extent of allowing one to deduce one's way to the airport.
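The shape of the intended deduction can be sketched as a toy forward-chainer. Everything below is a hypothetical reconstruction for illustration only: the predicate names (``at'', ``walkable'', ``do'', ``go'') loosely echo the paper's example, but the specific rules and premises are my assumptions, not the advice taker's actual formalism.

```python
# Toy forward-chaining sketch of the intended deduction. The predicates
# loosely echo McCarthy's example; the rules and premises here are
# illustrative assumptions only, not the advice taker's formalism.

def closure(facts):
    """Apply the two rules below repeatedly until no new facts appear."""
    facts = set(facts)
    while True:
        new = set()
        # Rule 1: transitivity of at -- at(x, y) and at(y, z) give at(x, z).
        for f in facts:
            if f[0] == "at":
                for g in facts:
                    if g[0] == "at" and g[1] == f[2]:
                        new.add(("at", f[1], g[2]))
        # Rule 2: if I am at sub-place s of a walkable place p, and the
        # car is also at p, conclude do(go(s, car, walking)).
        for f in facts:
            if f[0] == "at" and f[1] == "I":
                s = f[2]
                for g in facts:
                    if g[0] == "at" and g[1] == s:
                        p = g[2]
                        if ("at", "car", p) in facts and ("walkable", p) in facts:
                            new.add(("do", ("go", s, "car", "walking")))
        if new <= facts:
            return facts
        facts |= new

premises = {
    ("at", "I", "desk"),
    ("at", "desk", "home"),
    ("at", "car", "home"),
    ("walkable", "home"),  # assumption: one can walk within home
}

derived = closure(premises)
print(("do", ("go", "desk", "car", "walking")) in derived)  # True
print(("at", "I", "home") in derived)                       # True
```

Note that the machine deduces only the walking step from desk to car; nothing in these premises allows it to deduce ``at(I,airport)'', which matches the limited scope claimed for the model.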