We propose to extend the ontology of logical AI to include approximate objects, approximate predicates and approximate theories. Besides the ontology we discuss relations among different approximations to the same or similar phenomena.
The article will be as precise as we can make it. We apply Aristotle's remark, that one should demand only as much precision as the subject admits, to the approximate theories themselves. The article treats three topics.
In principle, AI theories, e.g. the original proposals for situation calculus, have allowed for rich entities that could not be fully defined. However, almost all the theories used in existing AI research have not taken advantage of this generality. Logical AI theories have resembled formal scientific theories in treating well-defined objects in well-defined domains. Human-level AI will require reasoning about approximate entities.
Approximate predicates can't have complete if-and-only-if definitions and usually don't even have definite extensions. Some approximate concepts can be refined by learning more and some by defining more and some by both, but it isn't possible in general to make them well-defined. Approximate concepts are essential for representing common sense knowledge and doing common sense reasoning. In this article, assertions involving approximate concepts are represented in mathematical logic.
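As a minimal sketch of what such a representation can look like, an approximate predicate can be given sufficient conditions in each direction without ever receiving a complete if-and-only-if definition. The predicate Rich, the function networth, and the thresholds below are illustrative assumptions of ours, not drawn from the article:

\[
\begin{aligned}
&\forall p\,\bigl(\mathit{networth}(p) > 10^{9} \rightarrow \mathit{Rich}(p)\bigr),\\
&\forall p\,\bigl(\mathit{networth}(p) < 10^{3} \rightarrow \neg\,\mathit{Rich}(p)\bigr).
\end{aligned}
\]

The borderline cases are simply left open; learning or further stipulation can narrow the gap between the two conditions, but nothing requires it to close.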
A sentence involving an approximate concept may have a definite truth value even if the concept is ill-defined. It is definite that Mount Everest was climbed in 1953 even though exactly what rock and ice is included in that mountain is ill-defined. We discuss the extent to which we can build solid intellectual structures on such swampy conceptual foundations.
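One hedged way to make this precise, in the style of supervaluation rather than anything proposed in the article, is to note that the sentence comes out true no matter how the ill-defined boundary is resolved. The predicate Precisification below is our illustrative assumption:

\[
\forall m\,\bigl(\mathit{Precisification}(m, \mathit{Everest}) \rightarrow \mathit{Climbed}(m, 1953)\bigr),
\]

so the assertion that Everest was climbed in 1953 is definitely true even though the extension of Everest is not determined.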
Quantitative approximation is one of the kinds we consider, but it is neither the most interesting nor the kind that requires logical innovation. Fuzzy logic involves a semi-quantitative approximation, although there are extensions, as mentioned in [Zad99].
For AI purposes, the key problem is relating different approximate theories of the same domain. For this we use mathematical logic fortified with contexts as objects. Further innovations in logic may be required to treat approximate concepts as flexibly in logic as people do in thought and language.
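McCarthy's context formalism writes ist(c, p) for "the proposition p is true in the context c", and relates contexts by lifting axioms. A sketch of how two approximate theories of one blocks-world domain might be connected follows; the context names and the predicate Supports are illustrative assumptions, not axioms from the article:

\[
\mathit{ist}\bigl(c_{\mathrm{plan}},\, \mathit{On}(B_1, B_2)\bigr) \leftrightarrow \mathit{ist}\bigl(c_{\mathrm{robot}},\, \mathit{Supports}(B_2, B_1)\bigr).
\]

The point is only that the relation between the two approximations is itself a sentence of the logic, so it can be asserted, questioned, and reasoned about like any other.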
Looked at in sufficient detail, all concepts are approximate, but some are precise enough for a given purpose. McCarthy's weight as measured by a scale is precise enough for medical advice, and can be regarded as exact in a theory of medical advice. On the other hand, McCarthy's purposes are approximate enough that almost any discussion of them is likely to bump against their imprecision and ambiguity. Many concepts used in common sense reasoning are imprecise. Here are some questions and issues that arise.
Let there be an axiomatic theory in situation calculus in which it can be shown that a sequence of actions will have a certain result. Now suppose that a physical robot is to observe whether one block is on another and to determine, using situation calculus, the actions that achieve a goal. It is important that the meaning of on used in solving the problem theoretically and the meaning used by the robot correspond well enough that carrying out the plan physically has the desired effect. How well must they correspond?
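A minimal sketch of the theoretical side, in a common situation calculus style rather than the article's own axioms (the fluent On, the action move, and the precondition Clear are illustrative):

\[
\mathit{Clear}(x, s) \land \mathit{Clear}(y, s) \rightarrow \mathit{On}\bigl(x, y, \mathit{result}(\mathit{move}(x, y), s)\bigr).
\]

The On that appears here is a predicate of the theory; the on that the robot reports is grounded in its sensors, and the two are separately approximate.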
We claim that they can be made to correspond well enough. In the subsequent sections of this article, tools will be proposed for reasoning with approximate concepts.
The article treats successively approximate objects, approximate theories, and formalisms for describing how one object or theory approximates another.