. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
(I) That knowing the meaning of a term is just a matter of being in a certain ``psychological state'' (in the sense of ``psychological state'' in which states of memory and psychological dispositions are ``psychological states''; no one thought that knowing the meaning of a word was a continuous state of consciousness, of course.)

(II) That the meaning of a term (in the sense of ``intension'') determines its extension (in the sense that sameness of intension entails sameness of extension).
Suppose Putnam is right in his criticism of the general correctness of (I) and (II). His own ideas are more elaborate.
It may be convenient for a robot to work mostly in contexts, within a larger context, in which (I) and (II) (or something even simpler) hold. However, the same robot, if it is to have human-level intelligence, must be able to transcend these contexts when it has to work in contexts to which Putnam's criticisms of the assumptions apply.
It is interesting, but perhaps not necessary for AI at first, to characterize those contexts in which (I) and (II) are correct.
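As a minimal sketch of how such a restricted context might be written down (the truth-in-context predicate $ist$ and the context names $c_{0}$ and $c_{simple}$ are illustrative assumptions, not taken from the text): let $ist(c,p)$ mean that the sentence $p$ is true in the context $c$. Assumption (II) can then be asserted inside an inner context without being asserted in the outer one:

$$ist(c_{simple},\; \forall t_{1}\, t_{2}\,(intension(t_{1}) = intension(t_{2}) \rightarrow extension(t_{1}) = extension(t_{2}))).$$

The outer context $c_{0}$ need not contain this sentence; transcending $c_{simple}$ would then amount to reasoning in $c_{0}$ about the cases in which Putnam's criticisms of the inner assumption apply.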
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .