To do the tasks we will give them, robots will need many forms of
self-consciousness, i.e. the ability to observe their own mental states.
When we say that something is observable, we mean that a
suitable action by the robot causes a sentence, and possibly
other data structures, giving the result of the observation to appear
in the robot's consciousness.
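As a concrete illustration of this notion of observability, here is a
minimal sketch in Python. The names (Consciousness, observe_battery) and
the sentence format are hypothetical illustrations, not a proposed design;
the point is only that an observation is an action whose effect is to add a
sentence to the robot's consciousness.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Consciousness:
        # Sentences produced by observations, available for further reasoning.
        sentences: List[str] = field(default_factory=list)

        def add(self, sentence: str) -> None:
            self.sentences.append(sentence)

    @dataclass
    class Robot:
        consciousness: Consciousness = field(default_factory=Consciousness)
        battery_level: float = 0.71   # an internal variable, not "seen" until observed

        def observe_battery(self) -> None:
            # The observation is itself an action; its result appears as a sentence.
            self.consciousness.add(f"BatteryLevel(Now) = {self.battery_level}")

    robot = Robot()
    robot.observe_battery()
    print(robot.consciousness.sentences)   # ['BatteryLevel(Now) = 0.71']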
This section uses two formalisms described in previous papers.
The first is the notion of a context as a first class object
introduced in [McCarthy, 1987] and developed in [McCarthy, 1993] and
[McCarthy and Buvac, 1998]. As first class objects, contexts can be the
values of variables and the arguments and values of functions. The most
important expression is Ist(c,p), which asserts that the proposition
p is true in the context c. Propositions true in
subcontexts need not be true in outer contexts. The
language of a subcontext can also be an abbreviated version of the
language of an outer context, because the subcontext can involve some
assumptions not true in outer contexts. A reasoning system can
enter a subcontext and reason with the assumptions and in the
language of the subcontext. If we have Ist(c,p) in an outer
context c0, we can enter the context c
and reason directly with the sentence p. Much human reasoning,
maybe all, is done in subcontexts, and robots will have to do the
same. There is no most general context. The outermost context used
so far can always be transcended to a yet outer context. A
sentence Ist(c,p) represents a kind of introspection all by itself.
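A minimal sketch of the rules for entering and leaving contexts, in the
style of [McCarthy, 1993] (schematic only; c0 denotes an outer context):

    c0: Ist(c,p)      -- asserted in the outer context c0
    ------------
    c:  p             -- obtained by entering the subcontext c

    c:  p             -- derived while reasoning within c
    ------------
    c0: Ist(c,p)      -- obtained on leaving c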
The second important formalism is that of a proposition or
individual concept as a first class object distinct from the
truth value of the proposition or the value of the individual
concept. This allows propositions and individual concepts to be
discussed formally in logical language rather than just informally in
natural language. One motivating example from [McCarthy, 1979b] concerns
telephone numbers.
Making the distinction between concepts and their denotations allows us
to say that Pat knows Mike's telephone number but doesn't know Mary's
telephone number even though Mary's telephone number is the same as
Mike's telephone number. [McCarthy, 1979b] uses capitalized words for
concepts and lower case for objects. This is contrary to the
convention in the rest of this paper that capitalizes constants and
uses lower case for variables.
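Using that convention, the telephone example can be sketched as follows
(an illustrative reconstruction, not the exact formulas of [McCarthy, 1979b]):

    knows(pat, Telephone(Mike))          -- Pat knows the concept of Mike's number
    telephone(mike) = telephone(mary)    -- the numbers themselves are equal
    not knows(pat, Telephone(Mary))      -- consistent with the first two lines

Because knows applies to the concept rather than to the number, the equality
of the denotations does not license substituting Telephone(Mary) for
Telephone(Mike), so the three sentences can hold together.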
We will give tentative formulas for some of the results of
observations. In this we take advantage of the ideas of [McCarthy, 1993]
and [McCarthy and Buvac, 1998] and give a context for each formula. This makes
the formulas shorter. What Here, Now and I mean is determined
in an outer context.
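For instance (a purely hypothetical illustration), an observation might be
recorded by a short sentence such as

    c1: LowBattery(I)

where c1 is a context whose outer context fixes what Here, Now and I refer
to; without the context, the sentence would have to say explicitly which
robot, place and time are meant.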
- Observing its physical body, recognizing the positions of its
effectors, noticing the relation of its body to the environment and
noticing the values of important internal variables, e.g. the state
of its power supply and of its communication channels. Already
a notebook computer is aware of the state of its battery. (There is no
reason, incidentally, why the robot shouldn't have three hands.)
- Observing that it does or doesn't know the value of a certain
term, e.g. observing whether it knows the telephone number of a
certain person. Observing that it does know the number or that it
can get it by some procedure is likely to be straightforward.
However, observing that it doesn't know the telephone number and
cannot infer what it is involves getting around Gödel's second
incompleteness theorem. The reason we have to get around it is
that showing that any particular sentence is not inferrable implies that the
theory is consistent, because if the theory were inconsistent, all
sentences would be inferrable.
Section 5 shows how to do this using Gödel's
idea of relative consistency. Consider observing that the robot does not
know the value of Telephone(Clinton), or that it does not know whether
Sitting(Clinton) is true.
Here, as discussed in [McCarthy, 1979b], Telephone(Clinton) stands for
the concept of Clinton's telephone number, and Sitting(Clinton) is
the proposition that Clinton is sitting.
Deciding that it doesn't know and cannot infer the value of
a telephone number is what should motivate the robot to look in
the phone book or ask someone.
- The robot needs more than just the ability to observe that it
doesn't know whether a particular sentence is true. It needs to be
able to observe that it doesn't know anything about a certain
subject, i.e. that anything about the subject is possible. Thus it
needs to be able to say that the members of Clinton's cabinet may be
in an arbitrary configuration of sitting and standing. This is
discussed in Section 5.1.
- Reasoning about its abilities. ``I think I can figure out how to
do this''. ``I don't know how to do that.''
- Keeping a journal of physical and intellectual events
so it can refer to its past beliefs, observations and actions.
- Observing its goal structure and forming sentences about it.
Notice that merely having a stack of subgoals doesn't achieve this
unless the stack is observable and not merely obeyable. This lets
it notice when a subgoal has become irrelevant to a larger goal and
then abandon it.
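A small sketch of such an observable goal structure, in Python, is given
after this list.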
- The robot may intend to perform a certain action. It
may later infer that certain possibilities are irrelevant in
view of its intentions. This requires the ability to observe
intentions.
- It may also be able to say, ``I can tell you how I solved that
problem'' in a way that takes into account its mental search
processes and not just its external actions.
- The obverse of a goal is a constraint. Maybe we will
want something like Asimov's science fiction laws of robotics, e.g.
that a robot should not harm humans. In a sufficiently general way
of looking at goals, achieving the robot's other goals under the constraint
of not harming humans is just an elaboration of those goals.
However, since the same constraint will apply to the achievement of
many goals, it is likely to be convenient to formalize constraints as a
separate structure. A constraint can be used to reduce the space of
achievable states before the details of the goals are considered.
- Observing how it arrived at its current beliefs.
Most of the important beliefs of the system will have been
obtained by nonmonotonic reasoning, and therefore are usually
uncertain. It will need to maintain a critical view of these
beliefs, i.e. believe meta-sentences about them that will aid
in revising them when new information warrants doing so. It will
presumably be useful to maintain a pedigree for each belief of
the system so that it can be revised if its logical ancestors
are revised. Reason maintenance systems maintain
the pedigrees but not in the form of sentences that can
be used in reasoning. Neither do they have
introspective subroutines that can observe the pedigrees
and generate sentences about them.
- Not only pedigrees of beliefs but other auxiliary information
should either be represented as sentences or be observable in such
a way as to give rise to sentences. Thus a system should be able to
answer the questions: ``Why do I believe p?'' or alternatively
``Why don't I believe p?''.
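A sketch showing how pedigrees kept as sentences can answer such questions
appears after this list.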
- Regarding its entire mental state up to the present as an
object, i.e. a context. [McCarthy, 1993] discusses contexts as formal
objects. The ability to transcend one's present context and
think about it as an object is an important form of introspection.
The restriction to the state up to the present avoids the paradoxes of
self-reference and still preserves useful generality.
- Knowing what goals it can currently achieve and what its choices
are for action. [McCarthy and Hayes, 1969a] showed how a robot could think
about its own ``free will'' by considering the effects of the
actions it might take, without taking into account its own internal
processes that decide which action to take.
- A simple (and basic) form of free will is illustrated by a
situation calculus formula asserting that John will do the action
that John thinks results in the better situation for him.
The formula uses a comparison term, to be understood as
asserting that John thinks the situation s1 is better for him than the
situation s2.
- Besides specific information about its mental state, a robot
will need general facts about mental processes, so it can plan its
intellectual life.
- There often will be auxiliary goals, e.g. curiosity. When a
robot is not otherwise occupied, we will want it to work at
extending its knowledge.
- Probably we can design robots to keep their goals in order so
that they won't ever have to say, ``I wish I didn't want to smoke.''
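The following sketch, referred to in two of the items above, illustrates in
Python how a goal stack and a set of beliefs might be made observable rather
than merely obeyable: the structures can generate sentences about
themselves, keep pedigrees, and answer questions like ``Why do I believe
p?''. All of the names here (Goal, Belief, observe_goals, why) are
hypothetical illustrations, not a proposed design.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Goal:
        description: str
        parent: Optional["Goal"] = None      # the larger goal this subgoal serves
        relevant: bool = True

    @dataclass
    class Belief:
        sentence: str
        pedigree: List[str] = field(default_factory=list)  # sentences it came from

    @dataclass
    class MentalState:
        goals: List[Goal] = field(default_factory=list)
        beliefs: List[Belief] = field(default_factory=list)

        # Observing the goal stack yields sentences about it, so the robot can
        # notice, and say, that a subgoal no longer serves its larger goal.
        def observe_goals(self) -> List[str]:
            out = []
            for g in self.goals:
                if g.parent is not None and not g.parent.relevant:
                    g.relevant = False
                if g.relevant:
                    out.append(f"Active(Goal('{g.description}'))")
                else:
                    out.append(f"Irrelevant(Goal('{g.description}'))")
            return out

        # Pedigrees kept as sentences let the robot answer "Why do I believe p?"
        def why(self, sentence: str) -> List[str]:
            for b in self.beliefs:
                if b.sentence == sentence:
                    return b.pedigree or ["No recorded reasons (perhaps assumed nonmonotonically)"]
            return [f"I do not believe {sentence}"]

    m = MentalState()
    top = Goal("deliver the package")
    sub = Goal("open the mailroom door", parent=top)
    m.goals = [top, sub]
    m.beliefs = [Belief("Door(mailroom) is locked",
                        pedigree=["Observed(HandleDidNotTurn)",
                                  "Normally(Locked implies HandleDidNotTurn)"])]

    top.relevant = False           # the larger goal has been abandoned
    print(m.observe_goals())       # the subgoal is now reported as irrelevant
    print(m.why("Door(mailroom) is locked"))
    print(m.why("It is raining"))

The essential point is not the data structures but that their contents are
available as sentences the robot can reason with, not merely as structures
its procedures obey.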
The above are only some of the needed forms of self-consciousness.
Research is needed to determine their properties and to
find additional useful forms of self-consciousness.