% /u/jmc/s04/selfaware-sli.tex Self-awareness slides
% /u/jmc/s04/selfaware Self-awareness notes
% /u/ftp/jmc/selfaware.tex NOTES ON SELF-AWARENESS
\documentclass[landscape]{slides}
\usepackage{color}
\begin{document}
%\bibliographystyle{/u/jmc/1/cslibib}
\begin{slide}
\begin{center}
\textcolor{red}{WHAT WILL SELF-AWARE COMPUTER SYSTEMS BE?}
\end{center}
\begin{center}
John McCarthy, Stanford University \\
mccarthy@stanford.edu \\
http://www-formal.stanford.edu/jmc/ \\
\today
\end{center}

$\bullet$ DARPA wants to know, and there's a workshop tomorrow.

$\bullet$ The subject is ready for basic research.

$\bullet$ Short term applications \textcolor{blue}{may} be feasible.

$\bullet$ Self-awareness is mainly applicable to programs with persistent existence.
\vfill
\end{slide}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{slide}
\begin{center}
\textcolor{red}{WHAT WILL SELF-AWARE SYSTEMS BE AWARE OF?}
\end{center}

$\bullet$ Easy aspects of state: battery level, memory available, etc.

$\bullet$ Ongoing activities: serving users, driving a car

$\bullet$ Knowledge and lack of knowledge

$\bullet$ Purposes, intentions, hopes, fears, likes, dislikes

$\bullet$ Actions it is free to choose among relative to external constraints. That's where free will comes from.

$\bullet$ Permanent aspects of mental state, e.g. long term goals and beliefs

$\bullet$ Episodic memory---only partial in humans, probably absent in animals, but readily available in computer systems.
\vfill
\end{slide}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{slide}
\begin{center}
\textcolor{red}{HUMAN SELF-AWARENESS---1}
\end{center}

$\bullet$ Human self-awareness is weak but improves with age.

$\bullet$ A five year old, but not a three year old, can say: I used to think the box contained candy because of the cover, but now I know it contains crayons.
He will think it contains candy.

$\bullet$ Simple examples: I'm hungry, my left knee hurts from a scrape, my right knee feels normal, my right hand is making a fist.

$\bullet$ Intentions: I intend to have dinner, I intend to visit New Zealand some day. I do not intend to die.

$\bullet$ I exist in time with a past and a future. Philosophers argue a lot about what this means and how to represent it.

$\bullet$ Permanent aspects of one's mind: I speak English and a little French and Russian. I like hamburgers and caviar. I cannot know my blood pressure without measuring it.
\vfill
\end{slide}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{slide}
\begin{center}
\textcolor{red}{HUMAN SELF-AWARENESS---2}
\end{center}

$\bullet$ What are my choices? (Free will is having choices.)

$\bullet$ Habits: I know I often think of you. I often have breakfast at the Peninsula Creamery.

$\bullet$ Ongoing processes: I'm typing slides and also getting hungry.

$\bullet$ Juliet hoped there was enough poison in Romeo's vial to kill her.

$\bullet$ More: fears, wants (sometimes simultaneous but incompatible)

$\bullet$ Permanent compared with instantaneous wants.
\vfill
\end{slide}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{slide}
\begin{center}
\textcolor{red}{MENTAL EVENTS (INCLUDING ACTIONS)}
\end{center}

$\bullet$ consider

$\bullet$ infer

$\bullet$ decide

$\bullet$ choose to believe

$\bullet$ remember

$\bullet$ forget

$\bullet$ realize

$\bullet$ ignore
\vfill
\end{slide}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{slide}
\begin{center}
\textcolor{red}{MACHINE SELF-AWARENESS}
\end{center}

$\bullet$ Easy self-awareness: battery state, memory left

$\bullet$ Straightforward self-awareness: the program itself, the programming language specs, the machine specs.
$\bullet$ Self-simulation: a program can simulate itself for any given number of steps, but \textcolor{blue}{can't answer in general} ``Will I ever stop?''. It can answer ``Will I stop in less than $n$ steps?''---by simulating itself for $n$ steps.

$\bullet$ Its choices and their inferred consequences \textcolor{red}{(free will)}.

$\bullet$ ``I hope it won't rain tomorrow''. Should a machine hope and be aware that it hopes? I think it should sometimes.

$\bullet$ $\lnot Knows(I,TTelephone(MMike))$, so I'll have to look it up.
\vfill
\end{slide}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{slide}
\begin{center}
\textcolor{red}{WHY WE NEED CONCEPTS AS OBJECTS}
\end{center}

We had $\lnot Knows(I,TTelephone(MMike))$, and I'll have to look it up.

Suppose $Telephone(Mike) = ``321\mbox{-}7580''$. If we write $\lnot Knows(I,Telephone(Mike))$, then substitution would give\\
$\lnot Knows(I, ``321\mbox{-}7580'')$, which doesn't make sense.

There are various proposals for getting around this. The most advocated is some form of modal logic. My proposal is to regard \emph{individual concepts} as objects, and represent them by different symbols, e.g. doubling the first letter. There's more about why this is a good idea in my ``First order theories of individual concepts and propositions''.
\vfill
\end{slide}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{slide}
\begin{center}
\textcolor{red}{WE ALSO NEED CONTEXTS AS OBJECTS}
\end{center}

We write
%
\begin{displaymath}
c: p
\end{displaymath}
%
to assert $p$ while in the context $c$. Terms also can be written using contexts. $c:e$ is an expression $e$ in the context $c$.

The main application of contexts as objects is to assert relations between the objects denoted by different expressions in different contexts. Thus we have
%
\begin{displaymath}
c:Does(Joe,a) = SpecializeActor(c,Joe):a,
\end{displaymath}
or, more generally,
%
\begin{displaymath}
SpecializesActor(c,c',Joe) \rightarrow (c:Does(Joe,a) = c':a).
\end{displaymath}

Such relations between expressions in different contexts allow using a situation calculus theory in which the actor is not explicitly represented in an outer context in which there is more than one actor.

We also need to express the relation between an external context in which we refer to the knowledge and awareness of AutoCar1 and AutoCar1's internal context in which it can use ``I''.
\vfill
\end{slide}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{slide}
\begin{center}
\textcolor{red}{SELF-AWARENESS EXPRESSED IN LOGICAL FORMULAS---1}
\end{center}

Pat is aware of his intention to eat dinner at home.
\begin{equation}
\label{eq:f1}
\begin{array}[l]{l}
c(Awareness(Pat)): Intend(I, MMod(AAt(HHome),EEat(DDinner))) \\
\end{array}
\end{equation}

$Awareness(Pat)$ is a context. $Eat(Dinner)$ denotes the general act of eating dinner, logically different from eating $Steak7642$.\\
$Mod(At(Home),Eat(Dinner))$ is what you get when you apply the modifier ``at home'' to the act of eating dinner. $Intend(I,X)$ says that I intend $X$. The use of $I$ is appropriate within the context of a person's (here Pat's) awareness.

We should extend this to say that Pat will eat dinner at home unless his intention changes. This can be expressed by formulas like
\begin{equation}
\label{eq:patintent}
\begin{array}[l]{l}
\lnot Ab17(Pat,x,s) \land Intends(Pat,Does(Pat,x),s) \\
\quad\quad\quad \rightarrow (\exists s' > s)Occurs(Does(Pat,x),s').
\end{array}
\end{equation}
%
in the notation of
%\cite{McC02}.
(McCarthy 2002).
\vfill
\end{slide}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{slide}
\begin{center}
\textcolor{red}{FORMULAS---2}
\end{center}

$\bullet$ AutoCar1 is driving John from Office to Home. AutoCar1 is aware of this. AutoCar1 becomes aware that it is low on hydrogen. AutoCar1 is permanently aware that it must ask permission to stop for gas, so it asks for permission. Etc.
These facts are expressed in a context $C0$.
\begin{equation}
\begin{array}[l]{l}
C0: \\
Driving(I,John,Home1) \\
\land Aware(I,DDriving(II,JJohn,HHome)) \\
\land OccursBecomes(Aware(I,LLowfuel(AAutoCar1))) \\
\land OccursBecomes(Want(I,SStopAt(GGasStation1))) \\
\land \ldots
\end{array}
\end{equation}
\vfill
\end{slide}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% /u/ftp/jmc/selfaware.tex NOTES ON SELF-AWARENESS
\begin{slide}
\begin{center}
\textcolor{red}{QUESTIONS}
\end{center}

$\bullet$ Does the lunar explorer require self-awareness? What about the entries in the recent DARPA contest?

$\bullet$ Do self-aware reasoning systems require dealing with referential opacity? What about explicit contexts?

$\bullet$ Where do tracing and journaling involve self-awareness?

$\bullet$ Does an online tutoring program (for example, a program that teaches a student chemistry) need to be self-aware?

$\bullet$ What is the simplest self-aware system?

$\bullet$ Does self-awareness always involve self-monitoring?

$\bullet$ In what ways does self-awareness differ from awareness of other agents? Does it require special forms of representation or architecture?
\vfill
\end{slide}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{slide}
\begin{center}
\textcolor{red}{REFERENCES}
\end{center}

Some Philosophical Problems from the Standpoint of Artificial Intelligence \\
John McCarthy and Patrick J. Hayes \\
Machine Intelligence 4, 1969 \\
also http://www-formal.stanford.edu/jmc/mcchay69.html

Actions and other events in situation calculus \\
John McCarthy \\
KR2002 \\
also http://www-formal.stanford.edu/jmc/sitcalc.html
\vfill
\end{slide}
%\bibliography{/u/jmc/1/biblio}
\end{document}
Rough notes on the talk:

$\bullet$ Mirror self-recognition.

$\bullet$ Shapiro: having concepts only; he didn't think concepts of concepts offer difficulty.

$\bullet$ Barbara Yoon, asst. prof., IPTO.

Disagreement with Mike: I don't know Mike's phone number. I want to dial Mike's number. Information will give me the number for \$1.00. I cannot dial Mike without spending the \$1.00. Mike and I disagreed about whether knowledge about knowledge is necessary, but we agreed that it is desirable.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Here's an example of awareness leading to action. Pat is driving to his job. Presumably he could get there without much awareness of that fact, since the drive is habitual. However, he becomes aware that he needs cigarettes and that he can stop at Mac's Smoke Shop and get some. Two aspects of his awareness, the driving and the need for cigarettes, are involved.

That Pat is driving to his job can be expressed with varying degrees of elaboration. Here are some I have considered.
\begin{equation}
\label{eq:a}
\begin{array}[l]{l}
Driving(Pat,Job,s) \\
\mbox{} \\
Doing(Pat,Drive(Job),s) \\
\mbox{} \\
Holds(Doing(Pat,Mod(Destination(Job),Drive)),s) \\
\mbox{} \\
Holds(Mod(Ing,Mod(Destination(Job),Action(Drive,Pat))),s) \\
\mbox{}
\\ \end{array} \end{equation} % The last two use a notion like that of an adjective modifying a noun. Here's a simple sentence giving a consequence of Pat's awareness. It uses $Aware$ as a modal operator. This may require repair or it may be ok in a suitably defined context. \begin{equation} \label{eq:b} \begin{array}[l]{l} Aware(Pat,Driving(Job,s),s) \land Aware(Pat,Needs(Cigarettes),s) \\ \land Aware(Pat,About\mbox{-}to\mbox{-}pass(CigaretteStore,s),s) \\ \rightarrow Occurs(StopAt(CigaretteStore),s). \end{array} \end{equation} The machine knows that if its battery is low, it will be aware of the fact. \begin{equation} \label{eq:c} \begin{array}[l]{l} Knows(Machine, (\forall s')(LowBattery(s') \rightarrow Aware(LowBattery(s'))),s) \\ \end{array} \end{equation} The machine knows, perhaps because a sensor is broken, that it will not necessarily be aware of a low battery. \begin{equation} \label{eq:d} \begin{array}[l]{l} Knows(Machine, \lnot (\forall s')(LowBattery(s') \rightarrow Aware(LowBattery(s'))),s) \end{array} \end{equation} The positive sentence ``I am aware that I am aware \ldots '' doesn't seem to have much use by itself, but sentences of the form ``If X happens, I will be aware of Y'' should be quite useful. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Miscellaneous} \label{sec:misc} % B1649 .R93 P65 1997 Here are some examples of awareness and considerations concerning awareness that don't yet fit the framework of the previous sections. I am slow to solve the problem because I waste time thinking about ducks. I'd like Mark Stickel's SNARK to observe, ``I'm slow to solve the problem, because I keep proving equivalent lemmas over and over''. I was aware that I was letting my dislike of the man influence me to reject his proposal unfairly. Here are some general considerations about what fluents should be used in making self-aware systems. 1. 
Observability. One can observe one's intentions. One cannot observe the state of one's brain at a more basic level. This is an issue of epistemological adequacy as introduced in
%\cite{McCHayes69}.
(McCarthy and Hayes 1969).

2. Duration. Intentions can last for many years, e.g. ``I intend to retire to Florida when I'm 65''. ``I intend to have dinner at home unless something better turns up.''

3. Forming a system with other fluents. Thus beliefs lead to other beliefs and eventually actions.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\section{Remarks and questions}
%\label{sec:remarks}

Is there a technical difference between observations that constitute self-observations and those that don't? Do we need a special mechanism for \emph{self-observation}? At present I don't think so.

If $p$ is a precondition for some action, it may not be in consciousness, but if the action becomes considered, whether $p$ is true will then come into consciousness, i.e. short term memory. We can say that the agent is \emph{subaware} of $p$.

What do programming languages provide for interrupts?

Comments on my talk:

$\bullet$ Brian Williams: space systems.

$\bullet$ Schubert: add causality extensions to first order logic; wants generalized quantifiers, like ``I often have lunch while working at the computer''.

Summary:

$\bullet$ capability for episodic memory, visualization, temporal reasoning

$\bullet$ retrieval of beliefs is very fast: how do I know I don't own property in Florida?

$\bullet$ links to language

$\bullet$ Thomason: wants to connect theory and engineering
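The awareness-to-action rule discussed above (Pat stops for cigarettes when the relevant fluents are all in his awareness) can also be read procedurally: fluents enter short term memory one by one, and an action fires once its awareness preconditions are jointly present. Here is a minimal illustrative sketch in Python; every name in it (`Agent`, `become_aware`, the fluent strings) is hypothetical and not part of the formalism above.

```python
# A toy agent whose actions are conditioned on what is currently
# in its "awareness" (short term memory), loosely mirroring the
# rule: Aware(driving) & Aware(needs cigarettes)
#       & Aware(about to pass store) -> stop at store.
# All identifiers are illustrative, not from the original notes.

class Agent:
    def __init__(self):
        self.awareness = set()   # fluents currently in awareness
        self.actions = []        # actions the agent has decided on

    def become_aware(self, fluent):
        """A fluent enters the agent's awareness; then deliberate."""
        self.awareness.add(fluent)
        self.deliberate()

    def deliberate(self):
        """Fire any rule whose awareness preconditions all hold."""
        preconditions = {"driving(Job)", "needs(Cigarettes)",
                         "about_to_pass(CigaretteStore)"}
        if preconditions <= self.awareness:
            self.actions.append("stop_at(CigaretteStore)")

pat = Agent()
pat.become_aware("driving(Job)")               # habitual background
pat.become_aware("needs(Cigarettes)")
pat.become_aware("about_to_pass(CigaretteStore)")
print(pat.actions)   # prints ['stop_at(CigaretteStore)']
```

The sketch also illustrates subawareness: `needs(Cigarettes)` has no effect on behavior until the remaining preconditions enter awareness, at which point deliberation brings the rule into play.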