\documentclass[12pt]{article}
\usepackage{html}
% NOT THE OFFICIAL VERSION, see /u/jmc/e06/freewill2.tex
\begin{document}
\bibliographystyle{alpha}
% Definition of title page:
\title{SIMPLE DETERMINISTIC FREE WILL}
% /u/jmc/s05/freewill-sli.tex
\author{John McCarthy}
\date{from May 16, 2002 until \today}
\maketitle
\begin{abstract}
A central feature of free will is that a person has choices among
alternative actions and chooses the action with the apparently most
preferred consequences.  In a deterministic theory, the mechanism that
makes the choice among the alternatives is itself deterministic.  The
sense of free will comes from the fact that the mechanism that
generates the choices uses a non-deterministic theory as a
computational device and that the stage at which the choices have been
identified is introspectable.  This treatment is based on work in
artificial intelligence (AI).

We present a theory of \emph{simple deterministic free will}
(\textbf{SDFW}) in a deterministic world.  The theory splits the
mechanism that determines action into two parts.  The first part
computes possible actions and their consequences.  The second part
then decides which action is most preferred and does it.

We formalize SDFW by two sentences in \emph{situation calculus}, a
mathematical logical theory often used in AI.  The situation calculus
formalization makes the notion of free will technical.  According to
this notion, almost no animal behavior exhibits free will, because
exercising free will involves \emph{considering the consequences of
alternative actions}.  A major advantage of our notion of free will is
that whether an animal has free will may be determinable by
experiment.

Some computer programs, e.g. chess programs, exhibit SDFW; almost all
programs do not.  At least SDFW seems to be required for effective
chess performance and also for human-level AI.

Many features usually considered as properties of free will are
omitted in SDFW.  That's what makes it simple.  The criterion for
whether an entity uses SDFW is not behavioristic but is expressed in
terms of the internal structure of the entity.
\end{abstract}

\section{The informal theory}
% /u/ftp/jmc/freewill.tex is the older paper.

Let the course of events, including events in my brain (or yours or
his or its), be deterministic.  It seems to many people that there is
then no place for free will.  Even our thoughts are determined.

However, if we examine closely how a human brain (or chess program)
deterministically makes decisions, free will (or imitation free will,
if your philosophy forbids calling it real free will) must come back
in.  Some deterministic processes consider alternative actions and
their consequences and choose the actions they think have the most
preferred consequences.  This deterministic decision process uses a
non-deterministic theory to present the set of available actions and
the consequences of each of them.

When a person, animal, or machine reacts directly to a situation
rather than comparing the consequences of alternative actions, free
will is not involved.  So far as I can see, no animals consider the
consequences of alternative actions; hence they don't have free will.
Others think that apes sometimes do compare consequences.  A relevant
experiment is suggested in section \ref{sec:apes}.  Using free will is
too slow in many situations, and training and practice often have the
purpose of replacing comparison of consequences by automatic reaction
to a situation.
We believe this simple theory covers the most basic phenomenon of
human free will.  We'll call it \emph{simple deterministic free will}
and abbreviate it SDFW.  Robots with human-level intelligence will
also require at least this much free will in order to be useful.

Beyond having free will, some systems are conscious of having free
will and communicate about it.  If asked to tell what it is doing, a
human or such a machine will tell about its choices for action and say
that it intends to determine which action leads to the best
consequence.  Such a report, whether given externally or contemplated
internally, constitutes the human sense and the human report of free
will.  SDFW does not require consciousness of having free will or the
ability to communicate about it.  That's what's simple about SDFW.
Thinking about one's free will requires theoretical structure beyond
or above SDFW.  So does considering actions as praiseworthy or
blameworthy.  SDFW also doesn't treat game theoretic situations in
which probabilistic mixed strategies are appropriate.

In AI research one must treat simple cases of phenomena,
e.g. intentional behavior, because full generality is beyond the state
of the art.  Many philosophers are inclined to consider only the
general phenomenon, but this limits what can be accomplished.  I
recommend to them the AI approach of doing the simplest cases first.

%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Situation calculus formulas for SDFW}

Artificial intelligence requires expressing this phenomenon formally,
and we'll do it here in the mathematical logical language of situation
calculus.  Situation calculus is described in \cite{McCHayes69},
\cite{Shanahan97}, \cite{Reiter01}, and, in the extended form used
here, in \cite{McC02}.  Richmond Thomason in \cite{Thomason03b}
compares situation calculus to theories of action in the philosophical
literature.

As usually presented, situation calculus is a non-deterministic
theory.  The equation
\begin{displaymath}
s' = Result(e,s)
\end{displaymath}
%
asserts that $s'$ is the situation that results when event $e$ occurs
in the situation $s$.  Since there may be many different events that
can occur in $s$, and the theory of the function $Result$ does not say
which occurs, the theory is non-deterministic.  Some AI jargon refers
to it as a theory with branching time rather than linear time.
Actions are a special case of events, but most AI work discusses only
actions.  Usually, there are some preconditions for the event to
occur, and then we have the formula
\begin{displaymath}
Precond(e,s) \rightarrow s' = Result(e,s).
\end{displaymath}
%
\cite{McC02} proposes adding a formula $Occurs(e,s)$ to the language
that can be used to assert that the event $e$ occurs in situation $s$.
We have
\begin{displaymath}
Occurs(e,s) \rightarrow (Next(s) = Result(e,s)).
\end{displaymath}

Adding occurrence axioms, which assert that certain events occur,
makes a theory more deterministic by specifying that certain events
occur in situations satisfying specified conditions.  In general the
theory will remain partly non-deterministic, but if there are
occurrence axioms specifying what events occur in all situations, then
the theory becomes deterministic, i.e. has linear time.
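To make the preceding notation concrete, here is a small Python sketch
of the same ideas.  It is only an illustration and is not part of the
formalism: the identifiers \verb|result|, \verb|precond|,
\verb|occurs|, and \verb|next_situation| are stand-ins for the logical
symbols above, and the toy door-opening domain is invented for the
example.  The point is that $Result$ and $Precond$ alone give a
branching, non-deterministic picture, while adding an occurrence rule
yields a single linear history.

\begin{verbatim}
# Illustrative sketch only; the domain and the names are invented here.
from typing import Callable, FrozenSet

Situation = FrozenSet[str]      # a situation as a set of fluent names
Event = str

# The non-deterministic part: what each event would do (Result) ...
EFFECTS: dict[Event, Callable[[Situation], Situation]] = {
    "OpenDoor":  lambda s: s | {"DoorOpen"},
    "CloseDoor": lambda s: s - {"DoorOpen"},
}

def precond(e: Event, s: Situation) -> bool:
    """Precond(e, s): may event e occur in situation s?"""
    return not (e == "OpenDoor" and "DoorOpen" in s)

def result(e: Event, s: Situation) -> Situation:
    """Result(e, s): the situation resulting when e occurs in s."""
    return EFFECTS[e](s)

# Branching time: several events may be possible in s, and the theory
# of Result alone does not say which of them occurs.
def successors(s: Situation) -> dict[Event, Situation]:
    return {e: result(e, s) for e in EFFECTS if precond(e, s)}

# An occurrence rule (an analogue of Occurs(e, s)) makes the theory
# deterministic: Next(s) = Result(e, s) for the event that occurs.
def occurs(s: Situation) -> Event:
    return "CloseDoor" if "DoorOpen" in s else "OpenDoor"

def next_situation(s: Situation) -> Situation:
    return result(occurs(s), s)

if __name__ == "__main__":
    s0: Situation = frozenset()
    print(successors(s0))        # branching, non-deterministic view
    print(next_situation(s0))    # linear, deterministic view
\end{verbatim}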
We can now give a situation calculus theory for SDFW illustrating the
role of a non-deterministic theory in determining what will
deterministically happen, i.e. by saying what choice a person or
machine will make.

In these formulas, lower case terms denote variables and capitalized
terms denote constants.  Suppose that $actor$ has a choice of just two
actions $a1$ and $a2$ that he may perform in situation $s$.  We want
to say that the event $Does(actor,a1)$ or $Does(actor,a2)$ occurs in
$s$ according to which of $Result(Does(actor,a1),s)$ or
$Result(Does(actor,a2),s)$ $actor$ prefers.

The formulas that assert that a person (actor) will do the action that
he, she or it thinks results in the better situation are
\begin{equation}
\label{eq:a1}
\begin{array}[l]{l}
Occurs(Does(actor,Choose(actor,a1,a2,s)),s),
\end{array}
\end{equation}
\noindent and
\begin{equation}
\label{eq:a2}
\begin{array}[l]{l}
Choose(actor,a1,a2,s) = \\
\textbf{if}\ Prefers(actor,Result(a1,s),Result(a2,s))\\
\textbf{then}\ a1\ \textbf{else}\ a2.
\end{array}
\end{equation}
% nonmonotonic reasoning

Adding (\ref{eq:a2}) makes the theory deterministic by specifying
which choice $actor$ makes.\footnote{(\ref{eq:a2}) uses a conditional
expression.  $\textbf{if}\ p\ \textbf{then}\ a\ \textbf{else}\ b$ has
the value $a$ if the proposition $p$ is true and otherwise has the
value $b$.  The theory of conditional expressions is discussed in
\cite{McC63}.  Conditional expressions are used in the Lisp, Algol 60,
Algol 68, and Scheme programming languages.}  Here
$\mbox{Prefers}(actor,s1,s2)$ is to be understood as asserting that
$actor$ prefers $s1$ to $s2$.

Here's a non-deterministic theory of greedy John.
%
\begin{equation}
\label{eq:a3}
\begin{array}[l]{l}
Result(A1,S0) = S1, \\
Result(A2,S0) = S2, \\
Wealth(John,S1) = \$2.0\times 10^6, \\
Wealth(John,S2) = \$1.0\times 10^6, \\
(\forall s\ s')(Wealth(John,s) > Wealth(John,s')\\
\quad\quad\quad \rightarrow \mbox{Prefers}(John,s,s')).
\end{array}
\end{equation}

As we see, greedy John has a choice of at least two actions in
situation $S0$ and prefers a situation in which he has greater wealth
to one in which he has lesser wealth.  From
(\ref{eq:a1})--(\ref{eq:a3}) we can infer
\begin{equation}
\label{eq:a4}
\begin{array}[l]{l}
Occurs(Does(John,A1),S0).
\end{array}
\end{equation}
For simplicity, we have omitted the axioms asserting that $A1$ and
$A2$ are exactly the actions available and the nonmonotonic reasoning
used to derive the conclusion.

I used just two actions to keep the formula for $Choose$ short.
Having more actions or even making $Result$ probabilistic or quantum
would not change the nature of SDFW.  A substantial theory of
$\mbox{Prefers}$ is beyond the scope of this article.

This illustrates the role of the non-deterministic theory of $Result$
within a deterministic theory of what occurs.  (\ref{eq:a2}) uses the
non-deterministic notion of $Result$ to compute which action leads to
the better situation.  (\ref{eq:a1}) is the deterministic part that
asserts which action occurs.
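As an illustration of how (\ref{eq:a1})--(\ref{eq:a3}) work together,
here is a small Python sketch of greedy John's choice.  It is not part
of the formalism; the representation of situations as dictionaries and
the function names are invented for the example.

\begin{verbatim}
# Illustrative sketch of formulas (1)-(3); names and data structures
# are invented for the example.
S1 = {"wealth": {"John": 2.0e6}}
S2 = {"wealth": {"John": 1.0e6}}

# The non-deterministic theory of Result: what each action would lead to.
RESULT = {("A1", "S0"): S1, ("A2", "S0"): S2}

def result(action, s_name):
    return RESULT[(action, s_name)]

def prefers(actor, sit1, sit2):
    """Greedy John prefers any situation in which he is wealthier."""
    return sit1["wealth"][actor] > sit2["wealth"][actor]

def choose(actor, a1, a2, s_name):
    """Formula (2): if Prefers(actor, Result(a1,s), Result(a2,s))
    then a1 else a2."""
    return a1 if prefers(actor, result(a1, s_name), result(a2, s_name)) else a2

# Formula (1): the action that occurs in S0 is the one Choose picks.
print(choose("John", "A1", "A2", "S0"))  # -> "A1", i.e. Occurs(Does(John,A1),S0)
\end{verbatim}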
We make four claims.
\begin{enumerate}
\item Effective AI systems, e.g. robots, will require identifying and
reasoning about their choices once they get beyond what can be
achieved with situation-action rules.  For example, chess programs
have always computed their choices and compared their consequences.

\item The above theory captures the most basic feature of human free
will.

\item $Result(a1,s)$ and $Result(a2,s)$, as they are computed by the
agent, are not full states of the world but elements of some
theoretical space of approximate situations the agent uses in making
its decisions.  \cite{McC00a} has a discussion of approximate
entities.  Part of the problem of building human-level AI lies in
inventing what kind of entity $Result(a,s)$ shall be taken to be.

\item Whether a human or an animal uses simple free will in a type of
situation is subject to experimental investigation---as discussed in
section \ref{sec:apes}.
\end{enumerate}

Formulas (\ref{eq:a1}) and (\ref{eq:a2}) illustrate $actor$ making a
choice.  They don't say anything about $actor$ knowing it has choices
or preferring situations in which it has more choices.  SDFW is
therefore a partial theory that requires extension when we need to
account for these phenomena.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{A generalization of SDFW}
\label{sec:gen}

We can generalize SDFW by applying preferences to actions rather than
to the situations resulting from actions.  The formulas then become
\begin{equation}
\label{eq:a7}
\begin{array}[l]{l}
Occurs(Does(actor,Choose\mbox{-}action(actor,a1,a2,s)),s)
\end{array}
\end{equation}
%
\noindent and
\begin{equation}
\label{eq:a8}
\begin{array}[l]{l}
Choose\mbox{-}action(actor,a1,a2,s) = \\
\textbf{if}\ Prefers\mbox{-}action(actor,a1,a2,s)\\
\textbf{then}\ a1\ \textbf{else}\ a2.
\end{array}
\end{equation}

(\ref{eq:a7}) and (\ref{eq:a8}) obviously generalize (\ref{eq:a1}) and
(\ref{eq:a2}), because the earlier case is obtained by writing
\begin{equation}
\label{eq:a9}
\begin{array}[l]{l}
Prefers\mbox{-}action(actor,a1,a2,s) \\
\quad\quad \equiv Prefers(actor,Result(a1,s),Result(a2,s)).
\end{array}
\end{equation}

I am doubtful about the generalization, because I don't see how to
represent commonsense preferences between actions except in terms of
preferring one resulting situation to another.

%A possibility is that the ability to compare actions, without
%necessarily comparing their consequences, may have evolved.  Thus
%evolution would have compared the consequences.

\section{Knowledge of one's free will and wanting more or fewer choices}
\label{sec:know}

This section is less worked out than basic SDFW and is not
axiomatized.  That's why it was best to start simple.

Here are some examples of it being good to have more choices.  ``I'll
take my car to work today rather than bicycling so I can shop on the
way home if I want to.''  ``If you learn mathematics, you will have
more choices of scientific occupations.''  ``The more money I have,
the more models of car I can choose from.''  ``If I escape from Cuba,
I will have more choice of what to read, what I can say or write, and
where to travel.''

%[formulas to come---maybe]

We want to say that situation $s1$ is at least as free as situation
$s2$, written $s1 \geq_{freedom} s2$, if every fluent (AI jargon for
what holds in a situation) achievable by a single action from $s2$ is
also achievable by a single action from $s1$:
\begin{equation}
\label{eq:morefree}
\begin{array}[l]{l}
s1 \geq_{freedom} s2 \\
\equiv \\
(\forall f) ((\exists a)(Holds(f, Result(Does(person,a),s2))) \\
\quad\quad\quad\quad \rightarrow \\
(\exists a)(Holds(f, Result(Does(person,a),s1)))).
\end{array}
\end{equation}
Here $f$ ranges over fluents.  Just as with equation (\ref{eq:a1}), we
can say that $person$ chooses an action that leads to more freedom in
the next situation.
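Here is a small Python sketch of the $\geq_{freedom}$ comparison of
(\ref{eq:morefree}).  Again it is only an illustration: the toy
fluents and actions, and the helper names, are invented for the
example.

\begin{verbatim}
# Illustrative sketch of the >=_freedom relation; the domain is invented.
def reachable_fluents(s, actions):
    """All fluents holding in some Result(Does(person, a), s)."""
    fluents = set()
    for a in actions:
        fluents |= a(s)
    return fluents

def at_least_as_free(s1, s2, actions):
    """Every fluent reachable by one action from s2 is reachable from s1."""
    return reachable_fluents(s2, actions) <= reachable_fluents(s1, actions)

# Toy domain: one can only shop on the way home if one took the car.
shop    = lambda s: s | {"Shopped"} if "HasCar" in s else s
go_home = lambda s: s | {"AtHome"}
actions = [shop, go_home]

took_car  = frozenset({"AtWork", "HasCar"})
took_bike = frozenset({"AtWork"})

print(at_least_as_free(took_car, took_bike, actions))  # True
print(at_least_as_free(took_bike, took_car, actions))  # False: "Shopped" unreachable
\end{verbatim}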
Having more choices is usually preferred.  However, one sometimes
wants fewer choices.  Burning one's bridges, nailing the flag to the
mast, and promising to love until death do us part are examples of
actions that reduce choices.  The conditions under which this occurs
are too difficult for me to formalize at present.  They can involve
fearing that one's future preferences might differ from one's present
preferences about future actions, or the fact that making a commitment
about one's future actions confers a present benefit.

%Perhaps we can regard making restrictions on future choices as a
%metachoice and can suitably formalize metachoices.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Philosophical issues}
\label{sec:phil}

The formalism of this paper takes sides in several philosophical
controversies.
\begin{enumerate}
\item It considers determinism and free will, as experienced and
observed by humans, to be compatible.  This is in accordance with the
views of Locke and Hume.%, as John Perry told me.
% /u/jmc/RMAIL.S02==974

\item It takes a third person point of view, i.e. considers the free
will of others and not just the free will of the observer.

\item It breaks the phenomenon of free will into parts and considers
the simplest systems first---in contrast to approaches that demand
that all complications be understood before anything can be said.  In
this it resembles the approaches to belief and other intentional
states discussed in \cite{Dennett71}, \cite{Dennett78}, and
\cite{McC79a}.  Starting with simple systems is the practice in AI,
because only what is understood can be implemented in computer
programs.
\end{enumerate}

It seems to me that formulas (\ref{eq:a1}) and (\ref{eq:a2}),
expressing the use of the branching-time $Result(e,s)$ function in
determining what events occur, make the philosophical ideas definite.
Thus we can see which modifications of the notions are compatible with
(\ref{eq:a1}) and (\ref{eq:a2}), and which require different axioms.

%Simple deterministic free will won't satisfy many philosophical,
%theological, and other advocates of free will.  However, it is enough
%to account for behavior involving making choices and for the feeling
%that one has choices.

The process of deciding what to do often involves considering a pruned
set of actions from which those with obviously bad consequences have
been eliminated.  The remaining actions are those that one \emph{can}
do.  When someone refers to a pruned action, one sometimes gets the
reply, ``Oh, I could do that, but I really can't, because \ldots .''

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Praise and blame}
\label{sec:praise}

We have maintained that the basic notion of free will is the same for
humans, animals and robots.  Praising or blaming agents for their
actions is an advanced notion requiring more structure, e.g. notions
of good or bad actions or outcomes.  Blaming or praising humans
requires taking into account human peculiarities not shared with
agents in general, e.g. robots.

Consider the verdict \emph{``Not guilty by reason of insanity''} as
applied to a person with schizophrenia.  Schizophrenia is basically a
disease of the chemistry of the blood and nervous system.  At a higher
level of abstraction, it is regarded as a disease in which certain
kinds of thoughts enter and dominate consciousness.  A patient's
belief that the CIA has planted a radio in his brain is relieved by
medicines that change blood chemistry.  If the patient's belief caused
him to kill someone whom he imagined to be a CIA agent, he would be
found not guilty by reason of insanity.

If we wanted robots susceptible to schizophrenia, we would have to
program in something like schizophrenia, and it would be a complicated
and unmotivated undertaking---unmotivated by anything but the goal of
imitating human schizophrenia.
The older M'Naghten criterion, ``unable to understand the nature and
consequences of his acts'', uses essentially the criteria of the
present article for assessing the presence or absence of free will.

I don't know if all praise or blame for robots is artificial; the
matter requires more thought.  Verbally one might praise a robot as a
way of getting it to do more of the same.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{A possible experiment with apes}
\label{sec:apes}

Here's a \emph{gedanken} experiment aimed at determining whether apes
(or other animals) have free will in the sense of this article.  The
criterion is whether they consider the consequences of alternative
actions.

The ape can move a lever either to the left or the right.  The lever
causes a prize to be pushed off a shelf, either to the left or the
right.  The prize then hits a baffle and is deflected either to the
ape in control of the lever or to a rival ape.  On each trial, the
baffle is set by the experimenter.  The whole apparatus is visible to
the ape, so it can see the consequences of each choice.  The free will
involves the ape having two choices and being able to determine the
consequences of each choice.

There is a possibility that the ape can win without determining the
consequences of the possible actions.  It may just learn a rule
relating the position of the baffle and the action that will get the
prize.  Maybe we wouldn't be able to tell whether the ape predicted
the consequences or not.  We can elaborate the experiment to obviate
this difficulty.

Let there be a sequence of (say) six baffles that are put in a
randomly selected configuration by the experimenter or his program at
each trial.  Each baffle deflects the prize one way or the other
according to how it is set.  If the ape can mentally follow the prize
as it would bounce from baffle to baffle, it will succeed.  However,
there are 64 combinations of baffle positions.  If a training set of
(say) 32 combinations permits the ape to do the remaining 32 without
further trial and error, it would be reasonable to conclude that the
ape can predict the effects of the successive bounces.  I hope someone
who works with apes will try this or a similar experiment.

Frogs are simpler than apes.  Suppose a frog sees two flies and can
stick out its tongue to capture one or the other.  My prejudice is
that the frog doesn't consider the consequences of capturing each of
the two flies but reacts directly to its sensory inputs.  My prejudice
might be refuted by a physiological experiment.  Suppose first that
frogs can taste flies, i.e. when a frog has a fly in its mouth, an
area of the frog's brain becomes active in a way that depends on the
kind of fly.  Suppose further that when a frog sees a fly, this area
becomes active, perhaps weakly, in the same way as when the frog has
the fly in its mouth.  We can interpret this as the frog imagining the
taste of the fly that it sees.  Now further suppose that when the frog
sees two flies, it successively imagines their tastes and chooses one
or the other in a consistent way depending on the taste.  If all this
were demonstrated, I would give up my prejudice that frogs don't have
SDFW.
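To make the multi-baffle version of the experiment described above
concrete, here is a small Python simulation sketch.  The physical
model---each baffle either preserves or flips the prize's left/right
direction---is an assumption made only for this illustration, not part
of the proposal.  The point is that an agent that predicts the
consequence of each lever setting needs no trial and error on new
baffle configurations, whereas an agent relying on learned
stimulus-response rules would have to learn the 64 cases.

\begin{verbatim}
# Illustrative simulation only; the baffle physics is an assumed toy model.
import itertools

def prize_side(lever, baffles):
    """0 = prize reaches the chooser, 1 = it reaches the rival."""
    side = lever
    for b in baffles:
        side ^= b              # a set baffle flips the direction
    return side

def sdfw_choice(baffles):
    """Predict the consequence of each lever setting and do the one
    whose consequence is preferred (the prize comes to me)."""
    for lever in (0, 1):
        if prize_side(lever, baffles) == 0:
            return lever
    return 0

configs = list(itertools.product((0, 1), repeat=6))   # all 64 settings
held_out = configs[32:]                               # never seen before
assert all(prize_side(sdfw_choice(b), b) == 0 for b in held_out)
print("held-out configurations solved without trial and error:", len(held_out))
\end{verbatim}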
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Dennett on complexity
\section{Comparison with Dennett's ideas}
\label{sec:dennett}
% Drescher too /u/jmc/e05/drescher

Daniel Dennett \cite{Dennett03} writes about \emph{the evolution of
freedom}.  I agree with him that free will is a result of evolution.
It may be based on a more basic ability to predict something about the
future that will result from the occurrence of certain events,
including actions.

He compares \emph{determinism} and \emph{inevitability}, and makes
definitions so that in a deterministic world, not all events that
occur are inevitable.  He considers that freedom evolves in such a way
as to make more and more events \emph{evitable}, especially events
that are bad for the organism.

Dennett's ideas and those of this paper are in the same direction and
somewhat overlap.  I think SDFW is simpler, catches the intuitive
concepts of freedom and free will better, and is of more potential
utility in AI.

Consider a species of animal with eyes but without a blink reflex.
Every so often the animal will be hit in the eye and suffer an injured
cornea.  Now suppose the species evolves a blink reflex.  Getting hit
in the eye is now often evitable in Dennett's sense.  However, it is
not an exercise of free will in my sense.\footnote{Dennett (email of
2003 Feb 27) tells me that the blink reflex involves no significant
free will in his sense.}  On the other hand, deciding whether or not
to go through some bushes where there was a danger of getting hit in
the eye, on the basis of weighing the advantages against the dangers,
would be an exercise of free will in my sense.  It would also be an
evitability in Dennett's sense.

Evitability assumes that there is a normal course of events, some of
which may be avoided, e.g. that getting hit in the eye is normal and
is avoided by the blink reflex.  My notion of free will does not
involve this, because the choice between actions $a1$ and $a2$ is
symmetric.  It is interesting to ask when there are normal events that
can sometimes be avoided.
%This should be rather common, but we may
%need some axioms involving $Normally\mbox{-}occurs(e,s)$.

The converse of an evitability is an opportunity.  Both depend on a
distinction between an action and non-action.  In the case of
non-action, nature takes its course.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Summary and remarks}
\label{sec:remarks}

A system operating only with situation-action rules, in which an
action in a situation is determined directly from the characteristics
of the situation, does not involve free will.  Much human action and
almost all animal action reacts directly to the present situation and
does not involve anticipating the consequences of alternative actions.
One of the effects of practicing an action is to remove deliberate
choice from the computation and to respond immediately to the
stimulus.  This is often, but not always, appropriate.

Human free will, i.e. considering the consequences of action, is
surely the product of evolution.

Do animals, even apes, ever make decisions based on comparing
anticipated consequences?  Almost always, no.  Thus when a frog sees a
fly and flicks out its tongue to catch it, the frog is not comparing
the consequences of catching the fly with the consequences of not
catching the fly.
%Even if there are two flies and it catches one
%after a hesitation, it still isn't comparing consequences of catching
%one fly or the other.
%Ranaism.

One computer scientist claims that dogs (at least his dog) consider
the consequences of alternative actions.  I'll bet the proposition can
be tested, but I don't yet see how.  According to Dennett (phone
conversation), some recent experiments suggest that apes sometimes
consider the consequences of alternative actions.  If so, they have
free will in the sense of this article.
Even if apes do not ordinarily compare consequences, perhaps they can
be trained to do so.

Chess programs do compare the consequences of various moves, and so
have free will in the sense of this article.  Present programs are not
conscious of their free will, however.  \cite{McC96} discusses what
kinds of consciousness computer programs need.

People and chess programs carry thinking about choice beyond the first
level.  Thus ``If I make this move, my opponent (or nature regarded as
an opponent) will have the following choices, each of which will give
me further choices.''  Examining such trees of possibilities is an
aspect of free will in the world, but the simplest form of free will
in a deterministic world does not involve branching more than once.

Daniel Dennett \cite{Dennett78} and \cite{Dennett03} argue that a
system's having free will depends on its being complex.  I don't
agree, and it would be interesting to design the simplest possible
system exhibiting deterministic free will.  A program for tic-tac-toe
is simpler than a chess program, but the usual program does consider
choices.  However, the number of possible tic-tac-toe positions is
small enough that one could make a program with the same external
behavior that just looked up each position in a table to determine its
move.  Such a program would not have SDFW.  Likewise, Ken Thompson has
built chess programs for endgames with five or fewer pieces on the
board that use table lookup rather than look-ahead.  See
\cite{Thompson86}.  Thus whether a system has SDFW depends on its
structure and not just on its behavior.
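The structural distinction can be made concrete with a toy game even
simpler than tic-tac-toe.  The sketch below is only an illustration
(the game and all names are invented for it): two players alternately
take one or two stones, and whoever takes the last stone wins.  A
look-ahead player computes the result of each move and prefers a move
whose resulting position is lost for the opponent; a table player
makes exactly the same moves by lookup.  Their external behavior is
identical, but only the first considers consequences and so only the
first has SDFW.

\begin{verbatim}
# Illustrative sketch only; the game and names are invented here.
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones):
    """True if the player to move can force a win with `stones` left."""
    return any(t <= stones and not wins(stones - t) for t in (1, 2))

def lookahead_move(stones):
    """SDFW-style choice: examine the result of each move and prefer a
    resulting position in which the opponent cannot win."""
    for t in (1, 2):
        if t <= stones and not wins(stones - t):
            return t
    return 1                     # every move loses; take one stone anyway

# The behaviorally equivalent player: the same choices compiled in
# advance into a situation-action table, with no look-ahead at play time.
TABLE = {n: lookahead_move(n) for n in range(1, 22)}

def table_move(stones):
    return TABLE[stones]

assert all(table_move(n) == lookahead_move(n) for n in range(1, 22))
print("identical behavior; only the look-ahead player computes consequences")
\end{verbatim}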
Beyond five pieces, direct lookup in chess is infeasible, and all
present chess programs for the full game use look-ahead, i.e. they
consider alternatives for themselves and their opponents.  I'll
conjecture that successful chess programs must have at least SDFW.

This is not the only matter in which quantitative considerations make
a philosophical difference.  Thus whether the translation of a text is
indeterminate depends on the length of the text.

Systems with SDFW that are simpler than tic-tac-toe programs are
readily constructed.  The theory of greedy John formalized by
(\ref{eq:a3}) may be about as simple as possible while still involving
free will.  Essential to having any kind of free will is knowledge of
the consequences of one's possible actions and choosing among them.

In many environments, animals with at least SDFW are more likely to
survive than those without it.  This seems to be why human free will
evolved.  When and how it evolved, as with other questions about
evolution, won't be easy to answer.

Gary Drescher \cite{Drescher91} contrasts situation-action laws with
what he calls the \emph{prediction-value paradigm}.  His
prediction-value paradigm corresponds approximately to the
deterministic free will discussed in this article.  I thank Drescher
for showing me his forthcoming \cite{Drescher06}.  His notion of
\emph{choice system} corresponds pretty well to SDFW, although it is
embedded in a more elaborate context.

This article benefited from discussions with Johan van Benthem, Daniel
Dennett, Gary Drescher, and John Perry.  The work was partly supported
by the Defense Advanced Research Projects Agency (DARPA).

\bibliography{/u/jmc/1/biblio}
% Its full name is /u/jmc/1/biblio.bib
%\bibliography{biblio}
% Tom worked some magic.
% for use with Alpha on Powerbook 3400

\vfill
\begin{latexonly}
{\tiny\rm\noindent
/@steam.stanford.edu:/u/ftp/jmc/freewill2.tex: begun Thu May 16, 2002,
latexed \today}
\end{latexonly}

\end{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

hide-and-seek

Metachoices: I chose to have no choice in this matter.

@Article{ObhiHaggard04,
  author =       {Sukhvinder S. Obhi and Patrick Haggard},
  title =        {Free Will and Free Won't},
  journal =      {American Scientist},
  year =         {2004},
  volume =       {92},
  pages =        {358--365},
  month =        {July-August},
  note =         {},
}

sobhi@uwo.ca

Check Drescher

Emphasize utility of fw

discuss formalization of evitability in Dennett section

What good is free will?  Just so story.

Whether a system has free will depends on where you cut.

Much choice among alternatives is done in advance and compiled into
situation-action rules, e.g. the route to drive to the store.

regrets about past choices, desire to change Prefers

The multi-worlds interpretation of quantum mechanics doesn't change
matters.  The worlds don't especially branch on human decisions; they
branch all the time.  A human's or computer's deterministic choice
mechanism is just as reliable as any other deterministic mechanism.

%%% John Perry suggestion

I think a bit of reorganization would help this paper.  I would start
with something like

Locke, Hume and other philosophers develop an account of free will (or
at least freedom) that is compatible with determinism about human
actions.  Call this Basic Free Will.  Basic free will requires that an
agent's action is caused by the agent's preferences between
alternative results of alternative actions.  The causation involved
may be, but need not be, deterministic.  My view is that humans have
basic free will, that their having it can be explained evolutionarily,
that it is a good thing that we have it, and that our sense of being
free is based on our experience of contemplating the results of
different actions and making such preference guided choices.  In this
paper I show how to formalize basic free will with formalisms
developed in AI, and argue that ...

(Then incorporate some of the material from section 5 into this
introduction.)

(And material from the abstract and the present section 1.)

%\subsection{What kind of an entity is $Result(a,s)$?}
%\label{sec:whatis}

%According to (\ref{eq:a1}) and (\ref{eq:a2}) $person$
%computes $Result(a1,s)$ and $Result(a2,s)$ and computes which is
%better.  If (\ref{eq:a1}) is to describe human or robotic choice,
%the computation must be feasible.

%In the original proposals \cite{McCHay69} for situation calculus, a
%situation was a complete snapshot of the world at an instant.  One
%could not know situations---one could only know facts (fluents) of
%situations.  For present purposes we don't want snapshots of the
%world, because it is indefinite what event an action like
%$Move(Block1,Top(Block2))$ corresponds to in a given state of the
%world.  Is $Block1$ moved slowly or rapidly?

%Therefore, the situations of our theory must be approximate
%entities.  Thus $Occurs(Move(Block1,Top(Block2)),s)$ should result
%in a definite approximate situation
%$Result(Move(Block1,Top(Block2)),s)$.

%However, we don't always want to take our approximate situations as
%elements of a fixed set of finitely describable objects.  That's okay
%for a game like chess, but human-level AI often requires something
%more general.  We would like to allow for the possibility that
%unexpected events may occur, e.g. $Block1$ falling off $Block2$.
%%[more to come] %%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%% %A human (and maybe a dog) has a notion of what he ought to do. This %has both social and individual aspects and general and specific %aspects. In general I ought not to drink so much coffee, and if I do %drink coffee, I ought not put so much sugar in it. Perhaps I %ought to pour out the cup of sugared coffee sitting on my desk. These %oughts are individual rather than social. %I ought to read my student's paper so I can talk to him about it this %afternoon. This is a specific ought, but in general I should try to %understand better my students' states of mind. %Humans develop this notion of ought from a very early age, maybe 2 %years old or maybe 4. One's idea of what one ought to do or ought not %to do affects one's actions, but does not determine it. Sometimes it %is mere weakness of will that prevents us from doing what we ought, %but sometimes it is a rejection of a criterion seen as imposed from %without. %We need to distinguish what an agent ought to accomplish from the %actions it intends to achieve that result. %%%%%%%%%%%%%%%%%%%%%% % Contrariwise, the anthropologist John F. Hoffecker \cite{Hoffecker02} % holds that only modern humans could consider the consequences of % alternative actions. The Neanderthals couldn't and neither could the % first humans, modern in body form. Only when modern humans settled % ice age Eastern Europe did what Hoffecker calls \emph{symbol-based % technology} develop. %%%%%%%%%%%%%%%%%% %\section{No free will after all?} %\label{sec:nofree} %We can make a formalism in which even the comparison of alternatives %is regarded as just an immediate reaction to the situation. We do it %by \emph{externalizing} the facts about the choices available. %Compare the following to cases in which the actor chooses a hotel %room. In one case, he goes into both rooms, comes out and then %decides. In the other case he decides on the basis of the %descriptions of the rooms. We can reduce the second case to the first %by \emph{externalizing} the facts about the room, i.e. regarding that %aspect of the brain or the computer memory as external to the decision %process. In either case, however, their is a need to compare %alternatives which isn't simply a reaction to a situation. %Maybe Brooksians or telereactors would accept writing a program to %choose a hotel room as a challenge.