\newpage
\subsection{Explicit Minimization}

A common mode for organizing computation is this. In one phase, all agents post whatever constraints they want on the variables of interest. On quiescence it is desired to determine {\em minimal} values for a particular variable, while keeping the other variables in the store constant or varying --- perhaps {\em the} minimal one (if it is unique), perhaps {\em some} arbitrary one (if there are many minimal ones; this is sometimes called a ``credulous'' choice), perhaps the logical disjunction of all the minimal ones (this is sometimes called a ``skeptical'' choice). Various kinds of minimization procedures proposed for logic programming and for default reasoning (e.g., varieties of circumscription) fall into this general paradigm.

The \tcc{} model makes possible the specification of ``safe'' versions of such primitive processes: it makes sense to perform the minimization only on quiescence of the current phase (the minimization operation is not monotone in its argument, and hence cannot be run while its argument is being accumulated); hence the new constraint may be added to the store only at the {\em next} instant, and hence in a way that structurally cannot interact with the process of minimization itself.

This leads to the addition of a class of minimization agents to the language. Because of the variability mentioned above, many such agents are definable and interesting. Assume $min(X,c)$ is such a minimization operation --- it returns a constraint minimal in $X$ given the information $c$, per some chosen definition of minimization. Then we may define the basic process by:
\begin{eqnarray*}
  \LL{\tt minimize(X)} &=& \{\epsilon\} \cup \{c \in \Obs\} \cup \{c\cdot d \cdot s \in \Obs \alt d \geq min(X,c)\}
\end{eqnarray*}
Of course, once the basic agent is introduced into the language, it may be combined with all the other combinators in the language in completely general ways.

Two concrete examples should illustrate the matter further. Consider the {\em histogram} problem studied by researchers working on determinate data-flow languages. The problem is to efficiently determine the histogram associated with a given input function $f$: that is, to determine, for each point $x$ in the range of $f$, the number of points in the domain mapped to $x$. The following simple algorithm is expressible in \tgentzen{} augmented with a suitable minimization operation: examine the tuples {\tt (Index, Value)} in $f$ in parallel, asserting that {\tt Index} belongs to a set associated with {\tt Value}. On quiescence, minimize the extension of these sets, and take their cardinality. Concretely:
\begin{ccprogram}
\agent histogram(In, Hist) :: (Temp)\Hat \[(Index, Value)\$((Index, Value)\ in\ In -> \{Index\ in\ Temp.Value\}), minimize(Temp), next\ sizeof(Temp, Hist)\]..
\end{ccprogram}
{\tt minimize(X)} is an example of an ``indexical'' constraint \cite{cc-FD} --- a function from constraints to constraints that is not necessarily monotone. Obviously, several other constraint-system-specific indexicals are possible in this vein.

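
To make the accumulate-then-minimize structure of {\tt histogram} concrete for readers unfamiliar with the \tcc{} notation, the following Python sketch mimics the two phases; it is only an illustration, not a translation of the agent above, and the names ({\tt histogram}, {\tt temp}) are ours. Membership assertions are accumulated monotonically in the first phase; only at quiescence are the sets treated as minimal and their cardinalities read off.
\begin{verbatim}
from collections import defaultdict

def histogram(f):
    # Phase 1: for each (index, value) tuple, assert that index belongs
    # to the set associated with value (monotone accumulation, analogous
    # to asserting {Index in Temp.Value}).
    temp = defaultdict(set)
    for index, value in f:
        temp[value].add(index)
    # Quiescence: the sets are now taken to be minimal (closed off), so
    # their cardinalities can be read safely, as sizeof does at the
    # next instant.
    return {value: len(members) for value, members in temp.items()}

# Example: f maps 0 -> a, 1 -> b, 2 -> a.
print(histogram([(0, "a"), (1, "b"), (2, "a")]))   # {'a': 2, 'b': 1}
\end{verbatim}
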

The second concrete example is taken from work on model composition in Qualitative Reasoning and is representative of the kind of default inferencing employed therein. While building component models, one may only be able to specify very incomplete information about the relationship between some ``global'' variable $X$ and some variable $V$ corresponding to the component. For example, one may merely know that $X$ varies monotonically with $V$ (assuming that both vary over some ordered set, such as the reals). This can, of course, be achieved by asserting the constraint that the partial derivative of $X$ with respect to $V$ is positive. However, once the system structure is completely determined, and the constraints between $X$ and the ensemble of component variables $V_1, \ldots, V_n$ have been established, another piece of information can be deduced, namely that $X$ is a function of {\em just} the inputs $V_1, \ldots, V_n$. Structurally, this is a safe default inference because no inference licensed by this deduction should affect the applicability of the default: once the structure of the system has been determined, it has been determined. That is, the default depends only on predications that are going to remain stable for the rest of the deduction and that are causally independent of the consequences of the conclusion of the default. Such a safe default rule can clearly be expressed directly in \tgentzen{} augmented with minimization predicates. Indeed, the basic notion of {\em staged computation} underlying \tcc{} is in complete harmony with the ideas of staged model composition underlying this work in Qualitative Reasoning.

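
The staged character of this default can be sketched in the same style. The Python fragment below illustrates only the timing discipline, not the \tcc{} encoding; the name {\tt compose\_model} and the constraint tuples are hypothetical. Monotonicity constraints relating $X$ to component variables are accumulated in one phase; at quiescence, when the structure is fixed, the closed-world default concludes that each $X$ is a function of exactly the recorded variables.
\begin{verbatim}
from collections import defaultdict

def compose_model(constraints):
    # Phase 1: accumulate structural constraints of the (hypothetical)
    # form ("monotone", X, V), stating that X varies monotonically in V.
    depends = defaultdict(set)
    for kind, x, v in constraints:
        if kind == "monotone":
            depends[x].add(v)
    # Quiescence: the structure is now fixed, so the default is safe:
    # conclude that each X is a function of just the recorded V's,
    # with the conclusion taking effect at the next instant.
    return {x: ("function-of", sorted(vs)) for x, vs in depends.items()}

# Example: X is monotone in V1 and V2, hence X = f(V1, V2) by default.
print(compose_model([("monotone", "X", "V1"), ("monotone", "X", "V2")]))
\end{verbatim}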