%% Things to explain still. (VJ) Mon Dec 2 21:41:07 1991
%% Why upper cardinality terms are not allowed to have B.
%% Why B has what it does. e.g. why no Asks, existential quantification
%% or disjunction? i.e. why is a B not an A (other than recursion).
%% The reference to ``Biology'' P.9 should be much more
%% specific.
%% Define the conditions for solving an indexical constraint.
%% State the theorem.
%% Formulate and establish the theorem connecting operational to
%% denotational semantics.
%% Formulate the proof system for programs, and the inequational
%% axiomatization.
%% Define and use an indexical constraint which specifies how
%% many of a given set of constraints are entailed by the store,
%% and how many are rejected by the store.
%% Clean up introduction.
%% Reference to Williams and Reid Simmons' constraint systems in
%% the Conclusions section. Is the idea of constraints on
%% Herbrand terms
%% (rather than just variables) worth putting in the language?
\documentstyle{article}
\def\IN{\;\mbox{{\em in}}\;}
\def\PP{\withmath{{\cal P}_I}}
\makeatletter
{\@definecounter{MetaNote}\@addtoreset{MetaNote}{section}%
\expandafter\xdef\csname theMetaNote\endcsname{\expandafter\noexpand
\csname thesection\endcsname \@thmcountersep \@thmcounter{MetaNote}}%
\global\long\def\MetaNote#1{{\@thm{MetaNote}{MetaNote} #1\@endtheorem}}}
{\@definecounter{TechComm}\@addtoreset{TechComm}{section}%
\expandafter\xdef\csname theTechComm\endcsname{\expandafter\noexpand
\csname thesection\endcsname \@thmcountersep \@thmcounter{TechComm}}%
\global\long\def\TechComm#1{{\@thm{TechComm}{Note to Reviewer:} #1\@endtheorem}}}
%% In case you want Meta-notes not to appear in the printed
%% text. e.g. in final version, remove the following
%% signs:
%% \long\def\MetaNote#1{\relax}
\makeatother
\setlength{\oddsidemargin}{0in}
\setlength{\evensidemargin}{0in}
\setlength{\textwidth}{6.5in}
\setlength{\topmargin}{0in}
\setlength{\headsep}{0.5in}
\setlength{\textheight}{8.5in}
\hbadness=10000
\newtheorem{specification_int}[definition_int]{Specification}
\newenvironment{specification}{\begin{specification_int}\begin{rm}}%
{\end{rm}\end{specification_int}}
\def\up{\uparrow}
\def\imply{\; \Rightarrow \;}
\def\exist{(\exists) \;}
\def\mand{\wedge}
\def\mor{\vee}
\def\cor{\; \nabla \;}
\def\mi{\mbox{-}}
\def\tellt{tell \mi true}
\def\tellf{tell \mi false}
\def\askf{ask \mi false}
\def\askt{ask \mi true}
%\def\pp{$<$}
%\def\ppe{$\leq$}
%\def\pge{$\geq$}
%\def\pg{$>$}
%\def\dif{$\neq$}
\def\ccfd{{\tt cc(FD)}}
\def\ap{$\in$}
\def\al{$\alpha$}
\def\be{$\beta$}
\def\an{$\wedge$}
\def\oal{$\overline{\alpha}$}
\def\obe{$\overline{\beta}$}
\def\sd{$\overline{D}$}
\def\sud{$\overline{D_1}$}
\def\sdd{$\overline{D_2}$}
\def\ox{$\overline{x}$}
\def\oy{$\overline{y}$}
\def\oz{$\overline{z}$}
\input{vijay-macros}
\def\dom{\mbox{{\em dom}}}
\def\max{\mbox{{\em max}}}
\def\min{\mbox{{\em min}}}
\begin{document}
\Large
\noindent {\bf Constraint Processing in \ccfd}
\normalsize
\\ \\
\noindent {\bf Pascal Van Hentenryck}\footnote{
Brown University, Box 1910, Providence, RI 02912,
Email: pvh@cs.brown.edu},
{\bf Vijay Saraswat}\footnote{Xerox Palo Alto Research Center,
3333 Coyote Hill Road, Palo Alto, CA 94304,
Email: saraswat@parc.xerox.com},
{\bf Yves Deville}\footnote{ Universit\'e Catholique de Louvain, Pl.
Ste Barbe, 2 B-1348 Louvain-La-Neuve, Belgium,
Email: yde@info.ucl.ac.be}
% MATHEMATICS ARE FLUSHED LEFT
\section*{Abstract}

Constraint logic programming languages such as CHIP
\cite{Vanhentenryck89a,Dincbas88f} have demonstrated the practicality
of declarative languages supporting consistency techniques and
nondeterminism. Nevertheless they suffer from the {\em black-box
effect}: the programmer must work with a monolithic, unmodifiable,
inextensible constraint-solver. This problem can be overcome within
the logically and computationally richer concurrent constraint (\cc)
programming paradigm \cite{Saraswat89}. We show that some basic
constraint-operations currently hardwired into constraint-solvers can
be abstracted and made available as combinators in the programming
language. This allows complex constraint-solvers to be decomposed
into logically clean and efficiently implementable \cc{} programs
over a much simpler constraint system. In particular, we show that
the CHIP constraint-solver can be simply programmed in \ccfd, a \cc{}
language with an extremely simple built-in constraint solver for
finite domains.

%%\input{intro.tex}
\section{Introduction}

The purpose of our research is to design programming languages that
support the solution of various combinatorial search problems arising
in areas like combinatorial optimization, artificial intelligence,
natural language processing, and hardware design. Our goal is to
extract some of the fundamental techniques used in those areas and to
support them inside programming languages in order to speed
development time, allow rapid prototyping, and ease modifiability.

One aspect of our work has focused on the support of consistency
techniques, a paradigm emerging from Artificial Intelligence, inside
constraint logic programming. Consistency techniques have been used
in a number of systems (e.g.{}
\cite{Lauriere78,Fikes68,Mackworth77,mackworth}) in order to reduce
the search space by removing combinations of values that cannot
appear together in a solution. Constraint Logic Programming (CLP) is
a new class of declarative languages which generalizes Logic
Programming by replacing unification with constraint solving
\cite{lassez-const}. Such languages are attractive for solving
combinatorial search problems since their nondeterminism makes them
adequate for stating various search procedures and their relational
form makes it easy to state constraints. The support of consistency
techniques in conjunction with nondeterminism is particularly
appealing (as noted by Mackworth in \cite{Mackworth77}) as it frees
the programmer from implementing both tree-search and constraint
propagation.

The feasibility and practicality of a declarative language supporting
both consistency techniques and nondeterminism were demonstrated in
the CHIP system \cite{Vanhentenryck89a,Dincbas88f}. Solutions for a
large variety of combinatorial problems --- test-generation
\cite{Simonis89a,reportappli}, car-sequencing \cite{Dincbas88c},
microcode labelling \cite{Dincbas90}, scheduling and cutting stock
\cite{Vanhentenryck89a} --- were developed quickly, and were
comparable in efficiency (within a constant factor) to specialized
algorithms using the same approach. CHIP supported consistency
techniques mainly by providing a number of primitive constraints,
such as equations, inequalities, and disequations involving variables
ranging over finite subsets of natural numbers.

Though successful, CHIP has a number of limitations, the most
important being its lack of extensibility.
The programmer has to recast non-primitive constraints in terms of
existing constraints, is unable to define new primitive constraints,
and is unable to cope adequately with disjunctions of constraints or
the definition of incomplete constraint solvers. As a result,
non-primitive constraints are often recast in terms of more basic
variables (e.g.{} Boolean variables) and/or are used to make choices.
Both alternatives decrease pruning, increase the search space, and
entail a substantial loss in efficiency compared to procedural
languages. The problem is not specific to CHIP but is present in all
CLP languages, which are based on a black-box constraint solver that
the programmer cannot extend or modify.

\paragraph{The glass-box approach.}
We propose to remove the above limitation by taking a different
approach. Our thesis is that consistency techniques can be supported
in a much more fundamental way by providing general-purpose,
efficiently implementable, declarative operations in the programming
language that capture essential algorithmic techniques embedded in
the implementation of such systems. To support this thesis, we
introduce four new constraint language ideas: (1)~{\em indexical
constraints}, related to a restricted use of universal
quantification, (2)~{\em cardinality operators} with indexical
bounds, related to the threshold operator of propositional logic
\cite{cardinality}, (3)~{\em constructive disjunction}, related to
intuitionistic disjunction, and (4)~{\em extended Asks}, related to
intuitionistic implication. We instantiate these ideas in the
programming language \ccfd{}, incorporating a very simple constraint
system over finite domains, {\tt FD}. Through these combinators, we
show that \ccfd{} supports the basic reasoning principles implemented
in systems such as ALICE \cite{Lauriere78}, REF-ARF \cite{Fikes68},
and CHIP in a very simple and flexible way. Further, we briefly
discuss a prototype implementation of \ccfd{} that supports our
contention that the language can be implemented extremely
efficiently.

The main advantages of our approach are generality, extensibility,
efficiency and semantic simplicity. The constructs introduced are
{\em general-purpose}: they are definable on very general notions of
constraint systems \cite{Saraswat91a}, including rational arithmetic,
Booleans and first-order terms. They are powerful enough that the
underlying constraint-solver can be extremely simple (e.g., it may
provide only unary constraints). The additional functionality
provided by more powerful constraint-solvers can be obtained by means
of user-level programs. This significantly reduces the black-box
effect. The programmer can define his own constraints (to the extent
of the reasoning principles supported) without sacrificing the
declarative reading of the programs. Since the embedded
constraint-solver is extremely simple, it can be very efficiently
implemented. The combinators themselves are sufficiently
``low-level'' so as not to introduce overhead (in time or space)
compared to a procedural implementation of the reasoning principles
supported. Finally, the abstract semantics of these combinators can
be presented in a very simple way, within the concurrent constraint
(\cc{}) programming framework
\cite{Saraswat89,Saraswat90a,Saraswat91a}. In this framework,
computation progresses through the interaction of agents that
communicate with each other via a shared store that is itself a
constraint.
The basic operations are {\em Tell} (add a constraint to the store)
and {\em Ask} (check that a constraint is entailed by the store).
\cite{Saraswat89,Saraswat90a,Saraswat91a} also discuss several other
combinators such as parallel composition, hiding, and backtracking.
Logically, the Ask operation corresponds to a primitive form of
intuitionistic implication, parallel composition to conjunction,
hiding to existential quantification and backtracking to disjunction
in a very precise way \cite{cc-logic}. Denotationally, agents can be
thought of as {\em closure operators} over the underlying constraint
system\footnote{A closure operator over a lattice is a monotone,
idempotent and extensive operator.}. The four new combinators we
introduce in this paper can readily be integrated into the framework
as new operations on closure operators.

\paragraph{Rest of this paper.}
In summary, \ccfd{} offers a new alternative to the traditional CLP
model for constraint programming by systematically exploiting the
{\tt cc} framework. It proposes to design constraint languages by
abstracting the computational principles behind a class of
applications and by supporting them at the language level via
constraint operations. The combinators are based on procedural
interpretations of logical connectives; hence the approach combines
the advantages of referential transparency, flexibility, and
computational efficiency. This idea would bring similar advantages
for other solvers (e.g.{} first-order terms, rationals) and
application areas exploiting constraint technology (e.g.{} natural
language processing, artificial intelligence). Although languages of
this type would be more basic, it is natural to imagine that
constraint solvers would be built in a modular, incremental, and
hierarchical manner, recovering the ease of programming of existing
CLP languages while adding flexibility and efficiency.

The rest of this paper is devoted to an overview of the constraint
processing facilities of \ccfd. First we present the {\tt FD}
constraint system underlying \ccfd{} --- this is used in several
examples throughout the paper. Next, we introduce the four
combinators and discuss their operational and denotational semantics.
Finally we consider the design of the \ccfd{} language, which is an
instantiation of the \cc{} framework with these combinators, over the
{\tt FD} constraint system. In particular, we sketch the
implementation of the CHIP constraint-solver in \ccfd. To ease
reading, only informal presentations are given in the text. The
precise operational and denotational semantics of the new constructs
will be given in the full version of the paper.

\begin{reviewer-note}
This technical abstract assumes some familiarity with the notions of
constraint systems, concurrent constraint programming languages and
closure operators. For more information on these, the reviewer may
consult \cite{Saraswat91a}. Appendix~\ref{clop} briefly summarizes
the notions of closure operators. \cite{mackworth} is a good source
for information on local consistency techniques (e.g.,
arc-consistency).
\end{reviewer-note}

%%\input{fd.tex}
\section{The {\tt FD} constraint system}

We briefly present the very simple constraint system underlying
\ccfd. (In reality, the constraint system in \ccfd{} also provides
\Herbrand{} trees, but for the purposes of this paper we concentrate
on novel aspects.) There are three kinds of syntactic objects, (t)
arithmetic terms, (r) ranges, and (c) constraints. Their syntax is
given by the grammar in Table~\ref{sem-table-1}.
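To fix intuitions, a small worked instance (our own illustration) may
help: the range $1\,{..}\,3:7$ denotes $\{1,2,3\}\cup\{7\}$, so the
constraint
$$ X \IN (1\,{..}\,3:7)\;\&\;-2 $$
restricts $X$ to $\{1,3,7\}$, since the range $-2$ denotes the
complement of $\{2\}$ (recall that a bare term $t$ abbreviates
$t\,{..}\,t$).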
Ranges denote a finite union of intervals on the number-line --- the
given operations are interpreted in the obvious way. The entailment
relation on constraints is induced in the obvious way from the
interpretation: a valuation $\theta$ satisfies a constraint $X\IN r$
exactly when $\theta(X)$ lies in the range denoted by $r$. Thus the
constraint
$$ X\IN t_1\,{..}\,t_2 $$
is inconsistent if $t_2 < t_1$ --- this is the only way inconsistency
can arise in this system. Note that the constraint system is closed
under negation: the constraint $\neg X\IN r$ is just $X \IN -r$.
Incremental algorithms for checking consistency of constraints and
for entailment are completely straightforward and very efficiently
implementable.

\begin{table}
\begin{tabular}{|l|}
\hline
\begin{minipage}[t]{\columnwidth}
In the following, $n$ ranges over natural numbers.
$$\begin{array}{l}
\begin{array}{ll}
t\;{::=} & X \alt n \alt t + t \alt t - t \alt t * t \alt t \mod t \alt t \div t
\end{array}\\
\begin{array}{lll}
r ::= & [t_1]\,{..}\,[t_2] & \mbox{{\em (from $t_1$, default $0$, to $t_2$, default infinity)}}\\
 & \alt t & \mbox{{\em (shorthand for $t\,{..}\,t$)}}\\
 & \alt r_1:r_2 & \mbox{{\em (union)}}\\
 & \alt r_1\; \&\; r_2 & \mbox{{\em (intersection)}}\\
 & \alt -r & \mbox{{\em (complementation)}}\\
 & \alt r + t & \mbox{{\em (pointwise addition)}}\\
 & \alt r \mod t & \mbox{{\em (pointwise mod)}}\\
\end{array}\\
\begin{array}{ll}
c\; {::=} & X\; in\; r
\end{array}
\end{array}$$
Each constraint is associated with an {\em implicit ask restriction}
\cite{Saraswat89} --- a constraint $X\IN r$ can be added to the store
only when $r$ is ground. Some handy abbreviations:
$$ \begin{array}{l@{\;\stackrel{\triangle}{=}\;}l}
X \IN \{n_1, \ldots, n_k\} & X \IN n_1 : \ldots : n_k \\
X \leq t & X \IN \,{..}\,t \\
X < t & X \IN \,{..}\,(t-1)\\
X \geq t & X \IN t\,{..}\,\\
X > t & X \IN (t+1)\,{..}\,\\
X \neq t & X \IN \,{..}\,(t-1):(t+1)\,{..}\,\\
X = t & X \IN t \\
\end{array}$$
\end{minipage}\\
\hline
\end{tabular}
\caption{The constraint system {\tt FD}}\label{sem-table-1}
\end{table}

The following proposition is easy to establish.
\begin{proposition}
Any finite conjunction of constraints in this system in a single
variable can be represented by a single constraint of one of the
following kinds:
$$ \begin{array}{l}
X\IN (m_1{\,{..}\,}n_1 : \ldots : m_k {\,{..}\,} n_k)\\
X\IN (m_1{\,{..}\,}n_1 : \ldots : m_k {\,{..}\,} )
\end{array} $$
\noindent where either $(k=1, m_1=1, n_1=0)$ or $(k \geq 1, m_i \leq n_i < m_{i+1})$.
\end{proposition}
Such a constraint is said to be in normal form. A set of constraints
$\{{X_i \IN s_i} \alt i \in I\}$, for some index-set $I$, is in
normal form if all the variables are distinct and each constraint is
in normal form.

The construction in the following theorem is the completion by ideals
used in the definition of information systems \cite{scott-info-sys}.
\begin{theorem}
For any finite set of variables $V$, let ${\tt FD}_V$ be the family
of all entailment-closed sets of constraints generated from a set of
constraints with free variables in $V$.
\begin{enumerate}
\item ${\tt FD}_V$ is a complete distributive lattice, with glbs
given by intersection, and lubs by closure of the union.
\item The finite elements of ${\tt FD}_V$ are exactly those generated
from a finite set of constraints.\footnote{$u$ is a finite element of
a lattice $L$ iff for every directed subset $S$ of $L$ with lub above
$u$, there is a finite subset of $S$ with lub above $u$.}
\item ${\tt FD}_V$ is not finitary: that is, there exists a finite
element which has infinitely many finite elements below it.
\item The glb of two finite elements of ${\tt FD}_V$ is itself finite.
\end{enumerate}
\end{theorem}
In particular, the glb of two normal-form sets of constraints
$\{{X_i \IN r_i} \alt i \in I\}$ and $\{{Y_j \IN s_j} \alt j \in J\}$
is just the set of constraints generated by $Z \IN r_i:s_j$ for
$Z = X_i = Y_j$.

Note that the choice of constraints represents one more step towards
simplicity in the constraint-solver and hence reduces the black-box
effect of usual CLP languages. In previous work \cite{plilp},
primitive constraints were limited to at most two variables. Such
constraints can be solved completely with arc-consistency. In this
paper, we can achieve the same effect while assuming that the
implementation supports just unary constraints, using indexical
constraints.

%%\input{extended.tex}
\section{The Extended \cc{} language framework}

\begin{table}
\begin{tabular}{|l|}
\hline
\begin{minipage}[t]{\columnwidth}
\paragraph{Basic Ask-and-Tell language.}
$$ \begin{array}{rlll}
\mbox{{\em Programs}}& P {::=} & H {::} A \alt P \wedge P \alt \forall X\; P\\
\mbox{{\em Agents}}& A {::=} & B & \mbox{(Tell)} \\
 & & \alt B \rightarrow A & \mbox{(Ask)} \\
 & & \alt A, A & \mbox{(Conjunction)} \\
 & & \alt A ; A & \mbox{(Disjunction)} \\
 & & \alt \exists X A & \mbox{(Local variables)} \\
 & & \alt H & \mbox{(Procedure Call)} \\
\mbox{{\em Basic constraints}} & B {::=} & c \alt B,B \\
\mbox{{\em Procedure Call}} & H {::=} & p(X_1,\ldots, X_n)
\end{array}$$
\paragraph{The Extended language.}
The language allows Basic agents to be defined using cardinality,
constructive disjunction, and indexical constraints. It allows users
to specify new primitive constraints (the atomic formulas $D$),
provided {\bf tell} and {\bf ask} declarations are given. It adds to
the grammar for the basic Ask-and-Tell language the productions:
$$\begin{array}{l}
\begin{array}{rlll}
\mbox{{\em Basic agents}}& B {::=} & \#(l, [B,\ldots, B]) \alt \#([c,\ldots, c], u) & \mbox{(Cardinality)}\\
 & & \alt \cor (B, \ldots, B) & \mbox{(Constructive Disjunction)}\\
 & & \alt ic & \mbox{(Indexical constraints)}\\
 & & \alt D & \mbox{(User-defined constraints)}
\end{array}\\
\begin{array}{rll}
\mbox{{\em Programs}} & P\; {::=}\;& \mbox{\bf tell}\; D:: B \alt \mbox{\bf ask}\; D:: B
\end{array}
\end{array} $$
Basic agents are required to be monotone in their indexical arguments
in the Tell part, and anti-monotone in their Ask part. The
definitions for user-defined constraints are required to be
recursion-free.
\end{minipage}\\
\hline\end{tabular}
\caption{Abstract Syntax for \ccfd{}}\label{sem-table-2}
\end{table}

\begin{table}
\begin{tabular}{|l|}
\hline
\begin{minipage}[t]{\columnwidth}
Configurations are pairs of the form \tuple{\Gamma, s} where $s$ is a
set of constraints, and $\Gamma$ is a multiset of agents. (The empty
multiset is denoted $()$.) Below, we define a binary relation
$\derives$, the {\em transition} relation which specifies how
configurations evolve. Note that only {\em fair} execution sequences
(e.s.) are considered legal.
(A fair e.s.{} is not unfair; an {\em unfair} e.s.{} is one in which some given transition is enabled at every configuration in the sequence, and never taken.) \paragraph{Basic Ask-and-Tell language.} We omit the rules for existential quantification, backtracking and procedure calls, since they are standard. (See for example, \cite{angelic-nondet}.) Below, $\vdash$ is the entailment relation of the underlying constraint system. $$\begin{array}{rll} \mbox{(Tell)} & \tuple{(\Gamma, c), s} \derives \tuple{\Gamma, s\cup\{c\}}\\ \mbox{(Ask)} &\tuple{(\Gamma, d \then A), s} \derives \tuple{(\Gamma,A),s} & s \vdash d\\ \mbox{(Parallel Composition)} & \tuple{(\Gamma, (A_1,A_2)), s} \derives \tuple{(\Gamma,A_1,A_2), s}\\ \end{array}$$ \paragraph{Indexical Constraints.} $$\begin{array}{lll} \tuple{(\Gamma, ic), s} \derives \tuple{(\Gamma, ic), s \cup \{ic_{s}\}} \end{array}$$ \paragraph{Cardinality.} To simplify presentation we assume that the collection of agents of a cardinality constraint is a multiset of the form $[(B_1, d_1),\ldots, (B_n, d_n)]$, where the $d_i$ are (possibly empty) sets of constraints. (The $d_i$ denote local stores for each basic agent.) $$\begin{array}{lll} \from{\tuple{B, d \cup s} \derives \tuple{B', d'}} \infer{\tuple{(\Gamma, \#(l,[(B,d) \alt \sigma])), s} \derives \tuple{(\Gamma, \#(l,[(B',d') \alt \sigma])), s}} \\ \quad\\ \tuple{(\Gamma, \#(l,[(B,\false) \alt \sigma])), s} \derives \tuple{(\Gamma, \#(l, \sigma)), s} \\ \tuple{(\Gamma, \#(l,[((),d) \alt \sigma])), s} \derives \tuple{(\Gamma, \#(l-1, \sigma)), s} & \mbox{if $s \vdash d$}\\ \tuple{(\Gamma, \#(l,[(B_1, d_1), \ldots, (B_l,d_l)])), s} \derives \tuple{(\Gamma, B_1,\ldots, B_l) , s \cup d_1 \cup \ldots \cup d_l}\\ \tuple{(\Gamma, \#(l,\sigma)), s} \derives \tuple{(), \false} & \mbox{if $|\sigma| < l$} \\ \end{array} $$ The rules for upper-cardinality agents are similar, and omitted. \paragraph{Constructive Disjunction.} To simplify presentation we assume that the argument of a disjunctive agent is an unordered collection of agents of the form $(B,d)$, where $d$ is a possibly empty set of constraints. $$\begin{array}{ll} \from{\tuple{B, d \cup s} \derives \tuple{B', d'}} \infer{\tuple{(\Gamma, \cor ((B,d), E)), s} \derives \tuple{(\Gamma, \cor ((B',d'), E)), s}}\\ \quad\\ \tuple{(\Gamma, \cor ((B_1, d_1),\ldots, (B_n, d_n))), s} \derives \tuple{(\Gamma, \cor ((B_1, d_1),\ldots, (B_n, d_n))), s \cup glb(d_1, \ldots, d_n)} \\ \end{array}$$ \paragraph{Extended Asks.} To simplify presentation, we assume that the ask part is an agent of the form $(B,d)$, where $d$ is a possibly empty set of constraints. $$\begin{array}{lll} \from{\tuple{B, d \cup s} \derives \tuple{B', d'}} \infer{\tuple{(\Gamma, (B,d) \then A), s} \derives \tuple{(\Gamma, (B',d')\then A), s}} \\ \quad\\ \tuple{(\Gamma, ((),d) \then A), s} \derives \tuple{(\Gamma, d\then A), s} \end{array} $$ \end{minipage}\\ \hline\end{tabular} \caption{Transition system for Extended \cc{} language}\label{trans-sem} \end{table} The basic Ask-and-Tell \cc{} languages provide several combinators: Tell, Ask, Parallel composition, Hiding and Backtracking. These are provided in \ccfd{} as well. The discussion of these combinators is omitted because they are standard --- some information about Ask, Tell and Parallel composition can be obtained from the operational semantics in Table~\ref{trans-sem}. 
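For orientation, here is a toy agent (our own illustration, not drawn
from the \ccfd{} examples below) combining the three basic
combinators. The first conjunct is a Tell; the second suspends until
the store entails $\tt X \geq 5$, and only then tells $\tt Y = 1$:
\begin{ccprogram}
\agent demo(X,Y):: X \IN 0\,{..}\,9, (X \IN 5\,{..}) \then (Y \IN 1)..
\end{ccprogram}
Running {\tt demo(X,Y)} in parallel with a Tell of
$\tt X \IN 6\,{..}\,8$ leads to a store entailing
$\tt X \IN 6\,{..}\,8$ and $\tt Y \IN 1$.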
%% %% %% %%\paragraph*{Tell and Ask.}\label{Tell}\label{Ask} %%The most basic operation in the \cc{} framework is that of adding %%a primitive constraint to the store (the {\em Tell} operation). %% %%The {\em Ask} operation \cite{Maher87}, %%\cite{Saraswat89},\cite{Dincbas88f,Graf90},\cite{Simonis87b} allows %%execution of an agent to be delayed until a certain specified %%constraint can be checked to hold. Computationally, the Ask %%operation provides a very powerful form of dynamic execution %%control which is especially appropriate for implementing value %%propagation techniques such as those included in CONSTRAINTS %%\cite{Sussman80} and ThingLab \cite{Borning81}. It has been %%instrumental in solving digital circuit design problems such as %%test pattern generation and diagnostics (e.g. %%\cite{Simonis89a,Simonis87a}). %% \paragraph*{Indexical Constraints.} \label{set} %% emphasize general-purpose, %% motivation. %% basic idea. %% monotonicity condition. %% implementation is totally straightforward. \begin{table} \begin{tabular}{|l|} \hline \begin{minipage}[t]{\columnwidth} $$ \begin{array}{ll} it\;{::=} & X \alt n \alt it + it \alt it - it \alt it \times it \alt it \mod t \alt it \div it \alt \min(X) \alt \max(X)\\ ir\;{::=} & [it_1]..[it_2] \alt it \alt ir_1:ir_2 \alt ir_1 \& ir_2 \alt -ir \alt ir + t \alt ir \mod t \alt \dom(X)\\ ic\; {::=} & X\IN ir \end{array}$$ The indexical terms $\min(X)$, $\max(X)$, and $\dom(X)$ are considered ground, for purposes of the implicit Ask restriction. Also, constraints satisfy the condition that terms $\min(X)$ must occur ``negatively'' in the range, and terms $\max(X)$ must occur ``positively'' in the range. \end{minipage}\\ \hline \end{tabular} \caption{Indexical constraints in \ccfd}\label{indexical-table} \end{table} Suppose we would like to write a program on top of the {\tt FD} constraint system to implement reasoning with variation intervals for the constraint $X \geq Y+c$, for $X$, $Y$ variables and $c$ a constant. That is, we would like this program to have the following effect: \begin{enumerate} \item whenever it is true that $Y \geq e$ (for $e$ a constant), the program should impose the constraint $X \geq e+c$. \item whenever it is true that $e \geq X$, the program should impose the constraint $(e-c \geq Y)$. \end{enumerate} In other words, we would like the program to behave as if it were the infinite conjunction of rules of the form $$ (Y \geq \alpha) \then (X \geq \alpha +c)$$ for all values of $\alpha$. For this purpose we introduce the notion of {\em indexical terms} and {\em indexical constraints}. An indexical term is syntactically an ordinary term (e.g., $\max(X)$), but semantically it denotes a {\em mapping} from constraints to terms. As an example, for $X$ a variable, we may define $\max(X)$ to be the indexical term which in store $s$ denotes $n$, where $n$ is the least upper bound on $X$ given $s$. Thus, in store $s=\{X\IN 1:3:4\;{..}\; 20\}$, the value $\max(X)_s$ of the indexical term $\max(X)$ would be $20$. The indexical terms $\min(X)$ (returns the greatest lower bound on $X$) and $\dom(X)$ (returns the domain of $X$) can similarly be defined in {\tt FD}. We allow indexical terms to be used within constraints just as ordinary terms. A constraint containing an indexical sub-term is called an {\em indexical constraint}. 
It denotes a mapping from constraints to constraints obtained in the
obvious way: given an indexical constraint $d$ and input constraint
$s$, the mapping returns the constraint obtained from $d$ by
replacing each indexical sub-term $t$ by $t_s$. For example, in the
store $s=\{X\IN 3\;{..}, Z \IN 2\}$, the indexical constraint $(Y \IN
\min(X)\;{..})$ denotes the constraint $(Y \IN 3\;{..})$.

\begin{example}[greatereqc/3]
The constraint {\tt greatereqc/3} may now be programmed as follows:
\begin{ccprogram}
\agent greatereqc(X, Y, C):: (X \IN (\min(Y) + C)\,{..}), (Y \IN {..}\,(\max(X) - C))..
\end{ccprogram}
In any store $s$, both constraints may fire, producing information
about {\tt X} and {\tt Y} respectively, exactly as desired.
\end{example}

Logically, a constraint $(X \IN (\min(Y) + c)\;{..})$ can be thought
of as the constraint: $\forall k.\, (Y \geq k) \Rightarrow (X \geq
k+c)$. However, only very restricted kinds of universal
quantifications can be obtained by using $\min/1, \max/1, \dom/1$
indexical terms.

\begin{example}[mod/4]
As another example, consider the constraint {\tt mod(X,Y,C,Base)}
which holds iff {\tt X = (Y + C) mod Base}, assuming that {\tt X,Y}
are variables ranging over {\tt \{0..Base-1\}}, {\tt C} is an integer
in {\tt \{0..Base-1\}} and {\tt Base} is a positive number. We can
get this effect with:
\begin{ccprogram}
\agent mod(X,Y,C,Base) :: X \IN (\dom(Y)+C) \mod Base, Y \IN (\dom(X)-C) \mod Base..
\end{ccprogram}
The first expression makes sure that, whenever a value is removed
from the domain of {\tt Y}, the corresponding value is removed from
the domain of {\tt X}. Similarly for the second.
\end{example}

For indexical constraints to generate closure operators (and hence to
be well-defined) they must denote {\em monotone} functions: that is,
as the information content of the store increases, the strength of
the constraint yielded by the indexical constraint should
increase.\footnote{Every monotone function $f$ generates a closure
operator $\hat{f}= \lambda s.\, {\tt fix}(\lambda a.\, s \wedge (f\,
a))$ by augmentation and iteration to quiescence.} This is true, for
example, for $(Y \IN \min(X)\;{..})$. It is not true for $(Y \IN
{..}\; \min(X))$. Therefore, additional syntactic restrictions have
to be placed on occurrences of indexical terms to ensure that the
overall constraints built using them are monotone. These conditions
are easy to state: intuitively all (positive) occurrences of terms
$\max(X)$ should be in the ``upper'' portions of range-restrictions,
and all (positive) occurrences of terms $\min(X)$ should be in
``lower'' portions of range-restrictions.

\subparagraph{Use.} Indexical arithmetic constraints provide more
effective algorithms for arc-consistency. The new algorithms are
$O(ed)$ (where $e$ is the number of constraints and $d$ the size of
the domains), improving on the traditional quadratic algorithms.
More important, the space requirement is $O(e)$ --- each constraint
takes constant space. An example of an application which can benefit
from this extension is the microcode labeling problem
\cite{Vanhentenryck89a}.

\subparagraph{Implementation.} It should be clear that for the {\tt
FD} constraint system, the implementation of indexical terms and
constraints is totally straightforward. The implementation already
maintains $\max{}$, $\min{}$ and $\dom{}$ information for each
variable: we are now simply providing a mechanism for the user to
access this information in a ``safe'' way.
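To make the re-evaluation mechanism concrete, consider the following
small trace (our own illustration) for {\tt greatereqc(X,Y,2)} in an
initial store $s_0 = \{Y \IN 3\,{..}\,8\}$. Evaluating the first
indexical constraint in $s_0$ gives
$$ (X \IN (\min(Y) + 2)\,{..})_{s_0} \;=\; X \IN 5\,{..} $$
which is added to the store. If a later Tell strengthens the store
with $Y \IN 6\,{..}$, the same indexical constraint re-evaluates to
$X \IN 8\,{..}$, strictly stronger than before: exactly the monotone
behavior required above. Each such step costs only a constant-time
lookup of $\min(Y)$, since the implementation already maintains this
information.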
%%Universal quantification is introduced in part to perform efficient
%%arithmetic reasoning using variation intervals and in part to obtain
%%the complexity bounds of some specialized arc-consistency algorithms
%%(e.g.{} functional constraints). The availability of universal
%%quantification makes sure that only constraints with at most one
%%variable need to be provided by the constraint solver. Universal
%%quantification is however quite general-purpose --- on {\tt
%%Herbrand}, it enables an efficient and lazy implementation of
%%iteration while on real numbers it allows us to reconstruct in a
%%simple way solvers (such as the one of BNR-Prolog) which approximate
%%non-linear constraints.

\paragraph*{Cardinality.}
\label{lcardinality}

The {\em cardinality} combinator \cite{cardinality} is a
generalization of the threshold operators for propositional calculus
and provides a general form of disjunction. Related to the Andorra
model of computation \cite{dhd-andorra}, it allows arc-consistency of
any finite domain constraint to be implemented within the complexity
bounds of the optimal algorithm of \cite{Mohr86}. Moreover, it
provides a general way of implementing the {\em DMA/DMT} options as
well as the {\em conditional summation} of {\tt ALICE}
\cite{Lauriere78}.

There are two cardinality combinators, the {\em lower} and the {\em
upper}, of the form $\#(it,[B_1,\ldots, B_n])$ and $\#([c_1,\ldots,
c_n], it)$ respectively, where $it$ is an indexical term.
Intuitively, the constraint $\#(l,[B_1,\ldots, B_n])$ holds in store
$s$ iff at least $l_s$ of the given agents are true, and the
constraint $\#([c_1,\ldots, c_n], u)$ holds iff at most $u_s$ of the
given constraints are true. Note that for this to be well-defined it
must be the case that $l_s$ increases as $s$ becomes stronger (e.g.
$l=\min(X)$), and $u_s$ decreases as $s$ becomes stronger (e.g.
$u=\max(X)$). In the following, we will consider the expression $c
\Rightarrow d$ as shorthand for $\#(1, [\neg c,d])$, and the
expression $c \iff d$ as shorthand for $(c \Rightarrow d),(d
\Rightarrow c)$.

The agent $\#(l, [c_1, \ldots, c_n])$ is executed by spawning $n$ Ask
agents, one for each argument, and maintaining two counters: the
number of entailed constraints ($a$) and the number of constraints
whose negation is not yet entailed ($t$). If in any store $a \geq
l$, the agent may terminate without adding any new information to the
store. On the other hand, if $l=t$, then the agent may terminate,
adding those constraints $c \in [c_1,\ldots, c_n]$ such that neither
$c$ nor its negation is entailed by the store. If $l>t$, the agent
may terminate, causing the store to become inconsistent. The
evaluation rules for $\#([c_1,\ldots,c_n], u)$ are similar --- note
just that if $u=a$, the agent may terminate, adding the {\em
negations} of all the constraints in $[c_1,\ldots,c_n]$ not entailed
by the store. The set of equational laws underlying this procedural
description is given in Table~\ref{simp-table}.

\subparagraph{Use.}
\begin{example}[Arc-consistency.]
We show how to enforce directional arc-consistency on a binary
constraint {\tt p(X,Y)}. First, the domains of {\tt X} and {\tt Y}
must be reduced respectively to the projections of the constraint on
the first and second arguments, say {\tt D$_1$, D$_2$}. Next, for
each value {\tt v} in {\tt D$_1$}, we generate the constraint
$$\tt Y \neq w_1, \ldots, Y \neq w_p \; \imply \; X \neq v$$
where {\tt w$_1$,\ldots,w$_p$} are all values {\tt w} such that {\tt p(v,w)}.
Thus, as soon as {\tt Y} is constrained to be different from all
values in {\tt \{w$_1$,\ldots,w$_p$\}}, {\tt X} is constrained to be
different from {\tt v}. Conversely, if {\tt X} is equated to {\tt
v}, then {\tt Y} will be required to take on one of the given values.
\end{example}

\cite{cardinality} showed that, on simple examples, cardinality can
bring an exponential improvement over traditional approaches. Most
CHIP applications can be solved more naturally using cardinality,
removing the need for ad-hoc built-in constraints. For these
applications, the overhead may be as low as $6\%$ (e.g.
car-sequencing \cite{Dincbas88c}) while not exceeding $30\%$.

A new interesting application is the allocation of jobs to pipeline
processors to minimize total delay. A \ccfd{} program was developed
for the task and is comparable (sometimes faster) in efficiency to a
specific branch and bound algorithm. Cardinality was used to
implement conditional summation (e.g.{} the summation of the
computation times of all jobs assigned to the same processor must not
exceed its capacity in a cycle time) as well as the constraint for
the transmission time which is: if either job is run on the master
processor (processor 1), then transmission time is $0$; and if the
two jobs are run on adjacent machines (neither of which is the master
processor) then transmission time is $1$; otherwise transmission time
is $2$. This can be expressed directly in \ccfd:
\begin{ccprogram}
\#(1,[P_1 \neq 1, T_{12} = 0])
\#(1,[P_2 \neq 1, T_{12} = 0])
\#(1,[P_1 = 1, P_2 = 1, \#([P_1 \IN \dom(P_2) + 1, P_2 \IN \dom(P_1) + 1],0), T_{12} = 1])
\#(1,[P_1 = 1, P_2 = 1, \#([\min(P_1) \gt \max(P_2) + 1, \min(P_2) \gt \max(P_1) + 1],0), T_{12} = 2])
\end{ccprogram}
\noindent where ${\tt P_1,P_2}$ are the processors associated with
jobs $1$ and $2$ and $\tt T_{12}$ is the transmission time between
them. The first case requires two cardinality agents, one per job,
since a single agent $\#(1,[P_1 \neq 1, P_2 \neq 1, T_{12} = 0])$
would fire only when {\em both} jobs run on the master processor.
%%(P_1 = 1 \char`;\; P_2 = 1) \Rightarrow T_{12} = 0,
%%(P_1 \neq 1, P_2 \neq 1, (P_1 = P_2+ 1 \char`;\; P_2 = P_1 + 1) \Rightarrow T_{12} = 1,
%%(P_1 \neq 1, P_2 \neq 1, (P_1 \gt P_2 + 1 \char`;\; P_2 \gt P_1 + 1) \Rightarrow T_{12} = 2

\paragraph{Constructive Disjunction.}\label{new-comb}
\label{lconstructive}

The basic idea behind constructive disjunction is to add to the store
the constraints entailed by all possible alternatives, i.e.{} the
conjunctions made up from the store and one of the disjuncts. By
factoring common information out of the disjuncts, constructive
disjunction produces more pruning than cardinality, albeit at a
greater cost. Constructive disjunction can be used for many
constraints involving (explicitly or implicitly) disjunctions.

%%Constructive disjunction is closely
%%related to the PROPIA system
%%\cite{Wallace91} which computes generalizations of goals and was
%%developed independently of our research. Computations in PROPIA
%%are not incremental (they do not maintain some persistent data
%%structures) but they are memory efficient.

%%The motivation behind the operator amounts to achieving a more global
%%pruning than cardinality by factoring out information common to all
%%disjuncts.

\begin{example}[max/3]
A typical example is the {\tt max/3} constraint, especially useful in
scheduling problems \cite{Vanhentenryck89a}. The constraint {\tt
max(X,Y,Z)} holds iff {\tt Z} is the maximum of {\tt X} and {\tt Y}.
It can be expressed, using cardinality, in the following way:
\begin{ccprogram}
\agent max(X,Y,Z) :: Z \IN \min(X)\,{..}\;\&\; \min(Y)\,{..}, \#(1,[ Z \IN \dom(X), Z \IN \dom(Y)])..
\end{ccprogram}
The first two constraints ensure that {\tt Z} is always not smaller
than {\tt X} and {\tt Y}, while the last constraint ensures that {\tt
Z} is equal to one of them. The constraint however does not achieve
as much pruning as we would like. For instance, if {\tt X,Y,Z} range
over $\{5..10\}$, $\{7..11\}$, and $\{1..20\}$ respectively, the
above program would reduce the domain of {\tt Z} to $\{7..20\}$
instead of the expected $\{7..11\}$, because cardinality handles the
two equations locally and concludes that neither is entailed.
%%It is possible to enforce arc-consisteny on the constraint using
%%cardinality on all values in the domains but that forces a
%%substantial loss of efficiency compared to a procedural
%%implementation.
\end{example}

Constructive disjunction adds to the store the constraints common to
the disjuncts, e.g.{} $\tt Z\;in\; \{5..11\}$ in the above example.
Syntactically, a constructive disjunction is of the form $\cor (B_1,
\ldots, B_n)$ and can be read declaratively as a disjunction.
Operationally it performs {\em constraint generalization}, i.e.{} it
computes the greatest lower bound (glb) in the constraint lattice of
the constraints $B_1(s), \ldots, B_n(s)$ (where we are thinking of
each $B_i$ as a closure operator). If a glb algorithm is not
available for the constraint system, any approximation (i.e.{} any
set of constraints implied by each of the disjuncts) would do as
well.

\begin{example}[max/3 contd.]
The {\tt max/3} constraint can be expressed as
\begin{ccprogram}
\agent max(X,Y,Z) :: Z \IN \min(X)\,{..}\;\&\; \min(Y)\,{..}, \cor (Z\IN \dom(X), Z \IN \dom(Y))..
\end{ccprogram}
This program achieves the expected pruning since information from
both equations can be combined during propagation.
\end{example}

\subparagraph{Use.} Constructive disjunction is mainly motivated by
disjunctive scheduling applications although it may be used in many
other applications (e.g.{} biology). In the general problem, tasks
have to be scheduled on different machines in the presence of
precedence and other constraints. For simplicity, we assume that the
machine on which a task has to be executed has already been
determined. At this point, the key problem is to find an ordering of
the tasks on a machine so as to minimize an objective function
(e.g.{} the makespan or the total delay). To obtain reasonable
efficiency, it becomes necessary to exploit the disjunctions to prune
the search space. Consider $n$ tasks in such a disjunction and
assume that their starting dates, end dates and durations are denoted
by $S_i,E_i,D_i$ respectively. A typical pruning rule
\cite{Carlier78} is as follows:
\begin{quote}
If the starting date of task $i$ plus the summation of all durations
is greater than the maximum of all end dates but the one of task $i$,
then task $i$ cannot be scheduled first.
\end{quote}
This rule can be implemented using implication and constructive
disjunction as
\begin{ccprogram}
\agent Max \IN \min(E_1)\,{..}\; \&\; \ldots\; \&\; \min(E_n)\,{..}, \cor (Max \IN \dom(E_1), \ldots, Max \IN \dom(E_n)), Max \IN {..}\,(\min(S_i) + D_1 + \ldots + D_n -1) \then P_i \IN 0:2\,{..}..
\end{ccprogram}
\noindent where {\tt P$_i$} represents the position of task $i$ in
the disjunction. The branch \& bound algorithm of \cite{Carlier78}
was implemented using the above constraint and additional ones in the
same spirit. The resulting program is short, yet it provides
reasonable efficiency (between 5 and 10 times slower).
As constraint programming languages are still rather recent and
\ccfd{} is a prototype, the result is encouraging.

\subparagraph{Implementation.} In our current implementation,
constraint generalization is done pairwise and changes are propagated
only when the result has been modified and a subsequent operation may
change the result. Successive glbs are maintained in a persistent
data structure (somewhat similar to a RETE network \cite{Forgy82}) to
avoid restarting the whole computation from scratch when the store is
updated. Similar implementations are possible for other domains
(e.g.{} first-order terms).

\paragraph{Extended Asks.}
To obtain a flexible, extensible, and efficient system, it is
desirable to ask non-primitive constraints as well as primitive
constraints. In general, a non-primitive constraint will be a
closure operator. Asking whether a primitive constraint $c$ holds is
just checking to see if the store has enough information to entail
$c$. Semantically, the closure operator corresponding to $c \then f$
(for $c$ a constraint, and $f$ a closure operator) is just the
operator whose fixed-point set is $\{s \alt s \not\vdash c\} \cup f$
\cite{Saraswat90a} (see the Appendix for the connection between
closure operators and their fixed-point sets).

What should it mean to Ask a closure operator? One natural answer is
as follows: say that a store $s$ solves a closure operator $f$ if $f$
{\em terminates} when run in $s$, without producing any more
information. In such a case, $f$ cannot produce any more information
in $s$, or {\em in any strengthening of $s$}. More abstractly, we
can say that $s$ solves $f$ (and write $s \vdash f$) if $s$ is a
stable fixed-point of $f$, that is, every $t \geq s$ is a fixed-point
of $f$ (notationally: $\up{s} \subseteq f$). Now it becomes easy to
define an {\em extended} ask operator: given closure operators $f$
and $g$, $f \then g$ is the closure operator whose fixed-point set is
given by $$\{s \alt s \not\vdash f\} \cup g$$ It is easy to see that
this set is closed under glbs and hence is the fixed-point set of a
closure operator. The operational semantics of Extended Asks is
given in more detail in Table~\ref{trans-sem}. The correspondence
between the operational and denotational semantics is not difficult
to establish.

We note that this operation satisfies a number of nice properties,
summarized in Table~\ref{simp-table}. Note, however, that the
operation is anti-monotone in its first argument --- hence we cannot
allow the class of agents that appear in this position (namely, Basic
Agents) to be closed under recursion.

We also note in passing that (monotone) indexical constraints can be
asked quite efficiently in \ccfd. We illustrate with an example. A
store $s$ solves the indexical constraint $X \IN {..}\,\max(Y)$ iff
$s \vdash (X \IN {..}\;\min(Y)_s)$. Thus, all that needs to be done
is to evaluate a constraint in the current store --- if the check
succeeds, then the Ask succeeds, and no more work needs to be done.

\begin{table}
\begin{tabular}{|l|}
\hline
\begin{minipage}[t]{\columnwidth}
\paragraph{Cardinality.} In the following, $\sigma'$ is a permutation
of $\sigma$.
{\footnotesize
$$ \begin{array}{llll}
\#(l, \sigma, u) = \#(l,\sigma), \#(\sigma,u)\\
\#(l, \sigma) = \#(l, \sigma') & & \#(\sigma, u) = \#(\sigma', u)\\
c, \#(l, [d \alt \sigma]) = c, \#(l, [e \alt \sigma]) & & c, \#([d \alt \sigma], u) = c, \#([e \alt \sigma], u) & \mbox{{\em $(c,d) \iff (c, e)$}} \\
\#(l, \sigma) = false &\mbox{{\em($|\sigma| < l$)}} & \#(\sigma, u) = false &\mbox{{\em($u < 0$)}}\\
\#(l, \sigma) = true &\mbox{{\em($l \leq 0$)}} & \#(\sigma, u) = true &\mbox{{\em($|\sigma| \leq u$)}}\\
\#(l, [false \alt \sigma])= \#(l, \sigma) & & \#([false \alt \sigma], u)= \#(\sigma, u)\\
\#(l, [true \alt \sigma])= \#(l-1, \sigma) & & \#([true \alt \sigma], u)= \#(\sigma, u-1)\\
\#(l, [c_1, \ldots, c_l])= (c_1, \ldots, c_l) & & \#([c_1, \ldots, c_n], 0)= (\neg c_1), \ldots, (\neg c_n)\\
\end{array}$$
}
\paragraph{Constructive Disjunction.}
{\footnotesize
$$ \begin{array}{ll}
c_1 \cor c_2 = c_1 & \mbox{(if $c_2 \rightarrow c_1$)}\\
c, (d \cor B) = c, (e \cor B) & \mbox{(if $c \rightarrow (d \leftrightarrow e)$)}\\
c \cor d = glb(c,d), (c \cor d) \\
B \cor \false = B \\
B_1 \cor B_2 = B_2 \cor B_1 \\
B_1 \cor (B_2 \cor B_3) = (B_1 \cor B_2) \cor B_3
\end{array}$$
}
\paragraph{Extended Asks.}
\begin{definition}
$s \vdash B \stackrel{\triangle}{=} \up{s} \subseteq B$
\end{definition}
\begin{proposition}
$s \vdash (B_1, B_2)$ iff $s \vdash B_1$ and $s \vdash B_2$.
\end{proposition}
\begin{definition}
$B_1 \then B_2 \stackrel{\triangle}{=} \{s \alt s \not\vdash B_1\} \cup B_2$
\end{definition}
\begin{proposition}
$s \vdash B_1 \then B_2$ if $B_1(s) \vdash B_2$.
\end{proposition}
The converse may not hold; there are simple counter-examples. The
following laws are satisfied by Extended Asks: ($I$ is the identity
closure operator)
{\footnotesize
$$\begin{array}{ll}
A \then A = I\\
I \then A = A\\
B_1 \then B_2 = B_1 \then (B_1, B_2) \\
B_1 \then (B_2 \then B_3) = (B_1, B_2) \then B_3\\
B_1 \then (B_2, B_3) = (B_1 \then B_2), (B_1 \then B_3)
\end{array}$$
}
\end{minipage}\\
\hline
\end{tabular}
\caption{Simplification rules for basic combinators.}\label{simp-table}
\end{table}

\paragraph{Non-primitive Constraints.}
\ccfd{} also provides a simple facility for defining Basic Agents.
For each new non-primitive constraint, two actions need to be
defined, {\bf tell} and {\bf ask}, via declarations of the form:
\begin{ccprogram}
\agent {\bf tell}\; D :: B_t..
\agent {\bf ask}\; D:: B_a..
\end{ccprogram}
The first is used in situations where the user-defined constraint
needs to be imposed, and the second in situations where it is to be
asked. If the user supplies only the {\bf tell} action $\tt B_t$ for
a user-defined constraint $\tt D$, the system will take the {\bf ask}
action to be $\tt B_t$ as well. However, the {\bf tell} action is
usually an {\em incomplete approximation} to the intended
interpretation of {\tt D}, and it may be possible for the user to
supply a better approximation for Ask. For this reason, we allow the
user to specify the two actions separately. It is the user's
responsibility to ensure that these two definitions are logically
related to each other in the expected way.

\begin{example}[{\tt greatereqc(X,Y,C)}]
This example illustrates the advantage of allowing the user to define
the two basic actions differently. A typical definition for this
constraint in \ccfd{} would be:
\begin{ccprogram}
\agent {\bf tell}\; greatereqc(X,Y,C) :: X \IN (\min(Y)+C\,{..}), Y \IN ({..}\,\max(X)-C)..
\agent {\bf ask}\; greatereqc(X,Y,C) :: X\IN (\max(Y)+C\,{..})..
\end{ccprogram}
\end{example}

\paragraph{Denotational semantics.}
Above, we have discussed the denotational semantics of extended asks
explicitly. The semantics of the other constructs (indexical
constraints, cardinality and constructive disjunction) is
straightforward: all of them can be seen as defining closure
operators in the obvious way. Indeed the constructive disjunction
operation is just the glb operation in the lattice of closure
operators over the constraint system. (This is how they were first
discovered; only later was their computational significance
understood.)

%%\input{ccfd.tex}
\section{The language \ccfd}\label{ccfd}

\ccfd{} is simply an instantiation of the above language framework,
over the {\tt FD} constraint system. In many ways it can be thought
of as a ``rational reconstruction'' of CHIP. Below we discuss how
the constraint-solver of CHIP can be programmed in \ccfd{} in very
straightforward ways.

\paragraph*{Arithmetic Constraints.}
CHIP supports the use of numerical constraints over finite domains
using reasoning over variation intervals. These constraints are
constructed from natural numbers, domain variables, the operators
$+$, $*$, $-$, and the relations $\leq$, $\geq$, $<$, $>$, $=$. It
is easy to see that any such constraint can be expressed in terms of
constraints having at most three variables. Hence it is sufficient
to show how to simulate three-variable CHIP constraints in \ccfd.
For this we use indexical constraints. For instance, the constraint
$\tt X \times Y \geq Z$ is modelled as:
\begin{ccprogram}
\agent gtemult(X,Y,Z) :: Z\IN ({..}\; \max(X) \times \max(Y)), X \IN (1\;{..}) \then Y\IN (\min(Z)/\max(X)\,{..}), Y \IN (1\;{..}) \then X\IN (\min(Z)/\max(Y)\,{..})..
\end{ccprogram}
\noindent Similarly for other constraints. {\em Disequations} are
handled in \ccfd{} exactly as in CHIP: suspend until exactly one
variable is left in the constraint. {\em Minimum} and {\em Maximum}
constraints can be defined directly as discussed earlier.

\paragraph*{Symbolic Constraints.}
A number of symbolic constraints were available in CHIP and were
instrumental in solving many practical problems.
% The constraint {\tt
% atmost} seen previously is a primitive constraint of the language.
For example, CHIP provides the constraint {\tt element(I,L,V)} which
holds iff element {\tt I} of list {\tt L} is equal to {\tt V}. The
constraint is used with {\tt I} and {\tt V} being domain variables
and {\tt L} being a list of integers. In CHIP, the user may specify
that the system reason with this constraint using either
arc-consistency or reasoning about variation intervals. Both these
reasoning techniques can be expressed as user-programs in \ccfd{} as
follows. To enforce arc-consistency on {\tt element(I,L,V)},
generate the constraint $\tt V = e \Leftrightarrow I\; in\;
i_1:\ldots:i_p$ for all values {\tt e} with occurrences ${\tt
i_1,\ldots,i_p}$ in {\tt L}. To enforce reasoning with variation
intervals, generate the constraint $V \geq e \Leftrightarrow I \notin
\{i_1,\ldots,i_p\}$ for each value {\tt e} in {\tt L}, where ${\tt
i_1,\ldots,i_p}$ are all positions in {\tt L} with values smaller
than {\tt e}. If desired, similar constraints can be generated for
${\tt V \leq e}$ --- and indeed that is the basic point of \ccfd{}:
the programmer can put in his own heuristics in a declarative way.
The most difficult constraint is {\tt circuit} which enforces a
circuit among a set of {\tt n} variables wrt their indices.
The easiest way to implement the constraint is to spawn a recursive
predicate for each variable. These predicates make sure that all
variables at a distance $<$ {\tt n-2} on the path starting from a
variable are different from the index of that variable. The
recursive predicate makes use of Ask to control the termination and
find the successor (instead of guessing it). It is also possible to
express the {\tt circuit} constraint using only cardinality and
universal quantification by creating variables for the successors of
a variable.

\paragraph*{Demons and Declarations.}
CHIP also has demon declarations and an {\tt if\_then\_else}
construct, which are easily expressed using Asks. It also has two
declaration mechanisms to define non-primitive constraints using
forward checking and lookahead. These two mechanisms are obsolete
since arc-consistency can be enforced in an optimal manner using
cardinality. In addition, Ask can delay a constraint until it is
sufficiently instantiated, thus obtaining the effect of forward
checking.

\paragraph*{Discussion.}
\ccfd{} generalizes the finite domain part of CHIP in many ways.
First, it provides, at the language level, all the principles that
were previously buried inside the implementation. Typical examples
are arithmetic reasoning and consistency techniques. Second, it
proposes general ways of dealing with some issues such as
disjunctions. Cardinality and constructive disjunction are
particularly helpful in that respect. Third, it allows more
efficient implementation techniques by taking advantage of the
properties of the constraints. This is one of the basic features of
universal and existential quantification. Last, it allows
non-primitive constraints to be handled in exactly the same way as
primitive constraints.

%%\input{conclusion.tex}
\section{Conclusion}

This paper aimed at presenting an overview of the constraint
processing facilities of \ccfd. \ccfd{} is best viewed as a
systematic reconstruction and extension of the finite domain part of
CHIP and supports both consistency techniques and nondeterminism in a
declarative language. The main novelty in \ccfd{} is to support
consistency techniques at the language level through a number of
combinators and constraint operations. This is to be contrasted with
existing constraint logic programming languages, where all constraint
techniques are buried inside the constraint solver with the
consequence that the user has no way to extend or modify the
constraint solver or to define his own non-primitive constraints.
The main advantages which result from the design of \ccfd{} are
(1)~the additional flexibility and extensibility which enables the
user to define his own constraints, and (2)~the clean semantic
foundation in terms of an extremely simple constraint solver.
Another contribution of this research is the identification of a
number of general-purpose constraint operations and combinators which
will be valuable for other constraint systems as well. Current
research on this topic is devoted to a robust implementation of the
language as well as the study of the combinators for other constraint
systems, e.g.{} linear rational arithmetic.

\paragraph*{Acknowledgements.} We thank John Lamping, Johan deKleer,
Danny Bobrow and Francesca Rossi for useful discussions.
{\footnotesize
\bibliographystyle{plain}
\bibliography{pascal}
}
\normalsize

\appendix
%%\input{appendix}
\section{Closure operators}\label{clop}

An operator over a partial order that is extensive, idempotent and
monotone is called a {\em closure operator} (or, more classically, a
{\em consequence} operator, \cite{tarski-closure}). Closure
operators are extensively studied in \cite{lattices}; continuous
closure operators have been used in \cite{scott-SIAM} to characterize
data-types. We list here some basic properties.

Every closure operator can be uniquely represented by its range
(which is the same as its set of fixed points); the function can be
recovered by mapping each input to the least element in the range
above it. (In the following, we shall often confuse a closure
operator with its range, writing $x\in f$ to mean $f(x)=x$.) In
fact, a subset of $|D|$ is the range of a closure operator iff it is
closed under glbs of arbitrary subsets. (Thus, the range of every
closure operator is non-empty, since it must contain
$\top_{|D|}=glb\, \emptyset$.) Also, it can be shown that a closure
operator is continuous iff its range is closed under the lubs of
directed subsets.

The extensional partial order on closure operators ($f \leq g$ iff
for all $x$, $f(x) \leq g(x)$) translates into reverse-inclusion on
the set of fixed points: $f \leq g$ iff $f \supseteq g$. The
identity operator $I$ (with range $|D|$) is the bottom element; the
operator which maps every element to $\top$ (and hence has range
$\{\top\}$) is the top element. In fact it is easy to see that the
set of closure operators over $|D|$ ordered by $\supseteq$ is a
complete lattice, with meets and joins given by: $P \sqcap Q = \{c
\sqcap d \alt c,d \in P \cup Q\}$ and $P \sqcup Q = P \cap Q$. As we
shall see, joins correspond to parallel composition.

In the presence of angelic non-determinism (disjunction), it becomes
necessary to take the denotation of a program to be a linear closure
operator over $\PP{(|D|)}$, the indeterminate power-domain of $|D|$
\cite{angelic-nondet}. (Roughly, this power-domain has as elements
upwards-closed subsets of $|D|$, ordered by reverse set inclusion. A
function is linear if it satisfies the property that $f(S)=\cup_{s\in
S} f(\{s\})$.) Such closure operators can also be characterized by
the set of their fixed-points, which need not be closed under glbs.

\end{document}