Backtracking algorithms for constraint satisfaction problems*

Rina Dechter and Daniel Frost
Department of Information and Computer Science
University of California, Irvine
Irvine, California, USA
{dechter,frost}@ics.uci.edu

September
Abstract

Over the past twenty-five years many backtracking algorithms have been developed for constraint satisfaction problems. This survey describes the basic backtrack search within the search space framework and then presents a number of improvements developed in the past two decades, including look-back methods such as backjumping, constraint recording, and backmarking, and look-ahead methods such as forward checking and dynamic variable ordering.
1 Introduction
Constraint networks have proven successful in modeling mundane cognitive tasks such as vision, language comprehension, default reasoning, and abduction, as well as specialized reasoning tasks including diagnosis, design, and temporal and spatial reasoning. The constraint paradigm is a generalization of propositional logic, in that variables may be assigned values from a set with any number of elements, instead of only true and false. Flexibility in the number of values can improve the ease and naturalness with which interesting problems are modeled.

The contribution of this paper is a survey of several approaches to solving constraint satisfaction problems, focusing on the backtracking algorithm and its variants, which form the basis for many constraint solution procedures. We provide a careful exposition of each algorithm, its theoretical underpinnings, and its relationship to similar algorithms. Worst-case bounds on time and space usage are developed for each algorithm. In addition to the survey,
*This work was partially supported by a National Science Foundation grant, an Air Force Office of Scientific Research grant, and Rockwell MICRO grants.
the paper makes several original contributions in formulation and analysis of the algorithms. In particular, the look-back backjumping schemes are given a fresh exposition through comparison of the three primary variants: Gaschnig's, graph-based, and conflict-directed. We show that each of these backjumping algorithms is optimal relative to its information gathering process. The complexity of several schemes as a function of parameters of the constraint graph is explicated. Those include backjumping complexity as a function of the depth of the DFS traversal of the constraint graph, learning algorithms as a function of the induced width, and look-ahead methods such as partial lookahead as a function of the size of the cycle-cutset of the constraint graph.

The remainder of the paper is organized as follows. Section 2 defines the constraint framework and provides an overview of the basic algorithms for solving constraint satisfaction problems. In Section 3 we present the backtracking algorithm. Sections 4 and 5 survey and analyze look-back methods such as backjumping and learning schemes, while Section 6 surveys look-ahead methods. Finally, in Section 7 we present a brief historical review of the field. Previous surveys on constraint processing as well as on backtracking algorithms can be found in [Dec, Mac, Kum, Tsa, KvB].
2 The constraint framework
2.1 Definitions
A constraint network or constraint satisfaction problem (CSP) is a set of n variables X = {x1, ..., xn}, a set of value domains Di for each variable xi, and a set of constraints or relations. Each value domain is a finite set of values, one of which must be assigned to the corresponding variable. A constraint RS over S ⊆ X is a subset of the cartesian product of the domains of the variables in S. If S = {xi1, ..., xir}, then RS ⊆ Di1 × ··· × Dir. S is called the scope of RS. A nogood is a particular assignment of values to a subset of variables which is not permitted. In a binary constraint network all constraints are defined over pairs of variables. A constraint graph associates each variable with a node and connects any two nodes whose variables appear in the same scope.

A variable is called instantiated when it is assigned a value from its domain; otherwise it is uninstantiated. By xi = a or by (xi, a) we denote that variable xi is instantiated with value a from its domain. A partial instantiation or partial assignment of a subset of X is a tuple of ordered pairs ((x1, a1), ..., (xi, ai)), frequently abbreviated to (a1, ..., ai) or ~ai when the order of the variables is known.

Let Y and S be sets of variables, and let ~y be an instantiation of the variables in Y. We denote by ~yS the tuple consisting of only the components of ~y that correspond to the variables in S. A partial instantiation ~y satisfies a constraint RS iff ~yS ∈ RS. ~y is consistent if ~y satisfies all constraints RT, T ⊆ Y. A consistent partial instantiation is also called a partial solution. A solution is an instantiation of all the variables that is consistent.
[Figure 1 shows a constraint graph over nodes T1, T2, T3, T4, T5, with the relations listed beside it:]

Unary constraint
T4: {1:00, 3:00}
Binary constraints
T1, T2: {(1:00, 2:00), (1:00, 3:00), (2:00, 1:00), (2:00, 3:00), (3:00, 1:00), (3:00, 2:00)}
T1, T3: {(2:00, 1:00), (3:00, 1:00), (3:00, 2:00)}
T2, T4: {(1:00, 2:00), (1:00, 3:00), (2:00, 1:00), (2:00, 3:00), (3:00, 1:00), (3:00, 2:00)}
T3, T4: {(1:00, 2:00), (1:00, 3:00), (2:00, 3:00)}
T3, T5: {(2:00, 1:00), (3:00, 1:00), (3:00, 2:00)}

Figure 1: The constraint graph and constraint relations of the scheduling problem in Example 1.
Example 1. The constraint framework is useful for expressing and solving scheduling problems. Consider the problem of scheduling five tasks T1, T2, T3, T4, T5, each of which takes one hour to complete. The tasks may start at 1:00, 2:00, or 3:00. Any number of tasks can be executed simultaneously, subject to the restrictions that T1 must start after T3, T3 must start before T4 and after T5, T2 cannot execute at the same time as T1 or T4, and T4 cannot start at 2:00.

With five tasks and three time slots, we can model the scheduling problem by creating five variables, one for each task, and giving each variable the domain {1:00, 2:00, 3:00}. Another equally valid approach is to create three variables, one for each starting time, and to give each of these variables a domain which is the powerset of {T1, T2, T3, T4, T5}. Adopting the first approach, the problem's constraint graph is shown in Figure 1. The constraint relations are shown on the right of the figure.
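Under the first modeling approach the problem reads off directly as data. The sketch below is an illustrative Python encoding of this example (representing each binary relation by a predicate is our choice; the figure lists the equivalent allowed pairs explicitly):

```python
# Example 1 as a binary CSP: five task variables over three start times.
times = ["1:00", "2:00", "3:00"]
variables = ["T1", "T2", "T3", "T4", "T5"]
domains = {v: list(times) for v in variables}
domains["T4"] = ["1:00", "3:00"]            # unary constraint: T4 not at 2:00

def after(a, b):
    """True if start time a is later than start time b."""
    return times.index(a) > times.index(b)

# Binary constraints as predicates over (value of first, value of second).
constraints = {
    ("T1", "T3"): after,                     # T1 starts after T3
    ("T3", "T4"): lambda a, b: after(b, a),  # T3 starts before T4
    ("T3", "T5"): after,                     # T3 starts after T5
    ("T1", "T2"): lambda a, b: a != b,       # T2 not simultaneous with T1
    ("T2", "T4"): lambda a, b: a != b,       # T2 not simultaneous with T4
}
```

Enumerating the pairs of values accepted by each predicate over the corresponding domains reproduces the relations shown in the figure.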
2.2 Constraint algorithms
Once a problem of interest has been formulated as a constraint satisfaction problem, it can be attacked with a general-purpose constraint algorithm. Many CSP algorithms are based on the principles of search and deduction; more sophisticated algorithms often combine both principles. In this section we briefly survey the field of CSP algorithms.
2.2.1 Search - backtracking
The term search is used to represent a large category of algorithms which solve problems by guessing an operation to perform or an action to take, possibly with the aid of a heuristic [Nil, Pea]. A good guess results in a new state that is nearer to a goal. If the operation does not result in progress towards the goal (which may not be apparent until later in the search), then the operation can be retracted and another guess made.
For CSPs, search is exemplified by the backtracking algorithm. Backtracking search uses the operation of assigning a value to a variable, so that the current partial solution is extended. When no acceptable value can be found, the previous assignment is retracted, a step which is called a backtrack. In the worst case the backtracking algorithm requires time exponential in the number of variables, but only linear space. The algorithm was first described more than a century ago, and since then has been reintroduced several times [BR].
2.2.2 Deduction - constraint propagation
To solve a problem by deduction is to apply reasoning to transform the problem into an equivalent but more explicit form. In the CSP framework the most frequently used type of deduction is known as constraint propagation or consistency enforcing [Mon, Mac, Fre]. These procedures transform a given constraint network into an equivalent yet more explicit one by deducing new constraints, tightening existing constraints, and removing values from variable domains. In general, a consistency enforcing algorithm will make any partial solution of a subnetwork extendable to some surrounding network by guaranteeing a certain degree of local consistency, defined as follows.

A constraint network is 1-consistent if the values in the domain of each variable satisfy the network's unary constraints (that is, constraints which pertain to a single variable). A network is k-consistent, k ≥ 2, iff given any consistent partial instantiation of any k − 1 distinct variables, there exists a consistent instantiation of any kth additional variable [Fre]. The terms node-, arc-, and path-consistency [Mac] correspond to 1-, 2-, and 3-consistency, respectively. Given an ordering of the variables, the network is directional k-consistent iff any subset of k − 1 variables is k-consistent relative to variables that succeed the k − 1 variables in the ordering [DP]. A problem that is k-consistent for all k is called globally consistent.
A variety of algorithms have been developed for enforcing local consistency [MF, MH, Coo, VHDT, DP]. For example, arc-consistency algorithms delete certain values from the domains of certain variables, to ensure that each value in the domain of each variable is consistent with at least one value in the domain of each other variable. Path-consistency is achieved by introducing new constraints or nogoods which disallow certain pairs of values.

Constraint propagation can be used as a CSP solution procedure, although doing so is usually not practical. If global consistency can be enforced, then one or more solutions can easily be found in the transformed problem, without backtracking. However, enforcing k-consistency requires in general exponential time and exponential space in k [Coo], and so in practice only local consistency, with k ≤ 3, is used.
In Example 1, enforcing 1-consistency on the network will result in the value 2:00 being removed from the domain of T4, since that value is incompatible with a unary constraint. Enforcing 2-consistency will cause several other domain values to be removed. For instance, the constraint between T3 and T4 means that if T4 is scheduled for 1:00, there is no possible time for T3, since it must
occur before T4. Therefore, an arc-consistency algorithm will, among other actions, remove 1:00 from the domain of T4.
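The removal just described is an instance of the standard revise step used by arc-consistency algorithms. A minimal illustrative sketch (the function and parameter names are ours):

```python
def revise(xi, xj, domains, allowed):
    """Delete from the domain of xi every value with no supporting value
    in the domain of xj; `allowed(a, b)` tests compatibility of xi=a, xj=b.
    Returns True if the domain of xi changed."""
    removed = [a for a in domains[xi]
               if not any(allowed(a, b) for b in domains[xj])]
    for a in removed:
        domains[xi].remove(a)
    return bool(removed)

# Example 1 after 1-consistency: 2:00 is already gone from T4's domain.
times = ["1:00", "2:00", "3:00"]
domains = {"T3": list(times), "T4": ["1:00", "3:00"]}
before = lambda a, b: times.index(a) < times.index(b)

revise("T4", "T3", domains, lambda a, b: before(b, a))  # T3 before T4
# 1:00 is now removed from T4's domain: no start time for T3 precedes it.
```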
Algorithms that enforce local consistency can be performed as a preprocessing step in advance of a search algorithm. In most cases, backtracking will work more efficiently on representations that are as explicit as possible, that is, those having a high level of local consistency. The value of the tradeoff between the effort spent on pre-processing and the reduced effort spent on search has to be assessed experimentally, and is dependent on the character of the problem instance being solved [DM]. Varying levels of consistency enforcing can also be interleaved with the search process, and doing so is the primary way consistency enforcing techniques are incorporated into constraint programming languages [JL].
2.2.3 Other constraint algorithms
In addition to backtracking search and constraint propagation, other approaches to solving constraint problems include stochastic local search and structure-driven algorithms. Stochastic methods typically move in a hill-climbing manner augmented with random steps in the space of complete instantiations [MJPL]. In the CSP community interest in stochastic approaches was sparked by the success of the GSAT algorithm [SLM]. Structure-driven algorithms, which employ both search and consistency-enforcing components, emerge from an attempt to characterize the topology of constraint problems that are tractable. Tractable classes of constraint networks are generally recognized by realizing that for some problems enforcing low-level consistency (in polynomial time) guarantees global consistency. The basic graph structure that supports tractability is a tree [MF]. In particular, enforcing 2-consistency on a tree-structured binary CSP network ensures global consistency along some ordering of the variables.
3 Backtracking
A simple algorithm for solving constraint satisfaction problems is backtracking, which traverses the search graph in a depth-first manner. The order of the variables can be fixed in advance or determined at run time. The backtracking algorithm maintains a partial solution that denotes a state in the algorithm's search space. Backtracking has three phases: a forward phase in which the next variable in the ordering is selected; a phase in which the current partial solution is extended by assigning a consistent value, if one exists, to the next variable; and a backward phase in which, when no consistent value exists for the current variable, focus returns to the variable prior to the current variable.

Figure 2 describes a basic backtracking algorithm. As presented in Figure 2, the backtracking algorithm returns at most a single solution, but it can easily be modified to return all solutions, or a desired number. The algorithm employs a series of mutable value domains D′i such that each D′i ⊆ Di. D′i holds the subset
procedure backtracking
Input: A constraint network with variables {x1, ..., xn} and domains {D1, ..., Dn}.
Output: Either a solution, or notification that the network is inconsistent.
    i ← 1                                  (initialize variable counter)
    D′1 ← D1                               (copy domain)
    while 1 ≤ i ≤ n
        instantiate xi ← selectValue
        if xi is null                      (no value was returned)
            i ← i − 1                      (backtrack)
        else
            i ← i + 1                      (step forward)
            D′i ← Di
    end while
    if i = 0
        return "inconsistent"
    else
        return instantiated values of {x1, ..., xn}
end procedure

procedure selectValue
    while D′i is not empty
        select an arbitrary element a ∈ D′i, and remove a from D′i
        if (xi, a) is consistent with ~ai−1
            return a
    end while
    return null                            (no consistent value)
end procedure

Figure 2: The backtracking algorithm.
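The pseudocode above can be transcribed almost line for line into executable form. The following is an illustrative Python rendering (0-based indices; the `consistent` predicate, which checks a candidate pair against the already-instantiated variables, is our interface choice):

```python
def backtracking(variables, domains, consistent):
    """Backtracking search in executable form.  `consistent(asg, var, val)`
    tests the candidate (var, val) against the instantiated variables."""
    n = len(variables)
    assignment = {}
    d_prime = [list(domains[v]) for v in variables]   # the mutable D' sets
    i = 0
    while 0 <= i < n:
        val = select_value(variables[i], d_prime[i], assignment, consistent)
        if val is None:                    # dead-end: no value was returned
            i -= 1                         # backtrack
            if i >= 0:
                del assignment[variables[i]]
        else:
            assignment[variables[i]] = val
            i += 1                         # step forward
            if i < n:
                d_prime[i] = list(domains[variables[i]])   # copy domain
    return assignment if i == n else None  # None signals "inconsistent"

def select_value(var, d_prime, assignment, consistent):
    while d_prime:
        a = d_prime.pop()                  # select and remove an element of D'
        if consistent(assignment, var, a):
            return a
    return None                            # no consistent value
```

Calling `backtracking` on a small coloring network, with `consistent` checking inequality along the arcs, returns a solution dictionary, or None when the network is inconsistent.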
[Figure 3 shows a constraint graph over seven variables x1, ..., x7; each node is labeled with its domain, drawn from the colors red, blue, green, and teal.]

Figure 3: A modified coloring problem.
of Di that has not yet been examined under the current partial instantiation. The D′ sets are not needed if the values can be mapped to a contiguous set of integers that are always considered in ascending order; in this case a single integer can be used as a marker to divide values that have been considered from those that have not. We use the D′ sets to describe backtracking for increased generality and to be consistent with the portrayal of more complex algorithms later in the paper.

The selectValue subprocedure is separated from the main backtracking routine for clarity. It has access to the current value of all variables in the main procedure. The complexity of selectValue depends on how the constraints are represented internally. When the CSP is binary, it may be practical to store the constraints in a table in contiguous computer memory; a one-bit or one-byte flag denotes whether each possible pair of variables with corresponding values is compatible. Accessing an entry in the table is an O(1) operation. Since the compatibility of the current candidate assignment (xi, a) must be checked with at most n earlier variable-value pairs, the complexity of selectValue for binary CSPs can be O(n). With non-binary CSPs, checking compatibility is more expensive for two reasons. First, the number of possible constraints is exponential in the number of variables, and so the actual constraints are most likely stored in a list structure. Checking the list will be O(log c), where c is the number of constraints. Another possibility that bears consideration is that a constraint can be represented not as data but as a procedure. In this case the complexity of selectValue is of course dependent on the complexity of the procedures it invokes.
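The table representation mentioned above can be as simple as a flat array of flags indexed by value positions (an illustrative sketch; the class name is ours):

```python
class PairTable:
    """A binary constraint stored as one flag per value pair, giving an
    O(1) compatibility check once values are mapped to integer indices."""

    def __init__(self, di, dj, allowed_pairs):
        self.index_i = {a: n for n, a in enumerate(di)}
        self.index_j = {b: n for n, b in enumerate(dj)}
        self.width = len(dj)
        self.flags = bytearray(len(di) * len(dj))     # one byte per pair
        for a, b in allowed_pairs:
            self.flags[self.index_i[a] * self.width + self.index_j[b]] = 1

    def compatible(self, a, b):
        return bool(self.flags[self.index_i[a] * self.width + self.index_j[b]])
```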
Example 2. Consider the coloring problem in Figure 3. The domain of each node is written inside the node. Note that not all nodes have the same domain. Arcs join nodes that must be assigned different colors. Assume backtracking search for a solution using two possible orderings: d1 = x1, x2, x3, x4, x5, x6, x7 and d2 = x1, x7, x4, x5, x6, x2, x3. The search spaces along orderings d1 and
d2, as well as those portions explicated by backtracking from left to right, are depicted in Figure 4(a) and (b), respectively. (Only legal states are depicted in the figure.)
3.1 Improvements to backtracking
Much of the work in constraint satisfaction during the last decade has been devoted to improving the performance of backtracking search. Backtracking usually suffers from thrashing, namely, rediscovering the same inconsistencies and same partial successes during search. Efficient cures for such behavior in all cases are unlikely, since the problem is NP-hard [GJ].

The performance of backtracking can be improved by reducing the size of its expanded search space, which is determined both by the size of the underlying search space, and by the algorithm's control strategy. The size of the underlying search space depends on the way the constraints are represented (e.g. on the level of local consistency), the order of variable instantiation, and, when one solution suffices, the order in which values are assigned to each variable. Using these factors, researchers have developed procedures of two types: those employed before performing the search, thus bounding the size of the underlying search space; and those used dynamically during the search that decide which parts of the search space will not be visited. Commonly used preprocessing techniques are arc- and path-consistency algorithms, and heuristic approaches for determining the variable ordering [HE, Fre, DP].

The procedures for dynamically improving the pruning power of backtracking can be conveniently classified as look-ahead schemes and look-back schemes, in accordance with backtracking's two main phases of going forward to assemble a solution and going back in case of a dead-end. Look-ahead schemes can be invoked whenever the algorithm is preparing to assign a value to the next variable. The essence of these schemes is to discover from a restricted amount of constraint propagation how the current decisions about variable and value selection will restrict future search. Once a certain amount of forward constraint propagation is complete, the algorithm can use the results to:
1. Decide which variable to instantiate next, if the order is not predetermined. Generally, it is advantageous to first instantiate variables that maximally constrain the rest of the search space. Therefore, the most highly constrained variable, having the least number of values, is usually selected.

2. Decide which value to assign to the next variable when there is more than one candidate. Generally, when searching for a single solution an attempt is made to assign a value that maximizes the number of options available for future assignments.
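The first heuristic, instantiating next the variable with the fewest remaining consistent values, can be sketched as follows (illustrative code; counting the live values by scanning each domain is the simplest, not the cheapest, implementation):

```python
def live_values(var, domain, assignment, consistent):
    """Values of `var` still compatible with the current partial solution."""
    return [a for a in domain if consistent(assignment, var, a)]

def select_next_variable(unassigned, domains, assignment, consistent):
    """Dynamic variable ordering: choose the most constrained variable,
    i.e. the one with the fewest remaining consistent values."""
    return min(unassigned,
               key=lambda v: len(live_values(v, domains[v], assignment,
                                             consistent)))
```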
Look-back schemes are invoked when the algorithm is preparing the backtracking step after encountering a dead-end. These schemes perform two functions:
[Figure 4 originally shows the two search trees side by side; only the caption is reproduced here.]

Figure 4: Backtracking search for the orderings (a) d1 = x1, x2, x3, x4, x5, x6, x7 and (b) d2 = x1, x7, x4, x5, x6, x2, x3 on the example in Figure 3. Intermediate states are indicated by ovals, dead-ends by squares, and solutions by triangles. The colors blue, red, green, and teal are abbreviated by their first letters. Thick lines denote portions of the search space explored by backtracking when using left-to-right ordering and stopping after the first solution.

1. Deciding how far to backtrack. By analyzing the reasons for the dead-end, irrelevant backtrack points can often be avoided so that the algorithm goes back directly to the source of failure, instead of just to the immediately preceding variable in the ordering. This procedure is often referred to as backjumping.

2. Recording the reasons for the dead-end in the form of new constraints, so that the same conflicts will not arise again later in the search. The terms used to describe this function are constraint recording and learning.

In Sections 4 and 5 we will describe in detail several principal look-back schemes, while Section 6 will focus on look-ahead methods.
4 Backjumping
Backjumping schemes are one of the primary tools for reducing backtracking's unfortunate tendency to rediscover the same dead-ends. A dead-end occurs if xi has no consistent values left, in which case the backtracking algorithm will go back to xi−1. Suppose a new value for xi−1 exists but there is no constraint between xi and xi−1. A dead-end will then be reached at xi for each value of xi−1 until all values of xi−1 have been exhausted. For instance, the problem in Figure 3 will have a dead-end at x7 given the assignment (x1 = red, x2 = blue, x3 = blue, x4 = blue, x5 = green, x6 = red). Backtracking will then return to x6 and instantiate it as x6 = teal, but the same dead-end will be encountered at x7. We can finesse this situation by identifying the culprit variable responsible for the dead-end and then jumping back immediately to reinstantiate the culprit variable, instead of repeatedly instantiating the chronologically previous variable. Identification of a culprit variable in backtracking is based on the notion of conflict sets. For ease of exposition, in the following discussion we assume a fixed ordering of the variables d = (x1, ..., xn). This restriction can be lifted without affecting correctness, thus allowing dynamic variable orderings in all the algorithms.
4.1 Conflict sets
A dead-end state at level i indicates that a current partial instantiation ~ai = (a1, ..., ai) conflicts with xi+1. (a1, ..., ai) is called a dead-end state and xi+1 is called a dead-end variable. Namely, backtracking generated the consistent tuple ~ai = (a1, ..., ai) and tried to extend it to the next variable, xi+1, but failed: no value of xi+1 was consistent with all the values in ~ai.

The subtuple ~ai−1 = (a1, ..., ai−1) may also be in conflict with xi+1, and therefore going back to xi and changing its value will not always resolve the dead-end at variable xi+1. In general, a tuple ~ai that is a dead-end at level i may contain many subtuples that are in conflict with xi+1. Any such partial instantiation will not be part of any solution. Backtracking's control strategy often retreats to a subtuple ~aj (alternately, to variable xj) without resolving all or even any of these conflict sets. As a result, a dead-end at xi+1 is guaranteed
to recur. Therefore, rather than going to the previous variable, the algorithm should jump back from the dead-end state at ~ai = (a1, ..., ai) to the most recent variable xb such that ~ab−1 = (a1, ..., ab−1) contains no conflict sets of the dead-end variable xi+1. As it turns out, identifying this culprit variable is fairly easy.
Definition 1 (conflict set) Let ~a = (a1, ..., ai) be a consistent instantiation, and let x be a variable not yet instantiated. If no value in the domain of x is consistent with ~a, we say that ~a is a conflict set of x, or that ~a conflicts with variable x. If, in addition, ~a does not contain a subtuple that is in conflict with x, ~a is called a minimal conflict set of x.

Definition 2 (i-leaf dead-end) Given an ordering d = x1, ..., xn, a tuple ~ai = (a1, ..., ai) that is consistent but is in conflict with xi+1 is called an i-leaf dead-end state.

Definition 3 (no-good) Any partial instantiation ~a that does not appear in any solution is called a no-good. Minimal no-goods have no no-good subtuples.
A conflict set is clearly a no-good, but there are no-goods that are not conflict sets of any single variable. Namely, they may conflict with two or more variables.

Whenever backjumping discovers a dead-end, it tries to jump as far back as possible without skipping potential solutions. These two issues of safety in jumping and optimality in the magnitude of a jump need to be defined relative to the information status of a given algorithm. What is safe and optimal for one style of backjumping may not be safe and optimal for another, especially if they are engaged in different levels of information gathering. Next, we will discuss two styles of backjumping, Gaschnig's and graph-based, that lead to different notions of safety and optimality when jumping back.
Definition 4 (safe jump) Let ~ai = (a1, ..., ai) be an i-leaf dead-end state. We say that xj, where j ≤ i, is safe if the partial instantiation ~aj = (a1, ..., aj) is a no-good, namely, it cannot be extended to a solution.

In other words, we know that if xj's value is changed no solution will be missed.
Definition 5 Let ~ai = (a1, ..., ai) be an i-leaf dead-end. The culprit index relative to ~ai is defined by b = min{j ≤ i | ~aj conflicts with xi+1}. We define the culprit variable of ~ai to be xb.

We use the notions of culprit tuple ~ab and culprit variable xb interchangeably. By definition, ~ab is a conflict set that is minimal relative to prefix tuples, namely, those associated with a prefix subset of the ordered variables. We claim that xb is both safe and optimal: safe in that ~ab cannot be extended to a solution; optimal in that jumping back to an earlier node risks missing a solution. Essentially, if the algorithm fails to retract as far back as xb, it is guaranteed to wander in the search space rooted at ~ab unnecessarily, but if it retracts further back than xb, the algorithm may exclude a portion of the search space in which there is a solution.
Proposition 1 If ~ai is an i-leaf dead-end discovered by backtracking, and xb is the culprit variable, then ~ab is an optimal and safe backjump destination.

Proof: By definition of a culprit, ~ab is a conflict set of xi+1 and therefore is a no-good. Consequently, jumping to xb and changing the value ab of xb to another consistent value of xb (if one exists) will not result in skipping a potential solution. To prove optimality, we need to show that jumping farther back to an earlier node risks skipping potential solutions. Specifically, if the algorithm jumps to xb−j, then by definition ~ab−j is not a conflict set of xi+1, and therefore it may be part of a solution. Note that ~ab−j may be a no-good, but the backtracking algorithm cannot determine whether it is without testing it further. □

Computing the culprit variable of ~ai is relatively simple since at most i subtuples need to be tested for consistency with xi+1. Moreover, it can be computed during search by gathering some basic information while assembling ~ai. Procedurally, the culprit variable of a dead-end ~ai = (a1, ..., ai) is the most recent variable whose assigned value renders inconsistent the last remaining value in the domain of xi+1 not ruled out by prior variables.
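That procedural characterization amounts to the following computation (an illustrative sketch; the pairwise `compatible` predicate is our assumption about how individual constraints are checked):

```python
def culprit_index(prefix, dead_end_var, domain, compatible):
    """Return the smallest j such that (a_1, ..., a_j) rules out every value
    of the dead-end variable.  `prefix` is the dead-end tuple as a list of
    (variable, value) pairs; `compatible(pair, var, val)` tests one earlier
    assignment against the candidate (var, val).  Returns None if the full
    prefix is not a conflict set."""
    live = set(domain)
    for j, pair in enumerate(prefix, start=1):
        live = {v for v in live if compatible(pair, dead_end_var, v)}
        if not live:              # the j-th assignment killed the last value
            return j
    return None
```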
Next we present three variants of backjumping. Gaschnig's backjumping implements the idea of jumping back to the culprit variable only at leaf dead-ends. Graph-based backjumping extracts information about irrelevant backtrack points exclusively from the constraint graph. Although its strategy for jumping back at leaf dead-ends is less than optimal, it introduces the notion of jumping back at internal dead-ends as well as leaf dead-ends. Conflict-directed backjumping combines optimal backjumps at both leaf and internal dead-ends.
4.2 Gaschnig's backjumping
Rather than wait for a dead-end ~ai to occur, Gaschnig's backjumping [Gas] records some information while generating ~ai, and uses this information to determine the dead-end's culprit variable xb. The algorithm uses a marking technique whereby each variable maintains a pointer to the latest predecessor found incompatible with any of the variable's values. While generating ~a in the forward phase, the algorithm maintains a pointer highi for each variable xi. The pointer identifies the latest variable tested for consistency with xi and found to be the earliest variable in conflict with a value in D′i. For example, when no compatible values exist for xi and highi = 3, the pointer indicates that ~a3 is a conflict set of xi. If xi does have a consistent value, then highi is assigned the value i − 1. The algorithm jumps from a leaf dead-end ~ai that is inconsistent with xi+1 back to x_{high_{i+1}}, its culprit, since the dead-end variable is xi+1. Gaschnig's backjumping algorithm is presented in Figure 5.
Proposition 2 Gaschnig's backjumping implements only safe and optimal backjumps in leaf dead-ends.

Proof: Whenever there is a leaf dead-end at xi, the algorithm has a partial instantiation ~ai−1 = (a1, ..., ai−1). Let j = highi. The algorithm jumps back to
procedure gaschnig's-backjumping
Input: A constraint network with variables {x1, ..., xn} and domains {D1, ..., Dn}.
Output: Either a solution, or a decision that the network is inconsistent.
    i ← 1                                  (initialize variable counter)
    D′1 ← D1                               (copy domain)
    high1 ← 0                              (initialize pointer to culprit)
    while 1 ≤ i ≤ n
        instantiate xi ← selectValue-gbj
        if xi is null                      (no value was returned)
            i ← highi                      (backjump)
        else
            i ← i + 1                      (step forward)
            D′i ← Di
            highi ← 0
    end while
    if i = 0
        return "inconsistent"
    else
        return instantiated values of {x1, ..., xn}
end procedure

procedure selectValue-gbj
    while D′i is not empty
        select an arbitrary element a ∈ D′i, and remove a from D′i
        consistent ← true
        k ← 1
        while k < i and consistent
            if k > highi
                highi ← k
            if ~ak conflicts with (xi, a)
                consistent ← false
            else
                k ← k + 1
        end while
        if consistent
            return a
    end while
    return null                            (no consistent value)
end procedure

Figure 5: Gaschnig's backjumping algorithm.
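For concreteness, here is an illustrative 0-based Python transcription of the algorithm above (the `conflicts` predicate, which tests one earlier assignment against a candidate value, is our interface choice, not the paper's):

```python
def gaschnig_backjumping(variables, domains, conflicts):
    """Gaschnig's backjumping, 0-based.  `conflicts(values, k, i, val)`
    tests whether the k-th assigned value is incompatible with giving
    variable i the value `val`."""
    n = len(variables)
    values = [None] * n
    d_prime = [list(domains[v]) for v in variables]
    high = [-1] * n                    # culprit pointer; -1 means "none"
    i = 0
    while 0 <= i < n:
        a = select_value_gbj(i, values, d_prime, high, conflicts)
        if a is None:                  # leaf dead-end: jump to the culprit
            i = high[i]
        else:
            values[i] = a
            i += 1                     # step forward
            if i < n:
                d_prime[i] = list(domains[variables[i]])
                high[i] = -1
    return dict(zip(variables, values)) if i == n else None

def select_value_gbj(i, values, d_prime, high, conflicts):
    while d_prime[i]:
        a = d_prime[i].pop()
        consistent, k = True, 0
        while k < i and consistent:
            if k > high[i]:
                high[i] = k            # deepest predecessor tested so far
            if conflicts(values, k, i, a):
                consistent = False
            else:
                k += 1
        if consistent:
            return a
    return None                        # no consistent value
```

On a dead-end the search resumes at the culprit pointer rather than at the chronologically previous variable; with high[0] = -1, exhausting the first variable's domain drives i below zero and the network is reported inconsistent.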
xj, namely, to the tuple ~aj. Clearly, ~aj is in conflict with xi, so we only have to show that ~aj is minimal. Since j = highi when the domain of xi is exhausted, and since a dead-end did not happen previously, any earlier ~ak for k < j is not a conflict set of xi, and therefore xj is the culprit variable. From Proposition 1, it follows that this algorithm is safe and optimal. □
Example 3. For the problem in Figure 3, at the dead-end for x7 (x1 = red, x2 = blue, x3 = blue, x4 = blue, x5 = green, x6 = red), high7 = 3, because x7 = red was ruled out by x1 = red, blue was ruled out by x3 = blue, and no later variable had to be examined. On returning to x7, the algorithm finds no further values to try (D′7 = ∅). Since high7 = 3, the next variable examined will be x3. This demonstrates the algorithm's ability to backjump on leaf dead-ends. On subsequent (internal) dead-ends it goes back to the preceding variable only.
In Gaschnig's backjumping, a jump happens only at leaf dead-ends. If all the children of a node in the search tree lead to dead-ends (as happens in Figure 4(a)), the node is termed an internal dead-end. Algorithm graph-based backjumping implements jumps at internal dead-ends as well as at leaf dead-ends.
4.3 Graph-based backjumping
Graph-based backjumping extracts knowledge about possible conflict sets from the constraint graph exclusively. Whenever a dead-end occurs a
