Handbook of Satisfiability
Armin Biere, Marijn Heule, Hans van Maaren and Toby Walsh
IOS Press, 2008
© 2008 Henry Kautz, Ashish Sabharwal, and Bart Selman. All rights reserved.
Chapter 6
Incomplete Algorithms
Henry Kautz, Ashish Sabharwal, and Bart Selman
An incomplete method for solving the propositional satisfiability problem (or a
general constraint satisfaction problem) is one that does not provide the guarantee
that it will eventually either report a satisfying assignment or declare that the
given formula is unsatisfiable. In practice, most such methods are biased towards
the satisfiable side: they are typically run with a pre-set resource limit, after
which they either produce a valid solution or report failure; they never declare
the formula to be unsatisfiable. These are the kind of algorithms we will discuss
in this chapter. In complexity theory terms, such algorithms are referred to as
having one-sided error. In principle, an incomplete algorithm could instead be
biased towards the unsatisfiable side, always providing proofs of unsatisfiability
but failing to find solutions to some satisfiable instances, or be incomplete with
respect to both satisfiable and unsatisfiable instances (and thus have two-sided
error).
Unlike systematic solvers often based on an exhaustive branching and back-
tracking search, incomplete methods are generally based on stochastic local search,
sometimes referred to as SLS. On problems from a variety of domains, such incom-
plete methods for SAT can significantly outperform DPLL-based methods. Since
the early 1990’s, there has been a tremendous amount of research on designing,
understanding, and improving local search methods for SAT.¹ There have also
been attempts at hybrid approaches that explore combining ideas from DPLL
methods and local search techniques [e.g. 39, 68, 84, 88]. We cannot do justice to
all recent research in local search solvers for SAT, and will instead try to provide
a brief overview and touch upon some interesting details. The interested reader
is encouraged to further explore the area through some of the nearly a hundred
publications we cite along the way.
We begin the chapter by discussing two methods that played a key role in the
success of local search for satisfiability, namely GSAT [98] and Walksat [95]. We
will then discuss some extensions of these ideas, in particular clause weighting
schemes which have made local search solvers highly competitive [14, 27, 47, 48,
78, 98, 101], and explore alternative techniques based on the discrete Lagrangian
method [92, 99, 102, 103]. We will close this chapter with a discussion of the phase
transition phenomenon in random k-SAT [20, 55, 76], which played a critical role
in the development of incomplete methods for SAT in the 1990s, and mention a
relatively new incomplete technique for SAT called Survey Propagation [73].

¹ For example, there is work by Anbulagan et al. [7], Cha and Iwama [14], Frank et al. [28],
Gent and Walsh [31], Ginsberg and McAllester [32], Gu [37], Gu et al. [38], Hirsch and
Kojevnikov [42], Hoos [44, 45], Hoos and Stützle [46], Kirkpatrick and Selman [55], Konolige [57],
Li et al. [62, 63], McAllester et al. [70], Morris [78], Parkes and Walser [81], Pham et al. [83],
Resende and Feo [87], Schuurmans and Southey [91], Spears [100], Thornton et al. [101], Wu
and Wah [103], and others.
These ideas lie at the core of most of the local search based competitive
incomplete SAT solvers out there today, and have been refined and implemented in
a number of very successful prototypes including adaptg2wsat+, AdaptNovelty,
gNovelty+, R+AdaptNovelty+, SAPS, UBCSAT, UnitWalk, and Walksat. Rather
than going into the details of each individual solver, we hope to present the
broader computer science concepts that form the backbone of these solvers and
to provide the reader with enough background and references to further explore
the intricacies if desired.
We note that there are other exciting incomplete solution methodologies, such
as those based on translation to Integer Programming [43, 52], Evolutionary and
Genetic Algorithms [22, 24, 35, 61], and Finite Learning Automata [36], that we
will not discuss here. There has also been work on formally analyzing local search
methods, yielding some of the best o(2^n) time algorithms for SAT. For instance,
the expected running time, ignoring polynomial factors, of a simple local search
method with restarts after every 3n “flips” has been shown to be (2 · (k − 1)/k)^n
for k-SAT [89, 90], which yields a complexity of (4/3)^n for 3-SAT. This result has
been derandomized to yield a deterministic algorithm with complexity 1.481^n up
to polynomial factors [21]. We refer the reader to Part 1, Chapter 12 of this
Handbook for a detailed discussion of worst-case upper bounds for k-SAT.
For the discussion in the rest of this chapter, it will be illustrative to think
of a propositional formula F with n variables and m clauses as creating a discrete
manifold or landscape in the space {0, 1}^n × {0, 1, . . . , m}. The 2^n truth
assignments to the variables of F correspond to the points in {0, 1}^n, and the
“height” of each such point, a number in {0, 1, . . . , m}, corresponds to the number
of clauses of F that are violated by this truth assignment. The solutions or
satisfying assignments for F are precisely the points in this landscape with height
zero, and thus correspond to the global minima of this landscape, or, equivalently,
the global minima of the function that maps each point in {0, 1}^n to its height.
The search problem for SAT is then a search for a global minimum in this
implicitly defined exponential-size landscape. Clearly, if the landscape did not have
any local minima, a greedy descent would provide an effective search method.
All interesting formulas, however, do have local minima—the main challenge and
opportunity for the designers of local search methods.
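To make this landscape view concrete, the height of a truth assignment is easy to compute from a clause list. The following is a small illustrative sketch (not from the chapter), with clauses written as lists of signed integers in the DIMACS style: a positive literal i means variable i is true, a negative one means it is false.

```python
def height(assignment, clauses):
    """Number of clauses violated by the 0/1 assignment (variables are 1-indexed)."""
    def lit_true(lit):
        value = assignment[abs(lit) - 1]
        return value == 1 if lit > 0 else value == 0
    return sum(1 for clause in clauses if not any(lit_true(l) for l in clause))

# F = (x1 or not-x2) and (x2 or x3) and (not-x1 or not-x3)
clauses = [[1, -2], [2, 3], [-1, -3]]
print(height([1, 0, 0], clauses))  # 1: clause (x2 or x3) is violated
print(height([1, 1, 0], clauses))  # 0: a global minimum, i.e., a satisfying assignment
```

A local search method never materializes this exponential-size landscape; it only ever evaluates the height of the current point and its n one-flip neighbors.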
This landscape view also leads one naturally to the problem of maximum
satisfiability or MAX-SAT: Given a formula F, find a truth assignment that
satisfies as many clauses as possible. Solutions to the MAX-SAT problem
are, again, precisely the global minima of the corresponding landscape, except that
these global minima may not have height zero. Most of the incomplete methods
we will discuss in this chapter, especially those based on local search, can also work
as a solution approach for the MAX-SAT problem, by providing the “best found”
truth assignment (i.e., one with the lowest observed height) upon termination.
Of course, while it is easy to test whether the best found truth assignment is a
solution to a SAT instance, it is NP-hard to perform the same test for a MAX-SAT
instance. Thus, this approach only provides a heuristic algorithm for MAX-SAT,
with a two-sided error. For further details on this problem, we refer the reader
to Part 2, Chapter 19 of this Handbook.
6.1. Greedy Search and Focused Random Walk
The original impetus for trying a local search method on the satisfiability problem
was the successful application of such methods for finding solutions to large N-
queens instances, first using a connectionist system by Adorf and Johnston [6], and
then using greedy local search by Minton et al. [75]. It was originally assumed that
this success simply indicated that N-queens was an easy problem, and researchers
felt that such techniques would fail in practice for SAT and other more intricate
problems. In particular, it was believed that local search methods would easily
get stuck in local minima, with a few clauses remaining unsatisfied. Experiments
with the solver GSAT showed, however, that certain local search strategies often do
reach global minima, in many cases much faster than systematic search methods.
GSAT is based on a randomized local search technique [64, 80]. The basic GSAT
procedure, introduced by Selman et al. [98] and described here as Algorithm 6.1,
starts with a randomly generated truth assignment for all variables. It then
greedily changes (‘flips’) the truth assignment of the variable that leads to the
greatest decrease in the total number of unsatisfied clauses. The neighborhood
of the current truth assignment, thus, is the set of n truth assignments each of
which differs from the current one in the value of exactly one variable. Such flips
are repeated until either a satisfying assignment is found or a pre-set maximum
number of flips (max-flips) is reached. This process is repeated as needed, up
to a maximum of max-tries times.
Algorithm 6.1: GSAT (F)
Input : A CNF formula F
Parameters : Integers max-flips, max-tries
Output : A satisfying assignment for F, or FAIL
begin
    for i ← 1 to max-tries do
        σ ← a randomly generated truth assignment for F
        for j ← 1 to max-flips do
            if σ satisfies F then return σ        // success
            v ← a variable flipping which results in the greatest decrease
                (possibly negative) in the number of unsatisfied clauses
            Flip v in σ
    return FAIL                                   // no satisfying assignment found
end
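In Python, the procedure above might be transcribed as follows. This is an illustrative sketch, not an official implementation; in particular it breaks ties by taking the first best variable, whereas GSAT implementations typically break ties at random.

```python
import random

def num_unsat(assignment, clauses):
    """Count clauses not satisfied by the 0/1 assignment (variables 1-indexed)."""
    def lit_true(lit):
        v = assignment[abs(lit) - 1]
        return v == 1 if lit > 0 else v == 0
    return sum(1 for c in clauses if not any(lit_true(l) for l in c))

def gsat(clauses, n, max_flips=1000, max_tries=10, seed=None):
    rng = random.Random(seed)
    for _ in range(max_tries):
        sigma = [rng.randint(0, 1) for _ in range(n)]   # random initial assignment
        for _ in range(max_flips):
            if num_unsat(sigma, clauses) == 0:
                return sigma                            # success
            # flip the variable yielding the greatest decrease (possibly
            # negative) in the number of unsatisfied clauses
            best_v, best_cost = None, None
            for v in range(n):
                sigma[v] ^= 1
                cost = num_unsat(sigma, clauses)
                sigma[v] ^= 1
                if best_cost is None or cost < best_cost:
                    best_v, best_cost = v, cost
            sigma[best_v] ^= 1
    return None                                         # FAIL
```

Real solvers avoid the O(nm) cost per flip incurred here by caching, for each variable, the change its flip would cause in the number of unsatisfied clauses and updating these counts incrementally.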
Selman et al. showed that GSAT substantially outperformed even the best
backtracking search procedures of the time on various classes of formulas,
including randomly generated formulas and SAT encodings of graph coloring
instances [50]. The search of GSAT typically begins with a rapid greedy descent
towards a better truth assignment (i.e., one with a lower height), followed by long
sequences of “sideways” moves. Sideways moves are moves that do not increase or
decrease the total number of unsatisfied clauses. In the landscape corresponding
to the formula, each collection of truth assignments that are connected together
by a sequence of possible sideways moves is referred to as a plateau. Experiments
indicate that on many formulas, GSAT spends most of its time on plateaus, transi-
tioning from one plateau to another every so often. Interestingly, Frank et al. [28]
observed that in practice, almost all plateaus do have so-called “exits” that lead
to another plateau with a lower number of unsatisfied clauses. Intuitively, in a
very high dimensional search space such as the space of a 10,000 variable formula,
it is very rare to encounter local minima, which are plateaus from where there
is no local move that decreases the number of unsatisfied clauses. In practice,
this means that GSAT most often does not get stuck in local minima, although
it may take a substantial amount of time on each plateau before moving on to
the next one. This motivates studying various modifications in order to speed up
this process [96, 94]. One of the most successful strategies is to introduce noise
into the search in the form of uphill moves, which forms the basis of the now
well-known local search method for SAT called Walksat [95].
Walksat interleaves the greedy moves of GSAT with random walk moves of a
standard Metropolis search. It further focuses the search by always selecting the
variable to flip from an unsatisfied clause C (chosen at random). This seemingly
simple idea of focusing the search turns out to be crucial for scaling such tech-
niques to formulas beyond a few hundred variables. If there is a variable in C
flipping which does not turn any currently satisfied clauses to unsatisfied, it flips
this variable (a “freebie” move). Otherwise, with a certain probability, it flips
a random literal of C (a “random walk” move), and with the remaining proba-
bility, it flips a variable in C that minimizes the break-count, i.e., the number of
currently satisfied clauses that become unsatisfied (a “greedy” move). Walksat
is presented in detail as Algorithm 6.2. One of its parameters, in addition to
the maximum number of tries and flips, is the noise p ∈ [0, 1], which controls
how often non-greedy moves are considered during the stochastic search. It has
been found empirically that for various instances from a single domain, a single
value of p is optimal. For random 3-SAT formulas, the optimal noise is seen to
be 0.57, and at this setting, Walksat is empirically observed to scale linearly for
clause-to-variable ratios α up to (and even slightly beyond) 4.2 [93], though not
all the way up to the conjectured satisfiability threshold of nearly 4.26.²
The focusing strategy of Walksat based on selecting variables solely from
unsatisfied clauses was inspired by the O(n^2) randomized algorithm for 2-SAT by
Papadimitriou [79]. It can be shown that for any satisfiable formula and starting
from any truth assignment, there exists a sequence of flips using only variables
from unsatisfied clauses such that one obtains a satisfying assignment.
Remark 6.1.1. We take a small detour to explain the elegant and insightful
O(n^2) time randomized local search algorithm for 2-SAT by Papadimitriou [79].
² Aurell et al. [8] had observed earlier that Walksat scales linearly for random 3-SAT at least
till clause-to-variable ratio 4.15.
Algorithm 6.2: Walksat (F)
Input : A CNF formula F
Parameters : Integers max-flips, max-tries; noise parameter p ∈ [0, 1]
Output : A satisfying assignment for F, or FAIL
begin
    for i ← 1 to max-tries do
        σ ← a randomly generated truth assignment for F
        for j ← 1 to max-flips do
            if σ satisfies F then return σ        // success
            C ← an unsatisfied clause of F chosen at random
            if ∃ variable x ∈ C with break-count = 0 then
                v ← x                             // freebie move
            else
                With probability p:               // random walk move
                    v ← a variable in C chosen at random
                With probability 1 − p:           // greedy move
                    v ← a variable in C with the smallest break-count
            Flip v in σ
    return FAIL                                   // no satisfying assignment found
end
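A compact (and deliberately unoptimized) Python sketch of this procedure, again with clauses represented as lists of signed integers:

```python
import random

def clause_satisfied(sigma, clause):
    return any((sigma[abs(l) - 1] == 1) == (l > 0) for l in clause)

def break_count(sigma, var, clauses):
    """Number of currently satisfied clauses that flipping `var` would unsatisfy."""
    count = 0
    for c in clauses:
        if clause_satisfied(sigma, c):
            sigma[var] ^= 1
            if not clause_satisfied(sigma, c):
                count += 1
            sigma[var] ^= 1
    return count

def walksat(clauses, n, p=0.5, max_flips=10000, max_tries=10, seed=None):
    rng = random.Random(seed)
    for _ in range(max_tries):
        sigma = [rng.randint(0, 1) for _ in range(n)]
        for _ in range(max_flips):
            unsat = [c for c in clauses if not clause_satisfied(sigma, c)]
            if not unsat:
                return sigma                        # success
            C = rng.choice(unsat)                   # focus on an unsatisfied clause
            vars_C = [abs(l) - 1 for l in C]
            bc = {v: break_count(sigma, v, clauses) for v in vars_C}
            zero_bc = [v for v in vars_C if bc[v] == 0]
            if zero_bc:
                v = zero_bc[0]                      # freebie move
            elif rng.random() < p:
                v = rng.choice(vars_C)              # random walk move
            else:
                v = min(vars_C, key=bc.get)         # greedy move
            sigma[v] ^= 1
    return None                                     # FAIL
```

As with GSAT, efficient implementations maintain break-counts incrementally rather than rescanning all clauses at every flip, which is what makes the method practical on formulas with many thousands of variables.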
The algorithm itself is very simple: while the current truth assignment does not
satisfy all clauses, select an unsatisfied clause arbitrarily, select one of its variables
uniformly at random, and flip this variable. Why does this take O(n^2) flips in
expectation before finding a solution? Assume without loss of generality that the
all-zeros string (i.e., all variables set to false) is a satisfying assignment for the
formula at hand. Let σ̄ denote this particular solution. Consider the Hamming
distance d(σ, σ̄) between the current truth assignment, σ, and the (unknown)
solution σ̄. Note that d(σ, σ̄) equals the number of true variables in σ. We
claim that in each step of the algorithm, we reduce d(σ, σ̄) by 1 with probability
at least a half, and increase it by 1 with probability less than a half. To see this,
note that since σ̄ is a solution, all clauses of the formula have at least one negative
literal, so that any unsatisfied clause selected by the algorithm must involve at
least one variable that is currently set to true, and selecting this variable to flip
will result in decreasing d(σ, σ̄) by 1. Given this claim, the algorithm is equivalent
to a one-dimensional Markov chain of length n + 1 with the target node—the
all-zeros string—at one extreme and the all-ones string at the other, such that at
every point we walk towards the target node with probability at least a half. It
follows from standard Markov chain hitting time analysis that we will hit
σ̄ after O(n^2) steps. (The algorithm could, of course, hit another solution before
hitting σ̄, which would reduce the runtime even further.)
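The algorithm from the remark is short enough to state in full; the sketch below follows the description directly. The implication-chain example formula at the bottom is ours, chosen so that the all-ones assignment is the unique solution.

```python
import random

def twosat_walk(clauses, n, seed=None, max_steps=None):
    """Papadimitriou's random walk for 2-SAT; returns (assignment, steps) on
    success, or (None, max_steps) if the step budget is exhausted."""
    rng = random.Random(seed)
    if max_steps is None:
        max_steps = 100 * n * n          # comfortably above the O(n^2) expectation
    sigma = [rng.randint(0, 1) for _ in range(n)]

    def lit_true(lit):
        v = sigma[abs(lit) - 1]
        return v == 1 if lit > 0 else v == 0

    for step in range(max_steps):
        unsat = [c for c in clauses if not any(lit_true(l) for l in c)]
        if not unsat:
            return sigma, step           # all clauses satisfied
        c = rng.choice(unsat)            # any unsatisfied clause works
        lit = rng.choice(c)              # pick one of its variables uniformly
        sigma[abs(lit) - 1] ^= 1         # and flip it
    return None, max_steps

# (x1) and (x1 -> x2) and (x2 -> x3) and (x3 -> x4): forces all variables true
chain = [[1, 1], [-1, 2], [-2, 3], [-3, 4]]
```

Note that the analysis in the remark requires the uniform choice of the variable within the clause; the choice of which unsatisfied clause to repair may be arbitrary.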
When one compares the biased random walk strategy of Walksat on hard
random 3-CNF formulas against basic GSAT, the simulated annealing process of
Kirkpatrick et al. [54], and a pure random walk strategy, the biased random walk
process significantly outperforms the other methods [94]. In the years following
the development of Walksat, many similar methods have been shown to be
highly effective on not only random formulas but on several classes of structured
instances, such as encodings of circuit design problems, Steiner tree problems,
problems in finite algebra, and AI planning [cf. 46].
6.2. Extensions of the Basic Local Search Method
Various extensions of the basic process discussed above have been explored, such
as dynamic noise adaptation as in the solver adapt-novelty [45], incorporating
unit clause elimination as in the solver UnitWalk [42], incorporating resolution-
based reasoning [7], and exploiting problem structure for increased efficiency [83].
Recently, it was shown that the performance of stochastic solvers on many struc-
tured problems can be further enhanced by using new SAT encodings that are
designed to be effective for local search [85].
While adding random walk moves as discussed above turned out to be a suc-
cessful method of guiding the search away from local basins of attraction and
toward other parts of the search space, a different line of research considered
techniques that relied on the idea of clause re-weighting as an extension of basic
greedy search [14, 27, 47, 48, 78, 98, 101]. Here one assigns a positive weight to
each clause and attempts to minimize the sum of the weights of the unsatisfied
clauses. The clause weights are dynamically modified as the search progresses,
increasing the weight of the clauses that are currently unsatisfied. (In some imple-
mentations, increasing the weight of a clause is done by simply adding identical
copies of the clause.) In this way, if one waits sufficiently long, any unsatisfied
clause gathers enough weight so as to sway the truth assignment in its favor. This
is thought of as “flooding” the current local minimum by re-weighting unsatisfied
clauses to create a new descent direction for local search. Variants of this approach
differ in the re-weighting strategy used, e.g., how often and by how much the
weights of unsatisfied clauses are increased, and how all weights are periodically
decreased in order to prevent certain weights from becoming disproportionately
high. The work on DLM, or the Discrete Lagrangian Method, grounded these tech-
niques in a solid theoretical framework, whose details we defer to Section 6.3.
The SDF or “smoothed descent and flood” system of Schuurmans and Southey
[91] achieved significantly improved results by using multiplicative (rather than
additive) re-weighting, by making local moves based on how strongly the clauses
are currently satisfied in terms of the number of satisfied literals (rather than
simply how many are satisfied), and by periodically shrinking all weights towards
their common mean. Overall, two of the most well-known clause-reweighting
schemes that have been proposed are SAPS (scaling and probabilistic smoothing,
along with its reactive variant RSAPS) [47] and PAWS (pure additive weighting
scheme) [101].
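The skeleton shared by these schemes can be sketched as follows. This is a deliberately simplified illustration of additive re-weighting with periodic smoothing toward the mean; it is not a reproduction of SAPS or PAWS, and the parameter names and values are placeholders.

```python
import random

def weighted_local_search(clauses, n, max_flips=20000, smooth_every=50,
                          rho=0.8, seed=None):
    rng = random.Random(seed)
    w = [1.0] * len(clauses)                     # one positive weight per clause
    sigma = [rng.randint(0, 1) for _ in range(n)]

    def clause_sat(c):
        return any((sigma[abs(l) - 1] == 1) == (l > 0) for l in c)

    def weighted_cost():                         # sum of weights of unsatisfied clauses
        return sum(w[i] for i, c in enumerate(clauses) if not clause_sat(c))

    for t in range(1, max_flips + 1):
        unsat = [i for i, c in enumerate(clauses) if not clause_sat(c)]
        if not unsat:
            return sigma
        current = weighted_cost()
        best_v, best_cost = None, current
        for v in range(n):                       # greedy step on the weighted objective
            sigma[v] ^= 1
            cost = weighted_cost()
            sigma[v] ^= 1
            if cost < best_cost:
                best_v, best_cost = v, cost
        if best_v is not None:
            sigma[best_v] ^= 1
        else:
            for i in unsat:                      # local minimum: "flood" it by
                w[i] += 1.0                      # re-weighting unsatisfied clauses
        if t % smooth_every == 0:                # pull all weights toward their mean
            mean = sum(w) / len(w)
            w = [rho * wi + (1 - rho) * mean for wi in w]
    return None
```

The flooding step is what distinguishes this family from plain greedy search: when no flip improves the weighted objective, the currently unsatisfied clauses gain weight until a new descent direction opens up.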
Other approaches for improving the performance of GSAT were also explored
in the early years. These include TSAT by Mazure et al. [69], who maintain a
tabu list in order to prevent GSAT from repeating earlier moves, and HSAT by
Gent and Walsh [31], who consider breaking ties in favor of least recently flipped
variables. These strategies provide improvement, but to a lesser extent than the
basic random walk component added by Walksat.
In an attempt towards better understanding the pros and cons of many of
these techniques, Schuurmans and Southey [91] proposed three simple, intuitive
measures of the effectiveness of local search: depth, mobility, and coverage. (A)
Depth measures how many clauses remain unsatisfied as the search proceeds.
Typically, good local search strategies quickly descend to low depth and stay
there for a long time; this corresponds to spending as much time as possible
near the bottom of the search landscape. (B) Mobility measures how rapidly the
process moves to new regions in the search space (while simultaneously trying to
stay deep in the objective). Clearly, the larger the mobility, the better the chance
that a local search strategy has of achieving success. (C) Coverage measures how
systematically the process explores the entire space, in terms of the largest “gap”,
i.e., the maximum Hamming distance between any unexplored assignment and the
nearest evaluated assignment. Schuurmans and Southey [91] hypothesized that, in
general, successful local search procedures work well not because they possess any
special ability to predict whether a local basin of attraction contains a solution
or not—rather they simply descend to promising regions and explore near the
bottom of the objective as rapidly, broadly, and systematically as possible, until
they stumble across a solution.
6.3. Discrete Lagrangian Methods
Shang and Wah [99] introduced a local search system for SAT based on the
theory of Lagrange multipliers. They extended the standard Lagrange method,
traditionally used for continuous optimization, to the discrete case of propositional
satisfiability, in a system called DLM (Discrete Lagrangian Method). Although
the final algorithm that comes out of this process can be viewed as a clause
weighted version of local search as discussed in Section 6.2, this approach provided
a theoretical foundation for many design choices that had appeared somewhat
ad-hoc in the past. The change in the weights of clauses that are unsatisfied
translates in this system to a change in the corresponding Lagrange multipliers,
as one searches for a (local) optimum of the associated Lagrange function.
The basic idea behind DLM is the following. Consider an n-variable CNF
formula F with clauses C_1, C_2, . . . , C_m. For x ∈ {0, 1}^n and i ∈ {1, 2, . . . , m}, let
U_i(x) be a function that is 0 if C_i is satisfied by x, and 1 otherwise. Then the
SAT problem for F can be written as the following optimization problem over
x ∈ {0, 1}^n:

    minimize    N(x) = Σ_{i=1}^m U_i(x)                           (6.1)
    subject to  U_i(x) = 0   ∀ i ∈ {1, 2, . . . , m}

Notice that N(x) ≥ 0 and equals 0 if and only if all clauses of F are satisfied.
Thus, the objective function N(x) is minimized if and only if x is a satisfying
assignment for F. This formulation, somewhat strangely, also has each clause as
an explicit constraint U_i(x) = 0, so that any feasible solution is automatically also
locally as well as globally optimal. This redundancy, however, is argued to be the
strength of the system: the dynamic shift in emphasis between the objective and
the constraints, depending on the relative values of the Lagrange multipliers, is
the key feature of Lagrangian methods.
The theory of discrete Lagrange multipliers provides a recipe for converting
the above constrained optimization problem into an unconstrained optimization
problem, by introducing a Lagrange multiplier for each of the constraints and
adding the constraints, appropriately multiplied, to the objective function. The
resulting discrete Lagrangian function, which provides the new objective function
to be optimized, is similar to what one would obtain in the traditional continuous
optimization case:

    L_d(x, λ) = N(x) + Σ_{i=1}^m λ_i U_i(x)                       (6.2)

where x ∈ {0, 1}^n are points in the variable space and λ = (λ_1, λ_2, . . . , λ_m) ∈ R^m
is a vector of Lagrange multipliers, one for each constraint in (i.e., clause of)
F. A point (x*, λ*) ∈ {0, 1}^n × R^m is called a saddle point of the Lagrange
function L_d(x, λ) if it is a local minimum w.r.t. x* and a local maximum w.r.t.
λ*. Formally, (x*, λ*) is a saddle point for L_d if

    L_d(x*, λ) ≤ L_d(x*, λ*) ≤ L_d(x, λ*)

for all λ sufficiently close to λ* and for all x that differ from x* only in one
dimension. It can be proven that x* is a local minimum solution to the discrete
constrained optimization formulation of SAT (6.1) if there exists a λ* such that
(x*, λ*) is a saddle point of the associated Lagrangian function L_d(x, λ). Therefore,
local search methods based on the Lagrangian system look for saddle points
in the extended space of variables and Lagrange multipliers. By doing descents in
the original variable space and ascents in the Lagrange-multiplier space, a saddle
point equilibrium is reached when (locally) optimal solutions are found.
For SAT, this is done through a difference gradient Δ_x L_d(x, λ), defined to be
a vector in {−1, 0, 1}^n with the properties that it has at most one non-zero entry
and y = x ⊕ Δ_x L_d(x, λ) minimizes L_d(y, λ) over all neighboring points y of x,
including x itself. Here ⊕ denotes vector addition: x ⊕ z = (x_1 + z_1, . . . , x_n + z_n).
The neighbors are “user-defined” and are usually taken to be all points that differ
from x only in one dimension (i.e., are one variable flip away). Intuitively, the
difference gradient Δ_x L_d(x, λ) “points in the direction” of the neighboring value
in the variable space that minimizes the Lagrangian function for the current λ.
This yields an algorithmic approach for minimizing the discrete Lagrangian
function L_d(x, λ) associated with SAT (and hence minimizing the objective
function N(x) in the discrete constrained optimization formulation of SAT (6.1)).
The algorithm proceeds iteratively in stages, updating x ∈ {0, 1}^n and λ ∈ R^m in
each stage using the difference gradient and the current status of each constraint
in terms of whether or not it is satisfied, until a fixed point is found. Let x(k)
and λ(k) denote the values of x and λ after the k-th iteration of the algorithm.
Then the updates are as follows:

    x(k + 1) = x(k) ⊕ Δ_x L_d(x(k), λ(k))
    λ(k + 1) = λ(k) + c · U(x(k))
where c ∈ R+ is a parameter that controls how fast the Lagrange multipliers are
increased over iterations and U denotes the vector of the m constraint functions
U_i defined earlier. The difference gradient Δ_x L_d determines which variable, if
any, to flip in order to lower L_d(x, λ) for the current λ. When a fixed point for
these iterations is reached, i.e., when x(k + 1) = x(k) and λ(k + 1) = λ(k), it must
be that all constraints U_i are satisfied. To see this, observe that if the i-th clause
is unsatisfied after the k-th iteration, then U_i(x(k)) = 1, which implies λ_i(k + 1) =
λ_i(k) + c; thus, λ_i will keep increasing until U_i is satisfied. This provides dynamic
clause re-weighting in this system, placing more and more emphasis on clauses
that are unsatisfied until they become satisfied. Note that changing the Lagrange
multipliers λ_i in turn affects x by changing the direction in which the difference
gradient Δ_x L_d(x, λ) points, eventually leading to a point at which all constraints
are satisfied. This is the essence of the DLM system for SAT.
The basic implementation of DLM [99] uses a simple controlled update rule
for λ: increase λ_i by 1 for all unsatisfied clauses C_i after a pre-set number θ_1 of
up-hill or flat moves (i.e., changes in x that do not decrease L_d(x, λ)) have been
performed. In order to avoid letting some Lagrange multipliers become dispro-
portionately large during the search, all λ_i's are periodically decreased after a
pre-set number θ_2 of increases in λ have been performed. Finally, the implemen-
tation uses tabu lists to store recently flipped variables, so as to avoid flipping
them repeatedly.
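A bare-bones sketch of these iterations in Python, written for clarity rather than efficiency; the tabu list and the θ_1/θ_2 schedules of the actual DLM implementation are omitted, and the parameter c is a placeholder.

```python
import random

def dlm_sat(clauses, n, c=1.0, max_iters=10000, seed=None):
    rng = random.Random(seed)
    m = len(clauses)
    lam = [0.0] * m                              # one Lagrange multiplier per clause
    x = [rng.randint(0, 1) for _ in range(n)]

    def U(i):                                    # U_i(x): 1 iff clause i is unsatisfied
        sat = any((x[abs(l) - 1] == 1) == (l > 0) for l in clauses[i])
        return 0 if sat else 1

    def L_d():                                   # N(x) + sum_i lambda_i * U_i(x)
        return sum((1.0 + lam[i]) * U(i) for i in range(m))

    for _ in range(max_iters):
        if all(U(i) == 0 for i in range(m)):
            return x                             # all constraints satisfied
        # descent in x: take the best strictly improving one-flip neighbor, if any
        best_v, best_val = None, L_d()
        for v in range(n):
            x[v] ^= 1
            val = L_d()
            x[v] ^= 1
            if val < best_val:
                best_v, best_val = v, val
        if best_v is not None:
            x[best_v] ^= 1
        else:
            # ascent in lambda: lambda_i(k+1) = lambda_i(k) + c * U_i(x(k))
            for i in range(m):
                lam[i] += c * U(i)
    return None
```

When the descent step finds no improving flip, the ascent step raises the multipliers of exactly the unsatisfied clauses, which is the Lagrangian counterpart of the clause re-weighting performed by the methods of Section 6.2.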
Wu and Wah [102] observed that in many difficult-to-solve instances, such
as those from the parity and hanoi domains, basic DLM frequently gets stuck in
traps, i.e., pairs (x, λ) such that there are one or more unsatisfied clauses but the
associated L_d increases by changing x in any one dimension. They found that
on these instances, some clauses are significantly more likely than others to be
amongst the unsatisfied clauses in such a trap. (Note that this is different from
counting how often a clause is unsatisfied; here we only consider the clause status
inside a trap situation, ignoring how the search arrived at the trap.) Keeping track
of such clauses and periodically performing a special increase in their associated
Lagrange multipliers guides the search away from traps and results in better
performance.
Wu and Wah [103] later generalized this strategy by recording not only information
about traps but a history of all recently visited points in the variable space. Instead
of performing a special increase periodically, this history information is now added
directly as a distance penalty term to the Lagrangian function L_d. The penalty
is larger for points that are closer in Hamming distance to the current value of x,
thereby guiding the search away from recently visited points (in particular, points
inside a trap that would be visited repeatedly).
6.4. The Phase Transition Phenomenon in Random k-SAT
One of the key motivations in the early 1990’s for studying incomplete, stochastic
methods for solving SAT instances was the observation that DPLL-based sys-
tematic solvers perform quite poorly on certain randomly generated formulas.
For completeness, we provide a brief overview of these issues here; for a detailed
discussion, refer to Part 1, Chapter 8 of this Handbook.
Consider a random k-CNF formula F on n variables generated by indepen-
dently creating m clauses as follows: for each clause, select k distinct variables
uniformly at random out of the n variables and negate each selected variable
independently with probability 1/2. When F is chosen from this distribution,
Mitchell, Selman, and Levesque [76] observed that the median hardness of the in-
stances is very nicely characterized by a single parameter: the clause-to-variable
ratio, m/n, typically denoted by α. They observed that instance hardness peaks
in a critically constrained region determined by α alone. The top pane of Fig-
ure 6.1 depicts the now well-known “easy-hard-easy” pattern of SAT and other
combinatorial problems, as the key parameter (in this case α) is varied. For ran-
dom 3-SAT, this region has been experimentally shown to be around α 4.26
(for early results see Crawford and Auton [20], Kirkpatrick and Selman [55]; new
findings by Mertens et al. [71]), and has provided challenging benchmarks as a
test-bed for SAT solvers. Cheeseman et al. [15] observed a similar easy-hard-easy
pattern in the random graph coloring problem. For random formulas, interest-
ingly, a slight natural variant of the above “fixed-clause-length” model, called the
variable-clause-length model, does not have a clear set of parameters that leads
to a hard set of instances [26, 33, 86]. This apparent difficulty in generating
computationally hard instances for SAT solvers provided the impetus for much
of the early work on local search methods for SAT. We refer the reader to Cook
and Mitchell [19] for a detailed survey.
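The fixed-clause-length model described above is straightforward to generate; a sketch:

```python
import random

def random_kcnf(n, m, k=3, seed=None):
    """Random k-CNF in the fixed-clause-length model: m clauses, each over k
    distinct variables chosen uniformly at random, each selected variable
    negated independently with probability 1/2."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(m):
        chosen = rng.sample(range(1, n + 1), k)  # k distinct variables
        clauses.append([v if rng.random() < 0.5 else -v for v in chosen])
    return clauses

# instances near the hard region for 3-SAT: alpha = m/n around 4.26
n = 100
formula = random_kcnf(n, round(4.26 * n), k=3, seed=0)
```

Sampling at a fixed clause-to-variable ratio α simply means choosing m = round(α · n); sweeping α across the critical region reproduces the easy-hard-easy pattern discussed above.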
The critically constrained region marks a stark transition not only in the
computational hardness of random SAT instances but also in their satisfiability
itself. The bottom pane of Figure 6.1 shows the fraction of random formulas
that are unsatisfiable, as a function of α. We see that nearly all formulas with α
below the critical region (the under-constrained instances) are satisfiable. As α
approaches and passes the critical region, there is a sudden change and nearly all
formulas in this over-constrained region are unsatisfiable. Further, as n grows, this
phase transition phenomenon becomes sharper and sharper, and coincides with
the region in which the computational hardness peaks. The relative hardness of
the instances in the unsatisfiable region to the right of the phase transition is
consistent with the formal result of Chvátal and Szemerédi [17] who, building
upon the work of Haken [41], proved that large unsatisfiable random k-CNF
formulas almost surely require exponential size resolution refutations, and thus
exponential length runs of any DPLL-based algorithm proving unsatisfiability.
This formal result was subsequently refined and strengthened by others [cf. 9, 10,
18].
Relating the phase transition phenomenon for 3-SAT to statistical physics,
Kirkpatrick and Selman [55] showed that the threshold has characteristics typical
of phase transitions in the statistical mechanics of disordered materials (see also
Monasson et al. [77] and Part 2, Chapter 18 of this Handbook). Physicists have
studied phase transition phenomena in great detail because of the many interest-
ing changes in a system’s macroscopic behavior that occur at phase boundaries.
One useful tool for the analysis of phase transition phenomena is called finite-size
scaling analysis [97]. This approach is based on rescaling the horizontal axis by
a factor that is a function of n. The function is such that the horizontal axis is
stretched out for larger n. In effect, rescaling “slows down” the phase-transition
Figure 6.1. The phase transition phenomenon in random 3-SAT. Top: Computational hardness (number of DP calls vs. the ratio of clauses to variables, for 20-, 40-, and 50-variable formulas) peaks at α ≈ 4.26. Bottom: The fraction of unsatisfiable formulas (plotted against M/N for N = 12 up to N = 100) changes from mostly satisfiable to mostly unsatisfiable. The transitions sharpen as the number of variables grows.
for higher values of n, and thus gives us a better look inside the transition. From
the resulting universal curve, applying the scaling function backwards, the actual
transition curve for each value of n can be obtained. In principle, this approach
also localizes the 50%-satisfiable-point for any value of n, which allows one to
generate very hard random 3-SAT instances.
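The bottom-pane measurement of Figure 6.1 is easy to reproduce at very small n, where satisfiability can be decided by brute-force enumeration. The sketch below is our illustration only (the helper names `random_3sat`, `satisfiable`, and `unsat_fraction` are ours); it estimates the fraction of unsatisfiable random 3-SAT formulas at a given ratio α:

```python
import random
from itertools import product

def random_3sat(n, m, rng):
    """One formula from the fixed-clause-length model."""
    return [tuple(v if rng.random() < 0.5 else -v
                  for v in rng.sample(range(1, n + 1), 3))
            for _ in range(m)]

def satisfiable(formula, n):
    """Brute-force SAT check; feasible only for small n (2^n assignments)."""
    return any(
        all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in formula)
        for bits in product([False, True], repeat=n))

def unsat_fraction(n, alpha, trials, seed=0):
    """Estimate the fraction of unsatisfiable formulas at ratio alpha = m/n."""
    rng = random.Random(seed)
    m = round(alpha * n)
    return sum(not satisfiable(random_3sat(n, m, rng), n)
               for _ in range(trials)) / trials
```

Sweeping α from 2 to 7 with, say, n = 10 and a few dozen trials per point reproduces the qualitative S-shaped curve of the bottom pane: the estimated fraction rises from near 0 to near 1 through the critical region.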
Interestingly, it is still not formally known whether there even exists a critical constant α_c such that as n grows, almost all 3-SAT formulas with α < α_c are satisfiable and almost all 3-SAT formulas with α > α_c are unsatisfiable. In this respect, Friedgut [29] provided the first positive result, showing that there exists a function α_c(n) depending on n such that the above threshold property holds.
(It is quite likely that the threshold in fact does not depend on n, and is a fixed
constant.) In a series of papers, researchers have narrowed down the gap between
upper bounds on the threshold for 3-SAT [e.g. 13, 23, 26, 49, 56], the best so
far being 4.51 [23], and lower bounds [e.g. 1, 5, 13, 25, 30, 40, 53], the best so
far being 3.52 [40, 53]. On the other hand, for random 2-SAT, we do have a
full rigorous understanding of the phase transition, which occurs at the clause-to-variable ratio of 1 [11, 16]. Also, for general k, the threshold for random k-SAT is known to be in the range 2^k ln 2 − O(k) [2, 3, 34].
6.5. A New Technique for Random k-SAT: Survey Propagation
We end this chapter with a brief discussion of Survey Propagation (SP), an ex-
citing new incomplete algorithm for solving hard combinatorial problems. The
reader is referred to Part 2, Chapter 18 of this Handbook for a detailed treatment
of work in this direction. Survey propagation was discovered in 2002 by Mézard,
Parisi, and Zecchina [73], and is so far the only known method successful at solv-
ing random 3-SAT instances with one million variables and beyond in near-linear
time in the most critically constrained region.³
The SP method is quite radical in that it tries to approximate, using an iter-
ative process of local “message” updates, certain marginal probabilities related to
the set of satisfying assignments. It then assigns values to variables with the most
extreme probabilities, simplifies the formula, and repeats the process. This strat-
egy is referred to as SP-inspired decimation. In effect, the algorithm behaves like
the usual DPLL-based methods, which also assign variable values incrementally
in an attempt to find a satisfying assignment. However, quite surprisingly, SP
almost never has to backtrack. In other words, the “heuristic guidance” from SP
is almost always correct. Note that, interestingly, computing marginals on sat-
isfying assignments is strongly believed to be much harder than finding a single
satisfying assignment (#P-complete vs. NP-complete). Nonetheless, SP is able
to efficiently approximate certain marginals on random SAT instances and uses
this information to successfully find a satisfying assignment.
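The SP-inspired decimation strategy can be illustrated without the message-passing machinery by computing the marginals exactly on a tiny formula, by enumeration, which is precisely the computation SP approximates. The Python sketch below is our illustration of the decimation loop, not the SP algorithm itself; with exact marginals the loop never needs to backtrack, which is the behavior SP empirically approaches:

```python
from itertools import product

def solutions(formula, variables):
    """All satisfying assignments, by enumeration (tiny formulas only)."""
    sols = []
    for bits in product([False, True], repeat=len(variables)):
        a = dict(zip(variables, bits))
        if all(any(a[abs(l)] == (l > 0) for l in c) for c in formula):
            sols.append(a)
    return sols

def decimate(formula, variables):
    """Fix the variable with the most extreme marginal, simplify, repeat.
    SP replaces the exact (intractable) marginals used here with
    message-passing estimates."""
    fixed = {}
    while variables:
        sols = solutions(formula, variables)
        if not sols:
            return None  # a real solver would backtrack here
        # marginal probability that each unfixed variable is True
        marg = {v: sum(s[v] for s in sols) / len(sols) for v in variables}
        v = max(variables, key=lambda x: abs(marg[x] - 0.5))
        val = marg[v] >= 0.5
        fixed[v] = val
        lit = v if val else -v
        # simplify: drop satisfied clauses, strip falsified literals
        formula = [tuple(l for l in c if l != -lit)
                   for c in formula if lit not in c]
        variables = [x for x in variables if x != v]
    return fixed if not formula else None
```

Because each variable is fixed to its majority value over the surviving solutions, this exact-marginal version provably never reaches an empty solution set on a satisfiable input; the surprise reported above is that SP's approximate marginals almost never do either.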
SP was derived from rather complex statistical physics methods, specifically,
the so-called cavity method developed for the study of spin glasses. The origin of
SP in statistical physics and its remarkable and unparalleled success on extremely
challenging random 3-SAT instances has sparked a lot of interest in the computer
science research community, and has led to a series of papers in the last five
years exploring various aspects of SP [e.g. 4, 8, 12, 58–60, 65, 67, 66, 72, 74, 104].
Many of these aspects still remain somewhat mysterious, making SP an active and
promising research area for statistical physicists, theoretical computer scientists,
and artificial intelligence practitioners alike.
While the method is still far from well-understood, close connections to belief
propagation (BP) methods [82] more familiar to computer scientists have been
subsequently discovered. In particular, it was shown by Braunstein and Zecchina
[12] (later extended by Maneva, Mossel, and Wainwright [66]) that SP equations
are equivalent to BP equations for obtaining marginals over a special class of com-
binatorial objects called covers. In this respect, SP is the first successful example
of the use of a probabilistic reasoning technique to solve a purely combinatorial
search problem. The recent work of Kroc et al. [58] empirically established that
SP, despite the very loopy nature of random formulas which violate the standard
tree-structure assumptions underlying the BP algorithm, is remarkably good at
3
As mentioned earlier, it has been recently shown that by finely tuning the noise and tem-
perature parameters, Walksat can also be made to scale well on hard random 3-SAT instances
with clause-to-variable ratios α slightly exceeding 4.2 [93].
Chapter 6. Incomplete Algorithms 13
computing marginals over these covers objects on large random 3-SAT instances.
Kroc et al. [59] also demonstrated that information obtained from BP-style al-
gorithms can be effectively used to enhance the performance of algorithms for
the model counting problem, a generalization of the SAT problem where one is
interested in counting the number of satisfying assignments.
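Model counting itself can be stated compactly: where SAT asks for one satisfying assignment, #SAT asks how many there are. A brute-force counter (our illustration; practical model counters instead prune, decompose, and, as in [59], exploit BP-style statistics) is just:

```python
from itertools import product

def count_models(formula, n):
    """#SAT by exhaustive enumeration over all 2^n assignments.
    Exponential in n; shown only to make the problem statement concrete."""
    return sum(
        all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in formula)
        for bits in product([False, True], repeat=n))
```

For example, the single clause (x1 ∨ x2) over two variables has three models, and `count_models` returns 3 on that input.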
Unfortunately, the success of SP is currently limited to random SAT instances.
It is an exciting research challenge to further understand SP and apply it suc-
cessfully to more structured, real-world problem instances.
6.6. Conclusion
Incomplete algorithms for satisfiability testing provide a complementary approach
to complete methods, using an essentially disjoint set of techniques and often being well-suited to problem domains in which complete methods do not scale well. While a mixture of greedy descent and random walk provides the basis for
most local search SAT solvers in use today, much work has gone into finding the
right balance and in developing techniques to focus the search and efficiently bring
it out of local minima and traps. Formalisms like the discrete Lagrangian method
and ideas like clause weighting or flooding have played a crucial role in pushing
the understanding and scalability of local search methods for SAT. An important
role has also been played by the random k-SAT problem, particularly in providing
hard benchmarks and a connection to the statistical physics community, leading
to the survey propagation algorithm. Can we bring together ideas and techniques
from systematic solvers and incomplete solvers to create a solver that has the
best of both worlds? While some progress has been made in this direction, much
remains to be done.
References
[1] D. Achlioptas. Setting 2 variables at a time yields a new lower bound for
random 3-SAT. In 32nd STOC, pages 28–37, Portland, OR, May 2000.
[2] D. Achlioptas, A. Naor, and Y. Peres. Rigorous location of phase transitions
in hard optimization problems. Nature, 435:759–764, 2005.
[3] D. Achlioptas and Y. Peres. The threshold for random k-SAT is 2^k (ln 2 − O(k)). J. American Math. Soc., 17:947–973, 2004.
[4] D. Achlioptas and F. Ricci-Tersenghi. On the solution-space geometry of
random constraint satisfaction problems. In 38th STOC, pages 130–139,
Seattle, WA, May 2006.
[5] D. Achlioptas and G. Sorkin. Optimal myopic algorithms for random 3-SAT.
In 41st FOCS, pages 590–600, Redondo Beach, CA, Nov. 2000. IEEE.
[6] H. M. Adorf and M. D. Johnston. A discrete stochastic neural network al-
gorithm for constraint satisfaction problems. In Intl. Joint Conf. on Neural
Networks, pages 917–924, San Diego, CA, 1990.
[7] Anbulagan, D. N. Pham, J. K. Slaney, and A. Sattar. Old resolution meets
modern SLS. In 20th AAAI, pages 354–359, Pittsburgh, PA, July 2005.
[8] E. Aurell, U. Gordon, and S. Kirkpatrick. Comparing beliefs, surveys, and
random walks. In 18th NIPS, Vancouver, BC, Dec. 2004.
[9] P. Beame, R. Karp, T. Pitassi, and M. Saks. On the Complexity of Un-
satisfiability Proofs for Random k-CNF Formulas. In 30th STOC, pages
561–571, Dallas, TX, May 1998.
[10] P. W. Beame and T. Pitassi. Simplified and improved resolution lower
bounds. In 37th FOCS, pages 274–282, Burlington, VT, Oct. 1996. IEEE.
[11] B. Bollobás, C. Borgs, J. T. Chayes, J. H. Kim, and D. B. Wilson. The
scaling window of the 2-SAT transition. Random Struct. and Alg., 19(3-4):
201–256, 2001.
[12] A. Braunstein and R. Zecchina. Survey propagation as local
equilibrium equations. J. Stat. Mech., P06007, 2004. URL
http://lanl.arXiv.org/cond-mat/0312483.
[13] A. Broder, A. Frieze, and E. Upfal. On the satisfiability and maximum
satisfiability of random 3-CNF formulas. In Proc., 4th SODA, Jan. 1993.
[14] B. Cha and K. Iwama. Adding new clauses for faster local search. In 13th
AAAI, pages 332–337, Portland, OR, Aug. 1996.
[15] P. Cheeseman, B. Kenefsky, and W. M. Taylor. Where the really hard prob-
lems are. In Proceedings of IJCAI-91, pages 331–337. Morgan Kaufmann,
1991.
[16] V. Chvátal and B. Reed. Mick gets some (the odds are on his side). In 33rd
FOCS, pages 620–627, Pittsburgh, PA, Oct. 1992. IEEE.
[17] V. Chvátal and E. Szemerédi. Many hard examples for resolution. J. Assoc.
Comput. Mach., 35(4):759–768, 1988.
[18] M. Clegg, J. Edmonds, and R. Impagliazzo. Using the Gröbner basis al-
gorithm to find proofs of unsatisfiability. In 28th STOC, pages 174–183,
Philadelphia, PA, May 1996.
[19] S. Cook and D. Mitchell. Finding hard instances of the satisfiability prob-
lem: a survey. In DIMACS Series in Discr. Math. and Theoretical Comp.
Sci., volume 35, pages 1–17. American Math. Society, 1997.
[20] J. M. Crawford and L. D. Auton. Experimental results on the cross-over
point in satisfiability problems. In Proc. AAAI-93, pages 21–27, Washing-
ton, DC, 1993.
[21] E. Dantsin, A. Goerdt, E. A. Hirsch, R. Kannan, J. M. Kleinberg, C. H. Papadimitriou, P. Raghavan, and U. Schöning. A deterministic (2 − 2/(k + 1))^n algorithm for k-SAT based on local search. Theoretical Comput. Sci., 289(1):69–83, 2002.
[22] S. Droste, T. Jansen, and I. Wegener. On the analysis of the (1+1) evolu-
tionary algorithm. Theoretical Comput. Sci., 276(1-2):51–81, 2002.
[23] O. Dubois, Y. Boufkhad, and J. Mandler. Typical random 3-SAT formulae
and the satisfiability threshold. Technical Report TR03-007, ECCC, 2003.
[24] A. E. Eiben and J. E. Smith. Introduction to Evolutionary Computing.
Springer, 2003. ISBN 3-540-40184-9.
[25] J. Franco. Probabilistic analysis of the pure literal heuristic for the satisfi-
ability problem. Annals of Operations Research, 1:273–289, 1983.
[26] J. Franco and M. Paull. Probabilistic analysis of the Davis-Putnam proce-
dure for solving the satisfiability problem. Discr. Applied Mathematics, 5:
77–87, 1983.
[27] J. Frank. Learning short-term weights for GSAT. In 15th IJCAI, pages
384–391, Nagoya, Japan, Aug. 1997.
[28] J. Frank, P. Cheeseman, and J. Stutz. Where gravity fails: Local search
topology. JAIR, 7:249–281, 1997.
[29] E. Friedgut. Sharp thresholds of graph properties, and the k-sat problem.
Journal of the American Mathematical Society, 12:1017–1054, 1999.
[30] A. Frieze and S. Suen. Analysis of two simple heuristics on a random
instance of k-SAT. J. Alg., 20(2):312–355, 1996.
[31] I. P. Gent and T. Walsh. Towards an understanding of hill-climbing proce-
dures for SAT. In 11th AAAI, pages 28–33, Washington, DC, July 1993.
[32] M. L. Ginsberg and D. A. McAllester. GSAT and dynamic backtracking.
In 4th KR, pages 226–237, Bonn, Germany, May 1994.
[33] A. Goldberg. On the complexity of the satisfiability problem. Technical
Report Report No. 16, Courant Computer Science, New York University,
1979.
[34] C. P. Gomes and B. Selman. Can get satisfaction. Nature, 435:751–752,
2005.
[35] J. Gottlieb, E. Marchiori, and C. Rossi. Evolutionary algorithms for the
satisfiability problem. Evolutionary Computation, 10(1):35–50, 2002.
[36] O.-C. Granmo and N. Bouhmala. Solving the satisfiability problem using
finite learning automata. Intl. J. Comp. Sci. App., 4(3):15–29, 2007.
[37] J. Gu. Efficient local search for very large-scale satisfiability problems.
Sigart Bulletin, 3(1):8–12, 1992.
[38] J. Gu, P. W. Purdom, J. Franco, and B. J. Wah. Algorithms for the Satisfia-
bility (SAT) Problem: A Survey. In Satisfiability (SAT) Problem, DIMACS,
pages 19–151. American Mathematical Society, 1997.
[39] D. Habet, C. M. Li, L. Devendeville, and M. Vasquez. A hybrid approach
for SAT. In 8th CP, volume 2470 of LNCS, pages 172–184, Ithaca, NY,
Sept. 2002.
[40] M. Hajiaghayi and G. B. Sorkin. The satisfiability threshold of random 3-
SAT is at least 3.52. Technical Report RC22942, IBM Research Report,
2003. http://arxiv.org/pdf/math.CO/0310193.
[41] A. Haken. The intractability of resolution. Theoretical Comput. Sci., 39:
297–305, 1985.
[42] E. A. Hirsch and A. Kojevnikov. UnitWalk: A new SAT solver that uses
local search guided by unit clause elimination. Annals Math. and AI, 43(1):
91–111, 2005.
[43] J. N. Hooker. A quantitative approach to logical inference. Decision Support
Systems, 4:45–69, 1988.
[44] H. H. Hoos. On the run-time behaviour of stochastic local search algorithms
for SAT. In Proceedings of AAAI-99, pages 661–666. AAAI Press, 1999.
[45] H. H. Hoos. An adaptive noise mechanism for WalkSAT. In 18th AAAI,
pages 655–660, Edmonton, Canada, July 2002.
[46] H. H. Hoos and T. Stützle. Stochastic Local Search: Foundations and Ap-
plications. Morgan Kaufmann, San Francisco, CA, 2004.
[47] F. Hutter, D. A. D. Tompkins, and H. H. Hoos. Scaling and probabilistic
smoothing: Efficient dynamic local search for SAT. In 8th CP, volume 2470
of LNCS, pages 233–248, Ithaca, NY, Sept. 2002.
[48] A. Ishtaiwi, J. Thornton, Anbulagan, A. Sattar, and D. N. Pham. Adaptive
clause weight redistribution. In 12th CP, volume 4204 of LNCS, pages 229–
243, Nantes, France, Sept. 2006.
[49] S. Janson, Y. C. Stamatiou, and M. Vamvakari. Bounding the unsatisfiabil-
ity threshold of random 3-SAT. Random Struct. and Alg., 17(2):103–116,
2000.
[50] D. S. Johnson, C. R. Aragon, L. A. McGeoch, and C. Schevon. Optimization
by simulated annealing: an experimental evaluation; part II. Operations
Research, 39, 1991.
[51] D. S. Johnson and M. A. Trick, editors. Cliques, Coloring and Satisfiability:
the Second DIMACS Implementation Challenge, volume 26 of DIMACS
Series in DMTCS. Amer. Math. Soc., 1996.
[52] A. Kamath, N. Karmarkar, K. Ramakrishnan, and M. Resende. Compu-
tational experience with an interior point algorithm on the satisfiability
problem. In Proceedings of Integer Programming and Combinatorial Opti-
mization, pages 333–349, Waterloo, Canada, 1990. Mathematical Program-
ming Society.
[53] A. C. Kaporis, L. M. Kirousis, and E. G. Lalas. The probabilistic analysis
of a greedy satisfiability algorithm. Random Structures and Algorithms, 28
(4):444–480, 2006.
[54] S. Kirkpatrick, D. Gelatt Jr., and M. P. Vecchi. Optimization by simulated
annealing. Science, 220(4598):671–680, 1983.
[55] S. Kirkpatrick and B. Selman. Critical behavior in the satisfiability of ran-
dom Boolean expressions. Science, 264:1297–1301, May 1994. Also p. 1249: "Math. Logic: Pinning Down a Treacherous Border in Logical Statements" by B. Cipra.
[56] L. M. Kirousis, E. Kranakis, and D. Krizanc. Approximating the unsatisfi-
ability threshold of random formulas. In Proceedings of the Fourth Annual
European Symposium on Algorithms, pages 27–38, Barcelona, Spain, Sept.
1996.
[57] K. Konolige. Easy to be hard: Difficult problems for greedy algorithms. In
4th KR, pages 374–378, Bonn, Germany, May 1994.
[58] L. Kroc, A. Sabharwal, and B. Selman. Survey propagation revisited. In
23rd UAI, pages 217–226, Vancouver, BC, July 2007.
[59] L. Kroc, A. Sabharwal, and B. Selman. Leveraging belief propagation,
backtrack search, and statistics for model counting. In 5th CPAIOR, volume
5015 of LNCS, pages 127–141, Paris, France, May 2008.
[60] F. Krzakala, A. Montanari, F. Ricci-Tersenghi, G. Semerjian, and L. Zde-
borova. Gibbs states and the set of solutions of random constraint satisfac-
tion problems. PNAS, 104(25):10318–10323, June 2007.
[61] F. Lardeux, F. Saubion, and J.-K. Hao. GASAT: A genetic local search
algorithm for the satisfiability problem. Evolutionary Computation, 14(2):
223–253, 2006.
[62] C. M. Li, W. Wei, and H. Zhang. Combining adaptive noise and look-ahead
in local search for SAT. In 10th SAT, volume 4501 of LNCS, pages 121–133,
Lisbon, Portugal, May 2007.
[63] X. Y. Li, M. F. M. Stallmann, and F. Brglez. QingTing: A local search SAT
solver using an effective switching strategy and an efficient unit propagation.
In 6th SAT, pages 53–68, Santa Margherita, Italy, May 2003.
[64] S. Lin and B. W. Kernighan. An efficient heuristic algorithm for the
traveling-salesman problem. Operations Research, 21:498–516, 1973.
[65] E. Maneva. Belief Propagation Algorithms for Constraint Satisfaction Prob-
lems. PhD thesis, University of California, Berkeley, 2006.
[66] E. Maneva, E. Mossel, and M. J. Wainwright. A new look at survey prop-
agation and its generalizations. J. Assoc. Comput. Mach., 54(4):17, July
2007.
[67] E. Maneva and A. Sinclair. On the satisfiability threshold and survey prop-
agation for random 3-SAT, 2007. http://arxiv.org/abs/cs.CC/0609072.
[68] B. Mazure, L. Sais, and E. Gregoire. Boosting complete techniques thanks
to local search methods. In Proc. Math and AI, 1996.
[69] B. Mazure, L. Sais, and E. Gr´egoire. Tabu search for SAT. In 14th AAAI,
pages 281–285, Providence, RI, July 1997.
[70] D. A. McAllester, B. Selman, and H. Kautz. Evidence for invariants in local
search. In AAAI/IAAI, pages 321–326, Providence, RI, July 1997.
[71] S. Mertens, M. Mézard, and R. Zecchina. Threshold values of random K-
SAT from the cavity method. Random Struct. and Alg., 28(3):340–373,
2006.
[72] M. Mézard, T. Mora, and R. Zecchina. Clustering of solutions in the random
satisfiability problem. Phy. Rev. Lett., 94:197205, May 2005.
[73] M. Mézard, G. Parisi, and R. Zecchina. Analytic and algorithmic solution
of random satisfiability problems. Science, 297(5582):812–815, 2002.
[74] M. Mézard and R. Zecchina. Random k-satisfiability problem: From an
analytic solution to an efficient algorithm. Phy. Rev. E, 66:056126, Nov.
2002.
[75] S. Minton, M. D. Johnston, A. B. Philips, and P. Laird. Solving large-
scale constraint satisfaction and scheduling problems using a heuristic repair
method. In Proceedings AAAI-90, pages 17–24. AAAI Press, 1990.
[76] D. Mitchell, B. Selman, and H. J. Levesque. Hard and easy distributions of
SAT problems. In Proc. AAAI-92, pages 459–465, San Jose, CA, 1992.
[77] R. Monasson, R. Zecchina, S. Kirkpatrick, B. Selman, and L. Troyansky. De-
termining computational complexity from characteristic phase transitions.
Nature, 400(8):133–137, 1999.
[78] P. Morris. The breakout method for escaping from local minima. In 11th
AAAI, pages 428–433, Washington, DC, July 1993.
[79] C. H. Papadimitriou. On selecting a satisfying truth assignment. In 32nd
FOCS, pages 163–169, San Juan, Puerto Rico, Oct. 1991. IEEE.
[80] C. H. Papadimitriou and K. Steiglitz. Combinatorial Optimization.
Prentice-Hall, Inc., 1982.
[81] A. J. Parkes and J. P. Walser. Tuning local search for satisfiability testing.
In 13th AAAI, pages 356–362, Portland, OR, Aug. 1996.
[82] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plau-
sible Inference. Morgan Kaufmann, 1988.
[83] D. N. Pham, J. Thornton, and A. Sattar. Building structure into local
search for SAT. In 20th IJCAI, pages 2359–2364, Hyderabad, India, Jan.
2007.
[84] S. D. Prestwich. Local search and backtracking vs non-systematic back-
tracking. In AAAI 2001 Fall Symp. on Using Uncertainty within Compu-
tation, 2001.
[85] S. D. Prestwich. Variable dependency in local search: Prevention is better
than cure. In 10th SAT, Lisbon, Portugal, May 2007.
[86] P. W. Purdom Jr. and C. A. Brown. Polynomial average-time satisfiability
problems. Information Science, 41:23–42, 1987.
[87] M. G. C. Resende and T. A. Feo. A GRASP for satisfiability. In Johnson
and Trick [51], pages 499–520.
[88] I. Rish and R. Dechter. To guess or to think? hybrid algorithms for SAT. In
Proc. Conference on Principles of Constraint Programming (CP-96), pages
555–556, 1996.
[89] U. Schöning. A probabilistic algorithm for k-SAT and constraint satisfaction
problems. In 40th FOCS, pages 410–414, New York, NY, Oct. 1999. IEEE.
[90] U. Schöning. A probabilistic algorithm for k-SAT based on limited local
search and restart. Algorithmica, 32(4):615–623, 2002.
[91] D. Schuurmans and F. Southey. Local search characteristics of incomplete
SAT procedures. AI J., 132(2):121–150, 2001.
[92] D. Schuurmans, F. Southey, and R. C. Holte. The exponentiated subgra-
dient algorithm for heuristic boolean programming. In 17th IJCAI, pages
334–341, Seattle, WA, Aug. 2001.
[93] S. Seitz, M. Alava, and P. Orponen. Focused local search for random 3-
satisfiability. J. Stat. Mech., P06006, 2005.
[94] B. Selman, H. Kautz, and B. Cohen. Noise strategies for local search. In
Proc. AAAI-94, pages 337–343, Seattle, WA, 1994.
[95] B. Selman, H. Kautz, and B. Cohen. Local search strategies for satisfiability
testing. In Johnson and Trick [51], pages 521–532.
[96] B. Selman and H. A. Kautz. Domain-independent extensions to GSAT:
Solving large structured satisfiability problems. In 13th IJCAI, pages 290–
295, France, 1993.
[97] B. Selman and S. Kirkpatrick. Critical behavior in the computational cost
of satisfiability testing. AI J., 81:273–295, 1996.
[98] B. Selman, H. J. Levesque, and D. G. Mitchell. A new method for solving
hard satisfiability problems. In 10th AAAI, pages 440–446, San Jose, CA,
July 1992.
[99] Y. Shang and B. W. Wah. A discrete Lagrangian-based global-search
method for solving satisfiability problems. J. of Global Optimization, 12
(1):61–99, 1998.
[100] W. M. Spears. Simulated annealing for hard satisfiability problems. In
Johnson and Trick [51], pages 533–558.
[101] J. Thornton, D. N. Pham, S. Bain, and V. Ferreira Jr. Additive versus
multiplicative clause weighting for SAT. In 19th AAAI, pages 191–196, San
Jose, CA, July 2004.
[102] Z. Wu and B. W. Wah. Trap escaping strategies in discrete Lagrangian
methods for solving hard satisfiability and maximum satisfiability problems.
In 16th AAAI, pages 673–678, Orlando, FL, July 1999.
[103] Z. Wu and B. W. Wah. An efficient global-search strategy in discrete La-
grangian methods for solving hard satisfiability problems. In 17th AAAI,
pages 310–315, Austin, TX, July 2000.
[104] L. Zdeborova and F. Krzakala. Phase transition in the coloring of random
graphs. Phy. Rev. E, 76:031131, Sept. 2007.
Index
belief propagation, 12
BP, see belief propagation
break count, 4
cavity method, 12
clause re-weighting, see clause weighting
clause weighting, 6, 7, 9
cover, 12
discrete Lagrangian method, 7
DLM, see discrete Lagrangian method
finite-size scaling, 10
focused random walk, 3
Lagrangian method, see discrete Lagrangian method
landscape, 2
local search, 1
adaptg2wsat+, 2
AdaptNovelty, 2
clause weighting, 6
DLM, 7
flooding, 6
gNovelty+, 2
GSAT, 3
HSAT, 6
PAWS, 6
R+AdaptNovelty+, 2
RSAPS, 6
SAPS, 2, 6
TSAT, 6
UBCSAT, 2
UnitWalk, 6
Walksat, 4
weights, 6
MAX-SAT, 2
move
freebie, 4
greedy, 4
random walk, 4
sideways, 3
uphill, 4
phase transition, 9
plateau, 4
random problems
random 2-SAT, 9
random 3-SAT, 9
random k-SAT, 9
random walk, 3
satisfiability algorithms
incomplete, 1
satisfiability threshold, see threshold
search
greedy, 3
local, 1
SLS, see stochastic local search
SP, see survey propagation
statistical physics, 10, 12
stochastic local search, see local search
survey propagation, 12
threshold, 9