SAT Competition 2013
affiliated with the SAT 2013 conference, July 8-12 in Helsinki, Finland
jointly organized by Ulm University, University of Helsinki,
University College Dublin, and the University of Texas at Austin

Description of the Tracks

The competition will consist of multiple tracks, where a "track" is defined by a combination of

  1. the type of solvers allowed to participate,
  2. the computational resources provided to each solver, and
  3. the class of benchmarks used.

The following taxonomy is followed:

Types of solvers

  • Core solvers -- solvers of this type may use at most two different SAT solving engines for all runs and at any time during one track. Solvers that fall into this category include single-engine solvers such as minisat, glucose, lingeling, SAT4J, clasp, CCASat, Sparrow, and sattime, as well as hybrid solvers such as CCCeq. A preprocessor is not considered a solver, so a hybrid solver may additionally run a preprocessor. A portfolio approach consisting of only two different core solvers may also participate in this track.

    Note: the organizers and the judges of the competition reserve the right to make a final decision whether a particular solver is qualified as a core solver or not.

  • Alternative approaches -- interacting multi-engine SAT solvers, portfolio-based SAT solvers, as well as any other solver architecture not covered by the definition of a core solver.

Computational resources

  • Sequential -- 1 core (of a 4-core CPU), 7.5GB RAM, 5000 seconds CPU time, 50 GB /tmp disk space (100GB /tmp space for certified UNSAT track)
  • Parallel -- 8 cores (of a cluster node), 15GB RAM, 5000 seconds wall-clock time, 100 GB /tmp disk space.

Classes of benchmarks

  • Application (both SAT and UNSAT)
  • Hard-combinatorial (both SAT and UNSAT)
  • Random (both SAT and UNSAT)


Using the above taxonomy, the following 14 tracks are organized for the SAT Competition 2013:

  1. Core solvers, Sequential, Application SAT track.
  2. Core solvers, Sequential, Application certified UNSAT track.
  3. Core solvers, Sequential, Application SAT+UNSAT track.
  4. Core solvers, Sequential, Hard-combinatorial SAT track.
  5. Core solvers, Sequential, Hard-combinatorial certified UNSAT track.
  6. Core solvers, Sequential, Hard-combinatorial SAT+UNSAT track.
  7. Core solvers, Sequential, Random SAT track.
  8. Core solvers, Sequential, Random certified UNSAT track.
  9. Core solvers, Sequential, Random SAT+UNSAT track.
  10. Core solvers, Parallel, Application SAT+UNSAT track.
  11. Core solvers, Parallel, Hard-combinatorial SAT+UNSAT track.
  12. Core solvers, Parallel, Random SAT track.
  13. "Open track": Any type of solver, Parallel, 1/3 Application + 1/3 Hard-combinatorial + 1/3 Random (SAT+UNSAT) track.
  14. Core solvers, Sequential, MiniSAT Hack-track, Application SAT+UNSAT track.


The benchmark selection process will follow that of SAT Challenge 2012.

Evaluation - Ranking

The solvers will be ranked based on the number of instances solved using the provided computational resources. Ties will be broken by CPU time in sequential tracks, and by wall-clock time in parallel tracks. An instance is considered solved by a solver if the solver produces both the answer and the complete certificate within the allocated resources.
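The ranking rule above can be sketched as follows. This is an illustrative sketch only, not the official scoring code; the solver names and the data layout are our own invention, and the tie-break shown uses CPU time as in the sequential tracks (parallel tracks would substitute wall-clock time).

```python
# Sketch of the ranking rule: more instances solved is better;
# among solvers with equal counts, less total time on solved
# instances wins the tie-break. (Hypothetical data, not real results.)

def rank_solvers(results):
    """results: {solver_name: [(solved: bool, cpu_seconds: float), ...]}"""
    def key(item):
        name, runs = item
        solved = sum(1 for ok, _ in runs if ok)
        time_on_solved = sum(t for ok, t in runs if ok)
        # Negate 'solved' so that sorting ascending ranks best first.
        return (-solved, time_on_solved)
    return [name for name, _ in sorted(results.items(), key=key)]

results = {
    "solverA": [(True, 100.0), (True, 200.0), (False, 5000.0)],
    "solverB": [(True, 50.0), (True, 60.0), (True, 4000.0)],
    "solverC": [(True, 10.0), (True, 20.0), (False, 5000.0)],
}
print(rank_solvers(results))  # ['solverB', 'solverC', 'solverA']
```

Here solverB wins by solving three instances; solverA and solverC both solve two, so the tie is broken in solverC's favour by its lower CPU time.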


Tracks 1-9: gold (SAT, UNSAT, SAT+UNSAT), silver (SAT, UNSAT, SAT+UNSAT), bronze (SAT, UNSAT, SAT+UNSAT).

Tracks 10-14: gold, silver, bronze.

The awards are subject to change in tracks with a small number of participants: a gold medal will be awarded only if at least three solvers participate in the track (for silver and bronze, respectively, at least four and five solvers need to participate).

Minisat hack track

This track, originally introduced for the SAT Competition 2009 with a follow-up during the SAT Competition 2011, is organized again. This year the latest release of Minisat (2.2.0, the simp version) is used as the base solver.

The motivation of this track is twofold. On the one hand, we want to lower the threshold for Master's and PhD students to enter the SAT competition by bringing new and innovative ideas to the original Minisat solver. On the other hand, the aim is to see how far the performance of Minisat can be improved by making only minor changes to its source code. We strongly encourage regular participants in the main tracks (including non-students) to enter this track with a "hacked" Minisat solver.

Our motivation behind using Minisat as the base solver: Minisat is a well-known and widely adopted solver that can be seen as a canonical CDCL solver, offering an efficient (and necessary) CDCL basis and easily modifiable code. Indeed, Minisat has often been improved by introducing only minor changes to its code ("hacking it"). Furthermore, looking back at recent progress in CDCL solvers, much of it is based on minor changes (<10 lines each): phase caching, blocked literals, and the Luby series for rapid restarts.

The motivation behind competing with minor hacks is that it can help the community to detect which types of modifications pay off: in order to isolate the actual causes of improvements when modifying solvers, it is necessary to perform scientifically valid experiments on the effects of individual changes made to a solver (e.g., individual new techniques introduced to the base solver).

The organizers of the SAT Competition 2013 provide a patch for Minisat 2.2.0 that modifies its output to follow the SAT Competition 2013 output requirements. Additionally, the patch enables Minisat to emit a proof in DRUP format for the certified UNSAT tracks. To patch Minisat, place the MinisatHack.patch file in the Minisat directory and run: "patch -p1 < MinisatHack.patch".
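To give an idea of the DRUP format mentioned above: a DRUP proof is a list of the clauses learned by the solver, one clause per line terminated by 0, optionally interleaved with deletion lines prefixed by "d". Each learned clause must be derivable by reverse unit propagation (RUP) from the clauses currently in the database. The following is a simplified and deliberately inefficient sketch of such a check (the checkers used for the competition are far more efficient); all function names here are ours, not part of any official tool.

```python
def unit_propagate(clauses, assignment):
    """Apply unit propagation to exhaustion.
    Returns None on conflict, otherwise the extended assignment."""
    assignment = set(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue  # clause already satisfied
            pending = [lit for lit in clause if -lit not in assignment]
            if not pending:
                return None  # every literal falsified: conflict
            if len(pending) == 1:
                assignment.add(pending[0])  # unit clause: assign it
                changed = True
    return assignment

def check_drup(cnf, proof):
    """cnf: list of clauses (lists of ints); proof: DRUP lines as strings.
    A line "l1 l2 ... 0" adds a clause that must be RUP: assuming the
    negation of all its literals must propagate to a conflict.
    A line "d l1 ... 0" deletes a clause (simplified: exact-match only)."""
    clauses = [list(c) for c in cnf]
    for line in proof:
        if line.startswith("d"):
            lits = [int(x) for x in line.split()[1:] if x != "0"]
            clauses.remove(lits)
            continue
        lits = [int(x) for x in line.split() if x != "0"]
        if unit_propagate(clauses, {-l for l in lits}) is not None:
            return False  # clause is not RUP: proof rejected
        clauses.append(lits)
    return True

cnf = [[1], [-1, 2], [-2]]               # a small unsatisfiable formula
print(check_drup(cnf, ["2 0", "0"]))     # True: a valid DRUP refutation
```

A proof ends with the empty clause ("0"), certifying unsatisfiability of the original formula.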

The size of hacks is restricted as follows: the diff between the patched and modified sources of Minisat must contain fewer than 1000 non-space characters. You can use our edit distance tool on each modified file to count the number of modifications; summed over all files, this count must be less than 1000.
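The spirit of this limit can be sketched as follows. Note that this is only an illustration of counting non-space characters on changed lines of a diff; the official edit distance tool may count modifications differently, and the function name below is our own.

```python
# Hypothetical sketch: estimate the size of a hack by counting the
# non-space characters on changed lines of a unified diff between the
# original (patched) and modified source files.
import difflib

def hack_size(original_lines, modified_lines):
    diff = difflib.unified_diff(original_lines, modified_lines, lineterm="")
    count = 0
    for line in diff:
        # Changed lines start with '+' or '-'; skip the '+++'/'---' headers.
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---")):
            # Exclude the leading +/- marker and all whitespace.
            count += sum(1 for ch in line[1:] if not ch.isspace())
    return count

print(hack_size(["int x = 0;"], ["int x = 1;"]))  # 14
```

Under this counting scheme, both the removed and the added version of a changed line contribute to the total, so even small edits add up quickly toward the 1000-character limit.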