next up previous
Next: First stage Up: The SAT2002 Competition (preliminary Previous: Computers available

The results

The results of the competition were released during the SAT2002 symposium. The detailed results can be found on the competition web page http://www.satlive.org/SATCompetition/.

Some of the solvers were found to be buggy by the organizers during compliance testing and were returned to their authors; some of them were corrected. However, finding bugs was not the aim of this phase, and it cannot be considered a guarantee of any kind of testing: only wrong answers (or obvious crashes) were reported as bugs. We also noticed problems with some solvers during the first round, but we did not accept new versions of the solvers at that point. Here are some of the things one must be aware of before reading the results.

But first of all, a word of caution: the following results should be considered with care, because they correspond to the behavior of a particular version of each solver (the one that was submitted) on the benchmarks accepted for the competition, on a particular computer under a particular operating system7. The competition results should increase our knowledge about solvers, but one inherent risk of such a snapshot is that the results can be misinterpreted, leading to a distorted picture of the state of the art.

We discarded some of the solvers from the competition (they ran hors concours) because they demonstrated unexpected behavior, mainly claiming UNSAT for a satisfiable instance. Most of the time, the problem showed up only on a few families of benchmarks. Some versions of the marchXYZ solvers were also hors concours from the beginning, because these versions were submitted substantially after the deadline (but before the first stage of the competition). The results of the hors concours solvers are also displayed on the competition web page. For instance, the lsat solver answered incorrectly on some instances (there was actually a bug in the code), but the corrected version (as well as the buggy one) easily solved all the urquhart instances and the xor-chains benchmarks, which were awarded during the competition (adding the detection of Boolean functions pays off on small but hard hand-made formulas). It was nevertheless officially discarded because of its bug.

Some specific problems occurred with other solvers (not hors concours). In the following two cases, the picture given by the competition results does not reflect the solvers' real performance:

Berkmin
was composed of two engines, one for small/medium instances and one for large instances. The latter simply crashed on Linux (the authors had tested it under MS Windows and Solaris only). That problem was not detected during compliance testing (for more details, see http://www.satlive.org/SATCompetition/berkmin.html). Note that other solvers also crashed occasionally, especially during the second round, where the benchmarks were larger.

JQuest
did not output a correct certificate when the instance had fewer variables than the nbvar parameter provided in the ``p cnf nbvar nbclause'' line (because in that case it internally renames the variable ids). For that reason, JQuest is reported as not solving those instances.
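The situation above arises because the DIMACS ``p cnf nbvar nbclause'' header may declare more variables than actually occur in the clauses. The following sketch (a minimal illustration, not JQuest's actual code) parses a toy DIMACS instance and exposes such a mismatch; the helper name and the example formula are made up for illustration:

```python
# Illustrative DIMACS CNF parser: compares the declared variable count
# ("nbvar" in the "p cnf nbvar nbclause" line) with the highest variable
# id actually used in the clauses. A solver that compacts ids internally
# must map them back to the original ids before printing a certificate.

def parse_dimacs(text):
    """Return (declared_nbvar, clauses) from a DIMACS CNF string."""
    declared_nbvar = 0
    clauses = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('c'):
            continue  # skip comments and blank lines
        if line.startswith('p cnf'):
            declared_nbvar = int(line.split()[2])  # "p cnf nbvar nbclause"
            continue
        lits = [int(tok) for tok in line.split()]
        clauses.append([l for l in lits if l != 0])  # 0 terminates a clause
    return declared_nbvar, clauses

cnf = """c toy instance: 5 variables declared, only 3 actually used
p cnf 5 2
1 -2 0
2 3 0
"""
declared, clauses = parse_dimacs(cnf)
used = max(abs(l) for clause in clauses for l in clause)
print(declared, used)  # prints "5 3": the header over-declares variables
```

On such an instance, a certificate expressed in internally renamed ids would not match the original variable numbering, which is exactly why JQuest's answers were counted as incorrect.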

The best way to view the detailed results of the competition is to take a look at all the traces at http://www.ececs.uc.edu/sat2002/scripts/menu_choix2.php3; summaries of the results per competition stage and per category follow. The instances used for the competition are available on SATLIB8.



LE BERRE Daniel 2002-09-16