SAT2003 Competition Judges decisions
Last modification: 6 February 2003
These questions were asked of the competition organizing committee.
The answers were given by the board of judges appointed this year (John Franco,
Hans van Maaren and Toby Walsh).
1) There is a rule limiting the number of copies of the same solver
(that differ only in their settings) to 3. What about different
solvers developed by the same team of people?
We are fortunate to be in a rather tight and collegial community, so we
need not impose rules to prevent abuses. We should trust that if a group
of people makes more than one submission, those submissions are
fundamentally different, not just different parameter settings.
So, in principle, I do not like the use of rules, but we might specifically
say that parameter changes are a no-no.
2) Do you absolutely insist on having access to sources? Having an account
on a competition computer suggests that one can just build a working binary
and then remove the sources.
a) For guys like us giving out sources is a sensitive issue.
b) The reason for asking is that I use some of the more advanced template
techniques in C++, and different versions of GCC handle them differently.
Recently, when I upgraded to GCC 3.2, I got lots of errors in code that
had been working perfectly before. Adjusting the code made older compiler
versions unhappy. I want to avoid this trouble in the competition.
Also, I have my own make system and support library which I need to
send you together with the solver. This will be a bit of work. It
would be *much* easier for me if an executable was okay.
There is a definite need to protect source code. Executables are fine.
All we really need is the algorithm which, again, we should trust
represents faithfully what the code is doing (although I realize for
various reasons this is hard to do).
I suggest we ask for sources, but if this prevents someone from entering,
we tell them the hardware and they have to provide appropriate binaries.
If it doesn't work out of the box, they will be disqualified.
3) The new rules say that the winners of the previous categories
automatically enter the competition. I hope that you don't insist that
the same version of BerkMin62 that entered last year's
competition (containing the same bug) be submitted.
Seems reasonable, but should be restricted to fixing the bug only.
Of course, updated versions should be invited but we need to use the
previous winners to see how much progress has been made. So, we should
definitely make runs with old versions but not include their results
with the new results: only compare the aggregate of the new results with
the aggregate of the previous year's results. The above assumes new
equipment is going to be used this year. If not, the older versions
need not be run again on old benchmarks unless new statistical categories
have been established.
4) Do you consider the possibility for a SAT-solver to voluntarily give up
a particular category?
I am talking about the participation of solvers tailored for industrial
benchmarks in the random category. Everybody knows that learning does not
work for random formulas. So giving the participants such a possibility
could save you a lot of time when running random instances.
As such this makes sense, but it also blocks the possibility of
analyzing performance afterwards. That one solver works well in one
particular category and badly in others is something we can learn from!
Submitting for one category only could end, in a few years, with completely
different solvers running in different categories. I think this is not what
we want: one reason SAT has become popular is that it offers a sort of
universal approach to tackling NP-complete problems. Of course, tailor-made
SAT solvers will perform better on specific problems, and this is exploited
in the various applications. But the competition should be something that
makes such distinctions transparent.