The third Black-Box Optimization Benchmarking (BBOB) workshop took place in Philadelphia as part of GECCO 2012, July 7-11. A list of the published papers is available here, and results are presented here.
Benchmarking of optimization algorithms is crucial for assessing optimizer performance quantitatively and for understanding the strengths and weaknesses of each algorithm, and it is an indispensable step in evaluating new algorithm designs. However, this task turns out to be tedious and difficult to carry out in a scientifically sound and rigorous way, even in the single-objective case. The BBOB 2012 workshop for real-parameter optimization, the follow-up to the BBOB 2009 and BBOB 2010 workshops, takes care of most of this tedious work for its participants:
For this new edition, we provide essentially the same test suite as in 2010. This year, the post-processing also allows comparisons among more than two algorithms, for example for a well-grounded assessment of a (new) algorithm modification. Data from the BBOB 2009 contributions are also used for comparison.
What remains to be done by the participants is to allocate CPU time, run the black-box real-parameter optimizer(s) of their interest in different dimensions a few hundred times, and finally start the post-processing procedure. Two testbeds are provided: a noiseless testbed of 24 functions and a noisy testbed of 30 functions.
The participants can freely choose either or both of them. This edition entirely prohibits using different parameter settings for different test functions and encourages analyses that study the impact of changes to the parameter settings.
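For illustration, the following is a minimal sketch of such an experiment loop in Python, loosely following the structure of the example experiment shipped with the benchmark code. It uses a plain random search on a stand-in sphere function; in an actual BBOB experiment the objective would be instantiated and logged by the downloaded benchmark code, whose exact interface should be taken from the download page rather than from this sketch.

```python
import random

def sphere(x):
    """Stand-in objective; a logged BBOB test function would be used in practice."""
    return sum(xi ** 2 for xi in x)

def random_search(fun, dim, max_evals, lower=-5.0, upper=5.0):
    """Pure random search within [-5, 5]^dim, the usual BBOB region of interest."""
    best_x, best_f = None, float('inf')
    for _ in range(max_evals):
        x = [random.uniform(lower, upper) for _ in range(dim)]
        f = fun(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# Experiment loop: several dimensions and several runs per test function,
# amounting to a few hundred optimization runs overall.
for dim in (2, 3, 5, 10, 20, 40):   # standard BBOB dimensions
    for run in range(15):           # number of runs per the experimental setup document
        # In a real experiment, the BBOB function instance for this run would be
        # created and wrapped by the provided logging code before being optimized.
        xbest, fbest = random_search(sphere, dim, max_evals=100 * dim)
```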
During the workshop, algorithms and results will be presented by the participants. An overall analysis and comparison will be carried out by the organizers, and the overall process will be critically reviewed. A plenary discussion on future improvements will, among other topics, address the question of how the testbed should evolve.
The source code of the test functions is available in Matlab, C, Java, R, and Python. Downloads are available at the BBOB 2012 downloads page.
We encourage any submission concerned with black-box optimization benchmarking of continuous optimizers, for example benchmarking new or not-so-new algorithms (that have not been tested in BBOB-2009 or BBOB-2010) on the BBOB-2012 testbed, or analyzing the data obtained in BBOB-2009/BBOB-2010, or…
Three templates are provided:
Paper & data submission:
Experimental setup documents, code for the benchmark functions (Matlab, C, Java, R, Python), and code for the post-processing (Python) are provided on the download page. To be notified about releases of the code, subscribe to the feed http://coco.gforge.inria.fr/feed.php and/or subscribe to the announcement list by sending an email to bbob_at_lri.fr.
All results for the 2012 algorithms can be found here.
You can subscribe to (or unsubscribe from) our discussion mailing list by following this link: http://lists.lri.fr/cgi-bin/mailman/listinfo/bbob-discuss
To receive announcements about the workshop, send an email to the BBOB team at bbob_at_lri.fr with the subject “register to BBOB announcement list”.
Anne Auger, Nikolaus Hansen, Verena Heidrich-Meisner, Olaf Mersmann, Petr Posik, Mike Preuss