The second Black-Box Optimization Benchmarking workshop took place in Portland on Wednesday, July 7th. The program of the workshop can be downloaded here, a list of the published papers is here, and results are presented here.
Quantifying and comparing the performance of optimization algorithms is one important aspect of research in search and optimization. However, this task turns out to be tedious and difficult to realize even in the single-objective case — at least if one is willing to accomplish it in a scientifically decent and rigorous way. The BBOB 2010 workshop for real-parameter optimization, a follow-up to the BBOB 2009 workshop, takes over most of this tedious task for its participants: (1) choice and implementation of a well-motivated single-objective benchmark function testbed, (2) design of an experimental set-up, (3) generation of data output for (4) post-processing and presentation of the results in graphs and tables. This year, the post-processing also allows a comparison between two algorithms, for example for a well-grounded assessment of a (new) algorithm modification. Data from last year's contributions can also be used for such a comparison.
What remains to be done by the participants is to allocate CPU time, run the black-box real-parameter optimizer(s) of their interest in different dimensions a few hundred times, and finally start the post-processing procedure. Two testbeds are provided; the participants can freely choose either or both of them.
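The experimental procedure described above — running an optimizer over several dimensions with many independent trials, then recording the data for post-processing — can be sketched generically. The `sphere` function and `random_search` optimizer below are hypothetical stand-ins for illustration only; the actual BBOB code provides its own benchmark-function interface and data logging.

```python
import random

def sphere(x):
    """Toy benchmark function (a stand-in, not part of the official testbed)."""
    return sum(xi * xi for xi in x)

def random_search(f, dim, budget, target=1e-8, seed=0):
    """Run pure random search within [-5, 5]^dim.

    Returns the number of evaluations used and the best value found,
    stopping early once the target precision is reached.
    """
    rng = random.Random(seed)
    best = float("inf")
    for evals in range(1, budget + 1):
        x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
        best = min(best, f(x))
        if best <= target:
            break
    return evals, best

# Mirror the experimental setup: several dimensions, several independent
# trials per dimension, raw results collected for later post-processing.
results = {}
for dim in (2, 5):
    results[dim] = [random_search(sphere, dim, budget=1000, seed=s)
                    for s in range(5)]
```

In the real setup, the per-trial records (function evaluations versus best function value) are what the provided post-processing code turns into the runtime distributions shown in the workshop's graphs and tables.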
During the workshop, algorithms and results will be presented by the participants. An overall analysis and comparison will be carried out by the organizers, and the overall process will be critically reviewed. A plenary discussion on future improvements will, among other topics, address the question of how the testbed should evolve.
Downloads are available on the BBOB 2010 downloads page.
We encourage any submission that is concerned with black-box optimization benchmarking of continuous optimizers, for example, benchmarking new or not-so-new algorithms (which have not been tested in BBOB-2009) on the BBOB-2010 testbeds, or analyzing the data obtained in BBOB-2009, or…
Two templates are provided:
Paper & data submission:
Experimental setup documents, code for the benchmark functions (Matlab, C, Java), and code for the post-processing (Python) (soon) are provided at the download page. To be notified about the release of the code, subscribe to the feed http://coco.gforge.inria.fr/feed.php and/or subscribe to the announcement list by sending an email to bbob _at_ lri.fr
To subscribe to (or unsubscribe from) our discussion mailing list ( bbob-discuss _at_ lists.lri.fr ) and announcement list, send an email to the organizers at bbob _at_ lri.fr .
Anne Auger, Hans-Georg Beyer, Steffen Finck, Nikolaus Hansen, Petr Posik, Raymond Ros