The Black-Box Optimization Benchmarking (BBOB) workshop took place in Montreal in July 2009. A list of the published papers is here and results are presented here.
Quantifying and comparing the performance of optimization algorithms is one important aspect of research in search and optimization. However, this task turns out to be tedious and difficult to carry out, even in the single-objective case — at least if one is willing to accomplish it in a scientifically sound and rigorous way. The BBOB 2009 workshop for real-parameter optimization takes care of most of this tedious work for its participants: (1) choice and implementation of a well-motivated single-objective benchmark function testbed, (2) design of an experimental set-up, (3) generation of data output for (4) post-processing and presentation of the results in graphs and tables. What remains to be done by the participants is to allocate CPU time, run their favorite (not necessarily brand-new) black-box real-parameter optimizer in different dimensions a few hundred times, and finally start the post-processing procedure. Two testbeds are provided; the participants can freely choose either or both of them.
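The prescribed experiment — repeated independent runs of a black-box optimizer across several dimensions, recording the function evaluations spent — can be illustrated with a minimal sketch. This is not the actual BBOB/COCO interface; the `sphere` test function, the random-search optimizer, and all parameter names below are illustrative stand-ins.

```python
import random

def sphere(x):
    """Separable sphere function, a stand-in for a BBOB benchmark function."""
    return sum(xi * xi for xi in x)

def random_search(f, dim, budget, target=1e-8, rng=None):
    """Minimal black-box optimizer: sample uniformly in [-5, 5]^dim.

    Returns (best_value_found, evaluations_used); stops early once the
    target value is reached, mimicking a target-hitting run.
    """
    rng = rng or random.Random(0)
    best = float("inf")
    evals = 0
    for evals in range(1, budget + 1):
        x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
        fx = f(x)
        if fx < best:
            best = fx
        if best <= target:
            break
    return best, evals

# Several independent runs per dimension, as the experimental set-up
# prescribes many runs in different dimensions.
for dim in (2, 5):
    for run in range(3):
        best, evals = random_search(sphere, dim, budget=1000,
                                    rng=random.Random(run))
        print(f"dim={dim} run={run} evals={evals} best={best:.3e}")
```

In the real set-up, the provided software replaces both the test function and the bookkeeping: it logs every evaluation so that the post-processing can compute run-length distributions automatically.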
During the workshop, the overall process will be critically discussed, the participants will present their algorithms, and quantitative performance measurements of all submitted algorithms will be presented, categorized by early versus late performance and by function properties such as multi-modality, ill-conditioning, symmetry, ridge-solving, coarse- and fine-grained ruggedness, weak global structure, and outlier noise.
Submitted papers should contain experimental results obtained with the prescribed experimental procedure (presumably obtained with the provided software). The algorithm used is presented, along with the details necessary to reproduce the results (see also the experimental procedure). Papers are limited to 8 pages (a template is provided).
Example papers:
Paper & data submission:
Extended versions of selected papers will appear in a special issue of the journal Evolutionary Computation.
Experimental setup documents, code for the benchmark functions (Matlab, C, Java (soon)), and code for the post-processing (Python) are provided on the download page. To be notified about the release of the code, subscribe to the feed http://coco.gforge.inria.fr/feed.php and/or subscribe to the announcement list by sending an email to bbob _at_ lri.fr
To subscribe to (or unsubscribe from) the discussion mailing list ( bbob-discuss _at_ lists.lri.fr ) or the announcement list, send an email to the organizers at bbob _at_ lri.fr .
Anne Auger, Hans-Georg Beyer, Nikolaus Hansen, Steffen Finck, Raymond Ros, Marc Schoenauer, Darrell Whitley