COCO (COmparing Continuous Optimisers) is a platform for systematic and sound comparisons of real-parameter global optimizers. COCO provides benchmark function testbeds and tools for processing and visualizing the data generated by one or several optimizers. The COCO platform has been used for the Black-Box Optimization Benchmarking (BBOB) workshops that took place at the GECCO conference in 2009, 2010, and 2012. The next edition will take place in July 2013 at GECCO. The COCO source code is available at the downloads page.
To subscribe to (or unsubscribe from) the bbob-discuss mailing list, follow this link: http://lists.lri.fr/cgi-bin/mailman/listinfo/bbob-discuss
To receive announcements related to the BBOB workshops, send an email to the BBOB team (bbob_at_lri.fr) with the subject “register to BBOB announcement list”.
Internal wiki: https://gforge.inria.fr/plugins/mediawiki/wiki/coco
The figures show selected results from BBOB 2009: the empirical runtime distribution for six subgroups of the BBOB functions. Click on a figure for more details. (The results aggregated over all functions are shown at the top of this page.)
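An empirical runtime distribution answers, for each evaluation budget, what fraction of trials reached the target within that budget. A minimal sketch of this computation, assuming per-trial runtimes are given as evaluation counts with `None` for unsuccessful trials (names here are illustrative, not COCO's actual API):

```python
# Sketch of an empirical runtime distribution (ECDF of runtimes).
# Assumption: `runtimes` holds the number of function evaluations each
# trial needed to reach the target, or None if the trial failed.

def runtime_ecdf(runtimes, budgets):
    """For each budget, return the fraction of trials solved within it."""
    n = len(runtimes)
    return [sum(1 for r in runtimes if r is not None and r <= b) / n
            for b in budgets]

trials = [120, 450, None, 80, 900]  # evaluations to target; None = failed
print(runtime_ecdf(trials, [100, 500, 1000]))  # -> [0.2, 0.6, 0.8]
```

Unsuccessful trials never count as solved, so the curve plateaus below 1 when some runs miss the target, which is exactly what the BBOB figures show.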
The current 24 noiseless test functions are:
Only f1 and f5 are purely quadratic and purely linear, respectively.
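For illustration, simplified sketches of these two cases: f1 is the sphere function (purely quadratic) and f5 is a linear slope. The actual BBOB definitions additionally shift the optimum and add an offset f_opt, which is omitted here; the slope vector `s` below is an arbitrary illustrative choice.

```python
# Simplified sketches of f1 (sphere, purely quadratic) and f5 (linear
# slope). The real BBOB versions include a shifted optimum x_opt and an
# offset f_opt, omitted here for clarity.

def sphere(x):
    """f1-style function: sum of squared coordinates."""
    return sum(xi * xi for xi in x)

def linear_slope(x, s=(5.0, 5.0)):
    """f5-style function: linear in x with illustrative slope vector s."""
    return sum(si * xi for si, xi in zip(s, x))

print(sphere([1.0, 2.0]))        # -> 5.0
print(linear_slope([1.0, 2.0]))  # -> 15.0
```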
See also: N. Hansen et al. (2010). Comparing Results of 31 Algorithms from the Black-Box Optimization Benchmarking BBOB-2009. In Workshop Proceedings of the GECCO Genetic and Evolutionary Computation Conference 2010, ACM.