Incorporating real-world problems into the benchmarking of multiobjective optimizers

T. Tušar

COCO (Comparing Continuous Optimizers) is a state-of-the-art open-source platform for benchmarking optimization algorithms in a black-box setting. However, the test functions provided by COCO, like those in other available multiobjective benchmark suites, are still synthetic at their core and do not incorporate some important properties of real-world problems, such as mixed variables, constraints, expensive evaluations and asynchronous evaluation of objectives. Since only a few real-world multiobjective optimization problems are freely available for research purposes, there is an urgent need to collect real-world problems, models of real-world problems and more realistic synthetic benchmark problems into an open benchmark suite that can be used by any researcher in multiobjective optimization.
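For context, the sketch below shows what a benchmarking experiment on COCO's existing bi-objective suite typically looks like from the experimenter's side. It is a minimal illustration that assumes the cocoex Python module is installed; the result-folder name, the budget multiplier and the use of pure random search as a placeholder optimizer are arbitrary choices for the example, not part of the project proposal.

```python
import numpy as np
import cocoex  # COCO's experimentation module

# Load the bi-objective benchmark suite and attach an observer that logs
# all evaluations for COCO's post-processing.
suite = cocoex.Suite("bbob-biobj", "", "")
observer = cocoex.Observer("bbob-biobj", "result_folder: random_search_demo")

budget_multiplier = 10  # evaluations per problem = budget_multiplier * dimension
rng = np.random.default_rng(0)

for problem in suite:
    problem.observe_with(observer)  # log this problem's evaluations
    for _ in range(budget_multiplier * problem.dimension):
        x = rng.uniform(problem.lower_bounds, problem.upper_bounds)
        problem(x)  # evaluate both objectives; the observer records them
    problem.free()
```

Real-world problems with mixed variables, constraints or expensive and asynchronous evaluations do not fit this interface directly, which is precisely the gap the project addresses.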

The idea of this project is to extend the COCO platform by incorporating real-world problems and their properties in order to bridge the gap between research and application in multiobjective optimization. More specifically, the project will:

– extend COCO’s problem formulation and its fixed-target anytime performance assessment methodology to accommodate the specific characteristics of real-world problems (the runtime measure underlying this assessment is sketched after this list),

– provide a new multiobjective benchmark suite consisting of real-world problems, models of real-world problems and synthetic problems containing features of real-world problems, and

– design an algorithm capable of solving problems from the new real-world benchmark suite and make its results available for future comparisons.
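To make the first item more concrete, the following is a minimal sketch of the runtime measure that underlies COCO's fixed-target anytime assessment: the expected running time (ERT) to reach a given quality target (for the bi-objective suite, a target value of the hypervolume indicator), estimated from several runs under the assumption of independent restarts. The function name and all numbers below are purely illustrative and do not reproduce COCO's full post-processing.

```python
import numpy as np

def expected_runtime(evaluations, successes):
    """Expected running time (ERT) in the fixed-target view: the total
    number of evaluations spent across all runs divided by the number of
    runs that reached the target (infinite if no run reached it)."""
    evaluations = np.asarray(evaluations, dtype=float)
    successes = np.asarray(successes, dtype=bool)
    if not successes.any():
        return np.inf
    return evaluations.sum() / successes.sum()

# Hypothetical runtimes (in evaluations) of five runs aiming at one
# hypervolume-indicator target; two runs were stopped before reaching it.
evals   = [1200, 3400, 5000, 800, 5000]
success = [True, True, False, True, False]
print(expected_runtime(evals, success))  # -> 5133.33... evaluations
```

Extending this methodology to real-world settings will require deciding, for example, how to count expensive or asynchronously returned objective evaluations in such runtime measurements.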