Competition - Evolutionary Submodular Optimisation GECCO 2024

Submodular functions play a key role in the area of optimisation as they allow many real-world optimisation problems to be modelled. They capture a wide range of problems in which the benefit of adding solution components diminishes as more elements are added. Submodular problems form an important class of optimisation problems and are extensively studied in the literature. Problems that can be formulated in terms of submodular functions include influence maximisation in social networks, maximum coverage, maximum cut in graphs, sensor placement, and sparse regression. In recent years, the design and analysis of evolutionary algorithms for submodular optimisation problems has gained increasing attention in the evolutionary computation and artificial intelligence communities.

The aim of the competition is to provide a platform for researchers working on evolutionary computation methods who are interested in benchmarking them on a wide class of combinatorial optimisation problems. The competition will benchmark evolutionary computation techniques on submodular optimisation problems and enable performance comparisons for this class of problems. It provides an ideal vehicle for researchers and students to design new algorithms and/or benchmark their existing approaches on the wide class of combinatorial optimisation problems captured by submodular functions.


A description of the different submodular optimisation problems included in this competition can be found in:
  • F. Neumann, A. Neumann, C. Qian, V. A. Do, J. de Nobel, D. Vermetten, S. S. Ahouei, F. Ye, H. Wang, T. Bäck (2023): Benchmarking Algorithms for Submodular Optimization Problems Using IOHProfiler. CoRR abs/2302.01464.

Technical details:

We will use IOHprofiler for the competition. The problems are integrated into IOHprofiler, and you can run your algorithms through it to evaluate them and obtain your results. The problem implementations and training benchmarks are available in IOHprofiler.

The submodular problem implementations can be found in the IOHprofiler Problem directory. The C++ code can be found on this page.

Example:

An example of how to use IOHprofiler. Links to further examples and tutorials on IOHprofiler can be found here.
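As a rough illustration, the following is a minimal sketch of such a run, assuming the ioh Python package with the submodular problems exposed through ProblemClass.GRAPH (exact IDs and signatures may differ between ioh versions); it runs a simple (1+1) EA on problem ID 2000, one of the MaxCut instances mentioned below, under the low-budget limit of 10,000 evaluations:

    # Sketch only: assumes ioh exposes the submodular problems via ProblemClass.GRAPH.
    import ioh
    import numpy as np

    problem = ioh.get_problem(2000, problem_class=ioh.ProblemClass.GRAPH)
    n = problem.meta_data.n_variables
    rng = np.random.default_rng(1)

    x = rng.integers(0, 2, n)          # random initial bitstring
    fx = problem(x)
    for _ in range(10_000 - 1):        # low-budget category: 10,000 evaluations
        y = x.copy()
        flips = rng.random(n) < 1 / n  # standard bit mutation with rate 1/n
        y[flips] = 1 - y[flips]
        fy = problem(y)
        if fy >= fx:                   # maximisation: accept if not worse
            x, fx = y, fy

    print("best value found:", problem.state.current_best.y)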

Notebook Example:

An example of how to access the submodular problems using IOHexperimenter.
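In the same spirit, here is a hedged sketch of inspecting the submodular problems from the Python interface (the notebook shows the authoritative version; the problem IDs follow the instance ranges listed below):

    # Sketch only: assumes the ioh package with ProblemClass.GRAPH.
    import ioh

    # 2000-2004 are MaxCut instances, 2100-2127 are MaxCoverage instances.
    for fid in (2000, 2100):
        problem = ioh.get_problem(fid, problem_class=ioh.ProblemClass.GRAPH)
        print(fid, problem.meta_data.name, "n =", problem.meta_data.n_variables)

    # Solutions are bitstrings: x[i] = 1 means element/vertex i is selected.
    problem = ioh.get_problem(2100, problem_class=ioh.ProblemClass.GRAPH)
    x = [0] * problem.meta_data.n_variables
    x[0] = 1
    print("value of selecting only element 0:", problem(x))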

Comparing to baselines:

To compare your algorithm's performance to our baseline algorithms, you can make use of IOHanalyzer. There, you can load the sets of algorithms we ran on the MaxCut and MaxCoverage problems by selecting the relevant 'source' in the 'Load from repositories' box on the start page. These sources can be identified by the prefix 'submodular', and each contains data for a set of instances, specifically instances 2100-2127 for MaxCoverage and instances 2000-2004 for MaxCut. For more information on the use of IOHanalyzer, please see this page.
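To produce data in the format IOHanalyzer reads, you can attach the Analyzer logger from the ioh package to a problem before running your algorithm; a sketch with placeholder names:

    # Sketch only: folder and algorithm names are placeholders.
    import ioh

    problem = ioh.get_problem(2000, problem_class=ioh.ProblemClass.GRAPH)
    logger = ioh.logger.Analyzer(
        root="results",                 # output directory
        folder_name="my_algorithm",     # one folder per experiment
        algorithm_name="my_algorithm",  # label shown in IOHanalyzer
    )
    problem.attach_logger(logger)

    for run in range(10):               # 10 runs per instance, as in the evaluation
        # ... run your algorithm on `problem` here ...
        problem.reset()                 # reset between runs; logged data is kept

    logger.close()                      # flush the data before uploading it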

Evaluation:

We will evaluate all submissions on different instances of the submodular problems provided as part of the submodular problem implementations. The algorithms will be evaluated from a fixed-budget perspective. We will consider two categories and determine a winner for each of them.

  • In the low-budget category, each algorithm will be run for 10,000 fitness evaluations, and the best solution obtained during the run will be used as the result.
  • In the high-budget category, each algorithm will be run for 100,000 fitness evaluations, and the best solution obtained during the run will be used as the result.

Each algorithm will be run 10 times on each test instance in each category. Algorithms will be ranked on each considered instance, and the winner of a category (low budget or high budget) is the approach with the smallest average rank. By default, we assume that each submission should be considered for both categories. If a submission should only be considered in one of the categories, i.e., either low or high budget, we ask the contributors to state this clearly in the submission email.
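As an illustration of this ranking scheme (with made-up numbers, and using the mean best value over the 10 runs as the per-instance score), average ranks could be computed as follows:

    # Sketch only: illustrative values, not competition data.
    import numpy as np
    from scipy.stats import rankdata

    # Rows: algorithms A, B, C; columns: per-instance scores (mean best value).
    scores = np.array([
        [105.0, 98.0, 77.0],
        [110.0, 95.0, 80.0],
        [102.0, 99.0, 79.0],
    ])

    # Rank per instance; negate so larger values get rank 1 (maximisation).
    ranks = np.column_stack([rankdata(-scores[:, j]) for j in range(scores.shape[1])])
    for name, avg in zip("ABC", ranks.mean(axis=1)):
        print(f"algorithm {name}: average rank {avg:.2f}")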

Submission:

Submission deadline: 12 July 2024, AoE.

To submit to the competition, we recommend creating a publicly visible repository (e.g. on Zenodo) where you upload the code of your algorithm as a single zip file (named according to your algorithm name and the category you are submitting to), together with a README that contains instructions on how to execute it. Please make sure this README is sufficiently detailed to ease reproducibility. Additionally, you can upload the performance data on the publicly available problem instances as well. Finally, please email the link to your repository to Aneta Neumann.

Organisers:

Aneta Neumann, University of Adelaide, Australia
Saba Sadeghi Ahouei, University of Adelaide, Australia
Diederick Vermetten, Leiden University, The Netherlands
Jacob de Nobel, Leiden University, The Netherlands
Thomas Bäck, Leiden University, The Netherlands

