Further Enquiries

School of Computer and Mathematical Sciences
Ingkarni Wardli Building
The University of Adelaide
SA 5005
AUSTRALIA
Email

Telephone: +61 8 8313 5586
Facsimile: +61 8 8313 4366

Foundations of Bio-Inspired Computing

Directors: Dr Denis Antipov and Prof Frank Neumann

Bio-inspired computing methods such as evolutionary algorithms (EAs) and ant colony optimization are very successful problem solvers for a wide range of combinatorial and complex engineering problems. Evolutionary algorithms follow Darwin’s principle of survival of the fittest and construct solutions via an iterative process which simulates natural evolution. The basic strategy is to randomly initialize a set of possible solutions, called a population, and evolve it over time by creating a new population using a selection process that probabilistically favors replication of fitter individuals, together with variation operators such as crossover and mutation. Ant colony optimization takes its inspiration from ants finding the shortest path between their nest and a food source. In the algorithmic ant colony optimization approach, solutions for a given problem are constructed by artificial ants moving on a so-called construction graph. Depending on the quality of the solutions obtained, the random walk is influenced such that better solutions are constructed as the algorithm progresses.
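To make the evolutionary cycle above concrete, the following is a minimal sketch of a simple evolutionary algorithm, a (1+1) EA maximizing the classical OneMax benchmark (the number of ones in a bit string). It is an illustration only, not one of the specific algorithms studied in our projects, and the parameter values are arbitrary.

```python
import random

def one_max(x):
    # Example fitness: number of ones in the bit string (illustrative benchmark).
    return sum(x)

def one_plus_one_ea(n=50, generations=1000, seed=0):
    """Minimal (1+1) EA sketch: keep a single parent, create one offspring
    by standard bit mutation, and keep the better of the two."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(generations):
        # Standard bit mutation: flip each bit independently with probability 1/n.
        offspring = [1 - b if rng.random() < 1.0 / n else b for b in parent]
        # Elitist selection: replace the parent only if the offspring is at least as good.
        if one_max(offspring) >= one_max(parent):
            parent = offspring
    return parent

if __name__ == "__main__":
    best = one_plus_one_ea()
    print(one_max(best), "ones out of", len(best))
```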

Evolutionary Diversity Optimization

Diversity optimization is the problem of finding a diverse set of solutions that satisfy some constraint on their minimum quality. There exist many different measures of diversity, but in general they require that the individuals in the measured set be as different as possible in terms of their genotype, that is, their position in the search space. The main difference between diversity optimization and regular optimization problems is that we search not in the space of solutions, but in the space of sets of solutions. This makes the search space extremely high-dimensional, which turns finding a diverse set into a much more complicated task.
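As a concrete illustration, one simple genotype-based diversity measure for sets of bit strings (chosen here for illustration, not the only measure used in our work) is the sum of pairwise Hamming distances between the members of the set:

```python
from itertools import combinations

def hamming(x, y):
    # Genotype distance between two bit strings of equal length.
    return sum(a != b for a, b in zip(x, y))

def population_diversity(population):
    """One simple genotype-based diversity measure (illustrative, not the only choice):
    the sum of pairwise Hamming distances over all pairs in the set."""
    return sum(hamming(x, y) for x, y in combinations(population, 2))
```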

One of the standard approaches to diversity optimization is to use evolutionary algorithms (EAs); this approach is called Evolutionary Diversity Optimization (EDO). We study it from a theoretical perspective on different combinatorial problems such as the traveling salesman problem, the knapsack problem and the minimum spanning tree problem.
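A generic EDO iteration can be sketched as follows, assuming a quality threshold and a set-level diversity measure such as the one above; this is an illustrative scheme, not a specific published algorithm.

```python
def edo_step(population, fitness, diversity, quality_threshold, rng):
    """One iteration of a generic EDO scheme (a sketch, not a specific published algorithm):
    mutate a random population member and accept the offspring if it still meets the
    quality threshold and does not decrease the diversity of the set."""
    i = rng.randrange(len(population))
    parent = population[i]
    n = len(parent)
    # Standard bit mutation on the chosen parent.
    offspring = [1 - b if rng.random() < 1.0 / n else b for b in parent]
    if fitness(offspring) >= quality_threshold:
        candidate = population[:i] + [offspring] + population[i + 1:]
        if diversity(candidate) >= diversity(population):
            return candidate
    return population
```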

In the context of multi-objective optimization, EDO is often used to find a diverse set of Pareto-optimal solutions. We study this approach on simple benchmark problems such as OneMinMax or LeadingOnes-TrailingZeros to better understand the best diversity that can be obtained on these problems and how effective different EAs are at finding it.
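For reference, OneMinMax and LeadingOnes-TrailingZeros are bi-objective benchmark functions on bit strings, both objectives to be maximized; the sketch below gives their standard definitions.

```python
def one_min_max(x):
    # Bi-objective OneMinMax: maximize the number of zeros and the number of ones.
    ones = sum(x)
    return (len(x) - ones, ones)

def lotz(x):
    # Bi-objective LeadingOnes-TrailingZeros: maximize the number of leading ones
    # and the number of trailing zeros.
    leading_ones = 0
    for b in x:
        if b != 1:
            break
        leading_ones += 1
    trailing_zeros = 0
    for b in reversed(x):
        if b != 0:
            break
        trailing_zeros += 1
    return (leading_ones, trailing_zeros)
```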

The list of publications on this topic can be found on the project page.

Optimization in Uncertain and Dynamic Environments

Many real-world application problems have uncertainties which complicate the optimization. One of the simplest examples is when the objective function does not return a precise value, but a value affected by noise. This problem arises, e.g., in algorithm tuning, when performance is measured as the mean of some number of runs. Previous theoretical studies have shown that some simple EAs can be very sensitive to noise, failing to converge even with small noise rates. Our latest results indicate that using non-trivial populations in EAs can make them resistant even to high noise rates.
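A common way to model this setting in theoretical studies is additive posterior noise, where the algorithm only observes a perturbed objective value. The sketch below uses a Gaussian noise assumption and illustrative names; it also shows explicit resampling, whose variance-reducing effect populations can achieve implicitly.

```python
import random

def noisy_value(f, x, sigma, rng):
    """Additive posterior noise model (one common assumption in theoretical studies):
    the algorithm only sees f(x) plus a Gaussian perturbation."""
    return f(x) + rng.gauss(0.0, sigma)

def averaged_value(f, x, sigma, samples, rng):
    # Re-evaluating and averaging reduces the variance of the observed value;
    # population-based EAs achieve a similar effect implicitly.
    return sum(noisy_value(f, x, sigma, rng) for _ in range(samples)) / samples
```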

Another source of uncertainty encountered in real-world problems is chance constraints. For example, in the knapsack problem we might not know the precise item weights, but only have some information about their distributions. In such a situation we want to find a solution (or different solutions) which maximizes the profit and breaks the constraint with only a small probability. Recently we have shown that a multi-objective representation of such problems is very efficient and even allows the user to decide on the probability of breaking the constraint after a set of good solutions has been found. More information on this topic, including the list of publications, can be found on the corresponding project page.
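One way such a chance constraint can be turned into a deterministic surrogate, assuming independent item weights with known means and variances, is via the one-sided Chebyshev (Cantelli) inequality. The sketch below illustrates this idea only; it is not a specific formulation from our publications.

```python
import math

def weight_within_capacity(selected_means, selected_variances, capacity, alpha):
    """Sufficient condition for Pr[total weight > capacity] <= alpha, based on the
    one-sided Chebyshev (Cantelli) inequality and assuming independent item weights
    with known means and variances. A sketch of one possible surrogate constraint."""
    mu = sum(selected_means)
    var = sum(selected_variances)
    # Cantelli: Pr[W >= mu + t] <= var / (var + t^2); choosing
    # t = sqrt(var * (1 - alpha) / alpha) makes the right-hand side equal to alpha.
    t = math.sqrt(var * (1.0 - alpha) / alpha)
    return mu + t <= capacity
```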

It is also often the case in real-world problems that the constraints change after good solutions have been found. In this case we have to re-optimize the problem, and EAs have previously been shown to be effective for this task. We focus on the theoretical analysis of this aspect of EAs. Our results are described on the project page.
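The general re-optimization idea can be sketched as restarting the EA from the previously found solutions once the constraint changes, rather than from scratch; all function names below are illustrative placeholders.

```python
def reoptimize(previous_population, fitness, is_feasible, repair, mutate, steps, rng):
    """Sketch of re-optimization after a constraint change: reuse the old population,
    repair solutions that became infeasible, and continue evolving from there."""
    population = [x if is_feasible(x) else repair(x) for x in previous_population]
    for _ in range(steps):
        parent = rng.choice(population)
        offspring = mutate(parent)
        if is_feasible(offspring) and fitness(offspring) >= min(fitness(x) for x in population):
            # Replace the currently worst member of the population.
            worst = min(range(len(population)), key=lambda i: fitness(population[i]))
            population[worst] = offspring
    return population
```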

Previous Projects

Our previous research includes parameterized analysis of EAs and other nature-inspired search heuristics, approximation-guided evolution, analysis of genetic programming algorithms, and other topics on the foundations of evolutionary computation. More information on those topics can be found on this page.