Tutorials


GECCO 2021 will feature 39 tutorials.

Introductory Tutorials

A Gentle Introduction to Theory (for Non-Theoreticians)
  • Benjamin Doerr Inria and Ecole Polytechnique, France
NEW Benchmarking: state-of-the-art and beyond
  • Anne Auger Inria, France
  • Nikolaus Hansen Inria and Ecole Polytechnique, France
Evolution of Neural Networks
  • Risto Miikkulainen The University of Texas at Austin and Cognizant Technology Solutions, USA
Genetic Programming: A Tutorial Introduction
  • Una-May O'Reilly MIT, USA
  • Erik Hemberg MIT CSAIL
Hyper-heuristics
  • Daniel Tauritz Auburn University, USA
  • John Woodward Queen Mary University of London, UK
Introductory Mathematical Programming for EC
  • Ofer Shir Tel-Hai College and Migal Institute, Israel
Learning Classifier Systems: From Principles to Modern Systems
  • Anthony Stein University of Hohenheim
  • Masaya Nakata Faculty of Engineering, Yokohama National University, Japan
Model-Based Evolutionary Algorithms
  • Dirk Thierens Utrecht University, The Netherlands
  • Peter A. N. Bosman Centre for Mathematics and Computer Science, The Netherlands
Recent Advances in Particle Swarm Optimization Analysis and Understanding 2021
  • Christopher Cleghorn University of Witwatersrand, South Africa
  • Andries Engelbrecht University of Stellenbosch, South Africa
NEW Replicability and Reproducibility in Evolutionary Optimization
  • Luís Paquete University of Coimbra, Portugal
  • Manuel López-Ibáñez University of Málaga, Spain
Representations for Evolutionary Algorithms
  • Franz Rothlauf Universität Mainz
Runtime Analysis of Evolutionary Algorithms: Basic Introduction
  • Per Kristian Lehre University of Birmingham
  • Pietro Simone Oliveto Department of Computer Science, University of Sheffield
NEW Theoretical Foundations of Evolutionary Computation for Beginners and Veterans
  • Darrell Whitley Colorado State University, United States

Advanced Tutorials

Advanced Learning Classifier Systems
  • Will Browne Victoria University of Wellington, NZ
Automated Algorithm Configuration and Design
  • Thomas Stützle Université Libre de Bruxelles, Belgium
  • Manuel López-Ibáñez University of Málaga, Spain
NEW Benchmarking Multiobjective Optimizers 2.0
  • Dimo Brockhoff Inria and Ecole Polytechnique, France
  • Tea Tušar Jožef Stefan Institute, Slovenia
CMA-ES and Advanced Adaptation Mechanisms
  • Youhei Akimoto University of Tsukuba
  • Nikolaus Hansen Inria and Ecole Polytechnique, France
Coevolutionary Computation for Adversarial Deep Learning
  • Jamal Toutouh MIT, USA
  • Una-May O'Reilly MIT, USA
Constraint-Handling Techniques used with Evolutionary Algorithms
  • Carlos Coello Coello CINVESTAV-IPN, Mexico
Decomposition Multi-Objective Optimization: Current Developments and Future Opportunities
  • Ke Li University of Exeter, UK
  • Qingfu Zhang City University of Hong Kong
Dynamic Multi-objective Optimization: Introduction, Challenges, Applications and Future Directions
  • Mardé Helbig Griffith University, Australia
Evolutionary Multi- and Many-Objective Optimization: Methodologies, Applications and Demonstration
  • Kalyanmoy Deb Michigan State University, USA
  • Julian Blank Michigan State University, USA
NEW Evolutionary Submodular Optimisation
  • Aneta Neumann The University of Adelaide, Australia
  • Frank Neumann University of Adelaide, Australia
  • Chao Qian Nanjing University
Genetic Improvement: Taking Real-World Source Code and Improving It Using Computational Search Methods
  • Saemundur Haraldsson University of Stirling
  • John Woodward Queen Mary University of London, UK
  • Markus Wagner University of Adelaide, Australia
  • Alexander Brownlee University of Stirling
NEW Lexicase Selection
  • Thomas Helmuth Hamilton College
  • William La Cava University of Pennsylvania
Quality-Diversity Optimization
  • Antoine Cully Imperial College London, UK
  • Jean-Baptiste Mouret Inria Nancy - Grand Est, CNRS, Université de Lorraine, France
  • Stéphane Doncieux ISIR, Université Pierre et Marie Curie-Paris 6, CNRS UMR 7222, Paris
Recent Advances in Landscape Analysis for Optimisation and Learning
  • Katherine Malan University of South Africa
  • Gabriela Ochoa University of Stirling, UK
Recent Advances of Differential Evolution in Numerical Optimization
  • Ponnuthurai Suganthan NTU
Runtime Analysis of Population-based Evolutionary Algorithms
  • Per Kristian Lehre University of Birmingham
  • Pietro Simone Oliveto Department of Computer Science, University of Sheffield
Sequential Experimentation by Evolutionary Algorithms
  • Ofer Shir Tel-Hai College and Migal Institute, Israel
  • Thomas Bäck LIACS, Leiden University, The Netherlands
Statistical Analyses for Meta-heuristic Stochastic Optimization Algorithms
  • Tome Eftimov Jožef Stefan Institute, Slovenia
  • Peter Korošec Jožef Stefan Institute

Specialized Tutorials

Applications of Dynamic Parameter Control in Evolutionary Computation
  • Gregor Papa Jožef Stefan Institute
NEW Evolutionary Art and Design: Representation, Fitness and Interaction
  • Penousal Machado University of Coimbra, CISUC, DEI
Evolutionary Computation and Evolutionary Deep Learning for Image Analysis, Signal Processing and Pattern Recognition
  • Mengjie Zhang Victoria University of Wellington, New Zealand
  • Stefano Cagnoni University of Parma
Evolutionary Computation and Machine Learning in Cryptology and Cybersecurity
  • Stjepan Picek TU Delft, The Netherlands
  • Domagoj Jakobovic University of Zagreb, Faculty of Electrical Engineering and Computing
Evolutionary Computation for Feature Selection and Feature Construction
  • Bing Xue Victoria University of Wellington, New Zealand
  • Mengjie Zhang Victoria University of Wellington, New Zealand
Push
  • Lee Spector Amherst College, Hampshire College, and the University of Massachusetts, Amherst
Search-based Software Engineering: Challenges, Opportunities and Recent Applications
  • Ali Ouni ETS Montreal, University of Quebec
  • Mohamed Wiem Mkaouer
NEW Towards a Green AI: Evolutionary solutions for an ecologically viable artificial intelligence
  • Nayat Sanchez-Pi Inria Chile Research Center, Chile
  • Luis Martí Inria, Chile

Introductory Tutorials

A Gentle Introduction to Theory (for Non-Theoreticians)

This tutorial addresses GECCO attendees who do not regularly use theoretical methods in their research. For them, we give a smooth introduction to the theory of evolutionary computation.
Complementing other introductory theory tutorials, we do not discuss mathematical methods or particular results, but explain

  • what the theory of evolutionary algorithms aims at,
  • how theoretical research in evolutionary computation is conducted,
  • how to interpret statements from the theory literature,
  • what the most important theory contributions are, and
  • what the theory community is trying to understand most at the moment.

 

Benjamin Doerr

Benjamin Doerr is a full professor at the French Ecole Polytechnique. He received his diploma (1998), PhD (2000) and habilitation (2005) in mathematics from Kiel University. His research area is the theory of both problem-specific algorithms and randomized search heuristics like evolutionary algorithms. Major contributions to the latter include runtime analyses for existing evolutionary algorithms, the determination of optimal parameter values, and the theory-guided design of novel operators, on-the-fly parameter choices, and whole new evolutionary algorithms. Together with Frank Neumann and Ingo Wegener, Benjamin Doerr founded the theory track at GECCO and served as its co-chair 2007-2009 and 2014. He is a member of the editorial boards of "Artificial Intelligence", "Evolutionary Computation", "Natural Computing", "Theoretical Computer Science", and three journals on classic algorithm theory. Together with Anne Auger, he edited the book "Theory of Randomized Search Heuristics" (World Scientific 2011). Together with Frank Neumann, he is an editor of the book "Theory of Evolutionary Computation - Recent Developments in Discrete Optimization" (Springer 2020).

NEW Benchmarking: state-of-the-art and beyond

Benchmarking is a compulsory task when assessing the performance of a (new) optimization algorithm. While this appears to be a mainly technical task, surprisingly many methodological problems arise when benchmarking algorithms. Over the past decade, there has been a great effort towards improving the benchmarking methodology for (gradient-free) optimization. It started with continuous optimization problems and was then extended to multi-objective and mixed-integer problems.
In this tutorial, we will present and discuss these key methodological ideas, emphasizing the importance of quantitative measurement, the use of problem instances, and choosing the testbed well so as not to bias results towards overly easy problems. We will particularly review the advantages of presenting data using the empirical cumulative distribution of runtimes, a tool that everyone assessing the performance of an algorithm should know.
We will then review how this methodology is implemented within the COCO software and show how COCO can and should be used to benchmark an algorithm and write a scientific paper.

This tutorial is intended for young researchers starting in the field who need benchmarking for their research, as well as for researchers who wish to get up to date with the latest methodological developments in benchmarking.

 

Anne Auger

Anne Auger is a research director at Inria heading the RandOpt team. She received her diploma (2001) and PhD (2004) in mathematics from the Paris VI University.
Before joining Inria, she worked for two years (2004-2006) at ETH in Zurich.
Her main research interest is stochastic continuous optimization, including
theoretical aspects, algorithm design, and benchmarking. She is a member of the ACM SIGEVO executive committee and of the editorial board of Evolutionary Computation. She was General Chair of GECCO 2019. She is co-organizing the forthcoming Dagstuhl seminar on benchmarking and is one of the core members behind the COCO/BBOB benchmarking platform.

 

Nikolaus Hansen

Nikolaus Hansen is a research director at Inria, France. Educated in medicine and mathematics, he received a Ph.D. in civil engineering in 1998 from the Technical University Berlin under Ingo Rechenberg. Before joining Inria, he worked in evolutionary computation, genomics and computational science at the Technical University Berlin, the Institute of Genetic Medicine and ETH Zurich. His main research interests are learning and adaptation in evolutionary computation and the development of algorithms applicable in practice. His best-known contribution to the field of evolutionary computation is the so-called Covariance Matrix Adaptation (CMA).

Evolution of Neural Networks

Evolution of artificial neural networks has recently emerged as a powerful technique in two areas. First, while standard value-function based reinforcement learning works well when the environment is fully observable, neuroevolution makes it possible to disambiguate hidden state through memory. Such memory makes new applications possible in areas such as robotic control, game playing, and artificial life. Second, deep learning performance depends crucially on the network architecture and hyperparameters. While many such architectures are too complex to be optimized by hand, neuroevolution can be used to do so automatically. Such evolutionary AutoML can be used to achieve good deep learning performance even with limited resources, or state-of-the-art performance with more effort. It is also possible to optimize other aspects of the architecture, like its size, speed, or fit with hardware. In this tutorial, I will review (1) neuroevolution methods that evolve fixed-topology networks, network topologies, and network construction processes, (2) methods for neural architecture search and evolutionary AutoML, and (3) applications of these techniques in control, robotics, artificial life, games, image processing, and language.

Risto Miikkulainen

Risto Miikkulainen is a Professor of Computer Science at the University of Texas at Austin and an AVP of Evolutionary AI at Cognizant Technology Solutions. He received an M.S. in Engineering from the Helsinki University of Technology, Finland, in 1986, and a Ph.D. in Computer Science from UCLA in 1990. His research focuses on methods and applications of neuroevolution, as well as neural network models of natural language processing and self-organization of the visual cortex; he is an author of over 450 articles in these research areas. He is an IEEE Fellow, recipient of the 2020 IEEE CIS EC Pioneer Award, the Gabor Award of the INNS, and the Outstanding Paper of the Decade Award of the ISAL.

Genetic Programming: A Tutorial Introduction

Genetic programming emerged in the early 1990s as one of the most exciting new evolutionary algorithm paradigms. It has rapidly grown into a thriving area of research and application. While sharing the evolution-inspired algorithmic principles of a genetic algorithm, it differs by exploiting an executable genome. Genetic programming evolves a 'program' to solve a problem rather than a single solution. This tutorial introduces the basic genetic programming paradigm. It explains how the powerful capability of genetic programming is derived from modular algorithmic components: executable representations such as a parse tree, variation operators that preserve syntax and explore a variable-length, hierarchical solution space, appropriately chosen programming functions, and fitness function specification. It provides demos and walks through an example of GP software.
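
The components described above can be made concrete with a small sketch. This is an illustrative toy only, not the tutorial's software: the representation (nested Python tuples), the primitive set (add/sub/mul, the variable x, small integer constants), and all function names are our own assumptions.

```python
import random
import operator

# Illustrative tree-based GP components: a program is a nested tuple
# (function, left, right); leaves are the variable 'x' or integer constants.
FUNCS = {'add': operator.add, 'sub': operator.sub, 'mul': operator.mul}

def random_tree(depth=3):
    """Grow a random parse tree over the primitive set."""
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.randint(-2, 2)
    return (random.choice(list(FUNCS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Execute the genome: recursively evaluate the parse tree at input x."""
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    f, left, right = tree
    return FUNCS[f](evaluate(left, x), evaluate(right, x))

def subtrees(tree, path=()):
    """Enumerate (path, subtree) pairs; a path is a sequence of child indices."""
    yield path, tree
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):
            yield from subtrees(child, path + (i,))

def replace_at(tree, path, new):
    """Return a copy of tree with the subtree at path replaced by new."""
    if not path:
        return new
    children = list(tree)
    children[path[0]] = replace_at(children[path[0]], path[1:], new)
    return tuple(children)

def crossover(a, b):
    """Syntax-preserving subtree crossover: graft a random subtree of b at a
    random point in a, always yielding a valid variable-length tree."""
    path_a, _ = random.choice(list(subtrees(a)))
    _, sub_b = random.choice(list(subtrees(b)))
    return replace_at(a, path_a, sub_b)

def fitness(tree, target=lambda x: x * x + 1, points=range(-5, 6)):
    """Fitness specification: squared error against a target on sample points."""
    return sum((evaluate(tree, x) - target(x)) ** 2 for x in points)
```

Because crossover swaps whole subtrees, every offspring is again a syntactically valid program, which is exactly the syntax-preserving property the tutorial highlights.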

 

Una-May O'Reilly

Una-May O'Reilly is leader of the AnyScale Learning For All (ALFA) group at MIT CSAIL. ALFA focuses on evolutionary algorithms, machine learning and frameworks for large scale knowledge mining, prediction and analytics. The group has projects in cyber security using coevolutionary algorithms to explore adversarial dynamics in networks and malware detection. Una-May received the EvoStar Award for Outstanding Achievements in Evolutionary Computation in Europe in 2013. She is a Junior Fellow (elected before age 40) of the International Society of Genetic and Evolutionary Computation, which has evolved into ACM SIGEVO. She now serves as Vice-Chair of ACM SIGEVO. She served as chair of the largest international Evolutionary Computation Conference, GECCO, in 2005.

 

Erik Hemberg

Erik Hemberg is a Research Scientist in the AnyScale Learning For All (ALFA) group at the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Lab, USA. He has a Ph.D. in Computer Science from University College Dublin, Ireland, and an MSc in Industrial Engineering and Applied Mathematics from Chalmers University of Technology, Sweden. He has 10 years of experience in EC, focusing on the use of programs with grammatical representations, estimation of distribution, and coevolution. His work has been applied to networks, tax avoidance, and cyber security.

Hyper-heuristics

The automatic design of algorithms has been an early aim of both machine learning and AI, but has proved elusive. The aim of this tutorial is to introduce hyper-heuristics as a principled approach to the automatic design of algorithms. Hyper-heuristics are metaheuristics applied to a space of algorithms; i.e., any general heuristic method of sampling a set of candidate algorithms. In particular, this tutorial will demonstrate how to mine existing algorithms to obtain algorithmic primitives for the hyper-heuristic to compose new algorithmic solutions from, and to employ various types of genetic programming to execute the composition process; i.e., the search of program space.

This tutorial will place hyper-heuristics in the context of genetic programming - which differs in that it constructs solutions from scratch using atomic primitives - as well as genetic improvement - which takes a program as starting point and improves on it (a recent direction introduced by William Langdon).

The approach proceeds from the observation that it is possible to define an invariant framework for the core of any class of algorithms (often by examining existing human-written algorithms for inspiration). The variant components of the algorithm can then be generated by genetic programming. Each instance of the framework therefore defines a family of algorithms. While this allows searches in constrained search spaces based on problem knowledge, it does not in any way limit the generality of this approach, as the template can be chosen to be any executable program and the primitive set can be selected to be Turing-complete. Typically, however, the initial algorithmic primitive set is composed of primitive components of existing high-performing algorithms for the problems being targeted; this more targeted approach very significantly reduces the initial search space, resulting in a practical approach rather than a mere theoretical curiosity. Iterative refining of the primitives allows for gradual and directed enlarging of the search space until convergence.

This leads to a technique for mass-producing algorithms that can be customized to the context of end-use. This is perhaps best illustrated as follows: typically a researcher might create a travelling salesperson problem (TSP) algorithm by hand. When executed, this algorithm returns a solution to a specific instance of the TSP. We will describe a method that generates TSP algorithms tuned to representative instances of interest to the end-user. This method has been applied to a growing number of domains, including data mining/machine learning, combinatorial problems such as bin packing (on- and off-line), Boolean satisfiability, job shop scheduling, exam timetabling, image recognition, black-box function optimization, wind-farm layout, and the automated design of meta-heuristics themselves (from selection and mutation operators to the overall meta-heuristic architecture).

This tutorial will provide a step-by-step guide which takes the novice through the distinct stages of automatic design. Examples will illustrate and reinforce the issues of practical application. This technique has repeatedly produced results which outperform their manually designed counterparts, and a theoretical underpinning will be given to demonstrate why this is the case. Automatic design will become an increasingly attractive proposition, as the cost of human design will only increase in line with inflation, while the speed of processors increases in line with Moore's law, thus making automatic design attractive for industrial application. Basic knowledge of genetic programming will be assumed.
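
The invariant-template idea described above can be sketched in miniature. The sketch below is our own illustrative assumption, not the tutorial's framework: an online bin-packing template is held fixed, its variant component (the bin-scoring function) is drawn from a tiny linear family, and plain random search stands in for the genetic programming the tutorial actually employs.

```python
import random

# Illustrative invariant framework + generated variant component.
CAPACITY = 1.0

def pack(items, score):
    """Invariant framework for online bin packing: place each item in the
    feasible open bin maximizing score(gap, size); open a new bin otherwise.
    Returns the number of bins used."""
    gaps = []  # remaining capacity of each open bin
    for size in items:
        feasible = [i for i, gap in enumerate(gaps) if gap >= size]
        if feasible:
            i = max(feasible, key=lambda i: score(gaps[i], size))
            gaps[i] -= size
        else:
            gaps.append(CAPACITY - size)
    return len(gaps)

def generate_heuristic(train, trials=200, rng=random.Random(0)):
    """Variant component: search score(gap, size) = a*gap + b*size for
    coefficients minimizing total bins on representative training instances.
    (a = -1, b = 0 recovers best fit; a = +1 recovers worst fit.)"""
    best_ab, best_total = (0.0, 0.0), float('inf')
    for _ in range(trials):
        a, b = rng.uniform(-1, 1), rng.uniform(-1, 1)
        total = sum(pack(inst, lambda gap, size: a * gap + b * size)
                    for inst in train)
        if total < best_total:
            best_ab, best_total = (a, b), total
    return best_ab, best_total
```

Each sampled (a, b) pair instantiates the template as a complete packing algorithm, mirroring how a framework instance defines a family of algorithms tuned to the instances of interest.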

Daniel Tauritz

Daniel Tauritz is an Associate Professor in the Department of Computer Science and Software Engineering at Auburn University (AU), Interim Director and Chief Cyber AI Strategist of the Auburn Cyber Research Center, the founding Head of AU’s Biomimetic Artificial Intelligence Research Group (BioAI Group), a cyber consultant for Sandia National Laboratories, a Guest Scientist at Los Alamos National Laboratory (LANL), and founding academic director of the LANL/AU Cyber Security Sciences Institute (CSSI). He received his Ph.D. in 2002 from Leiden University. His research interests include the design of generative hyper-heuristics and self-configuring evolutionary algorithms and the application of computational intelligence techniques in cyber security, critical infrastructure protection, and program understanding. He was granted a US patent for an artificially intelligent rule-based system to assist teams in becoming more effective by improving the communication process between team members.

 

John Woodward

John R. Woodward is a lecturer at the Queen Mary University of London. Formerly he was a lecturer at the University of Stirling, within the CHORDS group (http://chords.cs.stir.ac.uk/), and was employed on the DAASE project (http://daase.cs.ucl.ac.uk/). Before that he was a lecturer for four years at the University of Nottingham. He holds a BSc in Theoretical Physics, an MSc in Cognitive Science and a PhD in Computer Science, all from the University of Birmingham. His research interests include Automated Software Engineering, particularly Search-Based Software Engineering, Artificial Intelligence/Machine Learning and in particular Genetic Programming. He has over 50 publications in Computer Science, Operations Research and Engineering, which include both theoretical and empirical contributions, and has given over 50 talks at international conferences and as an invited speaker at universities. He has worked in industrial, military, educational and academic settings, and has been employed by EDS, CERN, the RAF, and three UK universities.

Introductory Mathematical Programming for EC

Global optimization of complex models has for several decades been approached by means of formal algorithms from
Mathematical Programming (MP) (often branded as Operations Research, yet strongly rooted in Theoretical CS), and simultaneously by a wide range of dedicated heuristics (frequently under the label of Soft Computing), where EC resides. The former is applicable when explicit modeling is available, whereas the latter is typically utilized for simulation- or experimentation-based optimization (but is also applicable to explicit models).
These two branches complement each other, yet in practice are studied under two independent CS disciplines.
It is widely recognized nowadays that EC scholars become stronger, better-equipped researchers when they obtain knowledge of this so-called "optimization complement". In other words, the notion that our EC education may omit basic MP is untenable at present times, and this tutorial aims at bridging the gap for our community's scholars.

The tutorial comprises three parts, aiming to introduce basic MP to an EC audience.
The first part introduces the fundamentals of MP. It overviews mathematical optimization and outlines its taxonomy when classified by the underlying model: convex optimization (linear programs (pure-LP) and non-linear programs), versus combinatorial optimization (integer and mixed-integer linear programs, integer quadratic programs).
It then discusses some of the theoretical aspects, such as polyhedra and the duality theorem.
The second part focuses on MP in practice. The tutorial presents the principles of MP modeling, with emphasis on the roles of constraints and auxiliary/dummy decision variables in MP. It is compared to equivalent modeling for EC heuristics, which operate differently with respect to such components. It then covers selected algorithms for the major classes of problems (Dantzig's Simplex for pure-LP, Ellipsoid for convex models, and Branch-and-bound for ILP).
The third part constitutes an interactive demo session of problem-solving. We plan to model, in precept style, several ILP benchmark problems using ILOG Studio's OPL. We then plan to demonstrate their rapid solution by a state-of-the-art solver, namely IBM's CPLEX.
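
To make the branch-and-bound idea from the second part concrete without a commercial solver (the demo itself uses OPL and CPLEX), here is a small stdlib-only sketch for a 0/1 knapsack ILP; the function names and the use of the greedy fractional (Dantzig) LP-relaxation bound are our own illustrative choices.

```python
# Branch-and-bound for a 0/1 knapsack ILP:
#   maximize   sum(v[i] * x[i])
#   subject to sum(w[i] * x[i]) <= capacity,  x[i] in {0, 1}
def solve_knapsack(values, weights, capacity):
    # Branch in non-increasing value/weight order, as in the greedy LP solution.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)

    def lp_bound(k, value, room):
        """LP-relaxation (Dantzig) bound: fill greedily, allowing one
        fractional item; an upper bound on any completion of this node."""
        for i in order[k:]:
            if weights[i] <= room:
                room -= weights[i]
                value += values[i]
            else:
                return value + values[i] * room / weights[i]
        return value

    best = 0
    def branch(k, value, room):
        nonlocal best
        best = max(best, value)                  # current node is integer-feasible
        if k == len(order) or lp_bound(k, value, room) <= best:
            return                               # prune: bound cannot beat incumbent
        i = order[k]
        if weights[i] <= room:                   # branch x_i = 1
            branch(k + 1, value + values[i], room - weights[i])
        branch(k + 1, value, room)               # branch x_i = 0

    branch(0, 0, capacity)
    return best
```

The same three ingredients, a relaxation, a bounding rule, and a branching order, are what a solver such as CPLEX applies at industrial scale.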

Ofer Shir

Ofer Shir is an Associate Professor of Computer Science at Tel-Hai College, and a Principal Investigator at Migal-Galilee Research Institute – both located in the Upper Galilee, Israel.
Ofer Shir holds a BSc in Physics and Computer Science from the Hebrew University of Jerusalem, Israel (conferred 2003), and both MSc and PhD in Computer Science from Leiden University, The Netherlands (conferred 2004, 2008; PhD advisers: Thomas Bäck and Marc Vrakking). Upon his graduation, he completed a two-year term as a Postdoctoral Research Associate at Princeton University, USA (2008-2010), hosted by Prof. Herschel Rabitz in the Department of Chemistry, where he specialized in computational aspects of experimental quantum systems. He then joined IBM-Research as a Research Staff Member (2010-2013), which constituted his second postdoctoral term, and where he gained real-world experience in convex and combinatorial optimization as well as in decision analytics.
His current topics of interest include Statistical Learning in Theory and in Practice, Experimental Optimization, Theory of Randomized Search Heuristics, Scientific Informatics, Natural Computing, Computational Intelligence in Physical Sciences, Quantum Control and Quantum Machine Learning.

Learning Classifier Systems: From Principles to Modern Systems

Learning Classifier Systems (LCSs) emerged in the late 1970s and have since attracted a lot of research attention. Originally introduced as a technique to model adaptive agents within Holland’s notion of complex adaptive systems, LCSs have seen various enhancements toward a full-fledged Machine Learning (ML) technique with an evolutionary component at their core. Nowadays, several variations capable of dealing with most modern ML tasks exist, including online and batch-wise supervised learning, single- and multi-step reinforcement learning, and even unsupervised learning. This great flexibility, which is due to their modular and clearly defined algorithmic structure paving the way for simple and straightforward adaptations, is unique in the field of Evolutionary Machine Learning (EML), earning the LCS paradigm an indisputable permanent place in it. Despite this well-known blueprint comprising the building blocks that bring LCSs to function, gaining theoretical insights into the interplay between them has long been a crucial research topic, and it still constitutes a subject of active research.

In this tutorial, the main goal is to make the audience familiar with exactly these building blocks of LCS-based EML. To achieve this, Wilson’s XCS classifier system, the most prominent and most deeply investigated derivative of Michigan-style LCSs, will be introduced in the first part of the tutorial. In order to provide a holistic view of LCSs, the first part also sketches the lineage and historical developments, but will then quickly focus on the more prominent Michigan-style systems.
Recent theoretical advances in XCS research will be the subject of the tutorial’s second part. Attendees will become acquainted with the fundamental challenges, and with the bounds of what can be achieved by XCS and under which circumstances. This introductory theory part is contributed by Dr. Masaya Nakata.
The third part of the tutorial is then devoted to the state of the art in LCS research in terms of their real world applicability. Modern systems such as XCSF for online function approximation or ExSTraCS for large-scale supervised data mining will be the subject of discussion.
The tutorial closes with a wrap-up regarding the distinctive potentials of LCSs, suggestions for a complementary (“traditional”) ML-centric view of this unique paradigm, and, to give the audience an impression of where LCS research stands these days and which open questions remain, a brief review of the most recent work tackling unsolved issues.

Anthony Stein

Anthony Stein is a tenure track professor at the University of Hohenheim, where he heads the department of Artificial Intelligence in Agricultural Engineering. He received his bachelor's degree (B.Sc.) in Business Information Systems from the University of Applied Sciences Augsburg in 2012. He then moved on to the University of Augsburg for his master's degree (M.Sc.) in computer science with a minor in information economics, which he received in 2014. Since November 2019, he also holds a doctorate (Dr. rer. nat.) in computer science from the University of Augsburg. His research is concerned with the development and application of Evolutionary Artificial Intelligence methods to complex self-adaptive and self-organizing (SASO) systems, with a focus on the domain of digital agriculture. Dr. Stein is involved in the organization of workshops on lifelike computing systems and evolutionary machine learning. He is engaged as a reviewer for numerous international conferences and journals, including IEEE CEC and IEEE T-EVC.

 

Masaya Nakata

Masaya Nakata is an associate professor at the Faculty of Engineering, Yokohama National University. He received the B.A., M.Sc., and Ph.D. degrees in informatics from the University of Electro-Communications, Japan, in 2011, 2013, and 2016, respectively. He was a visiting student at the School of Engineering and Computer Science, Victoria University of Wellington, from 2014; at the Department of Electronics and Information, Politecnico di Milano, Milan, Italy, in 2013; and at the Department of Computer Science, University of Bristol, Bristol, UK, in 2014. His research interests are in evolutionary computation, reinforcement learning and data mining, more specifically in learning classifier systems. He received the best paper award and the IEEE Computational Intelligence Society Japan Chapter Young Researcher Award at the Japanese Symposium on Evolutionary Computation 2012. He was a co-organizer of the International Workshop on Learning Classifier Systems (IWLCS) in 2015-2016 as well as 2018-2020.

Model-Based Evolutionary Algorithms

In model-based evolutionary algorithms (MBEAs) the variation operators are guided by the use of a model that conveys problem-specific information so as to increase the chances that combining the currently available solutions leads to improved solutions. Such models can be constructed beforehand for a specific problem, or they can be learnt during the optimization process.
Replacing traditional crossover and mutation operators by building and using models enables the use of machine learning techniques for automatic discovery of problem regularities and subsequent exploitation of these regularities, thereby enabling the design of optimization techniques that can automatically adapt to a given problem. This is an especially useful feature when considering optimization in a black-box setting. The use of models can furthermore also have major implications for grey-box settings where not everything about the problem is considered to be unknown a priori.

Well-known types of MBEAs are Estimation-of-Distribution Algorithms (EDAs) where probabilistic models of promising solutions are built and samples are subsequently drawn from these models to generate new solutions.
A more recent class of MBEAs is the family of Optimal Mixing EAs such as the Linkage Tree GA and, more generally, various GOMEA variants.
The tutorial will mainly focus on the latter types of MBEAs.
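
For readers new to EDAs, the build-model/sample-model loop can be illustrated with a minimal univariate sketch (UMDA-style, on OneMax). Note the hedge: this independent-bit model is far simpler than the linkage models (e.g. the linkage trees in GOMEA) that the tutorial focuses on, and all names and parameter values here are our own illustrative choices.

```python
import random

# Minimal univariate EDA (UMDA-style) on OneMax: builds a probabilistic
# model of promising solutions and samples new solutions from it.
def umda_onemax(n=20, pop_size=50, n_select=25, generations=30,
                rng=random.Random(1)):
    probs = [0.5] * n                            # model: P(bit_i = 1)
    for _ in range(generations):
        # Sample a new population from the current probabilistic model.
        pop = [[1 if rng.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
        pop.sort(key=sum, reverse=True)          # OneMax fitness = number of ones
        if sum(pop[0]) == n:
            return pop[0]                        # optimum found
        # Re-estimate the model from the selected promising solutions,
        # with margins so that no bit value becomes unreachable.
        elite = pop[:n_select]
        probs = [min(0.95, max(0.05, sum(ind[i] for ind in elite) / n_select))
                 for i in range(n)]
    return pop[0]
```

Replacing this independent-bit distribution with a learned linkage structure is precisely the step that leads from simple EDAs to the Linkage Tree GA and GOMEA variants covered in the tutorial.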

 

Dirk Thierens

Dr. Ir. Dirk Thierens is a lecturer/senior researcher at the Department of Information and Computing Sciences at Utrecht University, where he teaches courses on Evolutionary Computation and Computational Intelligence. He received his PhD in Computer Science on "Analysis and Design of Genetic Algorithms" (KU Leuven, 1995). He has (co-)authored over 100 peer-reviewed papers in Evolutionary Computation. His main current research interests are focused on the design and application of structure learning techniques in the framework of population-based, stochastic search. Dirk has contributed to the organization of GECCO conferences as track chair, workshop organizer, Editor-in-Chief, and past member of the SIGEVO ACM board.

Peter A. N. Bosman

Peter A. N. Bosman is a senior researcher in the Life Sciences research group at the Centrum Wiskunde & Informatica (CWI) (Centre for Mathematics and Computer Science) located in Amsterdam, the Netherlands. Peter was formerly affiliated with the Department of Information and Computing Sciences at Utrecht University, where he also obtained both his MSc and PhD degrees in Computer Science, more specifically on the design and application of estimation-of-distribution algorithms (EDAs). He has (co-)authored over 125 refereed publications on both algorithmic design aspects and real-world applications of evolutionary algorithms. At the GECCO conference, Peter has previously been track (co-)chair (EDA track, 2006, 2009), late-breaking-papers chair (2007), (co-)workshop organizer (OBUPM workshop, 2006; EvoDOP workshop, 2007; GreenGEC workshop, 2012-2014), (co-)local chair (2013) and general chair (2017).

Recent Advances in Particle Swarm Optimization Analysis and Understanding 2021

The main objective of this tutorial will be to inform particle swarm optimization (PSO) practitioners of the many common misconceptions and falsehoods that are actively hindering a practitioner’s successful use of PSO in solving challenging optimization problems. While the behaviour of PSO’s particles has been studied both theoretically and empirically since its inception in 1995, most practitioners unfortunately have not utilized these studies to guide their use of PSO. This tutorial will provide a succinct coverage of common PSO misconceptions, with a detailed explanation of why the misconceptions are in fact false, and how they are negatively impacting results. The tutorial will also provide recent theoretical results about PSO particle behaviour, from which the PSO practitioner can make better and more informed decisions about PSO, and in particular make better PSO parameter selections. The tutorial will focus on the following important aspects of PSO behaviour:

  • Understanding why the random variables used in the velocity update should not be scalars, but rather vectors of random values
  • Exploring the effects of different ways in which velocity can be initialized
  • Clearing up issues with reference to velocity clamping
  • Understanding the influence of social topology and different iteration strategies on performance
  • Understanding PSO control parameters, and how to use them more efficiently
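
The first bullet point above can be made concrete with a short sketch. This is a minimal illustration of the canonical inertia-weight velocity update, not code from the tutorial; the default parameter values are the commonly used constricted settings:

```python
import numpy as np

def velocity_update(v, x, pbest, gbest, w=0.7298, c1=1.4962, c2=1.4962, rng=None):
    """Canonical (inertia-weight) PSO velocity update for one particle.

    r1 and r2 are *vectors* of per-dimension random values rather than
    scalars: with scalar randomness, the particle is confined to a
    low-dimensional subspace spanned by its position and the attractors.
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)  # one random value per dimension
    r2 = rng.random(x.shape)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```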

The importance of parameter selection will be illustrated with an interactive demo in which audience members vote for or suggest control parameters. The relative performance ranking of these audience-selected configurations, based on popular benchmark suites, will then be presented against a very extensive set of possible configurations. This demo will clearly illustrate why the subsequent theoretical discussion on control parameters is so important for effective PSO use. The tutorial will then cover:

  • Existing theoretical PSO results and what they mean to a PSO practitioner
  • Roaming behaviour of PSO particles
  • Understanding why PSO struggles with large-scale optimization problems
  • Known stability criteria of PSO algorithms
  • Effects of particle stability on PSO’s performance
  • How to derive new stability criteria for PSO variants and verify them
  • Demonstrating the use of a general PSO stability theorem together with a simple-to-use specialization
  • How the newly derived specialization theorem allows a non-restrictive relationship between PSO control coefficients, a first for PSO stability theory
  • Dealing with dimensional coupling in PSO theory.
  • Control parameter tuning, and self-adaptive control parameters
  • Is the PSO a local optimizer or a global optimizer?

With the knowledge presented in this tutorial, a PSO practitioner will gain up-to-date theoretical insights into PSO behaviour and, as a result, be able to make informed choices when utilizing PSO.
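
As an illustration of the kind of result covered under "known stability criteria", the widely cited order-2 stability region for the canonical PSO (derived by Poli under stagnation assumptions) can be checked directly. This sketch is illustrative, not the tutorial's own material:

```python
def is_order2_stable(w, c1, c2):
    """Order-2 (variance) stability criterion for canonical PSO:
    |w| < 1 and c1 + c2 < 24(1 - w^2) / (7 - 5w).
    Settings outside this region lead to divergent particle
    variance regardless of the objective function."""
    return abs(w) < 1 and (c1 + c2) < 24 * (1 - w * w) / (7 - 5 * w)
```

For example, the popular setting w = 0.7298, c1 = c2 = 1.4962 lies inside the stable region, while doubling the acceleration coefficients does not.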

Christopher Cleghorn

Christopher Cleghorn received his master's and PhD degrees in Computer Science from the University of Pretoria, South Africa, in 2013 and 2017 respectively. He is an Associate Professor in the School of Computer Science and Applied Mathematics at the University of the Witwatersrand. His research interests include swarm intelligence, evolutionary computation, machine learning, and radio-astronomy, with a strong focus on theoretical research. Prof Cleghorn annually serves as a reviewer for numerous international journals and conferences in domains ranging from swarm intelligence and neural networks to mathematical optimization.

 

Andries Engelbrecht

Andries Engelbrecht received his master's and PhD degrees in Computer Science from the University of Stellenbosch, South Africa, in 1994 and 1999 respectively. He holds the Voigt Chair in Data Science in the Department of Industrial Engineering, with a joint appointment as Professor in the Computer Science Division, Stellenbosch University. His research interests include swarm intelligence, evolutionary computation, artificial neural networks, artificial immune systems, and the application of these Computational Intelligence paradigms to data analytics, games, bioinformatics, finance, and difficult optimization problems. He is the author of two books, Computational Intelligence: An Introduction and Fundamentals of Computational Swarm Intelligence. Prof Engelbrecht serves as an associate editor for a number of journals, and as a reviewer for numerous journals and conferences.

NEW Replicability and Reproducibility in Evolutionary Optimization

The reproducibility of experimental results is one of the cornerstones of experimental science, yet published experimental results are often neither replicable nor reproducible due to insufficient details, missing datasets or unavailable software implementations. The Association for Computing Machinery (ACM) distinguishes among “repeatability”, “reproducibility” and “replicability”, and it has instituted different badges to be attached to research articles in ACM publications depending on the level of reproducibility. The new ACM journal “Transactions on Evolutionary Learning and Optimization” (TELO) uses these badges to encourage the publication of reproducible results.

The tutorial will be structured in two main parts. The first part will introduce basic concepts in reproducible research, the motivation behind it, and potential pitfalls illustrated from real-world examples. This part will also explain in detail the ACM standards for reproducibility and explain how the process runs at the ACM TELO journal. The second part will describe several techniques for improving reproducibility, ranging from trivial but surprisingly effective to somewhat more technical and laborious. Techniques presented will be demonstrated during the session step-by-step.

Luís Paquete

Luís Paquete has been Associate Professor at the Department of Informatics Engineering, University of Coimbra, Portugal, since 2007. He received his Ph.D. in Computer Science from T.U. Darmstadt, Germany, in 2005 and an M.S. in Systems Engineering and Computer Science from the University of Algarve, Portugal, in 2001. His research interests focus mainly on exact and heuristic solution methods for multiobjective combinatorial optimization problems. He is on the editorial board of Operations Research Perspectives and an Area Editor of ACM Transactions on Evolutionary Learning and Optimization.

Manuel López-Ibáñez

Dr. López-Ibáñez is Senior Distinguished Researcher at the University of Málaga (Spain) and a Senior Lecturer (Associate Professor) in the Decision and Cognitive Sciences Research Centre at the Alliance Manchester Business School, University of Manchester, UK. He received the M.S. degree in computer science from the University of Granada, Granada, Spain, in 2004, and the Ph.D. degree from Edinburgh Napier University, U.K., in 2009. He has published 27 journal papers, 9 book chapters and 48 papers in peer-reviewed proceedings of international conferences on diverse areas such as evolutionary algorithms, ant colony optimization, multi-objective optimization, pump scheduling and various combinatorial optimization problems. His current research interests are experimental analysis and automatic design of stochastic optimization algorithms, for single and multi-objective optimization. He is the lead developer and current maintainer of the irace software package (http://iridia.ulb.ac.be/irace).

Representations for Evolutionary Algorithms

Successful and efficient use of evolutionary algorithms (EA) depends on the choice of the genotype, the problem representation (mapping from genotype to phenotype) and on the choice of search operators that are applied to the genotypes. These choices cannot be made independently of each other. The question whether a certain representation leads to better performing EAs than an alternative representation can only be answered when the operators applied are taken into consideration. The reverse is also true: deciding between alternative operators is only meaningful for a given representation.

Research in the last few years has identified a number of key concepts to analyse the influence of representation-operator combinations on EA performance. Relevant concepts are the locality and redundancy of representations.

Locality is a result of the interplay between the search operator and the genotype-phenotype mapping. Representations have high locality if the application of variation operators results in new solutions similar to the original ones. Representations are redundant if the number of genotypes exceeds the number of phenotypes. Redundant representations can lead to biased encodings if some phenotypes are on average represented by a larger number of genotypes, or if search operators favor certain phenotypes.
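
As a toy illustration of locality, one can average the phenotypic change caused by single bit-flips over all genotypes of an integer encoding. The measure below is our own simple sketch, not one prescribed by the tutorial:

```python
def locality(decode, n_bits):
    """Average phenotype distance caused by a single bit-flip,
    over all genotypes and bit positions; smaller = higher locality."""
    total = 0
    for g in range(2 ** n_bits):
        for b in range(n_bits):
            total += abs(decode(g) - decode(g ^ (1 << b)))
    return total / (n_bits * 2 ** n_bits)

def gray_decode(g):
    """Map a reflected-Gray-code genotype to its integer phenotype."""
    x = g
    while g:
        g >>= 1
        x ^= g
    return x
```

For the standard binary encoding (decode = identity), flipping bit b always changes the phenotype by 2^b, so the measure equals (2^n - 1)/n; comparing this with `gray_decode` shows how changing only the genotype-phenotype mapping, while keeping the same bit-flip operator, changes locality.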

The tutorial gives a brief overview about existing guidelines for representation design, illustrates the different aspects of representations, gives a brief overview of models describing the different aspects, and illustrates the relevance of the aspects with practical examples.

It is expected that the participants have a basic understanding of EA principles.

Franz Rothlauf

Franz Rothlauf received a Diploma in Electrical Engineering from the University of Erlangen, Germany, a Ph.D. in Information Systems from the University of Bayreuth, Germany, and a Habilitation from the University of Mannheim, Germany, in 1997, 2001, and 2007, respectively.

Since 2007, he has been Professor of Information Systems at the University of Mainz. He has published more than 100 technical papers in the context of planning and optimization, evolutionary computation, e-business, and software engineering, co-edited several conference proceedings and edited books, and is the author of the books "Representations for Genetic and Evolutionary Algorithms" and "Design of Modern Heuristics". At the University of Mainz, he is Academic Director of the Executive MBA program (since 2013) and Chief Information Officer (since 2016).

His main research interests are the application of modern heuristics in planning and optimization systems. He is a member of the Editorial Board of the Evolutionary Computation Journal (ECJ) and Business & Information Systems Engineering (BISE). Since 2007, he has been a member of the Executive Committee of ACM SIGEVO, and he has served as its treasurer since 2011. He has been the organizer of a number of workshops and tracks on heuristic optimization, chair of EvoWorkshops in 2005 and 2006, co-organizer of the European workshop series on "Evolutionary Computation in Communications, Networks, and Connected Systems", co-organizer of the European workshop series on "Evolutionary Computation in Transportation and Logistics", and co-chair of the program committee of the GA track at GECCO 2006. He was conference chair of GECCO 2009.

Runtime Analysis of Evolutionary Algorithms: Basic Introduction

Evolutionary algorithm theory has studied the time complexity of evolutionary algorithms for more than 20 years. Different aspects of this rich and diverse research field were presented in three different advanced or specialized tutorials at last year's GECCO. This tutorial presents the foundations of the field. We introduce the most important notions and definitions used in the field and consider different evolutionary algorithms on a number of well-known and important example problems. Through a careful and thorough introduction of important analytical tools and methods, including fitness-based partitions, typical events and runs, and drift analysis, by the end of the tutorial the attendees will be able to apply these techniques to derive relevant runtime results for non-trivial evolutionary algorithms. Moreover, the attendees will be fully prepared to follow the more advanced tutorials that cover more specialized aspects of the field, including the new advanced runtime analysis tutorial on realistic population-based EAs. To ensure coverage of the topics required in the specialised tutorials, this introductory tutorial will be coordinated with the presenters of the more advanced ones. In addition to custom-tailored methods for the analysis of evolutionary algorithms, we also introduce the relevant tools and notions from probability theory in an accessible form. The tutorial is based on the 'theoretical analysis of stochastic search heuristics' chapter of the forthcoming Springer Handbook of Heuristics.
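
As a small taste of the fitness-based partition (fitness-level) method mentioned above, the sketch below, which is our own illustration rather than the tutorial's material, simulates the (1+1) EA on OneMax and computes the classical fitness-level upper bound of e·n·H_n on its expected runtime:

```python
import math
import random

def one_plus_one_ea_onemax(n, seed=None):
    """Run the (1+1) EA with standard bit mutation (rate 1/n) on OneMax;
    return the number of generations until the all-ones string is found."""
    rng = random.Random(seed)
    x = [rng.randrange(2) for _ in range(n)]
    generations = 0
    while sum(x) < n:
        # flip each bit independently with probability 1/n
        y = [bit ^ (rng.random() < 1.0 / n) for bit in x]
        if sum(y) >= sum(x):  # elitist acceptance
            x = y
        generations += 1
    return generations

def fitness_level_bound(n):
    """Fitness-level upper bound e * n * H_n on the expected runtime,
    where H_n is the n-th harmonic number."""
    return math.e * n * sum(1.0 / i for i in range(1, n + 1))
```

Averaging many independent runs and comparing against `fitness_level_bound(n)` illustrates how the proof technique gives a bound that the empirical mean respects.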

 

Per Kristian Lehre

Dr Lehre is a Senior Lecturer in the School of Computer Science at the University of Birmingham (since January 2017). Before joining Birmingham, he was an Assistant Professor at the University of Nottingham from 2011. He obtained his MSc and PhD degrees in Computer Science from the Norwegian University of Science and Technology (NTNU) in Trondheim, completing the PhD in 2006 under the supervision of Prof Pauline Haddow. He joined the School of Computer Science at the University of Birmingham, UK, as a Research Fellow in January 2007, working with Prof Xin Yao, and was a Postdoctoral Fellow at DTU Informatics, Technical University of Denmark in Lyngby, Denmark, from April 2010.

Dr Lehre's research interests are in theoretical aspects of nature-inspired search heuristics, in particular runtime analysis of population-based evolutionary algorithms. His research has won numerous best paper awards, including at GECCO (2013, 2010, 2009, 2006), ICSTW (2008), and ISAAC (2014). He is vice-chair of the IEEE Task Force on Theoretical Foundations of Bio-inspired Computation, a member of the editorial board of Evolutionary Computation, and an associate editor of IEEE Transactions on Evolutionary Computation. He has guest-edited special issues of Theoretical Computer Science and IEEE Transactions on Evolutionary Computation on theoretical foundations of evolutionary computation. Dr Lehre has given many tutorials on evolutionary computation at summer schools (UK: 2007, 2015, 2016; France: 2009, 2010, and 2011; Estonia: 2010), as well as at major conferences and workshops (GECCO 2013-2020, CEC 2013 and 2015, PPSN 2016, 2018, 2020, and ThRaSH 2013). He was the main coordinator of the 2M euro EU-funded project SAGE, which brought together the theory of evolutionary computation and population genetics.

 

Pietro Simone Oliveto

Dr Oliveto is a Senior Lecturer and an EPSRC Early Career Fellow at the University of Sheffield, UK. He received the Laurea degree and PhD degree in computer science from the University of Catania, Italy, in 2005 and from the University of Birmingham, UK, in 2009, respectively. From October 2007 to April 2008, he was a visiting researcher at the Efficient Algorithms and Complexity Theory Institute at the Department of Computer Science of the University of Dortmund, where he collaborated with Prof. Ingo Wegener's research group. He has been an EPSRC PhD+ Fellow (2009-2010) and EPSRC Postdoctoral Fellow (2010-2013) at Birmingham, and Vice-Chancellor's Fellow at Sheffield (2013-2016). His main research interest is the performance analysis of bio-inspired computation techniques, including evolutionary algorithms, genetic programming, artificial immune systems and hyper-heuristics. He has published a review paper of the runtime analysis field and two book chapters on the theoretical techniques for the analysis. He has won best paper awards at the GECCO 2008, ICARIS 2011 and GECCO 2014 conferences. Dr. Oliveto has regularly given tutorials on computational complexity analysis at the main evolutionary computation conferences (including GECCO, CEC, SSCI and PPSN) since 2012. He has guest-edited journal special issues of Computer Science and Technology, Evolutionary Computation, Theoretical Computer Science and IEEE Transactions on Evolutionary Computation. He co-chaired the IEEE Symposium on Foundations of Computational Intelligence (FOCI) from 2015 to 2018. He is part of the Steering Committee of the annual workshop on Theory of Randomized Search Heuristics (ThRaSH), Leader of the Benchmarking Working Group of the COST Action ImAppNIO, a member of the EPSRC Peer Review College, and an Associate Editor of IEEE Transactions on Evolutionary Computation.

NEW Theoretical Foundations of Evolutionary Computation for Beginners and Veterans

There are more than 40 years of theoretical research in evolutionary computation covering a wide array of topics. For example, it is not widely known that the behavior of an Evolutionary Algorithm can be influenced by attractors that exist outside the space of the feasible population.

The tutorial will review pseudo-Boolean optimization and explain the use of transforms from OR and quantum computing that can convert a pseudo-Boolean optimization problem into a k-bounded form. For every k-bounded pseudo-Boolean optimization problem, the location of improving moves (i.e., bit flips) can be computed in constant time, making most mutation operators obsolete.
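
To illustrate the flavour of this result (a simplified sketch of ours, not the tutorial's actual machinery), consider a k-bounded function written as a sum of subfunctions, each over at most k bits. The fitness change of every possible bit-flip can be precomputed; after a flip, only the deltas of bits that share a subfunction with the flipped bit need refreshing, which is constant-time work when k and the number of subfunctions per bit are bounded:

```python
def evaluate(subfns, x):
    """f(x) = sum of subfunction tables, each reading at most k bits of x.
    Each subfunction is (vars, table) with table keyed by the bits' values."""
    return sum(table[tuple(x[v] for v in vars_)] for vars_, table in subfns)

def flip_deltas(subfns, x, n):
    """delta[i] = f(x with bit i flipped) - f(x), for every bit i."""
    delta = [0.0] * n
    for vars_, table in subfns:
        key = tuple(x[v] for v in vars_)
        for j, v in enumerate(vars_):
            flipped = key[:j] + (1 - key[j],) + key[j + 1:]
            delta[v] += table[flipped] - table[key]
    return delta
```

Any i with delta[i] > 0 (for maximization) is an improving move; a buffer of such indices can be maintained incrementally instead of rescanning all n bits after each flip.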

The tutorial will cover 1) No Free Lunch, 2) Sharpened No Free Lunch and 3) Focused No Free Lunch, and how the different theoretical proofs can lead to seemingly contradictory conclusions. (What many researchers know about No Free Lunch might actually be wrong.)
The tutorial will also cover the relationship between functions and representations, the space of all possible representations and the duality between search algorithms and landscapes. The tutorial will explain both Elementary Landscapes and eigenvectors of search neighborhoods in simple terms and explain how the two are related.

Finally, the tutorial will also offer a cautionary critique of theory. While theoretical results are typically based on proofs, virtually all proofs are based on assumptions. And yet, theory is sometimes leveraged far beyond the supporting assumptions, which can lead to misleading claims and false conclusions. Every researcher in the field of Evolutionary Computation needs to be a wiser consumer of both theoretical and empirical results.

Darrell Whitley

Darrell Whitley is a Professor of Computer Science at Colorado State University. He served as the Chair of the International Society of Genetic Algorithm from 1993 to 1997, and as the Editor-in-Chief of the journal Evolutionary Computation from 1997 to 2003. He was Chair of the Governing Board of ACM SIGEVO from 2007 to 2011. He was named an ACM Fellow in 2019 for his contributions to the field of genetic and evolutionary computation.

Advanced Tutorials

Advanced Learning Classifier Systems

This tutorial presents how advanced Learning Classifier Systems embed Evolution with Machine Learning to produce transparent, explainable and flexible AI systems. Evolutionary Machine Learning is continuing to grow in popularity due to its combination of efficient genetic search with effective machine learning. Although this is led by Evolutionary Deep Learning and Neuroevolution, where evolution seeks to find the frameworks/parameterisation for connectionist approaches, there are alternatives. Learning Classifier Systems adopt more of a symbolic approach through evolving heuristics, which helps advance explainable AI. Furthermore, LCS are over 40 years old as a concept, and just as artificial neural network approaches have advanced in that time, so have LCS.

The reasons why many concepts associated with LCS are now considered superseded (e.g. the bucket brigade) are presented, with modern reinforcement learning-based updates considered instead. Similarly, instead of discussing old, restricted representations such as the ternary alphabet, the most recent advances in knowledge representation are described for adaptation to real-world domains. How LCS can be adapted to problems ranging from bioinformatics to robotic navigation will be explained.

How to adapt LCS to future academic research directions, for example continual learning, will be considered. Hybrid methods with other EC techniques, such as Genetic Programming for feature selection and feature construction, will be discussed. Inspiration from natural cognitive learning will be used to motivate directions for further LCS enhancements.

 

Will Browne

Associate Prof Will Browne’s research focuses on applied cognitive systems. Specifically, how to use inspiration from natural intelligence to enable computers/machines/robots to behave usefully. This includes cognitive robotics, learning classifier systems, and modern heuristics for industrial application. A/Prof. Browne has been co-track chair for the Genetics-Based Machine Learning (GBML) track and the co-chair for the Evolutionary Machine Learning track at Genetic and Evolutionary Computation Conference. He has also provided tutorials on Rule-Based Machine Learning at GECCO, chaired the International Workshop on Learning Classifier Systems (LCSs) and lectured graduate courses on LCSs. He has recently co-authored the first textbook on LCSs ‘Introduction to Learning Classifier Systems, Springer 2017’. Currently, he leads the LCS theme in the Evolutionary Computation Research Group at Victoria University of Wellington, New Zealand.

Automated Algorithm Configuration and Design

Most optimization algorithms, including evolutionary algorithms and metaheuristics, and general-purpose solvers for integer or constraint programming, often have many parameters that need to be properly designed and tuned to obtain the best results on a particular problem. Automatic (offline) algorithm design methods help algorithm users determine the parameter settings that optimize the performance of the algorithm before the algorithm is actually deployed. Moreover, automatic offline algorithm design methods may potentially lead to a paradigm shift in algorithm design because they enable algorithm designers to explore much larger design spaces than traditional trial-and-error and experimental design procedures. Thus, algorithm designers can focus on inventing new algorithmic components, combine them in flexible algorithm frameworks, and let final algorithm design decisions be taken by automatic algorithm design techniques for specific application contexts.

This tutorial is structured into two main parts. In the first part, we will give an overview of the algorithm design and tuning problem, review recent methods for automatic algorithm design, and illustrate the potential of these techniques using recent, notable applications from the presenters' and other researchers' work. The second part of the tutorial will focus on a detailed discussion of more complex scenarios, including multi-objective problems, anytime algorithms, heterogeneous problem instances, and the automatic generation of algorithms from algorithm frameworks. The focus of this second part is, hence, on practical but challenging applications of automatic algorithm design. The second part will also demonstrate how to tackle algorithm design tasks using our irace software (http://iridia.ulb.ac.be/irace), which implements the iterated racing procedure for automatic algorithm design. We will provide a practical step-by-step guide on using irace for the typical algorithm design scenario.

Thomas Stützle

Thomas Stützle is a research director of the Belgian F.R.S.-FNRS working at the IRIDIA laboratory of Université libre de Bruxelles (ULB), Belgium. He received his PhD and his habilitation in computer science, both from the Computer Science Department of Technische Universität Darmstadt, Germany, in 1998 and 2004, respectively. He is the co-author of two books, "Stochastic Local Search: Foundations and Applications" and "Ant Colony Optimization", and he has extensively published in the wider area of metaheuristics, including 22 edited proceedings or books, 11 journal special issues, and more than 250 journal and conference articles and book chapters, many of which are highly cited. He is associate editor of Computational Intelligence, Evolutionary Computation, and Applied Mathematics and Computation, and is on the editorial board of seven other journals. His main research interests are in metaheuristics, swarm intelligence, methodologies for engineering stochastic local search algorithms, multi-objective optimization, and automatic algorithm configuration. In fact, for more than a decade he has been interested in automatic algorithm configuration and design methodologies, and he has contributed to effective algorithm configuration techniques such as F-race, iterated F-race and ParamILS.

Manuel López-Ibáñez

Dr. López-Ibáñez is Senior Distinguished Researcher at the University of Málaga (Spain) and a Senior Lecturer (Associate Professor) in the Decision and Cognitive Sciences Research Centre at the Alliance Manchester Business School, University of Manchester, UK. He received the M.S. degree in computer science from the University of Granada, Granada, Spain, in 2004, and the Ph.D. degree from Edinburgh Napier University, U.K., in 2009. He has published 27 journal papers, 9 book chapters and 48 papers in peer-reviewed proceedings of international conferences on diverse areas such as evolutionary algorithms, ant colony optimization, multi-objective optimization, pump scheduling and various combinatorial optimization problems. His current research interests are experimental analysis and automatic design of stochastic optimization algorithms, for single and multi-objective optimization. He is the lead developer and current maintainer of the irace software package (http://iridia.ulb.ac.be/irace).

NEW Benchmarking Multiobjective Optimizers 2.0

Benchmarking is an important part of algorithm design, selection and recommendation, both in single-objective and multiobjective optimization. Benchmarking multiobjective solvers seems at first sight more complicated than benchmarking single-objective ones, as there exists no natural total order on the objective space. In the past, comparisons of multiobjective solvers have therefore been done either entirely visually (at first) or via quality indicators and the attainment function. Only very recently did we realize that the quality indicator approach transforms a multiobjective problem into a single-objective (set-based) problem, and thus all recent progress from the rapidly evolving single-objective benchmarking field can be transferred to the multiobjective case as well. Moreover, many multiobjective test functions have been proposed in the past, but not much has changed in the last 15 or so years in terms of addressing the disadvantages of those problems (such as Pareto sets on constraint boundaries, usage of distance variables, etc.).

In this tutorial, we will discuss the past and future of benchmarking multiobjective optimizers. In particular, we will discuss the new view on benchmarking multiobjective algorithms by falling back on single-objective comparisons and thus being able to use all methodologies and tools from the single-objective domain such as empirical distributions of runtimes. We will also discuss the advantages and drawbacks of some widely used multiobjective test suites and explain how we can do better, going back to the roots of what a multi-objective problem is in practice, namely the simultaneous optimization of multiple objective functions. We will also discuss recent advances in the visualization of (multiobjective) problem landscapes and compare the previous and newly proposed benchmark problems in the context of those landscape visualizations.
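
The key observation that a quality indicator turns a set of objective vectors into a single scalar, reducing multiobjective benchmarking to single-objective comparisons, can be illustrated with the hypervolume indicator in two dimensions. This is a minimal sketch for minimization problems, not code from the tutorial:

```python
def hypervolume_2d(points, ref):
    """Area dominated by `points` (minimized component-wise) and bounded
    by the reference point `ref`: a scalar quality of the whole set, so
    solution sets can be compared like scalar fitness values."""
    # keep only points strictly better than the reference point
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, f2_prev = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < f2_prev:  # dominated points contribute nothing
            hv += (ref[0] - f1) * (f2_prev - f2)
            f2_prev = f2
    return hv
```

For example, the front {(1,3), (2,2), (3,1)} with reference point (4,4) dominates an area of 6, and adding a dominated point such as (2.5, 2.5) leaves the indicator value unchanged.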

Dimo Brockhoff

Dimo Brockhoff received his diploma in computer science from University of Dortmund, Germany in 2005 and his PhD (Dr. sc. ETH) from ETH Zurich, Switzerland in 2009. After two postdocs at Inria Saclay Ile-de-France (2009-2010) and at Ecole Polytechnique (2010-2011), he joined Inria in November 2011 as a permanent researcher (first in its Lille - Nord Europe research center and since October 2016 in the Saclay - Ile-de-France one). His research interests are focused on evolutionary multiobjective optimization (EMO), in particular on theoretical aspects of indicator-based search and on the benchmarking of blackbox algorithms in general.

Tea Tušar

Tea Tušar is a research fellow at the Department of Intelligent Systems of the Jožef Stefan Institute in Ljubljana, Slovenia. She was awarded the PhD degree in Information and Communication Technologies by the Jožef Stefan International Postgraduate School for her work on visualizing solution sets in multiobjective optimization. She has completed a one-year postdoctoral fellowship at Inria Lille in France, where she worked on benchmarking multiobjective optimizers. Her research interests include evolutionary algorithms for single-objective and multiobjective optimization, with emphasis on visualizing and benchmarking their results and applying them to real-world problems.

CMA-ES and Advanced Adaptation Mechanisms

The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is nowadays considered the state-of-the-art continuous stochastic search algorithm, in particular for optimization of non-separable, ill-conditioned and rugged functions. The CMA-ES consists of different components that adapt the step-size and the covariance matrix separately.

This tutorial will focus on CMA-ES and provide the audience with an overview of the different adaptation mechanisms used within CMA-ES and the scenarios where their relative impact is important. We will in particular present the rank-one update, rank-mu update, and active CMA for the covariance matrix adaptation. Different step-size mechanisms (CSA and TPA) as well as the scenario where they should be applied will be discussed.
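
The rank-one update mentioned above has a compact algebraic core. The sketch below is illustrative only (the learning rate c1 is an arbitrary placeholder, not CMA-ES's default): the covariance matrix is pulled towards the outer product of the evolution path, elongating the search distribution along recently successful directions:

```python
import numpy as np

def rank_one_update(C, p, c1=0.1):
    """Rank-one CMA covariance update: C <- (1 - c1) * C + c1 * p p^T,
    where p is the evolution path (a smoothed sum of recent steps)."""
    return (1 - c1) * C + c1 * np.outer(p, p)
```

After the update, sampling uses N(m, sigma^2 C); the step-size sigma is adapted by a separate mechanism (e.g., CSA or TPA), which is exactly the separation of components the tutorial discusses.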

We will address important design principles as well as questions related to parameter tuning that always accompany algorithm design. The input parameters such as the initial mean, the initial step-size, and the population size will be discussed in relation with the ruggedness of functions. Restart strategies that automatize the input parameter tuning will be presented.

 

Youhei Akimoto

Youhei Akimoto is an associate professor at the University of Tsukuba, Japan. He received his diploma (2007) in computer science, and his master's degree (2008) and PhD (2011) in computational intelligence and systems science, from Tokyo Institute of Technology, Japan. He was a research fellow of JSPS in Japan (2010-2011), a post-doctoral research fellow at INRIA in France (2011-2013), and an assistant professor at Shinshu University in Japan (2013-2018). He served as track chair for the CO (now ENUM) track at GECCO 2015 and 2016. He serves as an associate editor of ACM TELO and a member of the editorial board of ECJ. His research interests include design principles, theoretical analysis, and applications of stochastic search heuristics.

 

Nikolaus Hansen

Nikolaus Hansen is a research director at Inria, France. Educated in medicine and mathematics, he received a Ph.D. in civil engineering in 1998 from the Technical University Berlin under Ingo Rechenberg. Before he joined Inria, he has been working in evolutionary computation, genomics and computational science at the Technical University Berlin, the Institute of Genetic Medicine and the ETH Zurich. His main research interests are learning and adaptation in evolutionary computation and the development of algorithms applicable in practice. His best-known contribution to the field of evolutionary computation is the so-called Covariance Matrix Adaptation (CMA).

Coevolutionary Computation for Adversarial Deep Learning

In recent years, machine learning with Generative Adversarial Networks (GANs) has been recognized as a powerful method for generative modeling. Generative modeling is the problem of estimating the underlying distribution of a set of samples. GANs accomplish this using unsupervised learning. They have also been extended to handle semi-supervised and fully supervised learning paradigms. GANs have been successfully applied to many domains. They can generate novel images (e.g., image colorization or super-resolution, photograph editing, and text-to-image translation), sound (e.g., voice translation and music generation), and video (e.g., video-to-video translation, deep fakes generation, and AI-assisted video calls), finding application in domains of multimedia information, engineering, science, design, art, and games.

GANs are an adversarial paradigm: two neural networks compete with each other, using antagonistic loss functions to train their parameters with gradient descent. This connects them to evolution, because evolution also exhibits adversarial engagements and competitive coevolution. In fact, the evolutionary computation community's study of coevolutionary pathologies, and its work on competitive and cooperative coevolutionary algorithms, offers a means of resolving the convergence impasses often encountered in GAN training.

In this tutorial we will explain:
- The main concepts of generative modeling and adversarial learning.
- GAN gradient-based training and the main pathologies that prevent ideal convergence. Specifically, we will explain mode collapse, oscillation, and vanishing gradients.
- Coevolutionary algorithms and how they can be applied to train GANs. Specifically, we will explain how algorithm enhancements address non-ideal convergence.
- To demonstrate, we will draw upon the open-source Lipizzaner framework (http://lipizzaner.csail.mit.edu/). This framework is easy to use and extend. It sets up a spatial grid of communicating populations of GANs.

Students will be given the opportunity to set up and use the Lipizzaner framework during the tutorial by means of a Jupyter notebook expressly developed for teaching purposes.
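None of the following is Lipizzaner code; it is a stdlib-only toy (all names are ours) illustrating the adversarial structure described above: alternating (1+1)-style competitive coevolution on the saddle game f(x, y) = x * y, where one player minimizes and the other maximizes.

```python
import random

def payoff(x, y):
    """Zero-sum saddle game f(x, y) = x * y: the x-player minimizes, the y-player maximizes."""
    return x * y

def coevolve(steps=200, sigma=0.1, seed=0):
    """Alternating (1+1)-style competitive coevolution: each player mutates its
    parameter and keeps the mutant only if it does not worsen its own payoff
    against the frozen opponent."""
    rng = random.Random(seed)
    x, y = 1.0, -1.0  # initial parameters for the two adversaries
    for _ in range(steps):
        cand = x + rng.gauss(0, sigma)
        if payoff(cand, y) <= payoff(x, y):  # minimizer accepts non-worsening moves
            x = cand
        cand = y + rng.gauss(0, sigma)
        if payoff(x, cand) >= payoff(x, y):  # maximizer accepts non-worsening moves
            y = cand
    return x, y
```

Because each player adapts only against the current frozen opponent, runs of this toy can cycle around the saddle point at the origin instead of converging; that is a miniature of the oscillation pathology that motivates spatial, population-based remedies such as Lipizzaner's grid.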

 

Jamal Toutouh

I am a Marie Skłodowska-Curie Postdoctoral Fellow at the Massachusetts Institute of Technology (MIT) in the USA, at the MIT CSAIL Lab. I obtained my Ph.D. in Computer Engineering at the University of Malaga (Spain). My dissertation, Natural Computing for Vehicular Networks, was awarded the 2018 Best Spanish Ph.D. Thesis in Smart Cities; it focused on the application of machine learning methods inspired by nature to address Smart Mobility problems.
My current research explores the combination of nature-inspired gradient-free and gradient-based methods to address Adversarial Machine Learning. The main idea is to devise new algorithms that improve the efficiency and efficacy of state-of-the-art methods, mainly by applying co-evolutionary approaches. Besides this, I am working on the application of Machine Learning to problems related to Smart Mobility, Smart Cities, and Climate Change.

 

Una-May O'Reilly

Una-May O'Reilly is leader of the AnyScale Learning For All (ALFA) group at MIT CSAIL. ALFA focuses on evolutionary algorithms, machine learning, and frameworks for large-scale knowledge mining, prediction, and analytics. The group has projects in cyber security, using coevolutionary algorithms to explore adversarial dynamics in networks and malware detection. Una-May received the EvoStar Award for Outstanding Achievements in Evolutionary Computation in Europe in 2013. She is a Junior Fellow (elected before age 40) of the International Society of Genetic and Evolutionary Computation, which has evolved into ACM SIGEVO. She now serves as Vice-Chair of ACM SIGEVO. She served as chair of the largest international evolutionary computation conference, GECCO, in 2005.

Constraint-Handling Techniques used with Evolutionary Algorithms

Evolutionary Algorithms (EAs), when used for global optimization, can be seen as unconstrained optimization techniques. Therefore, they require an additional mechanism to incorporate constraints of any kind (i.e., inequality, equality, linear, nonlinear) into their fitness function. Although the use of penalty functions (very popular with mathematical programming techniques) may seem an obvious choice, this sort of approach requires a careful fine-tuning of the penalty factors. Otherwise, an EA may be unable to reach the feasible region (if the penalty is too low), or may quickly reach the feasible region but be unable to locate solutions that lie on the boundary with the infeasible region (if the penalty is too severe). This has motivated the development of a number of approaches to incorporate constraints into the fitness function of an EA. This tutorial will cover the main proposals in current use, including novel approaches such as the use of tournament rules based on feasibility, multiobjective optimization concepts, hybrids with mathematical programming techniques (e.g., Lagrange multipliers), cultural algorithms, and artificial immune systems, among others. Other topics, such as the importance of maintaining diversity, current benchmarks, and the use of alternative search engines (e.g., particle swarm optimization, differential evolution, evolution strategies, etc.), will also be discussed (as time allows).
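As an illustration of the feasibility-based tournament rules mentioned above, here is a stdlib sketch on a toy problem (minimize x² subject to x ≥ 1; the problem and all names are ours), next to the static-penalty alternative whose factor r must be hand-tuned:

```python
def objective(x):
    """Toy objective: minimize x**2."""
    return x * x

def violation(x):
    """Total constraint violation for the toy constraint x >= 1 (0.0 means feasible)."""
    return max(0.0, 1.0 - x)

def better(a, b):
    """Binary-tournament comparator using feasibility rules:
    1. a feasible solution beats an infeasible one;
    2. two feasible solutions are compared by objective value;
    3. two infeasible solutions are compared by total violation."""
    va, vb = violation(a), violation(b)
    if va == 0.0 and vb == 0.0:
        return a if objective(a) <= objective(b) else b
    if va == 0.0:
        return a
    if vb == 0.0:
        return b
    return a if va <= vb else b

def penalized(x, r=1000.0):
    """Static-penalty alternative: fitness = objective + r * violation**2.
    The factor r needs careful tuning, which the feasibility rules avoid."""
    return objective(x) + r * violation(x) ** 2
```

The comparator needs no penalty factor at all, which is exactly why feasibility-based tournaments are attractive in practice.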

Carlos Coello Coello

Carlos Artemio Coello Coello received a PhD in Computer Science from Tulane University (USA) in 1996. His research has mainly focused on the design of new multi-objective optimization algorithms based on bio-inspired metaheuristics, which is an area in which he has made pioneering contributions. He currently has over 500 publications which, according to Google Scholar, report over 52,700 citations (with an h-index of 92).
He has received several awards, including the National Research Award (in 2007) from the Mexican Academy of Science (in the area of exact sciences), the 2009 Medal to the Scientific Merit from Mexico City's congress, the Ciudad Capital: Heberto Castillo 2011 Award for scientists under the age of 45 in Basic Science, the 2012 Scopus Award (Mexico's edition) for being the most highly cited scientist in engineering in the five years previous to the award, and the 2012 National Medal of Science in Physics, Mathematics and Natural Sciences from Mexico's presidency (the most important award that a scientist can receive in Mexico). He also received the Luis Elizondo Award from the Instituto Tecnológico de Monterrey in 2019. He is the recipient of the prestigious 2013 IEEE Kiyo Tomiyasu Award, "for pioneering contributions to single- and multiobjective optimization techniques using bioinspired metaheuristics", and of the 2016 The World Academy of Sciences (TWAS) Award in Engineering Sciences. He has been an IEEE Fellow since January 2011. He is also an associate editor of several international journals, including the IEEE Transactions on Evolutionary Computation, Evolutionary Computation, and the IEEE Transactions on Emerging Topics in Computational Intelligence.
He is currently Full Professor with distinction at the Computer Science Department of CINVESTAV-IPN in Mexico City, Mexico.

Decomposition Multi-Objective Optimization: Current Developments and Future Opportunities

Evolutionary multi-objective optimization (EMO) has been a major research topic in the field of evolutionary computation for many years. It is generally accepted that the combination of evolutionary algorithms and traditional optimization methods should form the next generation of multi-objective optimization solvers. As the name suggests, the basic idea of the decomposition-based technique is to transform the original complex problem into simplified subproblems so as to facilitate the optimization. Decomposition methods have been widely used and studied in traditional multi-objective optimization. MOEA/D decomposes a multi-objective problem into a number of subproblems and then solves them in a collaborative manner. MOEA/D provides a very natural bridge between multi-objective evolutionary algorithms and traditional decomposition methods, and has become a commonly used evolutionary algorithmic framework in recent years.

Within this tutorial, a comprehensive introduction to MOEA/D will be given and selected research results will be presented in more detail. More specifically, we are going to (i) introduce the basic principles of MOEA/D in comparison with the other two state-of-the-art EMO frameworks, i.e., the Pareto- and indicator-based frameworks; (ii) present a general overview of state-of-the-art MOEA/D variants and their applications; (iii) discuss future opportunities for possible further developments.
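A minimal sketch of the decomposition step itself may help: below is the weighted Tchebycheff scalarization, one common choice in MOEA/D, applied per subproblem (a stdlib toy, not the tutorial's code; all names are ours).

```python
def tchebycheff(f, weights, z_star):
    """Weighted Tchebycheff scalarization g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|,
    where z* is the (ideal) reference point."""
    return max(w * abs(fi - zi) for w, fi, zi in zip(weights, f, z_star))

def decompose(objective_vectors, weight_vectors, z_star):
    """Assign to each subproblem (weight vector) the candidate objective vector
    that minimizes its Tchebycheff value: the core decomposition step of MOEA/D,
    which the full algorithm refines collaboratively across neighbouring subproblems."""
    return [min(objective_vectors, key=lambda f: tchebycheff(f, w, z_star))
            for w in weight_vectors]
```

Each weight vector thus defines one single-objective subproblem, and varying the weights sweeps the Pareto front.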

Ke Li

Ke Li is a Senior Lecturer (Associate Professor) in Computer Science at the Department of Computer Science, University of Exeter. He earned his PhD from the City University of Hong Kong. Afterwards, he spent a year as a postdoctoral research associate at Michigan State University, then moved to the UK and took the post of research fellow at the University of Birmingham. His current research interests include evolutionary multi-objective optimization, automatic problem solving, machine learning, and applications in water engineering and software engineering. He is the founding chair of the IEEE CIS Task Force on Decomposition-based Techniques in Evolutionary Computation. He currently serves as an associate editor of the IEEE Transactions on Evolutionary Computation, the International Journal of Machine Learning and Cybernetics, and Complex & Intelligent Systems, and has served as a guest editor for Neurocomputing and Multimedia Tools and Applications. Recently, he was awarded a prestigious UKRI Future Leaders Fellowship.

 

Qingfu Zhang

Qingfu Zhang is a Professor at the Department of Computer Science, City University of Hong Kong. His main research interests include evolutionary computation, optimization, neural networks, data analysis, and their applications. He is currently leading the Metaheuristic Optimization Research (MOP) Group at the City University of Hong Kong. Professor Zhang is an Associate Editor of the IEEE Transactions on Evolutionary Computation and the IEEE Transactions on Cybernetics. He was awarded the 2010 IEEE Transactions on Evolutionary Computation Outstanding Paper Award. He is on the Thomson Reuters 2016 and 2017 lists of highly cited researchers in computer science. He is an IEEE Fellow.

Dynamic Multi-objective Optimization: Introduction, Challenges, Applications and Future Directions

Most real-life optimization problems have more than one objective, with at least two objectives in conflict with one another and at least one objective that changes over time. These kinds of optimization problems are referred to as dynamic multi-objective optimization (DMOO) problems. Instead of restarting the optimization process after a change in the environment has occurred, previous knowledge is reused; if the changes are small enough, this may lead to new solutions being found much more quickly.

Most research in multi-objective optimization has been conducted on static problems and most research on dynamic problems has been conducted on single-objective optimization. The goal of a DMOO algorithm (DMOA) is to find an optimal set of solutions that is as close as possible to the true set of solutions (similar to static MOO) and that contains a diverse set of solutions. However, in addition to these goals a DMOA also has to track the changing set of optimal solutions over time. Therefore, the DMOA also has to deal with the problems of a lack of diversity and outdated memory (similar to dynamic single-objective optimization).

This tutorial will introduce the participants to the field of DMOO by discussing: benchmark functions and performance measures that have been proposed and the issues related to each of these; algorithms that have been proposed to solve DMOO; issues with the comparison of DMOAs’ performance and ensuring a fair comparison; analysing the performance of DMOAs and why traditional approaches used for static MOO are not necessarily adequate; challenges in the DMOO field that are not yet addressed, such as incorporating a decision maker’s preference in DMOO and visualizing the behavior of DMOAs; real-world applications; and emerging research fields (such as many-objective optimization and fitness landscape analysis) that provide interesting research opportunities in the field of DMOO.

Mardé Helbig

Mardé Helbig is a Senior Lecturer at the School of ICT at Griffith University in Australia. Her research focuses on solving dynamic multi-objective optimization (DMOO) problems using computational intelligence algorithms. She is a sub-committee member of the IEEE CIS Young Professionals and IEEE Women in CI, and a member of the IEEE CIS Emerging Technologies Technical Committee. She has organised special sessions and presented tutorials and keynotes on DMOO at various conferences. She is an executive committee member of the South African Young Academy of Science and has received the 2018/2019 TW Kambule-NSTF: Emerging Researcher award.

Evolutionary Multi- and Many-Objective Optimization: Methodologies, Applications and Demonstration

Evolutionary multi-criterion optimization (EMO) has become an established subfield within the evolutionary computation (EC) field, simply because of its appeal and importance to practical problems. Real-world problems usually involve multiple conflicting criteria, and the resulting outcome is a set of trade-off solutions. EC methods have proven useful for solving these problems compared to their classical counterparts, since they can find and store multiple solutions in their populations and they provide an implicit parallel search. EMO research started in the early nineties (more than two decades ago) and many different methodologies have been suggested since then. Software companies have emerged with specialized products for solving multi-criterion problems in industrial settings.
In the past 20 years, besides developing new methodologies, the EMO field has also found new directions for research and application, which are relatively unknown to many EC researchers not directly working in the EMO area. Some of these new ideas have deep-rooted applications to other EC areas, and EMO researchers can borrow key EC findings to improve EMO algorithms and applications. The new ideas need to be explored and nurtured further, requiring deep theoretical, algorithmic, and application-oriented studies. In this tutorial, we shall present past studies, highlight recent advances, and discuss possible future research areas in the EMO field, so that EC researchers and practitioners in general, and EMO researchers in particular, can obtain a comprehensive view of the scope and directions of research in the EMO area.
In addition, we shall demonstrate the use of a public-domain software "pymoo" in solving multi- and many-objective optimization problems.
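The trade-off solutions discussed above form a non-dominated (Pareto) set. As a library-independent sketch (pymoo, mentioned above, ships such utilities; this toy, with names of our choosing, assumes minimization of all objectives):

```python
def dominates(a, b):
    """True if objective vector a dominates b under minimization:
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the Pareto-optimal (non-dominated) subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

This quadratic-time filter is only illustrative; production EMO codes use faster non-dominated sorting.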

Kalyanmoy Deb

Kalyanmoy Deb is Koenig Endowed Chair Professor at the Department of Electrical and Computer Engineering in Michigan State University, USA. Prof. Deb's research interests are in evolutionary optimization and its application in multi-criterion optimization, modeling, and machine learning. He has been a visiting professor at various universities across the world including University of Skövde in Sweden, Aalto University in Finland, Nanyang Technological University in Singapore, and IITs in India. He was awarded the IEEE Evolutionary Computation Pioneer Award for his sustained work in EMO, the Infosys Prize, the TWAS Prize in Engineering Sciences, the CajAstur Mamdani Prize, the Distinguished Alumni Award from IIT Kharagpur, the Edgeworth-Pareto award, the Bhatnagar Prize in Engineering Sciences, and the Bessel Research award from Germany. He is a fellow of IEEE, ASME, and three Indian science and engineering academies. He has published over 548 research papers with a Google Scholar citation count of over 149,000 and an h-index of 123. He serves on the editorial boards of 18 major international journals. More information about his research contribution can be found at https://www.coin-lab.org.

Julian Blank

Julian Blank is a PhD student in the Department of Computer Science and Engineering at Michigan State University. He received his B.Sc. in Business Information Systems from Otto von Guericke University, Germany in 2010. He was a visiting scholar for six months at the Michigan State University, Michigan, USA in 2015, and finished his M.Sc. in Computer Science at Otto von Guericke University, Germany, in 2016. He is the lead developer of pymoo, an open-source multi-objective optimization framework in Python. His research interests include evolutionary computation, multi-objective optimization, surrogate-assisted optimization, and machine learning.

NEW Evolutionary Submodular Optimisation

Submodular functions play a key role in optimisation and machine learning, as many real-world problems can be stated in terms of a submodular function. Submodular functions capture problems that exhibit diminishing returns when additional components are added to a solution.

The goal of the tutorial is to give an overview on the area of evolutionary submodular optimisation that has gained increasing attention in the evolutionary computation and artificial intelligence community in recent years.

The audience will gain an overview of recent research on submodular optimisation using evolutionary computing methods and corresponding results published at conferences such as GECCO, PPSN, AAAI and IJCAI.
This innovative new research area has shown that evolutionary computing techniques achieve the same theoretical performance guarantees as traditional approaches, but usually perform much better in practice. The tutorial will summarize the current state of the art in this research area and point out research gaps and future research directions.

The target audience includes researchers interested in the area of evolutionary computation for combinatorial optimisation. An interest in theoretical guarantees of algorithms with respect to their approximation behaviour and/or in experimental algorithmic research sets a good basis for attending the tutorial, but is not required, as the tutorial will cover both theoretical and applied aspects of evolutionary submodular optimisation.
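A concrete toy instance of the diminishing-returns property is maximum coverage. The greedy baseline below is the classical reference point against which evolutionary approaches are compared in this literature (the instance and all names are ours):

```python
def coverage(sets, chosen):
    """Monotone submodular objective: number of ground elements covered
    by the union of the chosen sets (given as indices into `sets`)."""
    covered = set()
    for i in chosen:
        covered |= sets[i]
    return len(covered)

def greedy_coverage(sets, k):
    """Classic greedy for a cardinality constraint |S| <= k: repeatedly add the
    set with the largest marginal coverage gain. For monotone submodular
    objectives this achieves the (1 - 1/e) approximation guarantee."""
    chosen = []
    for _ in range(k):
        best = max((i for i in range(len(sets)) if i not in chosen),
                   key=lambda i: coverage(sets, chosen + [i]))
        chosen.append(best)
    return chosen
```

Diminishing returns means the marginal gain of any fixed set can only shrink as the chosen solution grows, which is what makes both the greedy guarantee and the evolutionary analyses possible.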

Aneta Neumann

Aneta Neumann graduated in Computer Science from the Christian-Albrechts-University of Kiel, Germany, and received her PhD from the University of Adelaide, Australia. She is currently a postdoctoral researcher at the School of Computer Science, The University of Adelaide, within the Premier's Research and Industry Fund Research Consortium. She has presented invited talks at UCL, Goldsmiths University of London, the University of Nottingham, the University of Sheffield, the Hasso Plattner Institute at the University of Potsdam, Sorbonne University, the University of Melbourne, and the University of Sydney. Aneta is a co-designer and co-lecturer for the EdX Big Data Fundamentals course in the Big Data MicroMasters® program and the Online Pearson Master of Data Science course Working with Big Data. She received an ACM Women scholarship, sponsored by Google, Microsoft, and Oracle, a Hans-Juergen and Marianna Ohff Research Grant in 2018, and a Best Paper Nomination at GECCO 2019 in the track “Genetic Algorithms”.

Her main research interest focuses on bio-inspired computation, particularly dynamic and stochastic optimisation, evolutionary diversity optimisation, submodular functions, and optimisation under uncertainty in practice. Moreover, her work contributes to understanding the fundamental link between bio-inspired computation, machine learning, and computational creativity. She investigates evolutionary image transition and animation in the area of Artificial Intelligence and examines how to develop designs and applications of artificial intelligence methods based on complex agent-based models. Aneta has given tutorials on Evolutionary Computation for Digital Art at GECCO 2018-2020 (https://www.researchgate.net/publication/342762835_Evolutionary_Computation_for_Digital_Art) and a tutorial on Evolutionary Diversity Optimisation at PPSN 2020.

Frank Neumann

Frank Neumann received his diploma and Ph.D. from the Christian-Albrechts-University of Kiel in 2002 and 2006, respectively. He is a professor and leader of the Optimisation and Logistics Group at the School of Computer Science, The University of Adelaide, Australia. Frank has been the general chair of the ACM GECCO 2016. With Kenneth De Jong he organised ACM FOGA 2013 in Adelaide and together with Carsten Witt he has written the textbook "Bioinspired Computation in Combinatorial Optimization - Algorithms and Their Computational Complexity" published by Springer. He is an Associate Editor of the journals "Evolutionary Computation" (MIT Press) and ACM Transactions on Evolutionary Learning and Optimization. In his work, he considers algorithmic approaches in particular for combinatorial and multi-objective optimization problems and focuses on theoretical aspects of evolutionary computation as well as high impact applications in the areas of renewable energy, logistics, and mining.

Chao Qian

Chao Qian is an Associate Professor in the School of Artificial Intelligence, Nanjing University, China. He received his BSc and PhD degrees from the Department of Computer Science and Technology, Nanjing University. After finishing his PhD in 2015, he was an Associate Researcher in the School of Computer Science and Technology, University of Science and Technology of China, until 2019, when he returned to Nanjing University.

His research interests are mainly theoretical analysis of evolutionary algorithms and designing evolutionary algorithms with provable approximation guarantee for sophisticated optimization problems, particularly in machine learning. He has published one book “Evolutionary Learning: Advances in Theories and Algorithms” and more than 30 papers in top-tier journals (e.g., AIJ, TEvC, ECJ, Algorithmica, TCS) and conferences (e.g., AAAI, IJCAI, NeurIPS). He has won the ACM GECCO 2011 Best Theory Paper Award, and the IDEAL 2016 Best Paper Award. He is chair of IEEE Computational Intelligence Society (CIS) Task Force on Theoretical Foundations of Bio-inspired Computation.

NEW Lexicase Selection

Lexicase parent selection chooses individuals in an EC population not based on a single, aggregated fitness value, but based on random sequences of test cases (or objectives) used to filter the population down to a single individual. By never aggregating performance on separate cases, lexicase selection effectively makes use of additional information for parent selection. In Genetic Programming, this results in a semantic-aware parent selection, although lexicase selection has proven useful in other EC systems without a genotypic/semantic distinction. Since lexicase selection places importance on performing well on subsets of the test cases rather than generally on all cases at once, it often selects specialist individuals who perform well on some parts of a problem while performing poorly on others. These specialists contribute to the maintenance of increased diversity in populations evolved under lexicase selection. However, unlike some methods that focus entirely on diversity, lexicase selection also provides strong selection pressure toward individuals that perform well on many combinations of test cases, which can lead to excellent overall performance.

Lexicase selection has recently been used in various evolutionary computation systems, often outperforming tournament selection as well as other more advanced parent selection techniques. While originally formulated for Genetic Programming, it has also been used for evolutionary many-objective optimization in Genetic Algorithms, in Learning Classifier Systems, in Evolutionary Robotics, and as a method of building machine learning ensembles. Many variants of lexicase selection have been proposed to tackle different issues. In particular, epsilon lexicase selection has performed well in problem domains where errors on test cases are real numbers. We will discuss these variants and when you might want to use them.

This tutorial will give an introduction to lexicase selection and will discuss major research directions involving both advances in its use and explanations for why it performs well. We will discuss its effects on population diversity, its ability to select specialist individuals, and its propensity toward hyperselection, where it selects the same individual many times in one generation. Participants will come away with a strong understanding of lexicase selection and how it can be applied to any test case-based evolutionary computation system. In addition, they will gain insight into the research frontiers in lexicase selection.
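The filtering procedure described above is short enough to sketch in full; the following stdlib implementation (names are ours, not the tutorial's code) assumes lower error is better:

```python
import random

def lexicase_select(population, errors, rng=random):
    """Select one parent by lexicase selection.
    `errors[i][j]` is individual i's error on test case j (lower is better).
    Cases are considered in a random order; at each case, only candidates
    with the best error on that case survive."""
    candidates = list(range(len(population)))
    cases = list(range(len(errors[0])))
    rng.shuffle(cases)
    for case in cases:
        best = min(errors[i][case] for i in candidates)
        candidates = [i for i in candidates if errors[i][case] == best]
        if len(candidates) == 1:
            break
    return population[rng.choice(candidates)]
```

The epsilon lexicase variant mentioned above relaxes the exact-equality filter to something like `errors[i][case] <= best + epsilon`, which behaves better when errors are real-valued.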

 

Thomas Helmuth

Thomas is an assistant professor of computer science at Hamilton College in New York. He received his Ph.D. from University of Massachusetts Amherst working with professor Lee Spector. His research focuses on applications of genetic programming to automatic software synthesis. He has contributed to genetic programming in the areas of parent selection, stack-based program representations (in particular Push), and the benchmarking of program synthesis systems.

 

William La Cava

William is a research associate in the Institute for Biomedical Informatics at the University of Pennsylvania. He received his Ph.D. from the University of Massachusetts Amherst with professors Kourosh Danai and Lee Spector. He is interested in the interpretability and fairness of predictive models used in healthcare. His contributions in genetic programming include methods for local search, parent selection, representation learning, and fairness.

Quality-Diversity Optimization

A fascinating aspect of natural evolution is its ability to produce a diversity of organisms that are all high performing in their niche. By contrast, the main artificial evolution algorithms are focused on pure optimization, that is, finding a single high-performing solution.

Quality-Diversity optimization (or illumination) is a recent type of evolutionary algorithm that bridges this gap by generating large collections of diverse solutions that are all high-performing. This concept was introduced by the Generative and Developmental Systems community between 2011 (Lehman & Stanley, 2011) and 2015 (Mouret and Clune, 2015) with the "Novelty Search with Local Competition" and "MAP-Elites" evolutionary algorithms. The main differences with multi-modal optimization algorithms are that (1) Quality-Diversity typically works in the behavioral space (or feature space), not in the genotypic space, and (2) Quality-Diversity attempts to fill the whole behavior space, even if a niche is not a peak in the fitness landscape. In the last 5 years, more than 100 papers have been written about quality diversity, many of them in the GECCO community (a non-exhaustive list is available at https://quality-diversity.github.io/papers).

The collections of solutions obtained by Quality-Diversity algorithms open many new applications for evolutionary computation. In robotics, they were used to create repertoires of behaviors (Cully & Mouret, 2016) and to allow robots to adapt to damage in a few minutes (Cully et al., 2015, Nature); in engineering, they can be used to propose a diversity of optimized aerodynamic shapes (Gaier et al., 2018, best paper of the CS track); they were also recently used in video games (Khalifa et al., 2018) and for Workforce Scheduling and Routing Problems (WSRP) (Urquhart & Hart, 2018).


This tutorial will give an overview of the various questions addressed in Quality-Diversity optimization, relying on examples from the literature. Past achievements and major contributions, as well as specific challenges in Quality-Diversity optimization will be described. The tutorial will in particular focus on:
- what is Quality-Diversity optimization?
- similarities and differences with traditional evolutionary algorithms;
- existing variants of Quality-Diversity algorithms;
- example of application: Learning behavioural repertoires in robotics, evolving 3D shapes for aerodynamic design;
- open questions and future challenges.

The tutorial will effectively complement the Complex Systems track, which usually contains several papers about Quality-Diversity algorithms. For instance, 56% (5/9) of the papers accepted in the CS track in 2020 used Quality-Diversity optimization, 36% (4/11) in 2019, and 20% (3/15) in 2018. After the conference, the video of the tutorial will be hosted on the quality-diversity website (https://quality-diversity.github.io/), which gathers papers and educational content related to quality-diversity algorithms.
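As a concrete sketch of the MAP-Elites side of this family (a stdlib toy, not any published implementation; the 1-D problem and all names are ours): an archive keyed by discretized behavior-descriptor cells, each cell keeping its best-so-far elite.

```python
import random

def map_elites(fitness, descriptor, bins, bounds, iterations=2000, seed=0):
    """Minimal 1-D MAP-Elites: keep, per behavior-space cell, the best solution seen."""
    rng = random.Random(seed)
    lo, hi = bounds
    archive = {}  # cell index -> (fitness, genome)

    def cell(x):
        """Discretize the behavior descriptor of x into one of `bins` cells."""
        b = descriptor(x)
        return min(int((b - lo) / (hi - lo) * bins), bins - 1)

    def try_insert(x):
        """Insert x into its cell if the cell is empty or x is fitter than its elite."""
        c, f = cell(x), fitness(x)
        if c not in archive or f > archive[c][0]:
            archive[c] = (f, x)

    for _ in range(100):               # random initialisation
        try_insert(rng.uniform(lo, hi))
    for _ in range(iterations):        # mutate a randomly chosen elite
        _, parent = rng.choice(list(archive.values()))
        child = min(max(parent + rng.gauss(0, 0.1), lo), hi)
        try_insert(child)
    return archive
```

With, say, `fitness = lambda x: -(x - 0.5) ** 2` and `descriptor = lambda x: x`, the result is not one optimum but a collection: the best achievable solution in every behavior cell, which is exactly the "illumination" idea described above.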

 

Antoine Cully

Antoine Cully is a Lecturer (Assistant Professor) at Imperial College London (United Kingdom). His research is at the intersection of artificial intelligence and robotics. He applies machine learning approaches, like evolutionary algorithms, to robots to increase their versatility and their adaptation capabilities. In particular, he has recently developed Quality-Diversity optimization algorithms to enable robots to autonomously learn large behavioural repertoires. For instance, this approach enabled legged robots to autonomously learn how to walk in every direction or to adapt to damage situations.
Antoine Cully received the M.Sc. and the Ph.D. degrees in robotics and artificial intelligence from the Pierre et Marie Curie University of Paris, France, in 2012 and 2015, respectively, and the engineer degree from the School of Engineering Polytech’Sorbonne, in 2012. His Ph.D. dissertation has received three Best-Thesis awards. He has published several journal papers in prestigious journals including Nature, IEEE Transaction in Evolutionary Computation, and the International Journal of Robotics Research. His work with Jean-Baptiste Mouret was recently featured on the cover of Nature (Cully et al., 2015) and received the "Outstanding Paper of 2015" award from the Society for Artificial Life (2016), and the French "La Recherche" award (2016).

 

Jean-Baptiste Mouret

Jean-Baptiste Mouret is a senior researcher ("directeur de recherche") at Inria, a French research institute dedicated to computer science and mathematics. He was previously an assistant professor ("maître de conférences") at ISIR (Institute for Intelligent Systems and Robotics), which is part of Université Pierre et Marie Curie - Paris 6 (UPMC, now Sorbonne Université). He obtained an M.S. in computer science from EPITA in 2004, an M.S. in artificial intelligence from the Pierre and Marie Curie University (Paris, France) in 2005, and a Ph.D. in computer science from the same university in 2008. He is the principal investigator of an ERC grant (ResiBots - Robots with animal-like resilience, 2015-2020) and was the recipient of a French ANR young researcher grant (Creadapt - Creative adaptation by Evolution, 2012-2015). Overall, J.-B. Mouret conducts research that intertwines evolutionary algorithms, neuro-evolution, and machine learning to make robots more adaptive. His work was recently featured on the cover of Nature (Cully et al., 2015) and received the 2017 ISAL Award for Distinguished Young Investigator in the field of Artificial Life, the Outstanding Paper of 2015 award from the Society for Artificial Life (2016), the French "La Recherche" award (2016), three GECCO best paper awards (2011, GDS track; 2017 & 2018, CS track), and the IEEE CEC best student paper award (2009). He co-chaired the Evolutionary Machine Learning track at GECCO 2019 and the Generative and Developmental Systems track in 2015.

 

Stéphane Doncieux

Stéphane Doncieux is Professeur des Universités (Professor) in Computer Science at Sorbonne University, Paris, France. He is an engineer of ENSEA, a French electronics engineering school. He obtained a Master's degree in Artificial Intelligence and Pattern Recognition in 1999 and defended a PhD in Computer Science in 2003. He was responsible, with Bruno Gas, for the SIMA research team from its creation in 2007 until 2011. From 2011 to 2018, he was the head of the AMAC (Architecture and Models of Adaptation and Cognition) research team, with 11 permanent researchers, 3 post-docs and engineers, and 11 PhD students. As of January 2019, he is deputy director of the ISIR lab, one of the largest robotics labs in France. He has organized several workshops on ER at conferences like GECCO or IEEE-IROS and has edited 2 books. Stéphane Doncieux was co-chair of the GECCO Complex Systems track in 2019 and 2020. His research is in cognitive robotics, with a focus on the use of evolutionary algorithms for the synthesis of robot controllers. He has worked on selective pressures and on the use of evolutionary methods in a developmental robotics approach, in which evolutionary algorithms are used for their creativity to bootstrap a cognitive process and allow it to acquire experience that can later be redescribed in another representation for faster and more effective task resolution. This was the goal of the H2020 DREAM European project that he coordinated (http://robotsthatdream.eu/).

Recent Advances in Landscape Analysis for Optimisation and Learning

The goal of this tutorial is to provide an overview of recent advances in landscape analysis for optimisation and learning. The subject matter will be relevant not only to delegates who are interested in applying landscape analysis for the first time, but also to those involved in landscape analysis research to obtain a broad view of recent developments in the field. Fitness landscapes were first introduced to aid in the understanding of natural evolution, but techniques were later developed for analysing fitness landscapes in the context of evolutionary computation. In the last decade, the field has experienced a large upswing in research, evident in the increased number of published papers on the topic as well as the appearance of tutorials, workshops and special sessions at all the major evolutionary computation conferences.

One of the recent developments in landscape analysis is that fitness landscapes have been applied in a wider context than evolutionary computation, in areas such as feature selection, hyperparameter optimisation and neural network training. The tutorial will cover new landscape analysis techniques that have been developed in the last few years and present case studies of recent applications of landscape analysis in both discrete and continuous domains. Finally, we will highlight opportunities for future research in landscape analysis.

Katherine Malan

Katherine Malan is an associate professor in the Department of Decision Sciences at the University of South Africa. She received her PhD in computer science from the University of Pretoria in 2014 and her MSc & BSc degrees from the University of Cape Town. She has over 20 years' lecturing experience, mostly in Computer Science, at three different South African universities. Her research interests include automated algorithm selection in optimisation and learning, fitness landscape analysis and the application of computational intelligence techniques to real-world problems.

Gabriela Ochoa

Gabriela Ochoa is a Professor in Computing Science at the University of Stirling, Scotland, where she leads the Data Science and Intelligent Systems (DAIS) research group. She received BSc and MSc degrees in Computer Science from University Simon Bolivar, Venezuela, and a PhD from the University of Sussex, UK. She worked in industry for 5 years before joining academia, and has held faculty and research positions at the University Simon Bolivar, Venezuela, and the University of Nottingham, UK. Her research interests lie in the foundations and application of evolutionary algorithms and optimisation methods, with emphasis on autonomous search, hyper-heuristics, fitness landscape analysis, visualisation and applications to logistics, transportation, healthcare, and software engineering. She has published over 110 scholarly papers (H-index 31) and serves on various program committees. She was an associate editor of the IEEE Transactions on Evolutionary Computation, is currently an associate editor of the Evolutionary Computation Journal, and is a member of the editorial board of Genetic Programming and Evolvable Machines. She has organised various Evolutionary Computation events and served as the Editor-in-Chief for the Genetic and Evolutionary Computation Conference (GECCO) 2017. She is a member of the executive boards of the ACM interest group on Evolutionary Computation (SIGEVO) and of the leading European event on bio-inspired computing (EvoSTAR).

Recent Advances of Differential Evolution in Numerical Optimization

Differential Evolution (DE) is one of the most powerful stochastic real-parameter optimization algorithms of current interest. DE operates through similar computational steps as employed by a standard Evolutionary Algorithm (EA). However, unlike traditional EAs, the DE variants perturb the current-generation population members with the scaled differences of distinct population members. Therefore, no separate probability distribution has to be used for generating the offspring. Since its inception in 1995, DE has drawn the attention of many researchers all over the world, resulting in many variants of the basic algorithm with improved performance. This tutorial will begin with a brief overview of the basic concepts related to DE, its algorithmic components and control parameters. It will subsequently discuss some of the significant algorithmic variants of DE. The talk will discuss the effects of incorporating ensemble learning, adaptations and neighborhood topologies in DE. Theoretical advances made to understand the search mechanism of DE and the effect of its most important control parameters will be discussed. The talk will finally highlight a few problems that pose challenges to the state-of-the-art DE algorithms and demand strong research effort from the DE community in the future.
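
As an illustration of the perturbation scheme described above, here is a minimal, self-contained sketch of the classic DE/rand/1/bin variant minimising the sphere function. This is not code from the tutorial; the function name and parameter defaults are illustrative assumptions.

```python
import random

def de_optimize(f, dim, bounds, pop_size=20, F=0.8, CR=0.9, generations=200, seed=0):
    """Minimise f over a box using the classic DE/rand/1/bin scheme."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct members, all different from the target i
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            # Mutation: perturb r1 with the scaled difference of r2 and r3
            mutant = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(dim)]
            # Binomial crossover: inherit at least one coordinate from the mutant
            jrand = rng.randrange(dim)
            trial = [mutant[d] if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(v, lo), hi) for v in trial]  # clip to the box
            # Greedy selection: the trial replaces its parent only if not worse
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

sphere = lambda x: sum(v * v for v in x)
x, fx = de_optimize(sphere, dim=5, bounds=(-5.0, 5.0))
```

Note that the offspring distribution is induced entirely by the population's own difference vectors, which is why no separate sampling distribution is needed.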

Ponnuthurai Suganthan

Ponnuthurai Nagaratnam Suganthan finished schooling at Union College and subsequently received the B.A. degree, Postgraduate Certificate and M.A. degree in Electrical and Information Engineering from the University of Cambridge, UK, in 1990, 1992 and 1994, respectively. He received an honorary doctorate (Doctor Honoris Causa) in 2020 from the University of Maribor, Slovenia. After completing his PhD research in 1995, he served as a pre-doctoral Research Assistant in the Dept. of Electrical Engineering, University of Sydney in 1995-96 and a lecturer in the Dept. of Computer Science and Electrical Engineering, University of Queensland in 1996-99. He moved to Singapore in 1999. He was an Editorial Board Member of the Evolutionary Computation Journal, MIT Press (2013-2018) and an associate editor of the IEEE Trans. on Cybernetics (2012-2018). He is an associate editor of Applied Soft Computing (Elsevier, 2018- ), Neurocomputing (Elsevier, 2018- ), IEEE Trans. on Evolutionary Computation (2005- ), Information Sciences (Elsevier, 2009- ), Pattern Recognition (Elsevier, 2001- ) and IEEE Trans. on SMC: Systems (2020- ). He is a founding co-editor-in-chief of Swarm and Evolutionary Computation (2010- ), an SCI-indexed Elsevier journal. His research interests include swarm and evolutionary algorithms, pattern recognition, forecasting, randomized neural networks, deep learning and applications of swarm, evolutionary & machine learning algorithms. He was selected as one of the highly cited researchers in computer science by Thomson Reuters every year from 2015 to 2020. He served as the General Chair of IEEE SSCI 2013. He has been an IEEE CIS distinguished lecturer (DLP) in 2018-2021.

Runtime Analysis of Population-based Evolutionary Algorithms

Populations are at the heart of evolutionary algorithms (EAs). They provide the genetic variation which selection acts upon. A complete picture of EAs can only be obtained if we understand their population dynamics. A rich theory on runtime analysis (also called time-complexity analysis) of EAs has been developed over the last 20 years. The goal of this theory is to show, via rigorous mathematical means, how the performance of EAs depends on their parameter settings and the characteristics of the underlying fitness landscapes. Initially, runtime analysis of EAs was mostly restricted to simplified EAs that do not employ large populations, such as the (1+1) EA. This tutorial introduces more recent techniques that enable runtime analysis of EAs with realistic population sizes.

The tutorial begins with a brief overview of the population-based EAs that are covered by the techniques. We recall the common stochastic selection mechanisms and how to measure the selection pressure they induce. The main part of the tutorial covers in detail widely applicable techniques tailored to the analysis of populations. We discuss random family trees and branching processes, drift and concentration of measure in populations, and level-based analyses.

To illustrate how these techniques can be applied, we consider several fundamental questions: When are populations necessary for efficient optimisation with EAs? What is the appropriate balance between exploration and exploitation and how does this depend on relationships between mutation and selection rates? What determines an EA's tolerance for uncertainty, e.g. in form of noisy or partially available fitness?

The tutorial builds upon, and extends, the techniques presented in the introductory tutorial "Runtime Analysis - Basic Introduction". However, it can also be taken as a standalone tutorial, in which case it will not require any previous background.
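
To give a concrete flavour of one quantity discussed above, the toy experiment below empirically measures the takeover time of k-tournament selection, a standard way to quantify the selection pressure a mechanism induces: starting from a few copies of the best genotype, how many generations of pure selection (no variation) until it fills the whole population? This is an illustrative sketch, not material from the tutorial; all parameter values are assumptions.

```python
import random

def takeover_time(n=100, k=2, initial=10, seed=1, max_gens=10_000):
    """Generations of k-tournament selection (no variation) until the best
    genotype (fitness 1) takes over a population that also contains an
    inferior genotype (fitness 0), or until it goes extinct."""
    rng = random.Random(seed)
    pop = [1] * initial + [0] * (n - initial)
    gens = 0
    while 0 < sum(pop) < n and gens < max_gens:
        # Each offspring is the winner of a size-k tournament
        # (max picks the fitter of the sampled genotypes)
        pop = [max(rng.choices(pop, k=k)) for _ in range(n)]
        gens += 1
    return gens, sum(pop) == n

gens, fixated = takeover_time()
```

Larger tournament sizes k shorten the takeover time, i.e. increase the selection pressure, which is exactly the kind of relationship the runtime analyses make rigorous.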

 

Per Kristian Lehre

Dr Lehre is a Senior Lecturer in the School of Computer Science at the University of Birmingham (since January 2017). Before joining Birmingham, he had been an Assistant Professor at the University of Nottingham since 2011. He obtained MSc and PhD degrees in Computer Science from the Norwegian University of Science and Technology (NTNU) in Trondheim. He completed the PhD in 2006 under the supervision of Prof Pauline Haddow, and joined the School of Computer Science at the University of Birmingham, UK, as a Research Fellow in January 2007 with Prof Xin Yao. He was a Postdoctoral Fellow at DTU Informatics, Technical University of Denmark in Lyngby, Denmark, from April 2010.

Dr Lehre's research interests are in theoretical aspects of nature-inspired search heuristics, in particular runtime analysis of population-based evolutionary algorithms. His research has won numerous best paper awards, including at GECCO (2013, 2010, 2009, 2006), ICSTW (2008), and ISAAC (2014). He is vice-chair of the IEEE Task Force on Theoretical Foundations of Bio-inspired Computation, a member of the editorial board of Evolutionary Computation, and an associate editor of IEEE Transactions on Evolutionary Computation. He has guest-edited special issues of Theoretical Computer Science and IEEE Transactions on Evolutionary Computation on theoretical foundations of evolutionary computation. Dr Lehre has given many tutorials on evolutionary computation at summer schools (UK: 2007, 2015, 2016; France: 2009, 2010, and 2011; Estonia: 2010), as well as at major conferences and workshops (GECCO 2013-2020, CEC 2013 and 2015, PPSN 2016, 2018, 2020, and ThRaSH 2013). He was the main coordinator of the 2M-euro EU-funded project SAGE, which brought together the theory of evolutionary computation and population genetics.

 

Pietro Simone Oliveto

Dr Oliveto is a Senior Lecturer and an EPSRC Early Career Fellow at the University of Sheffield, UK. He received the Laurea degree and PhD degree in computer science respectively from the University of Catania, Italy, in 2005 and from the University of Birmingham, UK, in 2009. From October 2007 to April 2008, he was a visiting researcher at the Efficient Algorithms and Complexity Theory Institute at the Department of Computer Science of the University of Dortmund, where he collaborated with Prof. Ingo Wegener's research group. He has been an EPSRC PhD+ Fellow (2009-2010) and EPSRC Postdoctoral Fellow (2010-2013) at Birmingham and Vice-Chancellor's Fellow at Sheffield (2013-2016). His main research interest is the performance analysis of bio-inspired computation techniques, including evolutionary algorithms, genetic programming, artificial immune systems and hyper-heuristics. He has published a review paper of the runtime analysis field and two book chapters on the theoretical techniques for the analysis. He has won best paper awards at the GECCO 2008, ICARIS 2011 and GECCO 2014 conferences. Dr. Oliveto has regularly given tutorials on computational complexity analysis at the main evolutionary computation conferences (including GECCO, CEC, SSCI and PPSN) since 2012. He has guest-edited journal special issues of Computer Science and Technology, Evolutionary Computation, Theoretical Computer Science and IEEE Transactions on Evolutionary Computation. He co-chaired the IEEE Symposium on Foundations of Computational Intelligence (FOCI) from 2015 to 2018. He is part of the Steering Committee of the annual workshop on Theory of Randomized Search Heuristics (ThRaSH), Leader of the Benchmarking Working Group of the COST Action ImAppNIO, a member of the EPSRC Peer Review College, and Associate Editor of IEEE Transactions on Evolutionary Computation.

Sequential Experimentation by Evolutionary Algorithms

This tutorial addresses applications of Evolutionary Algorithms (EAs) to global optimization tasks where the objective function cannot be calculated (no explicit model nor simulation exists), but instead requires an experimental measurement in the real world, e.g., in pharmaceuticals, biocatalyst design, protein expression, quantum processes, or engineering, to mention only a few.

The use of EAs for experimental optimization is placed in its historical context with an overview of the landmark studies in this area carried out in the 1960s at the Technical University of Berlin. At the same time, statistics-based Design of Experiments (DoE) methodologies, rooted in the late 1950s, constitute a gold standard in existing laboratory practice, and are therefore also reviewed at an introductory level for the GECCO audience.

The main characteristics of experimental optimization work, in comparison to optimization of simulated systems, are discussed, and practical guidelines for real-world experiments with EAs are given. For example, experimental problems can constrain the evolution due to overhead considerations, interruptions, changes of variables, missing assays, imposed population-sizes, and target assays that have different evaluation times (in the case of multiple objective optimization problems).

Selected modern-day case studies show the persistence of experimental optimization problems today. These cover experimental quantum systems, assay-based “wet experiments” such as in combinatorial drug discovery and protein expression, as well as others. These applications can throw EAs out of their normal operating envelope, and raise research questions in a number of different areas ranging across constrained EAs, multiple objective EAs, robust and reliable methods for noisy and dynamic problems, and metamodeling methods for expensive evaluations.
The plan is to feature a demonstration of algorithmically-guided sequential experimentation in post-harvesting protocols that is currently underway at Prof. Shir's host institute Migal. The specific target is to minimize the post-harvest quality loss of cucumbers, where the entire treatment protocol is subject to search (including the selection, the reordering and the scheduling of post-harvest operations). The problem illustrates a resourcing issue in which a physical experiment, required to construct and assay a candidate solution, may involve tremendous effort and time, and last up to several weeks.

Ofer Shir

Ofer Shir is an Associate Professor of Computer Science at Tel-Hai College, and a Principal Investigator at Migal-Galilee Research Institute – both located in the Upper Galilee, Israel.
Ofer Shir holds a BSc in Physics and Computer Science from the Hebrew University of Jerusalem, Israel (conferred 2003), and both MSc and PhD in Computer Science from Leiden University, The Netherlands (conferred 2004, 2008; PhD advisers: Thomas Bäck and Marc Vrakking). Upon his graduation, he completed a two-year term as a Postdoctoral Research Associate at Princeton University, USA (2008-2010), hosted by Prof. Herschel Rabitz in the Department of Chemistry, where he specialized in computational aspects of experimental quantum systems. He then joined IBM-Research as a Research Staff Member (2010-2013), which constituted his second postdoctoral term, and where he gained real-world experience in convex and combinatorial optimization as well as in decision analytics.
His current topics of interest include Statistical Learning in Theory and in Practice, Experimental Optimization, Theory of Randomized Search Heuristics, Scientific Informatics, Natural Computing, Computational Intelligence in Physical Sciences, Quantum Control and Quantum Machine Learning.

 

Thomas Bäck

Thomas Bäck is professor of Computer Science at the Leiden Institute of Advanced Computer Science, Leiden University, Netherlands, since 2002. He received his PhD in Computer Science from Dortmund University, Germany, in 1994, and was leader of the Center for Applied Systems Analysis at the Informatik Centrum Dortmund until 2000. Until 2009, Thomas was also CTO of NuTech Solutions, Inc. (Charlotte, NC), where he gained ample experience in solving real-world problems in optimization and data analytics, by working with global enterprises in the automotive and other industry sectors. Thomas received the IEEE Computational Intelligence Society (CIS) Evolutionary Computation Pioneer Award for his contributions in synthesizing evolutionary computation (2015), was elected as a fellow of the International Society of Genetic and Evolutionary Computation (ISGEC) for fundamental contributions to the field (2003), and received the best dissertation award from the "Gesellschaft für Informatik" in 1995. Thomas has more than 300 publications, as well as two books on evolutionary algorithms: Evolutionary Algorithms in Theory and Practice (1996), Contemporary Evolution Strategies (2013). He is co-editor of the Handbook of Evolutionary Computation and the Handbook of Natural Computing.

Statistical Analyses for Meta-heuristic Stochastic Optimization Algorithms

Moving into the era of explainable AI, comprehensive comparison of the performance of stochastic optimization algorithms has become an increasingly important task. One of the most common ways to compare the performance of stochastic optimization algorithms is to apply statistical analyses. However, performing them properly still involves caveats that need to be addressed to obtain relevant and valid conclusions. First of all, such statistical analyses require good knowledge from the user to apply them properly, which is often lacking and leads to incorrect conclusions. Secondly, the standard approaches can be influenced by outliers (e.g., poor runs) or by statistically insignificant differences (solutions within some ε-neighborhood) that exist in the data.
This tutorial will provide an overview of the current approaches for analyzing algorithm performance, with special emphasis on caveats that are often overlooked. We will show how these can be easily avoided by applying simple principles that lead to Deep Statistical Comparison. The tutorial will not be based on equations, but mainly on examples through which a deeper understanding of statistics will be achieved. Examples will be based on various comparison scenarios for single-objective optimization algorithms. The tutorial will end with a demonstration of a web-service-based framework (DSCTool) for statistical comparison of stochastic optimization algorithms.

Outline of the covered material
- Introduction to statistical analysis (Frequentist vs. Bayesian).
- Background on frequentist hypothesis testing, different statistical tests, the required conditions for their usage and sample size.
- Typical mistakes and understanding why making a statistical comparison of data needs to be done properly.
- Understanding the difference between statistical and practical significance.
- Understanding the effect that performance measures have on making a statistical comparison.
- Defining single-problem and multiple-problem analysis.
- Insight into pairwise comparison, multiple comparisons (all vs. all), and multiple comparisons with a control algorithm (one vs. all).
- Standard approaches to making statistical comparisons and their deficiencies.
- Latest advances in making statistical comparisons, e.g., Deep Statistical Comparison, which provides more robust statistical results in cases of outliers and statistically insignificant differences between data values.
- Extended Deep Statistical Comparison for understanding exploitation and exploration powers of stochastic optimization algorithms.
- Examples of all possible statistical scenarios in single-objective optimization and caveats.
- Presentation of a web-service-based framework that automatizes and simplifies the whole process of making a statistical comparison.
- Take home messages.
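
A tiny, invented example of the outlier caveat listed above: when one run of an otherwise better algorithm fails badly, a mean-based comparison and a median-based (rank-style) comparison can reach opposite conclusions. All numbers below are fabricated purely for illustration.

```python
import statistics

# 30 runs of two hypothetical algorithms on a minimisation problem.
# Algorithm A is consistently slightly better, but one crashed run of A
# produced a very poor value (an outlier).
runs_a = [0.10] * 29 + [50.0]   # 29 good runs, one outlier
runs_b = [0.12] * 30            # uniformly mediocre runs

mean_a, mean_b = statistics.mean(runs_a), statistics.mean(runs_b)
med_a, med_b = statistics.median(runs_a), statistics.median(runs_b)

# The mean declares A worse, the median declares A better:
# a single bad run has flipped the mean-based conclusion.
```

This is precisely why robust, rank-based procedures (as in Deep Statistical Comparison) are preferred over naive mean comparisons.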

Tome Eftimov

Tome Eftimov is a researcher in the Computer Systems Department at the Jožef Stefan Institute, Ljubljana, Slovenia. He is a visiting assistant professor at the Faculty of Computer Science and Engineering, Ss. Cyril and Methodius University, Skopje. He was a postdoctoral research fellow at Stanford University, USA, where he investigated biomedical relations outcomes using AI methods. In addition, he was a research associate at the University of California, San Francisco, investigating AI methods for extracting rheumatology concepts from electronic health records. He obtained his PhD in Information and Communication Technologies (2018). His research interests include statistical data analysis, metaheuristics, natural language processing, representation learning, and machine learning. He has been involved in courses on probability and statistics, and statistical data analysis. The work related to Deep Statistical Comparison has been presented as a tutorial (at IJCCI 2018, IEEE SSCI 2019, GECCO 2020, and PPSN 2020) and as invited lectures at several international conferences and universities. He is an organizer of several workshops related to AI at high-ranked international conferences. He is the coordinator of the national project "Mr-BEC: Modern approaches for benchmarking in evolutionary computation" and actively participates in European projects.

Peter Korošec

Peter Korošec received his Ph.D. degree from the Jožef Stefan Postgraduate School, Ljubljana, Slovenia, in 2006. Since 2002, he has been a researcher at the Computer Systems Department, Jožef Stefan Institute, Ljubljana. His current areas of research include understanding the principles behind meta-heuristic optimization and parallel/distributed computing. He has participated in several tutorials related to statistical analysis of optimization algorithms presented at different international conferences and co-organized a workshop on understanding of evolutionary optimization behavior.

Specialized Tutorials

Applications of Dynamic Parameter Control in Evolutionary Computation

One of the challenging problems in solving optimization problems with evolutionary algorithms is the selection of the control parameters, which allow adjusting the behaviour of the algorithm to the problem at hand. Several control parameters need to be set for the search for the optimum of an objective function to be successful. Suitable control parameter values need to be found, for example, for the population size, the mutation strength, the crossover rate, the selective pressure, etc. The choice of these parameters can have a significant impact on the performance of the algorithm and must therefore be made with care.

With the parameter control approach, no prior training of parameters is needed. It also accounts for the fact that the optimal parameter values typically change during the optimization process: for example, at the beginning of an optimization process we typically aim for exploration, while in the later stages we want the algorithm to converge and to focus its search on the most promising regions of the search space.

While parameter control is indispensable in continuous optimization, it is far from well-established in discrete optimization heuristics. The ambition of this tutorial is to inform participants about different parameter control techniques by discussing both theoretical and experimental results that demonstrate the unexploited potential of non-static parameter choices.

Our tutorial addresses experimentally and theory-oriented researchers alike, and requires only basic knowledge of optimization heuristics.
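
As a concrete example of dynamic parameter control, the sketch below implements a (1+1) Evolution Strategy with the classic one-fifth success rule, which adapts the mutation strength sigma online: sigma grows when more than about 1/5 of mutations succeed and shrinks otherwise. This is an illustrative implementation with assumed update constants, not code from the tutorial.

```python
import math
import random

def one_plus_one_es(f, x, sigma=1.0, iters=500, seed=0):
    """(1+1)-ES minimising f, with one-fifth success rule step-size control."""
    rng = random.Random(seed)
    fx = f(x)
    for _ in range(iters):
        # Gaussian mutation with current strength sigma
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy < fx:
            # Success: accept the offspring and enlarge sigma
            x, fx = y, fy
            sigma *= math.exp(1.0 / 3.0)
        else:
            # Failure: shrink sigma; the shrink factor is the success
            # factor to the power -1/4, which targets a 1/5 success rate
            sigma *= math.exp(-1.0 / 12.0)
    return x, fx, sigma

sphere = lambda v: sum(t * t for t in v)
x, fx, sigma = one_plus_one_es(sphere, [5.0] * 5)
```

The same principle (observe recent success, adjust the parameter) underlies many of the modern control schemes the tutorial covers.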

Gregor Papa

Gregor Papa (gregor.papa@ijs.si, http://cs.ijs.si/papa/) is a Senior Researcher and the Head of the Computer Systems Department at the Jožef Stefan Institute, Ljubljana, Slovenia, and an Associate Professor at the Jožef Stefan International Postgraduate School, Ljubljana, Slovenia. He received his PhD degree in Electrical Engineering from the University of Ljubljana, Slovenia, in 2002.
Gregor Papa's research interests include meta-heuristic optimisation methods and hardware implementations of high-complexity algorithms, with a focus on dynamic setting of algorithms' control parameters. His work is published in several international journals and conference proceedings. He has regularly organized conferences and workshops in the field of nature-inspired algorithms since 2004. He has led and participated in several national and European projects.
Gregor Papa is a member of the Editorial Board of the Automatika journal (Taylor & Francis) for the field “Applied Computational Intelligence”. He is a Consultant at the Slovenian Strategic research and innovation partnership for Smart cities and communities.

NEW Evolutionary Art and Design: Representation, Fitness and Interaction

In recent years there has been a growing interest in Computational Creativity and in the application of soft computing techniques, most notably Artificial Neural Networks, to fields like music, art, sound, architecture, and design. Although techniques and applications such as deep learning, generative adversarial networks, variational auto-encoders, style transfer, deep fakes, deep dreams, etc., are currently widespread, they seemed implausible only a few years ago.

The goals of this tutorial are: to make a comprehensive overview that synthesizes the current trends in the field of evolutionary art and design, while incorporating relevant research from related fields; to reflect upon it; and to identify the opportunities and challenges that lie ahead. In particular, we will address three main pillars for the development and evaluation of evo art applications, namely: representation, fitness assignment, and user interaction. Particular emphasis will be given to applications and techniques that promote assisted creativity and co-creation, expanding the creative possibilities of the user and leading to novel and unforeseen results.

Penousal Machado

Penousal Machado is Associate Professor at the Department of Informatics of the University of Coimbra in Portugal. He is a deputy director of the Centre for Informatics and Systems of the University of Coimbra (CISUC), the coordinator of the Cognitive and Media Systems group and the scientific director of the Computational Design and Visualization Lab. of CISUC. His research interests include Evolutionary Computation, Computational Creativity, Artificial Intelligence and Information Visualization. He is the author of more than 200 refereed journal and conference papers in these areas, and his peer-reviewed publications have been nominated and awarded multiple times as best paper. Until April 2020 his publications had gathered over 2631 citations, an h-index of 24, and an i10-index of 61. He has been the advisor of 7 PhD theses and 39 MSc theses, and is currently advising 7 PhD and 3 MSc students.

He is also the chair of several scientific events, including, amongst the most recent, ICCC 2020, PPSN XV, and EvoStar 2016; member of the Programme Committee and Editorial Board of some of the main conferences and journals in these fields; member of the Steering Committee of EuroGP, EvoMUSART and Evostar; and executive board member of SPECIES.

He is the recipient of several scientific awards, including the prestigious EvoStar Award for outstanding Contribution to Evolutionary Computation in Europe, and the award for Excellence and Merit in Artificial Intelligence granted by the Portuguese Association for Artificial Intelligence.

Penousal Machado has been invited to perform keynote speeches in a wide set of domains, from evolutionary computation to visualization and art. His work was featured in the Leonardo journal, Wired magazine and presented in venues such as the National Museum of Contemporary Art (Portugal) and the “Talk to me” exhibition of the Museum of Modern Art, NY (MoMA).

Evolutionary Computation and Evolutionary Deep Learning for Image Analysis, Signal Processing and Pattern Recognition

The intertwining disciplines of image analysis, signal processing and pattern recognition are major fields of computer science, computer engineering and electrical and electronic engineering, with past and on-going research covering a full range of topics and tasks, from basic research to a huge number of real-world industrial applications.

Among the techniques studied and applied within these research fields, evolutionary computation (EC), including evolutionary algorithms, swarm intelligence and other paradigms, is playing an increasingly relevant role. Recently, evolutionary deep learning has also attracted considerable attention in these fields. The terms Evolutionary Image Analysis and Signal Processing and Evolutionary Computer Vision are increasingly accepted as descriptors of a clearly defined research area and family of techniques and applications. This has also been favoured by the recent availability of hardware and computing environments such as GPUs and grid/cloud/parallel computing systems, whose architecture and computation paradigm fit EC algorithms extremely well, alleviating the intrinsically heavy computational burden imposed by such techniques and even allowing for real-time applications.

The tutorial will introduce the general framework within which Evolutionary Image Analysis, Signal Processing and Pattern Recognition can be studied and applied, sketching a schematic taxonomy of the field and providing examples of successful real-world applications. The application areas to be covered will include edge detection, segmentation, object tracking, object recognition, motion detection, image classification and recognition. EC techniques to be covered will include genetic algorithms, genetic programming, particle swarm optimisation, evolutionary multi-objective optimisation as well as memetic/hybrid paradigms. In particular, we will discuss the detection of relevant sets of features for classification based on an information-theoretical approach derived from complex system analysis. We will focus on the use of evolutionary deep learning for image analysis, including automatically learning the architectures, parameters and transfer functions of convolutional neural networks (and autoencoders and genetic programming, if time allows). The use of GPU boxes for real-time/fast object classification will be discussed. We will show how such EC techniques can be effectively applied to image analysis and signal processing problems and provide promising results.
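
To make one of the listed application types concrete, here is a toy sketch of evolutionary feature selection for classification: a simple genetic algorithm evolves bit-string feature masks, scored by the accuracy of a nearest-centroid classifier on synthetic data in which only two of eight features carry class information. Everything here (data, classifier, GA settings) is an illustrative assumption, not a method from the tutorial.

```python
import random

rng = random.Random(42)

# Synthetic data: 8 features, but only features 0 and 1 separate the classes.
def make_data(n):
    X, y = [], []
    for i in range(n):
        label = i % 2
        row = [rng.gauss(3.0 * label, 1.0) if d < 2 else rng.gauss(0.0, 3.0)
               for d in range(8)]
        X.append(row)
        y.append(label)
    return X, y

X_train, y_train = make_data(100)
X_test, y_test = make_data(60)

def accuracy(mask):
    """Nearest-centroid classification using only the features selected by mask."""
    feats = [d for d in range(8) if mask[d]]
    if not feats:
        return 0.0
    cent = {}
    for c in (0, 1):
        rows = [x for x, lab in zip(X_train, y_train) if lab == c]
        cent[c] = [sum(r[d] for r in rows) / len(rows) for d in feats]
    correct = 0
    for x, lab in zip(X_test, y_test):
        v = [x[d] for d in feats]
        pred = min((0, 1), key=lambda c: sum((a - b) ** 2 for a, b in zip(v, cent[c])))
        correct += pred == lab
    return correct / len(y_test)

def fitness(mask):
    # Accuracy minus a small parsimony penalty per selected feature
    return accuracy(mask) - 0.01 * sum(mask)

# Simple generational GA over bit-string feature masks
pop = [[rng.randint(0, 1) for _ in range(8)] for _ in range(30)]
for _ in range(40):
    scored = sorted(pop, key=fitness, reverse=True)
    nxt = scored[:2]                                       # elitism
    while len(nxt) < len(pop):
        p1, p2 = (max(rng.sample(pop, 3), key=fitness) for _ in range(2))
        child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]  # uniform crossover
        child = [1 - g if rng.random() < 1 / 8 else g for g in child]     # bit-flip mutation
        nxt.append(child)
    pop = nxt

best = max(pop, key=fitness)
```

In this setting the evolved mask should retain the informative features and tend to discard the noisy ones, which is the essence of EC-based feature selection.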

Mengjie Zhang

Mengjie Zhang is a Fellow of Royal Society of New Zealand, a Fellow of IEEE, IEEE Distinguished Lecturer, Professor of Computer Science at Victoria University of Wellington where he heads the interdisciplinary Evolutionary Computation Research Group. He is a member of the University Academic Board, a member of the University Postgraduate Scholarships Committee, Associate Dean (Research and Innovation) in the Faculty of Engineering, and Chair of the Research Committees for the Faculty of Engineering and the School of Engineering and Computer Science.
His research is mainly focused on artificial intelligence (AI), machine learning and big data, particularly in evolutionary computation and learning (using genetic programming, particle swarm optimisation and learning classifier systems), feature selection/construction and big dimensionality reduction, computer vision and image processing, job shop scheduling and resource allocation, multi-objective optimisation, classification with unbalanced data and missing data, and evolutionary deep learning and transfer learning. Prof Zhang has published over 500 research papers in refereed international journals and conferences in these areas. He has been serving as an associate editor or editorial board member for over ten international journals, including IEEE Transactions on Evolutionary Computation, IEEE Transactions on Cybernetics, IEEE Transactions on Emerging Topics in Computational Intelligence, ACM Transactions on Evolutionary Learning and Optimisation, the Evolutionary Computation Journal (MIT Press), Genetic Programming and Evolvable Machines (Springer), Applied Soft Computing, and Natural Computing, and as a reviewer for over 30 international journals. He has been involved in major AI and EC conferences such as GECCO, IEEE CEC, EvoStar, IJCAI, AAAI, PRICAI, PAKDD, AusAI, IEEE SSCI and SEAL as a chair. He has also been serving as a steering committee member and a program committee member for over 100 international conferences. Since 2007, he has been listed as one of the top ten (currently No. 4) world genetic programming researchers by the GP bibliography (http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/index.html).

Prof Zhang is the immediate past Chair of the IEEE CIS Intelligent Systems Applications Technical Committee, the IEEE CIS Emergent Technologies Technical Committee and the IEEE CIS Evolutionary Computation Technical Committee, a vice-chair of the IEEE CIS Task Force on Evolutionary Deep Learning and Applications, a vice-chair of the IEEE CIS Task Force on Evolutionary Computer Vision and Image Processing, and the founding chair of the IEEE Computational Intelligence Chapter in New Zealand.

 

Stefano Cagnoni

Stefano Cagnoni graduated in Electronic Engineering at the University of Florence, Italy, where he also obtained a PhD in Biomedical Engineering and was a post-doc until 1997. In 1994 he was a visiting scientist at the Whitaker College Biomedical Imaging and Computation Laboratory at the Massachusetts Institute of Technology. Since 1997 he has been with the University of Parma, where he has been Associate Professor since 2004. Recent research grants include: a three-year grant from Regione Emilia-Romagna to support research on industrial applications of big data analysis; the co-management of industry/academia cooperation projects with Protec srl on the development of a new-generation computer vision-based fruit sorter and with the Italian Railway Network Society (RFI) on an automatic inspection system for train pantographs; a "Marie Curie Initial Training Network" grant for a four-year research training project in medical imaging using bio-inspired and soft computing; and a grant from "Compagnia di S. Paolo" on "Bioinformatic and experimental dissection of the signalling pathways underlying dendritic spine function". He was Editor-in-Chief of the "Journal of Artificial Evolution and Applications" from 2007 to 2010. From 1999 to 2018 he chaired EvoIASP, an event dedicated to evolutionary computation for image analysis and signal processing, later a track of the EvoApplications conference. Since 2005 he has co-chaired MedGEC, the workshop on medical applications of evolutionary computation at GECCO. He has co-edited special issues of journals dedicated to evolutionary computation for image analysis and signal processing, and is a member of the editorial boards of the journals "Evolutionary Computation", "Genetic Programming and Evolvable Machines", and "Algorithms". He was awarded the "Evostar 2009 Award" in recognition of the most outstanding contribution to evolutionary computation.

Evolutionary Computation and Machine Learning in Cryptology and Cybersecurity

In recent years, the interplay between artificial intelligence (AI) and security has become more prominent and important. This comes naturally from the need to improve security in a more automated way. One specific security domain that is steadily receiving more AI applications is cryptology, where we already see how AI techniques can improve implementation attacks, attacks on PUFs, hardware Trojan detection, and more.
This tutorial first gives a brief introduction to domains where AI (we concentrate on machine learning and evolutionary algorithms) is used to solve security problems. We briefly discuss some very successful EA applications in the security domain, such as fuzzing and network security.
Next, we focus on problems coming from the cryptology domain.
We discuss several realistic crypto problems successfully tackled with EAs and machine learning, and elaborate on why those problems are suitable for such techniques. Representative examples of the problems we cover include the evolution of Boolean functions and S-boxes, machine learning and EA attacks on Physically Unclonable Functions, machine learning for side-channel attacks, EAs/machine learning to improve fault injection, hardware Trojan detection/prevention/insertion, attacks on logic locking, and AI-based countermeasures.
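To give a flavour of the first problem in the list, the sketch below shows, in Python, how the evolution of Boolean functions can be set up: the fitness of a candidate truth table is its nonlinearity, computed from the Walsh-Hadamard spectrum, and a simple (1+1) EA flips truth-table bits. This is an illustrative toy (tiny number of variables, naive spectrum computation), not a state-of-the-art design tool, and the EA parameters are arbitrary.

```python
import random

N = 4  # number of Boolean inputs; the truth table has 2**N entries

def walsh(tt):
    # Walsh-Hadamard spectrum of a Boolean function given as a
    # 0/1 truth table of length 2**N.
    return [sum((-1) ** (tt[x] ^ (bin(a & x).count("1") % 2))
                for x in range(2 ** N))
            for a in range(2 ** N)]

def nonlinearity(tt):
    # Hamming distance to the closest affine function.
    return 2 ** (N - 1) - max(abs(w) for w in walsh(tt)) // 2

def one_plus_one_ea(steps=300, seed=1):
    # (1+1) EA: flip one truth-table bit; keep the flip if the
    # nonlinearity does not decrease, otherwise revert it.
    rng = random.Random(seed)
    tt = [rng.randint(0, 1) for _ in range(2 ** N)]
    best = nonlinearity(tt)
    for _ in range(steps):
        i = rng.randrange(2 ** N)
        tt[i] ^= 1
        nl = nonlinearity(tt)
        if nl >= best:
            best = nl
        else:
            tt[i] ^= 1  # revert the flip
    return tt, best
```

As sanity checks, a linear function such as f(x) = x0 has nonlinearity 0, while the bent function f(x) = x0·x1 XOR x2·x3 attains the maximum value 6 for four variables.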
We finish this tutorial with a discussion of how experience in solving problems in AI could help solve problems in security, and vice versa. To that end, we emphasize a collaborative approach for both communities.

 

Stjepan Picek

Stjepan Picek is an assistant professor in the Cybersecurity group at TU Delft, The Netherlands. His research interests are security/cryptography, machine learning, and evolutionary computation. Before his assistant professor position, Stjepan was a postdoctoral researcher in the ALFA group at MIT, USA, and before that a postdoctoral researcher at KU Leuven, Belgium, as part of the COSIC group. Stjepan finished his PhD in 2015 with a thesis on cryptology and evolutionary computation techniques. He also has several years of experience working in industry and government. Up to now, Stjepan has given more than 15 invited talks at conferences and summer schools and published more than 100 refereed papers in both evolutionary computation and cryptography journals and conferences. Stjepan is a member of the organization committee of the International Summer School in Cryptography and president of the Croatian IEEE CIS Chapter. He is a general co-chair for Eurocrypt 2020 and 2021, and a program committee member and reviewer for a number of conferences and journals.

 

Domagoj Jakobovic

Domagoj Jakobovic received his Ph.D. degree in 2005 from the Faculty of Electrical Engineering and Computing, University of Zagreb, on the subject of generating scheduling heuristics with genetic programming. He is currently a full professor at the Department of Electronics, Microelectronics, Computer and Intelligent Systems at the University of Zagreb. His research interests include evolutionary algorithms, optimization methods, and parallel algorithms. His most notable contributions are in the areas of machine-supported scheduling, optimization problems in cryptography, parallelization, and improvement of evolutionary algorithms. He has published more than 100 papers, led several research projects, and serves as a reviewer for many international journals and conferences. He has supervised four doctoral theses and more than 150 bachelor and master theses.

Evolutionary Computation for Feature Selection and Feature Construction

In data mining/big data and machine learning, many real-world problems, such as bio-data classification and biomarker detection, image analysis, and text mining, often involve a large number of features/attributes. However, not all the features are essential, since many of them are redundant or even irrelevant, and the useful features are typically not equally important. Using all the features for classification or other data mining tasks typically does not produce good results due to the high dimensionality and the large search space. This problem can be addressed by feature selection, which selects a small subset of the original (relevant) features, or by feature construction, which creates a smaller set of high-level features from the original low-level features.

Feature selection and construction are very challenging tasks due to the large search space and feature interaction problems. Exhaustive search for the best feature subset of a given dataset is practically impossible in most situations. A variety of heuristic search techniques have been applied to feature selection and construction, but most existing methods still suffer from stagnation in local optima and/or high computational cost. Due to their global search potential and heuristic guidance, evolutionary computation techniques such as genetic algorithms, genetic programming, particle swarm optimisation, ant colony optimisation, differential evolution and evolutionary multi-objective optimisation have recently been used for feature selection and construction for dimensionality reduction, and have achieved great success. Many of these methods select/construct only a small number of important features, produce higher accuracy, and generate small models that are easy to understand/interpret and efficient to run on unseen data. Evolutionary computation techniques have now become an important means of handling big-dimensionality issues where feature selection and construction are required.
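The genetic-algorithm approach to feature selection can be sketched in a few lines: each individual is a bit mask over the candidate features, and selection, crossover and mutation search the space of subsets. In the hypothetical Python toy below, the dataset, the `RELEVANT` set and the fitness function are illustrative stand-ins; a real wrapper method would instead score each mask by a classifier's validation accuracy on the selected features.

```python
import random

# Hypothetical setup: 10 candidate features, of which only
# {1, 4, 7} are actually relevant to the task.
RELEVANT = {1, 4, 7}
N_FEATURES = 10

def fitness(mask):
    # Reward relevant features and penalise subset size: a crude
    # stand-in for the accuracy-vs-dimensionality trade-off.
    selected = {i for i, bit in enumerate(mask) if bit}
    return len(selected & RELEVANT) - 0.1 * len(selected)

def evolve(pop_size=30, generations=50, p_mut=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, N_FEATURES)   # one-point crossover
            child = [bit ^ (rng.random() < p_mut)  # bit-flip mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

On this toy problem the best evolved mask concentrates on the relevant features while keeping the subset small, which is exactly the behaviour the tutorial abstract describes.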


The tutorial will introduce the general framework within which evolutionary feature selection and construction can be studied and applied, sketching a schematic taxonomy of the field and providing examples of successful real-world applications. The application areas covered will include bio-data classification and biomarker detection, image analysis and pattern classification, symbolic regression, network security and intrusion detection, and text mining. The EC techniques covered will include genetic algorithms, genetic programming, particle swarm optimisation, differential evolution, ant colony optimisation, artificial bee colony optimisation, and evolutionary multi-objective optimisation. We will show how such techniques (with a focus on particle swarm optimisation and genetic programming) can be effectively applied to feature selection/construction and dimensionality reduction, and can provide promising results.


Bing Xue

Bing Xue received her PhD degree in 2014 from Victoria University of Wellington (VUW), New Zealand. She is currently an Associate Professor and Program Director of Science in the School of Engineering and Computer Science at VUW. She has over 200 papers published in fully refereed international journals and conferences, and her research focuses mainly on evolutionary computation, machine learning, classification, symbolic regression, feature selection, evolving deep neural networks, image analysis, transfer learning, and multi-objective machine learning. Dr Xue is currently Chair of the IEEE Computational Intelligence Society (CIS) Data Mining and Big Data Analytics Technical Committee, Vice-Chair of the IEEE CIS Task Force on Evolutionary Feature Selection and Construction, Vice-Chair of the IEEE CIS Task Force on Transfer Learning & Transfer Optimization, and Vice-Chair of the IEEE CIS Task Force on Evolutionary Deep Learning and Applications. She also serves as an associate editor of several international journals, such as IEEE Computational Intelligence Magazine and IEEE Transactions on Evolutionary Computation.

Mengjie Zhang

Mengjie Zhang is a Fellow of the Royal Society of New Zealand, a Fellow of the IEEE, an IEEE Distinguished Lecturer, and Professor of Computer Science at Victoria University of Wellington, where he heads the interdisciplinary Evolutionary Computation Research Group. He is a member of the University Academic Board, a member of the University Postgraduate Scholarships Committee, Associate Dean (Research and Innovation) in the Faculty of Engineering, and Chair of the Research Committees for the Faculty of Engineering and the School of Engineering and Computer Science.
His research focuses mainly on artificial intelligence (AI), machine learning and big data, particularly evolutionary computation and learning (using genetic programming, particle swarm optimisation and learning classifier systems), feature selection/construction and big-dimensionality reduction, computer vision and image processing, job shop scheduling and resource allocation, multi-objective optimisation, classification with unbalanced and missing data, and evolutionary deep learning and transfer learning. Prof Zhang has published over 500 research papers in refereed international journals and conferences in these areas. He has served as an associate editor or editorial board member for over ten international journals, including IEEE Transactions on Evolutionary Computation, IEEE Transactions on Cybernetics, IEEE Transactions on Emerging Topics in Computational Intelligence, ACM Transactions on Evolutionary Learning and Optimization, the Evolutionary Computation Journal (MIT Press), Genetic Programming and Evolvable Machines (Springer), Applied Soft Computing, and Natural Computing, and as a reviewer for over 30 international journals. He has been involved in major AI and EC conferences such as GECCO, IEEE CEC, EvoStar, IJCAI, AAAI, PRICAI, PAKDD, AusAI, IEEE SSCI and SEAL as a chair. He has also served as a steering committee member and program committee member for over 100 international conferences. Since 2007, he has been listed as one of the top ten (currently No. 4) world genetic programming researchers by the GP bibliography (http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/index.html).

Prof Zhang is the immediate past Chair of the IEEE CIS Intelligent Systems Applications Technical Committee, the IEEE CIS Emergent Technologies Technical Committee and the IEEE CIS Evolutionary Computation Technical Committee, a vice-chair of the IEEE CIS Task Force on Evolutionary Deep Learning and Applications, a vice-chair of the IEEE CIS Task Force on Evolutionary Computer Vision and Image Processing, and the founding chair of the IEEE Computational Intelligence Chapter in New Zealand.

Push

Push is a general purpose, multi-type, Turing complete programming language for programs that evolve in an evolutionary computation system. Initially developed in 2000, it has been used for a range of research projects and applications, including the production of human-competitive results and state-of-the-art work on general program synthesis.

Push and systems that evolve Push programs (such as PushGP) are available in a variety of host languages. However, the core concepts of Push are simple enough that programmers should also be able to build their own Push systems relatively quickly, and to incorporate them into their own evolutionary computation systems. This tutorial will present examples and best practices that will help participants to use Push effectively in their own work, whether they use existing libraries or develop their own.

Push has unusually simple syntax, which makes it particularly easy to randomly generate and vary valid programs. It nonetheless supports rich data and control structures, which in most other languages involve syntax restrictions that can be difficult to maintain during evolutionary change. Furthermore, the data and control structures used by a Push program can themselves emerge through the evolutionary process.
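The execution model behind this simplicity is one stack per data type. The following minimal Python sketch (an illustration of the idea, not any official Push implementation; only a handful of instructions are modelled) shows how literals are routed to the stack of their type and how instructions pop their arguments from typed stacks, acting as no-ops when arguments are missing, which is what keeps arbitrary instruction sequences valid programs:

```python
def run(program):
    # One stack per type: the hallmark of Push execution.
    stacks = {"int": [], "bool": []}

    def binop_int(fn):
        # Push convention: an instruction without enough
        # arguments on its stacks simply does nothing.
        if len(stacks["int"]) >= 2:
            b, a = stacks["int"].pop(), stacks["int"].pop()
            stacks["int"].append(fn(a, b))

    def bool_not():
        if stacks["bool"]:
            stacks["bool"].append(not stacks["bool"].pop())

    instructions = {
        "INTEGER.+": lambda: binop_int(lambda a, b: a + b),
        "INTEGER.*": lambda: binop_int(lambda a, b: a * b),
        "BOOLEAN.NOT": bool_not,
    }

    for token in program:
        if isinstance(token, bool):        # check bool before int:
            stacks["bool"].append(token)   # bool is a subclass of int
        elif isinstance(token, int):
            stacks["int"].append(token)
        elif token in instructions:
            instructions[token]()
    return stacks
```

For example, `run([2, 3, "INTEGER.+", 4, "INTEGER.*"])` leaves 20 on the integer stack. Full Push additionally keeps the program itself on code/exec stacks, which is what enables the iteration, recursion, and modularity discussed below.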

This tutorial will provide a detailed introduction to the Push programming language, and will demonstrate the use of the PushGP genetic programming system with open source libraries in several host languages. Participants will be invited to install and interact with these libraries, and also to interact with online Push systems in their browsers during the tutorial.

Among the features of Push that will be illustrated are those that support the evolution of programs that use multiple types, iteration, recursion, and modularity to solve problems that may involve multiple tasks. Push-based "autoconstructive evolution" systems, in which evolutionary methods co-evolve with solutions to problems, will also be briefly described.

Lee Spector

Lee Spector is a Visiting Professor of Computer Science at Amherst College, a Professor of Computer Science at Hampshire College, and an adjunct professor and member of the graduate faculty in the College of Information and Computer Sciences at the University of Massachusetts, all in Amherst, Massachusetts. He received a B.A. in Philosophy from Oberlin College in 1984 and a Ph.D. from the Department of Computer Science at the University of Maryland in 1992. His areas of teaching and research include genetic and evolutionary computation, quantum computation, and a variety of intersections between computer science, cognitive science, evolutionary biology, and the arts. He is the Editor-in-Chief of the journal Genetic Programming and Evolvable Machines (published by Springer) and a member of the editorial board of Evolutionary Computation (published by MIT Press). He is also a member of the SIGEVO executive committee, and he was named a Fellow of the International Society for Genetic and Evolutionary Computation. He has won several other awards and honors, including two gold medals in the Human Competitive Results contest of the Genetic and Evolutionary Computation Conference, and the highest honor bestowed by the National Science Foundation for excellence in both teaching and research, the NSF Director's Award for Distinguished Teaching Scholars.

Search-based Software Engineering: Challenges, Opportunities and Recent Applications

Modern software development has entered a mass-production era, with a multitude of engineering challenges in practice to provide higher performance and better functionality. To address these challenges, a growing trend has begun in recent years to move software engineering problems from human-based search to machine-based search that balances a number of constraints to achieve optimal or near-optimal solutions. As a result, human effort is moving up the abstraction chain to focus on guiding the automated search, rather than performing the search itself. This emerging software engineering paradigm is known as Search-Based Software Engineering (SBSE). It uses search-based optimization techniques, mainly those from the evolutionary computation literature, to automate the search for optimal or near-optimal solutions to software engineering problems. The SBSE approach can be, and has been, applied to many problems in software engineering that span the spectrum of activities along the software lifecycle, from requirements to maintenance and reengineering. Success has already been achieved in requirements, refactoring, project planning, testing, maintenance and reverse engineering. However, several challenges remain, mainly the growing complexity of today's software systems in terms of the number of objectives, constraints and inputs/outputs.
In this tutorial, we will cover basic knowledge about SBSE and explain its challenges and opportunities. Then we will focus on recent case studies, developed with our research group and industrial partners, that address the above challenges. These case studies will cover a range of topics in software engineering, including multi-objective detection of software design defects, static and interactive multi-objective optimization for software refactoring, and multi-objective optimization for third-party software component reuse. Finally, we will discuss possible new research directions in SBSE.
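To make the machine-based-search idea concrete, here is a minimal, hypothetical Python sketch in the style of search-based test-data generation: a branch-distance fitness guides a local search toward an input that covers a target branch. The function under test, the neighbourhood operators and the budget are all illustrative, not taken from any specific SBSE tool, which would use richer encodings and genetic operators.

```python
import random

def branch_distance(x):
    # Classic SBSE fitness for an equality predicate: the distance
    # to satisfying `x == 42`, zero exactly when the target branch
    # of the (hypothetical) function under test is taken.
    return abs(x - 42)

def local_search(seed=0, budget=2000):
    # Simple hill climber over integer inputs: propose a small
    # perturbation and keep it if the fitness does not get worse.
    rng = random.Random(seed)
    x = rng.randint(-1000, 1000)
    for _ in range(budget):
        if branch_distance(x) == 0:
            break                      # target branch covered
        candidate = x + rng.choice([-10, -1, 1, 10])
        if branch_distance(candidate) <= branch_distance(x):
            x = candidate
    return x
```

The point of the sketch is the division of labour the tutorial abstract describes: the human encodes the goal as a fitness function, and the machine performs the search.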

Ali Ouni

Ali Ouni is an Associate Professor in the Department of Software Engineering and IT at Ecole de technologie superieure (ETS), University of Quebec. He received his Ph.D. degree in computer science from the University of Montreal in 2014. Before joining ETS Montreal, he was an assistant professor at Osaka University, Japan, and at UAE University. For his exceptional Ph.D. research productivity, he was awarded the Excellence Award from the University of Montreal. He was a visiting researcher at Missouri University of Science and Technology and at the University of Michigan, in 2013 and 2014 respectively. His research interests lie in software engineering and evolutionary computation, including software maintenance and evolution, refactoring of software systems, software quality, service-oriented computing, and search-based software engineering. He has served as a program committee member and reviewer for several journals and conferences in software engineering and evolutionary computation.

Mohamed Wiem

Mohamed Wiem Mkaouer is currently an Assistant Professor in the Software Engineering Department, in the B. Thomas Golisano College of Computing and Information Sciences at the Rochester Institute of Technology, NY, USA. He received his PhD in 2016 from the University of Michigan-Dearborn. His research interests include software quality, systems refactoring, model-driven engineering and software testing. His current research focuses on the use of computational search and evolutionary algorithms to address several software engineering problems such as software quality, software remodularization, software evolution and bug management. He is serving as a program committee member and reviewer in several journals and conferences in software engineering, machine learning and evolutionary computation.

NEW Towards a Green AI: Evolutionary solutions for an ecologically viable artificial intelligence

AI methods are essential components of any renewable energy plant. This derives from the fact that making renewable energies operational, efficient, and viable requires modelling many complex phenomena and optimizing many processes. There is, however, an area that has been neglected by researchers and industry: the ecological impact of artificial intelligence itself.

Only recently has some light been cast in this direction. On one hand, it has been forecast that by 2030 half of the world's electric energy consumption will be attributed to computing facilities. On the other hand, recent studies show that designing and training a state-of-the-art machine learning model can produce as much CO2 as six average cars over their lifetimes.
This raises concerns about how to make artificial intelligence ecologically viable. Recent work shows that a combination of cloud computing, transfer and active learning, model reuse, and evolutionary computing, among others, could lead to an eco-savvy AI. However, this has yet to be properly understood, as it calls for a coordinated effort involving different areas of AI, ML, and theoretical computer science.

 

Nayat Sanchez-Pi

Nayat Sánchez-Pi is currently the Director and CEO of the Inria Chile Research Center, created in 2012 by Inria, the French National Research Institute for Digital Sciences to facilitate scientific and industrial cooperation between France, Chile, and Latin America. Before that, she was a professor of Artificial Intelligence and Human-Computer Interaction at the Department of Informatics and Computer Science of the Institute of Mathematics and Statistics of the Rio de Janeiro State University. She also co-led the Research on Intelligence and Optimisation Group (RIO). Prof. Sánchez-Pi's research interests have broadened over the years and span topics that range from artificial intelligence, machine learning, and data mining to ambient intelligence, ubiquitous computing, and multi-agent systems. This is a consequence of the collaboration with several research groups. Her research profile for the last 10 years includes active research in the area of applied artificial intelligence. She has several publications in national and international peer-reviewed conferences, book chapters, and high-standard international journals. She received a degree in Computer Science in 2000 from the University of Havana and a Ph.D. degree in Computer Science in 2011 from the Universidad Carlos III de Madrid. She has led numerous research projects applying evolutionary computation, machine learning, and other artificial intelligence methods.

 

Luis Martí

Luis is currently the scientific director of Inria Chile, the Chilean center of Inria, the French National Institute for Research in Digital Science and Technology. Before that, he was a senior researcher in the TAU team at Inria Saclay from 2015. He was also a tenured Adjunct Professor at the Institute of Computing of the Universidade Federal Fluminense. Previous to that, Luis was a CNPq Young Talent of Science Fellow at the Applied Robotics and Intelligence Lab of the Department of Electrical Engineering of the Pontifícia Universidade Católica do Rio de Janeiro, Brazil. Luis did his Ph.D. at the Group of Applied Artificial Intelligence of the Department of Informatics of the Universidad Carlos III de Madrid, Madrid, Spain, and received his Computer Science degree from the University of Havana. He is mainly interested in artificial intelligence, in particular machine learning, neural networks, evolutionary computation, optimization, and hybrid systems.