Evolving Self-organisation

Workshop description

Recent dramatic advances in the problem-solving capabilities and scale of Artificial Intelligence (AI) systems have enabled their successful application to challenging real-world scientific and engineering problems (Abramson et al 2024, Lam et al 2023). Yet these systems remain brittle to small disturbances and adversarial attacks (Su et al 2019, Cully et al 2015), lack human-level generalisation capabilities (Chollet 2019), and require alarming amounts of human, energy and financial resources (Strubell et al 2019).

Biological systems, on the other hand, seem to have largely solved many of these issues. They are capable of developing into complex organisms from a few cells and of regenerating limbs through highly energy-efficient processes shaped by evolution. They do so through self-organisation: collectives of simple components interact locally with each other to give rise to macroscopic properties in the absence of centralised control (Camazine, 2001). This ability to self-organise renders organisms adaptive to their environments and robust to unexpected failures, as the redundancy built into the collective enables components to be repurposed, crucially by leveraging the same self-organisation process that created the system in the first place.

Self-organisation lies at the core of many computational systems that exhibit properties such as robustness, adaptability, scalability and open-ended dynamics. Examples include cellular automata (Von Neumann 1966), reaction-diffusion systems (Turing 1990, Mordvintsev 2021), particle systems (Reynolds 1987, Mordvintsev), and Neural Cellular Automata (Mordvintsev et al 2020), which have shown promising results in pattern formation in high-dimensional spaces such as images. Examples from neuroevolution are indirect encodings of neural networks inspired by morphogenesis, such as cellular encodings (Gruau 1992), HyperNEAT (Stanley et al 2009), Hypernetworks (Ha 2016), HyperNCA (Najarro et al 2022) and Neural Developmental Programs (Najarro et al 2023, Nisioti et al 2024), which show improved robustness and generalisation.
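To make the notion of self-organisation concrete, the simplest of the systems above, an elementary cellular automaton, can be sketched in a few lines. Each cell updates using only local information (itself and its two neighbours), yet non-trivial global patterns emerge from a single seed. This is a minimal illustrative sketch; the function names are ours, not from any of the cited works.

```python
# A 1D binary cellular automaton with wrap-around boundaries.
# The 8-bit integer `rule` encodes the update table: bit k gives the
# next state for the neighbourhood whose (left, centre, right) cells
# spell out k in binary (Wolfram's elementary-CA convention).

def step(cells, rule=110):
    """One synchronous update from purely local interactions."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (centre << 1) | right  # neighbourhood as 3-bit index
        out.append((rule >> idx) & 1)              # look up next state in the rule
    return out

def run(width=31, steps=15, rule=110):
    """Grow a pattern from a single seed cell; return the full history."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Despite the rule table fitting in a single byte, rule 110 produces patterns complex enough that the automaton is Turing-complete; no cell has access to the global state at any point.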

Guiding self-organising systems through evolution is a long-standing and promising practice, yet the inherent complexity of their dynamics complicates scaling them to domains where gradient-based methods or simpler models excel (Risi 2021). If we view self-organising systems as genotype-to-phenotype mappings, we can leverage techniques developed in the evolutionary optimisation community to understand how they alter evolutionary dynamics and to guide them better.
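The genotype-to-phenotype view above can be made concrete: treat the parameters of a self-organising process as the genotype, the pattern it grows as the phenotype, and close the loop with a standard evolutionary algorithm. The sketch below, with illustrative names and a toy fitness function of our choosing, evolves the 8-bit rule of an elementary cellular automaton with a simple elitist (1+λ) strategy so that the grown pattern approaches a target cell density.

```python
import random

def develop(rule, width=64, steps=32):
    """Genotype -> phenotype: grow a CA pattern from a single seed."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        cells = [
            (rule >> ((cells[(i - 1) % width] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % width])) & 1
            for i in range(width)
        ]
    return cells

def fitness(rule, target_density=0.5):
    """Score a genotype by how close its grown pattern's density is to a target."""
    phenotype = develop(rule)
    density = sum(phenotype) / len(phenotype)
    return -abs(density - target_density)  # 0 is a perfect match

def evolve(generations=50, offspring=8, seed=0):
    """Elitist (1+lambda) search over the 256 possible rule tables."""
    rng = random.Random(seed)
    parent = rng.randrange(256)
    for _ in range(generations):
        # Mutate by flipping one bit of the rule table per child.
        children = [parent ^ (1 << rng.randrange(8)) for _ in range(offspring)]
        parent = max(children + [parent], key=fitness)  # elitism: keep the parent
    return parent, fitness(parent)

if __name__ == "__main__":
    best_rule, best_fit = evolve()
    print(best_rule, best_fit)
```

The point of the sketch is the separation of concerns: the evolutionary loop never inspects the developmental dynamics, it only scores phenotypes, which is exactly what makes tools from evolutionary optimisation applicable to self-organising substrates.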

The reverse is also possible: evolution can emerge as an inherent property of a self-organising system, allowing us to study questions about the origin of life. Investigating the conditions under which such emergent evolution appears, and the particular evolutionary behaviours it produces, could afford insights applicable to existing artificial evolutionary approaches, or even directly provide an evolutionary substrate for learning tasks and achieving open-endedness. Early work in this direction (Ray 1992, Agüera y Arcas et al 2024, Fontana 1990, Adami et al 1994, Rasmussen et al 1991) has demonstrated emergent evolution in several computational substrates.

References

  • J. Abramson et al., “Accurate structure prediction of biomolecular interactions with AlphaFold 3,” Nature, pp. 1–3, May 2024, doi: 10.1038/s41586-024-07487-w.
  • J. Su, D. V. Vargas, and S. Kouichi, “One pixel attack for fooling deep neural networks,” IEEE Trans. Evol. Computat., vol. 23, no. 5, pp. 828–841, Oct. 2019, doi: 10.1109/TEVC.2019.2890858.
  • R. Lam et al., “GraphCast: Learning skillful medium-range global weather forecasting,” Aug. 04, 2023, arXiv: arXiv:2212.12794. Accessed: May 31, 2024. [Online]. Available: http://arxiv.org/abs/2212.12794
  • A. Cully, J. Clune, D. Tarapore, and J.-B. Mouret, “Robots that can adapt like animals,” Nature, vol. 521, no. 7553, pp. 503–507, May 2015, doi: 10.1038/nature14422.
  • F. Chollet, “On the Measure of Intelligence,” Nov. 25, 2019, arXiv: arXiv:1911.01547. doi: 10.48550/arXiv.1911.01547.
  • E. Strubell, A. Ganesh, and A. McCallum, “Energy and Policy Considerations for Deep Learning in NLP,” Jun. 05, 2019, arXiv: arXiv:1906.02243. doi: 10.48550/arXiv.1906.02243.
  • S. Camazine, J.-L. Deneubourg, N. R. Franks, J. Sneyd, G. Theraulaz, and E. Bonabeau, Self-Organization in Biological Systems, vol. 38. Princeton University Press, 2001. doi: 10.2307/j.ctvzxx9tx.
  • A. Mordvintsev, E. Randazzo, and E. Niklasson, “Differentiable Programming of Reaction-Diffusion Patterns,” Jun. 22, 2021, arXiv: arXiv:2107.06862. doi: 10.48550/arXiv.2107.06862.
  • A. M. Turing, “The chemical basis of morphogenesis,” Bulletin of Mathematical Biology, vol. 52, no. 1, pp. 153–197, Jan. 1990, doi: 10.1007/BF02459572.
  • C. W. Reynolds, “Flocks, herds and schools: A distributed behavioral model,” in Proceedings of the 14th annual conference on Computer graphics and interactive techniques, in SIGGRAPH ’87. New York, NY, USA: Association for Computing Machinery, Aug. 1987, pp. 25–34. doi: 10.1145/37401.37406.
  • A. Mordvintsev, “Self-Organizing Particle Swarm.” [Online]. Available: https://znah.net/icra23/
  • B. A. y Arcas et al., “Computational Life: How Well-formed, Self-replicating Programs Emerge from Simple Interaction,” Aug. 02, 2024, arXiv: arXiv:2406.19108. doi: 10.48550/arXiv.2406.19108.
  • W. Fontana, “Algorithmic Chemistry: A model for functional self-organization.”
  • C. Adami and C. T. Brown, “Evolutionary Learning in the 2D Artificial Life System ‘Avida,’” May 16, 1994, arXiv: arXiv:adap-org/9405003. doi: 10.48550/arXiv.adap-org/9405003.
  • S. Rasmussen, C. Knudsen, R. Feldberg, and M. Hindsholm, “The coreworld: emergence and evolution of cooperative structures in a computational chemistry,” in Emergent computation, Cambridge, MA, USA: MIT Press, 1991, pp. 111–134.
  • E. Najarro, S. Sudhakaran, and S. Risi, “Towards Self-Assembling Artificial Neural Networks through Neural Developmental Programs,” Jul. 16, 2023, arXiv: arXiv:2307.08197. Accessed: Oct. 03, 2023. [Online]. Available: http://arxiv.org/abs/2307.08197
  • E. Nisioti, E. Plantec, M. Montero, J. Pedersen, and S. Risi, “Growing Artificial Neural Networks for Control: the Role of Neuronal Diversity,” in Proceedings of the Genetic and Evolutionary Computation Conference Companion, Melbourne VIC Australia: ACM, Jul. 2024, pp. 175–178. doi: 10.1145/3638530.3654356.

Call for papers

We invite authors to submit papers on the above subjects through the GECCO submission system. We encourage two categories of submissions: papers of up to four pages showcasing early research ideas, and papers of up to eight pages presenting more substantial contributions (such as technical contributions, benchmarks, negative results, or surveys). The page count excludes references and appendices, and submissions should follow the GECCO format. We encourage submissions related to evolution and self-organisation that address the following questions:

  • How can we evolve artificial systems that exhibit robustness, generalisation, and adaptability?
  • What properties are missing from current self-organising systems? Can we design new ones?
  • How can we analyse the trainability/navigability of self-organising systems?
  • How can evolutionary processes such as self-replication emerge in a self-organising system?
  • Which benchmarks/environments will reveal the benefit of self-organising systems?
  • Which scientific and engineering domains would benefit from the development of such systems?

Important dates

  • Submission deadline: March 26, 2025
  • Notification: April 28, 2025
  • Camera ready: May 5, 2025
  • Authors' mandatory registration: May 8, 2025

Organizers

Eleni Nisioti

IT University of Copenhagen

Sebastian Risi

IT University of Copenhagen

Ettore Randazzo

Google, Zürich

Alex Mordvintsev

Google, Zürich

Joachim Winther Pedersen

IT University of Copenhagen

Eyvind Niklasson

Google, Zürich

Contact

If you have any questions regarding the workshop, you can reach out to enis@itu.dk.