Description

Recent dramatic advances in the problem-solving capabilities and scale of Artificial Intelligence (AI) systems have enabled their successful application to challenging real-world scientific and engineering problems (Abramson et al. 2024, Lam et al. 2023). Yet these systems remain brittle to small disturbances and adversarial attacks (Su et al. 2019, Cully et al. 2015), lack human-level generalization capabilities (Chollet 2019), and require alarming amounts of human, energy, and financial resources (Strubell et al. 2019).

Biological systems, on the other hand, seem to have largely solved many of these issues. They are capable of developing into complex organisms from a few cells and of regenerating limbs through highly energy-efficient processes shaped by evolution. They do so through self-organization: collectives of simple components interact locally with each other to give rise to macroscopic properties in the absence of centralized control (Camazine et al. 2001). This ability to self-organize renders organisms adaptive to their environments and robust to unexpected failures: the redundancy built into the collective enables components to be repurposed, crucially, through the same self-organization process that created the system in the first place.

Classical examples of self-organizing systems are Cellular Automata (Von Neumann 1966) and their more recent counterparts Neural Cellular Automata (Mordvintsev et al. 2020) and Lenia (Chan 2019), reaction-diffusion systems (Turing 1990, Mordvintsev et al. 2021), particle systems (Reynolds 1987, Mordvintsev), and random boolean networks (Kauffman). Examples of self-organizing systems for neural network optimization are indirect genotype-to-phenotype mappings such as cellular encodings (Gruau 1992), Hypernetworks (Ha 2016), HyperNCA (Najarro et al. 2022), Neural Developmental Programs (Najarro et al. 2023, Nisioti et al. 2024), and Hebbian learning. An exciting area emerging at the intersection of Artificial Life and AI is autoregressive graph generative models, such as graph rewrite networks (Wolfram 2020) and deep learning models for graph generation (Liao et al. 2019). Self-organizing systems are also relevant for the study of social dynamics, as with Schelling's model (Schelling 1978), Spatial Social Dilemmas (Nowak and May 1992) and, more recently, groups of Large Language Models (Nisioti et al. 2024).
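
To make the notion of self-organization concrete, the following minimal sketch implements an elementary cellular automaton in Python with NumPy. The rule number (110) and grid size are arbitrary illustrative choices: each cell updates from purely local information, yet non-trivial global patterns emerge.

```python
import numpy as np

def step(state, rule=110):
    """One update of an elementary cellular automaton.
    Each cell looks only at itself and its two neighbours (local
    interaction); any global pattern emerges without central control."""
    table = [(rule >> i) & 1 for i in range(8)]  # rule bits as lookup table
    left = np.roll(state, 1)    # periodic (wrap-around) boundary
    right = np.roll(state, -1)
    idx = 4 * left + 2 * state + right  # encode each 3-cell neighbourhood
    return np.array([table[i] for i in idx], dtype=np.uint8)

# grow a pattern from a single live cell
state = np.zeros(64, dtype=np.uint8)
state[32] = 1
history = [state]
for _ in range(31):
    state = step(state)
    history.append(state)
```

Stacking the rows of `history` as an image yields the familiar triangular Rule 110 pattern, a macroscopic structure no single cell encodes.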

The adaptive, autonomous, and complex nature of self-organizing systems presents significant challenges for optimization (Risi 2021, Mitchell et al. 1996). Yet this same complexity is hypothesized to hold the key to scaling up evolutionary optimization: development can reduce the effective parameter space and shape the fitness landscape, enabling sample-efficient search in high-dimensional spaces (Kauffman and Levin 1987).
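
The idea that development compresses the search space can be illustrated with a toy indirect encoding. The function `develop` and its sinusoidal form below are hypothetical constructions of our own for illustration, not taken from any cited work: a three-gene genotype is "grown" into a full weight matrix, so evolution searches in 3 dimensions rather than 4096.

```python
import numpy as np

def develop(genotype, n=64):
    """Toy indirect genotype-to-phenotype mapping: three genes
    parameterize a rule that grows an n x n weight matrix, so the
    evolutionary search space has 3 dimensions instead of n * n."""
    amplitude, frequency, bias = genotype
    coords = np.linspace(-1.0, 1.0, n)
    # each weight depends only on the coordinates of the units it connects
    return amplitude * np.outer(np.sin(frequency * coords),
                                np.cos(frequency * coords)) + bias

genotype = np.array([0.5, 3.0, 0.1])
weights = develop(genotype)  # 3 genes expand into 64 x 64 = 4096 weights
```

The mapping also shapes the fitness landscape: only smooth, structured weight matrices are reachable, which is exactly the bias that can make high-dimensional search tractable.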

Our workshop aims to bring together experts in evolutionary optimization and self-organizing systems to discuss how the field has evolved over recent decades, and how the latest developments can drive advances in self-organizing systems and their applications across science and engineering.

Call for papers

We invite authors to submit papers through the GECCO submission system. We welcome two categories of submissions: short papers of up to four pages presenting early-stage research ideas, and full papers of up to eight pages reporting more substantial contributions (including technical advances, benchmarks, negative results, and surveys). Page limits exclude references and appendices; all submissions should follow the GECCO format. We welcome any work at the intersection of evolution and self-organization (we adopt a broad definition of self-organization; see the Description above for examples of systems), including but not limited to the following questions:

What are the challenges of optimizing self-organizing systems? Early studies of cellular automata employed primarily genetic algorithms for their optimization (Mitchell et al. 1994, 1996). Numerous works have identified challenges in such setups: self-organizing systems are characterized by a need for symmetry breaking (Mitchell et al. 1996) and fitness landscapes riddled with local optima (Kauffman and Levin 1987, Greenbury et al. 2022). Today the dominant training paradigm for Neural CA is gradient-based optimization (Mordvintsev et al. 2020). Although often more sample-efficient, it has been shown to be outcompeted by modern neuroevolution approaches in the presence of local optima (Such et al. 2017), noisy feedback (Salimans et al. 2017), and non-stationarity (Nisioti et al. 2025). We welcome empirical and theoretical studies showcasing challenges and benefits of different optimization approaches for self-organizing systems.
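
As a sketch of the evolutionary side of this comparison, the snippet below implements the evolution strategy of Salimans et al. (2017) on a stand-in objective. The quadratic `fitness` is a placeholder of our own for an expensive simulation of a self-organizing system; the point is that the update needs only fitness evaluations, so it tolerates non-differentiable or noisy dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(theta):
    """Placeholder objective (higher is better); in practice this would
    run a self-organizing simulation and score its outcome."""
    target = np.linspace(-1.0, 1.0, theta.size)
    return -np.sum((theta - target) ** 2)

def evolve(dim=16, pop=32, sigma=0.1, lr=0.05, gens=300):
    """Evolution strategy in the style of Salimans et al. (2017):
    estimate a search direction from perturbed fitness evaluations
    alone, with no backpropagation through the dynamics."""
    theta = np.zeros(dim)
    for _ in range(gens):
        eps = rng.standard_normal((pop, dim))       # population of perturbations
        scores = np.array([fitness(theta + sigma * e) for e in eps])
        ranks = (scores - scores.mean()) / (scores.std() + 1e-8)
        theta += lr / (pop * sigma) * eps.T @ ranks  # fitness-weighted update
    return theta
```

Because each perturbation is evaluated independently, the inner loop parallelizes trivially, one practical reason such methods scale well.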

Why do we need artificial self-organizing systems? From an AI perspective, such systems offer advantages often missing in today's approaches, including robustness, scalability, parameter efficiency, and improved generalization. From an application standpoint, they can model natural systems more effectively than black-box methods; synthetic biology is one example. We welcome studies exploring known or novel benefits, limitations, and links to applications.

How can we analyze the trainability of self-organizing systems? Theoretical and empirical work on fitness landscapes and random boolean networks has shed light on the challenges of training self-organizing systems (Kauffman and Levin 1987, Greenbury et al. 2022). The evolutionary optimization community has placed great emphasis on developing tools for analyzing fitness landscapes and understanding the effect of the genotype-to-phenotype mapping (Thomson et al. 2024). How can we apply and extend such approaches to self-organizing systems?
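
The NK model of Kauffman and Levin (1987) gives a minimal empirical handle on such questions. The sketch below is our own toy implementation with illustrative parameter values: it builds a random NK landscape and counts one-mutant local optima by exhaustive enumeration, exposing how ruggedness grows with the epistasis parameter K.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

def nk_landscape(N, K):
    """Random NK fitness landscape: each locus's contribution depends on
    its own state and the states of its K right-hand neighbours (with
    wrap-around), drawn from a random lookup table."""
    tables = rng.random((N, 2 ** (K + 1)))
    def fitness(genome):
        total = 0.0
        for i in range(N):
            idx = 0
            for j in range(K + 1):
                idx = (idx << 1) | genome[(i + j) % N]
            total += tables[i, idx]
        return total / N
    return fitness

def count_local_optima(N, K):
    """Enumerate all 2^N genomes and count those strictly fitter than
    every one-mutant neighbour; more optima means a more rugged landscape."""
    f = nk_landscape(N, K)
    count = 0
    for g in itertools.product((0, 1), repeat=N):
        fg = f(g)
        if all(f(g[:i] + (1 - g[i],) + g[i + 1:]) < fg for i in range(N)):
            count += 1
    return count
```

For K = 0 the loci are independent and the landscape has a single peak; as K approaches N - 1, contributions become fully epistatic and local optima proliferate, which is the regime where adaptive walks stall.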

Submission deadline: March 27, 2026
Notification: April 24, 2026
Conference dates: July 13–17, 2026

Important note: Accepted workshop papers will be published in the conference proceedings as usual. However, starting this year, ACM has adopted a new Open Access publishing model. For our workshop, this means that papers of up to four pages are free of charge, while longer papers incur a publication fee. If your institution is listed among the ACM Open participants, the fee for longer papers is waived.

Speakers

TBA

Schedule

TBA

Organizers

  • Eleni Nisioti

    IT University of Copenhagen

  • Eyvind Niklasson

    Google Research, Zurich

  • Alex Mordvintsev

    Google Research, Zurich

  • Ettore Randazzo

    Google Research, Zurich

  • Marcello Barylli

    IT University of Copenhagen

  • Sebastian Risi

    IT University of Copenhagen

  • Milton Montero

    IT University of Copenhagen

  • Mayalen Etcheverry

    Google Research, Zurich

Past workshops

References

J. Abramson et al., "Accurate structure prediction of biomolecular interactions with AlphaFold 3," Nature, pp. 1–3, May 2024, doi: 10.1038/s41586-024-07487-w.

J. R. Koza, Genetic programming II: automatic discovery of reusable programs. Cambridge, MA, USA: MIT Press, 1994.

B. Georgiev, J. Gómez-Serrano, T. Tao, and A. Z. Wagner, "Mathematical exploration and discovery at scale," Nov. 06, 2025, arXiv: arXiv:2511.02864. doi: 10.48550/arXiv.2511.02864.

A. Novikov et al., "AlphaEvolve: A coding agent for scientific and algorithmic discovery," June 17, 2025, arXiv: arXiv:2506.13131. doi: 10.48550/arXiv.2506.13131.

X. Qiu et al., "Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning," Sept. 29, 2025, arXiv: arXiv:2509.24372. doi: 10.48550/arXiv.2509.24372.

J. Su, D. V. Vargas, and S. Kouichi, "One pixel attack for fooling deep neural networks," IEEE Trans. Evol. Computat., vol. 23, no. 5, pp. 828–841, Oct. 2019, doi: 10.1109/TEVC.2019.2890858.

R. Lam et al., "GraphCast: Learning skillful medium-range global weather forecasting," Aug. 04, 2023, arXiv: arXiv:2212.12794. Accessed: May 31, 2024. [Online]. Available: http://arxiv.org/abs/2212.12794

R. T. Lange, Y. Imajuku, and E. Cetin, "ShinkaEvolve: Towards Open-Ended And Sample-Efficient Program Evolution," Sept. 17, 2025, arXiv: arXiv:2509.19349. doi: 10.48550/arXiv.2509.19349.

A. Cully, J. Clune, D. Tarapore, and J.-B. Mouret, "Robots that can adapt like animals," Nature, vol. 521, no. 7553, pp. 503–507, May 2015, doi: 10.1038/nature14422.

F. Chollet, "On the Measure of Intelligence," Nov. 25, 2019, arXiv: arXiv:1911.01547. doi: 10.48550/arXiv.1911.01547.

E. Strubell, A. Ganesh, and A. McCallum, "Energy and Policy Considerations for Deep Learning in NLP," Jun. 05, 2019, arXiv: arXiv:1906.02243. doi: 10.48550/arXiv.1906.02243.

S. Camazine, J.-L. Deneubourg, N. R. Franks, J. Sneyd, G. Theraulaz, and E. Bonabeau, Self-Organisation in Biological Systems, vol. 38. Princeton University Press, 2001. doi: 10.2307/j.ctvzxx9tx.

A. Mordvintsev, E. Randazzo, and E. Niklasson, "Differentiable Programming of Reaction-Diffusion Patterns," Jun. 22, 2021, arXiv: arXiv:2107.06862. doi: 10.48550/arXiv.2107.06862.

A. M. Turing, "The chemical basis of morphogenesis," Bltn Mathcal Biology, vol. 52, no. 1, pp. 153–197, Jan. 1990, doi: 10.1007/BF02459572.

C. W. Reynolds, "Flocks, herds and schools: A distributed behavioral model," in Proceedings of the 14th annual conference on Computer graphics and interactive techniques, in SIGGRAPH '87. New York, NY, USA: Association for Computing Machinery, Aug. 1987, pp. 25–34. doi: 10.1145/37401.37406.

A. Mordvintsev, "Self-Organizing Particle Swarm." [Online]. Available: https://znah.net/icra23/

B. W.-C. Chan, "Lenia - Biology of Artificial Life," ComplexSystems, vol. 28, no. 3, pp. 251–286, Oct. 2019, doi: 10.25088/ComplexSystems.28.3.251.

B. A. y Arcas et al., "Computational Life: How Well-formed, Self-replicating Programs Emerge from Simple Interaction," Aug. 02, 2024, arXiv: arXiv:2406.19108. doi: 10.48550/arXiv.2406.19108.

W. Fontana, "Algorithmic Chemistry: A model for functional self-organization."

C. Adami and C. T. Brown, "Evolutionary Learning in the 2D Artificial Life System 'Avida,'" May 16, 1994, arXiv: arXiv:adap-org/9405003. doi: 10.48550/arXiv.adap-org/9405003.

S. Rasmussen, C. Knudsen, R. Feldberg, and M. Hindsholm, "The coreworld: emergence and evolution of cooperative structures in a computational chemistry," in Emergent computation, Cambridge, MA, USA: MIT Press, 1991, pp. 111–134.

E. Najarro, S. Sudhakaran, and S. Risi, "Towards Self-Assembling Artificial Neural Networks through Neural Developmental Programs," Jul. 16, 2023, arXiv: arXiv:2307.08197. Accessed: Oct. 03, 2023. [Online]. Available: http://arxiv.org/abs/2307.08197

E. Nisioti, E. Plantec, M. Montero, J. Pedersen, and S. Risi, "Growing Artificial Neural Networks for Control: the Role of Neuronal Diversity," in Proceedings of the Genetic and Evolutionary Computation Conference Companion, Melbourne VIC Australia: ACM, Jul. 2024, pp. 175–178. doi: 10.1145/3638530.3654356.

T. C. Schelling, "Micromotives and Macrobehavior," 1st ed. New York: Norton, 1978.

M. A. Nowak and R. M. May, "Evolutionary games and spatial chaos," Nature, vol. 359, no. 6398, pp. 826–829, Oct. 1992, doi: 10.1038/359826a0.

E. Nisioti, S. Risi, I. Momennejad, P.-Y. Oudeyer, and C. Moulin-Frier, "Collective Innovation in Groups of Large Language Models," in ALIFE 2024: Proceedings of the 2024 Artificial Life Conference, MIT Press, 2024. Accessed: Feb. 06, 2025. [Online]. Available: https://direct.mit.edu/isal/proceedings-abstract/isal2024/36/123452

M. Mitchell, J. P. Crutchfield, and R. Das, "Evolving cellular automata with genetic algorithms: A review of recent work," in Proceedings of the First International Conference on Evolutionary Computation and Its Applications (EvCA'96), 1996.

M. Mitchell, J. P. Crutchfield, and P. T. Hraber, "Evolving cellular automata to perform computations: Mechanisms and impediments," Physica D, vol. 75, pp. 361–391, 1994.

S. Kauffman and S. Levin, "Towards a general theory of adaptive walks on rugged landscapes," Journal of Theoretical Biology, vol. 128, no. 1, pp. 11–45, 1987.

S. F. Greenbury, A. A. Louis, and S. E. Ahnert, "The structure of genotype-phenotype maps makes fitness landscapes navigable," Nature Ecology & Evolution, vol. 6, pp. 1–11, 2022.

F. P. Such, V. Madhavan, E. Conti, J. Lehman, K. O. Stanley, and J. Clune, "Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning," Dec. 2017, arXiv: arXiv:1712.06567.

T. Salimans, J. Ho, X. Chen, S. Sidor, and I. Sutskever, "Evolution Strategies as a Scalable Alternative to Reinforcement Learning," Sept. 2017, arXiv: arXiv:1703.03864.

E. Nisioti, M. Khajehnejad, M. Borg, and S. Dahl, "When Does Neuroevolution Outcompete Reinforcement Learning?," Apr. 2025, arXiv: arXiv:2504.16779.

S. L. Thomson, L. Le Goff, E. Hart, and E. Buchanan, "Understanding Fitness Landscapes in Morpho-Evolution via Local Optima Networks," in Proceedings of the Genetic and Evolutionary Computation Conference, Melbourne VIC Australia: ACM, Jul. 2024, pp. 114–123. doi: 10.1145/3638529.3654059.

S. Wolfram, "A Class of Models with the Potential to Represent Fundamental Physics," Complex Systems, vol. 29, no. 2, pp. 107–536, 2020, doi: 10.25088/ComplexSystems.29.2.107.

Contact

For questions or further information, please contact us at enis@itu.dk