Evolving
Self-organisation
How collectives emerge, self-regulate and adapt, guided by evolution
Recent dramatic advances in the problem-solving capabilities and scale of Artificial Intelligence (AI) systems have enabled their successful application to challenging real-world scientific and engineering problems (Abramson et al 2024, Lam et al 2023). Yet these systems remain brittle to small disturbances and adversarial attacks (Su et al 2019, Cully 2014), lack human-level generalisation capabilities (Chollet 2019), and require alarming amounts of human, energy and financial resources (Strubell et al 2019).
Biological systems, on the other hand, seem to have largely solved many of these issues. They are capable of developing into complex organisms from a few cells and of regenerating limbs through highly energy-efficient processes shaped by evolution. They do so through self-organisation: collectives of simple components interact locally with one another to give rise to macroscopic properties in the absence of centralised control (Camazine et al 2001). This ability to self-organise renders organisms adaptive to their environments and robust to unexpected failures, as the redundancy built into the collective enables components to be repurposed, crucially, by leveraging the same self-organisation process that created the system in the first place.
Can we harness such self-organisation processes for the benefit of modern AI? And can such nature-inspired AI systems prove useful tools for science and applications?
Self-organisation lies at the core of many computational systems that exhibit properties such as robustness, adaptability, scalability and open-ended dynamics. Examples include Cellular Automata (Von Neumann 1966), a model inspired by the process of morphogenesis, and their more recent counterpart, Neural Cellular Automata, which show promising results in pattern formation in high-dimensional spaces such as images (Mordvintsev et al 2020). Morphogenesis has likewise inspired indirect encodings of neural networks, such as cellular encodings (Gruau 1992), HyperNEAT (Stanley et al 2009), Hypernetworks (Ha 2016) and HyperNCAs (Najarro et al 2022), which show improved robustness and generalisation.
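To make the idea of local interaction producing global structure concrete, the following is a minimal sketch of a one-dimensional elementary cellular automaton: each cell updates using only its own state and that of its two neighbours, yet coherent global patterns emerge. The rule number, grid width and step count are illustrative choices, not taken from the text.

```python
def step(cells, rule):
    """Apply an elementary CA rule (an integer 0-255) once, with wraparound."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (centre << 1) | right  # neighbourhood code 0..7
        out.append((rule >> index) & 1)              # look up the rule table
    return out

def run(width=31, steps=15, rule=110):
    """Grow a pattern from a single 'on' cell; return the full history."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

No component ever sees the whole grid, yet the printed history shows structured, self-organised patterns; this is the same decentralised principle that Neural Cellular Automata scale up with learned update rules.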
Guiding self-organising systems through evolution is a long-standing and promising practice, yet the inherent complexity of these systems' dynamics complicates their scaling to domains where gradient-based methods or simpler models excel (Risi 2021). If we view self-organising systems as genotype-to-phenotype mappings, we can leverage techniques developed in the evolutionary optimisation community to understand how such mappings alter evolutionary dynamics, and to guide them better.
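The genotype-to-phenotype view above can be sketched in a few lines: here the genotype is an 8-bit elementary cellular automaton rule table, the phenotype is the pattern that rule grows from a single seed cell, and a simple (1+lambda)-style loop evolves the rule towards a target pattern. All parameters (grid size, step count, mutation scheme, target) are invented for the demo, not drawn from any particular paper.

```python
import random

WIDTH, STEPS = 20, 9

def develop(rule):
    """Genotype (rule 0-255) -> phenotype: the CA row grown from one seed cell."""
    cells = [0] * WIDTH
    cells[WIDTH // 2] = 1
    for _ in range(STEPS):
        cells = [
            (rule >> ((cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % WIDTH])) & 1
            for i in range(WIDTH)
        ]
    return cells

def fitness(rule, target):
    """Count cells of the developed phenotype that match the target pattern."""
    return sum(p == t for p, t in zip(develop(rule), target))

def evolve(target, generations=200, lam=8, seed=0):
    """(1+lambda)-style loop: mutate the rule table, keep equal-or-better children."""
    rng = random.Random(seed)
    parent = rng.randrange(256)
    best = fitness(parent, target)
    for _ in range(generations):
        for _ in range(lam):
            child = parent ^ (1 << rng.randrange(8))  # flip one rule-table bit
            f = fitness(child, target)
            if f >= best:
                parent, best = child, f
    return parent, best

if __name__ == "__main__":
    target = develop(90)  # a pattern some rule can actually produce
    rule, score = evolve(target)
    print(rule, score, "/", WIDTH)
```

The point of the sketch is that selection acts on the developed phenotype while mutation acts on the compact genotype; the indirect mapping is exactly what makes the evolutionary dynamics of self-organising systems interesting, and harder, to analyse.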
The workshop on Self-organising Artificial and Natural Intelligence (SANI) invites researchers to consider self-organisation in the age of deep learning, to submit their work, and to join our discussions on questions such as:
We hope that our workshop can provide a welcoming and diverse forum for discussing these questions, sowing the seeds for a longer-term dialogue.
References
[1] D. Silver et al., “A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play,” Science, vol. 362, no. 6419, pp. 1140–1144, Dec. 2018, doi: 10.1126/science.aar6404.
[2] Open Ended Learning Team et al., “Open-Ended Learning Leads to Generally Capable Agents.” arXiv, Jul. 31, 2021. doi: 10.48550/arXiv.2107.12808.
[3] J. Wei et al., “Emergent Abilities of Large Language Models,” arXiv.org. Accessed: Jun. 06, 2024. [Online]. Available: https://arxiv.org/abs/2206.07682v2
[4] J. Abramson et al., “Accurate structure prediction of biomolecular interactions with AlphaFold 3,” Nature, pp. 1–3, May 2024, doi: 10.1038/s41586-024-07487-w.
[5] R. Lam et al., “GraphCast: Learning skillful medium-range global weather forecasting.” arXiv, Aug. 04, 2023. Accessed: May 31, 2024. [Online]. Available: http://arxiv.org/abs/2212.12794
[6] J. Su, D. V. Vargas, and S. Kouichi, “One pixel attack for fooling deep neural networks,” IEEE Trans. Evol. Computat., vol. 23, no. 5, pp. 828–841, Oct. 2019, doi: 10.1109/TEVC.2019.2890858.
[7] X. Qu, Z. Sun, Y.-S. Ong, A. Gupta, and P. Wei, “Minimalistic Attacks: How Little it Takes to Fool a Deep Reinforcement Learning Policy.” arXiv, Oct. 29, 2020. doi: 10.48550/arXiv.1911.03849.
[8] F. Chollet, “On the Measure of Intelligence.” arXiv, Nov. 25, 2019. doi: 10.48550/arXiv.1911.01547.
[9] E. Strubell, A. Ganesh, and A. McCallum, “Energy and Policy Considerations for Deep Learning in NLP.” arXiv, Jun. 05, 2019. doi: 10.48550/arXiv.1906.02243.
[10] S. Camazine, J.-L. Deneubourg, N. R. Franks, J. Sneyd, G. Theraulaz, and E. Bonabeau, Self-Organization in Biological Systems, vol. 38. Princeton University Press, 2001. doi: 10.2307/j.ctvzxx9tx.
[11] M. Minsky, The Society of Mind. New York, NY, USA: Simon & Schuster, 1986.
[12] N. Wiener, Cybernetics, or Control and Communication in the Animal and the Machine, 2nd ed. Cambridge, MA, USA: MIT Press, 1961, pp. xvii, 212. doi: 10.1037/13140-000.
[13] M. S. C. Thomas and J. L. McClelland, “Connectionist models of cognition,” in The Cambridge handbook of computational psychology, New York, NY, US: Cambridge University Press, 2008, pp. 23–58. doi: 10.1017/CBO9780511816772.005.
[14] D. O. Hebb, The Organization of Behavior: A Neuropsychological Theory. Oxford, England: Wiley, 1949, pp. xix, 335.
[15] E. Randazzo, E. Niklasson, and A. Mordvintsev, “MPLP: Learning a Message Passing Learning Protocol.” arXiv, Jul. 03, 2020. doi: 10.48550/arXiv.2007.00970.
[16] W. Aguilar, G. Santamaría-Bonfil, T. Froese, and C. Gershenson, “The Past, Present, and Future of Artificial Life,” Front. Robot. AI, vol. 1, Oct. 2014, doi: 10.3389/frobt.2014.00008.
[17] J. V. Neumann and A. W. Burks, Theory of Self-Reproducing Automata. USA: University of Illinois Press, 1966.
[18] C. W. Reynolds, “Flocks, herds and schools: A distributed behavioral model,” SIGGRAPH Comput. Graph., vol. 21, no. 4, pp. 25–34, Aug. 1987, doi: 10.1145/37402.37406.
[19] A. Mordvintsev, E. Randazzo, E. Niklasson, and M. Levin, “Growing Neural Cellular Automata,” Distill, vol. 5, no. 2, p. 10.23915/distill.00023, Feb. 2020, doi: 10.23915/distill.00023.
[20] S. Sudhakaran et al., “Growing 3D Artefacts and Functional Machines with Neural Cellular Automata.” arXiv, Jun. 04, 2021. doi: 10.48550/arXiv.2103.08737.
[21] A. M. Zador, “A critique of pure learning and what artificial neural networks can learn from animal brains,” Nat Commun, vol. 10, no. 1, p. 3770, Aug. 2019, doi: 10.1038/s41467-019-11786-6.
[22] D. Ha, A. Dai, and Q. V. Le, “HyperNetworks.” arXiv, Dec. 01, 2016. doi: 10.48550/arXiv.1609.09106.
[23] E. Najarro, S. Sudhakaran, and S. Risi, “Towards Self-Assembling Artificial Neural Networks through Neural Developmental Programs.” arXiv, Jul. 16, 2023. Accessed: Oct. 03, 2023. [Online]. Available: http://arxiv.org/abs/2307.08197
[24] B. Baker et al., “Emergent Tool Use From Multi-Agent Autocurricula,” presented at the International Conference on Learning Representations, 2020. Accessed: May 19, 2020. [Online]. Available: https://openreview.net/forum?id=SkxpxJBKwS
[25] Y. Du, S. Li, A. Torralba, J. B. Tenenbaum, and I. Mordatch, “Improving Factuality and Reasoning in Language Models through Multiagent Debate.” arXiv, May 23, 2023. doi: 10.48550/arXiv.2305.14325.
Antony is the Alle Davis Harris Professor of Biology and Chair of Neuroscience at Cold Spring Harbor Laboratory, where his lab studies the neural circuits underlying auditory decisions, the development of new technologies for sequencing the connectome, and the application of insights from neuroscience to artificial intelligence.
Bert is a Research Software Engineer at Google DeepMind, Japan. He holds an M.Sc. in Cognitive Science from Lund University, Sweden, and a B.Sc. in Computer Science from CUHK, Hong Kong. His work focuses on self-organising and open-ended systems. He created Lenia, a modern family of cellular automata with a very active community in the ALife field.
is a Principal Researcher at Microsoft Research, New York City, USA. She broadly focuses on studying how humans and AIs build models of the world and use them in memory, exploration and planning. To that end, she combines approaches from Reinforcement Learning, neural networks, large language models \& machine learning with behavioural experiments, fMRI and electrophysiological measurements.
is a PhD student in Neuroscience and AI at the University of Cambridge and is about to embark on a post-doctoral position at the Department of Physiology, Anatomy and Genetics, University of Oxford. He studies how to combine large-scale neuroscience data with artificial neural networks in order to understand which features underlie highly efficient learning and inference in brains and machines.
Jessica is a professor at the Santa Fe Institute, where she directs the Collective Computation Group. Her research focuses on the roles of information processing, coarse-graining, and collective computation in the emergence of robust structure and function in nature and society. Flack was previously founding director of the University of Wisconsin–Madison's Center for Complexity and Collective Computation in the Wisconsin Institutes for Discovery.
is a Professor of Neurobiology at Freie Universität Berlin. His lab studies the genetic basis of brain wiring and maintenance using genetic manipulations and \emph{in vivo} studies of biological neural networks during self-assembly. He has published the book \emph{The Self-Assembling Brain}, where he discusses these questions and their relation to other fields of research such as Neuroscience, AI and Robotics.
Iain is Director of the Department of Collective Behavior at the Max Planck Institute of Animal Behavior and a Full Professor at the University of Konstanz. His work aims to reveal the fundamental principles that underlie evolved collective behavior, from insect swarms to fish schools and primate groups. He has received multiple awards, such as the Searle Scholar Award in 2008 and the Lagrange Prize in 2019.
Sabine is Reader (Associate Professor) in Swarm Engineering at the University of Bristol, UK. Her research focuses on creating robotic swarms that solve problems at varying scales, from nanobots for cancer treatment to larger robots for environmental monitoring or logistics. She holds senior positions in two non-profits dedicated to scientific outreach, Robohub.org and AIhub.org, and is co-founder and current President of the former.
TBA