How evolution learns to generalise: using the principles of learning theory to understand the evolution of developmental organisation
- Author
- Markus Brede, Jeff Clune, Kostas Kouvaris, Richard A. Watson, and Loizos Kounios
- Subjects
- Evolutionary Biology, Evolutionary Developmental Biology, Evolutionary Genetics, Evolvability, Natural Selection, Biological Evolution, Phenotype, Evolutionary Adaptation, Evolutionary Processes, Genetics, Molecular Biology, Developmental Biology, Computational Biology, Computer Science, Machine Learning, Artificial Intelligence, Learning Theory, Principles of Learning, Learning and Memory, Cognitive Science, Cognitive Psychology, Psychology, Neuroscience, Ecology, Models, Biological, Selection, Genetic, Humans
- Abstract
One of the most intriguing questions in evolution is how organisms exhibit suitable phenotypic variation to rapidly adapt in novel selective environments. Such variability is crucial for evolvability, but poorly understood. In particular, how can natural selection favour developmental organisations that facilitate adaptive evolution in previously unseen environments? Such a capacity suggests foresight that is incompatible with the short-sighted concept of natural selection. A potential resolution is provided by the idea that evolution may discover and exploit information not only about the particular phenotypes selected in the past, but also about their underlying structural regularities: new phenotypes, with the same underlying regularities but novel particulars, may then be useful in new environments. If true, we still need to understand the conditions in which natural selection will discover such deep regularities rather than exploiting ‘quick fixes’ (i.e., fixes that provide adaptive phenotypes in the short term, but limit future evolvability). Here we argue that the ability of evolution to discover such regularities is formally analogous to learning principles, familiar in humans and machines, that enable generalisation from past experience. Conversely, natural selection that fails to enhance evolvability is directly analogous to the learning problem of over-fitting and the subsequent failure to generalise. We support the conclusion that evolving systems and learning systems are different instantiations of the same algorithmic principles by showing that existing results from the learning domain can be transferred to the evolution domain. Specifically, we show that conditions that alleviate over-fitting in learning systems successfully predict which biological conditions (e.g., environmental variation, regularity, noise, or a pressure for developmental simplicity) enhance evolvability.
This equivalence provides access to a well-developed theoretical framework from learning theory that enables a characterisation of the general conditions for the evolution of evolvability.
- Author summary
A striking feature of evolving organisms is their ability to acquire novel characteristics that help them adapt in new environments. The origin of such an ability, and the conditions under which it arises, remain elusive and constitute a long-standing question in evolutionary biology. Recent theory suggests that organisms can evolve designs that help them generate novel features that are more likely to be beneficial. Specifically, this is possible when the environments that organisms are exposed to share common regularities. However, organisms tend to develop robust designs that reproduce what was selected in the past and may be inflexible in future environments. A resolution comes from a recent theory, introduced by Watson and Szathmáry, that posits a deep analogy between learning and evolution. Accordingly, here we utilise learning theory to explain the conditions that lead to more evolvable designs. We demonstrate this by equating evolvability with the way humans and machines generalise to previously unseen situations. Specifically, we show that the same conditions that enhance generalisation in learning systems have biological analogues, and help explain why environmental noise and the reproductive and maintenance costs of gene-regulatory connections can lead to more evolvable designs.
- Published
- 2017