Imagine a combinatorial tree showing all the possible arrangements of nucleotides. Some arrangements are functional, meaning they can be plugged into a cell and create a living organism capable of reproducing. Some arrangements are nonfunctional, meaning they can't sustain any living system at all.
In evolution, new information is created as the result of what are essentially an accumulation of copying errors. My question is, would the ratio of functional to nonfunctional arrangements have to be relatively large so that enough genetic information can arise in response to natural selection? If so, what scientific studies have led biologists to conclude that this combinatorial space is rich with functional arrangements. If it isn't necessary please explain why.
No, with some caveats. First, about the actual value of the ratio: it's probably tiny, with almost all possible genetic sequences failing to lead to living organisms.
Why does this not make evolution impossible? Each sequence can have billions* of neighbors in sequence space. If even a very small number of those are functional, evolution can proceed. More importantly, the functional sequences are clustered in sequence space. If you pick 3 billion nucleotides randomly, you'll get nonsense, but if you mutate a single nucleotide in a human you'll most likely get a perfectly good human.
There are interesting open questions here, though, especially regarding the origin of life, and whether natural selection tends to drive populations to regions of sequence space that are especially dense with functional sequences.
*Just looking at point mutations. Including the full mutational spectrum gives many more.
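To put rough numbers on the footnote above, here is a small sketch (the ~3-billion-base haploid human genome is an assumed round figure): each of the L positions in a genome can be substituted by any of the 3 other bases, giving 3 × L single-point mutants.

```python
# Illustrative arithmetic for the "billions of neighbors" claim: the number
# of single-substitution mutants of a genome of length L over a 4-letter
# nucleotide alphabet is 3 * L.

def point_mutation_neighbors(genome_length: int, alphabet_size: int = 4) -> int:
    """Number of sequences reachable by exactly one substitution."""
    return genome_length * (alphabet_size - 1)

human_genome = 3_000_000_000  # ~3 billion nucleotides (assumed round figure)
print(point_mutation_neighbors(human_genome))  # 9_000_000_000
```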
"would the ratio of functional to nonfunctional arrangements have to be relatively large so that enough genetic information can arise in response to natural selection?"
Evolution definitely does not explore all possible combinations of DNA; it just tries things out. Of course, a new gene is not necessarily created from scratch. A gene often gets copied, and the two copies can then diverge (this is called neo-functionalization).
So, no, the whole possibility space is not explored, but this is in no way a problem. It is not as if a mutation produced a brand-new sequence out of the void in the hope that it would be beneficial. A path is being taken through this possibility space.
"would the ratio of functional to nonfunctional arrangements have to be relatively large so that enough genetic information can arise in response to natural selection?"
Yes, it is quite large, in part because a substantial fraction of mutations are synonymous. Take a look at the degeneracy of the genetic code.
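As a quick illustration of that degeneracy, the sketch below enumerates all single-nucleotide changes to a codon and counts how many are synonymous (leave the encoded amino acid unchanged) under the standard genetic code. The example codons are arbitrary choices.

```python
# Count synonymous single-nucleotide mutants of a codon under the
# standard genetic code.
from itertools import product

BASES = "TCAG"
# Standard genetic code; codons ordered TTT, TTC, TTA, TTG, TCT, ...
AMINO_ACIDS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = dict(zip(("".join(c) for c in product(BASES, repeat=3)),
                       AMINO_ACIDS))

def synonymous_fraction(codon: str) -> float:
    """Fraction of the 9 single-nucleotide mutants that are synonymous."""
    aa = CODON_TABLE[codon]
    synonymous = 0
    for pos in range(3):
        for base in BASES:
            if base == codon[pos]:
                continue
            mutant = codon[:pos] + base + codon[pos + 1:]
            if CODON_TABLE[mutant] == aa:
                synonymous += 1
    return synonymous / 9

print(synonymous_fraction("GGT"))  # fourfold-degenerate third position: 3/9
print(synonymous_fraction("ATG"))  # methionine has a unique codon: 0/9
```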
Another reason, and this one mainly helps evolution keep moving, is gene duplication. Many genes in a genome are duplicated, meaning there is some redundancy involved. If we delete one copy, the fitness of the organism may or may not be affected, but survival usually won't be. One example that comes to mind is histone H1 (because I have studied it a bit). To clearly effect a change we need to delete at least three H1 genes; any fewer, and one of its variants acts as a replacement.
Bonus answer: evolution does not always lead to the best possible combination; it leads to functional combinations.
How does evolution get around the combinatorial problem? - Biology
Mendel's Genetic Experiments
Gregor Mendel's arrival at the St. Thomas Abbey was a stroke of luck for its abbot. Cyril Napp had already decided that understanding "what is inherited and how" was key to the study of hybridization [1]. Answering this question would require someone with a lot of patience and an unusual attention to detail. That person was Gregor Mendel.
Gregor Mendel took over the monastery's research garden from his mentor, Friar Klacel, in 1846. The research garden is shown below. Klacel had been studying heredity and variation in peas [2]. Mendel would focus on peas as well, perhaps influenced by his mentor. This choice was very important to his eventual success: pea plants have easily identifiable features, can self-fertilize, and are easily prevented from cross-fertilizing. While the choice of the pea plant made success more likely, he and his team still had to overcome many hurdles.
Gregor Mendel encountered problems from the start. If he self-fertilized some tall pea plants, they would always produce tall plants, generation after generation. But if he self-fertilized other tall pea plants, they would produce mostly tall plants along with some dwarf plants. Although the plants looked similar (same phenotype, tall), they were obviously different genetically (different genotypes). Similar problems occurred with every trait he was testing. Mendel knew he had to start with a set of plants that, when self-crossed, would always produce the same phenotype. Developing this set of true-breeding plants took two years [3].
After developing his set of true-breeding plants, Mendel and his assistants spent years making 29,000 crosses through multiple generations of plants. This cross-fertilization was tedious work. Pea plants have both male and female organs, so to cross-fertilize them you must first make certain they don't self-fertilize. Mendel performed surgery on each target plant, cutting off the male organs (stamens) while the plant was still immature. When the time came to cross-fertilize, Mendel and his assistants used a paintbrush to transfer pollen from the anthers of the donor plant onto the stigma (part of the female reproductive structure) of the target plant. A bag was then wrapped around the flower to prevent other pollen from landing on the stigma.
The money St. Thomas Abbey spent sending Mendel to the University of Vienna paid off in both the design of Mendel's experiments and the analysis of the results. One of his professors was the renowned physicist Christian Doppler, so Mendel would have been taught the design of physical experiments. Doppler's math textbooks contained sections on combinatorial theory and the use of probability. One of Mendel's innovations was to treat the inheritance of traits as a random event and to analyze the results in terms of probabilities. Random events, statistics and probabilities were part of the language of nineteenth-century physicists, but not of nineteenth-century biologists [4].
We can follow Mendel's logic through one of his experiments. Mendel took true-breeding pea plants that produced only yellow peas and crossed them with true-breeding pea plants that produced only green peas. All offspring had yellow seeds; the green trait had completely disappeared. Then Mendel took this first generation (F1) and self-crossed it. The green trait showed up again: 6022 of the offspring of the second generation (F2) had yellow seeds and 2001 had green seeds. Genetic material from the green-seeded plants must have been preserved in the first generation, masked by something more powerful: the genetic material that coded for yellow seeds. The yellow trait was dominant and the green trait recessive. The ratio of the results in the second generation is very close to 3:1. This ratio can be explained if the inheritance of traits depends on paired elements that are recombined (not blended, as Darwin believed) in the offspring. In this experiment a Yellow-Green pair would show as a yellow pea, but if we crossed many Yellow-Green plants we would get four different combinations: Yellow-Yellow, Yellow-Green, Green-Yellow, and Green-Green. Three of them result in yellow peas, and only one, Green-Green, results in green peas. The diagram below (taken from an early book by Thomas Hunt Morgan) illustrates Mendelian genetics through two generations (F1 and F2).
Why did Mendel use such large numbers of crosses in his experiments? He needed large samples to produce higher confidence in the 3:1 ratio; with smaller sample sizes his work would have been of little value. Charles Darwin had conducted similar experiments with snapdragons, but because of his poor understanding of sampling he used only 125 crosses. His result of 2.4:1 could have been interpreted as either a 2:1 or a 3:1 ratio (Darwin, Mendel and Statistics). Mendelian genetics helped support a trend toward a more mathematical approach in biology.
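A toy simulation makes the sample-size point concrete. This is a hedged sketch, not a reconstruction of either experiment; the replicate counts and seed are arbitrary, and each offspring simply shows the dominant phenotype with probability 3/4.

```python
# With a true 3:1 ratio, ~8,000 F2 offspring pin the observed ratio down
# tightly, while 125 crosses leave it ambiguous (e.g. a value near 2.4).
import random

def observed_ratio(n_offspring: int, rng: random.Random) -> float:
    """Simulate n F2 offspring with P(dominant) = 3/4; return dominant:recessive."""
    dominant = sum(rng.random() < 0.75 for _ in range(n_offspring))
    return dominant / (n_offspring - dominant)

rng = random.Random(42)
print(round(observed_ratio(8023, rng), 2))  # reliably close to 3.0
print(round(observed_ratio(125, rng), 2))   # scatters widely around 3.0
```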
Gregor Mendel's work on genetics was finally published as "Experiments in Plant Hybridization" in the Proceedings of the Natural History Society of Brünn in 1866. No one seemed to care: the paper was rarely mentioned over the next 35 years. It would dramatically change the field of biology when it was rediscovered around 1900.
Results and discussion
Anatomy of an autocatalytic set
Kauffman (pp. 2-3) defines an autocatalytic set as an arrangement of molecules in which "every member of the autocatalytic set has at least one of the possible last steps in its formation catalyzed by some member of the set, and that connected sequences of catalyzed reactions lead from the maintained 'food set' to all members of the autocatalytic set". This is more formally defined by Hordijk and Steel [23, 24], who state that a (sub)set R of reactions is called (i) reflexively autocatalytic (RA) if every reaction in R is catalyzed by at least one molecule involved in any of the reactions in R; (ii) food-generated (F) if every reactant in R can be constructed from a small "food set" by successive applications of reactions from R; and (iii) reflexively autocatalytic and food-generated (RAF) if both RA and F.
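The RA and F conditions can be checked algorithmically. The following is a minimal sketch of the standard reduction to the maximal RAF subset: repeatedly discard reactions that are either uncatalyzed or not supported by the food-closure of the remaining reactions, until a fixed point is reached. The tuple encoding and the toy chemistry are our own, not taken from the paper.

```python
# Reactions are (reactants, products, catalysts) tuples of molecule names.

def closure(food, reactions):
    """All molecules producible from the food set by the given reactions
    (catalysis is ignored here, as in the standard RAF closure)."""
    produced = set(food)
    changed = True
    while changed:
        changed = False
        for reactants, products, _ in reactions:
            if set(reactants) <= produced and not set(products) <= produced:
                produced |= set(products)
                changed = True
    return produced

def max_raf(food, reactions):
    """Largest subset of reactions that is reflexively autocatalytic
    and food-generated."""
    current = list(reactions)
    while True:
        mols = closure(food, current)
        kept = [(r, p, c) for (r, p, c) in current
                if set(r) <= mols and any(cat in mols for cat in c)]
        if len(kept) == len(current):
            return kept
        current = kept

# Toy chemistry: A is made from food and catalyzes its own formation;
# the reaction making B is never catalyzed, so it drops out.
food = {"f1", "f2"}
reactions = [
    (("f1", "f2"), ("A",), ("A",)),  # f1 + f2 -> A, catalyzed by A
    (("f1", "A"), ("B",), ("Z",)),   # f1 + A -> B, catalyst Z never present
]
print(max_raf(food, reactions))  # only the first reaction survives
```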
The concept of an autocatalytic or RAF set, although important for questions of self-organization, does not directly address heredity or selectability. However, such a set can be divided into one or more strongly connected autocatalytic cores and their peripheries, and we propose that these cores are the units of heritable adaptation in reaction networks (for chemical network motifs see Figure 1; for a specific example see Figure 2). A core can be viewed as a chemical network genotype and its corresponding periphery as a chemical network phenotype (although without a modular or compositional mapping between them). An autocatalytic core (which we abbreviate from now on to 'core') contains one or more linked autocatalytic loops. Autocatalytic loops are closed circular paths of any length where each molecule in the loop depends on the previous one for its production (Figure 1). In the core, all species catalyze the production of all other species, including themselves, which means that they are indirectly autocatalytic (Figure 1). The periphery consists of molecular species that are catalyzed by the core (Figure 1). The provision of any one molecule of a core species is sufficient to produce all the core species and the periphery species of that core; in other words, all core molecules contain the information necessary for igniting and sustaining the autocatalytic core and periphery, and can therefore act as an autocatalytic seed. This is not the case for periphery molecules, which depend, as a phenotype does on its genotype, upon the core. Note that an autocatalytic or RAF set as defined above can contain any number of distinct core-periphery units (Figure 1), and the structurally and kinetically possible combinations of such units define different alternative stable states of the same chemical network (Figure 2).
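The core/periphery split can be sketched as a graph computation: cores correspond to the strongly connected components of a production-dependency graph, found here by mutual reachability (adequate for toy networks). The graph below is an invented example, not one of the paper's networks.

```python
# Edge X -> Y means X catalyzes or feeds a reaction producing Y.

def reachable(graph, start):
    """Nodes reachable from start by following edges (includes start)."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, ()))
    return seen

def cores_and_periphery(graph):
    """Cores = nontrivial strongly connected components (mutual
    reachability); periphery = everything else."""
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    reach = {n: reachable(graph, n) for n in nodes}
    cores = set()
    for n in nodes:
        scc = frozenset(m for m in reach[n] if n in reach[m])
        if len(scc) > 1 or n in graph.get(n, ()):
            cores.add(scc)
    periphery = nodes - set().union(*cores) if cores else nodes
    return cores, periphery

# A -> B -> A is a two-member autocatalytic loop (the core);
# C is produced by the core but produces nothing: periphery.
print(cores_and_periphery({"A": ["B", "C"], "B": ["A"]}))
```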
Classification of various network modules within autocatalytic sets. food1-food6: food set that is assumed to be present at all times; A-D: non-food species generated by ligation/cleavage reactions. Solid lines: reactions; dotted lines: catalytic activities. Orange dotted lines show the superimposed autocatalytic loops. (A) Viable autocatalysts (A in all three examples) are the units needed for exponential growth of an autocatalyst, in contrast to suicidal autocatalysts (B in all three examples), which use reactants only produced by the autocatalytic reaction itself. (B) A molecular species can be directly autocatalytic, forming a one-member autocatalytic loop, or several species can form loops of various sizes that result in indirect autocatalysis. (C) A loop is autocatalytic - and able to grow exponentially - as long as at least one of its steps is a catalytic dependency; therefore a loop can be made of solely catalytic or mixed couplings. (D) An autocatalytic core contains one or more linked loops. Note that any member of a core (A and B in all three examples) is sufficient to act as a seed for the core. Several distinct cores can form within a catalytic reaction network; some can exist independently of other cores, while dependent cores rely on others as food supply or catalysts. (E) An autocatalytic core is typically associated with a periphery that is dependent on the core (C and D in the first example). It is also possible that a molecular periphery appears only if two or more cores are present (D in the second example). We propose that autocatalytic cores are the units of heritable adaptation in chemical networks.
Multiple cores result in selectable attractors for a chemical network. food1-food11: food set that is assumed to be present at all times; A-H: non-food species generated by ligation/cleavage reactions. Solid lines: reactions; dotted lines: catalytic activities. Orange dotted lines show the superimposed autocatalytic loops. Structural considerations: autocatalytic sets can contain several distinct autocatalytic units, each of which can be divided into a core of autocatalytic molecules and a periphery. Here, two independent cores are shown. The first consists of the two linked loops A → A and A → B → A. The second core includes the two linked loops C → C and C → D → E → C, with the periphery of F and G. H is the shared periphery of the two cores and requires both for its production. Dynamical considerations: this platonic reaction network can manifest in four possible stable compositions of the core-periphery units: (i) no cores (only food species); (ii) only the first core (A, B; yellow area); (iii) only the second core (C, D, E, F, G; blue area); (iv) both cores (all species). Now imagine that we have a compartment that contains only food species, but rare uncatalyzed reactions among them are possible. The uncatalyzed appearance of any one molecule of a core species is sufficient to produce all the core species and the periphery species of that core, e.g. either A or B for the first core, and any of C, D, E for the second. Now let us assume that after reaching a certain size a compartment that contains both cores will split and produce propagules. If none of C, D or E is present in a daughter compartment, the second core is lost and the remaining molecules of its periphery will be washed out of the compartment. Discovering cores by rare reactions, and losing cores through segregation instability, opens up the possibility for a chemical reaction network to respond to natural selection.
Having defined an autocatalytic core as a set of connected autocatalytic loops, it is important to distinguish between the possible types of such loops (Figure 1). Typically, the cycle of reactions is coupled by catalytic dependencies. However, as Eigen has shown, the cycle maintains its autocatalytic properties as long as at least one of the steps is catalytic (an idea that was missed by previous models in [11, 23]). All other steps can be substrate dependencies, where the product of the previous step serves as a precursor for the next reaction (e.g. consider the dependency of Y upon X in the reaction A + X ←→ Y; Figure 1). In other words, one reaction in which the product is not consumed but serves as a catalyst is enough for the exponential increase of the mass of the cycle. It can also be misleading to focus solely on directly or indirectly autocatalytic molecules, because of the possibility of what we call suicidal autocatalysts. An autocatalytic molecule can be suicidal in a kinetic sense. Bearing in mind that all reactions in autocatalytic sets of biopolymers are assumed to be reversible, let us consider the simple autocatalytic reaction A + X ←→ 2A. If X is not present (or is present in very low concentration), this reaction will go in the direction of self-decomposition, in the form of an autoinhibitive cycle (Figure 1). Such suicidal autocatalysts are obviously incapable of exponential growth, the very feature that gives autocatalysis its evolutionary significance. A viable autocatalytic molecule, either directly autocatalytic or embedded in a longer autocatalytic loop, does grow exponentially. Note that rather similar examples of viable and suicidal autocatalysts can be found in contemporary biochemistry: whereas the Calvin cycle is a network of autocatalytic sugar production, the pentose phosphate pathway is an example of autoinhibitive decomposition of sugar phosphates.
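A quick numerical sketch of this distinction, using the reversible reaction A + X ←→ 2A with invented rate constants (not taken from the paper): when X is plentiful the autocatalyst grows, and when X is absent the reverse reaction dominates and A decomposes itself.

```python
# Euler integration of dA/dt = kf*A*X - kr*A^2 with X held at a fixed level.
# kf, kr, dt and the step count are arbitrary illustrative values.

def simulate(a0, x, kf=1.0, kr=0.1, dt=0.001, steps=5000):
    """Final concentration of autocatalyst A after steps*dt time units."""
    a = a0
    for _ in range(steps):
        a += dt * (kf * a * x - kr * a * a)
        a = max(a, 0.0)
    return a

print(simulate(a0=1.0, x=2.0) > 1.0)  # viable: X abundant, A grows
print(simulate(a0=1.0, x=0.0) < 1.0)  # suicidal: no X, A self-decomposes
```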
Viable autocatalytic loops are necessary but not sufficient for evolution by natural selection of autocatalytic networks, as we shall see below.
Spontaneous formation of autocatalytic sets in a polymer chemistry
The original mathematical model of autocatalytic sets assumes the following: (i) there exists a large food set of abundant polymers naturally formed in the environment up to some low level of complexity, i.e. up to length M, consisting of B types of monomers (e.g. a, b, aa, bb); (ii) each molecule has a certain probability P of catalyzing each ligation-cleavage reaction. The model assumes infinite discriminability; in other words, a molecule either does or does not catalyze a particular reaction, without quantitative variation in efficiency. However, it does not assume specificity, because a catalyst typically catalyzes a number of reactions (on average, a fraction P of the possible reactions). It was demonstrated that above a certain catalytic probability threshold a chain reaction is triggered and, due to catalytic closure, autocatalytic sets appear.
Hordijk and Steel verified this claim by generating random networks of reversible ligation/cleavage reactions between strings up to length n = 20, where each molecule had the probability P of catalyzing each reaction. At low values of P they found unconnected sets utilizing separate food sets, but at higher values a percolation phenomenon produced fully connected autocatalytic sets. Farmer and co-workers [12, 14] were the first to implement the original mathematical model and confirmed that a supracritical reaction network that keeps growing with accelerating speed arises above a certain catalytic probability P_c. By constraining the growing catalytic reaction network in a flow reactor with finite mass and a lower-bound concentration threshold (the relevant scenario here for studying evolvability in compartmentalized systems), they implemented a chemical model in which the size of the chemical network shows logistic growth above P_c. Our initial task was to corroborate these results and to investigate the underlying structure of the catalytic reaction network (the methods are described in detail in Additional file 1). We found that as the networks grow in size they form one large autocatalytic core consisting of all molecular species above the concentration threshold. However, autocatalytic cores mostly consist of suicidal autocatalysts, as only a small minority of autocatalytic species use valid reactants in the autocatalytic reactions and thus form viable loops (see Figure 1). Note that although the number of viable loops increases with system size, they are always within the same viable core and therefore cannot be independent targets for natural selection (see Additional file 1). In conclusion, we substantiate the speculation that a self-sustaining network of reactions - an autocatalytic primitive metabolism - appears in this minimal model of polymer chemistry (Figure 3).
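The subcritical/supracritical contrast can be illustrated with a stripped-down toy model (binary alphabet, ligation only, no kinetics or cleavage; all parameters are ours, not those of the paper or of Hordijk and Steel's simulations). A reaction fires only if some present molecule catalyzes it; the per-reaction catalysis probability is sampled lazily, which is a simplification.

```python
# Grow a ligation network from a food set of strings of length <= 2
# over {a, b}; each molecule catalyzes each ligation with probability p.
import itertools
import random

def grow(p, max_len=8, rounds=4, seed=1):
    rng = random.Random(seed)
    species = {"".join(s) for n in (1, 2)
               for s in itertools.product("ab", repeat=n)}  # 6 food species
    catalyzed = {}  # (left, right) -> bool, sampled lazily (simplification)
    for _ in range(rounds):
        new = set()
        for left, right in itertools.product(sorted(species), repeat=2):
            prod = left + right
            if len(prod) > max_len or prod in species:
                continue
            key = (left, right)
            if key not in catalyzed:
                # P(at least one of the current species catalyzes this step)
                catalyzed[key] = rng.random() < 1 - (1 - p) ** len(species)
            if catalyzed[key]:
                new.add(prod)
        if not new:
            break
        species |= new
    return len(species)

# Low p: the network stays at (or near) the 6 food species.
# High p: growth percolates toward the length cap.
print(grow(p=0.0001), grow(p=0.5))
```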
Emergence of a self-sustaining network of reactions in a flow reactor. (A) The squares show critical thresholds for subcritical (empty squares) or supracritical (coloured squares) growth of the reaction network as a function of the firing disc (maximum length of molecular species in the food set) and the probability P that a species catalyzes a specific reaction. The darkness of a square reflects the proportion of 100 runs in which the network exceeded one of the following conditions: > 2 × 10^7 reactions or > 10^5 molecular species (note that in any finite system the reaction network cannot be explored infinitely due to mass constraints). (B) The crucial parameter P was decomposed into its two elementary probabilities: P' (the probability that a species can be catalytic) and P'' (the per-reaction probability that this catalyst catalyzes a reaction). When P' decreases, P'' must be considerably higher for reaction networks to keep growing, but there is a threshold above which catalytic networks grow supracritically. (C) Weak inhibition does not prevent formation of large catalytic reaction networks. For values of P that do produce catalytic network growth, strong non-competitive inhibition is introduced by choosing with probability K that a species removes another species from the reactor completely if at least one molecule of the inhibiting species exists (this is clearly a worst-case assumption). Left: supracritical growth without inhibition. Middle: weak inhibition results in alternating fast and slow growth phases. Right: strong inhibition makes the network subcritical.
Our next task was to verify that the previous claim remained true in the face of earlier criticisms of the model. A critical parameter in the model is the probability P that each molecule can catalyze each ligation-cleavage reaction, which was assumed to be constant. This assumption led to the serious objection that the model implied an unrealistically high probability (of one) that a peptide could act as a catalyst: when the maximum length M of polymers in the set increases, the number of reactions (≈ (M − 2) × 2^(M+1)) increases faster than the number of molecules (N ≈ 2^(M+1)), therefore all molecules quickly become catalytic - an outcome that is clearly very unrealistic. To remedy the situation, the parameter P should be a composite of two probabilities: the probability P' that the molecule is a catalyst, and the probability P'' that a molecule catalyzes a given reaction. We implemented this criticism, but with the caveat that it is unlikely that a catalyst catalyzes only one out of the essentially infinite number of possible reactions, as suggested by Lifson, following assumptions in (p. 306). That is, by defining P'' as one over the number of possible reactions, it is implicitly asserted that the probability that a catalyst catalyzes a given reaction is not independent of the probability that it catalyzes another reaction - but why would considering a bigger reaction space make catalysis less likely? In our view, a more reasonable scenario is to assume that P' is defined as above, but P'' is now considered to be the per-reaction probability that a catalyst catalyzes the reaction. When our previous simulations were re-implemented with constant P' and P'' values, it was found that when the ratio of catalysts (P') decreases, the probability of catalysis (P'') must be considerably higher for reaction networks to keep growing (Figure 3A and 3B). Therefore, even though there is no known random polymer chemistry in which these probabilities are as high as necessary for supracritical growth - certainly not random polypeptides - we conclude that Lifson's criticism remains a quantitative one, leaving open the possibility that, were it possible to obtain such catalysts, the catalytic network could still form spontaneously.
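The counting argument behind this objection is easy to reproduce. The sketch below uses a binary alphabet (B = 2); the exact reaction count follows from each string of length n having n − 1 cut points, so the reactions-to-molecules ratio grows roughly linearly in M.

```python
# For binary strings of length 1..M there are N = 2^(M+1) - 2 molecules,
# while the number of ligation/cleavage reactions is sum over lengths n of
# (n - 1) * 2^n, which grows roughly as (M - 2) * 2^(M+1).

def molecules(M: int) -> int:
    return 2 ** (M + 1) - 2

def reactions(M: int) -> int:
    return sum((n - 1) * 2 ** n for n in range(2, M + 1))

for M in (5, 10, 20):
    # The ratio grows roughly like M - 2: with constant per-reaction P,
    # every molecule eventually catalyzes something.
    print(M, reactions(M) / molecules(M))
```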
A second objection to the model was that autocatalytic sets could not have formed spontaneously due to the 'paradox of specificity': a high number of molecules is required for the spontaneous emergence of a self-sustaining network of reactions, but the harmful effect of side reactions, which ought to rise with an increasing set, calls for a small system size. One way to check whether the harmful effects of side reactions in spontaneously emerging autocatalytic sets could inhibit network growth is to introduce strong non-competitive inhibition. It is easy to imagine that a species removes another species from the reactor by some side reaction, and we chose to implement the strictest possible scenario, in which one molecule of inhibitor removes the inhibited species completely. In a manner analogous to the determination of catalytic reactions, each molecule inhibited any of the other molecules with probability K. A species may therefore be both an inhibitor and have other, positive catalytic effects. Note that competitive inhibition already emerges in the model when a catalyst uses another catalyst as a substrate, so it is not necessary to add this explicitly. At high levels of inhibition (e.g. K = 0.01) the consequence is to convert what would have been a supracritical network into a subcritical one. However, at lower levels (e.g. K = 0.001) the effects of poisoning do not radically prevent supracritical growth, but the networks grow non-monotonically due to the loss of some catalysts through inhibition (Figure 3C). We therefore conclude that inhibition does not qualitatively prevent the formation of large catalytic reaction networks. To summarize, the formation of autocatalytic sets is robust against the two main criticisms that have been raised against the model.
All the previous simulations assumed that only catalyzed reactions happen in the flow reactor. Bagley and co-authors [14, 15] modelled the background of uncatalyzed reactions as spontaneous fluctuations that resulted in the rare appearance of autocatalytic subgraphs from the shadow of existing reactions (the subset of species that can be produced from existing species by uncatalyzed reactions), an approach we find problematic because (i) it already assumes without proof that autocatalytic loops are present, and (ii) it only allows for loops where each step is catalytic, discarding a large variety of possible organizations. In order to avoid this flaw, we simulated the uncatalyzed reactions directly. In our model, rare uncatalyzed reactions produce random novel species in low copy number from the shadow, and if a new molecule happens to be a catalyst, it generates a chemical avalanche of directly and indirectly catalyzed further novel molecular species. As expected, we found that only those species that eventually catalyze their own production from already existing molecules - and so produce a viable autocatalytic loop - are able to join the network permanently (see Figure 2). Such viable loops define a new, distinct core within the autocatalytic set. Such a novel core is only rarely produced, at least in the small networks we simulated, but the probability of its spontaneous appearance depends on the size of the shadow and is expected to increase with network size and P. There is, therefore, an intrinsic slow tendency for non-food set mass to increase by the rare incorporation of viable loops into the network, which also results in an increase of complexity (Figure 4, Additional file 1). This appearance of novel cores is a critical property, as we shall see next.
Persistent increase in non-food set mass due to novel viable loops. We simulated 460 runs lasting 30,000 growth steps each, with food set size M = 4, P' = 0.75, P'' = 0.0025, K = 0 (without inhibition), but with spontaneous emergence of rare novel species from uncatalyzed reactions. 5 out of 460 runs showed persistent increases in non-food set mass (B). This was always due to the incorporation of at least one viable loop. (A) Example of the viable loop organization used in evolutionary simulations. Solid lines: reactions; dotted lines: catalytic activities. Orange dotted lines show the superimposed autocatalytic loop. The original network, on the left side of the blue line, is not shown in detail.
Evolvability of chemical networks enclosed in compartments
Our next step was to tackle the issue of evolvability when chemical reaction networks are confined to a small volume (compartment). Now the question is: what is required for chemical networks to undergo Darwinian evolution? As emphasized by Gánti and Wächtershäuser [30, 31], if distinct, organizationally different, alternative autocatalytic networks can coexist in the same environment, then they can compete with each other and the 'fittest' will eventually prevail. This is obviously a narrow view of what a unit of evolution really is, but it raises the important point that reaction networks must somehow possess multiple attractors, and transitions among attractors must be possible. As Wesson put it, "the attractor is the essence of self-organization. Just what constitutes it and how the organism shifts from one attractor to another is a task for genetics … to elucidate". This message is even clearer in Conway Morris, who posits that evolution navigates to particular functional solutions (convergence), thus pointing to the existence of something analogous to 'attractors' in biological systems. We demonstrate that for a catalytic network to accumulate adaptations it needs to be compartmentalized, the platonic reaction network must have multiple attractors, and some of these attractors must be selectable. The larger the number of attractors, the smaller the chance of convergence.
Compartmentalizing the reaction networks makes it possible to filter out harmful modifications and is therefore a prerequisite for accumulating potentially beneficial 'adaptations', as demonstrated in . We modelled compartments exactly as in Farmer et al.; that is, each compartment is a flow reactor into which food is input and from which materials leak out at first order. The number of attractors is itself of interest, for they allow a protocell to have multiple pathways of autocatalysis and to show the molecular variability needed to respond to an environment. We approximated the number of attractors by fixing the reaction network and shuffling the chemical concentrations (choosing random pairs of species and swapping their concentrations) several times in order to sample various initial conditions. After shuffling, the network dynamics are run for a fixed period of time until an attractor is reached. Even if multiple attractors exist, stochastic division might not generate sufficient variation to allow transitions between them. To test this, we also simulated the more realistic situation in which the compartment enclosing the generative chemistry was allowed to grow for a fixed period of time, after which it was assumed to split into daughter compartments whose molecules were sampled from a polyhypergeometric distribution of the molecular contents of the parental compartment.
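The propagule-sampling step can be sketched with a stdlib-only draw without replacement (equivalent to a multivariate hypergeometric sample); all counts below are invented for illustration. The point is that low-copy-number species can be lost at division, which is the segregation instability the argument relies on.

```python
# A daughter propagule samples molecules without replacement from the
# parent compartment's contents.
import random
from collections import Counter

def split(parent_counts, propagule_size, rng):
    """Draw a propagule of molecules without replacement from the parent."""
    urn = [species for species, n in parent_counts.items() for _ in range(n)]
    return Counter(rng.sample(urn, propagule_size))

rng = random.Random(7)
parent = {"food": 500, "core_A": 40, "core_C": 1}  # core_C at one copy
losses = sum(
    split(parent, propagule_size=100, rng=rng).get("core_C", 0) == 0
    for _ in range(1000)
)
print(losses / 1000)  # core_C is lost in roughly 1 - 100/541 ~ 81% of splits
```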
Now we arrive at the critical issue of evolvability, which can first be rephrased as the potential of a population of compartmentalized molecular networks with different attractors to respond to selection, that is, to transit between different attractors according to the fitness value assigned to each of them. As a preliminary test of evolvability, reaction networks were subjected to artificial selection. A small population of 10 compartments was grown in isolation for a fixed generation period. After this time the fitness of each compartment, defined as the total mass of non-food species present at the end of the growth phase just prior to division, was assessed, and the next generation was produced by taking molecule propagules from compartments on the basis of fitness-proportionate selection (roulette-wheel selection with elitism). The elitist step ensures that at least one propagule is always sampled from the highest-ranked individual in any given generation.
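The selection scheme can be sketched as follows; the fitness values are hypothetical placeholders and `select_parents` is an illustrative helper, not code from the study.

```python
# A minimal sketch of fitness-proportionate (roulette-wheel) selection with
# elitism, as used to choose which compartments contribute propagules to
# the next generation.  Fitness values are hypothetical.
import random

def select_parents(fitnesses, n_offspring, rng=random.Random(0)):
    total = sum(fitnesses)
    best = max(range(len(fitnesses)), key=lambda i: fitnesses[i])
    chosen = [best]                     # elitism: the fittest is always sampled
    while len(chosen) < n_offspring:
        r = rng.uniform(0, total)       # spin the roulette wheel
        acc = 0.0
        for i, f in enumerate(fitnesses):
            acc += f
            if r <= acc:
                chosen.append(i)
                break
    return chosen

fitnesses = [5.0, 1.0, 0.5, 3.5]        # e.g. total non-food mass per compartment
parents = select_parents(fitnesses, n_offspring=10)
print(parents[0])                       # 0: the fittest compartment is guaranteed a slot
```

Each compartment's chance of seeding the next generation is proportional to its fitness, while the elitist first pick guarantees the best individual is never lost to sampling noise.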
The results of the artificial selection experiment were confirmed in numerical simulations of natural selection. Here we assumed an initial population of N = 100 compartments and introduced a classical Moran process to test the evolvability of reaction networks under natural selection. In each time step, a randomly chosen compartment from the whole population grows at a rate that is a function of its chemical composition. The compartment is returned to the population if its size is still below η molecules, and the step ends. If, however, its size reaches η, the compartment generates two daughter compartments by the creation of two propagules. One offspring replaces the parent compartment and the other replaces a randomly chosen compartment from the population. In this stochastic process the total number of compartments remains constant at N, but a compartment's size can fluctuate between the propagule size and η molecules. Selection for a specific target was implemented by multiplying the rates of all reactions by a selective advantage S whenever the compartment matched the characteristic composition of the desired attractor.
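A toy sketch of this Moran-style compartment dynamic is given below. Compartments grow at type-dependent rates, split at size η, and one daughter replaces a random member so that N stays constant; all parameter values (N, η, propagule size, S) are illustrative, not the paper's.

```python
# Toy Moran-style dynamics for compartments: grow a random compartment each
# step; on reaching critical size eta it divides, one daughter replacing the
# parent and the other a random compartment.  Parameters are illustrative.
import random

rng = random.Random(42)
N, eta, propagule = 100, 50, 10
S = 1.05                                # selective advantage of the target type
# each compartment: [growth_rate, current_size]; half start with the advantage
pop = [[1.0, propagule] for _ in range(N // 2)] + \
      [[S,   propagule] for _ in range(N - N // 2)]

for _ in range(200000):
    c = rng.choice(pop)                 # pick a random compartment
    c[1] += c[0]                        # grow in proportion to its rate
    if c[1] >= eta:                     # critical mass reached: divide
        c[1] = propagule                # one daughter replaces the parent
        victim = rng.randrange(N)       # the other replaces a random compartment
        pop[victim] = [c[0], propagule]

advantaged = sum(1 for rate, _ in pop if rate == S)
print(advantaged)                       # the faster-growing type tends to dominate
```

Because population size is fixed, every division of a fast-growing compartment displaces a random competitor, so a small growth-rate advantage compounds over many division cycles.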
We tested the evolvability of all three previously described models - the original Farmer-type autocatalytic sets, networks with inhibition, and networks with random novel species produced by uncatalyzed reactions - according to the principles described above. In the case of the original networks the results were straightforward: they always have only one attractor (Additional file 1), so selection is not possible. This is not surprising, considering that these networks contain only one autocatalytic core (Additional file 1). The conclusion therefore follows immediately from our previous considerations: Kauffman's original polymer chemistry, when enclosed in a finite space, will eventually crystallize into the same attracting network, which can never be a Darwinian unit.
Interestingly, this behaviour is analogous to that of conceptually similar models in which the whole catalytic network inevitably forms only one viable core and so ultimately converges to only one attractor. Therefore, one important conclusion of our work is that we can definitively discard all autocatalytic networks discussed so far in the literature as units of evolution in the Darwinian sense, with the possible exception of .
However, it has been suggested that the inclusion of inhibition in the Farmer-type network should permit the formation of autocatalytic sets with complex dynamical attractors. To determine whether this is so, we also ran simulations introducing strong non-competitive inhibition as indicated above. Our results substantiate this speculation: molecular networks now exhibited multiple attractors. However, when the growth-splitting process was implemented, spontaneous transitions between attractors were rare, and when transitions did occur they happened either periodically or chaotically (Additional file 1). Rather surprisingly, the artificial selection experiment excluded networks with inhibition as candidate units of evolution: the population typically settled into one equilibrium, or fluctuated stochastically or periodically between attractors, so that attractors typically could not be stably selected (Additional file 1). Instead, the internal dynamics of the growth-splitting process completely overrode any effect of selection. This provides a clear counter-example to the widely accepted claim that the existence of multiple attractors is sufficient for selectability; it is not.
The crucial modification to the model was to allow rare novel species to appear from the shadow. In the few networks in which spontaneous addition of new species ignited a novel viable loop, and thus novel cores, multiple attractors always existed (see Figure 2 for a didactic example and Figure 4 for the network used in the evolutionary simulations). Note that we did not simulate inhibitory reactions in this version of the model; while they are certainly relevant in applications closer to real chemistry, their inclusion would have made our results on viable loops more difficult to interpret. Analogous to the idea that 'attractors' in biological systems have different stabilities (i.e., convergence can be equated with the revisiting of the most stable attractors), we also detected stable attractors (with larger attractor basins) in which the system settles most of the time, with occasional transitions to less stable attractors (with smaller attractor basins).
We suspected intuitively that selection would work in networks with novel viable loops, and this was indeed the case. Our results can be summarized as follows: networks with the viable core have an implicit selective advantage due to their higher growth rate, and so constitute the majority of the population, yet a one percent selective advantage attributed to the absence of the core is enough to significantly reduce the proportion of networks with viable cores in the population (Figure 5). Selectability arises in this model because a novel viable core creates a new and distinct attractor for the reaction network and, owing to its autocatalytic properties, enables a higher growth rate of non-food mass. Hence we already have the basic requirements for natural selection: two entities growing exponentially at different rates, with different division times. Since it is always possible to lose the viable core upon protocell fission (a loss mutation whose rate is simply a function of propagule size), a kind of mutation-selection balance arises if no novel chemical species can invade from the shadow. When rare reactions are allowed, novelty can arise through the generation of new viable cores, and these can be removed by selection if they reduce the growth rate of the compartment. Between-compartment selection, as shown in Figure 5, arises from the effect a core has on compartment-level fitness. For example, the large core (Figure 4) sustains more non-food mass in its core and periphery, which increases the growth rate of the compartment. In reality, each molecule of the core and its periphery may confer a host of compartment-level effects (e.g. modification of membrane permeability, specific metabolic adaptations) that could affect compartment-level fitness, but these are not explicitly modelled here.
Selectability of potentially coexisting attractors in a molecular network. Each dot corresponds to a compartment just prior to division. (Top) Due to its autocatalytic properties, a viable loop enables a higher growth rate; therefore the network with the large viable loop (characterized by 26 reactions and dividing after approximately 20 000 time steps) constitutes the most frequent network type. Spontaneous reaction rate = 0.00001; propagule size = 800; no selection (S = 1). (Bottom) However, a mere 1 percent fitness advantage (S = 1.01) attributed to the networks without the loop is enough to reduce its frequency. In this case the original network, without any viable loops, is the most frequent.
It is important to note that there are two levels of autocatalysis in this system. Even if the internal organization of the network encapsulated by a protocell fails to be autocatalytic, the rule that the compartment divides in two after reaching critical mass effectively ensures that such compartments have a 'generation time' and the potential to grow exponentially. Autocatalytic cores also grow exponentially. Hence there is autocatalysis at two levels: that of molecules and that of compartments. The reproducing compartment without an enclosed autocatalytic network is not, however, a replicator, as it always assumes the same state and cannot sustain hereditary variation.
As new principles emerge in new domains and bodies of technology develop, they ripple across the economy in profound ways. The economy doesn’t so much adopt a technology as encounter it. An industry is made up of its organizations, business processes and production equipment. These elements come face to face with the domain of a new technology, like the field of computing moving into the banking sector. As the impact on this industry ripples out to other sectors, if it’s large enough, the structure of the economy changes as well.
So technology is not just combining at the micro level of individual technologies; it is also combining bodies of technology with various industries at the macro level.
Rather than thinking about demand for technologies, Arthur uses the notion of “opportunity niches,” as though it were a type of ecosystem within which the technology evolves and lives. Opportunity niches evolve over time, not just as human tastes and needs evolve, but also in response to the opportunities opened by other technologies. The automobile opened an opportunity niche for fueling vehicles, for example.
If you were to map all the existing technologies in a network, it would look somewhat organic, growing out in all directions. You can imagine the actively used technologies as lit up nodes in the network, while the light fades out on technologies that are no longer actively used in the mainstream. When these technology nodes appear and light up and when they dim, it affects not just the network of related technologies, but also ripples through the economy. Because technologies are made up of combinations of other technologies, once a certain critical mass of them exists, the potential combinations of usable parts grows exponentially, creating a kind of Cambrian Explosion of technological possibilities.
Arthur sees the economy not so much as a container for technology as something that is formed by technology. There is more to the economy than technology, but technologies form something akin to its skeletal structure. Another way of seeing it: the economy is roughly analogous to an ecosystem within which technologies exist. The economy changes the technology and the technology changes the economy, much as organisms and ecosystems coexist and shape one another.
Evolution reveals missing link between DNA and protein shape
Fifty years after the pioneering discovery that a protein's three-dimensional structure is determined solely by the sequence of its amino acids, an international team of researchers has taken a major step toward fulfilling the tantalizing promise: predicting the structure of a protein from its DNA alone.
The team at Harvard Medical School (HMS), Politecnico di Torino / Human Genetics Foundation Torino (HuGeF) and Memorial Sloan-Kettering Cancer Center in New York (MSKCC) has reported substantial progress toward solving a classical problem of molecular biology: the computational protein folding problem.
The results will be published Dec. 7 in the journal PLoS ONE.
In molecular biology and biomedical engineering, knowing the shape of protein molecules is key to understanding how they perform the work of life, the mechanisms of disease and drug design. Normally the shape of protein molecules is determined by expensive and complicated experiments, and for most proteins these experiments have not yet been done. Computing the shape from genetic information alone is possible in principle. But despite limited success for some smaller proteins, this challenge has remained essentially unsolved. The difficulty lies in the enormous complexity of the search space, an astronomically large number of possible shapes. Without any shortcuts, it would take a supercomputer many years to explore all possible shapes of even a small protein.
"Experimental structure determination has a hard time keeping up with the explosion in genetic sequence information," said Debora Marks, a mathematical biologist in the Department of Systems Biology at HMS, who worked closely with Lucy Colwell, a mathematician, who recently moved from Harvard to Cambridge University. They collaborated with physicists Riccardo Zecchina and Andrea Pagnani in Torino in a team effort initiated by Marks and computational biologist Chris Sander of the Computational Biology Program at MSKCC, who had earlier attempted a similar solution to the problem, when substantially fewer sequences were available.
"Collaboration was key," Sander said. "As with many important discoveries in science, no one could provide the answer in isolation."
The international team tested a bold premise: that evolution can provide a roadmap to how the protein folds. Their approach combined three key elements: evolutionary information accumulated over many millions of years; data from high-throughput genetic sequencing; and a key method from statistical physics, co-developed in the Torino group with Martin Weigt, who recently moved to the University of Paris.
Using the accumulated evolutionary information in the form of the sequences of thousands of proteins, grouped in protein families that are likely to have similar shapes, the team found a way to solve the problem: an algorithm to infer which parts of a protein interact to determine its shape. They used a principle from statistical physics called "maximum entropy" in a method that extracts information about microscopic interactions from measurement of system properties.
"The protein folding problem has been a huge combinatorial challenge for decades," said Zecchina, "but our statistical methods turned out to be surprisingly effective in extracting essential information from the evolutionary record."
With these internal protein interactions in hand, widely used molecular simulation software developed by Axel Brunger at Stanford University generated the atomic details of the protein shape. For the first time, the team was able to compute remarkably accurate shapes from sequence information alone for a test set of 15 diverse proteins, with no protein size limit in sight.
"Alone, none of the individual pieces are completely novel, but apparently nobody had put all of them together to predict 3D protein structure," Colwell said.
To test their method, the researchers initially focused on the Ras family of signaling proteins, which has been extensively studied because of its known link to cancer. The structure of several Ras-type proteins has already been solved experimentally, but the proteins in the family, at about 160 amino acid residues, are larger than any proteins previously modeled computationally from sequence alone.
"When we saw the first computationally folded Ras protein, we nearly went through the roof," Marks said. To the researchers' amazement, their model folded within about 3.5 angstroms of the known structure with all the structural elements in the right place. And there is no reason, the authors say, that the method couldn't work with even larger proteins.
The researchers caution that there are other limits, however: Experimental structures, when available, generally are more accurate in atomic detail. And, the method works only when researchers have genetic data for large protein families. But advances in DNA sequencing have yielded a torrent of such data that is forecast to continue growing exponentially in the foreseeable future.
The next step, the researchers say, is to predict the structures of unsolved proteins currently being investigated by structural biologists, before exploring the large uncharted territory of currently unknown protein structures.
"Synergy between computational prediction and experimental determination of structures is likely to yield increasingly valuable insight into the large universe of protein shapes that crucially determine their function and evolutionary dynamics," Sander said.
This research was funded by the National Cancer Institute and the Engineering and Physical Sciences Research Council of the United Kingdom.
Evolution Does Not Reward Selfish and Mean People
New research has found that our evolutionary biology does not reward selfish and mean people. Over the long run, cooperative ‘nice guys’ actually finish first. Michigan State University evolutionary biologists Christoph Adami and Arend Hintze have found that evolution favors cooperation and altruism over being ‘mean and selfish.’
The new study titled “Evolutionary Instability of Zero-Determinant Strategies Demonstrates That Winning Is Not Everything” was published August 1, 2013 in Nature Communications. Adami and Hintze say their research shows that exhibiting only selfish traits would have caused humans to go extinct. "We found evolution will punish you if you're selfish and mean," said lead author Christoph Adami, MSU professor of microbiology and molecular genetics. "For a short time and against a specific set of opponents, some selfish organisms may come out ahead. But selfishness isn't evolutionarily sustainable."
Adami and Hintze’s research focuses on game theory, which is used in biology, economics, political science and other disciplines. There has been a lot of research over the past 30 years investigating how cooperation has evolved in various species. Cooperative behavior is the key to the survival for many forms of life, from single-cell organisms to people. Mutualistic cooperation is at the core of human interdependence and key to the survival of our species.
Winning Isn’t Everything
Game theory involves devising games to simulate situations of conflict or cooperation. It allows researchers to unravel complex decision-making strategies and to establish why certain types of behavior emerge among different individuals.
The MSU researchers used a model of the prisoner's dilemma game, in which two suspects interrogated in separate prison cells must each decide whether or not to snitch on the other. The researchers dubbed being an informant the "mean and selfish" strategy; it relies on a player knowing the opponent's previous decision and adapting accordingly.
In the prisoner's dilemma, each player is offered a 'get out of jail' card if they snitch on their opponent, putting him or her in prison for six months. However, this scenario plays out only if the opponent chooses not to inform. If both prisoners inform (defection), they each get three months in prison; if both stay silent (cooperation), they each get a jail term of only one month.
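The jail terms above can be written as a small payoff table to show why, in a single round, snitching is the better reply to either choice:

```python
# The prisoner's dilemma jail terms from the article (months in prison;
# lower is better), and a check of the best reply to each opponent move.
months = {                     # (my move, opponent's move) -> my jail term
    ("silent", "silent"): 1,
    ("silent", "snitch"): 6,
    ("snitch", "silent"): 0,
    ("snitch", "snitch"): 3,
}

for their_move in ("silent", "snitch"):
    best = min(("silent", "snitch"), key=lambda me: months[(me, their_move)])
    print(their_move, "->", best)   # snitching is the better reply either way
```

This is exactly the dilemma: defection dominates each single round, even though mutual cooperation (one month each) beats mutual defection (three months each), which is why the long-run, repeated-game perspective in the study matters.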
Adami explains, “The two prisoners that are interrogated are not allowed to talk to each other. If they did they would make a pact and be free within a month. But if they were not talking to each other, the temptation would be to rat the other out. Being mean can give you an advantage on a short timescale but certainly not in the long run – you would go extinct.”
We’re All in This Together
A 2012 study titled "Two Key Steps in the Evolution of Human Cooperation: The Interdependence Hypothesis" showed that humans are much more inclined to cooperate than are their closest evolutionary relatives. The authors of the study found that humans developed cooperative skills because it was in their mutual interest to work well with others — primarily due to ecological circumstances which forced us to cooperate with others to obtain food.
Ultimately altruism is self-serving to a degree — we must cooperate with one another in order to survive. We are altruistic to others because we need them for our individual survival and the survival of our species.
Researchers speculate that as hunter-gatherers humans had to forage together, which meant that each individual had a direct stake in the welfare of the group. This created an interdependence which caused humans to develop special cooperative abilities that other apes do not possess, including dividing food fairly, communicating goals and strategies, and understanding one's individual role in the collective. Homo sapiens who were able to coordinate with their fellow hunter-gatherers pulled their weight in the group and were more likely to survive.
As societies grew larger and more complex, humans actually became more dependent on one another. The authors define this as a second evolutionary jump, in which collaborative skills and impulses were developed on a larger scale as humans faced competition from other groups. As we moved from agrarian societies to industrialization, individuals actually became more "group-minded," identifying with others in their society even if they did not know them personally. This new sense of belonging created cultural conventions and norms that were reflected in behaviors oriented toward the well-being of the collective, such as volunteerism.
Conclusion: Evolutionary Biology Shows that Nice Guys Finish First
Will the digital age and Facebook culture make us more interdependent, cooperative, and collective-minded—or more selfish, egocentric, and mean? In a dog-eat-dog world it is encouraging to have scientists confirm that Machiavellian, self-serving behavior ultimately backfires. Being cooperative, loyal, and altruistic is not only good for your individual well-being; it also builds social capital and resilience and is good for the collective. This is the ultimate win-win.
“What we modelled in the computer were very general things, namely decisions between two different behaviors. We call them cooperation and defection. But in the animal world there are all kinds of behaviors that are binary, for example to flee or to fight,” says Adami. "In any evolutionary environment, knowing your opponent’s decision would not be advantageous for long because your opponent would evolve the same recognition mechanism to also know you," Dr. Adami concludes. “It’s almost like what we had in the cold war, an arms race – but these arms races occur all the time in evolutionary biology."
5 THE ε-BTP PROBLEM
Typically, on real data a BTP will not exist, either because the frequencies ai are determined with some error, or because the VAF data does not capture the frequency of a subpopulation that has no mutations occurring exclusively in it (VAFs provide information only about the proportion of cells with a mutation, not about the proportion of cells that have one specific mutation and lack another). In this section we introduce the ε-BTP to account for these scenarios. Suppose we have the multiset L̃ = {ã1, …, ãm} of observed frequencies and a corresponding VAF error vector ε = (ε1, …, εm) for L̃, where εi is the maximum possible error in observing ãi for 1 ≤ i ≤ m. To account for subpopulations without distinguishing mutations, we may need to add auxiliary frequencies to L̃ that correspond to the missing subpopulation frequencies. We make the following definitions.
Given a multiset L̃ = {ã1, …, ãm} with associated VAF error vector ε = (ε1, …, εm), an ε-BTP with k ≥ 0 auxiliary nodes is a BTP for a multiset L = {a1, a2, …, am+k} such that |ai − ãi| ≤ εi for all i ≤ m. We call the nodes am+1, …, am+k the auxiliary nodes of the ε-BTP.
Given a multiset L̃ and an associated VAF error vector ε, find an ε-BTP of L̃ with the minimum number of auxiliary nodes, such that no two auxiliary nodes are siblings.
The constraint on auxiliary nodes in the definition of the ε-BTP problem follows from the assumptions in our model of cancer progression: each branching in the progression happens only when at least one clonal expansion starts, so the VAF data captures the frequency of the newly formed subpopulation (see Section 2). Thus, at least one of the children of the current subpopulation node is not an auxiliary node.
It is straightforward to show that for any multiset L of size m it is always possible to obtain an ε-BTP with k = m − 1 auxiliary nodes (proof in Supplementary Appendix A.3). Also, when εi = 0 for all 1 ≤ i ≤ m, a BTP exists for L if and only if the corresponding ε-BTP has a solution with k = 0 auxiliary nodes.
To outline our algorithm, we need the following definitions.
Given a VAF error vector ε, an ε-CSP tree is a (weighted) binary tree such that for each internal node ãi we have ãj + ãk ∈ [ãi − (εi + εj + εk), ãi + (εi + εj + εk)], where ãj and ãk are the children of ãi.
We say that an ε-CSP tree T̃ for a multiset L̃ is acceptable if we can obtain a BTP, α(T̃), by replacing each ãi with a value ai such that |ai − ãi| ≤ εi. Note that α(T̃) is an ε-BTP for L̃. Also note that an ε-CSP tree is not necessarily acceptable (see Supplementary Appendix D.8). However, one can easily check whether a given ε-CSP tree T̃ is acceptable by finding a collection of ei's, with |ei| ≤ εi, satisfying the constraints (ãi + ei) = (ãj + ej) + (ãk + ek) for each internal node ãi and its children ãj, ãk. This can easily be done via a linear program, which we denote by LP(T̃).
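The acceptability check can also be sketched without an LP solver: on a tree the constraints decompose, so bottom-up interval propagation suffices. This is an illustrative alternative to the LP(T̃) check, with hypothetical example values; nodes are represented as (observed frequency, epsilon, children).

```python
# Acceptability check for an eps-CSP tree via interval propagation: each
# node's value must lie within epsilon of its observation AND equal the sum
# of its children's values.  An empty interval anywhere means infeasible.
# (On a tree this is equivalent to the LP check, since subtrees are
# independent.)  Example values are hypothetical.

def feasible_interval(node):
    a, eps, children = node
    lo, hi = a - eps, a + eps               # own observation window
    if children:
        child = [feasible_interval(c) for c in children]
        if any(iv is None for iv in child):
            return None
        # intersect with the achievable range of the children's sum
        lo = max(lo, sum(iv[0] for iv in child))
        hi = min(hi, sum(iv[1] for iv in child))
    return (lo, hi) if lo <= hi else None   # None: constraints unsatisfiable

# root 0.9 with children 0.5 and 0.35, all with error 0.05: acceptable,
# since e.g. 0.5 + 0.35 = 0.85 lies within 0.9 +/- 0.05
tree = (0.9, 0.05, [(0.5, 0.05, []), (0.35, 0.05, [])])
print(feasible_interval(tree) is not None)   # True
```

If the root's interval is non-empty, concrete values ai can be picked top-down (choose the root anywhere in its interval, then split it across the children within their intervals), yielding the BTP α(T̃).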
Our Rec-BTP algorithm (Algorithm 1) uses a recursive method that works as follows. At each recursion we have (i) a partially constructed ε-CSP tree T̂, (ii) a multiset of remaining frequencies L̂ and (iii) the number of auxiliary nodes we are still allowed to use. We check whether T̂ can be extended by attaching to one of its leaves either two elements of L̂, or one element of L̂ and an auxiliary node (assigning the auxiliary node's weight accordingly). If L̂ is empty, the algorithm has constructed an ε-CSP tree, and we output α(T̂) if LP(T̂) has a feasible solution. Finally, Rec-BTP outputs all the ε-BTPs found. Iterating over all values of k from 0 to m − 1, the algorithm finds the smallest k for which an ε-BTP exists.
Later, in Section 6, for the purpose of benchmarking our results, in the case of multiple ε-BTP outputs we choose only the tree whose list of node frequencies has the minimum root mean square deviation (RMSD) from the original VAF data (defined in Section 6).
Classical game theory
Classical non-cooperative game theory was conceived by John von Neumann to determine optimal strategies in competitions between adversaries. A contest involves players, all of whom have a choice of moves. Games can be single-round or repetitive. The approach a player takes in making his moves constitutes his strategy. Rules govern the outcome for the moves taken by the players, and outcomes produce payoffs for the players; rules and resulting payoffs can be expressed as decision trees or in a payoff matrix. Classical theory requires the players to make rational choices: each player must consider the strategic analysis that his opponents are making in order to make his own choice of moves.
The problem of ritualized behaviour
Evolutionary game theory started with the problem of how to explain ritualized animal behaviour in conflict situations: why are animals so 'gentlemanly or ladylike' in contests for resources? The leading ethologists Niko Tinbergen and Konrad Lorenz proposed that such behaviour exists for the benefit of the species. John Maynard Smith considered that incompatible with Darwinian thought, where selection occurs at the individual level, so self-interest is rewarded while seeking the common good is not. Maynard Smith, a mathematical biologist, turned to game theory, as suggested by George Price, though Richard Lewontin's attempts to use the theory had failed.
Adapting game theory to evolutionary games
Maynard Smith realised that an evolutionary version of game theory does not require players to act rationally—only that they have a strategy. The results of a game show how good that strategy was, just as evolution tests alternative strategies for the ability to survive and reproduce. In biology, strategies are genetically inherited traits that control an individual's action, analogous with computer programs. The success of a strategy is determined by how good the strategy is in the presence of competing strategies (including itself), and by the frequency with which those strategies are used. Maynard Smith described his work in his book Evolution and the Theory of Games.
Participants aim to produce as many replicas of themselves as they can, and the payoff is in units of fitness (relative worth in being able to reproduce). It is always a multi-player game with many competitors. Rules include replicator dynamics, in other words how the fitter players will spawn more replicas of themselves into the population and how the less fit will be culled, as expressed in a replicator equation. The replicator dynamics models heredity but not mutation, and assumes asexual reproduction for the sake of simplicity. Games are run repetitively with no terminating conditions. Results include the dynamics of changes in the population, the success of strategies, and any equilibrium states reached. Unlike in classical game theory, players do not choose their strategy and cannot change it: they are born with a strategy and their offspring inherit that same strategy.
Evolutionary game theory analyses Darwinian mechanisms with a system model with three main components – population, game, and replicator dynamics. The system process has four phases:
1) The model (like evolution itself) deals with a population (Pn). The population will exhibit variation among competing individuals. In the model this competition is represented by the game.
2) The game tests the strategies of the individuals under the rules of the game. These rules produce different payoffs – in units of fitness (the production rate of offspring). The contesting individuals meet in pairwise contests with others, normally in a highly mixed distribution of the population. The mix of strategies in the population affects the payoff results by altering the odds that any individual may meet up in contests with various strategies. The individuals leave the pairwise contest with a resulting fitness determined by the contest outcome, represented in a payoff matrix.
3) Based on this resulting fitness each member of the population then undergoes replication or culling determined by the exact mathematics of the replicator dynamics process. This overall process then produces a new generation P(n+1). Each surviving individual now has a new fitness level determined by the game result.
4) The new generation then takes the place of the previous one and the cycle repeats. The population mix may converge to an evolutionarily stable state that cannot be invaded by any mutant strategy.
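The four phases above can be sketched as a short simulation. The two-strategy payoff matrix, the population size, and the number of generations are arbitrary illustrations, not taken from any particular study.

```python
# One pass through the four phases: a mixed population meets in pairwise
# contests (phase 2), individuals replicate in proportion to contest payoff
# (phase 3), and the cycle repeats (phase 4).  Payoffs are hypothetical.
import random

rng = random.Random(0)
payoff = {("A", "A"): 3, ("A", "B"): 1, ("B", "A"): 4, ("B", "B"): 2}

def next_generation(pop):
    rng.shuffle(pop)                                  # highly mixed population
    scored = []
    for i in range(0, len(pop) - 1, 2):               # phase 2: pairwise contests
        a, b = pop[i], pop[i + 1]
        scored += [(a, payoff[(a, b)]), (b, payoff[(b, a)])]
    # phase 3: replication proportional to fitness (culling is implicit)
    strategies = [s for s, _ in scored]
    weights = [f for _, f in scored]
    return rng.choices(strategies, weights=weights, k=len(pop))

pop = ["A"] * 50 + ["B"] * 50                         # phase 1: the population
for _ in range(30):                                   # phase 4: the cycle repeats
    pop = next_generation(pop)
print(pop.count("A"), pop.count("B"))  # B, which dominates this payoff matrix, tends to spread
```

In this example B is strictly dominant (it scores higher against either opponent), so its share typically grows generation by generation, illustrating how the game plus replication drives the population mix.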
Evolutionary game theory encompasses Darwinian evolution, including competition (the game), natural selection (replicator dynamics), and heredity. Evolutionary game theory has contributed to the understanding of group selection, sexual selection, altruism, parental care, co-evolution, and ecological dynamics. Many counter-intuitive situations in these areas have been put on a firm mathematical footing by the use of these models. 
The common way to study the evolutionary dynamics in games is through replicator equations. These show that the growth rate of the proportion of organisms using a certain strategy equals the difference between the average payoff of that strategy and the average payoff of the population as a whole. Continuous replicator equations assume infinite populations, continuous time, complete mixing, and that strategies breed true. The attractors (stable fixed points) of the equations are equivalent to evolutionarily stable states. A strategy which can survive all "mutant" strategies is considered evolutionarily stable. In the context of animal behavior, this usually means such strategies are programmed and heavily influenced by genetics, thus making any player or organism's strategy determined by these biological factors.
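The replicator equation just described can be sketched numerically in a few lines: each strategy's share grows at a rate equal to the difference between its payoff against the current mix and the population average. The 2×2 payoff matrix and starting mix below are arbitrary illustrations (a coordination game, in which whichever strategy starts above the 50% threshold takes over).

```python
# Euler integration of the replicator equation dx_i/dt = x_i * (f_i - fbar),
# where f_i is strategy i's average payoff against the current population mix
# and fbar is the population-average payoff.  Payoffs are illustrative.
payoff = [[2.0, 0.0],
          [1.0, 1.0]]          # payoff[i][j]: strategy i against strategy j

x = [0.1, 0.9]                 # initial shares of the two strategies
dt = 0.01
for _ in range(20000):
    f = [sum(payoff[i][j] * x[j] for j in range(2)) for i in range(2)]  # strategy payoffs
    fbar = sum(x[i] * f[i] for i in range(2))                           # population average
    x = [x[i] + dt * x[i] * (f[i] - fbar) for i in range(2)]            # replicator update

print([round(v, 3) for v in x])   # [0.0, 1.0]: the initially common strategy prevails
```

The stable fixed points of this update are exactly the attractors the text refers to; here the interior equilibrium at a 50/50 mix is unstable, so the dynamics converge to one of the two pure-strategy states.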
Evolutionary games are mathematical objects with different rules, payoffs, and mathematical behaviours. Each "game" represents different problems that organisms have to deal with, and the strategies they might adopt to survive and reproduce. Evolutionary games are often given colourful names and cover stories which describe the general situation of a particular game. Representative games include hawk-dove,  war of attrition,  stag hunt, producer-scrounger, tragedy of the commons, and prisoner's dilemma. Strategies for these games include hawk, dove, bourgeois, prober, defector, assessor, and retaliator. The various strategies compete under the particular game's rules, and the mathematics are used to determine the results and behaviours.
Hawk dove
The first game that Maynard Smith analysed is the classic hawk dove [a] game. It was conceived to analyse Lorenz and Tinbergen's problem, a contest over a shareable resource. The contestants can be either a hawk or a dove. These are two subtypes or morphs of one species with different strategies. The hawk first displays aggression, then escalates into a fight until it either wins or is injured (loses). The dove first displays aggression, but if faced with major escalation runs for safety. If not faced with such escalation, the dove attempts to share the resource. 
|          | meets hawk | meets dove |
| if hawk  | V/2 − C/2  | V          |
| if dove  | 0          | V/2        |
Given that the resource has value V and the damage from losing a fight has cost C:
- If a hawk meets a dove, it gets the full resource V
- If a hawk meets a hawk, half the time it wins and half the time it loses, so the average outcome is V/2 − C/2
- If a dove meets a hawk, it backs off and gets nothing: 0
- If a dove meets a dove, both share the resource and get V/2
The actual payoff, however, depends on the probability of meeting a hawk or a dove, which in turn is a representation of the percentage of hawks and doves in the population when a particular contest takes place. That in turn is determined by the results of all of the previous contests. If the cost of losing C is greater than the value of winning V (the normal situation in the natural world) the mathematics ends in an evolutionarily stable strategy (ESS), a mix of the two strategies where the proportion of hawks is V/C. The population regresses to this equilibrium point if any new hawks or doves make a temporary perturbation in the population. The solution of the hawk dove game explains why most animal contests involve only ritual fighting behaviours rather than outright battles. The result does not at all depend on "good of the species" behaviours as suggested by Lorenz, but solely on the actions of so-called selfish genes.
War of attrition
In the hawk dove game the resource is shareable, which gives payoffs to both doves meeting in a pairwise contest. Where the resource is not shareable, but an alternative resource might be available by backing off and trying elsewhere, pure hawk or dove strategies are less effective. If an unshareable resource is combined with a high cost of losing a contest (injury or possible death), both hawk and dove payoffs are further diminished. A safer strategy of lower-cost display, bluffing and waiting to win is then viable – a bluffer strategy. The game then becomes one of accumulating costs, either the costs of displaying or the costs of prolonged unresolved engagement. It is effectively an auction; the winner is the contestant who will swallow the greater cost, while the loser gets the same cost as the winner but no resource. The resulting evolutionary game theory mathematics lead to an optimal strategy of timed bluffing.
This is because in the war of attrition any strategy that is unwavering and predictable is unstable, because it will ultimately be displaced by a mutant strategy which relies on the fact that it can best the existing predictable strategy by investing an extra small delta of waiting resource to ensure that it wins. Therefore, only a random, unpredictable strategy can maintain itself in a population of bluffers. The contestants in effect choose an acceptable cost to be incurred related to the value of the resource being sought, effectively making a random bid as part of a mixed strategy (a strategy where a contestant has several, or even many, possible actions in their strategy). This implements a distribution of bids for a resource of specific value V, where the bid for any specific contest is chosen at random from that distribution. The distribution (an ESS) can be computed using the Bishop-Cannings theorem, which holds true for any mixed-strategy ESS. The distribution function in these contests was determined by Parker and Thompson to be:

p(x) = (1/V)·e^(−x/V)
The result is that the cumulative population of quitters for any particular cost m in this "mixed strategy" solution is:

p(m) = 1 − e^(−m/V)
as shown in the adjacent graph. The intuitive sense that greater values of the resource sought lead to greater waiting times is borne out. This is observed in nature, as in male dung flies contesting for mating sites, where the timing of disengagement in contests is as predicted by evolutionary theory mathematics.
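The exponential form of Parker and Thompson's solution can be illustrated numerically. This sketch assumes the ESS bid density p(x) = (1/V)·e^(−x/V), i.e. an exponential distribution with mean V; the resource value V = 10 is illustrative:

```python
import math
import random

V = 10.0  # value of the contested resource (illustrative)

def ess_bid():
    """Draw a persistence cost from the war-of-attrition ESS density
    p(x) = (1/V) * exp(-x/V): an exponential distribution with mean V."""
    return random.expovariate(1.0 / V)

def cumulative_quitters(m):
    """Fraction of contestants that have quit at or below cost m:
    p(m) = 1 - exp(-m/V)."""
    return 1.0 - math.exp(-m / V)

random.seed(1)
bids = [ess_bid() for _ in range(100_000)]
empirical = sum(b <= V for b in bids) / len(bids)
# At m = V the formula predicts 1 - e^(-1), about 63% of contestants quit;
# the sampled fraction should match it closely.
```

Contestants bidding at random from this distribution are unpredictable to an opponent, which is exactly why the strategy cannot be exploited.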
Asymmetries that allow new strategies
In the war of attrition there must be nothing that signals the size of a bid to an opponent, otherwise the opponent can use the cue in an effective counter-strategy. There is however a mutant strategy which can better a bluffer in the war of attrition game if a suitable asymmetry exists, the bourgeois strategy. Bourgeois uses an asymmetry of some sort to break the deadlock. In nature one such asymmetry is possession of a resource. The strategy is to play a hawk if in possession of the resource, but to display then retreat if not in possession. This requires greater cognitive capability than hawk, but bourgeois is common in many animal contests, such as in contests among mantis shrimps and among speckled wood butterflies.
Social behaviour
Games like hawk dove and war of attrition represent pure competition between individuals and have no attendant social elements. Where social influences apply, competitors have four possible alternatives for strategic interaction. This is shown on the adjacent figure, where a plus sign represents a benefit and a minus sign represents a cost.
- In a cooperative or mutualistic relationship both "donor" and "recipient" are almost indistinguishable as both gain a benefit in the game by co-operating, i.e. the pair are in a game-wise situation where both can gain by executing a certain strategy, or alternatively both must act in concert because of some encompassing constraints that effectively puts them "in the same boat".
- In an altruistic relationship the donor, at a cost to itself, provides a benefit to the recipient. In the general case the recipient will have a kin relationship to the donor and the donation is one-way. Behaviours where benefits are donated alternately (in both directions) at a cost are often called "altruistic", but on analysis such "altruism" can be seen to arise from optimised "selfish" strategies.
- Spite is essentially a "reversed" form of altruism where an ally is aided by damaging the ally's competitors. The general case is that the ally is kin related and the benefit is an easier competitive environment for the ally. Note: George Price, one of the early mathematical modellers of both altruism and spite, found this equivalence particularly disturbing at an emotional level.
- Selfishness is the base criteria of all strategic choice from a game theory perspective – strategies not aimed at self-survival and self-replication are not long for any game. Critically however, this situation is impacted by the fact that competition is taking place on multiple levels – i.e. at a genetic, an individual and a group level.
At first glance it may appear that the contestants of evolutionary games are the individuals present in each generation who directly participate in the game. But individuals live only through one game cycle, and instead it is the strategies that really contest with one another over the duration of these many-generation games. So it is ultimately genes that play out a full contest – selfish genes of strategy. The contesting genes are present in an individual and to a degree in all of the individual's kin. This can sometimes profoundly affect which strategies survive, especially with issues of cooperation and defection. William Hamilton,  known for his theory of kin selection, explored many of these cases using game-theoretic models. Kin-related treatment of game contests  helps to explain many aspects of the behaviour of social insects, the altruistic behaviour in parent-offspring interactions, mutual protection behaviours, and co-operative care of offspring. For such games, Hamilton defined an extended form of fitness – inclusive fitness, which includes an individual's offspring as well as any offspring equivalents found in kin.
Fitness is measured relative to the population average; for example, fitness = 1 means growth at the average rate for the population, fitness < 1 means having a decreasing share in the population (dying out), and fitness > 1 means an increasing share in the population (taking over).
The inclusive fitness of an individual wi is the sum of its own specific fitness ai plus the specific fitness of each and every relative weighted by the degree of relatedness, which equates to the summation of all rj·bj, where rj is the relatedness of a specific relative and bj is that specific relative's fitness – producing:

wi = ai + Σj rj·bj
If individual ai sacrifices its "own average equivalent fitness of 1" by accepting a fitness cost C, then to "get that loss back" wi must still be 1 (or greater than 1). Using R·B to represent the summation, this results in:

1 − C + R·B ≥ 1, i.e. R·B ≥ C (Hamilton's rule)
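Inclusive fitness and Hamilton's rule can be expressed as a small sketch; the function names and the example relatedness and payoff values below are illustrative, not from the text:

```python
def inclusive_fitness(a_i, relatives):
    """w_i = a_i + sum(r_j * b_j): an individual's own fitness plus the
    fitness of each relative weighted by relatedness.
    `relatives` is a list of (r_j, b_j) pairs."""
    return a_i + sum(r * b for r, b in relatives)

def altruism_pays(C, R, B):
    """Hamilton's rule: a fitness cost C to the donor is recouped when
    R * B > C, i.e. when 1 - C + R*B >= 1."""
    return R * B > C

# Example: helping a full sibling (R = 0.5) at cost C = 1 to give it
# benefit B = 3 satisfies R*B = 1.5 > 1, so the strategy can spread;
# the same act toward a cousin (R = 0.125) would not.
```

A quick check: `altruism_pays(1, 0.5, 3)` is true, while `altruism_pays(1, 0.125, 3)` is false, matching the kin-selection logic described above.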
Hamilton went beyond kin relatedness to work with Robert Axelrod, analysing games of co-operation under conditions not involving kin where reciprocal altruism came into play. 
Eusociality and kin selection
Eusocial insect workers forfeit reproductive rights to their queen. It has been suggested that kin selection, based on the genetic makeup of these workers, may predispose them to altruistic behaviours.  Most eusocial insect societies have haplodiploid sexual determination, which means that workers are unusually closely related. 
This explanation of insect eusociality has, however, been challenged by a few highly noted evolutionary game theorists (Nowak and Wilson), who have published a controversial alternative game-theoretic explanation based on sequential development and group-selection effects proposed for these insect species.
Prisoner's dilemma
A difficulty of the theory of evolution, recognised by Darwin himself, was the problem of altruism. If the basis for selection is at the individual level, altruism makes no sense at all. But universal selection at the group level (for the good of the species, not the individual) fails to pass the test of the mathematics of game theory and is certainly not the general case in nature. Yet in many social animals altruistic behaviour exists. The solution to this problem can be found in the application of evolutionary game theory to the prisoner's dilemma game – a game which tests the payoffs of cooperating with, or defecting from, cooperation. It is the most studied game in all of game theory.
The analysis of the prisoner's dilemma is as a repetitive game. This affords competitors the possibility of retaliating for defection in previous rounds of the game. Many strategies have been tested; the best competitive strategies are general cooperation, with a reserved retaliatory response if necessary. The most famous, and one of the most successful, of these is tit-for-tat, which uses a simple algorithm.
The pay-off for any single round of the game is defined by the pay-off matrix for a single round game (shown in bar chart 1 below). In multi-round games the different choices – co-operate or defect – can be made in any particular round, resulting in a certain round payoff. It is, however, the possible accumulated pay-offs over the multiple rounds that count in shaping the overall pay-offs for differing multi-round strategies such as tit-for-tat.
Example 1: The straightforward single-round prisoner's dilemma game. The classic prisoner's dilemma payoffs give a player a maximum payoff if they defect and their partner co-operates (this choice is known as temptation). If, however, the player co-operates and their partner defects, they get the worst possible result (the sucker's payoff). In these payoff conditions the best choice (a Nash equilibrium) is to defect.
Example 2: Prisoner's dilemma played repeatedly. The strategy employed is tit-for-tat, which alters behaviours based on the action taken by a partner in the previous round – i.e. reward co-operation and punish defection. The effect of this strategy in accumulated payoff over many rounds is to produce a higher payoff for both players' co-operation and a lower payoff for defection. This removes the temptation to defect. The sucker's payoff also becomes smaller, although "invasion" by a pure defection strategy is not entirely eliminated.
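The two examples can be sketched as a minimal iterated prisoner's dilemma. The payoff values T = 5, R = 3, P = 1, S = 0 (temptation, reward, punishment, sucker's payoff) are conventional illustrative choices satisfying T > R > P > S, not values given in the text:

```python
# Single-round payoffs: T (temptation), R (reward), P (punishment),
# S (sucker's payoff). Illustrative values with T > R > P > S.
T, R, P, S = 5, 3, 1, 0

def payoff(me, other):
    """Payoff to `me` for one round ('C' cooperate / 'D' defect)."""
    return {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}[(me, other)]

def play(strat_a, strat_b, rounds=100):
    """Accumulated payoffs of two strategies over repeated rounds.
    A strategy maps the opponent's previous move (None at the start)
    to this round's move."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        a, b = strat_a(last_b), strat_b(last_a)
        score_a += payoff(a, b)
        score_b += payoff(b, a)
        last_a, last_b = a, b
    return score_a, score_b

# Tit-for-tat: cooperate first, then copy the opponent's last move.
tit_for_tat = lambda opp_last: "C" if opp_last in (None, "C") else "D"
always_defect = lambda opp_last: "D"

tft_pair = play(tit_for_tat, tit_for_tat)        # sustained mutual cooperation
tft_vs_defect = play(tit_for_tat, always_defect)  # TFT is exploited only once
```

Over 100 rounds two tit-for-tat players each accumulate 100·R, while against a pure defector tit-for-tat pays the sucker's payoff only in the first round and punishes defection thereafter.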
Routes to altruism
Altruism takes place when one individual, at a cost (C) to itself, exercises a strategy that provides a benefit (B) to another individual. The cost may consist of a loss of capability or resource which helps in the battle for survival and reproduction, or an added risk to its own survival. Altruism strategies can arise through:
It has been argued that human behaviours in establishing moral systems as well as the expending of significant energies in human society for tracking individual reputations is a direct effect of societies' reliance on strategies of indirect reciprocation. 
Organisms that use social score are termed Discriminators, and require a higher level of cognition than strategies of simple direct reciprocity. As evolutionary biologist David Haig put it – "For direct reciprocity you need a face; for indirect reciprocity you need a name".
The evolutionarily stable strategy
The evolutionarily stable strategy (ESS) is akin to the Nash equilibrium in classical game theory, but with mathematically extended criteria. Nash equilibrium is a game equilibrium where it is not rational for any player to deviate from their present strategy, provided that the others adhere to their strategies. An ESS is a state of game dynamics where, in a very large population of competitors, another mutant strategy cannot successfully enter the population to disturb the existing dynamic (which itself depends on the population mix). Therefore, a successful strategy (an ESS) must be both effective against competitors when it is rare, so it can enter the previous competing population, and successful when later in high proportion in the population, so it can defend itself. This in turn means that the strategy must be successful when it contends with others exactly like itself. An ESS is not:
- An optimal strategy: an optimal strategy would maximize fitness, but many ESS states are far below the maximum fitness achievable in a fitness landscape. (See the hawk dove graph above as an example of this.)
- A singular solution: often several ESS conditions can exist in a competitive situation. A particular contest might stabilize into any one of these possibilities, but later a major perturbation in conditions can move the solution into one of the alternative ESS states.
- Always present: it is possible for there to be no ESS. An evolutionary game with no ESS is "rock-scissors-paper", as found in species such as the side-blotched lizard (Uta stansburiana).
- An unbeatable strategy: the ESS is only an uninvadable strategy.
The ESS state can be found either by exploring the dynamics of population change or by solving equations for the stable stationary point conditions which define an ESS. For example, in the hawk dove game we can look for whether there is a static population mix condition where the fitness of doves will be exactly the same as the fitness of hawks (therefore both having equivalent growth rates – a static point).
Let the chance of meeting a hawk be p, so the chance of meeting a dove is (1 − p).
Let Whawk equal the expected payoff for a hawk:
Whawk = payoff against a dove × chance of meeting a dove + payoff against a hawk × chance of meeting a hawk
Taking the payoff matrix results and plugging them into the above equation:
Whawk = V·(1 − p) + ((V − C)/2)·p
Similarly for a dove:
Wdove = (V/2)·(1 − p) + 0·p
Equating the two fitnesses, hawk and dove:
V·(1 − p) + ((V − C)/2)·p = (V/2)·(1 − p)
so for this "static point" the population percentage that is an ESS solves to be ESS(percent hawk) = V/C.
Similarly, using inequalities, it can be shown that an additional hawk or dove mutant entering this ESS state eventually results in less fitness for their kind – both a true Nash and an ESS equilibrium. This example shows that when the risk of contest injury or death (the cost C) is significantly greater than the potential reward (the benefit value V), the stable population will be mixed between aggressors and doves, and the proportion of doves will exceed that of the aggressors. This explains behaviours observed in nature.
Rock paper scissors
Rock paper scissors incorporated into an evolutionary game has been used for modelling natural processes in the study of ecology.  Using experimental economics methods, scientists have used RPS games to test human social evolutionary dynamical behaviours in laboratories. The social cyclic behaviours, predicted by evolutionary game theory, have been observed in various laboratory experiments.  
Side-blotched lizard plays the RPS, and other cyclical games
The first example of RPS in nature was seen in the behaviours and throat colours of a small lizard of western North America. The side-blotched lizard (Uta stansburiana) is polymorphic with three throat-colour morphs, each of which pursues a different mating strategy:
- The orange throat is very aggressive and operates over a large territory – attempting to mate with numerous females within this larger area
- The unaggressive yellow throat mimics the markings and behavior of female lizards, and "sneakily" slips into the orange throat's territory to mate with the females there (thereby taking over the population)
- The blue throat mates with, and carefully guards, one female – making it impossible for the sneakers to succeed and thereby taking over their place in the population
However the blue throats cannot overcome the more aggressive orange throats. Later work showed that the blue males are altruistic to other blue males, with three key traits: they signal with blue color, they recognize and settle next to other (unrelated) blue males, and they will even defend their partner against orange, to the death. This is the hallmark of another game of cooperation that involves a green-beard effect.  
The females in the same population have the same throat colours, and this affects how many offspring they produce and the size of the progeny, which generates cycles in density, yet another game - the r-K game.  Here, r is the Malthusian parameter governing exponential growth, and K is the carrying capacity of the population. Orange females have larger clutches and smaller offspring and do well at low density. Yellow females (and blue) have smaller clutches and larger offspring and do better when the population exceeds carrying capacity and the population crashes to low density. The orange then takes over and this generates perpetual cycles of orange and yellow tightly tied to population density. The idea of cycles due to density regulation of two strategies originated with Dennis Chitty, who worked on rodents, ergo these kinds of games lead to "Chitty cycles". There are games within games within games embedded in natural populations. These drive RPS cycles in the males with a periodicity of four years and r-K cycles in females with a periodicity of two years.
The overall situation corresponds to the rock, scissors, paper game, creating a four-year population cycle. The RPS game in male side-blotched lizards does not have an ESS, but it has a Nash equilibrium (NE) with endless orbits around the NE attractor. Since that time many other three-strategy polymorphisms have been discovered in lizards and some of these have RPS dynamics merging the male game and density regulation game in a single sex (males).  More recently, mammals have been shown to harbour the same RPS game in males and r-K game in females, with coat-colour polymorphisms and behaviours that drive cycles.  This game is also linked to the evolution of male care in rodents, and monogamy, and drives speciation rates. There are r-K strategy games linked to rodent population cycles (and lizard cycles). 
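The endless orbits around the Nash equilibrium in the male RPS game can be illustrated with replicator dynamics for a generic zero-sum rock-paper-scissors payoff matrix. This is a sketch of the dynamic, not a fitted model of the lizard system; the payoffs (win +1, lose −1, tie 0) and starting mix are illustrative:

```python
# Replicator dynamics for zero-sum rock-paper-scissors. The interior
# Nash equilibrium (1/3, 1/3, 1/3) is not an ESS: instead of converging,
# trajectories orbit around it.
A = [[0, -1, 1],    # rock     vs (rock, paper, scissors)
     [1, 0, -1],    # paper
     [-1, 1, 0]]    # scissors

def derivative(x):
    """Replicator equation: dx_i/dt = x_i * (f_i - mean payoff)."""
    f = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    phi = sum(x[i] * f[i] for i in range(3))
    return [x[i] * (f[i] - phi) for i in range(3)]

def simulate(x, dt=0.001, steps=50_000):
    """Simple forward-Euler integration of the replicator flow."""
    for _ in range(steps):
        dx = derivative(x)
        x = [x[i] + dt * dx[i] for i in range(3)]
    return x

x_end = simulate([0.5, 0.3, 0.2])
# The state stays on the simplex (shares sum to 1, all positive) but
# does not settle onto the equilibrium point: the mix keeps cycling.
```

After long integration the strategy shares still sum to one and remain well away from the (1/3, 1/3, 1/3) point, mirroring the cycling morph frequencies observed in the lizards.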
When he read that these lizards were essentially engaged in a game with a rock-paper-scissors structure, John Maynard Smith is said to have exclaimed "They have read my book!". 
Aside from the difficulty of explaining how altruism exists in many evolved organisms, Darwin was also bothered by a second conundrum – why a significant number of species have phenotypical attributes that are patently disadvantageous to them with respect to their survival, and should by the process of natural selection be selected against – e.g. the massive, inconvenient feather structure found in a peacock's tail. Regarding this issue Darwin wrote to a colleague: "The sight of a feather in a peacock's tail, whenever I gaze at it, makes me sick." The mathematics of evolutionary game theory has not only explained the existence of altruism but also explains the totally counterintuitive existence of the peacock's tail and other such biological encumbrances.
On analysis, problems of biological life are not at all unlike the problems that define economics – eating (akin to resource acquisition and management), survival (competitive strategy) and reproduction (investment, risk and return). Game theory was originally conceived as a mathematical analysis of economic processes, and indeed this is why it has proven so useful in explaining so many biological behaviours. One important further refinement of the evolutionary game theory model that has economic overtones rests on the analysis of costs. A simple model of cost assumes that all competitors suffer the same penalty imposed by the game costs, but this is not the case. More successful players will be endowed with, or will have accumulated, a higher "wealth reserve" or "affordability" than less-successful players. This wealth effect in evolutionary game theory is represented mathematically by "resource holding potential (RHP)" and shows that the effective cost to a competitor with a higher RHP is not as great as for a competitor with a lower RHP. As a higher-RHP individual is a more desirable mate in producing potentially successful offspring, it is only logical that with sexual selection RHP should have evolved to be signalled in some way by the competing rivals, and for this to work this signalling must be done honestly. Amotz Zahavi developed this thinking in what is known as the "handicap principle", where superior competitors signal their superiority by a costly display. As higher-RHP individuals can properly afford such a costly display, this signalling is inherently honest, and can be taken as such by the signal receiver. Nowhere in nature is this better illustrated than in the costly plumage of the peacock. The mathematical proof of the handicap principle was developed by Alan Grafen using evolutionary game-theoretic modelling.
Evolutionary games can exhibit two types of dynamic:
- Evolutionary games which lead to a stable situation or point of stasis for contending strategies, resulting in an evolutionarily stable strategy
- Evolutionary games which exhibit a cyclic behaviour (as with RPS game) where the proportions of contending strategies continuously cycle over time within the overall population
A third, coevolutionary, dynamic, combines intra-specific and inter-specific competition. Examples include predator-prey competition and host-parasite co-evolution, as well as mutualism. Evolutionary game models have been created for pairwise and multi-species coevolutionary systems.  The general dynamic differs between competitive systems and mutualistic systems.
In a competitive (non-mutualistic) inter-species coevolutionary system the species are involved in an arms race, where adaptations that are better at competing against the other species tend to be preserved. Both game payoffs and replicator dynamics reflect this. This leads to a Red Queen dynamic where the protagonists must "run as fast as they can to just stay in one place".
A number of evolutionary game theory models have been produced to encompass coevolutionary situations. A key factor applicable in these coevolutionary systems is the continuous adaptation of strategy in such arms races. Coevolutionary modelling therefore often includes genetic algorithms to reflect mutational effects, while computers simulate the dynamics of the overall coevolutionary game. The resulting dynamics are studied as various parameters are modified. Because several variables are simultaneously at play, solutions become the province of multi-variable optimisation. The mathematical criteria of determining stable points are Pareto efficiency and Pareto dominance, a measure of solution optimality peaks in multivariable systems. 
Carl Bergstrom and Michael Lachmann apply evolutionary game theory to the division of benefits in mutualistic interactions between organisms. Darwinian assumptions about fitness are modeled using replicator dynamics to show that the organism evolving at a slower rate in a mutualistic relationship gains a disproportionately high share of the benefits or payoffs. 
A mathematical model analysing the behaviour of a system needs initially to be as simple as possible, to aid in developing a base understanding of the fundamentals, or "first order effects", pertaining to what is being studied. With this understanding in place it is then appropriate to see if other, more subtle, parameters (second order effects) further impact the primary behaviours or shape additional behaviours in the system. Following Maynard Smith's seminal work in evolutionary game theory, the subject has had a number of very significant extensions which have shed more light on understanding evolutionary dynamics, particularly in the area of altruistic behaviours. Some of these key extensions to evolutionary game theory are:
Spatial games
Geographic factors in evolution include gene flow and horizontal gene transfer. Spatial game models represent geometry by putting contestants in a lattice of cells: contests take place only with immediate neighbours. Winning strategies take over these immediate neighbourhoods and then interact with adjacent neighbourhoods. This model is useful in showing how pockets of co-operators can invade and introduce altruism in the prisoner's dilemma game, where tit for tat (TFT) is a Nash equilibrium but not also an ESS. Spatial structure is sometimes abstracted into a general network of interactions. This is the foundation of evolutionary graph theory.
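A minimal lattice sketch in the spirit of such spatial prisoner's dilemma models; the grid size, temptation value, and deterministic best-neighbour imitation rule below are illustrative assumptions, not a specific published parameterisation:

```python
import random

# Spatial prisoner's dilemma sketch: cooperators ("C") and defectors ("D")
# on a wrapped grid play their eight neighbours, then each cell imitates
# the best-scoring cell in its neighbourhood (including itself).
SIZE = 20
B = 1.8  # temptation payoff to a defector meeting a cooperator (illustrative)
random.seed(0)

def neighbours(i, j):
    """The eight surrounding cells, with wrap-around edges."""
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) != (0, 0):
                yield (i + di) % SIZE, (j + dj) % SIZE

def scores(grid):
    """Total payoff of each cell against its neighbours:
    C vs C pays 1, D vs C pays B, everything else pays 0."""
    s = [[0.0] * SIZE for _ in range(SIZE)]
    for i in range(SIZE):
        for j in range(SIZE):
            for ni, nj in neighbours(i, j):
                if grid[ni][nj] == "C":
                    s[i][j] += 1.0 if grid[i][j] == "C" else B
    return s

def step(grid):
    """Each cell adopts the strategy of its best-scoring neighbourhood cell."""
    s = scores(grid)
    new = [[None] * SIZE for _ in range(SIZE)]
    for i in range(SIZE):
        for j in range(SIZE):
            best = max(list(neighbours(i, j)) + [(i, j)],
                       key=lambda p: s[p[0]][p[1]])
            new[i][j] = grid[best[0]][best[1]]
    return new

grid = [[random.choice("CD") for _ in range(SIZE)] for _ in range(SIZE)]
for _ in range(30):
    grid = step(grid)
coop_fraction = sum(row.count("C") for row in grid) / SIZE**2
```

The point of such models is that clustered cooperators score well against each other along cluster interiors, so pockets of cooperation can persist even though defection wins every pairwise encounter.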
Effects of having information
In evolutionary game theory, as in conventional game theory, the effect of signalling (the acquisition of information) is of critical importance, as in indirect reciprocity in the prisoner's dilemma (where contests between the same paired individuals are not repetitive). This models the reality of most normal social interactions, which are non-kin related. Unless a probability measure of reputation is available in the prisoner's dilemma, only direct reciprocity can be achieved. With this information indirect reciprocity is also supported.
Alternatively, agents might have access to an arbitrary signal that is initially uncorrelated with strategy but becomes correlated because of evolutionary dynamics. This is the green-beard effect (see side-blotched lizards, above) or the evolution of ethnocentrism in humans. Depending on the game, it can allow the evolution of either cooperation or irrational hostility.
From molecular to multicellular level, a signaling game model with information asymmetry between sender and receiver might be appropriate, such as in mate attraction  or evolution of translation machinery from RNA strings. 
Finite populations
Many evolutionary games have been modelled in finite populations to see the effect this may have, for example in the success of mixed strategies.
Review #1: P. Lopez-Garcia, Centre National de la Recherche Scientifique, France
The manuscript has some originality and comments on important aspects of the transition from chemistry to biology. I find the style a bit complex and not very clear, even if the ideas are interesting (though not necessarily novel). It deserves publication, though the literary style could be improved.
I thank the reviewer for her comments, particularly her finding that the manuscript has some originality. Regarding the literary style, I have tried to improve it.
In this manuscript, M. Tessera critically examines various models of the origin of life and, more specifically, asks whether these models involve pre-Darwinian (chemical) evolution prior to the Darwinian evolution characteristic of open far-from-equilibrium systems that are considered alive. The chosen models are classed in three categories: metabolism-first, replicator-first and coupled metabolism-replicator models. Many of the criticisms and concerns highlighted by Tessera have already been raised by previous authors; he revisits them in the context of a transition from chemical to Darwinian (biological) evolution. Overall, the ideas summarized in this critical review are stimulating for research on the chemistry-biology transition. I have, however, some comments: - My major comment is that the definitions of Darwinian and pre-Darwinian evolution are somewhat fuzzy, and this affects whether a system is considered to evolve by Darwinian or pre-Darwinian evolution. The definition of Darwinian evolution is more obvious: it corresponds to encoded (genetically inherited) variation upon which natural selection acts. Pre-Darwinian evolution involves chemical evolution. But does a system where genotype and phenotype are not (yet) coupled (i.e., containing a replicator plus non-encoded components) evolve via pre-Darwinian or Darwinian evolution? For instance, Tessera claims that lipid vesicles produced in the surroundings of alkaline vents and acting as chemical reactors that eventually include replicators display Darwinian evolution from the beginning. However, these initial lipid-vesicle reactors are not encoded (even if there are replicators inside), so in principle they should equally represent a pre-Darwinian to Darwinian evolution transition, as in other models. - I personally find amphiphile-vesicle models involving co-evolving metabolism and replicator systems to be the most practically plausible for the origin of life on Earth.
However, even if these systems would show Darwinian evolution from the beginning (which I questioned above), does this mere fact (Darwinian evolution from the beginning) qualify them as more likely than models where a pre-Darwinian/Darwinian evolution transition is required? If so, why?
As I specify in the “Background” of the manuscript, I prefer to use the expression “pre-Darwinian evolution” instead of “prebiotic evolution” because the concept of life is, in my view, very much debatable and possibly questionable [8,9,10,11,12], while the mechanism of Darwinian evolution can be well defined. Thus I choose not to use words like “alive”, “life”, “living organisms”, “biotic”, “prebiotic” etc. except when the researchers I cite use them. I would have preferred to use the term “level-4 evolution” instead of “Darwinian evolution”, in accordance with the claim that there are four fundamental levels of evolution. Unfortunately, the scientific community does not yet accept this view, so I cannot refer to it. I find the “pre-Darwinian evolution” concept questionable, and consequently I cannot find an accurate definition of it. In my view, “pre-Darwinian evolution” may only be defined in reference to the definition of “Darwinian evolution”. I agree with the reviewer that genotype and phenotype should be coupled. For instance, in my model genotype is represented by the membrane sites and phenotype by the carbon-based molecules catalyzed by the latter, as they may impact the structure and the functions of the membrane. Genotype and phenotype are clearly coupled as they form a hypercycle. When vesicles multiplied, the daughter vesicles inherited both genotype (i.e., membrane sites) and phenotype (i.e., carbon-based molecules), either directly when the hypercycle was transmitted to the daughter vesicles or indirectly when only one element was transmitted but was able to reconstruct the hypercycle. Thus, separate lineages formed on which natural selection might have operated. Once lipid vesicles with multiplication abilities formed, Darwinian evolution would have emerged in one step with the occurrence of specific arrangements of the amphiphiles, among a huge number of combinations, in the inner part of the bilayer membrane.
The only pre-Darwinian processes at work were the formation of lipid vesicles with multiplication abilities and the selection of the most viable. Finally, I think only Darwinian evolution could have led to an evolution in complexity over time. To avoid any ambiguity, I now clearly answer the question of the plausibility of pre-Darwinian evolution. Of course, I confirm in the manuscript that the answer to the question posed in the title is a prerequisite to the understanding of the origin of Darwinian evolution.
Why not include Wächtershäuser’s iron-sulfur world in the comparison? I understand that it has severe problems, notably in the transition to cellularization. Nonetheless, it is one of the most influential models and not less problematic than Russell’s chemical garden. At least some of its chemical predictions were proved. – Chirality.
Wächtershäuser’s iron-sulfur world model is now analyzed in the manuscript.
The fact that many models do not address the issue of chirality does not imply that they cannot accommodate an explanation for chirality. The absence of proof is not the proof of absence. I guess that in many of these models, chirality is simply seen as some kind of consequence of chance: once one particular isomer started being incorporated, the choice was selectively maintained. – This brings me to my last general comment. The role of chance is ignored in this review. Within the realm of biology, Darwinian evolution is not a full synonym of biological evolution because, in addition to selective processes, there is genetic drift. What about pre-genetic drift? This is not trivial because even if one gets experimental evidence for a particular model, this does not mean that historically life originated that way. It would only provide an argumentative basis not to discard a particular model.
Chirality has surely emerged by chance. In my model, chance would have been at work when mutual catalysis emerged for the first time in lipid vesicles with heterogeneous membranes. Even if the emergence of mutual catalysis was allowed by the structure of the vesicle membrane, composed of a mixture of amphiphiles, chance would have played its part: a specific arrangement of amphiphiles appeared among the huge number of possible combinations, able to catalyze the synthesis of a specific carbon-based molecule. This soluble compound had the property of catalyzing the transformation of the local membrane arrangement into a stabilized membrane site. Chance would have operated again when specific small carbon-based molecules led to the synthesis of a bigger molecule with a chiral centre carbon atom. Surely, in the other models chance may have played its part too in making chirality appear, but the researchers should present a rational and plausible explanation. I agree that genetic drift should have played its part; there is no particular reason why it would not have occurred in my model. However, I do not understand what the reviewer means by “pre-genetic drift”.
Review #2: A. Poole, Stockholm University, Sweden
This manuscript has a promising title. Unfortunately, the manuscript seems instead to be more focused on giving a potted critique of the shortcomings of some of the better-known models for the origin of life. The main issues the author highlights are whether the models adequately address questions like chirality or Darwinian evolution. A more extensive review of the various models would be helpful; as presented, this review part is a bit too uneven and needs clearer explanations of the proposed models before launching into critique. The section that discusses the question posed in the title is too brief: it is only a few lines on page 13 (lines 1–25), where the author presents four ‘levels’ of evolution. This is not referenced, but it does note that ‘evolution’ is broader than ‘Darwinian evolution’. However, the question posed in the title is not really discussed in any great depth and, for me, did not expand the existing discussion around this interesting question. The impression I got from reading the article was that the author’s answer to the question he posed was, ‘yes, but it’s not important’.
Thanks to the reviewer, I realized that my view on the plausibility of pre-Darwinian evolution was not clear enough, as shown by the misunderstanding in the reviewer’s impression of my opinion about it. This essay highlights critical aspects of the plausibility of pre-Darwinian evolution. It is based on a critical review of some of the better-known open, far-from-equilibrium system-based scenarios supposed to explain processes that took place before Darwinian evolution had emerged and that resulted in the origin of the first systems capable of Darwinian evolution. Each model was evaluated according to the researchers’ answers to eight crucial questions that should be addressed (Table 1). I tried to summarize the models as faithfully as possible, using the researchers’ own wording as far as possible, but, surely, my reports cannot be fully exhaustive and unbiased. I appreciate the reviewer’s citation of the proposition, made by G. Hoelzer and me, that there are four fundamental levels of evolution. In accordance with this claim, I would have preferred to use the term “level-4 evolution” instead of “Darwinian evolution”. Unfortunately, this view is not yet accepted by the scientific community. I find the concept of “pre-Darwinian evolution” questionable. It is unlikely, if not impossible, that any evolution in complexity over time could have worked without multiplication and heritability; only Darwinian evolution would have led to such an evolution. Incidentally, the only pre-Darwinian processes at work in the model I propose were the formation of lipid vesicles with multiplication abilities and the selection of the most viable. Only afterwards would Darwinian evolution have emerged, by chance and in one step, with the occurrence of specific arrangements of the amphiphiles among a huge number of combinations in the inner part of the bilayer membrane. To avoid any ambiguity, I now clearly answer the question of the plausibility of pre-Darwinian evolution.
Of course, I confirm in the manuscript that the answer to the question posed in the title is a prerequisite to the understanding of the origin of Darwinian evolution.
Review #3: D. Lancet, Weizmann Institute of Science, Israel
As instructed, I am checking whether the original referee comments have been addressed to satisfactory standards. My own comments refer to the revised version (R1). Reviewer 1: Points 1 and 3 are satisfactorily addressed by the author. Point 2 (beginning “In this manuscript, M. Tessera critically examines various models on the origin of life…”) is valid and has not been fully addressed. In the abstract, the author presents the following conclusion: “From this critical review it is (inferred) that the concept of ‘pre-Darwinian evolution’ appears questionable, in particular because it is unlikely if not impossible that any evolution in complexity over time may work without multiplication and heritability. Only Darwinian evolution could have led to such an evolution. Thus, Pre-Darwinian evolution is not plausible according to the author”. How then can the attribute “Pre-Darwinian” appear for any model in the last column of the table, a column entitled “initial evolution”? I read the author’s conclusion above as implying that whatever chemical processes took place prior to the advent of Darwinian evolution cannot be called evolution at all. This is because the author applies the same necessary criteria (multiplication and heritability leading to complexification) to both pre-Darwinian and Darwinian evolution. Thus, if the criteria are not fulfilled, we have neither Darwinian nor pre-Darwinian evolution. But then, to avoid the confusion on which both reviewer 1 and I agree, the title of the paper should be “What chemical processes led to Darwinian evolution?”. Point 4, beginning with “The fact that many models do not address the issue of chirality does not imply that they may not accommodate an explanation for chirality”: I fully agree with this comment and feel that it has not been adequately addressed.
The many models which have the value “not an issue” or “not addressed” in the column entitled “Chirality issue” of the paper’s Table attest to the idea that homochirality should not serve as a yardstick for judging evolution of any kind. This is, in fact, supported by the author’s inference based on a paper of ours (Ref 52): “The C-GARD model would highlight the possibility that chiral selection is a result of, rather than a prerequisite for early life-like processes and thus would not have been an issue”. I strongly suggest that this criterion for evolution be eliminated altogether. Reviewer 2: I agree with this reviewer’s comment: “A more extensive review of the various models would be helpful as presented, this review part is a bit too uneven, and needs clearer explanations of the proposed models before launching into critique”. I wish to support this comment by addressing the example of my own model (GARD), pointing to necessary corrections. The author should please re-check that the descriptions of other models have not been similarly afflicted. Here are the points that need to be corrected in the description of the GARD model: 1) The author’s statement “The model is also based on the view that non-equilibrium self-organizing systems have dynamic properties that exist in a state close to chaotic behavior allowing the emergence of autocatalytic cycles…” is not correct. In the GARD model (as in other similar models) the emergence of mutually catalytic networks (not “autocatalytic cycles”, which is a restrictive term) is afforded merely by the nature of the molecules, i.e. their capacity to exert catalysis on each other, irrespective of chaotic behavior. 2) The author says: “…mutually catalytic sets as an alternative to alphabet-based inheritance”. The GARD model has a form of alphabet-based inheritance.
The crucial difference between GARD and a templating biopolymer model is that the former accumulates and reproduces compositional information (counts of chemical “alphabet” letters), while the latter encompasses sequence information (order of chemical “alphabet” letters). 3) The current text proclaims: “A basic feature in GARD is that non-covalent, micelle-like molecular assemblies capable of growing homeostatically (i.e., buffered enough as to maintain stability) according to the assembly’s constitution store compositional information can be propagated after occasional fission (i.e., assembly splitting)”. This confusing sentence should better read: “A basic feature in GARD is that non-covalent, micelle-like molecular assemblies are capable of growing homeostatically, i.e. catalytically maintaining the assembly’s composition as it grows (dynamic buffering). The maintained compositional information can be propagated to progeny assemblies upon occasional fission”. 4) The author states, based on ref. 47, that “Regarding the evolvability of the system it has recently been demonstrated that replication of compositional information (in GARD) is so inaccurate that fitter compositional genomes cannot be maintained by selection”. This criticism is hotly disputed, as exemplified in one of our papers [PMID: 22662913], and it would be fair to make this statement less unqualified and quote the alternative view. 5) The author says (based on Refs 49–51): “Moreover, there is no reason why ‘compositional information’ should have been transferred to bilayer membrane lipid vesicles when these would have taken over the ‘micelle-like molecular assemblies’.” This statement is based on a misunderstanding, and should be omitted. The GARD model does not invoke a capacity of a small micelle to confer its composition onto a much larger vesicular assembly when fusing with it. 
Rather, it invokes the possibility that homeostatic growth via single molecule accretion could gradually lead to increasingly larger assemblies having a similar composition to that of the original micelle. Returning to Reviewer 2 comments: A sweeping negativity of this reviewer is manifested in the statement “…the question posed in the title is not really discussed in any great depth, and, for me, did not expand the existing discussion around this interesting question”. This, in my opinion, is overstated. I feel there is value in this review, warranting publication in Biology Direct.
I thank the reviewer for his helpful comments. Regarding the GARD model, I modified the sentences where I agreed with the reviewer that they were incorrect (e.g., the basic feature in GARD). When a question is disputed, I still mention it, giving both the arguments, i.e., the criticisms, and the counter-arguments, i.e., the reviewer’s reply (e.g., the questions of evolvability and of the transfer of compositional information). With regard to the chiral question, I cannot agree that homochirality should not serve as a yardstick for judging evolution in complexity over time. That it is not addressed in most models does not mean that the chiral question is not crucial. As the reviewer noticed, I present the results of his simulations tending to support his view that chiral selection is a result of, rather than a prerequisite for, early life-like processes and thus would not have been an issue. However, I also observe that the authors’ assertion supporting the relevance of these simulations seems unrealistic, i.e., that, in the prebiotic environment, there would have been sufficiently complex molecular structures to allow the assumption that all molecules were chiral. Finally, I believe that the new title the reviewer proposes does not suit the aim of the manuscript, i.e., to support the view that pre-Darwinian evolution is not plausible. According to this view, the likely processes that would have taken place and allowed evolution in complexity over time before Darwinian evolution emerged are not satisfactory. In my model, the only pre-Darwinian processes at work are the formation of lipid vesicles with multiplication abilities and the selection of the most viable. These processes are not sufficient to allow an evolution in complexity over time.
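The distinction discussed in this exchange, between compositional inheritance (counts of chemical “alphabet” letters, as in GARD) and sequence inheritance (their order), can be made concrete with a toy sketch. The following is an illustrative simplification, not the GARD model itself; the representation as a `Counter`, the random fission rule, and the cosine-similarity measure are choices made for this example only.

```python
from collections import Counter
import random

def fission(assembly):
    """Split an assembly's molecules randomly into two daughter
    assemblies, each inheriting roughly half of the parent's counts."""
    molecules = list(assembly.elements())
    random.shuffle(molecules)
    half = len(molecules) // 2
    return Counter(molecules[:half]), Counter(molecules[half:])

def similarity(a, b):
    """Cosine similarity between two composition vectors:
    1.0 means identical proportions of each molecule type."""
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    norm_a = sum(v * v for v in a.values()) ** 0.5
    norm_b = sum(v * v for v in b.values()) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

random.seed(0)  # fixed seed for a reproducible illustration
# An assembly carries no sequence, only counts of molecule types.
parent = Counter({'A': 40, 'B': 30, 'C': 20, 'D': 10})
d1, d2 = fission(parent)
# The daughters' compositions stay close to the parent's proportions,
# so counts alone can act as heritable information.
print(similarity(parent, d1), similarity(parent, d2))
```

The point the sketch makes is that splitting a large enough assembly preserves its molecular proportions with high probability, so compositional information can be propagated to progeny without any templated sequence, which is the sense in which GARD is said to have a form of alphabet-based inheritance.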
Review #4: T. Dandekar, Department of Bioinformatics, University of Wuerzburg, Germany
The paper makes a case that purely pre-Darwinian evolution does not exist, looking at different examples of chemical evolution and the different major aspects they cover. This table presents some new comparative results. The conclusion is that even in chemical evolution models there is some hereditary element involved, so, according to the author, some Darwinian evolution. 2. I think it is worthwhile to stress this point; the comparison with the chemical models stresses it appropriately, and that justifies a publication. 3. What could be added would be some more implications: for instance, if any type of evolution always needs some element of heredity, does this (more information being passed to the next generation) then always enhance the speed of evolution? 3b. Does determining whether there is evolution then always boil down to identifying some heritable element (or information storage)? 4. Of course, by choosing a suitable definition you can always be right as an author, but by such a definition of evolution you probably largely ignore the more general and bigger class of self-organizing processes, right? 4b. So my main worry is that by defining terms as you do, you may get rid of pre-Darwinian evolution (as you claim that heredity is always necessary); however, you then become blind to the large, interesting and important class of self-organizing phenomena in physics and chemistry which happen without any gene storage or any other such direct storage.
I agree with the reviewer that it would be less constraining for the purposes of the search for the origin of life if pre-Darwinian evolution were plausible. It would open the search to include self-organizing phenomena in physics and chemistry which happen without any gene storage or any other such direct storage. However, I don’t think it is a question of definition, like, for example, the definition of life. Darwinian evolution is a mechanism. It works because it allows evolution in complexity over time. It is based, amongst other things, on natural selection acting on distinct lineages. Without information storage and the possibility of transfer to the progeny, no lineage may emerge; thus, natural selection cannot operate and allow evolution in complexity over time.