15 October 2007

How to evolve a new protein in (about) 8 easy steps

If you have only read the more superficial descriptions of intelligent design theory, and specifically the descriptions of irreducible complexity, you might (reasonably) conclude that Michael Behe and other devotees of ID have claimed that any precise interaction between two biological components (two parts of a flagellum, two enzymes in the blood clotting cascade, or a hormone and its receptor) cannot arise through standard Darwinian evolution. (If you don't know anything about the term 'irreducible complexity' you should probably read a little about it before proceeding.) In other words, you may be under the impression that Behe doesn't think that such a system could arise through a stepwise process of mutation and selection. You may even be under the impression that Behe has demonstrated the near impossibility of such a system coming to be through naturalistic means.

This article was UPDATED on 1 November 2007, incorporating some corrections and clarifications provided by the senior author of the studies described. In other words, this post was peer reviewed, and this is the final version.

You would be mistaken, albeit (in my opinion) understandably so. Behe has not claimed this -- though he's often come pretty close -- and recently he has made it clear that this is not his position. Unfortunately, many of the critiques of irreducible complexity contain significant errors, including the claim that Behe rejects all stepwise accounts of molecular evolution, and you have to look pretty hard to find well-reasoned examinations of the problems with Behe's interesting but fruitless challenge to evolutionary theory.

My purpose in the preamble above is to make it clear that this Journal Club is not intended to refute Behe's claims regarding the ability of Darwinian mechanisms to generate irreducibly complex structures. (In my view, his claims are wholly mistaken, and Christian enthusiasm for his natural theology is a disastrous mistake. But that's for another time.) Rather, it is to discuss a superb recent example of the kind of experimental molecular analysis of evolution that can be done in this postgenomic era. Experiments like this are revealing how evolutionary adaptation actually comes about at the molecular level, thereby addressing the very questions raised by ID thinkers. ID apologists are, in a sense, wise to attack the work described here, because these experiments are the first fruits of the types of analysis that will usher ID into permanent scientific ignominy.

So, to our two papers.

How, exactly, does a protein acquire a new function during evolution? This is one of those "big questions" in evolutionary biology. Broad concepts such as gene duplication are quite helpful in formulating explanations, but the specific question raised is focused on the details -- the actual steps -- that must occur during the step-by-step modification of a protein such that it performs a different job than the proteins from which it has descended. The constraints on the process of change are significant, and the issues are similar to those I discussed when describing the concept of fitness landscapes in morphospace. The problem, basically, is this: how can you change a protein without wrecking it in the process? In other words, can you get from function A to function B, step by step, without passing through an intermediate form, call it protein C, which is worthless (or even harmful)?

These are precisely the questions addressed in an elegant set of experiments described in two papers published over the last year or so. The more recent, by Ortlund et al., appeared in the 14 September issue of Science and built on work reported in Science in April 2006. Their studies focused on two closely-related proteins that are receptors for steroid hormones. In this case, the steroids of interest are corticosteroids (the kind often used to treat inflammation; Ortlund et al. studied receptors for cortisol, which is of course quite similar to cortisone) and a mineralocorticoid (a less well-known hormone, aldosterone, that regulates fluid and salt balance). The hormones are structurally similar (being steroids).

Joseph Thornton, at the University of Oregon, has been studying the origins of these receptors for about 10 years, and has assembled an interesting (and detailed) account of their history. The basic outline is as follows: the original steroid receptor was an estrogen receptor, and is extremely ancient, apparently arising "before the origin of bilaterally symmetric animals" (Thornton et al., Science 2003). (That's seriously ancient, sometime in the Cambrian or earlier.) The progesterone receptor seems to have arisen next, followed by the androgen (i.e., testosterone) receptor. (Now that's intriguing.) Fairly late in this game, the two receptors of interest to us here, the corticosteroid receptor and the mineralocorticoid receptor, were added to the vertebrate repertoire. The two modern receptors are thought to descend from an ancestral corticosteroid receptor, which underwent a gene duplication. Hereafter, I'll refer to the receptors as the corticosteroid receptor and the aldosterone receptor, hoping that all the jargon won't obscure the message.

In a widely-discussed paper published in Science a year ago (Bridgham et al., Science 2006), Thornton's group determined the most likely DNA sequence of this ancestral gene, then "resurrected" it, meaning simply that they created that very DNA sequence in the lab. (Determining the ancestral sequence was a nifty piece of work; actually making the DNA is quite straightforward, especially if you have a little dough.)
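Determining the ancestral sequence is, at heart, an inference problem over the receptor family tree. Thornton's group used maximum-likelihood methods on real receptor genes; as a toy illustration of the underlying logic only, here is a minimal Fitch-parsimony sketch, where the tree and the six-base sequences are entirely invented:

```python
# Toy illustration of ancestral sequence inference using Fitch parsimony.
# (Thornton's group used maximum-likelihood methods on real receptor genes;
# the tree and six-base sequences below are invented for illustration.)

def fitch(node):
    """Return, for each site, the set of equally parsimonious ancestral bases.

    A node is either a leaf (a sequence string) or a (left, right) tuple.
    """
    if isinstance(node, str):                      # leaf: an observed sequence
        return [{base} for base in node]
    left, right = fitch(node[0]), fitch(node[1])
    # Intersection where the children agree at a site, union where they don't
    return [l & r if l & r else l | r for l, r in zip(left, right)]

# Four hypothetical aligned sequences from living species
tree = (("ACGTTA", "ACGTTG"), ("ACGATG", "ACGCTG"))
ancestor = fitch(tree)

# Print the inferred ancestor, with '?' at sites the data leave ambiguous
print("".join(next(iter(s)) if len(s) == 1 else "?" for s in ancestor))
```

Once you have an inferred sequence, "resurrecting" it really is just a matter of synthesizing that DNA, as the authors did.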

Their experiments showed that the ancestral receptor could bind a hormone that didn't exist yet (aldosterone) while it was functioning as a receptor for corticosteroids. In other words, the receptor was available for activation by aldosterone long before aldosterone was around. (All jawed vertebrates make corticosteroids, but only tetrapods make and use aldosterone, an innovation that arose at least 50 million years later.) The modern corticosteroid receptor has since lost its ability to interact with aldosterone, and Bridgham et al. chart the most likely evolutionary path, at the molecular level, by which we and other tetrapods came to have a corticosteroid receptor that won't bind aldosterone. The surprise, then, is that first observation: the ancient receptor could bind aldosterone millions of years before aldosterone itself appeared.

The 2006 paper is, I think, more notable as an illustration of an important evolutionary principle ("molecular exploitation" is the authors' term) than as a set of observations; Michael Behe's trashing of the group's work is disgusting, but it's true that the findings are limited in scope. It's worth having a look at the whole paper, though (and I believe it's available with free registration), because the authors very clearly explain the rationale for their continuing work, which is to begin to address one of the major "gaps in evolutionary knowledge": the mechanisms underlying stepwise evolution of "complex systems that depend on specific interactions among the parts."

If you're well-read on ID thought, that last sentence should sound pretty familiar. So let's note that prominent papers in science's premier journals are acknowledging that the evolutionary mechanisms that generate complex structures -- including "irreducibly complex" systems -- are as yet poorly understood. And let's give ID credit for asking a good question. (Not a new one...but a good one.)

The 2006 paper did not, as advertised, utterly destroy ID arguments, and again Behe is right to criticize the near-hysteria surrounding that work. But I find Behe's bravado otherwise unconvincing. Because that paper did set up the most recent work, and the whole story illustrates rather clearly how ID's question will (soon) be answered.

The most recent paper adds significantly to the picture, and introduces some genetic concepts that Behe's fans should pray he understands. The authors (Ortlund et al.) took their analysis to a far more detailed level, by extending their previous observations to include much more of the receptor family tree. In the 2006 work, they had assembled a detailed family tree for the receptors, by looking at DNA sequences from living species known to represent various branches on the tree of life. In other words, they chose organisms such as lampreys, bony fish, amphibians and mammals, and examined their DNA codes (for the receptors) to find the changes that occurred in each branch of the lineage.

Now, please stop and think about this, because it's really cool. What the authors did was mine existing databases of DNA sequence data, pulling out the sequences of the steroid receptors from 29 different vertebrate species. You could repeat this part of the experiment right now, by referring to their list of organisms in Supplemental Table S5, which provides the ID codes needed to locate the DNA sequences in the Entrez Gene database. Then they charted the changes in the DNA sequence in the context of the tree of life as sketched out in the fossil record. The tree they assembled includes all the steroid receptors, and I've annotated it a little if you want to have a look. They used this tree to guide their further experiments, as I'll explain below.

What the most recent paper added to the story was an analysis of the 3-D structure of the various postulated intermediates in the evolutionary pathway. The authors accomplished this by making proteins from the "resurrected" genes, then crystallizing them and using X-ray diffraction techniques to determine their precise structures.
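Charting the changes along a branch boils down to comparing aligned sequences. This isn't the group's actual pipeline, and the sequences are invented, but a minimal sketch of that comparison step looks like this, producing labels in the ancestral-state/position/derived-state style the papers use (e.g., S106P for amino acid changes):

```python
# Minimal sketch of the "chart the changes" step: compare two aligned
# sequences (say, a reconstructed ancestor and a modern receptor) and list
# the substitutions along that branch. The sequences are invented; real
# receptor genes are of course far longer.

def substitutions(ancestral, derived):
    """Return substitutions as 'A4G'-style labels (1-based positions)."""
    if len(ancestral) != len(derived):
        raise ValueError("sequences must be aligned to the same length")
    return [f"{a}{pos}{d}"
            for pos, (a, d) in enumerate(zip(ancestral, derived), start=1)
            if a != d]

print(substitutions("ACGTTA", "ACGATG"))   # prints ['T4A', 'A6G']
```

Run that comparison along every branch of the tree and you have the raw material for the analysis that follows.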

Examination of their receptor family tree revealed something interesting. Most vertebrates have highly specific receptors: the corticosteroid receptor isn't strongly stimulated by aldosterone, and vice versa. But some living vertebrates (skates, in particular) show a different pattern: the corticosteroid receptor isn't all that specific for cortisol. Because the ancestral receptor also lacked specificity (as shown in the 2006 paper), the authors concluded that the receptor acquired its discriminating taste at some point between the branching-off of skates (and their kin) and the separation of fish from tetrapods. Their Figure 1 is a little crowded, but it illustrates this nicely:


To follow the evolutionary narrative in this graph, start at the blue circle, which represents the ancestral receptor that was "resurrected" in the 2006 paper and that happily binds to both corticosteroids and aldosterone. (The graphs on the right side of the figure demonstrate the specificity, or lack thereof, of the receptors at different times in history.) There's a branch leading up and to the left, to the various GRs (corticosteroid receptors), and one leading up and to the right, to the MRs (aldosterone receptors). At the green circle, another branching event occurred, 440 million years ago, at which point certain groups of fishes (skates among them) branched off, up and to the right. The receptor at that point is an ancestral corticosteroid receptor, and it still isn't specific for corticosteroids. But the receptor at the yellow circle, in the common ancestor of tetrapods and bony fishes, is specific. The authors conclude that specificity arose between those two points, between 420 and 440 million years ago. With some (deliberate?) irony, they indicate that process with a black box.

The rest of the paper explores the pathway by which the receptor might have been successively altered so as to install specificity for cortisol. During those 20 million years of evolution, at least 36 different changes were introduced in the makeup of the receptors. By looking at the 3-D structures of the ancestral forms, the authors were able to discern the specific functional ramifications of these various changes, and they found that the alterations fell into three groups:
  • Group 'X' alterations included the changes reported in the 2006 article. These are the biggies, accounting for much of the functional 'switch' between GRs and MRs. But these alterations don't account for the specificity change that occurred inside the black box in Figure 1.
  • Group 'Y' alterations are all strongly conserved (meaning that they have been retained ever since they arose), and occurred during the black box time period. Moreover, this group of changes is always seen together: modern receptors have all of these alterations, while ancestral receptors have none of them.
  • Group 'Z' alterations are also conserved changes, but they don't always occur together like group 'Y'.
The authors set about the work of examining the function of "resurrected" receptors bearing these groups of changes. When they introduced group 'X' changes into the ancestral receptor, they got a receptor that was almost modern (i.e., specifically tuned to cortisol) but not quite; this was what the previous work had indicated. Then they hypothesized that the group 'Y' changes, because they were so highly conserved and because they all occurred together, would make the transition complete. But no: instead, the group 'Y' alterations made the receptor worthless, unable to bind any hormone at all. Surprise! Looking at their 3-D structures, they figured out what this meant. The group 'Y' changes were somehow important, but they could only have a beneficial influence in the presence of another set of alterations, group 'Z', which had to occur in advance. The biophysical details don't concern us, but the basic idea is that the group 'Z' changes created a permissive environment for the group 'Y' changes, which are the alterations that complete the development of the modern specific form of the receptor for cortisol.

In genetics, we have a word for this type of interaction between genetic influences: epistasis. The fascinating history of steroid receptor evolution includes examples of what the authors call "conformational epistasis," meaning that some alterations in 3-D structure are required in advance for other alterations to ever get off the ground. Specifically, some alterations are evolutionary dead ends, because they yield worthless proteins, unless those alterations follow another set of changes that generated a different -- and more fruitful -- environment.

The authors then construct a map of what they call "restricted evolutionary paths through sequence space," showing how you can get there from here, without traversing an evolutionary no-man's-land of non-function. The path includes changes that don't apparently improve the receptor, but that yielded the right environment for the changes that did improve function. Their map is in Figure 3:


The idea is that you want to get from the lower left corner of the cube (the ancestral receptor) to the upper right corner (the modern receptor) without hitting a stop sign (a worthless receptor). The green arrows indicate a change in function of some kind, the white arrows no change. Yes, you can get there from here.
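To make the cube concrete, here is a little sketch that enumerates the possible orderings of the three mutation groups and discards any path passing through a non-functional intermediate. The genotype-to-function table is invented to match the result described above (group 'Y' is lethal unless group 'Z' is already in place), not taken from the paper's data:

```python
# Sketch of the Figure 3 idea: enumerate orderings of the mutation groups
# and keep only paths whose every intermediate is a functional receptor.
# The FUNCTIONAL table is invented to mirror the result described in the
# text, not the paper's actual measurements.

from itertools import permutations

FUNCTIONAL = {
    frozenset():      True,   # ancestral receptor: binds both hormones
    frozenset("X"):   True,   # almost modern, not yet fully specific
    frozenset("Z"):   True,   # permissive, little effect on its own
    frozenset("Y"):   False,  # dead: Y without Z binds nothing
    frozenset("XZ"):  True,
    frozenset("XY"):  False,  # still no Z, still dead
    frozenset("YZ"):  True,   # Z first makes Y tolerable
    frozenset("XYZ"): True,   # modern cortisol-specific receptor
}

def viable(order):
    """A path is viable if every intermediate genotype is functional."""
    genotype = set()
    for group in order:
        genotype.add(group)
        if not FUNCTIONAL[frozenset(genotype)]:
            return False
    return True

paths = [order for order in permutations("XYZ") if viable(order)]
print(paths)   # every surviving path places 'Z' before 'Y'
```

Of the six conceivable orderings, only those with 'Z' ahead of 'Y' survive: the "stop signs" prune the map down to a few permitted ridges.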

The authors note that their data "shed light on long-standing issues in evolutionary genetics," firstly the question of whether adaptation proceeds through "large-effect" changes (mutations), or through baby steps. Their conclusion:
Our findings are consistent with a model of adaptation in which large-effect mutations move a protein from one sequence optimum to the region of a different function, which smaller-effect substitutions then fine-tune; permissive substitutions of small intermediate effect, however, precede this process.
They note that the large-effect changes are inherently easier to identify (of course), and that the painstaking work of "resurrecting" the ancestral proteins and studying their function is the only way to identify the critical small-effect alterations that made the "big jump" work.

The authors also comment on the big "contingency" debate. I'll write more on the whole "rewinding the tape of life" question some other time; for now, we'll just consider the authors' words:
A second contentious issue is whether epistasis makes evolutionary histories contingent on chance events. We found several examples of strong epistasis, where substitutions that have very weak effects in isolation are required for the protein to tolerate subsequent mutations that yield a new function. Such permissive mutations create “ridges” connecting functional sequence combinations and narrow the range of selectively accessible pathways, making evolution more predictable.
If you have read my summary of the wormholes in morphospace story, this metaphor of "ridges" should make a little sense. The authors here are describing the same concept: an evolutionary exploration of a design space, with paths meandering through a map of the possibilities. But:
Whether a ridge is followed, however, may not be a deterministic outcome. If there are few potentially permissive substitutions and these are nearly neutral, then whether they will occur is largely a matter of chance. If the historical “tape of life” could be played again, the required permissive changes might not happen, and a ridge leading to a new function could become an evolutionary road not taken.
The history of the steroid hormone receptor, then, appears to include several different aspects of evolutionary biology combined: "chance" creating opportunity, leading (via epistasis) to selection for improvement, all done step by step, with some steps generating more apparently dramatic change than others.

Amazingly, Michael Behe is pretending that this analysis is utterly unimportant, with no implications at all for ID proposals, because the receptor-hormone system isn't "irreducibly complex." Some critics of ID claim that the goalposts are being regularly moved, and I'm inclined to agree. But let's just grant Behe the difference between protein-hormone interactions and protein-protein interactions. Does anyone really believe that Joseph Thornton's work doesn't show us exactly how the "irreducible complexity" challenge is going to fare in the near future?
