
01 January 2009

Clone wars, or how evolution got a speed limit

The standard simplified narrative of evolutionary adaptation goes something like this. A population of organisms is exposed to a challenge of some kind. Perhaps a new predator has appeared on the scene, or the temperature of the environment has ticked up a degree or two, or the warm little pond is slowly accumulating a toxic chemical. Some of the organisms in the population harbor (or acquire) mutations – so-called beneficial mutations – and these individuals are more successful in the face of the challenge. The population evolves, then, as these beneficial mutations become more common until they are the new status quo. The change is brought about by selection, and the process is called adaptation.

These beneficial mutations, as one might suppose, are quite rare. Most mutations are either harmful to some degree or have little or no effect. Since the good stuff is so hard to come by, it follows that huge populations will be better able to adapt, and will do it faster, because they contain more of the good stuff.

It's a straightforward conclusion, and it's the basis of some recent challenges to evolutionary theory coming from the Intelligent Design movement. But it's mostly wrong. Here's the problem with the simple story.

In a very large population, many beneficial mutations will be present at the same time, in different individuals. When the challenge is presented, these beneficial mutants will compete against each other, and typically one will win. This means that most beneficial mutations – specifically those with small effects – will be erased from the population as it adapts. So, seemingly paradoxically, a very large population doesn't benefit from its bounty of beneficial mutations when it is subjected to an evolutionary challenge. It's as though adaptation has a built-in speed limit in large populations, and the effect has been clearly demonstrated experimentally. It's called clonal interference.
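
To make that "speed limit" concrete, here is a minimal toy simulation of an adapting asexual population. This is my own illustrative sketch, not code from any of the papers discussed below; the population sizes, the mutation rate, and the exponential distribution of fitness effects are all assumptions chosen just to show the effect.

```python
import numpy as np

def evolve(N, Ub=1e-5, generations=1000, s_mean=0.05, rng=None):
    """Toy Wright-Fisher simulation of an adapting asexual population.

    Every beneficial mutation founds a new clone whose fitness is its
    parent's fitness times (1 + s), with s drawn from an exponential
    distribution.  Returns (arisen, common): how many beneficial
    mutations ever appeared, and how many ended up carried by at least
    10% of the population.
    """
    rng = rng or np.random.default_rng(0)
    fitness, counts, carried = [1.0], [N], [frozenset()]
    next_id = 0

    for _ in range(generations):
        # Selection plus drift: resample N individuals, weighting each
        # clone by its current size times its fitness
        w = np.array(counts, dtype=float) * np.array(fitness)
        counts = rng.multinomial(N, w / w.sum()).tolist()

        # Mutation: each individual gains a new beneficial mutation
        # with probability Ub, founding a new one-member clone
        f2, c2, m2 = [], [], []
        for f, n, muts in zip(fitness, counts, carried):
            if n == 0:
                continue  # clone went extinct; drop it
            k = rng.binomial(n, Ub)
            f2.append(f); c2.append(n - k); m2.append(muts)
            for _ in range(k):
                s = rng.exponential(s_mean)
                f2.append(f * (1 + s)); c2.append(1)
                m2.append(muts | {next_id}); next_id += 1
        fitness, counts, carried = f2, c2, m2

    # Tally the final frequency of every mutation that ever arose
    freq = {}
    for n, muts in zip(counts, carried):
        for m in muts:
            freq[m] = freq.get(m, 0) + n
    common = sum(1 for c in freq.values() if c >= 0.1 * N)
    return next_id, common

for N in (10_000, 1_000_000):
    arisen, common = evolve(N)
    print(f"N={N:>9}: {arisen:>6} beneficial mutations arose; "
          f"{common} ended up above 10% frequency")
```

Run it for a small and a large population and you see the pattern described above: the larger population generates far more beneficial mutations but keeps a far smaller fraction of them, because most lineages are crowded out by fitter competitors before they can spread.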

As geneticists examined this phenomenon, it became clear that any attempt to measure beneficial mutation rates would have been influenced, perhaps dramatically, by clonal interference. Such experiments were often done in bacteria, in the huge populations that can be so easily generated in the lab. Analyses in bacteria, published 6 or 7 years ago, had estimated the beneficial mutation rate to be about 10⁻⁸ per organism per generation. (That's 1 per 100 million genomes per generation.) Since the overall mutation rate is estimated to be about 10⁻³ per organism (a few per thousand genomes per generation), it was concluded that beneficial mutations are fantastically rare compared to harmful or irrelevant mutations.

Creationists have long emphasized the rarity of beneficial mutations, for obvious reasons. For their part, geneticists knew that clonal interference was obscuring the true rate, but no one knew just what that rate might be. That changed in the summer of 2007, when a group in Portugal (Lília Perfeito and colleagues) published the results of a study [abstract/full-text DOI] designed to directly address the effect of clonal interference on estimates of the beneficial mutation rate. Their cool bacterial system (based on good old E. coli) enabled them to genetically analyze the results of an evolutionary experiment, using techniques similar to those made famous by Richard Lenski and his colleagues at Michigan State University.

In short, Perfeito et al. took populations of bacteria and allowed them to adapt to a new environment for 1000 generations. Then they looked for evidence of a "selective sweep" in which one particular genetic variant (i.e., mutant) has taken over the population (their system was set up to facilitate the identification of these adaptive phenomena). The same system had been used before to estimate the beneficial mutation rate, and had arrived at the minuscule number I mentioned before.

The Portuguese group introduced one simple novelty: they studied adaptation in the typical large populations, but also in moderately-sized populations, and then compared the results. The difference was profound: the beneficial mutation rate in the smaller populations was 1000-fold greater than that in the very large populations. This means that clonal interference in the large populations led to the loss of 99.9% of the beneficial mutations that arose during experimental evolution. And that means that the actual beneficial mutation rate, at least in bacteria, is 1000 times greater than the typically-cited estimates.
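
As a back-of-the-envelope check (my own arithmetic, using only the round numbers quoted above), the corrected rate and the fraction of beneficial mutations lost to clonal interference fall out directly:

```python
observed_rate = 1e-8   # beneficial mutation rate inferred from very large populations
correction    = 1000   # fold-difference measured by Perfeito et al. in smaller populations
total_rate    = 1e-3   # rough overall mutation rate per genome per generation

true_rate = observed_rate * correction          # ~1e-5 per genome per generation
fraction_lost = 1 - observed_rate / true_rate   # share erased by clonal interference
print(f"corrected beneficial rate ~ {true_rate:g} per genome per generation")
print(f"fraction lost to clonal interference ~ {fraction_lost:.1%}")   # 99.9%
print(f"roughly 1 beneficial mutation per {total_rate / true_rate:.0f} mutations overall")
```

That last ratio (about 1 in 100) is the same ballpark as the authors' own estimate, quoted at the end of this post, that roughly 1 in 150 newly arising mutations is beneficial.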

Perfeito et al. further exploited their system to measure the fitness of all of the mutant clones that they recovered. They found that evolution in very large populations generally resulted in beneficial mutations with larger beneficial effects. This makes sense: the slightly-beneficial clones were eliminated by competition, so at the end of the process of adaptation, we're mostly left with the more-beneficial mutations.

Now some comments.

1. It might seem at first that the large populations are still better off during adaptation, since they do generate beneficial mutations, and selectively retain the more-beneficial ones. But the claim is not that large populations don't adapt; the point is that the vast majority of possible adaptive trajectories are lost due to competition, such that only the trajectories that begin with a relatively large first step are explored. That's a significant limitation, and quite the opposite of the simplistic models of design proponents like Michael Behe and Hugh Ross. Genetic models have shown that the only way for an asexual population to get around the barrier is to do what Michael Behe claims is almost impossible: to generate multiple mutations in the same organism. And recent experimental results show that this does indeed occur.

2. Since the early days of evolutionary genetics, the genetic benefits of sex have been postulated to include the bringing together of beneficial mutations to create more-fit genetic combinations expeditiously. In 2002, an experimental study validated this conjecture, showing that sexual reproduction circumvents the "speed limit" imposed by clonal interference in large populations, and in 2005 another experimental analysis showed that sex speeds up adaptation in yeast but confers no other obvious advantage. Perfeito et al. identified this connection as a major implication of their own work:
...if there is a chance for recombination, clonal interference will be much lower and organisms will adapt faster. [...] Given our results, we anticipate that clonal interference is important in maintaining sexual reproduction in eukaryotes.
(One of the hallmarks of sexual reproduction, besides fun, is recombination – the active shuffling of genetic material that generates offspring with wholly unique mixtures of genes from mom and dad.) In other words, one of the most important benefits of sexual reproduction – and especially of genetic recombination – is negation of the evolutionary drag of clonal interference.

3. All of the examples I've mentioned here are bacterial or viral. If clonal interference arises merely as a result of large population sizes, then it should be an issue for other populations too. And it is: in last month's issue of Nature Genetics, Kao and Sherlock present a tour de force of experimental evolution in a eukaryote, demonstrating the importance of clonal interference and multiple mutations in yeast cells growing asexually. In their study, they identified each beneficial mutation by sequencing the affected gene. Wow.

Why does all of this matter? Well, because it's cool, that's why. And it does mean that our biological enemies have a lot more adaptive resources than we used to think. Here are the closing comments of Perfeito and colleagues:
...our estimate of Ua implies that 1 in 150 newly arising mutations is beneficial and that 1 in 10 fitness-affecting mutations increases the fitness of the individual carrying it. Hence, an enterobacterium has an enormous potential for adaptation and [this] may help explain how antibiotic resistance and virulence evolve so quickly.
But also: keep clonal interference in mind when you encounter any simple story about evolution and genetics. Evolution isn't impossibly difficult to comprehend, but getting it straight requires just a little more effort (and a whole lot more integrity) than has been demonstrated in recent work by those who just can't believe that it could be true.

Article(s) discussed in this post:
L. Perfeito, L. Fernandes, C. Mota and I. Gordo (2007). Adaptive Mutations in Bacteria: High Rate and Small Effects. Science, 317(5839), 813-815. DOI: 10.1126/science.1142284
K.C. Kao and G. Sherlock (2008). Molecular characterization of clonal interference during adaptive evolution in asexual populations of Saccharomyces cerevisiae. Nature Genetics, 40(12), 1499-1504. DOI: 10.1038/ng.280

22 May 2008

Finches, bah! What about Darwin's tomatoes?

Charles Darwin collected all sorts of cool stuff (like a vampire bat, caught while feeding on his horse) on his journey aboard the Beagle, and it has to be said that he understood little of it until after he got back. The finches that bear his name were identified as such by someone else, and his own bird collections from the Galapagos were nearly worthless due to the fact that he hadn't bothered to label specimens as to their place of origin. It was only upon their correct identification as different species of finch that Darwin realized that the birds represented what we now call an adaptive radiation.

Darwin collected a lot of plant material, too, and much of it was completely new to science. J.D. Hooker was a botanist and contemporary of Darwin, and in 1851 he wrote a little paper, "An Enumeration of the Plants of the Galapagos Archipelago; with Descriptions of those which are new," describing his studies of Darwin's collection. It was more than 100 pages long.

One unique feature of the collection was a pair of species of tomato plant. Like all other species in the archipelago, the Galapagean tomatoes resemble South American species, but are subtly different. More interestingly, the two Galapagean species are highly similar to each other (and reproductively compatible), but occupy separate habitats and exhibit some odd variations, including a striking divergence in leaf shape.

Image from Figure 1 of Kimura et al., cited below. On the left is S. cheesmaniae; on the right is S. galapagense.

How might such a variation arise in evolution? A nice study published in Current Biology two weeks ago provides the interesting answer, and addresses an important question raised by evo-devo theorists. The article is "Natural Variation in Leaf Morphology Results from Mutation of a Novel KNOX Gene," by Seisuke Kimura and colleagues at UC Davis.

Look again at the picture: the leaves pictured on the left are "normal" tomato leaves, as one might see in a Michigan garden or on the South American plants thought to be the ancestors of the Galapagean species. The leaves on the right are significantly more complex. (For lovers of botanical detail, the "normal" leaves are unipinnately compound, while the S. galapagense leaves are three- or four-pinnately compound. For the botanically challenged like me, the leaves on the right are more snowflake-like.)

This trait has long been known to be under the control of a single gene, but the nature of that gene and its effects were unknown before the experiments of Kimura et al. They did some pretty intense genetic mapping, and zeroed in on a rather small piece of the genome. Specifically, they ended up examining a region 1749 base pairs in length. Inside that region, they found exactly one change that could account for the leaf variation: a deletion of a single base pair. One DNA letter, removed from the genome, makes all that difference.

But there's more. That change isn't in the coding region of a gene, meaning that the mutation doesn't affect the structure of any protein. Like the genetic variation that Cretekos et al. studied in their analysis of bat wing development, this is an example of a change in a regulatory region of the DNA, the kind of change that evo-devo theorists have predicted to be fairly common in the evolution of new forms.

The authors showed that the teeny little one-letter change results in a huge increase in the amount of a protein called TKD1. And they did a compelling experiment similar to the one that Cretekos and colleagues did with the bat and the mouse: they took that piece of regulatory DNA (with the one-letter change) and stuck it into a tomato plant, and showed that it could induce a complex-leaf trait all by itself. No change in protein structures, just a one-letter change in a regulatory DNA region. Isn't that cool?

Kimura et al. went on to show that TKD1 reduces the formation of a complex between two other proteins, and their data suggest that TKD1 levels act as a rheostat (a dimmer switch) for that complex, which ultimately controls the development of leaf shape.

Now, here's why this result is interesting in the context of evo-devo. A structural mutation in a protein that controls development can result in dramatic changes in form, for sure. But such a mutation will likely alter all of the processes controlled by that protein, resulting in widespread developmental reorganization. (Think "hopeful monster" here.) Evo-devo thinkers assert that regulatory changes are better suited (in general) for the induction of evolutionary changes in form, because such changes can affect isolated developmental processes without affecting the overall development of the organism. In this case, the excess TKD1 protein is able to inhibit the action of a particular complex in particular areas at particular times, without interfering with the functions of those other proteins elsewhere and at other times. Here are the concluding sentences of the paper:
Mutations affecting the expression levels of transcription factors can modify the function of a major developmental regulatory complex in some organs without interfering with its other essential roles in morphogenesis. Such dosage-sensitive interactions may be broadly responsible for evolutionary change and provide a relatively simple mechanism for the generation of natural variation.
I hope you agree that studies like this one and the bat-wing story are inherently interesting. But I hope you also see how sadly foolish it is to disparage evolutionary science as mere mythology, or to pretend to invalidate a century of evolutionary genetic analysis with a few bogus calculations. Scientists are weird enough to think tomato plant leaves on the Galapagos are worth subjecting to detailed genetic analysis, and maybe that means we're a bit on the obsessive side. But come on: we're not stupid.

Article(s) discussed in this post:

Kimura, S., Koenig, D., Kang, J., Yoong, F. and Sinha, N. (2008). Natural Variation in Leaf Morphology Results from Mutation of a Novel KNOX Gene. Current Biology, 18(9), 672-677. DOI: 10.1016/j.cub.2008.04.008

17 May 2008

How the bat got its wing

Nothing can be more hopeless than to attempt to explain this similarity of pattern in members of the same class, by utility or by the doctrine of final causes. The hopelessness of the attempt has been expressly admitted by Owen in his most interesting work on the 'Nature of Limbs.' On the ordinary view of the independent creation of each being, we can only say that so it is;—that it has so pleased the Creator to construct each animal and plant.

The explanation is manifest on the theory of the natural selection of successive slight modifications,—each modification being profitable in some way to the modified form, but often affecting by correlation of growth other parts of the organisation. In changes of this nature, there will be little or no tendency to modify the original pattern, or to transpose parts. The bones of a limb might be shortened and widened to any extent, and become gradually enveloped in thick membrane, so as to serve as a fin; or a webbed foot might have all its bones, or certain bones, lengthened to any extent, and the membrane connecting them increased to any extent, so as to serve as a wing: yet in all this great amount of modification there will be no tendency to alter the framework of bones or the relative connexion of the several parts.

– from On the Origin of Species, 1st Edition (1859), Charles Darwin
The wing of a bat is an amazing thing. It's not just a wing; it's clearly a modified mammalian limb. A bat looks a lot like a rodent with really long, webbed fingers on elongated arms.

Image from Animal Diversity Web at the University of Michigan.

Recent genetic analyses have yielded a fairly solid outline of the evolutionary history of bats, which have left a somewhat poor fossil record in which the earliest fossil bats look pretty much like modern bats. It seems that bats arose relatively quickly during evolution, acquiring their distinctive feature – powered flight – in a few million years. No transitional forms have yet been found, which is a shame, because this particular evolutionary transition is the kind that is otherwise reasonably approachable for the detailed study of how changes in form come about.

The fossils can't yet show us how paws gave rise to wings, but that doesn't mean we can't test specific hypotheses regarding the paths that evolution could have taken. In fact, developmental biologists have enormous resources that can be brought to bear on the question, by virtue of decades of research on the development and genetics of the wingless terrestrial bat better known as the mouse. A few months ago, an interesting new report described one kind of genetic change that can lead to bat-like bodies, and the findings put some new wind in the sails of evo-devo.

Two of the more remarkable aspects of bat wing structure are the forelimbs and the forelimb digits, what humans would call the arms and the fingers. Both are dramatically elongated in the adult animal, despite getting off to a very typical start during early development. Check it out: in the picture below, bat and mouse limbs are compared with the image scaled so that body lengths are comparable.

Image from Figure 1 of Cretekos et al., cited below.

Developmental biologists have some pretty good ideas about how this might arise physiologically: certain growth factors (called bone morphogenetic proteins, or BMPs) are known to control limb growth, and some BMPs seem to be turned up in developing bat fingers. But the genetic mechanisms underlying these processes are unknown.

Enter Chris Cretekos and colleagues, then working in a group in Houston headed by Richard Behringer. They set out to examine the genetic underpinnings of the elongation of the forelimbs (arms) of bats, using the formidable tools of mouse developmental genetics. And, clearly, they also sought to directly test one of the central hypotheses of evo-devo: that changes in regulatory DNA sequences (as opposed to changes within the genes themselves) are a potent source of variation in evolution. Consider the beginning of their abstract:
Natural selection acts on variation within populations, resulting in modified organ morphology, physiology, and ultimately the formation of new species. Although variation in orthologous proteins can contribute to these modifications, differences in DNA sequences regulating gene expression may be a primary source of variation.

– From C.J. Cretekos et al., "Regulatory divergence modifies limb length between mammals," Genes & Development 22:141-151, 15 Jan. 2008
Besides their expertise in mouse genetics, the authors brought two major assets to their study: 1) they had already carefully mapped the development of the short-tailed fruit bat (Carollia perspicillata, "our model Chiropteran"); and 2) they knew a lot about the genetic control of limb length in other mammals. In particular, the protein Prx1 was already known to influence limb elongation by controlling the expression of other genes. So they hypothesized that changes in the activity or level of Prx1 might underlie the difference in limb length between bats and mice, and they were well-equipped to do the experiments.

First, the authors examined the Prx1 gene in the two species, and found that the overall structure of the gene is very similar in both mice and bats, and that the actual coding sequences of the two genes are almost completely identical. (Aligning the coding sequences showed that more than 99% of the amino acids are the same in both species.) In other words, the part of the Prx1 gene that codes for protein is almost certainly not a source of variation between mice and bats. This could mean that Prx1 doesn't have anything to do with the difference in forelimb length between these two species, or it could mean that the difference is generated, at least in part, by variation in the regulation of the gene. Cretekos et al. postulated that altered Prx1 regulation might be involved, and designed a cool experiment to address this possibility.

The Prx1 gene in mice was already known to contain regulatory elements in particular locations within the gene. (Such elements are often located in the DNA sequences that precede the coding region.) When they looked at the bat gene, they found similar elements in the same location, but these elements showed some intriguing variation: when the two regions were aligned, they shared only 67% identity, meaning that a third of the DNA bases were different in mouse and bat. They did some nifty cell biology to show that this region did function as a regulator of the expression of Prx1, then did something that biologists could only dream about before the genomic era: they altered the mouse genome by replacing the mouse regulatory region with the corresponding region from the bat genome. In other words, they gave a mouse a piece of a bat's genome, without actually changing the coding sequence of any gene.

The result was dramatic, although it won't sound that way at first. The mice with the bat DNA displayed forelimbs that were 6% longer than normal. Why is this a dramatic result? Well, first of all, think about a 6% change in a major structural attribute. If adult males in a certain country average 5'10" in height, a 6% increase would mean an increase of more than 4 inches. But more importantly, the Prx1 gene is known to account for about 12% of forelimb length – mice that lack the gene altogether show a 12% reduction in forelimb length. That 6% change therefore amounts to about half of the gene's entire contribution to forelimb length – a huge change in Prx1 activity, and one that was completely due to alterations in regulatory DNA sequences without any change in coding sequence.

If that's not impressive enough, the authors went on to examine the importance of this regulatory region in mice, by deleting it altogether. The result was very surprising, but very interesting: limb length in mice was completely unaffected by the loss of this chunk of regulatory DNA. (The region we're discussing is 1000 bases in length.) This means that the Prx1 gene of both bats and mice contains a regulatory region that is completely dispensable for normal development but that can be altered to generate significant changes in limb length, which points to significant evolutionary potential in genetic regions that seem unimportant. Here's how the authors say it:
Maintenance of redundant enhancers for essential developmental control genes would allow changes in expression pattern to arise from mutations that alter regulatory activity while preserving the required gene function.
So, why is this significant? Here are two aspects of the story that are worth highlighting.

1. The results provide strong (and rare) experimental support for the ideas of the evo-devo school. The currently-heated debate over the merits of evo-devo is focused on the central evo-devo claim that morphological evolution (i.e., evolutionary changes in form) is driven to a large extent by changes in the regulation of gene expression, and less so by changes in the structures of the proteins that are encoded. To simplify, evo-devo postulates that significant evolutionary change – like that discussed here – is more likely a result of the varied use of a protein toolkit than a result of modification of the toolkit itself. Cretekos et al. have presented a case in point, and one that is considered outstanding in that it documents a morphological gain; many previous examples showed only losses.

2. The results provide a sharp picture of what Darwin's vision of "successive slight modifications" means in terms of developmental biology. In this case, the modifications (of a redundant regulatory region) can yield significant anatomical remodeling without altering protein structure at all.

The article was a notable advance for evo-devo and for evolutionary science, but soon there will surely be many others like it. Desperate or ignorant creationists will always find a way to avoid facing the explanatory power of common descent, but scientists are just plugging away, and for every blog post by a creationist ignoramus, there are 30 unheralded publications in the biological literature that advance our understanding of common descent and the mechanisms that generate biological novelty. And they're fun to read.
Article(s) discussed in this post:

  • Cretekos, C.J., et al. (2008) Regulatory divergence modifies limb length between mammals. Genes & Development 22:141-151.

17 February 2008

This is your fetal brain on drugs.

We interrupt this series on "junk DNA" and rampant folk science to bring you a months-overdue Journal Club.

I wonder how many of my readers remember this little tidbit of American genius:



I remember some very funny spoofs, mostly on T-shirts. (Back then, I think the Internet was still a toy for geeks at the NCSA.) "This is your brain. This is your brain on drugs. This is your brain on drugs with a side of bacon. Any questions?"

Marijuana, as I recall, was typically included as one of the frying pans that could turn your central nervous system into a not-very-heart-healthy staple at Denny's. It was – and probably still is – easy to get the impression that smoking pot would hollow out your skull and make you into the inspiration for a character played by Keanu Reeves.

But that's baloney. Long-term marijuana use is certainly not without effects on the brain (duh), but its most abundantly-documented pathological outcome is, well, stupidity. (Mild stupidity. How such an effect is detected in an American population is not so clear to me.) And gosh, if we intend to stamp out stupidity-enhancing behavior through legal action, we'd better send the Marines to Hollywood right now. Seriously, there are few well-established long-term negative effects of using cannabis, and most of those are associated with smoking marijuana and not with the neurological impact of cannabis itself. (Full disclosure: I have never had a joint to my lips, and the closest I've come to inhaling is second-hand at the occasional concert. It would seem that my stupidity has a different cause.)

The rules are different, though, when developing brains are the subject, and it doesn't matter whether the neuroactive substance is legal or not. Maybe pot doesn't mess up a young adult's brain, but that doesn't mean it won't affect a fetal brain. And in fact, some recent studies indicate that we should pay close attention to the possibility that fetal brain development is affected by cannabis. One of those studies, "Hardwiring the Brain: Endocannabinoids Shape Neuronal Connectivity" by Paul Berghuis and colleagues, published in Science last May, suggests that mammalian prenatal brain development is likely to be significantly impacted by cannabis. It's an interesting paper for that reason, and because it deals with two of the subjects of my own research: neuronal growth cones and Rho GTPase signaling. I'll briefly explain those terms later.

The active ingredient in pot is a chemical called Δ⁹-tetrahydrocannabinol, or THC. THC affects the brain by activating receptors on particular types of neurons in the brain, causing these neurons to release less of their neurotransmitters (the normal chemical signals used for communication among neurons). While a serious intelligent design proponent might need to claim that the "purpose" of these receptors is to help people respond to pot (to suppress nausea while on chemotherapy, for example), scientists instead sought and found the chemicals within the brain that normally act on these receptors. These chemicals are called endocannabinoids, signifying that they are cannabis-like but originate from within. (The receptors were actually identified before the endocannabinoids themselves were discovered, but the order of discovery isn't the issue here.)

This means that a first step toward discovering the potential roles of endocannabinoids in brain development is the identification of the parts of the developing brain that display the receptors. If you know where the receptors are, then you know where the chemicals are likely to act. And those are the areas that are likely to be affected by cannabinoids like THC, that come from outside.

Neurons are the brain cells that send and receive electrical signals. A typical neuron has many dendrites (perhaps thousands), which receive signals from other neurons, and one axon, which transmits signals to other cells, often a great distance away.
A typical neuron. Image credit: NIH, NIDA

During brain development, neurons have to develop their magnificent and specific architectures. Beginning as a boring little round ball, a neuron has to sprout and extend dendrites and (typically) a single axon. The axon must somehow migrate to its final position, which may be in a completely different part of the body or right next door.

When Berghuis et al. looked for endocannabinoid receptors in the developing brain, they found them in the cerebral cortex, and specifically they found them in the growing axons of the cerebral cortex. In case you haven't been introduced to the cerebral cortex, it is thought to be responsible for "all forms of conscious experience."

Layers of the developing cerebral cortex of a mouse. The red streaks are developing axons that are displaying endocannabinoid receptors. From Berghuis et al., Figure 1D.

They found the receptors in other developing brain regions, too, and they showed that the endocannabinoids are likely to be produced in those regions at those times. The somewhat surprising result raises the possibility that cannabinoids affect how the brain develops, by affecting how the axons develop.

What might these effects be? The authors found that the receptors were clustered right at the growing tips of these developing axons. This region is called the growth cone, and it's one focus of my own research, because it's obviously the place where the axon is continuously elongating, and it's a place where the skeleton of the cell must always be remodeling.

The growth cone of a mouse neuron. The red indicates structural elements of the growth cone; the green blobs are endocannabinoid receptors, and the yellow smudges indicate where the red and green overlap. From Berghuis et al., Figure 2C.

If endocannabinoid receptors are located right on the growth cone, then they are positioned to influence speed and direction of axon outgrowth. Yikes!

Okay, so endocannabinoids (and, of course, THC from pot smoke) are uniquely positioned to affect growing axons in the brain. But what's the effect? The authors show that one effect is the inhibition of steering mechanisms in the growth cone. In my favorite experiment, they put neurons into an electric field, where the growth cones tend to steer toward the negative pole. When the neurons were treated with an endocannabinoid, they failed to show this preference.

Axon growth in an electric field. Each black tracing represents the behavior of one axon. On the left, notice that untreated axons tend to grow toward the negative pole (left side), and many of those that are growing toward the positive pole are turning away from it. On the far right, notice that axons treated with the endocannabinoid grow in every direction and don't care about the electric field; the center shows how they grow when there's no electric field at all. From Berghuis et al., Figure 3D.

The authors went on to show that this effect seems to result from the activation of a well-known signaling system inside cells, mediated by a protein called RhoA. RhoA is a Rho GTPase, and I'll spare you the details since you've probably read all my papers already. :-) What matters is this: Rho signaling is known to be involved in axon growth, and is generally a negative influence on axon growth. In fact, some attempts to stimulate axon growth in the spinal cord after injury (and paralysis) are focused on the inactivation of RhoA and its partners. So this connection between endocannabinoids and Rho GTPases is further evidence of a specific – and likely negative – influence of cannabinoids on axon outgrowth in the developing brain.

But is there any evidence of a specific effect on brain development, in an animal? The final experiment presented in the paper is a genetic experiment, in which the authors examined the brains of mice in which the endocannabinoid receptor (one in particular) was genetically deleted in certain parts of the brain. And they found that certain neurons in the cerebral cortex of these mutant mice had lost almost half of their inputs, presumably due to the inability of the incoming axons to find their way to the recipient neurons. In other words, when the receptors were deleted from a subpopulation of neurons, those neurons evidently had trouble making their normal connections.

What this means is that to whatever extent the human brain resembles the mouse brain with regard to expression of cannabinoid receptors and their function in growth cones, the developing human brain is potentially vulnerable to damage, or at least alteration, by exposure to THC. And as the authors note, this may partly explain recent findings (in rats) that point to permanent alterations in brain function in pot users – alterations that may predispose these people to much more serious addictions.

I've long been inclined to skepticism regarding anti-pot hysteria, and I strongly support efforts to legalize and legitimize medical use of cannabis. But these data should make us look hard at the potential implications of cannabis exposure during human development.
Article(s) discussed in this post:

  • Berghuis, P., et al. (2007) Hardwiring the Brain: Endocannabinoids Shape Neuronal Connectivity. Science 316:1212-1216.

10 December 2007

Gene duplication: "Not making worse what nature made so clear"

But he that writes of you, if he can tell
That you are you, so dignifies his story,
Let him but copy what in you is writ,
Not making worse what nature made so clear,
And such a counterpart shall fame his wit,
Making his style admired every where.
--Sonnet 84, The Oxford Shakespeare
One of the most common refrains of anti-evolutionists is the claim that evolutionary mechanisms can only degrade what has already come to be. All together now: "No new information!" It's a sad little mantra, an almost religious pronouncement that is made even more annoying by its religious underpinnings, hidden or overt.
But it's a good question: how do new genes come about?

One major source of new genes is gene duplication, which is as conceptually simple as it sounds. It might seem a little odd, and it's not that easy to picture, but the duplication of discrete sections of genetic material is commonplace in genomes. In fact, a significant amount of the genetic variation among individual humans is due to copy number variation, which is variation in the number of copies of particular genes or chunks of genetic material from individual to individual. Genes can be duplicated within a genome via various mechanisms, including the rare but fascinating occurrence of whole-genome duplication. In any case, it is very clear that gene duplication and subsequent evolution explain the existence of thousands of the most interesting genes in animal genomes.

It should be obvious that gene duplication gives you more genes, but perhaps it's not so clear how this can yield something truly new. For many years, new genes were thought to arise after duplication by a process called neofunctionalization. The basic idea is this: consider a gene A, with a set of functions we'll call F1 and F2. Now suppose the gene is duplicated, so that we now have genes A and B, both capable of carrying out F1 and F2. In neofunctionalization, gene B is free to vary and (potentially) acquire new functions, because gene A is still making sure that F1 and F2 are covered. So the duplication has created an opportunity for a little "experimentation." Most of the time, gene B will be mutated into another piece of genomic debris, a pseudogene with no evident function. (The human genome is riddled with pseudogenes, and that's a story all its own.) Occasionally, though, the tinkering will yield a gene with a new evolutionary trajectory. This model makes good sense and surely accounts for numerous genetic innovations during evolution.

But another model has come to the fore in the last several years, in which the two duplicates seem to "divide and conquer." The process is called subfunctionalization, and the idea is straightforward: gene A covers F1, while gene B covers F2. Straightforward perhaps, but this scenario creates some interesting evolutionary opportunities that aren't immediately obvious. Here in this newest Journal Club, I'll look at another example of the experimental analysis of evolutionary principles and hypotheses, summarizing some recent work that examines subfunctionalization in the laboratory.

In the 11 October issue of Nature, Chris Todd Hittinger and Sean B. Carroll examine an actual example of subfunctionalization in an elegant set of experiments that seeks to re-create the evolutionary changes that occurred after a gene duplication. Specifically, they looked at the events that led to the formation of a new pair of functionally-intertwined genes in yeast. The genes are GAL1 and GAL3, and there are several aspects of this story that make it an ideal system in which to experimentally explore the creation of new genes.
  1. GAL1 and GAL3 arose following a whole-genome duplication in an ancestral yeast species about 100 million years ago. The ancestral form of the gene (see Note 1 at the end of this article) is still present in other species of yeast (namely, those that branched off before the duplication event). This means that the authors were able to compare the new genes (meaning GAL1 and GAL3) and their functions to the single ancestral gene and its functions.
  2. The genomes of these yeast species have been completely decoded, so that the authors had ready access to the sequences of the genes of interest and any DNA sequences in the neighborhood.
  3. Decades of research on yeast have yielded superb tools for the manipulation of the yeast genome. Using these resources, the authors were able to create custom-designed yeast strains in which genes of interest were altered to suit experimental purposes. (Those of us who work in mammalian systems can only dream of being able to do this kind of genetic modification with such ease.)
  4. The biochemical functions of GAL1 and GAL3 were already well known.
Hittinger and Carroll capitalized on this excellent set of tools, and added a key component of their own. They needed a way to measure fitness of different strains of yeast, namely strains that had been modified to resemble various ancestral forms. But most typical methods for testing gene function are unsuitable for estimating fitness, which is the relevant issue. The question, in other words, is focused not on the ability of a particular protein to perform a particular function, but on the ability of a particular protein to change the fitness of the organism that expresses it. The authors' solution can only be described as elegant: they assessed fitness of various yeast strains by measuring the outcomes of head-to-head competitions between strains. Their experimental approach, developed by a colleague (see Note 2), employed some very nice genetic tricks and a sophisticated analytical tool called flow cytometry. (Take some time to read about Abbie Smith's research at ERV if you haven't already done so; in her work on HIV, she asks similar questions regarding fitness and uses a very similar approach in seeking answers.)
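
For readers who want a feel for how a head-to-head competition becomes a fitness number, here is a minimal sketch of the standard log-ratio calculation. This is the textbook approach, not Hittinger and Carroll's actual analysis pipeline, and the cell counts are made-up numbers of the sort a flow cytometer might report.

```python
import math

def relative_fitness(ref_counts, test_counts, generations):
    """Estimate the selection coefficient of a test strain competing
    against a reference strain, from counts taken at the start and end
    of a competition lasting a known number of generations.

    Uses the standard log-ratio formula:
        s ~ (1/t) * ln[(test_t / ref_t) / (test_0 / ref_0)]
    """
    r0 = test_counts[0] / ref_counts[0]
    rt = test_counts[1] / ref_counts[1]
    return math.log(rt / r0) / generations

# Made-up counts of fluorescently marked cells at generation 0 and 20
ref  = (50_000, 42_000)   # reference strain
test = (50_000, 61_000)   # engineered "ancestral-like" strain

s = relative_fitness(ref, test, generations=20)
print(f"selection coefficient ~ {s:+.3f} per generation")
```

A positive selection coefficient means the test strain is out-competing the reference; the various engineered strains can then be ranked by how much fitness they gain or sacrifice.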

Why did the authors choose the GAL1-GAL3 system for close scrutiny? The two genes are critical components of a system in yeast that controls the utilization of galactose (a certain sugar) as an energy source. The GAL1 protein is an enzyme that begins the breakdown of galactose; the GAL3 protein controls the induction of the GAL1 protein. When galactose is present, the GAL3 gene is induced, such that GAL3 protein amounts increase a few-fold. The GAL3 protein is in turn a potent inducer of the GAL1 gene: when galactose is present, GAL1 protein levels increase 1000-fold or so. The two proteins are very similar to each other, and both are very similar to the single protein that is found in the genomes of yeasts that never underwent the genome duplication. So this means that the ancestral protein is bifunctional: it must carry out the very different processes of induction and of galactose metabolism. Not surprisingly, situations like this are thought to involve trade-offs that reflect "adaptive conflicts" between the two different functions of the protein. The reasoning is straightforward: mutations that would improve function A might degrade function B, and vice versa. So the protein is not optimized for either function. There is an adaptive conflict between the two functions. The GAL1-GAL3 system clearly involves subfunctionalization following duplication, and because the ancestral gene is available for comparison, the story invites exploration of the notion of adaptive conflict.

Hittinger and Carroll found that there is indeed an adaptive conflict that was resolved by the evolution of GAL1 and GAL3 following the duplication. But the nature of that conflict is not what some might have predicted. Look again at my description of adaptive conflict above. I focused exclusively on the proteins themselves, claiming that the conflict would arise during attempts to optimize two functions in a single protein. But there's another possibility (that need not exclude the first): perhaps the conflict occurs in the regulation of the expression of those proteins. In the case of GAL1 and GAL3, the two different genes can be turned on and off by two different signaling systems. But in the ancestral situation, there's only one gene and therefore fewer opportunities for diversity in the signaling that leads to expression.

The data presented by Hittinger and Carroll suggest that there is not strong adaptive conflict between the two functions of the ancestral protein. If such a conflict existed, we would expect that changes in GAL1 that make it look more like GAL3 (and vice versa) would cause significant decreases in fitness. But that's not what the fitness analysis showed, and the authors inferred that the adaptive conflict must occur in the arena of regulation, and not in the context of actual protein function. The story is complicated, and I'm not convinced that the authors have ruled out adaptive conflict at the level of the structure of the proteins. Nevertheless, their subsequent experiments demonstrate a clear adaptive conflict in the regulation of expression of the different proteins, and an efficient resolution of that conflict in the subfunctionalization of the two genes following duplication. Those results are strengthened by some detailed structural analysis that seems to account for the physical basis of the optimization that occurred during evolution of the GAL1 and GAL3 genes, optimization that occurred in DNA sequences that control the levels of expression of protein.

If you're a little dizzy at this point, relax and let's zoom out to reflect on this article's significance in evolutionary biology, and its relevance for those who are influenced by the claims of anti-evolution commentators.

First, take note that this article is another example of a sophisticated, hypothesis-driven experimental analysis of a central evolutionary concept. Research like this is reported almost daily, though you'd never learn this by reading the work of Reasons To Believe or the fellows of the Discovery Institute. The mis-characterization of evolutionary biology by the creationists of those organizations is a scandal, and as you might already know, my blog's main purpose is to give evangelical Christians an opportunity to explore the science that is being so carefully avoided by those critics. You don't need to understand sign epistasis or the structure of transcription factors to get this take-home message: evolutionary biologists are hard at work solving the problems that some prominent Christian apologists can't or won't even acknowledge. How does gene duplication lead to the formation of genes with new functions? The folks at the Discovery Institute can't even admit that it happens. Over at Reasons To Believe, they don't mention gene duplication at all, despite their fascination with "junk DNA." That's from a ministry that claims to have developed a "testable model" to explain scores of questions regarding origins.

This makes me mad. No matter what you think of the age of the earth or the need for creation miracles, you should be upset by Christians who mangle science to serve apologetic ends.

Second, it's important to note that Hittinger and Carroll's paper is not merely a significant contribution to our understanding of subfunctionalization. It's also a salvo in an apparently intensifying debate within evolutionary biology regarding the kinds of genetic changes that are more likely to drive evolutionary change. Sean Carroll is one of the leading lights in the new field of evolutionary developmental biology, or evo-devo, and one of the tenets of this upstart school is the claim that most of the genetic changes that lead to adaptation -- and especially to changes in form -- occur in regulatory regions of the genome and not in the genes themselves. (More technically: evo-devo advocates like Carroll postulate that changes in form are more likely to arise from mutations in cis-regulatory regions than in protein-coding sequences within genes.) This assertion is hotly contested, as are many of the other basic views of the evo-devo school. The antagonists include some serious evolutionary biologists, Michael Lynch and Jerry Coyne among them. (Lynch is the guy who took the time to explain why Michael Behe's paper on gene duplication was a joke. Coyne co-wrote the book on speciation, literally.)

I'm a developmental biologist, and therefore partial to many of the arguments of evo-devo thinkers. I'm excited about the union of evolutionary and developmental biology, and I do think that many of the new evo-devo ideas are thought-provoking and potentially fruitful. But the debate is riveting and informative, and I find Lynch and Coyne and their talented colleagues to be alarmingly convincing. I'm worried about some of those cool ideas, but I do take some comfort in this thought: any idea that can survive the onslaught of Lynch and Coyne is a hell of a good idea.

It's easy to see how the disputes spawned by the brash (and perhaps rash) evo-devo folks can lead to innovation and discovery, even if many of their proposals are diminished or destroyed in the process. The disagreement is pretty clear-cut, and both sides seem to agree on how to figure out who's right. They'll go to the lab; they'll perform hypothesis-driven experiments; they'll analyze their data; they'll write up their findings; their work will be subjected to peer review. In other words, they'll do real science.
---
Note 1: The ancestral gene itself, of course, isn't available for analysis. The authors are studying the ancestral form of the gene, using a yeast species that never experienced the whole-genome duplication.
Note 2: As Hittinger and Carroll indicate in the acknowledgments, the experimental design was developed by Barry L. Williams, who was a postdoctoral fellow in Carroll's lab and is now on the faculty at Michigan State. And by the way, this little state of Michigan doesn't have much of an economy, but boy are we crawling with gifted evolutionary biologists.

Article(s) discussed in this post:

  • Hittinger, C.T. and Carroll, S.B. (2007) Gene duplication and the adaptive evolution of a classic genetic switch. Nature 449:677-681.

03 November 2007

What happens in my brain when I imagine that people actually read my blog?

Lady Macbeth [to Macbeth]: Great Glamis! worthy Cawdor!
Greater than both, by the all-hail hereafter!
Thy letters have transported me beyond
This ignorant present, and I feel now
The future in the instant.
--Macbeth, Act I, Scene V. (The Oxford Shakespeare)

Obsessions with self-preservation
Faded when I threw my fear away
It's not a thing you can imagine
You either lose your fear
Or spend your life with one foot in the grave
Is God the last romantic?
--"Spark" by Over The Rhine (Drunkard's Prayer, 2005)
Optimism or delusions of grandeur? Bullish or blinkered? Looking on the bright side, or gazing through rose-colored glasses? Am I a romantic, or am I just in denial?

I do consider myself a romantic, and this blog is a testament to a particular form of optimism that I just can't shake off: I'm ever hopeful that people (like me) can learn new things and change their minds. But sometimes I worry: is my optimism (on this subject, and hundreds of others) unreasonable? Or worse...is my optimism unreasonable but also adaptive, a pitiful delusion without which I can't otherwise get by?

[Waits for jeers of skeptics to die down] Actually, being (overly) optimistic is apparently a universally human trait. I may be a romantic, but...I'm not the only one. (Imagine!)

Consider these opening sentences in a research article ("Neural mechanisms mediating optimism bias," Sharot et al., Nature 450:102-105) published in Nature this week:
Humans expect positive events in the future even when there is no evidence to support such expectations. For example, people expect to live longer and be healthier than average, they underestimate their likelihood of getting a divorce, and overestimate their prospects for success on the job market.
Lord, what fools these mortals be! Yes indeed; but how does this happen? The study by Sharot et al. set out to identify mechanisms in the brain that might account for what they call "pervasive optimism bias." First the authors note that this "optimism bias" is considered to be a mark of good mental health, and exhibits apparent adaptive value; excessive pessimism correlates with symptoms of depression, and of course excessive optimism can lead to recklessness. A "normal" dose of optimism, they note, "can motivate adaptive behaviour in the present towards a future goal." Nevertheless, the authors describe this normal (wild-type?) human stance as "a moderate optimistic illusion." Yikes! We're all deluded.

Okay, so how does this work? Previous work has shown that, when imagining the future, people use the same brain systems that they employ when recalling the past, suggesting that the construction of an imagined future involves the rearrangement of pictures and stories from the remembered past. So we might expect to see these systems somehow involved in the expression of optimism.

The authors used functional MRI (fMRI) to look at brain activity while subjects were thinking about events in their lives that centered on a "life episode" like "winning an award" or "the end of a romantic relationship." They correlated the brain imaging with the participants' ratings of their experience of these episodes, which were either past or future events (i.e., recollections or imagined scenarios). And they used a psychological test (the Life Orientation Test-Revised, or LOT-R) to measure "trait optimism" and thereby estimate the relative optimism or pessimism of individual experimental subjects.

The behavioral data alone reveal some interesting things about people and their optimism. Amazingly, future positive episodes were judged to be more positive than past positive events, and were felt to be closer in time than any other experience, past or future. And there's more:
Negative future events were experienced with a weaker subjective sense of pre-experiencing, and were more likely to be imagined from an outsider viewing in, than positive future events and all past events (Fig. 1b). The more optimistic participants were, as indicated by the LOT-R scores, the more likely they were to expect positive events to happen closer in the future than negative events, and to experience them with a greater sense of pre-experiencing (Fig. 1c, d).
So, humans in general seem to think (or feel) that the future looks better than the past, and optimistic people seem to be able to better connect with the positive illusion of the future that they create.

Combining the various techniques enabled the authors to identify some brain regions of interest (ROIs) with regard to optimism. Some of these areas are The Usual Suspects: the rostral anterior cingulate cortex (rACC), the posterior cingulate cortex, and the dorsal medial prefrontal cortex, all areas that were previously implicated in autobiographical memory recall and in the construction of imagined future scenarios. Activation of these regions accompanies optimism, presumably because optimism requires a vision of the future. That's all interesting and informative, but it's not what makes this paper so intriguing. I think the paper's real impact arises from the fact that the imaging analysis implicated a fourth brain area in optimism bias: the amygdala. This region of the limbic system is famously involved in emotional processing, and the authors suggest that the amygdala's role in optimism is to add emotional impact to the imagined future events. They demonstrate "strong functional connectivity" between the amygdala and the rACC during the process of imagining future positive events, and not while imagining negative scenarios. And, importantly, they document a correlation between the strength of activation of the rACC and the overall optimism of the person, as measured by the LOT-R. I find that correlation compelling.
Two aspects of their discussion are worth noting. First, not surprisingly, the authors highlight the relevance of their findings to the understanding of depression. Perhaps depression causes -- or arises from -- malfunctioning of the systems that Sharot et al. have implicated in optimism. Second, the authors make an important distinction between remembering and imagining in the interpretation of their results. Namely, there are two potentially relevant differences between remembering and imagining: the temporal difference (past versus future) and the reality difference (real versus imaginary). The authors speculate that the optimism bias functions when constructing imaginary scenarios, and that the past versus future distinction is only relevant because the past is real and the future is imaginary.

In any case, the article provides another glimpse into the workings of the hunk of meat in our skulls, a messy wet organ that somehow creates memories and imagination, and in the process conjures various carrots, hanging out there in front of us, urging us to ignore our (reasonable) fears and plunge into an unknown future, eyes on an illusion concocted by...functional crosstalk between the amygdala and the rostral anterior cingulate cortex.

That last part didn't sound quite right. But I think that's the way it is. And I think Christians should get used to learning how various aspects of humanness are explainable on the basis of the workings of the brain.

Now I'll imagine a future where my blog article, on the brain systems that fill us with optimism, is being read by scores of people, all picturing their own private versions of the grail beacon.

Article(s) discussed in this post:

  • Sharot, T., Riccardi, A.M., Raio, C.M. and Phelps, E.A. (2007) Neural mechanisms mediating optimism bias. Nature 450:102-105.

24 October 2007

They selected teosinte...and got corn. Excellent!

Evolutionary science is so much bigger, so much deeper, so much more interesting than its opponents (understandably) will admit. It's more complicated than Michael Behe or Bill Dembski let on, and yet it's not that hard to follow, for those who are willing to try. The best papers by evolutionary biologists are endlessly fascinating and scientifically superb, and reading them is stimulating and fun.

Yet, as an experimental developmental biologist reading work in evolutionary biology, I often find myself yearning for what we call "the definitive experiment." Molecular biology, for example, can point to a few definitive experiments -- elegant and often simple -- that provided answers to big questions. Sometimes, while examining an excellent evolutionary explanation, I think, "Wouldn't it be great if they could do the experiment?"

Now of course, plenty of evolutionary biology is experimental, and I've reviewed some very good examples of experimental evolutionary science on this blog. But when it comes to selection and the evolution of new structures and functions, the analysis often seems to beg for an experiment, one that is simple to conceive but, typically, impossible to actually pull off -- there's not enough time. The previous Journal Club looked at one way around this limitation: bring the past back to life. Even better, though, would be to find an example of evolutionary change in which the new and old forms are still living, so that one could do the before-and-after comparison. It would look something like this: take a species, subject it to evolutionary influences of some kind until the descendants look significantly different from the ancestors, then compare the genomes (or developmental processes) of the descendant and the ancestor, in hopes of discovering the types of changes at the genetic or developmental level that gave rise to the differences in appearance or function of the organisms. That would be a cool experiment.

In fact, that kind of experiment has been done, more than once. The best example, in my opinion, involves an organism far less sexy than a dinosaur or a finch or a whale: Zea mays, better known as corn (or maize).

Corn is a grass, but a grass that's been so extensively modified genetically that it's barely recognizable (to non-specialists like me) as a member of that family. Wait...genetically modified? Yes, and I'm not talking about the really modern tricks that gave us Bt corn or Roundup Ready corn. In fact, the wonderful stuff they grow in Iowa is quite different from the plants that humans first started to harvest and domesticate in Central America a few millennia ago. Corn as we know it is the result of a major evolutionary transformation, driven by selection at the hands of humans. (I don't find the natural/artificial selection distinction at all useful, since there's no explanatory difference, but you can refer to the selection under consideration here as 'artificial' if it makes you feel better.) The story has been a major topic in evolutionary genetics for decades, but it's largely absent from popular discussions, probably because the Discovery Institute has wisely avoided it. I hope it will soon be clear why you won't find the word 'teosinte' anywhere at discovery.org.

For many years, the origin of corn was a mystery. Like most known crops, it was domesticated 6000-10,000 years ago. But unlike other crops, its wild ancestor was unknown until relatively recently. Why this odd gap in our knowledge? Well, it turns out that corn is shockingly different -- in form, or morphology -- from its closest wild relative, which is a grass called teosinte, still native to southwestern Mexico. In fact, corn and teosinte are so different in appearance that biologists initially considered teosinte to be more closely related to rice than to corn, and even when evidence began to suggest a genetic and evolutionary relationship, the idea was hard to accept. As John Doebley, University of Wisconsin geneticist and expert on corn genetics and evolution, puts it: "The stunning morphological differences between the ears of maize and teosinte seemed to exclude the possibility that teosinte could be the progenitor of maize." (From his 2004 Annual Review of Genetics article, available on the lab website and cited below.)

But it is now clear that teosinte (Balsas teosinte, to be specific) is the direct ancestor of corn. In addition to archaeological evidence, consider:
  • The chromosomes of corn and teosinte are nearly indistinguishable at very fine levels of structural detail.
  • Analysis using microsatellite DNA (repetitive DNA elements found in most genomes) identified teosinte as the immediate ancestor of corn, and indicated that the divergence occurred 9000 years ago, in agreement with archaeological findings.
  • Most importantly, a cross between corn and teosinte yields healthy, fertile offspring. So, amazingly, despite being so different in appearance that biologists initially considered them unrelated, corn and teosinte are clearly members of the same species.
The basic idea, then, is that corn is a domesticated form of teosinte, exhibiting a strikingly distinct form as a result of selection by human farmers. And that means that we have a perfect opportunity to examine the genetic and developmental changes that underlie these "stunning morphological differences." We can do the experiment.

First, have a look at an example of one of the evolutionary changes in teosinte under human selection.

The small ear of corn on the left is a "primitive" ear; the brown thing on the right is an ear from pure teosinte. (Both are about 5 cm long.) The "primitive" ear is similar to archaeological specimens representing the earliest known corn. Image from John Doebley, "The genetics of maize evolution," Annual Review of Genetics 38:37-59, 2004. Article downloaded from Doebley lab website.

The thing on the far left is a teosinte "ear," the far right is our friend corn, and the middle is what you get in a hybrid between the two. Photo by John Doebley; image from Doebley lab website.

The pattern of branching of the overall plant is also strikingly different between corn and teosinte, and you can read much more on the Doebley lab website and in their publications.

When I first heard about this work at the 2006 Annual Meeting of the Society for Developmental Biology, I was astonished at the amount of basic evolutionary biology that was exposed to experimental analysis in this great ongoing experiment. Here are two key examples of the insights and discoveries generated in recent studies of corn evolution.

1. Does the evolution of new features require new, rare, mutations in major genes?

Perhaps this seems like a stupid question to you. Anti-evolution propagandists are eager to create the impression that evolutionary change only occurs when small numbers of wildly improbable mutations somehow manage to help and not hurt a species. And in fact, experimental biology has produced good examples of just such phenomena. But there is at least one other genetic model that has been put forth to explain the evolution of new forms. This view postulates that many major features exhibited by organisms are "threshold" traits, meaning that they are determined by many converging influences which add together and -- once the level of influence exceeds a threshold -- generate the trait. The model predicts that certain invariant (i.e., never-changing) traits would nevertheless exhibit significant genetic variation, since evolutionary selection is acting on the overall trait and not on the individual genetic influences that are added together. Hence the implication that...
...populations contain substantial cryptic genetic variation, which, if reconfigured, could produce a discrete shift in morphology and thereby a novel phenotype. Thus, evolution would not be dependent on rare mutations, but on standing, albeit cryptic, genetic variation.
--from Nick Lauter and John Doebley, "Genetic Variation for Phenotypically Invariant Traits Detected in Teosinte: Implications for the Evolution of Novel Forms," Genetics 160:333-342, 2002.
In that paper, the authors show that several invariant traits (e.g., number of branches at the flower) in teosinte display significant genetic variation. In other words, the traits are the same in every plant, but the genes that generate the traits vary. The variation is 'cryptic' because it's not apparent in basic genetic crosses. But it's there. The authors ask: "How can cryptic genetic variation such as we have detected in teosinte contribute to the evolution of discrete traits?" Two ways: 1) the variation is available to modify or stabilize the effects of large-effect mutations; and 2) variation in multiple genes can be reconfigured such that it adds up to a new threshold effect. Note that the first scenario is clearly applicable to the kind of evolutionary trajectory outlined by Joe Thornton's group and discussed in a previous post. The second scenario is particularly interesting, however, since it addresses an important question about the role of selection. Consider the authors' discussion of this issue:
At first glance, cryptic variation would seem inaccessible to the force of selection since it has no effect on the phenotype. However, if discrete traits are threshold traits, then one can imagine ... that variation ... could be reconfigured such that an individual or population would rise above the threshold and thereby switch the trajectory of development so that a discrete adult phenotype is produced. We find this an attractive model since evolution would not be constrained to “wait” for new major mutations to arise in populations. (Italics are mine; ellipses denote deletion of technical jargon, with apologies to the authors.)
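To make the threshold-trait logic concrete, here is a minimal simulation sketch. Everything in it is hypothetical -- the loci, effect sizes, allele frequencies, and threshold are mine, not numbers from the Lauter and Doebley paper. The point is only to show how a population can carry abundant genetic variation for a trait that never varies, until that existing variation is reconfigured and the population crosses the threshold without any new mutations:

```python
# A toy "threshold trait": many small additive effects sum to a liability,
# and the discrete trait appears only when liability exceeds a threshold.
# All parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_loci, n_plants = 50, 10_000
effect_per_allele = 1.0   # liability added by each '+' allele
threshold = 60.0          # liability needed to produce the discrete trait

def simulate(allele_freq):
    # Diploid genotypes: 0, 1, or 2 copies of the '+' allele at each locus.
    genotypes = rng.binomial(2, allele_freq, size=(n_plants, n_loci))
    return genotypes.sum(axis=1) * effect_per_allele

# Ancestral population: plenty of genetic variance in liability,
# yet essentially no plant crosses the threshold -- the trait looks invariant.
ancestral = simulate(allele_freq=0.4)
print("ancestral:    var = %5.1f   trait frequency = %.4f"
      % (ancestral.var(), (ancestral > threshold).mean()))

# Same loci, allele frequencies shifted by selection (no new mutations):
# most plants now exceed the threshold and show the "new" discrete trait.
reconfigured = simulate(allele_freq=0.7)
print("reconfigured: var = %5.1f   trait frequency = %.4f"
      % (reconfigured.var(), (reconfigured > threshold).mean()))
```

In the toy model, the trait is essentially absent from the ancestral population even though the underlying liability varies substantially -- that is the sense in which the variation is "cryptic" -- and shifting allele frequencies at the very same loci is all it takes to produce a discrete new phenotype.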
In fact, in a 2004 review article, Doebley is bluntly critical of the assumption that new mutations were required during the evolution of corn, and seems to suggest that this view led researchers significantly astray:
There is an underlying assumption in much of the literature on maize evolution that new mutations were central to the morphological evolution of maize. The word "mutation" is used repeatedly to describe the gene changes involved, and Beadle led an expedition ("mutation hunt") to find these rare alleles. The opposing view, that naturally occurring standing variation in teosinte populations could provide sufficient raw material for maize evolution, was stated clearly for the first time by Iltis in 1983. Although new mutation is likely to have made a contribution, anyone who has worked with teosinte would agree that teosinte populations possess abundant genetic variation. [...] Allowing for cryptic variants and novel phenotypes from new epistatic combinations to arise during domestication, it is easy to imagine that maize was domesticated from teosinte.
--John Doebley, "The genetics of maize evolution." Annual Review of Genetics 38:37-59, 2004.
Compare that discussion, and others like it in the paper I'm quoting, with the yapping about mutations that passes for anti-evolution criticism of evolutionary genetics. I can find no evidence that Michael Behe or any other ID theorist has even attempted to seriously address the importance of genetic variation in populations. I haven't read The Edge of Evolution yet, but I have it right here, and the index suggests that Behe hasn't tried to engage genetics beyond the high school level. There's a good reason why Behe is an object of scorn in evolutionary biology. He wants you to think it's because his critics are mean. No; it's much worse than that.

2. Does evolutionary change ever result from a "gain of information," or does Darwinian evolution merely prune things out?

It would be easy to get the impression from various creationists and ID proponents that mutation and selection can only remove things from a genome. Young-earth creationist commentary on "microevolution" (a yucky term for the now-undeniable fact of genetic change over time) always adds that this kind of change involves NO NEW INFORMATION. (The caps are important, apparently, since caps and/or italics are de rigueur in creationist denialism on this topic.)

Similarly, Michael Behe wants you to think that beneficial (or adaptive) mutations are some kind of near impossibility, and that when they do happen it's almost always because something's been deleted or damaged, with a beneficial outcome.

Studies of evolution in corn and teosinte (and other domesticated plants), not to mention findings like the HIV story on Abbie Smith's now-famous blog, tell a different -- and, of course, more wonderfully interesting -- story. In a minireview on the genetics of crop plant evolution in Science last June, John Doebley notes that most of the mutations that led to major evolutionary innovations occurred in transcription factors, which are proteins that turn other genes on and off. Then this:
Another remarkable feature of this list is that the domesticated alleles of all six genes are functional. If domestication involved the crippling of precisely tuned wild species, one might have expected domestication genes to have null or loss-of-function alleles. Rather, domestication has involved a mix of changes in protein function and gene expression.
In other words, the new genes are not dead or damaged; they're genes that are making proteins with new functions. ('Allele' is just the term for a particular version of a particular gene, and 'null', as you might have guessed, is a version that is utterly functionless, as though the gene were deleted entirely.) Now, if you've even flipped through The Origin of Species, you might not be surprised by Doebley's conclusion:
Given that the cultivated allele of not one of these six domestication genes is a null, a more appropriate model than "crippling" seems to be adaptation to a novel ecological niche -- the cultivated field. Tinkering and not disassembling is the order of the day in domestication as in natural evolution, and Darwin's use of domestication as a proxy for evolution under natural selection was, not surprisingly, right on the mark.
The change from teosinte to corn happened in about a thousand years. That's fast evolution. Apply selection to a varying population, and you get new functions, new proteins, new genes, completely new organisms. Fast.

So in summary, we can do the experiment. And we've done the experiment. ('We' being John Doebley and his many able colleagues.) And we've learned a lot about evolution and development. Now if we can just get people to read it. Then they'll know more about evolution, and about God's world, and about the trustworthiness of the anti-evolution propaganda machines that are exploiting the credulity of evangelical Christians.

15 September 2007

Say cheese! Or, evidence that facial muscles are the puppet-strings of the soul

Souls will come up regularly in this blog, for lots of reasons. For one, disembodied spirits (wandering souls, I presume) are everywhere in Shakespeare, and his very conception of death seems to be the separation of the soul from the body. I can't very well bring up Shakespeare without conjuring ghosts or visions thereof. Such visions are utterly commonplace in Western literature and thought, and Shakespeare certainly didn't cook them up (I recall spirits fluttering out of dead warriors in the Iliad, and that little piece of work was conceived just a few millennia before the Bard). The picture of someone "giving up the ghost" (hilariously pictured in "Who Framed Roger Rabbit?", if you remember that little gem) obviously inspires Romeo:
Now, Tybalt, take the villain back again
That late thou gav’st me; for Mercutio’s soul
Is but a little way above our heads,
Staying for thine to keep him company:
Either thou, or I, or both, must go with him.

--Romeo and Juliet, Act III, Scene I (The Oxford Shakespeare)
We need souls in our poetry, even when our poetry has no soul. Hamlet without souls? No such thing.

And of course, we need souls in Christianity. We're essentially dualists, meaning that we believe in everlasting souls encamped (or entrapped) in mortal bodies. Right?

Well, actually, no. I'm just a biologist, but some of my best friends are philosophers, Christian philosophers, and darn good ones at that. It's a story for another time, but suffice it to say for now that many hard-thinking Christians are advancing a physicalistic (or "materialistic") view of human persons, some while claiming that biblical evidence for belief in immaterial souls is quite thin.

But whether or not you're an agnostic on immaterial souls, you should find the notion of "embodied emotion" interesting, because:
  • It's cool science, and of course you love cool science;
  • You're a human, and humans, it seems to me, are dualistically inclined;
  • Souls are linked to various cognitive phenomena, including emotion;
  • You're reading a blog called Quintessence of Dust, for heaven's sake.
The 18 May 2007 issue of Science features a "Behavioral Science" theme, and includes a brief review of some new applications of theories of embodied cognition to the study of human emotion. The author, Paula M. Niedenthal, contrasts such theories with traditional models of human cognition built around the image of brains (and minds) as computers, and identifies the following assertion as distinctive of theories of embodied cognition:
...that high-level cognitive processes (such as thought and language) use partial reactivations of states in sensory, motor, and affective systems to do their jobs. Put another way, the grounding for knowledge -- what it refers to -- is the original neural state that occurred when the information was initially acquired. If this is true, then using knowledge is a lot like reliving past experience in at least some (and sometimes all) of its sensory, motor, and affective modalities.
The idea, then, is that when you think, you are in some ways reenacting the scenario or the information itself. You are thinking with your whole body, not just with the meat-based computational soul-center in your skull. (As cool as that thing is.) If you are, like I am, a fan of Antonio Damasio and his ideas, then you're already familiar with this type of thinking and theorizing, and with the connection he makes between emotion and consciousness.

So...body is connected with emotion, emotion with cognition...doesn't this mean, then, that your body -- muscles, bones, tendons, mundane animal machinery -- can influence, even control, your cognition? Hello, Professor Descartes? If you just smile, can that make you happy?

Well, consider some of the wild stuff in this article. In one experiment, subjects registered their perception of a projected image by moving a lever as quickly as possible when the image appeared. The participants surely thought the experiment was measuring their reaction time, and they were partially correct. But they probably couldn't have discerned the variable of interest: whether the lever was pulled toward the body or pushed away. Some of the flashed images were emotionally positive, some negative. Subjects who were pushing the lever away responded more quickly to negative images, and vice versa.

Maybe I'm the only one, but that kind of thing really messes with my dualistic impulses. (And I'm not a body-soul dualist.) But there's more. The author describes some of her group's work, in which activity in 4 facial muscles was recorded while subjects were judging the emotional content of certain words. Here's her synopsis of the results:
...individuals embodied the relevant, discrete emotion as indicated by their facial expressions...in the very brief time it took participants to decide that a "slug" was related to an emotion (less than 3 seconds), they expressed disgust on their faces.
The author also describes the elegant control experiment: the subjects looked at the words in print and determined whether they were written in all caps. No such embodiment was detected in the facial muscle recordings.

You might think, "gee, it must take a lot of time to do all that embodying work when making decisions." You'd be right: the author describes experiments that show timing costs associated with switching systems (or modalities):
They are slower to verify that a "bomb" can be "loud" when they have just confirmed that a "lemon" can be "tart" than compared to when, for example, they have just confirmed that "leaves" can be "rustling."
And you might wonder whether we could alter your emotional state by forcing you to embody a particular state. Suppose we force you to smile; will this make you happier? Call me silly, but my initial response to this hypothesis is to scoff. But wait: inspired scientists are testing hypotheses very much like this one.

In the last experiment described by Prof. Niedenthal, each subject was asked to determine whether a sentence described something pleasant or unpleasant, while holding a pen in his or her mouth. Huh? Have a look at Figure 1 (you don't need a subscription to Science): holding a pen with the lips precludes smiling, and even seems to embody the opposite; holding a pen with the teeth forces the lips into a smile. I suppose you know what's coming:
Reading times for understanding sentences describing pleasant events were faster when participants were smiling...sentences that described unpleasant events were understood faster when participants were prevented from smiling.
"Smile and laughter comes thereafter." Pretty corny stuff; it can still make me faintly nauseous (another embodied emotion, clearly). But maybe it's true. And if the eyes are the windows of the soul, what does that make the jaw muscles?