Nonlinear natural selection, particularly stabilizing selection, is often presumed to be widespread in nature. However, it is seldom detected in practice. For instance, the now-famous review by Kingsolver et al. (2001) found that only 16% of estimated nonlinear selection coefficients on single traits (estimates of stabilizing or disruptive selection) were significant, and that correlational selection was estimated in fewer than 10% of studies. Nonlinear selection is nonetheless central to the study of evolution because of its relevance to many interesting questions, such as the evolution of genetic correlations between characters and the evolution of evolvability (e.g., Arnold et al. 2008).
An increasingly popular approach in recent years has been to first estimate the γ-matrix, which contains the coefficients of stabilizing and disruptive selection on its diagonal and the coefficients of correlational selection in its off-diagonal positions, and then to diagonalize γ by solving MγM' = Λ for the matrix of orthonormal eigenvectors (M) and the diagonal matrix of eigenvalues (Λ) of γ. The widely perceived advantage of this approach is increased power: diagonalization identifies (in its first- and/or last-ranked eigenvectors) the dimensions of strongest nonlinear selection; and it allows for more modest multiple-test correction, since the number of coefficients to be tested scales linearly with the number of traits in the analysis (rather than quadratically). Indeed, some studies (e.g., Blows et al. 2003) have found significant nonlinear selection on the canonical axes where none was found on the original traits.
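For readers who would like to see the algebra concretely, here is a minimal sketch in Python/NumPy (my own illustration, with made-up coefficient values; not from any of the papers discussed) of the canonical rotation of a hypothetical γ-matrix:

```python
import numpy as np

# A hypothetical 3x3 gamma-matrix: quadratic (stabilizing/disruptive)
# selection coefficients on the diagonal, correlational selection
# coefficients off-diagonal. Values are invented for illustration.
gamma = np.array([
    [-0.20,  0.05,  0.10],
    [ 0.05, -0.05, -0.15],
    [ 0.10, -0.15,  0.08],
])

# Canonical analysis: diagonalize gamma. eigh returns the eigenvalues
# and the orthonormal eigenvectors (as columns of M) of the symmetric
# matrix, so that M' gamma M is diagonal.
Lam, M = np.linalg.eigh(gamma)

# Check that the rotation recovers a diagonal matrix of eigenvalues
recovered = M.T @ gamma @ M
assert np.allclose(recovered, np.diag(Lam))

print("canonical (eigenvalue) coefficients of nonlinear selection:", Lam)
```

The extreme eigenvalues (the most negative and most positive) correspond to the trait combinations under the strongest apparent stabilizing and disruptive selection, respectively.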
However, a recent paper by Richard Reynolds and colleagues (2010) reveals that some of this increased power may be illusory. In particular, the standard double-regression approach to hypothesis testing of the canonical nonlinear coefficients has a type I error rate that approaches 1.0 under fairly realistic conditions. The lower panel of the figure above, copied from Reynolds et al. (2010), shows the type I error rate for hypothesis tests on the canonical axes in a nonlinear selection analysis of 10 traits - and in this study no selection at all was simulated! The authors also prove analytically that the expected eigenvalues of the estimated γ-matrix for data without nonlinear selection go to zero only as the number of samples used to estimate γ goes to infinity (and sample sizes in empirical studies are, obviously, usually finite. . . unless, of course, you take a really, really long field season).
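To get an intuition for why the extreme canonical eigenvalues are biased away from zero, here is a quick simulation sketch (my own, with arbitrary sample sizes; not the authors' code). Traits and fitness are drawn independently, so there is no selection of any kind, yet the estimated γ-matrix from a standard Lande-Arnold quadratic regression still has clearly nonzero extreme eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 5  # n individuals, p traits; sizes chosen arbitrarily

z = rng.standard_normal((n, p))  # standardized traits
w = rng.standard_normal(n)       # fitness is pure noise: no selection

# Design matrix for the full quadratic regression: intercept,
# linear terms, squared terms, and cross-product terms.
cols = [np.ones(n)] + [z[:, i] for i in range(p)]
quad_idx = []
for i in range(p):
    for j in range(i, p):
        cols.append(z[:, i] * z[:, j])
        quad_idx.append((i, j))
X = np.column_stack(cols)

beta, *_ = np.linalg.lstsq(X, w, rcond=None)

# Assemble the estimated gamma-matrix from the quadratic coefficients,
# doubling the squared-term coefficients (cf. Stinchcombe et al. 2008).
gamma_hat = np.zeros((p, p))
for (i, j), b in zip(quad_idx, beta[1 + p:]):
    gamma_hat[i, j] = gamma_hat[j, i] = 2 * b if i == j else b

eigvals = np.linalg.eigvalsh(gamma_hat)
# Even with no selection at all, the extreme eigenvalues are not zero:
print("smallest, largest estimated eigenvalue:", eigvals[0], eigvals[-1])
```

Because eigenvalue ranking picks out the most extreme estimates, sampling noise alone pushes the first- and last-ranked canonical coefficients away from zero, which is exactly what inflates the type I error of the double-regression test.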
The implications of this result are considerable. In particular, it means that some recently published examples of significant nonlinear selection on canonical trait axes could be type I errors. However, the authors also provide a solution: they find that type I error rates contract to their nominal levels when a permutation-based hypothesis testing approach is used instead. (In a self-serving addendum, I'd also like to note that I independently devised and applied the exact simulation test recommended by the authors in a recently published paper - detailed here in a supplement - even though I must admit I was not at all aware of this problem at the time!)
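For the curious, here is a rough sketch of what such a permutation test might look like. This is an assumed implementation of the general idea, not the authors' exact procedure: shuffle fitness across individuals to break any trait-fitness association, re-estimate γ and its canonical eigenvalues on each shuffle, and compare the observed statistic to that null distribution:

```python
import numpy as np

def estimate_gamma(z, w):
    """Estimate the gamma-matrix by Lande-Arnold quadratic regression."""
    n, p = z.shape
    cols = [np.ones(n)] + [z[:, i] for i in range(p)]
    idx = []
    for i in range(p):
        for j in range(i, p):
            cols.append(z[:, i] * z[:, j])
            idx.append((i, j))
    beta, *_ = np.linalg.lstsq(np.column_stack(cols), w, rcond=None)
    g = np.zeros((p, p))
    for (i, j), b in zip(idx, beta[1 + p:]):
        g[i, j] = g[j, i] = 2 * b if i == j else b  # double squared terms
    return g

def permutation_test(z, w, n_perm=999, seed=0):
    """P-value for the largest-magnitude canonical eigenvalue.

    Permuting fitness across individuals destroys any trait-fitness
    association, so the permuted datasets give the null distribution
    of the statistic under no selection.
    """
    rng = np.random.default_rng(seed)
    obs = np.abs(np.linalg.eigvalsh(estimate_gamma(z, w))).max()
    null = np.empty(n_perm)
    for k in range(n_perm):
        null[k] = np.abs(
            np.linalg.eigvalsh(estimate_gamma(z, rng.permutation(w)))
        ).max()
    return (1 + np.sum(null >= obs)) / (n_perm + 1)

# Example under the null: traits and fitness drawn independently
rng = np.random.default_rng(42)
z = rng.standard_normal((150, 4))
w = rng.standard_normal(150)
print("permutation p-value:", permutation_test(z, w, n_perm=199))
```

The key point is that the null distribution is built from the same eigenvalue-ranking procedure applied to the permuted data, so the "pick the extreme eigenvalue" step no longer inflates the test.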
I think this paper also reflects the fact that methods are never static, and that when new ones are devised they must be tested thoroughly - with both empirical and simulated data. The rise of canonical rotation in the analysis of nonlinear selection had not previously been accompanied by this level of scrutiny. Reynolds et al. (2010) provide not only a definitive critique, but also a suitable way forward.
2 comments:
Liam - thanks for tackling a paper with the most intimidating title I've seen in a long time: "THE DISTRIBUTION AND HYPOTHESIS TESTING OF EIGENVALUES FROM THE CANONICAL ANALYSIS OF THE GAMMA MATRIX OF QUADRATIC AND CORRELATIONAL SELECTION GRADIENTS"
Ditto - nice post. Have you considered submitting this post at ResearchBlogging.org? Seems quite appropriate.