Last summer, Hackett et al. published a widely read study of phylogenetic relationships among major bird lineages based on 19 independent loci sampled from 169 species (see also Tom Near's previous post). Their study confirmed some patterns suggested by previous phylogenetic studies (e.g., ratites + tinamous as sister to remaining bird species) while also recovering some novel patterns (e.g., passerines sister to parrots [albeit with low support]). One of the more interesting results from their analyses, however, was relegated to the online supplement. In this supplement, we learn that all eight of the 10-million-generation partitioned analyses they ran in MrBayes apparently failed to reach stationarity (see figure; note that the first 2 million generations are inexplicably trimmed from each analysis as 'burn-in'). Unpartitioned analyses fared even worse, resulting in immediate crashes "regardless of the memory capacity of the computers used."
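For readers who want to eyeball this sort of thing in their own runs, here is a minimal sketch in Python of plotting the lnL trace from a MrBayes .p parameter file with a burn-in cutoff marked. The file name is hypothetical, and I'm assuming the usual .p layout (an "[ID: ...]" line followed by a tab-separated header); adjust as needed.

```python
import matplotlib.pyplot as plt
import pandas as pd

BURNIN_GEN = 2_000_000  # the cutoff Hackett et al. trimmed; illustrative here

# MrBayes .p files start with an "[ID: ...]" line, then a tab-separated
# header (Gen, LnL, ...); skiprows=1 drops the ID line.
trace = pd.read_csv("run1.p", sep="\t", skiprows=1)

plt.plot(trace["Gen"], trace["LnL"], lw=0.5)
plt.axvline(BURNIN_GEN, color="red", ls="--", label="burn-in cutoff")
plt.xlabel("generation")
plt.ylabel("lnL")
plt.legend()
plt.show()
```

A trace that is still trending upward past the cutoff, as several of Hackett et al.'s runs apparently were, hasn't reached stationarity, and no amount of burn-in trimming fixes that.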
Among the partitioned analyses, some continued to shift to new regions of the likelihood surface until relatively late in the run. Perhaps even more troubling, analyses that did appear to reach a stable plateau settled on significantly different likelihood scores (e.g., lnL of roughly -861,000 vs. -859,500). Is this problem unavoidable in analyses of large datasets?
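To make the "different plateaus" problem concrete, here is a rough sketch, under the same assumed .p file layout as above (and with hypothetical file names), of comparing the post-burn-in lnL of two runs against the within-run spread:

```python
import pandas as pd

def postburnin_lnl(path, burnin_gen=2_000_000):
    """Post-burn-in lnL samples from a MrBayes .p file (format assumed as above)."""
    trace = pd.read_csv(path, sep="\t", skiprows=1)
    return trace.loc[trace["Gen"] > burnin_gen, "LnL"]

run1 = postburnin_lnl("run1.p")  # hypothetical file names
run2 = postburnin_lnl("run2.p")

gap = abs(run1.mean() - run2.mean())
spread = max(run1.std(), run2.std())
print(f"mean lnL: {run1.mean():.1f} vs. {run2.mean():.1f}")

# A crude rule of thumb: if the between-run gap dwarfs the within-run
# wobble, the runs are sampling different plateaus. A ~1,500-unit gap
# like the one above would fail any version of this check.
if gap > 3 * spread:
    print("runs appear stuck on different plateaus -- not converged")
```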
The most obvious solution would be simply to run the analyses for more than 10 million generations; I've certainly had analyses that required more than 10 million generations to reach stationarity. Perhaps this wasn't done because it took two months on a supercomputer to run the 10-million-generation analyses (anybody know whether Hackett et al. or others have implemented longer runs since their paper was published?). Another possible solution is to modify the parameters of the MC3 analyses implemented by MrBayes (recall that the MrBayes default is to run two independent MC3 analyses, each with one cold chain and three heated chains). Hackett et al. explored this possibility by running six analyses with one heated chain and one cold chain (B1-B6) and two analyses with six heated chains and one cold chain (A1-A2). The analyses with multiple heated chains performed significantly better than those with a single heated chain, perhaps because MrBayes heats its chains incrementally (so the fourth of six heated chains samples a flatter likelihood surface than the first; see the sketch below). Hackett et al. do not report the temperatures used for their heated chains, but their results suggest that running multiple heated chains in a single analysis is superior to repeatedly running analyses with only one heated chain.
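For readers unfamiliar with how incremental heating flattens the surface, here is a toy sketch assuming the standard MrBayes scheme, in which chain i samples the posterior raised to the power beta_i = 1/(1 + i*lambda), with lambda the 'temp' setting (0.1 by default). The numbers are purely illustrative, not Hackett et al.'s actual settings.

```python
import math

def chain_betas(nchains, temp=0.1):
    """Power applied to the posterior by each chain (chain 0 is the cold chain)."""
    return [1.0 / (1.0 + i * temp) for i in range(nchains)]

# Seven chains (one cold + six heated, as in the A1-A2 runs):
betas = chain_betas(7)
print([round(b, 3) for b in betas])  # [1.0, 0.909, 0.833, ..., 0.625]

# Flattening in action: a move that drops the log-posterior by 10 units is
# accepted with probability exp(-10) on the cold chain but exp(-10 * beta)
# on a heated chain, so hotter chains cross valleys far more readily.
for i, beta in enumerate(betas):
    print(f"chain {i}: beta = {beta:.3f}, "
          f"P(accept 10-unit downhill move) = {math.exp(-10 * beta):.2e}")
```

With six heated chains, the hottest samples the posterior raised to only 0.625, which is why adding heated chains gives the cold chain more escape routes from local optima than rerunning a single-heated-chain analysis over and over.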
In any case, Hackett et al.'s ultimate solution was to discard all of their Bayesian analyses and rely instead on parsimony and the fast maximum likelihood methods implemented in GARLI and RAxML. Is this shift away from Bayesian inference in favor of fast maximum likelihood searches, for computational reasons, a sign of things to come (or has the shift already occurred)? Are the fast maximum likelihood methods ready for prime time, or do people remain uncomfortable with the shortcuts they use to achieve their apparent computational efficiency?