Monday, October 31, 2011

The China Study II: Gender, mortality, and the mysterious factor X

WarpPLS and HealthCorrelator for Excel were used to do the analyses below. For other China Study analyses, many using WarpPLS as well as HealthCorrelator for Excel, click here. For the dataset used, visit the HealthCorrelator for Excel site and check under the sample datasets area. As always, I thank Dr. T. Colin Campbell and his collaborators for making the data publicly available for independent analyses.

In my previous post I mentioned some odd results that led me to additional analyses. Below is a screen snapshot summarizing one such analysis, of the ordered associations between mortality in the 35-69 and 70-79 age ranges and all of the other variables in the dataset. As I said before, this is a subset of the China Study II dataset, which does not include all of the variables for which data was collected. The associations shown below were generated by HealthCorrelator for Excel.


The top associations are positive and with mortality in the other range (the “M006 …” and “M005 …” variables). This is to be expected if ecological fallacy is not a big problem in terms of conclusions drawn from this dataset. In other words, the same things cause mortality to go up in the two age ranges, uniformly across counties. This is reassuring from a quantitative analysis perspective.
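As an aside, for those who want to reproduce this kind of ordered association listing outside HealthCorrelator, a minimal pandas sketch would look something like the one below; the file name is a hypothetical placeholder, and the column label follows the "Mor35_69" convention used later in this series.

    # Rank every variable by the strength of its association with one
    # mortality variable; this mimics the ordered listing in the snapshot.
    import pandas as pd

    df = pd.read_csv("china_study_ii.csv")   # hypothetical file name
    corrs = df.corr(numeric_only=True)["Mor35_69"].drop("Mor35_69")
    print(corrs.reindex(corrs.abs().sort_values(ascending=False).index))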

The second highest association in both age ranges is with the variable “SexM1F2”. This is a “dummy” variable coded as 1 for male sex and 2 for female, which I added to the dataset myself – it did not exist in the original dataset. The association in both age ranges is negative, meaning that being female is protective. These associations reflect in part the effect of gender on mortality, and more specifically the biological aspects of being female, since previous analyses have shown that being female is generally health-protective.

I was able to add a gender-related variable to the model because the data was originally provided for each county separately for males and females, as well as through “totals” that were calculated by aggregating data from both males and females. So I essentially de-aggregated the data by using data from males and females separately, in which case the totals were not used (otherwise I would have artificially reduced the variance in all variables, also possibly adding uniformity where it did not belong). Using data from males and females separately is the reverse of the aggregation process that can lead to ecological fallacy problems.
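For concreteness, here is a minimal sketch of that de-aggregation step, assuming a hypothetical layout in which each county contributes one row per sex plus an aggregated total row; the file name and the "Sex" column are placeholders.

    import pandas as pd

    df = pd.read_csv("china_study_ii.csv")   # hypothetical file name

    # Keep male and female rows as separate observations; drop the
    # aggregated totals so their reduced variance does not enter the model.
    df = df[df["Sex"].isin(["M", "F"])].copy()

    # Numeric dummy like the "SexM1F2" variable: 1 = male, 2 = female.
    df["SexM1F2"] = df["Sex"].map({"M": 1, "F": 2})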

Anyway, the associations with the variable “SexM1F2” got me thinking about a possibility. What if females consumed significantly less wheat flour and more animal protein in this dataset? This could be one of the reasons behind these strong associations between being female and living longer. So I built a more complex WarpPLS model than the one in my previous post, and ran a linear multivariate analysis on it. The results are shown below.


What do these results suggest? They suggest no strong associations between gender and wheat flour or animal protein consumption. That is, when you look at county averages, men and women consumed about the same amounts of wheat flour and animal protein. Also, the results suggest that animal protein is protective and wheat flour is detrimental, in terms of longevity, regardless of gender. The associations of animal protein and wheat flour with mortality are essentially the same as the ones in my previous post. The beta coefficients are a bit lower, but some P values improved (i.e., decreased); the latter most likely due to better resample set stability after including the gender-related variable.

Most importantly, there is a very strong protective effect associated with being female, and this effect is independent of what the participants ate.

Now, if you are a man, don’t rush to take hormones to become a woman with the goal of living longer just yet. This advice is not only due to the likely health problems related to becoming a transgender person; it is also due to a little problem with these associations. The problem is that the protective effect suggested by the coefficients of association between gender and mortality seems too strong to be due to men "being women with a few design flaws".

There is a mysterious factor X somewhere in there, and it is not gender per se. We need to find a better candidate.

One interesting thing to point out here is that the above model has good explanatory power with regard to mortality. I'd say unusually good explanatory power, given that people die for a variety of reasons, and here we have a model explaining a lot of that variation. The model explains 45 percent of the variance in mortality in the 35-69 age range, and 28 percent of the variance in the 70-79 age range.
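As a rough illustration of what "variance explained" means here, the sketch below computes R-squared with plain ordinary least squares; it is not WarpPLS, and the arrays are placeholders for the de-aggregated county data.

    import numpy as np

    def r_squared(X, y):
        X1 = np.column_stack([np.ones(len(y)), X])     # add an intercept
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)  # OLS coefficients
        residuals = y - X1 @ beta
        return 1 - residuals.var() / y.var()           # share of variance explained

    # X: columns for Wheat, Aprot and SexM1F2; y: Mor35_69 (or Mor70_79).
    # Per the figures above, r_squared(X, y) would come out near 0.45 and
    # 0.28 for the two age ranges, respectively.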

In other words, the model above explains nearly half of the variance in mortality in the 35-69 age range. It could form the basis of a doctoral dissertation in nutrition or epidemiology with important implications for public health policy in China. But first the factor X must be identified, and it must be somehow related to gender.

Next post coming up soon ...

Monday, October 24, 2011

The China Study II: Animal protein, wheat, and mortality … there is something odd here!

WarpPLS and HealthCorrelator for Excel were used in the analyses below. For other China Study analyses, many using WarpPLS and HealthCorrelator for Excel, click here. For the dataset used, visit the HealthCorrelator for Excel site and check under the sample datasets area. I thank Dr. T. Colin Campbell and his collaborators at the University of Oxford for making the data publicly available for independent analyses.

The graph below shows the results of a multivariate linear WarpPLS analysis including the following variables: Wheat (wheat flour consumption in g/d), Aprot (animal protein consumption in g/d), Mor35_69 (number of deaths per 1,000 people in the 35-69 age range), and Mor70_79 (number of deaths per 1,000 people in the 70-79 age range).


Just a technical comment here, regarding the possibility of ecological fallacy. I am not going to get into this in any depth now, but let me say that the patterns in the data suggest that, with the possible exception of some variables (e.g., blood glucose, gender; the latter will get us going in the next few posts), ecological fallacy due to county aggregation is not a big problem. The threat of ecological fallacy exists, here and in many other datasets, but it is generally overstated (often by those whose previous findings are contradicted by aggregated results).

I have not included plant protein consumption in the analysis because plant protein consumption is very strongly and positively associated with wheat flour consumption. The reason is simple. Almost all of the plant protein consumed by the participants in this study was probably gluten, from wheat products. Fruits and vegetables have very small amounts of protein. Keeping that in mind, what the graph above tells us is that:

- Wheat flour consumption is significantly and negatively associated with animal protein consumption. This probably reflects a dietary pattern in which wheat products displace animal foods: those eating more wheat products tend to consume less animal protein.

- Wheat flour consumption is positively associated with mortality in the 35-69 age range. The P value (P=0.06) just misses the 5 percent level (i.e., P=0.05) that most researchers consider to be the threshold for statistical significance. More consumption of wheat in a county, more deaths in this age range.

- Wheat flour consumption is significantly and positively associated with mortality in the 70-79 age range. More consumption of wheat in a county, more deaths in this age range.

- Animal protein consumption is not significantly associated with mortality in the 35-69 age range.

- Animal protein consumption is significantly and negatively associated with mortality in the 70-79 age range. More consumption of animal protein in a county, fewer deaths in this age range.

Let me tell you, from my past experience analyzing health data (as well as other types of data, from different fields), that these coefficients of association do not suggest super-strong associations. Actually this is also indicated by the R-squared coefficients, which vary from 3 to 7 percent. These are the percentages of variance that the model explains in the variables above which the R-squared coefficients are shown. They are low, which means that the model has weak explanatory power.

R-squared coefficients of 20 percent and above would be more promising. I hate to disappoint hardcore carnivores and the fans of the “wheat is murder” theory, but these coefficients of association and variance explained are probably way less than what we would expect to see if animal protein was humanity's salvation and wheat its demise.

Moreover, the lack of association between animal protein consumption and mortality in the 35-69 age range is a bit strange, given that there is an association suggestive of a protective effect in the 70-79 age range.

Of course death happens for all kinds of reasons, not only what we eat. Still, let us take a look at some other graphs involving these foodstuffs to see if we can form a better picture of what is going on here. Below is a graph showing mortality at the two age ranges for different levels of animal protein consumption. The results are organized in quintiles.
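A minimal sketch of how such a quintile breakdown can be produced, assuming the same hypothetical file and column layout as in the earlier sketches:

    import pandas as pd

    df = pd.read_csv("china_study_ii.csv")      # hypothetical file name
    df["Aprot_q"] = pd.qcut(df["Aprot"], q=5)   # quintiles of intake
    print(df.groupby("Aprot_q", observed=True)[["Mor35_69", "Mor70_79"]].mean())
    # The wheat flour breakdown further below is the same idea with "Wheat".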


As you can see, the participants in this study consumed relatively little animal protein. The lowest mortality in the 70-79 age range, arguably the range of higher vulnerability, was for the 28 to 35 g/d quintile of consumption. That was the highest consumption quintile. About a quarter to a third of a pound of beef a day, and generally less than that of seafood, would give you that much animal protein.

Keep in mind that the unit of analysis here is the county, and that these results are based on county averages. I wish I had access to data on individual participants! Still I stand by my comment earlier on ecological fallacy. Don't worry too much about it just yet.

Clearly the above results and graphs contradict claims that animal protein consumption makes people die earlier, and go somewhat against the notion that animal protein consumption causes things that make people die earlier, such as cancer. But they do so in a messy way - that spike in mortality in the 70-79 age range for 21-28 g/d of animal protein is a bit strange.

Below is a graph showing mortality at the two age ranges (i.e., 35-69 and 70-79) for different levels of wheat flour consumption. Again, the results are shown in quintiles.


Without a doubt the participants in this study consumed a lot of wheat flour. The lowest mortality in the 70-79 age range, which is the range of higher vulnerability, was for the 300 to 450 g/d quintile of wheat flour consumption. The high end of this range is about 1 lb/d of wheat flour! How many slices of bread would this be equivalent to? I don’t know, but my guess is that it would be many.

Well, this is not exactly the smoking gun linking wheat with early death, a connection that has been reaching near mythical proportions on the Internetz lately. Overall, the linear trend seems to be one of decreased longevity associated with wheat flour consumption, as suggested by the WarpPLS results, but the relationship between these two variables is messy and somewhat weak. It is not even clearly nonlinear, at least in terms of the ubiquitous J-curve relationship.

Frankly, there is something odd about these results.

This oddity led me to explore, using HealthCorrelator for Excel, all ordered associations between mortality in the 35-69 and 70-79 age ranges and all of the other variables in the dataset. That in turn led me to a more complex WarpPLS analysis, which I’ll talk about in my next post (still being written).

I can tell you right now that there will be more oddities there, which will eventually take us to what I refer to as the mysterious factor X. Ah, by the way, that factor X is not gender - but gender leads us to it.

Sunday, October 23, 2011

Rugby World Cup: The ref debate

It's been a long time between posts - busy work periods, lack of inspiration, lack of news stories (well, that's not entirely true!) - pick your excuse.  Apologies for the long break. I'm back with a viewpoint on rugby - probably not the topic of interest for most of you reading this in the USA, but as rugby is our national sport in SA, I felt compelled to put it out there!  I'm planning a series on fatigue, probably as a series of short video posts in the coming week, so hopefully that breaks the silence on sports science!  Join us soon!  And for those who followed the Rugby World Cup, congratulations to New Zealand.  Some thoughts on the refereeing below!


Rugby World Cup: New Zealand's drought ends and rugby's referee problem

So 24 years of waiting is over for New Zealand, who beat France 8-7 in a pulsating and perhaps unexpectedly competitive Rugby World Cup Final today.  It may have been the lowest scoring final ever played, but it was suspenseful and adventurous, certainly more than the previous two finals.  France produced a performance worthy of the showpiece match of the tournament, having come into it with two losses and the anticipation of a blowout victory to New Zealand.  Rather, it was France who played the adventurous rugby, and only some ineffectiveness in attack and New Zealand's resolute defending prevented them from winning their first title.  

Instead, New Zealand won their second, but it was significant in that they have been, for the most part, the best team going into each of the six World Cup tournaments, sometimes by a large margin.  Having failed to win the World Cup on five occasions despite being the favorites had earned New Zealand the tag of "chokers", a team that peaked between World Cups but failed to deliver when it mattered.  Two of those famous defeats came at the hands of France (in 1999 and 2007) and so when this French team stood firm and began to control the match following a second half try that brought the score back to 8-7, a blanket of anxiety settled over Eden Park in Auckland.  

Choking vs panic

There were times when New Zealand appeared close to panic in this final - they were flustered, made unforced errors, chose poor tactical options and generally seemed to be hanging on and defending a one-point lead with desire rather than application.  At this point, it seemed to me that had New Zealand NOT won this World Cup, it would have been because of panic, rather than choking (an explanation that is just too convenient to use, and unfairly earned, not only by NZ rugby but also by SA cricket).  Their composure deserted them, though the injury to their flyhalf, which meant that they played most of the final with a fourth choice pivot, certainly influenced their tactical approach.  So did their lead: they seemed more concerned with defending the one-point advantage than playing proactively, which set the final 30 minutes up as France with the ball, New Zealand without it.

For an explanation of how choking differs from panic, and why a team that loses a match is not necessarily choking, read this piece by Malcolm Gladwell.  I've never really been fond of simply throwing out the excuse of "chokers" every time the more favored team loses - sometimes you are just outplayed or out-thought by a team who are better than you on the day.  The margins in international sport are so small that this can happen fairly easily, and it's too simple to say "New Zealand choked", when in fact, France may have simply been unbeatable on a given day, as was the case in 1999.  For a comparable case in tennis, Federer's loss to Tsonga at Wimbledon earlier this year is the best I can think of - sometimes, however great you are, the other team/player just rises to a level that no one could match, and it's your bad fortune to be there at the time!

The influence of the referee in rugby

However, the tactical and technical nature of the game is not what I want to focus on in this post - that is something that rugby websites around the world will do enough of (see this example for a match report).  

Instead, I thought I would give some of my thoughts on a topic that follows every rugby match, and that is the debate and criticism of the referee.  The reality is that the referee in a rugby match has become incredibly influential in determining how the game is played.  The result is that rugby has a growing credibility problem, where every match threatens to degenerate into objections about the performance of the referee, rather than assessment of the relative performances of its players.  Whenever the result on the scoreboard can be dismissed as being the result of someone's opinion or bias, there is a problem.  

And this has happened in virtually every close match in the 2011 Rugby World Cup, which will be remembered not solely for the on-field performances, but for weak referee performances, some of which have been questionable, some outright poor.  The most controversial of these probably came in the quarter-final, where Australia beat South Africa 11-9 in a match that was later alleged to have been "bent" as part of the condemnation of the performance of referee Bryce Lawrence (more on my views of that allegation later).

Rugby presents a unique challenge in that the referee is required to make a specific decision about a contested tackle almost 200 times a match (once every 30 seconds), and this decision is multi-dimensional, instantaneous and open to interpretation.  As a result, these decisions (and there are so many of them) influence the game to the extent that accusation, criticism and allegation are inevitable.   It's part of sport, certainly, but rugby seems more prone to accusations that "the ref helped ABC win" than any other sport.  The problem is that from this point, it's a short journey to allegations of fixing, corruption and cheating, when the problem may be simple incompetence or interpretation of the tackle rules of the sport.  Either way, the credibility of a result is called into question.

This situation exists because so much of the contest in rugby revolves around competing for the ball after a tackle, in the breakdown contest.  The attacking team needs to recycle possession quickly, whereas the defending team are at worst trying to slow it down to re-organize in defence, at best trying to win the ball on the ground.  The result is a huge contest, the outcome of which goes a considerable distance towards determining the match result, but which is itself determined by how the referee interprets how both sets of players test the boundaries of the law (because this is what players will do, understandably - it's like football players trying to play close to the offside line).

A unique situation?

I cannot think of another sport where the interpretation of the rule by an official so clearly influences the way that teams play the match.  In football (soccer), the most contentious decisions are those when a penalty appeal is made, offside is ruled, or when foul play is adjudged.  They are fairly clear-cut, and far less frequent than in rugby.  And certainly, they can influence matches in a big way - I'm not downplaying how significant a referee decision can be.  In the NFL, decisions can be similarly significant, but usually involve clear transgressions of rules.  Tennis, there's no influence, particularly now that television replays are used.  And similarly, cricket umpires are often criticized and single decisions can be very influential, but with TV assistance, the incidence of these has certainly come down.  If there is a sport that I'm missing, please let me know.

The rugby situation - too much interpretation

Rugby is different - the most contentious decision in rugby is one that is made on average twice a minute (five times a minute if you use ball-in-play time rather than total time), and it influences the next minute, rather than being a decision in isolation.  Consider that a typical match has about 170 rucks (or contests for the ball in a tackle), and you realize that there are probably 100 decisions (because not all are contested the same way) where the referee must interpret, in a split second, a dizzying array of laws, and where each decision has implications for what follows.

Different referees have a different sequence or approach to the decision, but they must judge, more or less in order: how the tackler interacts with the tackled player, when the tackle actually occurs, that the tackler releases the tackled player, that the tackled player releases the ball, when the ruck is formed, that players arriving to join the ruck remain on their feet, and that they join from the correct position and do not seal the ball off to prevent the contest.  Add in that there are often multiple tacklers, so the referee has to decide who the tackler is, and you appreciate that within half a second, there's a lot to judge.  Then the next problem is that many times, four or five things happen more or less simultaneously, and so it really is a judgment call.

Ultimately, what the decision comes down to is a) assigning roles to the involved players, and b) deciding on the order in which events occur - every tackle has similar events, and the job of the referee is to sort through the order in which they occur, and if he sees a different order than you or I do, then his decision will be accordingly different.  And this is precisely what happens to make these decisions so contentious.

I've been fortunate enough to work with the SA Sevens team for the last three seasons, and at every tournament, the IRB Head of Referees, all the coaches and technical staff of competing teams, and all the referees have a sit-down meeting a few days before the tournament starts.  The meetings involve discussion around how the referees have been instructed to officiate and usually include clips of tackles and rucks from previous tournaments.  Now bear in mind that this is Sevens, where the contest involves fewer players and less congestion than you'd see in 15s, and then consider that even so, there is rarely agreement in these meetings.  The situation in 15-man rugby is of course even more complex (the tackle contest may be more significant in 7s, but that's for another discussion!)

For each clip, one coach will point to the tackler, another to the tackled player, another to the arriving player, another to the offside line, each one pointing out a different possible transgression PER RUCK!  Mostly, it boils down to disagreement about the order in which events happen, and which player should be entitled to do what.  Eventually, even in slow motion, it takes consensus or a swing vote to sort through the order of decisions that a referee must make.  Even then, it's often a 50-50 call as to whether a player released the tackled player or the ball and so on (if you are reading this without much knowledge of rugby and you're confused at how complex it sounds, well, that's exactly the point!)

A general approach to the decision and its implications

The reality is that rugby, by design, prioritizes the contest for the ball on the ground, and therefore the spotlight falls squarely on the man who must judge whether players are transgressing those laws.  Simple on paper - there is a very distinct set of rules governing the tackle.  But here's the problem - the rules may be clear, but the judgment of them is not.  So much is open to interpretation, and it is interpretation that happens in an instant, while on the run.  The result is that a match can very, very easily look 'influenced' by the referee, who, generally speaking, can take one of two extreme approaches to how he cuts through this organized chaos to make a decision.  Call it "conservative" vs "liberal" decision-making, but at its simplest, a referee is going to lean one of two ways.

The first approach is to over-police the contest (the conservative).  The result is that the referee will appear to punish legitimate contesting for the ball, and will award penalties frequently, forcing players to back right off, killing the contest for the ball.  This favors the team in possession.  Alternatively, the referee can under-police the breakdowns (liberal), and allow much more to go unpenalized.

Importantly, when this happens, the result is that the defending team will usually be favoured, because the referee will fail to prevent them from slowing the ball down, and slowing it down creates a disproportionate advantage.  I believe this is what happened in the South Africa - Australia match, where the rucks were highly contested and too much was allowed on the ground.  The result is that the defending team is advantaged.  But, significantly, the problem in that particular match is that the defending team was mostly Australia.

The stats reveal this - South Africa had 131 rucks, compared to Australia's 44.  That is, for every one opportunity for South Africa to contest and slow down Australian ball, there were three chances for Australia to do so.  So, by allowing too much contesting, the referee effectively gave Australia three times as many chances to push the limits of what was legal (and some would say exceed those limits).

When one team is as dominant as this (in terms of possession), and the more liberal referee is making the extreme "decision" to under-police and allow more, then it will always appear that he is deliberately biased.  The reality is that if the possession was equal, and if both teams have the same number of rucks, then nobody would really notice the referee because BOTH TEAMS would get away with slowing the other team's ball down!  You'd get a very messy match, but the liberal referee would be far more "anonymous" because his leaning affects both sides equally.

Instead, this match was one-sided, and South Africa seemed to be on the receiving end of an unfair performance.  I do think that Lawrence was poor, and I do think that his poor performance affected SA more, but it wasn't deliberate.  And as for match-fixing?  Not based on decisions that didn't go our way, no.  Rather, I think that the referee was poor and didn't do enough to control the rucks, but my point is that this may be because he was either instructed to allow the contest, and "over-applied" the instruction, or he just has a natural inclination to be liberal towards the contest.

In the case of Bryce Lawrence, it would not surprise me if he was told to allow a contest for the ball, because earlier in the tournament (in the Aus v Ireland match), he was criticized for penalizing Australia TOO MUCH at the breakdown.  I strongly suspect that what happened next is that he was asked to be a little slower on the whistle, and he erred on the other extreme, and didn't do enough.  In the end, it appeared that South Africa were hard done by, but as I have said, that's more because whenever one team dominates play, an error like Lawrence's appears to favour the team without the ball (Australia).

Analyzing referees - navigating with a broken compass

It may not surprise you to learn, for example, that many international teams now attempt to analyze referee trends, so that they can try to predict whether a given referee is likely to decide one way or the other.  At the most basic level, for example, you can look at whether a particular referee tends to award a penalty to the attacking team or the defending team to get an idea of that referee's "in-built bias".  This partly reveals whether that referee's priority in assessing the breakdown is whether the attacking team player releases the ball (penalty against the attacking team) or whether the tackler releases the player (defending team).  You can then go further to see whether the referee is more or less lenient on the tackler or the tackled player and the arriving supporting players from either team.
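To make that concrete, here is a toy sketch of such a tendency profile; the decision log and its field names are entirely made up, and real coding schemes would be far messier.

    # Tally which side a referee penalizes at the breakdown.
    from collections import Counter

    decisions = [  # hypothetical hand-coded decisions from video review
        {"ruling": "penalty", "against": "attacking"},
        {"ruling": "penalty", "against": "defending"},
        {"ruling": "play_on", "against": None},
    ]

    counts = Counter(d["against"] for d in decisions if d["ruling"] == "penalty")
    print(counts)
    # A referee who penalizes the attacking side far more often is probably
    # prioritizing "tackled player must release" over "tackler must release".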

The problem with this approach is two-fold.  First, it's subjective.  When analysing clips, you have to judge not only what the referee does decide, but what he does not.  This means you have to make a call yourself, and this brings us back to the point about disputable situations, especially because on TV, you don't see what the referee sees.

The second problem, more significant, is that the referees, in my experience anyway, are too unpredictable to code in this way.  They are influenced by individual players and teams, and they change their approach too often, probably because they are very susceptible to suggestion and to the instructions coming down to them from their superiors.

For example, we tried this in the Sevens setup, but it was a futile quest, because the referees changed their approach too often.  We worked out that what was happening was that the IRB were evaluating the referees and providing feedback on their performances (which is a good thing, of course), but this feedback was influencing the way each referee approached their next match.  The result was that for each referee, if you plotted a graph showing how they made decisions, it would look like a zig-zag curve of mountain peaks and valleys - one week they leaned one way, the next week they went the other.  And so trying to pre-empt how they would decide was like navigating with a broken compass.

Yet again, what this showed is the "unstable" nature of the decision-making process.  Again, 170 decisions per match, each made in a fraction of a second at speed, with five or more variables to assess, is going to introduce some "interpretation", and the problem is that this can lean one way or another very easily.

Emotion - the inherent bias when working backwards

The other factor in all this is that emotion and passion are such significant influencers of how we interpret what we watch on television.  Fans (and even neutral spectators) have an inherent bias (it's what makes them fans!) and the result is that when they assess a referee performance, they exist in a world of black and white - the referee is either right or wrong.  Unfortunately for rugby, the decision is rarely black and white.  It is grey, because of the previously mentioned decisions around judging the order in which events occur, and who does what in the tackle, and so there is always conflict between what fans see and what is actually happening on the ground.

Consider an example from football (soccer):  A player scores a goal but was offside when he received the pass.  The referee/assistant sees this, and the goal is correctly disallowed.  On first viewing, a fan who feels that his team has been robbed can make all manner of accusations including match-fixing and bias, but a replay will prove him wrong in most cases.  Similarly, in tennis, the ball is either in or out, and in the Hawkeye era, there's little dispute over those calls.  NFL, there are debatable calls (pass interference, roughing the passer etc), but they're much less frequent and different in nature to the ongoing, continuous rugby tackle calls.

Rugby, however, has a much more subjective decision happening 170 times a match, and that's why I laboured the point about how "grey" the decision-making process can be earlier in this post.  The end result is that people who watch matches can make the logical mistake of working backwards.  They then interpret their observations to fit their theory, and of course their desired theory is that their team must win!

It's a lot like bent science, in fact, in that you start out with the finding already "known" (in a fan's mind, there is only one team that can win - they "know" the result before the match!).  Then you have a series of "experiments", also known as the tackle situation, where the outcome of each must be known too.  The entire match is an observed experiment, and unwittingly, people mix emotion with interpretation and they will come up with accusations of bias because their observation will always fit their model.  This is the danger of looking for proof of what you already believe, because you will always succeed at finding it!

Don't trust the passionate perception

I made this mistake myself when working with the Sevens team.  Every single decision was "wrong" as long as it went against our team!  Such is the desire to win that I stood on the sidelines and could not believe that a penalty should not be awarded to us.  If we lost the ball, it could only be because the other team had cheated, and the referee had missed it!

Only in the cold light of day, often the next morning, sitting in the hotel lobby, did I have the opportunity to review the match, and sometimes to talk to the referee, who would explain what he was seeing as he made the call.  Then it became much clearer to me that what was "obvious" to me was in fact "obvious" in exactly the other direction!  I was wrong, pure and simple.  But at the time I could not see that I was looking at it incorrectly.  I learned to have a deep mistrust of my own perceptions in those emotional, stressful situations, and learned instead to wait, hold the opinion, and decide only when removed from the passion and emotion.  It was a valuable lesson.

Sometimes, of course, the referees did make mistakes - more than once, I still believe we were wrongly judged and that it cost us matches.  Sometimes, referees even admitted it, and apologized.  But we have also been the beneficiaries of such decisions, and that's the result of rugby's tackle rule.  It certainly needs to be fixed.  This was a difficult lesson to learn, but an important one.

The reality is that fans need to step away from the emotion, and if they did, they may, in the case of South Africa anyway, recognize a few other reasons why it was New Zealand, and not us, lifting that trophy in Auckland yesterday.

The solution - analysis and a scorecard

As for the solution, my bias as a scientist is to measure and analyse, so that's where I'd look for rugby's problem.  And transparency would help - no one really knows what the IRB does with referees - they are accused of being a "protected species", which may not necessarily be a bad thing, but I do feel that some more open discussion would help.  At the moment, it's all left to the media, and in this day and age, the "media" now includes social networking, and so the public WILL have their say, and they are rarely going to be diplomatic in the absence of information.  Rather, control the perception by making some information available (it's a lot like the Caster Semenya case - the secrecy around her testing and treatment only fueled the flames and allowed people to make up the "truth".  And that version is always worse than the real truth).

And for rugby, the solution to me is that the performance of referees needs to be evaluated more transparently.  A panel of independent officials could analyze matches, producing a report on each one.  This report could analyze every single one of the 200 decisions a referee has to make in a match.  How many of the 200 were incorrect?  20? 30?  And of those 30, how many were clear, conclusive errors, and how many were interpretive calls?  One has to build in this human interpretation element, because it would be wrong to think that one can accurately judge off TV when the referee is 5m away from the decision he is making.

And of those conclusive errors, do they favor one team?  If you find for example that 30 decisions out of 200 are wrong, and 90% of them go against one team, then you have some weight behind accusations of bias or fixing.  But until that kind of evaluation is done, people speculate, and speculation is almost always worse than the truth.

Especially when the passions of die-hard fans are involved.  Just ask any referee...
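A quick footnote on the arithmetic of that hypothetical 30-of-200 example: if errors were unrelated to the teams, each one should hurt either side with equal probability, so a 90% one-sided split would be extremely unlikely by chance.  A minimal sketch, with made-up numbers:

    from math import comb

    def one_sided_binomial_p(k, n, p=0.5):
        # P(X >= k) for X ~ Binomial(n, p): the chance of a split at least
        # this lopsided if each error could go against either team equally.
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    print(one_sided_binomial_p(27, 30))  # 27 of 30 errors against one team:
                                         # about 4 in a million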

Ross

Monday, October 17, 2011

Book review: Perfect Health Diet

Perfect Health Diet is a book that one should own. It is not the type of book that you can get from your local library and just do a quick read-over (and, maybe, write a review about it). If you do that, you will probably miss several important ideas that form the foundation of this book – a foundation that runs deep.

The book is titled “Perfect Health Diet”, not “The Perfect Health Diet”. If you think that this is a mistake, consider that the most successful social networking web site of all time started as “The Facebook”, and then changed to simply “Facebook”; which was perceived later as a major improvement.

Moreover, “Perfect Health Diet” makes for a cool and not at all inappropriate acronym – “PHD”.

What people eat has an enormous influence on their lives, and also on the lives of those around them. Nutrition is clearly one of the most important topics in the modern world - it is the source of much happiness and suffering for entire populations. If Albert Einstein and Marie Curie were alive today, they would probably be interested in nutrition, just as they took an interest in important topics of their time that were outside their main disciplines and research areas (e.g., the consequences of war, and future war deterrence).

Nutrition attracts the interest of many bright people today. Those who are not professional nutrition researchers often fund their own research, spending hours and hours of their own time studying the literature and even experimenting on themselves. Several of them decide to think deeply and carefully about it. A few, like Paul Jaminet and Shou-Ching Jaminet, decide to write about it, and all of us benefit from their effort.

The Jaminets have PhDs (the degrees, not copies of their own book). Their main PhD disciplines are somewhat similar to Einstein’s and Curie’s, which is an interesting coincidence. What the Jaminets have written about nutrition is probably analogous, in broad terms, to what Einstein and Curie would have written about nutrition if they were alive today. They would have written about a “unified field theory” of nutrition, informed by chemistry.

To put it simply, the main idea behind this book is to find the “sweet spot” for each major macronutrient (e.g., protein and fat) and micronutrient (e.g., vitamins and minerals) that is important for humans. The sweet spot is the area indicated on the graph below. This is my own simplified interpretation of the authors' more complex graphs on marginal benefits from nutrients.
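For readers who like to see such curves generated rather than drawn, below is a toy rendering of that shape; the functional form is purely illustrative and not taken from the book.

    import numpy as np
    import matplotlib.pyplot as plt

    intake = np.linspace(0, 10, 200)
    benefit = 1 - np.exp(-intake)   # diminishing returns from the nutrient
    toxicity = 0.02 * intake**2     # slowly accelerating cost of excess
    net = benefit - toxicity        # the "sweet spot" is the flat top

    plt.plot(intake, net)
    plt.xlabel("Nutrient intake")
    plt.ylabel("Net health benefit")
    plt.title("Sweet spot: the plateau of the net-benefit curve")
    plt.show()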


The book provides detailed information about each of the major nutrients that are important to humans, what their “sweet spot” levels are, and how to obtain them. In this respect the book is very thorough, and also very clear, including plenty of good arguments and empirical research results to back up the recommendations. But this book is much more than that.

Why do I refer to this book as proposing a “unified field theory” of nutrition? The reason is that this book clearly aims at unifying all of the current state of the art knowledge about nutrition, departing from a few fundamental ideas.

One of those fundamental ideas is that a good diet would provide nutrients in the same ratio as those provided by our own tissues when we “cannibalize” them – i.e., when we fast. Another is that human breast milk is a good basis for the estimation of the ratios of macronutrients a human adult would need for optimal health.

And here is where the depth and brilliance with which the authors address these issues can lead to misunderstandings.

For example, when our body “cannibalizes” itself (e.g., at the 16-h mark of a water fast), there is no digestion going on. And, as the authors point out, what you eat, in terms of nutrients, is often not what you get after digestion. It may surprise many to know that a diet rich in vegetables is actually a high fat diet (if you are surprised, you should read the book). One needs to keep these things in mind to understand that not all dietary macronutrient ratios will lead to the same ratios of nutrients after digestion, and that the dietary equivalent of “cannibalizing” oneself is not a beef-only diet.

Another example relates to the issue of human breast milk. Many seem to have misunderstood the authors as implying that the macronutrient ratios in human breast milk are optimal for adult humans. The authors say nothing of the kind. What they do is to use human breast milk as a basis for their estimation of what an adult human should get, based on a few reasonable assumptions. One of the assumptions is that a human adult’s brain consumes proportionally much less sugar than an infant’s.

Yet another example is the idea of “safe starches”, which many seem to have taken as a recommendation that diabetics should eat lots of white rice and potato. The authors have never said such a thing in the book; not even close. "Safe starches", like white rice and sweet potatoes (as well as white potatoes), are presented in the book as good sources of carbohydrates that are also generally free from harmful plant toxins. And they are, if consumed after cooking.

By the way, I have a colleague who has type 2 diabetes and can eat meat with white potatoes without experiencing hyperglycemia, as long as the amount of potato is very small and is eaten after a few bites of meat.

Do I disagree with some of the things that the authors say? Sure I do, but not in a way that would lead to significantly different dietary recommendations. And, who knows, maybe I am wrong.

For example, the authors seem to think that dietary advanced glycation end-products (AGEs) can be a problem for humans, and therefore recommend that you avoid cooking meat at high temperatures (no barbecuing, for example). I have not found any convincing evidence that this is true in healthy people, but following the authors’ advice will not hurt you at all. And if your digestive tract is compromised to the point that undigested food particles are entering your bloodstream, then maybe you should avoid dietary sources of AGEs.

Also, I think that humans tend to adapt to different macronutrient ratios in more fundamental ways than the authors seem to believe they can. These adaptations are long-term ones, and are better understood based on the notion of compensatory adaptation. For instance, a very low carbohydrate diet may bring about some problems in the short term, but long-term adaptations may reverse those problems, without a change in the diet.

The authors should be careful about small errors that may give a bad impression to some experts, and open them up to undue criticism; as experts tend to be very picky and frequently generalize based on small errors. Here is one. The authors seem to imply that eating coconut oil will help feed colon cells, which indeed seem to feed on short-chain fats; not exactly the medium-chain fats abundantly found in coconut oil, but okay. (This may be the main reason why indigestible fiber contributes to colon health, by being converted by bacteria to short-chain fats.) The main problem with the authors' implied claim is that coconut oil, as a fat, will be absorbed in the small intestine, and thus will not reach colon cells in any significant amounts.

Finally, I don’t think that increased animal protein consumption causes decreased longevity; an idea that the authors seem to lean toward. One reason is that seafood consumption is almost universally associated with increased longevity, even when it is heavily consumed, and seafood in general has a very high protein-to-fat ratio (much higher than beef). The connection between high animal protein consumption and decreased longevity suggested by many studies, some of which are cited in the book, is unlikely to be due to the protein itself, in my opinion. That connection is more likely to be due to some patterns that may be associated in certain populations with animal protein consumption (e.g., refined wheat and industrial seed oils consumption).

Thankfully, controversial issues and small errors can be easily addressed online. The authors maintain a popular blog, and they do so in such a way that the blog is truly an extension of the book. This blog is one of my favorites. Perhaps we will see some of the above issues addressed in the blog.

All in all, this seems like a bargain to me. For about 25 bucks (less than that if you pay in quid, and more if you pay in yuan), and with some self-determination, you may save thousands of dollars in medical bills. More importantly, you may change your life, and the lives of those around you, for the better.

Monday, October 10, 2011

Certain mental disorders may have evolved as costs of attractive mental traits

I find costly traits fascinating, even though they pose a serious challenge to the notion that living as we evolved to live is a good thing. It is not that they always deny this notion; sometimes they do not, but add interesting and somewhat odd twists to it.

Costly traits have evolved in many species (e.g., the male peacock’s train) because they maximize reproductive success, even though they are survival handicaps. Many of these traits have evolved through nature’s great venture capitalist – sexual selection.

(Source: Vangoghart.org)

Certain harmful mental disorders in humans, such as schizophrenia and manic–depression, are often seen as puzzles from an evolutionary perspective. The heritability of those mental disorders and their frequency in the population at various levels of severity suggest that they may have evolved through selection, yet they often significantly decrease the survival prospects of those afflicted by them (Keller & Miller, 2006; Nesse & Williams, 1994).

The question often asked is: why have they evolved at all? Should they not have been eliminated, rather than maintained, by selective forces? It seems that the most straightforward explanation for the existence of certain mental disorders is that they have co-evolved as costs of attractive mental traits. Not all mental disorders, however, can be explained in this way.

The telltale signs of a mental disorder that is likely to be a cost associated with a trait used in mate choice are: (a) many of the individuals afflicted are also found to have an attractive mental trait; and (b) the mental trait in question is comparatively more attractive than other mental traits that have no apparent survival costs associated with them.

The broad category of mental disorders generally referred to as schizophrenia is a good candidate in this respect because:
    - Its incidence in human males is significantly correlated with creative intelligence, the type of intelligence generally displayed by successful artists, which is an attractive mental trait (Miller & Tal, 2007; Nettle, 2006b).
    - Creative intelligence is considered to be one of the most attractive mental traits in human males, to the point of females at the peak of their fertility cycles finding creative but poor males significantly more attractive than uncreative but wealthy ones (Haselton & Miller, 2006).

The same generally applies to manic–depression, and a few other related mental disorders.

By the way, creative intelligence is also strongly associated with openness, one of the "big five" personality traits. And, both creative intelligence and mental disorders are seen in men and women. This is so even though it is most likely that selection pressure for creative intelligence was primarily exerted by ancestral women on men, not ancestral men on women.

Crespi (2006), in a response to a thorough and provocative argument by Keller & Miller (2006) regarding the evolutionary bases of mental disorders, makes a point that is similar to the one made above (see, also, Nettle, 2006), and also notes that schizophrenia has a less debilitating effect on human females than males.

Ancestral human females, due to their preference for males showing high levels of creative intelligence, might have also selected for a co-evolved cost that affects not only males but also the females themselves, through gene correlation between the sexes (Gillespie, 2004; Maynard Smith, 1998).

There is another reason why ancestral women might have possessed certain traits that they selected for in ancestral men. With anything that involves intelligence in humans, the sex applying selection pressure (i.e., females) must be just as intelligent as (if not more intelligent than) the sex to which selection pressure is applied (i.e., males). Peahens do not have to have big and brightly colored trains to select male peacocks that have them. That is not so with anything that involves intelligence (in any of its many forms, like creative and interpersonal intelligence), because intelligence must be recognized through communication and behavior, which itself requires intelligence.

Other traits that differentiate females from males may account for differences in the actual survival cost of schizophrenia in females and males. For example, males show a greater propensity toward risk-taking than females (Buss, 1999; Miller, 2000), and schizophrenia may positively moderate the negative relationship between risk-taking propensity and survival success.

Why were some of our ancestors in the Stone Age artists, creating elaborate cave paintings, sculptures, and other art forms? Maybe because a combination of genetic mutations and environmental factors made it a sexy thing to do from around 50,000 years ago or so, even though the underlying reason why the ancestral artists produced art may also have increased the chances that some of them suffered from mental disorders.

A heritable trait possessed by males and perceived as very sexy by females has a very good chance of evolving in any population. That is so even if the trait causes the males who possess it to die much earlier than other males. In the human species, a male can father literally hundreds of children in just a few years. Unlike men, women tend to be very selective of their sexual partners, which does not mean that they cannot all select the same partner (Buss, 1999).

So, if this is true, what is the practical value of knowing it?

It seems reasonable to believe that knowing the likely source of a strange and unpleasant view of the world is, in and of itself, therapeutic. A real danger, it seems, is in seeing the world in a strange and unpleasant way (e.g., as a schizophrenic may see it), and not knowing that the distorted view is caused by an underlying reason. The stress coming from this lack of knowledge may compound the problem; the symptoms of mental disorders are often enhanced by stress.

As one seeks professional help, it may also be comforting to know that something that is actually very good, like creative intelligence, may come together with the bad stuff.

Finally, is it possible that our modern diets and lifestyles significantly exacerbate the problem? The answer is "yes", and this is a theme that has been explored many times before by Emily Deans. (See also this post, by Emily, on the connection between mental disorders and creativity.)

Reference
(All cited references are listed in the article below. If you like mathematics, this article is for you.)

Kock, N. (2011). A mathematical analysis of the evolution of human mate choice traits: Implications for evolutionary psychologists. Journal of Evolutionary Psychology, 9(3), 219-247.

Sunday, October 9, 2011

Analyzing India's Cricket Debacle - A Black Swan Event?

I have been thinking of posting this all through India's disastrous recent cricket tour of England. It was Chris Dillow's excellent post about cognitive biases in football that finally got me around to writing it.

The dismal performance of India's cricketers has been variously attributed to the IPL, England's emergence as the successor to the great West Indian and Australian teams of the past forty years, India's "club-side" like bowling attack and the inability of its batsmen to cope with the swinging ball, and so on.

Without going into the merits of each of these, if we view this performance in its true perspective - the sheer magnitude of the defeat, the recent relative performances of both teams, and an individual assessment of the players from both sides who played in the series - none of the aforementioned explanations appear convincing.

Consider these. India's 4-0 defeat, apart from being its worst ever against England in 15 series there, was also its worst test loss margin ever. In fact, even the great West Indies, with all its great bowlers and batsmen, or Australia of the last two decades, could not inflict a test defeat of such magnitude, even in series with more tests. Undoubtedly much weaker Indian teams, both in batting and bowling, have performed more creditably against far better teams than the current English team, even in conditions at least as adverse as those in the recent series. A logical performance-based explanation would lead us some way toward the conclusion that either the current English team is among the best cricket teams ever, or conversely that this Indian team is among the worst ever assembled by the country!

In the build-up to the series, both teams had equally impressive recent test records. If anything, India's performance was superior, both in that its successes had come over a longer period of time and in that they were against slightly better opposition. The No 1 test ranking was a just reflection of India's superiority. Apart from its big success in Australia last winter, England's recent victories have been against the lesser teams (nothing in Pakistan, Sri Lanka, India, and South Africa).

Interestingly, it needs to be borne in mind that the same set of English bowlers has played in all of the last three test series between the two countries, two of which were in England; India won two of those series and the other was drawn. James Anderson (in four) and Stuart Broad (in two) led the English bowling attack on these tours. This brings us to the English bowlers themselves. While Anderson is arguably one of the finest exponents of swing bowling in friendly conditions today, his place among the greats of swing bowling is questionable. Stuart Broad's place was itself under threat, though it can be argued that his best years may be ahead. Take out the performance of the last one year, and the averages speak for themselves.

Man to man, given that Graeme Swann was hardly a factor in the first three tests, the South African attack of Dale Steyn and Morne Morkel, against whom the same Indian batting line-up fared with great distinction less than a year back, is far superior. In terms of every imaginable measure of a bowler's art, Dale Steyn is far superior to James Anderson. Morne Morkel is similarly superior to Stuart Broad. Although England's third seamer, Chris Tremlett or Tim Bresnan or Steve Finn, is superior to South Africa's, the added presence of Jacques Kallis evens things up on this front. So, if South Africa's bowlers are superior to the English bowlers, there is something amiss about attributing the extraordinary English performance to the excellence of their bowlers.

I have three explanations for the triumphalism of English cricket writers:

1. Statistical coincidence - As Chris Dillow writes, events occasionally turn out such that one team enjoys the rare confluence of all fortunate factors, while the other team suffers the exact opposite. England had all its players playing at the peak of their form and free from injuries (and given their otherwise normal averages, it cannot be denied that England enjoyed one of the rare runs of all players being in great form), conditions favorable to its bowlers, its batsmen facing a weak and demoralized bowling attack, and so on. India had exactly the opposite - the injury toll (even allowing for the IPL workload) and the near-complete batting failure being inexplicable.

And once the coincidence of factors aligns in such a comprehensive manner, and one team starts to suffer, its confidence is likely to drain just as fast as the other team's rises. A self-fulfilling spiral is triggered off. A statistical outlier will then get mistaken for something else.

2. We live in the present - The stellar performance of the same English bowlers, who not long ago were whacked by all and sundry in the cricket World Cup (admittedly there is a difference between tests and one-dayers, but not so much as to merit such a differential - Morkel and Steyn hardly suffered such consistent punishment even in one-dayers), raises the issue of how we should assess modern-day cricketers.

The amount of cricket modern cricketers play means that their careers are more likely to be short and spliced with injury or fatigue interruptions. In the circumstances, there is a strong case for assessing players based on their current form. Given the competition and standards, even lesser mortals (say, Tim Bresnan or Chris Tremlett) playing cricket today are likely to have a streak of great form for some period of time. Then the law of averages catches up and they fall back to their mean career trajectory. Only the great players, and there appear to be only a handful of true greats playing now, can sustain their high-level performance for years.
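A toy simulation of that law-of-averages point, with made-up numbers: pick the bowlers with the best recent averages, and their next-period averages drift back toward their underlying ability (for bowling averages, lower is better).

    import random

    random.seed(1)
    players = [{"ability": random.gauss(30, 3)} for _ in range(50)]
    for p in players:
        p["recent"] = p["ability"] + random.gauss(0, 5)  # a lucky (or unlucky) year
        p["next"] = p["ability"] + random.gauss(0, 5)    # independent luck next year

    hot = sorted(players, key=lambda p: p["recent"])[:10]  # ten best recent averages
    print(sum(p["recent"] for p in hot) / 10)  # flattered by luck, e.g. ~24
    print(sum(p["next"] for p in hot) / 10)    # back toward the mean, e.g. ~28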

3. Availability bias - The 3-1 victory over Australia in 2010-11 was easily England's greatest cricketing achievement in nearly four decades, if not more. It was preceded and succeeded by consistent performances by the team, albeit, as aforementioned, against the weaker teams. The whitewash of the world No. 1 Indian team on top of all this naturally reinforced the positive feeling about the team.

This is classic availability bias, wherein the most recent recollections of a team's performance get disproportionate weight in the overall assessment of the team. The immediacy of this string of successes, amplified manifold by modern media coverage, gave rise to the impression of an all-conquering team.

None of this is to denigrate the English achievement, nor to condone the pathetic performance of India. It is only to lend a sense of perspective to the emotionally charged reporting that dominated coverage of England's richly deserved triumph.

Monday, 03 October 2011

Great evolution thinkers you should know about

If you follow a paleo diet, you follow a diet that aims to be consistent with evolution. This is a theory that has undergone major changes and additions since Alfred Russel Wallace and Charles Darwin proposed it in the 1800s. Wallace proposed it first, by the way, even though Darwin’s proposal was much more elaborate and supported by evidence. Darwin acknowledged Wallace's precedence, but received most of the credit for the theory anyway.

(Alfred Russel Wallace; source: Wikipedia)

What many people who describe themselves as paleo do not seem to know is how the theory found its footing. The original Wallace-Darwin theory (a.k.a. Darwin’s theory) had some major problems, notably the idea of blending inheritance (e.g., blue eye + brown eye = somewhere in between), which led it to be largely dismissed until the early 1900s. Ironically, it was the work of a Catholic priest that provided the foundation on which the theory of evolution would find its footing, and evolve into the grand theory that it is today. We are talking about Gregor Johann Mendel.

Much of the subsequent work that led to our current understanding of evolution sought to unify the theory of genetics, pioneered by Mendel, with the basic principles proposed as part of the Wallace-Darwin theory of evolution. That is where major progress was made. The evolution thinkers below are some of the major contributors to that progress.

Ronald A. Fisher. English statistician who proposed key elements of a genetic theory of natural selection in the 1910s, 1920s and 1930s. Fisher showed that the inheritance of discrete traits (e.g., flower color) described by Gregor Mendel has the same basis as the inheritance of continuous traits (e.g., human height) described by Francis Galton. He is credited, together with John B.S. Haldane and Sewall G. Wright, with setting the foundations for the development of the field of population genetics. In population genetics the concepts and principles of the theories of evolution (e.g., inheritance and natural selection of traits) and genetics (e.g., genes and alleles) have been integrated and mathematically formalized.
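Fisher's insight is easy to see numerically. Below is a minimal sketch (mine, not Fisher's; all parameter values are hypothetical) showing that when a continuous trait is the sum of many small discrete Mendelian contributions, its distribution across a population comes out approximately normal:

import numpy as np

rng = np.random.default_rng(42)
n_individuals = 10_000
n_loci = 100   # hypothetical number of loci contributing to the trait
p = 0.5        # allele frequency at each locus

# Each individual carries 2 alleles per locus; each "plus" allele adds one unit.
allele_counts = rng.binomial(2, p, size=(n_individuals, n_loci))
trait = allele_counts.sum(axis=1)

# The sum of many discrete contributions is approximately Gaussian
# (mean ~ 2 * n_loci * p), even though each locus is purely Mendelian.
print(trait.mean(), trait.std())

Each locus on its own behaves exactly like Mendel's discrete factors; only their sum looks like Galton's smooth bell curve.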

John B.S. Haldane. English geneticist who, together with Ronald A. Fisher and Sewall G. Wright, is credited with setting the foundations for the development of the field of population genetics. Much of his research was conducted in the 1920s and 1930s. Particularly noteworthy is the work by Haldane through which he mathematically modeled and explained the interactions between natural selection, mutation, and migration. He is also known for what is often referred to as Haldane’s principle, which explains the direction of the evolution of many species’ traits based on the body size of the organisms of the species. Haldane’s mathematical formulations also explained the rapid spread of traits observed in some actual populations of organisms, such as the increase in frequency of dark-colored moths from 2% to 95% in a little less than 50 years as a response to the spread of industrial soot in England in the late 1800s.
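To get a feel for the strength of selection such a change implies, here is a back-of-the-envelope sketch (mine, using a simple haploid model rather than Haldane's actual diploid calculation; one generation per year is assumed). If the odds of the dark form grow by a factor of (1 + s) each generation, solving for s gives:

import math

p0, p1, generations = 0.02, 0.95, 50
odds_ratio = (p1 / (1 - p1)) / (p0 / (1 - p0))
s = odds_ratio ** (1 / generations) - 1
print(round(s, 3))  # ~0.15, roughly a 15% fitness advantage per generation

A per-generation advantage of that magnitude is enormous by the standards of most traits, which is why this case became such a celebrated illustration of rapid natural selection.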

Sewall G. Wright. American geneticist and statistician who, together with Ronald A. Fisher and John B.S. Haldane, is credited with setting the foundations for the development of the field of population genetics. As with Fisher and Haldane, much of his original and most influential research was conducted in the 1920s and 1930s. He is credited with introducing the inbreeding coefficient, which relates to the occurrence of identical copies of genes in individuals, and with pioneering methods for the calculation of gene frequencies among populations of organisms. The development of the notion of genetic drift, where some of a population's traits result from random genetic changes instead of selection, is often associated with him. Wright is also considered to be one of the pioneers of the development of the statistical method known as path analysis.
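Genetic drift is simple to demonstrate. The sketch below (mine; the population size and run length are arbitrary) follows the spirit of the Wright-Fisher model named after him: each generation's gene copies are a random binomial draw from the previous generation's allele frequency, so with no selection at all the frequency wanders until the allele is either lost or fixed:

import numpy as np

rng = np.random.default_rng(0)
N = 100   # diploid population size, so 2N gene copies per generation
p = 0.5   # starting allele frequency

for generation in range(10_000):
    copies = rng.binomial(2 * N, p)  # random sampling of gametes
    p = copies / (2 * N)
    if p in (0.0, 1.0):              # lost or fixed purely by chance
        print("Allele", "fixed" if p == 1.0 else "lost", "at generation", generation)
        break

Run it a few times with different seeds: the allele sometimes fixes and sometimes disappears, purely by chance, and it does so faster in smaller populations.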

Theodosius G. Dobzhansky. Ukrainian-American geneticist and evolutionary biologist who migrated to the United States in the late 1920s, and is believed to have been one of the main architects of the modern evolutionary synthesis. Much of his original research was conducted in the 1930s and 1940s. In the 1930s he published one of the pillars of the modern synthesis, a book titled Genetics and the Origin of Species. The modern evolutionary synthesis is closely linked with the emergence of the field of population genetics, and is associated with the integration of various ideas and predictions from the fields of evolution and genetics. In spite of his devotion to religious principles, Dobzhansky strongly defended Darwinian evolution against modern creationism. The title of a famous essay written by him is often cited in modern debates between evolutionists and creationists regarding the teaching of evolution in high schools: Nothing in Biology Makes Sense Except in the Light of Evolution.

Ernst W. Mayr. German taxonomist and ornithologist who spent most of his life in the United States, and is believed, like Theodosius G. Dobzhansky, to have been one of the main architects of the modern evolutionary synthesis. Mayr is credited with the development in the 1940s of the most widely accepted definition of species today, that of a group of organisms that are capable of interbreeding and producing fertile offspring. At that time organisms that looked alike were generally categorized as being part of the same species. Mayr served as a faculty member at Harvard University for many years, where he also served as the director of the Museum of Comparative Zoology. He lived to the age of 100 years, and was one of the most prolific scholars ever in the field of evolutionary biology. Unlike many evolution theorists, he was very critical of the use of mathematical approaches to the understanding of evolutionary phenomena.

William D. Hamilton. English evolutionary biologist (born in Egypt) widely considered one of the greatest evolution theorists of the 20th Century. Hamilton conducted pioneering research based on the gene-centric view of evolution, also known as the “selfish gene” perspective, which is based on the notion that the unit of natural selection is the gene and not the organism that carries the gene. His research conducted in the 1960s set the foundations for using this notion to understand social behavior among animals. The notion that the unit of natural selection is the gene forms the basis of the theory of kin selection, which explains why organisms often will instinctively behave in ways that will maximize the reproductive success of relatives, sometimes to the detriment of their own reproductive success (e.g., worker ants in an ant colony).
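Kin selection is usually summarized by what is now called Hamilton's rule, which in its standard textbook form (the notation here is mine, not a formula from this post) says that an altruistic trait can spread whenever

\( r \, b > c \)

where r is the genetic relatedness between actor and recipient, b is the reproductive benefit to the recipient, and c is the reproductive cost to the actor. The worker-ant example fits this neatly: unusually high relatedness among sisters makes the left-hand side large enough to offset even the total loss of a worker's own reproduction.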

George C. Williams. American evolutionary biologist believed to have been a co-developer in the 1960s, together with William D. Hamilton, of the gene-centric view of evolution. This view is based on the notion that the unit of natural selection is the gene, and not the organism that carries the gene or a group of organisms that happens to share the gene. Williams is also known for his pioneering work on the evolution of sex as a driver of genetic variation, without which a species would adapt more slowly in response to environmental pressures, in many cases becoming extinct. He is also known for suggesting possible uses of human evolution knowledge in the field of medicine.

Motoo Kimura. Japanese evolutionary biologist known for proposing the neutral theory of molecular evolution in the 1960s. In this theory Kimura argued that one of the main forces in evolution is genetic drift, a stochastic process that alters the frequency of genotypes in a population in a non-deterministic way. Kimura is widely known for his innovative use of a class of partial differential equations, namely diffusion equations, to calculate the effect of natural selection and genetic drift on the fixation of genotypes. He developed widely used equations to calculate the probability of fixation of genotypes that code for certain phenotypic traits due to genetic drift and natural selection.
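For the curious, Kimura's best-known result can be stated compactly. In the standard diffusion approximation from the population genetics literature (not a formula quoted in this post), the probability that an allele at initial frequency p, with selection coefficient s, eventually fixes in a diploid population of effective size N_e is

\( u(p) = \frac{1 - e^{-4 N_e s p}}{1 - e^{-4 N_e s}} \)

In the neutral limit, as s approaches zero, this reduces to u(p) = p: a new neutral mutation, present initially as 1 copy out of 2N, fixes with probability 1/(2N), which underlies the neutral theory's predictions about rates of molecular evolution.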

George R. Price. American geneticist known for refining in the 1970s the mathematical formalizations developed by Ronald A. Fisher and William D. Hamilton, and thus making significant contributions to the development of the field of population genetics. He developed the famous Price Equation, which has found widespread use in evolutionary theorizing. Price is also known for introducing, together with John Maynard Smith, the concept of the evolutionarily stable strategy (ESS). The ESS notion itself builds on the Nash Equilibrium, named after its developer John Forbes Nash (portrayed in the popular Hollywood film A Beautiful Mind). The concept of the ESS explains why certain evolved traits spread and become fixed in a population.
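The Price Equation is compact enough to state here in its standard form (again, my notation, not a formula from the original post). For a trait with population mean \bar{z}, individual values z_i, and individual fitnesses w_i with mean \bar{w}:

\( \bar{w} \, \Delta \bar{z} = \mathrm{Cov}(w_i, z_i) + \mathrm{E}(w_i \, \Delta z_i) \)

The covariance term captures selection on the trait, and the expectation term captures transmission bias (systematic change between parent and offspring). Part of the equation's appeal is that it holds by definition, without assumptions about genetics or population structure.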

John Maynard Smith. English evolutionary biologist and geneticist credited with several innovative applications of game theory (which is not actually a theory, but an applied branch of mathematics) in the 1970s to the understanding of biological evolution. Maynard Smith is also known for introducing, together with George R. Price, the concept of the evolutionarily stable strategy (ESS). As noted above, the ESS notion builds on the Nash Equilibrium, and explains why certain evolved traits spread and become fixed in a population. The pioneering work by John Maynard Smith has led to the emergence of a new field of research within evolutionary biology known as evolutionary game theory.
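The canonical example of an ESS is Maynard Smith's Hawk-Dove game. Below is a minimal sketch (mine; the payoff values V and C are hypothetical, and the baseline fitness is just a device to keep payoffs positive) in which replicator dynamics drive the share of Hawks to the mixed ESS p* = V/C when the cost of fighting exceeds the value of the prize:

V, C = 4.0, 6.0   # value of the contested resource, cost of injury (C > V)
baseline = 10.0   # baseline fitness; keeps all payoffs positive
p = 0.1           # initial fraction of Hawks in the population

for _ in range(2000):
    f_hawk = baseline + p * (V - C) / 2 + (1 - p) * V
    f_dove = baseline + (1 - p) * V / 2
    mean_fitness = p * f_hawk + (1 - p) * f_dove
    p = p * f_hawk / mean_fitness  # replicator dynamics update

print(round(p, 3), "vs predicted ESS of V/C =", round(V / C, 3))  # both ~0.667

At p* = V/C neither pure strategy can invade, which is exactly the "uninvadability" that defines an ESS.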

Edward O. Wilson. American evolutionary biologist and naturalist who coined the term “sociobiology” in the 1970s to refer to the systematic study of the biological foundations of social behavior of animals, including humans. Wilson was one of the first evolutionary biologists to convincingly argue that human mental mechanisms are shaped as much by our genes as they are by the environment that surrounds us, setting the stage for the emergence of the field of evolutionary psychology. Many of Wilson’s theoretical contributions in the area of sociobiology are very general, and apply not only to humans but also to other species. Wilson has been acknowledged as one of the foremost experts in the study of ants’ and other insects’ social organizations. He is also known for his efforts to preserve earth’s environment.

Amotz Zahavi. Israeli evolutionary biologist best known for his widely cited handicap principle, proposed in the 1970s, which explains the evolution of fitness signaling traits that appear to be detrimental to the reproductive fitness of an organism. Zahavi argued that traits evolved to signal the fitness status of an organism must be costly in order to be reliable. An example is the large and brightly colored train evolved by males of the peacock species, which signals good health to the females of the species. The male peacock’s train makes it more vulnerable to predators, and as such is a costly indicator of survival success. Traits used for this type of signaling are often referred to as Zahavian traits.

Robert L. Trivers. American evolutionary biologist and anthropologist who proposed several influential theories in the 1970s, including the theories of reciprocal altruism, parental investment, and parent-offspring conflict. Trivers is considered to be one of the most influential living evolutionary theorists, and is a very active researcher and speaker. His most recent focus is on the study of body symmetry and its relationship with various traits that are hypothesized to have evolved in our ancestral past. Trivers’s theories often explain phenomena that are observed in nature but are not easily understood based on traditional evolutionary thinking, and in some cases appear to contradict that thinking. Reciprocal altruism, for example, is widely observed in nature and involves one organism benefiting another, genetically unrelated organism without any immediate gain to itself (e.g., vampire bats regurgitating blood to feed non-kin).

There are many other, more recent contributors who could arguably be included in the list above. Much recent progress has been made in interdisciplinary fields that could be seen as new fields of research inspired by evolutionary ideas. One such field is evolutionary psychology, which emerged in the 1980s. New theoretical contributions tend to take some time to be recognized, though, as will be the case with ideas coming out of these new fields, because new theoretical contributions are invariably somewhat flawed and/or incomplete when they are originally proposed.