From the perspective of a blogger who has taken swipes at the randomistas over external validity a few times, I think much of the pushback on the external validity front has less to do with the research itself and more with how the research is trumpeted outside the academic sphere – there haven’t been any NYT articles about how eye-tracking experiments herald the end of poverty.

Aid Watch addresses a similar problem (unsurprisingly) in the new books touting what we have learned from randomized trials: while acknowledging that development workers have perhaps been a tad over-exuberant, the authors then draw universally applicable conclusions from their own work, perpetuating the cycle. Most of Easterly’s review, however, is much more positive than that one take-away.

Here is the Economist being similarly over-optimistic about the prospects of well-meaning amateurs solving the world’s problems by developing cheap housing with “the basics of civilized life,” which apparently include “solar panels.” A new entry for MDG 2.0?
Two: replicability. Given our publication biases, the incentives just aren’t there for economists to check each other’s work beyond class exercises. A few replications do get published, though, when they overturn previous results. Clemens recommends:
Well-known examples include the work of Bill Easterly, Ross Levine, and David Roodman attempting to replicate a famous study on the effects of foreign aid, David Albouy trying to reproduce an influential study of how institutions shape development, and Jesse Rothstein attempting to replicate the results of a famous study on the effects of school choice.
Will the Journal of Development Effectiveness bring more examples?
Three: significance. Matt at Aid Thoughts also reminds us to check for economic significance, not just statistical significance:
Let’s put this into perspective: a one-standard-deviation increase in the food price index raises the number of riots in a country in a given year by 0.0143, and the number of demonstrations by 0.0175.
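The distinction is easy to see with a quick simulation (all numbers here are stylized assumptions for illustration, not taken from the underlying paper): with a large enough sample, even a true effect of about 0.014 riots per standard deviation comes out as highly statistically significant, while remaining economically trivial.

```python
import math
import random

random.seed(0)

# Hypothetical setup: a tiny true effect, a big sample.
n = 400_000
true_beta = 0.0143  # assumed: riots per one-SD rise in the food price index

x = [random.gauss(0, 1) for _ in range(n)]              # standardized price index
y = [true_beta * xi + random.gauss(0, 1) for xi in x]   # stylized riot outcome

# Simple bivariate OLS: slope, standard error, t-statistic.
mx = sum(x) / n
my = sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
beta_hat = sxy / sxx

residuals = [yi - my - beta_hat * (xi - mx) for xi, yi in zip(x, y)]
sigma2 = sum(r ** 2 for r in residuals) / (n - 2)
se = math.sqrt(sigma2 / sxx)
t_stat = beta_hat / se

print(f"estimated effect: {beta_hat:.4f} riots per SD")
print(f"t-statistic: {t_stat:.1f}")
# The t-statistic clears any conventional significance threshold,
# yet the magnitude -- roughly 0.014 extra riots per year -- is
# negligible next to even a single riot.
```

The point of the exercise: statistical significance answers "is the effect distinguishable from zero?", while economic significance asks "is it big enough to matter?", and a large n can deliver the first without the second.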