Deaton v Banerjee

In March, Abhijit Banerjee and Angus Deaton, two of the most brilliant development economists of our time, squared off at DRI’s Debates in Development conference.

Round I

Banerjee argued that RCTs force researchers to confront facts that could otherwise be ignored. With RCTs, researchers can test almost any hypothesis, and can no longer cling to the excuse that the data do not exist.

Banerjee described a surprising result from a recent experiment in Hyderabad, India: given access to small loans, small business owners did not invest in growing their own businesses, even though other RCTs have shown that such investments can reap returns of 60-100 percent a year.

Banerjee suggested several explanations. Perhaps the scale is simply too small. Perhaps lack of education constrains the potential of these businesses. But the explanation Banerjee finds most convincing is that for the poor, the returns are huge in percentage terms but small in absolute terms. The owners of these very small businesses think that whatever profits they might earn are unlikely to dramatically change their lives.
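To make the percentage-versus-absolute distinction concrete, here is a rough back-of-the-envelope sketch in Python. The dollar amounts are hypothetical, chosen only to illustrate the point; they are not figures from the Hyderabad study.

```python
# Hypothetical numbers for illustration only (not from the Hyderabad study):
# a tiny business reinvests a small loan at a high annual return.
investment = 50.0        # dollars reinvested in the business
annual_return = 0.80     # 80% return, within the 60-100% range cited above

absolute_gain = investment * annual_return
print(f"Return rate: {annual_return:.0%}")              # huge in percentage terms
print(f"Extra profit per year: ${absolute_gain:.2f}")   # small in absolute terms
```

An 80 percent return sounds spectacular, but on a $50 investment it adds only about $40 a year, which may not feel worth the effort and risk.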

The larger point is that this whole intellectual journey of finding a surprising result, then formulating and testing hypotheses to explain the puzzle, would not be possible without RCTs.

Angus Deaton responded that RCTs are of limited value because they focus on very small interventions that, by definition, work only in certain contexts. It's like designing a better lawnmower (and who wouldn't want that?), unless you're in a country with no grass, or one where the government dumps waste on your lawn. RCTs can help to design a perfect program for a specific context, but there's no guarantee it will work in any other.

Second, RCTs are so highly regarded because people assume that the randomness of the selection eliminates bias. What people don't talk about, Deaton argued, is that there are actually two stages of selection. In the first stage, researchers start with the entire population and choose the group that will, in the second stage, be randomly divided into study and control groups. That first stage is NOT random: it may be determined by convenience or politics, and so the chosen group may not be representative of the entire population. At the same time, the populations studied in RCTs are very small, which means a single outlier in the experimental group can badly distort the estimated effect.
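A minimal simulation sketches the outlier problem. Assume a trial with 25 participants per arm and no true treatment effect; the outcome values below are made up purely for illustration. One unusually successful participant in the treatment arm is enough to shift the estimated average effect noticeably.

```python
import random

random.seed(0)

def estimated_effect(n_per_arm=25, outlier=None):
    """Simulate one small two-arm trial with no true treatment effect
    and return the difference in mean outcomes (treatment - control)."""
    control = [random.gauss(100, 10) for _ in range(n_per_arm)]
    treatment = [random.gauss(100, 10) for _ in range(n_per_arm)]
    if outlier is not None:
        treatment[0] = outlier  # plant one extreme success in the treatment arm
    return sum(treatment) / n_per_arm - sum(control) / n_per_arm

# With no outlier, the estimated effect hovers near zero.
print("no outlier:  ", round(estimated_effect(), 2))

# A single extreme observation inflates the estimated effect,
# even though the treatment does nothing.
print("with outlier:", round(estimated_effect(outlier=400), 2))
```

With 25 observations per arm, the planted outlier adds roughly (400 - 100) / 25 = 12 points to the estimated effect, which swamps the ordinary sampling noise.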

Third, Deaton argued that designing and executing RCTs takes too long and requires too much iteration to be efficient. He recommended trial and error instead, and offered a short course in Angry Birds Economics.

Finally, Deaton described a problem of causality: RCTs may identify a causal connection in one situation, but the cause might be specific to that trial and not a general principle. In a Rube Goldberg machine, flying a kite sharpens a pencil, but kite flying does not normally cause pencil sharpening. In other words, context is everything. And replication in different contexts leads to a variety of different results.

For Deaton, the main problem is that RCTs “get treated differently.” The problems he described make him worry that a lot of bad evidence from RCTs is not receiving the critical judgment it deserves.

The Rebuttals

Just thinking about designing an RCT forces researchers to grapple with causality, responded Banerjee. And Angry Birds-style trial and error isn't a realistic way to create policy.

But RCTs don’t receive the same critical scrutiny as other methods of evaluation, said Deaton. Scholars should take “the halo off the RCT.”

Still haven’t had enough? Watch the entire 90-minute debate, and take a look at the other sessions from this year’s conference here.
