RCTs: Some “how matters” advice for donors

Karlan and Appel finish “More Than Good Intentions: How a New Economics Is Helping to Solve Global Poverty” with an entreaty to donors to vote with their money.

As my readers know, how matters. Therefore, my responsibility is to keep beating that drum by sharing my own recommendations for donors. (See also my previous related posts, “RCTs: A band-aid on a deeper issue?” and “RCTs: Much to be said.”)

The authors and readers of Karlan and Appel’s book might see these recommendations as just a collection of caveats already addressed in the book and elsewhere, but they cannot be highlighted enough within the current discussions of RCTs and aid effectiveness.

1. Be smart about what RCTs cannot tell us. Allow space for the unseen, complex, and long-term consequences of aid investments to be discovered through accompanying and complementary research methods. IPA discusses this knowledgeably in its blog post, “Helping Ugandan families save for school fees”:

“The ability of randomized control trials to robustly demonstrate causal correlations between variables and thus determine the effects of programs is not found in any other form of evaluation. This is not to say that a randomized control trial cannot benefit from other forms of evaluation. As regards the primary school based savings program, with many different types of information and evaluation we are able to implement a better program and are better able to design and understand the results of the randomized control trial study.”

2. In pursuing value for money, realize that some investments have higher returns over the longer term. It’s therefore vital that donors balance their investment portfolios with programs that not only serve the poor directly, but that also challenge the unjust systems and structures that contribute to (and some would say are responsible for) poverty.

3. Understand that the issues that create poorly conceived, poorly executed, and poorly interpreted evaluations and extractive research methods within the aid industry are the same issues that affect RCTs as a methodology. As Edward Carr warns, “what I am seeing in the RCT4D world right now [is] really rigorous data collection, followed by really thin interpretations of the data.”

4. Consider carefully the moral issues behind extending “services” to some groups and not others, to ensure that this is being done as fairly and transparently as possible. Innovations for Poverty Action encourages us also to consider the morality of wasted aid dollars. However, in a program I worked on in Zimbabwe, we found that uniforms were a significant determinant of a child’s psychosocial health and performance at school. What are the very real emotional consequences when some kids are singled out to receive uniforms, and others not, in an RCT?

5. RCTs offer important clues about which kinds of program interventions can create impact, but it’s also important to ask: are universal lessons even possible, or desirable, in development? Arvind Subramanian of the Center for Global Development speaks to RCTs’ “narrow focus and applicability, and hence non-generalizability.” After all these years, aid practitioners should not forget the importance of context-specific aid programs that are responsive to local communities’ identified needs. It’s important to guard against the attitude that “poor people don’t know what’s good for them.”

6. Identify where “tweaking, tinkering, and testing” are occurring naturally, albeit perhaps more slowly than through RCTs. In my experience, local, indigenous organizations that are “of the community” (as opposed to serving a community) are incredibly responsive and engaged in real-time learning. Rather than using any theory or methodology, local leaders read trends through observation of what’s happening on the ground, which in turn drives intuition, much like entrepreneurs. This is due to their personal and collective stake in seeing change at a local level.

When a grassroots organization is rooted in its community, evidence of grantee learning and adaptation can itself serve as evidence of effectiveness. Perhaps you as a donor don’t have to “know” if you ensure that the people with a much better chance of “knowing” are the ones you’re investing in. (You can read more at “Small is beautiful…grants, that is.”)

7. Invest in ideas, but also in the people who have them, whose expertise and critical thinking are grounded in their day-to-day experience. Probe and develop new questions to determine who the drivers are, not only of RCTs, but of aid programs in general. Objective processes for learning the answers to these questions are also vital to aid effectiveness.

8. The bottom line? Recognize that this ephemeral life is governed by a multitude of forces. Control is an illusion. Scientists are wrong all the time. Insistence on certainty and room for possibility can develop into an inverse relationship. Don’t be satisfied with poor results, but also don’t be afraid to embrace the mystery.



