161 Indicators, and then what?

One country, seven districts, seven international NGOs, five million dollars (excluding food commodity monetization), and…

161 indicators.

How do you begin to collect, analyze and, more importantly, utilize data on 161 indicators in this scenario?

Got me. And I was the Technical Advisor on Monitoring & Evaluation for this project – ha!

What a beautiful, i.e. funded, proposal it was—concocted by the lead INGOs’ most well-respected, imported-from-headquarters PhDs, without any real, meaningful consultation with implementing partners, i.e. the local organizations who do the actual work with communities, let alone the people they intended to serve.

I’ve been working in monitoring & evaluation (M&E) for over ten years now, and during this time I’ve seen, across the international development sector as a whole, an increasing desperation to find “evidence” of what is often inherently beyond logic and induction—also discussed recently by Rick Davies, Ramaswami Balasubramaniam, Lawrence Haddad, Dennis Whittle, Ben Ramalingam, and Alanna Shaikh, just to name a few. Caroline Preston also reports on “the data dash” of recent years in the philanthropic sector, which too often overlooks the “personal relationships, social networks, family and community dynamics, passion for causes, and other factors” that shape change.

The delusion of thinking you can conquer your world leads you to lose your soul. ~Cornel West

I see the development and philanthropic sectors locked in an increasing fixation on solving the problem of poverty through reductive ways of measurement. However, abstract metrics and experimental design are quite far from the intimate, difficult, and complex factors at play at the community level. Thus I believe it is time to examine our belief that there are technocratic, precise ways of measuring progress, and our habit of making consequential judgments based on these measures. The business sector seems to have a healthier relationship with uncertainty—perhaps something we need to explore further in the development sector.

M&E implemented solely for the purpose of accountability time and again fails to result in improved programming and, in many cases, undermines the effectiveness of the very interventions it is trying to measure. (See the related research paper by Blackett-Dibinga & Sussman.) And the latest trend towards using the “gold standard” of randomized controlled trials is especially troubling when one is talking about community-level initiatives. Imposing such incredibly risk-averse behavior by evaluating every single intervention can most certainly be a drain on the time and scarce resources of people who are in the process of organizing at the local level, let alone the development professionals engaged in supporting them.

As someone who has worked extensively in building the M&E capacity of grassroots organizations in Africa, what I have found is that abstract metrics or research frameworks don’t often help people understand their relationship to improving the well-being of those they serve. Rather than using any theory or logframes, local leaders, as members of a community, read real-time trends via observation of what’s happening on the ground, which, in turn, drives intuition. Most Significant Change, Outcome Mapping, and The Good Enough Guide are examples of alternative approaches to M&E that are better grounded in this reality.

For the past few years, I led the development of an innovative training and mentoring approach to monitoring and evaluation among local indigenous organizations in Lesotho, Zambia and Malawi. The approach is based on a premise that people at the grassroots level have the most expertise in terms of defining and measuring success, based on internal reflection processes of their own values and goals. One mantra at its core—M&E should never detract from the work at hand, which is serving people—kept it grounded in the practicalities of day-to-day work with families and communities. The approach’s success has also come from its focus on making M&E accessible through the “de-technicalization” of M&E concepts and practical exercises that utilized existing data from the organizations themselves, further developing critical reflection skills. (You can see an overview presentation of this training approach here.) Trained groups continue to demonstrate the effectiveness of the approach through their program adaptation, enhanced advocacy, and increased fundraising.

Those who work selfishly for results are miserable. ~Sri Krishna, Bhagavad Gita, The Song of God

Make no mistake. I’m not arguing that more rigorous evaluation techniques do not have their place, especially for larger, publicly-funded projects. I myself am a big data geek. A database full of numbers or a pile of raw stories to play with and I’m in my bliss, identifying trends and constructing potential causal inferences.

But as practitioners, we must always consider the appropriate cost and complexity of evaluation, especially given the size, scope, level of intervention, and length of the program. We must also aim for proportional expectations, so that M&E is a tool for learning and improvement, not just policing. Yet again, how matters.

Yes, let’s pursue and obtain useful data from the ground, but at a scale at which information can be easily generated and acted upon by those we are trying to serve. One hundred sixty-one indicators will surely ensure that it is not.

My hope is that the dominance of quantitative statistical information as the sole, authoritative source of knowledge can continue to be challenged so that we embrace much richer ways of thinking about and understanding development.

***

Related Posts

Beyond the Ribbon Cutting

A New Discipline for Development Practitioners

Rethinking Trust, by Ben Ramalingam

More on Why ‘How Matters’

The Conundrum of Counting Beneficiaries


9 Comments

  1.

    No comments right now; still, I am reading and thinking, just to show my unity.

  2.

    Evidence is not the superlative degree of data.

    There are many techniques out there that deal with qualitative evidence, and they can quite well be built into our logframes and M&E efforts. The question is: why don’t we?

  3.

    Dear,

    We are an association of war survivors, “ARMMK”, which supports orphans through seminars and reflection sessions aimed at building a society without murder, vengeance, or hatred. Visit our website, armmk.st, for information. We would like to work together.

    LUBANGU Jacques, Legal Representative of the association “ARMMK”

    BANGWE-MAKOBOLA / FIZI / SUD-KIVU / RDC

  4. Cath Barker

    Thank you for your very sensible ideas! I wonder if we get bogged down because we are not specific about who M&E is for. In my experience, information about progress that is useful to communities is different from that needed by donors or the implementation team. But our M&E frameworks try to answer all stakeholders’ needs, and so decisions are actually made outside that framework. Just a thought.

  5.

    Indeed, Cath—one of the most important things I try to share with organizations as an M&E consultant is the importance of recognizing that the information needs of a donor are different from those of the organization working on the ground. So often these get conflated, and then the power issues can really distort the best intentions.

  6.

    Francis Rolt pointed me in the direction of this post. I agree entirely. I have seen too many data-driven projects that spent 50,000 to justify 10,000. And also too many research projects, like the BBC’s research into how climate change was reported in Africa, where there is no editorial follow-up to make more effective programmes. I see smaller groups being more effective than some of the larger NGOs.

  7.

    This is really poignant: “the data dash” of recent years in the philanthropic sector “too often overlooks the ‘personal relationships, social networks, family and community dynamics, passion for causes, and other factors’ that shape change.”

    I also think M&E needs to be looked at more from a design perspective than an evaluation perspective. As in, the M&E plan helps to define goals and expected outcomes.

    I write case studies for companies, and I find they tell the story of what’s really happening in the most powerful way. It also gives a voice to the community members, which isn’t captured in M&E tools.
    An example is http://gdmconsultingafrica.com/publications/

  8.

    A question to the author: How did you cut down on the indicators? If you did, which were the first to go, and how did you prioritise them? Thanks

  9.

    Unfortunately, @Gabriella, changing the indicators was not possible at the time, as it would have required a change in the grant agreement, needing approval not only from the donor but from all the international NGOs that made up the consortium. The sheer number of indicators that they had agreed to report on is, in my mind, much riskier than having a more reasonable set of expectations for measurement of outcomes. I did not stay to the end of that project, but I suspect that such a reporting burden across that many layers inevitably ends in “estimated” reporting of project results. I’m working on an upcoming post on this issue. (Hint: It happens more often in aid and philanthropy than anyone cares to admit.)
