Got ‘Em: An Evaluation Story

A wannabe hot-shot technical advisor, I had just enough knowledge to be useful and just enough naïveté to be dangerous.

I was new to the country office and eager to demonstrate my monitoring and evaluation (M&E) prowess.

The task at hand was completing a mid-term review of a “problematic” implementing partner whose work focused on HIV and food security programming. There was no baseline, but go for it.

Oh, and this particular country office needed to put partners “on notice.” (No wonder M&E is associated with scrutiny, policing, fear and confusion by implementing and local organizations.) The country office’s partnerships in general had been weak since the reign of an insane country director. The message from management was that it was time to “bring down the hammer” and “whip these partners into shape.” (Note the courtroom sentencing and slavery origins of these idioms.)

So my two colleagues and I traveled to the partner’s office and set up camp. Three full days of key informant interviews and focus group discussions. If we couldn’t go quantitative, we were going to make the best of what we could learn, making the participatory element of the evaluation our focus.

What we found was not surprising. Some seeds and fertilizers distributed here and there, but late. Some HIV training “sessions” but no follow-up for voluntary counseling and testing or orphan support. In essence, “they [the partner staff] come from time to time” but I could not detect any deep relationships between the organization and the people they were serving from my vantage point.

And at the end of every day, as these results came in, I would post them on flip chart paper [an aid worker’s most important tool – another thing I wish I had learned in grad school] in the entrance hallway of the office.

As my preconceived expectations for the evaluation were being met, I wanted to make the evaluation “findings” as transparent as possible. When we observed that the organization’s driver and vehicle were being dispatched each day to pick up and drop off the director’s granddaughter from school, it was the cherry on top.

We went back to our office, flip charts in tow, to compile the report. The evaluation report flowed as I wrote it the next week, my “got ’em” mentality reigning supreme. It was scathing, but I felt it was honest, and given the sharing of the preliminary results, it should hardly have come as a surprise.

But it did.

The implementing partner requested a meeting. They felt called out and wanted a chance to respond.

I went into the meeting, confident that the report might have been a bitter pill to swallow but that it did indeed represent what we had found on the ground.

I honestly don’t remember much about that meeting. I know that because it was my strong written words that were at issue, I had enough sense to be fairly silent.

The partner staff described many other dynamics and circumstances at play. The partner didn’t make excuses, but rather shared much more of the story than our three days at the office revealed.

In the end, there was an agreement to add details and “tone down” the language of the report without altering any of the findings.

Looking back on this experience now, my hubris in writing that report with such a “gotcha” mentality is regrettable. But in our lives, in our relationships, it’s often the breakdowns and mistakes that make us more sure of who we are, that remind us of our connections to each other and of what’s most important.

I left the country office a year later and I don’t, in the end, know if the partner made changes to their programs or to their organization.

But what I do know now is that when you’re looking for what’s wrong, you’re certainly going to find it.

***


7 Comments

  1. Matt L

    Jennifer,
    This is a great article and one that I think everyone who has been engaged to conduct an evaluation should read.

    In an early career evaluation I was commissioned to write, I fell prey to a similar desire to “make my mark” and was called out on the language.

    One early reviewer told me that if he was having an “arse kicking” party, I would be a “guest of honour.” Naturally, at the time, I took it as a compliment... wrongly, of course!

    Anyway, after many days of rewrites with a thesaurus at hand, all were happy. Though the message remained the same, the language was less confrontational.

    Great article – well done

    Matt

  2. Lisa

    Thanks for the honest story. We have all been trapped in that paradigm of evaluation where experts know best and need to be critical. When we question our own worth, we tend to get even more critical. But when someone does it to us – ouch!
    I have tried to shift to empowerment evaluation, where the actors evaluate themselves using agreed-upon metrics while I facilitate and record. The sense of community really changes the dynamics. A few foundations are promoting it in their RFPs, but most funders seem to test evaluators’ ability to escape the old paradigm. It’s tough.

  3. Jerome Caluyo

    I take my hat off to the writer. Monitoring and evaluation should be a venue for learning rather than an opportunity to name and shame; in the latter case, no learning will take place.

    I have just gotten through a programme evaluation, only 12 days after I arrived in country to inherit programme management from a very good predecessor. In fact, I see her work as wonderful and full of accomplishments despite a security-compromised environment like Afghanistan. I felt such surging anger at how the evaluator framed his questions, to the point of being insulting and insensitive and making the programme implementers appear stupid, that I got the urge to punch him straight in the face.

    Thank God, I did not, and I was able to hold on to my senses until he sent us his evaluation report, which I dissected to its bare anatomy, highlighting his ignorance, his lack of proper evaluation decorum, his shaky knowledge of evaluation methodologies, and his insensitivities.

    He got his turn being evaluated as an evaluator, and I hope he learns.

    Thanks for these thoughts Jennifer.

    Evaluation is both an art and a science: an art because it should see actions through the eye of an artist, and a science because it should measure results with the instruments of the scientist. Capture the whole in constructive language. Miss one and you lose the whole picture.


  5. So Anaïs Nin said that quote? I’ve used it in each of my last Unitarian sermons in different contexts. (www.uufcc.com) I first saw it as anti-war stenciled graffiti in State College, PA, ca. 2003. It stuck with me.

  6. GDM

    I really enjoyed this, as it’s a lesson learned shared with others by discussing the mistakes in the approach. So often, blogs are focused on everything we do wrong as development workers or how culturally unaware we are. Keep up the writing!
