M&E – a top-down imposition

Yesterday I promised you some reflections on Pritchett et al.’s working paper on improving monitoring and evaluation. They correctly identify that rigorous impact evaluation works too slowly for most management purposes, costs a lot (putting it beyond many project implementers), and is often highly context-specific and so not readily generalizable (the standard critique of randomized control trials).

Their proposed solution is to insert a second (lower-case) ‘e’ that stands for experiential learning: MeE. In particular, they propose that project managers should have considerable freedom to manage their projects adaptively, and moreover should be encouraged to try different approaches to see what works best, thus avoiding the problem of over-specificity that I highlighted in the quotes I pulled out in yesterday’s post. They suggest this will provide something much closer to the real-time information that project managers need, and will explore different project designs more cost-effectively than establishing a separate RCT for each one. (Which may not be sufficient anyway, as which design is optimal may itself be context-specific.)

It is an excellent paper, and their proposal has a lot to recommend it if you work for a big development agency. But I cannot see it working very well at the small-NGO end of the market where I operate. The problem is not so much specific to experiential learning as common to the whole gamut of impact evaluation as it applies to project design and management amongst small NGOs. If I think about the best small NGOs I know and have worked with, several features are often apparent that reduce the incentives for impact evaluation:

  • Small NGO types tend to be ‘doers’ rather than ‘thinkers’ – given the choice we will nearly always invest more money in implementation than in M&E.
  • Many small NGOs have fairly modest aims which do not need sophisticated M&E to assess.
  • Other small NGOs are the nimble innovator type. They may be iterating too rapidly for the process to be easily captured in M&E, and do not have the resources to iterate in such a systematic manner.
  • Such NGOs have the capacity to learn very rapidly internally for relatively little investment of time and effort; there is no big institutional ship to turn around. Instead, for these small NGOs, new learning leads rapidly to more impact and the potential for more learning.
  • In contrast, clearly analysing and communicating these lessons can involve a significant investment of effort that does little (except perhaps to support fund-raising) to deliver better results on the NGO’s chosen impact bottom line.
  • Thus generating new learning and achieving greater immediate impact can be much cheaper for such NGOs than disseminating lessons already learned.

Donors and small-NGO partners obviously have a role to play in helping offset this tendency (which is not always 100% healthy), but, as I have remarked before, there seems to me to be an inherent contradiction in the calls for both bigger/better M&E and nimbler project implementation in an attempt to mimic the rapid success of internet-era start-ups.

The contradiction becomes more apparent when one realises that while businesses may regularly monitor all manner of variables relevant to their operations (e.g. page hits as well as sales), they always have an instant answer when you ask about their ‘impact’: the size of their profits. No construction of the counterfactual required there!

I also suspect that few aid beneficiaries care much about what an impact evaluation may or may not say so long as they are getting good results from the project itself. Thus it becomes clear that much M&E, and certainly most impact evaluation, is essentially a top-down imposition by donors understandably keen to know the results of their funding, and is at odds with the bottom-up approach many people in development advocate.

So the real question is: do donors and the wider development community get value for money from the impact evaluations they demand? This is a question that Pritchett et al. raise several times. The answer seems to be related to the challenge of scaling up, a relentless pressure in a lot of conservation and development work that I have repeatedly queried (e.g. see here and here). That is, impact evaluation – and Pritchett et al.’s experiential learning – is all about learning how to move from a successful pilot to national programmes and similar projects in other countries.

Here I return to the internet start-up analogy. Did Google get where it is as a result of an impact evaluation? No, it grew organically! If you want more bottom-up development, as this blogger does, maybe the solution is less evaluation and more of a market-based approach in which successful implementers are simply invited to replicate their successes, along the lines I suggested yesterday?

Now before I chuck the whole M&E apparatus overboard, a few basic points need to be made in return. Firstly, and most obviously, claiming ‘success’ is easy when all you need to do is check your bank balance. Determining which pilot projects are successful is not always so straightforward – although not always as difficult as might be supposed – and essentially requires some kind of impact evaluation. Indeed, the converse problem often arises: a new fad gains popularity far faster than evidence of its efficacy – the micro-lending boom comes to mind. And as the classic RCTs on improving educational attainment in Kenya show, sometimes it is not so much about what is successful as about what gives the most success for your money. Indeed, Pritchett et al. lament the demise of project ‘valuation’ and the computation of value-for-money metrics by large development agencies.

I conclude that idealists who want all their development to be 100% bottom up are living in cloud cuckoo land. Even if we dismantled the whole international aid industry, governments would still regularly engage in this sort of thing within their own countries, often under pressure from their own electorates. So if the people want development aid then the paymasters are going to need some evidence on which to base their decisions. Most of all, what this humble blogger would really like to see is donors actually paying attention to these things, instead of continuing to commit large chunks of funding to projects and programmes they know are doomed. Better to over-fund something that has a decent chance of success than to flush your money down the plughole of the utterly unfeasible.

Are donors getting value for money from the impact evaluations they demand? Only if they act on the results!

2 responses to this post.

  1. […] research project is a different animal, and changing the intervention would defeat the purpose.)  Bottom Up Thinking recently published a nicely nuanced take on the […]

  2. Very insightful comments, both on my paper and on NGOs. We hope that MeE is a technique that could be used by NGOs, but it is part of a broader rethink of building organizational capability in the state (http://www.hks.harvard.edu/centers/cid/programs/building_state_capability) (including state-funded agents), and in the end NGOs are just a small part of the overall development process.
