Bill Easterly apparently wants successful and failing aid projects to be identified more quickly than traditional monitoring and evaluation allows. That is according to Tom Murphy’s report on the DRI annual conference from back in March.* Tom commented:
“… strong and open monitoring and evaluation practices can ensure to the trial and error of the Easterly ‘searcher.’” (sic)
I also seem to recall – but cannot now find the right quote or link – the suggestion that aid practitioners could seek inspiration from how success and failure are determined in the business world. (The DRI debates considered whether an RCT would be a suitable evaluation mechanism for the iPhone game Angry Birds – not the best example imho!) (Update 27/06/12: found that link; it was Ian Thorpe blogging here.)
As someone who played a major role in setting up a local NGO and then leading it for several years, I guess I can be classed as something of a social entrepreneur. Thus I have a few thoughts on how this can or cannot work in practice.
Like too many organisations in the development and conservation sectors, I wouldn’t put M&E down as one of our strong points. We’re not terrible at it, and we’re getting better, but it wouldn’t be too difficult to poke plenty of holes in what we’ve done so far.
As ever with M&E, getting the budget balance right is tricky. If we had as much money as the MVP we could do some amazing M&E. (Hopefully a lot better than the MVP has achieved in practice!) But we do not have anything like that amount. Plus we have a problem that many of the impact indicators we are targeting take a long time to move in a positive direction (e.g. biodiversity). So we have to wait quite a while just to see whether our monitoring programmes are capable of detecting the kind of change we are seeking to achieve.
These challenges are universal. More particularly, taking an entrepreneurial approach involves a lot of flexibility and adaptation to changing circumstances. Most people seem to agree this kind of approach is a good thing. Which is all fine and dandy, but it makes it hard to set up your M&E baselines, because you are continually adapting how the project will work, and thus what impacts it will have. As the project matures things settle down, but for our earliest pilot sites the opportunity to establish a firm M&E baseline has long gone.
So how do we track our own progress? Mostly I would say through a number of milestones, which may or may not be well set out on paper. Conceptually we started with a pretty good idea of what we wanted to achieve. As things progressed we developed a number of internal targets (usually with very flexible timelines if they are specified at all) against which we could measure ourselves.
The analogy with modern business is perhaps something like Facebook. At first it was just an idea – something cool to do – then it became an enterprise, but one not so much focused on bottom-line impact (profit) as on other metrics and milestones (e.g. market share and time spent on site by users). Only latterly has the focus at Facebook shifted increasingly to monetising their substantial achievements. (If recent share performance is anything to go by, this is proving tricky.)
Similarly, our own project is being subjected to increasingly robust M&E assessment, as indeed it should be. But, to my mind, most M&E approaches and entrepreneurial innovation apply to quite different stages of project development. This is also why starting small is so important: it allows time for KISI. Alas, too many people in conservation and development are in such a rush that they want to spend $100 million first and ask questions later.
* Yes, I’m finally back blogging again. I have a number of posts queued up in my head responding to the news of the last couple of months. Hopefully it won’t all seem too much like yesterday’s left-overs.
Posted by Justin Morgan on June 7, 2012 at 11:54 am
Good reading, and something that all of us in the NGO world are tackling. I do, however, think shorter learning times are necessary for working out what is working and what is not – we are all living and working in an evolutionary world where things are tried; some work and survive, and some don’t and stop (or at least should stop… we often don’t see and learn from our own actions, let alone the actions of others). Our job in part is to speed up the evolutionary process, getting the good to the top quickly and stopping what is not working. Easier said than done, especially when we look at outcomes rather than outputs. Over the last year I have been overseeing a programme that uses outcome mapping and relatively short learning cycles (12 months). We expect that some things, even when well thought through at the start and well delivered, will not change anything. Accepting this reality is a good start, so at least you look for it. And we keep what we do see is changing things and try to scale it up. Even in this, we know some things we drop may turn into development winners (so we monitor things after we stop them), and things we think are working may also be longer-term failures. At least by starting to look for successes and failures, I think we have a better chance of identifying success and failure more quickly.