Posts Tagged ‘monitoring’

Social entrepreneurs and M&E

Bill Easterly apparently wants to see faster determination of successful and failing aid projects than is provided by traditional monitoring and evaluation. That is according to Tom Murphy’s report on the DRI annual conference from back in March.* Tom commented:

“… strong and open monitoring and evaluation practices can ensure to the trial and error of the Easterly ‘searcher.’” (sic)

I also seem to recall – but cannot now find the right quote or link – the suggestion that aid practitioners could seek inspiration from how success and failure are determined in the business world. (The DRI debates considered whether an RCT would be a suitable evaluation mechanism for the iPhone game Angry Birds – not the best example imho!) (Update 27/06/12: found that link, it was Ian Thorpe blogging here.)

As someone who played a major role in setting up a local NGO and then leading it for several years, I guess I can be classed as something of a social entrepreneur. Thus I have a few thoughts on how this can or cannot work in practice.

Like too many organisations in the development and conservation sectors, I wouldn't put M&E down as one of our strong points. We're not terrible at it, and we're getting better, but it wouldn't be too difficult to poke plenty of holes in what we've done so far.

As ever with M&E, getting the budget balance right is tricky. If we had as much money as the MVP we could do some amazing M&E. (Hopefully a lot better than the MVP has achieved in practice!) But we do not have anything like that amount. Plus we have a problem that many of the impact indicators we are targeting take a long time to move in a positive direction (e.g. biodiversity). So we have to wait quite a while just to see whether our monitoring programmes are capable of detecting the kind of change we are seeking to achieve.

These challenges are universal. More particularly, taking an entrepreneurial approach involves a lot of flexibility and adaptation to changing circumstances. Most people seem to agree this kind of approach is a good thing. Which is all fine and dandy, but it does make it hard to set up your M&E baselines, because we're continually adapting how the project will work, and thus what impacts it will have. As the project matures things settle down, but for our earliest pilot sites the opportunity to establish a firm M&E baseline has long gone.

So how do we track our own progress? Mostly I would say through a number of milestones, which may or may not be well set out on paper. Conceptually we started with a pretty good idea of what we wanted to achieve. As things progressed we developed a number of internal targets (usually with very flexible timelines if they are specified at all) against which we could measure ourselves.

The analogy with modern business is perhaps something like Facebook. At first it was just an idea – something cool to do – then it became an enterprise, but one focused not so much on bottom-line impact (profit) as on other metrics and milestones (e.g. market share and time spent on site by users). Only latterly has the focus at Facebook shifted increasingly to monetising their substantial achievements. (If recent share performance is anything to go by, this is proving tricky.)

Similarly our own project is being subjected to increasingly robust M&E assessment, as indeed it should be. But, to my mind, most M&E approaches and entrepreneurial innovation apply to quite different stages of project development. This is also why starting small is so important; it allows time for KISI. Alas too many people in conservation and development are in such a rush that they want to spend $100 million first, and ask questions later.

* Yes I’m finally back blogging again. I have a number of posts queued up in my head responding to the news over the last couple of months. Hopefully it won’t all seem too much like yesterday’s left-overs.


Numbering off

Bill Easterly and Laura Freschi lament the tendency to “make up” international development statistics. Though done from the best of intentions the result does rather seem to resemble a counting system that goes: one, two, many … pick a number, any number.

Thankfully, I don't think too much attention is paid to such imaginary numbers at the national and sub-national level, since clearly they bear only a passing relation to reality. But that presents development planners with a new problem: without relevant, up-to-date information how can 'we' plan our next round of interventions?

Leaving aside the question as to who are ‘we’ to be planning anything of the sort, the obvious answer is to fill the data vacuum pronto, and in turn this involves collating a whole load of statistics from the provinces. This generally requires a significant effort, and development organisations respond to this challenge in one of two ways.

One option is to ignore everyone else; go out and collect the information 'we' need and nothing more. Then, since no-one else publishes their data, we won't publish ours. (And hence the endless cycle of household questionnaires asking almost identical questions of people.)

Alternatively 'we' can recognise that many others might be interested in answering similar questions and attempt something rather more systematic. Unfortunately it turns out that everyone has subtly different questions they want answered, and so, as we talk to each new set of wannabe development planners, new questions / dimensions get added to our information-gathering initiative. It all takes a long time, but eventually we produce the master plan for the best information-gathering exercise ever; talk about "best practice" – 'we' are going to completely redefine best practice, yeah! Such talk is very exciting to donors, so even though this initiative is going to be very expensive, they readily cough up the cash. The developing country government is also pretty keen because this is a big budget investment that politicians can crow about and from which local officials can earn lots of per diems.

Of course, because 'we' are not stupid, someone will have realised quite early on the importance of developing this best-ever survey into an on-going monitoring programme for which the initial big push will provide the baseline. That big push will train all the local officials in everything they need to do the job, and so after that they'll be able to continue it under their own steam, and the local politicians will continue to fund it because everyone will acknowledge how important this kind of information is so that we can plan development interventions properly …

Except that in practice that is often precisely what doesn't happen, because although 'we' might think this is absolutely central to efficient aid planning, it is not the priority of our local country partners. For a start, while 'we' may think in terms of development interventions, they are trying to run a country that has managed just fine(-ish) up to now without that data, and there are always other priorities. Moreover a couple of years without updates won't hurt, right?

At the end of the day I’m not certain which is worse: the original problem or the prescribed solution. But, in my limited experience, it does appear to be recurrent and until donors can learn to be more modest in their aspirations and less fickle with their money, I suspect it is unlikely to go away.
