Posts Tagged ‘impact evaluation’

M&E – a top-down imposition

Yesterday I promised you some reflections on Pritchett et al.’s working paper on improving monitoring and evaluation. They correctly identify that rigorous impact evaluation works too slowly for most management purposes, costs a lot (putting it beyond many project implementers), and is often highly context-specific, so not readily generalisable (the standard critique of randomised controlled trials).

Their proposed solution is to insert a second (lower-case) ‘e’ that stands for experiential learning: MeE. In particular, they propose that project managers should have considerable freedom to manage their projects adaptively, and moreover should be encouraged to try different approaches to see what works best, thus avoiding the problem of over-specificity that I highlighted in the quotes I pulled out in yesterday’s post. They suggest this will provide something much closer to the real-time information that project managers need, and will explore different project designs more cost-effectively than establishing a separate RCT for each one. (Which may not be sufficient anyway, as which design is optimal may itself be context-specific.)

It is an excellent paper, and their proposal has a lot to recommend it if you work for a big development agency. But I cannot see it working very well at the small-NGO end of the market where I operate. The problem is not really specific to experiential learning; it applies to the whole gamut of impact evaluation as it bears on project design and management amongst small NGOs. If I think about the best small NGOs I know and have worked with, several features are often apparent that reduce the incentives for impact evaluation:

  • Small NGO types tend to be ‘doers’ rather than ‘thinkers’ – given the choice we will nearly always invest more money in implementation than M&E.
  • Many small NGOs have fairly modest aims which do not need sophisticated M&E to assess.
  • Other small NGOs are of the nimble-innovator type. They may be iterating too rapidly for M&E to capture easily, and do not have the resources to iterate in such a systematic manner.
  • Such NGOs have the capacity to learn very rapidly internally for relatively little investment of time and effort; there is no big institutional ship to turn around. Instead, for these small NGOs new learning leads rapidly to more impact and the potential for more learning.
  • In contrast clearly analysing and communicating these lessons can involve a significant investment of effort that does little (except perhaps to support fund-raising) to deliver better results on the NGO’s chosen impact bottom line.
  • Thus generating new learning and achieving greater immediate impact can be much cheaper for such NGOs than disseminating lessons already learned.

Donors and small NGO partners obviously have a role to play in helping to offset this tendency (which is not always 100% healthy). But, as I have remarked before, there seems to me to be an inherent contradiction in the simultaneous calls for bigger/better M&E and for nimbler project implementation in an attempt to mimic the rapid success of internet-era start-ups.

The contradiction becomes more apparent when one realises that while businesses may regularly monitor all manner of variables relevant to their business (e.g. page hits as well as sales), they always have an instant answer when you ask about their ‘impact’: the size of their profits. No construction of the counterfactual required there!

I also suspect that few aid beneficiaries care much about what any impact evaluation may or may not say so long as they are getting good results out of the project itself. Thus it becomes clear that much M&E, and certainly impact evaluation, is essentially a top-down imposition by donors, understandably keen to know the results of their funding, and at odds with the bottom-up approach many people in development advocate.

So the real question is: do donors and the wider development community get value for money from the impact evaluations they demand? This is a question that Pritchett et al. raise several times. The answer seems to be bound up with the challenge of scaling up, a relentless pressure in a lot of conservation and development work that I have repeatedly queried (e.g. see here and here). That is, impact evaluation, and Pritchett et al.’s experiential learning, is all about learning how to move from a successful pilot to national programmes and similar projects in other countries.

Here I return to the internet start-up analogy. Did Google get where it is as a result of an impact evaluation? No, it grew organically! If you want more bottom-up development, which this blogger does, maybe the solution is less evaluation and more of a market-based approach in which successful implementers are simply invited to replicate their successes, along the lines that I suggested yesterday?

Now, before I chuck the whole M&E set overboard, a few basic points need to be made in return. Firstly, and most obviously, claiming ‘success’ is easy when all you need to do is check your bank balance. Determining which pilot projects are successful is not always so straightforward – although not always as difficult as might be supposed – and essentially requires some kind of impact evaluation. Indeed, the converse problem often arises: a new fad gains popularity far faster than evidence of its efficacy accumulates; the micro-lending boom comes to mind. And as those classic RCTs on improving educational attainment in Kenya show, sometimes it’s not so much about what is successful as what gives the most success for your money. Indeed, Pritchett et al. lament the demise of project ‘valuation’ and the computation of value-for-money metrics by large development agencies.

I conclude that idealists who want all their development to be 100% bottom up are living in cloud cuckoo land. Even if we dismantled the whole international aid industry, governments would still regularly engage in this sort of thing within their own countries, often under pressure from their own electorates. So if the people want development aid then the paymasters are going to need some evidence on which to base their decisions. Most of all, what this humble blogger would really like to see is donors actually paying attention to these things, instead of continuing to commit large chunks of funding to projects and programmes they know are doomed. Better to over-fund something that has a decent chance of success than flush your money down the plughole of the utterly unfeasible.

Are donors getting value for money from the impact evaluations they demand? Only if they act on the results!


Accounting for failure

Project failure is far too common in conservation and development for anyone’s comfort. Many agencies and practitioners regrettably seek to hide their poor records behind euphemism and by redefining success radically downwards after the fact. Last year an aid bloggers forum considered the question of admitting failure, although the consensus was not very positive (see my two contributions: here and here).

Yesterday I blogged about an alternative solution: pushing aid projects to obtain insurance against failure. It’s a nice idea, but probably quite a few years away, at best, from wide-scale implementation. But it occurred to me that there is an intermediate solution which would take very little change to implement. Put simply, donors would account rigorously for their projects’ success rates. It would work as follows.

Already most donors demand clear statements of project aims and expected outcomes before committing funds. Good donors will also ask for a risk assessment. All we need to do is quantify those risks. Sure, putting a number on the likelihood that you will get the necessary buy-in from local government officials is an exercise in extreme subjectivity, but we can live with that. Multiply together your chances of surviving each risk and you should get an indication of the likelihood of project success. (I bet it will often be substantially lower than the project’s proponents would like, but if they try to massage it up they’ll get caught out later …)
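To make the arithmetic concrete, here is a minimal sketch in Python. It assumes the risks are independent, so the likelihood of overall success is the product of the chances of surviving each individual risk; the risk names and probabilities are invented purely for illustration.

```python
# Invented risks, each with a subjective probability that it derails
# the project. Assuming independence, the chance of overall success
# is the product of surviving each risk in turn.
risks = {
    "local government buy-in not secured": 0.30,
    "key staff leave mid-project": 0.20,
    "procurement delays": 0.25,
}

p_success = 1.0
for p_fail in risks.values():
    p_success *= (1.0 - p_fail)

print(f"Estimated probability of success: {p_success:.2f}")
```

With these invented numbers the estimate comes out at 0.42 – three modest-looking risks already more than halve the odds, which illustrates why an honest figure will usually be lower than proponents would like.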

Then, come evaluation time, the reviewers should explicitly assess what proportion of the original aims and outcomes have been attained, and which risk factors actually came into play to the detriment of project impact. This assessment would have to be extremely robust and refer only to the original estimates of impact, so as not to allow project managers to redefine success downwards mid-project. (A subsidiary assessment could consider revised aims that were formally set out and agreed.)

Project proponents and implementers who consistently underestimate risks will be shown up (although, obviously, the feedback loop will take a few years to generate much in the way of information). Imagine if donors pooled all this information so that they could look up organisations’ records. Imagine further if such estimates were linked to the specific key people who worked on the projects, and you had to justify both your risk-assessment accuracy and your actual project success rate in your next job interview. (Sensible employers would be tolerant of those who have only worked on a few projects and just got unlucky, but who could talk intelligently about what went wrong, the lessons they learned and how they would put such situations right in future.)

As well as improving accountability and honesty about the very real risks involved in most conservation and development projects, as the data built up, donors could also use it for their own internal evaluations. How successful were their projects? Which types of risk factor proved to be the most dangerous? Which were consistently under-estimated, and which might have been over-estimated? Donors wanting specifically to target riskier projects with some or all of their money should not be discouraged; we all know that the pay-off from such initiatives can be that much bigger than from me-too carbon copies of established models. This way the risk would simply be more explicitly acknowledged.

The sheer guesswork inherent in the original risk estimation would limit the data’s utility in evaluating individual performance, but for donors and BINGOs, aggregated across the organisation, these errors would start to average out, and analysis of later results would help agencies refine their estimates. For example, typical political risk factors could be classified according to severity, with information for project proponents on actual failure rates in previous projects to help them gauge the likely risk in the new project they are proposing.
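The averaging-out claim could be checked mechanically once records accumulate: bucket projects by their estimated success probability and compare each bucket’s estimate with the fraction that actually succeeded. A rough sketch, with wholly invented project records:

```python
# Hypothetical pooled records: (estimated success probability, succeeded?).
# Real data would come from the donors' shared project database.
from collections import defaultdict

records = [
    (0.8, True), (0.8, True), (0.8, False), (0.8, True),
    (0.4, False), (0.4, True), (0.4, False), (0.4, False),
]

# Group outcomes by the original estimate, then compare the predicted
# success rate with the observed one for each bucket.
buckets = defaultdict(list)
for estimate, succeeded in records:
    buckets[estimate].append(succeeded)

for estimate, outcomes in sorted(buckets.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"estimated {estimate:.0%} -> observed {observed:.0%} "
          f"over {len(outcomes)} projects")
```

A bucket whose observed success rate sits persistently below its estimate is exactly the kind of systematically under-estimated risk category the post describes, and the gap gives agencies a number by which to adjust future estimates.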

I can see plenty of resistance from many in the aid industry to such crude quantifications, but the move to increase transparency in the sector is gathering momentum. Perhaps it is the sort of thing that could be considered in the next iteration of IATI?

Does demanding contributions from local beneficiaries work?

Here’s a question for all you development research types (especially the randomistas).

A lot of community-level capital development projects these days seem to involve a requirement that the beneficiary community make a contribution towards the development. Sometimes this is in the form of free labour, other times it is financial. So, for example, a new borehole and pump may cost around $20,000; the donor will pay the bulk but ask that the community stump up $1,000*; communities that cannot or will not stump up do not get the new well. The theory, as I understand it, is that if the community has had to stump up then it will value the development more, be more likely to take care of it, etc., and the development project will be more successful as a result. Conversely, communities who do not stump up are assumed not to want a new well sufficiently, and thus the money is better spent elsewhere.

The second part of that theory has the obvious flaw that some communities may simply be unable to afford $1,000, but still it is a very seductive idea for directing aid to those areas which will benefit from it and value it most, and also increasing the likelihood of sustainability. If I were in charge of a programme offering such capital development grants I think I’d incorporate the requirement in my programme’s design.

But, does it really work? Or does the requirement for a local contribution simply slow down disbursement, miss out some needy communities altogether, and save the donor a negligible amount of money (unless so few communities can afford the contribution they don’t even spend the entire programme’s allocation)?

In particular I wonder whether, if the community had to pay $1,000 for the new well, they might only value it at $1,000. Such a valuation might not even be completely irrational if the community sees neighbouring communities also getting the new well for the same price (i.e. the wider programme effectively establishes the local price), and, following previous practice by the same and other donors in the area, the community may consider it a reasonable bet that if in 5–10 years’ time the pump is broken, some donor will offer to repair it for another token contribution of $1,000.

Moreover, assuming this is now a ‘community-owned’ well, it is unlikely to provide value of over $1,000 to any one individual, especially when labour is so cheap and the primary water fetchers (women) have less political influence; individual incentives for maintenance may thus be dulled. Cohesive, well-led communities can of course overcome these challenges, but they are the exception that proves the rule of the tragedy of the commons from which communal investments often suffer. And investing in local community governance is a long, expensive undertaking which does not sit well alongside a quick in-and-out capital development programme.

Has anyone ever done any research on this issue? The relatively long time periods required to judge sustainability might be one challenge, but I could imagine measuring earlier proxy indicators of likely success within a couple of years of installation. Anecdotal evidence of success in NGO projects is not without interest here, but it might suffer from a question of attribution: was it this measure, or the other forms of support that the NGO provides as part of its integrated package?

Please enlighten me in the comments.

* I’m not a water engineer. These prices may be completely unrealistic. The exact numbers are not important to my basic point.

Don’t forget how it was before

Continuing the spirit of new starts in the new year, I was intrigued by J’s post considering to what extent aid efforts have succeeded in Haiti since last year’s devastating earthquake. His post mostly talks about the huge scale of the disaster and complexity of helping its surviving victims, and how these challenges explain the lack of success or even “failure” of interventions thus far. But how did it all start out?

Obviously all development and humanitarian relief interventions have target end goals they are seeking to meet, and when they fail to achieve the desired end point there is always a degree of failure to be considered. However, it should be remembered that most such goals are what the business world calls stretch targets. Indeed, it is often necessary to be deliberately over-ambitious in order to secure funding. That is to say, the architecture of the aid and development sector sets many projects up to apparently “fail” in the first place.

J, however, also questions whether it is correct to talk in terms of success and failure, and I think he is right to do so. When coming to a new place in the developing world it is always easy to spot the problems; complaints are often not far behind. What these critiques may fail to understand is just how bad the situation was before, and thus the degree of success that has been achieved.

This applies not just to disaster zones, and I was reminded of it over my Xmas break. We went to visit some friends in another developing country (it doesn’t matter which). I knew the country very little, and our friends not much better (they’ve been there just over a year). Of course, before long our conversation turned to the various shortcomings manifest in the government of said country. It wasn’t hard to find things to criticise, and there is certainly much that could be done better. However, our friends also passed on the views of some of their local friends and longer-term residents. They were more relaxed about things; yes, they agreed with all the criticisms, but nonetheless they said the political situation was much improved on a few years ago. Things were bad, but they had been worse; the country, still very poor, is on an upward trajectory (although unlikely to become rich any time soon).

I help run a small NGO. Some of our management processes are still necessarily somewhat chaotic; we’re still putting together all the systems necessary to make it run smoothly. But compared to where we were ~5 years ago the organisation is a purring machine of efficient endeavour. It’s hard for new staff to realise that, though; they just see the problems (not enough computers) and don’t appreciate the progress we’ve made (2 laptops between 4 of us when we started).

Of course, one must be careful with this argument. It is not an excuse for every missed project milestone or unachieved impact. The defence that things would have been much worse without the project only goes so far, and is probably over-used. But progress is something to be valued. When evaluating a project, and deciding whether to commit further funding, donors need to be careful not to throw out too many babies with the bathwater. Here, unfortunately, I detect double standards: NGO projects can be measured against impossibly tough yardsticks, whilst bilateral donor projects continue to pour funds down recipient-government maws whilst negligible progress is made towards the targeted outcomes.

That is a subject to be explored more fully in a future post. For now, whenever you go somewhere in the developing world, don’t forget to ask how it was before, before you go making judgements about how it is now.

Accountable to who?

Accountability is a big thing in development these days. Mostly this is in relation to governments (national and local) in developing countries, which have a habit of appearing not always to act in the best interests of their citizens. However, the development sector has enough free-thinking types to detect the whiff of hypocrisy when it arises, and, especially within the NGO sector, we are increasingly encouraged to be properly accountable for what we do, and in particular to be accountable to our proposed beneficiaries. If we are working for them, or on their behalf in some way, then we should have to demonstrate to them exactly how we are benefitting them, and justify our work (and salaries) as value for money. Most advocates of development, including this one, have only ever been acquainted with a ballot box as electors, not as the elected.

I, for one, am from time to time apt to bemoan the lack of downward accountability in donors and erstwhile ‘international partners’ (anyone who sends us money, basically) whenever they may (shockingly!) voice an opinion at odds with ours. But those who live in glass houses etc. etc., and so it behoves me to consider to whom I am accountable.

How about, then, those communities we are supporting in our projects? It might not surprise you to learn that I and my colleagues would not be exactly over the moon about subjecting ourselves to rigorous value-for-money tests by our claimed beneficiaries. Why? Well, despite our modest salaries by western standards, we still earn more in a month than many of our beneficiaries do in a year. Our projects are complex, and need careful explaining even to well-educated types; how do we justify them and their complexities to people who have barely completed primary education? In short, what looks proportionate from one perspective, can look fantastically rich from another.

Perhaps, in a few years, by which time we hope our projects will be starting to pay off substantially for the communities, they might be more accepting, but for now we have to conclude it would be one incredibly hard sell. Hence we have a number of proxy accountability mechanisms which allow us to get input into our evolving plans, but it is also true that we take steps in advance to guide those decisions in what we believe to be the right direction – which isn’t necessarily what you might gather from how we describe these meetings and other mechanisms to donors and the like. And, since we pay per diems to community representatives to turn up to these meetings, you can rightly ask yourself: who is accountable to whom?

The communities elect representatives to local and national government, who then employ on their behalf a range of officials to look after different functions. To an extent we are then accountable to these elected councillors, members of parliament, and government officials of various sorts. This accountability certainly matters because these guys can kill off our projects and organisation pretty quickly if they like, due process or no. But are they themselves accountable in how they wield that power? Do they exercise judgement on behalf of their constituents or on behalf of themselves? Unfortunately, the evidence is often for the latter, and thus, because we are pragmatic about things, we often find ourselves buying their support in one way or another, as well as making their decisions easy by doing right by their electoral masters.

Are we accountable to our NGO board, perhaps? They generally have a good grasp of the broad brush strokes of what we’re doing, and they are an important safety net should something go seriously wrong, e.g. a senior manager found guilty of corruption. However, they are also often busy, and their experience of running similar projects is variable. We do not always have enough time to explain issues arising sufficiently, and hence decisions may not be fully informed. Sometimes we are secretly relieved that a potentially awkward discussion was quickly closed; other times it can be greatly frustrating when decisions go unexpectedly against us. The end result: full disclosure at all times is not always the best option (indeed, senior board members have advised us this way) and we have to manage our board carefully. (The challenges which this in itself imposes would be magnified many times over if we had to go through the same process with our beneficiary communities.)

Accountable to our donors, then? Now we’re getting closer. We have to submit regular reports and accounts to our donors. They keep us on our toes with independent evaluations (although those consultants conducting said reviews are not always as independent as you might think). Our donors are either developed country governments (and their amalgamated creations like the EU), large trust funds, or their intermediaries (generally BINGOs). For the most part I assume most BINGOs are no more accountable to their boards and members in terms of day-to-day programme management than we are to our board, whilst governments’ donor agencies do not, in my view, pay too much attention to how most of their voters would imagine development should proceed.

In fact, the critical accountability process is proposal writing. This has its own flaws and requires various platitudes. But if you can persuade a donor to fund you, then you have set the path on which at least part of your work will proceed, usually for several years. We may present annual budgets to our board for approval, but they are governed by the budgets agreed with donors; if our board wanted to reject these budgets, we’d all be in something of a pickle.

There are some NGOs, I assume every country and every sector has them, which are known locally as puppets of a/several donor(s). We like to look down upon them. Although it is all shades of grey, we like to believe that we are in more control of our destiny than these puppet NGOs, that we pick and choose what proposals we write, and that if a donor demands too many changes then we may even turn down the money. How, ultimately, do we make these decisions? What sets us apart from the puppets?

Our board is certainly important, but in some senses I believe my greatest accountability is to myself and my immediate colleagues. I know that when my own performance does not live up to the standards I aim for, when there is just too much to do and too little time to do it, I get stressed because of my own expectations. My auto-accountability often exerts the greatest pressure on me. It is also the kind of accountability that is about being able to hold one’s head up high, and so is about self-imposed social pressure from my selected peer group of conservation and development professionals.

Finally we can bring this full circle by considering that our beneficiary communities, our partners in government (national and local), our board, our donors and international partners, have all bought into the story that we have put together as to how we believe we can succeed with our projects. They support the overall strategy (or at least the bit of it that concerns them) and for the moment seem prepared to give us their cooperation, moral support, time of day, money and technical advice respectively. Whilst operational decisions made by management are rarely exposed to the glare of full accountability, we are delivering the most important thing of all: impact on the ground. For this we are accountable to our own high expectations of ourselves, assessed by our peers, and generally accountable to everyone.

Accountable to Accountants?

There has been quite a discussion of NGO accountability recently in the blogosphere kicked off by Till Bruckner’s guest post on Aid Watch about NGO budgets in Georgia. Aid Watch subsequently posted a series of replies from the NGOs involved, and Scott Gilmore jumped in with his two cents. Caveman Tom summarised the whole to and fro here and then subsequently added his analysis.

Update: Aid Info also put the case for budget transparency very succinctly here.

I kinda agree with both sides, but ultimately think Scott Gilmore is closer to the truth. Budgets (and actual expenditure) are pretty fundamental to evaluating any project. They indicate the allocation of resources and give a clue to value-for-money. I get frustrated any time I am presented with project information without the finances. It suggests people have something to hide. So notwithstanding the fact that it was USAID who appear to have redacted the project budgets, I sympathise with Till Bruckner.

That said, I get even more frustrated with donors who like to impose budgetary restrictions. Different projects need different approaches; most rules of thumb are pretty useless when evaluating budgets. I’ve heard donors say things like they weren’t happy because the recipient government spent all the money on per diems and cars. Except that we also spend most of our money on salaries, per diems and car journeys. What the donor really meant is that they were disappointed with the lack of impact that came from all that expenditure. This is Scott Gilmore’s point.

Apparently one of the bones of contention in the Georgia case was NGOs’ reluctance to divulge the jealously guarded overhead rates they have negotiated with USAID. This is one area where I boggle to understand what the real problem is. Who actually cares what the overhead rate is? What we ought to care about is the quality of the work done. We all instinctively understand this any time we hire a builder; paying a bit more ‘overhead’ (supervisor salary) might lead to much better results. Hiring the cheapest contractor is often not the best option. Conversely, if an NGO is really good at what they do then I think it is appropriate to pay their staff a bit more – they deserve it!

In the private sector when someone buys a service from someone else they rarely ask the service provider to break down the exact costs; they just compare reports of the quality of service (perhaps including from their own experience) between different providers with the range of costs and make a decision. Service providers who do not offer value for money are quickly pushed out of the market.

The trouble is that in the development sector the customer is two completely different actors: the donor and the beneficiary. Donors find it very hard to evaluate projects they’ve funded, and for all the talk of putting beneficiaries at the heart of development aid, and getting them to make the decisions, in reality this happens very little. The result is plenty of BINGOs getting away with mediocre quality work. Assuming the entire structure of aid is not going to change very much in the near future donors need to put more effort into actually assessing impact.

Transparency over budgets and expenditure will assist in determining value for money, but they are far from being the whole cigar. Obsession with budgets and expenditure leads to the tyranny of the accountant. Till Bruckner and other jumped-up accountability experts (see J’s excellent critique) should take note. Accountability is a lot more than just doing the accounting.

(In my next post I shall discuss to whom I think I am accountable …)
