Accounting for failure

Project failure is far too common in conservation and development for anyone’s comfort. Regrettably, many agencies and practitioners seek to hide their poor records behind euphemism, or by redefining success radically downwards after the fact. Last year an aid bloggers’ forum considered the question of admitting failure, although the consensus was not very positive (see my two contributions: here and here).

Yesterday I blogged about an alternative solution: pushing aid projects to obtain insurance against failure. It’s a nice idea, but probably quite a few years away, at best, from wide-scale implementation. It occurred to me, however, that there is an intermediate solution which would take very little change to implement. Put simply, donors would account rigorously for their projects’ success rates. It would work as follows.

Already most donors demand clear statements of project aims and expected outcomes before committing funds. Good donors will also ask for a risk assessment. All we need to do is quantify those risks. Sure, putting a number on the likelihood that you will get the necessary buy-in from local government officials is an exercise in extreme subjectivity, but we can live with that. Multiply together the probabilities of avoiding each risk and you get an indication of the likelihood of project success. (I bet it will often be substantially lower than the project’s proponents would like, but if they try to massage it up they’ll get caught out later …)
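To make the arithmetic concrete, here is a minimal sketch of that calculation. The risk values and the `project_success_probability` helper are purely illustrative (nothing here comes from any real donor methodology), and it assumes the risks are independent, which in practice they often won’t be:

```python
def project_success_probability(risks):
    """Given a list of probabilities (0-1) that each risk factor
    materialises, return the chance that none of them does,
    assuming the risks are independent."""
    p = 1.0
    for r in risks:
        p *= (1.0 - r)  # probability of avoiding this particular risk
    return p

# Illustrative numbers only: 30% chance of losing local-government
# buy-in, 20% chance of funding delays, 10% chance of key-staff turnover.
print(round(project_success_probability([0.3, 0.2, 0.1]), 3))  # 0.504
```

Even three individually modest risks drag the headline success probability down to roughly a coin flip, which is exactly the sobering effect the proposal is after.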

Then, come evaluation time, the reviewers should explicitly assess what proportion of the original aims and outcomes have been attained, and which risk factors actually came into play to the detriment of project impact. This assessment would have to be extremely robust and refer only to the original estimates of impact, so as not to allow project managers to redefine success downwards mid-project. (A subsidiary assessment could consider revised aims that were formally set out and agreed.)

Project proponents and implementers who consistently underestimate risks will be shown up (although obviously the feedback loop will take a few years to generate much in the way of information). Imagine if donors pooled all this information so that they could look up organisations’ records. Imagine further if such estimates were linked to the specific key people who worked on the projects, and you had to justify both your risk-assessment accuracy rate and your actual project success rate at your next job interview. (Sensible employers would be tolerant of those who have only worked on a few projects and just got unlucky, but who could talk intelligently about what went wrong, the lessons they learned and how they would put such situations right in future.)

As well as improving accountability and honesty about the very real risks involved in most conservation and development projects, as the data built up, donors could also use it for their own internal evaluations. How successful were their projects? Which types of risk factor proved to be the most dangerous? Which were consistently under-estimated, and which might have been over-estimated? Donors wanting specifically to target riskier projects with some or all of their money should not be discouraged; we all know that the pay-off from such initiatives can be that much bigger than from me-too carbon copies of established models. This way the risk would simply be more explicitly acknowledged.

The sheer guesswork inherent in the original risk estimates would limit the data’s utility in evaluating individual performance, but for donors and BINGOs these errors would start to average out when aggregated across the organisation, and analysis of later results would help agencies refine their estimates. For example, typical political risk factors could be classified according to severity, with information for project proponents on actual failure rates in previous projects to help them gauge the likely risk in the new project they are proposing.
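The aggregation step above can be sketched too. This is a hypothetical illustration only: the record format and the `calibration_by_category` helper are my own inventions, standing in for whatever pooled database a donor might actually keep. The idea is simply to compare, per risk category, the average estimated risk against how often that risk actually materialised:

```python
from collections import defaultdict

def calibration_by_category(records):
    """records: iterable of (category, estimated_risk, materialised) tuples,
    where estimated_risk is a probability (0-1) and materialised is a bool.
    Returns {category: (mean estimated risk, actual failure rate)}."""
    sums = defaultdict(lambda: [0.0, 0, 0])  # [risk total, hits, count]
    for category, est, hit in records:
        s = sums[category]
        s[0] += est
        s[1] += 1 if hit else 0
        s[2] += 1
    return {c: (s[0] / s[2], s[1] / s[2]) for c, s in sums.items()}

# Made-up pooled records from past projects:
records = [
    ("political", 0.2, True), ("political", 0.3, True),
    ("logistical", 0.4, False), ("logistical", 0.4, True),
]
print(calibration_by_category(records))
# Political risks were estimated at 0.25 on average but materialised
# every time: a category that proponents consistently under-estimate.
```

A gap between the two numbers in either direction is the signal: it tells the donor which categories of risk their proponents systematically misjudge, which is exactly the feedback the severity classification would be built from.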

I can see plenty of resistance from many in the aid industry to such crude quantifications, but the move to increase transparency in the sector is gathering momentum. Perhaps it is the sort of thing that could be considered in the next iteration of IATI?


5 responses to this post.

  1. Posted by Justin Morgan on June 22, 2012 at 12:26 pm

    I think failure in projects is very natural and OK. It is the identification and acceptance of this failure that hurts the development industry the most. So often we try very hard to prove to the donors (the very people you are saying could help regulate NGOs in terms of performance) that things are working through our evaluations, rather than improving things. Then there is also the question of when you would determine something is a success or a failure. Lots of good things from viewfromthecave at the moment on this.

    When I think of failure in development I think of – in the private sector if you are successful 80% of the time you are a market leader (and very rich), in the development industry if you admit to being wrong 20% of the time you are looking for a new job.

    If we can become much clearer on what success is (at outcome, not output, level) and have shorter learning cycles, then failure, when recognised and accepted, is much easier to bear, and becomes much better learning.

    Reply

    • Hi Justin,
      I entirely agree project failure should be OK in risky projects, and even occasionally in supposed slam-dunk projects. Conservation and development are not easy! But I also see too many projects which are pretty obviously doomed from the start; a more honest appraisal of the risks up front might head off such wastes of money. It is also about not being over-ambitious with our goals in the first place.
      MJ

      Reply

  2. Posted by am on June 6, 2014 at 7:37 pm

    Impact evaluation. I think for the uninitiated you should write some definitions of the various uses of this term. I find it quite confusing. In a real project I take it to be a measure of success based on expected outcomes. But is it also a term used to describe a research paper which, as part of its conclusions, comes up with some statement of the impact of scaling up the RCT? Or is it external to the RCT research paper? Maybe there are many uses of the term. If you have no time don’t worry.

    Reply

    • Alas, I am not an expert on such things. I guess that, as with many things, there is both a precise definition that true experts prefer and a much more elastic concept that many others use. But I think one key point of an impact evaluation is that it does not just consider the immediate outcomes of the project, but whether long-term impacts (changes) are really achieved. Thus an IE may actually follow the end of a project by several years. Lots else out there on the interwebs, including 3IE.

      Reply

      • Posted by am on June 12, 2014 at 8:04 am

        Totally agreed on long-termism. Turn up, take the photos, tell the world what a success it has all been, then walk away without looking back over your shoulder to review, and go on to ask for more money and spend it the same way, with infinite iterations: that is a problem that needs addressing. I am thinking of it all in the context of what I call development spin for the increase of funding.
        Whatever the definition, I just think that there is a hostility to it in some quarters. It is almost as if they wish it had never come up, and that it would go away rather than get fixed.
        Your title has the word accounting in it, and there is creative accounting (development spin) too, but I leave that for others to flesh out.
        Thanks.
