Posts Tagged ‘adaptive management’

Theorising pig flight

“Everyone these days (funders, bosses etc) seems to be demanding a Theory of Change (ToC), although when challenged, many have only the haziest notion of what they mean by it. It’s a great opportunity, but also a risk, if ToCs become so debased that they are no more than logframes on steroids.”

That was Duncan Green writing a couple of months back. I totally dig the turn of phrase, but (luckily!) have so far escaped any such experiences of being enslaved to a donor’s preconception of what a ToC should look like. On the other hand I do find logframes (or ‘lockframes’ in the memorable corruption) more than a bit tiresome, such that I might be inclined to reverse the comparison and describe logframes as theories of change on steroids.

If you are worried that you might fall into that class of people who “have only the haziest notion” of what a ToC is then you can go read Duncan’s blog post (plus an excellent selection of comments) or Google for a whole bunch of other informative web sites. But over in this little corner of cyberspace I quite like my ignorance of the more formal definitions. Not that the above sources are not useful, quite the opposite, but I prefer the ‘pornography test’, which is to say I believe I have a pretty good intuitive idea of what a theory of change looks like, and I reckon I know one when I see one.

To me a ToC is primarily just a reasoned explanation of how what one proposes to do will actually deliver the impact one expects to achieve. It can be summarised in nicely boxed flow diagrams and the like, but for me the real test of a ToC is that it can stand up to reasoned, sceptical argument.

Where, I believe, so many conservation and development projects go wrong in their design, is not in their use, or lack of use, of any particular framework, but in just plain sloppy thinking and lack of self-criticism. Part of the problem in development, it seems to me, is that we are too often too nice to each other, and not inclined to criticise (constructively!). This challenge can be exacerbated when discussions cross cultural boundaries: local ownership and deference to regional social norms are important, but should not trump having a workable plan in the first place.

Conversely we are also often too formal. A single written proposal, however well constructed, is never as satisfying as being able to discuss and probe people’s plans in person. And who reads those tediously long project documents anyway? The result: too many projects approved primarily on the basis of the executive summary, without real testing of assumptions. And that’s when those flashy graphics really come into their own: great for communicating the central thrust of an idea, useless at exposing logical fallacies.

MJ’s theory of porcine aerodynamics: flashy graphics may not stand up to serious scrutiny.

So I like donors who are prepared to get into a real conversation with their grantees, to get to know them and their plans a bit better. Such relationships can more easily support adaptive management, which in turn allows you to be a bit more relaxed about any flaws in the original proposal, because now you have a framework in which to manage deviations from the plan.

And how do you succeed with all those awkward discussions in which design flaws are impertinently probed? As one of the commenters on Duncan’s post put it: “the first order of business is to build TRUST.”

M&E – a top-down imposition

Yesterday I promised you some reflections on Pritchett et al.’s working paper on improving monitoring and evaluation. They correctly identify that rigorous impact evaluation works too slowly for most management purposes, costs a lot (putting it beyond many project implementers), and is often highly context specific so not readily generalizable (the standard critique of randomized control trials).

Their proposed solution is to insert a second (lower case) ‘e’ that stands for experiential learning: MeE. In particular they propose that project managers should have considerable freedom to adaptively manage their projects, and moreover should be encouraged to try different approaches to see what works best, thus avoiding the problem of over-specificity that I highlighted in the quotes I pulled out in yesterday’s post. They suggest that this will provide something much closer to the real-time information that project managers need, as well as exploring different project designs more cost-effectively than establishing a separate RCT for each one would. (Which may not be sufficient anyway, as which design is optimal may be context-specific.)

It is an excellent paper, and their proposal has a lot to recommend it if you work for a big development agency. But I cannot see it working very well at the small NGO end of the market where I operate. The problem is not really specific to experiential learning; it applies to the whole gamut of impact evaluation as it bears on project design and management amongst small NGOs. If I think about the best small NGOs I know and have worked with, several features are often apparent that reduce the incentives for impact evaluation:

  • Small NGO types tend to be ‘doers’ rather than ‘thinkers’ – given the choice we will nearly always invest more money in implementation than M&E.
  • Many small NGOs have fairly modest aims which do not need sophisticated M&E to assess.
  • Other small NGOs are of the nimble innovator type. They may be iterating too rapidly for their work to be easily captured in M&E, and do not have the resources to iterate in such a systematic manner.
  • Such NGOs have the capacity to learn very rapidly internally for relatively little investment of time and effort; there is no big institutional ship to turn around. Instead, for these small NGOs new learning leads rapidly to more impact and the potential for more learning.
  • In contrast, clearly analysing and communicating these lessons can involve a significant investment of effort that does little (except perhaps to support fund-raising) to deliver better results on the NGO’s chosen impact bottom line.
  • Thus generating new learning and achieving greater immediate impact can be much cheaper for such NGOs than disseminating lessons already learned.

Donors and small NGO partners obviously have a role to play in helping offset this tendency (which is not always 100% healthy), but, as I have remarked before, there seems to me to be an inherent contradiction in the calls both for bigger/better M&E and nimbler project implementation in an attempt to mimic the rapid success of internet-era start-ups.

The contradiction becomes more apparent when one realises that while businesses may regularly monitor all manner of variables relevant to their operations (e.g. page hits as well as sales), they always have an instant answer when you ask about their ‘impact’: it’s the size of their profits. No construction of the counterfactual required there!

I also suspect that few aid beneficiaries care much about what any impact evaluation may or may not say so long as they are getting good results out of the project itself. Thus it becomes clear that much M&E, and certainly most impact evaluation, is essentially a top-down imposition by donors understandably keen to know the results of their funding, and at odds with the bottom-up approach many people in development advocate.

So the real question is: do the donor and wider development communities get value for money from the impact evaluations they demand? This is a question that Pritchett et al. raise several times. The answer seems to be related to the challenge of scaling up, a relentless pressure in a lot of conservation and development work that I have repeatedly queried (e.g. see here and here). That is, impact evaluation and Pritchett et al.’s experiential learning are all about learning how to move from a successful pilot to national programmes and similar projects in other countries.

Here I return to the internet start-up analogy. Did Google get where it is as a result of an impact evaluation? No, it grew organically! If you want more bottom-up development, which this blogger does, maybe the solution is less evaluation and more of a market-based approach in which successful implementers are simply invited to replicate their successes along the lines that I suggested yesterday?

Now before I chuck the whole M&E kit overboard, a few basic points need to be made in return. Firstly, and most obviously, claiming ‘success’ is easy when all you need to do is check your bank balance. Determining which pilot projects are successful is not always so straightforward – although not always as difficult as might be supposed – and essentially requires some kind of impact evaluation. Indeed the converse problem often arises: a new fad gaining popularity far faster than evidence of its efficacy – the micro-lending boom comes to mind. And as those classic RCTs on improving educational attainment in Kenya show, sometimes it’s not so much about what is successful, but what gives the most success for your money. Indeed, Pritchett et al. lament the demise of project ‘valuation’ and the computation of value-for-money metrics by large development agencies.

I conclude that idealists who want all their development to be 100% bottom-up are living in cloud cuckoo land. Even if we dismantled the whole international aid industry, governments would still regularly engage in this sort of thing within their own countries, often under pressure from their own electorates. So if the people want development aid then the paymasters are going to need some evidence on which to base their decisions. Most of all, what this humble blogger would really like to see is donors actually paying attention to these things, instead of continuing to commit large chunks of funding to projects and programmes they know are doomed. Better to over-fund something that has a decent chance of success than flush your money down the plughole of the utterly unfeasible.

Are donors getting value for money from the impact evaluations they demand? Only if they act on the results!

Aid project selection & implementation

Some great quotes in a new working paper by Lant Pritchett et al. proposing a different approach to M&E. My eye was particularly caught by these two from the conclusion.

“The reality of the project selection process, inside government organizations and between government organizations, tends to be an adversarial process of choosing among projects, which puts project advocates in the position of making much stronger claims for project benefits than can be supported, and being more specific than they would like to be.”

I’m relatively relaxed about the tendency to make over-ambitious claims of expected project impact, since everyone does it, and it is thus likely to be fairly well factored into how projects are viewed. The problem of over-specificity in design is, I think, a bigger one, since it leads to significant wasted effort during the project proposal stage developing ridiculously over-detailed action plans and budgets. Most donors like to think they are flexible when it comes to plan and budget changes mid-grant, but the simple requirement to obtain approval is a deterrent to project managers and a source of risk: what if they do not approve the changes?

The issue of over-specified designs has other implications for implementation too:

“Organizations like the World Bank perpetually over-emphasize, over-reward, and over-fund ex ante project design over implementation. This is because in the standard model, implementation is just faithful execution of what has already been designed, whereby the thinking is done up front and the implementation is just legwork. However, de facto many successful project designs are discovered when project implementers are given the flexibility to learn, explore and experiment.”

As I wrote before: good strategies need good implementation. If the implication – that big donors like the World Bank already know this basic fact – is correct then it really makes me question the whole competitive grant awarding process that dominates NGO involvement in conservation and development. Donors could save everyone a lot of trouble by awarding grants on much shorter project outlines combined with a good track record of delivery (which needs to be much more robustly assessed). Good NGOs would be strongly incentivised to deliver good outcomes since otherwise they would lose their future funding. An entry level system would still allow new players to prove themselves, and also those fallen stars to re-establish themselves.

I will blog again tomorrow on the core proposal of the paper when I’ve had longer to digest it.

Hat tip: the Blattman

Social entrepreneurs and M&E

Bill Easterly apparently wants to see faster determination of successful and failing aid projects than is provided by traditional monitoring and evaluation. That is according to Tom Murphy’s report on the DRI annual conference from back in March.* Tom commented:

“… strong and open monitoring and evaluation practices can ensure to the trial and error of the Easterly ‘searcher.’” (sic)

I also seem to recall – but cannot now find the right quote or link – the suggestion that aid practitioners could seek inspiration from how success and failure are determined in the business world. (The DRI debates considered whether an RCT would be a suitable evaluation mechanism for the iPhone game Angry Birds – not the best example imho!) (Update 27/06/12: found that link, it was Ian Thorpe blogging here.)

As someone who played a major role in setting up a local NGO and then leading it for several years, I guess I can be classed as something of a social entrepreneur. Thus I have a few thoughts on how this can or cannot work in practice.

Like too many organisations in the development and conservation sectors I wouldn’t put down M&E as one of our strong points. We’re not terrible at it, and we’re getting better, but it wouldn’t be too difficult to poke plenty of holes in what we’ve done so far.

As ever with M&E, getting the budget balance right is tricky. If we had as much money as the MVP we could do some amazing M&E. (Hopefully a lot better than the MVP has achieved in practice!) But we do not have anything like that amount. Plus we have a problem that many of the impact indicators we are targeting take a long time to move in a positive direction (e.g. biodiversity). So we have to wait quite a while just to see whether our monitoring programmes are capable of detecting the kind of change we are seeking to achieve.

These challenges are universal. More particularly, taking an entrepreneurial approach involves a lot of flexibility and adaptation to changing circumstances. Most people seem to agree this kind of approach is a good thing. Which is all fine and dandy, but it does make it hard to set up your M&E baselines, because you’re continually adapting how the project will work, and thus what impacts it will have. As the project matures things settle down, but for our earliest pilot sites the opportunity to establish a firm M&E baseline has long gone.

So how do we track our own progress? Mostly I would say through a number of milestones, which may or may not be well set out on paper. Conceptually we started with a pretty good idea of what we wanted to achieve. As things progressed we developed a number of internal targets (usually with very flexible timelines if they are specified at all) against which we could measure ourselves.

The analogy with modern business is perhaps something like Facebook. At first it was just an idea – something cool to do – then it became an enterprise, but one focused not so much on bottom-line impact (profit) as on other metrics and milestones (e.g. market share and time spent on site by users). Only latterly has the focus at Facebook shifted increasingly to monetising their substantial achievements. (If recent share performance is anything to go by, this is proving tricky.)

Similarly our own project is being subjected to increasingly robust M&E assessment, as indeed it should do. But, in my mind, most M&E approaches and entrepreneurial innovation apply to quite different stages of project development. This is also why starting small is so important; it allows time for KISI. Alas too many people in conservation and development are often in such a rush that they want to spend $100 million first, and ask questions later.

* Yes I’m finally back blogging again. I have a number of posts queued up in my head responding to the news over the last couple of months. Hopefully it won’t all seem too much like yesterday’s left-overs.

When admitting failure isn’t enough

There have been some great posts on the second aid blog forum on admitting failure. Many bloggers picked up, as I like to think I did, on the fact that admitting failure is just one aspect of lesson learning (another tautological piece of yucky aid jargon) that we all ought to be doing as a matter of course. David Week called attention to this better than anyone, demolishing admitting failure as just another management fad. (I find it hard to disagree with him, but reckon admitting failure has so much more humility than your average management fad that I’m prepared to give it the time of day.)

In particular, David examined the failure reports by Engineers Without Borders, the poster child for admitting failure in aid projects. In doing so he highlighted the limitation that I had suggested: that the failures identified and admitted were unlikely to be central to an organisation’s work, but focused on relatively peripheral elements. David dissected an entry in EWB’s 2009 failure report. To paraphrase: he showed that while the EWB volunteer had successfully identified that things had gone wrong, and that the project would not be sustainable, she had failed to identify that the real problem lay in the fact that the whole project design was fundamentally unsustainable in the first place.* Or as I suggested in the comments: EWB is exposed as a glorified volunteer monger. Maybe one of the best volunteer mongers out there, but a volunteer monger nonetheless.

EWB’s Erin Antcliffe responded in the comments and an excellent little debate developed, spreading to Twitter.** Now don’t get me wrong; I have followed several EWB volunteer blogs over time. I love their questioning approach and courage to face up to failure. Without ever having been near one of their projects, I nonetheless imagine they might just be the best volunteer monger in the world. And if their initiative to get aid and development organisations to similarly face up publicly to their failures catches on, then, regardless of David Week’s and others’ reservations, I think they’ll have done the world a big favour.

But will their project design process have changed? Will they have learned the most important lesson from their failures? I can see how this might be difficult, because the problem appears fundamental to how the organisation works.

Often in this blog I have contrasted the topsy-turvy world of aid with that of business. (As have many others wiser than me!) This appears to be another such example. If you came up with a great new business idea, you could give it a good go, but if, whatever your original genius, it failed to deliver, you would find out pretty quickly and the company would collapse. Alas the absence of good feedback loops in aid means that as long as you can convince the donors to keep on donating (and the volunteers signing up), you can go on indefinitely regardless of what you actually deliver.

It’s hard enough to admit failure in the first place. It’s even harder to admit that you might actually be the problem. And what matters most is what you do after you’ve admitted failure.

* This raises an important point. It is not enough simply to admit failure. One then needs to correctly diagnose the cause. This is not always easy!

** You can now follow me on there too: @bottmupthinking. Don’t count on too regular tweeting.

With fails like these who needs success?

This is a contribution to the second aid blog forum on admitting failure in aid projects. Several contributors have already pointed out the challenges of admitting failure in the first place, and I don’t want to pooh-pooh their very real concerns. But I also think this is an idea whose time might be just around the corner. Once the ground has been broken by a few brave NGOs and supportive donors there could suddenly be a big rush, all in the name of marketing. It could become de rigueur to admit to at least a few failures.

All very wonderful, but I worry that a lot of it’s going to be rather superficial. As Marc Bellemare points out, in admitting failure, on one level all we’re really doing is showing that we are sufficiently self-critical to do so, and thus earn more kudos for self-criticism than we lose for the odd failure. This reminds me of those long application forms for graduate level jobs and questions such as ‘Name three weaknesses you have.’ No graduate worth their salt is going to confess that they don’t get on well with other people, but instead will say things like ‘can be a bit impulsive from time to time’ which can be almost turned into a positive.

Similarly, no NGO is going to admit that big chunks of their work are a complete failure. Instead, like the example given by Tom Murphy, they’re going to contrast their failure with a success (thus emphasising the value of the success story) and/or pick on relatively minor elements of their work. Of course this is just what you’d expect from human nature, and sensible management. Nobody wants to do a Ratner!

On the flip side of the coin, I would expect this to be a reasonable reflection of reality in well run organisations. Such organisations should be capable of spotting when they’re heading for failure on a major programme and devoting the management time to turning things around. That’s what adaptive management, the mark of a good development project just as it is the mark of good business management, is all about. Thus the failures ought to be peripheral: cases where senior staff just took too long to become apprised of the trap into which they were about to fall. A good organisation should also be able to extricate itself from any such traps. Indeed admitting failure then just becomes another element of the lesson learning process that an effective organisation should be going through internally anyway. (Thus arguably admitting failure is simply exposing that process to the outside world.)

The trouble is that a lot of what goes on in development does not appear to me to be that well run. (On average I’d say the quality of management in NGOs is probably better than other parts of the aid world, but it’s not a hard and fast rule.) Will these less well run organisations, programmes and projects have the courage to admit they made much bigger failures? I fear not. Just ask the leading advocates of the Millennium Villages Project!

That all said, I can see a big potential win here if admitting failure really takes off. Because for all the lack of self-criticism in the aid world, I think it is worse in developing country governments. So if self-criticism became that much more mainstream, then there is a chance that it might percolate across the institutional boundaries. I’m not overly optimistic on this point, but it is certainly worth the attempt.

To conclude, I would have failed myself if readers of this post came away with a negative impression of the growing fad for publicly admitting failure in development projects. I think it is an excellent innovation, and I hope it catches on. A little more humility from the big aid players would be no bad thing anyway. Just expect a certain degree of superficiality and turning negatives into positives along the way, because that’s just human nature.

Why I’m a Millennium Villages sceptic

Last week Jeffrey Sachs set out a robust defence of his brainchild, the Millennium Villages Project, although, as Tom Murphy pointed out, it was somewhat low on detail. I’ve never knowingly been near a Millennium Village, but my own experience causes me to doubt the lasting legacy of the MVP, at least in countries with similar problems to where I work.

First the good news: Sachs took on some of his detractors by saying that the MVP was as much about developing systems to improve service delivery (and hence attainment of the Millennium Development Goals) as about achieving the MDGs per se in the targeted villages. I’m a big fan of systems approaches, so this gets the thumbs up from me. Systems are definitely more easily replicated and scaled up than individual projects that focus simply on the needs of their target areas.

Now for the bad news: systems are not automatically and by definition scalable. A system that works well at one level may not work well at wider scales due to unanticipated problems and bottlenecks. Ben Ramalingam recently blogged on exactly some of the new challenges that occur as one scales up. This doesn’t mean that the original system was designed badly, but simply that good systems management takes an iterative approach, making tweaks and improvements as we go along (what I term the KISI approach).

But that is not the biggest problem that I see. Even the best designed systems need to interact with things outside their control, in particular people; indeed I suspect that the MVP has people playing integral roles at every step of the way (i.e. that mostly what we’re talking about here is systems for organising human work). A system’s output is constrained by the quality of these interactions. In short, as any good businessman knows, you need competent and motivated staff to deliver a high quality of service. And that is where so much service delivery in developing countries falls down, with last mile service delivery particularly badly managed. Unfortunately short-term, local solutions to this are not scalable.

The problems are legion, and not all a result of poor education amongst the workforce. I know some excellent and (when you consider what they are up against) surprisingly motivated local civil servants. But the overall system drags everyone down. Sure you can tinker at the margins with systems to improve paper flow (mostly in local government around here, we’re talking about paper flow), but the elephant in the room is an unmotivated and unsackable workforce.

Of course this problem will apply at the MVP sites, but there you also have a massive aid effort with lots of expat technical advisers and a high level of political interest. I’ve noticed around here that normally sloth-like civil servants, who won’t even sit in a meeting without a generous per diem, will rush around like lauded socialist workers striving manfully (or womanfully) in the name of their country when a bigwig is due to visit, working into the night and through weekends, all without any per diems.

Thus I fear all the achievements of the MVP will wash up against the great brick wall that is a change resistant bureaucracy. Once the high level of funding, all the expat TAs, and the high level political interest have withdrawn we’ll be back to business as usual, and the MVP will be neither sustainable in the selected pilot villages nor scalable. Maybe this will not apply everywhere, but I would wager a decent sum that it will happen here. The community contributions which Sachs highlights may also be much harder to elicit when it’s just government staff doing the asking.

The MVP has a laudable goal, and even as an experiment, the idea of resolving various systemic problems in service delivery is a worthy one that definitely deserves some experimentation; marginal changes can lead to marginal improvements, and, as a by-product, perhaps a marginal improvement in government staff morale. But if Sachs wants to take a systems approach to achieving the MDGs maybe he should have looked at HR management reform in developing country civil services. It’s a Herculean task to be sure, one that, around here at least, the World Bank has been striving at vainly for some time. But until you resolve that problem I fear these sorts of big push attempts to transform service delivery, and hence quality of life in developing countries, will always be at least one more big push away from succeeding.
