Planning for collapse: making development interventions too big to fail and vulnerable to systemic risk.

The financial collapse of 2008, following the fall of Lehman Brothers, was enthusiastically prepared by political and economic decision makers. In the 70s and 80s, in the name of efficiency and the free market, regulations were increasingly seen as a restraint on the development of companies. With less regulation, the market would be more efficient, with lower transaction costs. The firewalls between savings and investment were torn down, as the memories of the thirties were deemed irrelevant and “this time it is different”. Indeed, economic growth followed. Financial markets seemed more efficient. The business cycle seemed to have disappeared.

Companies became more and more interlinked and financial products became more sophisticated. Risk was shared among more actors, all of them with an AAA rating. The risk vaporised into the system. Until it suddenly was there again. The dew point was reached. And the lack of firewalls took down the system, including some governments who believed the hype, as in Iceland and Ireland. As I am not an economist, I would like to refer to Tim Harford for a better explanation of the crisis. His explanation in his book “Adapt” is enlightening and suits my point well.

So the financial system became so integrated that risk became systemic. All actors were so interlinked that the failure of one hurt all of them. Financial innovation went so fast, and the system became so complex, that nobody could assess the overall risk anymore.

Development is a risky complex system.
Development is a risky business. Success is elusive and failure frequent. Moreover, the “transaction costs”, the difference between what the donor wants to give and what the ultimate recipient gets, are high. There are important inefficiencies: unaccountable partners, overlaps, gaps, lack of results, lack of knowledge of what works and what does not, not forgetting the stubbornness of repeating things that failed over and over again, like swedow (stuff we don’t want).

The way to make changes in a complex system is best described by “do no harm”: a prudent and evolutionary approach. Make a small change, and with a short feedback loop check its effects on the system as a whole. Then make another change. Innovations should never be too big to fail. Innovation should be tested before bringing it to scale. Indeed, this is our world, our ecosystem. We should not take systemic risks with the lives of the poor. As it is impossible to predict what will work, it is better to have a lot of initiatives and not to pre-empt the outcome.

A development system based on these principles should be expected to be very conscious of the risks that go with large-scale intervention, and focus on the value chain from innovation to bringing to scale.

The development system that exists, however, has the Paris agenda and the humanitarian cluster approach, the 3D approach (defence, diplomacy and development), linking relief to development, and integrated missions. The items on the international agenda aim to link the different systems to each other. Does this lead to more efficiency, or to unacceptable systemic risk?

Some examples where this clustering of agendas seems to have led to collapse due to systemic risk:

  • In Afghanistan, the West has tried the 3D approach; it did not seem successful.
  • In Uganda, donors have locked themselves into a budget aid logic, meaning that, to punish the parliament for an anti-gay law, children will probably not get their vaccines anymore.
  • Madagascar’s textile workers lost their jobs because the duty-free regime for their country was withdrawn after the politicians squabbled too much.
  • In DRC and Sudan, the UN integrated missions make the UN humanitarian agencies de facto not neutral, seriously affecting their effectiveness.
  • In Somalia, the mixing of the humanitarian and anti-terrorism agendas was an element in the current crisis.

From these examples I conclude that lumping agendas together can create systemic risk for the whole aid effort in a country. If an approach does not work, the most logical explanation is that the approach is not a good one, and different options must be explored. The alternative narrative, that it did not work because we did not try rigorously enough, seems dangerous.

An alternative: nimble aid (agile aid, mindful aid)
Nimble aid would consist of independent interventions, each very limited in its objectives and conscious of the unpredictability of externalities. Like a bird in a flock, each programme would be able to steer itself in full consciousness of the effects it has on its environment. It is the evolutionary approach to development. If every objective is diluted in a wider technocratic programme, nothing really happens. Trying to be responsive within a wider programme just leads to more meetings and more lemming thinking on development.

Vaccination programmes in humanitarian settings do just that: they save the lives of thousands of children, leading to healthier and more productive lives for the beneficiaries. An effect that can be traced for up to 100 years.
New agricultural practices are tried one by one on a small scale, until one works well and everybody adopts it. Like the use of maize in Africa, long before colonisation, and plague-resistant varieties now.

It is a development agenda which is less ambitious in the short term, but revolutionary in the long term.

A prerequisite for this approach to work is information sharing. Not information about what happened, but information on what is happening. All actors then know what the others are doing and can adapt their interventions to what is already happening around them. The actors can only correct their actions if they know what is happening around them. Just as public companies are obliged to publish essential information, development actors should post essential information on their activities. Information sharing should not be confused with going to meetings, nor with coordination.
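A minimal sketch of what such sharing of "what is happening" could look like, assuming (hypothetically) that each actor publishes simple structured records of its ongoing activities, in the spirit of IATI data. All field names and data below are invented for illustration:

```python
# Hypothetical sketch: actors publish structured records of ongoing
# activities; other actors filter these records to adapt their own plans.
# All field names and data are invented for illustration.

activities = [
    {"actor": "NGO A", "sector": "health", "district": "North", "status": "ongoing"},
    {"actor": "NGO B", "sector": "water", "district": "North", "status": "ongoing"},
    {"actor": "Donor C", "sector": "health", "district": "South", "status": "planned"},
]

def who_is_active(records, sector, district):
    """Return the actors already working in a given sector and district."""
    return [r["actor"] for r in records
            if r["sector"] == sector and r["district"] == district
            and r["status"] == "ongoing"]

# Before starting a health programme in the North, check for overlap:
print(who_is_active(activities, "health", "North"))  # ['NGO A']
```

The point of the sketch is that no meeting is needed: an actor reads the published records, sees who is already active, and steers around the overlap.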

Keeping the firewalls
In the consensus thinking, a compound indicator will tell you about success or failure (like the human development index). This means that no single value, no single objective, is important enough to fight for on its own. You want to raise the overall index. In a nimble approach there is a tension between overall goals and each objective. There is a specific programme assuring that child mortality goes down, even if the government does not yet have the institutions or intentions to do it itself. Good child mortality work will always strengthen the local institutions when the objectives are both long-term and short-term.

This is why I would like to argue for keeping the firewalls between the values that are important. This does not mean working in an ivory tower. It means that there are some objectives you just don’t negotiate away.

Moreover: fund what works
There is a lot of evidence on interventions that do work, even on a large scale. Agencies would have no trouble spending their funds even if they limited themselves to evidence-based interventions. Just a few examples that work in some circumstances, if done well:

  • Basic economic infrastructure
  • Basic health services
  • Cash support for the poorest
  • Water and sanitation
  • Most humanitarian interventions if done according to the Sphere Standards.

And so much more. Where local institutions are decent and accountable, even some forms of budget aid seem to work.

In conclusion

Of course, coördination and coherence are important. But they are only means to reach results. Sometimes other means, like innovation or competition, might work better. When coherence becomes an objective of the aid itself, however, the level of systemic risk for the development system might just be too high.

Finally, transparency will always be beneficial for planning interventions, in a coherent as well as in a competitive environment.

 

Markets in everything: 2021: the secondary market for development products.

Francis Watanabe is a project portfolio manager for the government. He acquires development interventions on the secondary market to add to his portfolio on early child development. Innovators, like the Gates Foundation or Oxfam, or even local governments, start up interventions and, after the first rated evaluation, sell them off to the highest bidder on the secondary market. Interventions from a reliable provider, with a good results projection and a long life span, are in high demand. Buyers normally pay for all the investments and overhead, and are prepared to pay the innovator for the further project management. The management fee for high-yielding projects can be set quite high. Private-sector innovators with a good success rate can earn a good living, and in some sectors, like micro-finance, most innovators are from the private sector. In early child development, however, most innovators are former NGOs or foundations.

Watanabe is expected to reach a very good results/cost ratio for his portfolio, better than the average of the donor group, so he cannot just rely on market data. He also has to research the latest scientific findings and try to identify upcoming techniques or new innovators. He can also improve his ratings by identifying local champions in difficult environments. Real bargains can be had when other countries decide to switch priorities and offload their old portfolios.

Thanks to the secondary market approach, most donors have managed to improve the development results of their work by a factor of two or even three, all on a stable budget.
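As a hypothetical illustration of how a buyer like Watanabe might compare offers on such a market, assume each intervention comes with a projected results score (from its rated evaluation) and a total cost; the results/cost ratio is then a simple division, and the buyer picks the offer with the highest ratio. All names and figures below are invented:

```python
# Hypothetical sketch of portfolio selection on a secondary market
# for development interventions. "results" stands for a projected
# outcome score from a rated evaluation; all names and figures are
# invented for illustration.

offers = [
    {"name": "ECD programme, rated AA", "results": 900, "cost": 300},
    {"name": "ECD programme, rated A",  "results": 500, "cost": 250},
    {"name": "ECD pilot, unrated",      "results": 120, "cost": 100},
]

def ratio(offer):
    """Results/cost ratio: projected results per unit of money spent."""
    return offer["results"] / offer["cost"]

# The portfolio manager buys the offer with the best results/cost ratio.
best = max(offers, key=ratio)
print(best["name"], round(ratio(best), 2))  # ECD programme, rated AA 3.0
```

In practice the ratio would of course be corrected for rating reliability, life span and risk, which is exactly where a manager who researches beyond the market data can beat the average.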

It seems like this market approach just had to happen once the different building blocks for the system became available:

  • In order to have a functional market, information asymmetry should be reduced as much as possible. The transparency drive in development funding provided the information needed: the International Aid Transparency Initiative did just this, leading to the availability of data on every intervention by every actor in a comparable way.
  • The results-based approach led to a system where interventions must deliver on the promised results.
  • Standards in results reporting and impact evaluation led to the rating of projects against a specific development outcome. Independent rating agencies emerged from evaluation and audit consultancies.

The acceptance of the Sphere standards as the absolute poverty line set a baseline and brought it all together. This was the real breakthrough: concrete lines for absolute poverty. Donors wanted to spend the bulk of their money on palpable morality and evidence-based interventions for the poor, instead of on vague institutional goals or long-term, elusive economic growth.

Inevitably, once the results-based approach was accepted, coördination and partnership dropped off the agenda. Indeed, as ownership and the “do no harm” principle were part of the basic set of principles, debating coördination and partnership was not necessary anymore. Any intervention bypassing ownership issues would get a bad rating for sustainability. Partnerships and coördination became more organic: they had to serve the development goals. Pragmatically, the operators moved from partnership to competition and back again, according to the needs of the beneficiaries.

However, a hefty 30% of interventions still happen outside the system. This is normal, as most interventions that don’t cover basic services are more difficult to assess on their results potential, and their value would be too difficult to estimate. Indeed, important work still happens in the rule of law, security, democracy, governance and economic development. However, a secondary market for this type of project still seems a few decades away.

Results in HIV/AIDS interventions: considerations on the need for a vertical approach in a horizontal world, and vice versa

AIDS Day
During AIDS Day, the blogs proved that the debate between the believers in a vertical approach and the believers in a geographical approach rages on. I did not write on it before, because it is an issue with ramifications in all directions, and wonderful opportunities for tangents and meandering digressions. Most thinking is black and white: either HIV/AIDS needs advocacy and a vertical approach, otherwise it does not get the priority it deserves, or all development must be locally generated and advocates should stay out.
I will try to be brief and as provocative as I can to highlight the need for a more instinctive and competitive approach on this divisive issue.
Conferences
I was working in the HIV/AIDS sector in South Africa, before Mbeki got internet-savvy, and before the Global Fund to Fight AIDS, Tuberculosis and Malaria existed. It was a very frustrating experience. The South African government was hailed as one of the few Sub-Saharan governments with a decent policy, but rates of HIV infection kept going up. Donors and the government were subsidising mostly advocacy and awareness programmes, and the responsible officials were often found at international conferences. In short, everything was politically correct, and nothing worked. Until the GFATM was created. They had exotic ideas such as “evidence-based” interventions. Things also fell into place when the price of drugs dropped. Alternative reading: until Brazil and MSF got their way, and the drugs got cheap.

Lesson 1: If there is an internationally recognized crisis, focused forceful global action can be useful.
Lesson 2: “Evidence-based” interventions might have a bigger chance of success than doing whatever seems right as you go along.
Lesson 3: Advocates can make a difference. Sometimes for the better.

Reform
Since the UN was created, there have been calls for reform, but here I am talking about 2004, with a donor drive for more streamlining amongst agencies. Smaller agencies should be integrated into the bigger ones. This would lead to more efficiency, as we all know that big bureaucracies, thanks to economies of scale, are more efficient than nimble organisations fighting for their survival. One of the agencies under fire was UNIFEM, the organisation that “provides financial and technical assistance to innovative programmes and strategies that promote women’s human rights, political participation and economic security.” It should have been merged with UNDP. One of the delegates of the G77 retorted: all UN agencies have been created because there was a good reason. So good a reason that all member states unanimously decided to create the organisation. Are you really sure that the situation of women has changed to such a degree that we don’t need this organisation anymore?

Indeed, only six years later, the same donors managed to create a bigger organisation, UN Women, that should strengthen the original mandate of UNIFEM and bring it to a larger scale.

Lesson 4: never trust a donor (or anyone) that is sure about the next silver bullet.
Lesson 5: sometimes, if something is very important, you need to create a special task force to make it happen.
Lesson 6: development fads come in tides, rolling in and out, with a new tide always rolling in…

Localizing
In the early years when I was working on HIV/AIDS in South Africa, it was amazing how many of the “good practices” were just copy-pasted from the interventions used in the HIV/AIDS communities on the West Coast. Those targeted a group threatened by exclusion, dominated by homosexual men and intravenous drug users, while in Africa the victims were often heterosexual and middle class. It was only when results were required that the programmes got adapted.

Lesson 7: local actors seeking locally adapted solutions based on global knowledge work better than solutions transplanted from a different ecosystem. Without good knowledge to start with, chances are good that nothing will happen at all.
Lesson 8: never trust donors or iNGOs to be open to local input. If they think they have a silver bullet, they will push it, claiming it is localised.

Conclusions:

Lesson 9: global institutions should offer global knowledge and try to adapt catalytic operations to local circumstances. Acceptance and rolling out should be up to the local owners of the problem (if they find it is a problem).
Lesson 10: vertical and localised horizontal programmes must coexist and fight for attention. A dynamic of competition, where global vertical programmes must prove their mettle and local horizontal programmes are constantly challenged, is a good thing.

Lesson 11: as a donor, you invest your money best where it delivers the most. Depending on the situation and the “maturity” of the issue, this can be a global vertical programme, a local operation, or anything in between. You should have thematic and geographical programmes with different goals competing for resources and attention.

Should Multilateral aid have results?

Multilateral resource allocation: best practice approaches (Article – ODI Project Briefings 51, November 2010)

When DFID changes track on development, it is important to take notice, as DFID is one of the thought leaders among donor agencies. If ODI writes about it, it is important to take notice, because ODI is one of the voices DFID is likely to follow. This is why the ODI project briefing “Multilateral resource allocation: best practice approaches” is important. And this is why the central thesis of the report, that multilateral results are difficult to quantify and that we could, for now, settle for a transparent, quantifiable, auditable system, makes me uncomfortable. It seems an effort to plead for the status quo. It outlines a superficially quantified and auditable system, but under the hood the data are subjective and debatable. More importantly, it sidelines the more important issue of results and effectiveness, because “objective measurement is difficult”.

Will this “best practice approach” lock the donors in a transparent system, taking away the pressure to move to better results? Will process and tools drive the donors for the foreseeable future instead of outcomes and results?

Governments have judged their private-sector partners on their results and cost efficiency for years, in a transparent way. Why would this be impossible for Multilateral Organisations (MOs)? Choosing to fund organisations whose results and effectiveness are difficult to measure seems not a best practice. Perhaps we could measure the results by assessing what the difference would be without funding? Is there another way? I think so.

The central problem with the thinking expressed in the briefing is the partnership approach, where an organisation is funded because of its institutional setup and not for its results. The funding becomes an entitlement that is not questioned. In a partnership approach, the UN organisation has a role within the wider UN system. This “UN system”, however, is a misconception: the UN ecosystem is not a coherent system. On the contrary, each individual Multilateral Organisation was created by all member states because a certain distinct value, e.g. child care or health standards, had to be addressed in its own sector, separately from the others. Those values stand on their own, and serve the global public needs only in this sector. In a sectoral or results-based approach, the UN organisation has a role within a certain sector (e.g. global public goods in health). You should assess the role of the organisation within this sector and compare it to the alternatives in the sector. In a sectoral approach you are not expected to compare performance across multilateral organisations, as there should be only one organisation in the sector fulfilling this role. You should not compare allocations among MOs, because they are in different sectors.

This choice between a partnership approach and a results-based approach has important budgetary implications: in a results-based approach, a funding balance will be sought among the different actors in the same sector, according to their contribution to the results. In the partnership approach, the different UN agencies will be funded from the UN budget, and essentially compete with each other for funding. Within a partnership approach it is difficult to establish which organisation is the most efficient; in a sectoral approach it is clear to most actors what global public good is needed and provided by the multilateral organisation.

For instance, in the health sector, the WHO is responsible for global public goods such as standard health procedures, but will also compete for operations with national governments, NGOs, the World Bank, and other UN organisations such as UNICEF. Should we fund the WHO for its “efficiency of procedures compared to the FAO”, or should we fund it for the work it does in the sector?

Most multilateral agencies have a creative approach to fundraising. While they pay lip service to the UN principles on funding, their fundraising is businesslike and takes the reality of development funding into account. They try to cover all the markets:

  • Core funding is the bedrock of the organisation. This money mostly comes from multilateral budgets. Core budgets are supervised by the boards, and fund the administration, core responsibilities and whatever the board finds fit to approve.
  • Thematic funding gives flexibility within a sector. This money comes mostly from thematic funds from donors.
  • Project money can come from a myriad of donor budgets: multilateral budgets, thematic budgets, geographical budgets. The big money is in this line: many small projects together add up to a lot of money. As administration is automated, the overhead per project is limited; the proof being that all organisations accept nearly all projects offered.

The objective is to maximise funding for the organisation. The board looks mostly into the core budget. Thematic spending is accounted for to the donor group that feeds the fund. Projects are accounted for one by one. Most boards have no complete picture of what is happening. This gives management a lot of freedom.

The board members, meanwhile, seldom have any management experience. Oversight happens mostly by diplomats who defend the policy positions of their country first, not by economists asking for efficient organisational management.

Another “best practice” approach

A results-based approach to oversight of the multilateral organisations would start from a sector approach and define the role of the organisation within the sector.

Where the organisation really provides a global public good, oversight should happen fully by the board. The funding allocation is very much like the funding for a government department in the home country: efficiency is a necessity, but political priorities and needs decide on the level of funding. Professionalisation of the board is necessary.

Where the organisation has a competitive edge in operations, it competes with other actors for funds. The picture is of course more blurred than this: it competes with the programme country’s administration for direct funding through bilateral funds, but on the other hand coöperates with it too. The same happens with NGOs and civil society.

It is in operations where the big money is. In operations, results are measurable and can be compared with the results obtained by other actors. Operations that can be done directly by other actors should not be single-sourced to the multilateral agencies. By abandoning the push to form consortia and cartels in all areas, and stimulating competition instead, value for money would result, just as in all other areas of government spending.

Compound indicators for meaningless conclusions

The five lenses approach, although it claims to be auditable, fails to be accountable, as it fails to use “best value for money” as the measuring stick for government funding.

The five lenses measure clusters of related indicators in five different areas and bring them together in one evaluation framework. Eliminating competition and results from the framework means that funding will depend on the quantification of often crowd-sourced assessments. Crowd-sourcing can be useful, but it is dangerous in areas where groupthink tends to occur, and development among government officials is certainly one of those areas.

The congruence with the donor’s objectives is the first lens, and difficult to argue with. All donor funding should happen in line with donor policy. If a donor funds against its own policy, well…

It seems incredible to find, in the second lens, development effectiveness, only excuses for NOT measuring effectiveness. The lens is limited to process indicators like MEFF (rule one of the logical framework: never make your means an objective) or MOPAN (crowd-sourcing amongst donor diplomats). It could be seen as an insult by all the MOs that did work hard to get their indicators right and measure them.

What would be the outcome of the measurements in the third lens, “role in the international architecture”? How do you distil an auditable number from these measures? It is remarkable how the role of “global public good provider” (appropriateness of the mandate) is mixed with the competitive role in the market (“alignment of activities with comparative advantage”). You would expect the board (with the donor included, holding a veto over all decisions) to assure that the activities are aligned with the core mandate (I could expand on this one). These core activities should be done well, but without comparative advantage, because they fulfil a natural monopoly for the global public good. Comparative advantage is only relevant in sectors where there is competition, not in the area where the organisation has a natural monopoly. Where there is comparative advantage, competition should play, and the funding should probably not be multilateral.

The fourth lens is also rather strange, as the potential for improvement is a reward for past bad management. Normally you would think past behaviour is a proxy for the future. Those who reformed before have little scope for improvement. Moreover, it would also reward the organisations that can easily be instrumentalised by one donor, while the reform dynamic should mostly happen in the oversight bodies.

I am still wondering how scale made it in as the fifth lens. Indeed, it is more efficient for a donor to write one cheque of a billion than a thousand cheques of a million, but the relationship with results is unclear to me. It is definitely easier to transform a small organisation than a big one. I wonder whether there is any link, all other parameters like professionalism and organisation being equal, between size and efficiency. A small organisation with a focused mandate will probably be a lot more efficient than an unfocused, sprawling dinosaur. However, a machine like the WFP might be more efficient than an amateurish outfit.

The total absence of the role of the oversight bodies in the document is worrying, and the prominent role given to informal donor gangs is a bad sign for the future of the multilateral system. The five lenses, without an assessment of the role played in the boards, mean in practice that donors and board members take no responsibility for the management they impose on the organisation through the board.

Conclusion

The Multilateral Organisations have gone through important reforms, and some of them are more efficient than ever. Some fulfil a central role in the development of the sector where they provide operations and global public goods. It is a disgrace not to reward them with funding in line with their results.