I am doing a series of articles on the book “Rethinking Canadian Aid” (University of Ottawa Press, 2015), and now it’s time for “Chapter 17: Conclusion: Rethinking Canadian Development Cooperation — Towards Renewed Partnerships?” by David R. Black, Stephen Brown and Molly den Heyer as the three editors. Their conclusion, and the title of the book, is that things are a-changing when it comes to Canadian aid, and whether it is under Harper’s governance or over a longer time period, it is time to rethink Canadian aid as a result. Except I don’t think that is the conclusion I get from a more critical review of the text. Bear in mind that I am not reading it as an academic, I’m reading and critiquing it from the perspective of a manager — does it hold any resonance for me, does it identify the key factors at play, does it ring true? Generally, no. Noting, of course, it wasn’t written for the likes of me.
Note too that I am not disagreeing with their conclusion, i.e. that aid partnerships might benefit from a re-think in terms of foundations, international partnerships, partnerships with Canadian stakeholders and intra-governmental partnerships, but rather noting that this book doesn’t provide the evidence to get us there.
Their main argument is that “aid in Canada has shifted”, either over time or under the recent Conservative government. In terms of principles, they argue that humane goals (i.e. altruism) have been replaced by self-interested goals (i.e. commercial trade goals). This shows up throughout the book — introduction, chapters 1, 4, 7, 9, 15 and 16. The sub-argument is that power has shifted: tied aid has gone down, but private sector trade interests have gone up. Equally, they argue that specific policies around humanitarian aid (chapter 2), use of force (chapters 8, 13, & 14), gender equality (chapter 11), and Children-in-Development have all turned toward poorer development outcomes. Combined with changed management for whole-of-government approaches and a focus on aid effectiveness (introduction, chapter 1), the conclusion is that principles + policy + management have changed for the worse, and it is time to rethink Canadian aid.
Except, as I said above and in my critiques of each chapter, I’m not convinced the lines of evidence are there. When it comes to principles, the argument is that it is no longer about humane goals and only about trade — yet Swiss kicks that argument to the curb really well in Chapter 6. Reality backed by hardcore stats, not spin supported by anecdotes and rhetoric.
For policies, some of the analysis is decent but based on sample sizes so small that even a first-year statistics student could tell you the results were not statistically significant. Decent premises, but with few facts other than anecdotes, combined with projects representing a tenth of a percent of the overall budget. Pick a different set of projects and you would see “no change” at all.
For management, it is argued that it represents a wholesale change to new factors, but the same factors have always been there. Not as prominently discussed, but equally present. Results, data, short-term focus over long-term focus. Nothing new for CIDA or development pressures. And, more importantly, equally present in domestic organizations as well. The push for clearly demonstrable short-term results is not driven by aid effectiveness changes but rather by the current government’s overall approach to measuring results in any organization. The real question is whether this produces a real difference for CIDA, or just run-of-the-mill adaptation.
I think the book could have come to the right lines of evidence if it had tackled slightly different questions. First and foremost, the editors should have asked “what is development”, both in terms of what it means to Canadians as well as what it means in aid circles. For example, private sector development is a popular target for NGOs, who argue that it shouldn’t be done, and for the private sector, which argues that governments can’t do it. Yet PSD is one of the few avenues that will generate new resources to sustain development gains. New monies have to go into the country, and since aid isn’t sustainable, it has to be the private sector that generates them in the long term. Like through trade. This isn’t to say everyone should do everything possible related to trade and call it development, but rather that a thorough examination of the types of PSD projects and their likelihood of producing strong development results would be a good basis for further analysis. If you want to conclude that Canada is doing it wrong, it would help to first establish that projects of Type A are generally good and projects of Type B are usually less effective, and then, given the ties to Canadian business, ask whether Canada is now doing more Type B than Type A. That analytical framework would work for any of the sub-policies, but you need to show the framework rather than assume the outcomes and cherry-pick projects that support your premise.
Equally, however, if people want to say a “whole-of-government” policy is bad for development, presumably they mean it produces either the wrong results or at least less effective results than other, purer policies. Great, if that is true, it should be easy to show which ones work better, why, and how they produce better results, and then, to apply it to the Canadian context, show how Canada is now choosing less effective projects. Except none of the articles in the book can meet that bar. Instead, they use rhetoric and spin to say “better to do it another way, worse to do it this way” (with no evidence) and then conclude “see, they’re doing it the wrong way”.
Finally, if they want to rethink aid, I would expect them to talk about the other things that affect development and don’t get much attention normally. Things like migration, remittances, investment (as mentioned in Chapter 1 briefly). Or redistributive politics within a country (like the BRICs).
If I had evidence of THAT, I might agree there is a need to rethink Canadian aid. But if the principles haven’t changed allocations (as per Swiss), if the policies only change for minor levels of investment or with only anecdotal projects, and if management focus for government changed rather than aid management itself changing, then I think a different measuring stick is needed. I had hoped this book would be it, but it wasn’t. Maybe it never intended to be.
I am doing a series of articles on the book “Rethinking Canadian Aid” (University of Ottawa Press, 2015), and now it’s time for “Chapter 16: Undermining Foreign Aid: The Extractive Sector and the Recommercialization of Canadian Development Assistance” by Stephen Brown. In the words of the author, “This chapter argues that the new initiative [of funding mining projects] was emblematic of a new turn in Canadian development assistance — namely, the explicit recommercialization of aid. Canadian trade interests have always underpinned Canadian aid to a certain extent (Morrison 1998). However, a clear trend in the 2000s, under Jean Chrétien, Paul Martin, and the Stephen Harper minority government, had been to move away from commercial self-interest.”
As I started the chapter, I suspected I was going to have a problem with the thesis. In short, I don’t think it’s true, not even in the sense that it could be part of a trend — I disagree that it is commercialization, let alone re-commercialization, nor do I think it is part of a trend as Brown sees it. But I was interested to see his lines of evidence. Even more so when he raises the bar so high: “Though undoubtedly some benefits will accrue to poor people in developing countries, the emphasis on extractives is an ineffective and potentially illegal use of ODA funds that will benefit wealthy mining companies more.”
Let’s start first with the argument that funding labour market training is a subsidy. I’m honestly not sure where to begin…perhaps with the fact that the WTO, which is kind of interested in anything remotely looking like a subsidy, says it is not a subsidy and never has been. Trust me on this: the WTO examined Japan’s funding of telecoms in Asia through its aid budget in the 90s and later, a far more direct arrangement than here, and it wasn’t a subsidy then, so it is even less so now.
Or maybe I’ll start with the ILO and various labour agreements that require developed countries to help develop the local labour force. Or maybe I’ll look domestically at every donor in the world that has an equivalent labour market training program aimed at its own economy, providing the basis for much of GDP growth. Or maybe I’ll look at the fact that technical or vocational training has always been considered “dirty” in elitist aid circles, particularly among academics, who prefer instead to argue in favour of high-tech tertiary education or basic education, unless it looks like micro-credit for a ditch-digger to buy a shovel.
Or maybe I’ll start by pointing out that the approach (LM training) either stands on its own or doesn’t, is a subsidy or not, regardless of whether a Canadian company or NGO hires the trainees. It’s either an acceptable, effective aid technique or it isn’t, and the OECD gurus who look at best practices say it is. Exactly the same as for domestic labour markets in every other country in the world — training for the jobs that are available, not the ones the academics think are sexy or palatable. Training + job = wages and a chance at sustainable development. The presence of the Canadian company may make it look opportunistic, or even self-serving, but that colourizing doesn’t make it a subsidy.
The next point I really like is that the CSR projects were joint NGO/company/CIDA partnerships, but this, apparently, was terrible. Brown suggests instead that, as with some other projects, the company could and should have just contracted with the NGO directly, and there was no need for CIDA funding. It’s a good argument, as long as you don’t poke at it a bit.
Start with the fact that he’s right about other experience with this type of approach, particularly for environmental groups working in developed countries (and noted later for the headaches with the extractive industry abroad). What did they find? The NGOs, academics, and critics everywhere came to the conclusion that when the private company contracted directly with the NGOs, it was just the company’s attempt to put lipstick on a pig. Instead of a real partnership, NGOs felt a power imbalance and that the company was in charge. So they came up with a possible solution. Rather than it just being the NGO and the company, they would involve a third party to help balance out the power. Like a government partner, one not driven by the bottom line (the company) or only having goals without accountability (NGOs). Depending on the developed country’s situation, and governance situation, the “best practice” was to include a local government or a national government, or both. In developing countries, the local and national governments are often quite weak and suspected of being corrupt, so the NGOs wanted an outside, trustworthy partner. Like a donor. So Canada, through CIDA, followed the best practice, but this apparently is evidence that it is in the company’s pocket? It must be nice to be an academic with a slippery sense of accountability for your rhetoric, able to say “it’s bad if you do and bad if you don’t”; either way, you get to publish something critical, I suppose.
But I’m a real-world manager, not an academic. And I kind of think it’s probably better to follow the NGOs’ own best practices and be involved rather than not. I really like how, in one paragraph, it’s terrible that the company is getting money to do the project, and in the next, the claim is “no, the NGOs are doing the projects”. Does that mean the NGOs, who are also funding the project, are subsidizing the company too? I’m seriously confused by the logic chain for this argument. Then it gets really confused — it’s terrible, but then he states “The mining companies’ inputs are mainly financial and the projects themselves do not differ substantively from traditional aid projects” and that there are almost no funds from CIDA at all (minuscule budget). WTF? Whose argument was he trying to make?
Sarcasm aside, I do give him a nod that “The reasons CIDA has promoted these partnerships are unclear” and that “there is insufficient coordination among the various parties in explaining the initiative’s nature and rationale to the Canadian public. Many actors involved may also lack an understanding of the issues, or even lack competence in public relations.” It’s a very important point, albeit not in the negative way Brown portrays it. Absolutely true. That’s why they are called “pilot projects” and they are only a small amount of money. Because it is not a tried and true approach for CIDA, they are dipping their toes in to respond to the private sector, NGO and local recipient demand, and are not going to be definitive at day one.
Do you know what else started like that? AIDS projects. There were groups advocating that treating AIDS was just a waste of money and that donors should have let the sick die and focused their resources on the living. Farther back, basic education was considered a waste of money initially — focus on the adults and get the immediate bang for your donor dollar, not wait 20 years for basic education investments to kick in. Bed nets. Landmine removal. All of them started with pilot projects that were vague and ambiguous about their development goals, nature and rationale. Ten years afterwards, there was a lot to say based on those pilot projects, if only to say in some cases that they didn’t work. Even on a bureaucratic level, you don’t spend millions of dollars on consultations and policy development to give guidance to 0.1% of your budget. However, Brown is right to raise it as an issue — because it is pretty good evidence of where more work is needed, and where interest will lie in the future in figuring out what works or not, as opposed to focusing attention on whether doing it represents a complete reorientation of the entire aid budget. And while Brown seems to think the evidence is “already in” (citing the hyperbole of some of the other chapters), most of it is rhetoric without any real comparative evidence. That’s what is really needed, and the reality is it will only come with pilot projects.
Going back to the roles and interests of the mining companies, Brown has several paragraphs of why different groups have an interest and what role they might play. I’m really disappointed with this section as it could be a much larger piece and talk intelligently about the struggles in the types of projects. It’s a good summary, just too brief.
I was surprised when I reached the end of the chapter, as there was nothing else about “illegal use” of funds (which was a huge bar to meet), nor about how it fits into a broader supposed trend of commercialization and self-interest (which Swiss kicked to the curb in an earlier chapter — while others might be forgiven for not knowing that piece was there in the book, Brown is one of the editors!).
Let’s look at how it could be “illegal use” of aid funds. There are five ways it could be “illegal”, although one is more “not right” than illegal. First, as discussed above, it could be a subsidy under WTO rules (Brown sometimes uses the term in a more generic layman’s sense, but that is like saying paying for someone’s lunch is subsidizing their lifestyle — a generic sense doesn’t rise to the level of what a subsidy means legally). The WTO has looked at these types of expenditures ad nauseam. Everyone hates subsidies, but some generic subsidy-like behaviour is allowable without it being a legal subsidy. These are not subsidies and therefore are not illegal under WTO rules or trade agreements.
A second option could be if it violated a human rights tenet somehow; not only is there no evidence of that, it actually goes to expand human rights, giving some a voice they would not have had earlier and taking the edge off pure capitalism. Some might think it is lipstick on a pig, but it’s not illegal.
A third option would be domestic bid-rigging. If Canada said to country X, we’ll give you more aid money if you give Canadian company A the contract rather than Canadian company B, company B would have a valid complaint against the government that it illegally gave an unfair advantage to one Canadian company over another. No evidence of that, however.
A fourth option would be international bid-rigging. If Canada said to country X, we’re thinking of funding an aid project if you hire a Canadian company to do your extractive work (kind of quid-pro-quo-tied-aid policy, or extortion), you would think that would violate some law somewhere. It doesn’t. In fact, most countries have a different word for it. Trade. Or diplomacy, depending on who is doing the talking. Brown hints that there is evidence of this, that Canada promised more aid if the company got to mine locally, but there is nothing illegal about this. The aid isn’t a bribe going into someone’s pocket. It still might be unethical to some, or good diplomacy to others, but nothing illegal about it.
A fifth and final possibility would be if Canada was funding a purely commercial project with no aid benefit using aid money and claiming it as ODA. You would also be apt to think this was illegal somehow, but it’s not. Canada funds projects out of the International Assistance Envelope (IAE), which funds lots of international organizations as well as some of Canada’s operations abroad. Not all of it is development. There is no restriction on the envelope that says it is only for ODA-eligible aid — all assistance comes out of that envelope. So nothing illegal about the source of funds. After the fact, Canada calculates whether the project was ODA-eligible or not, and reports it to the OECD. The OECD may or may not agree with our classification. If they don’t, they will deduct it from our reportable stats. Again, not illegal, just not “right”.

If this is confusing, look at the UN Food and Agriculture Organization. We make an assessed contribution each year, based on a share of the approved budget of the FAO. But the FAO is not only about development. There’s a really complicated methodology, periodically reviewed by the OECD’s Development Assistance Committee, that estimates the FAO’s work is about 50% for the benefit of developing countries (a requirement to be ODA) and 50% for the benefit of developed countries (such as food safety). So, if Canada’s assessed contribution in a given year is $10M, then $10M will come out of the IAE, with $5M being recorded as ODA. The fact that the other $5M is not “aid” yet is coming out of the IAE doesn’t make it illegal, but it would be wrong to record the full $10M as ODA-eligible when we submitted our stats to the OECD.
I also want to address Brown’s suggestion that Canada’s tied aid policy, and specifically food aid, was changed to make Canada more “pure”. It wasn’t, particularly not for food aid. There are three huge problems with food aid — domestic oversupply, transportation costs, and expiry dates.
In Canada, there are specific sectors where we have an overabundance of production. I won’t go into specifics, but we do (it can be easily looked up, we get complaints about it at the WTO all the time). If we were to just dump it on the market, it would reduce prices and undercut production capacity. So we want to give away some of our excess, but the producers want some compensation for their costs in producing it. Food aid money seems like a perfect fit, right? Paying producers to grow food for developing countries who need food.
Except it costs a LOT of money to ship it. Extremely inefficient. Plus there are expiry dates by which some of it is unusable. In terms of efficiency, it makes a lot more sense to just give the money to someone like the World Food Programme, have them find a food source as close as possible to the need, and purchase it there. Plus, if the source happens to be a neighbouring developing country, that country gets the benefit of the purchase while the recipient gets the food (a double benefit). While much more complex, this is not unlike food banks preferring cash donations over food donations — they can stretch cash farther with discount purchases and lower their transport and storage fees, which means more people served. While we may have sung and danced about untying aid, food aid was the most egregious form of tied aid: on top of the inefficiencies and potential ineffectiveness, it was also just a straight-out pain in the patootie to manage. Now they can just write cheques, for the most part, no muss, no fuss. The change was more logistical than philosophical.
In the end, I expected and hoped for more, given the high initial bar. Unfortunately, the evidence just wasn’t there.
I am doing a series of articles on the book “Rethinking Canadian Aid” (University of Ottawa Press, 2015), and now it’s time for “Chapter 15: Charity Begins at Home: The Extractive Sector as an Illustration of Changes and Continuities in the New De Facto Canadian Aid Policy” by Gabriel C. Goyette. I’ve addressed some of the issues already in my review of Chapter 7 (Critique of Rethinking Canadian Aid – Chapter 7 – Continental Shift) so it will be interesting to see how far Goyette goes.
“Of the many changes that have occurred, two stand out in the literature on Canadian aid for their importance. First, the government has placed programmatic emphasis on aid effectiveness, which has led to an overly technical conception of practices. Second, it has instrumentalized aid policy and made it subservient to broader foreign policy, notably through changes in CIDA’s countries of focus and the criteria for selecting them, the emergence of priority themes with a strong impact on disbursements, a religious and security turn in aid delivery, an emphasis on humanitarian assistance, the marginalization of gender issues, and the growth of the role of the private sector, both in policy making and in practice.”
My first reaction is “wow”. What a dramatic turn for aid policy. Except most of it isn’t true except amongst the rhetoric of academics and NGOs. First, aid policy has already been shown in the above chapters not to be subservient, but rather, at most, tangential or marginal. Pronouncements of policy do not change the reality on the ground (strike one), and the ground delivery is relatively unchanged, as are the disbursement patterns amongst countries in need (strike two). Emphasis on humanitarian assistance has little to do with core development; count it as a foul ball. Gender marginalization? I’ll call it a ball, as one of the key ingredients for approaches to remain current is to show concrete approaches that are different from other methodologies and with higher results, and gender equality programming has not consistently done that…early on, it eclipsed “women in development”, but after that, it was mainstreamed and then new issues clamoured for new attention. Growth of the role of the private sector in policymaking and in practice? Yep, that’s strike three. Chapter 7 already gave the lie to that premise.
The key basis for Goyette’s analysis is the Corporate Social Responsibility (CSR) Strategy for the extractive sector. First, let’s deal with the limits of CSR to tell you anything. Governance, by and large, is “huge” for development discussions. In-country governance, democracy, transparency, global governance, donor relations, etc., all huge topics. Human rights alone could overwhelm most governance discussions for developing countries. CSR? A tiny part of the picture. It gets press, and lots of people (particularly NGOs) like to rail against big corps and call for CSR, particularly in the extractive industries. But in terms of CIDA’s expenditures? A rounding error, on a good day (as Goyette notes, but discounts). EITI gets $10M over five years — $2M a year. Chicken feed. The Andean Regional Initiative? Another $20M over 5 years, or $4M a year. Three CSR projects at the bilateral level, and a new education institute at UBC. All in all, just enough cover to tell NGOs “we’re doing something” and to give all those letter writers something to read. It means nothing to the core of development policy.
Where Goyette loses me is the argument against aid effectiveness. “Nonetheless, it is worth recalling that no comprehensive study has substantiated the notion that aid concentration is a major contributor to development effectiveness.” Actually, he’s right. No comprehensive study. Well, except for 60 years of development by multiple donors showing that spreading your aid too thin makes for multiple drips in small buckets that make no difference whatsoever; hence every developing country talking about the need for larger-scale projects to actually impact the economy, and repeated calls across the entire development community to “scale up”. I particularly like the skewed analysis that “more than half of the countries added to the list” (i.e. 4 > 3.5 of the 7) were priority markets. Except that, all other things being equal, development projects in countries with basic infrastructure in place produce greater results than those with nothing, and since the current government is about results and aid effectiveness, it’s not surprising that it would choose countries where development was likely to actually produce demonstrable results. And, if the policy was so “apparent”, wouldn’t they have added 7 countries tied to trade, not 4?
I particularly like the apparent criticism that “CIDA’s thematic focus on sustainable economic growth is particularly well suited to support extractive industries. Its three areas of privileged intervention are “building economic foundations, growing business and investing in people” (CIDA 2011a). These areas favourably align with the requirements of the extractive sector.” That’s not a coincidence, nor is it as sinister as the NGOs claim. Think about it for a minute. Name a business sector that will produce large-scale economic returns in a developing country. No, go ahead, I’ll wait. (Insert humming of the Jeopardy theme.) Great, which did you choose — agriculture (like bananas) or mining? Then look at some of the countries chosen and ask if they have an agriculture option that will generate economic growth. No? Hmmm…maybe the country might focus on mining then. Bottom line, there are five big pillars for the private sector to contribute to development — agriculture (limited for some), extractive industries (sometimes the only resource), tourism (think small islands in particular), regional infrastructure and manufacturing (such as telecoms or factories, for narrow niches), and power generation (think dams). To the critics, agriculture is popular with NGOs, but it takes corporations to make it truly profitable without relying on “fair trade” marketing protection; extractive industries are terrible for the environment, and greed is always bad; tourism is condescending, unreliable, and exploitative of the human infrastructure, as disposable income is spent on funding development (1%ers); infrastructure development is too big-corp or exploits cheap labour; and power generation destroys the environment. Pick one.
“In sum, the Canadian government’s choice of countries of focus and priority themes demonstrates a desire to ensure that aid will contribute to Canada’s trade policy priorities and benefit Canada’s own economic interests, rather than those of developing countries. These new aid policies illustrate the practical implications of the integrated foreign policy mentioned above. This does not mean Canada’s aid will have no development result on the ground. However, it illustrates how decisions are made not for the sake of development efficiency or maximal impact of Canadian ODA, but for their foreign policy impact and benefits to Canadians.”
Really? I must have missed the evidence of that in the chapter. No data, no proof, just “well, it’s obvious”. No, it’s not obvious. In fact, if you look at the rest of the government’s approach to all sectors (domestic and foreign), you see a consistent focus on “clear demonstrable results”. And, if you were choosing countries where results would be clear and demonstrable (i.e. a key component of aid effectiveness), those are the countries that are chosen. What isn’t proven is which part is the cart, and which part is the horse — if the government abandoned countries where results weren’t forthcoming and switched to some that were, and then looked at those countries and tried to maximize Canadian development investment, wouldn’t you get the same outcome? Only if you assume it fits and that there are no other explanations does the “evidence” hold.
I particularly like the further evidence of the corporate conspiracy: the fact that the government consulted the private sector during the CSR preparations. Umm, a small question — if you were putting in place a policy that would effectively serve as “regulation”, wouldn’t you consult the sector being regulated? The government does in every other sector of the economy. And yet NGOs weren’t consulted. Wait, really? Because I’m pretty sure there were a bunch of meetings with NGOs during the same period. Oh, but those apparently didn’t count. So NGOs were silenced, apparently. Hmm…isn’t that the complaint in every field for the current government? Do they talk to the NGOs they want to talk to and ignore the ones they don’t? Apparently that has a more sinister logic when it’s development. I really like the fact that the NGOs say “oh, we were consulted, but it was too fast” and claim it was not a real consultation because nothing changed, but then conclude that *although nothing changed*, the private sector was driving it. Presumably the private sector had changes that weren’t adopted too, then, if *nothing changed*?
We then come to the conclusions:
A new aid policy epitomized by the CSR policy, except that this policy only results in $5M a year in direct spending, has no relevance to more than a handful of actual CIDA projects, and is a minor but highly visible aspect of overall governance;
Aid is used as a tool for the expansion of Canadian companies abroad, which would be a little hard to do without money behind it, but sure, let’s ignore the need for evidence or the significance of the counter-evidence that tied-aid had dropped; and,
No official aid policy (except apparently every academic in the book knows what it “really” is) + no consultation (except what do you consult on if there isn’t a policy?) + unpredictable flows = a loss of legitimacy … by whose analysis? I didn’t see the government fall over this, as most Canadians don’t really care about what CIDA does or doesn’t do.
All of the above “analysis” with “no evidence” to come to conclusions that are unsupported and in two cases, almost random. So you would think, from the above, that my conclusion would be the chapter was completely worthless. But then, there’s this nugget at the end:
…these types of CSR can help in terms of risk and corporate image management (Porter and Kramer 2006, 2011), they fail to provide the full value, long-term stability, proper orientation of innovation, and first-mover opportunities attainable by a strategic or “civil” approach to CSR (Zadek 2004, 2006). Although qualitative progress on CSR can bring value to company shareholders and society alike, it is a complex, and often long and costly process. By promoting a simplistic and primary approach to CSR, the government has missed an opportunity to help Canadian companies in this process. An open policy process would presumably have helped identify and resolve this weakness in the government’s approach, thus maximizing the benefits of the expenses incurred.
Four sentences that are absolutely breathtaking in their thesis. THAT should have been the basis for the paper because that would add to the debate. Each one of the sentences could be argued, substantiated, and tested with data and evidence. Sigh. Maybe someday we’ll see that one instead of the content that was here.
I am doing a series of articles on the book "Rethinking Canadian Aid" (University of Ottawa Press, 2015), and now it's time for "Chapter 14: Canada and Development in Other Fragile States: Moving beyond the 'Afghanistan Model'" by Stephen Baranyi and Themrise Khan.
Like previous chapters on fragile states and security (Critique of Rethinking Canadian Aid – Chapter 13 – Canada's Fragile States Policy and Critique of Rethinking Canadian Aid – Chapter 8 – Preventing, Substituting or Complementing the Use of Force?), this chapter starts with the same fallacy as the NGOs it cites: that "pure aid" (whatever that means) is corrupted by "security objectives", as if peace, stability and development were somehow separate entities. Equally, relying on one of the authors' previous work, they arrive at a conclusion that doubles as their starting premise: that high aid effectiveness is correlated with a low degree of "joint approaches" (securitization). They never consider that the real variable may be instability itself. Areas of high instability create the demand for a higher level of "joint approach" and are also the areas where aid is likely to be least effective; in other words, the joined-up approach isn't the cause, but rather both the joint approach and the low effectiveness stem from the same original cause, namely high instability.
Their analysis of the effectiveness of the joint approach combines several factors, including securitization, measured by how expansive the approach is for ODA (limited to peacekeeping or involved in all aspects of security), and the degree to which aid aligns with commercial interests (which Swiss already showed in an earlier chapter to be irrelevant and a red herring, a finding backed up here again).
What I find a bit puzzling is that they seem to assume that all of the aid is securitized to the same degree across all sectors. I'd be curious whether their results would change if they analyzed only sub-totals weighted by the degree to which a sector was subject to the joint approach. For example, "joined-up" approaches are, and often have been, more rhetoric than reality. Just as donors don't always cooperate fully even when agreements are signed, government departments often have "joint approaches" that are not "true policy coherence" but rather "programmatic cooperation". By this I mean that often the government takes what CIDA was already going to do, adds it to what Foreign Affairs wants to do, adds it to what DND wants to do, rolls it all up and calls it a draft strategy, and then goes through it looking for synergies to exploit and externalities to eliminate. In the end, it looks like a "joint approach", but really it's three groups doing their own thing, talking regularly and thinking they're "in it together", when the three groups could be doing it individually and the look and feel on the ground would be no different.
As such, departments might not do much "together" on trade or gender equality, and spending (and results) in those areas is irrelevant to sector work in areas like security itself, humanitarian assistance or governance, where the "joint approach" might be quite extensive. The analysis attempts to adjust for this somewhat through the degree of "conflict sensitivity" of CIDA programming, but that is at the macro level and doesn't break it down by sector. I wonder if the results would be more pointed (either way) with such a disaggregation, perhaps weighting each sector's contribution to the overall total. For example, if programming is fully integrated for peace and development but that is only 10% of the aid total, and not at all integrated for health and education programming that make up 80%, it is perhaps unfair to say "securitization" is affecting the 80% where CIDA is just doing its own thing, with low results because of the environment, not because Foreign Affairs and DND are messing with its priorities.
I am doing a series of articles on the book "Rethinking Canadian Aid" (University of Ottawa Press, 2015), and now it's time for "Chapter 13: Canada's Fragile States Policy" by David Carment and Yiagadeesen Samy. I have to confess upfront that, as a public sector manager, I have a lot of trouble with this chapter. Overall, they basically say the field has strong conceptual challenges, yet then conclude that investments were squandered when the focus shifted from one area to another.
The complexity of dealing with, and responding to, fragile situations is reflected in the way CIDA has generally allowed “a thousand flowers to bloom,” to support partner organizations, academics, and NGOs that work on state fragility. Indeed, when it first appeared on the scene, as an idea in search of a policy, just around 9/11, the concept of state fragility brought with it a new and complex understanding of how donors and civil society interact and use analysis to support their policies. Given CIDA’s prior investments in conflict analysis, peacebuilding, public policy, and consultations with civil society, it could be assumed that the agency would have been prepared to address these challenges. Such was not the case, for a couple of reasons. First, if one examines the evolution of CIDA’s fragile states analysis and policy, we see that initially at least the organization relied on a number of initiatives that emphasized transparency, collaboration, and value-based analysis. This is because, at the time, CIDA turned to the academic, humanitarian, and NGO community to build analytical support for its policy developments. The truth is that a lot of the momentum and investments made during this period were either squandered or forgotten as various donors, including CIDA, scrambled to shift their emphasis from support to civil society (1994–2002) to state building (2003–14) with the onset of the Iraq war following 9/11 (Carment et al. 2010).
So here's my question…how can an analysis determine that they're not "doing it right" when no one knows what the "right way" actually is (or was)? Given the conceptual issues, the newness of the area, and the complexity of the issue, there is no "one" right way. Hence a shift could be worse, better, or just neutral. But that isn't how the chapter is written — they may note that "State fragility as a concept is relatively abstract and mostly unclear in terms of cause and effect", but they still claim momentum was lost, investments were squandered, and that "bold, decisive, forward-looking action was in short supply." All we really know is that there was a shift by multiple donors, with no clear evidence that the old approach wasn't working or that the new one would work better. And in the face of such uncertainty, basic strategic planning says to be cautious, stay diversified in approach, and not overcommit to "bold decisive action" that could be in the wrong direction.
These include several tools such as the “Conflict and Peace Analysis and Response Manual” (FEWER 1999) and the Peace and Conflict Impact Assessment (PCIA), which were never fully operationalized and integrated into policy making.
Before one concludes that these are failures, shouldn't there first be some evidence that they were useful approaches that worked and thus should have been fully operationalized and integrated? That would seem unlikely given the ambiguity around the programming area. Yet their "evidence" is that the "scandal-plagued PCIA initiative" did great work between 1997 and 2003. Having worked in the Department during that time, I have a very different perspective. While it may have done some good work, and I don't wish to denigrate it, it was always a sidekick to normal development programming. Put more simply, and less normatively: development is considered complex and hard to unpack, yet years of work have allowed certain types of best practices to emerge, and donors do have some general directions to follow. Fragile states, and even peace-building (which suffers from the same nomenclature problem by which WID became GE and capacity building became capacity development, i.e. you can't "build" peace but only help the participants to develop their own), were chaos. All of the analysis was "it's large, chaotic, nothing works as development should". An intractable problem that people wanted to throw money at, which, not surprisingly, attracted little interest around the department. It was not, by and large, development. It is what you do when development isn't possible. A step above humanitarian assistance, but generally only a small step. And hence of very little interest to most CIDA employees, who didn't see it as "development".
On the other hand, no self-respecting policy analyst at CIDA publicly decried the lack of space for decisions formed on the basis of good empirical evidence. That fact and the unwillingness at the political level to do things based on evidence unless it is expedient have been disconcerting. Canada’s failure to heed the evidence may well come home to roost as the situation in the Middle East worsens.
This is pretty much where the analysis disappears and the empty rhetoric reigns supreme. The authors set up a framework that says "it's chaos, nobody knew what to do conceptually, there was no framework", and yet now not only has CIDA failed to take decisive action (whatever that may have been), but the staff at CIDA are also to blame for not pushing for evidence-based decision making, despite the complete lack of anything resembling evidence or data?
When the authors decide which claim is true, either that there is no framework for what works and what doesn't, or that there is a framework they can articulate and measure performance against, they might have a coherent argument. Until then, this chapter could easily be omitted, and probably should have been.