Like previous chapters on fragile states and security (Critique of Rethinking Canadian Aid – Chapter 13 – Canada’s Fragile States Policy and Critique of Rethinking Canadian Aid – Chapter 8 – Preventing, Substituting or Complementing the Use of Force?), this chapter starts from the same fallacy as the NGOs it cites: that “pure aid” (whatever that means) is corrupted by “security objectives”, as if peace, stability and development were somehow separate entities. Equally, relying on one of the author’s previous works, they arrive at both their conclusion and their starting premise, namely that high aid effectiveness is correlated with a low degree of “joint approaches” (securitization), without considering that the real variable may be something else: the areas of high instability that create the demand for a more joined-up approach are also the areas where aid is likely to be least effective. In other words, the joined-up approach is not the cause; both the joint approach and the low effectiveness stem from the same underlying cause, high instability.
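The confounding argument above can be made concrete with a toy simulation (entirely invented numbers, not data from the chapter): if instability drives both the intensity of the joint approach and low aid effectiveness, the two will correlate strongly even when the joint approach has no causal effect at all.

```python
import random

random.seed(0)

def pearson(xs, ys):
    """Plain Pearson correlation, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical model: instability drives BOTH the degree of the
# "joint approach" and low effectiveness; the joint approach has
# NO direct effect on effectiveness.
instability = [random.random() for _ in range(1000)]
joint = [i + random.gauss(0, 0.2) for i in instability]
effectiveness = [1 - i + random.gauss(0, 0.2) for i in instability]

# Raw correlation looks like "securitization hurts effectiveness".
raw = pearson(joint, effectiveness)

# But within a narrow band of similar instability, the link vanishes.
band = [(j, e) for i, j, e in zip(instability, joint, effectiveness)
        if 0.4 < i < 0.6]
within = pearson([j for j, _ in band], [e for _, e in band])

print(f"raw correlation:    {raw:.2f}")     # strongly negative
print(f"within-band corr.:  {within:.2f}")  # near zero
```

The raw correlation is strongly negative, yet once countries of similar instability are compared with each other, the relationship largely disappears, which is exactly the spurious pattern the chapter's design cannot rule out.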
Their analysis of the variables affecting the effectiveness of the joint approach combines several factors, including securitization as measured by how expansive the approach is for ODA (limited to peacekeeping, or involved in all aspects of security) and the degree to which it aligns with commercial interests (which Swiss already showed in an earlier chapter to be irrelevant and a red herring, a finding backed up here again).
What I find a bit puzzling is that they seem to assume that all of the aid is securitized to the same degree across all sectors. I’d be curious whether their results would change if they analyzed only sub-totals weighted by the degree to which a sector was subject to the joint approach. “Joined-up” approaches are often more rhetoric than reality. Just as donors don’t always cooperate fully even when agreements are signed, government departments often have “joint approaches” that are not true policy coherence but rather programmatic cooperation. By this I mean that the government often takes what CIDA was already going to do, adds what Foreign Affairs wants to do and what DND wants to do, rolls it all up, calls it a draft strategy, and then goes through it looking for synergies to exploit and externalities to eliminate. In the end it looks like a “joint approach”, but really it’s three groups doing their own thing, talking regularly and thinking they’re “in it together”; the three groups could be doing it individually and the look and feel on the ground would be no different.
As such, departments might not do much “together” on trade or gender equality, and spending (and results) in those areas is irrelevant to the sector work in areas like security itself, humanitarian assistance or governance, where the “joint approach” might be quite extensive. The analysis attempts to adjust for this somewhat through the degree of “conflict sensitivity” of CIDA programming, but that is at the macro level and doesn’t break it down by sector. I wonder whether the results would be more pointed (either way) with such a disaggregation, perhaps weighting each sector’s contribution to the overall total. For example, if programming is fully integrated for peace and development but that is only 10% of the aid total, and not at all integrated for health and education programming that make up 80%, it is perhaps unfair to say “securitization” is affecting the 80% where CIDA is just doing its own thing, with low results because of the environment, not because Foreign Affairs and DND are messing with its priorities.
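The weighting point above can be sketched in a few lines. The sector names, shares and integration degrees below are invented for illustration (roughly matching the 10%/80% example in the text), not figures from the chapter:

```python
# Hypothetical sector breakdown: share of total aid, and degree (0..1)
# to which that sector is actually subject to the "joint approach".
sectors = {
    "peace & security": (0.10, 1.0),  # fully integrated
    "governance":       (0.10, 0.5),  # partly integrated
    "health":           (0.40, 0.0),  # CIDA doing its own thing
    "education":        (0.40, 0.0),  # CIDA doing its own thing
}

# Macro-level tagging: if any joint approach exists, the whole
# portfolio gets labelled "securitized".
macro_securitized_share = (
    1.0 if any(degree > 0 for _, degree in sectors.values()) else 0.0
)

# Disaggregated view: aid-weighted exposure to the joint approach.
weighted_exposure = sum(share * degree for share, degree in sectors.values())

print(f"macro-level label:      {macro_securitized_share:.0%} of aid")
print(f"weighted exposure:      {weighted_exposure:.0%} of aid")
```

Under these invented numbers, the macro view treats 100% of the aid as securitized while only 15% of it is actually exposed to the joint approach, which is the gap the proposed sector-level disaggregation would surface.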