The Phoenix audit we could have had – Part 1
When I read the Office of the Auditor General’s audit of Phoenix, I was beyond disappointed (A disappointing audit of the Phoenix problems). In part, I think it is because I am too familiar with audits from my previous job, where I read just about every audit done by my department in the last nine years, plus some of the broader OAG ones. Yep, I’m a public admin geek. I was even somewhat amused when I saw the news coverage about how aggressive the report was in its condemnation. And, if you weren’t a regular reviewer of audits, you might just go with the press conference and some of the findings and think, “Okay, they’re being appropriately harsh”.
Except the OAG knows how to be harsh when something isn’t working, and the language they would use for that kind of screw-up wasn’t present in the report. So let’s look at the report and see what they COULD (or even should?) have said, but didn’t.
What were the criteria?
Let’s go in reverse order, and start with the third criterion that the auditors set up in their audit. They based that criterion on a bunch of documents, including:
- Financial Administration Act
- Public Service Employment Act
- Department of Public Works and Government Services Act
- Shared Services Canada Act
- Pay Disbursement Administrative Services Order, 2011
- Policy on Internal Control, Treasury Board
- Directive on Financial Management of Pay Administration, Treasury Board
- Policy on Terms and Conditions of Employment and the Directive on Terms and Conditions of Employment, Treasury Board
- Policy Framework for People Management, Treasury Board
- Policy Framework for the Management of Compensation, Treasury Board
- COBIT 5: Enabling Processes, ISACA
Now, here’s the thing. NONE of those 11 documents say what that oversight framework should look like. They have hints, sure, like the fact that there should be clear roles and responsibilities, there should BE a framework that everyone knows, and generally speaking, that it should be focused on doing proper things related to the subject matter, in this case, pay and benefits.
But the really interesting thing is the last one. Quoting from the Wikipedia page, “COBIT (Control Objectives for Information and Related Technologies) is a good-practice framework created by international professional association ISACA for information technology (IT) management and IT governance.”
This is standard practice for the OAG. They look at the various docs from within the government, realize that there’s no professional standard to really say what SHOULD have been done, and thus they use an industry best-practice to help them figure out what the standard should have been. You know, as if the people running Phoenix had done the same research, read the same best practice that the OAG “discovered”, decided it applied to this situation and used it.
Now before you think I’m defending the Phoenix managers, I’m not. I’m just pointing out that using COBIT 5 as the benchmark is basically the OAG admitting that there was no clear, existing standard in the first 10 documents to tell them what should have been done, and thus they had to create one. But it is mainly to give themselves some sort of independent, industry-based “cover” for their approach. It’s mostly worthless and has nothing whatsoever to do with the audit.
What the REAL criterion should have focused on
This criterion should have been broken down into some basic building blocks:
- Was there governance and oversight? Before you can decide on the rest, you need to rate whether there was anyone in charge, and if so, identify who and for what. If you look at it from a standard “project” basis, there would need to be a clear parcelling out of problem identification, consideration of options and alternatives, analysis of individual options with recommendations, policy and program design of a single option, functional and operational design guidance, service delivery design, and implementation (which itself would break out into multiple sub-headings, more applicable to the other two criteria).

  Now, the audit says that the first four or five of those are either not part of this audit or political decisions that won’t be rated. However, without identifying all the steps, it’s almost impossible to know whether there was any governance and oversight throughout. Basically, without oversimplifying, the question isn’t which of those steps did what and when; it’s whether those in charge had the equivalent of a project charter that laid all of that out from beginning to end. It is the most basic element. And it doesn’t exist. Public Services and Procurement Canada (PSPC) has bits and pieces of some of it, but it doesn’t have a comprehensive roadmap, or anything even close to one.

  If there were one, you would move on to secondary questions: was it risk-based, were there implementation schedules, work plans, updates, monitoring, key performance measures and milestones, detailed reporting, transparent sharing of the documents amongst all the key players, and so on. Don’t get me wrong, almost no large-scale project in the government has all of those things, so it’s a sliding scale. But to even get ON the scale, you need the most basic tool of governance and oversight to lay out the various phases and steps in detail.

  Call it the picture on the box for putting the jigsaw together, and they not only don’t have the picture, but some of the people involved are also making up their own pieces.
- Were there detailed accountabilities and responsibilities? If you did the first step, you could then move to the second step, where you lay out who is doing what, how, when and even why. But as you can tell from reading the audit, the first time all the roles and responsibilities were clearly laid out seems to be in response to the audit. In order to explain it to the auditors, documents were created after the fact to say “Hey, here’s what we’re doing!”. That sounds terrible, I know, but to be blunt, it’s not uncommon. Many projects get by with far less documented detail than auditors want, and even the COBIT thing is part of that…if PSPC had a single doc that put it all together as what they said they were going to do, then the auditors would just audit them against that document. Without it, they kind of have to invent one, and you can see that when the auditors describe it, they have no source material to base it on other than the interviews and ad hoc materials created by PSPC (most likely for the audit, or at least coinciding with the same timelines as the audit). But the audit does make it clear in multiple places that the docs weren’t being used by management, or in some cases didn’t even exist previously.
- Was the governance comprehensive and coherent? Okay, while I hate to let PSPC off the hook, this is a standard that is virtually impossible to meet in auditors’ eyes. Because no matter what you have done, the auditors will always say you could have done more or done it better. There is only one organization in the entire government that tends to meet their standard, and it is the military. If you are deploying troops to a battlefield with a mix of ice, snow and sand, there is a document somewhere in DND’s doctrine manuals that says how many latrines to build, where the materials are coming from, who is responsible for setting them up on what day, and likely the number of rolls of toilet paper needed for the size of the military force deployed. Most other departments don’t even come close to that standard, nor should they try. It is over-management to the nth degree. However, the lower standard that can be met is if all the players are identified in terms of their roles (linked to the second point above) and the various stages (linked to the first point above), and if all of them are generally contributing to the same recognized goal that they all agree to in advance.
What did the audit find?
The audit concluded that there was “no comprehensive governance structure in place to resolve pay problems in a sustainable way”.
There is a bit more obiter dicta (commentary) in the text, but that was their big conclusion. That’s it, that’s all.
What COULD the audit have found?
It could have said that there was virtually no evidence of oversight, governance, accountability, or responsibilities at even the most rudimentary levels that you would expect to find in place for a branch picnic, let alone a multi-million dollar project of this level of complexity.
It could have listed 20 or 30 project tools or documents that the auditors expected to be in place for any project management endeavour, and then noted that PSPC had virtually none of them complete. Maybe even ranked each on a scale of 1 to 5, where 1 was notional and 5 was complete. The resulting chart/table would clearly show that PSPC never went above a 3 (partial) on any of the 20-30 elements. This isn’t rocket science or even a new methodology; it’s similar to how TBS does Management Accountability Framework assessments. TBS basically says “We expect the following six things, which ones do you have?”, rates them for degree of completeness, and then assigns colours/risks based on how far off the standard the department is.
Any one of the three groupings I used above could have five to ten documents/tools under each. A good project wouldn’t necessarily have them all, but they should have MOST. And clear explanations why some of the others weren’t applicable.
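The kind of completeness rating described above is mechanical enough to sketch in a few lines of code. This is purely illustrative: the artifact names, scores, and colour thresholds below are my own assumptions, not anything drawn from the audit or from the actual MAF methodology.

```python
# Hypothetical sketch of a MAF-style completeness assessment.
# Each expected governance artifact gets a score from 1 (notional)
# to 5 (complete); a risk colour is assigned based on how far the
# department falls short. All names and scores here are invented.

ARTIFACTS = {
    "project charter": 1,
    "risk register": 2,
    "implementation schedule": 3,
    "work plan": 2,
    "key performance measures": 1,
    "reporting framework": 2,
}

def risk_colour(score: int) -> str:
    """Map a 1-5 completeness score to a risk colour (assumed thresholds)."""
    if score >= 4:
        return "green"
    if score == 3:
        return "yellow"
    return "red"

def assess(artifacts: dict) -> dict:
    """Return the risk colour for each expected artifact."""
    return {name: risk_colour(score) for name, score in artifacts.items()}

if __name__ == "__main__":
    for name, colour in assess(ARTIFACTS).items():
        print(f"{name}: {colour}")
```

On scores like the ones assumed above, nothing rates better than yellow, which is exactly the kind of at-a-glance table the audit could have published.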
So when the auditors say there was “no comprehensive governance structure in place to resolve pay problems in a sustainable way”, there are five weasel phrases in there that water it down:
- Was there NO structure or just not the FULL structure?
- Was it not comprehensive, only partial?
- Was there a plan but it wasn’t in place?
- Was it not sustainable?
- Was it not capable of full resolution, only partial?
By sticking those five weasel phrases in, it softens the report by a factor of five.
Instead, it could have said there was no project management capacity applied to the project at even the most basic levels.
Or it could have said that the most fundamental aspects of governance and oversight are completely non-existent.
THAT would have been an auditor being harsh. And based on all the reports coming out, much more accurate. But if an auditor is that harsh, it means someone would pretty much need to be fired. Because someone was SUPPOSED to be in charge, and such a finding would mean that they clearly didn’t do any of the things they were expected to do.
And that’s only the first of the three conclusions that we didn’t get, and it would be disappointing all by itself.