The New York Times has a great article from David Leonhardt in which he tries to predict what life in 2022, a scant 18 months from now, will look like in America. He assumes no vaccine arrives this year, and that we continue to see waves of outbreaks and lockdowns.
From a business perspective, he talks about which business models will likely prove less than resilient in weathering the storm. Some likely casualties are businesses that were already vulnerable…newspapers losing advertisers, traditional department stores (Eaton’s, Zellers, K-Mart, Woolco, and Target all bit the dust in Canada long ago) losing out to Walmart and Amazon, and malls closing when they lose their department store anchors.
While universities in Canada are unlikely to fail, the same budget pressures are hitting them as they are in the U.S. — enrolment uncertainty, cancelled summer programs, residence and food service fees gone, parking revenue gone, and provincial and federal budgets taking huge beatings. I follow Alex Usher on Twitter, and he has been actively watching which universities are planning for fully virtual classes in September and which ones are hoping for some sort of mid-semester return.
I was a bit surprised Leonhardt uses such pedantic examples and doesn’t spend more time on the hardest-hit sectors like health in general, agriculture and food processing, aviation and tourism, and restaurants. He notes in the intro that they may disappear, but there are entire sectors that present far more disruption to human life than the loss of paper newspapers, the loss of department stores and malls, or disruption in higher-education options.
In the area of habits, Leonhardt identifies an important shift for white-collar workers: working from home, and remote work in general, has been successful, and I couldn’t agree more. Education from home is less successful, but I love the quote from Microsoft:
As Satya Nadella, Microsoft’s chief executive, said this spring, “We’ve seen two years’ worth of digital transformation in two months.”
Where I work in government, we have accelerated our IT plans by 2-3 years for some major projects. Things that would normally have started in 2022 or 2023 and likely would have taken 1-2 years? They’re already 50% implemented or more. Doubling bandwidth, new platforms for collaboration, and a massive increase in mobile infrastructure for workers, with huge increases in laptop deployments. We’re one department, in one government, in one country, and we have literally bought thousands of new laptops to get people connected from home. How are manufacturers keeping up with IT demand? The short answer in some cases is that they are not keeping up. If you were looking for video cameras in the first few weeks of WFH, they were scarcer than bread yeast. Months later, stocks are returning, but only because everyone already has a webcam somewhere in their digital ecosystem. Many are just using their phones. I stopped by one of the computer stores last week, and some of their shelves are looking pretty empty, particularly for larger monitors. Not enough to declare a shortage, although again, that’s partly because they’ve restocked.
I’m less enamored of Leonhardt’s predictions for the US political realm, not with a fall election hanging in the balance. Trying to do similar predictions for Canada without a set election date is probably equally useless. The Liberals are in a minority situation, and will likely continue to be, as long as the NDP keeps getting what they want on various files. But they can only go to that well so many times before the Liberals can’t afford it, and the alliance / coalition / politician’s agreement falls apart. Just as with Leonhardt’s opening question — how long does this last? — the political outcomes will be shaped by the health outcomes. Where I find Leonhardt’s rationale lacking, when he argues for sweeping roles for government in the U.S. if Biden wins, is in the budget reality he already acknowledged for higher education. Government budgets are taking a sh**-kicking, and while governments can literally print money, at some point the bill comes due. Spending at current levels is not even remotely sustainable. And if you want to spend your way out of a recession / depression, eventually you also have to make serious cuts to government, either during or afterwards.
Nevertheless, I hope there are more prediction articles like this one. If we crowd-source a couple of thousand of them, we might even approximate a forecast or come up with a to-do list for contingency planners.
As part of an update to my website, I am revamping all my featured images (https://polywogg.ca/new-featured-images-astronomy/). Having already tackled a small one (astronomy) and a large one (website and computers), I am turning my attention to a different challenge — governance. I actually have multiple categories that fall into a “governance” theme, although in many ways, “government” might be a better term for some.
I have an actual category specifically called governance, and I tend to write about a variety of things related to running a government. Elections, public administration, audits. I have more of a technical bent to my topics, and if I was completely candid, it seems like public administration would be the more likely heading. Except from time to time I go above that and intersect with policy and politics. The running of a government at a level above. Not often, but occasionally, and usually related to how the two realms — politics and public administration — intersect. At one point, I wanted a new “image” to represent all that, and given the ethereal nature of the concepts, I made up a combined image representing different parts of a governance package — politics, legislation, judicial, and the people. It’s not a huge category for me, only 30 posts out of about 1400 deal with governance issues, but it may grow once I retire.
I also used to work at CIDA dealing with international development issues. I don’t write about it very often, only 27 posts in total, and 17 of those are about one specific book where I wrote about each chapter as I went. I do like to follow what’s happening in broad trends, though, since I spent 10 years of my career dealing with the files, yet even when I do write, I tend to have a “public administration” slant to my writing, rather than development in general. I didn’t have a great idea for my international development “image”, but managed to find one that was about food security, including both growing your own food and production of meals afterwards. It’s a bit cheesy, but it’ll do.
A third area I write about regularly is the “civil service” itself. And to be honest, I haven’t had a good image to reflect that area. It’s not a lot of posts, still only about 27, but I’ve tended to bop between two images. First, I’ve used the general governance image shown above, but that doesn’t really reflect what we do. I have also often used the bottom right-hand corner of that governance image, the one of “people”, to reflect the civil service (the fourth pillar of the governance stream). Which is fine. Except that I have also used that one a LOT for something else — my posts about HR in the government. In particular, when I’m writing items for my HR guide, I’ve tended to use that image as the theme. However, to be honest, I don’t really like it for my HR guide. I need a new one for that, so I can use it here now. And, as noted, there’s symmetry with the larger combined governance image.
Which leaves me with two very specific areas to deal with. One is a “one-off” conference that I helped organize way back in 2002. The reports and docs are on my site (13 pages), and I use the logo we had for the conference.
The other is my HR guide. I have struggled with this guide for a long time, in varying forms. Mostly I have used my large tree frog image to reflect my branding for it.
But a few years ago, before I ran into some publishing snags with the Conflict of Interest people, I went ahead and had the full cover page designed for the guide.
Okay, okay, it’s a little large for a featured image for a post. 🙂 So, I’ve played with cropping a bit, and I have this.
I ain’t gonna lie…I really like that one. Okay, good. Governance images are set!
There really weren’t any forward-looking ones, at least not upfront. They had some generic elements under governance, but that was it.
What the REAL criterion should have had
It is pretty simple — is there a plan in place going forward that addresses major issues, is risk-based, and is written down? There are lots of bells and whistles beyond that, things like cost and timelines, but the most basic element is “Do they have a plan?”
What did the audit find?
The audit found that:
Departments and agencies had significant difficulties in providing timely and accurate pay information and in supporting employees in resolving pay problems
A sustainable solution will take years and cost much more than the $540 million the government expected to spend to resolve pay problems
What COULD the audit have found?
I need to digress for a minute and talk about the audit process. Generally speaking, auditors come forward and say, “Okay, here are the terms of reference for the audit, i.e. this is what we’re going to look at”. There may be some back and forth with the department to say, “Wait, what about this?” or more likely “Wait, that isn’t part of this project” — the terms of reference lay out what is in scope and what is out of scope.
Then the actual audit process begins, there are lots of documents and meetings, and preliminary findings are shared with the Department. This is the opportunity for the auditors to say, “Based on the docs we have, and the info we have been given so far, this is what we’re thinking we might say.” At this time, departments go crazy and say, “whoa, THAT’S not true, did you read this doc and this doc and this doc?”, often three docs that the auditors were never given. So the auditors are wrong about some aspect because they had no evidence of it in what they had seen so far. A gap, if you will.
Then the auditors come back with their draft audit findings and go through some iterations where the department gets to agree or disagree with some of the wording, often saying, “Wait, if you say THAT, with that language, we have to disagree, it goes too far”, and the auditors balance out their wording with their findings. While some people get their backs up that this is interfering with the independence of the auditors, it is often more along the lines of the auditors saying, “We examined the building plans for a green cabinet, a blue cabinet, and a yellow cabinet, and we found no evidence of cost analysis.” And the department says, “wait a minute, we had full analysis for yellow, and partial for blue, but we agree there was nothing for green.” And the auditors go back and look at their evidence and come back with revised wording that likely says “Not all projects had full cost analysis in place and there were significant gaps for most.” They’re still slapping someone, just making sure they’re slapping the right someone with the right language. And to be candid, some of it is seeing how much pain the department can handle. Can it handle 4 slaps or only 2? So the auditors start by saying “YOU COMPLETELY SUCK” and water down the language a bit at a time until the department stops whining, somewhere between “YOU MOSTLY SUCK” and “YOU’RE KINDA SUCKY IN CERTAIN AREAS”.
Because after the audit is done, there are two things the department has to produce and the level of work depends on which of those phrases the department could live with:
A management response; and,
A management action plan.
The management response is a simple response where the department says “We agree” or “We disagree” with the recommendations. The cycle of responses over the years has ebbed and flowed, with some periods existing where no DM ever wanted to disagree with an audit recommendation. Even if they thought the auditors completely misunderstood the situation, they would say “We agree” and then in the prose explain how they were planning either to not do what the auditors recommended or to do the exact opposite. The responses were somewhere between a sorry/not sorry situation and a Sir Humphrey response from a Yes Minister episode.
In more recent years, and with changes in Auditors General, DMs feel more comfortable saying, “Wait, hold on a minute, we grudgingly agree with your findings, but NOT your recommendations on how to fix it…so we disagree.” But Auditors General NEVER want the report to say the department disagrees, as it basically means the department is saying fairly confidently that the AG didn’t understand the subject matter or project. If a department disagrees, it means they seriously disagree, and then suddenly the AG jumps into the project directly, often working to massage the language into such “motherhood and apple pie”-type statements that NO deputy could ever disagree with the recommendation. And then they’re back to bland recommendations that the department can agree with easily. Yawn.
But, regardless of the MR, the department also has to create a management action plan. And there is one relatively universal truth to MRs and MAPs — a “plan to have a plan” is not a plan in and of itself. The department cannot say, “Oh, yeah, that’s a good idea, we’ll look at it, develop a plan, and then implement it.” They are SUPPOSED to say, “Hey, good idea, we’re going to do THIS and THIS to implement.” In other words, you need to have some content and an actual plan, not a plan to have a plan.
What would this look like in this case? They would have recommendations for a clear set of roles and responsibilities between players. Which this audit did recommend, except that the response is that they’ll create such a plan. A plan to have a plan, not the plan itself.
They would have clear recommendations relating to the role of other departments who send the pay files to Phoenix. The audit found that “Departments and agencies contributed to the problems; however, Public Services and Procurement Canada did not provide them with all the information and support to allow them to resolve pay problems to ensure that their employees consistently receive their correct pay, on time” so they did articulate a problem. Yet the response is a plan to have a plan to fix that.
There should be clear recommendations on transparency, risk-based triaging, cost breakdowns and service delivery mechanisms. Again, there are some strong hints to do that, and the response is “We’ll develop a plan.”
DEVELOP A PLAN???? What the heck have they been doing for the last two years or even the last 8 months while the audit was busting their chops internally? The Department *knew* that the audit finding was coming, and they should have had the plan relatively complete in certain areas. It should be ready to go.
Heck, the language was so watered down, it looked more like “we’re working on a strategy on how to develop a work plan that will lead us to a complete plan to respond to the challenges identified …. zzzzz”. Their plan is to develop a plan to have a plan. They haven’t even developed the PLAN for the plan, let alone the actual plan.
No governance in place, but watered-down wording that could possibly lead to little concrete change.
No transparency in data, so employees are still wondering what the state of pay is, and no recommendations or commitments to change that reporting.
Hardly any commitment at all of anything, other than a plan to have a plan.
It’s unfathomable that such an audit passed even the most basic internal tests at the OAG. Based on the actions committed to, it seems more like the audit equivalent of a hangnail than a project that is way over cost and a disaster on the ground. The system is bleeding out, but good news, they think they might know someone who can come up with a plan to develop a strategy to stop bleeding in general. But let’s not rush into anything resembling a solution.
Directive on Financial Management of Pay Administration, Treasury Board
Policy on Results, Treasury Board
Directive on Results, Treasury Board
Supporting Effective Evaluations: A Guide to Developing Performance Measurement Strategies, Treasury Board of Canada Secretariat
COBIT 5: Enabling Processes, Information Systems Audit and Control Association, ISACA
As with the review yesterday, the Policy on Results, the Directive on Results, the guide to performance measurement strategies, and COBIT 5 are virtually worthless to the exercise. They tend to talk heavily about programmatic delivery results (external results of spending), and have very little to offer in the way of measuring or monitoring internal services. To the extent they do, they tell departments what types of things they should do in general; they don’t dictate or give explicit instructions. The first two, however, are a lot more detailed and do include some directive language, along with some indication of actual service standards and duties/obligations. Not enough to run the Phoenix system, but at least PSPC managers would have had SOMETHING to rely upon.
The second criterion was:
The resolution of problems related to paying public service employees is being effectively and efficiently managed.
For this criterion, a few of the documents are the same, but it is the ones from TBS that are different and quite telling:
Guidelines on Costing, Treasury Board of Canada Secretariat
A Guide to Costing of Service Delivery for Service Standards, Treasury Board of Canada Secretariat
COBIT 5: Enabling Processes, ISACA
Information Technology Infrastructure Library (ITIL) Service Strategy, second edition, 2011
What the REAL criterion should have focused on
Now, if you take those above pieces, and break them down into manageable chunks to audit, you would expect to find some of the following:
A comprehensive inventory of all the pay action requests in the system;
Detailed reports of nature (type, age, and department) and impact (materiality, $$ estimate, $$ as a percentage of annual salary);
Clear project management principles showing differentiated approaches based on nature and impact;
A risk-based triage process and analysis of the various PARs (a rough sketch of what that could look like follows these lists);
Cost breakdowns of the steps taken to date and how they impacted the resolution of numbers outstanding;
Key performance measures in place for overall and individual workload management, tied to nature and impact; and,
Training in place to cover basic needs, ongoing maintenance, and emerging issues.
While there are other things you COULD see, those seven items are pretty basic tools.
You could also likely examine three other items that deal not with the PARs themselves, but the client service function:
Clear identification and public sharing of service standards for various types of PARs and how the system is doing, updated likely weekly;
The system in place for people to access and receive status updates on their individual file and to know what is happening, even a queue number if the answer is nothing yet; and,
Detailed communications plan in place to transparently share the detailed reports.
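Since “risk-based triage” keeps coming up, here is a rough sketch of what one could look like in practice, purely for illustration. The record fields, weights, and thresholds below are all invented; nothing here reflects how PSPC actually codes or prioritizes files.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical pay action request record; the field names are invented for illustration.
@dataclass
class PayActionRequest:
    par_type: str        # e.g. "new hire", "acting pay", "leave", "termination"
    amount_owed: float   # rough dollar estimate of the pay affected
    opened: date         # when the request entered the queue
    annual_salary: float # used to gauge impact relative to income

def triage_score(par: PayActionRequest, today: date) -> float:
    """Combine 'nature' (type, age) and 'impact' (dollars, share of salary)
    into a single priority score. Higher means deal with it sooner."""
    age_months = (today - par.opened).days / 30.4
    share_of_salary = par.amount_owed / max(par.annual_salary, 1)

    # Invented weights: dollar impact and share of salary dominate,
    # age adds urgency (crossing tax years matters), and some request
    # types are treated as inherently higher risk.
    type_weight = {"new hire": 2.0, "termination": 2.0, "leave": 1.5, "acting pay": 1.0}
    score = (
        min(par.amount_owed / 1000, 20)   # cap so one huge file doesn't swamp the scale
        + share_of_salary * 10
        + min(age_months, 24) * 0.5
    ) * type_weight.get(par.par_type, 1.0)
    return score

# Example: an employee unpaid for over a year vs. a missing three-day acting stint.
big = PayActionRequest("new hire", 38000, date(2017, 3, 1), 55000)
small = PayActionRequest("acting pay", 250, date(2018, 4, 20), 70000)
print(triage_score(big, date(2018, 5, 29)), triage_score(small, date(2018, 5, 29)))
```

The exact numbers don’t matter; the point is that nature (type and age) and impact (dollars, share of salary) can be combined into something explicit and defensible, rather than whichever pressure point is loudest that week.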
What did the audit find?
The audit concluded that “The number of pay problems continues to increase”, that “Public Services and Procurement Canada did not have a full understanding of the extent and causes of pay problems”, and that “Departments and agencies had significant difficulties in providing timely and accurate pay information and in supporting employees in resolving pay problems.”
What COULD the audit have found?
It is clear to anyone and everyone that the solutions in place are not meeting the needs. And on some of the elements I mentioned above, the auditors did have some views:
The inventory was not comprehensive, there were clear gaps;
The reports are rudimentary at best, and don’t give details on nature or impact;
Differentiated approaches based on nature and impact were used, mainly in response to various pressure points over time, but with little analysis or evidence of the result for each group; and,
The training was not done before launch and hasn’t kept up.
They could have also expressed concern that there were clearly other gaps:
No detailed risk-based triage;
No cost breakdowns;
Little in the way of performance metrics or service standards; and,
No exception management system, nor any feedback and status mechanism.
The audit failed in this area on two counts. First, the audit recommendations could have been quite prescriptive and detailed, saying “We recommend you do x or y, and do it by such and such a date”, which PSPC would then have to commit to doing, and to do so publicly. The recommendations are more general than that, telling them to do better rather than saying they failed to meet even the most basic standards. As a result, PSPC basically was able to respond that it will develop a plan as to what its overall plan should be. A plan to have a plan, not even the final plan itself.
More importantly, though, the auditors had access to the internal data. While it is clear that the PSPC system is not robust enough to generate the reports needed, some more rudimentary reports could have been developed and calculated. And, given the public spectacle surrounding the audit, part of the role of auditors is to report on what is happening and how it is performing. Instead of giving us the reports we need, or coming as close as they could at least, they went for straight-up overall volumes. Stakeholders — namely employees — had almost no more useful info or data than before the audit.
Here is the most minimal of tables that I had expected and hoped to see:
| Type of Pay Action Request | < 1 month | 1 month to 6 months | > 6 months |
| --- | --- | --- | --- |
| Put in pay (i.e. new employees) | | | |
| Acting pay (up to 2 weeks) | | | |
| Acting pay (over 2 weeks) | | | |
| Removals from pay (retirement) | | | |
| Removals from pay (special leave) | | | |
Don’t get me wrong, I think that table is insufficient. I just think it was the most basic table and that they should have been able to provide it, or even generate it as part of the audit, even if there was a gaping sub-area / black hole called “other”. Basically, people, they hadn’t triaged yet so they weren’t even sure what was in there. But arguing there are 500K requests doesn’t tell me anything about nature OR impact.
Now, I expected that table, and it didn’t come. Nor were there any details on maternity leave, sick leave, overpayments, assignments, secondments, other administrative changes, etc. There could be another 10-20 categories for the type of PAR, but it is not just the categories, it’s the time factor beside them — how old are the requests, how big is the queue? Because once the report is generated the first time, it can be generated again, showing changes since the last one. I suspect the age figures would have to be even more disaggregated (maybe 1m, 2-3m, 4-6m, 6m-1y, 1-2y, 2-3y, etc.). That would give you decent details on the nature — at least for the type of PAR and the age of the request.
Where I was apparently dreaming in technicolour was in thinking that they might even go further. Think of the above table, with the same PAR categories down the side, but instead of showing the age, showing the dollar value across the top. For some reason, the materiality threshold set by PSPC was $100. Almost every request would be over that threshold in gross terms, particularly as acting pay is only paid if it is longer than 3 days now. So their threshold is meaningless. Instead, I’d hope to see something along the lines of 0-1000, 1000-2500, 2500-5000, 5-10K, 10-20K, > 20K as the cutoffs.
Why? Because it would give a clear and compelling indication of the magnitude of the problem. There are lots of stories of people complaining about Phoenix, but not all complaints are created equal. Some are life-altering disasters, with people not being paid for over a year, running into tens of thousands of dollars owed. At the same time, there’s Joe Worker next to them in the queue who didn’t get their three-day acting pay last week. Both people deserve to be paid, in full and on time, but when I have to triage the files, the person who is owed more money is likely facing a larger personal impact, particularly if it is combined with a longer period of time and affecting multiple tax years.
Now, the auditors hinted that the info isn’t available, and to be blunt, I don’t entirely believe them. I believe the info isn’t READILY available, but I don’t believe that it couldn’t be generated with some basic methodology. Even if they had 300K files in the queue, and didn’t really have a way to code 40% of them, they would still know the profile for the ones they have and be able to extrapolate what it means for the others. Is it a perfect methodology? Nope, but there are lots of previous audits that did more with less.
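To make that “basic methodology” concrete, here is a minimal sketch of how rudimentary reports could be pulled from whatever inventory exists, using the age and dollar bands discussed above. The file name and column names are assumptions on my part, not anything PSPC actually produces, and the extrapolation step is deliberately crude.

```python
import pandas as pd

# Hypothetical extract of the PAR inventory; the column names are invented.
pars = pd.read_csv("par_inventory.csv")   # par_type, age_days, amount_owed, coded (bool)

# Age and dollar bands roughly matching the breakdowns discussed above.
age_bands = pd.cut(pars["age_days"],
                   bins=[0, 30, 90, 180, 365, 730, 10_000],
                   labels=["<1m", "1-3m", "4-6m", "6m-1y", "1-2y", ">2y"])
dollar_bands = pd.cut(pars["amount_owed"],
                      bins=[0, 1_000, 2_500, 5_000, 10_000, 20_000, 10_000_000],
                      labels=["0-1K", "1-2.5K", "2.5-5K", "5-10K", "10-20K", ">20K"])

coded = pars["coded"]

# Table 1: type of PAR by age of request (counts), built only from coded files.
by_age = pd.crosstab(pars.loc[coded, "par_type"], age_bands[coded])

# Table 2: type of PAR by dollar value owed.
by_dollars = pd.crosstab(pars.loc[coded, "par_type"], dollar_bands[coded])

# Crude extrapolation: assume the uncoded files follow the same profile as the
# coded ones, and scale the counts up to the full queue. Not perfect, but it
# gives a defensible estimate of nature and impact for everything outstanding.
scale = len(pars) / coded.sum()
estimated_total = (by_age * scale).round().astype(int)

print(by_age, by_dollars, estimated_total, sep="\n\n")
```

Even with 40% of the files uncoded, the coded portion gives you a profile to scale up from. Imperfect, sure, but far more informative than a single figure for overall volume.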
While there were some basic charts and tables included, they really didn’t provide much info on the scope at all. I thought they at least might have profiled one of the participating departments as an example, but they didn’t even do that.
I literally felt like it was a completely missed opportunity to pull back the curtain and provide SOME info to all the affected employees. They deserved an audit that went farther and produced more, particularly when the auditors saw that PSPC didn’t have the data already available.
When I read the Office of the Auditor General’s audit of Phoenix, I was beyond disappointed (A disappointing audit of the Phoenix problems). In part, I think it is because I am too familiar with audits from my previous job where I read just about every audit done by my department in the last nine years, plus some of the broader OAG ones. Yep, I’m a public admin geek. I was even somewhat amused when I saw the news coverage about how aggressive the report was in its condemnation. And, if you weren’t a regular reviewer of audits, you might just go with the press conference and some of the findings and think, “Okay, they’re being appropriately harsh”.
Except the OAG knows how to be harsh when something isn’t working, and the language they would use for that kind of screw-up wasn’t present in the report. So let’s look at the report and see what they COULD (or even should?) have said, but didn’t.
What were the criteria?
Let’s go in reverse order, and start with the third criterion that the auditors set up in their audit:
Comprehensive and coherent governance and oversight detailing accountabilities and responsibilities for resolving problems related to paying public service employees are defined, agreed to, and implemented.
That’s what they expected should be in place, and that’s what they were looking to find. They based that criterion on a bunch of documents, including:
Financial Administration Act
Public Service Employment Act
Department of Public Works and Government Services Act
Directive on Financial Management of Pay Administration, Treasury Board
Policy on Terms and Conditions of Employment and the Directive on Terms and Conditions of Employment, Treasury Board
Policy Framework for People Management, Treasury Board
Policy Framework for the Management of Compensation, Treasury Board
COBIT 5: Enabling Processes, ISACA
Now, here’s the thing. NONE of those documents say what that oversight framework should look like. They have hints, sure, like the fact that there should be clear roles and responsibilities, there should BE a framework that everyone knows, and generally speaking, that it should be focused on doing proper things related to the subject matter, in this case, pay and benefits.
But the really interesting thing is the last one. Quoting from the Wikipedia page, “COBIT (Control Objectives for Information and Related Technologies) is a good-practice framework created by international professional association ISACA for information technology (IT) management and IT governance.”
This is standard practice for the OAG. They look at the various docs from within the government, realize that there’s no professional standard to really say what SHOULD have been done, and thus they use an industry best-practice to help them figure out what the standard should have been. You know, as if the people running Phoenix had done the same research, read the same best practice that the OAG “discovered”, decided it applied to this situation and used it.
Now before you think I’m defending the Phoenix managers, I’m not. I’m just pointing out that using COBIT 5 as the benchmark basically is the OAG admitting that there was no clear, existing standard in the first 10 documents to tell them what should have been done, and thus they had to create one. But it is mainly to give themselves some sort of independent, industry-based “cover” for their approach. It’s mostly worthless and has nothing whatsoever to do with the audit.
What the REAL criterion should have focused on
This criterion should have been broken down into some basic building blocks:
A. Was there governance and oversight? Before you can decide on the rest, you need to rate whether there was anyone in charge. And if so, identify who and for what. If you look at it from a standard “project” basis, there would need to be clear parcelling out of problem identification, consideration of options and alternatives, analysis of individual options with recommendations, policy and program design of a single option, functional and operational design guidance, service delivery design, and implementation (which itself would break out into multiple sub-headings, more applicable to the other two criteria). Now, the audit says that the first 4 or 5 of those are either not part of this audit or political decisions that won’t be rated. However, without identifying all the steps, it’s almost impossible to know if there was any governance and oversight throughout. Basically, without oversimplifying, the question isn’t which of those steps did what and when, it’s whether those in charge had the equivalent of a project charter that laid all that out from beginning to end. It is the most basic element. And it doesn’t exist. Public Services and Procurement Canada (PSPC) has bits and pieces of some of it, but it doesn’t have a comprehensive roadmap. Or anything even close to it. If there was one, you would move on to secondary questions like whether it was risk-based, implementation schedules, work plans, updates, monitoring, key performance measures and milestones, detailed reporting, transparent sharing of the documents amongst all the key players, etc. Don’t get me wrong, almost no large-scale project in the government has all of those things, so it’s a sliding scale. But to even get ON the scale, you need the most basic tool of governance and oversight to lay out the various phases and steps in detail. Call it the picture on the box for putting the jigsaw together, and they not only don’t have the picture, some of the people involved are also making up their own pieces.
B. Were there detailed accountabilities and responsibilities? If you did the first step, you could then move to the second step, where you lay out who is doing what, how, when, and even why. But as you can tell from reading the audit, the first time all the roles and responsibilities were clearly laid out seems to be in response to the audit. In order to explain it to the auditors, documents were created after the fact to say “Hey, here’s what we’re doing!”. That sounds terrible, I know, but to be blunt, it’s not uncommon. Many projects get by with far less documented detail than auditors want, and even the COBIT thing is part of that…if PSPC had a single doc that put it all together as what they said they were going to do, then the auditors would just audit them against that document. Without it, the auditors kind of have to invent it, and you can see that when they describe it, they have no source material to base it on other than the interviews and ad hoc materials created by PSPC (most likely for the audit, or at least coinciding with the same timelines as the audit). But the audit does make it clear in multiple places that the docs weren’t being used by management, or in some cases didn’t even exist previously.
C. Was the governance comprehensive and coherent? Okay, while I hate to let PSPC off the hook, this is a standard that is virtually impossible to meet with auditors. Because no matter what you have done, the auditors will always say you could have done more or done it better. There is only one organization in the entire government that tends to meet their standard, and it is the military. If you are deploying troops to a battlefield with a mix of ice, snow and sand, there is a document somewhere in DND’s doctrine manuals that says how many latrines to build, where the materials are coming from, who is responsible for setting them up on what day, and likely the number of rolls of toilet paper needed for the size of the military force deployed. Most other departments don’t even come close to that standard, nor should they try. It is over-management to the nth degree. However, the lower standard that can be met is if all the players are identified in terms of their roles (linked to B above) and the various stages (linked to A above), and if all of them are generally contributing to the same recognized goal that they all agreed to in advance.
What did the audit find?
The audit concluded that there was “no comprehensive governance structure in place to resolve pay problems in a sustainable way”.
There is a bit more obiter dicta (commentary) in the text, but that was their big conclusion. That’s it, that’s all.
What COULD the audit have found?
It could have said that there was virtually no evidence of oversight, governance, accountability, or responsibilities at even the most rudimentary levels that you would expect to find in place for a branch picnic, let alone a multi-million dollar project of this level of complexity.
It could have listed 20 or 30 project tools or documents that the auditors expected could have been put in place for any project management endeavour, and then noted that PSPC had virtually none of them complete. Maybe even ranked them on a scale of 1 to 5 for each, where 1 was notional and 5 was complete. It would clearly show on the chart/table that PSPC never went above 3 (partial) on any of the 20-30 elements. This isn’t rocket science or even a new methodology; it’s similar to how TBS does Management Accountability Framework assessments. TBS basically says “We expect the following six things, which ones do you have?” and then rates them for degree of completeness. And then assigns colours/risks based on how far off the standard the department is.
Any one of the three groupings I used above could have five to ten documents/tools under each. A good project wouldn’t necessarily have them all, but they should have MOST. And clear explanations why some of the others weren’t applicable.
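For what it’s worth, here is a hypothetical sketch of that kind of MAF-style rating. The list of expected tools and the scores are made up; the only point is the mechanics of rating each element from 1 (notional) to 5 (complete) and converting the distance from the standard into a risk colour.

```python
# Hypothetical project-management elements an auditor might expect to see,
# each rated 1 (notional) to 5 (complete). The scores below are invented.
expected_elements = {
    "Project charter": 1,
    "Governance structure / committee terms of reference": 2,
    "Roles and responsibilities (RACI or equivalent)": 2,
    "Integrated work plan with milestones": 1,
    "Risk register and risk-based triage": 1,
    "Cost breakdown and tracking": 1,
    "Key performance measures and service standards": 2,
    "Training plan": 2,
    "Communications / reporting plan": 3,
}

def risk_colour(score: int) -> str:
    """Assign a colour based on how far the element is from the standard (5)."""
    if score >= 4:
        return "green"
    if score == 3:
        return "yellow"
    return "red"

for element, score in expected_elements.items():
    print(f"{risk_colour(score):>6}  {score}/5  {element}")

average = sum(expected_elements.values()) / len(expected_elements)
print(f"\nOverall completeness: {average:.1f}/5")
```

A table like that would make the difference between “no structure at all” and “a partial structure with gaps” impossible to hide, which is exactly the distinction the report’s wording blurs.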
So when the auditors say there was “no comprehensive governance structure in place to resolve pay problems in a sustainable way”, there are 5 weasel phrases in there that water it down:
Was there NO structure or just not the FULL structure?
Was it not comprehensive, only partial?
Was there a plan but it wasn’t in place?
Was it not sustainable?
Was it not capable of full-resolution, only partial?
By sticking those five weasel phrases in, it softens the report by a factor of five.
Instead, it could have said there was no project management capacity applied to the project at even the most basic levels.
Or it could have said that the most fundamental aspects of governance and oversight are completely non-existent.
THAT would have been an auditor being harsh. And based on all the reports coming out, much more accurate. But if an auditor is that harsh, it means someone would pretty much need to be fired. Because someone was SUPPOSED to be in charge, and such a finding would mean that they clearly didn’t do any of the things they were expected to do.
And that’s only the first of three conclusions that we didn’t get, and it would be disappointing all by itself.