A disappointing audit of the Phoenix problems
As a civil servant, I was incredibly disappointed with the recent Phoenix audit, although maybe I just expected too much of it. Things that should have been clearly there, I would have thought, were in fact absent. Wording that I expected to be extremely harsh was toned down. Recommendations that would seem to be obvious ways forward were missing in action.
A friend asked me earlier this week where my indignant anger at the fiasco was, and I think part of my passivity was because I knew the audit was coming. And I expected it to be a bombshell…a true blockbuster for its impact. Based on the actual wording, it seems more like they were going for a children’s firecracker that fizzled.
I expect three things from an audit:
- A clear articulation of the project’s goal and what they were trying to do;
- A clear assessment or analysis of performance, evaluated against an objective standard; and,
- Clear recommendations for a way forward, and a response from the organization on how it will address those recommendations.
This audit doesn’t do any of those three things.
Understanding what an audit actually does
Most people hear the word audit and they immediately think of audits like what happens to taxpayers when they get audited by Revenue Canada or the Internal Revenue Service. You get called in, you bring in all your receipts, they accept some things, deny others, make you go through every receipt and line item one at a time, and generally grill you like a fish. When it’s over, you wring out your underwear, write a cheque for whatever was denied, and trundle off muttering dark thoughts about their parents but settling for being happy to escape.
That’s a TAX audit of individuals and has no resemblance to an audit of an organization.
When people hear that a company is being audited, they think it’s the same thing. Line by line, item by item. Particularly when it is a company that is in financial trouble or has been playing fast and loose with financial reporting. But that’s not what an audit of an organization looks like. It can’t be. They can’t look at every line item in the books; there are too many. They have to find a way to triage the work, and so, regardless of whether it’s a financial audit, a management audit, a consulting audit, or an operational audit, they have a general approach to how they work.
A. SCOPING. The first step is to figure out what is in scope and what is out…that sounds kind of simple, or mundane, but it’s not. It’s the most important question of the whole audit. For the recent Phoenix audit, the first thing that was screened out was the initial decision to move everything to Miramichi. That’s a political decision, and the Office of the Auditor General does not audit politicians or political decisions. It only audits the implementation of political decisions by bureaucrats.
That’s how democracy works. Politicians are elected to serve, and as long as they don’t violate the constitution or commit actual crimes, their decisions are only formally evaluated/audited in one way — elections. To use the vernacular, and to quote my father, if they screw up, “you throw the bums out” at the next election.
So I knew there would be no reference to the political decision itself. However, in the lead up to the Phoenix audit, I missed a sequencing nuance. I realized there would be TWO audits, sure, and one would look at the implementation of the centralization and one would look at pay processing. But I expected they would be sequenced in chronological order — look first at the process that created the system, and then look at the outcomes of that system as it is currently running.
I’m sure there were lots of people who said, “Start with pay processing” because it is the part that matters to unpaid civil servants right NOW. Not how we got here, but why it’s not working and how to fix it.
Yet I cannot but remain confused — how do you explain gaps and ways to fix it if you don’t first examine how the problem was created in the first place?
There’s an analogy a friend shared with me about upstream and downstream activities. Downstream you see a whole bunch of people in the river, and they have to be saved from drowning. So you start treating the symptoms and saving people, hiring lifeguards and rescue workers, and save everyone you can. Meanwhile, upstream is someone on a pier pushing people into the water. The cause, not the symptom. Fix the cause, stop it from getting worse.
So I fully expected the audit to scope IN the initial implementation and scope OUT some of the current processing. Nope, it was the other way around. Odd.
B. GOALS / OBJECTIVES. Some people mistake this area for the scope, and while they are related since the scope affects the wording, the question is simpler than that…the auditors basically ask: What did you intend to do? What was the objective of the program as you defined it?
To do that, they ask for source materials. Original decision memos. Treasury Board submissions. Memoranda to Cabinet. Not to see the policy advice and options, those are out of scope as being “political”, but to see what it was that the politicians approved. What was the “decision” that they made…did they say “Centralize and damn the torpedoes”? Did they say, “Pay them right, pay them fast”? What was the actual goal?
Because the goal is what the auditors are going to measure you against in the audit. Did you make the right decisions at the right times with the right information and the right control structures in place for monitoring to achieve the goal that you set?
In this case, having already scoped out the implementation into a separate audit, the objective was defined as:
Whether selected departments and agencies resolved pay problems in a sustainable way (that is, effectively and efficiently) to ensure that public service employees consistently received their correct pay, on time?
It’s well-worded, very succinct, and entirely defensible to the public, the politicians and the public service. Put differently, auditors are really good at framing the question precisely and not hiding in the weeds.
That’s a great question… do the process and the system work?
Well, anyone looking at that is going to say, “ummm, no.” If they are affected by Phoenix, it will be a resounding “hell no”. Which then begs the question…why bother with an audit? Because it is the additional findings and recommendations that go with that conclusion that will attract responses, solutions and usually resources to resolve it.
Audits are BIG DEALS. There are management responses required, signed by deputy ministers, with action plans to implement and to report on regularly for how they are doing at implementing the recommendations. Clear, public, and shared with Parliament. This isn’t simple corporate reporting that is done annually…this is ON TOP of all of the normal reporting and almost always requires a separately designated team to manage the response, monitor the implementation, and report clearly to the DM if/when a recommendation isn’t being addressed.
C. TOUCH POINTS. This is my term of art, not what auditors use. But one of the things auditors do is try to establish a timeline for all the big decisions taken during the life of the audit period. In this case, the period ran from 24 February 2016 to 30 June 2017. Sixteen months.
In that time frame, they would look at all the decisions by the big players — Public Services and Procurement Canada, Treasury Board Secretariat, and the various departments and agencies involved. Much of that timeline would usually show up near the front of the audit in the description of the implementation, maybe broken down into phases. What I find weird, though, is that because they are looking ONLY at processing, they went only with the milestones of completing hiring (February 2016), first-wave onboarding, second-wave onboarding, and by default, the end date of the period of the audit. Those aren’t decision points.
But the audit would usually go even further than that.
It would say okay, there was a decision at month 1.5 of 16 by TBS. Let’s look at the material provided with that, the options considered, the analysis that went with it, how it was communicated, etc. Then they would see another decision at month 4.5 of 16 made by PS&PC, and do the same. And so on through the whole process. That gives them their timeline of decision points, not events.
And lastly, they would then go through and identify where control and monitoring systems were put in place to allow management to know if things were working the way they were intended when the decision was made, if the systems were capable of sending error signals back to the decision-maker for action, and if/when error signals were sent, were they acted upon properly.
Those three pieces — decision points, decision making, and control mechanisms — are the cornerstones of every audit I have seen in the last 20 years. And none of that is included in the report.
I find it difficult to believe it wasn’t done. I can’t see how they could have done the audit without it. But the failure to provide it is a giant lapse in transparency and communications that would have helped people understand who decided what and when, and why. And what they did about it. Separate from all the prose, I was looking forward to the diagram. So I’m incredibly disappointed it isn’t included. I would be embarrassed beyond belief if it was never even created.
D. BENCHMARKS. Auditors have to have some sort of benchmark by which to judge you; it can’t seem like they are merely Monday morning quarterbacks second-guessing what you decided the day before. Don’t get me wrong, that view is popular. Many people think auditors are basically a cross between robbers who pick the pockets of dead soldiers on the battlefield and soldiers who run around bayoneting the wounded.
But they start with any standards already available. For example, on compliance audits, they judge how you comply with some policy issued by TBS or PCO or another government department. Are you doing Gs&Cs management right? Well, what did TBS tell you to do when managing Gs&Cs? Are you doing it? Let’s measure you against that policy.
Are you doing privacy management right? Let’s measure you against the policies on privacy from TBS and guidance from the Office of the Privacy Commissioner, and see if you are complying with what those policies say.
However, in other areas, they go broader. On financial audits, the Financial Administration Act and other legislation are critical, but so are the Generally Accepted Accounting Principles (GAAP), the professional standards of accountants and financial officers. Or that is what they used to be called; for many organizations, they have since been replaced by the International Financial Reporting Standards (IFRS). In finance-related audits, the question often is, “Did you follow IFRS?”. If yes, you’re okay; if not, they recommend you do. Except, as a small digression, the recommendations are often ridiculous in this area because the “standard” is often a generic principle, which also has a corollary that says the exact opposite. In finance, you can make a decision to record a series of transactions one way, and it will make your balance sheet look PERFECT. Completely representative of your assets on a specific date in time. It’s great for a snapshot, a picture if you will. But on the other hand, your income statement is about changes over a period of time, more like a video. That same standard you chose to make your balance sheet look great almost always introduces an element of misrepresentation in your income statement, and violates a standard about how to make your I/S look PERFECT. One standard says to do X, the other says to do Y. So the accountants and the auditors argue back and forth over which is the right “standard”. Both, either, neither, all of them are right.
Equally, many of the benchmarks are also often misleading. I used to do a lot of performance measurement work. One set of policies tells you that if a PROGRAM (defined in one way) is over $5M, it comes with a bunch of rules and regs about how it works, and includes a reference that it should also be placed on a specific inventory that each department updates annually. However, you then use that inventory for a slightly different purpose in other instances, and those policies say all PROGRAMS on the inventory should do x and y as well. All of which is predicated on the same definition of a program, and the definitions are consistent but not identical. Add in a third element — you can add stuff to this inventory that is NOT a program but is a related activity you want to highlight as if it WERE a program, and things get really messy. Because someone then tells us, “Well it’s a program” (it’s not), “it’s on the inventory” (true) and “therefore you have to comply with the other rules plus do X and Y as well” (you don’t). That sounds pedantic, right?
Except then an auditor comes along, says “All programs on the inventory should comply with the rules plus do X and Y,” and then dings us for not meeting that standard. Except that wasn’t the standard for non-programs, it was never intended to capture this type of activity, and you can’t do it. It can’t be met. I’ve even had a conversation where the auditor ADMITTED a standard didn’t apply, and then the audit report said, “But it’s a good practice, so you should do it anyway.” ARGHHHHH!
While I appear to be digressing, I want to point out that the auditors have to establish a clear benchmark. One that the management hopefully knew about and was trying to meet, or was an internationally recognized best practice they should have been following or was a TBS guideline they had to meet. Something that says, “Here’s how we measure your performance in doing X”. Sometimes you agree, sometimes you think they’re inventing standards, but there is one you had to meet.
In this case, they need to state the benchmark for measuring whether Phoenix was resolving pay problems effectively and efficiently to ensure employees got the right pay on time.
But…the benchmark isn’t there. I never see the yardstick. Near the end of the audit, the report lists some criteria that come close to a benchmark, but without any actual scale or way of turning them into a measurement. It’s just motherhood-and-apple-pie statements.
Sure, they say that “the federal government has an obligation to pay its employees on time and accurately.” Uh huh, everyone knows that. But a yardstick is kind of like standardized testing. Which percentile of performance are they in?
Let’s be clear. Everybody knows Phoenix isn’t performing. It sucks. It isn’t meeting ANY performance standard on just about any measure. But the audit should be able to say that, generally, a well-functioning pay system should be able to hit a standard of x% correct within y days, etc., with a range of expectations.
I’ll be overly simplistic, and I know audits don’t do this, but something along the lines of the following for speed of pay change requests:
- 80% within one week;
- 90% within two weeks;
- 95% within three weeks;
- 99% within four weeks; and,
- 100% within two months.
Some sort of rating scale to say the above is acceptable, but 90% in one week would be good, 95% in one week would be excellent, 99% would be best practice, etc. Something along the lines of the rough sketch below.
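To make the idea concrete, here is a rough sketch in Python of the kind of yardstick I have in mind. Every threshold, target, and data point is invented for illustration; none of it comes from the audit or from any government standard.

```python
# A rough sketch of the kind of yardstick I mean. All thresholds, targets, and
# sample data are invented for illustration, not taken from the audit or any
# government standard.

HYPOTHETICAL_SCALE = [
    (7, 0.80),    # 80% of pay change requests closed within one week
    (14, 0.90),   # 90% within two weeks
    (21, 0.95),   # 95% within three weeks
    (28, 0.99),   # 99% within four weeks
    (60, 1.00),   # 100% within two months
]

def rate_against_scale(days_to_close):
    """Compare actual closure times against the hypothetical scale."""
    total = len(days_to_close)
    results = []
    for threshold_days, target_share in HYPOTHETICAL_SCALE:
        actual = sum(1 for d in days_to_close if d <= threshold_days) / total
        verdict = "meets" if actual >= target_share else "misses"
        results.append((threshold_days, target_share, actual, verdict))
    return results

# Made-up closure times (in days) for a handful of requests.
sample = [3, 5, 9, 12, 20, 35, 45, 70, 90, 120]
for days, target, actual, verdict in rate_against_scale(sample):
    print(f"within {days:>2} days: target {target:.0%}, actual {actual:.0%} -> {verdict}")
```

With a scale like that, the audit could have said plainly which rung Phoenix is sitting on, instead of leaving the reader to guess.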
There are hints here and there. A few phrases dropped in now and again. Like noting that on only two occasions has processing exceeded intakes, i.e. they were actually reducing rather than increasing the number of outstanding requests. Was that the standard? Was processing > intake an internationally recognized standard? A best practice? A minimum standard? I don’t know. It doesn’t tell me what the standard was.
Now, this is where my anger and frustration really start to percolate. Because without that clearly articulated standard, it is the impact on the next four inter-related pieces that completely starts to gut the usefulness of the audit.
E. FINDINGS / RECOMMENDATIONS / RESPONSE / ACTIONS. The auditors grouped their findings and recommendations into two parts — the state of pay operations and the way forward. Nice groupings; too bad they don’t deliver on the promise.
They found that pay problems continue to increase. No surprise. Yet what IS interesting is that there is ONLY one superlative in the entire description of the pay problems — “significant”. And what does it apply to? The internal numbers were SIGNIFICANTLY higher than reported externally (yawn) and error rates were higher than targeted (ka-ching, that’s SIGNIFICANT). Yet what else did they find?
- 500K outstanding pay requests — no rating;
- Employees waiting more than 3 months; 49K had been waiting more than a year, and approximately 32,000 employees were flagged as having a high financial impact — no rating or qualification of that, and while it sounds alarming, the threshold was only $100 (i.e. 17K employees were waiting more than a year over a difference of less than $100 — was that the standard? I don’t know); and,
- $520M in pay outstanding, whether paid too much or too little.
Yet nowhere in there is there any standard about what is the right level or not compared to other systems. Errors happen, sure. But these look pretty DAMN BAD. Yet there is no indication of how bad. Dante’s Inferno bad? Another Batman movie with Michael Keaton bad? I’m sitting next to someone at lunch with a really smelly sandwich bad? HOW BAD IS IT?
That’s the job. Not just the facts, and numbers, but to actually measure the performance against the yardstick above.
And there are little bugaboos I see in the analysis that drive me insane. Auditors are independent, and they will choose to include calculations that they think make sense, even when they may not. I’m thinking of a specific audit where they used a Public Accounts number that was off by a factor of four — it made something look huge and significant but included a ton of other stuff that wasn’t related to the audit. They said it was “context”.
In this case, PSPC excluded things it thought were more administrative…but sure, I can see why OAG wanted them in. Makes sense. Yet I also see why PSPC excluded them — no financial impact, lowest priority. Same argument for the low-effort ones, etc.
But wait a minute, PSPC eliminated ones that were suspected duplicates. That makes sense. Because I know there are people who had one problem and submitted 25 or more PARs to have it fixed, before the system said, “Hey, don’t do that, it just gums up the system for no benefit, just wastes everyone’s time on the other end and delays us getting to other requests too”. But they had just sent them in. Follow-ups, corrections, tweaks. Repeats even. So eliminating duplicates is a pretty solid management decision to know what the real problem is, not how many records you have. Put a different way, maybe a bit simplistically — when you book your car in for the winter tires to be put on, do you make one request or four separate ones? Did the garage do one installation in 20 minutes, or are they so fast they can do four in 20 minutes? Yet OAG overrode those exclusions. I hate when they do that because the devil is in the details. SHOW ME THE DATA. Break it down, do a detailed table. And they’re supposed to measure in part how PSPC did against ITS own internal data. If the auditors disagree, they can say they disagree with the way it is calculated, and then add footnotes or alternative graphs, but they should be using the PSPC data as that is what PSPC was using.
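To show what I mean by screening out duplicates, here is a tiny illustration. The records and field names are entirely made up; this is not how PSPC’s system actually stores anything.

```python
# Purely illustrative: why counting raw pay action requests (PARs) overstates
# the workload when one employee re-submits the same problem several times.
# The records and field names are invented for the example.

requests = [
    {"employee": "A", "issue": "acting pay"},
    {"employee": "A", "issue": "acting pay"},   # re-submission of the same problem
    {"employee": "A", "issue": "acting pay"},   # and again
    {"employee": "B", "issue": "overtime"},
    {"employee": "C", "issue": "overtime"},
    {"employee": "C", "issue": "transfer"},
]

raw_count = len(requests)                                             # 6 records in the queue
distinct_problems = {(r["employee"], r["issue"]) for r in requests}  # 4 actual problems

print(f"raw PARs: {raw_count}")
print(f"distinct problems to fix: {len(distinct_problems)}")
```

Whether you report six or four matters a lot when you are trying to describe the real size of the backlog, which is exactly why I wanted to see both numbers side by side.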
Farther down in the analysis, it says PSPC did not do a thorough analysis of the financial impacts. Sounds damning. But compared to what? HOW DO I KNOW IF THIS IS SIGNIFICANT? WHAT’S THE STANDARD THEY DIDN’T MEET? Oh right, the audit doesn’t tell me.
When I get to processing times, it tells me the terms and conditions say that the GoC will make most changes in pay within two pay periods. They then say “on average” it’s 3 months. Wait…that too looks bad, and THEY HAD A STANDARD, so they can prove it’s bad. So where’s the assessment that this is A NO GOOD, HORRIBLE, VERY BAD THING? Nowhere. And, again, I see a bugaboo. Average? Really? We’re talking detailed service standards. Where’s the table that says 40% was within x weeks, 50% blah blah blah, etc.? That’s basic treatment of service standards, yet no table is shown. Nor a graph. Yet they had to be able to fully calculate the info to do a proper audit. Where’s the DATA?
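Here is a quick sketch of why an average is the wrong lens for a service standard. The processing times below are invented; the point is simply that two very different situations can share the same three-month average.

```python
# Invented data: two sets of processing times that both average roughly 90 days,
# yet tell completely different stories once broken down against thresholds.

def breakdown(times_in_days, thresholds=(28, 56, 90, 180)):
    """Cumulative share of requests completed within each threshold (in days)."""
    n = len(times_in_days)
    return {t: sum(1 for d in times_in_days if d <= t) / n for t in thresholds}

steady = [85, 88, 90, 92, 95] * 20      # everyone waits about three months
lopsided = [10, 15, 20, 25, 380] * 20   # most are quick, a few wait over a year

for name, data in (("steady", steady), ("lopsided", lopsided)):
    avg = sum(data) / len(data)
    table = ", ".join(f"<= {t}d: {share:.0%}" for t, share in breakdown(data).items())
    print(f"{name:8s} avg {avg:.0f} days | {table}")
```

Both datasets report an “average of about 3 months”, but the fix you would prescribe for each is completely different. That is the table I wanted from the audit.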
I’m also not entirely sure I trust their numbers on the incidence of pay rate errors. They say that PSPC doesn’t track the data that way, yet they came up with an estimate that is based on a pretty small sample, and some of it looks self-reported. There is a huge difference between someone having a pay “ERROR” and someone saying they have a pay “ERROR”. Because the auditors are talking about a VERY specific type of error when they quote their statistics:
- employees not being paid for overtime;
- employees who work shift work not being paid for all the hours that they worked at the correct rate;
- employees who transferred from one department to another being paid by both departments; and
- employees not receiving correct pay when acting in a temporary role for a superior, or not receiving pay increases related to promotions on time because of late input into Phoenix.
At this point, I’m getting grumpy because it starts to look like a crappy audit with a sloppy methodology. Why, on any planet, would they NOT CHOOSE A KEY GROUP like the people NOT BEING PAID AT ALL? And more importantly, why is there no materiality mentioned for their review? If the number said 98% were wrong, it would sound catastrophic unless you knew they were out by less than $1.
But not being paid? Give me THAT statistic. It is clear. It is unassailable. It tells us the relative significance of the problem.
Instead, there are no benchmarks to tell me how significant any of those problems are. It doesn’t say “100% wrong all the time”; it says, within this timeframe, who’s got an outstanding request? Which, not for nothing, may NOT BE CORRECT! And it uses the weakest methodology available to them. I feel like lending them darts for their next audit; I might have more faith in their numbers.
At the end of the section though, there is a fantastic statistic about how much is owed, and if you do the math, the average (which the auditors loved a few paragraphs before) owed is $4,500 per employee. Gross. With an average salary across the public service of around 60-75K, that’s between 1.5 and 2 paycheques. But even if that is significant on its own, and I don’t dispute that for a lot of families, we also know that there are cases that are WAY OFF that amount. People who are claiming they are owed months of non-pay. Where is the breakdown to tell me how that $228M is distributed? Are there 100 employees owed more than $50K? 500? 1000? 5000? I have no idea, as they don’t give the data, nor any indication of what the standard was that they were missing and how abysmal the performance was as a result.
Looking at the $295M overpaid, there are HUGE implications for that, including what a lot of people are pointing out online and elsewhere — the $295M was paid with taxes deducted, yet when it is requested back, they ask for the gross amount, with the taxes to be Revenue Canada’s problem later. In other words, you might have been overpaid $15K gross, only saw $11K in total, and yet they’re asking you to pay the $15K back, with your taxes sorting out the difference. Like that isn’t going to screw people big time. But radio silence in the report other than to say “it might have implications.” Implications? Really? That’s a five-alarm fire, baby!
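For what it’s worth, here is the back-of-the-envelope arithmetic behind the last two paragraphs. The 26 pay periods and the $15K/$11K split are my own assumptions for illustration, not figures from the audit.

```python
# Rough arithmetic only. The $4,500 average owed and the 60-75K salary range come
# from the discussion above; 26 biweekly pay periods and the $15K/$11K example
# are assumptions for illustration, not audit figures.

avg_owed_gross = 4_500                 # average amount owed per affected employee (gross)
for salary in (60_000, 75_000):
    per_cheque = salary / 26           # assuming 26 biweekly pay periods a year
    print(f"salary {salary:>6}: owed is about {avg_owed_gross / per_cheque:.1f} paycheques")

# The overpayment recovery problem: paid gross, received net, asked for gross back.
gross_overpayment = 15_000
net_received = 11_000                  # after source deductions
gap = gross_overpayment - net_received
print(f"asked to repay {gross_overpayment}, but only ever saw {net_received}; "
      f"the {gap} difference sits with the tax system until it gets reconciled")
```

That gap is exactly why “it might have implications” reads as such an understatement.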
Climbing out of the rabbit hole and moving on
I went through that first area with a fine-tooth comb because I wanted to show you how bad the audit seems to me. Given the public scrutiny, and the impact on so many public servants, which goes to the heart of the employer-employee relationship for delivering government services to Canadians, the audit should be bulletproof. I shouldn’t be having these quibbles. I shouldn’t be asking for qualifiers or assessments or standards; they should be clearly laid out.
Moving on, the auditors found that PSPC didn’t have enough analysis of the extent or causes, and were still reactive. No surprise.
Hold on, I want to say something positive. I know, surprises me too. But I really like paragraphs 1.52 to 1.56 as to some of the causes of the problems:
- Departments not entering stuff right;
- The design was for real-time entry (forward-looking), but the majority of departments and agencies have huge retroactive processes (after the fact, which Phoenix didn’t bring online until March) — wait, what? Where is the detailed estimate of the numbers/time delays caused by THAT…oh right, the auditors say PSPC should do that in the future;
- A major design flaw by the creator (my rating) that certain types of data entry on a file that was already being calculated by the system produced technical errors;
- A major design flaw by the creator (my rating) that it couldn’t easily handle shift work; and,
- Expected complaints from pay processors of not enough training, high workload, shifting priorities, and having to redo things more than once.
As a result of all this audit work, they come to the conclusion that PSPC should analyze the root causes of pay problems (PSPC say they’ll do a full HR-to-pay process analysis) and PSPC should develop a sustainable pay solution with a detailed plan that integrates HR (see the previous commitment to an HR-to-Pay plan).
This is where I get REALLY GRUMPY. Even with faulty analysis and methodology, and a lack of good data shared openly and transparently to truly explain the situation, the recommendation says do an analysis and come up with a plan. Yawn.
Wait a minute…these are the same people who did the initial analysis and came up with the current plan that is a DISASTER. Where is the harsh evaluation of what is there? Oh right. They had no benchmark. So they couldn’t go to the really aggressive audit language options:
SIGNIFICANT MANAGEMENT FAILURE
NOWHERE NEAR PROFESSIONAL STANDARDS
NO GOOD, HORRIBLE, VERY BAD DAY
EPIC FAILURE
FUBAR / SNAFU
SITTING THROUGH GIGLI BAD
When auditors review something, they express an opinion at the end as to whether they felt they had all the info they needed which means the audit is presumably representative of reality and can be relied upon, and whether they are comfortable with the management response. Not surprisingly, they say this audit met those standards. It’s a full audit, no management variance from the norm. Yet how could you come to the conclusion that the same management that put itself into the hole is capable of getting itself out of the hole?
The only way to do that is to have a clear bulletproof action plan. One totally defensible for all to see.
But PSPC doesn’t have that. They have a plan to have a plan. Or more accurately, they have a plan on how to develop a plan that will help them develop a solution. That response is NOT even close to what I think is sufficient, nor what is required.
And in every audit I have seen in the last ten years, a plan to have a plan was officially insufficient. If you have an audit, you need to show the actual plan to address the problem, not a plan to think about maybe having a plan. It reads more like a Dilbert cartoon than an actual response to an audit.
In my view, this looks way below the normal standard and yet for some reason, the auditors are not calling them out on it in the report.
PSPC “Partners”
Departments and agencies are not blameless in the problems faced by Miramichi. Yet I am shocked by how self-serving the section about the Departments and Agencies is. It looks, and I can’t believe the auditors fell for it, like they said, “It’s all Phoenix, it’s all Miramichi’s fault, we’re on the side of angels here”. Cuz that’s almost what the report says. Even in areas where the partners weren’t doing things right, it was only because, allegedly, PSPC wasn’t clear.
There is so much BS in those paragraphs, I can’t believe PSPC and TBS accepted it. I get that there are huge small p political pressures for PSPC to own the problem and not blame other departments. But there are two huge pieces of context that are missing from that equation.
First, I mentioned in my earlier post that many departments dumped outstanding requests on Miramichi and didn’t worry about getting it right when they did. More like dump and run. In addition, there is a stat buried in 1.67 that says very clearly departments were doing retro requests on pay (which automatically makes them late, nothing to do with Phoenix), even for employees who had already been working for two weeks. That is a GIANT problem that lies squarely with departments, not with Phoenix. Yet no slapping of them for it. Or the fact that Phoenix started working with what was essentially an existing backlog they didn’t create but they did inherit.
Second, there is a perception that everything ran smoothly without Phoenix, that we didn’t have huge problems before, and that those departments without Phoenix are doing just fine. Except we know that isn’t true. One area that got a lot of attention in the past two years was the plight of co-op students. Hired for the summer, starting work in May, and not getting paid until July or later. Delays, for the most vulnerable employees, what a scandal for Phoenix!
Yet it wasn’t anything new. Almost every department with more than 2,500 employees (rough estimate) has treated students as a low priority for years. Letters of offer that didn’t get sent in advance. Security screenings not done before they start, thus delaying the start. Delays in reimbursement for moving expenses. And yes, delays until June or July to get their first paycheque. So what was different with Phoenix? It was all centralized. You could actually COUNT the number of students affected. For the first time, we knew a group of employees was not being handled properly, and we could point to it as a collective problem. Which then got solved.
But it wasn’t a Phoenix problem. It wasn’t even a new process problem. It was that we had never bothered to fix it previously because 60+ departments and agencies never counted them all at once and shared the info to show how bad it was. About five years ago, a former deputy minister of some renown was at a gathering of co-op students one summer in late July, doing the outreach, showing they were valued. And after her little spiel, the meet and greet, and a few questions, one brave intrepid student said, somewhat shyly that she had a question. Were they going to be paid for their work? Three months in, none of them had been paid. And HR had NEVER TOLD THE DM. They weren’t hiding it, it just wasn’t significant enough to share. Needless to say, the DM was suitably and appropriately pissed.
And guess what? All of those lovely numbers we don’t have for Phoenix? We don’t have them for any non-Phoenix group either. The analysis that PSPC hasn’t done? Nobody else has done it for the other systems either. A lot of those 82,000 complicated files that they were working to fix at the beginning? They came from departments that hadn’t fixed them either. But they were spread out, so they looked like little pockets of problems. Centralization revealed the problem; it didn’t necessarily create it.
So what’s missing from this section? Any resemblance at all to some sort of comparator. Show me a department that isn’t using Phoenix. Show me the standard. Show me the old stats based on numbers that Departments and Agencies have NEVER shared. Even in the audit, where it says a gathering of info is being done about problems in various D&As, the departments haven’t submitted the info. Haven’t done it. Haven’t got to it. Why? Because many of them have NO data. No info at all. They’re creating it from scratch, much of it manually. Yet the only one being slammed is PSPC?
In my first post, I said it wasn’t a technological problem, and the audit shows that. There is very little in the audit pointing fingers at IBM and the system. It is clearly a process issue, one that PSPC owns a major part of, but not all of it.
And while I am happy that the audit recommends that TBS and PSPC work with departments and agencies, it narrows to working on targets, timelines, performance metrics, reports, and access. Nothing that says D&As need to do anything except cooperate.
Have auditors decided not to point out epic fails?
The Auditors’ Way Forward
If I thought the opening was bad, I shouldn’t have had very high expectations for the solutions, right? Right.
But I didn’t think they would find an international comparator and decide it must be the model to follow. Yet the audit does exactly that. It points to one country, Australia, and says we should have done what they did for governance. No other benchmark is referenced. Nor is there any assessment, beyond a general statement by the auditors, that the Australian solution is actually working or is a model to follow. The Australian public servants aren’t exactly singing its praises as a success. In fact, as the auditors note, the great solution didn’t fix everything, and some of the issues are still present 8 years later. Is that their standard? Anything better than eight years is a success?
SHOW ME THE STANDARD! State it clearly.
Oh, and the solution of a new governance structure? Is it just a coincidence that it almost mirrors exactly what was already announced at the political level?
Okay, I’ll stop being grumpy for a second and say something positive about the report. Nice font. No, I’m joking, I did find something good in 1.90, 1.91 and 1.92. The three paragraphs talk about how PSPC tried to respond to the need to triage. Awesome paragraphs.
First, they did it by time and got 94% of them done before they had to shift to focusing on terminations (overpayments), leave without pay (overpayments), disability insurance (ups and downs) and new hires (not getting paid at all). Later they got pressured and they switched to disability and parental leaves. And then later they went by dollar value (although why the dollar value is $100 is beyond me, plus there is NO comment from the auditors about materiality and if that was the right amount).
Where did these pressures come from? Us. Civil servants. Directly and through the unions. They are all crappy situations. And after we dampen down the noise on one group to a light roar, another squeaky wheel gets some grease. THEY ALL NEED THE HELP.
So what did we do? We spoke with different voices. PSPC had a plan, but it wasn’t meeting one or more group’s needs, so they shifted it and focused on another group. Different stakeholders had different priorities.
But if I’m fair, that’s not on PSPC. That’s on us. You can’t do effective triaging of the victims that way. We need a better system, and there isn’t one envisioned by the audit.
In fact, what is the auditor’s recommendation? Review them all and set priorities with timelines. Well, thanks for coming out. I’m sure that will fix everything immediately.
The final recommendation, which is always a given, is to have better costing of everything.
My conclusions
I said at the beginning that I expect three things from an audit.
Clear articulation of the project’s goals…well, it was kind of there, but then there is no fully outlined timeline. That seems odd.
A clear indication of the standard and how they are doing against it…very little in the way of articulated standards, yet also no assessments (mostly, never, sometimes, often, more, less, better, worse, SOMETHING?) of the performance.
Recommendations with a plan to address them…it isn’t there. Analyze the problem, develop a plan, talk to others, and cost it out — and the solution is that they agree to “come up with a plan”.
That’s what they came up with to respond to a crisis that is having a more negative impact on employees as a whole than DRAP ever did?
Underwhelming. Infuriating even.
Apparently it was a nice press conference though. Two thumbs up.
Hopefully, the auditors don’t experience pay problems from Phoenix before the next audit, and maybe it will have more teeth.
Remember, the OAG is limited by what information PSPC has on hand. And by all indications, PSPC doesn’t really know what it’s doing. This is a tire fire of badness – departments’ internal processes are equally dysfunctional, and most employees go along blindly assuming things are OK. Unions are very cautious on this, since one major problem is the staff in Miramichi – both in terms of size and in terms of experience.
Fundamentally, throughout, there’s an information vacuum. PSPC lets out only heavily sanitized graphs that provide eye candy but no information; the “Track My Case” tool is less than useful (PSPC closing cases where people are still owed thousands of dollars is a particular bugaboo of mine); the phone lines cannot provide any help – by design.
I did notice, though, that Executives get white glove treatment: call backs from real compensation advisors and problems quickly addressed. Thus, the problems aren’t understood by the directors and above, as their issues are quickly fixed – perhaps a bit of strategic cunning by PSPC, but one that’s only going to serve as a further catalyst for anger as it becomes more widely known. (At first, executives had a dedicated help line; that got quashed a few months ago).
There’s no shining star in this whole mess; the PS writ large looks really, really bad.
And apparently the ploy of emailing the DM at PSPC directly when you don’t get paid no longer works…
I’m curious — and I have no way of knowing directly — whether the info isn’t available or the report just isn’t being generated. Partly because it is a retrofit of an existing program to the PS environment. Interesting too re: the ploy of getting action other ways no longer working.
P.