There really weren’t any forward-looking ones, at least not upfront. They had some generic elements under governance, but that was it.
What the REAL criterion should have had
It is pretty simple — is there a plan in place going forward that addresses major issues, is risk-based, and is written down. There are lots of bells and whistles beyond that, things like cost and timelines, but the most basic element is “Do they have a plan?”
What did the audit find?
The audit found that
Departments and agencies had significant difficulties in providing timely and accurate pay information and in supporting employees in resolving pay problems
A sustainable solution will take years and cost much more than the $540 million the government expected to spend to resolve pay problems
What COULD the audit have found?
I need to digress for a minute and talk about the audit process. Generally speaking, auditors come forward and say, “Okay, here are the terms of reference for the audit, i.e. this is what we’re going to look at”. There may be some back and forth with the department to say, “Wait, what about this?” or more likely “Wait, that isn’t part of this project” — it looks at what is in scope and what is out of scope.
Then the actual audit process begins, there are lots of documents and meetings, and preliminary findings are shared with the Department. This is the opportunity for the auditors to say, “Based on the docs we have, and the info we have been given so far, this is what we’re thinking we might say.” At this time, departments go crazy and say, “whoa, THAT’S not true, did you read this doc and this doc and this doc?”, often three docs that the auditors were never given. So they’re wrong about some aspect because they didn’t have any evidence from what they had seen so far. A gap, if you will.
Then they come back with their draft audit findings, they go through some iterations where the department gets to agree or disagree with some of the wording, often saying, “Wait, if you say THAT, with that language, we have to disagree, it goes too far”, and the auditors balance out their wording with their findings. While some people get their backs up that this is interfering with the independence of the auditors, it is often more along the lines of the auditors saying, “We examined the building plans for a green cabinet, a blue cabinet, and a yellow cabinet, and we found no evidence of cost analysis.” And the department says, “wait a minute, we had full analysis for yellow, and partial for blue, but agree with nothing for green.” And the auditors go back and look at their evidence and come back with revised wording that likely says “Not all projects had full cost analysis in place and there were significant gaps for most.” They’re still slapping someone, just making sure they’re slapping the right someone with the right language. And to be candid, some of it is seeing how much pain the department can handle. Can it handle 4 slaps or only 2? So the auditors start by saying “YOU COMPLETELY SUCK” and water down the language a bit at a time until the department stops whining, somewhere between “YOU MOSTLY SUCK” and “YOU’RE KINDA SUCKY IN CERTAIN AREAS”.
Because after the audit is done, there are two things the department has to produce and the level of work depends on which of those phrases the department could live with:
A management response; and,
A management action plan.
The management response is a simple response where the department says “We agree” or “We disagree” with the recommendations. The cycle of responses over the years has ebbed and flowed, with some periods existing where no DM ever wanted to disagree with an audit recommendation. Even if they thought the auditors completely misunderstood the situation, they would say “We agree” and then in the prose explain how they were planning either to not do what they agreed to, or to do the exact opposite. The responses were somewhere between a sorry/not sorry situation and a Sir Humphrey response from a Yes Minister episode.
In more recent years, and with changes in Auditors General, DMs feel more comfortable saying, “Wait, hold on a minute, we grudgingly agree with your findings, but NOT your recommendations on how to fix it…so we disagree.” But Auditors General NEVER want the report to say the department disagrees, as it basically means the department is saying, fairly confidently, that the AG didn’t understand the subject matter or project. If a department disagrees, it means they seriously disagree, and then suddenly the AG jumps into the project directly, often working to massage the language into such “motherhood and apple pie”-type statements that NO deputy could ever disagree with the recommendation. And then they’re back to bland recommendations that the department can agree with easily. Yawn.
But, regardless of the MR, the department also has to create a management action plan. And there is one relatively universal truth to MRs and MAPs — a “plan to have a plan” is not a plan in and of itself. The department cannot say, “Oh, yeah, that’s a good idea, we’ll look at it, develop a plan, and then implement it.” They are SUPPOSED to say, “Hey, good idea, we’re going to do THIS and THIS to implement.” In other words, you need to have some content and an actual plan, not a plan to have a plan.
What would this look like in this case? They would have recommendations for a clear set of roles and responsibilities between players. Which this audit did recommend, except that the response is that they’ll create such a plan. A plan to have a plan, not the plan itself.
They would have clear recommendations relating to the role of other departments who send the pay files to Phoenix. The audit found that “Departments and agencies contributed to the problems; however, Public Services and Procurement Canada did not provide them with all the information and support to allow them to resolve pay problems to ensure that their employees consistently receive their correct pay, on time” so they did articulate a problem. Yet the response is a plan to have a plan to fix that.
There should be clear recommendations on transparency, risk-based triaging, cost breakdowns and service delivery mechanisms. Again, there are some strong hints to do that, and the response is “We’ll develop a plan.”
DEVELOP A PLAN???? What the heck have they been doing for the last two years or even the last 8 months while the audit was busting their chops internally? The Department *knew* that the audit finding was coming, and they should have had the plan relatively complete in certain areas. It should be ready to go.
Heck, the language was so watered down, it looked more like “we’re working on a strategy on how to develop a work plan that will lead us to a complete plan to respond to the challenges identified …. zzzzz”. Their plan is to develop a plan to have a plan. They haven’t even developed the PLAN for the plan, let alone the actual plan.
No governance in place, but watered-down wording that could possibly lead to little concrete change.
No transparency in data, so employees are still wondering what the state of pay is, and no recommendations or commitments to change that reporting.
Hardly any commitment at all of anything, other than a plan to have a plan.
It’s unfathomable that such an audit passed even the most basic internal tests at the OAG. Based on the actions committed to, it seems more like the audit equivalent of a hangnail than a project that is way over cost and a disaster on the ground. The system is bleeding out, but good news, they think they might know someone who can come up with a plan to develop a strategy to stop bleeding in general. But let’s not rush into anything resembling a solution.
Directive on Financial Management of Pay Administration, Treasury Board
Policy on Results, Treasury Board
Directive on Results, Treasury Board
Supporting Effective Evaluations: A Guide to Developing Performance Measurement Strategies, Treasury Board of Canada Secretariat
COBIT 5: Enabling Processes, Information Systems Audit and Control Association, ISACA
As with the review yesterday, the Policy on Results, Directive on Results, guide to PM strategies, and COBIT 5 are virtually worthless to the exercise. They tend to talk heavily about programmatic delivery results (the external results of spending) and have very little to offer in the way of measuring or monitoring internal services. To the extent they do, they describe the types of things departments should do in general; they don’t dictate or give explicit instructions. The first two, however, are a lot more detailed and do include some directive language, along with some indication of actual service standards and duties/obligations. Not enough to run the Phoenix system, but at least PSPC managers would have had SOMETHING to rely upon.
The second criterion was:
The resolution of problems related to paying public service employees is being effectively and efficiently managed.
For these ones, a few of the documents are the same, but it is the ones from TBS that are different and quite telling:
Guidelines on Costing, Treasury Board of Canada Secretariat
A Guide to Costing of Service Delivery for Service Standards, Treasury Board of Canada Secretariat
COBIT 5: Enabling Processes, ISACA
Information Technology Infrastructure Library (ITIL) Service Strategy, second edition, 2011
What the REAL criterion should have focused on
Now, if you take those above pieces, and break them down into manageable chunks to audit, you would expect to find some of the following:
A comprehensive inventory of all the pay action requests in the system;
Detailed reports of nature (type, age, and department) and impact (materiality, $$ estimate, $$ as a percentage of annual salary);
Clear project management principles showing differentiated approaches based on nature and impact;
A risk-based triage process and analysis of the various PARs;
Cost breakdowns of the steps taken to date and how they impacted the resolution of numbers outstanding;
Key performance measures in place for overall and individual workload management, tied to nature and impact; and,
Training in place to cover the basics, ongoing maintenance, and emerging issues.
While there are other things you COULD see, those seven items are pretty basic tools.
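To make the risk-based triage item concrete, here is a minimal sketch of what such a process might look like. Everything here is hypothetical: the field names, the weighting formula, and the sample data are my own invention for illustration, not anything PSPC actually uses.

```python
from dataclasses import dataclass

@dataclass
class PayActionRequest:
    par_id: str
    par_type: str        # e.g. "new hire", "acting pay", "retirement"
    age_days: int        # how long the request has sat in the queue
    amount_owed: float   # estimated dollar impact (gross)
    annual_salary: float

def risk_score(par: PayActionRequest) -> float:
    """Naive risk score: weight dollar impact (as a share of annual
    salary) by how long the request has been outstanding."""
    impact = par.amount_owed / max(par.annual_salary, 1.0)
    age_factor = min(par.age_days / 180.0, 2.0)  # cap the age multiplier
    return impact * (1.0 + age_factor)

queue = [
    PayActionRequest("A1", "acting pay", 20, 450.0, 60_000.0),
    PayActionRequest("B2", "new hire", 400, 35_000.0, 70_000.0),
    PayActionRequest("C3", "retirement", 90, 8_000.0, 80_000.0),
]

# Work the highest-risk files first
triaged = sorted(queue, key=risk_score, reverse=True)
print([p.par_id for p in triaged])  # the year-old unpaid new hire comes first
```

The point of the sketch is only that differentiating files by nature and impact is a few lines of logic once the data is coded, which is exactly why its absence is notable.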
You could also likely examine three other items that deal not with the PARs themselves, but the client service function:
Clear identification and public sharing of service standards for various types of PARs and how the system is doing, updated likely weekly;
The system in place for people to access and receive status updates on their individual file and to know what is happening, even a queue number if the answer is nothing yet; and,
Detailed communications plan in place to transparently share the detailed reports.
What did the audit find?
The audit concluded that “The number of pay problems continues to increase”, that “Public Services and Procurement Canada did not have a full understanding of the extent and causes of pay problems”, and that “Departments and agencies had significant difficulties in providing timely and accurate pay information and in supporting employees in resolving pay problems.”
What COULD the audit have found?
It is clear to anyone and everyone that the solutions in place are not meeting the needs. And on some of the elements I mentioned above, the auditors did have some views:
The inventory was not comprehensive, there were clear gaps;
The reports were rudimentary at best, and didn’t give details on nature or impact;
Differentiated approaches based on nature and impact were done, mainly based on various pressure points over time, but with little analysis or evidence of the result for each group; and,
The training was not done before launch and hasn’t kept up.
They could have also expressed concern that there were clearly other gaps:
No detailed risk-based triage;
No cost breakdowns;
Little in the way of performance metrics or service standards; and,
No exception management system, nor any feedback and status mechanism.
The audit failed in this area on two counts. First, the audit recommendations could have been quite prescriptive and detailed, saying “We recommend you do x or y, and do it by such and such a date”. Which PSPC would then have to commit to doing, and to do so publicly. The recommendations are more general than that, telling them to do better rather than saying they failed to meet even the most basic standards at all. As a result, PSPC basically was able to respond that they are to develop a plan as to what their overall plan should be. A plan to have a plan, not even the final plan itself.
More importantly, though, the auditors had access to the internal data. While it is clear that the PSPC system is not robust enough to generate the reports needed, some more rudimentary reports could have been developed and calculated. And, given the public spectacle surrounding the audit, part of the role of auditors is to report on what is happening and how it is performing. Instead of giving us the reports we need, or coming as close as they could at least, they went for straight-up overall volumes. Stakeholders — namely employees — had almost no more useful info or data than before the audit.
Here is the most minimal of tables that I had expected and hoped to see:
| Type of Pay Action Request | < 1 month | 1 month to 6 months | > 6 months |
| --- | --- | --- | --- |
| Put in pay (i.e. new employees) | | | |
| Acting pay (up to 2 weeks) | | | |
| Acting pay (over 2 weeks) | | | |
| Removals from pay (retirement) | | | |
| Removals from pay (special leave) | | | |
Don’t get me wrong, I think that table is insufficient. I just think it was the most basic table and that they should have been able to provide it, or even generate it as part of the audit, even if there was a gaping sub-area / black hole called “other”. Basically, people, they hadn’t triaged yet so they weren’t even sure what was in there. But arguing there are 500K requests doesn’t tell me anything about nature OR impact.
Now, I expected that table, and it didn’t come. Nor were there any details on maternity leave, sick leave, overpayments, assignments, secondments, other administrative changes, etc. There could be another 10-20 categories for the type of PAR, but it is not just the categories, it is the time factor beside them — how old are the requests, how big is the queue? Because once the report is generated the first time, it can be generated again, complete with changes since the last one. I suspect the age figures would have to be even more disaggregated (maybe 1m, 2-3m, 4-6m, 6m-1y, 1-2y, 2-3y, etc.). That would give you decent details on the nature — at least for the type of PAR and the age of the request.
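Assembling that kind of report is not a heavy lift once requests can be coded by type and age. A minimal sketch of the cross-tabulation, with invented categories, thresholds, and sample data purely for illustration:

```python
from collections import Counter

# Age buckets matching the table above; the thresholds are my own choice
AGE_BUCKETS = [(30, "< 1 month"), (180, "1 month to 6 months")]

def age_bucket(age_days: int) -> str:
    """Place a request into an age bucket by days outstanding."""
    for limit, label in AGE_BUCKETS:
        if age_days <= limit:
            return label
    return "> 6 months"

# (type of PAR, age in days) pairs -- invented sample data
pars = [
    ("Put in pay", 12),
    ("Put in pay", 200),
    ("Acting pay (over 2 weeks)", 45),
    ("Removals from pay (retirement)", 400),
]

# Cross-tabulate type against age bucket
table = Counter((par_type, age_bucket(age)) for par_type, age in pars)
for (par_type, bucket), count in sorted(table.items()):
    print(f"{par_type:32} {bucket:20} {count}")
```

Re-running the same tabulation each week, and diffing against the previous run, is exactly the “generated again, with changes since the last one” reporting the paragraph above describes.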
Where I was apparently dreaming in technicolour was in thinking that they might even go further. Think of the above table, with the same PAR categories down the side, but instead of showing the age, the table showed the dollar value across the top. For some reason, the materiality threshold set by PSPC was $100. Almost every request would be over that threshold in gross terms, particularly as acting pay is only paid if it is longer than 3 days now. So their threshold is meaningless. Instead, I’d hope to see something along the lines of 0-1000, 1000-2500, 2500-5000, 5-10K, 10-20K, > 20K as the cutoffs.
Why? Because it would give a clear and compelling indication of the magnitude of the problem. There are lots of stories of people complaining about Phoenix, but not all complaints are created equal. Some are life-altering disasters, with people not being paid for over a year, running into tens of thousands of dollars owed. At the same time, there’s Joe Worker next to them in the queue who didn’t get their three-day acting pay last week. Both people deserve to be paid, in full and on time, but when I have to triage the files, the person who is owed more money is likely facing a larger personal impact, particularly if it is combined with a longer period of time and affecting multiple tax years.
Now, the auditors hinted that the info isn’t available, and to be blunt, I don’t entirely believe them. I believe the info isn’t READILY available, but I don’t believe that it couldn’t be generated with some basic methodology. Even if they had 300K files in the queue, and didn’t really have a way to code 40% of them, they would still know the profile for the ones they have and be able to extrapolate what it means for the others. Is it a perfect methodology? Nope, but there are lots of previous audits that did more with less.
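The extrapolation I have in mind is elementary: profile the files you have coded, then scale those proportions up to cover the uncoded remainder. A sketch with invented figures (none of these numbers come from the audit):

```python
# Invented figures: 300K files total, only some coded by type
coded_counts = {"new hires": 54_000, "acting pay": 90_000, "retirements": 36_000}
total_files = 300_000

coded_total = sum(coded_counts.values())    # files with a known type
uncoded_total = total_files - coded_total   # the "black hole" of uncoded files

# Assume the uncoded files follow roughly the same distribution
# as the coded ones, and scale each category up proportionally
estimated = {
    par_type: count + round(count / coded_total * uncoded_total)
    for par_type, count in coded_counts.items()
}
print(estimated)  # {'new hires': 90000, 'acting pay': 150000, 'retirements': 60000}
```

The proportional assumption is crude, and a real methodology would sample the uncoded files to validate it, but as the text says, plenty of previous audits have done more with less.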
While there were some basic charts and tables included, they really didn’t provide much info on the scope at all. I thought they at least might have profiled one of the participating departments as an example, but they didn’t even do that.
I literally felt like it was a completely missed opportunity to pull back the curtains and provide SOME info to all the affected employees. They deserved an audit that went farther and produced more, particularly when they saw that PSPC didn’t have the data already available.
When I read the Office of the Auditor General’s audit of Phoenix, I was beyond disappointed (A disappointing audit of the Phoenix problems). In part, I think it is because I am too familiar with audits from my previous job where I read just about every audit done by my department in the last nine years, plus some of the broader OAG ones. Yep, I’m a public admin geek. I was even somewhat amused when I saw the news coverage about how aggressive the report was in its condemnation. And, if you weren’t a regular reviewer of audits, you might just go with the press conference and some of the findings and think, “Okay, they’re being appropriately harsh”.
Except the OAG knows how to be harsh when something isn’t working, and the language they would use for that kind of screw-up wasn’t present in the report. So let’s look at the report and see what they COULD (or even should?) have said, but didn’t.
What were the criteria?
Let’s go in reverse order, and start with the third criterion that the auditors set up in their audit:
Comprehensive and coherent governance and oversight detailing accountabilities and responsibilities for resolving problems related to paying public service employees are defined, agreed to, and implemented.
That’s what they expected should be in place, and that’s what they were looking to find. They based that criterion on a bunch of documents, including:
Financial Administration Act
Public Service Employment Act
Department of Public Works and Government Services Act
Directive on Financial Management of Pay Administration, Treasury Board
Policy on Terms and Conditions of Employment and the Directive on Terms and Conditions of Employment, Treasury Board
Policy Framework for People Management, Treasury Board
Policy Framework for the Management of Compensation, Treasury Board
COBIT 5: Enabling Processes, ISACA
Now, here’s the thing. NONE of those documents says what that oversight framework should look like. They have hints, sure, like the fact that there should be clear roles and responsibilities, there should BE a framework that everyone knows, and generally speaking, that it should be focused on doing proper things related to the subject matter, in this case, pay and benefits.
But the really interesting thing is the last one. Quoting from the Wikipedia page, “COBIT (Control Objectives for Information and Related Technologies) is a good-practice framework created by international professional association ISACA for information technology (IT) management and IT governance.”
This is standard practice for the OAG. They look at the various docs from within the government, realize that there’s no professional standard to really say what SHOULD have been done, and thus they use an industry best-practice to help them figure out what the standard should have been. You know, as if the people running Phoenix had done the same research, read the same best practice that the OAG “discovered”, decided it applied to this situation and used it.
Now before you think I’m defending the Phoenix managers, I’m not. I’m just pointing out that using COBIT 5 as the benchmark is basically the OAG admitting that there was no clear, existing standard in the other documents to tell them what should have been done, and thus they had to create one. But it is mainly to give themselves some sort of independent, industry-based “cover” for their approach. It’s mostly worthless and has nothing whatsoever to do with the audit.
What the REAL criterion should have focused on
The first criterion should have been broken down into some basic building blocks:
Was there governance and oversight? Before you can decide on the rest, you need to rate whether there was anyone in charge. And if so, identify who and for what. If you look at it from a standard “project” basis, there would need to be clear parcelling out of problem identification, consideration of options and alternatives, analysis of individual options with recommendations, policy and program design of a single option, functional and operational design guidance, service delivery design, and implementation (which itself would break out into multiple sub-headings, more applicable to the other two criteria).

Now, the audit says that the first 4 or 5 of those are either not part of this audit or political decisions that won’t be rated. However, without identifying all the steps, it’s almost impossible to know if there was any governance and oversight throughout. Basically, without getting too simplified, the question isn’t which of those steps did what and when, it’s whether those in charge had the equivalent of a project charter that laid all that out from beginning to end. It is the most basic element. And it doesn’t exist. Public Services and Procurement Canada (PSPC) has bits and pieces of some of it, but it doesn’t have a comprehensive roadmap. Or anything even close to it.

If there was, you would move on to secondary questions like whether it was risk-based, implementation schedules, work plans, updates, monitoring, key performance measures and milestones, detailed reporting, transparent sharing of the documents amongst all the key players, etc. Don’t get me wrong, almost no large-scale project in the government has all of those things, so it’s a sliding scale. But to even get ON the scale, you need the most basic tool of governance and oversight to lay out the various phases and steps in detail.
Call it the picture on the box for putting the jigsaw together, and they not only don’t have the picture, but some of the people involved are also making up their own pieces.
Were there detailed accountabilities and responsibilities? If you did the first step, you could then move to the second step where you lay out who is doing what, how, when and even why. But as you can tell from reading the audit, the first time all the roles and responsibilities were clearly laid out seems to be in response to the audit. In order to explain it to the auditors, documents were created after the fact to say “Hey, here’s what we’re doing!”. That sounds terrible, I know, but to be blunt, it’s not uncommon. Many projects get by with far less documented detail than auditors want, and even the COBIT thing is part of that…if PSPC had a single doc that put it all together as what they said they were going to do, then the auditors would just audit them against that document. Without it, they kind of have to invent it, and you can see that when the auditors describe it, they have no source material to base it on, other than the interviews and ad hoc materials created by PSPC (most likely for the audit, or at least, coinciding with the same timelines as the audit). But the audit does make it clear in multiple places that the docs weren’t being used by management, or even existed previously in some cases.
Was the governance comprehensive and coherent? Okay, while I hate to let PSPC off the hook, this is a standard that is virtually impossible to meet with auditors. Because no matter what you have done, the auditors will always say you could have done more or done it better. There is only one organization in the entire government that tends to meet their standard and it is the military. If you are deploying troops to a battlefield with a mix of ice, snow and sand, there is a document somewhere in DND’s doctrine manuals that says how many latrines to build, where the materials are coming from, who is responsible for setting them up on what day, and likely the number of rolls of toilet paper needed for the size of the force deployed. Most other departments don’t even come close to that standard, nor should they try. It is over-management to the nth degree. However, the lower standard that can be met is if all the players are identified in terms of their roles (linked to B above) and the various stages (linked to A above), and if all of them are generally contributing to the same recognized goal that they all agree to in advance.
What did the audit find?
The audit concluded that there was “no comprehensive governance structure in place to resolve pay problems in a sustainable way”.
There is a bit more obiter dicta (incidental commentary) in the text, but that was their big conclusion. That’s it, that’s all.
What COULD the audit have found?
It could have said that there was virtually no evidence of oversight, governance, accountability, or responsibilities at even the most rudimentary levels that you would expect to find in place for a branch picnic, let alone a multi-million dollar project of this level of complexity.
It could have listed 20 or 30 project tools or documents that the auditors expected could have been put in place for any project management endeavour and then noted that PSPC had virtually none of them complete. Maybe even ranked them on a scale of 1 to 5 for each, where 1 was notional and 5 was complete. It would clearly show on the chart/table that PSPC never went above 3 (partial) on any of the 20-30 elements. This isn’t rocket science or even a new methodology, it’s similar to how TBS does Management Accountability Framework assessments. TBS basically says “We expect the following six things, which ones do you have?” and then rates them for degree of completeness. And then assigns colours/risks based on how far off the standard the department is.
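Tabulating that kind of MAF-style assessment is trivial once the ratings exist. A hypothetical sketch (the tool names and the 1-to-5 scores are mine, invented for illustration, not taken from the audit):

```python
from collections import Counter

# Completeness ratings: 1 = notional, 3 = partial, 5 = complete (invented)
scores = {
    "project charter": 1,
    "governance framework": 2,
    "risk register": 1,
    "implementation schedule": 3,
    "key performance measures": 2,
    "communications plan": 2,
}

def rag_status(score: int) -> str:
    """Map a completeness score to a red/amber/green risk rating."""
    if score >= 4:
        return "green"
    if score == 3:
        return "amber"
    return "red"

# Summarize how far off the standard the department is overall
summary = Counter(rag_status(s) for s in scores.values())
print(dict(summary))  # mostly red: almost nothing past "partial"
```

A single table like this, across 20 or 30 expected tools, would have told readers at a glance how little was in place, which is precisely the chart the audit never produced.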
Any one of the three groupings I used above could have five to ten documents/tools under each. A good project wouldn’t necessarily have them all, but they should have MOST. And clear explanations why some of the others weren’t applicable.
So when the auditors say there was “no comprehensive governance structure in place to resolve pay problems in a sustainable way”, there are 5 weasel phrases in there that water it down:
Was there NO structure or just not the FULL structure?
Was it not comprehensive, only partial?
Was there a plan but it wasn’t in place?
Was it not sustainable?
Was it not capable of full resolution, only partial?
By sticking those five weasel phrases in, it softens the report by a factor of five.
Instead, it could have said there was no project management capacity applied to the project at even the most basic levels.
Or it could have said that the most fundamental aspects of governance and oversight are completely non-existent.
THAT would have been an auditor being harsh. And based on all the reports coming out, much more accurate. But if an auditor is that harsh, it means someone would pretty much need to be fired. Because someone was SUPPOSED to be in charge, and such a finding would mean that they clearly didn’t do any of the things they were expected to do.
And that’s only the first of three conclusions that we didn’t get, and would be disappointing by itself.
As a civil servant, I was incredibly disappointed with the recent Phoenix audit, although maybe I just expected too much of it. Things that should have been clearly there, I would have thought, were in fact absent. Wording that I expected to be extremely harsh was toned down. Recommendations that would seem to be obvious ways forward were missing in action.
A friend asked me earlier this week where my indignant anger was at the fiasco and I think part of my passivity was because I knew the audit was coming. And I expected it to be a bombshell…a true blockbuster for its impact. Based on the actual wording, it seems more like they were going for a children’s firecracker that fizzled.
I expect three things from an audit:
A clear articulation of the project’s goal and what they were trying to do;
A clear indication of assessment/analysis of performance based on evaluation against an objective standard; and,
Clear indications of recommendations for a way forward and response by the organization how they’re going to address the recommendations.
This audit doesn’t do any of those three things.
Understanding what an audit actually does
Most people hear the word audit and they immediately think of audits like what happens to taxpayers when they get audited by Revenue Canada or the Internal Revenue Service. You get called in, you bring in all your receipts, they accept some things, deny others, make you go through every receipt and line item one at a time, and generally grill you like a fish. When it’s over, you wring out your underwear, write a cheque for whatever was denied, and trundle off muttering dark thoughts about their parents but settling for being happy to escape.
That’s a TAX audit of individuals and has no resemblance to an audit of an organization.
When people hear that a company is being audited, they think it’s the same thing. Line by line, item by item. Particularly when it is a company that is in financial trouble or has been playing fast and loose with financial reporting. But that’s not what an audit of an organization looks like. It can’t be. They can’t look at every line item in the books; there are too many. They have to find a way to triage the work, and so, regardless of whether it’s a financial audit, a management audit, a consulting audit, or an operational audit, they have a general approach to how they work.
A. SCOPING. The first step is to figure out what is in scope and what is out…that sounds kind of simple, or mundane, but it’s not. It’s the most important question of the whole audit. For the recent Phoenix audit, the first thing that was screened out was the initial decision to move everything to Miramichi. That’s a political decision, and the Office of the Auditor General does not audit politicians nor political decisions. It only audits the implementation of political decisions by bureaucrats.
That’s how democracy works. Politicians are elected to serve, and as long as they don’t violate the constitution or commit actual crimes, their decisions are only formally evaluated/audited in one way — elections. To use the vernacular, and to quote my father, if they screw up, “you throw the bums out” at the next election.
So I knew there would be no reference to the political decision itself. However, in the lead up to the Phoenix audit, I missed a sequencing nuance. I realized there would be TWO audits, sure, and one would look at the implementation of the centralization and one would look at pay processing. But I expected they would be sequenced in chronological order — look first at the process that created the system, and then look at the outcomes of that system as it is currently running.
I’m sure there were lots of people who said, “Start with pay processing” because it is the part that matters to unpaid civil servants right NOW. Not how we got here, but why it’s not working and how to fix it.
Yet I cannot help but remain confused — how do you explain the gaps and the ways to fix them if you don’t first examine how the problem was created in the first place?
There’s an analogy a friend shared with me about upstream and downstream activities. Downstream you see a whole bunch of people in the river, and they have to be saved from drowning. So you start treating the symptoms and saving people, hiring lifeguards and rescue workers, and save everyone you can. Meanwhile, upstream is someone on a pier pushing people into the water. The cause, not the symptom. Fix the cause, stop it from getting worse.
So I fully expected the audit to scope IN the initial implementation and scope OUT some of the current processing. Nope, it was the other way around. Odd.
B. GOALS / OBJECTIVES. Some people mistake this area as the same thing as the scope, and while they are related since the scope affects the wording, the question is simpler than that…the auditors basically ask the question:
What was the organization trying to achieve?
To do that, it asks for source materials. Original decision memos. Treasury Board submissions. Memoranda to Cabinet. Not to see the policy advice and options, those are out of scope as being “political”, but to see what it was that the politicians approved. What was the “decision” that they made…did they say “Centralize and damn the torpedoes”? Did they say, “Pay them right, pay them fast”? What was the actual goal?
Because the goal is what the auditors are going to measure you against in the audit. Did you make the right decisions at the right times with the right information and the right control structures in place for monitoring to achieve the goal that you set?
In this case, having already scoped out the implementation into a separate audit, the objective was defined as:
Whether selected departments and agencies resolved pay problems in a sustainable way (that is, effectively and efficiently) to ensure that public service employees consistently received their correct pay, on time.
It’s well-worded, very succinct, and entirely defensible to the public, the politicians and the public service. Put differently, auditors are really good at framing the question precisely and not hiding in the weeds.
That’s a great question… do the process and the system work?
Well, anyone looking at that is going to say, “ummm, no.” If they are affected by Phoenix, it will be a resounding “hell no”. Which then raises the question…why bother with an audit? Because it is the additional findings and recommendations that go with that conclusion that will attract responses, solutions and usually resources to resolve it.
Audits are BIG DEALS. There are management responses required, signed by deputy ministers, with action plans to implement and to report on regularly for how they are doing at implementing the recommendations. Clear, public, and shared with Parliament. This isn’t simple corporate reporting that is done annually…this is ON TOP of all of the normal reporting and almost always requires a separately designated team to manage the response, monitor the implementation, and report clearly to the DM if/when a recommendation isn’t being addressed.
C. TOUCH POINTS. This is my term of art, not what auditors use. But one of the things auditors do is try to establish a timeline for all the big decisions taken during the life of the audit period. In this case, the period ran from 24 February 2016 to 30 June 2017. Sixteen months.
In that time frame, they would look at all the decisions by the big players — Public Services and Procurement Canada, Treasury Board Secretariat, and the various Departments and Agencies involved. Much of that timeline would usually show up near the front of the audit in the description of the implementation, maybe broken down into phases. What I find weird, though, is that because they are looking ONLY at processing, they marked only the milestones of completing hiring (Feb 2016), first-wave onboarding, second-wave onboarding, and, by default, the end date of the period of the audit. Those aren’t decision points.
But the audit would usually go even further than that.
It would say okay, there was a decision at month 1.5 of 16 by TBS. Let’s look at the material provided with that, the options considered, the analysis that went with it, how it was communicated, etc. Then they would see another decision at month 4.5 of 16 made by PS&PC, and do the same. And so on through the whole process. That gives them their timeline of decision points, not events.
And lastly, they would then go through and identify where control and monitoring systems were put in place to allow management to know if things were working the way they were intended when the decision was made, if the systems were capable of sending error signals back to the decision-maker for action, and if/when error signals were sent, were they acted upon properly.
Those three pieces — decision points, decision making, and control mechanisms — are the cornerstones of every audit I have seen in the last 20 years. And it isn’t included in the report.
I find it difficult to believe it wasn’t done. I can’t see how they could have done the audit without it. But the failure to provide it is a giant lapse in transparency and communications that would have helped people understand who decided what, when, and why. And what they did about it. Separate from all the prose, I was looking forward to the diagram. So I’m incredibly disappointed it isn’t included. I would be embarrassed beyond belief if it were never even created.
D. BENCHMARKS. Auditors have to have some sort of benchmark by which to judge you; it can’t seem like they are merely Monday morning quarterbacks second-guessing what you decided the day before. Don’t get me wrong, that view is popular. Many people think auditors are basically a cross between robbers who pick the pockets of dead soldiers on the battlefield and soldiers actively running around bayoneting the wounded.
But they start with any standards already available. For example, on compliance audits, they judge how well you comply with some TBS or PCO or other government department-issued policy. Are you doing Gs&Cs management right? Well, what did TBS tell you to do when you are managing Gs&Cs? Are you doing it? Let’s measure you against that policy.
Are you doing privacy management right? Let’s measure you against the policies on privacy from TBS and guidance from the Office of the Privacy Commissioner, and see if you are complying with what those policies say.
However, in other areas, they go broader. On financial audits, the Financial Administration Act and other legislation are critical, but so are the Generally Accepted Accounting Principles (GAAP), the professional standards of accountants and financial officers. Or that is what they used to be called. I think it is now called the International Financial Reporting Standards (IFRS). In finance-related audits, the question often is, “Did you follow IFRS?”. If yes, you’re okay; if not, they recommend you do. Except, as a small digression, the recommendations are often ridiculous in this area because the “standard” is often a generic principle, which also has a corollary that says the exact opposite. In finance, you can make a decision to record a series of transactions one way, and it will make your balance sheet look PERFECT. Completely representative of your assets on a specific date in time. It’s great for a snapshot, a picture if you will. But on the other hand, your income statement is about changes over a period of time, more like a video. That same standard you chose to make your balance sheet look great almost always introduces an element of misrepresentation in your income statement, and violates a standard about how to make your I/S look PERFECT. One standard says to do X, the other says to do Y. So the accountants and the auditors argue back and forth, which is the right “standard”. Both, either, neither, all of them are right.
Equally, many of the benchmarks are also often misleading. I used to do a lot of performance measurement work. One set of policies tells you that if a PROGRAM (defined in one way) is over $5M, it has a bunch of rules and regs about how it works, and includes a reference that it should also be placed on a specific inventory that each department updates annually. However, you then use that inventory for a slightly different purpose in other instances, and those policies say all PROGRAMS on the inventory should do X and Y as well. All of which is predicated on the same definition of a program, and they are consistent but not identical. Add in a third element — you can add stuff to this inventory that is NOT a program but is a related activity you want to highlight as if it WERE a program — and things get really messy. Because someone then tells us, “Well, it’s a program” (it’s not), “it’s on the inventory” (true) and “therefore you have to comply with the other rules plus do X and Y as well” (you don’t). That sounds pedantic, right?
Except then an auditor comes along, says “All programs on the inventory should have compliance with the rules plus do X and Y” and then dings us for not meeting that standard. Except that wasn’t the standard for non-programs, it was never intended to capture this type of activity, and you can’t do it. It can’t be met. I’ve even had a conversation where the auditor has ADMITTED a standard doesn’t apply, and then in the audit report, it said, “But it’s a good practice, so you should do it anyway.” ARGHHHHH!
While I appear to be digressing, I want to point out that the auditors have to establish a clear benchmark. One that the management hopefully knew about and was trying to meet, or was an internationally recognized best practice they should have been following or was a TBS guideline they had to meet. Something that says, “Here’s how we measure your performance in doing X”. Sometimes you agree, sometimes you think they’re inventing standards, but there is one you had to meet.
In this case, they need to state the benchmark for measuring if Phoenix was resolving pay problems effectively and efficiently to ensure employees got the right pay on time.
But…the benchmark isn’t there. I never see the yardstick. Near the end of the audit, it lists some criteria and it comes close to the benchmark, but not with any actual scale or way of turning it into a measurement. It’s just motherhood and apple pie statements.
Sure, they say that “the federal government has an obligation to pay its employees on time and accurately.” Uh huh, everyone knows that. But a yardstick is kind of like standardized testing. Which percentile of performance are they in?
Let’s be clear. Everybody knows Phoenix isn’t performing. It sucks. It isn’t meeting ANY performance standard on just about any measure. But the audit should be able to say, generally, a well-functioning pay system should be able to hit a standard of x% correct in y many days, etc. with a range of expectations.
I’ll be overly simplistic, and I know audits don’t do this, but something along the lines of the following for speed of pay change requests:
80% within one week;
90% within two weeks;
95% within three weeks;
99% within four weeks; and,
100% within two months.
Some sort of rating scale to say the above is acceptable, while 90% in one week would be good, 95% in one week would be excellent, 99% would be best practice, etc.
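To make the idea concrete, here is a minimal sketch of how a tiered service standard like the one above could be checked against actual completion times. The tiers and the sample data are entirely made up by me for illustration; nothing here comes from the audit.

```python
# Hypothetical tiered service standard for pay-change requests:
# (days, cumulative share of requests that should be done by then).
STANDARD = [(7, 0.80), (14, 0.90), (21, 0.95), (28, 0.99), (60, 1.00)]

def meets_standard(completion_days):
    """For each tier, return the actual share completed in time
    and whether that share meets the tier's target."""
    total = len(completion_days)
    results = {}
    for limit, target in STANDARD:
        actual = sum(1 for d in completion_days if d <= limit) / total
        results[limit] = (actual, actual >= target)
    return results

# Example: 100 hypothetical requests that exactly hit every tier.
days = [3] * 80 + [10] * 10 + [18] * 5 + [25] * 4 + [50]
print(meets_standard(days)[7])  # share done within one week, and pass/fail
```

The point isn’t the numbers, which are invented; it’s that a table like this gives the reader a pass/fail per tier instead of a single unexplained average.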
There are hints here and there. A few phrases dropped in now and again. Like noting that on only two occasions has processing exceeded intakes, i.e. they were actually reducing rather than increasing the number of outstanding requests. Was that the standard? Was processing > intake an internationally recognized standard? A best practice? A minimum standard? I don’t know. It doesn’t tell me what the standard was.
Now, this is where my anger and frustration really starts to percolate. Because without that clearly articulated standard, it is the impact on the next four inter-related pieces that completely start to gut the usefulness of the audit.
E. FINDINGS / RECOMMENDATIONS / RESPONSE / ACTIONS The auditors grouped their findings and recommendations into two parts — the state of pay operations and the way forward. Nice groupings, too bad it doesn’t deliver on the promise.
They found that pay problems continue to increase. No surprise. Yet what IS interesting is that there is ONLY one superlative in the entire description of the pay problems — “significant”. And what does it apply to? The internal numbers were SIGNIFICANTLY higher than reported externally (yawn) and error rates were higher than targeted (ka-ching, that’s SIGNIFICANT). Yet what else did they find?
500K outstanding pay requests — no rating;
Employees waiting more than 3 months, 49K who had been waiting more than a year, and approximately 32,000 employees affected as having a high financial impact — no rating or qualification of that, and while it sounds alarming, the threshold was only $100 (i.e. 17K employees were waiting more than a year for something amounting to less than $100 in difference — was that the standard? I don’t know); and,
$520M in pay outstanding, whether paid too much or too little.
Yet nowhere in there is there any standard about what is the right level or not compared to other systems. Errors happen, sure. But these look pretty DAMN BAD. Yet there is no indication of how bad. Dante’s Inferno bad? Another Batman movie with Michael Keaton bad? I’m sitting next to someone at lunch with a really smelly sandwich bad? HOW BAD IS IT?
That’s the job. Not just the facts, and numbers, but to actually measure the performance against the yardstick above.
And there are little bugaboos I see in the analysis that drive me insane. Auditors are independent, and they will choose to include calculations that they think makes sense, even when they may not. I’m thinking of a specific audit where they used a Public Accounts number that was off by a factor of four — it made something look huge and significant but included a ton of other stuff that wasn’t related to the audit. They said it was “context”.
In this case, PSPC excluded things where it thought they were more administrative…but sure, I can see why OAG wanted them in. Makes sense. Yet I also see why PSPC excluded them — no financial impact, lowest priority. Same argument for the low-effort ones, etc.
But wait a minute, PSPC eliminated ones that were suspected duplicates. That makes sense. Because I know there are people who had one problem and submitted 25 or more PARs to have it fixed, before the system said, “Hey, don’t do that, it just gums up the system for no benefit, just wastes everyone’s time on the other end and delays us getting to other requests too”. But they had just sent them in. Followups, corrections, tweaks. Repeats even. So eliminating duplicates is a pretty solid management decision to know what the real problem is, not how many records you have. Put another way, maybe a bit simplistically — when you book your car in for the winter tires to be put on, do you make one request or four separate ones? Did the garage do one installation in 20 minutes, or are they so fast they can do 4 in 20 minutes? Yet OAG overrode those. I hate when they do that because the devil is in the details. SHOW ME THE DATA. Break it down, do a detailed table. And they’re supposed to measure in part how PSPC did against ITS own internal data. If the auditors disagree, they can say they disagree with the way it is calculated, and then add footnotes or alternative graphs, but they should be using the PSPC data, as that is what PSPC was using.
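The de-duplication argument can be sketched in a few lines (the field names are mine, purely illustrative): counting raw records inflates the backlog when one employee files the same problem several times, while counting unique employee-plus-issue pairs measures the actual workload.

```python
def raw_and_unique_counts(pay_requests):
    """Compare the raw record count with the count of distinct
    underlying problems (one per employee-and-issue pair)."""
    unique = {(r["employee_id"], r["issue"]) for r in pay_requests}
    return len(pay_requests), len(unique)

# One winter-tire appointment, booked four times, is still one job.
requests = (
    [{"employee_id": 101, "issue": "acting pay"}] * 4
    + [{"employee_id": 202, "issue": "overtime"}]
)
print(raw_and_unique_counts(requests))  # → (5, 2)
```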
Farther down in the analysis, it says PSPC did not do a thorough analysis of the financial impacts. Sounds damning. But compared to what? HOW DO I KNOW IF THIS IS SIGNIFICANT? WHAT’S THE STANDARD THEY DIDN’T MEET? Oh right, the audit doesn’t tell me.
When I get to processing times, it tells me the terms and conditions say that the GoC will make most changes in pay within two pay periods. They then say “on average” it’s 3 months. Wait…that too looks bad, and THEY HAD A STANDARD, so they can prove it’s bad. So where’s the assessment that this is A NO GOOD, HORRIBLE, VERY BAD THING? Nowhere. And, again, I see a bugaboo. Average? Really? We’re talking detailed service standards. Where’s the table that says 40% was within x weeks, 50% blah blah blah, etc.? That’s basic treatment of service standards, yet no table is shown. Nor a graph. Yet they had to be able to fully calculate the info to do a proper audit. Where’s the DATA?
I’m also not entirely sure I trust their numbers on the incidence of pay rate errors. They say that PSPC doesn’t track the data that way, yet they came up with an estimate that is based on a pretty small sample, and some of it looks self-reported. There is a huge difference between someone having a pay “ERROR” and someone saying they have a pay “ERROR”. Because the auditors are talking about a VERY specific type of error when they quote their statistics:
employees not being paid for overtime;
employees who work shift work not being paid for all the hours that they worked at the correct rate;
employees who transferred from one department to another being paid by both departments; and
employees not receiving correct pay when acting in a temporary role for a superior, or not receiving pay increases related to promotions on time because of late input into Phoenix.
At this point, I’m getting grumpy because it starts to look like a crappy audit with a sloppy methodology. Why, on any planet, would they NOT CHOOSE A KEY GROUP like the people NOT BEING PAID AT ALL? And more importantly, why is there no materiality mentioned for their review? If the number was that 98% was wrong, it would sound catastrophic unless you knew they were out by less than $1.
But not being paid? Give me THAT statistic. It is clear. It is unassailable. It tells us the relative significance of the problem.
Instead, there are no benchmarks to tell me how significant any of those problems are. It doesn’t say “100% wrong all the time”; it says: within this timeframe, who’s got an outstanding request? Which, not for nothing, may NOT BE CORRECT! And it uses the weakest methodology available to them. If I loaned them darts for their next audit, I might have more faith in their numbers.
At the end of the section though, there is a fantastic statistic. The report says:
As of June 2017, departments and agencies reported that the government owed 51,000 employees a total of $228 million (because the employees were underpaid) and 59,000 employees owed the government a total of $295 million (because they were overpaid). In other words, there were over $520 million worth of pay errors that still needed to be corrected.
That’s huge, right? Off the charts? But let’s do the math. That means the average (which the auditors loved a few paragraphs before) owed is roughly $4,500 per employee. Gross. With an average salary of around $60–75K across the public service, that’s between 1.5 and 2 paycheques. But even if that is significant on its own, and I don’t dispute that for a lot of families, we also know that there are cases that are WAY OFF that amount. People who are claiming they are owed months of non-pay. Where is the incidence rate breakdown to tell me how that $228M is distributed? Are there 100 employees owed more than $50K? 500? 1000? 5000? I have no idea, as they don’t give the data nor any indication of what the standard was that they were missing and how abysmal the performance was as a result.
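A quick back-of-envelope check of that paragraph’s math, using only the report’s own figures plus an assumed average salary of about $65K over 26 pay periods (my assumption, not the audit’s), shows where the rough numbers come from:

```python
# Figures quoted in the audit report (paragraph 1.46-style numbers above).
underpaid_total, underpaid_count = 228_000_000, 51_000
overpaid_total, overpaid_count = 295_000_000, 59_000

avg_underpaid = underpaid_total / underpaid_count  # about $4,470 gross
avg_overpaid = overpaid_total / overpaid_count     # exactly $5,000 gross

# Assumption: ~$65K average salary over 26 pay periods = $2,500 per cheque.
paycheque = 65_000 / 26
print(avg_underpaid / paycheque)  # roughly 1.8 paycheques owed on average
```

The average checks out, which is exactly why it’s not enough: a mean tells you nothing about how many people sit far out in the tail.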
Looking at the $295M overpaid, there are HUGE implications for that, including what a lot of people are pointing out online and elsewhere — the $295M was paid out with taxes deducted, yet when it is requested back, they ask for the gross amount, with the taxes to be Revenue Canada’s problem later. In other words, you might have been overpaid $15K gross, only seen $11K of it in total, and yet they’re asking you to pay the $15K back, with your taxes sorting out the difference later. Like that isn’t going to screw people big time. But radio silence in the report, other than to say “it might have implications.” Implications? Really? That’s a five-alarm fire, baby!
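To illustrate the gross-versus-net squeeze with hypothetical numbers (the $15K/$11K figures are my illustration, and the withholding rate here is assumed, not anyone’s actual tax situation):

```python
# Hypothetical overpayment recovered at the gross amount.
gross_overpayment = 15_000
withholding_rate = 0.27  # assumed source-deduction rate, for illustration

net_received = gross_overpayment * (1 - withholding_rate)  # ~ $10,950
# The employee repays the gross now and waits for the tax system
# to return the withheld portion later.
out_of_pocket_until_refund = gross_overpayment - net_received
print(round(out_of_pocket_until_refund))  # → 4050
```

That gap, thousands of dollars fronted by the employee until a future tax reconciliation, is the part the report shrugs off as “implications.”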
Climbing out of the rabbit hole and moving on
I went through that first area with a fine-tooth comb as I wanted to show you how bad the audit seems to me. Given the public scrutiny, and the impact it has on so many public servants which goes to the heart of the employer-employee relationship for delivering government services to Canadians, the audit should be bulletproof. I shouldn’t be having these quibbles. I shouldn’t be asking for qualifiers or assessments or standards, they should be clearly laid out.
Moving on, the auditors found that PSPC didn’t have enough analysis of the extent or causes, and were still reactive. No surprise.
Hold on, I want to say something positive. I know, surprises me too. But I really like paragraphs 1.52 to 1.56 as to some of the causes of the problems:
Departments not entering stuff right;
The design was for real-time entry (forward-looking), but the majority of departments and agencies have huge retroactive processes (after the fact, which Phoenix didn’t bring online until March) — wait, what? Where is the detailed estimate of the numbers/time delays caused by THAT…oh right, the auditors say PSPC should do that in the future;
A major design flaw by the creator (my rating) that certain types of data entry on a file that was already being calculated by the system produced technical errors;
A major design flaw by the creator (my rating) that it couldn’t easily handle shift work; and,
Expected complaints from pay processors of not enough training, high workload, shifting priorities, and having to redo things more than once.
As a result of all this audit work, they come to the conclusion that PSPC should analyze the root causes of pay problems (PSPC say they’ll do a full HR-to-pay process analysis) and PSPC should develop a sustainable pay solution with a detailed plan that integrates HR (see the previous commitment to an HR-to-Pay plan).
This is where I get REALLY GRUMPY. Even with faulty analysis and methodology, and a lack of good data shared openly and transparently to truly explain the situation, the recommendation says do an analysis and come up with a plan. Yawn.
Wait a minute…these are the same people who did the initial analysis and came up with the current plan that is a DISASTER. Where is the harsh evaluation of what is there? Oh right. They had no benchmark. So they couldn’t go to the really aggressive audit language options:
SIGNIFICANT MANAGEMENT FAILURE
NOWHERE NEAR PROFESSIONAL STANDARDS
NO GOOD, HORRIBLE, VERY BAD DAY
FUBAR / SNAFU
SITTING THROUGH GIGLI BAD
When auditors review something, they express an opinion at the end as to whether they felt they had all the info they needed, which means the audit is presumably representative of reality and can be relied upon, and whether they are comfortable with the management response. Not surprisingly, they say this audit met those standards. It’s a full audit, no management variance from the norm. Yet how could you come to the conclusion that the same management that put itself into the hole is capable of getting itself out of the hole?
The only way to do that is to have a clear bulletproof action plan. One totally defensible for all to see.
But PSPC doesn’t have that. They have a plan to have a plan. Or more accurately, they have a plan on how to develop a plan that will help them develop a solution. That response is NOT even close to what I think is sufficient, nor what is required.
And in every audit I have seen in the last ten years, a plan to have a plan was officially insufficient. If you have an audit, you need to show the actual plan to address the problem, not a plan to think about maybe having a plan. It seems more like a Dilbert cartoon than an actual way to approach an audit.
In my view, this looks way below the normal standard and yet for some reason, the auditors are not calling them out on it in the report.
Departments and agencies are not blameless in the problems faced by Miramichi. Yet I am shocked by how self-serving the section about the Departments and Agencies is. It looks, and I can’t believe the auditors fell for it, like they said, “It’s all Phoenix, it’s all Miramichi’s fault, we’re on the side of angels here”. Cuz that’s almost what the report says. Even in areas where the partners weren’t doing things right, it was only because, allegedly, PSPC wasn’t clear.
There is so much BS in those paragraphs, I can’t believe PSPC and TBS accepted it. I get that there are huge small p political pressures for PSPC to own the problem and not blame other departments. But there are two huge pieces of context that are missing from that equation.
First, I mentioned in my earlier post that many departments dumped outstanding requests on Miramichi and didn’t worry about getting it right when they did. More like dump and run. In addition, there is a stat buried in 1.67 that says very clearly departments were doing retro requests on pay (which automatically makes them late, nothing to do with Phoenix), even for employees who had already been working for two weeks. That is a GIANT problem that lies squarely with departments, not with Phoenix. Yet no slapping of them for it. Or the fact that Phoenix started working with what was essentially an existing backlog they didn’t create but they did inherit.
Second, there is a perception that everything ran smoothly without Phoenix, we didn’t have huge problems before, and that those departments without Phoenix are doing just fine. Except we know that isn’t true. One area that got a lot of attention in the past two years was the plight of co-op students. Hired in the summer, starting work in May, and not getting paid until July or later. Delays, for the most vulnerable employees, what a scandal for Phoenix!
Yet it wasn’t anything new. Almost every department with more than 2500 employees (rough estimate) has treated students as a low priority for years. Letters of offer that didn’t get sent in advance. Security screenings not done before they start, thus delaying the start. Delays in reimbursement for moving expenses. And yes, delays until June or July to get their first paycheque. So what was different with Phoenix? It was all centralized. You could actually COUNT the number of students affected. For the first time, we knew a group of employees was not being handled properly, and we could point to it as a collective problem. Which then got solved.
But it wasn’t a Phoenix problem. It wasn’t even a new process problem. It was that we had never bothered to fix it previously because 60+ departments and agencies never counted them all at once and shared the info to show how bad it was. About five years ago, a former deputy minister of some renown was at a gathering of co-op students one summer in late July, doing the outreach, showing they were valued. And after her little spiel, the meet and greet, and a few questions, one brave intrepid student said, somewhat shyly that she had a question. Were they going to be paid for their work? Three months in, none of them had been paid. And HR had NEVER TOLD THE DM. They weren’t hiding it, it just wasn’t significant enough to share. Needless to say, the DM was suitably and appropriately pissed.
And guess what? All of those lovely numbers we don’t have for Phoenix? We don’t have them for any non-Phoenix group either. The lack of analysis that PSPC hasn’t done? Nobody else has done it either for the other systems. A lot of those complicated 82000 files that they were working to fix at the beginning? They came from departments that hadn’t fixed them either. But they were spread out so they looked like little pockets of problems. Centralization revealed it, it didn’t necessarily create it.
So what’s missing from this section? Any resemblance at all to some sort of comparator. Show me a department that isn’t using Phoenix. Show me the standard. Show me the old stats based on numbers that Departments and Agencies have NEVER shared. Even in the audit, where it says a gathering of info is being done about problems in various D&As, the departments haven’t submitted the info. Haven’t done it. Haven’t got to it. Why? Because many of them have NO data. No info at all. They’re creating it from scratch, much of it manually. Yet the only one being slammed is PSPC?
In my first post, I said it wasn’t a technological problem, and the audit shows that. There is very little in the audit pointing fingers at IBM and the system. It is clearly a process issue, one that PSPC owns a major part of, but not all of it.
And while I am happy that the audit recommends that TBS and PSPC work with departments and agencies, it narrows to working on targets, timelines, performance metrics, reports, and access. Nothing that says D&As need to do anything except cooperate.
Have auditors decided not to point out epic fails?
The Auditors’ Way Forward
If I thought the opening was bad, I shouldn’t have had big expectations for the solutions, right? Right.
But I didn’t think they would find a single international comparator and decide it must be the model to follow. Yet they did. The report points to one country, Australia, and says Canada should have done what it did for governance. No other benchmark is referenced. Nor any assessment, beyond a general statement from the auditors, that the Australian solution is actually working or is a model to follow. The Australian public servants aren’t exactly singing its praises as a success. In fact, as the auditors note, the great solution didn’t fix everything, and some of the issues are still present 8 years later. Is that their standard? Anything better than eight years is a success?
SHOW ME THE STANDARD! State it clearly.
Oh, and the solution of a new governance structure? Is it just a coincidence that it almost mirrors exactly what was already announced at the political level?
Okay, I’ll stop being grumpy for a second and say something positive about the report. Nice font. No, I’m joking, I did find something good in 1.90, 1.91 and 1.92. The three paragraphs talk about how PSPC tried to respond to the need to triage. Awesome paragraphs. So much so that I’m going to repeat them here:
1.90 First, in July 2016, Public Services and Procurement Canada said it would process what it said were the outstanding pay requests for 82,000 employees by the end of October 2016. When the Department realized that it could not process these requests by 31 October 2016, it said it would process the remaining requests as soon as possible. However, by the end of our audit period, Public Services and Procurement Canada told us that pay requests for 5,000 of these 82,000 employees still had not been processed. As our analysis shows (see paragraph 1.29), there were more than 150,000 employees with outstanding pay requests at the end of our audit period.
1.91 Then, in December 2016, Public Services and Procurement Canada publicly announced that pay advisors would focus on pay requests in four priority areas: terminations, leave without pay, disability insurance, and new hires. The priorities changed again in January 2017, to focus on disability and parental leave, which the Department stated was based on a request from unions. The Department committed to meeting its target processing timelines by April 2017 for parental leave and May 2017 for disability leave. In March 2017, the Department reported that it was making significant progress in both of these areas and that it was meeting its target by the end of June 2017. We found that Public Services and Procurement Canada’s processing target did not include the time it took departments to get a pay request to the Miramichi Pay Centre. This means that from an employee’s perspective, the time it took to receive the pay was usually more than the number of days specified on the Public Services and Procurement Canada website.
1.92 Public Services and Procurement Canada later said that it would prioritize pay requests that had a high financial impact on employees, which accounted for over half of outstanding pay requests. (The Department defines a high financial impact as over $100 and a low financial impact as up to $100.) Its goal was to process all outstanding high-impact pay requests and meet the processing target for new requests every month by the end of summer 2017. However, we found at the end of our audit that high-impact pay requests still made up more than half the requests and were increasing (Exhibit 1.7).
First, they did it by time and got 94% of them done before they had to shift to focusing on terminations (overpayments), leave without pay (overpayments), disability insurance (ups and downs), and new hires (not getting paid at all). Later they got pressured and switched to disability and parental leaves. And then later they went by dollar value (although why the threshold is $100 is beyond me, and there is NO comment from the auditors about materiality or whether that was the right amount).
Where did these pressures come from? Us. Civil servants. Directly and through the unions. They are all crappy situations. And after we tamp down the noise from one group to a dull roar, another squeaky wheel gets some grease. THEY ALL NEED THE HELP.
So what did we do? We spoke with different voices. PSPC had a plan, but it wasn’t meeting one or more group’s needs, so they shifted it and focused on another group. Different stakeholders had different priorities.
But if I’m fair, that’s not on PSPC. That’s on us. You can’t do effective triaging of the victims that way. We need a better system, and there isn’t one envisioned by the audit.
In fact, what is the auditor’s recommendation? Review them all and set priorities with timelines. Well, thanks for coming out. I’m sure that will fix everything immediately.
The final recommendation, which is always a given, is to have better costing of everything.
I said at the beginning that I expect three things from an audit.
Clear articulation of the project’s goals…well, it was kind of there, but then there is no fully outlined timeline. That seems odd.
A clear indication of the standard and how they are doing against it…very little in the way of articulated standards, yet also no assessments (mostly, never, sometimes, often, more, less, better, worse, SOMETHING?) of the performance.
Recommendations with a plan to address them…it isn’t there. Analyze the problem, develop a plan, talk to others, and cost it out — and the solution is that they agree to “come up with a plan”.
That’s what they came up with to respond to a crisis that is having a more negative impact on employees as a whole than DRAP ever did?
Underwhelming. Infuriating even.
Apparently it was a nice press conference though. Two thumbs up.
Hopefully, the auditors don’t experience pay problems from Phoenix before the next audit, and maybe the next one will have more teeth.
I wrote earlier on Phoenix and attempted to deconstruct the mess that it has become, although perhaps it is more apt to say the mess it was from the beginning and remains so even now. My focus was on the process, and some people asked me about an apparent lack of sensitivity or where my anger was for the disaster on the victims’ behalf. I’ll defer my anger to my next post, as it goes in a slightly different direction than most.
But let’s address a couple of those sympathy concerns.
First, am I cold, heartless, unsympathetic? Not really, but I am capable of writing about it in a dispassionate tone. Partly because it’s public administration and anything less dissolves into rhetoric. And partly because I view public crises like this almost as a battlefield of wounded: you have to triage the victims somehow, seeing who needs to be stabilized quickly while moving the serious cases to the head of the line.
Sure, I said upfront that everyone should be paid in full, on time and without reservation. Saying it is easy. It’s a fundamental principle.
But they weren’t paid in full, or in some cases, at all. Nor were they paid on time, or in some cases, at all. Nor were they paid without reservation, or with any explanation of how it was correct or not, or if it was ever to be corrected in the future.
And I can sympathize all I want, but there’s a saying about where you can find sympathy — in the dictionary between sh** and syphilis. And about as welcome or useful.
Sympathy doesn’t change the facts. Or the process that has been a disaster. Or the need to triage the victims somehow.
Do all of them deserve to be paid? Absolutely. In full, on time and without reservation.
But if you want to talk empathy, or sympathy, and throw yourself on the bonfire of righteous indignation as a victim with claims for support, we as civil servants need a bit of a reality check too. There is a limit to empathy in the public, as there should be, and we’d better do the triaging properly before someone else does it for us.
Seeking public sympathy
I know lots of people complain that the public doesn’t understand our jobs, blah blah blah, we’re underpaid against some fictional private-sector equivalent, blah blah blah. But here’s the thing…if we think our jobs suck that bad, why do so many of us work for the government until retirement? The short answer is that they are good jobs, relatively secure compared to lots of other jobs, and decently paid. And in most cases, better than the alternatives we have available to us.

Sure, I don’t discount personal values, it’s the primary reason I work for the government. But I also know that I couldn’t make the same money doing the same thing in any other sector in Canada. Yep, I’m told by subordinates, peers, and bosses that I’m really good at my job, a combination of interest and skills. But I’m under no illusions that I’m some wunderkind who could leave tomorrow and light up the private sector in anything other than consulting areas, which I have little to no interest in doing.

I don’t care to debate whether people earn the right salary or if it could be higher or lower. That’s subjective. But if you compare salaries with the vast number of Canadians who work in other sectors, almost all of them think we have pretty good jobs. So there’s that.
Secondly, there are people who say they haven’t been paid at all in months. Yep, that totally sucks. From top to bottom, end to end. Does my saying that make you feel better? Of course not, you’re not a superficial snowflake.
But at the same time, I probably have the most sympathy for anyone who isn’t being paid at all — from the long-term worker whose pay got screwed up, to the young employee who changed jobs and it got screwed up, to the new co-op student trying to pay rent and save for the next term of school.
And yet I still have to triage somehow.
There’s another external perspective. The vast majority of Canadians we serve could not go three months with no paycheque. They couldn’t do it. Not without social assistance, or EI, or looking for another job. They sure as hell couldn’t afford to keep showing up for work. Yet civil servants apparently can. And are. That should give you some idea of our relative standard of living compared to the taxpayers who pay our salary.
Yep, you can tell me “But, but, but, but…” and I fully agree with you on how much your life is getting screwed up by the lack of being paid. But you’re still going to work, partly because you know it is the best job you have ever had or ever will have, and because you are able to bootstrap and hack your life into increasingly screwed up ways and still keep going. You shouldn’t have to, totally agree, but that isn’t the point. The point is that the economic resilience that lets you do that is about ten to fifteen times greater than that of most Canadians.
And if I have to triage which victim to help, you’re not dying on the battlefield in the next five minutes. Pretty cold, I admit.
A crappy comparator
There is an even crappier element that goes with that comparison. There are Canadians, both receiving and not receiving benefits from the government, who are in far worse difficulty and personal hardship than our lack of a paycheque. Health issues, personal trauma, unimaginable loss. As an example, one that I try to keep in mind, there is a program at ESDC for temporary financial assistance for parents of murdered or missing children. Because they’re in hell and likely not able to be at work. Thousands of stories across the country of people with unimaginable hardship asking for help.
And we, as civil servants, are viewed by many as having good jobs, maybe even being overpaid, definitely viewed as often incompetent or lazy, or both. And we say, “Hey, we deserve to be paid.”
Absolutely. In full, on time, and without reservation. Absolutely. A solid right. Reinforced in law, written in agreements, the fundamental principle of our economic system.
But when I look at a parent whose child has gone missing or been murdered, or a family whose 35-year-old mother has cancer, or one dealing with an abusive stepparent, I have to take a civil servant who has a pretty good life and pay rate yet is suffering financial hardship, personal stress, and worsening health, and put them in the same triage context that most Canadians do. We’re not at the top of their sympathy card list. Not as a group at least.
Killers of empathy
I know you probably don’t think that sounds very sympathetic. Mostly because I speak truth, not hypocrisy. And no, I’m not talking about simply “counting your blessings”. But we also shouldn’t come forward as civil servants with stories we have heard from John in accounting who knows someone who works over at another department, who heard that their cousin, twice removed, once met someone at Bluesfest who hadn’t been paid in 20 years.
Because that’s the kind of crap that people are peddling out there. You want sympathy for a bunch of fellow civil servants who are experiencing pay disruptions, and treat them all as a single group worthy of the same sympathy? Not a chance. Because not all of them are the same. I don’t mean they’re not all victims, I’m saying they are not all in the same predicament or facing the same level of hardship.
I’ll try to formally triage the group a bit, and only focus on those who are actually having their pay messed up:
People who aren’t getting paid at all;
People who are getting paid, but less than 60% of their regular salary;
People who are getting paid, but 60-80% of their regular salary;
People who are getting paid, but 80-100% of their regular salary;
People who are getting paid their regular salary but haven’t got a promotional raise yet;
People who are getting paid but haven’t got their retro cheques yet;
People who are getting paid, but haven’t got corrections from some time back; and,
People who are being overpaid and will eventually have to pay it back.
There are probably some other categories in there, but let’s start with those 8.
Should all of them be getting paid? Absolutely. In full, on time and without reservation. Are they? No.
But I don’t have a magic wand, nor does anyone else, that will get all of those people paid at once. So who needs to be prioritized first? The list above is my personal priority order. It’s a bit subjective in some places…I’m not sure about #s 5, 6 and 7, and I could go either way on the head of that pin. But #1 (not being paid) vs. #8 (overpaid)? No question which one I think is higher priority. And defensible both publicly and in public admin policy.
I could see someone else arguing that the time factor is the most important issue, that those who have been affected the longest and thus have had to stretch their resiliency network the farthest should be the highest priority. I’d generally agree with that as a second parameter within each of the categories above, but not being paid at all is a black hole from which there may be no easy escape, and those with cauterized wounds are a lower priority than those openly bleeding out.
Equally, I could see someone else arguing it isn’t about the length of time owed or who’s being paid, but about the total $$ value owed. A magnitude issue. Again, I could see that as a viable analysis, but for the same reason as the previous one, I’d likely defer to my list as primary.
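For the programmers in the crowd, the triage scheme above reduces to a simple sort: category from my list as the primary key, with length of time and dollar value as tiebreakers within each category. Here is a minimal sketch; the case records and field names are purely hypothetical, invented for illustration, and have nothing to do with any real pay system.

```python
from dataclasses import dataclass

# Hypothetical case record; field names are illustrative only.
@dataclass
class PayCase:
    employee: str
    category: int        # 1 = not paid at all ... 8 = overpaid (my list above)
    months_open: int     # how long the problem has been outstanding
    amount_owed: float   # net dollars owed to the employee

def triage_order(cases):
    """Primary sort: category from the priority list (lowest number first).
    Tiebreakers: longest-outstanding first, then largest amount owed."""
    return sorted(cases, key=lambda c: (c.category, -c.months_open, -c.amount_owed))

cases = [
    PayCase("A", category=6, months_open=10, amount_owed=4000),
    PayCase("B", category=1, months_open=3, amount_owed=9000),
    PayCase("C", category=1, months_open=8, amount_owed=2000),
]
print([c.employee for c in triage_order(cases)])  # ['C', 'B', 'A']
```

Note that employee C comes first despite being owed the least: unpaid entirely, and waiting longest. That is the whole argument in three lines of sort key.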
But what do we ACTUALLY do as a group of public servants? We lump all the irregularities together and talk about 500K cases. That’s the best way to kill empathy because there aren’t even 500K employees. And complaining publicly that person X is being overpaid? That makes us look like we’re just whiny.
The way forward
Do I think all of these people need to be helped? Of course I do. And that has nothing to do with empathy, because there is a simpler rationale than personal values. PAYING PEOPLE IS BASIC GOVERNANCE. Everybody should be able to change their mailing address, use whichever first name they want in email and correspondence, update their banking info, get hired, get paid, take leave, get promotions, retire, etc. with a minimum of administrative angst.
On what f***ing planet is it acceptable in any business to have people work for you and NOT PAY THEM? None. (Okay, I’ll take the anger down a notch). But it’s still administratively ridiculous. And as a civil servant, freaking embarrassing. It’s the worst confirmation for people who thought civil servants were lazy, incompetent and overpaid.
But I digress…
What do we actually need to do? We need to decide WHICH CATEGORY OF PATIENT is the most in need of help. We need to say, at least in my view, that those not getting paid are the first priority. And honestly, despite the fact that I am generally against government workers striking for anything other than workplace safety issues (again reflecting a lot of the sentiment already against us, plus the reality check of our life against the average Canadian’s), I think people not being paid fundamentally violates our collective agreements with our employer.
Which leaves me aghast at the lack of organized union activity around the issue, including a blue-flu equivalent — maybe beige flu? — on the first Monday of every month, work-to-rule campaigns, organized demonstrations, or perhaps demanding that all processing stop on every other type of claim until those who are NOT being paid at all are entirely corrected. If Miramichi is not able to do the triaging to fix the problem, why haven’t the unions forced their way into doing it for them?
We also have to stop putting forward the wrong poster children if we want public opinion on our side. This is where I get into looking like an inconsiderate d-bag.
But there are some people who have been stepping forward to say, “I’m losing my house because of Phoenix.” Which is a great soundbite. Except it wasn’t true.
The people portrayed were not losing their house. They were maybe stressed about it, but guess what? They had dual incomes, the other one was just fine, and yes, they were playing with bill payments and timing and stuff, but they were not ACTUALLY going to be foreclosed upon, it was just rhetoric and puffs of smoke (foreclosure, by the way, is extremely rare in Canada as most banks don’t want your house and they can’t profit from it, unlike some US jurisdictions — they’d rather send you nasty letters for lots of months and refinance things out the wazoo than take your house).

A few others who have sought the limelight failed to mention that their problem with Phoenix didn’t even result in a pay disruption — they just made a change that hasn’t been reflected yet — while others did experience pay problems, but the total was less than $1000 net. One who made the national press failed to mention that the main reason they were facing financial problems was that they were running TWO PRIVATE BUSINESSES THAT WENT BANKRUPT (a restaurant and something else to do with pets as I recall). It wasn’t even 100% clear that the person actually HAD a Phoenix problem. But the union and then the press trotted them out with no vetting of their claims.
I’ve seen this before. Back in the early 2000s, or was it the late 1990s?, there was an article in the Citizen claiming you could make more money as a roofer than as a foreign service officer. The union thought it was awesome, the senior management knew it was crap. The person they profiled wasn’t really a roofer — he ran a roofing company. Along with four other businesses that he started. He was an entrepreneur at heart, not a bureaucrat. Not exactly the “real story”. Equally, they profiled someone who left to go to academia. Again, not exactly true…he left to be with a woman he’d fallen in love with overseas. When that didn’t work out in the end, alas the path of true love, he went back to do his Ph.D. A third was equally skewed. In government, we see this ALL of the time. And we know the REAL story, and we know that bad reporting and skewing may make newbies think it’s all about the narrative, but that’s not how the real government works, and it isn’t how pay systems get fixed.
That seems almost like irresponsible representation in my view because the union propped them up to speak against their employer — oh, sorry, OUR employer — and then let them misrepresent the facts of our relationship with the employer. Isn’t there a name for that — bad faith? And OUR reps did it on OUR behalf.
Plus, as I mentioned in my previous post, some people put in claims for incremental benefits and asked for things that had nothing to do with Phoenix. Like trying to claim their mortgage payments, not the extra interest charged. Really? How does THAT help our cause? It really doesn’t. It just pisses off politicians and senior managers and makes them think, “Hmm, maybe all those other claims aren’t as truthful as they claim they are.” Our credibility is destroyed each time one of us does something that stupid and then acts as if they represent all of us. Grrr….
If we want public sympathy or even attention from the government powers that be, if we want to keep our credibility, it is no different than what we expect from the people who lobby us.
Show the real victims (we have them, they’re real, we don’t have to make them up!), not victims-with-spin.
Pick the highest priority group.
And insist they get treated first.
Because 95% of Phoenix victims who are still getting paid, but it isn’t the right amount or there’s some other glitch, are drowning out the voices of the unpaid 5% who are truly in need of our collective support.
And as crappy as the lives are in the 95% group as a result of Phoenix, I find it hard to express sympathy for them as an overall group when they aren’t putting the other 5% first.
Once their wounds are treated, we can move to the next group in the triage. Until then, I guess I’ll remain cold and heartless.