Aug 19, 2021
Updated: Sep 3, 2021
Where does it all go wrong?
In 2016, a major US bank became embroiled in a scandal after it was revealed that, over the course of several years, bank employees had opened millions of checking, savings, and credit card accounts on behalf of customers without their consent. This outcome, driven by unsustainable goals set by management, eventually led to congressional intervention and historic penalties. On paper, the fallout approached $3 billion in penalties and refunds; the reputational damage was immense and immeasurable.
In the years since, the account fraud scandal has become a case study in management, ethics, and business metrics. For years, everyone involved was convinced they were doing the right thing, doing their job, and doing what was best for the business. Even as the fraud began to unravel, bank management blamed rogue workers for what was, in retrospect, an obviously systemic problem. Where did it all go wrong?
What's usually overlooked in this discussion is the impact on people. Managers showed contempt not only for customers, but also for the workers who were subjected to unreasonable and often impossible goals. Managers created a toxic work environment that led many workers to suffer severe anxiety and panic attacks, even as internal complaints fell on deaf ears. We should ask ourselves: what place does a business have if it not only fails to serve its customers, but also egregiously destroys its own employees?
Well-intentioned metrics are often developed arbitrarily or adopted from so-called best practices. In other words, we make educated guesses, based on either experience or hubris, about what organizational goals should be. Metrics derived in this manner are baseless and inform no specific methodology. Suppose we want to improve a particular outcome. With no understanding of process capability (more on this in a moment), how will these goals be achieved? A salesperson seeking to reach a sales target (perhaps as a condition of continued employment), but unable to directly affect other parts of the system (such as the quality of the finished product), resorts to high-pressure sales tactics or selling an ill-fitting solution. Even if the adverse consequences are not immediately obvious, providing the customer with the wrong solution is no different from selling a defective one. If reputation and good name were tangible, this would be like chipping off a chunk and selling it. The salesperson may achieve their individual goals and by all accounts do a great job, but in doing so contributes an incalculable loss to everyone involved. What metric does your organization have to quantify damage to its reputation? What metric measures employee wellness?
People care about doing a good job. Given a clear goal, most will strive to achieve it. Because of this, the pursuit of reckless or poorly informed goals, metrics, KPIs, or SLAs degrades worker morale, service, and quality. You could consider profit made this way a loan against not only the equity and reputation of the business, but also the well-being of its customers and employees. At some undetermined point in the future, the bill will come due. Although the account fraud scandal is an extreme example, uninformed goal setting, with its hidden losses, is fairly typical across broad sectors of business. Organizations that engage in this practice, knowingly or unknowingly, are no longer serving their ostensible purpose. Instead, they are extracting joy and dignity from their own employees and converting them into dollars.
Symptoms of poorly informed metrics
The following are just a few examples of behaviors that emerge and erode quality.
Feedback loops:
Cost-cutting measures: buying substandard materials leads to a drop in the quality of the finished product. Market share drops. The result? The company must cut costs further in an attempt to stay competitive, causing further losses, and the cycle continues.
Process drift:
Setting goals based on previous outcomes. A machine produces widgets that fail to meet specifications. To compensate, after each widget is produced, the operator adjusts the machine to account for the error observed in the previous result. This greatly increases the variation in the finished product.
Parity seeking:
Feast-and-famine cycles caused by knee-jerk responses to the previous outcome. Overcorrection produces wild oscillations as we react, uninformed, to inconsistent demand signals. We order extra inventory after a momentary increase in demand and cut inventory when demand wanes: too much inventory one month, too little the next.
Process walking:
Using an individual outcome as the basis for setting subsequent goals. Making copies of copies. This includes rote application of so-called best practices with no understanding of the underlying theory. One example: senior employees train junior employees in succession, and each iteration diminishes the outcome. Another is the normalization of deviance, in which an outcome that falls outside specification becomes the new de facto standard. At some point that, too, is exceeded, setting yet another de facto standard.
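To make process walking concrete, here is a minimal sketch in Python (with assumed, hypothetical numbers and a single measurement per generation) of what happens when each outcome becomes the standard for the next: the process takes a random walk away from the original specification.

```python
import random

random.seed(1)
NOISE = 1.0       # routine copying error in each hand-off (assumed value)
spec = 100.0      # the original specification
standard = spec   # what the next generation is trained against

for generation in range(50):
    outcome = standard + random.gauss(0, NOISE)  # an imperfect copy of the standard
    standard = outcome                           # the copy becomes the new de facto standard

print(f"After 50 generations, the de facto standard is {standard:.1f} "
      f"-- a drift of {standard - spec:+.1f} from the original spec of {spec:.0f}.")
```

Because each generation copies the previous copy rather than the original specification, the expected drift grows without bound. That is the "copies of copies" effect.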
In each of these examples, management fails to see the causal relationship between the established goals and the eventual damage caused. Worse, the blame falls on what management perceives as rogue, under-trained, or incompetent workers or teams: people who can do little to improve the systems or processes they work with and are powerless to do anything but attempt to achieve the goals set by management. It's literally their job to do so. This leads to adversarial relationships and demoralized team members.
By definition, a process is a sequence of actions that transforms the object being acted upon. Any observable process has a performance capability that can be defined. With enough observations, it's possible to predict how any stable process will perform in perpetuity.
Take an average 20-minute morning commute as an example. Some days traffic is light, or a string of green lights trims your commute to a mere 15 minutes. Other days, heavy traffic stretches it to 25. On the vast majority of days, your commute takes between 15 and 25 minutes. This observed difference is systemic variation, or put another way, noise that comes from within and is a characteristic of the system. Identifying it is the first step in defining the capability of the system (this example is simplified for illustration). As an individual driver, there is little you can do to affect these outcomes, even though you contribute noise as part of the larger traffic system. You can't control the timing of traffic lights, traffic volume, weather, or any number of other factors. But with relatively few observations you can establish, with high confidence, when to leave your doorstep with time to spare for a coffee and a good parking spot.
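As a minimal sketch of how that capability might be quantified, here is an individuals-chart (XmR) calculation over ten hypothetical commute times. The constant 2.66 is the standard XmR scaling factor that converts the average moving range into natural process limits; the data are assumed for illustration.

```python
# Hypothetical commute times in minutes -- stand-ins for real observations.
commute_minutes = [19, 21, 18, 20, 22, 19, 21, 20, 18, 22]

mean = sum(commute_minutes) / len(commute_minutes)

# Moving range: the absolute difference between consecutive observations.
moving_ranges = [abs(b - a) for a, b in zip(commute_minutes, commute_minutes[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# Natural process limits: the voice of the process, not a goal we impose.
lower = mean - 2.66 * avg_mr
upper = mean + 2.66 * avg_mr

print(f"As long as the system is unchanged, expect commutes between "
      f"{lower:.0f} and {upper:.0f} minutes, centered on {mean:.0f}.")
```

Everything inside those limits is just the system being itself.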
Now suppose you oversleep or get a flat tire. This type of variation, interference, is imposed by forces originating outside the system. Even these events, however, are not immune to prediction. Obviously you won't leave the house two hours early every day on the off chance of a flat tire, but by defining process capability we develop insights that enable informed risk-taking.
Continuing this example, suppose a particularly bad traffic jam makes you 10 minutes late to work. By the prevailing logic, the solution is obvious: the next day you leave 15 minutes earlier… only to find that you're now half an hour early. If your goal was to not be late to work, you've achieved it with certainty, but the process has incurred a tremendous amount of waste. If your stated goal was to arrive on time, you did not achieve it; as any logistician will tell you, being early is not the same as being on time. The more we act and react on individual data points, the less stable and less predictable the process becomes, in defiance of all our efforts. Occasionally, an improvement will be observed. But be warned: even apparent improvements are subject to, and caused by, natural variation within the system. Taken alone, these observations frequently produce confirmation bias: a random action taken against an undesired outcome yields an apparent improvement, which reinforces the intervention (bias to action).
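The cost of reacting point by point can be simulated. The sketch below is a loose rendering of Deming's funnel experiment (rule 2), with assumed noise values: it compares leaving a stable process alone against adjusting it after every outcome, and the constant adjustment roughly doubles the variance.

```python
import random

random.seed(42)
NOISE = 2.0          # systemic variation we cannot remove by reacting (assumed)
TRIALS = 10_000

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Hands off: accept that outcomes vary around the target (here, 0).
hands_off = [random.gauss(0, NOISE) for _ in range(TRIALS)]

# Tampering: after each outcome, shift the setting to cancel the error
# just observed -- the commuter leaving 15 minutes earlier, every day.
setting, tampered = 0.0, []
for _ in range(TRIALS):
    outcome = setting + random.gauss(0, NOISE)
    tampered.append(outcome)
    setting -= outcome  # react to the individual result

print(f"hands-off variance: {variance(hands_off):.2f}")  # about NOISE**2 = 4
print(f"tampered variance:  {variance(tampered):.2f}")   # about twice that
```

Every reaction feeds the previous noise back into the process, so the "corrected" process is measurably less predictable than the one left alone.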
Processes have no regard for our opinions. They produce what they are designed to produce, including unintended and undesired results. Arbitrarily wishing for a different outcome won't make it so. When process capability isn't deliberately measured, we have no meaningful basis for determining whether outcomes are caused by noise (systemic) or interference (non-systemic). We can only guess what the goals should be, and the result is haphazard, wasteful problem solving. With near certainty, this leads to frustration, blame, and failure. The correct solution will occasionally be discovered by accident. But, as in the commuter example, the vast majority of the time further damage is done, as even well-intended interventions (interference) impose waste and make the system or process less reliable.
Because of this lack of clarity, we tend to confuse interference with noise. Systemic problems are particularly confounding because the cause and effect relationship is usually obscured by time and space. This makes it easy to blame an outcome on whatever actions immediately preceded it, and on the people who performed them. Most undesired outcomes are falsely attributed to interference (as if they originated outside the system), and because of this, most improvement efforts focus on local optimization: find the problem or person at fault and remove, repair, or improve it. But this, too, is interference. Renowned quality pioneer W. Edwards Deming called it tampering.
Surely, the logic goes, if we act to ensure every part of a system is individually optimized, the result is a fully optimized system. It sounds reasonable, but the assumption couldn't be more wrong. Interference caused by local optimization, pursued without regard for the performance of the whole system, usually just pushes problems into other areas. Reducing a problem in one area magnifies it down the line, evident as costs rise and quality falls. A grease fire in the kitchen, moved to the living room, becomes an objectively worse problem, even though the kitchen's problem is solved (from the kitchen's perspective). At this point, the waste of intervention should be clear: putting out fires, even when it must be done, is waste. The effort that goes into inspection and intervention consumes resources, creates nothing of value, and improves nothing. This tyranny of management exposes a contempt for people, and in practice might resemble one or more of the following:
Badgering people to do better, exert more effort, pay more attention, or get more training.
Repeatedly replacing failed components.
Measuring whether things are done, but not whether they are done well, or should be done at all.
Measuring defects as errors committed, but not errors omitted (when something should be done but isn't).
Measuring whether we are doing things right (adherence to specification) rather than whether we are doing the right thing (alignment with core vision).
Cutting corners to meet production quotas.
Over-dependence on quality control: additive layers of inspection instead of designing quality into the system (contrary to popular belief, inspection never creates quality).
There is a better way
We've all heard management say something to the effect of, "We need to stop being reactive and start being proactive." We may even have said it ourselves on occasion, perhaps to applause and fanfare for this spectacular nugget of leadership. It is no different from telling a drowning person to stop being wet. Some may be offended by that comparison, but the stakes are far too high to abdicate this responsibility. Although this statement and others like it offer no methodology, they at least acknowledge the problem and suggest a desire to find a better way of doing things.
First and foremost, stop blaming your problems on people and teams. There is a saying in quality management: "There is no such thing as human error." That's not to suggest people don't make mistakes, but blaming people for process outcomes squanders opportunities for meaningful and innovative improvement. Blaming people for the results of poorly designed or poorly understood systems is an admission that management knows neither what the problem actually is nor how to solve it.
The second step is simple: stop reacting! This may be difficult, as organizations tend to be biased toward action; when faced with a problem, doing something, anything, feels better than doing nothing. Instead of reacting to and attempting to manage individual outcomes, measure and manage the performance of the system. Determine the correct action to take, then execute it in earnest.
Managers and process owners must be accountable for their products. Each process and system must have a defined purpose and a capability informed by the parameters that are critical to quality. In other words: why are we doing this, by what method is it done, and how do we know it's working as intended? Further, we must understand how the process performs over time. How do we know whether the process is consistent, improving, or worsening? Tools like Walter Shewhart's control charts, which measure the capability and variation (noise) of a process, give us a critical instrument for informing continuous improvement initiatives aimed at increasing quality and reducing variation. This is the essence of quality management, and the practical difference between being proactive and being reactive.
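As a minimal sketch of that Shewhart logic, the code below (with assumed, hypothetical weekly defect counts) computes individuals-chart limits and flags only the points that fall outside them as candidates for special-cause investigation.

```python
# Hypothetical weekly defect counts for some process under observation.
weekly_defects = [4, 6, 5, 7, 5, 4, 6, 15, 5, 6, 4, 7]

mean = sum(weekly_defects) / len(weekly_defects)
mrs = [abs(b - a) for a, b in zip(weekly_defects, weekly_defects[1:])]
avg_mr = sum(mrs) / len(mrs)

upper = mean + 2.66 * avg_mr
lower = max(0.0, mean - 2.66 * avg_mr)  # counts cannot go below zero

for week, count in enumerate(weekly_defects, start=1):
    if count > upper or count < lower:
        print(f"week {week}: {count} defects is outside the limits -- "
              "investigate for a special cause (interference)")
# Every point inside the limits is routine noise. Reacting to those points
# individually is tampering; only changing the system improves them.
```

The chart draws the line between noise and interference for us, so we react to signals rather than to every wiggle in the data.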
As understanding of process capability develops, it will reveal wide disconnects between stated goals and actual process outputs. Some may ask: are we lowering the bar if process outputs fail to meet expectations? Not necessarily. If a metric is critical to the quality of the finished product, raising or lowering the goal achieves nothing meaningful. However, if the goal is ill-informed, capricious, or arbitrary, revising the process and realigning the goals deserve serious consideration. Remember, we are discovering whether the process or system is fundamentally designed to achieve the stated goals. If it is not, management now has an informed basis for driving improvement, and the informed goals we set should measure those efforts. Further, this encourages the removal of adversarial barriers between management and workers. Nobody knows the job better than the people doing it. If management is committed to continuous improvement, open and earnest engagement with the people doing the work will facilitate and foster innovation to the enrichment and benefit of everyone involved.
From the Author:
Thank you for taking the time to read this article. If you find this content informative, useful or insightful, please consider taking a moment to subscribe and share on social media using the links below.
I value your insight and feedback! If you have a comment, criticism, or particularly challenging problem, I'd be delighted to discuss it! Please reach me at: