Hat tip to Henrik Martin at the IC Knowledge Center for posting on an interesting paper by Herman Miller (the manufacturer of office furniture) on “Quantifying and Fostering Organizational Effectiveness.”
The paper has many interesting points, some of which I agree with and others I strongly disagree with. This just goes to show how much work needs to be done in understanding knowledge workers, something Peter Drucker wrote about for decades. We just don’t know how to measure their productivity, and the statistics we are using, which were designed for the Industrial Era, are obsolete. I would add that they are probably doing more harm than good inside Professional Knowledge Firms (PKFs).
Knowledge work is hard to define, since it’s largely based on tacit knowledge. It’s even harder to quantify, so I was skeptical of the paper’s title right off the bat: it confuses effectiveness with efficiency. Effectiveness requires a judgment (usually by another knowledge worker), whereas efficiency is a measurement.
Nevertheless, the paper is worth reading for those of you interested in this topic.
The paper begins by defining organizational effectiveness:
While it isn’t limited to productivity, organizational effectiveness certainly encompasses it. Defined narrowly as the amount of physical output for each unit of productive input, productivity has been a human concern for centuries. The Chinese philosopher Mencius (372-289 BC) wrote about conceptual models and systems that would qualify today as production-management techniques. Plato (427-347 BC) spoke of the division of labor in The Republic: “A man whose work is confined to such a limited task must necessarily excel at it.”
Early thinking about productivity remains relevant to production today. Measuring it, however, has become more complex. The U.S. Bureau of Labor Statistics (BLS) measures what it calls “multifactor productivity,” in which “output is related to combined inputs of labor, capital and intermediate purchases. Labor is measured by the number of hours of labor expended in the production of output. Capital includes equipment, structures, land, and inventories. Intermediate purchases are composed of materials, fuels, electricity, and purchased services.”
The problem with measurement comes in defining output for non-manufacturing, service activities commonly thought of as “white collar” or “knowledge work” that the BLS labels “hard-to-measure.”
“Hard to measure” is an understatement. Notice how the inputs are much easier to quantify than the outputs. Further, the outputs are usually measured based on price, not value, and therefore “productivity” as measured by the government (and by firms themselves) is not only understated, it’s also meaningless.
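To see why the inputs are the easy half of the ratio, the BLS-style “multifactor productivity” definition quoted above can be sketched as a toy calculation. All numbers, weights, and the function name here are hypothetical, purely for illustration; the point is that the denominator is straightforward to fill in, while the numerator assumes a countable unit of output that knowledge work doesn’t have.

```python
def multifactor_productivity(output_units, labor_hours, capital_index,
                             intermediates, weights=(0.6, 0.25, 0.15)):
    """Output per unit of combined input.

    Labor, capital, and intermediate purchases are combined into a
    single input index using (hypothetical) cost-share weights,
    mirroring the BLS description of multifactor productivity.
    """
    w_labor, w_capital, w_inter = weights
    combined_input = (w_labor * labor_hours
                      + w_capital * capital_index
                      + w_inter * intermediates)
    return output_units / combined_input

# A factory's "widgets per combined input unit" is easy to state...
print(multifactor_productivity(10_000, 2_000, 500, 300))

# ...but for knowledge work there is no agreed-upon unit of output
# to put in the numerator, which is the "hard-to-measure" problem.
```

The arithmetic is trivial; the hard part, as the paper concedes, is that nothing defensible exists to plug in as `output_units` for a lawyer, a researcher, or a surgeon.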
Economists have a saying that if you’re driving with a broken speedometer you probably shouldn’t get it fixed because at least you’re used to it. I feel the same way about how the government measures productivity for knowledge workers. It’s sort of meaningless, but better than nothing for statistical, macroeconomic comparisons.
But at the microeconomic level of running a PKF, it doesn’t shed much light on knowledge worker effectiveness. Consider Einstein. Sure, we can tally up the amount of inputs he used, along with their costs, but how do you measure the output? How do you price it? How do you value it? This is the age-old problem: it’s easier to count the bottles than describe the wine. Efficiency is the count; the description is a judgment. I’ve posted on this topic before.
Individual vs. Team Productivity
The paper quotes Michael O’Neill who leads a program of research into organizational effectiveness for Herman Miller:
In traditional productivity measures the unit of analysis is the individual. In terms of knowledge work, that may be irrelevant because increases in individual productivity, no matter how they’re measured, do not automatically transfer to the productivity of the organization. Instead, the team, or functional group, becomes the unit of analysis. This requires managers to make a conceptual shift in their thinking, to understand that it’s more relevant to measure activities that contribute to overall business goals and strategies, such as the speed of organizing around new opportunities and the quality of business processes.
This is a good point, since one knowledge worker’s output greatly affects another’s down the chain. Here’s an example. I was recently hospitalized, and my surgeon ordered a CT scan so he could operate. The technician didn’t read the surgeon’s order properly and did the scan incorrectly.
However, if all you were measuring were inputs and outputs with some efficiency metric, the CT scan would have rated 100%. After all, the technician got me in and out in record time, delivered the image to the doctor on schedule, and so on. It wasn’t until the surgeon looked at the results, requiring a judgment, that he discovered the scan had been done improperly and had to be redone. This cost me an extra day or two in the hospital, which hardly added to my satisfaction with the hospital’s service.
In other words, the scan was efficient, but not at all effective. And you can’t measure effectiveness. It has to be judged. Obviously if one knowledge worker screws up on a team, it’s going to have ripple effects throughout the entire project.
This is why the paper points out that “no single measure is likely to capture the outcomes,” which is why PKFs can’t rely on measures alone. As the paper points out:
If the organization’s goal is to guide, motivate, or even control knowledge workers’ behavior, it must supplement measurement with strong cultures and value systems. Measurements may be attractive for their apparent preciseness, but they must be tempered with the observations, experience, and common sense of managers.
The old canard that what you can measure you can manage is nonsense on stilts, since we can’t change our weight simply by weighing ourselves more accurately or frequently. Here’s how the paper addresses this topic:
An old business adage says that whatever cannot be measured cannot be managed. Yet, as Susan Cantrell, research fellow at the Accenture Institute for Strategic Change, points out, knowledge workers resist being measured, “both because they have no history of being measured and because they believe it might take the ‘magic’ out of their work. Most high-end knowledge workers…tend to work on unique, one-off, highly specialized problems, making it impossible to have one measure for all such knowledge workers. Moreover, many knowledge workers…work interdependently, making it difficult to isolate one knowledge worker’s contribution from another’s. And, because the work performed is generally unobservable, a knowledge worker could be working for months, or sometimes even years, before an output is tangibly realized.”
I have an alternative explanation for why knowledge workers resist being measured. It has nothing to do with taking the “magic” out of their work. It’s because the measures are wrong. They cannot possibly capture the value created. Why continue to measure things that don’t matter?
The paper also claims that improving knowledge worker productivity is the only true way to improve a company’s competitive position. Surely this is an overstatement. What about innovation? What about a differentiated value proposition for customers? The most innovative companies, like Google or Apple, are certainly not the most productive as traditionally measured. So what? They are incredibly effective in creating value for their customers.
It just goes to show how difficult it is to change theories that are hardwired in us. We are going to have to get over our obsession with measuring everything and begin to realize that in PKFs a judgment is far more valuable, effective, and right than a mere measurement.
Any measurement can be manipulated, so in the end we ultimately rely on judgments. It’s time to face the cold reality: knowledge work cannot be quantified with the metrics we are now using. Our speedometer is broken. And while that might be all right for government cars (statistics), I would suggest we get ours fixed.
This is why we advocate Key Predictive Indicators, most of which are judgments rather than mere measurements.