Book Review: How to Measure Anything
I picked up this book as a complete skeptic, since I have written extensively on measuring what matters, including the seven moral hazards of measurement, and have observed that a lot of things that are measured are not worth the effort and cost. I became an even greater skeptic after reading the Preface, where Hubbard states:
I wrote this book to correct a costly myth that permeates many organizations today: that certain things can't be measured.
Then in Chapter 1 he cites the hoary Lord Kelvin statement that when you can measure something and express it in numbers you know something about it, but if you can't put it in numbers, your knowledge of it is of a meager and unsatisfactory kind. Yet this statement itself cannot be expressed in numbers, and that doesn't make it meager. At the end of the chapter, Hubbard challenges us to write down the things we believe are immeasurable, or at least are unsure how to measure. Here's my list:
- Music
- Poetry
- Art
- Religion
- Humans
- Black Swans
- Entrepreneurial spirit
- Productivity of knowledge workers
Later, in Chapter 3, Hubbard lays out three often-stated arguments against measurement:
- It's too expensive
- You can prove anything with statistics
- The ethical objection, like putting a value on a human life
But this is a limited list; there are other legitimate objections to measurement. To give Hubbard his due, though, he answers some of those other objections without knowing he's doing so.

Hubbard's main thesis is that measurement is "a set of observations that reduce uncertainty where the result is expressed as a quantity." This is key: he's saying a mere reduction in uncertainty, not its elimination, is sufficient for a measurement. A measure also does not have to be precise; it can have a range of error.

He uses the terms risk and uncertainty unlike how the economist Frank Knight used them. Knight argued that while risk could be measured, uncertainty could not. I find Knight's distinction far more useful than Hubbard's, especially after reading The Black Swan, but you can get used to Hubbard's terms, and the difference doesn't affect his argument. For instance, he mentions measuring the risk of rare events, such as September 11, 2001. But that was a Black Swan no one foresaw, an uncertainty, so a measure was never on the horizon. I don't think Hubbard gives enough credence to Black Swans and just how significantly they change the course of business or history. The Mirage hotel in Las Vegas is expert at measuring risk, thereby reducing it with security measures, internal controls, and the like. But it couldn't measure, or affect, the uncertainty of a tiger mauling one of its star performers, causing an enormous financial loss.

A measurement not guided by a theory is just a statistical orphan, like reading the phone book. A measurement guided by a theory can be useful. The important point is that the theory is the senior partner, since our theories determine what we can observe. Hubbard never says this explicitly, though he quotes someone who does later in the book. He does say it implicitly, however, with the six questions he says need to be asked about any measurement (a small value-of-information sketch follows the list):
- What are you trying to measure?
- Why do you care?
- How much do you know now?
- What is the value of the information?
- Within a cost justified by that value, which observations would confirm or eliminate different possibilities?
- How do you conduct the measurement that accounts for various types of avoidable errors?
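The fourth question is where the book earns its keep: the expected value of information puts a ceiling on what any measurement is worth, and therefore on what it is worth spending to obtain it. As a minimal sketch of that idea, here is a Monte Carlo estimate of the expected value of perfect information for a hypothetical go/no-go decision. The cost, the 90% confidence interval, and the normal approximation are all my own invented assumptions, not figures from the book.

```python
import random

# A minimal sketch of Hubbard-style "value of information" reasoning.
# All numbers are hypothetical, invented for illustration; they are not
# taken from the book.

random.seed(42)

COST = 400_000          # hypothetical up-front cost of the project
N = 100_000             # Monte Carlo trials

def simulate_payoff():
    # Calibrated 90% confidence interval for the payoff: 200k to 900k
    # (an assumed interval). A normal approximation is used for simplicity;
    # a 90% interval spans about 3.29 standard deviations.
    lower, upper = 200_000, 900_000
    mean = (lower + upper) / 2
    sd = (upper - lower) / 3.29
    return random.gauss(mean, sd)

payoffs = [simulate_payoff() for _ in range(N)]

# Decision under current uncertainty: invest if the expected net value is positive.
expected_net = sum(payoffs) / N - COST
invest_now = expected_net > 0

# Expected Opportunity Loss (EOL) of that decision: the average amount lost
# in the scenarios where it turns out to be the wrong call.
if invest_now:
    eol = sum(max(COST - p, 0) for p in payoffs) / N   # loss when payoff < cost
else:
    eol = sum(max(p - COST, 0) for p in payoffs) / N   # forgone gain when payoff > cost

# The EOL of the best current decision is the Expected Value of Perfect
# Information: the most that measuring the payoff could possibly be worth.
print(f"Expected net value of investing now: {expected_net:,.0f}")
print(f"EVPI (ceiling on what the measurement is worth): {eol:,.0f}")
```

If the resulting value is small, spending more than that on the measurement fails Hubbard's own test; if it is large, the supposedly immeasurable variable is exactly the one worth measuring.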
These six questions are just another way of stating the scientific method. Even better are Milton Friedman's two devastating seminar questions:
- How do you know?
- So what?
In other words, observation and prediction: the scientific method. Later on, Hubbard posits the Measurement Inversion:
In a business case, the economic value of measuring a variable is usually inversely proportional to how much measurement attention it usually gets.
He cites the example of a time tracking system, whereby the actual time to complete an IT project was not materially different from the initial estimate, yet it was the largest measurement in the organization and "it literally added no value since it didn't reduce uncertainty after all." Amen. It is certainly easier to measure inputs rather than outputs, which is the main problem with time tracking. We can count the cost of the bottles, but it's much harder to describe the wine, let alone measure it.

Hubbard also uses an "epiphany equation" to determine the value of information: "you almost always have to look at something other than what you have been looking at" for a truly meaningful measure. Yes, you must posit and test a theory. This requires a deeper understanding of what it is we are trying to measure. It differs from counting, since a theory allows us to peer into the future and predict. It allows us to measure what really matters. Since there are so many things a business can measure, and there's no such thing as a free statistic, we should focus on leading indicators rather than lagging indicators, that is, those measures that help us peer into the future.

Hubbard makes this argument, too, when he discusses how a measure can be a range of probabilities rather than an exact number. This is something I wish accountants understood better. We present audited financial statements as if they were precise rather than explaining a tolerable error range. But we actually think auditors should be forced to disclose their materiality level, as well as their measurement error ranges. Just because we don't have a precise number doesn't mean we have no knowledge. A measurement does not have to be a precise count.

Hubbard provides a very useful explanation of measurement error: systemic and random, the former being consistently produced errors and the latter randomly produced ones. These errors are related to the concepts of precision and accuracy. He observes that most businesses choose precision with unknown systemic error over a highly imprecise measurement with random error, and he cites timesheets as a classic example (see the simulation sketch below). This is a more scientific way of saying you should prefer to be approximately correct rather than precisely wrong. It may also be a way of saying that a judgment is more important than a measurement, though I'm not sure Hubbard would agree unless the judgment were quantifiable.

Which brings us back to my productivity-of-knowledge-workers conundrum, which I don't think can be meaningfully measured without judgment. Sure, I can quantify the quantity of work produced, get a numerical ranking of feedback from customers and co-workers, and track billable hours, realization rates, and a host of other metrics. But what those metrics tell me is not very useful without judgment. And unless I'm measuring the things that truly matter, I won't even meet Hubbard's definition of measurement value. So how do you measure, even imprecisely, the characteristics of a successful knowledge worker?

Hubbard even makes the point that some readers may think he has lowered the bar on what counts as a measurement so much that everything becomes measurable. That may be. In the end, I became less skeptical, since Hubbard and I really don't disagree.
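One place we clearly agree is that error discussion, so here is the promised sketch: a comparison of a precise but systemically biased "timesheet" figure against a noisy but unbiased sampling estimate of the same hours. The numbers are invented for illustration; this is not an example from the book.

```python
import random
import statistics

# A minimal sketch of the systemic-versus-random error distinction, with
# invented numbers. Treat the "true" value as the hours actually spent.

random.seed(1)
TRUE_HOURS = 1000
TRIALS = 10_000

def timesheet_estimate():
    # Precise but systemically biased: everyone rounds up and pads a little,
    # so the figures cluster tightly around the wrong value.
    return random.gauss(TRUE_HOURS * 1.25, 5)

def sampled_estimate():
    # Imprecise but unbiased: occasional random work-sampling observations
    # scatter widely, but around the true value.
    return random.gauss(TRUE_HOURS, 150)

timesheets = [timesheet_estimate() for _ in range(TRIALS)]
samples = [sampled_estimate() for _ in range(TRIALS)]

for name, data in [("timesheet (precise, biased)", timesheets),
                   ("sampling (imprecise, unbiased)", samples)]:
    mean_err = statistics.mean(data) - TRUE_HOURS   # accuracy: systemic error
    spread = statistics.stdev(data)                 # precision: random error
    print(f"{name}: average error {mean_err:+.0f} hours, spread {spread:.0f} hours")
```

The timesheet figures look authoritative, but only the noisy sampling estimate is centered on the truth, which is the sense in which approximately correct beats precisely wrong.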
The book does provide thought-provoking ways to make measurements, and hence you'll find it useful. I don't disagree with the fundamentals of his points, but I still wish he had spent more time explaining that just because we can measure something doesn't mean we can change it. We don't change our weight by measuring ourselves more frequently, or more accurately.

I also don't think Hubbard answered how to measure the items on my list above, but he has provided a variety of approaches I could use to do so. The question remains: Does it matter? Will measuring it help me merely label it, or understand it at a deeper level? Will it help me change it? That depends on my theory, which should dominate everything we measure.

For more information, visit Hubbard's Web site.