Book Review: How to Measure Anything

I picked up this book as a complete skeptic. I have written extensively on measuring what matters, including the seven moral hazards of measurement, and have observed that many things that are measured are not worth the effort and cost.

I became an even greater skeptic after reading the Preface where Hubbard states:

I wrote this book to correct a costly myth that permeates many organizations today: that certain things can’t be measured.

Then in Chapter 1 he cites the hoary Lord Kelvin statement that when you can measure something and express it in numbers you know something about it, but if you can’t put it in numbers, your knowledge of it is of a meager and unsatisfactory kind. Yet Kelvin’s statement itself cannot be expressed in numbers, and that doesn’t make it meager.

At the end of this chapter, Hubbard challenges us to write down those things we believe are immeasurable, or at least are unsure how to measure. Here’s my list:

  • Music
  • Poetry
  • Art
  • Religion
  • Humans
  • Black Swans
  • Entrepreneurial spirit
  • Productivity of knowledge workers

Later in Chapter 3 Hubbard lays out three often stated arguments against measurement:

  1. It’s too expensive
  2. You can prove anything with statistics
  3. The ethical objection, like putting a value on a human life

But this is a limited list; there are other legitimate objections to measurement. To give Hubbard his due, though, he does answer some of these other objections without knowing he’s doing so.

Hubbard’s main thesis is that measurement is “a set of observations that reduce uncertainty where the result is expressed as a quantity.”

This is key, since he’s saying a mere reduction in uncertainty, not elimination, is sufficient for a measurement. A measure also does not have to be precise; it can have a range of error.

He uses the terms risk and uncertainty differently from how the economist Frank Knight used them. Knight argued that while risk could be measured, uncertainty could not. I find Knight’s distinction far more useful than Hubbard’s, especially after reading The Black Swan, but you can get used to Hubbard’s terms, and the difference doesn’t affect his argument.

For instance, he mentions measuring the risk of rare events, such as September 11, 2001. But that was a Black Swan no one foresaw, an uncertainty, so a measure was not on the horizon. I don’t think Hubbard gives enough credence to Black Swans, and to just how significantly they can change the course of business or history. The Mirage hotel in Las Vegas is expert at measuring risk, thereby reducing it with security measures, internal controls, and so on. But it couldn’t measure, or affect, the uncertainty of a tiger mauling one of its star performers, causing an enormous financial loss.

A measurement not guided by a theory is just a statistical orphan, like reading the phone book. But a measurement guided by a theory can be useful. The important point being that the theory is the senior partner, since our theories determine what we can observe. Hubbard never says this explicitly, though he quotes someone who does later in the book. But he does say it, implicitly, with his six questions that need to be asked about measurement:

  1. What are you trying to measure?
  2. Why do you care?
  3. How much do you know now?
  4. What is the value of the information?
  5. Within a cost justified by that value, which observations would confirm or eliminate different possibilities?
  6. How do you conduct the measurement that accounts for various types of avoidable errors?

This is just another way of stating the scientific method. Even better are Milton Friedman’s two devastating seminar questions:

  1. How do you know?
  2. So what?

In other words, observation and prediction—the scientific method.

Later on, Hubbard posits The Measurement Inversion:

In a business case, the economic value of measuring a variable is usually inversely proportional to how much measurement attention it usually gets.

He cites the example of a time-tracking system, whereby the actual time to complete an IT project was not materially different from the initial estimate, yet it was the largest measurement in the organization and “it literally added no value since it didn’t reduce uncertainty after all.”

Amen. It is certainly easier to measure inputs rather than outputs, which is the main problem with time tracking. We can count the cost of the bottles, but it’s much harder to describe the wine, let alone measure it.

Hubbard also uses an “epiphany equation” to determine the value of information: “you almost always have to look at something other than what you have been looking at” for a truly meaningful measure. Yes, you must posit and test a theory. This requires a deeper understanding of what it is we are trying to measure.

This differs from counting, since a theory allows us to peer into the future and predict. It allows us to measure what really matters. Since there are so many things a business can measure, and there’s no such thing as a free statistic, we should focus on leading indicators, rather than lagging indicators—that is, those measures that help us peer into the future.

Hubbard makes this argument, too, when he discusses how a measure can be a range of probabilities rather than an exact number. This is something I wish accountants understood better. We present audited financial statements as if they were precise rather than explaining a tolerable error range.

But we actually think auditors should be forced to disclose their materiality level, as well as their measurement error ranges. Just because we don’t have a precise number doesn’t mean we have no knowledge. A measurement does not have to be a precise count.
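As an illustration of reporting a measure as a range rather than a point value, here is a minimal sketch in Python. The invoice amounts and the 90% z-multiplier are illustrative assumptions, not figures from the book or from any audit standard:

```python
import math

# Hypothetical audit sample: book values of 10 sampled invoices (dollars).
sample = [1210, 980, 1105, 1340, 890, 1015, 1180, 1260, 940, 1080]

n = len(sample)
mean = sum(sample) / n

# Sample standard deviation and standard error of the mean.
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
se = sd / math.sqrt(n)

# 90% interval using the normal approximation (z = 1.645);
# a t-multiplier would be slightly wider for a sample this small.
z = 1.645
low, high = mean - z * se, mean + z * se
print(f"mean invoice value: {mean:.0f}, 90% interval: ({low:.0f}, {high:.0f})")
```

The point is not the particular interval but the form of the answer: an estimate plus an explicit error range, instead of a single number presented as exact.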

Hubbard provides a very useful explanation of measurement error: systemic and random, the former being consistently produced errors and the latter randomly produced errors. These errors are related to the concepts of precision and accuracy.

He observes that most businesses choose precision with unknown systemic error over a highly imprecise measurement with random error, and cites timesheets as a classic example. This is a more scientific way of saying you should prefer to be approximately correct rather than precisely wrong.

It may also be a way of saying that a judgment is more important than a measurement, though I’m not sure Hubbard would agree unless the judgment was quantifiable.

Which brings us back to my knowledge-worker-productivity conundrum, which I don’t think can be meaningfully measured without judgment. Sure, I can count the quantity of work produced, gather numerical feedback rankings from customers and co-workers, track billable hours, realization rates, and a host of other metrics.

But what those metrics tell me is not very useful without judgment. And unless I’m measuring the things that truly matter, I won’t even meet Hubbard’s definition of measurement value. So how do you measure—even imprecisely—the characteristics of a successful knowledge worker?

Hubbard even makes the point that some readers may think he has lowered the bar on what counts as a measurement so much that this change makes everything measurable. That may be.

In the end, I became less skeptical, since Hubbard and I really don’t disagree. The book does provide thought-provoking ways to make measurements, and hence you’ll find it useful.

I don’t disagree with the fundamentals of his points, but I still wish he spent more time explaining that just because we can measure something doesn’t mean we can change it. We don’t change our weight by measuring ourselves more frequently, or accurately.

I also don’t think Hubbard answered how to measure the items on my list above, but he has provided a variety of approaches I could use to do so.

The question remains: Does it matter? Will measuring it help me merely label it, or understand it at a deeper level? Will it help me change it?

That depends on my theory, which should dominate everything we measure.

For more information, visit Hubbard’s Web site.


  1. Ron,

    Thanks for the review. I have responses to points where you challenge parts of the book.

    First, everything on your list (Art, Humans, etc.) I have heard before, and I can explain how to measure each. Just follow the book. Start by defining what you mean. What *about* humans would you like to measure? Their happiness? The value they put on their own lives? Their confidence? (The last three are actually specific examples in the book.) Likewise, what *about* art are you trying to measure? Quality? Value? Again, I think I point to solutions if you just state what it is you are trying to observe. State how you observe what you are talking about, and you are halfway to measuring it.

    Second, there is a very good reason I don’t mention The Black Swan, and it is also a reason Taleb couldn’t have mentioned me: we would both have had to finish our manuscripts and submit them for final review before the other’s book was published. My book came out in early August 2007, and his came out in April of the same year. Most publishers require at least that much time between the completion of the final manuscript and the publication date. But since I did read both of Taleb’s books, I can’t find anything I would contradict now.

    Even still, regarding the measurement of rare events, again, just follow the method. I don’t mention very rare events explicitly, but the same general approach applies. Actually, some of the events routinely assessed in prediction markets (which I discuss in Chapter 13) are very rare. Some events sell at less than 5 cents even though the period of the claim is decades. In other words, they are measuring a one-in-several-hundred-year event.
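As a rough sketch of that arithmetic (the 5-cent price and the 30-year claim period are illustrative assumptions, not figures from the book), the implied annual probability can be backed out like this:

```python
# Contract pays $1 if the event occurs within the claim period.
price = 0.05   # market price in dollars (illustrative)
years = 30     # length of the claim period ("decades", illustrative)

# Read the price as the probability that the event occurs at least once
# in `years` years, then solve 1 - (1 - p_annual) ** years = price.
p_annual = 1 - (1 - price) ** (1 / years)

print(f"implied annual probability: {p_annual:.4%}")  # about 0.17% per year
print(f"about a 1-in-{1 / p_annual:.0f}-year event")
```

With these assumed numbers the market is pricing roughly a one-in-several-hundred-year event, consistent with the claim above.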

    Third, I never state that measurement requires being done without judgement, so just because something requires judgement does not mean it can’t be measured. In fact, the chapter titled “The Ultimate Measurement Instrument: Human Judges” is specifically about measurements that require human judgement.

    Finally, I don’t use Knight’s definition of risk because almost nobody in the risk industry (insurance, hedge funds, etc.) uses the term that way. Knight states only that for risk to be measured, probabilities must be assignable. He doesn’t mention the other key component of risk: the magnitude of loss. I also reject Knight’s definition of uncertainty for the same reason: people who make their living computing odds for various events don’t use the term that way. Knight states uncertainty is unquantifiable, but decades of literature in information theory and game theory routinely treat uncertainty with quantified probabilities. A quantifiable uncertainty is even closer to the popular use of the term (most people would say a coin flip is uncertain, yet quantify the outcome of heads as 50% likely). With all due respect to Knight, his proposed definition is simply not the de facto definition of business or science.

    But I do appreciate the time you took to read my book. Be sure to catch my next book, “The Failure of Risk Management: Why It’s Broken and How to Fix It,” coming in the spring of 2008. In it, I will blasphemously (again) contradict Frank Knight, and I will explain my focus on risk, including the risks of very rare events (e.g., the very rare events that are routinely assessed in nuclear power and insurance).

    Thanks again,
    Doug Hubbard

  2. One other item, you mention the list of three common objections to measurement is incomplete. It is, even in my book. You only show the second half of the list. The first half are three reasons why something *can’t* be measured. The three you mention are additional reasons often used to argue that something *shouldn’t* be measured.

    Also, regarding how to measure the productivity of a knowledge worker, you might try the examples listed on pages 195 to 200 and the Rasch Model explained on pages 210 to 215.

    Doug Hubbard

  3. In the absence of time tracking the actual investment is not known, so I would argue that the uncertainty is vast, and almost identical to risk since there is little upside and tremendous downside on IT projects in particular. The idea that the estimate was the same as the actual is an outlier as far as IT projects go.

    The value of a time tracking system is to create accountability for everyone in the value chain, from the person delivering the service to the project manager to the owner and on to the client. Without feedback from the time tracking system it is not possible to know whether a budget is being adhered to. In the case of retainer-based services this can lead to dramatic overservicing of the client with little benefit to the service provider.

    Heisenberg applies to management as well. The act of measuring influences what is being measured. With better information at hand, processes can be refined, quality can be improved or at least traded against additional investment, and everyone benefits.

    For one take on integrating time tracking, project management and collaboration for better performance see

  4. Ed Kless says:

    Not having read the book, I am certainly at a disadvantage, but I did want to add some thoughts about this.

    1. From a project management perspective, risk is measured as probability times impact (positive or negative). This yields what is called a risk analysis. Risks are also classified by their level of uncertainty, but this does not refer to the magnitude of the probability or impact, but rather to the knowledge of the probability and/or impact.

    For example, there is what is called an unknown-unknown risk (remember Rumsfeld got into hot water trying to define this). It does not mean we do not know about the risk; rather, it means that we have not yet done enough analysis of the situation to put numbers on the probability or impact. Hence, it is an unknown (probability)-unknown (impact).

    2. I would be curious as to how Douglas Hubbard would address the Heisenberg uncertainty principle of quantum physics which states that locating a particle in a small region of space makes the velocity of the particle uncertain; and conversely, that measuring the velocity of a particle precisely makes the position uncertain.

    3. While I do not believe that all measurement is wrong, I do believe that some measurement is not possible. Please do not insult me by saying that the love I have for my children can be measured in any way. To quote the great Peter Block: “Many of the things that matter most defy measurement. Our obsession with measurement is really an expression of our doubt. It is most urgent when we have lost faith in something. Doubt is fine, but no amount of measurement will assuage it.”

    This could all be a question of semantics, in how I am defining “measure” and how Hubbard is defining “measure.”

    Love the dialogue!

  5. @UglyAmerican,

    I am confused by your comment. You seem to be saying that without tracking time project management is not possible. This is inane.

    With projects (and with most things in life), effort is meaningless. To track a project by number of effort hours is nonsense and, as Ron has pointed out elsewhere on this blog and in his books, is like timing the doneness of your cookies with a smoke detector.

    What is important is duration, not effort! Was a task scheduled to be completed on 7/21 completed on or before 7/21? That is what is important.

    You are clearly using the labor theory of value in your thinking. If you read more on this site you will see that that theory is completely bogus. Project managers who focus purely on time spent do themselves and their constituents a disservice. One of the main weaknesses in traditional project management is that it often (not always) ignores opportunity costs. It should not be about what we did spend our time on, but rather what we could have spent our time on.

    You also do not understand Heisenberg at all. As noted in a previous comment, Heisenberg would not imply that “With better information at hand, processes can be refined, quality can be improved or at least traded against additional investment, and everyone benefits.” In fact, it implies the EXACT opposite.

  6. Response to Ed Kless
    I would recommend reading my book because I think you will find that I do address each of your comments.

    1. Regarding risk, first, you are correct that risk has two dimensions, probability and cost (impact). But if the impact is positive, it is not risk by any use of the term in actuarial science or decision theory. Risk is specifically limited to those situations where the possible impact is negative. (In my next book I’m correcting some of the other practices in the PMBOK about risk.) Second, probability times impact is technically just the “risk neutral” definition. It is possible that a person could value a 1% chance of losing $100,000 the same as losing $1,000 for certain (the risk-neutral position), but most people don’t. The insurance industry exists in part because people are risk averse and would pay a slightly higher amount than the probability times the cost of the insured event.

    2. I do mention the Heisenberg uncertainty principle in my book. But when you ask how I would “address” it, I think you are assuming that I must contradict it in some way, which I do not. The HUP simply says the product of the measurement errors of a particle’s momentum and position cannot be less than half of the reduced Planck constant. I defined measurement in the book as it is defined in the empirical sciences and measurement theory: observation(s) which reduce uncertainty, expressed as a quantity. Until you get to the lowest possible uncertainty about the state of some system, you can reduce uncertainty further. If you get anywhere close to the HUP constraint regarding the momentum and position of a particle, you have already measured it to a high degree of precision. In other words, the HUP does not state that momentum and position are immeasurable; it simply says they are only measurable up to a particular constraint, and I do not deny this. If you have removed all the uncertainty that is possible to remove, you can’t measure it further, but you have already measured it quite a lot.

    3. Regarding your love for your children, you have already measured it, and so have I. Let’s say that you, like me, would say that you would be willing to die to protect your children. Let’s say also that you have made several life choices, as I have, that clearly prioritize your children above your own comforts, and that these choices are clear for others to see. You and I both know that there are parents in this world who clearly, to the unbiased observer, have not made those same choices. They have behaved in such a way that it is clear they do not value their children as much as you and I do ours. As soon as you show that you can determine that X is more than Y, even in just some cases, you have shown X and Y to be measurable. I address the ethical issues of measurement very early, and I argue why it is often unethical to fail to measure and why it is sometimes critical to get beyond the feeling of being “insulted” by a measurement (whether one is insulted by something is really not an argument that it is impossible). Also, Deming said something similar to the Peter Block quote. I argue why they are both wrong. If something is really important to us, it has observable consequences. (E.g., it is inconsistent for a man to say he loves his children more than anything yet abuse and abandon them, as observations will reveal that some do.) Furthermore, if something has observable consequences, then it can be observed in some amount. If it is observable in some amount, then it is measurable. Anyone who claims something is utterly immeasurable is, in effect, saying that it is utterly undetectable in any way, directly or indirectly. The truly immeasurable cannot have any bearing on the observable world, including the observable choices people make. If that is the case, then it is not important at all. In fact, even saying something is important is an attempt to rank it above apparently less important things, which is one of the categories of measurement under the S. S. Stevens framework.

    Regarding semantics, I’m sure we do have very different understandings of what measurement is, just by the points you’ve made. I’ve stated the definition I use as one that would seem very foreign to accountants and project managers but entirely familiar to those in the empirical sciences or statistics.

    Let me also further address the ethics issues of measurement, which I mention in the book. To me, measuring something like love (I show the work of Andrew Oswald on the value of happy marriages) should be no more “insulting” than using the word “love” to describe it. Language and the ability to quantify things are both among the higher human talents (although the latter might be even more unique in the animal kingdom than the former). It is common for people to think that measurement somehow dehumanizes things, even though I think this makes about as much sense as saying that music (another special human talent) dehumanizes them. But these are all just value statements, and there is no right or wrong answer. And none of them has anything to do with the claim of whether measurement is possible. Some people are insulted by the idea of evolution or that the Earth is not just a few thousand years old, but they are wrong nonetheless. To discuss the possibility of measurement, or any other scientific topic, it’s a good idea to leave indignation out of the argument and stick to facts and reason.

    Thanks for the dialog

    Doug Hubbard
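The risk-neutral arithmetic in point 1 of the comment above can be sketched in a few lines of Python; the 30% premium loading is an illustrative assumption, not a figure from the book or the insurance industry:

```python
prob_loss = 0.01   # 1% chance of the loss event
loss = 100_000     # magnitude of the loss in dollars

# The "risk neutral" valuation: probability times impact.
expected_loss = prob_loss * loss

# A risk-averse party will pay more than the expected loss to insure
# against the event; the 30% loading here is purely illustrative.
loading = 0.30
premium = expected_loss * (1 + loading)

print(f"expected loss: ${expected_loss:,.0f}")
print(f"risk-averse premium: ${premium:,.0f}")
```

The gap between the premium and the expected loss is one way to see risk aversion in the numbers: the insured willingly pays more than the probability-weighted cost of the event.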

  7. To UglyAmerican

    By the way, the management equivalent of the Heisenberg uncertainty principle is called the Hawthorne Effect. I compare them in a sidebar in my book called “Heisenberg and Hawthorne.”

    Doug Hubbard

  8. I’m currently reading this book and visited the website to download some of the examples only to find that some of the links were not working.

    When I tried to “Email the Author” my emails were returned with an error “ could not be found”. Is there some reason why the author would not want to receive emails from his own readers?


  9. I get “Contact the Author” emails every day, including today, so I’m not sure why you got that error message.

    In fact, I may have gotten your message – are you the one that emailed about one link redirecting to the wrong file? That was fixed. If that was you, thanks for the heads up.

    I don’t know what to tell you about your email issue. I just had someone do a test and it worked fine. You might check the standard things (security settings on your browser, etc.).

    I certainly want to receive emails from readers, and I spend time almost every day responding to them. So, if you have a question, I’m happy to respond.
