Unpacking the Term “Probability”

My new essay on probability concludes:

Producers and consumers live in a world of non-repeatable events…Treating probabilities as if they were objective is a conceptual error. It is analogous to the conceptual errors that treat value as objective…We will be less likely to overstate the robustness of equilibrium and the precision of economic models if we stop conflating subjective degrees of conviction with verifiable scientific concepts of probability.

I argue that one cannot assign an objective probability to a non-repeatable event, such as “will Hurricane Sandy cause flooding in the New York subway system?” I could have used “Will Barack Obama win re-election?” as my illustrative example, given that Nate Silver famously assigned a very precise-sounding probability to that event.

11 thoughts on “Unpacking the Term “Probability””

  1. I’m unclear on the difference between probadef and probastat probabilities. I can’t think of any expressions of probability that would fit one of these definitions but not the other. The example given, of a coin toss having a 50% chance of landing on either side, is a ‘probadef’ probability only because objects with the physical characteristics of coins have been reliably observed to behave a certain way. Would it be probadef if the probability were stated for a newly minted coin that had never been tossed, but probastat if the probability were stated for a coin that had already been tossed 1000 times and observed to land on heads 50% of the time?
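
    A minimal sketch of the distinction being asked about, assuming the terms mean what the post suggests (the toss count and the fairness assumption are invented for illustration):

    ```python
    # probadef: a probability stipulated from the definition of the setup.
    # probastat: a probability estimated from an observed sample.
    import random

    PROBADEF_HEADS = 0.5   # stipulated: a "fair" coin, by definition
    tosses = 1000

    heads = sum(random.random() < PROBADEF_HEADS for _ in range(tosses))
    probastat_heads = heads / tosses   # estimated from the observed tosses

    print(f"definitional: {PROBADEF_HEADS}, observed: {probastat_heads:.3f}")
    ```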

  2. The problem with your taxonomy is that all probability statements about the real world are fundamentally statements about model uncertainty. We say a fair coin lands heads 50% of the time because we don’t know enough about the coin and the flipping process to make the correct prediction. If a fair coin is flipped by a specialized coin-flipping robot, maybe our model should predict 100% heads or 0%.

    As you say, it’s true that you cannot evaluate the correctness of a stand-alone probability estimate for a one-off event like Hurricane Sandy. It’s not repeatable, but more important, if you could rewind the tape and play Sandy over again, the same thing (flooding or non-flooding) would happen every time.

    However, you can evaluate a given model by looking at the probability estimates it spits out, even if those estimates all relate to one-offs. If your weatherman is constantly predicting 99% chance of rain and you live in Phoenix, you should begin to suspect that he’s a bad weatherman. Moreover, the specific numerical values of a model’s probability estimates should have a quantifiable impact on your evaluation of the model.

    So, I think it’s mistaken to adopt a nihilistic approach to quantitative probability. It’s more accurate to say that numerical predictions about real-world events inherently involve complex and tentative models. As a result, the value of the probability estimate has large error bars due to the uncertainty of the model. However, there is ultimately a fact of the matter about how the universe works. The NYT had an interesting article recently discussing how modern meteorologists are actually getting good at delivering relatively accurate quantitative predictions. As your model gets better, you start to converge towards probastat.
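
    As a concrete illustration of grading a stream of one-off probability estimates, here is a minimal sketch using log loss, one standard proper scoring rule (the Phoenix forecasts below are invented):

    ```python
    # Even though each event is a one-off, a proper scoring rule grades
    # the stream of probability estimates a model emits.
    import math

    def log_loss(forecasts):
        """Mean negative log-likelihood of the outcomes under the forecasts."""
        return -sum(math.log(p if happened else 1 - p)
                    for p, happened in forecasts) / len(forecasts)

    # (probability of rain, did it rain?) for ten hypothetical Phoenix days
    overconfident = [(0.99, False)] * 9 + [(0.99, True)]   # always "99% rain"
    sensible      = [(0.10, False)] * 9 + [(0.10, True)]   # usually "10% rain"

    print(f"overconfident forecaster: {log_loss(overconfident):.2f}")
    print(f"sensible forecaster:      {log_loss(sensible):.2f}")  # lower is better
    ```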

  3. I don’t really see the difference between probastat and probatherm.

    One cannot verify probabilistic statements based on one sample with another, similar sample. One can only compute the probability that these two samples are governed by the same model. This computed probability is the degree of our conviction that these are two _similar_ samples.

    For example, suppose you have 200 patients (mortgages). 7 of them die (default) after being treated with a new drug (a new pattern of specialization and trade). Our empirical probability is 3.5%. Let’s take another sample of 100 patients; 5 of them die. Did we just verify the empirical probability, or did we just disprove our initial estimate? Neither. All we can make are statements about the plausibility of these two samples being essentially similar, or coming from the same state of the world, etc. These statements measure our degree of belief, given the data. (A sketch of this comparison follows at the end of this comment.)

    Almost all science is like this.
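
    A minimal sketch of the two-sample comparison above, using Fisher’s exact test as one frequentist stand-in (scipy is assumed to be available; the counts are the ones from the example):

    ```python
    from scipy.stats import fisher_exact

    #            died  survived
    sample_1 = [7, 193]   # 7 of 200 -> 3.5%
    sample_2 = [5, 95]    # 5 of 100 -> 5.0%

    # The p-value says how surprising a split at least this lopsided would
    # be if one model governed both samples -- a plausibility statement,
    # not a verification.
    odds_ratio, p_value = fisher_exact([sample_1, sample_2])
    print(f"p-value under the one-model hypothesis: {p_value:.2f}")
    ```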

  4. To me, there is the territory (the real world) and the map (the representation of the real world in our heads). Probability, then, represents our state of partial knowledge about the world.

    The coin flip will be either heads or tails. But we don’t know either way, so our state of partial knowledge is 50%.

    Similarly, Hurricane Sandy would have flooded the NYC subway or it would not have, but Dr. Masters’ statement says that, based on his analysis of models, past floodings, etc., his state of partial knowledge is 50%. The 50% number is significant insofar as it varies from what I would consider my prior probability of a flooding event: if I thought that a flooding event /should/ have a 10% chance (1:9 odds), then Dr. Masters had better have log_2(9) ≈ 3.17 bits of information that I don’t have in order to claim that 50% chance (1:1 odds).
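
    A minimal sketch of that arithmetic (the prior and posterior are the ones from the example):

    ```python
    import math

    def bits_of_evidence(prior, posterior):
        """log2 of the factor by which the odds must shift."""
        odds = lambda p: p / (1 - p)
        return math.log2(odds(posterior) / odds(prior))

    # Moving from a 10% prior (1:9 odds) to 50% (1:1 odds):
    print(f"{bits_of_evidence(0.10, 0.50):.2f} bits")   # log2(9) ≈ 3.17
    ```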

  5. Weather forecasting uses probabilities in a way that is often confusing. A typical use is the forecast “there is a 30% probability of rain tomorrow.” Obviously, the actual outcome is either rain (100%) or no rain (0%). A probability is used as a means of improving accuracy and of expressing confidence in the prediction. In the case of that forecast, if rain occurs, the error in the forecast is 0.70 (and 0.30 if it does not). This may be useful for making decisions. It is usually useful for improving forecasts.

    Accuracy improvement is based on feedback. There is some forecast methodology in use, whether psychic intuition or complex computer model. The methodology will have a mean error from prior forecasts. Most weather forecasting analysis uses RMS error rather than mean error, because RMS penalizes large misses more heavily. A real forecasting system will make hundreds or thousands of forecasts each year and will be revised based on error measurements.

    So a forecaster’s statement “the probability is X” is shorthand for the statement “using the methodology that has been measured over the past N years for forecasting events of this type, a forecast probability of X is expected to have the lowest RMS error contribution for this event.”
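
    A minimal sketch of that error measurement, with an invented forecast history; note the 0.70 error for the 30% forecast followed by rain:

    ```python
    import math

    def rms_error(forecasts):
        """Root-mean-square error of (probability, outcome) pairs."""
        return math.sqrt(sum((p - int(happened)) ** 2
                             for p, happened in forecasts) / len(forecasts))

    # (forecast probability of rain, did it rain?) -- invented history
    history = [(0.30, True), (0.30, False), (0.70, True), (0.10, False)]
    print(f"RMS error: {rms_error(history):.3f}")
    ```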

  6. Would your view be different if there were a person who tracked all of his probability estimates and found that, for example, of all of the events to which he assigned 80% probability, roughly 80% occurred?

    • No, my view would not be different. The event “making a prediction” is not a draw from a distribution.
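
    For what it’s worth, the tracking exercise in the question is easy to sketch (the records below are invented): group forecasts by stated probability and compare each group’s observed frequency.

    ```python
    from collections import defaultdict

    records = [  # (stated probability, did the event occur?) -- invented data
        (0.8, True), (0.8, True), (0.8, False), (0.8, True), (0.8, True),
        (0.5, True), (0.5, False), (0.5, False), (0.5, True),
    ]

    buckets = defaultdict(list)
    for p, occurred in records:
        buckets[p].append(occurred)

    for p, outcomes in sorted(buckets.items()):
        freq = sum(outcomes) / len(outcomes)
        print(f"stated {p:.0%}: occurred {freq:.0%} of {len(outcomes)} times")
    ```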

  7. These all seem like aspects of the same thing (counting/measuring under uncertainty). A ‘probastat’ probability is an attempt to confirm or reject a ‘probadef’ probability. A ‘probatherm’ probability is one informed by doing some probastats on your prior model in similar past situations. Etc. They are all parts of the same elephant.

    It’s true that there is a lot of confusion surrounding such uses of probability but I’m not sure splitting up the definition into 3 adds more light than heat.

    In particular, a lot of recent resistance to the use of probability is overkill and unwarranted. Yes, when the weatherman says “30% chance of rain” he’s describing a situation that will either come out 0% or 100%, but is it *really* so confusing what he means? He means that (his stats say) in weather situations categorized as similar to [what we’re observing right now], rain occurred the next day 30% of the time. Or, maybe, he means that some prediction model, run using today’s weather as initial conditions, evolved to produce rain the next day in 30% of scenarios (see the sketch at the end of this comment). Etc. You can take that result or leave it. But is it really so hard to interpret?

    Similarly, righty gripes about the Nate Silver ‘probability’, for example, are reconciled by stipulating that he was simply counting universes: http://rwcg.wordpress.com/2012/11/06/what-an-election-probability-means/

    Which is fundamentally the same exercise as the rain-prediction example. You want to call this probatherm and distinguish it from probastats, but one can always run some probastats on one’s probatherm-predicting model and, having done so, use it to confirm or reject this or that probadef. Yeesh. What’s the end goal here?
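
    Here is a minimal sketch of the scenario-counting reading, with a toy stand-in for the weather model (the humidity threshold and noise level are invented):

    ```python
    import random

    def toy_model(initial_humidity):
        """Stand-in for a weather model: rains if perturbed humidity is high."""
        return initial_humidity + random.gauss(0, 0.1) > 0.75

    # Run the model from perturbed initial conditions and count the
    # fraction of scenarios that produce rain.
    runs = 10_000
    rainy = sum(toy_model(initial_humidity=0.70) for _ in range(runs))
    print(f"rain in {rainy / runs:.0%} of {runs} scenarios")
    ```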

  8. Good essay, and I’ll note that the essence of every con man’s sleight of hand is in this conflation: I’ll make subjective statements of probatherm and make you think I’m talking about probastat. That’s how con men convince cancer patients that a coffee enema will cure them and how college recruiters get students to pay their outrageous tuition. It’s no surprise that weathermen, meteorologists, and Nate Silver all sign up for the same con, though in the latter case it would all depend on how authoritatively he claimed Obama would win. This is just more evidence of the scientism that Hayek railed against.

Comments are closed.