On Climate Science

Phillip W. Magness writes,

In a strange way, modern climatology shares much in common with the approach of 1950s Keynesian macroeconomics. It usually starts with a number of sweeping assumptions about the relation between atmospheric carbon and temperature, and presumes to isolate them to specific forms of human activity. It then purports to “predict” the effects of those assumptions with extraordinarily great precision across many decades or even centuries into the future. It even has its own valves to turn and levers to pull – restrict carbon emissions by X%, and the average temperature will supposedly go down by Y degrees. Tax gasoline by X dollar amount, watch sea level rise dissipate by Y centimeters, and so forth. And yet as a testable predictor, its models almost consistently overestimate warming in absurdly alarmist directions and its results claim implausible precision for highly isolated events taking place many decades in the future. These faults also seem to plague the climate models even as we may still accept that some level of warming is occurring.

Pointer from Don Boudreaux. Read the whole thing. I have this same instinct about climate models, which does not necessarily mean that I am correct in my skepticism.

9 thoughts on “On Climate Science”

  1. Another issue I have noticed is how much gets done at the interpretive level. For instance, a study that once might have been described by the researchers as a measure of the sensitivity of some frog population to changing rainfall is now written up in terms of how climate change is killing frogs. This has nothing to do with p-values or models. It’s just a shift in the rhetorical framing used in the conclusions of many studies that are only tangentially related to climate change. Through that subtle shift in rhetorical norms, a whole new body of research appears to become evidence linking climate change to destruction.

  2. Implausible precision from such wide error bounds? If you have the slightest inclination to believe claims like that, you should be entirely skeptical.

  3. The question has never been whether CO2 is entropic, or an externality.

    The question is, “So what?”

    If they focus on the important question, they lose elections. That is why they don’t engage with it and instead put their energy into claiming that proper skepticism is anti-science.

  4. In both cases it depends on the size of the multipliers. In Keynesian economics a large multiplier implies dramatic effects for government intervention. A small multiplier implies government impotence.

    In the case of global warming or climate change, CO2 is a weak greenhouse gas, so a large multiplier is required for large effects.

    In both cases the evidence I see indicates small multipliers.
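
    A rough numerical illustration of what multiplier size means on the climate side, using the commonly cited logarithmic approximation for CO2 forcing (about 5.35 ln(C/C0) W/m²); the two sensitivity values below are illustrative placeholders bracketing a small versus a large multiplier, not estimates:

    ```python
    import math

    # Commonly cited logarithmic approximation for CO2 radiative forcing:
    # delta_F ~= 5.35 * ln(C / C0) W/m^2, so doubling CO2 gives ~3.7 W/m^2.
    delta_F_doubling = 5.35 * math.log(2.0)

    # The "multiplier" here is the sensitivity parameter lambda, in K per W/m^2.
    # The two values are placeholders bracketing a small vs. a large multiplier.
    for lam in (0.3, 0.8):
        warming = lam * delta_F_doubling
        print(f"lambda = {lam:.1f} K/(W/m^2) -> ~{warming:.1f} K per CO2 doubling")
    ```

    The forcing term is the same in both runs; the disagreement the comment describes is entirely about which value of the multiplier the evidence supports.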

  5. The thing I loved about the piece was its question of why climate modelers don’t use the tools of time-series statistics. I’ve always wondered: why not test for Granger causality, or fit an error-correction model (ECM)?
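
    A minimal sketch, in Python with statsmodels, of the kind of Granger-causality check the comment has in mind; the two series are synthetic placeholders, not real CO2 or temperature records:

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)
    n = 150  # pretend these are 150 annual observations

    # Synthetic placeholders: "co2" trends upward, "temp" follows it with a 2-step lag.
    co2 = np.cumsum(rng.normal(0.5, 1.0, n))
    temp = 0.01 * np.concatenate(([0.0, 0.0], co2[:-2])) + rng.normal(0.0, 0.3, n)

    # Null hypothesis at each lag: the second column (co2) does NOT Granger-cause
    # the first column (temp); small p-values would reject that null.
    results = grangercausalitytests(np.column_stack([temp, co2]), maxlag=4)
    ```

    One complication worth noting: both series trend and are therefore non-stationary, so the more natural tool is the second one the comment mentions, a cointegration test followed by an error-correction model, rather than a Granger test on the raw levels.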

  6. Attempting to model a physical system with just a statistical model is foolish. Well-understood physics is at play and must be part of your model. Magness writes through the lens of an economist who has never dealt with a physical system or taken a physics class beyond 101.

  7. Aaron, I completely agree, but that’s a great description of a lot of what passes for climate science, at least by the standard of what the IPCC will accept and report on.

    Michael Mann has been particularly brazen about this. He consistently ignores the well-known complications in the biological connection between temperature and the growth rate of trees. He has published numerous correlations between temperature and tree-ring growth even though specialists in the field are pretty skeptical that the connection is that direct. He just ignores how real-world trees work, runs his statistical models, and then tweaks them until he gets the shape of graph he wants.

    For that matter, the same can be said about all the regressions against surface temperature data. After the work done by Anthony Watts, among others, we know that surface readings aren’t very well correlated with temperature measured by more accurate means, in particular satellite measurements. Awkwardly, that means there’s not really any good global temperature measure if you go back more than about 40 years.

    This hasn’t stopped a number of scientists from averaging all the temperature data together anyway and doing all manner of post-processing to get a clean curve out of it. They must be aware by now of the fundamental problems in the underlying data, but who’s going to stop them?

  8. To me it seems like a clear two-way street: creating a model based entirely on historical temperature data, without an understanding of the underlying physical processes, is probably not going to give you much predictive value, because you don’t know what you’re not capturing. On the other hand, if you have a physical model and you have to really hammer the historical temperature data to get it to fit, then that should tell you that you don’t understand the underlying physical processes as well as you think you do. Magness’ critique focuses on this latter issue, which I think is clearly salient given the divergence of observed and predicted surface temperature warming since, say, 1998.

  9. It’s worse than just noisy input causing large error bars. The systems being modeled are governed by nonlinear, chaotic differential equations that have no closed-form solutions and are extremely sensitive to initial conditions. The best climate models can only be integrated numerically, so even if the input were 100% accurate, an estimate 50 years from now would still not be reliable.
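
    A toy illustration of that sensitivity, using the Lorenz-63 system rather than anything resembling a real climate model: two runs whose initial states differ by one part in a billion end up bearing no resemblance to each other.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Lorenz-63: a classic chaotic system of differential equations with no
    # closed-form solution; it stands in here for "equations you can only
    # integrate numerically," not for an actual climate model.
    def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    t_span = (0.0, 40.0)
    t_eval = np.linspace(*t_span, 4000)

    run_a = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0], t_eval=t_eval)
    run_b = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0 + 1e-9], t_eval=t_eval)

    # Maximum divergence per coordinate over the run: the trajectories separate
    # completely despite the tiny difference in starting point.
    print(np.abs(run_a.y - run_b.y).max(axis=1))
    ```

    The point of the sketch is only the sensitivity to initial conditions; the divergence here comes from the equations themselves, not from any noise in the input.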
