What I’m Reading

Scientific Perspectivism, by Ronald N. Giere. Based on a reader’s suggestion that I try to get up to speed on philosophy of science since the 1960s. Giere is trying to find a middle ground between scientific objectivism and social constructionism. The former says that there is absolute truth, and science is the process for discovering it. The latter says that all scientific beliefs are socially constructed, and hence none can claim to be Truth.

From p. 15:

Perspectivism makes room for constructivist influences in any scientific investigation. The extent of such influences can be judged only on a case-by-case basis, and then far more easily in retrospect than during the ongoing process of research. But full objectivist realism (“absolute objectivism”) remains out of reach, even as an ideal. The inescapable, even if banal, fact is that scientific instruments and theories are human creations. We simply cannot transcend our human perspective, however much some may aspire to a God’s-eye view of the universe. Of course, no one denies that doing science is a human activity. What needs to be shown in detail is how the actual practice of science limits the claims scientists can legitimately make about the universe.

In economics, we want to make it seem as though the “science” is objective, and all policy differences are ideological. In fact, differences of “scientific” opinion are not so objective. Many economists process this as “the other guy is a blinkered ideologue, but I’m not.” But nobody has a God’s-eye view of the economy.

4 thoughts on “What I’m Reading”

  1. Even social constructions have a reality and convey meaning. There may be multiple ones that circumscribe what meanings are conveyable, but insofar as they overlap, the meanings they convey must be equivalent. Predictions and testing are essential to avoid dogma, which is devoid of meaning, and to tie science to reality.

  2. Some posts at neurologica blog seem relevant.

    “NEJM Article On Randomized Clinical Trials”
    http://theness.com/neurologicablog/index.php/nejm-article-on-randomized-clinical-trials/

    Also the series on “Mental Illness Denial”
    http://theness.com/neurologicablog/?s=mental+illness+denial

    He talks about a hierarchy of evidence in medicine, with randomized controlled trials being the gold standard, although for some procedures these may not be possible for practical or ethical reasons.

  3. May I humbly/respectfully suggest, Arnold, that you consider replacing your economics-is-not-a-science meme with an economics-is-like-applied-science (i.e. engineering and medicine) one.

    Practitioners of applied science apply objective principles/techniques to real-world problems, and they encounter failure after failure after failure. The models used are not deterministic, but that does not mean they are random or useless. They are imperfectly deterministic, and each failure informs us in making better models/designs that are closer to deterministic than the last one, or at least informs us of narrower operating conditions.

    The problem with MIT-like economic models is that economists expect them to work without failure or informed revision/evolution. This gets more to the heart of the issue discussed in the Timothy Taylor post you previously linked to: given the current conditions, what economic models can we usefully apply; and given an economic model, what is the extent of the conditions in which it can usefully be applied? It shouldn’t be either-or; it should be both simultaneously, while expecting bad-**stuff**-to-happen despite our careful efforts.

    Henry Petroski has written a great deal about the value of failure in engineering and Charles Perrow’s classic “Normal Accidents” is a useful exploration of common failure modes in all types of systems. Medical “Morbidity and Mortality” meetings/conferences are another useful model/technique.

    You are looking to science for insight about improving economics but scientists are not very good at applied science either. Just consider the confidence most scientists have in their models and their recommendations about public policy.

  4. Since you have the math for it, I strongly recommend adding to Giere’s book some book that covers enough of the modern fundamentals of inductive logic to help you reason quantitatively about overfitting and about when we can (and cannot) demonstrate that it is justifiable to fit a theory to mostly-historical evidence. The issue arises a lot in fields that you care about, like social science and climate science; it is intimately related to questions that philosophers of science tend to talk about; it is hard to treat properly without math; and a lot of numerate progress was made on it during the very post-1960s period whose philosophy of science you are getting up to speed on.

    I don’t know what book would be an optimal recommendation, but I know two that might be at least useful, emphasizing two different points of view. (Like mechanics, induction can be looked at in ways that are fundamentally related but superficially very different: stuff like Lagrangian mechanics vs. Newton’s laws, or path integrals vs. more-algebraic approaches to QM.) If you happen to already know the basic ideas of information theory (a guess based on vague memory of you working with computers enough to do an Internet business), then skimming some of Gruenwald’s _The Minimum Description Length Principle_ might be an effective way to quickly get the general idea of a digital, information-theory-oriented approach to the question. And I like Vapnik’s _The Nature of Statistical Learning Theory_, monumental ego and all, for its more classically statistical approach to (among several other things) overfitting, through Vapnik-Chervonenkis dimension and through optimizing generalization tradeoffs in fitting algorithms as a function of the amount of data available. Either book would be a lot of work to actually work through carefully, but both are structured in such a way that you can learn a very useful amount about the conclusions and about the structure of the analysis rather more quickly, by focusing on things like examples and summary text. (A toy numerical sketch of the overfitting point appears after these comments.)

    At least when I last looked at it some years ago, there seemed to be a weird _Two Cultures_-turned-up-to-11 pattern (I dunno whether it affects the book you are reading, but beware if it does) where the quantitative statistical/machine-learning guys know how to cite back to the relevant work in philosophy, but the philosophers cultivated ignorance of work by the s/ml guys even as the s/ml guys achieved ever-more-conspicuous practical success in systematically automating reasoning by induction. Philosophers didn’t seem to be at all good at citing the s/ml stuff, and it’s probably not merely childish pretending that the work doesn’t exist but at least some honest ignorance, because philosophers could find things difficult or seriously confusing even when they had been made clear and routine in the numerate literature.
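A toy numerical sketch of the overfitting point raised in comment 4, for readers who want to see it concretely: fit polynomials of increasing degree to a small noisy sample and compare the error on the data used for fitting with the error on fresh data. This is only an illustrative sketch in Python; the data, sample sizes, and degrees are invented for the example and come from neither of the recommended books.

```python
# Illustrative sketch of overfitting: as the polynomial degree grows,
# error on the fitted ("historical") data keeps falling, while error on
# held-out data typically rises once the degree outruns the data.
# All numbers here are made up for the example.
import numpy as np

rng = np.random.default_rng(0)

def make_sample(n):
    """Noisy observations of a simple underlying relationship."""
    x = np.sort(rng.uniform(-1.0, 1.0, n))
    y = np.sin(3 * x) + rng.normal(0.0, 0.2, n)
    return x, y

x_train, y_train = make_sample(20)   # the small sample the "theory" is fit to
x_test, y_test = make_sample(200)    # fresh data the fit never saw

for degree in (1, 3, 9, 12):
    coeffs = np.polyfit(x_train, y_train, degree)                       # fit
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)   # fit error
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)      # prediction error
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

The pattern this illustrates, with the training set standing in for “mostly-historical evidence,” is the tradeoff that the MDL and VC-dimension treatments in the two recommended books analyze as a function of model complexity and the amount of data available.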
