Increased longevity for victims of violence

Roger Dobson writes,

a team from Massachusetts University and Harvard Medical School found that technological developments had helped to significantly depress today’s murder rates, converting homicides into aggravated assaults.

Pointer from Tyler Cowen. Some thoughts.

1. I give the study a less than 50 percent chance of holding up. The method seems unreliable.

The team looked at data going back to 1960 on murder, manslaughter, assault, and other crimes. It merged these data with health statistics and information on county level medical resources and facilities, including trauma centres, population, and geographic size. The researchers then worked out a lethality score based on the ratio of murders to murders and aggravated assaults.
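
As I read that description, the lethality score works roughly like this (a minimal sketch; the counts are made up for illustration):

    def lethality_score(murders, aggravated_assaults):
        """Share of serious violent attacks that end in a death."""
        return murders / (murders + aggravated_assaults)

    # Made-up counts: 50 murders and 950 aggravated assaults
    # would give a lethality score of 0.05.
    print(lethality_score(50, 950))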

2. As I understand it, many crime statistics show a decline, not just those for murder. This analysis says the opposite: that the rate of violent crime has remained high and that better treatment has reduced murders.

From the comments on statistical malpractice

1.

Frankly there are just too many PhDs. The talent pool is too diluted.

Imagine if the NBA had 1,000 teams instead of 30. Those extra 10,000+ players will be playing basketball by its official rules but it will mostly be of very poor quality compared to what we see from LeBron James. There’s nothing you can do about that if you’ve committed to having 1,000 teams. There simply aren’t 15,000 basketball players who are as talented as an average NBAer.

2.

Basically, every clever idea that requires human beings to do their duty without reliable detection and penalty for violation, has already been thought of, implemented, and failed entirely. Not just failed entirely, which is bad enough, but made it around two orders of magnitude more burdensome to get papers done. Not good papers, just any paper, which are still mostly bad papers. “Huge additional costs, zero apparent benefit” is the worst of all possible worlds, and such a bad world, that one just needs to move to a totally different planet.

So, the only solution is a completely different mechanism and institution of accountability.

I agree with both of these points.

Widespread statistical malpractice

Alvaro de Menard writes,

It is difficult to convey just how low the standards are. The marginal researcher is a hack and the marginal paper should not exist. There’s a general lack of seriousness hanging over everything—if an undergrad cites a retracted paper in an essay, whatever; but if this is your life’s work, surely you ought to treat the matter with some care and respect.

You have to read the whole long post to see how he got to that point. Pointer from Tyler Cowen.

General update, May 9

1. Erin Bromage writes,

We know most people get infected in their own home. A household member contracts the virus in the community and brings it into the house where sustained contact between household members leads to infection.

But where are people contracting the infection in the community? I regularly hear people worrying about grocery stores, bike rides, inconsiderate runners who are not wearing masks…. are these places of concern? Well, not really. Let me explain.

In order to get infected you need to get exposed to an infectious dose of the virus; the estimate is that you need about ~1000 SARS-CoV2 viral particles for an infection to take hold, but this still needs to be determined experimentally. That could be 1000 viral particles you receive in one breath or from one eye-rub, or 100 viral particles inhaled with each breath over 10 breaths, or 10 viral particles with 100 breaths. Each of these situations can lead to an infection.
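
A minimal sketch of that arithmetic, taking the ~1000-particle figure at face value (Bromage's own caveat is that the threshold has not been determined experimentally):

    # The assumed infectious dose; a rough, unconfirmed estimate.
    INFECTIOUS_DOSE = 1000

    # (particles inhaled per breath, number of breaths)
    scenarios = [(1000, 1), (100, 10), (10, 100)]
    for per_breath, breaths in scenarios:
        total = per_breath * breaths
        print(per_breath, "x", breaths, "breaths =", total, "particles ->",
              "at or above the assumed threshold" if total >= INFECTIOUS_DOSE else "below it")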

2. Here is the paper on vitamin D.

we show that the risk of severe COVID-19 cases among patients with severe Vit D deficiency is 17.3% while the equivalent figure for patients with normal Vit D levels is 14.6%

3. Annette Alstadsæter and others write,

First, layoffs started in sectors of the economy directly affected by the policy measures but then quickly spilled over to the rest of the economy so that after 4 weeks 2/3 of layoffs are accounted for by businesses that were not directly targeted. Second, close to 90% of layoffs are temporary rather than permanent and while this classification may change as the crisis progresses, that is one glimmer of hope in the data. Third, while permanent layoffs are a minority, they still correspond to a 1.5 percentage point increase in unemployment — an unprecedented monthly change. Fourth, the layoffs have a strong socio-economic gradient and hit financially vulnerable populations. Fifth, there are hints of the important role of childcare—within firms, layoffs appear to be skewed toward workers with younger children, in particular toward women. Finally, layoffs are more common in less productive and financially weaker firms so that the employment loss may be overstating total output loss

Pointer from John Alcorn.

4. David Beckworth writes,

This existing demand for safe assets is one reason why interest rates on long-term U.S. treasury bonds remain very low despite the large runup in public debt this year. It also helps explain low inflation, since the increased demand for safe assets means less spending. A new generation of more risk-averse investors will add to this already elevated demand for safe assets and create additional disinflationary pressure that will be with us for some time.

…The use and expansion of the Fed facilities to backstop markets sends another strong signal to foreign investors that the U.S. financial system will not fail. This will encourage them to hold more dollar-denominated assets issued in America. Put differently, the biggest kid on the financial block just got stronger.

This is another case of linking to a point of view with which I disagree. Vehemently.

The demand for low-interest U.S. paper is sufficient for now. But as Herbert Stein said, things that can’t go on forever stop. And in this case, the stop will be sudden and will catch markets by surprise. It will surprise David Beckworth and others who think it can go on forever.

Also, contrary to Beckworth and others, I believe that the additional issuance of government paper does nothing to solve the main problem for the economy, which is to discover new patterns of sustainable specialization and trade.

5. For speculation on the outlook that is closer to mine, consider a paper by Victoria Gregory and others.

We find that the recession has an L-shape. The finding is easy to explain. First, even when the cost of maintaining and reactivating a suspended employment relationship is fairly small—in the order of less than a month of the worker’s value added—the fraction of workers whose employment relationship is permanently terminated is about 35%. This is consistent with survey evidence, which finds that between 40 and 50% of the workers who have entered unemployment during the first month of the lockdown have no expectation of being recalled to their previous job (see, Adams-Prassl et al. 2020 and Bick and Blandin 2020). Second, the workers who are permanently laid-off are disproportionately of the "fickle" type, who need to search for several years in order to find a long-lasting job.

Pointer from JA. Note that the depiction of unemployment as a search/matching problem is not to my taste, because it makes it sound as if the job opportunities are given, rather than emerging from entrepreneurial trial and error. And simulation models are not to my taste. So I cannot endorse the methods, as much as I agree with the conclusion.

6. Leonidas Palaiodimos and others write,

Patients were classified in three groups based on the BMI: BMI<25 kg/m2, BMI 25-34 kg/m2, and BMI≥35 kg/m2 as per the most recent BMI assessment prior to or during the index admission. Severe obesity was defined as BMI≥35 kg/m2.

Pointer from JA. It might be somewhat counterintuitive, but this kind of piecewise linear specification is a more robust way of dealing with possibly nonlinear relationships than is imposing a particular nonlinear functional form, such as log or exponential. These investigators find a significant role for severe obesity along with the usual role for age.
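
To make that concrete, here is a minimal sketch (hypothetical BMI values; only the cutoffs come from the quoted paper) of coding BMI as band indicators rather than imposing a functional form:

    import numpy as np

    # Hypothetical BMI values for five patients; the cutoffs (<25, 25-34.9, >=35)
    # follow the quoted study.
    bmi = np.array([22.0, 27.5, 31.0, 36.2, 40.1])

    # Band indicators ("piecewise" dummies) instead of an imposed form such as log(BMI).
    X = np.column_stack([
        ((bmi >= 25) & (bmi < 35)).astype(float),  # BMI 25-34.9
        (bmi >= 35).astype(float),                 # severe obesity, BMI >= 35
    ])
    # BMI < 25 is the omitted reference category; a regression of outcomes on X
    # then estimates each band's effect relative to that group.
    print(X)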

7. Doc Searls writes,

Now, haul Arnold’s template over to The U.S. Labor Market During the Beginning of the Pandemic Recession, by Tomaz Cajner. . .

The highest employment drop, in Arts, Entertainment and Recreation, leans toward inessential + fragile. The second, in Accommodation and Food Services is more on the essential + fragile side. The lowest employment changes, from Construction on down to Utilities, all tending toward essential + robust.

8. Joseph Sternberg writes,

We all know people on social media who enjoy decrying lockdown violators and protesters as “covidiots.” Project Fear works by appealing to believers’ sense that they are smarter than their peers, better able to read the tea leaves to see the impending disaster and also better able to protect society from its more benighted members. And don’t discount the joy in the sense of moral superiority when one’s position allows one to value “lives” when one’s opponents care only about “the economy.”

Pointer from Alberto Mingardi. Fear of the virus has been transformed into Fear Of Others’ Liberty. So even though Vitamin D is protective, many people applaud the California governor for closing the beaches.

9. Finally, if Tyler can link to an A-WA music video, I can link to a video of an A-WA song with me dancing to it. I need to study the video to remember the last section before the very end.

The testing scam

I used to be a big proponent of testing to help manage the virus. But now I am backing off that. Here is the problem.

Suppose that as a scam, I say that I have a test for the virus. But in fact, I plan to use a random number generator that 5 percent of the time will say that you have the virus and 95 percent of the time will say that you don’t.

If half the population has the virus and half does not, then my scam will be exposed very quickly. My test will be making lots of mistakes, telling people who have it that they don’t and vice-versa.

But if less than 5 percent of the population has the virus, it may not be so clear. Most of the people who “test negative” in my scam will in fact be negative, so I will have that going for me. My problem, which may not be readily apparent, is that most of my positives will be false positives and a few of my negatives will be false negatives.
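
A minimal simulation of the scam, assuming for illustration that 2 percent of the population is infected:

    import random

    random.seed(0)
    N = 100_000
    prevalence = 0.02   # assumed for illustration: 2 percent actually infected
    flag_rate = 0.05    # the scam "test" flags 5 percent of people positive at random

    true_pos = false_pos = true_neg = false_neg = 0
    for _ in range(N):
        infected = random.random() < prevalence
        flagged = random.random() < flag_rate   # ignores infection status entirely
        if flagged and infected:
            true_pos += 1
        elif flagged:
            false_pos += 1
        elif infected:
            false_neg += 1
        else:
            true_neg += 1

    print("share of positives that are false:", false_pos / (true_pos + false_pos))   # ~0.98
    print("share of negatives that are correct:", true_neg / (true_neg + false_neg))  # ~0.98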

I am not saying that existing tests are pure scams. But to be better than a pure scam, a test has to have a much lower margin of error than you might think.

The tests that we have are giving nonsensical results, such as a husband and wife with identical symptoms getting opposite results, or studies that, if extrapolated, would imply that more than 100 percent of New York state has had the virus.

I was one of those FDA-bashers who thought that requiring certification for tests was peacetime bureaucratic thinking. I have come to realize that in order to be useful, the tests have to be highly accurate. If that is where FDA was coming from, I can now appreciate that.

After I wrote the above, but before posting, a commenter pointed me to an essay by Peter Kolchinsky, which aligns with my thinking.

The meaning of getting a positive result also depends on the percent of the population that has been infected. If 50 percent of people have been infected, then a test with a 97 percent sensitivity and a 2 percent false-positive rate is still likely to be 98 percent right if it tells you you’re positive. If only 2 percent of people are infected, then such a test would be only 50 percent right if it said you’re positive.
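
A back-of-the-envelope Bayes calculation with the quoted figures (97 percent sensitivity, 2 percent false-positive rate) gives roughly the same picture:

    def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
        """Probability of actually being infected given a positive test (Bayes' rule)."""
        true_pos = sensitivity * prevalence
        false_pos = false_positive_rate * (1 - prevalence)
        return true_pos / (true_pos + false_pos)

    print(positive_predictive_value(0.50, 0.97, 0.02))  # ~0.96 when half the population is infected
    print(positive_predictive_value(0.02, 0.97, 0.02))  # ~0.50 when only 2 percent are infected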

Probability and action

Scott Alexander writes,

Nate Silver said there was a 29% chance Trump would win. Most people interpreted that as “Trump probably won’t win” and got shocked when he did. What was the percent attached to your “coronavirus probably won’t be a disaster” prediction? Was it also 29%? 20%? 10%? Are you sure you want to go lower than 10%? Wuhan was already under total lockdown, they didn’t even have space to bury all the bodies, and you’re saying that there was less than 10% odds that it would be a problem anywhere else? I hear people say there’s a 12 – 15% chance that future civilizations will resurrect your frozen brain, surely the risk of coronavirus was higher than that?

And if the risk was 10%, shouldn’t that have been the headline. “TEN PERCENT CHANCE THAT THERE IS ABOUT TO BE A PANDEMIC THAT DEVASTATES THE GLOBAL ECONOMY, KILLS HUNDREDS OF THOUSANDS OF PEOPLE, AND PREVENTS YOU FROM LEAVING YOUR HOUSE FOR MONTHS”? Isn’t that a better headline than Coronavirus panic sells as alarmist information spreads on social media? But that’s the headline you could have written if your odds were ten percent!

My cynical view is that the typical reader of the NYT or the WaPo does not notice the lack of consistency between how they treated the virus in February and how they treat it now. They consistently viewed President Trump as wrong, and that is the consistency that their readers care about.

But forget the complaints about media. This is a much bigger issue with human nature. Scott’s basic point is that people tend to treat low-probability events as if they could not possibly occur. Scott points out that the anti-Trump media were far from the only virus denialists back in February. The stock market also behaved like a virus denialist.

Somehow, we seem to be hardwired to think in binary terms: either we believe something will happen or we believe it won't. Notice that we have understood formal binary logic since Aristotle, but, according to many accounts, formal probability theory had to wait for Pascal in the 1600s.

You may notice that when I illustrate probability on this blog, I try to avoid using decimals. Instead, I say “out of 10,000 people. . .” That is because I noticed when teaching high school students that they grasped probability much more quickly if the examples used whole numbers.

Most people are concrete thinkers. For a concrete thinker, an object is either there or it isn’t there. Probabilistic reasoning is abstract, and that makes it harder.

Thoughts on testing for the virus

There are two purposes of tests.

Individual: To tell whether a particular patient has the virus.
Social: To enable public health officials and policy makers to know the prevalence of the virus in the population.

For the individual purpose, the quantity of tests and the speed with which results can be read matter more than quality. There is hardly any point in testing someone who is unlikely to be infected. And if a hospital uses one type of test on one patient and a different type of test on a different patient, that is hardly a problem.

For the social purpose, the quantity of tests does not matter, as long as enough people are tested to produce a reliable sample. If you have to wait a week for a test result, that is ok. You want to include a representative sample of the entire population, including people without any reason to believe that they have been infected. It is important that every person be tested using the same method.

What level of accuracy do you need? Suppose that 95 percent of the people who test positive are in fact positive, and 95 percent of the people who test negative are in fact negative. Is that good enough? Imagine that out of 1000 people, 40 test positive and 960 test negative. You would have:

              test positive    test negative
have virus         38               48
virus-free          2              912

Do you see the problem? More people who have the virus test negative for it than test positive for it. That is certainly not good for the individual purpose.

For the social purpose, you can back out the true prevalence of the virus provided you know precisely the rate of false positives and false negatives. But if you don’t know those, and if you just go by the test results, in this example you would say that only 4 percent of people have the virus, even though 8.6 percent of people actually have the virus. But at least you would be in the right ballpark. In the absence of rigorous testing, right now the estimates from different “experts” are orders of magnitude apart.
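
As a sketch of that back-out step, using the error rates implied by the table above:

    def true_prevalence(observed_positive_share, sensitivity, false_positive_rate):
        """Invert observed = sens*prev + fpr*(1 - prev) to recover the true prevalence."""
        return (observed_positive_share - false_positive_rate) / (sensitivity - false_positive_rate)

    # Rates implied by the table: 38 of the 86 infected test positive,
    # 2 of the 914 virus-free test positive, and 40 of 1000 tests come back positive.
    sensitivity = 38 / 86
    false_positive_rate = 2 / 914
    print(true_prevalence(40 / 1000, sensitivity, false_positive_rate))  # ~0.086, i.e. 8.6 percent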

For the individual purpose, I would prefer the testing method with the lowest rate of false negatives. For the social purpose, I would prefer the test where we have the most precise estimate of the false positive and false negative rates, even if the false negative rate is a bit higher than that of some other test.

Trying to raise the status of Edward Leamer

[Note: askblog had an existence prior to the virus crisis. I still schedule occasional posts like this one.]

In this article, I say that Edward Leamer deserves a Nobel Prize.

Edward Leamer deserves the Nobel Prize in Economic Sciences for launching the movement to examine critically the uses of statistical methods in empirical research. The movement has had repercussions that go beyond econometrics. It has affected medicine and epidemiology, where John P. A. Ioannidis has been a leading figure in pointing out methodological failures (Ioannidis 2005; 2016; Begley and Ioannidis 2015). It has impelled psychology and behavioral economics to confront what has become known as the ‘replication crisis’ (Camerer et al. 2018).

. . .After Leamer pointed out problems inherent in the multiple-regression approach and the inevitable specification searching that it involves, economists have turned to quasi-experimental methods

The policy of Econ Journal Watch is not to allow the author to give acknowledgments to editorial staff, but Brendan Beare and Jason Briggeman did a great deal to improve the essay.