Joe Rini asks me anything

Transcript.

Some of my blog readers asked me to say more about my intellectual influences, so I've been writing a series of posts about that which haven't gone up yet. What I notice is that the longest post I've written is about the Freddie Mac career, because I learned more in those half-dozen years in business than I learned in my undergraduate and graduate economics. In some ways the learning was complementary, but if you had to throw out one, it would be the classical economics. One of the first things that hits you if you work in business and all you've done is study economics is that these things a business is supposed to optimize – minimize costs, maximize profits, solve a calculus problem – that's not the problem at all. Your challenge is to figure out what's going on, and beyond that, if you are in a large organization, the challenge is to get the large organization to function.

Video version.

Audio version.

How my thinking evolved–a few loose ends

Related to yesterday’s long post.

1. As I think about it, I find it hard to see how I could have been anywhere but on the left when I was growing up. Conservatives in those days were wrong on race and wrong on how to fight Communism. If there is someone here who identified as conservative in 1964 and was in favor of racial integration and against the Vietnam War, I would love to hear about it. You were the only one.

Note that in those days, there were Republicans who were for Civil Rights, and the most stalwart segregationists were Southern Democrats. So for this purpose conservative does not equal Republican.

2. Some people asked what I think about using data in economics. I think that one should judge theories in social science by looking at as many different types of evidence as possible. But not “this one chart” or “this one significance test.” It is good to take a statistics class to learn the formal theory of inference, so that you know how to use data to make an argument. But then you have to unlearn formal statistics to some degree, so that you don’t end up deceiving yourself and others. There are many abuses of data, and the academic publication process does not do enough to incent better methods. Recall my recommendation to give a Nobel Prize to Ed Leamer.

3. Some people asked me to talk about intellectual influences in terms of specific people and ideas. I will get to that in about ten days or so.

What is the GDP of Jeff Bezos?

1. A comment on a recent post led me to an article by Mark Muro and others.

Biden’s winning base in 477 counties encompasses fully 70% of America’s economic activity, while Trump’s losing base of 2,497 counties represents just 29% of the economy.

Suppose that each candidate had 50 votes. But the 50 voters for Biden produced $70 of output and the 50 voters for Trump produced $30 of output. The per capita output of Biden voters is much higher.
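In per capita terms, that is 70/50 = 1.4 units of output per Biden voter versus 30/50 = 0.6 per Trump voter, a ratio of about 2.3 to 1.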

I am not sure how the Commerce Department computes GDP at a county level. Does Seattle produce a lot of output based on the sales revenue of Microsoft and Amazon? If Amazon moved its headquarters out of Seattle, would Seattle’s GDP suddenly plummet? If Jeff Bezos moved to a rural Texas county, would that county’s GDP soar?

The Commerce data show that a lot of GDP is created in the District of Columbia and Fairfax County, Virginia. Do we believe that a lot of “production” is coming from the government workers in this area?

I find it credible that, on average, Biden voters have significantly higher incomes than Trump voters. And if you assume that people’s incomes are highly correlated with the value of what they produce, then it becomes likely that, on average, Biden voters produce output that is more highly valued in today’s economy.

But I think that it is unfortunate that we report a statistic like GDP as if it has a precision of several significant figures, and that we try to decompose trends in its second difference (to get the alleged change in trend productivity growth) or to decompose the location of production down to the county level. The concept of GDP is too crude to support such fine-grained breakdowns.
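To spell out the "second difference" point: the growth rate is roughly the first difference of log GDP, and a change in trend growth is therefore a second difference. Each differencing step magnifies whatever measurement noise sits in the underlying level, which is one reason to distrust fine decompositions of it.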

2. A reader sent me a link to a paper by Gavin Wright that argues that the de-industrialization of the South broke up a biracial political coalition that was dedicated to trade protection of the Southern textile industry. I am not sure about the political analysis, but there is a somewhat valid economic point that taking away trade protection from that industry caused widespread damage to the people who had worked in it.

But consider the counterfactual: what if we had continued with restrictions on textile imports? Would jobs have been saved, or would automation have been accelerated? Would households have maintained a high level of demand for Southern textiles, or would they have substituted away to other types of clothing or other forms of consumption? How much would income growth have been retarded in textile-exporting countries, and what would have been the consequences?

Vaccine testing and the trolley problem

As you know, I am not ready to proclaim a vaccine a success based on what I see as a small number of cases in the sample population. A lot of you think I am wrong about that, and I hope I am.

But my intuition is still that there is something unsatisfying about the testing protocol.

The actual protocol seems to be to give the vaccine to a large sample and wait for people to get exposed to the virus as they go about their normal business. My inclination is to deliberately expose a smaller sample to the virus and see how well the vaccine works.

One argument against deliberate exposure is that the response of people to deliberate exposure may not be the same as their response to normal-business exposure.

But in favor of deliberate exposure:

–you can clearly test how well the vaccine does in specific populations, such as people with and without obesity.
–you can clearly test how well the vaccine does at two levels of viral load.
–you can get results quickly, rather than waiting months for people in the sample population to become exposed to the virus.
–you could identify the main contacts of the people deliberately exposed and check whether the vaccine reduces spreading.

Whether you deliberately expose 100 people or wait for 100 cases to emerge, you end up with 100 cases either way. It seems to me that deliberate exposure is equivalent to throwing the switch in the trolley problem: the cases occur regardless, and the only difference is whether the experimenter specifically chose them.

I think that the case for deliberate exposure as a testing protocol ends up being pretty strong. What am I missing?

UPDATE: a reader points me to a story about a proposed challenge trial.

Challenge trials are controversial because of the risks involved with infecting patients with a potentially lethal virus

But again I ask, what is the difference between infecting a group of people and waiting for a group of people to become infected?

Further thoughts on a vaccine

UPDATE: Thanks to a commenter for finding the study protocol for the Pfizer study. That helps a lot.

1. If you assume that everyone is identical, and if you want to try to eradicate the virus by giving everyone the vaccine, then you don’t need to show that it is 90 percent effective. My guess is that 20 percent effectiveness will do the trick. The problem you face is a PR problem. Most people hear the word “vaccine” and think “super-power that keeps me from getting sick.” But to be honest you would have to say “You might still get sick, but if everyone takes the vaccine and also exercises some degree of precaution, the virus will die out and eventually you won’t have to worry about it.”
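To make the arithmetic behind that guess explicit, here is a minimal sketch in the simplest homogeneous-mixing model; the R0 values are illustrative assumptions on my part, not estimates:

    # Sketch: minimum vaccine efficacy needed for eradication when everyone
    # is vaccinated, under homogeneous mixing. With full coverage, the
    # effective reproduction number is R0 * (1 - e), so eradication
    # (R_eff < 1) requires e > 1 - 1/R0.
    def min_efficacy_for_eradication(r0):
        return 1.0 - 1.0 / r0

    # Illustrative values: 1.2 for a population also taking precautions,
    # 2.5 for no precautions at all.
    for r0 in [1.2, 1.5, 2.5]:
        print(f"R0 = {r0}: need efficacy above {min_efficacy_for_eradication(r0):.0%}")
    # At R0 = 1.2 the threshold is about 17%, which is why 20 percent
    # effectiveness plus some degree of precaution could plausibly do the trick.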

2. But if your goal for the vaccine is to make high-risk populations safe, including people whose working conditions might expose them to high viral loads, then you want the super-power story to be true. I would want to see that high viral load does not degrade the effectiveness of the vaccine, and I would want to see that being in a high-risk category does not degrade the effectiveness of the vaccine.

3. The most I can find out about the Pfizer study is from this press release.

The first primary objective analysis is based on 170 cases of COVID-19, as specified in the study protocol, of which 162 cases of COVID-19 were observed in the placebo group versus 8 cases in the BNT162b2 group. Efficacy was consistent across age, gender, race and ethnicity demographics. The observed efficacy in adults over 65 years of age was over 94%.

There were 10 severe cases of COVID-19 observed in the trial, with nine of the cases occurring in the placebo group and one in the BNT162b2 vaccinated group.

There are a number of comparisons you could make between the placebo group and the treatment group.

–number of people who tested positive for the virus
–number of people who tested positive and showed symptoms (this appears to be what they used)
–number of “severe cases” (this was reported in the press release)
–number of hospitalizations (is this the same as “severe cases”?)
–number of deaths

Suppose that nobody in either group died from the virus. A headline that says “Vaccine prevents zero deaths” would not be very inspiring, would it?

When a study can look at many possible outcome measures and chooses to report only those that favor the drug, this is known as p-hacking. I don’t know that Pfizer was p-hacking, but I don’t know that they weren’t.
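To illustrate the general worry in the abstract (this is about the statistics of multiple outcome measures, not a claim about this particular trial), here is a minimal simulation of a drug with no effect at all, evaluated on several independent outcome measures:

    # Sketch: if a trial can choose among k outcome measures after the fact,
    # the chance that at least one looks "significant" at p < 0.05 grows far
    # beyond 5 percent, even when the drug does nothing.
    import random

    def finds_something(k_outcomes, alpha=0.05):
        # Under the null hypothesis, each outcome's p-value is uniform on [0, 1].
        return any(random.random() < alpha for _ in range(k_outcomes))

    random.seed(0)
    n_trials = 100_000
    for k in [1, 5, 10]:
        rate = sum(finds_something(k) for _ in range(n_trials)) / n_trials
        print(f"{k} outcome measures: {rate:.1%} of null trials report a 'success'")
    # Roughly 5%, 23%, and 40%, which is why pre-specifying the primary
    # outcome matters.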

UPDATE: the protocol specified the outcome measures in advance, so no p-hacking.

The context here is one in which people who spread the virus differ greatly in their ability to spread it (at least, that seems like a good guess), and people who come in contact with the virus differ greatly in the extent to which they get sick. In that context, a ratio of 162/22000 compared to 8/22000 is promising but not definitive. I would be much more impressed if it were 1620/22000 and 80/22000. With the smaller numbers, I can think of a dozen ways to get those results without the vaccine actually being effective.
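For what it is worth, here is a back-of-the-envelope way to put an uncertainty band on those counts. It is a sketch, assuming equal-size arms (so that under "no effect" each case is equally likely to land in either arm) and a flat Beta(1,1) prior, which is my choice for illustration:

    # Sketch: treat the split of 170 cases (8 vaccine, 162 placebo) as a
    # binomial draw. With equal-size arms, vaccine efficacy relates to the
    # share of cases in the vaccine arm, theta, by VE = 1 - theta / (1 - theta).
    from scipy.stats import beta

    vaccine_cases, placebo_cases = 8, 162

    def ve(theta):
        return 1.0 - theta / (1.0 - theta)

    # Beta(1,1) prior plus a binomial likelihood gives a Beta posterior.
    post = beta(1 + vaccine_cases, 1 + placebo_cases)
    theta_lo, theta_hi = post.ppf(0.025), post.ppf(0.975)
    print(f"point estimate of efficacy: {ve(vaccine_cases / 170):.1%}")
    # High theta means low efficacy, so the interval flips:
    print(f"95% credible interval: {ve(theta_hi):.1%} to {ve(theta_lo):.1%}")

Note that this only quantifies sampling noise in the case split; it assumes away the kinds of problems I am worried about, such as the two arms differing in exposure or in spreading behavior.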

UPDATE: Now that I have seen the protocol, I can go back to the first two points. For the purpose of trying to eradicate the virus with universal vaccination, you don't need much efficacy. But I think you want to be really, really confident that there is some efficacy, because otherwise you will have blown it with the public if your vaccine campaign does not eradicate the virus. If I were the public official in charge of making the decision, I would want to see a larger number of cases in the sample before making this kind of bet.

For the purpose of protecting vulnerable populations, you need to conduct a different protocol, in my opinion. Again, you want to know how the vaccine does as viral load goes up and as vulnerability of the individual goes up. I think that would argue for a different study protocol altogether.

Testing a vaccine

A follow-up/clarification to my earlier post:

I believe in what I call the Avalon-Hill model of how the virus affects people. That is, it depends on a combination of viral load and patient vulnerability. Accordingly, I would like to see a vaccine tested on various combinations of these factors. That means that the experimenter should control the viral load rather than leave it to chance in the context of selection bias (people who volunteer for the trial may be behaving in ways that reduce their probability of being exposed to high viral load).
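A minimal sketch of the kind of factorial design I have in mind (the cell labels and sample size are placeholders, not a real protocol):

    # Sketch: a challenge-trial design that crosses controlled viral load
    # with subject vulnerability, so the vaccine is tested in every cell.
    from itertools import product

    viral_loads = ["low load", "high load"]  # set by the experimenter
    risk_groups = ["low risk", "high risk"]  # e.g., by age or obesity
    arms = ["vaccine", "placebo"]

    subjects_per_cell = 25  # placeholder number
    for arm, risk, load in product(arms, risk_groups, viral_loads):
        print(f"{subjects_per_cell} subjects: {arm}, {risk}, {load}")
    # Eight cells; the interesting comparisons are vaccine vs. placebo
    # within the high-load and high-risk cells.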

In principle, that means assigning a high viral load to some high-risk subjects in both the treatment group and the placebo group. That could discomfit the experimenter, not to mention the experimental subjects.

But if you don’t do that, what have you learned? If the most severe cases in the real world come from people exposed to high viral loads, and almost no one in your trial was exposed to a high viral load, then you have at best shown that the vaccine is effective under circumstances where it is least needed.

Increased longevity for victims of violence

Roger Dobson writes,

a team from Massachusetts University and Harvard Medical School found that technological developments had helped to significantly depress today’s murder rates, converting homicides into aggravated assaults.

Pointer from Tyler Cowen. Some thoughts.

1. I give the study a less than 50 percent chance of holding up. The method seems unreliable.

The team looked at data going back to 1960 on murder, manslaughter, assault, and other crimes. It merged these data with health statistics and information on county level medical resources and facilities, including trauma centres, population, and geographic size. The researchers then worked out a lethality score based on the ratio of murders to murders and aggravated assaults.
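In other words, lethality = murders / (murders + aggravated assaults). The claim is that this ratio has fallen because better trauma care converts would-be murders into assaults, not because violence itself declined.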

2. As I understand it, many statistics on crime show a decline, not just murders. This analysis says the opposite: the rate of violent crime has remained high, and better treatment has reduced murders.

Robert Litan on Milgrom and Wilson

In Trillion Dollar Economists, Robert Litan wrote (p. 254),

Various economists had ideas for how the commission could best achieve these apparently conflicting objectives, but none were as influential as Paul Milgrom, his colleague Robert Wilson, and longtime senior economic advisor at the FCC Evan Kwerel. . .

The key mechanism these economists designed was the simultaneous auction that required bidders to remain active in every round of bidding in order to be eligible to receive any licenses
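As a toy illustration of the activity-rule idea (my simplification for exposition, not the actual FCC rules):

    # Toy sketch of an activity rule in a simultaneous multiple-round
    # auction: a bidder's eligibility (the number of licenses it may bid on)
    # can never exceed its activity in the previous round, so sitting out
    # shrinks what it can win later.
    def update_eligibility(eligibility, licenses_bid):
        return min(eligibility, licenses_bid)

    elig = 10  # starts eligible to bid on up to 10 licenses
    for planned_bids in [10, 7, 0, 9]:  # hypothetical bidding pattern
        bids = min(planned_bids, elig)  # cannot bid beyond eligibility
        elig = update_eligibility(elig, bids)
        print(f"bid on {bids} licenses, eligibility now {elig}")
    # Once the bidder sits out a round, its eligibility falls to zero and it
    # cannot re-enter, which is the incentive to stay active every round and
    # to reveal demand as the auction proceeds rather than snipe at the end.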

Litan’s 2014 book about economists whose work proved valuable in practice is remarkable in that it included several subsequent Nobel laureates: Richard Thaler (2017), William Nordhaus (2018), Banerjee/Duflo/Kremer (2019), and now Milgrom and Wilson (2020).