Update on the road to sociology

In September, Jeremy Horpedahl and I wrote,

The topics of gender, race and ethnicity, and inequality are important economic and social issues. In this paper, we analyze how often those topics are addressed in two outlets of the American Economic Association: peer-reviewed papers in the American Economic Review and the conference papers from the AEA’s annual meeting that are published in its Papers and Proceedings. We find that these topics have been increasingly represented in both of these outlets when considered as a group between 1991 and 2020. Published articles and conference papers addressing gender have seen the largest increase, both in absolute numbers and in percent of total papers.

The trend toward obsession with gender and race in economics seems to have accelerated. The recent Papers and Proceedings issue of the American Economic Review has 34 sessions, with 13 of them devoted to gender or ethnicity. If it were not for sessions devoted to COVID, the American Economic Association would now be 50 percent focused on race and gender.

UPDATE: See also John Cochrane’s latest.

Road to sociology watch

M. V. Lee Badgett, Christopher S. Carpenter and Dario Sansone report,

Public attitudes and policies toward LGBTQ individuals have improved substantially in recent decades. Economists are actively shaping the discourse around these policies and contributing to our understanding of the economic lives of LGBTQ individuals.

From the Journal of Economic Perspectives (an important mainstream journal), as reported by Timothy Taylor, the long-time managing editor.

Let’s be clear. The Democratic Party was once the home of economists who, while rather too confident in their technocratic skills for my taste, at least understood how to add, subtract, and distinguish demand from supply. They would not have embarked on the dangerous, inflation-stoking efforts of the current Administration.

But contemporary economists are thriving. They are contributing to the study of LGBTQ individuals! And the status of women in the profession is now a major concern!

Don’t trust doctors as statisticians

1. Daniel J. Morgan and others write,

These findings suggest that many practitioners are unaccustomed to using probability in diagnosis and clinical practice. Widespread overestimates of the probability of disease likely contribute to overdiagnosis and overuse.

Pointer from Tyler Cowen.

In 2008, Barry Nalebuff and Ian Ayres in Supercrunchers also reported that doctors tend to do poorly in basic probability. When I taught AP statistics in high school, I always used an example of a bad experience inflicted on me by a Harvard-trained physician who did not know Bayes’ Theorem.
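The base-rate trap that trips up physicians is easy to show with Bayes’ Theorem. A minimal sketch in Python, with illustrative numbers (1 percent prevalence, 90 percent sensitivity, 91 percent specificity) that are not taken from any study cited here:

```python
# Bayes' Theorem applied to a diagnostic test.
# All numbers are illustrative, not from the Morgan et al. study.
def posterior_prob(prevalence, sensitivity, specificity):
    """P(disease | positive test result) via Bayes' Theorem."""
    p_pos_given_disease = sensitivity
    p_pos_given_healthy = 1 - specificity
    # Total probability of a positive test in the population.
    p_pos = (prevalence * p_pos_given_disease
             + (1 - prevalence) * p_pos_given_healthy)
    return prevalence * p_pos_given_disease / p_pos

# A 1%-prevalence condition, tested with 90% sensitivity, 91% specificity:
print(round(posterior_prob(0.01, 0.90, 0.91), 3))  # ~0.092
```

Even with a fairly accurate test, fewer than one in ten positives actually has the condition, which is exactly the kind of overestimation Morgan and co-authors describe.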

2. From a 2014 paper by Ralph Pelligra and others,

The Back to Sleep Campaign was initiated in 1994 to implement the American Academy of Pediatrics’ (AAP) recommendation that infants be placed in the nonprone sleeping position to reduce the risk of the Sudden Infant Death Syndrome (SIDS). This paper offers a challenge to the Back to Sleep Campaign (BTSC) from two perspectives: (1) the questionable validity of SIDS mortality and risk statistics, and (2) the BTSC as human experimentation rather than as confirmed preventive therapy.

The principal argument that initiated the BTSC and that continues to justify its existence is the observed parallel declines in the number of infants placed in the prone sleeping position and the number of reported SIDS deaths. We are compelled to challenge both the implied causal relationship between these observations and the SIDS mortality statistics themselves.

I thank Russ Roberts for the pointer. This specific issue has been one of my pet peeves since 2016. See this post, for example. I think that back-sleeping is a terrible idea, and I never trusted the alleged evidence for it. Doctors do not understand the problem of inferring causation from non-experimental data, etc.

UPDATE: A commenter points to this Emily Oster post, reporting on a more careful study that supports back-sleeping.

The recent evolution of central banking in the U.S.

Timothy Taylor writes,

when I was teaching big classes in the late 1980s and into the 1990s, the textbooks all discussed three tools for conducting monetary policy: open market operations, changing the reserve requirement, or changing the discount rate.

Somewhat disconcertingly, when my son took AP economics in high school last year, he was still learning this lesson–even though it does not describe what the Fed has actually been doing for more than a decade since the Great Recession. Perhaps even more disconcertingly, when Ihrig and Wolla looked at the latest revisions of some prominent intro econ textbooks with 2021 publication dates, like the widely used texts by Mankiw and by McConnell, Brue and Flynn, they found that these books are still emphasizing open market operations as the main tool of Fed monetary policy.

I recommend the whole post. I think this is an important issue.

The way I see it, central bank practices moved away from the textbook story at least 40 years ago. There were three important steps.

1. Intervention via the market for repurchase agreements, commonly called the repo market.

2. The use of risk-based capital requirements (RBC) to steer the banking sector.

3. The expansion of bank reserves and the payment of interest on reserves (IOR).

I will discuss these in turn.

Number One Pick talks about complexity

Scott Alexander writes,

Recessions are fractally complicated. Not only do they have different causes, but the causes have different causes, and so on to infinity.

Most of the post is about psychiatric conditions, which he argues are even more complex.

Speaking of Scott Alexander, yesterday, the NYT published a piece on him that is totally misleading. If you read it at all, read it here, not at the NYT web site. Don’t feed trolls. Scott blogged a response, but for me the article was self-evidently dreck.

I have to say that Scott is not the first person about whom the NYT has published a lie intended to make him sound racist. I wrote to the NYT asking for a correction, but none was forthcoming. One of these days, somebody actually will sue them.

Administrative data in economic research

Timothy Taylor writes,

economic research often goes well beyond these extremely well-known data sources. One big shift has been to the use of “administrative” data, which is a catch-all term to describe data that was not collected for research purposes, but instead developed for administrative reasons. Examples would include tax data from the Internal Revenue Service, data on earnings from the Social Security Administration, data on details of health care spending from Medicare and Medicaid, and education data on teachers and students collected by school districts. There is also private-sector administrative data about issues from financial markets to cell-phone data, credit card data, and “scanner” data generated by cash registers when you, say, buy groceries.

Vilhuber writes: “In 1960, 76% of empirical AER [American Economic Review] articles used public-use data. By 2010, 60% used administrative data, presumably none of which is public use …”

The quote is from a paper by Lars Vilhuber.

My thoughts:

1. The Census Bureau has procedures in place for allowing researchers to use administrative data while making sure that no individual record can be identified. I know this because one of my daughters worked at Census in this area for a few years.

2. The private firms that collect data as part of their business are not going to waste resources making sure that the data is free from coding mistakes and other errors. Researchers who are used to assuming that they are working with clean data are going to be surprised.

3. The data surrounding COVID are particularly treacherous. I found this out first-hand back when I was trying to follow the virus crisis closely.

4. This course should be taught more widely. As a point of trivia, the professor, Robert McDonald, was an undergraduate roommate of Russ Roberts (now the host of EconTalk) and also shared an apartment with me our first year of graduate school.

The vaccine, the market, and government

John Cochrane writes,

In a free market, vaccines would be sold to the highest bidder. The government could buy too, but you wouldn’t be forbidden from buying them yourself, and companies and schools would not be forbidden from buying them for their employees. Businesses would likely pay top dollar to vaccinate crucial employees who are off the job due to the pandemic. And only businesses know just which employees are crucial to the economy, and which can wait.

Government rationing gives power to public officials. It does not necessarily lead to superior moral outcomes. So far, the main accomplishment has been to slow the rollout.

Intervening for racial equality

Glenn Loury says,

I must address myself to the underlying fundamental developmental deficits that impede the ability of African Americans to compete. If, instead of doing so, I use preferential selection criteria to cover for the consequences of the historical failure to develop African American performance fully, then I will have fake equality. I will have headcount equality. I will have my-ass-is-covered-if-I’m-the-institution equality. But I won’t have real equality.

I recommend the entire interview.

Meanwhile, Lilah Burke reports,

In 2013, the University of Texas at Austin’s computer science department began using a machine-learning system called GRADE to help make decisions about who gets into its Ph.D. program — and who doesn’t. This year, the department abandoned it.

Before the announcement, which the department released in the form of a tweet reply, few had even heard of the program. Now, its critics — concerned about diversity, equity and fairness in admissions — say it should never have been used in the first place.

The article does not describe GRADE well enough for me to say whether or not it was a good system. For me, the key question is how well it predicts student performance in computer science.

I draw the analogy with credit scoring. If a credit scoring system correctly separates borrowers who are likely to repay loans from borrowers who are likely to default, and its predictions for black applicants are just as accurate, then it is not racially discriminatory, regardless of whether the proportion of good scores among blacks matches that among whites.
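The non-discrimination criterion described here is calibration within groups: within each group, average predicted repayment should match average observed repayment. A minimal sketch, with the record format and all numbers invented for illustration:

```python
# Calibration check by group. Each record is a tuple of
# (group label, predicted probability of repayment, actually repaid).
# Data format and numbers are invented for illustration.
def calibration_by_group(records):
    """Return {group: (mean predicted repay prob, observed repay rate)}."""
    stats = {}
    for group, prob, repaid in records:
        s = stats.setdefault(group, {"pred": 0.0, "obs": 0, "n": 0})
        s["pred"] += prob
        s["obs"] += int(repaid)
        s["n"] += 1
    return {g: (s["pred"] / s["n"], s["obs"] / s["n"])
            for g, s in stats.items()}
```

Two groups can be equally well calibrated even when their average scores differ, which is the distinction being drawn: accuracy within each group, not equality of score distributions across groups.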

David Arnold and co-authors find that

Estimates from New York City show that a sophisticated machine learning algorithm discriminates against Black defendants, even though defendant race and ethnicity are not included in the training data. The algorithm recommends releasing white defendants before trial at an 8 percentage point (11 percent) higher rate than Black defendants with identical potential for pretrial misconduct, with this unwarranted disparity explaining 77 percent of the observed racial disparity in algorithmic recommendations. We find a similar level of algorithmic discrimination with regression-based recommendations, using a model inspired by a widely used pretrial risk assessment tool.

That does seem like a bad algorithm. The authors evidently believe that they have a better model for predicting pretrial misconduct than the one the city used. If so, the city should be using the authors’ model, not the algorithm it actually chose.
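The mechanism the authors describe, disparate recommendations even with race excluded from the training data, can be shown with a toy simulation. This is not the Arnold et al. model; all numbers are invented. The point is that an algorithm conditioning on a race-correlated proxy reproduces the disparity:

```python
import random

# Toy simulation: an algorithm that never sees race can still produce
# racially disparate recommendations if it conditions on a proxy.
# All numbers are invented, not taken from Arnold et al.
random.seed(0)

def recommend_release(score):
    # The rule looks only at the risk score, never at race.
    return score < 0.5

groups = {"white": [], "Black": []}
for _ in range(10_000):
    race = random.choice(["white", "Black"])
    # Everyone has the same true misconduct risk...
    true_risk = 0.3
    # ...but the measured score is shifted upward for Black defendants
    # (e.g., arrest histories inflated by heavier policing).
    score = true_risk + (0.15 if race == "Black" else 0.0) + random.gauss(0, 0.1)
    groups[race].append(recommend_release(score))

rates = {g: sum(v) / len(v) for g, v in groups.items()}
print(rates)  # release rate is substantially higher for white defendants
```

Despite identical true risk, release rates diverge by group, which is the pattern the quoted estimates describe.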

I take Loury as saying that intervening for racial equality late in life, at the stage where you are filling positions in the work place or on a college campus, is wrong, especially if you are lowering standards in order to do so. Instead, you have to do the harder work of improving the human capital of the black population much earlier in their lives.

It seems to me that Loury’s warning about the harms of affirmative action is being swamped these days by a tsunami of racialist ideology. Consider the way that a major Jewish movement seeks to switch religions.

In order to work toward racial equality through anti-racism, we must become aware of the many facets of racial inequality created by racism in the world around us and learn how to choose to intervene. Join us as we explore:

– How race impacts our own and each other’s experiences of the world

– The choice as bystander to intervene or overlook racist behavior

– How to be an anti-racist upstander

There is more of this dreck at the link.

I foresee considerable damage coming from this. Institutions and professions where I want to see rigor and a culture of excellence are being degraded. Yascha Mounk, who doesn’t think of himself as a right-wing crank, recently wrote Why I’m Losing Trust in the Institutions.

Finally, this seems like as good a post as any to link to an essay from last June by John McWhorter on the statistical evidence concerning police killings.