Set Up for Failure

W. Scott Frame, Kristopher Gerardi, and Joseph Tracy write,

The extinction of the private subprime market and the rapid rise of the government insurance programs may strike many as a largely positive development. After all, it was the subprime segment of the mortgage market that triggered the global financial crisis and subsequent Great Recession as subprime loans defaulted at an astronomical rate during the housing bust. However, while government-insured mortgages are typically underwritten with more rigor and discipline than private subprime loans, they are not low-risk loans. The combination of high leverage and low credit scores documented above translates into extremely high default rates. The table above shows that five-year cumulative default rates (CDRs) by year of origination varied between 5 and 25 percent over our sample period. To put these numbers into perspective, the five-year CDRs associated with loans insured by Fannie Mae and Freddie Mac (the housing government-sponsored enterprises, or GSEs) are typically an order of magnitude lower. According to our calculations, the 2002 and 2009 vintages of GSE loans had five-year CDRs of approximately 2 percent, while Ginnie Mae’s same vintages had five-year CDRs of almost 10 percent and 13 percent, respectively.

Pointer from Mark Thoma.

The Federal Housing Administration (FHA) sets up many borrowers to fail. One could argue that these borrowers put up so little of their own money that, from their point of view, the risk is worth taking. It is the taxpayers who are being set up to fail.

Consolidating the Central Bank and the Treasury

Thomas Klitgaard and Harry Wheeler write,

The discussion above offers up a perspective on what is meant by “monetizing debt.” This term refers to a central bank buying government bonds and promising to keep them on its balance sheet with the result that the increase in reserves in the banking system translates into higher prices. This outcome, though, requires that the central bank not pay the appropriate interest rates on reserves. If it does, then an asset purchase program is just an effort that shortens the maturity of public-sector debt and will likely have few or no implications for future inflation.

Pointer from Mark Thoma.

Another implication is that it makes the government’s interest cost more sensitive to movements in short-term interest rates. A sudden loss of investor confidence in the government that raises interest rates could therefore become self-reinforcing. And if the only way out of such a debt crisis is to print money, then there are implications for future inflation after all.
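The mechanism can be sketched with a back-of-the-envelope calculation (my own illustration, with made-up numbers, not from Klitgaard and Wheeler): swapping long-term bonds for reserves that pay the short rate means more of the consolidated government’s debt reprices whenever short rates move.

```python
# Toy illustration with invented numbers: when a central bank swaps long-term
# bonds for reserves paying the short rate, more of the consolidated
# government's debt effectively reprices whenever short rates move.

def annual_interest_cost(debt, share_short, long_rate, short_rate):
    """Interest bill when a fraction `share_short` of debt pays the short rate."""
    return debt * (share_short * short_rate + (1 - share_short) * long_rate)

DEBT = 1000.0                # outstanding debt, in billions (hypothetical)
LONG_RATE = 0.04             # locked-in coupon on long-term bonds
BEFORE, AFTER = 0.01, 0.05   # short rate jumps by four percentage points

# Before asset purchases: only 10 percent of debt reprices at the short rate.
rise_small = (annual_interest_cost(DEBT, 0.1, LONG_RATE, AFTER)
              - annual_interest_cost(DEBT, 0.1, LONG_RATE, BEFORE))

# After asset purchases shorten the effective maturity: 60 percent reprices.
rise_large = (annual_interest_cost(DEBT, 0.6, LONG_RATE, AFTER)
              - annual_interest_cost(DEBT, 0.6, LONG_RATE, BEFORE))

print(rise_small, rise_large)  # roughly 4 vs. 24: six times the sensitivity
```

The same rate shock costs six times as much once most of the debt is, in effect, floating-rate.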

Jason Furman’s Puzzle

He writes,

In the absence of economic rents, the return on corporate capital should generally follow the path of interest rates, which reflect the prevailing return to capital in the economy. But over the past three decades, the return to productive capital generally has risen, despite the large decline in yields on government bonds.

Pointer from Mark Thoma.

For a moment, suppose that there is just one interest rate. If “the” interest rate is low, then the rate of return on new capital ought to be low as well. Otherwise, firms would borrow at the low interest rate in order to purchase new capital, bidding the return down until the two were aligned.

One possibility is that the marginal return on new capital is low, but the returns on existing capital are high. That would be true in an economy where there are economic rents available, due to monopoly power and/or government favoritism. I gather that this is the story that Furman thinks is right.

I would note that there is more than one interest rate. It could be that firms trying to invest in new capital at the margin face a high interest rate. Microsoft can borrow at a low interest rate, but when it buys LinkedIn, that is not new capital investment.

Despite that possibility, my inclination is to believe that Furman is onto something. Read his whole essay.

James Surowiecki on the Universal Basic Income

He writes,

One striking thing about guaranteeing a basic income is that it’s always had support both on the left and on the right—albeit for different reasons. Martin Luther King embraced the idea, but so did the right-wing economist Milton Friedman, while the Nixon Administration even tried to get a basic-income guarantee through Congress. These days, among younger thinkers on the left, the U.B.I. is seen as a means to ending poverty, combatting rising inequality, and liberating workers from the burden of crappy jobs. For thinkers on the right, the U.B.I. seems like a simpler, and more libertarian, alternative to the thicket of anti-poverty and social-welfare programs.

Pointer from Mark Thoma. A few thoughts of mine:

1. The apparent left-right consensus breaks down if, in the last sentence, the left reads “alternative to” as “in addition to.”

2. There is a question of how to finance the UBI. For those on the right, the answer is by getting rid of the other programs. For those on the left, it may be less clear. See (1).

3. The existing approach to anti-poverty programs fits with what in my forthcoming book I describe as real-world economic policy: stimulate demand, restrict supply. Food stamps stimulate demand for food. Housing subsidies stimulate demand for housing. Student loan subsidies stimulate demand for accredited colleges. Medicaid stimulates demand for medical services.

A UBI would allow the recipients to decide on their own priorities. It thus lacks the base of support that the other programs have.

The New Consensus on Macroeconomics

Noah Smith writes,

Assuming Wolfers and DeLong and I aren’t just blowing smoke out of our rear ends, and DSGE models really don’t work, why do so many macroeconomists spend so much time on them?

Pointer from Mark Thoma. Smith refers to Brad DeLong’s post, in which he writes,

DSGE macro–has indeed proven a degenerating research program and a catastrophic failure: thirty years of work have produced no tools for useful forecasting or policy analysis.

As for Noah’s possible explanations for how the profession got into that cul-de-sac, and why it remains there, I vote for a combination. I endorse the following snippets of his post:

Maybe since macro data is very uninformative, no one actually knows what good research looks like, so they all settle on some random thing

In my new book, I say that economists in general, and macroeconomists in particular, deal in interpretive frameworks rather than in testable hypotheses.

it’s just fun for some people to do

I always suspected that Stan Fischer and Olivier Blanchard liked their preferred models because they found the math fun. They found it even more fun when they could see that other people had trouble following the math.

if the prevailing research paradigm is not really better than alternatives, then you probably want macroeconomists who are willing to “play the game”, as it were. So DSGE might be an expensive way of proving that you’re willing to spend a lot of time and effort doing silly stuff that the profession tells you to do.

Sad, but true.

I see this as vindication. During the thirty years that I abandoned interest in academic macroeconomics, I missed nothing. It was not me that was being obtuse. It was the profession.

Indulging in Confirmation Bias

Here is some confirmation for my view of the housing bubble. John Geanakoplos and others wrote,

Notice that if we freeze leverage (LTV) at constant levels, the boom gets dramatically attenuated, and the bust disappears.

This statement is based on a simulation of an “agent-based” model of house prices. Pointer from Eric Beinhocker, via Mark Thoma.
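To see the intuition behind the quoted result, here is a stripped-down sketch of the leverage channel (my own toy model with invented numbers, not the authors’ agent-based simulation): a buyer with a fixed down payment can bid up to downpayment / (1 − LTV), so leverage that expands in the boom and collapses in the bust amplifies prices, while frozen leverage shuts the channel down.

```python
# Toy model (mine, not the paper's): the price a fixed downpayment can
# support at each loan-to-value ratio. Higher LTV means a smaller equity
# slice, and therefore a bigger maximum bid.

def bid_prices(ltv_path, downpayment=20.0):
    """Maximum purchase price supported by a fixed downpayment at each LTV."""
    return [downpayment / (1.0 - ltv) for ltv in ltv_path]

# Invented LTV path: leverage creeps up in the boom, collapses in the bust.
cycle_ltv  = [0.80, 0.85, 0.90, 0.95, 0.97, 0.90, 0.80, 0.70]
frozen_ltv = [0.80] * len(cycle_ltv)   # the counterfactual: freeze leverage

cycle_prices  = bid_prices(cycle_ltv)    # peaks near 667, crashes to ~67
frozen_prices = bid_prices(frozen_ltv)   # flat at 100: boom and bust vanish
```

With varying leverage, the maximum bid more than sextuples and then falls below its starting point; with leverage frozen, this channel contributes nothing to the boom or the bust.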

Beinhocker writes,

rather than predict we should experiment. Policymaking often starts with an engineering perspective – there is a problem and government should fix it. For example, we need to get student mathematics test scores up, we need to reduce traffic congestion, or we need to prevent financial fraud. Policy wonks design some rational solution, it goes through the political meat grinder, whatever emerges is implemented (often poorly), unintended consequences occur, and then – whether it works or not – it gets locked in for a long time. An alternative approach is to create a portfolio of small-scale experiments trying a variety of solutions, see which ones work, scale-up the ones that are working, and eliminate the ones that are not.

American pragmatist John Dewey also thought that technocrats should take an experimental approach. That is not a new idea. (I learned this from Jeffrey Friedman, who sent me a draft from his forthcoming book.) Of course, my view is that I would rather see experiments come from the market than from technocrats.

Later, Beinhocker writes,

A major challenge for these more adaptive approaches to policy is the political difficulty of failure. Learning from a portfolio of experiments necessitates that some experiments will fail. Evolution is a highly innovative, but inherently wasteful process – many options are often tried before the right one is discovered. Yet politicians are held to an impossibly high standard, where any failure, large or small, can be used to call into question their entire record.

I would argue that avoidance of failure is natural in any large organization, not just government. That is why I think that markets are better able to conduct experiments to solve problems.

I found Beinhocker’s essay interesting. However, if we are going to try to improve economics, it is important to incorporate behavioral policy-making and politics into the analysis. Do not simply assume a benevolent, rational technocrat as the decision-maker.

Speaking of confirmation bias, a recent Instapundit post linked to an old essay of mine, one which speaks to this comparison between expertise mediated by markets and expertise mediated by government.

Epistemology and Economics

I am starting to read up on the topic. I see epistemology as the attempt to articulate what criteria we are using to evaluate the usefulness of economic analysis. Various initial thoughts:

1. I emailed Pete Boettke for advice on what to read to get me started on the topic, and not surprisingly he had useful suggestions. I told him that I of course know about Milton Friedman’s classic position.

2. One can argue that there is no need for epistemology. You could just assume that good economics is what leading economists do. Without having to articulate what they do, just try to do things similarly. However, for someone like myself, who is inclined toward heterodoxy and to doubt that leading economists are doing useful economic analysis, that is not the right answer.

3. One of the articles that Boettke suggested was by Dan Hausman, who wrote

Instead of attempting to discover what methodology neoclassical economists actually practice and to think seriously about how that methodology might be justified, … critics … have usually relied on indefensible philosophical theories of science to support broad condemnations. … Philosophers have, however, little to offer by way of informative well-supported systematic theories of the scientific enterprise and that little does not lend itself to mechanical application.

In other words, if you have a problem with how economists are doing things, that is your problem, not the economists’. Those who can, do, and those who can’t, do epistemology.

To put it another way, I read Hausman as saying that the task of the epistemologist is to figure out what economists do and then justify it. Again, from where I sit as a heterodox economist, this is hardly satisfactory.

What I contend in my forthcoming book is that economic theories are interpretive frameworks. These cannot be tested decisively. They are not falsifiable in the Popperian sense. Think of AS-AD. There is no combination of output and price movements that can falsify it. If they move together, you call it a demand shock. If they move in opposite directions, you call it a supply shock.
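The point can be made mechanical. In the sketch below (my illustration, not from the book), every possible combination of output and price movements receives an AS-AD label, so no observation can count against the framework:

```python
# Toy illustration: the AS-AD framework assigns every possible pair of
# output and price movements to some shock, so no data point refutes it.

def asad_label(d_output, d_price):
    """Label a (change in output, change in prices) observation as AS-AD would."""
    if d_output * d_price > 0:
        return "demand shock"    # output and prices move together
    if d_output * d_price < 0:
        return "supply shock"    # output and prices move in opposite directions
    return "offsetting shocks"   # even no movement at all gets an explanation

observations = [(2.0, 1.5), (-1.0, 3.0), (0.5, -0.2), (0.0, 0.0)]
for obs in observations:
    print(obs, "->", asad_label(*obs))
```

Because the function is total, there is no input it fails to explain, which is exactly what it means for the framework to be unfalsifiable by this kind of data.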

What I do not address in the book is the question of how best to evaluate competing interpretive frameworks. I believe that it is an important question. I am not convinced that the best way to answer it is to study how frameworks become popular in the profession. I think that factors such as path dependence (model X got published in a good journal, so something that pertains to model X can also get published in a good journal) and ideological preference play outsized roles in such evaluations in practice.

Somewhat related: Noah Smith on objectivity and economics. Pointer from Mark Thoma.

Four Forces and Urbanism

Justin Fox writes,

Basically, urban life is becoming a luxury good in much of the U.S., in part because there isn’t enough of it to go around.

Pointer from Mark Thoma.

I think of this in terms of the four forces. The New Commanding Heights of education and health care tend to concentrate in inner cities. This helps draw affluent professionals to cities, leading to gentrification. Then you get the sorts of amenities that affluent professionals like: bike lanes, sushi restaurants, yoga studios. The urban-suburban demographic becomes sorted by taste as well as by occupation.

Axel Leijonhufvud vs. MIT Economics

He writes,

For concreteness, think of a controlled experiment in a natural science as an example of a closed system. The conditions of an experiment controlled in this sense are never met or approximated in macroeconomics. (Adding more variables to the right handside of our regression equations will never get us there). But in constructing intertemporal models – such as in DSGE – we insist on the make-believe that the macroeconomy is a closed system

Link found here, thanks to a pointer from Mark Thoma.

Leijonhufvud was an early influence on me, and I still feel a strong affinity towards him.

PSST: The Idea Is Spreading

Mark Muro writes,

Adjustment happens, but it’s a far more painful process than the models and textbooks have imagined. Policy, and the economists, should take it seriously.

Pointer from Mark Thoma.

Difficulty with adjustment is the essence of the PSST story for recessions. If the economy were a GDP factory, then the factory foreman would be temporarily confused about which job to give to which person. Of course, for the factory foreman, substitute the set of entrepreneurs and potential entrepreneurs.

Muro cites three recent papers, two of which I have covered. The new one is by Danny Yagan, who writes,

living in 2007 in a below-median 2007-2009-fluctuation area caused those workers to have a 1.3%-lower 2014 employment rate. Hence, U.S. local labor markets are limitedly integrated: location has caused long-term joblessness and exacerbated within-skill income inequality. The enduring impact is not explained by more layoffs, more disability insurance enrollment, or reduced migration. Instead, the employment outcomes of cross-area movers are consistent with severe-fluctuation areas continuing to depress their residents’ employment. Impacts are correlated with housing busts but not manufacturing busts, possibly reconciling current experience with history. If recent trends continue, employment rates are estimated to remain diverged into the 2020s—adding up to a relative lost decade for half the country. Employment models should allow market-wide shocks to cause persistent labor force exit, leaving employment depressed even after unemployment returns to normal.

The standard remedies for adjustment, including trade adjustment assistance and worker re-training, are among the least effective programs that government has ever tried. That is not surprising: if decentralized entrepreneurs are having trouble solving the calculation problem, then the socialist calculation problem facing government planners is even worse.