Does Inflation Have a Zero Bound?

Chris House writes,

If the 1 percent reduction in core inflation is sufficient for the Keynesian model to generate the huge recession we just went through then where was the huge recession in the late 1990s? Where was the enormous recession in 1986?

I think I made this point somewhere along the way, also.

When the Phillips Curve was trumpeted by Samuelson and Solow, the thinking was that causality ran from unemployment to (wage) inflation. In the 1970s, the idea was that causality ran from inflation to unemployment. It is this latter view that is now impossible to defend. To the extent that New Keynesians accepted the inflation-to-unemployment causality story (which they seemed to do), they are in trouble, as I see it.

However, even for older Keynesians, who see causality as running from unemployment to inflation, recent experience is somewhat puzzling. I wrote this in 2010:

Looking ahead, the next 12 to 18 months should be interesting. The unemployment rate has been so far above 6 percent for so long that if the Fuhrer equation holds up, we should be seeing some pretty strong downward pressure on inflation. Instead, if inflation remains between 0 and 2 percent, this will look to me like another anomaly for the Phillips Curve.

But, you know, the beauty of 1970s undergraduate textbook macro is that you can use it to tell a story for anything. What we seem to be getting is a story that suggests a zero lower bound for inflation. You cannot cut nominal wages, so there.
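
For concreteness, the relation that the unemployment-to-inflation story has in mind is an accelerationist Phillips Curve, something like the generic textbook form below (not necessarily Fuhrer's exact specification):

\[
\pi_t \;=\; \pi_{t-1} \;-\; \beta\,(u_t - u^{\ast}) \;+\; \epsilon_t, \qquad \beta > 0.
\]

With unemployment stuck well above the natural rate, this predicts steady disinflation quarter after quarter. Inflation holding between 0 and 2 percent instead is the anomaly, unless you bolt a floor onto the equation in the form of downward nominal wage rigidity.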

Mark Thoma has more links, and his conclusion is worth quoting:

this is an empirical question that will be difficult to resolve empirically (because there are so many different ways to estimate a Phillips curve, and different specifications give different answers, e.g. which measure of prices to use, which measure of aggregate activity to use, what time period to use and how to handle structural and policy breaks during the period that is chosen, how should natural rates be extracted from the data, how to handle non-stationarities, if we measure aggregate activity with the unemployment rate, do we exclude the long-term unemployed as recent research suggests, how many lags should be included, etc., etc.?).

Also, read the follow-up. The phrase “an empirical question that will be difficult to resolve empirically” pretty much sums up macroeconomics, as far as I am concerned.

The Incentive to Invest

Timothy Taylor writes,

The very slow rebound in investment isn’t obvious to explain.

Read the whole thing. He walks through such explanations as uncertainty, difficulty obtaining financing, low aggregate demand (operating through what Keynesians used to call the Accelerator Effect), and investment that has become less capital-intensive. On the latter, he writes,

it’s often a form of investment that involves reorganizing their firm around new information and communications technology–whether in terms of design, business operation, or far-flung global production networks. As a result, this form of investment doesn’t involve enough demand to push the economy to full employment.

All of these explanations are from a conventional AS-AD perspective. From a PSST perspective, I would look for bottlenecks, particularly in the service sector, where growth is most likely to occur. In the Setting National Economic Priorities Project, the following are considered possible bottlenecks:

–labor-market distortions, including high implicit marginal tax rates embedded in means-tested benefit programs
–the research/FDA approval/patent regime in medicine, given the state of the art in genetics and biochemistry
–the FCC spectrum regime, given the state of the art in spectrum utilization possibilities
–occupational licensing
–regulation of medical practice
–regulation/accreditation barriers to education innovation

The WSJ adds another layer to the mystery.

corporations used almost $600 billion in cash to buy back their own shares in 2013 and the uptrend continues into 2014. While that’s a positive trend for household wealth, it raises questions about companies’ commitment to move ahead with capital spending projects.

Remember Tobin’s q theory of investment? It says that when stock prices are high relative to the value of existing capital, firms will invest more. Instead, we are seeing firms buy back stock to try to raise q.
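
As a reminder, the standard definition (nothing specific to the WSJ piece) is

\[
q \;=\; \frac{\text{market value of installed capital}}{\text{replacement cost of that capital}},
\]

so q above 1 means that a dollar of new capital adds more than a dollar of market value, which is when the theory says investment should be strong. High stock prices should be delivering exactly that signal.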

This deserves more thought and analysis.

I, Too, Remember the 1970s

Robert Waldmann writes,

There is a blog discussion among Keynesian to New Keynesian economists on the cause of the new Classical & Rational expectations revolutions. I have been typing my usual comments. I will now try a post. The question is: how important was stagflation in causing the abandonment of old Keynesian models? I basically agree with Simon Wren-Lewis this time.

On the other hand, Noah Smith writes,

Sure, the old paradigm could explain the 70s, but it didn’t predict it. You can always add some wrinkles after the fact to fit the last Big Thing that happened. When people saw the “Keynsian” economists (a label I’m using for the pre-Lucas aggregate-only modelers) adding what looked like epicycles, they probably did the sensible thing, and narrowed their eyes, and said “Wait sec, you guys are just tacking stuff on to cover up your mistake!” The 70s probably made aggregate-only macro seem like a degenerate research program.

Pointers from Mark Thoma.

Let me try to clear some things up. In some sense, Milton Friedman predicted stagflation in his 1967 Presidential address, but he did not use a New Classical Model. By introducing rational expectations, Lucas produced the New Classical Model.

I want to talk about two stages of “conversion” of mainstream macro. Stage one was the conversion to Milton Friedman’s Presidential Address, delivered in 1967 and published in 1968. Stage two was the conversion from Friedman’s non-rational-expectations version to the New Classical model, which had rational expectations.

Stage one:

–By the mid-1970s, mainstream macro featured a natural rate of unemployment. Before the Great Stagflation, it didn’t.

–By the mid-1970s, mainstream macro featured an equation in which the price level was determined by the money supply. Before the Great Stagflation, it didn’t.

Stage two took over graduate macro without really penetrating freshman macro. It changed from Friedman’s story, which implicitly used backward-looking expectations, to Lucas’ model, which used forward-looking expectations.

In my view, there was no empirical event that drove the stage two conversion. That is, there was nothing that took place in the 1970s that required a rational-expectations explanation. You could predict stagflation just using Friedman’s model with backward-looking expectations. So maybe I am agreeing here with Wren-Lewis rather than with Smith, or maybe I am disagreeing a tad with both.

In my view, what drove stage two was (a) the inconsistency between believing in the paradigm of rational agents and believing in backward-looking expectations and (b) the aura of mathematical superiority that was associated with rational expectations modeling. Honestly, I think that (b) had a lot to do with it.

My macro memoir blames Stan Fischer at MIT for a lot of (b).

Larry Summers on SecStag

He writes,

With very low real interest rates and with low inflation, this also means very low nominal interest rates, so one would expect increasing risk-seeking by investors. As such, one would expect greater reliance on Ponzi finance and increased financial instability.

Pointer from Tyler Cowen.

Larry thinks that if the government spends more money, it will work on improving JFK airport. If government does not spend more, then the private sector will take crazy risks.

I mean, if you think that government spends money wisely and the private sector does not, you do not need a whole theory of secular stagnation. To me, it looks like an opinion masquerading as a theory.

Maybe I am being too grumpy. The actual grumpy economist™, John Cochrane, has this to say.

The natural rate is, per Laubach and Williams, about -0.5%. But we still have 2% inflation, so the actual real interest rate is -1.5%, well below -0.5%. With 2% inflation, we need something like a 4-5% negative “natural rate” to cause a serious zero bound problem. While Summers’ discussion points to low interest rates, it is awfully hard to get any sensible economic model that has a sharply negative long run real rate.
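
Back-of-the-envelope, the arithmetic here is just the Fisher relation, taking a short nominal rate of roughly 0.5% to match the -1.5% in the quote (my gloss, not Cochrane's exact workings):

\[
r \;=\; i - \pi \;\approx\; 0.5\% - 2\% \;=\; -1.5\%, \qquad r_{\min} \;=\; 0 - \pi \;=\; -2\%.
\]

The lowest real rate the zero bound allows is about -2%, comfortably below the -0.5% Laubach-Williams estimate of the natural rate, so the bound only becomes a serious constraint if the natural rate sits well below -2%, which is the 4-5% negative territory Cochrane mentions.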

And he adds this:

From my point of view, the focus on and evident emptiness of the “demand” solution — its reliance on magic — just emphasizes where the real hard problems are.

Mortgage Equity Withdrawal

Bill McBride tracks data on mortgage equity withdrawal as a percentage of disposable income. You withdraw mortgage equity when you take out a second mortgage or refinance your existing mortgage with a larger loan (“cash-out refi,” as we call it). The graph at the link shows that from 2002 to 2007 the withdrawal rate was between 4 and 9 percent each quarter. Ordinarily, the number should be slightly negative, as people pay down the principal in their mortgages. We had big negative numbers in 2009-2011, “mostly because of debt cancellation per foreclosures and short sales, and some from modifications.”
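
A stylized example with hypothetical numbers: refinance a $200,000 balance into a $230,000 loan and you have withdrawn $30,000 of equity. The series McBride tracks aggregates that kind of extraction and scales it by income:

\[
\text{MEW rate}_t \;=\; \frac{\text{net equity extracted from homes}_t}{\text{disposable personal income}_t}.
\]

At 4 to 9 percent of disposable income per quarter, that was a lot of extra spending power while it lasted.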

Thanks to a commenter for the pointer. Some further comments:

1. From an AS-AD perspective, you can say that mortgage equity withdrawal boosted AD from 2002-2007, and then it went into reverse when the subprime crisis hit. This might be the best story for the drop in AD.

2. From a PSST perspective, you can say that a lot of consumption patterns were unsustainable, based on people spending capital gains on housing. When the capital gains leveled off and then turned into capital losses, the economy needed to find new patterns of trade, and it still has not done so.

3. Apropos of nothing, I once cursed out the guy who developed the measure of mortgage equity withdrawal. In about 1982 or so, Jim Kennedy was the forecaster for Industrial Production, and I was the forecast co-ordinator (we were both economists at the Fed). The forecast process, which was pretty much all clerical, was time-consuming and grueling. I had finally put a forecast to bed when Jim came in and said that the forecast for Industrial Production was out of synch and needed an update. He was very concerned about who might be blamed for the glitch. I shouted, “I don’t care whose bleeping fault it is!” I really lost my temper. It was just a case of my being tired, still at work long after I usually went home, and caught off balance by having my relief at being finished turned to anguish at finding that there was more work I had to do.

The Dismal Forecasting Record

Tim Harford writes,

There were 77 countries under consideration, and 49 of them were in recession in 2009. Economists – as reflected in the averages published in a report called Consensus Forecasts – had not called a single one of these recessions by April 2008.

… Making up for lost time and satisfying the premise of an old joke, by September of 2009, the year in which the recessions actually occurred, the consensus predicted 54 out of 49 of them – that is, five more than there were. And, as an encore, there were 15 recessions in 2012. None were foreseen in the spring of 2011 and only two were predicted by September 2011.

He cites research from Prakash Loungani and Hites Ahir. Some comments:

1. This underlines the fact that macroeconomists are using equations that are not verified empirically.

2. Perhaps the “target the forecast” mantra of market monetarists would not work as robustly as they might hope.

3. According to the NBER, the U.S. was already well into a recession by April 2008 and already out of recession by September of 2009.

Reform of Macroeconomics Teaching

Simon Wren-Lewis writes,

So my first point, which I have made before, is that we can get rid of a lot of stuff that is simply out of date. Like the LM curve (and theories of money demand that go with it). And the Aggregate Demand curve which is derived from it. And Mundell Fleming which is an open economy version of it (and inconsistent with UIP to boot). And the money multiplier (which, apart from being very misleading, is unnecessary if we stop fixing the money supply).

That is fine. But he winds up with this:

So there you have it. Econ 101 with just three basic relationships: an IS curve, a Phillips curve and UIP

Pointer from Mark Thoma. UIP stands for uncovered interest parity, the implication being that a higher interest rate at home is associated with a stronger currency and hence reduced net exports.
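
In its standard approximate form, UIP says

\[
i_t - i_t^{\ast} \;\approx\; E_t\!\left[s_{t+1}\right] - s_t,
\]

where i and i* are the home and foreign interest rates and s is the log exchange rate (the home-currency price of foreign currency). Holding the expected future exchange rate fixed, a higher home interest rate must be matched by expected depreciation, which means the currency appreciates today; the stronger currency is what squeezes net exports.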

Actually, I do not think that replacing the equations that have gone out of fashion with those that are currently fashionable represents an improvement. Quite possibly, it is worse. As Noah Smith points out, these equations are not empirically verified. They are merely asserted.

I think that macroeconomics ought to be taught as a combination of economic history and history of thought. In that regard, I think that my macro memoir would have some value, although other perspectives also deserve to be included.

My Least Favorite Macroeconomic Statistic

John Cochrane writes,

Philadelphia Fed President Charles Plosser made this nice graph, showing how reduced views of potential GDP are closing the gap, not rises in actual GDP.

Obviously, it would help to go to his post and look at the graph. But potential GDP is perhaps my least favorite economic statistic. Keep in mind that potential GDP refers to real GDP, not nominal GDP.

How to define potential GDP? I think that when you come down to it, the definition is “what GDP would be if there were no shortfall of aggregate demand.” So I think that in order to buy into potential GDP, you have to be really committed to the AS-AD paradigm.

How is potential GDP arrived at? I think that the process involves taking a graph of the history of GDP and fitting trend lines in between the peaks, perhaps with some smoothing thrown in. Since we never know what the next peak of GDP will be, we pretty much never have a good idea of potential GDP in real time, only in retrospect.
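
To make that concrete, here is a toy version of the procedure (my own sketch with a made-up function name, not how CBO or the Fed actually estimates potential): connect the local peaks of log real GDP with straight lines. The last anchor is simply the latest observation rather than a known peak, which is exactly why the real-time tail of the estimate is unreliable.

```python
import numpy as np

def peak_to_peak_potential(log_gdp):
    """Toy 'potential GDP': linearly interpolate between local peaks of log real GDP.

    Illustration only; actual estimates rely on production functions,
    statistical filters, and judgment, not this naive peak-to-peak rule.
    """
    y = np.asarray(log_gdp, dtype=float)
    # interior local peaks: observations at least as high as both neighbors
    peaks = [i for i in range(1, len(y) - 1) if y[i] >= y[i - 1] and y[i] >= y[i + 1]]
    # endpoints serve as anchors too; the last one is not a confirmed peak,
    # so the most recent stretch of the "trend" is the least trustworthy
    anchors = sorted(set([0] + peaks + [len(y) - 1]))
    return np.interp(np.arange(len(y)), anchors, y[anchors])
```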

When I think in terms of PSST, there is no analogue to potential GDP. Patterns of specialization and trade are always breaking and re-forming. When patterns break, unless the break is caused by war, disaster, or government policy, it is probably the economy’s way of freeing up resources that otherwise would remain misallocated. When new patterns form, it is only a good thing if the patterns are sustainable for a decent interval of time.

The FOMC and its Target

Van R. Hoisington and Lacy H. Hunt write,

The Federal Open Market Committee (FOMC) has continuously been overly optimistic regarding its expectations for economic growth in the United States since the last recession ended in 2009. If their annual forecasts had been realized over the past four years, then at the end of 2013 the U.S. economy should have been approximately $1 trillion, or 6%, larger.

They go on to say that the Fed has over-estimated the “wealth effect” by which higher asset prices lead to more consumption.

If the wealth effect was as powerful as the FOMC believes, consumer spending should have turned in a stellar performance last year. In 2013 equities and housing posted strong gains. On a yearly average basis, the real S&P 500 stock market index increase was 17.7%, and the real Case Shiller Home Price Index increase was 9.1%. The combined gain of these wealth proxies was 26.8%, the eighth largest in the 84 years of data. The real per capita PCE gain of just 1.2% ranked 58th of 84. The difference between the two was the fifth largest in the 84 cases. Such a huge discrepancy in relative performance in 2013, occurring as it did in the fourth year of an economic expansion, raises serious doubts about the efficacy of the wealth effect.

Let me try to put on a Scott Sumner hat and speak for him. If he reads this, he can correct me.

1. If nominal GDP (NGDP) comes in below the Fed’s forecast in one quarter, then the Fed ought to raise its target for NGDP in subsequent quarters. That is, it should engage in level targeting, not simply stick to a growth-rate target after a forecast miss (see the sketch after this list).

2. The Fed should use market forecasts rather than rely on a model to forecast. Ideally, we would have NGDP futures contracts. But in their absence, other nominal market variables, such as the spread between non-indexed and indexed bonds, can be helpful.

3. Wealth effect, shmealth effect. Who cares what particular component of the Fed model caused it to underpredict NGDP? Given (1) and (2), there is no good excuse for the Fed missing its targets by such a large cumulative amount.

4. The period 2008-2014 is the mirror image of 1969-1979. In the 1970s, the Fed consistently under-predicted NGDP, with the result that monetary policy was too loose. In the recent episode, the Fed consistently over-predicted NGDP, with the result that monetary policy was too tight.
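
The difference between the two rules in item 1, in symbols (my notation, not Sumner’s): with target growth rate g and base level N_0,

\[
\text{growth-rate targeting: } N^{T}_{t} = (1+g)\,N_{t-1}, \qquad
\text{level targeting: } N^{T}_{t} = (1+g)^{t}\,N_{0}.
\]

Under the growth-rate rule, a shortfall in one quarter is simply carried forward; under the level rule, the Fed has to make up the lost ground, so a string of over-optimistic forecasts cannot quietly ratchet the path down.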

Remember that when I take off my Scott Sumner hat, I reject macroeconomics altogether in favor of PSST.

Nick Rowe on Secular Stagnation

He writes,

What is it with you townies? Have you never looked out of the window, when you fly (do you ever drive?) from one city to another, and wondered about all that stuff you see out there? It’s called “land”. It grows food, that you eat. And that land is valuable stuff, and there’s a lot of it, and it can last a very long time, and it pays rent (or owner-equivalent rent). And if the rent on that land is strictly positive (which it is), and if the price of that land is finite (which it is), then the rate of interest you get by dividing that annual rent on land by the price of land is going to be strictly positive. And that’s a real rate of interest, because land is real stuff, and what it produces is real stuff too.

So when you go to a helluva lot of trouble to build a model with a negative equilibrium real rate of interest, and it’s a very fancy complicated model, but it totally ignores land, I really wonder where you are coming from. Actually I don’t wonder. I know where you are coming from. You are coming from the town, or the big city, where you can easily forget about land. But even then: you know that stuff your house or condo is built on? That’s called “land” too.
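
Rowe’s argument is the perpetuity formula run in reverse (my restatement; it assumes a roughly constant real rent, and growing rents only strengthen the point):

\[
P \;=\; \frac{R}{r} \quad\Longrightarrow\quad r \;=\; \frac{R}{P} \;>\; 0 \quad \text{whenever } R > 0 \text{ and } P \text{ is finite},
\]

where R is the annual real rent on land and P its price. A model whose long-run equilibrium real rate is firmly negative has to explain why nobody arbitrages that away by buying land.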