Innovation

Yesterday, I attended a discussion held at the Hudson Institute on the topic of innovation. I go to these sorts of things because every once in a while one hears a stimulating idea. Most of the time, one doesn’t, but for me the occasional gain makes up for the frequent loss.

This was, for me, one of the winners. I will observe the Chatham House Rule and not associate ideas with names. Here are some ideas that came up.

1. The presumption is that more innovation undertaken in the U.S. would be better for us. The main fear is that innovation is being held back by cultural and regulatory factors. Assuming that you want the innovation to occur in this country, then encouraging highly-skilled, entrepreneurial immigrants would seem to be the most reliable, least politically fraught way to make that happen.

2. In the area of science, government funding is a two-edged sword. Perhaps we have gotten to the point where the academic community in this country now selects for grant-writers rather than for people who can think outside the box. The most extreme claim was that the last 30 years have produced a “massacre” of genuine innovative scientists at our universities.

I did not bring this up, but I would be happy to offer macroeconomics as an example to illustrate that the use of the term “massacre” may not be too strong.

It had not occurred to me that part of my inability to fit in with academia might be that I was too willing to challenge the status quo. But it sure would be an affirming notion to believe. Indeed, given my memoirs, I am prepared to consider that perhaps my willingness to challenge the status quo was better received at Freddie Mac in 1988-1994 than it was in academia. Not that Freddie was exactly gung-ho for innovation (nor should it have been), but compared with the DSGE cartel…and then, just when I was getting worn out from trying to be an “intrapreneur” at Freddie Mac, along came the Web, with the opportunities it opened up for challenging the status quo. Hmmm….

3. Maybe where we need innovation is in the area of sovereignty and governance. No, I was not the one who brought this up–someone else did. From that person, I got a tip about some literature that was new to me. I’ll let you know if it proves interesting.

4. Is innovation the product of just a small proportion of the population, or does the entire cultural milieu matter? What if these days, in order to be a significant innovator, you need to be in the 99.5th percentile in terms of cognitive ability?

5. One participant claimed that the existing telecom regulatory structure (the FCC, primarily) has slowed digital communications progress by ten years. Although I often think that markets find a way around regulation, I am afraid that I found this estimate plausible. The FCC is one of the more evil agencies around, and libertarians are less aware of the FCC’s evil than they are of the FDA’s, for example.

6. Since 2008, the number of startups has slowed from 600,000 per year to just over 400,000 per year. This leads me to wonder–has the success rate of startups gone up? If so, then the decline in startups may suggest an unfortunate increase in risk aversion. If not, then it could be a rational response to an adverse environment.

7. Is there a causal relationship between cultural attitudes toward innovation and entrepreneurs, and which way does it run? It was suggested that support for innovation would go up if somebody discovered, say, a cure for cancer or Alzheimer’s.

8. Is innovation dominated by a few really important creations, or by the cumulative effects of lots of incremental improvements? Your answer to that probably is correlated with your answer to (4).

9. One scenario for the future is that perhaps 30 percent of the population is intelligent and adaptive enough to contribute in the workplace. The rest enjoy the “narcotics” of digital entertainment, recreational drugs, and so forth, while being supported by, well, I wasn’t the one who made this point, but if I had I would have used the term Vickys.

10. How much has IT contributed to productivity growth? There were optimists as well as pessimists. You know where I stand, given my article on Diane Coyle’s book on GDP.

11. Is there too much separation between value creation and value capture? It was suggested that finance captures too much value, and one effect of that is to draw people with STEM aptitude and skills away from science and engineering. I admit to being sympathetic to that view, although I think you want to try to fight the bias that sees finance as entirely parasitical. It really is important that capital flow to its most efficient uses, and if that means that we need smart people involved in financial markets, then so be it.

I think where finance goes wrong is in allowing people to capture value from inflated prices. So my Internet company gets sold in 1999 for more than what it was really worth (good for me, great for our main backers, not good for whoever ended up owning shares in the company that bought us). Or WhatsApp gets $19 billion today. But if the problem with Wall Street is a lack of wisdom, do you want to subtract talent from Wall Street? If so, which talent?

Another suggestion is that the “winners-take-most” nature of certain markets means that excess value gets captured. Think of Bill Gates in the heyday of Windows, or Google today. Maybe, although I would argue (and did) that, throwing consumers’ surplus into the mix, it’s not clear that the value captured by corporate winners is so excessive.

Correlation, Signal, and Noise

As a public service, I am going to offer two propositions about correlation.

1. Where there is correlation, there is signal.

2. Where there is noise, correlation is understated.

The other night, I met with a large group of people to discuss Gregory Clark’s new book. Many people made comments that were uninformed regarding these two propositions.

For example, I gather that people who are strongly into political correctness are wont to say that “There is no reason to believe that IQ measures anything.” I think that is untrue.

Measured IQ is correlated with other variables, including education and income. Any variable that is reliably correlated with other variables must have some signal. It must be measuring something. It may not be measuring what it purports to measure. It may not have a causal relationship with the variables to which it is correlated. But to deny that it measures anything at all moves you deeply into science-denier territory.

Other comments suggest that people believe that if the correlation between parents and children on some variable is, say, 0.4, then this represents a ceiling on heritability. In fact, if measurement of the variable in question is subject to noise, then true heritability could be higher. For example, if IQ tests are inexact (which I assume they are), then it could be that the heritability of “true IQ” could be 0.6, even though the heritability of measured IQ is only 0.4. The opposite is not the case–random noise will not cause the measured IQ to appear more correlated than it really is. The bias is only downward.
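To make the attenuation point concrete, here is a minimal simulation sketch (toy numbers of my own, not taken from Clark or from any IQ dataset): a trait with a true parent-child correlation of 0.6, measured with enough noise that test reliability is about 0.8, shows an observed correlation of only about 0.48.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# "True" trait: parent-child correlation of 0.6 by construction.
parent_true = rng.normal(0, 1, n)
child_true = 0.6 * parent_true + np.sqrt(1 - 0.6**2) * rng.normal(0, 1, n)

# Add measurement noise to both generations (reliability = 1/1.25 = 0.8).
noise_sd = 0.5
parent_obs = parent_true + rng.normal(0, noise_sd, n)
child_obs = child_true + rng.normal(0, noise_sd, n)

print(np.corrcoef(parent_true, child_true)[0, 1])  # ~0.60, the true correlation
print(np.corrcoef(parent_obs, child_obs)[0, 1])    # ~0.48, attenuated by noise
```

The observed correlation is roughly the true correlation multiplied by the reliability of the measurement, which is why the bias from noise runs only downward, never upward.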

I have written a review (may appear next month) of Clark’s book, and in my view the main contribution of his multigenerational studies of social mobility is to give us a means for assessing the impact of noise on heritability estimates. The effect appears to be large, meaning that some characteristics are far more heritable than one-generation correlation studies suggest.

Redefaults

The Washington Post reports,

Five years after the federal government bailed out more than 1 million struggling homeowners, many who got the relief may end up losing their homes after all.

…The initiative was based on the flawed assumption that the economy would bounce back more quickly, undoing the damage wrought by plunging home prices and high unemployment.

No, the initiative was based on the flawed assumption that keeping people in homes that they should never have purchased in the first place was a good idea. At the time, I kept saying over and over that we should pay people’s moving expenses to get into rental units that they could afford.

These results are exactly what I predicted would happen. I remember complaining about setting people up to fail again when I testified at a Congressional hearing almost five years ago.

Evaluating Program Effectiveness

One of the themes of Why Government Fails So Often, the new book by Peter Schuck, referred to by David Henderson, is that government programs should be evaluated for their effectiveness. The 2014 Economic Report of the President, which Greg Mankiw spotted, includes an entire chapter called “Evaluation as a tool for improving Federal programs.” It begins,

Since taking office, President Obama has emphasized the need to determine what works and what does not in government, and to use those answers to inform Federal policy and budget decisions…Today, evaluating Federal programs and interventions to understand their impact, and developing the infrastructure within agencies to support a sustained level of high-quality evaluations, remains an administration priority. By rigorously testing which programs and interventions are most effective at achieving important goals, the government can improve its programs, scaling up the approaches that work best and modifying or discontinuing those that are less effective.

The idea of shifting focus from intent to results is laudable. Ironically, however, all I can see from this effort is intent without results.

UPDATE: Jason Richwine heads his latest post “The White House’s Standard for Social Programs: Hints of Success Are Good Enough”

Internet Bubble 2.0 Watch

Ryan Chittum writes,

Is the news going to become a $5 trillion industry? No. That would be one-third of the US economy. Could the news become a $500 billion industry? No. All advertising spending in the US comes to about $170 billion a year, and only a small portion of ad money goes to news organizations.

He is applying some arithmetic to Marc Andreessen’s claim that the news business will grow by a factor of 10 or a factor of 100 over the next couple of decades. Internet Bubble 1.0 was also vulnerable to arithmetic, as I pointed out in July of 1999 (i.e., when the bubble was inflating).
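The arithmetic is easy to reproduce. Here is a back-of-envelope sketch using only figures implied by the quote (GDP of roughly $15 trillion, and a news business of roughly $50 billion today as the base for Andreessen’s multiples; both are my readings of the quote rather than official statistics):

```python
# Rough figures implied by the quote, not official statistics.
us_gdp = 15e12        # US GDP: $5 trillion is "one-third of the US economy"
news_today = 50e9     # implied base: $5 trillion / Andreessen's 100x factor

for factor in (10, 100):
    projected = news_today * factor
    print(f"{factor}x -> ${projected / 1e9:,.0f} billion, "
          f"or {projected / us_gdp:.0%} of GDP")
# 10x  -> $500 billion, or 3% of GDP
# 100x -> $5,000 billion, or 33% of GDP
```

Since total US ad spending is only about $170 billion, even the 10x case is hard to square with an ad-supported news business.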

Speaking of a possible bubble in the Internet news business, Tyler Cowen points to the soft launch of vox.com.

Velasquez-Manoff on Causal Density

From An Epidemic of Absence.

The scientific method that had proven so useful in defeating infectious disease was, by definition, reductionist in its approach. Germ theory was predicated on certain microbes causing certain diseases. Scientists invariably tried to isolate one product, reproduce one result consistently in experiments, and then, based on this research, create one drug. But we’d evolved surrounded by almost incomprehensible microbial diversity, not just one, or even ten species. And the immune system had an array of inputs for communication with microbes. What if we required multiple stimuli acting on these sensors simultaneously? How would any of the purified substances mentioned above mimic that experience? “The reductionist approach is going to fail in this arena,” says Anthony Horner, who’d used a melange of microbes in his experiment. “There are just too many things we’re exposed to.”

In an essay over ten years ago, I wrote,

E.D. Hirsch, Jr., writes, “If just one factor such as class size is being analyzed, then its relative contribution to student outcomes (which might be co-dependent on many other real-world factors) may not be revealed by even the most careful analysis…And if a whole host of factors are simultaneously evaluated as in ‘whole-school reform,’ it is not just difficult but, despite the claims made for regression analysis, impossible to determine relative causality with confidence.”

In that essay, my own example of a complex process that is not amenable to the reductionist scientific method is economic development and growth. I also provide a little game, like the children’s game Mastermind, to illustrate the difficulty of applying reductionism in a complex, nonlinear world. Try playing it (it shows up better in Internet Explorer than in Google Chrome).

The phrase “causal density” is, of course, from James Manzi and his book, Uncontrolled.

The Financial Crisis and Wealth Transfer

Amir Sufi writes (with Atif Mian).

The strong house price rebound in high foreclosure-rate cities likely reflects these markets bouncing back after excessive price declines. But these foreclosed properties are not being bought by traditional owner-occupiers that plan on living in the home. Instead, they have been bought by investors in large numbers.

This is from a new blog spotted by Tyler Cowen, and both of the first two posts are worth reading in their entirety.

The picture that I get is of a pre-crisis economy in which middle- and lower-middle-income households thought they were doing well in the housing market. Then their house prices collapsed. Vulture investors swooped in to buy. Meanwhile, the government bailed out big banks and the stock market boomed. Some folks will credit the Fed for the latter. I don’t, but that is a bit beside the point here.

Net this all out–the sucker bets on housing by the non-rich, followed by big gains by wealthier folks in stocks and in foreclosed houses–and you get a picture of a huge regressive wealth transfer engineered in Washington, carried out primarily by those who profess to be outraged by inequality.

Labor Force Participation Chartfight

1. John Cochrane presents a chart showing that over the last 25 years, the employment-population ratio tracks the ratio of people aged 25-54 to the total population.

Pointer from Mark Thoma. The chart is from Torsten Slok of Deutsche Bank.

2. John Taylor has a chart showing that the labor force participation rate is several percentage points below what was projected several years ago based on demographics. The chart comes from a paper by Chris Erceg and Andy Levin.

The first chart suggests that most of the decline in the employment/population ratio in recent years is due to demographic changes. The second chart suggests the opposite. How to reconcile the two?

3. And then there is Binyamin Appelbaum:

In February 2008, 87.4 percent of men in that demographic had jobs.

Six years later, only 83.2 percent of men in that bracket are working.

Pointer from Tyler Cowen.

My verdict is that Slok’s chart, referred to by Cochrane, is misleading. Here is the chart:

[chart: Slok’s superimposed lines, employment-population ratio and the 25-54 share of the population]

The way that the two lines are superimposed makes it appear that 2007 was a glorious year of over-employment, and the plunge in the employment-population ratio looks like a reversion to trend. Suppose you were to slide the blue line up vertically so that it just touches the red line at the peak in 2007. That would make the chart look much more like Appelbaum’s, shown below:

[chart: Appelbaum’s employment rate for men aged 25-54]
Some other issues:

–I suspect that some of the drop-off in employment has occurred among youth, who are outside of the 25-54 bracket that Slok uses.

–Another issue is what you think should have happened outside Slok’s bracket at the other end, namely among 55-64 year olds. These are baby boomers, so their share of the labor market has been soaring. The most likely reconciliation of the two charts is that the baby boomers have been retiring early at rates higher than historical norms.

As far as labor force participation goes, is 55 the new 65? If so, then somebody should trace out what that means for Social Security. Fewer people paying in and more people collecting disability cannot be a good thing for solvency.

Update: Cochrane offers another take, more nuanced.

Caroline Hoxby on Education

She writes,

(1.) A teacher who is in the top 10 percent of the current distribution of value-added raises student achievement by several times what a teacher in the bottom 10 percent does.

(2.) If all US teachers had value-added equal to what the current top 10 percent has, the average American student would achieve at the level of students whose parents have incomes in the top 10 percent of the family income distribution. This is approximately equivalent to the level at which the average student in Singapore achieves.

Clearly, this contradicts my null hypothesis, which is that there is no intervention that can achieve a significant, durable, replicable impact on student outcomes.

She also favors choice and competition, as well as attempts to introduce technology in a cost-effective way.

Perhaps most interestingly, she begins the essay by saying that she is describing what might be feasible ideally, not with what the political system is likely to produce. Implicit in this view is that the political process is likely to be distorted by interest groups. This is something that progressives tend to ignore when they advocate for government playing a larger role in education, health care, and so forth.

Working with the Tautology Model

Scott Sumner writes,

the NGDP approach is a very naive model that treats NGDP sort of like a big pot of money, which is shared out among workers with sticky wages. If some day the pot is smaller, then there’s less money to share, and some workers end up disappointed (unemployed.) It’s completely agnostic about the micro foundations…

Which is fine with me. Again, think of the mineshaft analogy, with real-world observations on the surface and the optimization-equilibrium paradigm buried below. To connect the two, you can try to start inside the mine and tunnel out, or you can start outside and tunnel in. My displeasure with much of macro over the past thirty years is that it insists on the inside-out approach.

Consider the tautology model: hours worked = total wages divided by the hourly wage.

If the Fed were to target total wages, what could go wrong? Sumner cites the Lucas critique. In this context that would mean that the sticky nominal wages you observed in the past were due to the Fed not trying to mess with total wages. As soon as the Fed tries to mess with total wages, workers will catch on and start paying closer attention to real wages.

I believe that something else will go wrong. The Fed will not be able to hit its target for total wages! Suppose we write MV = W, where W is total wages and V is the velocity of money expressed in terms of total wages rather than nominal GDP. What I am inclined to believe is that moderate changes in M will lead to approximately equal and opposite changes in V.

Picture this as the Fed having a steering wheel, M, that is only loosely connected with the front axle, W. The Fed can turn the wheel quite hard while the axle barely wiggles. It may take extensive turning of the monetary steering wheel over a long period of time to obtain a response of total wages. In fact, the period over which wages are sticky may turn out to be shorter than the lag between shifts in monetary policy and changes in total nominal wages.
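Here is a toy illustration of the loose-steering-wheel story (the offset parameter is my assumption for illustration, not an estimate): if velocity in wage terms moves nearly opposite to M, the Fed can turn the wheel hard while total wages barely budge.

```python
# Toy numbers: the Fed expands M by 10 percent, and velocity (in wage
# terms) is assumed to absorb 90 percent of the change.
M0, V0 = 1.0, 1.0
W0 = M0 * V0                   # total wages: W = M * V

dM = 0.10                      # 10 percent increase in M
offset = 0.9                   # assumed offsetting response of V
M1 = M0 * (1 + dM)
V1 = V0 / (1 + offset * dM)    # velocity falls nearly in proportion

W1 = M1 * V1
print(f"M up {dM:.0%}, but W up only {W1 / W0 - 1:.1%}")
# -> M up 10%, but W up only 0.9%
```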

We have a complex, sophisticated monetary system, in which people’s ability to undertake transactions is not proportional to the amount of currency in circulation. We have a large financial system, in which the Fed is only one player. I keep trying to hold down people’s estimation of the power of the Fed.