Teenagers in the Court System

Jan Hoffman writes,

What none did, however, was exercise his constitutional rights. It was not clear whether the youths even understood them.

Therefore none had a lawyer at his side. None left, though all were free to do so, and none remained silent. Some 37 percent made full confessions, and 31 percent made incriminating statements.

These were among the observations in a recent study of 57 videotaped interrogations of teenagers, ages 13 to 17, from 17 police departments around the country. The research, published in Law and Human Behavior, adds to accumulating evidence that teenagers are psychologically vulnerable at the gateway to the criminal justice system. Youths, some researchers say, merit special protections.

Hoffman’s post reminded me of a personal experience when I sat on a jury.

At a cognitive level, the video of the detective and the defendant showed an incriminating confession, obtained by the book, without threats, intimidation, or promises. At an emotional level, it showed a teenage boy, in an awful mess, with no adult there to help him. He was polite, and almost endearing. The majority of jurors had children, and the main effect of the video was to trigger our Parent Reflex. In our particular courtroom drama, the role that many of us chose was that of the defendant’s Surrogate Parents.

It was a traumatic experience, and we let a guilty young person off. Go back and read my whole essay.

Patent Pools

Josh Lerner and Jean Tirole (the latter was just awarded a Nobel Prize) write,

Innovations in hardware, software or biotechnology often build on a number of other innovations owned by a diverse set of owners.

Pointer from Joshua Gans. For more on Tirole, see Tyler Cowen and subsequent posts by Alex and Tyler.

Two (or more) firms may hold complementary patents. That is, the value of using firm A’s patented innovation is higher to a licensee who can also use firm B’s patented innovation. Lerner and Tirole ask when a social planner would want these firms to pool their patents, that is to license them together. If you do not care to follow their mathematical analysis, you can skip to the end where they summarize their conclusions.

The situation is a form of the dual-monopoly problem. As I once explained,

suppose that a single company has a monopoly in both peanut butter and jelly. When it sets the price of jelly, it knows that the more jelly it sells the more peanut butter it will sell. Therefore, at the margin, it will tend to want to set a lower price for jelly than if it were just looking at jelly as a stand-alone product.

If you then break up the PB and J monopoly into two separate companies, the incentives of the two separate monopolies will change. The peanut butter company is not going to worry about the fact that higher peanut butter prices will reduce jelly consumption, and the jelly company is not going to worry about the fact that higher jelly prices will reduce peanut butter consumption. The net result of the breakup is that prices to consumers will rise.

This theory goes all the way back to Cournot.
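The Cournot-complements result can be checked with a quick numerical sketch. All the numbers here are invented for illustration: linear demand for the PB&J bundle, zero production costs. The point is only to show that two separate monopolists end up charging consumers more in total than a single merged monopolist would.

```python
# Cournot's complementary-monopoly result, illustrated numerically.
# Invented demand: the PB&J bundle at total price P sells Q = 1 - P;
# production costs are zero.

def demand(total_price):
    return max(0.0, 1.0 - total_price)

# Case 1: one monopolist sets both prices, so only the total price P
# matters.  It maximizes P * (1 - P), which peaks at P = 1/2.
joint_total = max((p / 1000 for p in range(1001)),
                  key=lambda P: P * demand(P))

# Case 2: separate PB and J monopolists.  Each takes the other's price
# as given and best-responds; iterating converges to the Nash
# equilibrium, where each charges about 1/3.
p_pb, p_j = 0.0, 0.0
for _ in range(200):
    p_pb = max((p / 1000 for p in range(1001)),
               key=lambda p: p * demand(p + p_j))
    p_j = max((p / 1000 for p in range(1001)),
              key=lambda p: p * demand(p_pb + p))

print(f"joint monopoly total price: {joint_total:.3f}")  # 1/2
print(f"separate monopolies total:  {p_pb + p_j:.3f}")   # about 2/3
```

The separate firms each ignore the damage their price does to the other's demand, so the combined price rises from 1/2 to about 2/3, exactly the breakup effect described above.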

It seems to me that we observe patent pools more often internally than externally. Think of Apple Computer as one gigantic internal patent pool. Or a large pharmaceutical company. It might be easier for one firm to internalize complementary patents than for several firms to get together and pool them.

The State of Macroeconomic Analysis

Olivier Blanchard, who in August 2008 described it as “good,” has modified his views. Concerning the consensus methodology that he praised back then, Blanchard writes,

However, these techniques only made sense under a vision in which economic fluctuations were regular enough so that, by looking at the past, people and firms (and the econometricians who apply statistics to economics) could understand their nature and form expectations of the future; and simple enough so that small shocks had small effects, and a shock twice as big as another had twice the effect on economic activity. The reason for this assumption, called linearity, was technical. Models with nonlinearities – those in which a small shock, such as a decrease in housing prices, can sometimes have large effects, or in which the effect of a shock depends on the rest of the economic environment – were difficult, if not impossible, to solve under rational expectations.
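Blanchard’s linearity point can be put in toy form: in a linear model, a shock twice as big has exactly twice the effect, while a threshold nonlinearity (all parameters here are invented purely for illustration) breaks that proportionality.

```python
# Linearity vs. a threshold nonlinearity, in toy form.

def linear_response(shock):
    # Linear model: the response is always proportional to the shock.
    return 0.5 * shock

def threshold_response(shock, trigger=1.0):
    # Small shocks are damped, but a shock past the trigger (think of
    # a housing-price drop that sets off fire sales) is amplified.
    return 0.5 * shock if abs(shock) < trigger else 3.0 * shock

# Linear: doubling the shock exactly doubles the response.
assert linear_response(0.8) == 2 * linear_response(0.4)

# Nonlinear: doubling a sub-trigger shock far more than doubles it.
print(threshold_response(0.75))  # 0.375 (below the trigger: damped)
print(threshold_response(1.5))   # 4.5   (past the trigger: amplified)
```

Under rational expectations, models with this kind of discontinuous amplification were, as Blanchard says, difficult or impossible to solve, which is why the linear case dominated.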

Pointers from Mark Thoma and Greg Mankiw. Read the whole thing. I have to give Blanchard credit for writing this:

The reality of financial regulation is that new rules open new avenues for regulatory arbitrage, as institutions find loopholes in regulations. That in turn forces authorities to institute new regulations in an ongoing cat-and-mouse game (between a very adroit mouse and a less nimble cat). Staying away from dark corners will require continuous effort, not one-shot regulation.

That is the theme of The Chess Game of Financial Regulation.

However, my overall take on Blanchard’s essay will be harsh.

1. He still wants to believe in equations and technical brilliance. He implies that if we were just better at manipulating nonlinear models, all would be well. Once again, an MIT economist is unable to grasp Hayek’s insight that there is knowledge embedded in the economic order that no individual can possess. No thanks to MIT, and long after I had left it, I wound up on the side of Hayek. In fact, when it comes to macro I would argue that I am more Hayekian than Hayek.

2. Rational expectations helped make the careers of Fischer, Blanchard, and other MIT contemporaries, and my refusal to tool up in rational-expectations modeling is what unmade my own academic career. In hindsight, and with an assist from Frydman and Goldberg, I would say that rational expectations was the ultimate anti-Hayek proposition. In effect, Chicago said that everyone knows everything. Eventually, MIT countered with “behavioral economics,” which said that some people are often mistaken, while assuming, at least implicitly, that the technocratic elites know everything.

3. My least favorite paragraph:

Now that we are more aware of nonlinearities and the dangers they pose, we should explore them further theoretically and empirically – and in all sorts of models. This is happening already, and to judge from the flow of working papers since the beginning of the crisis, it is happening on a large scale. Finance and macroeconomics in particular are becoming much better integrated, which is very good news.

I’ll be uncharitable (and sarcastic) and say that he is telling us once again that the state of macro is good, because the same modeling hubris still predominates. The way I see it, the drunks are still looking under the same lamppost.

As for integrating finance and macroeconomics, my prediction is that this will accomplish nothing. I believe that mainstream macroeconomists are overstating the importance of the financial crisis. Instead, I am inclined to treat the financial crisis as a blip, one whose apparent macroeconomic impact was made somewhat worse by the very policies that mainstream economists claim were successful.

This blip took place in the context of key multi-decade trends:

–the transition away from goods-producing sectors and toward the New Commanding Heights of education and health care

–the transition of successful men away from marrying housekeepers and toward marrying successful women

–the integration of workers in other nations, most notably China and India, into the U.S. production system

–the increasing power of computer technology that is more complementary to some workers than to others

These trends are what explain the patterns of employment and relative wages that we observe. The financial crisis, and the government panic in response, pushed the impact of some of these developments forward in time. Overall, however, the focal points of mainstream macroeconomics, including fiscal and monetary measures, are not nearly as significant to the actual economy as they are on paper in the models.

I have always been harsh on Blanchard. You should discount for that.

The Wedge Between Compensation and Wages

Mark Warshawsky and Andrew Biggs write,

Most employers pay workers a combination of wages and benefits, the most important of which is health coverage. Economic theory says that when employers’ costs for benefits like health coverage rise, they will hold back on salary increases to keep total compensation costs in check. That’s exactly what seems to have happened: Bureau of Labor Statistics data show that from June 2004 to June 2014 compensation increased by 28% while employer health-insurance costs rose by 51%. Consequently, average wages grew by just 24%.

The kicker:

Health costs are a bigger share of total compensation for lower-wage workers, and so rising health costs hit their salaries the most. The result is higher income inequality.
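As a quick consistency check on the three BLS figures quoted above: treating total compensation as wages plus health benefits, the quoted growth rates imply a starting health share of compensation of roughly 15 percent. This decomposition is my own back-of-the-envelope exercise, not from the article.

```python
# Back out the implied health share of compensation from the quoted
# BLS figures: compensation +28%, health costs +51%, wages +24%.
# If health was share s of compensation at the start of the period,
# then 1.28 = (1 - s) * 1.24 + s * 1.51; solve for s.

comp_growth, health_growth, wage_growth = 1.28, 1.51, 1.24

implied_health_share = (comp_growth - wage_growth) / (health_growth - wage_growth)
print(f"implied health share of compensation: {implied_health_share:.1%}")
# -> roughly 15%, a plausible benefit share, so the three figures cohere
```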

I don’t think you can blame company-provided health insurance as a first-order cause. Suppose that there were no company-provided health insurance, and everyone instead bought health insurance on their own. In that case, more of the compensation of employees would have taken the form of wages and salaries. If health insurance in the individual market had gone up as rapidly as it has in the company-provided market, then this would have had a stronger effect on the cost of living for low-income workers. So even if you did not have company-provided health insurance, you would still have the “wedge” between compensation and disposable income after health insurance.

As a second-order effect, you can argue that company-provided health insurance, and its tax exemption, push in the direction of raising health care costs. But that is not such a compelling argument.

I do think that it is increasingly misleading to speak of a single “cost of living” when so much of the market basket consists of medical procedures and college expenses that not everyone undertakes. That is, I still believe that calculating trends in the real wage is much harder than we realize, because every household has different tastes.

Related, from Timothy Taylor:

it’s also intriguing to note that since 1984, the share of income spent on luxuries is rising for each income group, and the share of income spent on necessities is falling for each income group.

He refers to a study by LaVaughn M. Henry.

Notes from a Hayek Tribute

I refer to yesterday’s event, held at Mercatus.

1. Google has made me stupid. I know where Mercatus is, but the address on the invitation was different, and I went with the address, and with Google Maps, and got off at the wrong subway stop (along with at least two other would-be attendees).

2. Everyone who was anyone was there.

3. The main speaker was Israel Kirzner. He spoke really well. I took his main point to be that the causality ran from Hayek’s 1974 Nobel Prize to the interest in his insights, rather than the other way around. Those insights include the knowledge problem, the implications of subjectivism, and the importance of the open-ended world in which we live, as opposed to the closed world of general equilibrium theory. Instead, the Nobel folks focused on Hayek’s macro work. Hayek’s speech at the Nobel might be considered an attempt to shift the focus to his other insights. Whether it was that speech (which Pete Boettke later pointed out was not accepted for publication by Economica and thus was not published until almost two decades later) or something else, a rebirth of interest in Austrian economics can be traced to that period.

4. Among the all-stars in the audience asking questions was Russ Roberts, who admitted that although he had moved far in the Austrian direction he still liked old-fashioned supply-and-demand. Kirzner was sympathetic, saying that it is much easier to teach new students supply-and-demand than to teach the insights of Hayek. (Note that Russ has made a remarkably good attempt to teach Hayek in his didactic novel, The Price of Everything.)

5. The afternoon was to feature three Nobel Laureates, but one of them, Edmund Phelps, was sick, and so Boettke read Phelps’ remarks. The other Nobelists were Vernon Smith and Eric Maskin, and I disagreed with both of them.

6. Smith said that the financial crisis was caused by principal-agent problems in mortgage securitization. He suggested that loan originators should not be paid up front, but they should instead be paid over time, as the mortgage is paid off. That is an approach for reducing principal-agent problems, but in my view there are better approaches–the stream of payments over time is a complex financial asset that few originators would be equipped to manage.

One alternative, of course, is to go back to the old originate-to-hold model, in which the loan originator is an employee of the bank, and the bank is in a position to reward or punish originators based on how well they adhere to standards. But more important, I do not believe that principal-agent problems were at the heart of the crisis. Originators were contractually obligated to deliver loans that met the guidelines of investors. Loans that did not meet those guidelines can be considered fraud, and there was plenty of that going on. But the real problem is that investors were, for the most part, getting the loans that they were asking for. The geniuses on Wall Street, and at Freddie and Fannie, believed that they could make money on loans with no down payment, shaky credit history, and so on, because–so the thinking went–if they bought enough of them, the risk would be diversified, particularly since everyone knew that house prices only go down in some places, never in lots of places at once.

Anyway, I’ve made the point about cognitive failure, as opposed to moral failure, at length.

7. Maskin said that mathematical proofs in mechanism design demonstrated formally Hayek’s point that markets make efficient use of information. During the Q&A, I asked whether it was possible to reconcile the methodology of those proofs, which involve closed-ended models, with the larger point stressed by Kirzner that the world is open-ended, including new technology that has not yet been discovered. Maskin answered, in effect, that all you have to do is extend the Arrow-Debreu state-space to include all possible technological discoveries, and the proofs carry over. I was not satisfied with that answer. Some possibilities:

a) He is correct, and I am too prejudiced against formal modeling.

b) I asked the question poorly, and had I been more articulate he would have given a different answer.

c) He just does not “get” the point that the economy is open-ended and that this eludes formal treatment.

Among those I spoke with afterward–and obviously there would be selection bias at work–the unanimous opinion was (c). This raises the intriguing possibility that mainstream economics and Austrian thinking are still a long way from reconciliation. In effect, Maskin is no further along the road to understanding Hayek than is a freshman to whom Kirzner would only teach supply-and-demand.

Perhaps Hayekian economics is a bundle of insights that are deceptively simple. Some people think that they get them, but, like Maskin, they are still stuck in the mainstream paradigm.

Russ and I are examples of mainstream economists who drifted toward a Hayekian view. I cannot think of economists who have drifted in the other direction. To me, this suggests that there is something difficult to grasp about Hayekian economics, or the Austrian viewpoint more generally, and that training in mainstream economics does not necessarily ease that difficulty.

That is my main take-away from the event.

My Review of Peter Thiel

I write,

the business environment of biotechnology, which Thiel and I agree is a very promising field for future economic growth, may be different from that of software. In software, companies like Microsoft and Facebook grew to dominance in large part because consumers find an advantage in using the same software as other consumers — this is the network effect. This in turn creates an opportunity for venture capitalists to back the rapid expansion of a firm that is unprofitable for a few years and then wildly profitable a few years later, once the network effect has been captured. It is not necessarily the case that biotechnology will exhibit network effects in which profits are created by rapidly expanding on an early lead.

I should note that Edmund Phelps, in Mass Flourishing, argues that progress is driven not by big individual breakthroughs but instead by cumulative entrepreneurial progress.

UPDATE: Peter Lawler also writes about Thiel. A sample:

What, today, would be “the largest endeavor over which you can have definite mastery”? This would be the startup. For the libertarian Thiel, the startup has replaced the country as the object of the highest human ambition. And that’s the foundation of the future that comes from being ruled by the intelligent designers who are Silicon Valley founders.

Anti-Poverty Consensus?

I write,

There seems to me to be a close alignment of Ryan’s block-grant approach with the many instances in which the authors of the Hamilton Project volume propose flexible, low-cost, small-scale, locally administered programs, rather than large-scale, federally administered universal solutions. In addition, I was struck by the way that both Ryan and the Hamilton Project focus on rigorous evaluation of results as well as the need for further experimentation.

I do not expect to see a bipartisan reform of anti-poverty programs any time soon. If it were up to policy experts, yes. But politically, improving anti-poverty efforts takes a back seat to offering goodies to the middle class and to the clout of people with a stake in the existing programs.