The Journal of Economic Perspectives, which Timothy Taylor has been editing since its inception, has a symposium on robotics. One of the articles is by Gill A. Pratt.
The exponential growth in computing and storage performance has led researchers to explore memory-based methods of solving the perception, planning, and control problems relevant to the development of additional degrees of robot autonomy. Instead of decomposing these tasks into a set of hand-coded algorithms customized for particular circumstances, large numbers of memories of prior experiences can be searched, and a solution based on matching prior experience is used to guide response.
… human beings communicate externally with one another relatively slowly, at rates on the order of 10 bits per second. Robots, and computers in general, can communicate at rates over one gigabit per second—or roughly 100 million times faster. Based on this tremendous difference in external communication speeds, a combination of wireless and Internet communication can be exploited to share what is learned by every robot with all robots. Human beings take decades to learn enough to add meaningfully to the compendium of common knowledge. However, robots not only stand on the shoulders of each other’s learning, but can start adding to the compendium of robot knowledge almost immediately after their creation.
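The "memory-based" approach Pratt describes amounts to nearest-neighbor recall over stored experiences: keep a library of (situation, action) pairs and respond to a new situation by reusing the action from the closest match. A minimal sketch (the situations and actions here are invented for illustration, not from the article):

```python
import math

# Stored "memories" of prior experience: (situation, action) pairs.
# Situations are feature vectors; actions are whatever the robot did.
memories = [
    ((0.0, 0.0), "stop"),
    ((1.0, 0.0), "turn_left"),
    ((0.0, 1.0), "turn_right"),
    ((1.0, 1.0), "go_forward"),
]

def recall(situation):
    """Return the action attached to the most similar stored situation."""
    def dist(memory):
        stored, _ = memory
        return math.dist(stored, situation)
    _, action = min(memories, key=dist)
    return action

print(recall((0.9, 0.1)))  # closest memory is (1.0, 0.0) -> "turn_left"
```

Sharing what every robot learns, as the quote suggests, then just means merging each robot's `memories` list into a common pool.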
He does not predict when it will occur, but he thinks that at some point these sorts of capabilities will result in a rapid increase in robot intelligence.
think about elected officials and regulators in the spirit of behavioral economics: they often lack self-control; have a difficult time evaluating complex situations; tend to stick with rules-of-thumb and default options rather than accept the cognitive and organizational costs of re-evaluating their positions; do not evaluate costs and benefits in a consistent way across different contexts; are not good at evaluating risks accurately, instead often responding to limited information and hype; and are overly averse to the risk of taking responsibility for decisions that might turn out poorly. This perspective must have widespread implications for decisions involving the complexities of the tax code or government budgets, policies affecting the workforce and the environment, openness to new sources of domestic and foreign competition, and foreign policy as well.
He is riffing off a paper by W. Kip Viscusi and Ted Gayer.
He cites in particular a paper by Jan Christoph Steckel, Ottmar Edenhofer, and Michael Jakob. Of all of the factors affecting carbon dioxide emissions, the most important is probably the increase in the carbon intensity of energy use in Asia and in developing countries, fueled (so to speak) by coal. Taylor notes that simply going for a global crackdown on coal use would punish countries that are well behind the U.S. and other developed countries in terms of wealth. He concludes,
if you aren’t a big supporter of near-term, large-scale, non-coal methods of producing electricity around the world, you aren’t really serious about reducing global carbon emissions.
Timothy Taylor writes,
The gains from reducing costs of end-of-life care shouldn’t be overstated. The proportion of Medicare spending that goes to end-of-life care has been roughly the same for the last few decades at about 25%. This regularity suggests that while overall health care costs have been rising, end-of-life care is not an increasing part of that overall issue. Intriguingly, Aldridge and Kelley report: “Medicare expenditures in the last year of life decrease with age, especially for those aged 85 or older … This is in large part because the intensity of medical care in the last year of life decreases with increasing age.” Indeed, older adults as a group are a minority of those with the highest health care costs in any given year
Read the whole thing. His Aldridge-Kelley citation is to a report of the sort that only Tim Taylor seems to dig up.
The purpose of money markets is to provide liquidity for individuals and firms. The cheapest way to do so is by using over-collateralised debt that obviates the need for price discovery. Without the need for price discovery the need for public transparency is much less. Opacity is a natural feature of money markets and can in some instances enhance liquidity, as I will argue later.
Pointer from Timothy Taylor. The theory that debt is used when the underlying assets are opaque is not quite new. My articulation of it owes a bit to the delegated monitoring idea of Doug Diamond.
The natural error for economists to make is to assume that bank creditors “see through” the bank to the underlying assets. What Diamond got me thinking is that the whole point of a debt contract is to ensure that the creditor does not have to see through the bank unless the bank gets into trouble. That is the insight that Holmstrom is re-discovering.
Timothy Taylor writes,
The EITC adds a lot of complexity to the tax forms of the working poor, who are often not well-positioned to cope with that complexity, nor to hire someone else to cope with it. About 20% of EITC payments go to those who don’t actually qualify, which seems to happen because low-income people hand over their tax forms to paid tax preparers who try to get them signed up. Of course, there’s another group, not well-measured as far as I know, of working-poor households who would be eligible for the EITC but don’t know how to sign up for it.
One of the advantages of a universal benefit is that you give the money to everyone. My idea is that you would then tax some of it back at a marginal rate of 20 or 25 percent. That is, for every dollar that someone earns in the market, they lose 20 or 25 cents in universal benefits. Compared with a marginal tax rate of zero, 25 percent is more complex and creates a disincentive. But it is much less complex and de-motivating than our current system of sharp cut-off points for benefits like food stamps and housing assistance. And having a non-zero tax rate allows you to have a higher basic benefit at lower overall budget cost.
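The clawback arithmetic can be written out explicitly. A toy comparison of a flat 25 percent clawback against a sharp benefit cutoff (the dollar figures are illustrative assumptions, not from the post):

```python
def net_benefit(earnings, basic=10000.0, clawback=0.25):
    """Universal benefit taxed back at a flat marginal rate.

    Each dollar earned reduces the benefit by `clawback` dollars,
    but the benefit never falls below zero.
    """
    return max(basic - clawback * earnings, 0.0)

def cliff_benefit(earnings, benefit=10000.0, cutoff=20000.0):
    """Stylized current-system benefit with a sharp cutoff:
    full benefit below the income cutoff, nothing above it."""
    return benefit if earnings < cutoff else 0.0

# Under the clawback, earning one more dollar always nets 75 cents.
# Under the cliff, crossing the cutoff can wipe out the whole benefit.
for earnings in (19999, 20001):
    print(earnings, net_benefit(earnings), cliff_benefit(earnings))
```

The smooth schedule never makes an extra dollar of earnings a net loss, which is the de-motivation point in the paragraph above.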
Luigi Zingales writes,
there is precious little evidence that shows the positive role of other forms of financial development, particularly important in the United States: equity market, junk bond market, option and future markets, interest rate swaps, etc.
Found by Timothy Taylor.
Many financial practices are designed to evade regulations or optimize with respect to them. If regulators had not been so laggard in removing the interest rate ceilings on bank deposits, we might never have seen money market funds. If interstate banking had not been so restricted in the 1960s and 1970s, then there would have been no need for a mortgage securities market. If there were fewer short-sale restrictions and looser margin requirements in the stock market, then futures and options in the stock market might not have been created. My guess is that if you were to examine why firms use junk bonds rather than equity finance, you would find a regulatory story there as well.
Back in the early 1990s, someone coined the expression, “The Internet interprets censorship as damage and routes around it.” Financial markets attempt to do the same with regulation.
Timothy Taylor points to an IMF report, which says,
The investment slump in the advanced economies has been broad based. Though the contraction has been sharpest in the private residential (housing) sector, nonresidential (business) investment—which is a much larger share of total investment—accounts for the bulk (more than two-thirds) of the slump
I have an argument that this represents crowding out, caused by increased government deficits. However, this is not textbook crowding out, in which the government increases the demand for savings, raising interest rates and reducing investment.
Instead, it is crowding out of financial intermediation. Recall that my view of financial intermediation is that the public wants to issue risky, long-term liabilities and hold safe, short-term assets. Financial intermediaries accommodate this by doing the opposite.
When the government incurs large deficits, it issues safe, short-term liabilities. This crowds out private financial intermediation, because much of the demand for safe, short-term liabilities is satisfied by government debt. Think of the public holding a $100 balance sheet. Without government deficits, financial intermediaries might hold $100 in risky, long-term investments and issue $100 in safe, short-term securities. Instead, with $100 in government bonds issued to finance deficits, the public’s demand for safe, short-term securities can be satisfied with zero investment. Financial intermediation goes away altogether, or just consists of intermediaries who issue safe securities backed by government bonds.
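The $100 balance-sheet story can be reduced to one line of arithmetic (a toy illustration of the argument above, with hypothetical numbers):

```python
# Toy version of the crowding-out story: the public demands a fixed
# amount of safe, short-term assets; government debt satisfies part
# of that demand, and private intermediaries manufacture the rest
# by holding risky, long-term investments.

def intermediation_needed(public_safe_demand, govt_debt_outstanding):
    """Safe short-term securities private intermediaries must issue
    (backed by risky long-term assets) after government debt has
    absorbed part of the public's demand for safety."""
    return max(public_safe_demand - govt_debt_outstanding, 0)

# No deficits: intermediaries fund $100 of risky, long-term investment.
print(intermediation_needed(100, 0))    # 100

# $100 of government bonds: private intermediation falls to zero.
print(intermediation_needed(100, 100))  # 0
```

Intermediate deficit levels give intermediate crowding out, e.g. $40 of government debt leaves $60 of private intermediation in this toy setup.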
For electric cars to be truly cost-competitive with gas-fueled vehicles, battery costs need to drop dramatically. The rule of thumb has been that the cost of the battery pack in an electric car needs to drop to $150 per kilowatt-hour or less. A few years back, it was standard to read that battery packs in electric cars were costing $700 per kilowatt-hour or more. Given the historically slow pace of progress in battery technology, it looked as if achieving these cost savings might be three or four decades away.
…the market leaders for electric cars have already reached a cost of $300 per kilowatt-hour–that is, they aren’t just offering another set of predictions for how batteries will improve, but arguing that they have already improved.
…On this trajectory, nonsubsidized electric vehicles would be commercially viable in about a decade.
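The implied trajectory is easy to check with a constant-rate decline calculation. The seven-year span for the $700-to-$300 drop is my assumption (the post only says "a few years back"), so treat the result as a rough sanity check, not the article's own figure:

```python
import math

def years_to_target(current, target, annual_decline):
    """Years for a cost to fall from `current` to `target` at a
    constant fractional decline per year."""
    return math.log(target / current) / math.log(1 - annual_decline)

# Implied annual decline if packs fell from $700 to $300 over seven
# years (the time span is an assumption, not from the article):
decline = 1 - (300 / 700) ** (1 / 7)
print(f"implied decline: {decline:.1%} per year")

# At that pace, time from $300 to the $150 rule-of-thumb threshold:
print(f"years to $150: {years_to_target(300, 150, decline):.1f}")
```

Under these assumptions the implied decline is roughly 11 percent per year, and $150 arrives in well under a decade; a slower assumed historical pace stretches that out, which is one way to read the skepticism in the next paragraph.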
I have my doubts that batteries will improve at a high rate going forward. I suspect that energy efficiency will improve at least as quickly. That means that in relative terms, all-electric cars will gain little, if anything, on gasoline-powered cars.
[UPDATE: A reader writes,
In March 2013 Bjorn Lomborg wrote a piece for the WSJ suggesting that electric cars have “a dirty little secret”: these cars are much more energy intensive to produce, especially because of the mining of lithium for the batteries. As a result, there are no environmental gains from such cars until they’ve been driven about 80,000 miles. So the real effect of the subsidies to get people to buy such vehicles is to allow upper income people to feel good about themselves. That’s a laudable goal for a government program, isn’t it?
This is a general problem with trying to be a “green” consumer. When X costs less than Y, the market is telling you that X uses fewer resources. When you think that X uses too much of a particular (seen) resource and you buy Y instead, then you use more of another (unseen) resource.
Oscar Jorda and others write,
The rapid increase in credit-to-GDP ratios since the mid-1980s was just the final phase of a long historical process. The run-up started at the end of World War II and was shaped by a long boom in mortgage lending. One of the startling revelations has been the outsize role that mortgage lending has played in shaping the pace of recoveries, whether in financial crises or not, a factor that has been underappreciated until now.
Pointer from Mark Thoma.
When I read this, I wanted to shout “Underappreciated by whom?” Maybe by the macroeconomists who were trained by Stan Fischer, Thomas Sargent, and their progeny. But until Genghis Khan pillaged macro, every macroeconomist knew that housing and mortgage credit rationing were major economic forces in the United States. Until the late 1980s, the process generating recessions consisted of interest rates rising, mortgage lenders losing deposits (because of interest rate ceilings), home buyers losing access to credit, and housing collapsing. And every macroeconomist knew this.
And even if you are too young to know any old-fashioned macro, you could read Ed Leamer. I would suggest that the authors of this essay try searching for Leamer’s “Housing Is the Business Cycle.”
What this essay shows to be underappreciated is Google.
Note that there is more to the essay, which Timothy Taylor found worthwhile.