Regulate big tech?

Peggy Noonan writes,

In February 2018 Nicholas Thompson and Fred Vogelstein of Wired wrote a deeply reported piece that mentioned the 2016 meeting. It was called so that the company could “make a show of apologizing for its sins.” A Facebook employee who helped plan it said part of its goal—they are clever at Facebook and knew their mark!—was to get the conservatives fighting with each other. “They made sure to have libertarians who wouldn’t want to regulate the platform and partisans who would.” Another goal was to leave attendees “bored to death” by a technical presentation after Mr. Zuckerberg spoke.

It all depends on Congress, which has been too stupid to move in the past and is too stupid to move competently now. That’s what’s slowed those of us who want reform, knowing how badly they’d do it.

Yet now I find myself thinking: I don’t care. Do it incompetently, but do something.

On this issue, I am in the libertarian camp. It is not just that government regulation will be incompetent. In the end, it will lead to concentration of power that is tighter and more dangerous than what we have now. The more power we cede to government over the Internet, the less open and free it is going to be.

Rooting for government to regulate tech is like rooting for Putin to kill off Russian oligarchs. The oligarchs may be no-goodniks, but Putin is not going to make Russia a better place by killing them.

I am also wary of the government taking the initiative to stop robocalls. It seems almost certain that any government solution is going to involve enhanced technology for tracking individuals on the Internet and for censorship. Eventually, it is going to be used for those purposes.

All I want are spam filters on my phone. Imagine an app that sent into voicemail a call from any phone number that is not in my contacts. How hard is that to do?

UPDATE: not hard at all, according to this article. On an iPhone, just use Do Not Disturb, but with exceptions for your contacts.

Open Settings > Do Not Disturb.
Tap Allow Calls From.
You have several options, but one is All Contacts.

Thanks to a commenter on this post for the pointer.
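
For what it's worth, the filtering rule I have in mind is trivial to express. A toy sketch in Python, with a made-up contacts list and made-up phone numbers, just to show the logic:

```python
# Toy sketch of a contacts-whitelist call filter, analogous to
# "Do Not Disturb with exceptions for contacts." All numbers are made up.

CONTACTS = {"+1-301-555-0143", "+1-202-555-0177"}

def route_call(caller_number: str) -> str:
    """Ring for known contacts; send everyone else straight to voicemail."""
    return "ring" if caller_number in CONTACTS else "voicemail"

for number in ("+1-301-555-0143", "+1-800-555-0199"):
    print(number, "->", route_call(number))
```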

Cultural mixing watch

Elisabeth Braw writes (WSJ),

For a model, look to Finland. For nearly six decades, the Finnish government has offered the National Defense Course, a quarterly boot camp for leaders from the armed forces, government, industry and civil society. “The beauty is that every sector of society is present,” explains retired Lt. Gen. Arto Räty, a former director of the National Defense Course. “Yes, the course is run by the armed forces, but it’s not a military course. It’s a national security course.”

Without the course, many of the participants would never cross paths. The course has allowed Finland to bridge the national-security gap between civil society and the armed forces that exists in most other developed countries.

Recall that in my annotation of the Cowen-Andreessen-Horowitz podcast, I wrote

In the case of government and tech, I think that the highest potential for mixing is in applications related to the military and to security.

Question from Handle

He writes,

“Should we worry about low domestic population growth rates” is actually a fascinating question, along with typical follow-up questions of “What could be done about it, and should any of that be done?”, and I hope Arnold gives his take on it one day.

For more than twenty years, I have thought that the age of “mass ____” was ending, as the Industrial Revolution gives way to the digital age. Henry Ford needed a big army of workers. Bill Gates did not.

One implication of this is that we should not worry about low domestic population growth rates. The masses are not going to make the same contributions to economic and military strength that they did in the industrial era.

If you scratch me, you will find an elitist. That is, I think that the future will be shaped by a relatively small elite. But the catch is that I do not know what that elite will look like or what it should look like. I don’t want it to look like the Progressive products of American higher education. Unless they turn out to be very Straussian.

Tyler, Marc, and Ben

That is, Cowen, Andreessen, and Horowitz, in a 40-minute podcast. I chose to annotate it. Annotating is, like writing a book review, a way for me to absorb the material. Some excerpts from my essay:

1. As far as I can tell, blockchain can only help to prevent one type of cheating: digital forgery. If blockchain is going to have a killer app, then it has to enable a transaction to take place where the only impediment to undertaking such a transaction currently is the threat of digital forgery.

I would add that digital money faces the threat of digital forgery. But digital money also faces other impediments. ICYMI, my whole point is that other impediments to trust are, in the grand scheme of things, much more important.
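
To make the digital-forgery point concrete, here is a toy hash-chained ledger (my own illustration, not anything from the podcast). Rewriting an earlier entry breaks every later link, which is the kind of cheating a blockchain makes detectable; it says nothing about whether the counterparty is trustworthy in the first place.

```python
# Toy hash-chained ledger: each entry commits to the hash of the previous
# entry, so tampering with history is detectable. Illustration only.
import hashlib

GENESIS = "0" * 64

def entry_hash(prev_hash: str, data: str) -> str:
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(records):
    chain, prev = [], GENESIS
    for data in records:
        h = entry_hash(prev, data)
        chain.append({"data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain) -> bool:
    prev = GENESIS
    for block in chain:
        if block["prev"] != prev or block["hash"] != entry_hash(prev, block["data"]):
            return False
        prev = block["hash"]
    return True

ledger = build_chain(["Alice pays Bob 5", "Bob pays Carol 2"])
print(verify(ledger))                      # True
ledger[0]["data"] = "Alice pays Bob 500"   # attempted forgery
print(verify(ledger))                      # False -- the tampering shows up
```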

2. New listening technology strikes me as incremental, not revolutionary. Portable radios are a very old technology. I listened to the Beatles sing “When I’m 64” on a transistor radio when I was 13. Now I’m 64.

3. Could AR and VR become a big part of everyone’s life? In my opinion, yes. Have the breakthroughs necessary for that to happen occurred or are they on the verge of occurring? In my opinion, no.

I would add that I do not know what the key breakthroughs will be. In fact, we will only have a better idea in hindsight. Who knew ahead of time that the breakthroughs needed to make mobile Internet access a winning technology had taken place by 2007 but not earlier?

4. I assume that in Israel and China, security issues provide an arena for cultural mixing between government and technology. Presumably, there is also some cultural mixing between Silicon Valley and part of the American military and homeland security apparatus.

More AI skepticism

From a commenter.

If we start from new loans to train the computer, we have no real test until lots of defaults happen. If, as mentioned above, only 2.5% of mortgages default in normal situations, it will take a long time to accumulate more observations than there are variables to look at. The machine can’t test itself until there are hundreds or thousands of defaults to compare, even assuming there was not a special case like a financial crisis that skews the numbers. Our only real hope is that the defaults that do happen occur very quickly in the life of the mortgage, first 3-5 years or so, in which case in a decade we will probably have good amounts of data. I don’t know how long it takes the average mortgage default to happen, so it might work, or it might not.

Defaults do tend to occur early in the life of a mortgage. Over time, there is usually equity buildup due to paydown of principal and rising home values, so that seasoned loans tend to perform well. This was true in 2007 and 2008: loans originated in 2003 or earlier were not prone to default. But the commenter's points (read the whole thing) are still well taken.
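
To put the commenter's numbers in perspective, here is a back-of-the-envelope calculation. The default rate is the 2.5 percent figure mentioned above; the target default counts are arbitrary round numbers.

```python
# How many loans must be originated to observe a given number of defaults,
# assuming a 2.5% default rate? Targets are arbitrary round numbers.
default_rate = 0.025

for target_defaults in (100, 1_000, 10_000):
    loans_needed = int(target_defaults / default_rate)
    print(f"{target_defaults:,} defaults require roughly {loans_needed:,} loans")

# Prints: 100 -> 4,000 loans; 1,000 -> 40,000; 10,000 -> 400,000.
# And if most defaults take 3-5 years to show up, those observations
# accumulate slowly.
```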

With chess, a database of games is probably very representative of all of the circumstances that the computer is going to encounter. That is not true with mortgage lending.

I read recently that average credit scores are currently the highest they have ever been. Does that mean that making a loan right now is safer than it’s ever been? I doubt it. If conditions are unprecedented, then obviously they cannot be represented in the database.

Russ Roberts’ podcast with Rodney Brooks also elaborates on AI skepticism.

Kai-fu Lee talks his book

In an essay in the WSJ adapted from a book due out today, Lee writes,

While a human mortgage officer will look at only a few relatively crude measures when deciding whether to grant you a loan (your credit score, income and age), an AI algorithm will learn from thousands of lesser variables (what web browser you use, how often you buy groceries, etc.). Taken alone, the predictive power of each of these is minuscule, but added together, they yield a far more accurate prediction than the most discerning people are capable of.

I am willing to bet against that.

1. A credit score already makes use of a lot of information that human underwriters never used to look at. Credit scoring was “big data” before that term existed.

2. To use those “lesser variables” in the United States, you have to prove that they don’t harm access to credit of minorities.

3. The marginal value of additional information about the borrower is not very high. In a home price boom, “bad” borrowers will repay their loans; in a bust, some “good” borrowers will default.
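
A small simulation of point 3, with default probabilities that I am inventing purely for illustration: if the boom-or-bust regime drives outcomes, then even perfect knowledge of borrower quality adds little.

```python
# Illustration of point 3. The default probabilities below are invented;
# the point is only that the regime matters far more than the borrower.
import random

random.seed(0)

# Assumed default probabilities by (regime, borrower type).
P_DEFAULT = {
    ("boom", "good"): 0.01, ("boom", "bad"): 0.02,
    ("bust", "good"): 0.08, ("bust", "bad"): 0.12,
}

def simulate(regime: str, n: int = 100_000):
    loans = {"good": 0, "bad": 0}
    defaults = {"good": 0, "bad": 0}
    for _ in range(n):
        kind = "good" if random.random() < 0.5 else "bad"
        loans[kind] += 1
        if random.random() < P_DEFAULT[(regime, kind)]:
            defaults[kind] += 1
    return {k: defaults[k] / loans[k] for k in loans}

for regime in ("boom", "bust"):
    rates = simulate(regime)
    print(regime, {k: f"{v:.1%}" for k, v in rates.items()})
# In the boom, even "bad" borrowers default less often than "good"
# borrowers do in the bust; the regime swamps the borrower information.
```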

I am starting to believe that artificial intelligence, when it consists of making predictions using “big data,” is overrated by many pundits. As with Lee’s mortgage underwriting example, it does not help if AI solves the wrong problem. To take another example, figuring out which ads to serve on content sites that shouldn’t be ad-supported in the first place is solving the wrong problem.

Emergent Ventures: first thoughts

It is a Tyler Cowen project, with seed funding from Peter Thiel. The press release says that it is

an incubator fellowship and grant program for social entrepreneurs with highly scalable ideas for meaningfully improving society.

1. It definitely is not “Shark Tank.” I have only seen parts of a few episodes, but the entrepreneurs had very small ideas, and the sharks only cared about whether the entrepreneurs had made some progress and could demonstrate that the market had enough revenue potential.

2. In a brief podcast, Tyler says that his comparative advantage is spotting talent. That almost made me spill my orange juice (I don’t drink coffee). If I had a dime for everyone who thinks that spotting talent is their comparative advantage, I could fund Emergent Ventures. I am not saying that Mercatus is bad at spotting talent, but are they better than Google, or Andreessen Horowitz, or Paul Graham, or . . .? I guess it depends on what domain you are talking about.

3. Maybe their slogan should be, “We’re looking for the next Robin Hanson.”

4. One way to come up with a moonshot is to think of a big, annoying problem to solve. Some possibilities that come to mind:

–the intellectual collapse of American education, including higher education and K-12.

–terrorism and the responses to terrorism

–potential use or mis-use of biotechnology, nanotechnology, and artificial intelligence

5. Have I ever had a “change-the-world” idea? Back in October of 2000, I wrote,

Now, imagine that everyone in the world is given an “ethics rating” that is analogous to a chess rating. Maybe 2500 would be the highest, and 0 would be the lowest. Your rating would affect how you could use various technologies. “Ethical grandmasters” would be allowed to do advanced research in biotechnology and robotics.

Note the passive voice. It raises the question of who is going to create and control such an “ethics rating” system. The Chinese government? They seem inclined to implement such an idea, but they are not necessarily the ones I want to see doing it.

At the time, I assumed that I would initiate the ethics rating system by designating a few people as ethical grandmasters. They would in turn rate other people, and these would rate other people, until everyone had a rating. You can read the essay to see the idea sketched out a bit more. Note that as of the time I wrote the essay, I was still left of center, as you can see from the people I named as possible ethical grandmasters.
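
A toy sketch of that propagation step, with names, scores, and a single made-up rule (each person's rating is the average of the ratings assigned by people who already have one). The hard design questions, of course, are who seeds the system and how to keep it from being gamed.

```python
# Toy sketch of ratings propagating outward from a few seed "grandmasters."
# Names, scores, and the averaging rule are invented for illustration.

seed_ratings = {"grandmaster_1": 2500, "grandmaster_2": 2450}

# Hypothetical record of who rated whom, and the score assigned.
ratings_given = {
    "alice": [("grandmaster_1", 2200), ("grandmaster_2", 2300)],
    "bob":   [("alice", 1800)],
    "carol": [("bob", 1500), ("alice", 1900)],
}

def propagate(seeds, given, rounds: int = 5):
    ratings = dict(seeds)
    for _ in range(rounds):
        for person, votes in given.items():
            # Only count votes from raters who already have a rating.
            usable = [score for rater, score in votes if rater in ratings]
            if usable:
                ratings[person] = sum(usable) / len(usable)
    return ratings

print(propagate(seed_ratings, ratings_given))
# Ratings spread outward: alice from the grandmasters, bob from alice,
# carol from alice and bob.
```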

As I re-read the essay, I think that this qualifies as a moonshot idea. It might even be worth trying.

Tyler Cowen and Robin Hanson on how the truth can hurt

Sounding a bit like Martin Gurri, Tyler Cowen writes,

An informed populace, however, can also be a cynical populace, and a cynical populace is willing to tolerate or maybe even support cynical leaders. The world might be better off with more of that naïve “moonshot” optimism of the 1960s.

Carrying the idea to extremes, Robin Hanson writes,

These two facts, better tech for reading feelings and widespread hypocrisy, seem to me to be on a collision course. As a result, within a few decades, we may see something of a “hypocrisy apocalypse”, or “hypocralypse”, wherein familiar ways to manage hypocrisy become no longer feasible, and collide with common norms, rules, and laws.

I don’t think David Brin ever thought it through this way.

Kling on George Gilder’s Google prophecy

One brief excerpt from my essay on Life After Google.

I also think that decentralization is important for liberty. I once hoped that a decentralized Internet would enhance freedom. Now, I am inclined to see lack of appreciation for liberty as a fundamentally human problem, not a technological one. Decentralized computer architecture solves some problems, but it creates others.

Elsewhere, David Henderson offers extensive comments on the recent WSJ interview with Gilder.

A non-totalitarian future

As I mentioned the other day, George Gilder does not accept the vision of a totalitarian future. Here is Gilder in the WSJ.

With the cryptographic revolution, he says, “we’re now in charge of our own information. For the first time in history, really, you don’t have to prove who you are, or what you are, before a transaction.” A blockchain allows users “to be anonymous if they wish, while also letting them keep a time-stamped record of all their previous transactions. It allows us to establish unimpeachable facts on the internet.”

In his new book, Life After Google, Gilder argues that the concentration of power at Google or Facebook is a temporary phenomenon. He believes that power will diffuse once again.

Where I am skeptical is that I believe the power the top companies have comes from their skills at strategy and at managing software development. Those skills are highly concentrated, and I do not see that changing.