Social media and the art of thinking unreasonably

Here comes my rant against political uses of social media, notably Twitter and Facebook.

Politics on social media is not reflective. It is not deliberative. It is not long-term thinking. It is short-term, reactive, tribal, and emotional.

Social media facilitates the formation of mobs. Contra Howard Rheingold, mobs are not smart. When it comes to politics, mobs are epitomized by Charlottesville.

Politics on social media is cyber-bullying. Progressives started it, and they have been relentless and ruthless practitioners of it. But in recent years their opponents have discovered it, culminating in the election of the cyber-bully-in-chief.

I wish somebody could figure out how to walk it back. Social media is not the whole problem when it comes to political polarization and anger, but the way it works today, it sure as heck is not the solution.

The art of thinking reasonably

David Brooks starts out talking about Richard Thaler, but he moves on to recommend a forthcoming book by Alan Jacobs, called How to Think. Brooks writes,

Jacobs nicely shows how our thinking processes emerge from emotional life and moral character. If your heart and soul are twisted, your response to the world will be, too. He argues that by diagnosing our own ills, we can begin to combat them. And certainly I can think of individual beacons of intellectual honesty today: George Packer, Tyler Cowen, Scott Alexander and Caitlin Flanagan, among many.

My thoughts.

1. Read the column. It sounds as though Jacobs focuses on tribal dynamics. I expect his themes will overlap with my own Three Languages of Politics. The book will be out tomorrow, and unless I am put off by perusing a sample or by other reviews, I expect to read it.

2. Thaler focuses on individual choices that are irrational (in the economic sense); Jacobs focuses on how social context can give us an unreasonable attachment to our opinions. In that sense, the Thaler opening is a head fake, and I think that the column would have been better without it.

Dealing with complex social problems

Seth Kaplan writes,

Despite our current state of social decay, a systems approach could help rebuild habits, manners, and morals that have fallen into disrepair. Because it takes a comprehensive view of social problems, such an approach would entail a multi-step process.

First, it would bring together a wide range of concerned actors from government, philanthropies, religious institutions, NGOs, the private sector, and local communities to build a common picture of the current reality. Second, these stakeholders would be given an opportunity to lay out their competing explanations for why the complex problems persist and even grow despite decades of attempts to improve conditions. Third, these different perspectives would be integrated into a much more comprehensive picture of the whole system, including underlying drivers of the problems. Fourth, with this comprehensive picture, stakeholders would see how various well-intended efforts to solve the problems in the past often made things worse. Finally, they could use this knowledge to forge a new vision of how the future might unfold through a wide range of complementary initiatives that can combine to produce sustainable, system-wide change.

Borrowing from David Peter Stroh, Kaplan contrasts conventional thinking with systems thinking.

Note that the first point highlights the oversimplification bias that is embedded in conventional thinking.

The paradox of software development

I finished Tim O’Reilly’s WTF. For the most part, his discussion of the way that the evolution of technology affects the business environment is really insightful. This is particularly true around chapter 6, where he describes how companies try to manage the process of software development.

I like to say that computer programming is easy and software development is hard. One or two people can write a powerful set of programs. Getting a large group of people to collaborate on a complex system is a different and larger challenge.

It is like an economy. We know that the division of labor makes people more productive. We know that some of the division of labor comes from roundabout production, meaning producing a final output by using inputs that are themselves produced (also known as capital). Having more people involved in an economy increases the opportunities to take advantage of the division of labor and roundabout production. However, the more people are involved, the more challenging are the problems of coordination.

O’Reilly describes Amazon as being able to handle the coordination problem in software development by dividing a complex system into small teams. You might think, “Aha! That’s the solution, Duh!” But as he points out, dividing the work among different groups of programmers was the strategy used in building the original healthcare.gov, with famously disastrous results. You risk doing the equivalent of having one team start to build a bridge from the north bank of a river and another team start to build from the south bank, and because of a misunderstanding their structures fail to meet in the middle.

He suggests that Amazon avoids such pitfalls by using what I would call a “document first” strategy. The natural tendency in programming is to wait until the program is working to document it. You go back and insert comments in the code explaining why you did what you did. You give users tips and warnings.

With disciplined software development, you try to document things early in the process rather than late. Before you start coding, you undertake design. Before you design, you gather requirements. I’m oversimplifying, but you get the point.

As O’Reilly describes it, Amazon uses a super-disciplined process, which he calls the promise method. The final user documentation comes first. Each team’s user documentation represents a promise. I’ve sketched the idea in a couple of sentences, but O’Reilly goes into more detail and also references entire books on the promise method.

Why isn’t most software developed in a super-disciplined way? I think it is because software development reflects the organizational culture of a business, and most business cultures are just not that disciplined. They impose on their software developers a combination of unstable requirements and deadline pressure. In practice, the developers cannot solidify requirements early, because they cannot get users to articulate exactly what they want in the first place.

Also, requirements change based on what people experience, and it takes discipline to decide how to handle these discoveries. What must you implement before you release, and what can you put off for the next version?

Consider three methods of software development. All of these have something to be said for them.

1. Document first–specify exactly what each component of the system promises to do.
2. Rapid prototyping–keep coming up with new versions, and learn from each version.
3. Start simple–get a bare-bones system working, then add the more sophisticated features.

If you do (1) without (3), you end up with healthcare.gov. If you do (1) without (2), your process is not agile enough. You stay stuck with the first version that you designed, before you found out the real requirements. If you do (2) and (3) without (1), you get to a point where implementing a minor change requires assembling 50 people to meet regularly for six months in order to unravel the hidden dependencies across different components.

From O’Reilly, I get the sense that Amazon has figured out how to do all three together. That seems like a difficult trick, and it left me curious to know more about how it’s done.

Sentences about Puerto Rico

From commenter Handle.

it was already in really bad shape before the storm. But Puerto Rico has had just about as many institutions of American government as is possible for any slightly-foreign, Spanish-speaking place to have for a long, long time. It obeys federal law, gets generous federal subsidies, has elections, courts, local offices of all the federal agencies, military bases, etc. Puerto Ricans are American citizens, have open borders with the rest of the U.S., and so forth.

And yet all those institutions don’t seem to have done the island much good in terms of convergence: it always seems to trail the rest of the US by the same proportion economically, consistently lags much more educationally, local governance is poor quality, and they are effectively bankrupt – though this could also be said for some of the worst mainland states (Connecticut) and cities (Chicago). They have been going through the motions, but not getting the results.

South Korea and North Korea are a case of similar people, different institutions. Are Puerto Rico and the U.S. a case of similar institutions, different people?

Re-litigating Netscape vs. Microsoft

In WTF, Tim O’Reilly writes,

Netscape, built to commercialize the web browser, had decided to provide the source code to its browser as a free software project using the name Mozilla. Under competitive pressure from Microsoft, which had built a browser of its own and had given it away for free (but without source code) in order to “cut off Netscape’s air supply,” Netscape had no choice but to go back to the web’s free software roots.

This is such an attractive myth that it just won’t die. I have been complaining about it for many years now.

The reality is that Netscape just could not build reliable software. I know from bitter personal experience that their web servers, which were supposed to be the main revenue source for the company, did not work. And indeed Netscape never used its server to run its own web site. They never “ate their own dog food,” in tech parlance.

On the browser side, Netscape had a keen sense of what new features would enhance the Web as an interactive environment. They came up with “cookies,” so that when you visit a web site it can leave a trace of itself on your computer for later reference when you return. They came up with JavaScript, a much-maligned but ingenious tool for making web pages more powerful.
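For readers who have never seen the mechanism, here is a minimal sketch in TypeScript of what a cookie does (the cookie name lastVisit is my own invention, purely for illustration): the site leaves a small named value with the browser and reads it back on a later visit.

```typescript
// Runs in a browser. The site asks the browser to store a small named value.
function rememberVisit(): void {
  // With no expiry date this is a session cookie; real sites usually set one.
  document.cookie = "lastVisit=" + encodeURIComponent(new Date().toISOString());
}

// On a later page load, the site reads the value back.
function previousVisit(): string | null {
  const entry = document.cookie
    .split("; ")
    .find((pair) => pair.startsWith("lastVisit="));
  return entry ? decodeURIComponent(entry.split("=")[1]) : null;
}
```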

But Netscape’s feature-creation strategy backfired because they couldn’t write decent code. Things played out this way.

1. Netscape would introduce a feature into the web browser.
2. An Internet standards committee would bless the feature, declaring it a standard.
3. Microsoft would make Internet Explorer standards-compliant, so that the feature would work.
4. The feature would fail to work on the Netscape browser.

In short, Netscape kept launching standards battles and Microsoft kept winning them, not by obstructing Netscape’s proposed standards but by implementing them. Netscape’s developers were too incompetent to deliver a browser that complied with the company’s own proposed standards.

I’m sure that if Netscape could have developed software in-house, they would have done so. But because they could not manage software development internally, they just gave up and handed the browser project over to the open source community. And need I add that the most popular browser today is not the open source Mozilla but the proprietary Chrome?

Here is one of my favorite old essays on the Microsoft-Netscape battle.

I don’t tweet with discretion

This is a public service announcement. Some people on Twitter expect me to engage with them on that service. That is not going to happen. I only tweet automatically, through blog posts. The blog posts on this site are echoed to Twitter in a way that only the blogging software understands. I have never issued a discretionary tweet, nor do I plan to in the future.

Twitter’s rapid-fire format is contrary to what I consider to be a process conducive to reasonable thinking. Instead, I am a big believer in slow-reaction commentary.

Most posts, including this one, are composed days in advance of when they are scheduled to appear. That slows me down in two ways.

1. It keeps me from being able to jump in right away to comment on today’s news. In fact, the Topic Du Jour often passes by before I can comment on it. That is a good thing. I save most of my thoughts for issues with more enduring significance.

2. When I do compose a post, I have time to reconsider it before it appears. Sometimes, new information on the topic comes in. Sometimes, I have second thoughts. I probably edit close to half of my posts, occasionally deleting one altogether, before they appear. As it is, I probably regret about 1 out of every 20 posts that appears. If I took away the time lag, it would be closer to 10 out of 20 that I wish I could take back.

Why people distrust government

Jeffrey Friedman’s answer:

In principle, people might have responded to accumulating perceptions of government failure by dialing back their expectations of government performance. But in practice, this would have violated the tacit assumption that justifies government’s attempt to solve social and economic problems to begin with: the assumption that modern society is so simple that the solutions to its problems are self-evident.

In an earlier essay, Friedman coined the phrase “epistemological utopianism.” I think we need to come up with a catchier phrase to describe the belief that modern society’s problems are easily solved by people with the right motives. I nominate “oversimplification bias,” but I welcome other suggestions. Provisionally, let me work with that term.

1. Friedman’s thesis is that oversimplification bias leads people to expect government to solve problems that it cannot solve. When the problems persist, distrust in government rises.

2. This leads people to hate those with whom they disagree. After all, if you believe that your side has the solutions, then you must assume that the other side does not want to solve the problems.

3. It also leads people to be arrogant about their own side. If the problems are simple, then our solutions must be correct, and that makes us really superior. (Note: an anti-Bobo Trump supporter can be just as arrogant in this sense as a Bobo elitist.)

4. I am afraid that mainstream economists are often afflicted with oversimplification bias. They reduce the problems of patterns of sustainable specialization and trade to monetary policy. They see health care policy in terms of mathematical and statistical models, ignoring all of the cultural baggage that we inherit. And so on.

Re-litigating Open Source Software

In his new book, Tim O’Reilly reminisces fondly about the origins of “open source” software, which he dates to 1998. Well he might, for his publishing company made much of its fortune selling books about various open source languages.

In contrast, in April of 1999, I called open source The User Disenfranchisement Movement.

…The ultimate appeal of “open source” is not the ability to overthrow Microsoft. It is not to implement some socialist utopian ideal in which idealism replaces greed. The allure of the “open source” movement is the way that it dismisses that most irksome character, the ordinary user.

In that essay, I wrongly predicted that web servers would be taken over by proprietary software. But that is because I wrongly predicted that ordinary civilians would run web servers. Otherwise, that essay holds up. In the consumer market, you see Windows and MacOS, not Linux.

The way that open source developers are less accountable to end users is reminiscent of the way that non-profit organizations are less accountable to their clients. Take away the profit motive, and you reduce accountability to the people you are supposed to be serving.

Still, the business environment is conducive to firms trying to expose more of their software outside the firm. When a major business need is to exchange data with outside entities, you do not want your proprietary software to be a barrier to doing that.

A local college computer teacher, whose name I have forgotten (I briefly hired him as a consultant but fired him quickly because he was disruptive), used to make the outstanding point that the essential core of computer programming is parsing. There is a sense in which pretty much every chunk of computer code does the job of extracting the characters in a string and doing something with them.

Computer programs don’t work by magic. They work by parsing. In principle, you can reverse engineer any program without having to see the code. Just watch what it takes in and what it spits out. In fact, the code itself is often inscrutable to any person who did not recently work on it.
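As a toy illustration (the format and function name here are my own, invented for the example), consider a short TypeScript routine that parses lines like host=example.com into a lookup table. Everything you need in order to reverse engineer it is visible in what goes in and what comes out:

```typescript
// Takes characters in, pulls structure out: the essential act of programming,
// on the teacher's view. Parses "name=value" lines into a map.
function parseConfig(text: string): Map<string, string> {
  const settings = new Map<string, string>();
  for (const line of text.split("\n")) {
    const trimmed = line.trim();
    if (trimmed === "" || trimmed.startsWith("#")) continue; // skip blanks and comments
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue; // ignore malformed lines
    settings.set(trimmed.slice(0, eq).trim(), trimmed.slice(eq + 1).trim());
  }
  return settings;
}

// Observed behavior alone tells you what it does:
// parseConfig("host=example.com\nport=8080")
//   -> Map { "host" => "example.com", "port" => "8080" }
```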

Ironically, some of the most inscrutable code of all is written in Perl, the open-source language that was a big hit in the late 1990s and was an O’Reilly fave. If you want to reverse-engineer someone else’s Perl script (or your own if it’s been more than a couple of years since you worked on it), examining the code just wastes your time.

There are just two types of protection for proprietary software. One is complexity. Some software, like Microsoft Windows or Apple iOS, is so complex that it would be crazy to try to reverse engineer it. The other form of protection is legal. You can file for a patent for your software and then sue anybody who comes up with something similar. Amazon famously tried to do that with its “1-Click” ordering button.

In today’s world, with data exchange such a crucial business function, you do not want to hide all of your information systems. You want to expose a big chunk of them to partners and consumers. The trick is to expose your software to other folks in ways that encourage them to enhance its value rather than steal it. Over the past twenty years, the increase in the extent to which corporations use software that is not fully hidden reflects the increase in data sharing in the business environment, not some magical property of open source software.