Handle’s theory of consolidation

Referring to Hayek’s claim that local knowledge favors decentralized decision processes, Handle comments,

IT and increasingly capable and sophisticated management information systems, which themselves benefit from massive economies of scale, and the management techniques they enable, have invalidated this argument. If anything, big companies now seem to have a clear advantage with regards to acquiring and leveraging ‘local knowledge’, and combined with the other advantages of brand recognition, size and sophistication and capacity for, e.g., rent-seeking and bearing the burden of compliance overhead, that leaves “the little, genuinely-independent guy” with zero chance in the long run.

1. I wonder if this applies to government. If the U.S. federal government took advantage of Big Data, could we be as well-run as Singapore or Norway? I tend to doubt it. Perhaps someone wants to argue that we could be that well-run if we had an epistocracy.

2. Once again, I am reminded of a Diamond Age world. Technology can allow giant enterprises to meet everyone’s needs cheaply. By the same token, luxury consists of goods and services provided by small artisan craftspeople.

3. This is another instance in which the Internet vision of the 1990s, the days of the “hippie Internet,” is turning out to be wrong.

4. Will there be sufficient dynamism in this big-firm economy? Where will competition come from? From small firms that can suddenly get big and unseat big incumbents? From big incumbents trying to encroach on one another’s turf?

The O’Reilly Cycle

One of the ideas in Tim O’Reilly’s new book is about a cycle in technology. I describe it this way.

Phase 1: a new hardware platform opens up (personal computers in the late 1970s; the Web in the mid-1990s; smart phones in this century). Lots of entrepreneurs try to play with it and figure out what to do with it.

Phase 2: competition to become a dominant infrastructure player within the platform: operating systems in personal computers (came down to Windows vs. Mac); the Web portal (came down to Google vs. AOL vs. Yahoo); capturing user attention on mobile phones (still up for grabs, I think, but with Facebook and Twitter as prominent examples today). In this phase, being more “open” is a competitive advantage. The platform with the bigger ecosystem of other people adding value to it wins. For example, Google won because it did the best job of incorporating the entire Web into its ecosystem. Amazon has opened its platform to just about any seller.

Phase 3: cannibalization. The winner in phase 2 decides that its revenue has maxed out as just an agnostic open platform, so it starts to take over profitable niches within the ecosystem. Microsoft creates Excel. Google captures ad revenue from content providers. In some sense, in phase 3 the winner backs away from the “open” strategy and instead tilts the playing field to favor its own offerings in the most profitable areas. But cannibalizing your ecosystem helps to drive ambitious entrepreneurs to move on to the next hardware platform, where they can have more opportunity.

There is a widespread perception that Apple, Facebook, Google, and Amazon are moving to the cannibalization phase. For example, Amazon is creating some of its own brands. Along these lines, commenter Handle and Tyler Cowen recommend this piece by Andre Staltz. I recommend it, also, and I plan to post on it after I read it again.

More thoughts on disciplined software development

Tom Killalea wrote,

Robert C. Martin described the single responsibility principle: “Gather together those things that change for the same reason. Separate those things that change for different reasons.” Clear separation of concerns, minimal coupling across domains of concern, and the potential for a higher rate of change lead to increased business agility and engineering velocity.

This is another characteristic of the disciplined approach to software development that Tim O’Reilly describes at Amazon. I didn’t include it in my previous post. Thanks to a reader for the pointer.
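To illustrate what the single responsibility principle looks like in practice, here is a toy sketch of my own (not something taken from Martin or from Amazon): pricing rules and invoice formatting change for different reasons, so each lives in its own class.

```python
# Hypothetical illustration of the single responsibility principle.
# Pricing rules change when the business changes its discounts;
# invoice formatting changes when presentation requirements change.
# Keeping them apart means each class has only one reason to change.

class PriceCalculator:
    """Knows only how to compute a price."""
    def __init__(self, discount_rate=0.0):
        self.discount_rate = discount_rate

    def total(self, unit_price, quantity):
        return unit_price * quantity * (1 - self.discount_rate)


class InvoiceFormatter:
    """Knows only how to present a price to the customer."""
    def render(self, customer, amount):
        return f"Invoice for {customer}: ${amount:,.2f}"


calc = PriceCalculator(discount_rate=0.10)
print(InvoiceFormatter().render("Acme", calc.total(unit_price=25.0, quantity=4)))
```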

Thanks also to a commenter, who sends a link to a memo.

There are without question pros and cons to the SOA approach, and some of the cons are pretty long. But overall it’s the right thing because SOA-driven design enables Platforms.

SOA stands for “service-oriented architecture.”

O’Reilly explains SOA, as does Zack Kanter.

each piece of Amazon is being built with a service-oriented architecture, and Amazon is using that architecture to successively turn every single piece of the company into a separate platform — and thus opening each piece to outside competition.

Thanks yet again to another commenter for the link.

I reiterate my view that a firm’s software development can only be as disciplined as its business units. I have a hard time picturing Freddie Mac in the late 1980s and early 1990s (when I was there) having the culture to do SOA, even if you gave the IT department all of 2017-vintage technology to work with. The dependencies across business units were extremely tight. Decisions made by the people negotiating terms with loan originators had downstream effects, years later, on the folks servicing defaulted loans, who had to determine what recourse Freddie had, if any, back to the originator to compensate for losses. In practice, this was handled in an informal, ad hoc manner. To get to the point where you could have executed SOA, you would have needed business units to understand and buy into a much more explicitly documented business process. That is what I cannot picture happening.
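For what it is worth, here is a hypothetical sketch of my own (not a description of Freddie Mac’s actual systems) of how a service-oriented design would force the originator-recourse question into an explicit, documented contract rather than informal institutional memory.

```python
# Hypothetical sketch: the recourse question becomes a service with an explicit
# contract, rather than knowledge passed informally between departments.
# All names and data here are invented for illustration.

from dataclasses import dataclass


@dataclass
class RecourseDecision:
    loan_id: str
    originator: str
    has_recourse: bool
    reason: str


class OriginatorRecourseService:
    """Owns the terms negotiated with loan originators.

    Downstream units (e.g., default servicing) call this service years later
    instead of relying on whoever happens to remember what was negotiated.
    """

    def __init__(self, recourse_terms):
        # Map of originator name -> negotiated recourse clause (toy data).
        self._terms = dict(recourse_terms)

    def recourse_for(self, loan_id, originator):
        clause = self._terms.get(originator)
        return RecourseDecision(
            loan_id=loan_id,
            originator=originator,
            has_recourse=clause is not None,
            reason=clause or "no recourse clause on file",
        )


service = OriginatorRecourseService({"First Example Bank": "full recourse on early-default losses"})
print(service.recourse_for("L-1001", "First Example Bank"))
```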

Recently, some economists have been struck by the high degree of inequality across firms. I can imagine that one source of inequality would be in the ability of management teams to take advantage of highly disciplined processes for software development. The skill and culture of the management team might be more important, and more unequal across firms, than was the case a few decades ago.

Social media and the art of thinking unreasonably

Here comes my rant against political uses of social media, notably Twitter and Facebook.

Politics on social media is not reflective. It is not deliberative. It is not long-term thinking. It is short-term, reactive, tribal, and emotional.

Social media facilitates the formation of mobs. Contra Howard Rheingold, mobs are not smart. When it comes to politics, mobs are epitomized by Charlottesville.

Politics on social media is cyber-bullying. Progressives started it, and they have been relentless and ruthless practitioners of it. But in recent years their opponents have discovered it, culminating in the election of the cyber-bully-in-chief.

I wish somebody could figure out how to walk it back. Social media is not the whole problem when it comes to political polarization and anger, but the way it works today, it sure as heck is not the solution.

The paradox of software development

I finished Tim O’Reilly’s WTF. For the most part, his discussion of the way that the evolution of technology affects the business environment is really insightful. This is particularly true around chapter 6, where he describes how companies try to manage the process of software development.

I like to say that computer programming is easy and software development is hard. One or two people can write a powerful set of programs. Getting a large group of people to collaborate on a complex system is a different and larger challenge.

It is like an economy. We know that the division of labor makes people more productive. We know that some of the division of labor comes from roundabout production, meaning producing a final output by using inputs that are themselves produced (also known as capital). Having more people involved in an economy increases the opportunities to take advantage of the division of labor and roundabout production. However, the more people are involved, the more challenging are the problems of coordination.

O’Reilly describes Amazon as being able to handle the coordination problem in software development by dividing a complex system into small teams. You might think, “Aha! That’s the solution, duh!” But as he points out, dividing the work among different groups of programmers was the strategy used in building the original healthcare.gov, with famously disastrous results. You risk doing the equivalent of having one team start to build a bridge from the north bank of a river and another team start to build from the south bank, and, because of a misunderstanding, their structures fail to meet in the middle.

He suggests that Amazon avoids such pitfalls by using what I would call a “document first” strategy. The natural tendency in programming is to wait until the program is working to document it. You go back and insert comments in the code explaining why you did what you did. You give users tips and warnings.

With disciplined software development, you try to document things early in the process rather than late. Before you start coding, you undertake design. Before you design, you gather requirements. I’m oversimplifying, but you get the point.

As O’Reilly describes it, Amazon uses a super-disciplined process, which he calls the promise method. The final user documentation comes first. Each team’s user documentation represents a promise. I’ve sketched the idea in a couple of sentences, but O’Reilly goes into more detail and also references entire books on the promise method.
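To make the idea concrete, here is a toy sketch of my own (not Amazon’s actual process): the user-facing promise is written before any implementation exists, and the code is then obliged to live up to it.

```python
# A loose sketch of "document first": the user-facing contract is written
# before any implementation exists. Toy example; the promise text and the
# rates below are invented for illustration.

class ShippingQuoteService:
    """PROMISE (written before implementation):

    Given a destination country code and a weight in kilograms, return a
    quote in US dollars. Unknown country codes raise ValueError; the service
    never returns a negative price.
    """

    RATES_PER_KG = {"US": 4.00, "CA": 6.50, "GB": 9.00}  # toy data

    def quote(self, country_code, weight_kg):
        if country_code not in self.RATES_PER_KG:
            raise ValueError(f"unknown country code: {country_code}")
        return round(max(weight_kg, 0.0) * self.RATES_PER_KG[country_code], 2)


print(ShippingQuoteService().quote("CA", 2.5))  # 16.25
```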

Why isn’t most software developed in a super-disciplined way? I think it is because software development reflects the organizational culture of a business, and most business cultures are just not that disciplined. They impose on their software developers a combination of unstable requirements and deadline pressure. In practice, the developers cannot solidify requirements early, because they cannot get users to articulate exactly what they want in the first place.

Also, requirements change based on what people experience, and it takes discipline to decide how to handle these discoveries. What must you implement before you release, and what can you put off for the next version?

Consider three methods of software development. All of these have something to be said for them.

1. Document first–specify exactly what each component of the system promises to do.
2. Rapid prototyping–keep coming up with new versions, and learn from each version.
3. Start simple–get a bare-bones system working, then move on to add in the more sophisticated features.

If you do (1) without (3), you end up with healthcare.gov. If you do (1) without (2), your process is not agile enough. You stay stuck with the first version that you designed, before you found out the real requirements. If you do (2) and (3) without (1), you get to a point where implementing a minor change requires assembling 50 people to meet regularly for six months in order to unravel the hidden dependencies across different components.

From O’Reilly, I get the sense that Amazon has figured out how to do all three together. That seems like a difficult trick, and it left me curious to know more about how it’s done.

Re-litigating Netscape vs. Microsoft

In WTF, Tim O’Reilly writes,

Netscape, built to commercialize the web browser, had decided to provide the source code to its browser as a free software project using the name Mozilla. Under competitive pressure from Microsoft, which had built a browser of its own and had given it away for free (but without source code) in order to “cut off Netscape’s air supply,” Netscape had no choice but to go back to the web’s free software roots.

This is such an attractive myth that it just won’t die. I have been complaining about it for many years now.

The reality is that Netscape just could not build reliable software. I know from bitter personal experience that their web servers, which were supposed to be the main revenue source for the company, did not work. And indeed Netscape never used its server to run its own web site. They never “ate their own dog food,” in tech parlance.

On the browser side, Netscape had a keen sense of what new features would enhance the Web as an interactive environment. They came up with “cookies,” so that when you visit a web site it can leave a trace of itself on your computer for later reference when you return. They came up with JavaScript, a much-maligned but ingenious tool for making web pages more powerful.
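For readers who have never looked under the hood, here is a bare-bones sketch of the cookie mechanism, a toy example of my own using only the Python standard library: the server leaves a trace via a Set-Cookie header, and the browser sends it back in the Cookie header on later visits.

```python
# Toy sketch of the cookie mechanism: the server leaves a "trace" on the
# visitor's machine via a Set-Cookie header, and the browser returns it in
# the Cookie header on later visits.

from http.server import BaseHTTPRequestHandler, HTTPServer


class CookieDemoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        returning = "visitor_id" in (self.headers.get("Cookie") or "")
        self.send_response(200)
        if not returning:
            # First visit: ask the browser to store an identifier for a day.
            self.send_header("Set-Cookie", "visitor_id=abc123; Max-Age=86400")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"welcome back\n" if returning else b"first visit\n")


if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CookieDemoHandler).serve_forever()
```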

But Netscape’s feature-creation strategy backfired because they couldn’t write decent code. Things played out this way.

1. Netscape would introduce a feature into the web browser.
2. An Internet standards committee would bless the feature, declaring it a standard.
3. Microsoft would make Internet Explorer standards-compliant, so that the feature would work.
4. The feature would fail to work on the Netscape browser.

In short, Netscape kept launching standards battles and Microsoft kept winning them, not by obstructing Netscape’s proposed standards but by implementing them. Netscape’s developers were too incompetent to build a browser that complied with the company’s own proposed standards.

I’m sure that if Netscape could have developed software corporately, they would have done so. But because they could not manage software development internally, they just gave up and handed the browser project over to the open source community. And need I add that the most popular browser is not the open source Mozilla but the proprietary Chrome?

Here is one of my favorite old essays on the Microsoft-Netscape battle.

Re-litigating Open Source Software

In his new book, Tim O’Reilly reminisces fondly about the origins of “open source” software, which he dates to 1998. Well he might, for his publishing company made much of its fortune selling books about various open source languages.

In contrast, in April of 1999, I called open source The User Disenfranchisement Movement.

…The ultimate appeal of “open source” is not the ability to overthrow Microsoft. It is not to implement some socialist utopian ideal in which idealism replaces greed. The allure of the “open source” movement is the way that it dismisses that most irksome character, the ordinary user.

In that essay, I wrongly predicted that web servers would be taken over by proprietary software. But that is because I wrongly predicted that ordinary civilians would run web servers. Otherwise, that essay holds up. In the consumer market, you see Windows and MacOS, not Linux.

The way that open source developers are less accountable to end users is reminiscent of the way that non-profit organizations are less accountable to their clients. Take away the profit motive, and you reduce accountability to the people you are supposed to be serving.

Still, the business environment is conducive to firms trying to expose more of their software outside the firm. When a major business need is to exchange data with outside entities, you do not want your proprietary software to be a barrier to doing that.

A local college computer teacher, whose name I have forgotten (I briefly hired him as a consultant but fired him quickly because he was disruptive), used to make the outstanding point that the essential core of computer programming is parsing. There is a sense in which pretty much every chunk of computer code does the job of extracting the characters in a string and doing something with them.

Computer programs don’t work by magic. They work by parsing. In principle, you can reverse engineer any program without having to see the code. Just watch what it takes in and what it spits out. In fact, the code itself is often inscrutable to any person who did not recently work on it.
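Here is a trivial example of my own of the parsing point: take characters in, extract the structure you care about, and do something with it.

```python
# Toy illustration of the "everything is parsing" point: characters go in,
# structure comes out, and the program acts on that structure.

def parse_key_value_line(line):
    """Turn a line like 'name = example' into a (key, value) pair."""
    key, _, value = line.partition("=")
    return key.strip(), value.strip()


config_text = """\
name = example
retries = 3
"""

settings = dict(parse_key_value_line(l) for l in config_text.splitlines() if l.strip())
print(settings)  # {'name': 'example', 'retries': '3'}
```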

Ironically, some of the most inscrutable code of all is written in Perl, the open-source language that was a big hit in the late 1990s and was an O’Reilly fave. If you want to reverse-engineer someone else’s Perl script (or your own if it’s been more than a couple of years since you worked on it), examining the code just wastes your time.

There are just two types of protection for proprietary software. One is complexity. Some software, like Microsoft Windows or Apple iOS, is so complex that it would be crazy to try to reverse engineer it. The other form of protection is legal. You can file for a patent for your software and then sue anybody who comes up with something similar. Amazon famously tried to do that with its 1-Click ordering button.

In today’s world, with data exchange such a crucial business function, you do not want to hide all of your information systems. You want to expose a big chunk of them to partners and consumers. The trick is to expose your software to other folks in ways that encourage them to enhance its value rather than steal its value. Over the past twenty years, the increase in the extent to which corporations use software that is not fully hidden reflects the increase in data sharing in the business environment, not some magical properties of open source software.

What I’m Reading

Tim O’Reilly’s new book. He tries to grasp how technology affects the current business environment. He then proceeds to look at the overall economic and social implications. You can get some of the flavor of it by listening to his interview with Russ Roberts. And here is more O’Reilly, where he says,

Microsoft lost leadership because they had taken away the opportunities for their developer ecosystems, so those developers went over to the Internet and to Google. Now, we see this same thing playing out again.

I am not persuaded by these sentences. The Internet was quite a powerful phenomenon. I cannot envision an alternative history in which Microsoft does not lose a lot of its commanding position because of the Internet. You can make a case that Bill Gates could have positioned Microsoft better had he grasped the significance of the Internet sooner, but that would not have changed the game, only made Microsoft a more agile player. And you could argue that whatever Microsoft lost in terms of time, they made up for in terms of spending, so that they wound up doing about as well in the Internet environment as one could reasonably expect.

Overall, I disagree with O’Reilly quite a bit. Early in the book, he writes,

there are far too many companies that are simply using technology to cut costs and boost their stock price

Take this rhetoric and apply it to trade, and it could come from the lips of Donald Trump. In fact, good economists will explain that trade and technology are so intertwined as to be indistinguishable as economic phenomena. Austrian capital theory says that capital is roundabout production, i.e., roundabout trade. Suppose an economy consists of farm equipment and crops, and you want to explain its efficiency. Do you give the credit to farmers applying technology or do you give the credit to trade between the manufacturing sector and the agricultural sector? It’s the same phenomenon, just described differently.

Russ Roberts did not go after O’Reilly on the anti-corporate demagoguery. A charitable interpretation is that Russ wanted to focus on the Internet “platform model” that O’Reilly waxes eloquent about. A less charitable interpretation is that Russ switched to Tyler Cowen’s philosophy of interviewing.