The Behavioral Scientist

It is a web site that may prove interesting. For example, David Rand and Jonathan Cohen write,

Within a population, controlled processing may—rather than ensuring undeterred progress—usher in short-sighted, irrational, and detrimental behavior, ultimately leading to population collapse. This is because the innovations produced by controlled processing benefit everyone, even those who do not act with control. Thus, by making non-controlled agents better off, these innovations erode the initial advantage of controlled behavior. This results in the demise of control and the rise of lack-of-control. In turn, this eventually leads to a return to poor decision making and the breakdown of the welfare-enhancing innovations, possibly accelerated and exacerbated by the presence of the enabling technologies themselves. Our models therefore help to explain societal cycles whereby periods of rationality and forethought are followed by plunges back into irrationality and short-sightedness.

Call it the theory of mediocracy.
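
The mechanism in that passage is simple enough to caricature in code. Below is a toy sketch, not Rand and Cohen's actual model: controlled agents build up an innovation stock that benefits everyone, and as the shared stock grows, the relative advantage of behaving with control erodes. All parameters here are illustrative assumptions.

```python
import numpy as np

x = 0.5     # share of "controlled" agents in the population
g = 0.0     # stock of welfare-enhancing innovations (a public good)
cost = 0.2  # assumed effort cost of controlled processing

for t in range(201):
    g = 0.95 * g + x  # innovations accrue with control and decay without it
    # Control pays less as the shared stock makes everyone better off.
    advantage = 1.0 / (1.0 + g) - cost
    # Replicator-style update: the controlled share grows or shrinks
    # with its relative advantage.
    x = float(np.clip(x + 0.1 * x * (1 - x) * advantage, 0.01, 0.99))
    if t % 50 == 0:
        print(f"t={t:3d}  controlled share={x:.2f}  innovation stock={g:.1f}")
```

Under these assumed parameters the controlled share first rises, then steadily erodes as the innovation stock grows; the full cycles the passage describes come from richer dynamics in the authors' models.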

Elsewhere, Jason Collins writes,

Absent limiting human intervention to the right level, the pattern we will see is not humans and machines working together for enhanced decision making, but machines slowly replacing humans decision by decision. Algorithms will often be substitutes, not complements, with humans left to the (at the moment, many) places where the algorithms can’t go yet.

4 thoughts on “The Behavioral Scientist”

  1. The bit in Collins’ article about statistical prediction vs. expert judgment was amusing, and the mention of Meehl’s book “Clinical versus Statistical Prediction” (1954!) reinforced the point. It seems that where there is a profit motive and competition, and where the implications or results don’t conflict with political necessity or ideological orthodoxy, algorithms will quickly supplant human judgment whenever it can be reduced to a well-defined process. In other fields, we can apparently go generations without forcing “experts” to surrender their grip on decision-making power on their own turf, even when all the evidence argues to the contrary.

    A coming intellectual trend will be a whole cottage industry in rationalizing why we can’t rely on algorithms that come to politically incorrect conclusions or decisions, or that cause disparate impacts. One has to find culpability somewhere. It used to be conscious prejudice. Then it was unconscious bias. Now it will be conscious and unconscious placement of bias in algorithm programming. “Non-conscious bias,” maybe. But watch: no one will actually point to any smoking-gun lines of code, and the claims will have to be supported by a prophylactic norm under which these rationalizations are accepted at face value and it is impolite to scrutinize them too closely or demand, you know, “proof.”

    • I think this is correct, and unfortunately the industry is really setting itself up for big problems here. Most for-profit implementations of this kind of thing are kept opaque as trade secrets. On one hand that makes a lot of sense. But on the other hand, it (1) makes it impossible to answer these kinds of charges and (2) makes it impossible to figure out whether they are legitimate concerns for any particular implementation.

      In fact, these charges are already here: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

      So we are currently in a world where dog-identifying neural networks are open to public inspection, but the recidivism predictions the state uses to put people in cages are not. Shades of 2000AD :/

  2. The second link is missing something. Sitting an adult down with a new expert system isn’t going to lead to a better expert system. Put expert systems out in the wild, let a generation grow up with them, build up a profit-seeking economy that realizes the gains from matching good users with good expert systems, and, well, who knows?

    “Although somewhat under-explored, this study is typical of when people are given the results of an algorithm or statistical method (see here, here, here, and here). The algorithm tends to improve their performance, yet the algorithm by itself has greater accuracy. This suggests the most accurate method is often to fire the human and rely on the algorithm alone.”

    But saying this is like when I sat my mother down and tried to get her to understand email and calendaring applications in 1990. The applications sucked, my mom didn’t know what to do with them, and it wasn’t very effective. Now I live and die by those things professionally, and our office has one secretary for a dozen full-time workers (and that’s only because we need to be 100% immediately available to our clients at all times because of our particular industry).
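
    The pattern in that quote (an algorithm’s output improves human judgment, but the algorithm alone does better still) is easy to reproduce in a toy simulation. The setup below is an illustrative assumption, not the design of any of the cited studies: the human’s adjustment to the algorithm’s score is modeled as added noise.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # A latent signal drives a binary outcome; the algorithm reads the
    # signal cleanly, while human judgment adds noise to it.
    # All noise scales here are assumptions chosen for illustration.
    signal = rng.normal(size=n)
    outcome = (signal + rng.normal(size=n)) > 0

    algorithm = signal                                   # algorithm alone
    human_plus = signal + rng.normal(scale=0.8, size=n)  # human adjusting the algorithm's score
    human_alone = 0.3 * signal + rng.normal(size=n)      # human without the algorithm

    for name, score in [("human alone", human_alone),
                        ("human + algorithm", human_plus),
                        ("algorithm alone", algorithm)]:
        accuracy = ((score > 0) == outcome).mean()
        print(f"{name:18s} accuracy: {accuracy:.3f}")
    ```

    Under these assumptions the printed accuracies climb from human alone to human-plus-algorithm to algorithm alone, which is exactly the ordering the quote reports.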

  3. “The task of designing interventions that support controlled processing and avoid societal collapse falls to policy experts, not us.”

    With that, Rand and Cohen seem not to notice that the “policy experts” are, on the whole, empowered “non-controlled agents,” thus exacerbating the problem.

    Heinlein has a quote on this: the “bad luck” of those capable of controlled processing being placed under the “non-controlled agents.”

    Also:
    “All mankind’s progress has been achieved as a result of the initiative of a small minority that began to deviate from the ideas and customs of the majority until their example finally moved the others to accept the innovation themselves. To give the majority the right to dictate to the minority what it is to think, to read, and to do is to put a stop to progress once and for all.”

    Mises, Ludwig von (1927). Liberalism, p. 54.
