Martin Gurri watch

Matthew B. Crawford writes,

Among those ensconced in powerful institutions, the view seems to be that the breakdown of trust in establishment voices is caused by the proliferation of unauthorized voices on the internet. But the causal arrow surely goes the other way as well: our highly fragmented casting-about for alternative narratives that can make better sense of the world as we experience it is a response to the felt lack of fit between experience and what we are offered by the official organs, and a corollary lack of trust in them. For progressives to now seek to police discourse from behind an algorithm is to double down on the political epistemology that has gotten us to this point. The algorithm’s role is to preserve the appearance of liberal proceduralism, that austerely fair-minded ideal, the spirit of which is long dead.

Thanks to a reader for recommending the article. I think that Crawford’s main point is that choices made by algorithms can be difficult to legitimize. People want to know who made the decision and the reasoning behind it. Hiding behind an algorithm may seem to be a good way to avoid blame, but it is likely to exacerbate public distrust.

I know that in the world of credit decisions, where algorithms have been around for a long time, the standard when credit is denied is to list the top three reasons for denial. If Google and Facebook are going to get into the censorship business (against my recommendation), then they might want to adapt this approach. That is, whenever they do censor someone, list specific reasons, rather than some vague claim that someone “violated our terms of service.”
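The adverse-action idea from credit scoring can be sketched in a few lines. This is purely illustrative (not any real platform's API, and the factor names are made up): a decision built from weighted factors reports the top contributors behind a denial instead of a vague "terms of service" claim.

```python
# Illustrative sketch: a moderation decision that, like an adverse-action
# notice in credit scoring, lists the top reasons behind a denial.

def explain_decision(factor_scores, threshold=1.0, top_n=3):
    """factor_scores: dict mapping a named policy factor to its weight
    in the decision. Returns (decision, top contributing reasons)."""
    total = sum(factor_scores.values())
    flagged = total >= threshold
    # Sort factors by how much each contributed to the outcome.
    reasons = sorted(factor_scores, key=factor_scores.get, reverse=True)[:top_n]
    return ("removed" if flagged else "allowed"), reasons

decision, reasons = explain_decision(
    {"targeted harassment": 0.6, "spam-like posting": 0.3, "profanity": 0.2}
)
print(decision, reasons)
# → removed ['targeted harassment', 'spam-like posting', 'profanity']
```

The point is not the arithmetic but the output contract: whatever the scoring model, the user sees the specific factors that drove the decision.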

22 thoughts on “Martin Gurri watch”

  1. For progressives to now seek to police discourse from behind an algorithm is to double down on the political epistemology that has gotten us to this point.

    Why shouldn’t they double down? It has served them well: it got them to “this point”, as Crawford acknowledges. They govern by manipulating procedural outcomes. They could probably endorse the words of Timothy Burke, professor of history at Swarthmore:

    I am drawn to procedural liberalism because I live in worlds that are highly procedural and my skills and training are adapted to manipulating procedural outcomes.

    • Right.

      Also, what’s the algorithm for correcting the algorithm? On this side of the singularity, there’s no getting around a human judgment, so there’s no way of getting around human bias, which by legal construction we may reasonably attribute to the chief executive of the organization.

      “Explanations” legitimizing the exercise of power by pretending domination and favoritism are fairness and justice are merely the spoonful of sugar that helps the medicine go down.

      In effect, you are lying to the kid’s tongue about what he’s really swallowing, which if the tongue believes it, is great, because the kid will then swallow it voluntarily with minimal coaxing and trouble.

      And if the sugar isn’t sweet enough to cover up the bitterness and horrid stench and the kid doesn’t fall for the lie, it’s kind of absurd for the kid to complain that the artificial flavor doesn’t taste quite enough like real cherries.

      Ok kid, guess what, the medicine’s going down, one way or the other. If the only option is to shove it down your throat, I can totally do that, and if I have to, I will. So complain if you want, but really, who cares?

      Also, it’s not so much medicine as poison, but the covering up is just the same, and just as absurd to complain about.

      What if the Big Techs were completely honest about progressive bias in deciding who gets promoted and blue-checkmarked on the one hand, or purged and shadow-deplatformed on the other? Ok, what then?

      Well, apparently, most libertarians, establishment “conservative” commentators, and around half of GOP politicians would say, “What’s the problem?” Most progressives would cheer, unless complaining that the bias didn’t go far enough. “How can Donald Trump still be allowed to have a Twitter account?!?!” There is no question whatsoever that he violated Twitter’s Terms of Service, according to Twitter’s own explanation of why it banned other people.

      And most ordinary non-progressives? If they don’t already understand these facts, they are hopeless, and if they aren’t fooled and do understand what’s going on, then they have been either unwilling or unable to do anything about it.

      There’s no point unless one is talking about regulation, in which case, all the private censorship algorithms go out the window, to be replaced either with nothing, or a public censorship algorithm.

      • I’m just waiting for Google to institute the first Court of Algorithm Appeal, so that they can claim procedural liberalism is what caused censorship, not the individual whim of a Google manager. Of course, the other side will have its day in court, for whatever good that will do.

        • The latest annual Supreme Court season was held between 9:00 and 9:00:26 EST this morning. Legal scholars call the output, a 1.4 gigabyte Microsoft Access file, “completely incomprehensible to any human intelligence”, adding “other than that, it’s not very similar to the old human court’s work”.

        • The process of rational decision-making can be thought of as a labor+capital complementary arrangement, in which some facts and rules can be codified as an ‘algorithm’, with the human member of the partnership either performing the functions that are too hard or expensive to automate for the moment, or else being given discretionary authority (i.e., power) to select options from within a range of room for maneuver.

          “The law” is not just the rules but also the whole complementary system of decision-making of which those rules and an implicit ‘procedural algorithm’ form a part.

          But, like any code, garbage in, garbage out. Worse, to the extent it defers to humans like judges, if they are mostly garbage too, then you get garbage-squared.

          But if you’ve got sane code (a fantasy, I know) and garbage judges, then to the extent you could reliably and auditably automate judicial functions, you totally should, and I for one would welcome our new fair robot overlords if unfair garbage humans were the alternative, which they are.

          A contemporary example from another field could be radiology. It seems that image-processing computers are now better than human radiologists at recognizing problems, with fewer errors of both types, and getting even better every day, while human performance stagnates.

          So it’s reasonable – if not now then probably soon – to say that radiology should be automated and thus absorbed into the best practices “algorithm” of the “medical decision making process”.

          The trouble for the law is that anything handed over to a fair algorithm threatens to produce results which, while factually correct, are politically incorrect. So they will have to be corrected, and the meta-algorithm of how to decide how to correct empirical algorithms effectively constitutes a social ideology / secular religion.

          • Functionally, what’s the difference between “correcting for politically incorrect conclusions” and “correcting for what Moscow thinks” when the Soviet judge calls up his superiors?

            Perhaps this is a good measure of judicial corruption – the degree to which they feel they must make factually incorrect decisions in order to please the “true overlords” – whoever they may be.

  2. “violated our terms of service” seems closely equivalent to a parental “because I said so” explanation to a child for denial of some want.
    I don’t think it’s ever been considered satisfactory, even to children.

  3. Algorithms have their place. There is a body of research by Paul E. Meehl that is decades old, on “Clinical vs. statistical prediction,” discussed a bit in the book _Thinking fast and slow_.

    Wikipedia has some details on such matters. I’ve not ever read Meehl, but his work is discussed in _Thinking fast and slow_ and you can get a general sense from things about him online.

    = – = – = – = – =

    I think what some of us find so vexatious is what we sense about the infoscape surrounding us. For example…

    1. Facebook is full of weird assertions. But…I can’t share anything to Facebook from the blog “Ethics Alarms,” which seems relatively sane and measured. Today I read something asserting that Saint Augustine quotes cannot be posted on Facebook because of the algorithms. It was probably at PJMedia, which is mostly polemics and clickbait, but if you are on Facebook you could try it and see. Find a quote from Augustine, attempt to post it on Facebook, and see if the rumors are true.

    2. Dennis Prager does not seem like a wild-eyed zealot, but his videos were demonetized on YouTube.

    3. Charles Murray is not an alt-right white supremacist, but the people who formed a mob and attacked him at Middlebury College believed he was, because the internet made it easy to convince themselves that he was. Type his name into Google and check the SPLC “extremist files.”

    Just now I typed “Charles Murray White Supremacist” into Google and this was the top hit.

    https://www.splcenter.org/fighting-hate/extremist-files/individual/charles-murray

    Where is this leading us? I don’t want to keep going in that direction.

    • Per #1:

      I don’t track this stuff closely myself, but my church friends are often involved in pro-life and there does appear to be a lot of censoring of pro-life views on Facebook (etc). Either outright censorship, demonization, or a strange Orwellian change of language. “This photo may be sensitive to some people” and censorship over a photo of a baby in utero. An anti-abortion movie they went to see was designated “propaganda” when you searched for it on Google. Other things.

  4. “But the causal arrow surely goes the other way as well: our highly fragmented casting-about for alternative narratives that can make better sense of the world as we experience it is a response to the felt lack of fit between experience and what we are offered by the official organs, and a corollary lack of trust in them.”

    Amen.

    The more patently ideological the establishment media and internet giants get, the more opportunity opens for smaller forums to become influential. Forums that are algorithmically censored and curated are hardly worth the time for most curious people, and probably appeal to an audience that is less questioning and more set in its ways. The most relevant margin will eventually be outside the establishment altogether.

    Let a thousand blogs bloom.

  5. whenever they do censor someone, list specific reasons, rather than some vague claim

    Yes. Social media’s censorship should have to win people’s trust and a basic belief in its fairness. You will never please everyone, but broadly they can and should earn trust. They are not doing so. I suspect people of influence have chosen to fight dirty to win political fights, which means not playing fair. And the cost of lost trust hasn’t yet motivated the big social media companies to change that.

  6. test

    this is italicized

  7. Branded social platforms with unverified accounts have emerged in the free market as monster profit makers. These platforms cannot bake in a layer of human curation and make anywhere near the profits they make now. These platforms are based on an intrinsically flawed set of ideas.

    There is the danger that the platforms will become too unpleasant and drive away users, and the danger that they will attract political action against them. Because they are based on an indefensible system of openness, anonymity, and automation, the platforms will eventually collapse under the weight of public anger while attempting to protect their past profits, but they will fight to survive until they do.

    Perhaps they will get less annoying algorithms (or, more accurately, AI-based filtering), but they can’t get rid of them and replace them with humans. It would be impressive if that AI could also generate satisfactory explanations for its decisions to the customer base, but I doubt they will be able to do that in time.

    This is getting tedious. The “Progressive” boogeyman is now not just a political fear; it is seen everywhere, in everything. Facebook isn’t trying to manipulate us to make money; instead, they have some creepy political agenda where they are willing to risk hundreds of billions of dollars to bend the political system to their will.

    How paranoid can you get?

    • Risk hundreds of billions of dollars? All American advertisers are super-eager not to have their ads appear near anything cancelable; why should they punish Facebook for helping them? Progressive critics of Facebook want it to be more aggressive in purging cancelable content and users, to give more jobs to women and minorities, and to stop anti-progressive political campaigns under whatever heading works: “Russian influence”, “fake news”, “hate”, etc. That works in the same direction. Conservative critics of Facebook are extremely unlikely to get anything done, except to help progressives get their way by creating a show of bipartisanship, and to make Zuckerberg wear a suit once a month.

      As for rich men having creepy political agendas, that’s nothing new. Do the names of Carnegie and Rockefeller ring a bell?

      • Facebook cares about their business. They care about managing their users’ experience, pleasing advertisers, and avoiding problems with Congress. Critics only matter if they affect those constituencies.

        Your point about why conservative critics fail to influence Facebook eludes me. I can guarantee you Facebook does not want to be labeled as progressive activists and lose a third of their user base.

        And yes, rich men having creepy political agendas is nothing new. What is new is the idea that very rich men are more driven by their political agendas than by their economic self-interest.

        Carnegie and Rockefeller may have had certain political ideas, but when they had to choose, they chose to make more money. There has been a slow, steady, relentless change in the tone of these discussions, in which commenters increasingly imply that market actors take huge risks to shape political agendas rather than just adapting to them to pursue profit.

        If your argument is that this is where the markets are leading Facebook, well, then we agree.

        • I can guarantee you Facebook does not want to be labeled as progressive activists and lose a third of their user base.

          The first part of this is false by revealed preference of Facebook. To go by their actions, they don’t particularly mind being labeled progressive activists — I mean, read any conservative blogs, or listen to Tucker Carlson — as long as this doesn’t create too much trouble in Congress. As for the second part, where the hell will this third of their user base go? Compare Twitter and Gab. Also, conservatives being yesterday’s progressives, most of them apparently can’t help feeling bad about violating progressive dictums, especially those closer to them along the axis of political development. They’d rather stay and feel occasionally bad and sinful than leave and feel condemned to destruction of body and eternal perdition of their progressive soul. Compare e.g. the laughable state of biblical marriage in supposedly arch-conservative evangelical churches: Dalrock regularly provides primary-source material on this.

          Carnegie and Rockefeller may have had certain political ideas, but when they had to choose, they chose to make more money.

          Carnegie sold his business before embarking on large-scale philanthropy. Selling your business is not widely known as a way to make more money.

  8. “Facebook isn’t trying to manipulate us to make money, instead, they have some creepy political agenda where they are willing to risk hundreds of billions of dollars to bend the political system to their will.” is an inaccurate summary.

    “Facebook is trying to minimize regulation that will cost them billions of dollars by bowing to political pressure by progressives to censor some conservative ideas by calling it “hate” speech.” is a more accurate summary.

    • That does describe Facebook acting in their own business interests, but it is not a particularly likely political observation.

      Any regulatory demands on Facebook will require a bill, and only a bipartisan bill is possible on any such matters. Their exposure on that front will likely be over privacy issues and embarrassing foreign election-propaganda efforts. There won’t be any progressive or conservative partisan regulatory pressure coming from Washington. Facebook knows this.

      The hate speech stuff is their attempt at not upsetting their users’ experience. They will continue to adjust in an attempt to lose as little of their user base as they can. It is an awkward balance, and they may be making mistakes. Their workers may show some bias. But their desire to make as much money as possible is the driving factor, not some top-down progressive agenda.

      • If there are enough users upset at the idea of hate speech to affect profits, that represents a potential political force. And bipartisan support is possible by calling it terrorist speech and presenting the bill as acting against Islamic terrorist groups’ recruiting and propaganda.

Comments are closed.