Reading our own minds

Thanks to a commenter, I found a paper by Peter Carruthers.

metacognition always results from people turning their mindreading abilities upon themselves.

By metacognition he means our description of our own mental processes. We call this introspection, although his theory makes introspection something of a misnomer.

He points out that what I call the “outside-in” view of Theory of Mind implies that autistics should have weak introspection. He argues that studies are consistent with this, or at least not terribly inconsistent with it.

I believe that there are a lot of studies showing that people will clearly make up explanations for what they do. An experimenter will do something to move subjects’ hands, and the subjects will “explain” that they were reaching for something. I think of this as evidence for the “outside-in” view.

The Theory of Mind

I just finished reading The Enigma of Reason by Dan Sperber and Hugo Mercier. They look at the process by which we arrive at reasons for actions. The following thought occurs to me:

You probably assume that understanding your own mind is prior to having a “theory of mind” about other humans. However, it could be the other way around.

Sperber and Mercier do not make this sort of claim. However, I do not think that it is terribly inconsistent with their views.

A theory of mind seeks to explain why agent X performs action A. What I am suggesting is that we arrive at this theory not through introspection but instead by observing action A followed by consequence B repeatedly. After we have seen this happen enough, we develop the insight that perhaps agent X is performing action A in order to achieve consequence B. Call this the basic theory of mind, or at least a theory of what motivates others. Note that we might hold such a basic theory of mind or motivation about animals or even about an inanimate object.

Given that we have a basic theory of mind and that we assume that others have a similar basic theory of mind, we can engage in a new form of teaching. If I tell you that I am performing action A in order to achieve consequence B, then you can get the point of performing action A without my having to repeat action A many times.

This explanatory form of teaching is very efficient. With cultural communication so important in humans, we have become very good at explaining to others why we do things. Moreover, explanation and justification are similar functions. We develop the ability to justify to others why we do things.

We are concerned with what others think of what we say and do. As I read Sperber and Mercier, they argue that the natural function of reason is to try to gain the respect and approval of others for our actions. I think that Sperber and Mercier do not give enough credit to the role of reasons in making teaching more effective. Imagine telling a child to look both ways before crossing a street without telling the child why they should do so. The child could perform the ritual exactly as directed and then walk right in front of a moving car.

But the role of reasons in teaching does not address the enigma to which Sperber and Mercier refer. The enigma is that our reasoning process evolved to be biased rather than optimized to arrive at truth. Their explanation is that our reasoning process evolved as a mechanism to explain and justify our actions to others. The goal of reasoning is not to seek Truth but to defend our status. Biased reasoning is helpful for defending status. Bias is less helpful when we are trying to make decisions, but when we make decisions we are simply adapting our reasoning tool to a less natural context.

Sperber and Mercier make another claim, which is that when we argue with one another, we arrive at more reasonable conclusions than when we reason on our own. They say that this is because when we evaluate our own reasons we lack objectivity. They think we are more objective when we evaluate others’ reasons, so that our evaluations are more reliable. I do not find that persuasive. I think that part of defending our own reasons is attacking our opponents’ reasons, and I believe that we tend to be uncharitable to those who disagree with us. I am more inclined to ascribe the benefit of arguing to exposure to reasoning that we have not considered, rather than to a greater objectivity in hearing others’ points of view than in evaluating one’s own.

If reasoning evolved to justify our actions, then how do we get to a point where we use reasoning to make decisions? I think that the most consistent application of their idea would be to say that when we make decisions we anticipate having to defend our actions. As we go through this mental process, we may decide that some actions are unwise. Anticipating my wife’s reaction should I come home drunk, I stop drinking.

It could be that people with poor self-control have difficulty engaging in this exercise. That is, they either lack the ability to anticipate the reactions of others or they are less sensitive to such anticipated reactions.

It is interesting to note that I have often advised people in the throes of making a decision to imagine explaining that decision to a variety of other people. If you are thinking of quitting your job, imagine explaining that to your family, to close friends, to co-workers, and so on. I have suggested that such an exercise can help to clarify your thoughts.

Anyway, what occurs to me is that we obtain our theory of mind “outside-in” rather than “inside-out.” That is, by observing other people and listening to their reasons, we develop a theory of how our own minds ought to work.

The problem of overconfidence

Chris Dillow writes,

overconfidence can be a form of strategic self-deception. A new paper by Peter Schwardmann and Joel van der Weele shows this. They got subjects to do intelligence tests and then selected some at random and told them that they could earn more money if they could convince other subjects that they had performed well on the tests. They found that the selected subjects were even more overconfident about their performance than non-selected ones.

Pointer from Mark Thoma.

We might ask when overconfidence will be selected for. Perhaps in sales. Perhaps in high-level executives. Almost certainly in politicians.

When will overconfidence be subject to checks and balances? My first thought is when firms face competition, profits, and losses.

Emotions about winning and losing

Tyler Cowen writes,

In venture capital, I suspect that hatred of losing may be a disadvantage. No matter how successful you may be, most of your individual investments will lose money and hatred of losing may make you too risk-averse. It might be better to have the ability to simply forget your losses and put them behind you.

I discovered in high school playing poker with friends that I hated losing, but I didn’t enjoy winning very much in that context. So I stopped playing poker. Decades later, I hated “angel investing,” perhaps for the same reason, and I stopped doing that.

Romance is probably another area in which you take a different approach if you love winning more than you hate losing, or vice-versa. Think of it as a game in which your self-esteem is at stake.

But I wonder if the relative strength of your feelings about winning or losing is generic or instead is specific to context. As Tyler points out, there are some activities where losses are relatively costly, and in that context I would say that it makes sense to be averse to losing.

My advice to people is to take chances that have high upside and low downside, while avoiding the reverse. It is relatively simple, obvious advice, and in some sense it is neutral with respect to winning and losing per se.

Meaningless rituals might be good for you

Veronika Rybanska and others write,

To be accepted into social groups, individuals must internalize and reproduce appropriate group conventions, such as rituals. The copying of such rigid and socially stipulated behavioral sequences places heavy demands on executive function. Given previous research showing that challenging executive functioning improves it, it was hypothesized that engagement in ritualistic behaviors improves children’s executive functioning, in turn improving their ability to delay gratification. A 3-month circle time games intervention with 210 schoolchildren (Mage = 7.78 years, SD = 1.47) in two contrasting cultural environments (Slovakia and Vanuatu) was conducted. The intervention improved children’s executive function and in turn their ability to delay gratification. Moreover, these effects were amplified when the intervention task was imbued with ritual, rather than instrumental, cues.

Pointer from Kevin Lewis. The rationalist would be opposed to meaningless rituals. Perhaps unwisely so.

Jonathan Parker Discusses Financial Behavior

In an interview format with Aaron Steelman. Pointer from Timothy Taylor. Interesting throughout. A few tidbits:

people don’t spend the money the week before it shows up — they spend it the week it shows up. And it seems like you’re going to have a lot of difficulty quantitatively fitting that little foresight into a life cycle model unless people are often literally liquidity constrained, absolutely at their debt limits.

What equilibrium supports high-fee mutual funds, index funds, and so on, and how does that change the flow of funds between the corporate and household sector and the pricing of risk?

a high propensity to consume correlates with low liquidity, which is useful for theorizing but also presents a little bit of a chicken-and-egg problem. Is it different preferences, objectives, or behavioral constraints that are causing both the low liquidity and the propensity to spend, or is it the low liquidity that is causing the lack of planning and high spending responses? So for many purposes, what I take my findings to mean is that the buffer-stock model is a quite reasonable model with one critical ingredient. The critical difference relative to the way I modeled households in the 2002 paper with Gourinchas is that I think there’s much more heterogeneity in preferences across households. While in that paper we looked at differences in preferences across occupation and industry, I think there’s just much more persistence in heterogeneity in behavior, consistent in the buffer-stock model with differences in impatience.

There is a significant portion of the population with above-median income and close to zero saving. I think it is hard to tell a story that explains that in terms of rational behavior. Remember, we are talking about a lot of people, not just a few random exceptions.

Are Moral Absolutists Easier to Trust?

Molly Crockett says,

We’ve done experiments where we give people the option to play a cooperative game with someone who endorses deontological morality, who says there are some rules that you just can’t break even if they have good consequences. We compare that to someone who’s consequentialist, who says that there are certain circumstances in which it is okay to harm one person if that will have better consequences. The average person would much rather interact with and trust a person who advocates sticking to moral rules.

Another Theory of the Trump Phenomenon

Someone writes,

Girard discovered the answer. Society has survived because it has developed a mechanism for concentrating violence on a limited number of victims. This he called the “scapegoating mechanism”. In fact the scapegoating mechanism exploits the very mimetic mechanisms that render it necessary for society’s survival. People who fall into violent, obsessive desire quickly lose their grip on reality. It is easy to convince them that the source of their frustration – their inability to satisfy their mimetic desires without running into violent conflict – is the fault of some group of scapegoats. It is important for the scapegoats to be a disenfranchised minority, so that the violence of society can be turned upon them without fear that they will be avenged. Here, again, Girard’s theory renders unsurprising that which economists and political scientists are at a loss to explain: for instance how the favoured ‘cure’ for economic depression is to visit structural violence upon low-paid immigrants, racial minorities, the homeless, the unemployed and the disabled.

Pointer from Tyler Cowen.

In summary, there is this:

the most historically common form of spontaneous order is that of a human community tacitly agreeing to vent all of its violent frustration upon a defenceless subgroup.

I have some doubts about Girard’s core hypothesis, which is that we all want the same few goods. It seems to me that there are all sorts of things that other people want which interest me not at all. By the same token, many of my favorite pastimes seem to be shared by only a few others.

The theory that our tastes are modeled on the tastes of others may be right, but I am not sure that its implications are as dire as Girard would have it. When I go to a folk dance session, it is true that I will like a dance more when there are others who also like the same dance. But I don’t want to kill them so that I can “possess” the dance. Quite the contrary.

In fact, I think that this is generally true. Nobody wants to be the only person who owns an iPhone. Even with status goods, we all want others to own them, in order to validate our tastes.

I am sure that in some sense we sometimes desire that others have less than they do. But I am not ready to sign on to the idea that this is a major driver of a lot of our behavior.

Intelligence, Leftism, and Academia

Noah Carl tries to sort out the relationships. (Pointer from Tyler Cowen.)

I first bring together data on the political beliefs of three separate populations: academics, the general population, and a high-IQ population. I then calculate the proportion of each population that identifies with various political positions (e.g., thinking of oneself as a liberal, supporting the Democratic Party). The extent of overrepresentation for any particular position is simply the percentage-point difference between academics and the general population (i.e., the total length of the right-hand bar in Fig. 1). And the fraction of this overrepresentation that can be explained by intelligence is simply the percentage-point difference between the high-IQ population and the general population divided by the percentage-point difference between academics and the general population
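To make Carl’s arithmetic concrete, here is a minimal sketch of the calculation he describes. The percentages below are hypothetical placeholders for illustration, not figures from the paper.

```python
# Hypothetical shares identifying with some political position
# (e.g., thinking of oneself as a liberal). Not figures from Carl's paper.
general_pop = 0.30   # general population
high_iq = 0.38       # high-IQ population
academics = 0.60     # academics

# Overrepresentation: percentage-point gap between academics
# and the general population.
overrepresentation = academics - general_pop  # ~0.30, i.e. 30 points

# Fraction of that gap that intelligence alone could account for:
# the high-IQ vs. general-population gap, divided by the full gap.
explained_by_iq = (high_iq - general_pop) / overrepresentation

print(f"Overrepresentation: {overrepresentation:.0%} points")
print(f"Share explained by intelligence: {explained_by_iq:.0%}")
```

On these made-up numbers, intelligence would account for a bit over a quarter of the gap; Carl’s point is that this share varies widely across issues.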

The results:

Overall, intelligence may account for: most of the disparity between academics and the general population on the issues of abortion, homosexuality and traditional gender roles; none of the disparity on the issue of income inequality (but see Section 3.2); less than half the disparity on liberal versus conservative ideology; and much less than half the disparity on Democrat versus Republican identity.

The conclusion:

Possible explanations for the remaining overrepresentation comprise: self-selection on personality, interests, cognitive style or preferences; social homophily and political typing; self-selection on strength and stature; individual conformity; status inconsistency; and discrimination.

I have to believe that a lot of the over-representation comes from discrimination. If you ask left-wing academics, they will tell you that essentially all of the over-representation comes from intelligence. This tells you that they associate conservative beliefs with stupidity and are therefore almost certain to under-rate the intelligence of anyone who fails to chime in with the appropriate left-wing dogma when relevant topics come up in casual conversation.

But read the paper for yourself. I think it is an important one.