Pride and Prejudice

14 Jul

It is ‘so obvious’ that policy A >> policy B that only those who don’t want to know, or who want inferior things, would support policy B. Does this conversation remind you of any that you have had? We don’t just have such conversations about policies. We also have them about people. Too often, we are too harsh.

We overestimate how much we know. We ‘know’ that we are right, we ‘know’ that there isn’t enough information in the world to make us switch to policy B. Often, the arrogance of this belief is lost on us. As Kahneman puts it, we are ‘ignorant of our own ignorance.’ How could it be anything else? Remember the aphorism, “the more you know, the more you know you don’t know”? The aphorism may not be true, but it gets the broad point right. The ignorant are overconfident. And we are ignorant. The human condition doesn’t leave much room for being anything else (see the top of this page).

Here’s one way to judge your ignorance (see here for some other ideas). Start by recounting what you know. Sit in front of a computer and type it up. Go for it. Then add a sentence about how you know it. Do you recall reading any detailed information about this person or issue? From where? Would you have bought a car with that much information about it?

We don’t just overestimate what we know; we also underestimate what other people know. Anybody with different opinions must know less than I do. It couldn’t be that they know more, could it?

Overestimating what we know and underestimating what other people know lead to the same place: being too confident about the rightness of our cause and mistaking our prejudices for obvious truths.

George Carlin got it right. “Have you ever noticed that anybody driving slower than you is an idiot, and anyone going faster than you is a maniac?” It seems the way we judge drivers is how we judge everything else. Anyone who knows less than you is either being willfully obtuse or an idiot. And those who know more than you just look like ‘maniacs.’

Firmly Against Posing Firmly

31 May

“What is crucial for you as the writer is to express your opinion firmly,” writes William Zinsser in “On Writing Well: An Informal Guide to Writing Nonfiction.” To emphasize the point, Bill repeats it at the end of the paragraph: “Take your stand with conviction.”

This advice is not for all writers—Bill particularly wants editorial writers to write with a clear point of view.

When Bill was an editorial writer for the New York Herald Tribune, he attended a daily editorial meeting to “discuss what editorials … to write for the next day and what position …[to] take.” Bill recollects,

“Frequently [they] weren’t quite sure, especially the writer who was an expert on Latin America.

“What about that coup in Uruguay?” the editor would ask. “It could represent progress for the economy,” the writer would reply, “or then again it might destabilize the whole political situation. I suppose I could mention the possible benefits and then—”

The editor would admonish such uncertainty with a curt “let’s not go peeing down both legs.”

Bill approves of taking a side. He likes what the editor is saying, if not the language. He calls it the best advice he has received on writing columns. I don’t. Certainty should come from only one source: conviction born of thoughtful consideration of facts and arguments. Don’t feign certainty. Don’t discuss concerns in a perfunctory manner. And don’t save concerns for the end.

Surprisingly, Bill agrees with the last bit, about not tacking concerns on perfunctorily at the end, but for a different reason. He thinks that “last-minute evasions and escapes [cancel strength].”

Don’t be a mug. If there are serious concerns, don’t wait until the end to note them. Note them as they come up.

“Cosal” Inference

27 Apr

We often make causal claims based on fallible heuristics. Some of the heuristics that we commonly use to make causal claims are:

  1. Selecting on the dependent variable. How often have you seen a magazine article with a title like “Five Habits of Successful People”? The implicit message in such articles is that if you were to develop these habits, you would be successful too. The articles never discuss how many unsuccessful people share the same habits, or all the other dimensions on which successful and unsuccessful people differ.
  2. Believing that correlation implies causation. A common example goes like this: children who watch more television are more violent. From these data, people infer that watching television causes children to be violent. That is possible, but there are other potential explanations: children prone to violence may prefer violent programming, or a third factor may drive both.
  3. Believing that events that happen in sequence are causally related. B follows A, so A must have caused B. Often there isn’t just one A but many As, and B doesn’t follow A instantaneously.

Beyond this, people also tend to interpret vague claims such as ‘X causes Y’ as ‘X causes large changes in Y.’ (There is likely a motivated aspect to this interpretation.)
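The second heuristic can be made concrete with a small simulation. All the numbers below are hypothetical, chosen only for illustration: a confounder (say, lack of supervision) drives both TV hours and aggression, so the two correlate strongly even though neither causes the other.

```python
# Hypothetical simulation: a confounder drives both TV hours and
# aggression, producing correlation with zero causal effect of TV.
import random

random.seed(0)

tv_hours, aggression = [], []
for _ in range(10_000):
    supervision = random.random()  # confounder: 0 = none, 1 = full
    tv_hours.append(3 * (1 - supervision) + random.gauss(0, 0.5))
    aggression.append(2 * (1 - supervision) + random.gauss(0, 0.5))

# Pearson correlation, computed by hand to stay dependency-free.
n = len(tv_hours)
mx = sum(tv_hours) / n
my = sum(aggression) / n
cov = sum((x - mx) * (y - my) for x, y in zip(tv_hours, aggression)) / n
sx = (sum((x - mx) ** 2 for x in tv_hours) / n) ** 0.5
sy = (sum((y - my) ** 2 for y in aggression) / n) ** 0.5
r = cov / (sx * sy)
print(f"correlation: {r:.2f}")  # strongly positive, yet TV causes nothing here
```

Any analysis that stops at the correlation would conclude, wrongly, that television causes violence.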

Bad Hombres: Bad People on the Other Side

8 Dec

Why do many people think that people on the other side are not well motivated? It could be because they think that the other side is less moral than they are. And since opprobrium toward the morally defective is the bedrock of society, thinking that the people in the other group are less moral naturally leads people to censure the other group.

But two groups can’t both have better morals than the other. It can only be that people in each group think they are better. This much logic dictates. So there has to be a self-serving aspect to moral standards. And this is what often leads people to think that the other side is less moral. Accepting this is not the same as accepting moral relativism. For even if we accept that some things are objectively more moral—not being sexist or racist, say—some groups—those that espouse that a certain sex or certain races are superior—will still think that they are better.

But how do people come to know of other people’s morals? Some people infer morals from political aims. And that is a perfectly reasonable thing to do, as political aims reflect what we value. For instance, a Republican who values ‘life’ may think that Democrats are morally inferior because they support the right to abortion. But the inference is fraught with error. As matters stand, Democrats would also like women to not go through the painful decision of aborting a fetus. They just want there to be an easy and safe option for women should they need it.

Sometimes people infer morals from policies. But support for different policies can stem from having different information or different beliefs about causal claims. For instance, Democrats may support a carbon tax because they believe (correctly) that the world is warming and because they think a carbon tax is the best way to reduce global warming while protecting American interests. Republicans may dispute any part of that chain of logic. The point isn’t what is being disputed per se, but what people will infer about others from the policies they support alone. Hanlon’s razor is often a good rule.

Estimating Bias and Error in Perceptions of Group Composition

14 Nov

People’s reports of perceptions of the share of various groups in the population are typically biased. The bias is generally greater for smaller groups. The bias also appears to vary by how people feel about the group—they are likelier to think that the groups they don’t like are bigger—and by stereotypes about the groups (see here and here).

A new paper makes a remarkable claim: “explicit estimates are not direct reflections of perceptions, but systematic transformations of those perceptions. As a result, surveys and polls that ask participants to estimate demographic proportions cannot be interpreted as direct measures of participants’ (mis)information, since a large portion of apparent error on any particular question will likely reflect rescaling toward a more moderate expected value…”

The claim is supported by a figure that plots a curve over averages. (The paper also reports results from other papers that base their inferences on similar figures.)

The evidence doesn’t seem right for the claim. Ideally, we want to plot curves within people and show that the curves are roughly the same. (I doubt they are.)

Second, it is one thing to claim that the reports of perceptions follow a particular rescaling formula, and another to claim that people are aware of what they are doing. I doubt that people are.

Third, if the claim that ‘a large portion of apparent error on any particular question will likely reflect rescaling toward a more moderate expected value’ is true, then presenting people with correct information ought not to change how they think about groups, e.g., the perceived threat from immigrants. The calibrated error should be a much better moderator than the raw error. Again, I doubt it.
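To make the ‘rescaling toward a more moderate expected value’ idea concrete, here is one family of transforms consistent with that description: a log-odds-linear shrinkage toward a reference proportion. The functional form, the shrinkage parameter, and the reference point below are my assumptions for illustration, not the paper’s fitted values.

```python
# Sketch of rescaling toward a moderate expected value, assuming a
# log-odds-linear transform. gamma < 1 shrinks reported proportions
# toward the reference point p0 (both values are illustrative).
import math

def rescale(p, gamma=0.6, p0=0.25):
    """Map a true proportion p in (0, 1) to a hypothetical reported estimate."""
    lo = gamma * math.log(p / (1 - p)) + (1 - gamma) * math.log(p0 / (1 - p0))
    return 1 / (1 + math.exp(-lo))

for true_share in (0.01, 0.10, 0.50):
    print(f"true {true_share:.2f} -> reported {rescale(true_share):.2f}")
```

Under any such transform, small groups are overestimated and large ones underestimated, which is exactly the pattern the paper says can masquerade as misinformation.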

But I could be proven wrong about each. And I am ok with that. The goal is to learn the right thing, not to be proven right.


21 Aug

When strapped for time, some resort to wishful thinking, others to lashing out. Both are illogical. If you are strapped for time, it is either because you scheduled poorly or because you were a victim of unanticipated obligations. Both are understandable, but neither justifies ‘acting out.’ So don’t.

Whatever the reason, being strapped for time means either that some things won’t get done on time, or that you will have to work harder, or that you will need more resources (yours or someone else’s), or all three. And the only things to do are:

  1. Triage,
  2. Ask for help, and
  3. Communicate effectively with those affected.

If you have landed in the soup because of poor scheduling, for instance, by not budgeting any time to deal with things you hadn’t scheduled, make a note. And improve.

And since it is never rational to worry—it is at best unproductive, and at worst corrosive—avoid it like the plague.

Motivated Citations

13 Jan

The best kind of insight is the ‘duh’ insight—catching something that is exceedingly common, almost routine, but that no one talks about. I believe this is one such insight.

The standards for citing congenial research (research that supports the hypothesis of choice) are considerably lower than the standards for citing uncongenial research. It is an important kind of academic corruption. And it means that the prospects of teleological progress toward truth in science, as currently practiced, are bleak. An alternate ecosystem that provides objective ratings for each piece of research is likely to be more successful. (As opposed to the ‘echo-system’ in place today: here are the people who find stuff that ‘agrees’ with what I find.)

An empirical implication of the point is that the average ranking of the journals in which cited congenial research is published is likely to be lower than that of the journals in which cited uncongenial research is published. Though, for many of the ‘conflicts’ in science, all sides will have top-tier publications—which is to say that the measure is somewhat crude.

The deeper point is that readers generally do not judge the quality of the work cited in support of specific arguments, taking many of the arguments at face value. This, in turn, means that the role of journal rankings is somewhat limited. Or, more provocatively: to improve science, we need to make sure that even research published in low-ranked journals is of sufficient quality.

Stereotypical Understanding

11 Jul

The paucity of women in computer science, math, and engineering in the US is justly and widely lamented. Sometimes the imbalance is attributed to gender stereotypes. But only a small fraction of men study these fields. And in absolute terms, the proportion of women in these fields is not a great deal lower than the proportion of men. So in some ways, the assertion that these fields are stereotypically male is itself a misunderstanding.

For greater clarity, a contrived example: say the population is split between two similarly sized groups, A and B. Say only 1% of Group A members study X, while 1.5% of Group B members do. This means that 60% of those who study X belong to Group B. Or, in more dramatic terms: activity X is stereotypically Group B. However, 98.5% of Group B doesn’t study X. And that number is not a whole lot different from 99%, the percentage of Group A that doesn’t study X.

When people say activity X is stereotypically Group B, many interpret it as ‘activity X is quite popular among Group B.’ (That is one big stereotype about stereotypes.) That clearly isn’t so. In fact, the difference between Group A and Group B in the preference for studying X—as inferred from choices (assuming choices reveal preferences)—is likely pretty small.

Obliviousness to the point is quite common. For instance, it is behind arguments linking terrorism to Muslims. And Muslims typically respond with a version of the argument laid out above—they note that an overwhelming majority of Muslims are peaceful.

One straightforward conclusion from this exercise is that we may be able to make headway in tackling disciplinary stereotypes by framing the point in terms of the difference between p(X | Group A) and p(X | Group B) rather than in terms of p(Group B | X).
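The arithmetic of the contrived example above is just Bayes’ rule, and a few lines make the gap between the two conditional probabilities explicit:

```python
# The contrived example from above: equal-sized groups, 1% of Group A
# and 1.5% of Group B study X.
p_x_given_a = 0.01
p_x_given_b = 0.015
p_a = p_b = 0.5

# Bayes' rule: the share of X-studiers who come from Group B.
p_b_given_x = (p_x_given_b * p_b) / (p_x_given_a * p_a + p_x_given_b * p_b)

print(f"p(Group B | X) = {p_b_given_x:.0%}")  # 60%: 'X is stereotypically B'
print(f"p(X | Group B) = {p_x_given_b:.1%}")  # yet only 1.5% of B studies X
```

The 60% figure drives the stereotype; the 1.5% figure is the one that describes what Group B actually does.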

Congenial Invention and the Economy of Everyday (Political) Conversation

31 Dec

Communication comes from the Latin word communicare, which means ‘to make common.’ We communicate not only to transfer information, but also to establish and reaffirm identities, mores, and meanings. (From my earlier note on a somewhat different aspect of the economy of everyday conversation.) Hence, there is often a large incentive for loyalty. More generally, there are three salient aspects to most private interpersonal communication about politics: shared ideological (or partisan) loyalties; little knowledge of, and prior thinking about, political issues; and a premium on cynicism. The second of these points—ignorance—cuts both ways. It allows for the possibility of getting away with saying something that doesn’t make sense (or isn’t true). And it also means that people need to invent stuff if they want to sound smart. (Looking stuff up is often still too hard. I am often puzzled by that.)

But don’t people know that they are making stuff up? And doesn’t that stop them? A defining feature of humans is overconfidence. People often aren’t aware of the depth of the wells of their own ignorance. And if it sounds right, well, it is right, they reason. The act of speaking is many a time an act of invention (or discovery). And we aren’t sure of, and don’t actively control, how we create. (The underlying mechanisms behind how we create, such as reliance on the ‘gut,’ are well known.) Unless we are very deliberate in speech. Most people aren’t. (There generally aren’t incentives to be.) And they find it hard to vet the veracity of the invention (or discovery) in the short time between invention and vocalization.