Estimating Bias and Error in Perceptions of Group Composition

14 Nov

People’s reports of perceptions of the share of various groups in the population are typically biased. The bias is generally greater for smaller groups. The bias also appears to vary with how people feel about the group and with stereotypes about it; people are likelier to think that the groups they don’t like are bigger (see Ahler and Sood).

A new paper makes a remarkable claim: “explicit estimates are not direct reflections of perceptions, but systematic transformations of those perceptions. As a result, surveys and polls that ask participants to estimate demographic proportions cannot be interpreted as direct measures of participants’ (mis)information, since a large portion of apparent error on any particular question will likely reflect rescaling toward a more moderate expected value…”

The claim is supported by a figure that plots a curve over averages. (The paper also reports results from other papers that base their inferences on similar figures.)

The evidence doesn’t seem adequate for the claim. Ideally, we want to plot curves within people and show that the curves are roughly the same. (I doubt that they are.)
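To make the worry concrete, here is a minimal simulation (my own sketch, not from the paper) of how a smooth curve over averages can arise even when no individual rescales toward a moderate expected value:

```python
# A toy demonstration: half the simulated respondents report the true
# proportion (plus noise); the other half ignore the item and report a flat
# 50%. No individual rescales, yet the item-level average reports trace a
# smooth line pulled toward 50 (roughly 0.5 * truth + 25), which is the
# aggregate signature that gets read as rescaling.
import numpy as np

rng = np.random.default_rng(0)
truth = np.array([2, 5, 10, 20, 40, 60, 80, 95])  # true group shares (%)
n = 1000

accurate = np.tile(truth, (n // 2, 1)) + rng.normal(0, 3, (n // 2, truth.size))
flat = np.full((n // 2, truth.size), 50.0) + rng.normal(0, 3, (n // 2, truth.size))
reports = np.vstack([accurate, flat])

for t, m in zip(truth, reports.mean(axis=0)):
    print(f"truth {t:3d}%  mean report {m:5.1f}%")
```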

Second, it is one thing to claim that the reports of perceptions follow a particular rescaling formula, and another to claim that people are aware of what they are doing. I doubt that people are.

Third, if the claim that ‘a large portion of apparent error on any particular question will likely reflect rescaling toward a more moderate expected value’ is true, then presenting people with correct information ought not to change how they think about groups, e.g., the perceived threat from immigrants. The calibrated error should be a much better moderator than the raw error. Again, I doubt it.

But I could be proven wrong about each. And I am ok with that. The goal is to learn the right thing, not to be proven right.

Unstrapped

21 Aug

When strapped for time, some resort to wishful thinking, others to lashing out. Both are illogical. If you are strapped for time, it is either because you scheduled poorly or because you were a victim of unanticipated obligations. Both are understandable, but neither justifies ‘acting out.’ So don’t.

Whatever the reason, being strapped for time means either that some things won’t get done on time, or that you will have to work harder, or that you will need more resources (yours or someone else’s), or all three. And the only things to do are:

  1. Triage,
  2. Ask for help, and
  3. Communicate effectively to those affected

If you have landed in the soup because of poor scheduling, for instance, by not budgeting any time to deal with things you haven’t scheduled, make a note. And improve.

And since it is never rational to worry—it is at best unproductive, and at worst corrosive—avoid it like the plague.

Motivated Citations

13 Jan

The best kind of insight is the ‘duh’ insight — catching something that is exceedingly common, almost routine, but something that no one talks about. I believe this is one such insight.

The standards for citing congenial research (that supports the hypothesis of choice) are considerably lower than the standards for citing uncongenial research. It is an important kind of academic corruption. And it means that the prospects of teleological progress toward truth in science, as currently practiced, are bleak. An alternate ecosystem that provides objective ratings for each piece of research is likely to be more successful. (As opposed to the ‘echo-system’—here are the people who find stuff that ‘agrees’ with what I find—in place today.)

An empirical implication of the point is that the average ranking of the journals in which cited congenial research is published is likely to be lower than that of the journals in which cited uncongenial research is published. Though, for many of the ‘conflicts’ in science, all sides will have top-tier publications, which is to say that the measure is somewhat crude.
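A hedged sketch of how one might check this, assuming a hand-coded dataset of citations (the variable names below are invented for illustration):

```python
# Each row is a cited paper, coded for whether it is congenial to the citing
# paper's thesis, along with a journal ranking (lower number = more
# prestigious). The prediction: congenial citations come, on average, from
# lower-ranked journals.
import pandas as pd

citations = pd.DataFrame({
    "congenial": [True, True, True, False, False],
    "journal_rank": [140, 85, 210, 12, 30],
})

print(citations.groupby("congenial")["journal_rank"].mean())
```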

The deeper point is that readers generally do not judge the quality of the work cited in support of specific arguments, taking many of the arguments at face value. This, in turn, means that the role of journal rankings is somewhat limited. Or, more provocatively, to improve science, we need to make sure that even research published in low-ranked journals is of sufficient quality.

Stereotypical Understanding

11 Jul

The paucity of women in Computer Science, Math and Engineering in the US is justly widely lamented. Sometimes, the imbalance is attributed to gender stereotypes. But only a small fraction of men study these fields. And in absolute terms, the proportion of women in these fields is not a great deal lower than the proportion of men. So in some ways, the assertion that these fields are stereotypically male is in itself a misunderstanding.

For greater clarity, a contrived example: Say that the population is split between two similar-sized groups, A and B. Say only 1% of Group A members study X, while the proportion of Group B members studying X is 1.5%. This means that 60% of those who study X belong to Group B. Or in more dramatic terms: activity X is stereotypically Group B. However, 98.5% of Group B doesn’t study X. And that number is not a whole lot different from 99%, the percentage of Group A that doesn’t study X.
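For what it is worth, here is the arithmetic of the example spelled out (nothing beyond Bayes’ rule and the numbers above):

```python
# With equal-sized groups, p(X|A) = 1% and p(X|B) = 1.5% imply p(B|X) = 60%:
# a large skew among those who study X despite a tiny difference in the
# underlying propensities.
p_x_given_a, p_x_given_b = 0.010, 0.015
p_a = p_b = 0.5  # similar-sized groups

p_b_given_x = (p_x_given_b * p_b) / (p_x_given_a * p_a + p_x_given_b * p_b)
print(f"p(B | X) = {p_b_given_x:.0%}")                    # 60%
print(f"Group B not studying X = {1 - p_x_given_b:.1%}")  # 98.5%
print(f"Group A not studying X = {1 - p_x_given_a:.1%}")  # 99.0%
```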

When people say activity X is stereotypically Group B, many interpret it as ‘activity X is quite popular among Group B.’ (That is one big stereotype about stereotypes.) That clearly isn’t so. In fact, the difference in the preference for studying X between Group A and Group B — as inferred from choices (assuming the same choice sets and utilities) — is likely pretty small.

Obliviousness to the point is quite common. For instance, it is behind arguments linking terrorism to Muslims. And Muslims typically respond with a version of the argument laid out above—they note that an overwhelming majority of Muslims are peaceful.

One straightforward conclusion from this exercise is that we may be able to make headway in tackling disciplinary stereotypes by elucidating the point in terms of the difference between p(X | Group A) and p(X | Group B) rather than in terms of p(Group A | X).

Congenial Invention and the Economy of Everyday (Political) Conversation

31 Dec

Communication comes from the Latin word communicare, which means ‘to make common.’ We communicate not only to transfer information, but also to establish and reaffirm identities, mores, and meanings. (From my earlier note on a somewhat different aspect of the economy of everyday conversation.) Hence, there is often a large incentive for loyalty. More generally, there are three salient aspects to most private interpersonal communication about politics — shared ideological (or partisan) loyalties, little knowledge of and prior thinking about political issues, and a premium on cynicism. The second of these points — ignorance — cuts both ways. It allows for the possibility of getting away with saying something that doesn’t make sense (or isn’t true). And it also means that people need to invent stuff if they want to sound smart. (Looking stuff up is often still too hard. I am often puzzled by that.)

But don’t people know that they are making stuff up? And doesn’t that stop them? A defining feature of humans is overconfidence. And people oftentimes aren’t aware of the depth of the wells of their own ignorance. And if it sounds right, well, it is right, they reason. The act of speaking is many a time an act of invention (or discovery). And we aren’t sure of, and don’t actively control, how we create. (The underlying mechanisms behind how we create, such as reliance on the ‘gut,’ are well known.) Unless we are very deliberate in speech. Most people aren’t. (There generally aren’t incentives to be.) And most people find it hard to vet the veracity of the invention (or discovery) in the short time that passes between invention and vocalization.

That’s Smart! What We Mean by Smartness and What We Should

16 Aug

Many people conceive of intelligence as a CPU. To them, being more intelligent means having a faster CPU. And that is cause for despair, as the clock speed is largely fixed. (Prenatal and childhood nutrition can make a sizable difference, however. For instance, iodine deficiency in children causes mental retardation.)

But people misconceive intelligence. Intelligence is not just the clock speed of the CPU. It is also the short-term cache and the OS.

Clock speed doesn’t matter much if there isn’t a good-sized cache. The size of short-term memory matters enormously. And the good news is that we can expand it with effort.

A super fast CPU with a large cache is still only as good as the operating system. If people know little or don’t know how to reason well, they generally won’t be smart. Think of the billions of people who came before we knew how to know (science). Some of those people had really fast CPUs. But many of them weren’t able to make much progress on anything.

The chances that an ignoramus who doesn’t know that correlation isn’t causation will come across as stupid are also high. In fact, we often mistake being knowledgeable and possessing rules for how to reason well for being intelligent. Though it goes without saying, it is good to say it: how much we know and how well we know how to reason are in our control.

Lastly, people despair because they mistake skew for variance. People believe there is a lot of variance in processing capacity. My sense is that the variance in processing capacity is low, and the skew high. In layman’s terms, most people are about as smart as one another, with very few very bright people. None of this is to say that the little variance that exists is not consequential.

Toward a Better OS

Ignorance of relevant facts limits how well we can reason. Thus, increasing the repository of relevant facts at hand will help you reason better. If you don’t know about something, that is ok. Today, the world’s information is at your fingertips. It will take you some time to go through things, but you can become informed about more things than you think possible.

Besides knowledge, there are some ‘frameworks’ for how to approach a problem. For instance, management consultants have something called MECE (mutually exclusive, collectively exhaustive). This ‘framework’ can help you reason better about a whole slew of problems. There are likely others.

Besides reasoning frameworks, there are simple rules that can help you reason better. You can look up books devoted to common errors in thinking, and use those to derive the rules. The rules can look as follows:

1. Correlation is not the same as causation
2. Replace categorical thinking with continuous thinking where possible, and be precise. For example, rather than claim that ‘there is a risk,’ quantify the risk. Or replace the word possibility with probability where applicable.

Some Hard Feelings: Feelings Towards Some Racial and Ethnic Groups in 4 Countries

8 Aug

According to YouGov surveys in Switzerland, the Netherlands, and Canada, and the 2008 ANES in the US, Whites in each of the four countries feel, on average, fairly coldly — giving an average thermometer rating of less than 50 on a 0 to 100 scale — toward Muslims and people from Muslim-majority regions (Feelings towards different ethnic, racial, and religious groups). However, in Europe, Whites’ feelings toward Romanians, Poles, and Serbs and Kosovars are scarcely any warmer, and are sometimes cooler. Meanwhile, Whites feel relatively warmly toward East Asians.

A Relatively Ignored Mediational Variable in Deliberation

5 Jun

Deliberation often causes people to change their attitudes. One reason this may happen is that deliberation also causes people’s values to change. Thus, one mediational model is as follows: change in values causes change in attitudes. However, deliberation can also cause people to connect their attitudes better with their values, without changing those values. This may mean that, ceteris paribus, the same values yield different attitudes post-treatment. For identification, one may want to devise a treatment that changes respondents’ ability to connect values to attitudes but has no impact on the values themselves. Or a researcher may try to gain insight by measuring people’s ability to connect values to attitudes in current experiments and estimating mediational models. One can also simply try simulation (again, subject to the usual concerns): fit the post-treatment model (regressing attitudes on values), plug in pre-deliberation values, and simulate attitudes.
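Here is a minimal sketch of that simulation strategy, with made-up data and variable names, just to fix ideas:

```python
# Fit the post-deliberation attitude-on-values model, then feed it the
# pre-deliberation values. The simulated attitudes isolate the effect of a
# tighter values-to-attitudes mapping while holding values at their
# pre-treatment levels.
import numpy as np

rng = np.random.default_rng(1)
n = 500

values_pre = rng.normal(0, 1, n)
values_post = values_pre + rng.normal(0, 0.3, n)             # values shift a little
attitudes_post = 0.8 * values_post + rng.normal(0, 0.5, n)   # tighter linkage post-deliberation

slope, intercept = np.polyfit(values_post, attitudes_post, 1)  # post-treatment model
attitudes_simulated = intercept + slope * values_pre           # attitudes implied by pre-treatment values
print(f"slope = {slope:.2f}, mean simulated attitude = {attitudes_simulated.mean():.2f}")
```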

Capuchin Monkeys and Fairness: I Want At Least As Much As The Other

1 Dec

In a much-heralded experiment, we see that a Capuchin monkey rejects a reward (food) for doing a task after seeing another monkey being rewarded with something more appetizing for doing the same task. The result has been interpreted as evidence for our ‘instinct for fairness.’ But there is more to the evidence. The fact that the monkey that gets the heftier reward doesn’t protest the more meager reward given to the other monkey is not commented upon, though it is highly informative. Ideally, any weakly reasoned deviation from equality should provoke a negative reaction. Monkeys who get the longer end of the stick, even when aware that others are getting the shorter end, don’t complain. Primates are peeved only when they are made aware that they are getting the short end of the stick. Not so much if someone else gets it. My sense is that this is true for most humans as well: people care far more about holding the short end of the stick themselves than about others holding it. It is thus incorrect to attribute such behavior to an ‘instinct for fairness.’ A better attribution may be to the following rule: I want at least as much as the others are getting.

Raising Money for Causes

10 Nov

Four teenagers, on the cusp of adulthood and eminently well-to-do, were out on the pavement raising money for children struck with cancer. They had been out raising money for a couple of hours, and from a glance at their tin pot, I estimated that they had raised about $30, likely less. Assuming the donation rate stays below $30/hr, roughly what the four of them would jointly earn working minimum wage jobs, I couldn’t help but wonder if their way of raising money for charity was rational; they could have easily raised more by donating their earnings from working minimum wage jobs. Of course, these teenagers aren’t alone. Think of the people out in the cold raising money for the poor on New York pavements. My sense is that many people do not think as often about raising money by working at a “regular job,” even when it is more efficient (money/hour) and perhaps even more pleasant. It is not clear why.
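For the back-of-the-envelope version (the $7.25/hr figure is my assumption for the minimum wage; the rest comes from the anecdote):

```python
# Compare the group's collection rate with what the four of them would earn
# jointly at an assumed minimum wage.
volunteers = 4
hours = 2
raised = 30.0        # dollars collected, likely an overestimate
min_wage = 7.25      # assumed hourly minimum wage

collection_rate = raised / hours   # ~$15/hr for the group
wage_rate = volunteers * min_wage  # ~$29/hr for the group
print(f"collected per hour: ${collection_rate:.2f}")
print(f"combined minimum-wage earnings per hour: ${wage_rate:.2f}")
```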

The same argument applies to those who run in marathons and the like to raise money. Preparing for and running in a marathon generally costs at least hundreds of dollars for an average ‘Joe’ (think about the sneakers, the personal trainers that people hire, the amount of time they ‘donate’ to train, which could have been spent working and donating that money to charity, etc.). Presumably, as I conclude in an earlier piece, they must have motives beyond charity. These latter non-charitable considerations, at least at first glance, do not seem to apply to the case of the teenagers, or to those raising money out in the cold in New York.