Bad Hombres: Bad People on the Other Side

8 Dec

Why do many people think that people on the other side are not well motivated? It could be because they think that the other side is less moral than they are. And since opprobrium toward the morally defective is the bedrock of society, thinking that the people in the other group are less moral naturally leads people to censure the other group.

But it cannot be that both groups simultaneously have better morals than the other. It can only be that people in each group think they are better. This much, logic dictates. So there has to be a self-serving aspect to moral standards, and this is what often leads people to think that the other side is less moral. Accepting this is not the same as accepting moral relativism. For even if we accept that some things are objectively more moral (not being sexist or racist, say), some groups (those that espouse that a certain sex or certain races are superior) will still think they are better.

But how do people come to know other people's morals? Some infer morals from political aims. And that is a perfectly reasonable thing to do, as political aims reflect what we value. For instance, a Republican who values 'life' may think Democrats are simply morally inferior for supporting the right to abortion. But the inference is fraught with error. As matters stand, Democrats would also like women not to have to go through the painful decision of aborting a fetus. They just want there to be an easy and safe option for women should they need it.

Sometimes people infer morals from policies. But support for different policies can stem from having different information or different beliefs about causal claims. For instance, Democrats may support a carbon tax because they believe (correctly) that the world is warming and because they think a carbon tax is the best way to reduce warming while protecting American interests. Republicans may dispute any part of that chain of logic. The point isn't what is being disputed per se, but what people will infer about others from the policies they support alone. Hanlon's razor (do not attribute to malice what can be adequately explained otherwise) is often a good rule.

Why Do People (Re)-Elect Bad Leaders?

7 Dec

‘Why do people (re)-elect bad leaders?’ used to be a question that people only asked of third-world countries. No more. The recent election of unfit people to prominent positions in the U.S. and elsewhere has finally woken some American political scientists from their mildly racist reverie—the dream that they are somehow different.

So why do people (re)-elect bad leaders? One explanation often given is that people prefer leaders who share their ethnicity. The conventional explanation for preferring co-ethnics is that people expect co-ethnics (everyone in the group) to do better under a co-ethnic leader. But often enough, the expectation seems more like wishful thinking than anything else. After all, the unsuitability of some leaders is pretty clear.

If it is wishful thinking, how do we expose it? More importantly, how do we fix it? Assume, for the moment, that people care about everyone. If they were to learn that the co-ethnic leader is much worse than the alternative, they might switch their votes. But what if people care about the welfare of co-ethnics more than that of others? The 'good' thing about bad leaders is that they are generally bad for everyone. So, if such voters knew better, they too would switch their votes.

You can verify these points using a behavioral trust game in which people observe allocators of different ethnicities and differing competence, and also observe the welfare of both co-ethnics and others. You can also use the game to study one of the deepest concerns about 'negative party ID': that people will harm themselves to spite others.
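To see the logic concretely, here is a minimal simulation sketch (an illustration of the argument above, not the actual game design); the welfare numbers and the in-group weight are made up.

```python
# A toy illustration: even a voter who weights co-ethnic welfare heavily
# should prefer a competent out-group leader to an incompetent co-ethnic
# one, because bad leaders lower welfare for everyone.
def utility(welfare_coethnic, welfare_other, w=0.8):
    """Voter utility with weight w (0 to 1) on co-ethnic welfare."""
    return w * welfare_coethnic + (1 - w) * welfare_other

# Illustrative welfare outcomes under each leader.
competent_outgroup = utility(welfare_coethnic=0.7, welfare_other=0.9)    # 0.74
incompetent_coethnic = utility(welfare_coethnic=0.4, welfare_other=0.3)  # 0.38
print(competent_outgroup > incompetent_coethnic)  # True: the informed voter switches
```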

Party Time

2 Dec

It has been nearly five years since the publication of Affect, Not Ideology: A Social Identity Perspective on Polarization. In that time, the paper has accumulated over 450 citations according to Google Scholar. (Citation counts on Google Scholar tend to be a bit optimistic.) So how does the paper hold up? Some reflections:

  • Disagreement over policy conditional on aims should not mean that you think that people you disagree with are not well motivated. But regrettably, it often does.
  • Lack of real differences doesn’t mean a lack of perceived differences. See here, here, here, and here.
  • The presence of real differences is no bar to liking another person or group. Nor does a lack of real differences come in the way of disliking another person or group. The history of racial and ethnic hatred attests to the point. In fact, why small differences often serve as durable justifications for hatred is one of the oldest and deepest questions in all of social science. (Paraphrasing from Affectively Polarized?.) Evidence on the point:
    1. Sort of sorted but definitely polarized
    2. Assume partisan identity is slow moving, as Green, Palmquist, and Schickler (2002), among others, show. Then add to it the fact that people still like their 'own' party a fair bit (thermometer ratings are a toasty 80 and haven't budged). See the original paper.
    3. People like ideologically extreme elites of the party they identify with a fair bit (see here).
  • It may seem surprising to some that people can be so angry when they spend so little time on politics and know next to nothing about it. But it shouldn't be. Information generally gets in the way of anger. Again, the history of racial bigotry is a good example.
  • The title of the paper is off in two ways. First, partisan affect can be caused by ideology. Not much of partisan affect may be founded in ideological differences, but at least some of it is. (I always thought so.) Second, the paper does not offer a social identity perspective on polarization.
  • The effect that campaigns have on increasing partisan animus is still to be studied carefully. Certainly, ads play but a small role in it.
  • Evidence on the key take-home point—that partisans dislike each other a fair bit—continues to mount. The great thing is that people have measured partisan affect in many different ways, including using IAT and trust games. Evidence that IAT is pretty unreliable is reasonably strong, but trust games seem reasonable. Also see my 2011 note on measuring partisan affect coldly.
  • Interpreting over-time changes is hard. That was always clear to us. But see Figure 1 here, which controls for a bunch of socio-demographic variables, and note that the paper also has over-time cross-country comparisons to clarify inferences further.
  • If you assume that people learn about partisans from elites, reasoning about what kinds of people would support this or that ideological extremist, it is easy to understand why people may like the opposing party less over time (though trends among independents should be parallel). The more curious thing is that people still like the party they identify with and approve of ideologically extreme elites of their party (see here).

Learning About [the] Loss (Function)

7 Nov

One of the things we often want to learn is the loss function people use to discount the ideological distance between themselves and a legislator. People often try to learn the loss function using actual distances. But if the aim is to learn the loss function, perceived distance is better than actual distance, because perceived distance is what the voter believes to be true. The estimated function can then be used to simulate scenarios in which perceptions match facts.
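As a concrete illustration, here is a minimal sketch, with simulated data standing in for a real survey, of recovering the exponent of a power-loss function from perceived distances and evaluations. The functional form and all variable names are assumptions for the example.

```python
# Recover the exponent of an assumed power-loss function from (simulated)
# perceived ideological distances and 0-100 evaluations of a legislator.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
perceived_distance = rng.uniform(0, 4, 500)
# Simulate evaluations generated by a quadratic loss plus noise.
evaluation = 90 - 5 * perceived_distance**2 + rng.normal(0, 5, 500)

def power_loss(d, intercept, weight, k):
    """Evaluation falls with perceived distance raised to exponent k."""
    return intercept - weight * d**k

params, _ = curve_fit(power_loss, perceived_distance, evaluation, p0=[90.0, 5.0, 1.0])
print(f"estimated exponent k = {params[2]:.2f}")  # ~2, i.e., quadratic loss
```

With the function in hand, the same fit can be fed counterfactual distances to simulate what evaluations would look like if perceptions matched facts.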

Confirmation Bias: Confirming Media Bias

31 Oct

It used to be that searches for ‘Fox News Bias’ were far more common than searches for ‘CNN bias’. Not anymore. The other notable thing—correlated peaks around presidential elections.

Note also that searches around midterm elections barely rise above the noise.
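For those who want to pull the series themselves, here is a small sketch using the (unofficial) pytrends package; the keyword strings and timeframe are assumptions.

```python
# Pull and compare the two bias-related search series from Google Trends.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(["Fox News bias", "CNN bias"], timeframe="all")
trends = pytrends.interest_over_time()  # interest over time, scaled 0-100
print(trends[["Fox News bias", "CNN bias"]].tail())
```

build_payload also takes a geo argument (e.g., geo="PK"), which is what the country-specific comparisons in the next post call for.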

God, Weather, and News vs. Porn

22 Oct

The Internet is for porn (Avenue Q). So it makes sense to measure things on the Internet in porn units.

I jest, just a bit.

In Everybody Lies, Seth Stephens-Davidowitz points out that people search for porn more than the weather on GOOG. Data from Google Trends for the disbelievers.

But how do searches for news fare? Surprisingly well. And it seems the new president is causing interest in news to outstrip interest in porn. Worrying, if you take Posner's point that people's lack of interest in politics is a sign that they think the system is working reasonably well. The last time searches for news > porn was when another Republican was in the White House!

How are searches for porn affected by Ramadan? For an answer, we turn to Google Trends data from Pakistan. You may say that the trend is expected given Ramadan is seen as a period of ritual purification. And that is a reasonable point. But you see the same trend with Eid-ul-Fitr and porn.

And in Ireland, of late, it seems searches for porn increase during Christmas.

Measuring Segregation

31 Aug

The dissimilarity index is a measure of segregation. It runs as follows:

D = \frac{1}{2} \sum\limits_{i=1}^{n} \left| \frac{g_{i1}}{G_1} - \frac{g_{i2}}{G_2} \right|

where g_{i1} is the population of group 1 in the ith area and G_1 is the total population of group 1 in the larger area against which dissimilarity is being measured (and likewise for g_{i2} and G_2).
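To make the formula concrete, here is a minimal sketch with made-up counts of two groups across four areas.

```python
# Dissimilarity index: half the summed absolute differences between each
# group's share of its own total that falls in each area.
import numpy as np

g1 = np.array([100, 300, 50, 550])  # group 1 counts per area
g2 = np.array([400, 100, 450, 50])  # group 2 counts per area

D = 0.5 * np.abs(g1 / g1.sum() - g2 / g2.sum()).sum()
print(f"dissimilarity index = {D:.2f}")  # 0 = identical distributions, 1 = complete segregation
```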

The measure suffers from a couple of issues:

  1. Concerns about lumpiness. Even within a small area, are Black people at one end and White people at the other?
  2. Choice of baseline. If the larger area (say, a state) is 95% White (Iowa is 91.3% White), dissimilarity is naturally likely to be small.

One way to address the concern about lumpiness is to provide an estimate of the spatial variance of the quantity of interest. But to measure variance, you need local measures of the quantity of interest. One way to arrive at local measures is as follows:

  1. Create a distance matrix across all addresses. Get latitude and longitude. And start with Euclidean distances, though smart measures that take account of physical features are a natural next step. (For those worried about computing super huge matrices, the good news is that computation can be parallelized.)
  2. For each address, find the n closest addresses and estimate the quantity of interest. Where multiple houses are a similar distance apart, sample randomly or include them all. One advantage of using the n closest addresses rather than addresses within a fixed area is that it naturally accounts for variations in density.

But once you have the local measures, why report just the variance? Why not also report means of compelling common-sense metrics, like the proportion of addresses (people) whose closest house has people of another race?
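Here is a minimal sketch of that pipeline, with simulated coordinates and a made-up binary race code standing in for real address data. A KD-tree query is used as a shortcut around building the full distance matrix.

```python
# Local segregation measures from the n nearest neighbors via a KD-tree.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
coords = rng.uniform(0, 1, (1000, 2))  # (latitude, longitude) per address
race = rng.integers(0, 2, 1000)        # illustrative binary race code

tree = cKDTree(coords)
n = 10
# Query n + 1 neighbors because each point is its own nearest neighbor.
_, idx = tree.query(coords, k=n + 1)
neighbors = idx[:, 1:]

# Local measure: share of an address's n nearest neighbors of another race.
local = (race[neighbors] != race[:, None]).mean(axis=1)
print(f"mean = {local.mean():.2f}, spatial variance = {local.var():.3f}")

# Common-sense metric: share of addresses whose closest house is of another race.
print(f"closest house differs: {(race[neighbors[:, 0]] != race).mean():.2f}")
```

Euclidean distance on raw latitude and longitude is only a rough first cut, as noted above; distance measures that respect physical features can be slotted in later.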

As for baseline numbers (generally just a couple of numbers): they are there to help you interpret. They can be brought in later.

The Innumerate American

19 Feb

In answering a question, scientists sometimes collect data that answers a different, sometimes even more important, question. And when that happens, scientists sometimes overlook the Easter egg. This recently happened to me, or so I think.

Kabir and I recently investigated the extent to which estimates of motivated factual learning are biased (see here). As part of our investigation, we measured numeracy. We asked American adults to answer five very simple questions (the items were taken from Weller et al. 2002):

  1. If we roll a fair, six-sided die 1,000 times, on average, how many times would the die come up as an even number? — 500
  2. There is a 1% chance of winning a $10 prize in the Megabucks Lottery. On average, how many people would win the $10 prize if 1,000 people each bought a single ticket? — 10
  3. If the chance of getting a disease is 20 out of 100, this would be the same as having a % chance of getting the disease. — 20
  4. If there is a 10% chance of winning a concert ticket, how many people out of 1,000 would be expected to win the ticket? — 100
  5. In the PCH Sweepstakes, the chances of winning a car are 1 in a 1,000. What percent of PCH Sweepstakes tickets win a car? — .1%

The average score was about 57%, and the standard deviation was about 30%. Nearly 80% (!) of the people couldn't answer that a 1 in 1,000 chance is .1%. Nearly 38% couldn't answer that a fair die would turn up an even number, on average, 500 times every 1,000 rolls. 36% couldn't calculate how many people out of 1,000 would win if each had a 1% chance. And 34% couldn't answer that 20 out of 100 means 20%.

If people have trouble answering these questions, it is likely that they struggle to grasp some of the numbers behind how the budget is allocated, or for that matter, how to craft their own family’s budget. The low scores also amply illustrate that the education system fails Americans.

Given the importance of numeracy in a wide variety of domains, it is vital that we pay greater attention to improving it. The problem is also tractable: with the advent of good self-learning tools, it is possible to intervene at scale. Solving it is also liable to be good business. Given that numeracy is liable to improve people's capacity to count calories, make better financial decisions, and more, health insurance companies could lower premiums in return for people becoming more numerate, and lending companies could lower interest rates in exchange for increases in numeracy.