Figure credit: Yphtach Lelkes. From joint work on selective exposure.
Four teenagers, at the cusp of adulthood and evidently well-to-do, were out on the pavement raising money for children with cancer. They had been out for a couple of hours, and from a glance at their collection tin, I estimated that they had raised about $30, likely less. Assuming the donation rate stayed below $30/hour, roughly what the four of them would earn together at minimum-wage jobs, I couldn't help but wonder whether their way of raising money for charity was rational; they could easily have raised more by working minimum-wage jobs and donating the earnings. Of course, these teenagers aren't alone. Think of the people out in the cold raising money for the poor on New York pavements. My sense is that many people do not think as often about raising money by working at a "regular job," even when it is more efficient (money/hour), and perhaps even more pleasant. It is not clear why.
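For concreteness, a back-of-the-envelope comparison; every figure below, including the $7.25 minimum wage, is a rough assumption rather than a measurement:

```python
# Back-of-the-envelope comparison: street fundraising vs. donating wages.
# Every figure here is an assumption for illustration, not a measurement.

n_volunteers = 4        # the four teenagers
hours_out = 2           # roughly how long they had been collecting
amount_raised = 30.0    # rough estimate from the collection tin
min_wage = 7.25         # assumed hourly wage (US federal minimum)

raised_per_person_hour = amount_raised / (n_volunteers * hours_out)
wages_they_could_donate = n_volunteers * hours_out * min_wage

print(f"Raised per person-hour: ${raised_per_person_hour:.2f}")  # ~$3.75
print(f"Combined minimum-wage earnings over the same time: ${wages_they_could_donate:.2f}")  # $58.00
```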
The same argument applies to those who run marathons and the like to raise money. Preparing for and running a marathon generally costs the average 'Joe' at least a few hundred dollars (think of the sneakers, the personal trainers people hire, the hours they 'donate' to training that could instead have been spent working and donating the wages to charity, etc.). Ostensibly, as I conclude in an earlier piece, they must have motives beyond charity. These non-charitable considerations, at least at first glance, do not seem to apply to the teenagers, or to those raising money out in the cold in New York.
What do single-shot evaluations of MT (replace it with anything else) samples (vis-a-vis census figures) tell us? I am afraid very little. Inference rests upon knowledge of the data (here, respondent) generating process. Without a model of the data generating process, all such research reverts to modest tautology: sample A was closer to census figures than sample B on parameters X, Y, and Z. This kind of comparison has limited utility, as a companion for substantive research. It is not particularly useful if we want to understand the characteristics of the data generating process. For even if the respondent generation process is random, any one draw (here, a sample) can be far from the true population parameter(s).
Even with lots of samples (respondents), we may not be able to say much if the data generation process is variable. Where there is little expectation that the data generation process will be constant, we cannot generalize, and it is hard to see why the MT respondent generation process for political surveys would be constant: it likely depends on the pool of respondents, which in turn depends on the economy, the incentives offered, the changing lure of those incentives, the content of the survey, etc. One way to correct for all of this is to model the variation in the data generating process, but that requires longer observational spans and more attention to potential sources of variation.
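A tiny simulation, with made-up numbers, illustrates both points: a single random draw can land well away from the population parameter, and a drifting respondent-generation process means pooled samples recover no single stable parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: 48% of adults hold some attribute of interest.
p_true = 0.48

# Even a genuinely random sample of 500 respondents can miss the mark.
one_sample = rng.binomial(1, p_true, size=500).mean()
print(f"Single-sample estimate: {one_sample:.3f} (truth = {p_true})")

# A drifting respondent-generation process: the pool slowly skews upward
# over twelve months (say, as incentives or the survey topic change).
monthly_p = p_true + np.linspace(0, 0.10, 12)  # assumed drift
monthly_estimates = [rng.binomial(1, p, size=500).mean() for p in monthly_p]
print(f"Pooled estimate across months: {np.mean(monthly_estimates):.3f}")
# The pooled figure matches neither the original p_true nor any single month,
# and without modeling the drift we cannot say what it generalizes to.
```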
Two things are often said about American politics (most vociferously by Mo Fiorina): political elites are increasingly polarized, and the issue positions of the masses haven't budged much. Assuming that is the case, one expects the average distance between where partisans place themselves and where they place the 'in-party' (or the 'out-party') to increase. However, it appears that the distance to the in-party has remained roughly constant (and very small), while the distance to the out-party has grown, in line with what one expects from the theory of 'affective polarization' and group-based perception.
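For what it's worth, here is a sketch of how such distances could be computed from self and party placements; the data frame and numbers below are hypothetical, not survey data.

```python
import pandas as pd

# Hypothetical 7-point ideology placements of self and of the two parties,
# plus party identification; none of these numbers are real data.
df = pd.DataFrame({
    "pid":       ["dem", "dem", "rep", "rep"],
    "self":      [2, 3, 6, 5],
    "dem_place": [2, 3, 2, 1],
    "rep_place": [6, 7, 6, 5],
})

in_party_place  = df["dem_place"].where(df["pid"] == "dem", df["rep_place"])
out_party_place = df["rep_place"].where(df["pid"] == "dem", df["dem_place"])

df["dist_in"]  = (df["self"] - in_party_place).abs()
df["dist_out"] = (df["self"] - out_party_place).abs()
print(df[["dist_in", "dist_out"]].mean())  # mean self-to-in-party and self-to-out-party distance
```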
From the Introduction of their edited volume:
Tversky and Kahneman used the following experiment to test the 'representativeness heuristic':
Subjects are shown brief personality descriptions of several individuals, sampled at random from a group of 100 professionals: engineers and lawyers.
Subjects are asked to assess whether each description is of an engineer or a lawyer.
In one condition, subjects are told the group comprises 70 engineers and 30 lawyers; in the other, the reverse: 70 lawyers and 30 engineers.
Both conditions produced the same mean probability judgments.
Tversky and Kahneman call this result a ‘sharp violation’ of Bayes Rule.
Counter Point -
I am not sure the experiment shows any such thing. The mathematical formulation of the objection is simple and boring, so an example instead. Imagine an urn with red and black balls. Subjects are asked whether the ball in their hand is black or red under two alternate descriptions of the urn's composition. When people are completely sure of the color, the urn's composition obviously should have no effect: just because only one ball in a hundred is black, a person holding a black ball will not start thinking it is actually red. So on and so forth. To apply Bayes Rule properly, one wants to account for uncertainty about the evidence. People are typically (over)certain about what they see (there is plenty of evidence for this, it seems, even in the edited volume), and that certainty automatically discounts the urn's composition. People may not be violating Bayes Rule; they may just be feeding the formula incorrect data.
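A quick calculation makes the point. If subjects read the description as nearly diagnostic, Bayes Rule itself implies that the base rate barely moves the posterior; the likelihoods below are illustrative assumptions, not Tversky and Kahneman's numbers.

```python
def p_engineer(prior_engineer, p_desc_given_engineer, p_desc_given_lawyer):
    """Posterior probability that the described person is an engineer (Bayes Rule)."""
    numerator = p_desc_given_engineer * prior_engineer
    denominator = numerator + p_desc_given_lawyer * (1 - prior_engineer)
    return numerator / denominator

# Suppose subjects read the description as all but diagnostic of 'engineer'.
for prior in (0.70, 0.30):  # the two base-rate conditions
    print(prior, round(p_engineer(prior, 0.99, 0.01), 3))
# prior 0.70 -> 0.996, prior 0.30 -> 0.977
# With near-certain likelihoods, the base rate has little room to matter.
```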
Differential measurement error across control and treatment groups (or pre- and post-treatment measurement waves) can vitiate estimates of the treatment effect. One prominent reason for differential error in the survey context is differential motivation. For instance, if participants are less motivated to respond accurately on the pre-treatment survey than on the post-treatment survey (or satisfice more on the pre-treatment survey than on the post-treatment survey), the pre-post difference score will be a biased estimator of the treatment effect. Weiksner (2008) provides an empirical example of the broad problem he identifies: participants acquiesced more during the pre-treatment survey (the treatment is the Deliberative Poll) than the post-treatment survey, compromising inferences about how much attitude change happens over the course of the treatment. To correct for it, one may want to replace agree/disagree responses with construct-specific questions (Weiksner, 2008). A perhaps better solution would be to incentivize all (or a random subset of) responses to the pre-treatment survey. Possible incentives include monetary rewards, adding a preface to the screens telling people how important accurate responses are to the research, etc. This is the same strategy that I advocate for dealing with satisficing more generally (see here): minimizing errors, rather than the dominant though somewhat more suboptimal strategy of "balancing errors" by randomizing the response order.
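To see how differential satisficing alone can masquerade as a treatment effect, here is a small simulation with made-up numbers; the true attitudes do not change at all.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# True attitudes are identical in both waves; the true treatment effect is zero.
true_attitude = rng.normal(0, 1, n)

# Pre-treatment wave: satisficing/acquiescence adds an assumed upward bias (+0.3)
# and extra noise.
pre = true_attitude + 0.3 + rng.normal(0, 0.5, n)

# Post-treatment wave: respondents answer more carefully (no bias, less noise).
post = true_attitude + rng.normal(0, 0.2, n)

# The pre-post difference score registers 'attitude change' that is pure measurement error.
print(f"Estimated treatment effect: {np.mean(post - pre):.3f}")  # roughly -0.3, not 0
```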
Across the UK and the US, a large majority of politicians seem to believe that increasing levels of education will reduce economic inequality. However, it isn't clear that the policy is empirically supported. Here are some potential ways increasing levels of education can affect economic inequality:
- As Grusky argues, current high-wage earners whose high wages depend on education and on a lack of competition from similarly educated men and women (High Education Low Competition, or HELCO) will start earning lower wages because of increased competition, thereby reducing inequality. This assumes that HELCO won't respond by trying to burnish their education credentials, etc. It also assumes that HELCO exists as a large class. What likely exists instead of HELCO is success attributable to networks, etc., and that kind of advantage cannot be blunted by increasing education among those 'not in the network'.
- Another possibility is that education increases the number of high-paying jobs in the economy and raises the boats of non-HELCO more than HELCO. There is some evidence for that, though it is mostly anecdotal.
- Another plausible scenario is that additional education produces only a modest effect, with non-HELCO still mostly doing low-paying jobs. This may be due to only a modest increase in the overall availability of 'good jobs'. This outcome is in fact likely if the data are any indication: easy access to education has already meant that many a janitor and store clerk walks around with a college degree (see Why Did 17 Million Students Go to College?, and The Underemployed College Graduate).
Without an increase in 'good jobs', the result of an increase in education is greater heterogeneity in who succeeds (a random draw, at the extreme) but no change in the proportion of those who succeed. That is, greater equality of opportunity (a commendable goal) but no reduction in economic inequality (though, in a multi-generation game, it may even out). Increasing access to education also has the positive externality of producing a more educated society, another worthy goal.
How plentiful the 'good jobs' are depends partly on how economic activity is organized. For instance, there may once have been a case for hiring only one 'super-talented person' (say, a 'superstar') for a top-shelf job (say, CEO). Now we have systems that can harness the wisdom of many, and it is plausible that that wisdom is greater than the superstar's. It stands to reason, then, that the superstar be replaced; economic activity will be more efficient. Or else let other smart people who can contribute equally (if educated) be recompensed differently for doing work that is 'beneath them'.
In Predictably Irrational, Dan Ariely discusses the clever (ex-)subscription menu of The Economist that purportedly manipulates people into subscribing to a pricier plan. In an experiment based on the menu, Ariely shows that adding an item to the menu (one that very few choose) can cause a preference reversal over the other items on the menu.
Let's consider a minor variation of Ariely's experiment. Assume there are two different menus that look as follows:
1. 400 cal, 500 cal.
2. 400 cal, 500 cal, 800 cal.
Assume that all items cost and taste the same. When given the first menu, say 20% choose the 500-calorie item. When selecting from the second menu, the percentage of respondents selecting the 500-calorie item is likely to be significantly greater.
Now why might that be? One reason may be that people do not have absolute preferences, here for a specific number of calories, and that they make judgments about what is a reasonable number of calories based on the menu. For instance, they decide that they do not want the item with the maximum calorie count. And when presented with a menu with more than two distinct calorie choices, another consideration comes to mind: they do not want too little food either. More generally, they may let the options on the menu anchor what counts as 'too much' and 'too little'.
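To make the mechanism concrete, here is a toy choice rule in which people pick the item closest to the menu's own midpoint rather than chase an absolute calorie target; it is a sketch of the anchoring story, not a model of Ariely's data.

```python
def menu_relative_choice(calories):
    """Toy rule: ignore absolute preferences; pick the item closest to the menu's midpoint."""
    midpoint = (min(calories) + max(calories)) / 2
    return min(calories, key=lambda c: abs(c - midpoint))

print(menu_relative_choice([400, 500]))       # 400 (ties go to the lower-calorie item)
print(menu_relative_choice([400, 500, 800]))  # 500: adding the 800-calorie item shifts the 'middle'
```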
If this is true, it can have potentially negative consequences. For instance, McDonald's has on its menu a Bacon Angus Burger that comes in at about 1360 calories (calories are now displayed on McDonald's menus courtesy of Richard Thaler). It is possible that people choose higher-calorie items when they see this menu option than when they do not.
More generally, people's reliance on the menu to discover their own preferences means that marketers can manipulate what is seen as the middle (and hence 'reasonable'). This also translates, to some degree, to politics, where what is considered the middle (in both social and economic policy) is sometimes exogenously shifted by the elites.
That is but one way a choice on the menu can affect the preference order over other choices. Separately, a choice can sometimes prime people about how to judge other choices. For instance, in a paper exploring the effect of Nader on preferences over Bush and Kerry, researchers find that "[W]hen Nader is in the choice set all voters' choices are more sharply aligned with their spatial placements of the candidates."
All of this means that the assumption of independence of irrelevant alternatives (IIA) needs to be rethought. Adverse conclusions about human rationality are best withheld (see Sen).
Further Reading -
R. Duncan Luce and Howard Raiffa. Games and Decisions. John Wiley and Sons, Inc., 1957.
Amartya Sen. Internal consistency of choice. Econometrica, 61(3):495–521, May 1993.
Amartya Sen. Is the idea of purely internal consistency of choice bizarre? In J.E.J. Altham and Ross Harrison, editors, World, Mind, and Ethics. Essays on the ethical philosophy of Bernard Williams. Cambridge University Press, 1995.
- older browsers are likelier to display the survey incorrectly.
- the type of browser can be a proxy for the respondent's proficiency with computers and the speed of their Internet connection.
People using older browsers may abandon surveys at higher rates than those using more modern browsers.
Using data from a large Internet survey, we test whether people who use older browsers abandon surveys at higher rates, and whether their surveys have a larger amount of missing data.
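For the curious, here is roughly what such a comparison looks like in code, assuming paradata with a browser-age indicator and an abandonment flag; the file and column names below are made up.

```python
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical paradata: 'old_browser' (1 = outdated browser, 0 = modern)
# and 'abandoned' (1 = broke off before finishing the survey).
df = pd.read_csv("survey_paradata.csv")  # assumed file name

counts = df.groupby("old_browser")["abandoned"].agg(["sum", "count"])
print(counts["sum"] / counts["count"])   # abandonment rate by browser group

# Two-sample test of the difference in abandonment rates.
z, p = proportions_ztest(count=counts["sum"].values, nobs=counts["count"].values)
print(f"z = {z:.2f}, p = {p:.3f}")
```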
The Internet has revolutionized the dissemination of misinformation. The easy availability of incorrect information, gullible and eager masses, and the ease of 'sharing' have created fertile conditions for misinformation epidemics.
While a fair proportion of misinformation is likely created deliberately, it may well spread inadvertently. The misinformation that people carry is often no different from fact to them, and people are likely to share it with the same enthusiasm as they would fact.
Attitude-congenial misinformation is more likely to be known (and accepted as fact), and more likely to be shared enthusiastically with someone who shares the same attitude (for social and personal rewards). Misinformation considered 'useful' is also more likely to be shared, e.g., (mis)information about health-related topics.
The chance of acceptance of misinformation may be greater still if people know little about the topic, or if they have no reason to think that the information is motivated. Lastly, these epidemics are more likely to take place among those less familiar with technology.