Tversky and Kahneman, in “The Framing of Decisions and the Psychology of Choice”, “describe decision problems in which people systematically violate the requirements of consistency and coherence, and […] trace these violations to the psychological principles that govern the perception of decision problems and the evaluation of options.”
They start with the following example –
Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:
[one set of respondents, condition 1]
If Program A is adopted, 200 people will be saved. [72 percent]
If Program B is adopted, there is 1/3 probability that 600 people will be saved, and 2/3 probability that no people will be saved. [28 percent]
[second set of respondents, condition 2]
If Program C is adopted, 400 people will die. [22 percent]
If Program D is adopted, there is 1/3 probability that nobody will die, and 2/3 probability that 600 people will die. [78 percent]
They add –
“[I]t is easy to see that the two problems are effectively identical. The only difference between them is that the outcomes are described in problem 1 by the number of lives saved and in problem 2 by the number of lives lost. The change is accompanied by a pronounced shift from risk aversion to risk taking.”
Given the empirical result, they propose ‘prospect theory’, which we can summarize as – people exhibit risk aversion when faced with gains, and risk seeking when faced with losses.
Why is Program A less “risky” than Program B?
The expected utility of A can be seen as equal to that of B over repeated draws. However, over the next draw – which we can assume to be the question’s intent – Program A provides a certain outcome of 200 saved, while Program B is a toss-up between 0 and 600. Hence, Program B can be seen as risky.
Looked at more closely, however, the interpretation of Program B is harder still –
Probability is commonly understood in terms of repeated draws. Here – given infinite draws – 1/3 of the time it will yield 600, and the remaining 2/3 of the time, 0. (~ Frequentist) Tversky and Kahneman share the frequentist take on probability (though they frame it differently) – “The utility of a risky prospect is equal to the expected utility of its outcomes, obtained by weighting the utility of each possible outcome by its probability.” (This draws directly from statistical decision theory, which defines risk as the integral of the loss function. The calculation is inapplicable to any one draw.)
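The quoted weighting rule is the standard expected-utility formula; written out (notation mine, not the paper’s):

```latex
% Expected utility of a prospect with outcomes x_i and probabilities p_i:
EU = \sum_i p_i \, u(x_i)
% e.g., for Program B:
EU(B) = \tfrac{1}{3}\, u(600) + \tfrac{2}{3}\, u(0)
% The decision-theoretic risk alluded to above, for comparison,
% is the loss integrated over the sampling distribution:
R(\theta, \delta) = \int L\big(\theta, \delta(x)\big)\, f(x \mid \theta)\, dx
```

Both expressions are averages over repeated draws, which is why neither speaks directly to the outcome of the next draw.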
What is the meaning of probability for the next draw? If it is a random event, then we have no knowledge of the next toss. The way it is used here, however, is different – we know that it isn’t a ‘random event’, that we have some knowledge of the outcome of the next toss, and we are expressing ‘confidence’ in that outcome. (~ Bayesian) Transcribing Program B’s description into a Bayesian framework, we are one-third confident that all 600 will be saved, and two-thirds confident that we will fail utterly. (N.B. The probability distribution for the predicted event likely emanates from a threshold process – an all-or-nothing kind of gambling event. Alternate processes may entail that the counterfactual to utter failure is a slightly-less-than-utter failure, and so on and so forth on a continuum.) Two-thirds confidence in utter failure (all die) makes the decision task ‘risky’.
Argument about rationality and equality of utility (between A, B, C, and D)
According to Tversky and Kahneman, the utility of Program A is the same as that of Program B. As we can infer from the above, if we constrain the estimation of utility to the next draw – which is in line with the way the question is put forth – Program A is superior to Program B. An alternate way to put the question could have been – “Over the next 100,000 draws, the programs provide these outcomes. Which one would you prefer?” Looked at in that light, the significant majority who choose Program A over B can be seen as rational.
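The rephrased “100,000 draws” question can be checked by simulation. A sketch (the draw count comes from the rewording above; the seed is arbitrary):

```python
import random

rng = random.Random(42)
draws = 100_000

# Program A saves exactly 200 on every draw.
total_a = 200 * draws

# Program B saves 600 with probability 1/3, else 0, on each draw.
total_b = sum(600 if rng.random() < 1 / 3 else 0 for _ in range(draws))

# Over many draws the totals converge; over one draw they do not.
print(total_a, total_b, abs(total_a - total_b) / total_a)
```

Over 100,000 draws the two programs save nearly identical totals, so indifference would be defensible under that framing; over a single draw, it is not.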
However, the central finding of Tversky and Kahneman is the “preference reversal” between battery 1 (the gains story) and battery 2 (the losses story). We see a reversal from a majority preferring ‘risk aversion’ to a majority preferring ‘risk taking’ between the two ‘conditions’. Looked at independently, the majority’s support in each condition seems logical, but why is that the case? We have already made a case for battery 1, and for battery 2 the case would run something like this – given the overwhelming number of fatalities, one would want to try a risky option. Except, of course, the mortality figures in A and C, and in B and D, are the same, and so is the risk calculus.
For Tversky and Kahneman’s findings to be seen as testimony to human irrationality, Program A should basically be seen as equivalent to Program C, and Program B to Program D, and the lack of ‘consistency’ between choices as an indicator of irrationality. In condition 1, our attention is selectively moored to the positive, while in condition 2, to the negative, and respondents evaluate risk based on different DVs (even though they are the same). The findings are unequivocally normatively problematic, and provide a manual for strategic actors on how to “frame” policy choices in ways that will garner support.
Brief points about measurement and experiment design
1) There is no ‘control’ group. One imagines that the ‘rational’ split would be the one gotten in condition 1, or condition 2, or, as the authors indicate, some version of a 50-50 split. There is reason to believe that a 50-50 split is not the rational split in either condition (with perhaps a 100-0 split in either condition being ‘rational’). This doesn’t overturn the findings but merely provides an interpretation of the control. Definitions of control are important as they allow us to see the direction of bias. Here, it allows us to see that condition 1 allows more people to reach the ‘correct decision’ than condition 2.
2) To what extent is the finding an artifact of the way the question is posed? It is hard to tell.
- A 50-50 split would be achieved if respondents thought both choices equivalent and hence picked one at random. But respondents are liable to imagine that a ‘unique’ solution exists – they have, after all, been brought into a university laboratory and asked a question – so people are likely to try to read the tea leaves. Of course, people systematically reading the tea leaves in one way means something else is perhaps going on, but it is still very likely that deviations from a 50-50 split would be much smaller if one were to provide a response option stating that both choices are equivalent. Some number would choose that third option, and one could then either eliminate that sub-sample and calculate deviations from 50-50 constrained to the two choices (which will likely yield a larger percentage swing), or include everyone, and find a smaller percentage swing.
- The stump for condition 2 (which offers Program C or Program D) is the same as the stump for condition 1 – the disease is ‘expected to kill 600 people’. In light of that, the description of Program C (“If Program C is adopted, 400 people will die”) offers no information about the other 200. A respondent can imagine that those 200 will be saved, but isn’t particularly sure of their fate. On the other hand, with the same stump, the information that ‘200 will be saved’ allows one to weakly infer that 400 people will die. This biases the swing in favor of the results that we see.
- If we are to imagine that the results are driven entirely by differential risk attitudes over gains and losses, and not some other cognitive malaise, then it would have been nice to see a clearer enunciation of the outcomes. For example, the description of Program A could have been reworded as ‘If Program A is adopted, 200 people will be saved, and 400 people will not’, or ‘Only 200 out of the 600 people will be saved’. This would likely have attenuated the swing that we see, though it is open to empirical investigation. The larger point perhaps is that there are multiple ‘manipulations’, and that the results we see may be artifactual, coming instead from another source. For example, Kühberger (1995) noted that outcomes in the Asian disease problem are inadequately specified. When Kühberger made the outcomes explicit (e.g. stating that 200 will be saved and 400 will die), ‘framing’ effects vanished.
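The tallying point in the first bullet above can be made concrete with invented numbers. Suppose, purely hypothetically, that 40% of respondents would take an ‘options are equivalent’ choice if offered, and the rest split 72/28 as in condition 1:

```python
# All figures below are hypothetical, for illustration only.
n = 1000                 # hypothetical sample size
n_equiv = 400            # hypothetical: 40% say the options are equivalent
n_a, n_b = 432, 168      # the remaining 600 split 72/28, as in condition 1

# Tally 1: drop the "equivalent" responders and renormalize over two choices.
share_a_constrained = n_a / (n_a + n_b)   # 0.72 -> a 22-point swing from 50-50

# Tally 2: keep everyone in the denominator.
share_a_overall = n_a / n                 # 0.432 -> a much smaller apparent swing

print(share_a_constrained, share_a_overall)
```

The same raw responses yield a large or a small deviation from 50-50 depending purely on how the third option is handled, which is why offering it matters for interpreting the swing.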
Bless, H., Betsch, T. & Franzen, A. (1998). Framing the framing effect: The impact of context cues on solutions to the ‘Asian disease’ problem. European Journal of Social Psychology, 28, 287–291.
Druckman, J. N. (2001). Evaluating framing effects. Journal of Economic Psychology, 22, 91–101.
Kahneman, D. & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263–291.
Kühberger, A. (1995). The framing of decisions: A new look at old problems. Organizational Behavior and Human Decision Processes, 62, 230–240.
Levin, I. P., Schneider, S. L. & Gaeth, G. J. (1998). All frames are not created equal: A typology and critical analysis of framing effects. Organizational Behavior and Human Decision Processes, 76, 149–188.