French for Bread

14 Feb

Pain is an “unpleasant sensation” in response to actual or perceived injury. It is generally assumed that pain serves two purposes: to stop the person from engaging in a behavior that is causing the pain (say, keeping a hand dipped in boiling water, though not water that is being slowly brought to a boil), and to “train” (in the Pavlovian sense) the body not to engage in such behavior in the future. Given these purposes, the pain response is poorly implemented in many ways. It also sheds light on how the body is architected.

Think of a system that is coded to send a message to the controller to “alert” it to damage and to ask it to reconsider engaging in the activity that is causing the damage (or to independently take pre-programmed, hard-coded action). One envisions that the message is sent in a manner that “makes” the controller pay attention, if such attention is warranted, and efficiently conveys a summary of what is going wrong, to what degree, and what particular action the user is taking that is causing it. One also imagines an “acknowledge” button that the controller presses to assume responsibility for further action. Then, using this information, the controller takes action depending on the circumstance, and updates the memory and circuitry, if warranted, to create an appropriate aversion to certain activities.

Such signaling is implemented very differently in our body. First, it is implemented as “pain.” Second, pain is not proportional to the extent of the injury. This sometimes creates “irrational” aversions. More bizarrely, some harmful things are pleasant, while some good things are painful. Third, there is no direct way for the brain to acknowledge the signal, assume responsibility for action, and shut off the pain. Fourth, and worryingly, pain’s intensity varies depending on the extent to which our brain is distracted, say by watching television. (This last point has been exploited to build “treatments” for pain.) Lastly, our brain can’t temporarily order the signals shut off.

It is ‘possible’ that people may warm up to the science of warming

25 Jan

In science we believe

Belief in science is likely partly based on scientists’ ability to predict.
As M.S. notes, climate scientists accurately predicted in the late 1980s that temperatures were going to rise. Hence, for people who are aware of that (like him), belief in climate science is greater.

Similarly, unpredictability in weather (as opposed to climate), e.g., snowstorms, which are typically widely covered in the media, may lower people’s belief in climate science.

Possibility of showers in the afternoon
In conversations with lay and not-so-lay people, I have observed that people sometimes conflate probability and possibility. In particular, they typically over-weight the probability of a possible event and then use that inflated weight to form a judgment. When I ask them to assign a probability to the event they identify as a possibility, they almost always assign very low probabilities, and their opinion comes to better reflect this realization.

Think of it like this: a possibility for people, once raised (by them or others) is very real. Only consciously thinking about the probability of that possibility allows them to get out of funky thinking.

Something else to note: politicians use ‘possibility’ as a persuasion tool, e.g., ‘there is a possibility of a terror attack,’ etc. This is something I have dealt with before, but I leave the task of finding where to readers motivated to pursue the topic.

Delay Shooting the Help

25 Dec


Procrastination, delaying without reason something that one has to do, makes little sense. If the voluntary delay also causes anxiety, which in most cases it does, it may be particularly pointless. Yet a lot of people procrastinate at least some of the time. Why?

One can make a case for postponing unpleasant things that are avoidable—in fact why bother doing those things at all—but not things that are unavoidable, or things that a person intends to do.

The desirability of a task influences the decision to procrastinate or not. Rationally, it should not matter but the fact that it does provides a possible toehold into why we procrastinate:

  1. Sometimes problems don’t appear to have a good solution right away and one hopes that a solution would appear over time even though one may not have good reasons to think that such good fortune would strike.
  2. For example, one may hope that the ‘unavoidable’ task will become avoidable, or wait until the point where ‘it is clear’ that the problem cannot be avoided.

Procrastination is understood exclusively as a problem of ordering, on the assumption that the net amount of time expended on a task remains constant irrespective of order. Perhaps that is a problematic assumption. Starting later may just mean that we spend less time on the task than we otherwise would. If so, the issue can be reframed as one about when to spend the reduced time, rather than one where delay necessarily defeats our aims.

Shooting the help

At times when people are worried and well-intentioned people try to help them, they simply become annoyed, or even mildly angry, at them. Why is it that we refuse help, or, more puzzlingly, become angry or annoyed with people who are trying in good faith to be of help?

When people offer advice, they often use munitions from similar events and incidents they have encountered. This can be a bit galling, as it undermines the ‘uniqueness’ of our problems. The feeling is compounded by the fact that many people are over-eager, often too quick to offer solutions without listening more patiently to the individuating data. Arguably, many people, while eager to help, don’t do much thinking about the problem itself (through incapacity, lack of motivation, or on the assumption that no thought is needed), and offer comments that are not particularly insightful. And many a time, all people want is a sympathetic ear or a pat on the back. In other words, sometimes public self-pity is all we want. This is typically so when the solutions are either obvious or non-existent.

Outside of this, it is also the case that high achievers are less likely to seek help, and bristle when offered help, for seeking help forces them to face their own vulnerability.

On Love

25 Nov

Mothers spend a great deal of their time and energy on their kids, especially newborns. They spend far more time thinking about and working to ensure the welfare of another human being (their kid) than most humans will spend on any other human being, say a romantic partner or sibling. Add to this that until the child reaches adulthood, often until much later, the relationship is overwhelmingly one-way, with children thinking little about their parents’ welfare. Still, most mothers find the experience, and the work that goes with it, greatly rewarding.

Joy despite this sizable disparity has been explained by cynics, but only poorly. While parents proclaim that children bring joy to their lives, joy alone cannot explain the investment of time and money, for often similar joy can be had at much lower rates of work. And the financial investments, the investments of time, the toll on the mother’s body, and inconveniences like lack of sleep, problems traveling, etc., likely outweigh potential financial benefits, which are in any case likely far from most parents’ minds.

In this puzzle there are perhaps a couple of lessons about love. One is that relationships that are overwhelmingly one-way, say in resources like time and money, can still be the basis for mutual joy. The second, perhaps, is that spending more time on people we love can make those relationships more fulfilling, and us happier.

Measuring Love

‘Love’ and ‘to love’ are vague, both conceptually and in the minds of people who use these terms. The concepts have evolved such that attempts to define or deliberate on them rile sensibilities; they offend the notion that love is really the domain of emotion, not thought. Such thoughtlessness has meant that nearly everyone can get away with claiming love, even when their overall impact on the quality of their loved one’s life is deeply negative.

One way to think about how much we love someone is to take into account how much time we spend actively thinking about their welfare. Such an exercise, when done without deliberation, elicits an emotionally biased lineup of instances where we were thoughtful. To move beyond such selective counting, for what matters are averages, with a penalty for egregious negatives, one ought to think carefully and honestly, and perhaps base assessments on the testimony of the loved one. Intentionality ought to play a part in the calculations, but not as a blanket defense (“I truly want your welfare,” though there is nothing in the record that backs up the claim); rather, it should down-weight some instances of sub-optimal decision making under pressure and limited resources.

Personal Milestone Campaigners

24 May

We have all read about beautiful women undressing to raise awareness about cruelty to animals (less handsome women just don’t care about the animals), men and women strutting on runways wearing fashionable clothes to raise awareness about global warming, intrepid souls sailing across vast oceans to raise awareness about plastic waste in the sea, people climbing mountains to raise awareness about cancer and global warming, and swarms of marathoners running many times a year to raise awareness about breast cancer and of late, St. Jude’s Children Research Hospital. To this pantheon of heroes, add the ‘human polar bear’, who recently swam in a glacial lake in the Himalayas to raise awareness about global warming [BBC].

I often wonder how many things we would have remained in the dark about had it not been for these heroic men and women, and the camera crews in tow. And then I mull over the contributions they have made to the world. It is work by such people and cameras following them that has led 76% of the people to believe that breast cancer is the single most deadly disease afflicting women. (It is the fourth largest.)

Not only do these men and women raise awareness, but they also raise money. The ‘acts’ they perform are now sponsored by corporations; the boosters have their boosters. And all of these incredible personal achievements, which only incidentally flood trophy cases and Facebook walls, have been accumulated selflessly in service of some worthy cause (applause). It also stands to reason that the money and hours these selfless volunteers spend on training, recovering from training, registering for races, and organizing events pale in comparison to the money they raise.

The Myth of Knowledge

14 Apr

People frequently overestimate how much they know. They also confidently share things they don’t know.

But why?

One reason is social desirability—the motivation to appear knowledgeable in front of others, and perhaps oneself (our ability to fool ourselves deserves closer inspection). Another is that conversations are typically not carried out for any epistemological purpose, but for emotional and social purposes. So we make things up because the truth is unimportant, at times even a hindrance. For example, envision a conversation where both parties profess their ignorance. The conversation will be pretty short. Fencing yourself within what you know means you can’t ‘discuss’ various salient topics.

A second reason is that lay conjecture often suffices when the audience is ignorant. More generally, the propensity to make something up depends on the speaker’s assessment of the chances of getting caught, which is a function of the audience’s knowledge, its sympathies (toward the speaker or the topic being spoken about), and its volubility, which in itself may depend on the status difference, among other things. But as any professor will tell you, students happily make things up even in the presence of ‘experts.’ So the rate of decline in the propensity to make stuff up as the chance of getting caught increases is low, and the absolute level of that propensity is high. It is also likely that the confidence with which people typically deliver such ‘lies’ is an attempt to cover up their ignorance, as confidence is taken by others at face value: as a sign of surety about facts. Thus, never seriously threatened by their ignorance, people build a somewhat more positive assessment of how much they know.

But can it also be that people think lying about their ignorance is only a minor transgression? Do people have an innate belief (which rarely gets challenged) that somehow, when they speak, they will turn out to be right; some sort of a ‘God bias’ under which they are the exception, able to make sense without knowing?


The modern era of knowledge production has brought its own challenges. First, the rate of production of knowledge has exploded. As the rate of production of knowledge accelerates and the rate of learning stagnates, relative ignorance increases, which I take to be the case. But not only has the rate of knowledge production exploded, so has its complexity. Increasingly, substantive discussions about important areas of human activity (public policy broadly, but say health, fiscal policy, etc.) need more sophisticated thought and deeper immersion in the wealth of knowledge that has been produced.


It is often said, often without a whoop of surprise, that ‘the more you know, the more you realize how much you don’t know.’ But why would that be? Are the limits of knowledge so obscure that they cannot be known by the proverbial (and now the literal) average Joe? In fact, it is easy to deduce one’s ignorance. For example, we are surrounded by phenomena that we can’t describe well, much less explain. To infer our mean level of knowledge, we may want to recognize that even in areas where we claim expertise we often fall short; hence we must really know very little about the things we don’t spend time learning.

The point about the limits of our knowledge is broader, and not there to malign the average Joe. There are real limits to what humans can achieve. Think about the following: if one were to read a book a week for the next 50 years, one would end up reading about 2,600 books. If the smallness of the number surprises you, then let that be a lesson. The point allows me to segue into the next one: ‘the myth of being well-read.’
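That arithmetic is worth making explicit (a trivial sketch; 52 reading weeks per year is the only assumption):

```python
# A book a week, every week, for 50 years:
weeks_per_year = 52
years = 50
print(weeks_per_year * years)  # 2600
```

A lifetime of disciplined weekly reading still tops out in the low thousands of books, which is the point.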

The Myth of Being Well-Read

Often, people confuse being well-read with having read a few bad books poorly. To be well-read, one must satisfy three criteria: 1) have read at least 100–150 books; 2) a substantial majority, if not all, of these ought to be ‘good’ literature or non-fiction; 3) and one ought to ‘read well.’

The 7 step program for correctly classifying yourself as ‘well-read’ or not ‘well-read’

  • Only a few people (<< 1%) are well-read. Do you think you are in such elite company? And then again, do you want to be in the company of 'nerds' and 'geeks'?
  • Force yourself to list as many books as you can that you have read in the past year (or in your life). If that number is less than 10, you may not be well-read. Apply this to specific areas as needed. For example, you may ask yourself whether you are well-read in political science.
  • Alternately, think hard about how many books you bought or checked out from the library last year. If you haven’t spent more than $100 on books in the past year, and/or haven’t checked out more than 10 books from the library, you are unlikely to be well-read.
  • If you catch yourself citing one book repeatedly, whenever the topic of books comes up during conversations, you are unlikely to be well-read. In the US, it is typically The Catcher in the Rye, though that may be changing — Harry Potter, Dan Brown, Twilight series, etc. may be the new ‘go to’ books. For a colleague of mine, it is White Noise. In some circles, it may be some Malcolm Gladwell or Thomas Friedman book.
  • Relatedly, if Harry Potter, Blink, Atlas Shrugged, Catcher in the Rye, etc., are among your favorite books, it is unlikely that you are well-read.
  • If you are younger than 21 (or typically 25), you couldn’t have read much. There are few exceptions. They help make the rule.
  • Count the books you own. If you don’t own more than 50 books, you are unlikely to have read much.

An Appetizing Question: Why Don’t ‘We’ Read Nutrition Labels?

24 Feb

A single wheat tortilla has 18% of the daily value of sodium. I discovered this just recently, after reading an article discussing the serious negative consequences of excess sodium for heart health. The discovery was followed by two others: I realized that I had insufficiently assimilated information from food labels in general, and that I was curious to know why that was the case.

One way to understand this shift is the following: attention is a limited resource and only allocated to things deemed important. Once the importance was established, attention followed.


But this explanation is unsatisfying:

  1. Attention is not that limited a resource.
  2. The marginal cost of attending to sodium is small, given I generally do look at the calorie content.
  3. Given that I encounter the same or similar choice tasks repeatedly, the cost per choice, invested once, is close to zero.

What are the reasons behind this seeming conundrum?

In the domain of food, we care about three things: price, taste, and health. Let’s leave out price for now. This leaves us with taste and health. While making a choice, we recruit ‘relevant’ (more on this later) information to assess choices in a manner that maximizes our utility.

Assume we have a strict preference for taste over health; more narrowly, taste always wins whatever the health information, and health information comes into play only when taste is equivalent. Health information in this scenario is immaterial, and one only needs to focus on information about taste to maximize one’s utility. So one way to explain my inattention to relevant health information is just that.

Expanding upon the toy example, the orderings of things we care about (utilities) dictate the way we seek information, and what information is sought. However, observation tells us that the ordinal structure of utilities is manipulable in the domain of food. For example, we prefer taste strongly, but if health information were made salient, we would be liable to choose something healthy. One inference we can draw from such manipulability is that the initial preference ordering must not have been strong. But that doesn’t seem right, given that our strong preference for taste is what ‘explains’ the suppression of information seeking on health.

Rational choice assumes that if information acquisition costs are zero, more information should always be sought and used in decision making. Rational choice seems inadequate to the task of explaining ‘hide and not seek.’

Let’s assume we have subconscious and conscious preferences (aside from assuming a subconscious). Subconsciously, we greatly prefer taste to health. Consciously, we prefer the reverse. Taste wins if health information is not made salient at the time of purchase. Assuming the subconscious controls behavior, health information is deliberately not sought.

Another way to think about the underlying preference structure is the following: aside from preferring tastier food, we also prefer feeling good about our decision. Feelings about a decision are evaluative and emerge from whether we chose wisely given the information. So one way to secure good feelings is to choose healthy food, but that sacrifices our preference for taste. Another way this is resolved is by ignoring information about health, which is much more easily ignored than information about taste.

Given this, we suppress health information. Interestingly, this suppression doesn’t extend to information seeking about health on all fronts but applies only at decision time about a particular food.

Another interesting psychological thing to note here is that we have negative affect associated with decisions that lead to negative long-term consequences, but we also have ways to prevent this negative-affect pathway from being triggered at all. Additionally, the information suppression isn’t one-time-only but long term, because we want to repeatedly “sin.” This, in turn, means that we somehow first ‘know’ that the food is unhealthy, and hence do not look at the health information (otherwise, looking might just bolster one of the reasons for consuming something tasty), but never consciously acknowledge this knowledge.

Yet another way to think about the problem is to assume that we do have preferences for health, but they are somewhat lower in the order of priority. For lower-order preferences (here, health), information seeking becomes more passive and increasingly depends on how easy it is to acquire the information, for example, how prominently it is displayed. Social desirability pressures may also play a larger role in moderating information acquisition when importance is low. For example, in the US, people frown upon those who study labels in a supermarket. Thus cowed, people may be less likely to look up the information. Though it was always possible to look it up once home, and given the ease and convenience of anonymous information gathering on the Internet, it is likely that social desirability is now less of a factor (though such pressures likely continue to apply even when one is alone). However, in cases where other lower-order preferences predict the same choice, information about them is likely highlighted. For example, if a tasty thing were healthy as well, one would likely remind oneself of the health benefits while making the choice.

But why is taste implicitly prioritized over health? One explanation is that the preference for taste is evolutionary: the positive physiological response to eating calorie-rich food is biologically potent. Another is that the consequences for health of choosing unhealthy food are long term, while the gratifications of taste are instantaneous. This allows us to more readily imagine the consequences of one than the other, which is perhaps one of the key ways we form our preferences. Lastly, the preference for taste in matters of food has become likelier due to advertising and its constant valorization of taste over everything else.

It is still awe-inducing to see the degree to which our brain is lazy and inhibits the acquisition of reasonably readily available information.

The above analysis assumed considerations dictating choice at the point of purchase. Once we have bought something, however, another consideration applies: we have invested in x, so now enjoy it. What is the point of reading the information now that I have already spent the money?

A few straightforward policy proposals emerge, given what we know about how people behave:

  1. Front of package labeling
  2. Prominent labeling that is easy to read and comprehend
  3. Priming aim — be healthy
  4. Priming habit — look for information when buying food
  5. Priming consequences


The word ‘we’ is in quotes in the title because I do not have data to show how widespread the tendency is.

You Got (Fraudulent) Mail!

22 Jan

Republican [Con-]census

The party that dislikes the census, and that put a hold on the nominee to head the Census Bureau, is now sending out fraudulent mail surveys designed to look as if they were from the Census Bureau. Read here, here, and here. The accusation of being ‘fraudulent’ stems not only from the use of ‘census’ but also from its being an attempt at “frugging,” the practice of cloaking a fundraising appeal in what appears to be research. (“Suggers” sell using surveys.)

Who Knew
One of the people who has benefited the most from Macaulay’s reforms recently sent an email, part of a larger chain email that now generally implies some travesty, which quoted Macaulay as having said the following (on February 2nd, 1835 in India, the same day he gave his Minute on Education in Britain):

I have traveled across the length and breadth of India and I have not seen one person who is a beggar, who is a thief. Such wealth I have seen in this country, such high moral values, people of such caliber, that I do not think we would ever conquer this country, unless we break the very backbone of this nation, which is her spiritual and cultural heritage, and, therefore, I propose that we replace her old and ancient education system, her culture, for if the Indians think that all that is foreign and English is good and greater than their own, they will lose their self-esteem, their native self-culture and they will become what we want them, a truly dominated nation?

Of course, Lord Macaulay never said such a thing. He said many other tawdry things, but never did he eulogize such absolutely hokum positive images of India. While we all want glorious histories, the past isn’t as glorious. Nor are confessions of colonial rapacity generally so naked.

Suggestion: for greater imagined glories, one must go further back than 1835, past the point when the ‘Muggles’ had had their way (as Hindus may point out) and writing had been invented, into the mists of more obscure pre-history, where one can have one’s way with the truth. How about the crowning glories of Lord Rama and his return on the airplane-like ‘pushpak’ vahan?

Prospect Theory in Retrospect

6 Jun

Tversky and Kahneman, in “The Framing of Decisions and the Psychology of Choice”, “describe decision problems in which people systematically violate the requirements of consistency and coherence, and […] trace these violations to the psychological principles that govern the perception of decision problems and the evaluation of options.”

They start with the following example,

Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimate of the consequences of the programs are as follows:

[one set of respondents, condition 1]
If Program A is adopted, 200 people will be saved. [72 percent]
If Program B is adopted, there is 1/3 probability that 600 people will be saved, and 2/3 probability that no people will be saved. [28 percent]

[second set of respondents, condition 2]
If Program C is adopted 400 people will die. [22 percent]
If Program D is adopted there is 1/3 probability that nobody will die, and 2/3 probability that 600 people will die. [78 percent]

They add:
“[I]t is easy to see that the two problems are effectively identical. The only difference between them is that the outcomes are described in problem 1 by the number of lives saved and in problem 2 by the number of lives lost. The change is accompanied by a pronounced shift from risk aversion to risk taking.”

Given the empirical result, they propose “prospect theory,” which we can summarize as: people exhibit risk aversion when faced with gains, and risk seeking when faced with losses.

Why is Program A less “risky” than Program B?

The expected utility of A can be seen as equal to that of B over repeated draws. However, over the next draw, which we can assume to be the question’s intent, Program A provides a certain outcome of 200 saved, while Program B is a toss-up between 0 and 600. Hence, Program B can be seen as risky.
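The point can be made concrete with a small sketch (exact arithmetic via fractions; payoffs as in the quoted problem): the two programs have the same expected number of lives saved, but only Program B has any variance, which is one way to cash out “risky.”

```python
from fractions import Fraction

# Program A: 200 saved with certainty.
# Program B: 600 saved with probability 1/3, 0 saved with probability 2/3.
program_a = [(200, Fraction(1))]
program_b = [(600, Fraction(1, 3)), (0, Fraction(2, 3))]

def expected(outcomes):
    """Probability-weighted mean number of lives saved."""
    return sum(v * p for v, p in outcomes)

def variance(outcomes):
    """Spread around the mean; zero only for a sure thing."""
    mu = expected(outcomes)
    return sum(p * (v - mu) ** 2 for v, p in outcomes)

print(expected(program_a), expected(program_b))  # 200 200
print(variance(program_a), variance(program_b))  # 0 80000
```

Identical expectation, very different spread: a risk-neutral chooser is indifferent, while a risk-averse one prefers A.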

Looked at more closely, however, the interpretation of Program B is harder still.
Probability is commonly understood as over repeated draws. Here, given infinite draws, 1/3 of the time it will yield 600 saved, and the remaining 2/3 of the time, 0. (~ Frequentist) Tversky and Kahneman share the frequentist take on probability (though they frame it differently): “The utility of a risky prospect is equal to the expected utility of its outcomes, obtained by weighting the utility of each possible outcome by its probability.” (This takes directly from statistical decision theory, which defines risk as the integral of the loss function. The calculation is inapplicable to any one draw.)

What is the meaning of probability for the next draw? If it is a random event, then we have no knowledge of the next toss. The way it is used here, however, is different: we know that it isn’t a ‘random event,’ we have some knowledge of the outcome of the next toss, and we are expressing ‘confidence’ in that outcome. (~ Bayesian) Transcribing Program B’s description into a Bayesian framework, we are 33% confident that all 600 will be saved, and 67% confident that we will fail utterly. (N.B. The probability distribution for the predicted event likely emanates from a threshold process, an all-or-nothing kind of gambling event. Alternate processes may entail that the counterfactual to utter failure is a slightly less than utter failure, and so on and so forth on a continuum.) Two-thirds confidence in utter failure (all die) makes the decision task ‘risky.’

Argument about Rationality and Equality of utility (between A, B, C, and D)

According to Tversky and Kahneman, the utility of Program A is the same as that of Program B. As we can infer from the above, if we constrain the estimation of utility to the next draw, which is in line with the way the question is put forth, Program A is superior to Program B. An alternate way to put the question could have been: “over the next 100,000 draws, the programs provide these outcomes; which one would you prefer?” Looked at in that light, the significant majority who choose Program A over B can be seen as rational.

However, the central finding of Tversky and Kahneman is the “preference reversal” between battery 1 (the gains story) and battery 2 (the losses story). We see a reversal from a majority preferring ‘risk aversion’ to a majority preferring ‘risk taking’ between the two ‘conditions.’ Looked at independently, the majority’s choice in each condition seems logical, but why is that the case? We have already made a case for battery 1, and for battery 2 the case would run something like this: given the overwhelming number of fatalities, one would want to try a risky option. Except, of course, the mortality figures in A and C, and in B and D, are the same, and so is the risk calculus.
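The equivalence across framings can be checked mechanically (a sketch; out of the 600 at risk, expected deaths are simply 600 minus expected lives saved):

```python
from fractions import Fraction

TOTAL = 600  # people expected to die if nothing is done

# Each program as (lives saved, probability) pairs, per the quoted text.
programs = {
    "A": [(200, Fraction(1))],                          # gains frame, sure thing
    "B": [(600, Fraction(1, 3)), (0, Fraction(2, 3))],  # gains frame, gamble
    "C": [(200, Fraction(1))],                          # "400 will die" = 200 saved
    "D": [(600, Fraction(1, 3)), (0, Fraction(2, 3))],  # losses frame, gamble
}

for name, outcomes in programs.items():
    saved = sum(v * p for v, p in outcomes)
    print(name, "expected saved:", saved, "expected dead:", TOTAL - saved)
```

Every program works out to 200 expected saved and 400 expected dead; only the wording, and in B and D the variance, differs.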

For Tversky and Kahneman’s findings to be seen as testimony of human irrationality, Program A should basically be seen as equivalent to Program C, and Program B to Program D, and the lack of ‘consistency’ between choices as an indicator of irrationality. In condition 1, our attention is selectively moored to the positive; in condition 2, to the negative; and respondents evaluate risk based on different DVs (even though they are the same). The findings are unequivocally normatively problematic, and provide a manual for strategic actors on how to “frame” policy choices in ways that will garner support.

Brief points about measurement and experiment design

1) There is no “control” group. One imagines that the rational split would be the one gotten in condition 1, or condition 2, or, as the authors indicate, some version of a 50-50 split. There is reason to believe that a 50-50 split is not the rational split in either of the conditions, with perhaps a 100-0 split in either condition being ‘rational.’ (This doesn’t overturn the findings but merely provides an interpretation of the control. Definitions of control are important, as they allow us to see the direction of bias. Here, it allows us to see that condition 1 allows more people to reach the ‘correct decision’ than condition 2.)
2) To what extent is the finding an artifact of the way the question is posed? It is hard to tell.

  • A 50-50 split would be achieved if respondents think that both choices are equivalent and hence pick one at random. But given that respondents are liable to imagine that a ‘unique’ solution exists, given that they have been brought into a university laboratory and asked a question, people are likely to try to read the tea leaves. Of course, people systematically reading tea leaves in one way means something else is perhaps going on, but it is still very likely that deviations from a 50-50 split would be much smaller if one were to provide a third response option stating that both choices are equivalent. Some number will choose that option, and then one can either eliminate that sub-sample and calculate deviations from 50-50 within the two remaining choices (which will likely yield a larger percentage swing) or include everyone and find a smaller percentage swing.
  • The stem for condition B (which offers Program C or Program D) is the same as the stem for condition A: the disease is ‘expected to kill 600 people.’ In light of that, the description of Program C (“If Program C is adopted 400 people will die”) offers no information about the other 200. A respondent can imagine that those 200 will be saved but isn’t particularly sure of their fate. With the same stem, by contrast, the information that ‘200 will be saved’ allows us to weakly infer that 400 people will die. This biases the swing in favor of the results that we see.
  • If the results are to be attributed entirely to differential risk attitudes over losses and gains, and not to some other cognitive malaise, it would have been nice to see a clearer enunciation of the outcomes. For example, the description of Program A could have been reworded as ‘If Program A is adopted, 200 people will be saved and 400 will not,’ or ‘Only 200 of the 600 people will be saved.’ This would likely have attenuated the swing we see, though that is open to empirical investigation. The larger point is that there are multiple ‘manipulations,’ and the results may be artifactual, coming instead from another source. For example, Kühberger (1995) noted that the outcomes in the Asian disease problem are inadequately specified; when he made the outcomes explicit (e.g., stating that 200 will be saved and 400 will die), the ‘framing’ effects vanished.
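The first bullet’s point about the third “equivalent” option can be made concrete. The percentages below are made up purely for illustration; only the arithmetic is the point:

```python
# Hypothetical illustration: suppose that with a third "the options are
# equivalent" response added, 42% pick A, 28% pick B, and 30% pick equivalence.
def swings(pct_a, pct_b):
    """Deviation from a 50-50 split, including vs. excluding 'equivalent' choosers."""
    overall = abs(pct_a - pct_b)             # everyone stays in the denominator
    a_share = 100 * pct_a / (pct_a + pct_b)  # restrict to A/B choosers only
    restricted = abs(a_share - (100 - a_share))
    return overall, restricted

overall, restricted = swings(42, 28)
print(overall, restricted)  # 14 vs 20: restricting to two choices inflates the swing
```

The same raw responses yield a 14-point swing when everyone is counted but a 20-point swing once the ‘equivalent’ sub-sample is dropped, which is the choice of denominator described above.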

Further Reading

Bless, H., Betsch, T. & Franzen, A. (1998). Framing the framing effect: The impact of context cues on solutions to the ‘Asian disease’ problem. European Journal of Social Psychology, 28, 287–291.

Druckman, J. N. (2001). Evaluating framing effects. Journal of Economic Psychology, 22, 91–101.

Kahneman, D. & Tversky, A. (1979). Prospect Theory: An analysis of decision under risk. Econometrica, 47, 263–291.

Kühberger, A. (1995). The framing of decisions: A new look at old problems. Organizational Behavior and Human Decision Processes, 62, 230–240.

Levin, I. P., Schneider, S. L. & Gaeth, G. J. (1998). All frames are not created equal: A typology and critical analysis of framing effects. Organizational Behavior and Human Decision Processes, 76, 149–188.

Vague Apprehensions

18 Jul

“There is a possibility of a terrorist attack.” Or, “There is a heightened possibility of a terrorist attack.”

These statements are often interpreted as, “There is a high (or very high) probability of a terrorist attack.” That is not a sensible interpretation. Of course, news media do plenty to encourage it: footage of prior attacks, police sirens, SWAT teams, and helicopters all encourage a sense of dread, which in turn likely encourages such interpretations. On the flip side, few news reports spend any time elucidating the probability of winning the ‘Megamillion Jackpot.’

Like the news media, people in everyday life tend to speak in terms of possibilities rather than probabilities. Often enough, possibilities are used to denote high probabilities, and often enough incorrectly so: sometimes because individuals are mistakenly convinced that the probabilities really are higher (most people do not remember numbers, replacing them with impressionistic accounts consistent with imagery or their own biases), and sometimes because people are interested in heightening the drama.

Speaking in terms of possibilities also offers the advantage of never being technically wrong while all the while encouraging incorrect interpretations. It allows people to casually exaggerate the threat of crime, or indeed any threat they feel like exaggerating, and to underestimate the frequency of things they would rather deny: for instance, the dangers of driving faster than the speed limit, or while drunk, or both. The same benefits accrue to strategic elite actors: speaking in possibilities allows policymakers to sound logically coherent without being so, and to sell less rational courses of action.

By possibility, we mean something that has a chance of occurring; it gives us no information about how probable the scenario is. A little information, or a little thinking, about probabilities can go a long way. So aim for precision. Vagueness can be a cover for insidious reasoning (including your own), and avoidable vagueness ought to be avoided.
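The gap between “possible” and “probable” yields to a back-of-envelope calculation. The jackpot odds below are approximate and assumed for illustration:

```python
# Back-of-envelope contrast between "possible" and "probable".
# The single-ticket jackpot odds are approximate and assumed for illustration.
p_win = 1 / 302_575_350        # rough odds of one Mega Millions ticket winning

# Probability of at least one jackpot if you buy one ticket a week for 50 years.
n_tickets = 52 * 50
p_at_least_one = 1 - (1 - p_win) ** n_tickets
print(p_at_least_one)          # ~0.0000086: possible, but wildly improbable
```

A lifetime of weekly tickets still leaves the chance of a jackpot under one in a hundred thousand, which is what “there is a possibility” conceals when it is heard as “it is probable.”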