Learning About [the] Loss (Function)

7 Nov

One of the things we often want to learn is the actual loss function people use for discounting ideological distance between themselves and a legislator. Researchers typically try to learn the loss function using actual distances. But if the aim is to learn the loss function, perceived distance is better than actual distance, because perceived distance is what the voter believes to be true. The recovered function can then be used to simulate scenarios in which perceptions match the facts.
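
A minimal sketch of what that estimation might look like, assuming survey measures of approval and of perceived self-legislator distance (all variable names here are hypothetical, and the power-loss form is just one convenient choice):

```python
# A rough sketch, not the method from the post: fit the curvature of a power
# loss, approval ~ intercept - weight * |perceived distance|^alpha.
import numpy as np
from scipy.optimize import curve_fit

def power_loss(distance, intercept, weight, alpha):
    # Utility declines in perceived ideological distance raised to alpha.
    return intercept - weight * np.abs(distance) ** alpha

# perceived_distance and approval would come from a survey (hypothetical names).
# params, _ = curve_fit(power_loss, perceived_distance, approval, p0=[0.5, 1.0, 1.0])
# alpha_hat = params[2]  # alpha < 1: concave discounting; alpha > 1: convex.
```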

Confirming media bias

31 Oct

It used to be that searches for ‘Fox News bias’ were far more common than searches for ‘CNN bias’. Not anymore. The other notable pattern: searches for both terms peak together around presidential elections.

Note also that searches around midterm elections barely rise above the noise.
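
For readers who want to pull the underlying series themselves, here is a minimal sketch using the unofficial pytrends package; the exact query strings and settings are my assumptions, not necessarily the ones behind the figures here.

```python
# A sketch using the unofficial pytrends wrapper around Google Trends.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US")
pytrends.build_payload(["Fox News bias", "CNN bias"], timeframe="all", geo="US")
trends = pytrends.interest_over_time()  # relative search interest over time
print(trends.tail())
```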

God, Weather, and News vs. Porn

22 Oct

The Internet is for porn (Avenue Q). So it makes sense to measure things on the Internet in porn units.

I jest, just a bit.

In Everybody Lies, Seth Stephens-Davidowitz points out that people search for porn more than weather on GOOG. Data from Google Trends for the disbelievers.

But how do searches for news fare? Surprisingly well. And it seems the new president is causing interest in news to outstrip interest in porn. Worrying, if you take Posner’s point that people’s lack of interest in politics is a sign that they think the system is working reasonably well. The last time searches for news > porn was when another Republican was in the White House!

How is the search for porn affected by Ramadan? For the answer, we turn to Google Trends data from Pakistan.

In Ireland, though, it appears that searches for porn have lately been increasing around Christmas.

Measuring Segregation

31 Aug

The dissimilarity index is a measure of segregation. It is computed as follows:

\frac{1}{2} \sum\limits_{i=1}^{n} \left| \frac{g_{i1}}{G_1} - \frac{g_{i2}}{G_2} \right|

where:

g_{i1} is the population of group 1 in the ith area,
g_{i2} is the population of group 2 in the ith area,
G_1 and G_2 are the populations of groups 1 and 2 in the larger area against which dissimilarity is being measured.
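
As a concrete reference, here is a minimal sketch of the computation, assuming two arrays of group counts by area (the variable names are mine):

```python
import numpy as np

def dissimilarity_index(g1: np.ndarray, g2: np.ndarray) -> float:
    """0.5 * sum_i |g_i1 / G_1 - g_i2 / G_2|, where the totals are over all areas."""
    return 0.5 * np.abs(g1 / g1.sum() - g2 / g2.sum()).sum()

# Example: each group fully concentrated in a different area -> D = 1 (complete segregation).
# dissimilarity_index(np.array([100, 0]), np.array([0, 50]))
```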

The measure suffers from a couple of issues:

1. Concerns about lumpiness. Even within a small area, are Black people at one end and White people at the other?
2. Choice of baseline. If the larger area (say, a state) is 95% White (Iowa is 91.3% White), dissimilarity is naturally likely to be small.

One way to address the concern about lumpiness is to provide an estimate of the spatial variance of the quantity of interest. But to measure variance, you need local measures of the quantity of interest. One way to arrive at local measures is as follows:

1. Create a distance matrix across all addresses. Get latitude and longitude. And start with Euclidean distances, though smart measures that take account of physical features are a natural next step. (For those worried about computing super huge matrices, the good news is that computation can be parallelized.)
2. For each address, find the n closest addresses and estimate the quantity of interest. Where multiple houses are a similar distance away, sample randomly or include them all. One advantage of using the n closest addresses rather than addresses in a particular area is that it naturally accounts for variations in density.

But once you have arrived at the local measure, why just report variance? Why not report means of compelling common-sense metrics, like the proportion of addresses (people) for whom the closest house has people of another race?
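
Here is a minimal sketch of that local measure, assuming an address-level data frame with latitude, longitude, and race columns (all names hypothetical), and plain Euclidean distance on coordinates as the first cut described above:

```python
import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors

def local_other_race_share(addresses: pd.DataFrame, n_neighbors: int = 10) -> pd.Series:
    """Share of the n nearest addresses occupied by people of another race."""
    coords = addresses[["lat", "lon"]].to_numpy()
    nn = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(coords)
    _, idx = nn.kneighbors(coords)          # nearest neighbor of each point is itself
    race = addresses["race"].to_numpy()
    neighbor_race = race[idx[:, 1:]]        # drop self
    return pd.Series((neighbor_race != race[:, None]).mean(axis=1), index=addresses.index)

# Summaries discussed above:
# shares = local_other_race_share(addresses)
# shares.var()                                                     # spatial variance
# (local_other_race_share(addresses, n_neighbors=1) == 1).mean()   # closest house is another race
```

Tree-based neighbor searches like this also sidestep building the full distance matrix, which speaks to the concern about computing super huge matrices noted above.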

As for baseline numbers (generally just a couple of numbers): they are there to help you interpret. They can be brought in later.

The Innumerate American

19 Feb

In answering one question, scientists sometimes collect data that answers a different, sometimes even more important, question. And when that happens, scientists sometimes overlook the easter egg. This recently happened to me, or so I think.

Kabir and I recently investigated the extent to which estimates of motivated factual learning are biased (see here). As part of our investigation, we measured numeracy. We asked American adults to answer five very simple questions (the items were taken from Weller et al. 2002):

  1. If we roll a fair, six-sided die 1,000 times, on average, how many times would the die come up as an even number? — 500
  2. There is a 1% chance of winning a $10 prize in the Megabucks Lottery. On average, how many people would win the $10 prize if 1,000 people each bought a single ticket? — 10
  3. If the chance of getting a disease is 20 out of 100, this would be the same as having a ___% chance of getting the disease. — 20
  4. If there is a 10% chance of winning a concert ticket, how many people out of 1,000 would be expected to win the ticket? — 100
  5. In the PCH Sweepstakes, the chances of winning a car are 1 in 1,000. What percent of PCH Sweepstakes tickets win a car? — .1%

The average score was about 57%, and the standard deviation was about 30%. Nearly 80% (!) of the people couldn’t answer that a 1 in 1,000 chance is .1% (see below). Nearly 38% couldn’t answer that a fair die would turn up an even number, on average, 500 times every 1,000 rolls. 36% couldn’t calculate how many people out of 1,000 would win if each had a 1% chance. And 34% couldn’t answer that 20 out of 100 means 20%.
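
For concreteness, here is a minimal sketch of the scoring, assuming a response table with one column per item (the column names and answer-key layout are my own, not from the study):

```python
import pandas as pd

# Answers to the five items listed above.
answer_key = {"die_even": 500, "lottery_winners": 10, "disease_pct": 20,
              "concert_winners": 100, "pch_pct": 0.1}

def score_numeracy(responses: pd.DataFrame) -> pd.Series:
    correct = pd.DataFrame({item: responses[item] == key for item, key in answer_key.items()})
    print(correct.mean())        # share answering each item correctly
    return correct.mean(axis=1)  # per-respondent score (the mean here was about .57)
```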

If people have trouble answering these questions, it is likely that they struggle to grasp some of the numbers behind how the budget is allocated, or for that matter, how to craft their own family’s budget. The low scores also amply illustrate that the education system fails Americans.

Given the importance of numeracy in a wide variety of domains, it is vital that we pay greater attention to improving it. The problem is also tractable: with the advent of good self-learning tools, it is possible to intervene at scale. Solving it is also liable to be good business. Since numeracy is likely to improve people’s ability to count calories, make better financial decisions, and more, health insurance companies could lower premiums for people who become more numerate, and lending companies could lower interest rates in exchange for gains in numeracy.

Town Level Data on Cable Operators and Cable Channels

12 Sep

I am pleased to announce the release of TV and Cable Factbook Data (1997–2002; 1998 coverage is modest). Use of the data is restricted to research purposes.

Background

In 2007, Stefano DellaVigna and Ethan Kaplan published a paper that used data from Warren’s Factbook to identify the effect of the introduction of the Fox News Channel on the Republican vote share (link to paper). Since then, a variety of papers exploiting the same data and identification scheme have been published (see, for instance, Hopkins and Ladd, and Clinton and Enamorado).

In 2012, I embarked on a similar project: using the data to study the impact of the introduction of the Fox News Channel on attitudes and behaviors related to climate change. However, I found the original data to be limited: DellaVigna and Kaplan had used a team of research assistants to manually code a small number of variables for a few years. So I worked on extending the data in two ways: adding more years, and adding ‘all’ the data for each year. To that end, I developed custom software. Collecting the data and parsing a few thousand densely packed, inconsistently formatted pages (see below) into a usable CSV (see below) finished sometime early in 2014. (To make it easier to create a crosswalk with other geographical units, I merged the data with town lat/long (centroid) and elevation data from http://www.fallingrain.com/world/US/.)

[Figure: Sample page from the Factbook]
[Figure: Snapshot of the final CSV]

Soon after I finished the data collection, however, I became aware of a paper by Martin and Yurukoglu. They found some inconsistencies between the Nielsen data and the Factbook data (see Appendix C1 of their paper) and traced the inconsistencies to delays in updating the Factbook data: “Updating is especially poor around [DellaVigna and Kaplan] sample year. Between 1999 and 2000, only 22% of observations were updated. Between 1998 and 1999, only 37% of observations were updated.” Based on their paper, I abandoned my plan to use the data, though I still believe the data can be used for a variety of important research projects, including estimating the impact of the introduction of Fox News. Based on that belief, I am releasing the data.

The Value of Money: Learning from Experiments Offering Money for Correct Answers

10 Jul

Papers at hand:

Prior et al. and Bullock et al.


Two empirical points that we learn from the papers:

1. Partisan gaps are highly variable, and the mean gap is reasonably small (in the control condition, without money). See also: Partisan Retrospection?
(Neither paper comments explicitly on this point, which has implications for proponents of partisan retrospection.)

2. When respondents are offered money for the correct answer, the partisan gap shrinks by about half on average.


The question in front of us: how to interpret point 2.


Why are there partisan gaps on knowledge items?

1. Different Beliefs: People believe different things to be true because they learn different things. For instance, Republicans learn that Obama is a Muslim, and Democrats learn that he is an observant Christian. For a clear exposition of what I mean by belief, see Waters of Casablanca.

2. Systematic Lazy Guessing: The number one thing people lie about on knowledge items is whether they have the remotest clue about the question being asked. The reluctance to admit ‘Don’t Know’ is itself a serious point worthy of investigation and careful interpretation.

When people guess on items with partisan implications, some try to infer the answer using cues in the question stem. For instance, a Republican, when asked whether the unemployment rate under Obama increased or decreased, may reason that Obama is a socialist, and since socialism is bad policy, unemployment must have increased.

3. Cheerleading: Even when people know that things that reflect badly on their party happened, they lie. (I will be surprised if this is common.)


The Quantity of Interest: Different Beliefs.
We do not want: Different Beliefs + Systematic Lazy Guessing


Why would money reduce partisan gaps?

1. Reducing Systematic Lazy Guessing: Bullock et al. use a pay-for-‘Don’t Know’ condition, offering people a small incentive (much smaller than the payment for a correct answer) to confess ignorance. The estimate should be closer to the quantity of interest: ‘Different Beliefs.’

2. Considered Guessing: When offered money for the correct answer, respondents replace the ‘lazy’ (for a boundedly rational human, optimal) partisan heuristic described above with more effortful guessing. Replacing Systematic Lazy Guessing with Considered Guessing is good to the extent that Considered Guessing is less partisan. If it is, the estimate will be closer to the quantity of interest: ‘Different Beliefs.’ (Think of it as a version of correlated measurement error: we are replacing systematic measurement error with error that is more evenly, if not ‘randomly,’ distributed.)

3. Looking up the Correct Answer: People look up answers to take the money on offer. Both papers go some way toward showing that cheating isn’t behind the narrowing of the partisan gap: Bullock et al. use ‘placebo’ questions, and Prior et al. use timing, among other checks.

4. Reduced Cheerleading: Respondents for whom the utility from lying is less than the money on offer stop lying. The estimate will be closer to the quantity of interest: ‘Different Beliefs.’

5. Demand Effects: Respondents take the offer of money as a cue that their instinctive response isn’t correct. The estimate may be further away from the quantity of interest: ‘Different Beliefs.’

Some Facts About PolitiFact

27 May

I assessed PolitiFact on:

1. Imbalance in scrutiny: Does PolitiFact vet statements by Democrats or Democratic-leaning organizations more than statements by Republicans or Republican-leaning organizations?

2. Batting average by party: Roughly n_correct/n_checked, but instantiated here as the mean PolitiFact rating.

To answer these questions, I scraped the data from PolitiFact and independently coded and appended data on the party of the person or organization covered. (Feel free to download the script for scraping and analyzing the data, the scraped data, and the data linking people and organizations to party from the GitHub Repository.)

So far, PolitiFact has checked the veracity of 3,859 statements by 703 politicians and organizations. Of these, I was able to establish the partisanship of 554 people and organizations. I restrict the analysis to the 3,396 statements by organizations and people whose partisanship I could establish and who lean toward either the Republican or the Democratic party. I code the PolitiFact six-point True to Pants on Fire scale (true, mostly-true, half-true, barely-true, false, pants-fire) linearly so that it lies between 0 (pants-fire) and 1 (true).

Of the 3,396 statements, about 44% (n = 1,506) are by Democrats or Democratic-leaning organizations; the remaining roughly 56% (n = 1,890) are by Republicans or Republican-leaning organizations. The average PolitiFact rating (batting average) of statements by Democrats or Democratic-leaning organizations is .63; it is .49 for statements by Republicans or Republican-leaning organizations.
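
A minimal sketch of the recode and the two summaries, assuming the scraped data sit in a data frame with rating and party columns (hypothetical names; the released script may differ):

```python
import pandas as pd

# Linear coding of the six-point scale, 0 (pants-fire) to 1 (true).
rating_to_score = {"pants-fire": 0.0, "false": 0.2, "barely-true": 0.4,
                   "half-true": 0.6, "mostly-true": 0.8, "true": 1.0}

def summarize_by_party(statements: pd.DataFrame) -> pd.DataFrame:
    scored = statements.assign(score=statements["rating"].map(rating_to_score))
    summary = scored.groupby("party").agg(n_checked=("score", "size"),
                                          batting_average=("score", "mean"))
    summary["share_checked"] = summary["n_checked"] / summary["n_checked"].sum()
    return summary
```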

To check whether the results are driven by a few people receiving a lot of scrutiny, I tallied the total number of statements investigated for each person. Unsurprisingly, there is a large skew, with a few prominent politicians receiving the bulk of the attention. For instance, PolitiFact investigated more than 500 claims by Barack Obama alone. The figure below plots the total number of statements investigated for the thirty politicians receiving the most scrutiny.
[Figure: Total number of statements investigated for the thirty most-scrutinized politicians]

If you take out Barack Obama, the share of statements by Democrats drops to 33.98%. More generally, limiting ourselves to the bottom 90% of politicians in terms of scrutiny received, the share of statements by Democrats is about 42.75%.

To analyze whether there is selection bias toward covering politicians who say incorrect things more often, I estimated the correlation between the batting average and the total number of statements investigated. The correlation is very weak and does not appear to vary systematically by party. Accounting for the skew by taking the log of the total statements or by estimating a rank-ordered correlation has little effect. The figure below plots the batting average as a function of the total statements investigated.

[Figure: Batting average as a function of total statements investigated]
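
A minimal sketch of that check, assuming a per-politician data frame with the total statements checked and the batting average (hypothetical names):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def selection_check(per_politician):
    x = per_politician["n_checked"]
    y = per_politician["batting_average"]
    print("Pearson:", pearsonr(x, y)[0])
    print("Pearson on log counts:", pearsonr(np.log(x), y)[0])  # tame the skew
    print("Spearman (rank-ordered):", spearmanr(x, y)[0])
```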

Caveats About Interpretation

To interpret the numbers, you need to make two assumptions:

1. The number of statements made by Republicans and Republican-leaning persons and organizations is the same as that made by people and organizations on the left.

2. The truthfulness of statements by Republicans and Republican-leaning persons and organizations is the same as that of statements by left-leaning people and organizations.