# Interviewer Assessments of Respondents’ Level of Political Information

In the National Election Studies (NES), interviewers have been asked to rate respondents’ level of political information – “Respondent’s general level of information about politics and public affairs seemed – Very high, Fairly high, Average, Fairly low, Very low.” John Zaller, among others, has argued that these ratings measure political knowledge reasonably well. However, there is some evidence that challenges the claim. For instance, there is considerable unexplained inter- and intra-interviewer heterogeneity in ratings – people with similar levels of knowledge (as measured via closed-ended items) are rated very differently (Levendusky and Jackman 2003). It also appears that mean interviewer ratings have been rising over the years, compared to the relatively flat trend observed in more traditional measures (see Delli Carpini and Keeter 1996, Gilens, Vavreck, and Cohen 2004, etc.).

Part of the increase is explained by higher ratings of respondents with less than a college degree; ratings of respondents with a bachelor’s degree or more have remained comparatively flat. As a result, the difference in ratings between people with a bachelor’s degree or more and those with less than a college degree is shrinking over time. Correlations between interviewer ratings and related criteria, such as political interest, are also trending downward (though the decline is less sharp). This conflicts with evidence for an increasing ‘knowledge gap’ (Prior 2005).

The other notable trend is the sharp negative correlation (magnitude greater than .85) between the intercept and slope of within-year regressions of interviewer ratings on political interest, education, etc. This sharp negative correlation hints at possible ceiling effects. And indeed there is some evidence for that.

Interviewer Measure – The measure is sometimes from the pre-election wave only, sometimes from the post-election wave only, and sometimes from both waves. Where both pre- and post-election measures were available, they were averaged. The correlation between pre-election and post-election ratings was .69. Average post-election ratings are lower than pre-election ratings.

# Comparing datasets, reporting only non-duplicated rows

The following is in response to a question on the R-Help mailing list.

Consider two datasets –

reported <-
structure(list(Product = structure(c(1L, 1L, 1L, 1L, 2L, 2L,
3L, 4L, 5L, 5L), .Label = c("Cocoa", "Coffee C", "GC", "Sugar No 11",
"ZS"), class = "factor"), Price = c(2331, 2356, 2440, 2450, 204.55,
205.45, 17792, 24.81, 1273.5, 1276.25), Nbr.Lots = c(-61L, -61L,
5L, 1L, 40L, 40L, -1L, -1L, -1L, 1L)), .Names = c("Product",
"Price", "Nbr.Lots"), row.names = c(1L, 2L, 3L, 4L, 6L, 7L, 5L,
10L, 8L, 9L), class = "data.frame")

exportfile <-
structure(list(Product = c("Cocoa", "Cocoa", "Cocoa", "Coffee C",
"Coffee C", "GC", "Sugar No 11", "ZS", "ZS"), Price = c(2331,
2356, 2440, 204.55, 205.45, 17792, 24.81, 1273.5, 1276.25), Nbr.Lots = c(-61,
-61, 6, 40, 40, -1, -1, -1, 1)), .Names = c("Product", "Price",
"Nbr.Lots"), row.names = c(NA, 9L), class = "data.frame")

Two possible solutions –
A.
m <- rbind(reported, exportfile)
dups <- duplicated(m) | duplicated(m, fromLast = TRUE)
res <- m[!dups, ]

B.

exportfile$key <- do.call(paste, exportfile)
reported$key <- do.call(paste, reported)
a <- reported[is.na(match(reported$key, exportfile$key)),]
b <- exportfile[is.na(match(exportfile$key, reported$key)),]
res <- rbind(a, b)
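Solution A hinges on the fact that, after rbind, any row common to both frames appears twice. A minimal self-contained check of that idea (toy frames, base R only; the values are illustrative, not the full data above):

```r
# toy frames: one row differs (Nbr.Lots 5 vs. 6); the other row is shared
reported   <- data.frame(Product = c("Cocoa", "GC"),
                         Price = c(2331, 17792),
                         Nbr.Lots = c(5, -1))
exportfile <- data.frame(Product = c("Cocoa", "GC"),
                         Price = c(2331, 17792),
                         Nbr.Lots = c(6, -1))

m <- rbind(reported, exportfile)
# rows that appear more than once are in both frames; keep singletons only
res <- m[!(duplicated(m) | duplicated(m, fromLast = TRUE)), ]
```

Only the two mismatched Cocoa rows survive; the shared GC row is dropped from both sides.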

# Correcting for Differential Measurement Error in Experiments

Differential measurement error – across control and treatment groups, or across pre- and post-treatment measurement waves in a within-subjects experiment – can vitiate estimates of the treatment effect. One reason for differential measurement error in surveys is differential motivation. For instance, if participants in the control group (pre-treatment survey) are less motivated to respond accurately than participants in the treatment group (post-treatment survey), the difference-in-means estimator will be a biased estimator of the treatment effect. For example, in Deliberative Polls, participants acquiesce more during the pre-treatment survey than during the post-treatment survey (Weiksner, 2008). To correct for this, one may want to replace agree/disagree questions with construct-specific questions (Weiksner, 2008). Perhaps a better solution would be to incentivize all (or a random subset of) responses to the pre-treatment survey. Possible incentives include monetary rewards, a preface on the screens telling people how important accurate responses are to research, etc. This is the same strategy I advocate for dealing with satisficing more generally (see here) – minimizing errors, rather than the more common, suboptimal strategy of “balancing errors” by randomizing the response order.

# Against Proxy Variables

Lacking direct measures of the theoretical variable of interest, some rely on “proxy variables.” For instance, some have used years of education as a proxy for cognitive ability. However, using “proxy variables” can be problematic for the following reasons — (1) proxy variables may not track the theoretical variable of interest very well, and (2) they may also track confounding variables beyond the theoretical variable of interest. In the case of years of education as a proxy for cognitive ability, the concerns manifest themselves as follows —

1) Cognitive ability causes, and is a consequence of, what courses you take and what school you go to, in addition to, of course, years of education. The GSS, for instance, contains more granular measures of education – for instance, whether the respondent took a science course in college. And nearly always that variable proves significant when predicting knowledge, etc. All this is somewhat surmountable, as it can be treated as measurement error.

2) More problematically, years of education may tally other confounding variables – diligence, parents’ education, economic stratum, etc. Moreover, education endows people with more than cognitive ability; it also causes potentially confounding variables such as civic engagement, knowledge, etc.

Conservatively, we can only attribute the effect of the variable to the variable itself. That is, we only have the variables we enter. If one does rely on proxy variables, one may want to address the two points mentioned above.

# Education and Economic Inequality

Across the UK and the US, a large majority of politicians seem to believe that increasing levels of education will reduce economic inequality. However, it isn’t clear that the policy is empirically supported. Here are some potential ways increasing levels of education can impact economic inequality –

1. As Grusky argues, current high-wage earners whose high wages depend on education and on a lack of competition from similarly educated men and women (High Education Low Competition, or HELCO) will start earning lower wages because of increased competition (thereby reducing inequality). This assumes that HELCO won’t respond by further burnishing their educational credentials, etc. It also assumes that HELCO exists as a large class. What likely exists, instead of HELCO, is success attributable to networks, etc. That kind of advantage cannot be blunted by increasing education among those ‘not in the network’.
2. Another possibility is that education increases the number of high-paying jobs in the economy, and that it raises the boats of non-HELCO more than HELCO. There is some evidence for that, though it is mostly anecdotal.
3. Another plausible scenario is that additional education produces only a modest effect, with non-HELCO still mostly doing low-paying jobs. This may be due to only a modest increase in the overall availability of ‘good jobs’. This outcome is in fact likely, if the data are any indication. Easy access to education has already meant that many a janitor and store clerk walks around with a college degree (see Why Did 17 Million Students Go to College?, and The Underemployed College Graduate).

Without an increase in ‘good jobs’, the result of an increase in education is increased heterogeneity in who succeeds (a random draw at the extreme) but no change in the proportion of those who succeed. Or: increased equality of opportunity (a commendable goal) but no reduction in economic inequality (though in a multi-generation game, it may even out). Increasing access to education also has the positive externality of producing a more educated society, another worthy goal.

How plentiful the ‘good jobs’ are depends partly on how economic activity is organized. For instance, there may once have been a case for hiring only one ‘super-talented person’ (say, a ‘superstar’) for a top-shelf job (say, CEO). Now we have systems that can harness the wisdom of many. It is also plausible that that wisdom is greater than that of the superstar. It stands to reason, then, that the superstar be replaced; economic activity will be more efficient. Or else let other smart people who could contribute equally (if educated) be compensated alternately for doing work that is ‘beneath them’.

# R – Recoding variables reliably and systematically

Survey datasets typically require a fair bit of repetitive recoding of variables. Errors in recoding can be reduced by writing functions carefully (see some tips here) and by automating and systematizing the naming and application of the recode function (which can be custom) –

fromlist <- c("var1", "var2", "var3", "var4", "var5")
tolist <- paste(fromlist, "recoded", sep = "")
data[, tolist] <- sapply(data[, fromlist], function(x) car::recode(x, "recode.directions"))

Simple functions can also be directly applied to each column of a data.frame. For instance,
data[, tolist] <- !is.na(data[, fromlist])
data[, tolist] <- abs(data[, fromlist] - .5)
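The pattern can be exercised end to end on toy data. Since car may not be installed, a simple hand-rolled recode (collapsing a 5-point scale to -1/0/1) stands in for car::recode below; the data frame and variable names are illustrative:

```r
# toy data; NA-free for simplicity
data <- data.frame(var1 = c(1, 3, 5), var2 = c(2, 4, 5))
fromlist <- c("var1", "var2")
tolist <- paste(fromlist, "recoded", sep = "")

# custom recode: values above the midpoint -> 1, below -> -1, midpoint -> 0
recode5 <- function(x) ifelse(x > 3, 1, ifelse(x < 3, -1, 0))

# apply the same recode to every column, writing to systematically named columns
data[, tolist] <- sapply(data[, fromlist], recode5)
```

The systematic from/to naming means a new variable only has to be added to fromlist once.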

# Measuring Impact of Media

Measuring the impact of media accurately has proven a challenge. Findings of minimal effects abound when intuition tells us that an activity the average American engages in for over forty hours a week is likely to have a larger impact. These insignificant findings have typically been attributed to the frailty of survey self-reports of media exposure, though debilitating error in dependent variables has also been noted as a culprit. Others have pointed to weaknesses in research design, inadequate awareness of analytic techniques that compensate for error in measures, etc., as stumbling blocks.

Here are a few of the methods that have been used to overcome some of the problems in media research, along with some modest new proposals of my own –

• Measurement
Since measures are error prone, one strategy has been to combine multiple measures. Multiple measures of a single latent concept can be combined using latent variable models, factor analysis, or even simple averaging. Precaution must be taken to check that errors across measures aren’t heavily correlated, for under such conditions improvements from combining multiple measures are likely to be weak or non-existent. In fact, deleterious effects are possible.

Another worry is that measurement error can be correlated with irrelevant respondent characteristics. For instance, women guess less than men on knowledge questions. Hence responses to knowledge questions are a function of ability and of the propensity to guess when one doesn’t know (tallied here by gender). By conditioning on gender, we can recover better estimates of ‘ability’. Another application would be in handling satisficing.

• Measurement of exposure
Rather than use self-assessments of exposure, which have been shown to be correlated with confounding variables, one may want to track incidental consequences of exposure as a measure of exposure – for example, knowledge of the words of a campaign jingle, attributes of a character in a campaign commercial, the source (~channel) on which the campaign was shown, the program, etc. These measures factor in attention in addition to exposure, which is useful. Unobtrusive monitoring of consumption is, of course, likely to be even more effective.

• Measurement of Impact
1. Increased exposure to positive images ought to change procedural memory and implicit associations. One can use IAT or AMP to assess the effect.
2. Tracking Twitter and Facebook feeds for relevant information. These measures can be calibrated to opinion poll data to get a sense of what they mean.
• Data Collection
1. Data collection efforts need to reflect the half-life of the effect. Recent research indicates that some of the impact of the media may be short-lived. Short-term effects may be increasingly consequential as people increasingly have the ability to act on their impulses – be it buying something, donating to a campaign, or finding more information about the product. Behavioral measures (e.g., website hits) corresponding to ads may thus be one way to track impact.
2. Future ‘panels’ may contain solely passive monitoring of media use (both input and output) and consumption behavior.
• Estimating recipient characteristics via secondary data
1. Geocoded IP addresses can be used to harvest secondary demographic data (race, income, etc.) from the census.
2. Para-data like what browser and operating system the customer uses etc. are reasonable indicators of tech. savvy. And these data are readily harvested.
3. Datasets can be merged via ‘matching’, or by exploiting correlation across items and calibrating.

# Working with modestly large datasets in R

Even modestly large (< 1 GB) datasets can quickly overwhelm modern personal computers. Working with such datasets in R can be still more frustrating because of how R uses memory. Here are a few tips on how to work with modestly large datasets in R.

Setting Memory Limits
On Windows, right-click R and in the Target field set the maximum vector size and memory size as follows –
"path\to\Rgui.exe" --max-vsize=4800M (deprecated as of R 2.14).

Alternately, use utils::memory.limit(size=4800) in .Rprofile.

Type mem.limits() to check the maximum vector size.

Reading Data In
Either specify column classes manually or get the data type of each column by reading in the first few rows (say, 10) – enough that the data types can be inferred correctly – and then pass those classes to the full read.

Specifying the number of rows in the dataset (even a modestly greater number than what is there) can also be useful.
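The two-pass read can be sketched as follows; the block writes a toy data.csv to a temporary directory so that it is self-contained (the file name and sizes are illustrative):

```r
# create a small csv so the example runs anywhere
tf <- file.path(tempdir(), "data.csv")
write.csv(data.frame(x = 1:100, y = letters[1:100 %% 26 + 1]),
          tf, row.names = FALSE)

# pass 1: read a handful of rows just to infer the column classes
first   <- read.csv(tf, nrows = 10)
classes <- sapply(first, class)

# pass 2: full read, with classes fixed and a generous row count specified
data <- read.csv(tf, colClasses = classes, nrows = 200)
```

Fixing colClasses spares R from inferring types while scanning the whole file, which is where much of the speedup comes from.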

Improvements in performance are not always stupendous but given the low cost of implementation, likely worthwhile.

You can selectively skip columns by specifying colClasses = "NULL" (the character string, not the NULL object) for the columns you don’t want read.
Alternately, you can rely on cut. For instance,
data <- read.table(pipe("cut -f 2,5 -d, data.csv"), sep = ",")

Opening Connections
Trying to read a csv directly can end in disaster. Open a connection first to reduce memory demands.
abc <- file("data.csv")

Using SQLDF
library(sqldf)
f <- file("data.csv")
Df <- sqldf("select * from f", dbname=tempfile(), file.format=list(header=T, row.names=F))
Problems include an inability to deal with fields that contain commas, etc.

Using Filehash

The filehash package stores objects on the hard drive. You can access the data either with with(), if dealing with an environment variable, or directly via dbLoad(), which mimics the functionality of attach(). Downside: it is tremendously slow.

library(filehash)

Selecting Columns
Use subset(data, select=columnList) rather than data[, columnList].

# Impact of Menu on Choices: Choosing What You Want Or Deciding What You Should Want

In Predictably Irrational, Dan Ariely discusses the clever (now-retired) subscription menu of The Economist, which purportedly manipulated people into subscribing to a pricier plan. In an experiment based on the menu, Ariely shows that the addition of an item to the menu (one that very few choose) can cause preference reversals over the other items on the menu.

Let’s consider a minor variation of Ariely’s experiment. Assume there are two different menus that look as follows –
1. 400 cal, 500 cal.
2. 400 cal, 500 cal, 800 cal.

Assume that all items cost and taste the same. When given the first menu, say 20% choose the 500-calorie item. When selecting from the second menu, the percentage of respondents selecting the 500-calorie choice is likely to be significantly greater.

Now why may that be? One reason may be that people do not have absolute preferences – here, for a specific number of calories – and that they make judgments about what is a reasonable number of calories based on the menu. For instance, they decide that they do not want the item with the maximum calorie count. And when presented with a menu with more than two distinct calorie choices, another consideration comes to mind – they do not want too little food either. More generally, they may let the options on the menu anchor what counts as ‘too much’ and ‘too little’.

If this is true, it can have potentially negative consequences. For instance, McDonald’s has on its menu a Bacon Angus Burger that is about 1360 calories (calories are now displayed on McDonald’s menus courtesy of Richard Thaler). It is possible that people choose higher-calorie items when they see this menu option than when they do not.

More generally, people’s reliance on the menu to discover their own preferences means that marketers can manipulate what is seen as the middle (and hence ‘reasonable’). This also translates to some degree to politics, where what is considered the middle (in both social and economic policy) is sometimes exogenously shifted by the elites.

That is but one way a choice on the menu can impact the preference order over other choices. Separately, sometimes a choice can prime people about how to judge other choices. For instance, in a paper exploring the effect of Nader on preferences over Bush and Kerry, researchers find that “[W]hen Nader is in the choice set all voters’ choices are more sharply aligned with their spatial placements of the candidates.”

All this means that assumptions of IIA (independence of irrelevant alternatives) need to be rethought. Adverse conclusions about human rationality are best withheld (see Sen).