Two things are often stated about American politics: that political elites are increasingly polarized, and that the issue positions of the masses haven’t budged much. If both claims are true, one would expect the average distance between where partisans place themselves and where they place the ‘in-party’ (or the ‘out-party’) to increase. However, it appears that the distance to the in-party has remained roughly constant, while the distance to the out-party has grown, in line with what one would expect from the theory of ‘affective polarization’ and group-based perception. (Read More: Still Close: Perceived Ideological Distance to Own and Main Opposing Party)
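With 7-point placement items of the kind the ANES carries, the two distances can be computed directly. A minimal sketch, with made-up placements and a hypothetical record layout:

```python
# Sketch: mean perceived distance to the in-party and the out-party from
# 7-point placements. The records below are invented for illustration; in
# practice these would be ANES self, Democratic-party, and Republican-party
# placements.

def mean_distances(rows):
    """Each row: (self_placement, dem_placement, rep_placement, party),
    with party in {'D', 'R'}. Returns (mean in-party distance,
    mean out-party distance)."""
    in_d, out_d = [], []
    for self_p, dem_p, rep_p, party in rows:
        in_place = dem_p if party == 'D' else rep_p
        out_place = rep_p if party == 'D' else dem_p
        in_d.append(abs(self_p - in_place))
        out_d.append(abs(self_p - out_place))
    return sum(in_d) / len(in_d), sum(out_d) / len(out_d)

# Two made-up respondents on a 1-7 scale
sample = [(2, 3, 6, 'D'), (6, 2, 6, 'R')]
in_dist, out_dist = mean_distances(sample)
```

Tracking these two means by survey year is all that is needed to reproduce the pattern described above.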
Over the past forty years, the proportion of respondents reporting at least one union member in the household has declined precipitously. (Source: American National Election Studies)
In the National Election Studies (NES), interviewers have been asked to rate respondents’ level of political information — “Respondent’s general level of information about politics and public affairs seemed – Very high, Fairly high, Average, Fairly low, Very low.” John Zaller, among others, has argued that these ratings measure political knowledge reasonably well. However, some evidence challenges that claim. For instance, there is considerable unexplained inter- and intra-interviewer heterogeneity in ratings – people with similar levels of knowledge (as measured via closed-ended items) are rated very differently (Levendusky and Jackman 2003 (pdf)). It also appears that mean interviewer ratings have been rising over the years, in contrast to the relatively flat trend observed in more traditional measures (see Delli Carpini and Keeter 1996; Gilens, Vavreck, and Cohen 2004).
Part of the increase is explained by higher ratings of respondents with less than a college degree; ratings of respondents with a bachelor’s degree or more have remained flatter. As a result, the difference in ratings between people with a bachelor’s degree or more and those with less than a college degree is shrinking over time. Correlations between interviewer ratings and other criteria, such as political interest, are also trending downward (though the decline is less sharp). This conflicts with evidence of an increasing ‘knowledge gap’ (Prior 2005).
The other notable trend is the sharp negative correlation (over .85 in absolute value) between the intercepts and slopes of within-year regressions of interviewer ratings on political interest, education, etc. This sharp negative correlation hints at possible ceiling effects, and indeed there is some evidence for that.
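The intercept–slope pattern can be illustrated with simulated data: when ratings bump against the top of the scale, years with higher intercepts mechanically produce flatter slopes. A sketch (the simulated ceiling stands in for the real rating scale; none of this is actual NES data):

```python
import random

def ols(x, y):
    """One-predictor OLS; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx, slope

def corr(a, b):
    """Pearson correlation between two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

random.seed(1)
intercepts, slopes = [], []
for year in range(20):  # one regression per simulated survey year
    interest = [random.uniform(0, 1) for _ in range(200)]
    # Simulated ratings drift upward over the years but are capped at 5
    # (the top of the scale), so high-intercept years get flatter slopes.
    ratings = [min(5.0, 2 + 0.15 * year + 2 * t + random.gauss(0, 0.5))
               for t in interest]
    a, b = ols(interest, ratings)
    intercepts.append(a)
    slopes.append(b)

r = corr(intercepts, slopes)  # negative when a ceiling is binding
```

The same diagnostic run on the real within-year regressions is what the correlation above refers to.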
Interviewer Measure – The measure is sometimes from the pre-election wave only, sometimes from the post-election wave only, and sometimes from both waves. Where both pre- and post-election measures were available, they were averaged. The correlation between pre-election and post-election ratings was .69. Average post-election ratings are lower than pre-election ratings.
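The averaging rule can be written as a small helper; the None-for-missing convention here is an assumption about the coding, not the NES scheme itself:

```python
def combined_rating(pre, post):
    """Average pre- and post-election interviewer ratings when both are
    available; fall back to whichever wave exists; None if neither."""
    waves = [r for r in (pre, post) if r is not None]
    return sum(waves) / len(waves) if waves else None
```

For example, `combined_rating(4, 5)` gives 4.5, and `combined_rating(None, 3)` falls back to the post-election rating.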
(Based on data from the 111th Congress)
Law is the most popular degree on Capitol Hill (and has been for a long time). Nearly 52% of senators and 36% of House members have a degree in law. There are some differences across parties and chambers: Republicans are likelier than Democrats to have a law degree in the Senate (58% to 48%), while the reverse holds in the House, where a greater share of Democrats than Republicans holds law degrees (40% to 32%). Less than 10% of members of Congress have a degree in the natural sciences or engineering. Nearly 8% have a degree from Harvard, making Harvard’s the largest alumni contingent on Capitol Hill. Yale is a distant second, with less than half the number who went to Harvard.
More women identify themselves as Democrats than as Republicans. The disparity is even greater among single women. It is possible (perhaps even likely) that this difference in partisan identification is due to the (perceived) policy positions of Republicans and Democrats.
Now let’s do a thought experiment. Imagine a couple about to have a kid, and assume that the couple doesn’t engage in sex selection. Two things can happen – the couple can have a son or a daughter. It is possible that having a daughter persuades the parents to shift their policy preferences in a direction perceived as more congenial to women. It is also possible that having a son has the opposite impact, persuading parents to adopt more male-congenial political preferences. Overall, it is possible that the gender of the child makes a difference to parents’ policy preferences. With panel data, one can identify both movements. With cross-sectional data, one can only identify the difference between those who had a son and those who had a daughter.
Let’s test this using cross-sectional data from Jennings and Stoker’s “Study of Political Socialization: Parent-Child Pairs Based on Survey of Youth Panel and Their Offspring, 1997”.
Let’s assume that a couple’s partisan affiliation doesn’t impact the gender of their kid.
The number of kids, however, is determined by personal choice, which in turn may be affected by ideology, income, etc. For example, conservatives likely have more kids, as they are less likely to believe in contraception; this is also supported by the data. (Ideology is a post-treatment variable. This may not matter if the impact of having a daughter is the same in magnitude as the impact of having a son, and if there are similar numbers of each across people.)
Hence, one may conceptualize “treatment” as gender of the kids, conditional on the number of kids.
Understandably, we only study people who have one or more kids.
Conditional on the number of kids, the more daughters a respondent has, the less likely the respondent is to identify as a Republican (b = -.342, p < .01) when the dependent variable is a dichotomous Republican/Democrat variable. The relationship holds – indeed becomes stronger – if the dependent variable is coded as an ordinal trichotomous variable (Republican, Independent, Democrat) and an ordered multinomial model is estimated.
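The conditioning logic can be illustrated without estimating a model: within each family-size stratum, compare the share identifying as Republican across counts of daughters. A sketch on made-up records (the actual analysis uses a regression with the number of kids as a control):

```python
from collections import defaultdict

def republican_share_by_daughters(records):
    """records: iterable of (n_kids, n_daughters, is_republican).
    Returns {(n_kids, n_daughters): share identifying as Republican},
    i.e., the comparison is made within family-size strata."""
    counts = defaultdict(lambda: [0, 0])  # [n republicans, n total]
    for n_kids, n_daughters, is_rep in records:
        cell = counts[(n_kids, n_daughters)]
        cell[0] += is_rep
        cell[1] += 1
    return {k: r / t for k, (r, t) in counts.items()}

# Made-up illustration: among two-kid families, Republican identification
# declines with the number of daughters
data = [
    (2, 0, 1), (2, 0, 1), (2, 0, 0),   # no daughters
    (2, 2, 0), (2, 2, 0), (2, 2, 1),   # two daughters
]
shares = republican_share_by_daughters(data)
```

The stratified shares are the non-parametric analogue of the coefficient reported above.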
If what we observe is true, then as party stances evolve, the impact of a child’s gender on a parent’s policy preferences should vary as well. One should also be able to test this cross-nationally.
Some other findings:
- The probability of having a son (limiting to live births in the U.S.) is about .51. This ‘natural rate’ varies slightly by income – daughters are more likely to be born to lower-income parents. However, the effect of income is extremely modest in the U.S., to the point of being ignorable. The live-birth ratio is marginally rebalanced by the higher child mortality rate among males. As a result, among 0–21-year-olds, the ratio of men to women is about equal in the U.S.
- In the sample, there are significantly more daughters than sons. The female/male ratio is 1.16. This is ‘significantly’ unusual.
- If families are less likely to have kids after the birth of a boy, the number of kids will be negatively correlated with the proportion of sons. Among people with just one kid, the number of sons is indeed greater than the number of daughters, though the difference is insignificant. The overall correlation between proportion of sons and number of kids is also very low (corr. = -.041).
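Whether a female/male ratio of 1.16 is unusual can be checked against the natural P(son) ≈ .51 with an exact binomial tail probability. A sketch; the sample size below is hypothetical, not the Jennings and Stoker count:

```python
from math import comb

def binom_tail_le(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p): the probability of seeing k or
    fewer sons among n kids if each birth is a son with probability p."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

# A female/male ratio of 1.16 means sons are 1/2.16 of all kids.
# With a hypothetical 500 kids, that's about 231 sons.
n_kids = 500
n_sons = round(n_kids / 2.16)
p_value = binom_tail_le(n_sons, n_kids, 0.51)  # one-tailed
```

With these made-up numbers the tail probability falls below conventional significance thresholds, which is the sense in which such a ratio is ‘significantly’ unusual.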
By now, students of American politics have all become accustomed to seeing graphs of DW-NOMINATE scores showing ideological polarization in Congress. Here are the equivalent graphs (assuming two dimensions) at the mass level.
Here’s how to interpret the graphs –
1) There is a large overlap in preference profiles of Rs and Ds.
2) Conditional on the same preferences, there is a large gap in thermometer ratings. Absent partisan bias, the same preferences should yield about the same R and D thermometer ratings. And this gap is not particularly responsive to changes in preferences within parties.
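Point 2 can be checked mechanically: bin respondents by preference score and compare mean thermometer gaps for Rs and Ds within the same bin. A sketch with made-up ratings:

```python
from collections import defaultdict

def therm_gap(rows):
    """rows: (preference_score, party, therm_R, therm_D). Returns
    {(pref_bin, party): mean R-minus-D thermometer rating}, so partisan
    gaps can be compared among respondents with similar preferences."""
    cells = defaultdict(list)
    for pref, party, tr, td in rows:
        cells[(round(pref), party)].append(tr - td)
    return {k: sum(v) / len(v) for k, v in cells.items()}

# Made-up data: Rs and Ds with near-identical (moderate) preferences
# still rate the two parties very differently
rows = [
    (4.0, 'R', 80, 40), (4.1, 'R', 70, 30),
    (4.0, 'D', 35, 85), (3.9, 'D', 25, 75),
]
cells = therm_gap(rows)
partisan_gap = cells[(4, 'R')] - cells[(4, 'D')]
```

A large `partisan_gap` within the same preference bin is the pattern the graphs display.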
Each election cycle, many hands are waved and much spit launched into the air when the topic of registration rates of Latinos (and other minorities) comes up. And indeed, registration rates of Latinos substantially lag those of Whites – in California, 62.8% of eligible Latinos are registered to vote, compared with approximately 72.9% of eligible Whites.
This somewhat large difference in registration rates doesn’t automatically translate into (equally) wide distortions in the racial distribution of the registered-voter population relative to the eligible population. For example, while self-identified Whites constitute 62.8% of the VEP, they constitute marginally more – 64.2% – of the voting-eligible respondents who self-identify as having registered to vote.
Here’s the math –
Assume VEP Pop. = 100
Whites = 63 of 100; of these, 72% register: 63 * .72 ≈ 45
Latinos = 23 of 100; of these, 62% register: 23 * .62 ≈ 14
Rest = 14 of 100; of these, 62% register: 14 * .62 ≈ 9
Registered population = 45 + 14 + 9 = 68
Registered shares: Whites = 45/68 ≈ 66.2%; Latinos = 14/68 ≈ 20.6%
Source: PPIC Survey (September 2010).
Note: CPS 2008 and Secretary of State data confirm this. Voting-day population estimates from the Exit Poll also show no large distortions.
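The back-of-the-envelope calculation above generalizes to a small function: given each group’s share of the VEP and its registration rate, compute its share of the registered population.

```python
def registered_shares(groups):
    """groups: {name: (vep_share, registration_rate)}. Returns each
    group's share of the registered population, in percent."""
    weights = {g: share * rate for g, (share, rate) in groups.items()}
    total = sum(weights.values())
    return {g: 100 * w / total for g, w in weights.items()}

# The numbers from the example above
shares = registered_shares({
    'White': (0.63, 0.72),
    'Latino': (0.23, 0.62),
    'Rest': (0.14, 0.62),
})
```

Without rounding the intermediate counts to whole people, the shares come to about 66.4% and 20.9%, close to the rounded figures above.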
Some simple math:
For a two-category case, let the proportion in category a = pa
Proportion in category b = 1 – pa
Assume the response rate for category a = qa, and for category b, qb = c*qa
Initial ratio = pa/(1 – pa)
Final ratio = (pa*qa)/[(1 – pa)*qb]
So between time 1 and time 2, the ratio changes by a factor of qa/qb = 1/c
T1 Diff. = pa – (1- pa) = 2pa – 1
T2 Diff. = (pa*qa – qb + pa*qb)/(pa*qa + (1-pa)*qb)
= (pa(qa + qb) – qb)/(pa(qa – qb) + qb)
= [pa*qa (1 + c) – c*qa]/[pa*qa(1-c) + c*qa]
T2 Diff. – T1 Diff. = [pa*qa (1 + c) – c*qa]/[pa*qa(1-c) + c*qa] – (2pa -1)
= [pa*qa (1 + c) – c*qa + pa*qa(1-c) + c*qa – 2pa (pa*qa(1-c) + c*qa)]/[pa*qa(1-c) + c*qa]
= [pa*qa + pa*qa*c – c*qa + pa*qa – pa*qa*c + c*qa – 2pa*pa*qa + 2pa*pa*qa*c – 2pa*c*qa]/[pa*qa(1-c) + c*qa]
= [2pa*qa – 2pa*pa*qa + 2pa*pa*qa*c – 2pa*c*qa]/[pa*qa(1-c) + c*qa]
= [2pa*qa(1- pa + pa*c -c)]/[pa*qa(1-c) + c*qa]
= [2pa*qa((1- c) – pa(1-c))]/[pa*qa(1-c) + c*qa]
= [2pa*qa(1-pa)(1-c)]/[pa*qa(1-c) + c*qa]
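The closed form for T2 Diff. – T1 Diff. can be checked numerically against a direct computation:

```python
def diff_change(pa, qa, c):
    """Direct computation: difference in shares among respondents (T2)
    minus difference in the population (T1)."""
    qb = c * qa
    t1 = pa - (1 - pa)
    resp_a, resp_b = pa * qa, (1 - pa) * qb
    t2 = (resp_a - resp_b) / (resp_a + resp_b)
    return t2 - t1

def diff_change_closed(pa, qa, c):
    """The closed form derived above."""
    denom = pa * qa * (1 - c) + c * qa
    return 2 * pa * qa * (1 - pa) * (1 - c) / denom

# Spot-check agreement over a few (pa, qa, c) triples
checks = [(0.3, 0.8, 0.5), (0.6, 0.4, 0.9), (0.5, 0.7, 0.25)]
agree = all(abs(diff_change(*t) - diff_change_closed(*t)) < 1e-12
            for t in checks)
```

Such a brute-force check is a cheap safeguard against slips in long hand expansions like the ones below.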
Diff. in response rates = qa – qb = qa(1 – c)
When will the diff. in response rates be greater than T2 Diff. – T1 Diff.?
qa(1 – c) > [2pa*qa(1 – pa)(1 – c)]/[pa*qa(1 – c) + c*qa]
(1 – c) and qa are both greater than 0 (assuming qa > qb, i.e., 0 < c < 1), so dividing both sides by qa(1 – c):
1 > 2pa(1 – pa)/[pa*qa(1 – c) + c*qa]
pa*qa(1 – c) + c*qa > 2pa(1 – pa)
c*qa(1 – pa) > 2pa(1 – pa) – pa*qa
Dividing by qa(1 – pa):
c > 2pa/qa – pa/(1 – pa)
When will the diff. in response rates + initial diff. be greater than T2 Diff.?
qa – qa*c + 2pa – 1 > [pa*qa (1 + c) – c*qa]/[pa*qa(1-c) + c*qa]
[pa*qa(1-c) + c*qa][qa – qa*c + 2pa – 1] – [pa*qa (1 + c) – c*qa] > 0
– pa*qa + pa*qa*c – c*qa + [pa*qa(1-c) + c*qa][qa – qa*c + 2pa] – pa*qa – pa*qa*c + c*qa > 0
-2pa*qa + [pa*qa(1-c) + c*qa][qa – qa*c + 2pa] > 0
-2pa*qa + [pa*qa – pa*qa*c + c*qa][qa – qa*c + 2pa] > 0
-2pa*qa + pa*qa[qa – qa*c + 2pa] – pa*qa*c[qa – qa*c + 2pa] + c*qa[qa – qa*c + 2pa] > 0
-2pa*qa + pa*qa^2 – 2pa*c*qa^2 + pa*c^2*qa^2 + 2pa^2*qa – 2pa^2*c*qa + c*qa^2 – c^2*qa^2 + 2pa*c*qa > 0
pa*qa^2(1 – 2c + c^2) + c*qa^2(1 – c) + 2pa*qa(pa – 1 + c – pa*c) > 0
pa*qa^2(1 – c)^2 + c*qa^2(1 – c) – 2pa*qa(1 – pa)(1 – c) > 0
Dividing by qa(1 – c) > 0:
pa*qa(1 – c) + c*qa > 2pa(1 – pa)
c > 2pa/qa – pa/(1 – pa)
Note that the two questions had to yield the same condition: asking whether the response-rate difference exceeds T2 Diff. – T1 Diff. and asking whether the response-rate difference plus T1 Diff. exceeds T2 Diff. are the same inequality rearranged.
Outside of the variety of ways of explicitly asking people how they feel about another group — feeling thermometers, like/dislike scales, favorability ratings — explicit measures asked using mechanisms designed to overcome or attenuate social desirability concerns — bogus pipeline, ACASI — and a plethora of implicit measures — affect misattribution, IAT — there exist a few other interesting ways of measuring affect:
- Games as measures – Jeremy Weinstein uses games like the dictator game to measure (inter-ethnic) affect. One can use the prisoner’s dilemma, among other games, to do the same.
- Systematic bias in responses to factual questions when people are ignorant of the correct answer. For example, in most presidential election years since 1988, the ANES has posed a variety of retrospective evaluative and factual questions, including assessments of the state of the economy and whether inflation/unemployment/crime rose, remained the same, or declined over the past year (or some other time frame). Analyses of these questions have revealed significant ‘partisan bias’, but the questions have yet to be used as a measure of the ‘partisan affect’ that is the likely cause of the observed ‘bias’.
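One way to turn such items into a measure is to score each answer’s signed error relative to the truth, oriented so that positive values flatter the respondent’s party, and average. A sketch with hypothetical codings (the -1/0/1 scheme and the orientation rule are assumptions, not the ANES coding):

```python
def partisan_bias_score(answers):
    """answers: list of (response, truth, direction), where response and
    truth are coded -1/0/1 (e.g., worse/same/better) and direction is +1
    if higher values flatter the respondent's party (say, the in-party
    holds the White House), else -1. Returns the mean signed error,
    oriented so that positive = party-serving bias."""
    errors = [(resp - truth) * direction
              for resp, truth, direction in answers]
    return sum(errors) / len(errors)

# Hypothetical respondent with their party in power (+1): says the
# economy improved (truth: stayed the same) and crime fell (truth: fell)
score = partisan_bias_score([(1, 0, 1), (-1, -1, 1)])
```

Averaging such scores within respondents would yield the individual-level ‘partisan affect’ measure the passage envisions.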