On balance, let there be imbalance on observables

27 Sep

For whatever reason, some people are concerned with imbalance when analyzing data from randomized experiments. The concern may be more general, but its fixes reduce to reducing imbalance on observables. Such fixes may fix things or break things. More generally, it is important to keep in mind what one experiment can show. If randomization is done properly, and other assumptions hold, the most common estimator of the experimental effect – the difference in means – is unbiased. We also have a good idea of how often the confidence bounds will contain the true effect. For tightening those bounds, relying on sample size is the way to go. The general rule applies: larger is better. But the rule admits some refinements. When every unit is the same – for instance, neutrons* (in most circumstances) – and if measurement error isn’t a concern, a sample of 1 will do just fine. The point holds for a case that is potentially easier to obtain than every unit being identical: when the treatment effect is constant across units. When everyone differs in their treatment effect, randomization won’t help, though one can always try to quantify the heterogeneity. More generally, the sample size required for a particular level of balance grows with the heterogeneity. Stratified random assignment (or blocking) will help. This isn’t to say that the raw difference estimator will be biased. It won’t. Just that its variance will be higher.

* Based on discussion in Gerber and Green on why randomization is often not necessary in physics.
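The variance point can be checked with a quick simulation. Under a constant treatment effect, both complete randomization and blocked (pair-matched on a covariate) randomization give unbiased difference-in-means estimates, but blocking shrinks the variance. All data below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau, reps = 100, 1.0, 2000

# Hypothetical units: a covariate x drives the outcome; the treatment
# effect tau is constant across units (all numbers are made up).
x = rng.normal(size=n)
y0 = x + rng.normal(scale=0.2, size=n)  # potential outcome under control
y1 = y0 + tau                           # potential outcome under treatment
order = np.argsort(x)                   # blocks: pairs of units with similar x

def diff_in_means(assign):
    y = np.where(assign == 1, y1, y0)   # observed outcomes under this assignment
    return y[assign == 1].mean() - y[assign == 0].mean()

est_simple, est_blocked = [], []
for _ in range(reps):
    # Complete randomization: any n/2 units get treated.
    a = np.zeros(n, dtype=int)
    a[rng.choice(n, n // 2, replace=False)] = 1
    est_simple.append(diff_in_means(a))
    # Blocked randomization: within each pair, exactly one unit is treated.
    b = np.zeros(n, dtype=int)
    for i in range(0, n, 2):
        b[order[i + rng.integers(2)]] = 1
    est_blocked.append(diff_in_means(b))

# Both estimators center on tau; the blocked one has a smaller variance.
print(np.mean(est_simple), np.var(est_simple))
print(np.mean(est_blocked), np.var(est_blocked))
```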

Bad Weather: Getting weather data by zip and date

27 Jun

High-quality weather data are public. But they aren’t easy to make use of.

Below is some advice, and some scripts, for finding out the weather in a particular zip code on a particular day (or a set of dates).

Some brief ground clearing before we begin. Weather data come from weather stations, which can belong to any of five or more “networks”, each of which collects somewhat different data, sometimes labels the same data differently, and has different reporting protocols. The only geographic information that typically comes with a weather station is its latitude and longitude. By “weather”, we may mean temperature, rain, wind, snow, etc., and we may want data on these for every second, minute, hour, day, month, etc. It is good to keep in mind that not all weather stations report data for all units of time, and there can be a fair bit of missing data. Getting data at coarse time units like day, month, etc. typically involves deciding which statistic is most useful: for daily temperature, you may want the minimum and maximum; for rainfall and snow, totals. With that primer, let’s begin.
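The aggregation decision can be made concrete. Assuming hourly readings are already in hand, here is a minimal sketch (with made-up data) of rolling them up into daily statistics, using different statistics for different variables:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical hourly readings from a single station (made-up data):
# a daily temperature cycle plus noise, and exponential rainfall.
idx = pd.date_range("2023-06-01", periods=72, freq="h")
hourly = pd.DataFrame(
    {"temp": 20 + 5 * np.sin(np.arange(72) / 24 * 2 * np.pi) + rng.normal(0, 1, 72),
     "precip": rng.exponential(0.1, 72)},
    index=idx,
)

# Different statistics suit different variables: min/max for daily
# temperature, totals for precipitation.
daily = hourly.resample("D").agg({"temp": ["min", "max"], "precip": "sum"})
print(daily)
```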

We begin with what not to do: do not use the NOAA web service. The API provides a straightforward way to get “weather” data for a particular zip for a particular month. Except the requests often return nothing, and it isn’t clear why. The documentation doesn’t say whether the search for the closest weather station is limited to X kilometers; without such a limit, one should have data for all zip codes and all dates. Nor does the API return how far away the station is from which it got the data, though one can work that out post hoc using the Google Geocoding API. Still, given the possibility that the backend for the API improves over time, here’s a script for getting the daily weather data, and hourly precipitation data.

On to what can be done. The “web service” that you can use is the Farmer’s Almanac’s. Sleuthing using scripts that we discuss later reveals that the Almanac reports data from the NWS-USAF-NAVY stations (ftp link to the data file). It appears to have data for most dates, though no information is provided on the weather station from which the data come, or on that station’s distance from the zip code.

If you intend to look for data from GHCND, COOP, or ASOS, there are two kinds of crosswalks that you can create: one that goes from zip codes to weather stations, and one that goes from weather stations to zip codes. I assume that we don’t have access to shape files (for census zip codes), and that postal zip codes encompass a geographic region. To create a weather station to zip code crosswalk, a web service such as Geonames or the Google Geocoding API can be used. If the station’s lat./long. is in the zip code, the distance comes up as zero. Otherwise, the distance is calculated from the “centroid” of the zip code (see the geonames script, which finds the 5 nearest zips for each weather station). For creating a zip code to weather station crosswalk, we get the centroid of each zip using a web service such as Google’s (or use already-provided centroids from free zip databases), and then find the “nearest” weather stations by calculating the distance to each station.

For a given set of zip codes, you can get a list of the closest weather stations (you can choose to get the n closest stations, or all weather stations within an x kilometer radius, and/or choose to get stations from particular network(s)) using the following script. The output lists, for each zip code, weather stations arranged by proximity. The task of getting weather data from the closest station is simple thereon: get data (on the particular columns of your choice) from the closest weather station for which the data are available. You can do that for a particular zip code and date (or date range) combination using the following script.
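The scripts referenced above are the author’s own. Purely as an illustration of the underlying distance calculation, here is a minimal sketch: a haversine great-circle distance from a zip centroid to each station, ranked and capped at a radius. The station IDs and all coordinates are invented.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/long points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical inputs: one zip centroid and a few stations (made-up coordinates).
zip_centroid = (40.7128, -74.0060)
stations = {
    "STATION_A": (40.78, -73.97),
    "STATION_B": (40.64, -74.17),
    "STATION_C": (41.05, -73.71),
}

# Rank stations by distance to the centroid; keep those within 50 km.
ranked = sorted(
    (haversine_km(*zip_centroid, *coords), sid) for sid, coords in stations.items()
)
nearest = [(sid, round(d, 1)) for d, sid in ranked if d <= 50]
print(nearest)
```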

Not feeling as warm about Whites anymore

4 Mar

The difference between Whites’ thermometer ratings (0 = cold, 100 = warm) of Whites and Blacks has gone down relatively steadily over the past 50 or so years.
[Figure: Whites’ thermometer ratings of Whites minus their ratings of Blacks, over time (ANES)]

However, the decline is almost entirely explained by a drop in ratings of Whites.
[Figure: Whites’ thermometer ratings of Whites, over time (ANES)]

Aging Curves of Political Knowledge

2 Mar

[Figure: political knowledge by age]

Each line represents data from a different year. Source: American National Election Studies.

Now, aging curves of political knowledge by cohort.

[Figure: aging curves of political knowledge, by four-year cohort]

Why were the polls so accurate?

16 Nov

The Quant. Interwebs have overflowed with joy since the election. Poll aggregation works. And so indeed does polling, though you won’t hear as much about it on the news, which is likely biased towards celebrity intellects rather than the hardworking many. But why were the polls so accurate?

One potential explanation: because they do some things badly. For instance, most fail at collecting “random samples” these days, because of a fair bit of nonresponse bias. This nonresponse bias – if correlated with propensity to vote – may actually push up the accuracy of the vote choice means. There are a few ways to check this theory.

One way to check this hypothesis: were results from polls using Likely Voter screens different from those not using them? If not, why not? From the political science literature, we know that people who vote (not just those who say they vote) do vary a bit from those who do not vote, even on things like vote choice. For instance, there is a larger proportion of ‘independents’ among non-voters.

Other kinds of evidence will be in the form of failures to match population or other benchmarks. For instance, election polls would likely fare poorly when predicting how many people voted in each state, or when tallying up Spanish language households or the number of registered voters. Another way of saying this is that the bias will vary by what parameter we aggregate from these polling data.

So let me reframe the question – how do polls get election numbers right even when they undercount Spanish speakers? One explanation is positive correlation between selection into polling, and propensity to vote, which makes vote choice means much more reflective of what we will see come election day.
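The positive-correlation story can be sketched in a toy simulation. A latent engagement score drives both turnout and vote choice, and survey response is much likelier among voters; the poll mean then lands closer to the voters’ mean than the full-population mean does. Every number below is invented.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200_000

# A hypothetical electorate (all numbers made up): a latent engagement
# score drives turnout, and vote choice tilts with engagement.
engagement = rng.normal(size=N)
votes_dem = rng.random(N) < 0.5 + 0.1 * (engagement > 0)
turns_out = rng.random(N) < 1 / (1 + np.exp(-(engagement - 0.5)))

# Nonresponse correlated with the propensity to vote: voters are ten
# times more likely to answer the poll than non-voters.
responds = rng.random(N) < np.where(turns_out, 0.20, 0.02)

population = votes_dem.mean()         # everyone, voters or not
actual = votes_dem[turns_out].mean()  # what the election records
poll = votes_dem[responds].mean()     # what the poll measures

print(round(population, 3), round(actual, 3), round(poll, 3))
```

In this setup the poll undercounts non-voters badly, yet its vote-choice mean tracks the election outcome better than a perfectly representative sample of all adults would.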

The other possible explanation for all this: post-stratification or other post hoc adjustment of the numbers, or innovations in how sampling is done (matching, stratification, etc.). Doing so uses additional knowledge about the population and can shrink standard errors and improve accuracy. One way to test for such non-randomness: overly tight confidence bounds. Many polls tend to do wonderfully on multiple uncorrelated variables – for instance, census region proportions, gender, etc. – something random samples cannot regularly produce.

Raising money for causes

10 Nov

Four teenagers, at the cusp of adulthood and eminently well-to-do, were out on the pavement raising money for children struck with cancer. They had been out raising money for a couple of hours, and from a glance at their tin pot, I estimated that they had raised about $30, likely less. That works out to under $4 per person per hour, well below minimum wage. I couldn’t help but wonder whether their way of raising money for charity was rational; they could have easily raised more by working minimum wage jobs and donating the earnings. Of course, these teenagers aren’t alone. Think of the people out in the cold raising money for the poor on New York pavements. My sense is that many people do not think as often about raising money by working at a “regular job”, even when it is more efficient (money/hour), and perhaps even more pleasant. It is not clear why.

The same argument applies to those who run marathons etc. to raise money. Preparing for and running in a marathon generally costs at least hundreds of dollars for the average ‘Joe’ (think of the sneakers, the personal trainers people hire, the time they ‘donate’ to training, which could have been spent working and donating that money to charity, etc.). Ostensibly, as I conclude in an earlier piece, they must have motives beyond charity. These non-charitable considerations, at least at first glance, do not seem to apply to the teenagers, or to those raising money out in the cold in New York.

Random redistricting (a bit more efficiently)

25 Sep

In a forthcoming article, Chen and Rodden estimate the effect of ‘unintentional gerrymandering’ on the number of seats that go to a particular party. To do so, they pick a precinct at random and then add (randomly chosen) adjacent precincts to it till the district reaches a certain size (decided by the total number of districts one wants to create). Then they create a new district in the same manner, starting from a randomly selected precinct bordering the first district. This goes on till all the precincts are assigned to a district. There are some additional details, but they are immaterial to the point of this note. A smarter way to do the same thing would be to create just one district, over and over again (each time starting with a randomly chosen precinct). This would reduce the computational burden (memory for storing edges, differencing shape files, etc.) while leaving the estimates unchanged.
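The region-growing procedure described above can be sketched on a hypothetical grid of precincts (all details invented; real implementations work with precinct adjacency graphs from shape files):

```python
import random
from collections import Counter

random.seed(0)
ROWS, COLS, N_DISTRICTS = 6, 6, 4  # 36 toy precincts into 4 districts of 9

def neighbors(cell):
    """Adjacent precincts on the grid (rook adjacency)."""
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < ROWS and 0 <= c + dc < COLS:
            yield (r + dr, c + dc)

def random_plan():
    """Grow districts one at a time from random seed precincts."""
    target = (ROWS * COLS) // N_DISTRICTS
    unassigned = {(r, c) for r in range(ROWS) for c in range(COLS)}
    plan = {}
    for d in range(N_DISTRICTS):
        district = {random.choice(sorted(unassigned))}
        while len(district) < target:
            frontier = sorted({n for cell in district for n in neighbors(cell)
                               if n in unassigned and n not in district})
            if not frontier:   # boxed in; scrap this plan and retry
                return None
            district.add(random.choice(frontier))
        plan.update({cell: d for cell in district})
        unassigned -= district
    return plan

plan = None
while plan is None:            # retry until every precinct is assigned
    plan = random_plan()
print(Counter(plan.values()))
```

Each district is contiguous by construction; the retry loop handles the occasional plan where the remaining precincts get boxed in.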

A potential source of bias in estimating impact of televised campaign ads

16 Aug

One popular strategy for estimating the impact of televised campaign ads is to exploit ‘accidental spillover’ (see Huber and Arceneaux 2007). The identification strategy builds on the following facts: ads on local television can only be targeted at the DMA level; DMAs sometimes span multiple states; and where DMAs span battleground and non-battleground states, ads targeted at residents of battleground states are seen by those in non-battleground states. In short, people in non-battleground states are ‘inadvertently’ exposed to the ‘treatment’. Behavior/attitudes etc. of the residents who were inadvertently exposed are then compared to those of other (unexposed) residents in those states. The benefit of this identification strategy is that it decouples television ads from the ground campaign and other campaign activities, such as presidential visits (though people in the spillover region are exposed to television coverage of the visits). It also decouples ad exposure from strategic targeting based on characteristics of the battleground DMA. There is evidence that the content, style, volume, etc. of television ads are ‘context aware’ – they vary depending on which DMA they run in. (After accounting for the cost of running ads in a DMA, some of the variation in volume/content etc. across DMAs within states can be explained by the partisan profile of the DMA, etc.)

By decoupling strategic targeting from message volume and content, we only get an estimate of the ‘treatment’ as if it were targeted dumbly. If one wants an estimate of the ‘strategic treatment’, such quasi-experimental designs relying on accidental spillover may be inappropriate. How, then, to estimate the impact of strategically targeted televised campaign ads: first estimate how ads are targeted depending on characteristics of the area and the people (political interest moderates the impact of political ads; see, e.g., Ansolabehere and Iyengar 1995), next estimate the effect of the messages using the H/A strategy, and then reweight the effect using the estimates of how the ad is targeted.
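The reweighting step reduces to a weighted average over strata. A toy sketch, in which the strata, their effects, and all shares are invented for illustration:

```python
import numpy as np

# Hypothetical strata of viewers (e.g., by political interest);
# every number below is made up.
strata = ["low interest", "medium interest", "high interest"]
effect = np.array([0.01, 0.03, 0.05])         # step 2: per-stratum effect (H/A-style)
share_population = np.array([0.4, 0.4, 0.2])  # composition under "dumb" targeting
share_targeted = np.array([0.2, 0.4, 0.4])    # step 1: where ads actually go

# "Dumbly targeted" effect: ads hit strata in population proportions.
ate_dumb = float(effect @ share_population)
# Step 3: reweight by the strategic targeting distribution.
ate_strategic = float(effect @ share_targeted)
print(ate_dumb, ate_strategic)
```

Here, because the campaign concentrates ads on the strata where they work best, the reweighted (strategic) effect exceeds the dumbly targeted one.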

One can also try to estimate the effect of ‘strategy’ by comparing adjusted treatment effect estimates (adjusted by regressing out other campaign activity) in DMAs where the treatment was targeted and in DMAs where it wasn’t.

Sample this

1 Aug

What do single-shot evaluations of MT (replace it with anything else) samples (vis-a-vis census figures) tell us? I am afraid very little. Inference rests upon knowledge of the data (here: respondent) generating process. Without a model of the data generating process, all such research reverts to a modest tautology: sample A was closer to census figures than sample B on parameters X, Y, and Z. This kind of comparison has limited utility: as a companion for substantive research. It is not particularly useful, however, if we want to understand the characteristics of the data generating process. For even if the respondent generation process is random, any one draw (here: sample) can be far from the true population parameter(s).
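The last point, that even a genuinely random draw can land far from the population parameter, is easy to simulate. The population share and sample size below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 0.48, 500  # hypothetical population share and sample size (made up)

# Even if the respondent-generation process is genuinely random, any one
# draw of n respondents can sit well away from the population parameter.
draws = rng.binomial(n, p, size=10_000) / n
off_by_4 = np.mean(np.abs(draws - p) > 0.04)
print(round(draws.min(), 3), round(draws.max(), 3), round(off_by_4, 3))
```

A non-trivial share of these perfectly random samples misses the truth by more than four percentage points, so one sample matching (or missing) census figures says little about the process that generated it.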

Even with lots of samples (respondents), we may not be able to say much if the data generation process is variable. Where there is little reason to expect the data generation process to be constant – and it is hard to see why the MT respondent generation process for political surveys would be constant (it likely depends on the pool of respondents, which in turn depends perhaps on the economy, the incentives offered, the changing lure of those incentives, the content of the survey, etc.) – we cannot generalize. Of course, one way to correct for all of that is to model the variation in the data generating process, but that will require longer observational spans, more attention to potential sources of variation, etc.