A Relatively Ignored Mediational Variable in Deliberation

5 Jun

Deliberation often causes people to change their attitudes. One reason this may happen is that deliberation also causes people’s values to change. Thus, one mediational model is: change in values causes change in attitudes. However, deliberation can also cause people to connect their attitudes better with their values, without changing those values. This means that, ceteris paribus, the same values may yield different attitudes post-treatment. For identification, one may want to devise a treatment that changes respondents’ ability to connect values to attitudes but has no impact on the values themselves. Or a researcher may try to gain insight by measuring people’s ability to connect values to attitudes in current experiments and estimating mediational models. One can also simply try simulation (again subject to the usual concerns): fit the post-treatment model (regressing attitudes on values), plug in pre-deliberation values, and simulate attitudes.
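
For concreteness, here is a minimal sketch of that simulation in Python; the data files and the value/attitude column names are hypothetical placeholders.

```python
# Fit the post-deliberation model (attitudes on values), then feed it
# pre-deliberation values to simulate "connection-only" attitude change.
import pandas as pd
import statsmodels.api as sm

post = pd.read_csv("post_deliberation.csv")   # hypothetical file
pre = pd.read_csv("pre_deliberation.csv")     # hypothetical file

X_post = sm.add_constant(post[["value1", "value2"]])
model = sm.OLS(post["attitude"], X_post).fit()

# Apply the post-treatment mapping to pre-treatment values
X_pre = sm.add_constant(pre[["value1", "value2"]])
simulated_attitudes = pd.Series(model.predict(X_pre))

# Compare simulated attitudes to observed post-deliberation attitudes
print(simulated_attitudes.describe())
print(post["attitude"].describe())
```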

A Quick Scan: From Paper to Digital

28 May

There are two ways of converting paper to digital data: ‘human OCR and input’, and machine-assisted. Here are some useful pointers about the latter.

Scanning

Since the success of so much of what comes after depends on the quality of the scanned document, invest a fair bit of effort in obtaining high-quality scans. If the paper input is a bound book, and especially if it is thick, it is hard to copy text close to the spine without destroying the spine. If the book can be bought at a reasonable price, do exactly that – destroy the spine – cut the book and then scan it. Many automated scanning services will cut the spine by default. Other things to keep in mind: scan at a reasonably high resolution (since storage is cheap, go for at least 600 dpi), and if choosing PDF as an output option, see if the scan can be saved as a “searchable PDF” (so scanning + OCR in one go).

An average person working without too much distraction can scan 60-100 images per hour. If two pages of the book can be scanned at once, this means 120-200 pages per hour. Assuming you can hire someone to scan for $10/hr, that comes to 12-20 pages per dollar, or 5 to 8 cents per page. However, scanning manually is boring, and people who are punctilious about quality page after page are few and far between. Relying on automated scanning companies may be better. And cheaper. 1dollarscan.com charges 2 cents per page with OCR. But there is no free lunch. Most automated scanning services cut the spines of the book, and many don’t send back the paper copy for reasons to do with copyright. So you may have to factor in the cost of the book. And typically the scanning services do all or nothing: you can’t give directions to scan the first 10 pages, followed by the middle 120, etc. Thus per-relevant-page costs may exceed those of manual scanning.

Scan to Text

If the text is clear and laid out simply, most commonly available OCR software will do just fine. Acrobat Professional’s own facility for recognizing text in images, found under ‘Tools’, is reasonable. Acrobat also flags words that it is unsure about – it calls them ‘suspects’; you can click through the line-up of suspects, correcting them as needed. The interface for making corrections is suboptimal, but it is likely to prove satisfactory for small documents with few errors.

Those without ready access to Acrobat Professional can extract text from a ‘searchable PDF’ using xpdf or any of the popular programming languages. In Python, pyPdf (see script) and pdfminer (among other libraries) are popular. If the document is a set of images, one can use libraries based on Tesseract (see script). PDF documents need to be converted to images using Ghostscript or similar rasterization software before being fed to Tesseract.
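
A minimal sketch of both routes, using pdfminer.six for the text layer and pytesseract plus pdf2image for the image route; these are stand-ins for whichever libraries you prefer, and the file name is a placeholder.

```python
from pdfminer.high_level import extract_text
from pdf2image import convert_from_path
import pytesseract

# Case 1: searchable PDF -- the text layer can be pulled out directly
text = extract_text("scanned_book.pdf")

# Case 2: image-only PDF -- rasterize pages, then OCR each image
pages = convert_from_path("scanned_book.pdf", dpi=300)
ocr_text = "\n".join(pytesseract.image_to_string(page) for page in pages)
```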

But what if the quality of the scans is poor or the page layout complicated? First, try enhancing the images – fixing orientation, applying filters to improve readability, etc. This process can be automated if the images are distorted in similar ways. Second, try extracting bounding boxes for columns/paragraphs and words/characters (position/size) and font styles (name/size) by choosing XML/HTML as the output format. This information can later be exploited for alignment, etc. However, how much you gain from extracting style and bounding-box information depends heavily on the quality of the original PDF. For low-quality PDFs, mislabeling of font size and style can be common, which means the information cannot be used reliably. Third, explore training the OCR engine. Tesseract can be trained to improve OCR, though training it isn’t straightforward. Fourth, explore good professional OCR engines such as AbbyyFine. OCR in AbbyyFine can be improved easily by adding training data and by tuning the options for identifying the proper ‘area order’ (which text area follows which, which portion of the page isn’t part of a text area, etc.).
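
A sketch of the first two steps – light image cleanup and bounding-box extraction – using Pillow and pytesseract; the specific filters and the confidence cutoff are illustrative guesses, not recommendations.

```python
from PIL import Image, ImageOps, ImageFilter
import pytesseract

img = Image.open("page_017.png").convert("L")       # hypothetical page image
img = ImageOps.autocontrast(img)                    # stretch contrast
img = img.filter(ImageFilter.MedianFilter(size=3))  # mild despeckling

# Word-level boxes (left, top, width, height) plus confidence scores,
# which can later be used for aligning columns or flagging suspect words
data = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)
for word, conf, left, top in zip(data["text"], data["conf"], data["left"], data["top"]):
    if word.strip() and float(conf) < 60:
        print(f"low-confidence word {word!r} at ({left}, {top})")
```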

Post-processing: Correcting Errors

The OCR process makes certain kinds of errors. For instance, an ‘i’ may be confused for an ‘l’ or for a ‘pipe.’ And these errors are often repeated, with the consequence that some words are misspelled systematically. One way to deal with such errors is to simply search and replace particular strings (see script). When doing so, be mindful of the problems that can ensue from searching and replacing short strings. For instance, replacing ‘lt’ with ‘it’ may mean converting ‘salt’ to ‘sait.’
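
A minimal sketch of such a search-and-replace, with word-boundary anchors to avoid the ‘salt’-to-‘sait’ problem; the correction pairs are illustrative.

```python
import re

corrections = {"tbe": "the", "lnput": "input"}  # illustrative OCR confusions

def correct(text, corrections):
    for wrong, right in corrections.items():
        # \b keeps the replacement from firing inside longer words
        text = re.sub(r"\b" + re.escape(wrong) + r"\b", right, text)
    return text

print(correct("tbe lnput was scanned", corrections))  # "the input was scanned"
```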

Note: For a more general account of matching dirty data, read this article.

It is typically useful to write a search-and-replace script driven by a database of search terms and the terms you propose to replace them with. For one, it allows you to apply the same set of corrections to many documents. For another, it can easily be amended and re-run. While writing these scripts (see script), it is important to keep text-encoding issues in mind; OCR yields some ligatures (e.g., fi) and other Unicode characters.
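
A sketch of what such a script might look like, assuming a hypothetical corrections.csv with ‘wrong’ and ‘right’ columns and a folder of OCR’d text files; note the explicit UTF-8 handling for ligatures and other Unicode characters.

```python
import csv
import pathlib
import re

def load_corrections(path):
    with open(path, encoding="utf-8", newline="") as f:
        return {row["wrong"]: row["right"] for row in csv.DictReader(f)}

def apply_corrections(text, corrections):
    for wrong, right in corrections.items():
        text = re.sub(r"\b" + re.escape(wrong) + r"\b", right, text)
    return text

corrections = load_corrections("corrections.csv")      # columns: wrong,right
for doc in pathlib.Path("ocr_output").glob("*.txt"):   # hypothetical folder
    cleaned = apply_corrections(doc.read_text(encoding="utf-8"), corrections)
    doc.with_suffix(".clean.txt").write_text(cleaned, encoding="utf-8")
```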

Searching and replacing particular strings can prove time consuming, as the same word can be misspelled in tens of different ways. For instance, in a recent project, the word “programming” was misspelled in no fewer than 28 ways. The easiest extension to this approach is to search for patterns – for instance, some number of sequential errors at any point in the string. One can extend it further using various ‘edit distances’, e.g., Levenshtein distance, the minimum number of single-character edits needed to convert one word into another, which lets the user handle non-sequential errors (see script). Again, the trick is to keep the length of the string in mind, as false positives may abound. One can do that by choosing a metric that factors in the size of the string and the number of errors – for instance, (Levenshtein distance)/(length of the original string). Lastly, rather than use edit distance, one can apply ‘better’ pattern-matching algorithms, such as Ratcliff/Obershelp.
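
A sketch of the fuzzy-matching idea using Python’s difflib, whose SequenceMatcher implements a Ratcliff/Obershelp-style similarity ratio; the vocabulary and the 0.8 cutoff are illustrative. Because the ratio already scales with string length, short strings like ‘salt’ are less likely to be falsely “corrected.”

```python
import difflib

vocabulary = ["programming", "computer", "algorithm"]  # known-correct words

def best_match(word, vocabulary, cutoff=0.8):
    matches = difflib.get_close_matches(word, vocabulary, n=1, cutoff=cutoff)
    return matches[0] if matches else word  # leave the word alone if no good match

for ocr_word in ["programrning", "cornputer", "salt"]:
    print(ocr_word, "->", best_match(ocr_word, vocabulary))
```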

Spell checkers – a combination of pattern-matching libraries and databases (dictionaries) – are another common way of searching and replacing strings. There are various freely available spell-check databases (to be used with your own pattern-matching algorithm) and libraries, including pyEnchant for Python. One can even use VB to call MS Word’s fairly reasonable spell-checking functions. However, if the document contains lots of unique proper nouns, spell check is liable to create more problems than it solves. One way to reduce these errors is to (manually) amend the database. Another is to limit corrections to words of a certain size, or to cases where the suggested word and the word in the text differ only by certain kinds of letters (or non-letters, e.g., ‘pipe’ for the letter ‘l’). One can also leverage ‘Google Suggest’ to fix spelling errors. Spell checkers built for particular OCR errors, such as ocrspell, can also be used. If these methods still yield too many false corrections, one can go for a semi-automated approach: use the spell checker to harvest problematic words and recommended replacements, and then let people pick the right version. A few tips for creating human-assisted spell-check versions: eliminate duplicates, and provide the user with neighboring words (two words before and after can work for some projects). Lastly, one can use M-Turk to iteratively proofread the document (see TurkIt).
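
A minimal sketch of the harvesting step with pyEnchant; the minimum word length and the sample text are placeholders.

```python
import enchant

checker = enchant.Dict("en_US")

def harvest(words, min_len=4):
    """Yield (problem word, top suggestions) pairs for human review."""
    seen = set()
    for word in words:
        # skip short words, duplicates, and correctly spelled words
        if len(word) < min_len or word in seen or checker.check(word):
            continue
        seen.add(word)
        yield word, checker.suggest(word)[:3]

text = "Tbe programming langauge was scanned".split()
for word, suggestions in harvest(text):
    print(word, "->", suggestions)
```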

Error Free Multi-dimensional Thinking

1 May

Some recent research suggests that Americans’ policy preferences are highly constrained, with a single dimension able to correctly predict over 80% of the responses (see Jessee 2009, Tausanovitch and Warshaw 2013). Not only that, adding a new (orthogonal) dimension doesn’t improve prediction success by more than a couple of percentage points.

All this flies in the face of conventional wisdom in American Politics, which is roughly antipodal to the new view: most people’s policy preferences are unstructured. In fact, many people don’t have any real preferences on many of the issues (‘non-preferences’). The evidence most often cited in support of this view comes from Converse – weak correlations between preferences across measurement waves spanning two years (r ~ .4 to .5), and even lower within-wave cross-issue correlations (r ~ .2).

What explains this double disagreement – over authenticity of preferences, and over structuration of preferences?

First, the authenticity of preferences. When reports of preferences change across waves, is it a consequence of attitude change, non-preferences, or measurement error? In response to concerns about long periods between test and retest – which allowed for opinions to genuinely change – researchers tried shorter time periods. Correlations were notably stronger (r ~ .6 to .9) (see Brown 1970). But the sheen came off these healthy correlations amid concerns that stability was merely an artifact of people remembering and reproducing what they had put down last time.

Redemption of correlations over longer time periods came from Achen (1975). While few of the assumptions behind the redemption hold exactly – notably uncorrelated errors (across individuals, waves, etc.) – for the inferences to be seriously wrong, much has to go wrong. More recently, and, subject to validation, perhaps more convincingly, work by Dean Lacy suggests that once you take out the small number of implausible transitions between waves – those from one end of the scale to the other – cross-wave correlations are fairly healthy. (This is exactly opposite to the conclusion Converse came to based on a Markov model; he argued that aside from a few consistent responses, the rest were mostly noise.) Much simpler but informative tests are still missing. For instance, it seems implausible that lots of people who hold well-defined preferences on an issue would struggle to pick even the right side of the scale when surveyed. Tallying the stability of dichotomized preferences would be useful.

Some other purported evidence for the authenticity of preferences comes from measurement error models that rest on stronger assumptions. These models assume an underlying trait (or traits) and pool preferences over disparate policy positions (see, for instance, Ansolabehere, Rodden, and Snyder 2006, but also Tausanovitch and Warshaw 2013). How do we know there is an underlying trait? That isn’t clear. Generally, it is perfectly okay to ask whether preferences are correlated, less so to simply assume that preferences are structured by an unobserved underlying mental construct.

With the caveat that dimensions may not reflect mental constructs, we next move to assessing claims about the dimensionality of preferences. Differences between recent results and conventional wisdom about “constraint” may simply be due to an increase in the structuration of preferences over time. However, research suggests that constraint hasn’t increased over time (Baldassari and Gelman 2008). Perhaps more plausibly, dichotomization – which presumably reduces measurement error – is behind some of the differences. There are, of course, less ham-handed ways of reducing measurement error – for instance, using multiple items to measure preferences on a single policy, as psychologists often do. Since it cannot be emphasized enough, the lesson of the past two paragraphs is: keep adjustments for measurement error and the measurement of constraint separate.

Analyses suggesting higher constraint may also be an artifact of analysts’ choices. Dimension-reduction techniques are naturally sensitive to the pool of items. If a large majority of the items solicit preferences on economic issues (as in Tausanovitch and Warshaw 2013), the first principal component will naturally pick up preferences on that dimension. Since the majority of the gains would come from correctly predicting the large majority of the items, gains in percentage correctly predicted would be a poor way to judge whether there is another dimension, say preferences on cultural issues. Cross-validation across selected large item groups (large enough to overcome idiosyncratic error) would be a useful strategy. And then again, gains in percentage correctly predicted over the entire population may miss subgroups with very different preference structures – for instance, Blacks and Catholics, who tend to be more socially conservative but economically liberal. Lastly, it is possible that preferences on some current issues (such as those used by Jessee 2009) are more structured (by political conflict) than preferences on some long-standing issues.
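
The item-pool point can be illustrated with a toy simulation – made-up numbers, not a reanalysis of any of the cited data: when most items tap one dimension, the first principal component tracks that dimension and the apparent gain from a second component looks small, even though a second dimension exists by construction.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 2000
econ = rng.normal(size=n)   # economic preferences
cult = rng.normal(size=n)   # cultural preferences, orthogonal by design

# 20 economic items, 3 cultural items, each with response noise
items = np.column_stack(
    [econ + rng.normal(scale=0.5, size=n) for _ in range(20)] +
    [cult + rng.normal(scale=0.5, size=n) for _ in range(3)]
)

pca = PCA(n_components=2).fit(items)
print(pca.explained_variance_ratio_)    # first component dominates
print(np.round(pca.components_[0], 2))  # loads almost entirely on the econ items
```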

Gandhi and his critics

7 Mar

Gandhi could never come to terms with the fact that he took leave from his dying father to have sex with his wife; his father died while he copulated. The episode produced a lifelong obsession with overcoming sexual desire, and with sanitation (or so Freudians will claim). Unrelatedly, Gandhi had unconventional (even bad, for their time) ideas about some other important matters – he wasn’t a fan of industrialization. All this is well known.

Was Gandhi a hidden, if not manifest, Hindu nationalist with an upper-caste agenda? None-too-careful ideological hobbyist historians like Arundhati Roy would have you believe that. Do they have a point? No.

There are a great many similarities between how Jinnah and Ambedkar argued their respective cases with Gandhi. Not much distinguishes how Gandhi responded to each: often refusing to agree to the ‘facts’ that motivated their arguments, and always disagreeing with the claim that there was just one solution (the solution they proposed) to the problem they had identified. Gandhi saw both leaders as too infatuated with their solutions (Gandhi was a touch too infatuated with his own). He thought their solutions were irresponsible, if not illogical. Gandhi saw eye to eye with both Jinnah and Ambedkar on the problems (we have good evidence of that), but never on the solutions. Does that make him opposed to their aims? No. His aims were the same as theirs, if not more ambitious.

(Upper-caste) Hindus are never going to change. Replace ‘upper-caste Hindus’ with any other group and you have a fair gist of the dominant understanding of people of ‘other groups.’ There is no easier caricature of humanity than this. If you believe it, the solution is obvious: kill or split. Order restored. Except, often enough, order isn’t restored. The legacy of hatred lives on. The oppressed mutate into oppressors of their ‘own’ kind. (Who counts as your own is something we don’t think enough about, relying often on simple heuristics. Is Lalu Prasad Yadav a well-wisher of all Yadavs? I think not. The same goes for enemies.)

You need more courage to see the greater truth – that people so thoughtlessly cruel can just as easily become defenders of enlightened ‘common sense’, that certain truths can be understood by people and that many will (and do) happily sacrifice their material advantage once they understand those facts. You also need courage to work from this greater truth. Creating change in people isn’t easy. Quite the opposite. But over the long run it is perhaps the only solution.

But then, a lot of change (both positive and negative) has come incidentally, not as a result of conscious programs. Demographics, along with particular democratic institutions in India, have increased the political power of the lower castes (though, like everybody else, they haven’t always used it wisely). And economic liberalization – brought about for different reasons – may have done more to erase caste boundaries than many conscious attempts.

Capuchin Monkeys and Fairness: I want at least as much as the other

1 Dec

In a much-heralded experiment, a capuchin monkey rejects a reward (food) for doing a task after seeing another monkey being rewarded with something more appetizing for doing the same task. This has been interpreted as evidence for our ‘instinct for fairness.’ But there is more to the evidence. The fact that the monkey that gets the heftier reward doesn’t protest the more meager reward given to the other monkey goes uncommented upon, though it is highly informative. Ideally, any weakly reasoned deviation from equality should provoke a negative reaction. Yet monkeys who get the longer end of the stick – even when aware that others are getting the shorter end – don’t complain. Primates are peeved only when they themselves are getting the short end of the stick, not so much when someone else gets it. My sense is that this is true for most humans as well – people care far more about holding the short end of the stick themselves than about others holding it. It is thus incorrect to attribute such behavior to an ‘instinct for fairness.’ A better attribution may be to the following rule: I want at least as much as the others are getting.

Sampling on M-Turk

13 Oct

In many of the studies that use M-Turk, there appears to be little strategy to sampling. A study is posted (and reposted) on M-Turk until a particular number of respondents take it. If the pool of respondents reflects true population proportions, if people arrive in no particular order, and if all kinds of people find the monetary incentive equally attractive, the method should work well. There is reasonable evidence to suggest that at least the first and third conditions are violated. One costly but easy fix for the third is to increase payment rates. We can likely do better.

If we are agnostic about the variable on which we want precision, here’s one way to sample: start with a list of strata and their proportions in the population of interest. If the population of interest is US adults, the proportions are easily known. Set up screening questions, and recruit. Raise the payment to fill cells that are running short. Take simple precautions. For one, to prevent gaming, do not change the recruitment prompt to let people know that you want X kinds of people.
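
A minimal sketch of the screening logic; the strata, targets, and cell definitions are placeholders.

```python
TARGETS = {  # desired completes per cell, e.g., from Census proportions
    ("18-34", "female"): 150, ("18-34", "male"): 150,
    ("35-64", "female"): 250, ("35-64", "male"): 250,
    ("65+", "female"): 100, ("65+", "male"): 100,
}
counts = {cell: 0 for cell in TARGETS}

def screen(age_group, gender):
    """Return True if the respondent should be admitted to the study."""
    cell = (age_group, gender)
    if cell not in TARGETS or counts[cell] >= TARGETS[cell]:
        return False
    counts[cell] += 1
    return True

# Cells still running short can be re-posted at a higher rate until they fill.
```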

On balance, let there be imbalance on observables

27 Sep

For whatever reason, some people are concerned with imbalance when analyzing data from randomized experiments. The concern may be more general, but its fixes devolve into reducing imbalance on observables. Such fixes may fix things or break things. More generally, it is important to keep in mind what one experiment can show. If randomization is done properly, and other assumptions hold, the most common estimator of experimental effects – the difference in means – is unbiased. We also have a good idea of how often the true effect will fall within the bounds. For tightening those bounds, relying on sample size is the way to go. General rules apply: larger is better. But there are some refinements to that general rule. When every person/thing is the same – for instance, neutrons* (in most circumstances) – and if measurement error isn’t a concern, a sample of 1 will do just fine. The point also holds for a case potentially easier to obtain than everybody/thing being the same: when the treatment effect is constant across things/people. When everyone differs with respect to the treatment effect, randomization won’t help with that, though one can always try to quantify the differences. More generally, the greater the heterogeneity, the larger the sample required for a particular level of balance. Stratified random assignment (or blocking) will help. This isn’t to say that the raw difference estimator will be biased without it. It won’t be. Just that its variance will be higher.

* Based on discussion in Gerber and Green on why randomization is often not necessary in physics.
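
A toy simulation of the closing point (illustrative numbers, not from Gerber and Green): with heterogeneous treatment effects, both complete and blocked randomization give unbiased difference-in-means estimates, but blocking on a prognostic covariate shrinks the variance.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.binomial(1, 0.5, n)       # prognostic covariate (the block)
y0 = 2 * x + rng.normal(size=n)   # potential outcome under control
y1 = y0 + 1 + x                   # heterogeneous treatment effect

def diff_in_means(treat):
    y = np.where(treat == 1, y1, y0)
    return y[treat == 1].mean() - y[treat == 0].mean()

complete, blocked = [], []
for _ in range(5000):
    t_complete = rng.permutation(np.repeat([0, 1], n // 2))
    t_blocked = np.empty(n, dtype=int)
    for b in (0, 1):              # randomize separately within each block
        idx = np.where(x == b)[0]
        t_blocked[idx] = rng.permutation(np.tile([0, 1], len(idx) // 2 + 1)[:len(idx)])
    complete.append(diff_in_means(t_complete))
    blocked.append(diff_in_means(t_blocked))

print(np.mean(complete), np.mean(blocked))  # both near the true average effect
print(np.std(complete), np.std(blocked))    # the blocked estimator has the smaller spread
```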

Bad Weather: Getting weather data by zip and date

27 Jun

High-quality weather data are public. But they aren’t easy to make use of.

Below is some advice, and some scripts, for finding out the weather in a particular zip code on a particular day (or set of dates).

Some brief ground clearing before we begin. Weather data come from weather stations, which can belong to any of five or more “networks.” Each network collects somewhat different data, sometimes labels the same data differently, and has different reporting protocols. The only geographic information that typically comes with weather stations is their latitude and longitude. By “weather”, we may mean temperature, rain, wind, snow, etc., and we may want data on these for every second, minute, hour, day, month, etc. Keep in mind that not all weather stations report data for all units of time, and there can be a fair bit of missing data. Getting data at coarse time units such as day or month typically involves deciding which statistic is most useful – for instance, minimum and maximum (for daily temperature), or totals (for rainfall and snow). With that primer, let’s begin.

We begin with what not to do. Do not use the NOAA web service. The API provides a straightforward way to get “weather” data for a particular zip code for a particular month. Except the requests often return nothing, and it isn’t clear why. The documentation doesn’t say whether the search for the closest weather station is limited to X kilometers; without such a limit, one should have data for all zip codes and all dates. Nor does the API return how far away the weather station from which it got the data is, though one can get that post hoc using the Google Geocoding API. However, given the possibility that the backend for the API will improve over time, here’s a script for getting the daily weather data, and hourly precipitation data.
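
For what it’s worth, here is a minimal sketch of such a request against NOAA’s CDO v2 endpoint – an assumption on my part that this is the relevant service; the token, zip code, and dates are placeholders, and responses can still come back empty.

```python
import requests

resp = requests.get(
    "https://www.ncdc.noaa.gov/cdo-web/api/v2/data",
    headers={"token": "YOUR_TOKEN"},   # free token, but required
    params={
        "datasetid": "GHCND",          # daily summaries
        "locationid": "ZIP:94305",     # placeholder zip code
        "startdate": "2013-06-01",
        "enddate": "2013-06-30",
        "limit": 1000,
    },
)
for record in resp.json().get("results", []):
    print(record["date"], record["datatype"], record["value"])
```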

On to what can be done. The “web service” that you can use is the Farmer’s Almanac’s. Sleuthing with the scripts discussed later reveals that the Almanac reports data from the NWS-USAF-NAVY stations (ftp link to the data file). And it appears to have data for most times, though no information is provided on the weather station from which the data come or its distance from the zip code.

If you intend to look for data from GHCND, COOP, or ASOS, there are two kinds of crosswalks you can create – one that goes from zip codes to weather stations, and one that goes from weather stations to zip codes. I assume that we don’t have access to shape files (for census zip codes), and that postal zip codes encompass a geographic region. To create a weather-station-to-zip-code crosswalk, a web service such as Geonames or the Google Geocoding API can be used. If the station’s latitude/longitude falls within the zip code, the distance comes up as zero; otherwise the distance is calculated from the “centroid” of the zip code (see the geonames script that finds the 5 nearest zips for each weather station). For creating a zip-code-to-weather-station crosswalk, we get the centroid of each zip using a web service such as Google’s (or use centroids from free zip databases), and then find the “nearest” weather stations by calculating distances to each of the weather stations. For a given set of zip codes, you can get a list of the closest weather stations (you can choose to get the n closest stations, or all weather stations within an x-kilometer radius, and/or choose to get stations from particular network(s)) using the following script. The output lists, for each zip code, weather stations arranged by proximity. The task of getting weather data from the closest station is simple thereafter – get data (on a particular set of columns of your choice) from the closest weather station for which the data are available. You can do that for a particular zip code and date (or date range) combination using the following script.
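
A minimal sketch of the zip-to-station step – ranking stations by haversine distance from a zip centroid; the centroid and station coordinates below are made up for illustration.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/long points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))   # Earth's radius ~6371 km

zip_centroids = {"94305": (37.42, -122.17)}                      # zip: (lat, lon)
stations = {"KSFO": (37.62, -122.37), "KSJC": (37.36, -121.93)}  # id: (lat, lon)

for zipcode, (zlat, zlon) in zip_centroids.items():
    ranked = sorted(
        stations.items(),
        key=lambda kv: haversine_km(zlat, zlon, kv[1][0], kv[1][1]),
    )
    # stations arranged by proximity, with distances in km
    print(zipcode, [(sid, round(haversine_km(zlat, zlon, lat, lon), 1))
                    for sid, (lat, lon) in ranked])
```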