Some Hard Feelings: Feelings Towards Some Racial and Ethnic Groups in 4 Countries

According to data from YouGov surveys in Switzerland, the Netherlands, and Canada, and the 2008 ANES in the US, Whites in each of the four countries, on average, feel fairly cold toward Muslims and toward people from Muslim-majority regions, typically giving an average thermometer rating of less than 50 on a 0 to 100 scale (Feelings towards different ethnic, racial, and religious groups). In Europe, however, Whites' feelings toward Romanians, Poles, and Serbs and Kosovars are scarcely any warmer, and sometimes cooler. Meanwhile, Whites feel relatively warmly toward East Asians.

Liberal politicians are referred to more often in news

The median Democratic politician referred to in television news is to the left of the House Democratic median, and the median Republican politician referred to is to the left of the House Republican median.

Click here for the aggregate distribution.

And here's a plot of the top 50 politicians cited in the news. The distribution is strongly right-skewed, with a bias toward executives.

Data:
News data: UCLA Television News Archive, which includes closed-caption transcripts of all national, cable and local (Los Angeles) news from 2006 to early 2013. In all, there are 155,814 transcripts of news shows.

Politician data: Database on Ideology, Money in Politics, and Elections (see Bonica 2012).

Note:
Taking out data from local news channels or removing Obama does little to change the pattern in the aggregate distribution.

(No) Value Added Models

This note is in response to some of the points raised in the Angoff Lecture by Ed Haertel.

The lecture makes two big points:
1) Teacher effectiveness ratings based on current Value Added Models (VAMs) are 'unreliable.' They are actually much worse than just unreliable; see below.
2) Simulated counterfactuals of the gains that can be had from 'firing bad teachers' are upwardly biased.

Three simple tricks (one discussed; two not) that may solve some of the issues:
1) Estimating teaching effectiveness: Where possible, randomly assign children to classes, and compare teachers only within schools. Inference will still not be perfectly clean (there are SUTVA violations, though they can be dealt with), but it will be cleaner.

2) Experiment with teachers: teach some teachers some skills and estimate the impact. Rather than a teacher-level VAM, do a skill-level VAM, treating each teacher as a sum of skills plus idiosyncratic variation.

3) For current VAMs: To create better student-level counterfactuals, use modern machine-learning techniques (SVMs, neural networks, etc.) and lots of data (past student outcomes, past classmate outcomes, etc.), and cross-validate to tune, so that you have a good estimate of how good the prediction is (see the sketch below). The strategy may be applicable to other settings.
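To illustrate the third trick, here is a minimal sketch, assuming scikit-learn, a pandas data frame, and hypothetical file and column names (students.csv, prior_score, peer_mean_prior, current_score, teacher_id): tune a flexible learner with cross-validation so you know how good the predictions are, use out-of-fold predictions as each student's counterfactual, and aggregate the gaps by teacher.

```python
# Sketch only: cross-validated student-level counterfactuals.
# File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, cross_val_predict

df = pd.read_csv("students.csv")
features = ["prior_score", "prior_score_lag2", "peer_mean_prior"]
X, y = df[features], df["current_score"]

# Tune with cross-validation so we know how good the prediction actually is.
grid = GridSearchCV(
    GradientBoostingRegressor(),
    param_grid={"n_estimators": [100, 300], "max_depth": [2, 3]},
    cv=5,
    scoring="r2",
)
grid.fit(X, y)
print("cross-validated R^2:", grid.best_score_)

# Out-of-fold predictions serve as each student's counterfactual score.
df["predicted"] = cross_val_predict(grid.best_estimator_, X, y, cv=5)
df["gap"] = df["current_score"] - df["predicted"]

# Average gap by teacher: a better-benchmarked, though still not unbiased, VAM-style score.
print(df.groupby("teacher_id")["gap"].mean().sort_values())
```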

Other points:
1) Haertel says, "Obviously, teachers matter enormously. A classroom full of students with no teacher would probably not learn much — at least not much of the prescribed curriculum." A better comparison would perhaps be to self-guided technology. My sense is that as technology evolves, teachers will come up short in a comparison with advanced learning tools. In much of the developing world, I think this is already true.

2) It appears that no model for calculating teacher effectiveness scores yields identified estimates, and that we have no clear understanding of the nature of the bias. Pooling biased estimates over multiple years doesn't recommend itself to me as a natural fix. And I don't think describing this situation as 'unreliability' of the scores is right; these scores aren't valid. The fact that pooling across years 'works' may suggest the issues are smaller than feared. But then again, bad things may be happening to some kinds of teachers, especially if people are making cross-school comparisons.

3) The fade-out concern is important given the earlier 5 x 5 = 25 analysis. My suspicion is that attenuation of effects varies with the timing of the shock; my hunch is that shocks at an earlier age matter more, in that they decay more slowly.

(No) Missing daughters of Indian Politicians

Indian politicians get a bad rap. They are thought to be corrupt, inept, and sexist. Here we check whether there is prima facie evidence for sex-selective abortion.

According to data from the Indian Government 'Archive', 15th Lok Sabha members (csv) had, in all, 696 sons and 666 daughters, for a sex ratio of 957 females to 1000 males. The progeny of members from the states with the most skewed sex ratios (Punjab, Haryana, and Jammu and Kashmir) had a surprisingly healthy sex ratio of 1245 females to 1000 males. The sex ratios of children of BJP and INC members were 930/1000 and 965/1000, respectively. Rajya Sabha members (csv) had 271 sons and 272 daughters, for a sex ratio of 1003 females to 1000 males. Not only is there little evidence of sex-selective abortion, the data also suggest that fertility rates were modest: Lok Sabha members had, on average, 2.5 children, while Rajya Sabha members had, on average, 2.2.
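For those who want to replicate the arithmetic, here is a minimal sketch, assuming pandas and hypothetical file and column names (lok_sabha_members.csv, sons, daughters) for the members CSV.

```python
# Sketch only: sex ratio and fertility from the members CSV.
# File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("lok_sabha_members.csv")
sons, daughters = df["sons"].sum(), df["daughters"].sum()

# Sex ratio reported as daughters per 1,000 sons.
print("sex ratio:", round(1000 * daughters / sons))

# Average number of children per member.
print("mean children:", round((df["sons"] + df["daughters"]).mean(), 1))
```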

Impact of selection bias in experiments where people treat each other

Selection biases in the participant pool generally have a limited impact on inference. One way to estimate the population treatment effect from effects estimated using a biased sample is to check whether the treatment effect varies across 'kinds of people' and then weight the subgroup effects to population marginals (see the sketch below). So far so good.
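Here is a minimal sketch of that re-weighting step, with hypothetical subgroup effects and shares; the point is simply that the population-weighted effect can differ a good deal from the sample-weighted one.

```python
# Sketch only: weight subgroup treatment effects to population marginals.
# All numbers are hypothetical.
subgroup_effects = {"college": 0.40, "no_college": 0.10}   # effects estimated in the biased sample
sample_shares = {"college": 0.70, "no_college": 0.30}      # sample over-represents college graduates
population_shares = {"college": 0.30, "no_college": 0.70}  # known population marginals

sample_weighted = sum(subgroup_effects[g] * sample_shares[g] for g in subgroup_effects)
population_weighted = sum(subgroup_effects[g] * population_shares[g] for g in subgroup_effects)

print("sample-weighted effect:", round(sample_weighted, 2))          # 0.31
print("population-weighted effect:", round(population_weighted, 2))  # 0.19
```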

When people treat each other, however, selection biases in the participant pool change the nature of the treatment itself. For instance, in a Deliberative Poll, part of the treatment is the other people. The exact treatment, then, depends on the pool of participants, and biases in the initial pool mean the treatment is different. For inference, one may exploit across-group variation in composition.

A Relatively Ignored Mediational Variable in Deliberation

Deliberation often causes people to change their attitudes. One reason this may happen is that deliberation also causes people's values to change. Thus, one mediational model is: change in values causes change in attitudes. However, deliberation can also cause people to connect their attitudes more closely with their values without changing those values. This means that, ceteris paribus, the same values yield different attitudes post-treatment. For identification, one may want to devise a treatment that changes respondents' ability to connect values to attitudes but has no impact on the values themselves. Or a researcher may try to gain insight by measuring people's ability to connect values to attitudes in current experiments and estimating mediational models. One can also simply try simulation (again, subject to the usual concerns): fit the post-treatment model (regressing attitudes on values), plug in pre-deliberation values, and simulate attitudes.
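Here is a minimal sketch of the simulation idea, assuming a linear attitudes-on-values model fit with statsmodels and hypothetical file and variable names: fit the mapping on post-deliberation data, then push pre-deliberation values through it.

```python
# Sketch only: simulate attitudes by applying the post-deliberation
# values->attitudes mapping to pre-deliberation values.
# File and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("deliberation.csv")

# 1. Fit attitudes on values using post-deliberation measurements.
X_post = sm.add_constant(df[["value_equality_post", "value_liberty_post"]])
post_model = sm.OLS(df["attitude_post"], X_post).fit()

# 2. Push pre-deliberation values through that mapping.
X_pre = sm.add_constant(df[["value_equality_pre", "value_liberty_pre"]])
X_pre.columns = X_post.columns  # align column names with the fitted design
simulated = post_model.predict(X_pre)

# 3. The gap between observed and simulated post-attitudes hints at how much of the
#    change runs through changed values versus a tighter value-attitude connection.
print((df["attitude_post"] - simulated).describe())
```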

A Quick Scan: From Paper to Digital

There are two ways of converting paper into digital data: 'human OCR and input' (manual transcription) and machine-assisted conversion. Here are some useful pointers about the latter.

Scanning

Since the success of so much of what comes after depends on the quality of the scanned document, invest a fair bit of effort in obtaining high-quality scans. If the paper input is a bound book, and especially if it is thick, it is hard to scan text close to the spine without destroying the spine. If the book can be bought at a reasonable price, do exactly that: destroy the spine, cut the book, and then scan it. Many automated scanning services will cut the spine by default. Other things to keep in mind: scan at a reasonably high resolution (since storage is cheap, go for at least 600 dpi), and if choosing PDF as the output option, see if the scan can be saved as a "searchable PDF" (scanning plus OCR in one go).

An average person working without too much distraction can scan 60-100 images per hour. If two pages of the book can be scanned at once, that is 120-200 pages per hour. Assuming you can hire someone to scan for $10/hour, that comes to 12-20 pages per dollar, or roughly 5 to 8 cents per page. However, scanning manually is boring, and people who remain punctilious about quality page after page are few and far between. Relying on automated scanning companies may be better, and cheaper: 1dollarscan.com charges 2 cents per page, OCR included. But there is no free lunch. Most automated scanning services cut the spines of books, and many don't send back the paper copy for reasons to do with copyright, so you may have to factor in the cost of the book. And typically the scanning services do all or nothing: you can't give directions to scan the first 10 pages, then the middle 120, etc. Thus, per relevant page, costs may exceed those of manual scanning.

Scan to Text

If the text is clear and the layout simple, most commonly available OCR software will do just fine. Acrobat Professional's own facility for recognizing text in images, found under 'Tools', is reasonable. Acrobat also flags words it is unsure about, calling them 'suspects'; you can click through the lineup of suspects, correcting them as needed. The interface for making corrections is suboptimal, but it is likely to prove satisfactory for small documents with few errors.

Those without ready access to Acrobat Professional can extract text from a 'searchable PDF' using xpdf or any of the popular programming languages. In Python, pyPdf (see script) and pdfminer (among other libraries) are popular. If the document is a set of images, one can use libraries based on Tesseract (see script); PDF documents need to be converted to images, using Ghostscript or similar rasterization software, before being fed to Tesseract.
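Here is a minimal sketch of both routes, assuming the pdfminer.six, pdf2image, and pytesseract packages (with Tesseract and Poppler installed); the file names are hypothetical.

```python
# Sketch only: text from a searchable PDF vs. an image-only (scanned) PDF.
# File names are hypothetical.
from pdfminer.high_level import extract_text   # pdfminer.six
from pdf2image import convert_from_path        # rasterizes PDF pages (needs Poppler)
import pytesseract                             # wrapper around the Tesseract OCR engine

# Route 1: searchable PDF -- the text layer already exists, just pull it out.
text = extract_text("searchable.pdf")

# Route 2: image-only PDF -- rasterize each page, then OCR it with Tesseract.
pages = convert_from_path("scanned.pdf", dpi=300)
ocr_text = "\n".join(pytesseract.image_to_string(page) for page in pages)

with open("output.txt", "w", encoding="utf-8") as out:
    out.write(text + "\n" + ocr_text)
```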

But what if the quality of the scans is poor or the page layout complicated? First, try enhancing the images: fixing orientation, applying filters to improve readability, etc. This process can be automated if the images are distorted in similar ways. Second, try extracting bounding boxes for columns/paragraphs and words/characters (position and size) and font styles (name and size) by choosing XML/HTML as the output format; this information can later be exploited for alignment and the like. However, how much you gain from extracting style and bounding-box information depends heavily on the quality of the original PDF. For low-quality PDFs, mislabeling of font size and style is common, which means the information cannot be used reliably. Third, explore training the OCR: Tesseract can be trained to improve its output, though training it isn't straightforward. Fourth, explore good professional OCR engines such as ABBYY FineReader, whose output can be easily improved by adding training data and tuning the various options for identifying the proper 'area order' (which text area follows which, which portions of the page aren't part of any text area, etc.).
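As an example of the second suggestion, here is a minimal sketch of pulling word-level bounding boxes and confidence scores out of Tesseract via pytesseract (one of several ways to get this information; hOCR/XML output from the command line works too). The file name and confidence threshold are hypothetical.

```python
# Sketch only: word-level bounding boxes and confidences from Tesseract.
# The file name and the confidence threshold are hypothetical.
from PIL import Image, ImageOps
import pytesseract

img = Image.open("page_042.png")
img = ImageOps.grayscale(ImageOps.exif_transpose(img))  # fix orientation, drop color

data = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)
for word, conf, x, y, w, h in zip(
    data["text"], data["conf"], data["left"], data["top"], data["width"], data["height"]
):
    if word.strip() and float(conf) < 60:  # flag low-confidence words for review
        print(f"low confidence ({conf}): '{word}' at box ({x}, {y}, {w}, {h})")
```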

Post-processing: Correcting Errors

The OCR process makes certain kinds of errors. For instance, an 'i' may be confused with an 'l' or with a pipe ('|'), and these errors are often repeated, with the consequence that some words are misspelled systematically. One way to deal with such errors is to simply search and replace particular strings (see script). When doing so, it pays to be mindful of the problems that may ensue from searching and replacing short strings. For instance, replacing 'lt' with 'it' may mean converting 'salt' to 'sait.'

Note: For a more general account of matching dirty data, read this article.

It is typically useful to pair a search-and-replace script with a database of search terms and the terms you propose to replace them with (see the sketch below). For one, it allows you to apply the same set of corrections to many documents; for two, it can be easily amended and re-run. While writing these scripts (see script), it is important to keep text encoding in mind: OCR yields some ligatures (e.g., fi) and other Unicode characters.
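Here is a minimal sketch of such a script, assuming a two-column corrections CSV (error, correction) and hypothetical file names; word-boundary matching guards against the 'salt'-to-'sait' problem, and reading and writing UTF-8 keeps ligatures intact.

```python
# Sketch only: apply a reusable corrections table to OCR'd text files.
# File names (corrections.csv, input paths) are hypothetical.
import csv
import re
import sys

def load_corrections(path):
    """Read (error, correction) pairs from a two-column CSV."""
    with open(path, encoding="utf-8", newline="") as f:
        return [row for row in csv.reader(f) if len(row) == 2]

def apply_corrections(text, corrections):
    for error, correction in corrections:
        # Word boundaries stop 'lt' -> 'it' from turning 'salt' into 'sait'.
        text = re.sub(r"\b" + re.escape(error) + r"\b", correction, text)
    return text

if __name__ == "__main__":
    corrections = load_corrections("corrections.csv")
    for path in sys.argv[1:]:  # apply the same corrections to many documents
        with open(path, encoding="utf-8") as f:
            fixed = apply_corrections(f.read(), corrections)
        with open(path + ".fixed.txt", "w", encoding="utf-8") as out:
            out.write(fixed)
```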

Searching and replacing particular strings can prove time-consuming, as the same word can be misspelled in tens of different ways. For instance, in a recent project, the word "programming" was misspelled in no fewer than 28 ways. The easiest extension to this approach is to search for patterns, for instance, some number of sequential errors at any point in the string. One can extend it further using various 'edit distances', e.g., Levenshtein distance, the number of single-character edits needed to convert one word into another, which lets the user handle non-sequential errors (see script). Again, the trick is to keep the length of the string in mind, as false positives may abound. One can do that by choosing a metric that factors in the size of the string and the number of errors, for instance, (Levenshtein distance)/(length of the original string). Lastly, rather than use edit distance, one can apply 'better' pattern-matching algorithms, such as Ratcliff/Obershelp (see the sketch below).
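Here is a minimal sketch of length-normalized matching, using a small Levenshtein function together with difflib's SequenceMatcher (a Ratcliff/Obershelp-style matcher from the standard library); the vocabulary and threshold are hypothetical.

```python
# Sketch only: match OCR'd words against a vocabulary with a length-normalized
# edit distance and Ratcliff/Obershelp similarity (difflib). Vocabulary and
# threshold are hypothetical.
import difflib

def levenshtein(a, b):
    """Number of single-character insertions, deletions, or substitutions to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

vocabulary = ["programming", "algorithm", "regression"]

def best_match(word, max_normalized_distance=0.25):
    """Return the closest vocabulary word if it is close enough, else None."""
    candidates = []
    for target in vocabulary:
        normalized = levenshtein(word, target) / max(len(target), 1)          # factor in word length
        similarity = difflib.SequenceMatcher(None, word, target).ratio()      # Ratcliff/Obershelp
        if normalized <= max_normalized_distance:
            candidates.append((normalized, -similarity, target))
    return min(candidates)[2] if candidates else None

print(best_match("programm1ng"))  # -> programming
```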

Spell checkers, which combine pattern-matching libraries and databases (dictionaries), are another common way of searching and replacing strings. There are various freely available spell-check databases (to be used with your own pattern-matching algorithm) and libraries, including pyEnchant for Python. One can even use VB to call MS Word's fairly reasonable spell-checking functions. However, if the document contains lots of unique proper nouns, a spell checker is liable to create more problems than it solves. One way to reduce these errors is to (manually) amend the dictionary. Another is to limit corrections to words of a certain size, or to cases where the suggested word and the word in the text differ only by certain kinds of letters (or non-letters, e.g., a pipe for the letter 'l'). One can also leverage 'Google Suggest' to fix spelling errors. Spell checkers built for particular OCR errors, such as ocrspell, can also be used. If these methods still yield too many false corrections, one can go for a semi-automated approach: use the spell checker to harvest problematic words and recommended replacements, and then let people pick the right version (see the sketch below). A few tips for creating human-assisted spell-check workflows: eliminate duplicates, and provide the user with neighboring words (two words before and after works for some projects). Lastly, one can use M-Turk to iteratively proofread the document (see TurkIt).
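Here is a minimal sketch of the semi-automated approach, assuming pyenchant (any library with check/suggest functions would do): harvest flagged words with a little surrounding context and write them to a CSV for a human to adjudicate.

```python
# Sketch only: harvest misspelled words plus context for human review.
# Assumes pyenchant; file names are hypothetical.
import csv
import enchant

checker = enchant.Dict("en_US")

def harvest(words, context=2, min_length=4):
    """Yield (flagged word, top suggestion, neighboring words) for human review."""
    seen = set()
    for i, word in enumerate(words):
        if len(word) < min_length or not word.isalpha():
            continue  # skip short tokens, numbers, and punctuation
        if word in seen or checker.check(word):
            continue
        seen.add(word)  # eliminate duplicates
        suggestions = checker.suggest(word)
        neighbors = " ".join(words[max(i - context, 0): i + context + 1])
        yield word, suggestions[0] if suggestions else "", neighbors

if __name__ == "__main__":
    with open("document.txt", encoding="utf-8") as f:
        words = f.read().split()
    with open("review_queue.csv", "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        writer.writerow(["word", "top_suggestion", "context"])
        writer.writerows(harvest(words))
```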