- Solicit Ideas
Talk to customers, analyze usage data, talk to peers and managers, think, hold competitions and raffles, etc.
For each idea:
- Define the idea
Write out the idea for clarity — at least 5–10 sentences. Do some due diligence to see what else is out there. Learn and revise (or abandon).
- Does the idea make business sense? Does it make sense to the developers (if the idea is mechanistic or implementation related)?
Ideally, peer review. But at the minimum: Talk about your idea with the manager(s) and the developers. If both manager(s) and developer(s) agree (for some mechanistic things, only developers need to agree), move to the next step.
- Prototype (optional)
This step is only for major, complex innovations that can be prototyped easily. Write code. Produce results. Does it make sense? Peer review, if needed. If not, abandon. If it does, move to the next step.
- Write the specifications
Spend no less than 20% of the entire development time on writing the specs, including proposed functions, options, unit tests, concerns, implications. Run the specifications by developers, get this peer reviewed, improve, and finalize.
- Set Priority and Release Target
Talk to the manager about the priority order of the change, and assign it to a particular release cycle.
- Who Does What by When?
Create JIRA ticket(s)
General cycle = code, test -> peer review -> code, test -> peer review …
MVP of expected code, or aspects of good code: ‘code your documentation’ (well-documented), modular, organized, tested, nicely styled, profiled. Do it once, do it well.
In answering a question, scientists sometimes collect data that answers a different, sometimes even more important, question. And when that happens, scientists sometimes overlook the easter egg. This happened to me recently, or so I think.
Kabir and I recently investigated the extent to which estimates of motivated factual learning are biased (see here). As part of our investigation, we measured numeracy. We asked American adults to answer five very simple questions (the items were taken from Weller et al. 2002):
- If we roll a fair, six-sided die 1,000 times, on average, how many times would the die come up as an even number? — 500
- There is a 1% chance of winning a $10 prize in the Megabucks Lottery. On average, how many people would win the $10 prize if 1,000 people each bought a single ticket? — 10
- If the chance of getting a disease is 20 out of 100, this would be the same as having a % chance of getting the disease. — 20
- If there is a 10% chance of winning a concert ticket, how many people out of 1,000 would be expected to win the ticket? — 100
- In the PCH Sweepstakes, the chances of winning a car are 1 in a 1,000. What percent of PCH Sweepstakes tickets win a car? — .1%
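For what it’s worth, each of the five items reduces to a one-line calculation (a quick sketch; the variable names are mine):

```python
# Each numeracy item is one line of arithmetic.
die_evens = 1000 * 3 / 6          # 3 of 6 faces are even -> 500 on average
lottery_winners = 0.01 * 1000     # 1% of 1,000 tickets -> 10 winners
disease_pct = 20 / 100 * 100      # 20 out of 100 -> 20 percent
ticket_winners = 0.10 * 1000      # 10% of 1,000 -> 100 winners
car_win_pct = 1 / 1000 * 100      # 1 in 1,000 -> 0.1 percent
```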
The average score was about 57%, and the standard deviation was about 30%. Nearly 80% (!) of the people couldn’t answer that a 1 in 1,000 chance is .1% (see below). Nearly 38% couldn’t answer that a fair die would, on average, turn up an even number 500 times every 1,000 rolls. 36% couldn’t calculate how many people out of 1,000 would win if each had a 1% chance. And 34% couldn’t answer that 20 out of 100 means 20%.
If people have trouble answering these questions, it is likely that they struggle to grasp some of the numbers behind how the budget is allocated, or for that matter, how to craft their own family’s budget. The low scores also amply illustrate that the education system fails Americans.
Given the importance of numeracy in a wide variety of domains, it is vital that we pay greater attention to improving it. The problem is also tractable — with the advent of good self-learning tools, it is possible to intervene at scale. Solving it is also liable to be good business. Given that numeracy is liable to improve people’s capacity to count calories, make better financial decisions, and so on, health insurance companies could lower premiums for people who become more numerate, and lending companies could lower interest rates in exchange for increases in numeracy.
The best kind of insight is the ‘duh’ insight—catching something that is exceedingly common, almost routine, but something that no one talks about. I believe this is one such insight.
The standards for citing congenial research (that supports the hypothesis of choice) are considerably lower than the standards for citing uncongenial research. It is an important kind of academic corruption. And it means that the prospects of teleological progress toward truth in science, as currently practiced, are bleak. An alternate ecosystem that provides objective ratings for each piece of research is likely to be more successful. (As opposed to the ‘echo-system’—here are the people who find stuff that ‘agrees’ with what I find—in place today.)
An empirical implication of the point is that the average ranking of the journals in which cited congenial research is published is likely to be lower than the average ranking of the journals in which cited uncongenial research is published. Though, for many of the ‘conflicts’ in science, all sides will have top-tier publications — which is to say that the measure is somewhat crude.
The deeper point is that readers generally do not judge the quality of the work cited in support of specific arguments, taking many of the arguments at face value. This, in turn, means that the role of journal rankings is somewhat limited. Or, more provocatively: to improve science, we need to make sure that even research published in low-ranked journals is of sufficient quality.
Limited Time Only
Lifespan ~ 80 years × 52 weeks/year ~ 4k weeks
If you are 40: 2k weeks to go.
If you are 70: another 500–700 weeks.
Interpolate for rest.
If you read a book a week for 50 years, you will read ~ 2500 books.
Approximately 130M books have been published.
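The arithmetic above can be written out in a few lines (a sketch; the figures are from the text):

```python
WEEKS_PER_YEAR = 52

lifespan_weeks = 80 * WEEKS_PER_YEAR             # 4,160 -- i.e., ~4k weeks
weeks_left_at_40 = (80 - 40) * WEEKS_PER_YEAR    # 2,080 -- ~2k weeks to go
books_at_one_per_week = 50 * WEEKS_PER_YEAR      # 2,600 -- roughly the ~2,500 above

# Against ~130M published books, a lifetime of reading covers a sliver (~0.002%).
share_of_books_read = books_at_one_per_week / 130_000_000
```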
India’s PPP-adjusted median per capita income was $1,497/yr in 2013. So ~ 550M people < $1,500/yr.
- Since it is PPP wage, think about living on $1500/year in the US
- Another way to think about it:
Wage of median American working for a year ($26,964) > wage of 18 Indians working for a year
- Yet another way to think about it:
Avg. career ~ 35 years (30–65)
What the median American will earn in 2 yrs ≈ lifetime earnings of an Indian earning the median income
- Yet another way to think about it:
median CEO/median worker wage ~ 4x
median US Worker/median Indian worker ~ 18x
- Yet another way to think about it:
From a company’s perspective, the median American has to be more than 45x as productive as the median Indian (where shipping costs ~ 0, as with many digital products), or outsourcing or automation makes more sense.
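The comparisons above can be reproduced with a few lines (a sketch using the figures given in the text):

```python
indian_median_wage = 1497   # PPP-adjusted median per-capita income, USD/yr (2013)
us_median_wage = 26964      # median American annual wage, USD/yr

# One year of the median American wage buys ~18 years of the median Indian wage.
wage_ratio = us_median_wage / indian_median_wage

# Lifetime earnings of the median Indian over a ~35-year career (30-65): $52,395.
career_years = 35
indian_lifetime_earnings = career_years * indian_median_wage

# Years the median American needs to match that lifetime figure: under 2.
us_years_to_match = indian_lifetime_earnings / us_median_wage
```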
For the data and scripts used to generate the graphs, see https://github.com/soodoku/pollbias.
I am pleased to announce the release of TV and Cable Factbook Data (1997–2002; 1998 coverage is modest). Use of the data is restricted to research purposes.
In 2007, Stefano DellaVigna and Ethan Kaplan published a paper that used data from Warren’s Factbook to identify the effect of the introduction of Fox News Channel on Republican vote share (link to paper). Since then, a variety of papers exploiting the same data and identification scheme have been published (see, for instance, Hopkins and Ladd, Clinton and Enamorado, etc.)
In 2012, I embarked on a similar project — trying to use the data to study the impact of the introduction of Fox News Channel on attitudes and behaviors related to climate change. However, I found the original data to be limited — DellaVigna and Kaplan had used a team of research assistants to manually code a small number of variables for a few years. So I worked on extending the data in two ways: adding more years, and adding ‘all’ the data for each year. To that end, I developed custom software. Collecting and parsing a few thousand densely packed, inconsistently formatted pages (see below) into a usable CSV (see below) finished sometime early in 2014. (To make it easier to create a crosswalk with other geographical units, I merged the data with town lat/long (centroid) and elevation data from http://www.fallingrain.com/world/US/.)
Soon after I finished the data collection, however, I became aware of a paper by Martin and Yurukoglu. They found some inconsistencies between the Nielsen data and the Factbook data (see Appendix C1 of paper), tracing the inconsistencies to delays in updating the Factbook data—“Updating is especially poor around [DellaVigna and Kaplan] sample year. Between 1999 and 2000, only 22% of observations were updated. Between 1998 and 1999, only 37% of observations were updated.” Based on their paper, I abandoned the plan to use the data, though I still believe the data can be used for a variety of important research projects, including estimating the impact of the introduction of Fox News. Based on that belief, I am releasing the data.
Three goals: impart information, spur deeper thinking about the topic and the social world more generally, and inculcate care in thinking. As is perhaps clear, working toward achieving any one of these goals creates positive externalities that help achieve other goals. For instance, care in exposition, which is a necessary though insufficient condition for imparting correct information, is liable to produce, either through mimesis or further thought, care in how students think about questions.
Supplement such synergies by actively seeking and utilizing pertinent opportunities, during both class-wide discussions about the materials and one-on-one discussions about research projects, to raise (and clarify) relevant points. During discussions, encourage students to seriously consider questions about epistemology — fundamental to science, but also to reasoning and discourse more generally — by weaving in questions such as, “What is the claim that we are making?” and “When can we make this claim, and why?”
Some of the epistemological questions are most naturally (and perhaps best) handled when students are engaged in working on their own research projects. Guiding students as they collect and analyze their own data provides unique opportunities to discuss issues related to research design, and logic. And it is my hunch that students are more engaged with the material (and hence learn more of it, and think more about it) when they work on their own projects than when asked to learn the materials through lectures alone. For instance, undergraduates at Stanford often excel at knowing the points made in the text, but often have yet to spend time thinking about the topic itself. My sense is (and some experience corroborates it) that thinking broadly about an issue allows students to gain new insights, and helps them contextualize their findings better. It also spurs curiosity about the social world and the broader set of questions about society. Hence, in addition to the above, ask students to discuss the topics that they are working on more generally, and think carefully and deeply about what else could be going on.
The paucity of women in Computer Science, Math, and Engineering in the US is widely, and justly, lamented. Sometimes, the imbalance is attributed to gender stereotypes. But only a small fraction of men study these fields. And in absolute terms, the proportion of women in these fields is not a great deal lower than the proportion of men. So in some ways, the assertion that these fields are stereotypically male is itself a misunderstanding.
For greater clarity, a contrived example: Say the population is split between two similarly sized groups, A and B. Say only 1% of Group A members study X, while the proportion of Group B members studying X is 1.5%. This means that 60% of those who study X belong to Group B. Or, in more dramatic terms: activity X is stereotypically Group B. However, 98.5% of Group B doesn’t study X. And that number is not a whole lot different from 99%, the percentage of Group A that doesn’t study X.
When people say activity X is stereotypically Group B, many interpret it as ‘activity X is quite popular among Group B.’ (That is one big stereotype about stereotypes.) That clearly isn’t so. In fact, the difference in the preference for studying X between Groups A and B — as inferred from choices (assuming same choices, same utility) — is likely pretty small.
Obliviousness to the point is quite common. For instance, it is behind arguments linking terrorism to Muslims. And Muslims typically respond with a version of the argument laid out above—they note that an overwhelming majority of Muslims are peaceful.
One straightforward conclusion from this exercise is that we may be able to make headway in tackling disciplinary stereotypes by elucidating the point in terms of the difference between p(X | Group A) and p(X | Group B), rather than in terms of p(Group A | X).
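The contrived example above reduces to a short Bayes calculation (a sketch; the probabilities are the ones from the example):

```python
# Two similarly sized groups; X is rare in both.
p_x_given_a = 0.010   # 1% of Group A studies X
p_x_given_b = 0.015   # 1.5% of Group B studies X
p_a = p_b = 0.5       # equal group shares of the population

# P(Group B | X): the share of X-studiers who come from Group B.
p_x = p_x_given_a * p_a + p_x_given_b * p_b
p_b_given_x = (p_x_given_b * p_b) / p_x   # ~0.6 -> "X is stereotypically B"

# Yet the conditionals the stereotype suggests are tiny and close:
# 98.5% of Group B and 99% of Group A do not study X.
```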
Papers at hand:
Two empirical points that we learn from the papers:
1. Partisan gaps are highly variable, and the mean gap is reasonably small (without money, i.e., in the control condition). See also: Partisan Retrospection?
(The point is never explicitly commented on in either paper. It has implications for proponents of partisan retrospection.)
2. When respondents are offered money for the correct answer, the partisan gap shrinks by about half, on average.
Question in front of us: Interpretation of point 2.
Why are there partisan gaps on knowledge items?
1. Different Beliefs: People believe different things to be true because they learn different things. For instance, Republicans learn that Obama is a Muslim, and Democrats that he is an observant Christian. For a clear exposition of what I mean by belief, see Waters of Casablanca.
2. Systematic Lazy Guessing: The number one thing people lie about on knowledge items is having the remotest clue about the question being asked. And the reluctance to acknowledge ‘Don’t Know’ is itself a serious point worthy of investigation and careful interpretation.
When people guess on items with partisan implications, some try to infer the answer using the cues in the question stem. For instance, a Republican, when asked whether unemployment rate under Obama increased or decreased, may reason that Obama is a socialist and since socialism is bad policy, it must have increased the unemployment rate.
3. Cheerleading: Even when people know that things that reflect badly on their party happened, they lie. (I will be surprised if this is common.)
The Quantity of Interest: Different Beliefs.
We do not want: Different Beliefs + Systematic Lazy Guessing
Why would money reduce partisan gaps?
1. Reducing Systematic Lazy Guessing: Bullock et al. pay for ‘Don’t Know,’ offering people a small incentive (much smaller than the pay for a correct answer) to confess ignorance. The estimate should be closer to the quantity of interest: ‘Different Beliefs.’
2. Considered Guessing: On being offered money for the correct answer, respondents replace the ‘lazy’ (optimal, for a boundedly rational human) partisan heuristic described above with more effortful guessing. Replacing Systematic Lazy Guessing with Considered Guessing is good to the extent that Considered Guessing is less partisan. If it is, the estimate will be closer to the quantity of interest: ‘Different Beliefs.’ (Think of it as a version of correlated measurement error: we are replacing systematic measurement error with error that is more evenly, if not ‘randomly,’ distributed.)
3. Looking up the Correct Answer: People look up answers to take the money on offer. Both papers go some way to show that cheating isn’t behind the narrowing of the partisan gap: Bullock et al. use ‘placebo’ questions, and Prior et al. use timing, etc.
4. Reduced Cheerleading: Respondents for whom the utility from lying < $ stop lying. The estimate will be closer to the quantity of interest: ‘Different Beliefs.’
5. Demand Effects: Respondents take the offer of money as a cue that their instinctive response isn’t correct. The estimate may be further away from the quantity of interest: ‘Different Beliefs.’
Say you want to learn about the average number of potholes per unit paved street in a city. To estimate that quantity, the following sampling plan can be employed:
1. Get all the streets in a city from Google Maps or OSM
2. Starting from one end of each street, split it into .5 km segments until you reach the other end. The last segment — or, if the street is shorter than .5 km, the only segment — can be shorter than .5 km.
3. Get the lat/long of start/end of the segments.
4. Create a database of all the segments: segment_id, street_name, start_lat, start_long, end_lat, end_long
5. Sample from rows of the database
6. Produce a CSV of the sampled segments (subset of step 4)
7. Plot the lat/long on Google Maps, shading the entire area within each segment.
8. Collect data on the highlighted segments.
For a Python package that implements this sampling plan, see https://github.com/soodoku/geo_sampling.
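The steps above can be sketched in a few lines (a toy illustration, not the geo_sampling implementation; the street list and lengths are hypothetical, and straight-line lengths stand in for real lat/long geometry):

```python
import random

SEGMENT_KM = 0.5

def segment_street(street_name, length_km, seg_km=SEGMENT_KM):
    """Split a street into seg_km pieces (steps 2-4). The last segment --
    or, for streets shorter than seg_km, the only one -- may be shorter."""
    segments, start, i = [], 0.0, 0
    while start < length_km:
        end = min(start + seg_km, length_km)
        segments.append({"segment_id": f"{street_name}-{i}",
                         "street_name": street_name,
                         "start_km": start, "end_km": end})
        start, i = end, i + 1
    return segments

# Hypothetical street list standing in for the Google Maps/OSM pull (step 1).
streets = [("Main St", 2.3), ("Oak Ave", 0.4), ("Elm Rd", 1.0)]

# Step 4: the database of all segments.
database = [seg for name, km in streets for seg in segment_street(name, km)]

# Step 5: simple random sample of segments to send for data collection.
random.seed(1)
sampled = random.sample(database, k=3)
```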