How Do We Know?

17 Aug

How can fallible creatures like us know something? The scientific method is about answering that question well. To answer it well, we have made at least three big innovations:

1. Empiricism. But no privileged observer. What you observe should be reproducible by all others.

2. Open to criticism: If you are not convinced by the method of observation or the claims being made, criticize. Offer reasons or proof.

3. Mathematical Foundations: Reliance on math or formal logic to deduce what claims can be made if certain conditions are met.

These innovations, along with two more, have allowed us to ‘scale.’ Foremost among the innovations that allow us to scale is our ability to work together. And our ability to preserve information on stone, paper, and electrons allows us to collaborate with and build on the work done by people who are now dead. The same principle that allows us to build as gargantuan a structure as the Hoover Dam, and entire cities, allows us to learn about complex phenomena. And that takes us to the final principle of science.

Peer to Peer

20 Mar

Peers are equals, except as reviewers, when they are more like capricious dictators. (Or when they are members of a peerage.)

We review our peers’ work because we know that we are all fallible. And because we know that the single best way we can overcome our own limitations is by relying on well-motivated, informed others. We review to catch what our peers may have missed, to flag important methodological issues, and to provide suggestions for clarifying and improving the presentation of results, among other things. But given a disappointingly long history of capricious reviews, authors need assurance. So consider including in your next review a version of the following note:

Reviewers are fallible too. So this review doesn’t come with an implied contract to follow every ill-advised suggestion or suffer the consequences. If you disagree with something, I would appreciate a small note. But rejecting a bad proposal is as important as accepting a good one.

Fear no capriciousness. And I wish you well.

Motivated Citations

13 Jan

The best kind of insight is the ‘duh’ insight — catching something that is exceedingly common, almost routine, but something that no one talks about. I believe this is one such insight.

The standards for citing congenial research (that supports the hypothesis of choice) are considerably lower than the standards for citing uncongenial research. It is an important kind of academic corruption. And it means that the prospects of teleological progress toward truth in science, as currently practiced, are bleak. An alternate ecosystem that provides objective ratings for each piece of research is likely to be more successful. (As opposed to the ‘echo-system’—here are the people who find stuff that ‘agrees’ with what I find—in place today.)

An empirical implication of the point: the average rank of the journals in which cited congenial research is published is likely to be lower than the average rank of the journals in which cited uncongenial research is published. Though, for many of the ‘conflicts’ in science, all sides of the conflict will have top-tier publications—which is to say that the measure is somewhat crude.
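
One rough way to run this check, sketched below, assumes a hand-coded sample of citations with two hypothetical columns: the stance of the cited work relative to the citing article’s hypothesis, and the rank of the journal the cited work appeared in (lower is better). Both the data and the column names are illustrative, not a prescribed schema.

```python
# A minimal sketch, assuming hand-coded citation data (hypothetical columns).
import pandas as pd

citations = pd.DataFrame({
    "stance": ["congenial", "congenial", "congenial", "uncongenial", "uncongenial"],
    "journal_rank": [85, 120, 60, 7, 15],   # rank of the journal the cited piece appeared in
})

# The claim predicts a higher (worse) average rank for cited congenial research.
print(citations.groupby("stance")["journal_rank"].mean())
```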

The deeper point is that readers generally do not judge the quality of the work cited in support of specific arguments, taking many of the arguments at face value. This, in turn, means that the role of journal rankings is somewhat limited. Or, more provocatively, to improve science, we need to make sure that even research published in low-ranked journals is of sufficient quality.

The Case for Ending Closed Academic Publishing

21 Mar

A few commercial publishers publish a large chunk of top-flight academic research. And earn a pretty penny doing so. The standard operating model of the publishers is as follows: pay the editorial board no more than $70k–$100k, pay for typesetting and publishing, and in turn get the copyrights to academic papers. And then go on to charge already locked-in institutional customers—university and government libraries—and ordinary scholars extortionate rates. The model is gratuitously dysfunctional.

Assuming there are no long-term contracts with the publishers, the system ought to be rapidly dismantled. But if dismantling is easy, creating something better may not seem to be. It just happens that it is. A majority of the cost of publishing is in printing on paper, and the twenty-first century has made printing large, organized bundles on paper largely obsolete; those who need paper can print at home. Beyond that, open-source software for administering a journal already exists. And the model of a single editor with veto powers seems anachronistic; editing duties can be spread around much like peer review. Unpaid peer review can survive as it always has, though better mechanisms can be thought about. If some money is still needed for administration, it could be gotten easily by charging a nominal submission tax, waived where the author self-identifies as being unable to pay.

Bad Science: A Partial Diagnosis And Some Remedies

3 Sep

Lack of reproducibility is a symptom of science in crisis. An eye-catching symptom, to be sure, but hardly the only one vying for attention. Recent analyses suggest that nearly two-thirds of the (relevant set of) articles published in prominent political science journals condition on post-treatment variables (see here). Another set of analyses suggests that half of the relevant set of articles published in prominent neuroscience journals treat the difference between a significant and a non-significant result as the basis for claiming that the difference between the two is significant (see here). What is behind this? My guess: poor understanding of statistics, poor editorial processes, and poor strategic incentives.

  1. Poor understanding of statistics: It is likely the primary reason. For it would be too harsh to impute bad faith to those who use post-treatment variables as controls or who treat the difference between a significant and a non-significant result as itself significant (the sketch after this list shows the correct comparison). There is likely a fair bit of ignorance — be it on the part of authors or reviewers. If it is ignorance, then the challenge doesn’t seem as daunting. Let us devise good course materials, online lectures, and teach. And for more advanced scholars, some outreach. (And it may involve teaching scientists how to write up their results.)

  2. Poor editorial processes: Whatever the failings of authors, they aren’t being caught during the review process. (It would be good to know how often reviewers are actually the source of bad recommendations.) More helpfully, it may be a good idea to create short pre-submission questionnaires that alert authors to common statistical issues.

  3. Poor strategic incentives: If authors think that journals are implicitly biased towards significant findings, we need to communicate effectively that it isn’t so.
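
To make the fix for the second error concrete (the one referenced in point 1): test the difference between the two estimates directly rather than comparing their significance statuses. A minimal sketch with made-up numbers, assuming the two estimates come from independent samples:

```python
# One estimate clears p < .05 on its own, the other doesn't; the right question
# is whether their difference does. Illustrative numbers only.
import math

b1, se1 = 0.25, 0.10   # significant on its own (z = 2.5)
b2, se2 = 0.10, 0.10   # not significant on its own (z = 1.0)

diff = b1 - b2
se_diff = math.sqrt(se1**2 + se2**2)   # assumes independent estimates
z = diff / se_diff
p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value under normality
print(f"z = {z:.2f}, p = {p:.2f}")     # z ≈ 1.06, p ≈ 0.29: the difference is not significant
```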

Stemming the Propagation of Error

2 Sep

About half of the (relevant set of) articles published in neuroscience mistake the difference between a significant result and an insignificant result for evidence that the two are significantly different (see here). It would be good to see whether the articles that make this mistake, for instance, received fewer citations after the publication of the article revealing the problem. If not, we probably have more work to do. We probably need to improve the ways by which scholars are alerted to problems in the articles they are reading (and interested in citing). And that may include building different interfaces for the various ‘portals’ (Google Scholar, JSTOR, journal publishers, etc.) that scholars heavily use. For instance, creating UIs that thread reproduction attempts, retractions, articles finding serious errors in the original article, etc.

Reviewing the Peer Review

24 Jul

Update: Current version is posted here.

Science is a process. And for a good deal of time, peer review has been an essential part of that process. Looked at independently, by people with no experience of it, it makes a fair bit of sense. For there is only one well-known way of increasing the quality of an academic paper — additional independent thinking. And who better to provide it than engaged, trained colleagues?

But this seemingly sound part of the process is creaking. Today, you can’t bring two academics together without them venting their frustration about the broken review system. The plaint is that the current system is a lose-lose-lose. All the parties — the authors, the editors, and the reviewers — lose lots and lots of time. And the change in quality as a result of suggested changes is variable, generally small, and sometimes negative. Given how critical peer review is to scientific production, it deserves closer attention, preferably with good data.

But data on peer review aren’t available to be analyzed. Thus, some anecdotal data. Of the 80 or so reviews that I have filed and for which editors have been kind enough to share the comments of other reviewers, two things have jumped out at me: a) hefty variation in the quality of reviews, and b) equally hefty variation in recommendations for the final disposition. It would be good to quantify the two. The latter is easy enough to quantify.
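
A minimal sketch of one way to quantify the latter, using simple pairwise agreement among reviewers of the same manuscript; the data and the recommendation labels are made up, and a chance-corrected statistic such as Fleiss’ kappa would be a natural next step:

```python
# Each inner list holds the recommendations filed for one manuscript.
from itertools import combinations

recommendations = [
    ["accept", "minor revision", "reject"],
    ["reject", "reject"],
    ["major revision", "accept", "major revision"],
]

def pairwise_agreement(recs):
    """Share of reviewer pairs (within a manuscript) that give the same call."""
    pairs = [(a, b) for manuscript in recs for a, b in combinations(manuscript, 2)]
    return sum(a == b for a, b in pairs) / len(pairs)

print(f"pairwise agreement: {pairwise_agreement(recommendations):.2f}")
```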

The reliability of the review process has implications for how many reviewers we need to reliably accept or reject the same article. Counter-intuitively, increasing the number of reviewers per manuscript may not increase the overall burden of reviewing. Partly because everyone knows that the review process is so noisy, there is an incentive to submit articles that people know aren’t good enough. Some submitters likely reason that there is a reasonable chance of a ‘low quality’ article being accepted at top places. Thus, low-reliability peer review systems may actually increase the number of submissions. A greater number of submissions, in turn, increases editors’ and reviewers’ load and hence reduces the quality of reviews, and lowers the reliability of recommendations still further. It is a vicious cycle. And the answer may be as simple as making the peer review process more reliable. At any rate, these data ought to be publicly released. Side by side, editors should consider experimenting with the number of reviewers to collect more data on the point.
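
To fix ideas, here is a toy simulation of how the reliability of the final call could change with the number of reviewers. The assumptions, that reviewers judge independently and that each gives the ‘correct’ call with a fixed probability, are mine and are only meant to illustrate the trade-off, not describe any journal’s data:

```python
# Majority vote among n reviewers, each independently 'correct' with probability p.
import random

def majority_correct(n_reviewers, p_correct, trials=100_000, seed=1):
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p_correct for _ in range(n_reviewers))
        hits += correct_votes > n_reviewers / 2
    return hits / trials

for n in (1, 3, 5):
    print(n, round(majority_correct(n, p_correct=0.7), 3))   # reliability rises with n
```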

Quantifying the quality of reviews is a much harder problem. What do we mean by a good review? A review that points to important problems in the manuscript and, where possible, suggests solutions? Likely so. But this is much trickier to code. And perhaps there isn’t as much of a point to quantifying it. What is needed, perhaps, is guidance. Much like child-rearing, reviewing comes with no manual. There really should be one. What should reviewers attend to? What are they missing? And most critically, how do we incentivize this process?

When thinking about incentives, there are three parties whose incentives we need to restructure — the author, the editor, and the reviewer. Authors’ incentives can be restructured by making the process less noisy, as discussed above, and by making submissions costly. All editors know this: electronic submissions have greatly increased the number of submissions. (It would be useful to study what the consequence of the move to electronic submissions has been for the quality of articles.) As for the editors — if the editors are not blinded to the author (and the author knows this), they are likely to factor in the author’s status in choosing the reviewers, in whether or not to defer to the reviewers’ recommendations, and in making the final call. Thus we need triple-blinded pipelines.

Whether or not the reviewer’s identity is known to the editor when s/he is reading the reviewer’s comments also likely affects the reviewer’s contributions — in both good and bad ways. For instance, there is every chance that junior scholars, in trying to impress editors, file more negative reviews than they would if they knew that the editor had no way of tying the identity of the reviewer to the review. Beyond altering anonymity, one way to incentivize reviewers would be to publish the reviews publicly, perhaps as part of the paper. Just like online appendices, we can have a set of reviews published online with each article.

With that, some concrete suggestions beyond the ones already discussed. Expectedly — given they come from a quantitative social scientist — they fall into two broad brackets: releasing and learning from the data already available, and collecting more data.

Existing Data

A fair bit of data can be potentially released without violating anonymity. For instance,

  • Whether manuscript was desk rejected or not
  • How many reviewers were invited
  • Time taken by each reviewer to accept (NA for those from whom you never heard)
  • Total time in review for each article (till R&R or reject), with a separate set of columns for each revision
  • Time taken by each reviewer
  • Recommendation by each reviewer
  • Length of each review
  • How many reviewers did the author(s) suggest?
  • How often were suggested reviewers followed up on?

In fact, much of the data submitted in multiple-choice question format can probably be released easily. If editors are hesitant, a group of scholars can come together and crowdsource the collection of review data. People can deposit their reviews and the associated manuscripts in a specific format on a server. And to maintain confidentiality, we can sandbox these data, allowing scholars to run a variety of pre-screened scripts on them. Or else journals can institute similar mechanisms.
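
A minimal sketch of what such a ‘specific format’ could look like as a structured record, mirroring the fields listed above; every field name here is hypothetical, not an existing standard:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ReviewRecord:
    manuscript_id: str                           # pseudonymized identifier
    desk_rejected: bool
    reviewers_invited: int
    days_to_accept_invite: list[Optional[int]] = field(default_factory=list)  # None = never heard back
    days_in_review: Optional[int] = None         # till R&R or reject
    recommendations: list[str] = field(default_factory=list)
    review_lengths_words: list[int] = field(default_factory=list)
    reviewers_suggested_by_authors: int = 0
    suggested_reviewers_followed_up_on: int = 0

record = ReviewRecord(manuscript_id="ms-0001", desk_rejected=False, reviewers_invited=3)
```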

Collecting More Data

  • In economics, people have tried instituting shorter deadlines for reviewers, to the effect of reducing review times. We can try that out.
  • In terms of incentives, it may be a good idea to try out cash, but also perhaps to experiment with a system where reviewers are told that their comments will be made public. I, for one, think it would lead to more responsible reviewing. It would also be good to experiment with triple-blind reviewing.

If you have additional thoughts on the issue, please propose them at: https://gist.github.com/soodoku/b20e6d31d21e83ed5e39

Here’s to making advances in the production of science and our pursuit of truth.

It is ‘possible’ that people may warm up to the science of warming

25 Jan

In science we believe

Belief in science is likely partly based on scientists’ ability to predict. As M.S. notes, climate scientists in the late 1980s accurately predicted that temperatures would rise. Hence, for people who are aware of that (like himself), belief in climate science is greater.

Similarly, unpredictability in weather (as opposed to climate), e.g., snowstorms, which are typically covered widely in the media, may lower people’s belief in climate science.

Possibility of showers in the afternoon
Over conversations with lay and not-so-lay people, I have observed that people sometimes conflate probability and possibility. In particular, they typically over-weight the probability of a possible event and then use that inflated weight to form their judgment. When I ask them to assign a probability to the event they identify as a possibility, they almost always assign very low probabilities, and their opinion shifts to better reflect this realization.

Think of it like this: a possibility, once raised (by them or others), is very real to people. Only consciously thinking about the probability of that possibility allows them to get out of this funky thinking.

Something else to note: politicians use ‘possibility’ as a persuasion tool, e.g., ‘there is a possibility of a terror attack,’ etc. This is something I have dealt with before, but I leave the task of taking it further to people motivated to pursue the topic.

Polling our Theories: Learning from Pollsters

4 Mar

Those who cast aspersions on social science’s ability to produce valid, replicable findings need look no further than pollsters (election polling) in the US, who have nearly perfected the ability to produce accurate results, except of course for Rasmussen, which has perfected the art of producing reliably Republican findings.

But how are pollsters able to do that with samples of 1,000, samples which, if collected naively, would vary in their estimates of the truth enough to render the estimates pointless? The short answer is that they utilize knowledge about how people behave and how they are distributed in the population, and adjust the results based on that knowledge. Given their ability to do so with great accuracy, it is likely that pollsters know more about how people vote, why they vote, etc. than political scientists. It is also likely, however, that social science will remain somewhat unaware of the results, as most of the knowledge will be proprietary.
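
A minimal sketch of the core adjustment, reweighting the sample so that a single known population margin (here, a made-up education split) is recovered; real polls weight on many margins at once, often via raking, so treat this only as the bare idea:

```python
import pandas as pd

sample = pd.DataFrame({
    "education": ["college", "college", "no_college", "no_college", "no_college"],
    "vote_dem":  [1, 1, 0, 1, 0],
})

population_share = {"college": 0.35, "no_college": 0.65}   # assumed known (e.g., from the census)
sample_share = sample["education"].value_counts(normalize=True)

# Weight = population share / sample share for each respondent's group.
sample["weight"] = (sample["education"].map(population_share)
                    / sample["education"].map(sample_share))

print("raw estimate:     ", sample["vote_dem"].mean())
print("weighted estimate:", (sample["vote_dem"] * sample["weight"]).sum() / sample["weight"].sum())
```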

Using these techniques in ‘hypothesis testing’
The traditional (frequentist) model of hypothesis testing has shied away from utilizing knowledge about the population. Typically, multiple parameters are estimated simultaneously, using regression or one of its sibling methods. Bayesians, of course, embrace the concept of prior knowledge, though they typically shy away from utilizing it fully. One modest and somewhat defensible way to test theories would be to ‘fix’ the relation of some variables with others using prior knowledge. So, in the domain of voting, one can get away from the vagaries of sampling and directly ‘fix’ Black respondents as voting 90% for Democrats, with a modestly decreasing propensity as income rises. This shrinkage of variance, from modeling part of the data with theory, would allow the other parameters to be estimated more reliably.
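
Below is a rough sketch of the idea in a linear-probability setting: one relationship is pinned to a value assumed from prior knowledge rather than estimated, and only the remaining parameters are fit to what is left over. The data are simulated and all numbers are illustrative, not estimates from any survey:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
black = rng.integers(0, 2, n)
income = rng.normal(0, 1, n)

# True data-generating process (for the simulation only).
p = np.clip(0.5 + 0.4 * black - 0.05 * income, 0, 1)
vote_dem = rng.binomial(1, p)

FIXED_BLACK_EFFECT = 0.4                   # 'fixed' from assumed prior knowledge, not estimated
residual = vote_dem - FIXED_BLACK_EFFECT * black

# With one relationship pinned down, only the intercept and income slope are estimated.
X = np.column_stack([np.ones(n), income])
intercept, income_slope = np.linalg.lstsq(X, residual, rcond=None)[0]
print(round(intercept, 3), round(income_slope, 3))   # ≈ 0.5 and ≈ -0.05
```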

Causality and Generalization in Qualitative and Quantitative Methods

19 Nov

Science deals with the fundamental epistemological question of how we can claim to know something. The quality of the system, forever open to challenge, determines any claims of epistemic superiority that the scientific method may make over other competing ways of gleaning knowledge from data.

The extent to which claims are solely arbitrated on scientific merit is limited by a variety of factors, as outlined by Lakatos, Kuhn, and Feyerabend, resulting in at best an inefficient process, and at worst, something far more pernicious. I ignore such issues and focus narrowly on methodological questions around causality and generalizability in qualitative methods.

In science, the inquiry into generalizable causal processes is greatly privileged. There is a good reason for that. Causality and generalizability can provide the basis for an intervention. However, not all kinds of data make themselves readily accessible to imputing causality, or even to making generalizable descriptive statements. For example, causal inference in most historical research remains out of bounds. Keeping this in mind, I analyze how qualitative methods within Social Sciences (can) interrogate causality and generalizability.

Causality

Hume thought that there was no place for causality within empiricism. He argued that the most we can find is that “the one [event] does actually, in fact, follow the other”. Causality is nothing but an illusion occasioned when events follow each other with regularity. That formulation, however, didn’t prevent Hume from believing in scientific theories. He felt that regularly occurring constant conjunctions were sufficient basis for scientific laws. Theoretical advances in the 200 or so years since Hume have been able to provide a deeper understanding of causality, including a process-based understanding and an experimental understanding.

Donald Rubin, professor of Statistics at Harvard, defines causal effect as follows: “Intuitively, the causal effect of one treatment, E, over another, C, for a particular unit and an interval of time from t1 to t2 is the difference between what would have happened at time t2 if the unit had been exposed to E initiated at t1 and what would have happened at t2 if the unit had been exposed to C initiated at t1: ‘If an hour ago I had taken two aspirins instead of just a glass of water, my headache would now be gone,’ or ‘because an hour ago I took two aspirins instead of just a glass of water, my headache is now gone.’ Our definition of the causal effect of the E versus C treatment will reflect this intuitive meaning.”

Note that the Rubin Causal Model (RCM), as presented above, depicts an elementary causal connection between two Boolean variables: one explanatory variable (two aspirins) with a single effect (eliminates headaches). Often, the variables take multiple values. And to estimate the effect of each change, we need a separate experiment. To estimate the effect of a treatment to a particular degree of precision in different subgroups, for example, the effect of aspirin on women and men, the sample size for each group needs to be increased.
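
The counterfactual in Rubin’s definition is never observed for the same unit at the same time; randomized assignment lets us recover the average of the unit-level differences. A minimal simulated sketch (numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

y_control = rng.normal(5, 1, n)          # headache severity with just a glass of water
y_treated = y_control - 2                # with two aspirins (true unit-level effect = -2)

treated = rng.integers(0, 2, n).astype(bool)          # randomized assignment
y_observed = np.where(treated, y_treated, y_control)  # only one potential outcome is revealed

ate_hat = y_observed[treated].mean() - y_observed[~treated].mean()
print(round(ate_hat, 2))                 # close to the true effect of -2
```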

The RCM formulation can be expanded to include a probabilistic understanding of causation. A probabilistic understanding of causality means accepting that certain parts of the explanation are still missing; hence, a necessary and sufficient condition is absent, though attempts have been made to include necessary and sufficient clauses in probabilistic statements. David Papineau (Probabilities and Causes, 1985, Journal of Philosophy) writes, “Factor A is a cause of some B just in case it is one of a set of conditions that are jointly and minimally sufficient for B. In such a case we can write A&X ->B. In general, there will also be other sets of conditions minimally sufficient for B. Suppose we write their disjunction as Y. If now we suppose further that B is always determined when it occurs, that it never occurs unless one of these sufficient sets (let’s call them B’s full causes) occurs first, then we have (A&X) v Y <-> B. Given this equivalence, it is not difficult to see why A’s causing B should be related to A’s being correlated with B. If A is indeed a cause of B, then there is a natural inference to Prob(B/A) > Prob(B/-A): for, given A, one will have B if either X or Y occurs, whereas without A one will get B only with Y. And conversely it seems that if we do find that Prob(B/A) > Prob(B/-A), then we can conclude that A is a cause of B: for if A didn’t appear in the disjunction of full causes which are necessary and sufficient for B, then it wouldn’t affect the chance of B occurring.”

Papineau’s definition is a bit archaic and doesn’t entirely cover the set of cases we define as probabilistically causal. John Gerring (Social Science Methodology: A Criterial Framework, 2001: 127,138; emphasis in original), provides a definition of probabilistic causality: “[c]auses are factors that raise the (prior) probabilities of an event occurring. [Hence] a sensible and minimal definition: X may be considered a cause of Y if (and only if) it raises the probability of Y occurring.”

A still more sensible yet minimal definition of causality can be found in Gary King et al. (Designing Social Inquiry: Scientific Inference in Qualitative Research, 1994: 81-82): “the causal effect is the difference between the systematic component of observations made when the explanatory variable takes one value and the systematic component of comparable observations when the explanatory variable takes on another value.”
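
The two definitions can be written compactly; the notation below is mine, not the authors’:

```latex
% Gerring: causes raise the prior probability of the effect.
\[ X \text{ causes } Y \iff \Pr(Y \mid X) > \Pr(Y \mid \lnot X) \]

% King et al.: the causal effect is the difference in the systematic component
% of Y across two values of the explanatory variable.
\[ \tau = \mathbb{E}[\,Y \mid X = x_1\,] - \mathbb{E}[\,Y \mid X = x_0\,] \]
```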

Causal Inference in Qualitative and Quantitative Methods

While the above formulations of causality—the Rubin Causal Model, Gerring, and King—seem more quantitative, they can be applied to qualitative methods. We discuss how below.

A parallel understanding of causality, used much more often in qualitative social science, is a process-based understanding, wherein you trace the causal process to construct a theory. Simplistically, in quantitative methods in the social sciences one often deduces the causal process, while in qualitative methods the understanding of the causal process is learned from deep and close interaction with the data.

Both deduction and induction, however, are rife with problems. Deduction privileges formal rules (statistics) that straitjacket the deductive process so that deductions are systematic and conditional on the veracity of assumptions like the normal distribution of the data, linearity of the effect, lack of measurement error, etc. The formal deductive process bestows a host of appealing qualities, like generalizability when an adequate random sample of the population is taken, or even a systematic handle on causal inference. In quantitative methods, the methodological assumptions for deduction are cleanly separated from the data. The same separation, between a rather arbitrarily chosen statistical model and the data, however, makes the discovery process less than optimal, and sometimes deeply problematic. Recent research by Ho et al. (Matching as Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference, Political Analysis, 2007), and methods like Bayesian Model Averaging (Volinsky), have gone some way toward mitigating the problems with model selection.
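
In the spirit of that matching-as-preprocessing argument, here is a rough sketch (my own simulation, not the authors’ implementation) in which each treated unit is paired with its closest control on a single confounder before outcomes are compared:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(7)
n = 2_000
x = rng.normal(0, 1, n)                                  # confounder
treated = rng.binomial(1, 1 / (1 + np.exp(-x))).astype(bool)
y = 1.0 * treated + 2.0 * x + rng.normal(0, 1, n)        # true treatment effect = 1

# Match each treated unit to its nearest control on x (with replacement).
nn = NearestNeighbors(n_neighbors=1).fit(x[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(x[treated].reshape(-1, 1))
matched_control_y = y[~treated][idx.ravel()]

print("naive difference:  ", round(y[treated].mean() - y[~treated].mean(), 2))   # biased upward
print("matched difference:", round((y[treated] - matched_control_y).mean(), 2))  # close to 1
```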

No such sharp delineation between method and data exists in qualitative research, where data are collected iteratively—in studies using iterative abstraction (Sayer 1981, 1992; Lawson 1989, 1995) or grounded theory (Glaser 1978; Strauss 1987; Strauss and Corbin 1990)—till they explain the phenomenon singled out for explanation. Grounded, data-driven qualitative methods often run the risk of modeling particularistic aspects of the data, which reduces the reliability with which they can come up with a generalizable causal model. This is only one kind of qualitative research. There are others who do qualitative analysis in the vein of experiments, for example with a 2×2 design, and yet others who test a priori assumptions by analytically controlling for variables in a verbal regression equation to get at the systematic effect of the explanatory variable on the explanandum. Perhaps more than the grounded theory method, this pseudo-quantitative style of qualitative analysis runs the risk of reaching deeply problematic conclusions based on the cases used.

King et al. (1994: 75, note 1): “[a]t its core, real explanation is always based on causal inferences.”

Limiting Discussion to Positivist Qualitative Methods

Qualitative methods can be roughly divided into positivist methods, e.g., case-studies, and interpretive methods. I will limit my comments to positivist qualitative methods.

The differences between positivist qualitative and quantitative methods “are only stylistic and are methodologically and substantively unimportant” (King et al., 1994:4). Both methods share “an epistemological logic of inference: they all agree on the importance of testing theories empirically, generating an inclusive list of alternative explanations and their observable implications, and specifying what evidence might infirm or affirm a theory” (King et al. 1994: 3).

Empirical Causal Inference

To impute causality, we either need evidence about the process or an experiment that obviates the need to know the process, though researchers are often encouraged to have a story to explain the process, and test variables implicated in the story.

Experimentation provides one of the best ways to reliably impute causality. However, for experiments to have value outside the lab, the treatment must be ecological—it should reflect the typical values that the variables take in the world. For instance, the effect of televised news is best measured with real-life news clips shown in a realistic setting where the participant has control of the remote. Ideally, we also want to elicit our measures in a realistic way: in the form of votes, campaign contributions, or expressions online. The problem is that most problems in social science cannot be studied experimentally. Brady et al. (2001:8) write, “A central reason why both qualitative and quantitative research are hard to do well is that any study based on observational (i.e., non-experimental) data faces the fundamental inferential challenge of eliminating rival explanations.” I would phrase this differently. It doesn’t make social science hard to do. It just means that we have to be okay with the fact that we cannot know certain things. Science is an exercise in humility, not denial.

Learning from Quantitative Methods

  1. Making Assumptions Clear: Quantitative methods make a variety of assumptions in order to make inferences. For instance, empiricists often use ceteris paribus—all other things being equal, which may mean assigning away everything ‘else’ to randomization—to make inferences. Others use the assumption that the error term is uncorrelated with the other independent variables to infer that the correlation between an explanatory variable x and a dependent variable y can only be explained as x’s effect on y. There are a variety of assumptions in regression models, and penalties for violating each of them. For example, we can analytically think through how education will affect (or not affect) racist attitudes. Analytical claims are based on deductive logic and a priori assumptions or knowledge. Hence the success of analytical claims is contingent upon the accuracy of the knowledge and the correctness of the logic.
  2. Controlling for things: Quantitative methods often ‘control’ for stuff. It is a way to eliminate an explanation. If gender is a ‘confounder,’ check for variation within men and women. In Qualitative Methods, one can either analytically (or where possible empirically) control for variables, or trace the process.
  3. Sampling: Traditional probability sampling theories are built on the highly conservative assumption that we know nothing about the world, and that the only systematic way to go about knowing it is through random sampling, a process that delivers ‘representative’ data on average. Newer sampling theories, however, acknowledge that we know some things about the world and use that knowledge by selectively over-sampling the things (or people) we are truly clueless about and under-sampling where we have a good idea. For example, polling organizations under-sample self-described partisans and over-sample non-partisans. This provides a window for positivist qualitative methods to make generalizable claims. Qualitative methods can overcome their limitations and make legitimate generalizable claims if their sampling reflects the extent of prior knowledge about the world.
  4. Moderators: Getting a handle on the variables that ‘moderate’ the effect of a particular variable that we may be interested in studying.
  5. Sample: One of the problems pointed out in qualitative research is the habit of selecting on the dependent variable. Selection on the dependent variable deviously leaves out cases where, for example, the dependent variable doesn’t take extreme values. Selection bias can lead to misleading conclusions not only about causal effects but also about causal processes (see the sketch after this list). It is hence important not to use a truncated dependent variable in one’s analysis. One way to systematically drill down to causal processes in qualitative research is to start off with the broadest palette, either in prior research or elsewhere, to grasp the macro-processes and other variables that may affect the case, and then, cognizant of the particularistic aspects of the case, to analyze the microfoundations or microprocesses present in the system.
  6. Reproducibility: One central point of the empirical method is that there is no privileged observer. What you observe, another should be able to reproduce. So whatever data you collect, and however you draw your inferences, all need to be clearly stated and explained.
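
A minimal simulated sketch of the selection-on-the-dependent-variable problem flagged in point 5: keeping only the cases with extreme outcomes attenuates the estimated relationship (the data and the cutoff are made up).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5_000
x = rng.normal(0, 1, n)
y = 2.0 * x + rng.normal(0, 1, n)        # true slope = 2

selected = y > 2.0                        # keeping only cases with extreme outcomes

print("full sample slope:     ", round(np.polyfit(x, y, 1)[0], 2))
print("truncated sample slope:", round(np.polyfit(x[selected], y[selected], 1)[0], 2))
```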