Peer to Peer

20 Mar

Peers are equals, except as reviewers, when they are more like capricious dictators. (Or when they are members of a peerage.)

We review our peers’ work because we know that we are all fallible. And because we know that the single best way we can overcome our own limitations is by relying on well-motivated, informed others. We review to catch what our peers may have missed, to flag important methodological issues, and to provide suggestions for clarifying and improving the presentation of results, among other such things. But given a disappointingly long history of capricious reviews, authors need assurance. So consider including in the next review a version of the following note:

Reviewers are fallible too. So this review doesn’t come with an implied contract to follow all of its ill-advised suggestions or suffer the consequences. If you disagree with something, I would appreciate a small note. But rejecting a bad proposal is as important as accepting a good one.

Fear no capriciousness. And I wish you well.

Bad Science: A Partial Diagnosis And Some Remedies

3 Sep

Lack of reproducibility is a symptom of science in crisis. An eye-catching symptom, to be sure, but hardly the only one vying for attention. Recent analyses suggest that nearly two-thirds of the (relevant set of) articles published in prominent political science journals condition on post-treatment variables (see here). Another set of analyses suggests that half of the relevant set of articles published in prominent neuroscience journals treat the difference between a significant and a non-significant result as the basis for claiming that the difference between the two is significant (see here). What is behind this? My guess: poor understanding of statistics, poor editorial processes, and poor strategic incentives.

  1. Poor understanding of statistics: It is likely the primary reason. For it would be harsh to impute bad faith to those who condition on post-treatment variables or treat the difference between a significant and a non-significant result as itself significant. There is likely a fair bit of ignorance — be it on the part of authors or reviewers. If it is ignorance, then the challenge doesn’t seem as daunting. Let us devise good course materials, online lectures, and teach. And for more advanced scholars, some outreach. (And it may involve teaching scientists how to write up their results.) A small simulation after this list illustrates the post-treatment problem.

  2. Poor editorial processes: Whatever the failings of authors, they aren’t being caught during the review process. (It would be good to know how often reviewers are actually the source of bad recommendations.) More helpfully, it may be a good idea to create short pre-submission questionnaires that alert authors to common statistical issues.

  3. Poor strategic incentives: If authors think that journals are implicitly biased towards significant findings, we need to communicate effectively that it isn’t so.
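To make the post-treatment problem (item 1 above) concrete, here is a minimal simulation sketch in Python. The variable names and effect sizes are made up for illustration; the point is only that conditioning on a variable measured after treatment can bias the estimated effect even in a randomized experiment.

```python
# A minimal, illustrative simulation of conditioning on a post-treatment variable.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100_000

treat = rng.binomial(1, 0.5, n)                         # randomized treatment
confound = rng.normal(size=n)                           # unobserved cause of both variables below
post_treat = treat + confound + rng.normal(size=n)      # variable measured after treatment
outcome = 2.0 * treat + confound + rng.normal(size=n)   # true effect of treat = 2.0

# Correct: with randomization, regress the outcome on treatment alone.
print(sm.OLS(outcome, sm.add_constant(treat)).fit().params[1])   # ~2.0

# Wrong: also conditioning on the post-treatment variable links treatment to the
# unobserved confounder and biases the estimate (here, down to ~1.5).
X = sm.add_constant(np.column_stack([treat, post_treat]))
print(sm.OLS(outcome, X).fit().params[1])
```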

Stemming the Propagation of Error

2 Sep

About half of the (relevant set of) articles published in neuroscience mistake the difference between a significant and a non-significant result for evidence that the two are significantly different (see here). It would be good to see whether the articles that make this mistake, for instance, received fewer citations after the publication of the article revealing the problem. If not, we probably have more work to do. We probably need to improve the ways in which scholars are alerted to problems in the articles they are reading (and interested in citing). And that may include building different interfaces for the various ‘portals’ (Google Scholar, JSTOR, journal publishers, etc.) that scholars heavily use. For instance, creating UIs that thread reproduction attempts, retractions, articles finding serious errors in the original article, etc.
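To restate the mistake concretely, here is a small numerical sketch; the estimates and standard errors are made up. An effect that clears the significance threshold in one study and misses it in another need not differ significantly across the two studies; that claim requires a test on the difference itself.

```python
# Hypothetical illustration of the 'difference in significance' fallacy.
from math import sqrt
from scipy.stats import norm

b1, se1 = 0.25, 0.10   # study 1: z = 2.5, "significant"
b2, se2 = 0.10, 0.10   # study 2: z = 1.0, "not significant"

p1 = 2 * norm.sf(abs(b1 / se1))
p2 = 2 * norm.sf(abs(b2 / se2))

# The correct comparison tests the difference between the two estimates directly.
z_diff = (b1 - b2) / sqrt(se1**2 + se2**2)
p_diff = 2 * norm.sf(abs(z_diff))

print(f"study 1: p = {p1:.3f}; study 2: p = {p2:.3f}")   # ~0.012 and ~0.317
print(f"difference: z = {z_diff:.2f}, p = {p_diff:.3f}")  # ~0.29 -- not significant
```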

Causality and Generalization in Qualitative and Quantitative methods: A second look (part 2)

19 Nov

King et al. (1994: 75, note 1): “[a]t its core, real explanation is always based on causal inferences.”

Limiting Discussion to Positivist Qualitative Methods

Qualitative Methods can be roughly divided into positivist (case studies, etc.) and interpretive. I will limit my comments to positivist qualitative methods. The differences between positivist qualitative and quantitative methods “are only stylistic and are methodologically and substantively unimportant” (King et al., 1994: 4). Both methods share “an epistemological logic of inference: they all agree on the importance of testing theories empirically, generating an inclusive list of alternative explanations and their observable implications, and specifying what evidence might infirm or affirm a theory” (King et al. 1994: 3).

Causal Inference in Empirical Data

To impute causality, science relies either on evidence about the process or on clever experimental design that obviates (perhaps more correctly, mitigates) the need to know the process, though even then researchers are often encouraged to have a story to explain the process and to test the variables implicated in the story.

Experimentation provides one of the best ways to reliably impute causality. However, for experiments to have value outside the lab, the ‘treatment’ must be ‘ecological’, that is, reflect the typical values the variable takes in the world (for example, the effect of news is best measured with real-life news clips or something similar), and the findings must hold up in the field (through surveys or field experiments). The problem is that most problems in Social Science cannot be studied experimentally. Brady et al. (2001: 8) write: “A central reason why both qualitative and quantitative research are hard to do well is that any study based on observational (i.e., non-experimental) data faces the fundamental inferential challenge of eliminating rival explanations.” While most Social Science papers (and certainly practitioners) talk about ‘eliminating’ rival explanations, one doesn’t quite have to do that. Social scientists oftentimes include variables reflecting ‘rival explanations’ as ‘straw variables’ (to refer to the straw man they will knock down at the end) in a regression equation, to show how much variance is ‘explained’ by their variable of choice as compared to the ‘straw variables’.
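For concreteness, here is a minimal sketch (simulated data, made-up names) of the kind of comparison described above: how much R-squared the favored variable adds over the rival, ‘straw’ variable. As the closing comment notes, the comparison by itself says little about which explanation is causally correct.

```python
# A minimal sketch of the 'straw variable' comparison: incremental R^2 of the
# favored variable over the rival explanation. Data and names are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5_000
rival = rng.normal(size=n)                   # the 'straw' variable
favored = 0.5 * rival + rng.normal(size=n)   # correlated with the rival
y = 1.0 * favored + 0.3 * rival + rng.normal(size=n)

base = sm.OLS(y, sm.add_constant(rival)).fit()
full = sm.OLS(y, sm.add_constant(np.column_stack([rival, favored]))).fit()

print(f"R^2 with rival only: {base.rsquared:.2f}")
print(f"R^2 adding favored:  {full.rsquared:.2f}")
# Incremental R^2 depends on what else is in the model and says nothing,
# by itself, about which explanation is causally correct.
```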

In Quantitative Methods, to impute causality, one makes a variety of assumptions, including a ‘ceteris paribus’ clause (all other things being equal, which may mean assigning away everything ‘else’ to randomization) and the assumption that the error term (the non-systematic part) is uncorrelated with the explanatory variables, so that the correlation between an explanatory variable x and the dependent variable y can be read as x’s effect on y. In analyzing survey data, one ‘controls’ for variables using regression, or some similar method. There are a variety of assumptions behind regression models, and a penalty for violating each of them. In Qualitative Methods, one can either analytically (or, where possible, empirically) control for variables, or trace the process.
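A minimal simulation of what ‘controlling’ for a variable in a regression buys you, and what it assumes. The names and effect sizes are illustrative; the adjusted estimate is only as good as the assumption that the included control is the only omitted common cause and that the functional form is right.

```python
# A minimal, illustrative simulation of 'controlling for' a variable with regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 100_000
z = rng.normal(size=n)                        # an observed confounder
x = 0.8 * z + rng.normal(size=n)              # explanatory variable
y = 1.0 * x + 1.0 * z + rng.normal(size=n)    # true effect of x is 1.0

# Without the control, part of z's effect is attributed to x.
print(sm.OLS(y, sm.add_constant(x)).fit().params[1])                        # > 1.0 (here ~1.5)

# Conditioning on z recovers ~1.0 -- provided z really is the only omitted
# common cause and the linear model is right.
print(sm.OLS(y, sm.add_constant(np.column_stack([x, z]))).fit().params[1])  # ~1.0
```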

Qualitative Methods: Some Handles on Generalizability

Traditional probability sampling theories are built on the highly conservative assumption that we know nothing about the world. And the only systematic way to go about knowing it is through ‘random sampling’, a process that delivers ‘representative data’ on average. Newer sampling theories, however, acknowledge what we know about the world by selectively over-sampling things (or people) we are truly clueless about, and under-sampling where we have a good idea. For example, polling organizations under-sample self-described partisans, and over-sample non-partisans. This provides a window for positivist qualitative methods to make generalizable claims. Qualitative methods can overcome their limitations and make legitimate generalizable claims if their sampling reflects the extent of prior knowledge about the world.
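A small sketch of the idea, with made-up group sizes and sampling rates: over-sample the group we know less about, under-sample the group we know well, and use design (inverse-probability-of-selection) weights to recover population quantities.

```python
# A minimal, illustrative sketch of unequal-probability sampling with design weights.
import numpy as np

rng = np.random.default_rng(3)

# Population: 70% 'partisans' (predictable) and 30% 'non-partisans' (less predictable).
pop_partisan = rng.normal(loc=0.9, scale=0.1, size=70_000)
pop_nonpart = rng.normal(loc=0.4, scale=0.5, size=30_000)
true_mean = np.concatenate([pop_partisan, pop_nonpart]).mean()

# Design: under-sample partisans (1%), over-sample non-partisans (5%).
s_partisan = rng.choice(pop_partisan, size=700, replace=False)
s_nonpart = rng.choice(pop_nonpart, size=1_500, replace=False)

# Inverse-probability (design) weights undo the unequal sampling rates.
w = np.concatenate([np.full(700, 1 / 0.01), np.full(1_500, 1 / 0.05)])
y = np.concatenate([s_partisan, s_nonpart])
weighted_mean = np.average(y, weights=w)

print(f"true mean: {true_mean:.3f}, weighted estimate: {weighted_mean:.3f}")
```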

Beyond sampling, there are analytical ways of getting a handle on the variables that ‘moderate’ the effect of a particular variable that we may be interested in studying. For example, we can analytically think through how education will affect (or not affect) racist attitudes. Analytical claims are based on deductive logic and a priori assumptions or knowledge. Hence the success of analytical claims is contingent upon the accuracy of the knowledge and the correctness of the logic.

One of the problems that has been repeatedly pointed out about Qualitative research is its propensity to select on the dependent variable. Selection on the dependent variable leaves out cases where, for example, the dependent variable doesn’t take extreme values. Selection bias can lead to misleading conclusions not only about causal effects but also about causal processes. It is important, hence, not to base one’s analysis on a truncated dependent variable. One of the ways one can systematically drill down to causal processes in qualitative research is by starting with the broadest palette, either in prior research or elsewhere, to grasp the macro-processes and other variables that may affect the case. Then, cognizant of the particularistic aspects of the case, analyze the microfoundations or micro-processes present in the system.
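A minimal simulation of the truncation problem, with illustrative values: keeping only cases where the dependent variable takes high values attenuates the estimated relationship.

```python
# A minimal, illustrative simulation of selecting on the dependent variable.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 50_000
x = rng.normal(size=n)
y = 1.0 * x + rng.normal(size=n)      # true slope = 1.0

full = sm.OLS(y, sm.add_constant(x)).fit()
keep = y > 1.0                        # study only the 'interesting', high-y cases
trunc = sm.OLS(y[keep], sm.add_constant(x[keep])).fit()

print(f"slope, all cases:        {full.params[1]:.2f}")   # ~1.0
print(f"slope, truncated sample: {trunc.params[1]:.2f}")  # well below 1.0
```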

Causality and Generalization in Qualitative and Quantitative methods: A second look (part 1)

19 Nov

Positivism and Empiricism

Science deals with the fundamental epistemological question of how we can make a claim to knowing something. The quality and the strength of the ‘system’—forever open to challenge—determine any claims of epistemic superiority that the ‘scientific method’ may make over other competing claims of gleaning ‘knowledge’ from data. The extent to which claims are solely arbitrated on scientific merit is limited by a variety of factors, as outlined by Lakatos, Kuhn, and Feyerabend, resulting in at best an inefficient process, and at worst something far more pernicious. I, however, ignore such issues and focus narrowly on methodological questions around causality and generalizability in qualitative methods.

In science, the inquiry into generalizable causal processes is greatly privileged, and for good reason—causality and generalizability taken together can provide the basis for policy action and for mounting interventions, among other things. One can also think of knowledge of causal processes as providing predictive power. However, not all kinds of data make themselves readily accessible to imputing causality, or even to making generalizable descriptive statements. For example, causal inference in most historical research remains out of bounds. Keeping this in mind, I analyze how qualitative methods within the Social Sciences (can) interrogate causality and generalizability.

Causality:

Hume felt that there was no place for causality within empiricism. He argued that the most we can find is that “the one [event] does actually, in fact, follow the other”. More broadly, for Hume, causality is nothing but an illusion occasioned when events follow each other with regularity. That formulation, however, didn’t prevent Hume from believing in scientific theories, for he felt that regularly occurring constant conjunctions were a sufficient basis for scientific laws. Theoretical advances in the 200 or so years since Hume have provided a deeper understanding of causality, including a process-based understanding and an experimental understanding.

Donald Rubin, a professor of Statistics at Harvard, defines causal effect as, “Intuitively, the causal effect of one treatment, E, over another, C, for a particular unit and an interval of time from t1 to t2 is the difference between what would have happened at time t2 if the unit had been exposed to E initiated at t1 and what would have happened at t2 if the unit had been exposed to C initiated at t1: ‘If an hour ago I had taken two aspirins instead of just a glass of water, my headache would now be gone,’ or ‘Because an hour ago I took two aspirins instead of just a glass of water, my headache is now gone.’ Our definition of the causal effect of the E versus C treatment will reflect this intuitive meaning.”

It is important to note that the RCM, as presented above, depicts an elementary causal connection between two Boolean variables: one level of the explanatory variable (two aspirins) with a single effect (the headache is gone). Oftentimes, the variables take multiple values, and to study that, we need to mount a series of experiments. Similarly, one may want to analyze the effect of a variable in different subgroups of the population, for example, the effect of aspirin on women as compared to men. All the above scenarios do is highlight the problems in coming up with a robust understanding of causal processes between (here) two variables. Fundamentally, though, our RCM understanding of causal effect remains unchanged.
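In the now-standard potential-outcomes notation (a restatement of the intuition in Rubin’s definition, not his exact symbols), the unit-level causal effect, and the subgroup comparison mentioned above, can be written as:

```latex
% Unit-level causal effect of treatment E versus C for unit i:
\tau_i = Y_i(E) - Y_i(C)

% Only one of the two potential outcomes is ever observed for a unit, so in
% practice we target averages, overall or within a subgroup (e.g., women):
\mathrm{ATE} = \mathbb{E}\left[ Y_i(E) - Y_i(C) \right], \qquad
\mathrm{ATE}_{\text{women}} = \mathbb{E}\left[ Y_i(E) - Y_i(C) \mid \text{woman}_i = 1 \right]
```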

The Rubin Causal Model provides a counterfactual, deterministic understanding of causality that is firmly based in the logic of experimental design. The RCM formulation can be expanded to include a probabilistic understanding of causal effect. Just as a note: a probabilistic understanding of causality implicitly accepts that certain parts of the explanation are still missing, and hence lacks a necessary and sufficient condition, though attempts have been made to include necessary and sufficient clauses in probabilistic statements. David Papineau (Probabilities and Causes, 1985, Journal of Philosophy) writes, “Factor A is a cause of some B just in case it is one of a set of conditions that are jointly and minimally sufficient for B. In such a case we can write A&X -> B. In general, there will also be other sets of conditions minimally sufficient for B. Suppose we write their disjunction as Y. If now we suppose further that B is always determined when it occurs, that it never occurs unless one of these sufficient sets (let’s call them B’s full causes) occurs first, then we have (A&X) v Y <-> B. Given this equivalence, it is not difficult to see why A’s causing B should be related to A’s being correlated with B. If A is indeed a cause of B, then there is a natural inference to Prob(B/A) > Prob(B/-A): for, given A, one will have B if either X or Y occurs, whereas without A one will get B only with Y. And conversely it seems that if we do find that Prob(B/A) > Prob(B/-A), then we can conclude that A is a cause of B: for if A didn’t appear in the disjunction of full causes which are necessary and sufficient for B, then it wouldn’t affect the chance of B occurring.”

Papineau’s definition is a bit archaic and doesn’t quite cover the set of cases we define as probabilistically causal. John Gerring (Social Science Methodology: A Criterial Framework, 2001: 127, 138; emphasis in original) provides a definition of probabilistic causality: ‘[c]auses are factors that raise the (prior) probabilities of an event occurring. (…) [Hence] a sensible and minimal definition: X may be considered a cause of Y if (and only if) it raises the probability of Y occurring.’

A still more ‘sensible’ yet ‘minimal’ definition of causality can be found in Gary King et al. (Designing Social Inquiry: Scientific Inference in Qualitative Research, 1994: 81-82): ‘the causal effect is the difference between the systematic component of observations made when the explanatory variable takes one value and the systematic component of comparable observations when the explanatory variable takes on another value.’

Brief Discussion on Causal Inference in Qualitative and Quantitative Methods

While the above formulations of causality—the Rubin Causal Model, Gerring, and King—seem more quantitative, they apply equally to qualitative methods. A parallel understanding of causality, used much more often in qualitative social science, is a process-based understanding wherein one traces the causal process to construct a theory. Simplistically, in Quantitative methods in the Social Sciences, one oftentimes deduces the causal process, while in Qualitative methods the understanding of the causal process is induced from deep and close interaction with the data.

Both the deductive and inductive processes, however, are rife with problems. Deduction privileges formal rules (statistics) that straitjacket the deductive process so that the deductions are systematic and cognizant of explicit assumptions (like normal distribution of the data, linearity of the effect, lack of measurement error, etc.). The formal deductive process bestows a host of appealing qualities, like generalizability (when an adequate random sample of the population is taken) or even a systematic handle on causal inference. In quantitative methods, the methodological assumptions for deduction are cleanly separated from the data. The same separation, however, between the formal deductive process, with its rather arbitrarily chosen statistical model, and the data, makes the discovery process less than optimal, and sometimes deeply problematic. Recent research by Ho, Imai, King, and Stuart (Matching as Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference, Political Analysis, 2007), and methods like Bayesian Model Averaging (Volinsky and colleagues), have gone some way toward mitigating problems with model selection and model dependence.
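To make the matching-as-preprocessing idea concrete, here is a minimal sketch: prune the sample via nearest-neighbor matching on an estimated propensity score, then fit the usual parametric model on the pruned, better-balanced sample. This is an illustration of the general approach on simulated data, not the authors’ implementation.

```python
# A minimal, illustrative sketch of matching as nonparametric preprocessing:
# 1-nearest-neighbor matching (with replacement) on an estimated propensity
# score, followed by the usual parametric model on the matched sample.
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(5)
n = 5_000
x = rng.normal(size=(n, 2))                                   # covariates
p_treat = 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1])))        # selection into treatment
t = rng.binomial(1, p_treat)
y = 1.0 * t + x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n)    # true effect = 1.0

# Step 1: estimate propensity scores.
ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]

# Step 2: for each treated unit, keep its nearest control on the propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(ps[t == 0].reshape(-1, 1))
_, idx = nn.kneighbors(ps[t == 1].reshape(-1, 1))
keep = np.concatenate([np.flatnonzero(t == 1), np.flatnonzero(t == 0)[idx.ravel()]])

# Step 3: run the parametric model on the pruned sample.
X = sm.add_constant(np.column_stack([t[keep], x[keep]]))
print(sm.OLS(y[keep], X).fit().params[1])   # estimated treatment effect, ~1.0
```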

No such sharp delineation between method and data exists in qualitative research, where data is collected iteratively – in studies using iterative abstraction (Sayer 1981, 1992; Lawson 1989, 1995) or grounded theory (Glaser 1978; Strauss 1987; Strauss and Corbin 1990) – till it explains the phenomenon singled out for explanation. Grounded, data-driven qualitative methods often run the risk of modeling in particularistic aspects of the data, which impedes the reliability with which they can come up with a generalizable causal model. This is only one kind of qualitative research, for there are others who do qualitative analysis in the vein of experiments, for example with a 2×2 design, and yet others who will test a priori assumptions by analytically controlling for variables in a verbal regression equation to get at the systematic effect of an explanatory variable on the explanandum. Perhaps more than the grounded theory method, this pseudo-quantitative style of qualitative analysis runs the risk of coming to deeply problematic conclusions based on the cases used.