Causality and Generalization in Qualitative and Quantitative Methods: A Second Look (Part 1)

Positivism and Empiricism

Science deals with the fundamental epistemological question of how we can make a claim to knowing something. The quality and strength of the ‘system’, forever open to challenge, determines any claim of epistemic superiority that the ‘scientific method’ may make over other competing ways of gleaning ‘knowledge’ from data. The extent to which claims are arbitrated solely on scientific merit is limited by a variety of factors, as outlined by Lakatos, Kuhn, and Feyerabend, resulting at best in an inefficient process, and at worst in something far more pernicious. I, however, set such issues aside and focus narrowly on methodological questions around causality and generalizability in qualitative methods.

In science, the inquiry into generalizable causal processes is greatly privileged, and for good reason: causality and generalizability taken together can provide the basis for policy action and for mounting interventions, among other things. One can also think of knowledge of causal processes as providing predictive power. However, not all kinds of data lend themselves readily to imputing causality, or even to making generalizable descriptive statements. For example, causal inference in most historical research remains out of bounds. Keeping this in mind, I analyze how qualitative methods within the social sciences (can) interrogate causality and generalizability.

Causality

Hume felt that there was no place for causality within empiricism. He argued that the most we can find is that “the one [event] does actually, in fact, follow the other”. On this view, causality is nothing but an illusion occasioned when events follow each other with regularity. That formulation, however, did not prevent Hume from believing in scientific theories, for he felt that regularly occurring constant conjunctions were a sufficient basis for scientific laws. Theoretical advances in the 200 or so years since Hume have provided a deeper understanding of causality, including a process-based understanding and an experimental understanding.

Donald Rubin, a professor of statistics at Harvard, defines causal effect thus (Estimating Causal Effects of Treatments in Randomized and Nonrandomized Studies, Journal of Educational Psychology, 1974): “Intuitively, the causal effect of one treatment, E, over another, C, for a particular unit and an interval of time from t1 to t2 is the difference between what would have happened at time t2 if the unit had been exposed to E initiated at t1 and what would have happened at t2 if the unit had been exposed to C initiated at t1: ‘If an hour ago I had taken two aspirins instead of just a glass of water, my headache would now be gone,’ or ‘Because an hour ago I took two aspirins instead of just a glass of water, my headache is now gone.’ Our definition of the causal effect of the E versus C treatment will reflect this intuitive meaning.”

It is important to note that the Rubin Causal Model (RCM), as presented above, depicts an elementary causal connection between two Boolean variables: one level of the explanatory variable (two aspirins) with a single effect (the headache is gone). Oftentimes the variables take multiple values, and to study that we need to mount a series of experiments. Similarly, one may want to analyze the effect of a variable in different subgroups of a population, for example, the effect of aspirin on women as compared to men. Both scenarios merely highlight the problems in coming up with a robust understanding of the causal process between (here) two variables. Fundamentally, though, the RCM understanding of causal effect remains unchanged.
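To make the counterfactual logic concrete, here is a minimal sketch in Python. The names and the 0/1 coding are mine, purely illustrative, and not from Rubin: each unit carries two potential outcomes, one under E and one under C, and the unit-level causal effect is their difference.

```python
# A minimal sketch of the Rubin Causal Model for a single unit.
# All names (Unit, y_if_E, y_if_C) are illustrative, not Rubin's.

from dataclasses import dataclass

@dataclass
class Unit:
    """One unit, with both potential outcomes over the interval t1..t2."""
    y_if_E: bool  # headache gone at t2 had the unit taken two aspirins (E) at t1
    y_if_C: bool  # headache gone at t2 had the unit taken a glass of water (C) at t1

def causal_effect(unit: Unit) -> int:
    """Unit-level causal effect of E versus C: the difference between
    the two potential outcomes (coded 1 = headache gone, 0 = not gone)."""
    return int(unit.y_if_E) - int(unit.y_if_C)

unit = Unit(y_if_E=True, y_if_C=False)
print(causal_effect(unit))  # 1: taking the aspirins caused the headache to go away

# In reality only ONE of the two potential outcomes is ever observed for
# a given unit, so the unit-level effect is well defined but never
# directly measurable -- which is why experiments compare groups instead.
```

The closing comment is the crux: since only one potential outcome is ever observed per unit, experimental designs estimate average effects across comparable groups rather than measuring unit-level effects directly.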

The Rubin Causal Model provides a counterfactual, deterministic understanding of causality that is firmly grounded in the logic of experimental design. The RCM formulation can be expanded to include a probabilistic understanding of causal effect. As a note: a probabilistic understanding of causality implicitly accepts that certain parts of the explanation are still missing, and hence lacks a necessary and sufficient condition, though attempts have been made to include necessary and sufficient clauses in probabilistic statements. David Papineau (Probabilities and Causes, Journal of Philosophy, 1985) writes, “Factor A is a cause of some B just in case it is one of a set of conditions that are jointly and minimally sufficient for B. In such a case we can write A&X -> B. In general, there will also be other sets of conditions minimally sufficient for B. Suppose we write their disjunction as Y. If now we suppose further that B is always determined when it occurs, that it never occurs unless one of these sufficient sets (let’s call them B’s full causes) occurs first, then we have (A&X) ∨ Y ↔ B. Given this equivalence, it is not difficult to see why A’s causing B should be related to A’s being correlated with B. If A is indeed a cause of B, then there is a natural inference to Prob(B/A) > Prob(B/-A): for, given A, one will have B if either X or Y occurs, whereas without A one will get B only with Y. And conversely it seems that if we do find that Prob(B/A) > Prob(B/-A), then we can conclude that A is a cause of B: for if A didn’t appear in the disjunction of full causes which are necessary and sufficient for B, then it wouldn’t affect the chance of B occurring.”

Papineau’s definition is a bit archaic and doesn’t quite cover the set of cases we define as probabilistically causal. John Gerring (Social Science Methodology: A Criterial Framework, 2001: 127, 138; emphasis in original) provides a definition of probabilistic causality: ‘[c]auses are factors that raise the (prior) probabilities of an event occurring. (…) [Hence] a sensible and minimal definition: X may be considered a cause of Y if (and only if) it raises the probability of Y occurring.’
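A minimal sketch of what Gerring’s criterion amounts to in practice, assuming Boolean data (the data and function name below are invented for illustration): estimate P(Y|X) and P(Y|not-X) as relative frequencies and check whether X raises the probability of Y.

```python
# A toy check of Gerring's probabilistic criterion: X counts as a cause
# of Y if it raises the probability of Y occurring. The data here are
# fabricated purely for illustration.

def conditional_prob(y: list[bool], given: list[bool]) -> float:
    """Estimate P(Y | condition) as a relative frequency."""
    matched = [yi for yi, gi in zip(y, given) if gi]
    return sum(matched) / len(matched)

X = [True, True, True, False, False, False, True, False]
Y = [True, True, False, False, True, False, True, False]

p_y_given_x = conditional_prob(Y, X)                      # 0.75
p_y_given_not_x = conditional_prob(Y, [not x for x in X])  # 0.25

# X 'raises the probability' of Y if this difference is positive.
print(p_y_given_x > p_y_given_not_x)  # True
```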

A still more ‘sensible’ yet ‘minimal’ definition of causality can be found in Gary King et al. (Designing Social Inquiry: Scientific Inference in Qualitative Research, 1994: 81-82): ‘the causal effect is the difference between the systematic component of observations made when the explanatory variable takes one value and the systematic component of comparable observations when the explanatory variable takes on another value.’
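Read as an estimation recipe, this definition amounts to comparing the systematic component (say, the mean) of the outcome at two values of the explanatory variable. A toy sketch, with invented data:

```python
# The King et al. formulation as a difference of systematic components,
# with the sample mean standing in for the systematic component.
from statistics import mean

# Outcomes observed when the explanatory variable takes one value ...
y_when_x_is_1 = [3.1, 2.8, 3.4, 3.0]
# ... and comparable observations when it takes another value.
y_when_x_is_0 = [2.1, 1.9, 2.4, 2.2]

# The estimated causal effect is the difference between the two means.
print(mean(y_when_x_is_1) - mean(y_when_x_is_0))
```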

A Brief Discussion of Causal Inference in Qualitative and Quantitative Methods

While the above formulations of causality (the Rubin Causal Model, Gerring, and King) seem more quantitative, they apply equally to qualitative methods. A parallel understanding of causality, used much more often in qualitative social science, is a process-based one, wherein one traces the causal process to construct a theory. Simplistically, in quantitative methods in the social sciences one often deduces the causal process, while in qualitative methods the understanding of the causal process is induced from deep and close interaction with the data.

Both the deductive and the inductive process, however, are rife with problems. Deduction privileges formal rules (statistics) that straitjacket the deductive process so that deductions are systematic and cognizant of explicit assumptions (like normal distribution of the data, linearity of the effect, lack of measurement error, etc.). The formal deductive process bestows a host of appealing qualities, like generalizability (when an adequate random sample of the population is taken) and even a systematic handle on causal inference. In quantitative methods, the methodological assumptions for deduction are cleanly separated from the data. The same separation, between a rather arbitrarily chosen statistical model and the data, however, makes the discovery process less than optimal, and sometimes deeply problematic. Recent research by Ho, Imai, King, and Stuart (Matching as Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference, Political Analysis, 2007), and methods like Bayesian Model Averaging (Hoeting, Madigan, Raftery, and Volinsky, Statistical Science, 1999), have gone some way toward mitigating problems with model selection.
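To illustrate the preprocessing idea behind matching (this is a toy sketch, not Ho et al.’s actual estimator, and all data and names are invented): prune the sample so that treated and control units are comparable on observed covariates before any model is fit, which reduces how much the final answer depends on the particular model chosen.

```python
# A toy sketch of matching as nonparametric preprocessing, in the spirit
# of (but far simpler than) Ho, Imai, King, and Stuart (2007): for each
# treated unit, keep only its nearest control on the covariate, then
# compare outcomes across the pruned sample.

def match_nearest(treated: list[tuple[float, float]],
                  control: list[tuple[float, float]]) -> list[tuple[float, float]]:
    """For each treated (covariate, outcome) pair, keep the control unit
    whose covariate value is closest (matching with replacement)."""
    return [min(control, key=lambda c: abs(c[0] - t[0])) for t in treated]

treated = [(1.0, 5.0), (2.0, 6.5), (3.0, 8.0)]              # (covariate, outcome)
control = [(0.9, 4.1), (2.2, 5.4), (2.9, 6.9), (9.0, 1.0)]  # last unit is never a good match

matched = match_nearest(treated, control)
effect = sum(t[1] - c[1] for t, c in zip(treated, matched)) / len(treated)
print(effect)  # mean treated-minus-matched-control difference, after pruning
```

The point of the preprocessing is visible in the data: the control unit with covariate 9.0 is simply never used, so it cannot distort whatever parametric model is fit afterwards.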

No such sharp delineation between method and data exists in qualitative research, where data are collected iteratively, as in studies using iterative abstraction (Sayer 1981, 1992; Lawson 1989, 1995) or grounded theory (Glaser 1978; Strauss 1987; Strauss and Corbin 1990), until they explain the phenomenon singled out for explanation. Grounded, data-driven qualitative methods often run the risk of modeling in particularistic aspects of the data, which impedes their ability to arrive at a generalizable causal model. This is only one kind of qualitative research, for there are others who do qualitative analysis in the vein of experiments, for example with a 2×2 design, and yet others who test a priori assumptions by analytically controlling for variables in a verbal regression equation to get at the systematic effect of the explanatory variable on the explanandum. Perhaps more than the grounded theory method, this pseudo-quantitative style of qualitative analysis runs the risk of reaching deeply problematic conclusions based on the cases used.