Causality and Generalization in Qualitative and Quantitative Methods: A Second Look (Part 2)

19 Nov

King et al. (1994: 75, note 1): “[a]t its core, real explanation is always based on causal inferences.”

Limiting the Discussion to Positivist Qualitative Methods

Qualitative methods can be roughly divided into positivist (case studies, etc.) and interpretive. I will limit my comments to positivist qualitative methods. The differences between positivist qualitative and quantitative methods “are only stylistic and are methodologically and substantively unimportant” (King et al., 1994: 4). Both share “an epistemological logic of inference: they all agree on the importance of testing theories empirically, generating an inclusive list of alternative explanations and their observable implications, and specifying what evidence might infirm or affirm a theory” (King et al., 1994: 3).

Causal Inference in Empirical Data

To impute causality, science relies either on evidence about the causal process or on clever experimental design that obviates (perhaps more correctly, mitigates) the need to know the process. Even in the latter case, researchers are often encouraged to have a story that explains the process and to test the variables implicated in that story.

Experimentation provides one of the most reliable ways to impute causality. However, for experiments to have value outside the lab, the ‘treatment’ must be ‘ecological’, that is, it must reflect the typical values the variable takes in the world. The effect of news, for example, is best measured with real-life news clips or something similar, and the findings must hold up in the field (through surveys or field experiments). The problem is that most questions in social science cannot be studied experimentally. Brady et al. (2001: 8) write: “A central reason why both qualitative and quantitative research are hard to do well is that any study based on observational (i.e., non-experimental) data faces the fundamental inferential challenge of eliminating rival explanations.” While most social science papers (and certainly practitioners) talk about ‘eliminating’ rival explanations, one doesn’t quite have to do that. Social scientists oftentimes include variables reflecting rival explanations as ‘straw variables’ (after the straw man they will knock down at the end) in a regression equation, to show how much variance is ‘explained’ by their variable of choice as compared to the straw variables.
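A minimal sketch of that practice, using simulated data and hypothetical variable names (media_exposure as the variable of choice; education and income as the rival ‘straw variables’): comparing the model’s R² with and without the variable of choice gives a crude sense of how much variance each explanation accounts for.

```python
# Sketch: comparing variance 'explained' by the variable of choice vs.
# rival-explanation ('straw') variables. All names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "education": rng.normal(size=n),
    "income": rng.normal(size=n),
    "media_exposure": rng.normal(size=n),
})
# Simulated outcome: media_exposure matters most, the rivals matter a little.
df["attitude"] = (0.6 * df["media_exposure"]
                  + 0.2 * df["education"]
                  + 0.1 * df["income"]
                  + rng.normal(size=n))

full = smf.ols("attitude ~ media_exposure + education + income", data=df).fit()
rivals_only = smf.ols("attitude ~ education + income", data=df).fit()

# Incremental R^2 attributable to the variable of choice, over the rivals.
print("Full model R^2:  ", round(full.rsquared, 3))
print("Rivals-only R^2: ", round(rivals_only.rsquared, 3))
print("Incremental R^2: ", round(full.rsquared - rivals_only.rsquared, 3))
```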

In quantitative methods, imputing causality involves a variety of assumptions, including a ‘ceteris paribus’ clause (all other things being equal, which may mean assigning away everything ‘else’ to randomization) and the assumption that the error term (the non-systematic part) is uncorrelated with the independent variables, so that the correlation between an explanatory variable x and the dependent variable y can only be explained as x’s effect on y. In analyzing survey data, one ‘controls’ for variables using regression or some similar method. Regression models come with a variety of assumptions, and there is a penalty for violating each of them. In qualitative methods, one can either analytically (or, where possible, empirically) control for variables, or trace the process.
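A minimal sketch of what ‘controlling’ for a variable buys you, again with simulated data and hypothetical names: a confounder z drives both x and y, so the naive bivariate regression violates the assumption that the error term is uncorrelated with x and overstates x’s effect, while including z recovers something close to the true effect (assuming, of course, the other model assumptions hold).

```python
# Sketch: omitted-variable bias and 'controlling' via regression.
# Simulated data; the true effect of x on y is 0.5.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
z = rng.normal(size=n)                        # confounder ('everything else')
x = 0.8 * z + rng.normal(size=n)              # x depends on z
y = 0.5 * x + 0.7 * z + rng.normal(size=n)    # y depends on both
df = pd.DataFrame({"x": x, "y": y, "z": z})

naive = smf.ols("y ~ x", data=df).fit()        # error term correlated with x via z
controlled = smf.ols("y ~ x + z", data=df).fit()

print("Naive estimate of x's effect:     ", round(naive.params["x"], 3))
print("Controlled estimate of x's effect:", round(controlled.params["x"], 3))
```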

Qualitative Methods: Some Handles on Generalizability

Traditional probability sampling theories are built on the highly conservative assumption that we know nothing about the world, and that the only systematic way to learn about it is through ‘random sampling’, a process that delivers ‘representative data’ on average. Newer sampling theories, however, exploit what we already know about the world by selectively over-sampling the things (or people) we are truly clueless about and under-sampling where we already have a good idea. For example, polling organizations under-sample self-described partisans and over-sample non-partisans. This provides a window for positivist qualitative methods to make generalizable claims: qualitative methods can overcome their limitations and make legitimate generalizable claims if their sampling reflects the extent of prior knowledge about the world.
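A minimal sketch of that over-/under-sampling logic, with made-up strata shares and sampling fractions: over-sample the group whose behavior we know least about, then use design weights (the inverse of each unit’s selection probability) so the estimate still refers to the population.

```python
# Sketch: disproportionate stratified sampling with design weights.
# Strata shares and sampling fractions are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
population = pd.DataFrame({
    "partisan": rng.random(100_000) < 0.7,   # 70% self-described partisans (hypothetical)
})
# Partisans' support is nearly certain; non-partisans' is genuinely uncertain.
population["support"] = np.where(
    population["partisan"],
    rng.random(len(population)) < 0.9,
    rng.random(len(population)) < 0.5,
)

# Under-sample partisans (1%), over-sample non-partisans (4%).
rates = {True: 0.01, False: 0.04}
parts = []
for is_partisan, grp in population.groupby("partisan"):
    parts.append(grp.sample(frac=rates[is_partisan], random_state=3))
sample = pd.concat(parts)

# Design weight = 1 / probability of selection.
sample["weight"] = sample["partisan"].map({k: 1 / v for k, v in rates.items()})

weighted_est = np.average(sample["support"], weights=sample["weight"])
print("Population support:", round(population["support"].mean(), 3))
print("Weighted estimate: ", round(weighted_est, 3))
```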

Beyond sampling, there are analytical ways of getting a handle on the variables that ‘moderate’ the effect of a particular variable we are interested in studying. For example, we can analytically think through how education will affect (or not affect) racist attitudes. Analytical claims are based on deductive logic and a priori assumptions or knowledge; hence their success is contingent upon the accuracy of that knowledge and the correctness of the logic.
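Where data permit, one common way to formalize such a moderation claim is an interaction term. The sketch below uses simulated data and hypothetical variables: exposure to a hostile media frame raises racial resentment, and education dampens that effect.

```python
# Sketch: education as a moderator of a (hypothetical) framing effect on
# racial attitudes, via an interaction term. All data simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 1500
df = pd.DataFrame({
    "education": rng.integers(8, 21, size=n),   # years of schooling
    "framing": rng.integers(0, 2, size=n),      # exposed to a hostile frame or not
})
# Simulated attitudes: the frame's effect shrinks as education rises.
df["racial_resentment"] = (0.8 * df["framing"]
                           - 0.04 * df["framing"] * df["education"]
                           - 0.02 * df["education"]
                           + rng.normal(size=n))

model = smf.ols("racial_resentment ~ framing * education", data=df).fit()
# A negative framing:education coefficient indicates education dampens the effect.
print(model.params[["framing", "framing:education", "education"]])
```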

One problem that has been repeatedly pointed out about qualitative research is its propensity to select on the dependent variable. Selecting on the dependent variable leaves out cases where, for example, the dependent variable does not take extreme values. Selection bias can lead to misleading conclusions not only about causal effects but also about causal processes. It is therefore important not to base one’s analysis on a truncated dependent variable. One way to systematically drill down to causal processes in qualitative research is to start with the broadest palette, either in prior research or elsewhere, to grasp the macro-processes and other variables that may affect the case, and then, cognizant of the particularistic aspects of the case, analyze the microfoundations or micro-processes present in the system.
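A minimal simulation of why truncating the dependent variable misleads: with a true slope of 1.0, restricting the analysis to cases where the outcome takes extreme (high) values sharply attenuates the estimated relationship.

```python
# Sketch: bias from selecting on the dependent variable.
# Simulated data; the true slope of y on x is 1.0.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 5000
x = rng.normal(size=n)
y = 1.0 * x + rng.normal(size=n)

def slope(x, y):
    # OLS slope of y on x (with an intercept).
    return sm.OLS(y, sm.add_constant(x)).fit().params[1]

print("Slope, all cases:          ", round(slope(x, y), 3))
# Keep only 'extreme' outcomes (e.g., only the revolutions that happened).
keep = y > 1.0
print("Slope, selecting on high y:", round(slope(x[keep], y[keep]), 3))
```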