Scientific inference, part 3 of 4

April 1, 2009 at 7:01 am

Yesterday I talked about positivism, which a lot of empirically-minded sociologists and other scientists think is a nifty term. What we tend not to know is that W. V. Quine published “Two Dogmas of Empiricism” in the Philosophical Review in 1951 and basically destroyed positivism. The two dogmas in question are 1) that there is a meaningful boundary between the synthetic and the analytic and 2) that a discrete synthetic statement can be evaluated in isolation. Quine felt that these two dogmas really share the same conceptual flaw, but he treats them as relatively distinct so as to make his critique isomorphic to positivism itself. On the analytic/synthetic dichotomy, Quine’s critique is basically that it gets very messy to distinguish a definition from a finding, since they take the same grammatical form. Even more radically, he claimed that the profound weirdness of quantum physics demonstrates that even abstract logic is an empirical question.

The second critique is more directly of interest to practicing scientists. This “underdetermination” problem shares a lot with an earlier argument by Pierre Duhem and so is sometimes known as the Duhem-Quine thesis. The positivists understood a version of this, called the “auxiliary hypotheses” problem, but they underestimated how problematic it was. When we state a hypothesis, it is implied that the hypothesis is expected to hold ceteris paribus. The assumption is that the evidence will test a hypothesis if (and only if) the auxiliary hypotheses are well behaved. This raises the problem that when we encounter evidence that is facially contrary to a hypothesis we cannot be sure whether this is really evidence against the hypothesis or only one of the ceteri making mischief by failing to remain parilis.

One of the best known problems of this sort was Marx’s various failed historical predictions, most notably that the revolution would occur in an industrial nation (but also other expectations such as the immiseration of the proletariat). Many people, including both Popper and Gramsci, took this to mean that Marx was simply wrong. However many Marxists argued that there was nothing wrong with the theory of dialectical materialism as such; it merely hadn’t explicitly anticipated the skill and charisma of Lenin or the agency that the bourgeois state showed in creating the welfare state to defuse class struggle in the industrial world. Thus in this imagining Marx’s hypothesis (that the Germans or English would have a socialist revolution) was suppressed by the auxiliary hypotheses that unusually capable leaders would not show up in backwaters that had only recently abandoned serfdom, or that the welfare state would not save capitalism from itself. The positivists didn’t see this case as especially problematic because they thought the Marxist apologia of auxiliary hypotheses was embarrassingly ad hoc.

The Duhem-Quine underdetermination thesis is that auxiliary hypotheses are literally infinite. Some of the examples of these infinite auxiliary hypotheses philosophers give are kind of silly, like “elves do not cause equipment to give inaccurate readings on Tuesdays,” but it’s hard to say whether blaming elves is really any sillier than claiming that men make history but not in the circumstances of their choosing except for V. I. Lenin because he’s just so awesome. However positivists would have no problem saying that when a scientist makes such an interpretation it is clearly the fault of the scientist and not of either elves or Lenin. Unfortunately, Duhem and Quine argued that there are a very large number of very plausible auxiliary hypotheses. Prime among these are the innumerable ways in which data collection can suffer measurement error. Furthermore, many of our measurement tools are themselves based on theories which conceivably could be wrong. For example, say your hypothesis is that if you heat a gas in an airtight and rigid chamber the pressure will rise (Gay-Lussac’s law — note that Boyle’s law, with which it is often confused, concerns pressure and volume at constant temperature) and your barometer finds that the pressure does not rise. This could be interpreted as evidence against Gay-Lussac’s law, or it could be that Gay-Lussac was right and you have one of the following problems:

• your chamber is not airtight and/or rigid

• your heater and/or thermometer are broken

• your barometer is broken

• barometers don’t actually measure pressure

• an infinite number of other more or less plausible auxiliary hypotheses
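The arithmetic behind the example is worth seeing concretely. The following sketch (with invented numbers) shows how the pressure-temperature law predicts a clear signal, and how a single misbehaving auxiliary hypothesis — here, a hypothetical slow leak in the "airtight" chamber — can erase it entirely:

```python
# Illustrative sketch, not a real experiment: all figures are made up.

def predicted_pressure(p1, t1_kelvin, t2_kelvin):
    """Gay-Lussac's law: at fixed volume P/T is constant, so P2 = P1 * T2/T1."""
    return p1 * t2_kelvin / t1_kelvin

def leaky_pressure(p1, t1_kelvin, t2_kelvin, leak_fraction):
    """Same law, but a leak lets some fraction of the gas escape while heating,
    violating the auxiliary hypothesis that the chamber is airtight."""
    return predicted_pressure(p1, t1_kelvin, t2_kelvin) * (1 - leak_fraction)

p1 = 101.3              # kPa, starting at roughly atmospheric pressure
t1, t2 = 293.0, 350.0   # heating from about 20 C to about 77 C, in kelvin

ideal = predicted_pressure(p1, t1, t2)        # the law predicts a clear rise
observed = leaky_pressure(p1, t1, t2, 0.16)   # a 16% leak cancels it almost exactly

print(round(ideal, 1))     # ~121.0 kPa: what the theory predicts
print(round(observed, 1))  # ~101.6 kPa: "no rise" -- the law looks falsified
```

The barometer faithfully reports that pressure did not rise; nothing in the reading itself tells you whether to blame the law or the leak.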

If we ignore the infinite number of unstated auxiliary hypotheses and focus on the specific ones, you can imagine testing each of them in turn. For instance, you could verify that your chamber is indeed airtight by putting a rat in it and seeing that it suffocates. But these verifications are themselves beset with problems, such as that maybe the rat had a heart attack despite an abundance of oxygen, or perhaps it takes more ventilation to sustain a rat than to relieve a slowly expanding gas. The problem is recursive, so that ultimately you can always spin a (progressively more convoluted) story that your original hypothesis was correct. In some cases this is actually a good thing to do, since things like sloppy lab work are pretty common and if we never blamed anomalous results on auxiliary hypotheses we’d soon run out of theories. Here’s a true story: when I was in high school physics I once “measured” gravity to be almost 11 m/s^2. It never occurred to me that I had just disproven the standard figure of 9.8 m/s^2; rather I thought it rather obvious that I had made a mistake.
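How could a high-school drop experiment arrive at 11 m/s^2? One plausible route (a guess on my part, with invented numbers) is a small timing error: from h = (1/2)gt², we solve g = 2h/t², and because t is squared, a stopwatch clicked a few hundredths of a second early inflates the result dramatically. The auxiliary hypothesis at fault is "my stopwatch readings are accurate," not the law of gravity:

```python
import math

def g_from_drop(height_m, time_s):
    """From h = (1/2) g t^2, solve for g = 2h / t^2."""
    return 2 * height_m / time_s ** 2

height = 1.5                             # drop height in meters (invented)
true_time = math.sqrt(2 * height / 9.8)  # ~0.553 s if g really is 9.8 m/s^2
measured_time = true_time - 0.03         # stopwatch stopped 30 ms early

print(round(g_from_drop(height, true_time), 1))      # 9.8 -- the accepted figure
print(round(g_from_drop(height, measured_time), 1))  # ~11.0 -- the "anomaly"
```

A 30-millisecond slip, well within human reaction-time error, is enough to turn 9.8 into roughly 11, which is why blaming the auxiliary hypothesis was obviously the right call.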

While it’s easiest to illustrate with the hard sciences, the issue of theory-dependent tools is also a problem for social science. For instance, before we can get to the real problems, social scientists have to implicitly or explicitly decide issues like whether income is an adequate proxy for total consumption, how to reduce millions of jobs into hundreds of occupations and ultimately into something as manageable as the EGP class schema or the ISEI synthetic prestige index, the magnitude of social desirability bias, and how long (if ever) it takes informants to relax and act normally in front of an ethnographer. In my own substantive interest of radio, everyone agrees what the key hypothesis is (broadcasting monopolies create diverse content) and what the evidence shows facially (yes they do), but there is a big debate over an essentially auxiliary hypothesis about the quality of the evidence (whether “format” is a meaningful proxy for content).

Despite his rejection of positivism, Quine was no nihilist or skeptic. Indeed, he was explicit about offering a post-positivist way to recover empiricism. Quine felt that ultimately we cannot test any discrete hypothesis but only the entire system of science. However, even in limited and narrow cases we must accommodate the evidence, so that if we wish to salvage a particular hypothesis against contradictory evidence we must displace the doubt onto some more or less specific auxiliary hypotheses. Quine speaks of belief as a web, fabric, or force field and treats surprising observations as not necessarily discrediting any particular belief but prodding the field as a whole and deforming it. There is a loose coupling between observations and beliefs, so the main hypothesis may withstand the contrary observation, but the anomaly’s evidentiary weight has to go somewhere. Implicit in Quine’s positive agenda is that parsimony is a worthy goal (lest evidence be diffused through the web indefinitely), but it is debatable whether parsimony is a distinctly scientific value or merely an aesthetic principle. This post-positivist empiricist agenda (usually called “holism”) is a bit fuzzy, and its ambiguities would not be resolved until Kuhn.

