On theory and observation (5): Testing theory-nets

One of the most interesting aspects of Joseph Sneed’s structuralist view of science is the one I mentioned in passing in my last entry: the fact that a scientific theory must not be understood as a mere combination or conjunction of different propositions (its “axioms”, plus the indefinite number of assertions they make about singular systems or objects), but as one single statement referring to the whole set of systems falling under its scope. This gives scientific theories a ‘holistic’ flavour that was very consistent with Thomas Kuhn’s idea that theories (or ‘paradigms’, as he preferred to say) are accepted or rejected as a whole, rather than piecewise (and with the mysticism typical of the sixties). The question, of course, regarding the topic of this series, is how such a gargantuan theoretical edifice can be subject to empirical testing. Remember that, in the case of classical mechanics, the theory amounts to the singular claim that the mega-system consisting of the set of all the kinematic systems of the universe can be enlarged with one single, ‘universal’ function that assigns to each object –no matter in how many systems it appears– a pair of numbers expressing its mass and the forces exerted upon it, such that the resulting mechanical systems obey Newton’s second law. How do we manage to know whether what the theory asserts about the world is true or false, or at least approximately true?

Photo: Pietro Jeng / Unsplash

Things got more complicated after other philosophers (particularly Ulises Moulines and Wolfgang Balzer) refined Sneed’s description of theories to allow for the fact that these typically include many laws or “axioms” in a nested way. For example, classical mechanics includes a lot of “force laws” (Newton’s law of gravitation, Coulomb’s law –about electrically charged objects–, Hooke’s law –about springs–, etc.), and these specific laws are not supposed to apply to all mechanical systems. This led to the view that scientific theories are better represented as nets: the ‘universal’ set of all physical systems the theory is about is not homogeneous, but can be decomposed into smaller sets that have to obey (according to the theory) additional laws besides the one ‘defining’ the theory (in our example, the law that force equals mass times acceleration). It is also possible that some of these subsets are in their turn divided into other sets that obey still more specific laws. And, of course, all these sets can in some cases intersect, making the metaphor of the “net” more adequate than that of a “tree”.
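To make the net structure concrete, here is a minimal sketch in Python (the `LawNode` class, the numerical tolerance, and the toy laws are my own illustrative inventions, not part of the structuralist formalism): the core law applies to every system, special laws apply only to the subsets within their scope, and scopes may overlap.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

System = Dict[str, float]  # a toy empirical system as a bag of measured values

@dataclass
class LawNode:
    """One node of a theory-net: a law together with its intended scope."""
    name: str
    applies_to: Callable[[System], bool]   # which systems fall under this law
    holds_in: Callable[[System], bool]     # does the system satisfy the law?
    specializations: List["LawNode"] = field(default_factory=list)

def satisfies_net(node: LawNode, s: System) -> bool:
    """A system must obey the node's law and, recursively, every
    specialization whose scope covers it; scopes may overlap, which is
    why the structure is a net rather than a tree."""
    if not node.applies_to(s):
        return True                        # the law says nothing about this system
    if not node.holds_in(s):
        return False
    return all(satisfies_net(child, s) for child in node.specializations)

# Illustrative net: Newton's second law as the core, with a toy 'spring law'
# as one specialization that only applies to systems containing a spring.
newton = LawNode(
    name="F = m*a",
    applies_to=lambda s: True,
    holds_in=lambda s: abs(s["F"] - s["m"] * s["a"]) < 1e-9,
    specializations=[
        LawNode(
            name="Hooke: F = -k*x",
            applies_to=lambda s: "k" in s,
            holds_in=lambda s: abs(s["F"] + s["k"] * s["x"]) < 1e-9,
        )
    ],
)

print(satisfies_net(newton, {"F": 6.0, "m": 2.0, "a": 3.0}))  # True
```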

Taking all this into account, what a scientific theory (at least in mathematical physics) claims about the world is something like the following:

there is one pair of mathematical functions L and T, such that

–the domain of L is the set of all the kinds of empirical systems (KES’s) the theory is about,

–the domain of T is the union of the domains of all the individual empirical systems the theory refers to,

–T assigns to each entity a value for each theoretical concept (or magnitude) the theory contains,

–L assigns to each KES a specific theoretical law, and, finally,

–given the values assigned by T, all the KES’s obey the laws that L assigns to them.

Stated in a plainer way, what the theory asserts is that there is some combination of higher-level and lower-level theoretical laws that, once applied to the kinds of systems each law is supposed to apply to, makes the resulting measurements of the theoretical and non-theoretical (or ‘observational’) physical magnitudes of all those systems consistent with one another.
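As a toy rendering of that claim’s logical shape (a sketch under invented assumptions: finite candidate spaces and systems given as plain dictionaries, neither of which real theories provide), the theory asserts that some pair (L, T) makes every system of every kind come out right:

```python
from itertools import product

def find_witness(candidate_Ls, candidate_Ts, kinds):
    """kinds: {kind_name: [systems]}. Each candidate L maps a kind name to a
    law, where law(system, T) -> bool; each candidate T assigns theoretical
    values to the entities appearing in the systems. The theory's claim is
    exactly that some witnessing (L, T) pair exists; return one if found,
    else None."""
    for L, T in product(candidate_Ls, candidate_Ts):
        if all(L[kind](s, T) for kind, systems in kinds.items() for s in systems):
            return L, T
    return None
```

In real science, of course, the candidate spaces are not handed to us as finite lists, and that is precisely the testing problem discussed below.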

Now, I beg you to focus on the quantifying expressions in that statement (“there is one pair…”, “all the kinds…”, “each entity…”): they are simply our well-known existential (∃) and universal (∀) quantifiers. Surely you will also remember Karl Popper’s old argument about why scientific theories are not verifiable, but falsifiable: since physical laws are universal propositions (claiming that all objects or systems of a certain kind obey them), and since we cannot check all the systems and objects of the universe, we cannot prove that the theory or law is right; but we can prove that it is false if we find at least one counterexample (and hence, scientists should pursue the empirical refutation of the conjectural laws they propose, rather than their confirmation; i.e., scientists should be falsificationists). Popper’s argument is based on the logical difference between the universal and the existential quantifiers, and hence it presupposes, sensu contrario, that existential claims are verifiable (but not falsifiable): if you claim that there is one thing such-and-such, then you can prove that what you say is true just if you happen to find one example of a such-and-such thing (but nobody can prove that you are wrong just because such an example has not been found till now).

The problem with the structuralist version of scientific theories is that their logical form combines the existential and the universal quantifiers in one single statement, so that the logical schema of a scientific theory (in its most minimalistic expression) would say something like:

(1) ∃x∀yTxy

(remember: the x’s stand here for the possible conjunctions of laws the theory might have in its final form –so to say–, and the y’s for the possible empirical systems and kinds of empirical systems the theory might be applied to). I say this is a problem because a statement like (1) is neither falsifiable (because of the ∃) nor verifiable (because of the ∀). One can never empirically prove that the theory is false… because even if all attempts to find ‘the right’ combination of theoretical laws have failed (I mean, laws consistent with the ‘central’ law of the theory, like Newton’s second law in the case of classical mechanics), this does not mean that there is no successful combination just around the corner. And one can never empirically prove that the theory is true… because even if you have found a combination of laws that successfully describes the systems and objects examined till now, perhaps the next object or system you measure will give results inconsistent with those laws. (By the way, this is the situation typical of science according to Lakatos’ methodology of scientific research programmes: a structuralist ‘theory’ would rather be like the plan of ‘completing’ a few ‘core laws’ with different ‘protective belts’ of ‘auxiliary laws’ till scientists find one satisfactory completion.) So, how can a proposition like (1) be empirically tested?
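Spelling the point out in plain first-order logic (a standard rewriting, nothing specific to structuralism): negating (1) swaps the quantifiers, so each side of the dispute faces an unbounded task.

```latex
(1):\qquad \exists x\,\forall y\; Txy
    % verifying (1) requires checking EVERY system y for some chosen x
\neg(1):\qquad \forall x\,\exists y\;\neg Txy
    % falsifying (1) = verifying its negation, which requires a refuting
    % system y for EVERY candidate law-combination x
```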

My suggestion is to adapt to this case something that Popper already had in mind when he realized that, in fact, real scientific theories are always ‘logically immune’ to falsification, because they are not tested in isolation, but always in combination with some other laws or facts, and hence one might always propose some ad hoc explanation of why the theory ‘apparently’ does not work in the case at hand. Popper insists that, actually, theories are falsifiable by convention, i.e., by the scientific community’s decision not to use that kind of ad hoc strategy; or, in other words, by its precommitment to a limited number of ways in which the theory can be ‘saved’. In our case, this translates into the fact that the set of possible ‘special laws’ that might be considered ‘natural extensions’ of the theory is more or less fixed and limited. But we could also add that there is a similar, parallel convention according to which there is no need to examine all the possible empirical systems in order to decide that the theory is right. Hence, scientific theories are both unverifiable and unfalsifiable by their logical form, but they can be made both verifiable and falsifiable by convention.

Once this convention is in force, its application allows us to interpret the ways scientists typically test their theories as forms of what classical philosophers called induction. In particular, there are three types of inductive strategies that we can discern in the process of testing a proposition like (1):

  1. Statistical (or Baconian) induction, the simplest type, which consists of just checking object after object till you are satisfied with the truth of a proposition like “all A’s are B’s” (for example, “all planets obey Kepler’s laws”).
  2. Enumerative (or Aristotelian) induction, which consists of checking all the types that we assume a certain kind of object or system to comprise, and concluding that all these systems are such-and-such because we have proved that all the existing kinds of them are so (for example, “all chemical isotopes have an integer atomic mass”). And, lastly:
  3. Eliminative induction (or “inference to the best explanation”), when all possible laws have been disconfirmed by experience except the surviving one (which would amount, for example, to the consolidation of a big theory as the ‘triumphant’ paradigm after a Kuhnian scientific revolution); a small sketch of this strategy follows the list.
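Here is a minimal sketch of that third strategy, assuming the convention above has fixed a closed, finite pool of candidate laws (the pool, the data, and the tolerance below are invented placeholders):

```python
def eliminative_induction(candidates, observations):
    """candidates: {name: law}, where law(obs) -> bool says whether the law
    is compatible with an observation. Returns the laws that survive every
    observation; if exactly one survives, it is accepted as the 'best
    explanation' -- an inference licensed only by the convention that no
    candidate outside the pool is admissible."""
    return {name: law for name, law in candidates.items()
            if all(law(obs) for obs in observations)}

# Toy usage: inverse-square vs. inverse-cube 'force laws' against fake data.
data = [(1.0, 4.0), (2.0, 1.0)]            # (distance, measured force)
candidates = {
    "inverse_square": lambda o: abs(o[1] - 4.0 / o[0] ** 2) < 1e-6,
    "inverse_cube":   lambda o: abs(o[1] - 4.0 / o[0] ** 3) < 1e-6,
}
print(eliminative_induction(candidates, data))  # only 'inverse_square' survives
```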

References

Balzer, W., C. U. Moulines, and J. Sneed, 1987, An Architectonic for Science: The Structuralist Program, Springer.

Zamora Bonilla, J., 2003, “Meaning and Testability in the Structuralist Theory of Science”, Erkenntnis, 59, 47-76.

‘On theory and observation’ series:

(1): The theoretician’s dilemma
(2): The Ramsey sentence
(3): Scientists selling lemons, a game-theoretic analysis of how scientific facts are constructed
(4): Sneed’s structuralism and T-theoreticity
