Robust domains

Bruce Charlton writes:

“That all complex adaptations (functionality) in biology and all the diversity of species, and indeed the reality of species differences could be explained by natural selection is a metaphysical assumption […] and un-provable by common sense.

But also: that such and such a breed of pig or dog – with a relatively specified appearance and behavior and functionality transmissible by heredity – was produced by the breeding experiments of Farmer Giles or Mr Smith… this is a matter of appropriate knowledge: common experience evaluated by common sense.”

This is a more specific but similar point to one I make here, where I say that when evaluating scientific claims, an important question is:

“What is the robust domain of this? […] Being robust means that the findings have been tested and confirmed extensively for a given set of phenomena. Very often, we find out that a theory that worked well to explain one domain doesn’t work as well when expanded to other domains.”

This is the main question with parts of evolutionary theory. We know that certain mechanisms for evolutionary change work in certain domains. The question is whether they are capable of the universal explanation some proponents claim. They probably aren’t – our current consensus view of how evolutionary change works is probably partial and, in some cases, wrong.

Charlton continues:

“Science properly works in this area of specific and local – constructing simplified models that are understandable and have consequences and ‘checking’ these models by using them in interacting with the world, to attain human purposes.”

An important point here is the checking. Without the ability to repeatedly check and re-check a scientific model and how it applies to the world, our confidence in it should decrease significantly. This is because human reasoning (and therefore theory) is weak and easily misled.

The same goes for computer models. People are easily misled by computer models, typically because they don't understand how they work. A computer model is usually only as good as the theory that goes into it, yet people often use the model's results as evidence for that very theory. This is fine with retrodictions, where we already know the result: if the computer model reproduces it, that is confirmation of some validity in the model. Its robust domain, however, is thereby extended only to the past.

The more interesting question is usually how the model predicts. Running a computer model whose robust domain is in the past, in order to predict what is going to happen in the future, is moving outside of its robust domain (unless you are dealing with temporally uniform phenomena – typically simple systems where no relevant changes in the future are reasonably to be expected as compared to the past). The probability that computer models in such cases are making correct predictions should therefore be weighted accordingly.
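The gap between retrodiction and prediction can be made concrete with a toy sketch (my own illustration, not from the original text): a flexible model fitted to past data can reproduce that data almost perfectly while diverging badly once asked to extrapolate beyond its robust domain. The data, degrees, and numbers below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Past" observations: a simple linear trend plus noise.
t_past = np.arange(0, 20)
y_past = 0.5 * t_past + rng.normal(0, 0.2, t_past.size)

# A flexible (degree-9 polynomial) model retrodicts the past very well...
coeffs = np.polyfit(t_past, y_past, deg=9)
retrodiction_error = np.max(np.abs(np.polyval(coeffs, t_past) - y_past))

# ...but extrapolating it to the "future" leaves its robust domain,
# and the predictions drift far from the continuing trend.
t_future = np.arange(20, 30)
y_future_true = 0.5 * t_future  # suppose the same simple trend continues
prediction_error = np.max(np.abs(np.polyval(coeffs, t_future) - y_future_true))

print("retrodiction error:", retrodiction_error)
print("prediction error:", prediction_error)
```

The in-sample fit looks like strong confirmation, but that confirmation only licenses the model within the domain where it was checked; the out-of-sample error shows how quickly its validity can fail outside it.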
