Where do moral intuitions come from? According to the standard scientific worldview which many secular academic moral philosophers tacitly accept, they are evolved, and exist because of selection pressures in our near to distant evolutionary past. That is, moral intuitions come from the process of surviving, and in particular of surviving as social animals in a certain kind of context.
Understood in this way, moral intuitions tell us something important about social existence as a human being. The problem, however, is that the conditions in which we live nowadays are very different from the conditions under which those moral intuitions were selected for, and did most of their work.
What were those social conditions like? It seems that humans lived in small-group environments, where practically everyone around us was someone we knew at some level and who was part of our ‘group’. That is, almost all interactions involved a cause-and-effect relationship that was fairly direct and significant, and occurred with people with whom we would have repeated interactions, to whom we were closely related, and whom we knew at least somewhat.
This is why I am skeptical of applying moral intuitions, so understood, to various problems in the world today that involve us in very different circumstances. Where the cause-and-effect relationship of our actions in social situations was usually clear-cut and direct, now it is very often diffuse and involves causal chains that are difficult to understand. Where interactions were with people we would encounter repeatedly, now they are often one-off. Where the people with whom we would interact were people we knew, now they are typically people to whom we are not closely related and whom we do not know. It is plausible to say that, once all these circumstances change, our moral intuitions are no longer applicable.
Consider an example used by various contemporary academic philosophers: the drowning child in the pond. Say that on the way to work you are passing by a child in a pond who is drowning, and that the cost to you of saving the child will be that you ruin your suit ($50, say). If you don’t save the child, no one else will. Do you have a moral obligation to save the child?
If you answer ‘yes’, then an argument can be made: there is an analogous situation, that of a child dying from an easily preventable disease in some foreign land. For no greater cost ($50), you can save that child’s life. If you don’t, no one else will. Therefore, if you answer ‘yes’ to the first case, the argument goes, you are committed to saying that you have a moral obligation to help this distant child.
A simple way to see that something is awry with this analogy is to consider numbers. In the first case, saving a drowning child is presented as an unusual situation (which is what it almost always is in real life – children drowning in ponds one walks by are rare). In the distant example, however, there is in reality not one child but a multitude. To make the first situation more like the implicit features of the second, one would need to revise it to something like this: every day on your way to work, you pass millions of children drowning in ponds. Do you have a moral obligation to save one? Two? As many children as you can? Just for today? Today and tomorrow? Every day?
What is happening here? We are moving away from a situation our moral intuitions have been designed to guide us in (someone near us in imminent danger) to a situation that starts to boggle the mind, and which our moral intuitions weren’t designed to navigate.
The move from applying moral intuitions primarily to a small group of people one has repeated interactions with, to applying them to people in (say) one’s city, and then to people everywhere (or to all animals everywhere), is the trajectory of a kind of universalism. That is, the position that our moral intuitions are supposed to apply to possible interactions with everyone, no matter how distant or tenuous our interactions with them might be. Some people see the move towards this kind of universalism as an unambiguously good thing. It might be good in some sense, but it is (in terms of what the intuitions were ostensibly designed to do in a narrow sense) a misapplication of our moral intuitions.
One can still recognize that certain situations will be better for the people involved if x is done instead of y, and work towards that. Yet this sort of situation is not a moral one in the same sense; it is, rather, a political one.
Therefore, as one moves away from the situations our moral intuitions were designed for, it is reasonable to deny that the new situations are moral ones, properly speaking, within the standard secular context of thinking about morality. This is not an argument against kinds of universalism that do not rely upon a typical evolutionary understanding of the origin and purpose of moral intuitions. Rather, it is to protect oneself from the badgering of secular universalists.
Put another way, it seems that one must make a choice: accept the limited sphere of applicability of moral intuitions, properly speaking, or come up with a different framework for them. The former is unpalatable to the universalist ambitions of many secularists; the latter is to go out on a limb scientifically.