Consider the following argument (presented in Daniel Dennett, Freedom Evolves, 2003, p. 134):
“A popular argument with many variations claims to demonstrate the incompatibility of determinism and (morally important) free will as follows:
1. If determinism is true, whether I Go or Stay is completely fixed by the laws of nature and events in the distant past.
2. It is not up to me what the laws of nature are, or what happened in the distant past.
3. Therefore, whether I Go or Stay is completely fixed by circumstances that are not up to me.
4. If an action of mine is not up to me, it is not free (in the morally important sense).
5. Therefore, my action of Going or Staying is not free.”
Is there a problem with the argument, and if so, where? I think 4. conflates two senses of "up to me."
In one sense, the action in the present is up to oneself: one is taking the action, and the action stems from one's past actions. So it is not clear whether 4. is saying
“If an action of mine is not up to me at the time or up to my actions preceding it in some relevant sense, it is not free.” (or something similar to this)
or whether 4. is saying
“If an action of mine does not descend from causal factors ultimately up to me, it is not free.”
In the former sense, the move from 4. to 5. isn't warranted. In the latter sense it is, but it is not clear whether that sense is intuitively correct.
As an analogy, consider a computer agent with a decision function. Some input is presented to the decision function, and the agent selects the output that ranks highest according to its criteria. To put it in more familiar terms, the agent reviews the possibilities and selects the one it believes is the best choice.
The agent's choice might be entirely deterministic, yet it is still making a meaningful choice. That is, the decision is made by the agent, and how the agent evaluates the options matters for the result.
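A minimal sketch of such an agent, with illustrative names (`decide`, `score`) that stand in for whatever the agent's real evaluative machinery would be:

```python
def decide(options, score):
    """Rank the options by the agent's own criteria and return
    the one the agent 'believes' is best."""
    return max(options, key=score)

# Fixed evaluative criteria for this agent (an assumed toy scoring):
options = ["Go", "Stay"]
score = {"Go": 0.8, "Stay": 0.3}.get

# The choice is fully deterministic, yet it is made *by* the agent:
# the result depends on how the agent's own function evaluates the options.
choice = decide(options, score)  # → "Go"
```

Changing the agent's criteria changes the outcome, which is the point: the evaluation is a necessary part of the causal story, even though nothing here is indeterministic.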
Which is to say, if determinism is true, one's decision-making process is still a necessary part of the causal equation for one's action. Furthermore, one's earlier actions may also be necessary parts of that equation. That is, the past events outside oneself that allow prediction of one's future action are sufficient only in a qualified sense. So 1. should be rewritten as:
1. If determinism is true, whether I Go or Stay is completely fixed by the laws of nature and events in the distant past, plus the 'running forward' of reality: the laws of nature, combined with the events in the distant past, lead to the creation of an 'I', which in turn develops various evaluative capacities, which in turn allow for comparison of options, and so on, which in turn lead to this I taking action.
which more accurately reflects how one’s decision making is a necessary part of one’s actions, even given a deterministic universe.
A conceptual tool for understanding necessity here might be a computer simulation. The current state of a computer ('events in the distant past'), plus a function that advances the state one step ('the laws of nature'), don't by themselves necessitate the state's being advanced; the function actually has to be run. In this case (to keep things analogous), as the state advances, many new functions are created: the decision functions of new computer agents. If these functions weren't created, the agents wouldn't make their choices, and their outputs (i.e., actions) wouldn't occur in the simulation.
Which is to say: even if a computer agent doesn't decide the initial state of the computer, doesn't decide the function that advances the state one step, and doesn't decide whether that function is actually run for however many steps, the agent is nonetheless created. Its decision function is then necessary for whatever the agent does, and the outputs of that function are 'up to' the agent, in the sense that the agent has a function that reviews the options, ranks them according to some criteria, and selects the best one.
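The simulation picture above can be sketched in a toy form. All the names here (`make_agent`, `step`, the seed-based criterion) are illustrative assumptions, chosen only to keep the example small:

```python
def make_agent(seed):
    """Advancing the state creates a new agent with its own decision function."""
    def decide(options):
        # The agent's own (deterministic) criterion ranks the options;
        # here it is keyed off the agent's seed purely for illustration.
        return options[seed % len(options)]
    return decide

def step(state):
    """'The laws of nature': advance the state one tick, creating an agent."""
    tick, agents = state
    return (tick + 1, agents + [make_agent(tick)])

state = (0, [])           # the initial state ('events in the distant past')
for _ in range(3):        # the 'running forward' of reality
    state = step(state)

# Only after the function is actually run do the agents, and hence
# their choices, exist at all:
choices = [agent(["Go", "Stay"]) for agent in state[1]]
```

No agent chose the initial state, the `step` function, or how many times it was run; yet each choice in `choices` is produced by an agent's own decision function, and without those functions there would be no choices in the simulation.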
I don't know whether this sense of 'up to oneself' is sufficient to satisfy 4., but the actions certainly are 'up to oneself' in some sense (i.e., oneself is real and really is making decisions, and one's decisions are necessary or the action won't occur).