Category Archives: Philosophy of Mind

Can evolutionarily acquired cognitive traits be reliably considered truthful?

Some, such as Alvin Plantinga (former Professor of Philosophy at the University of Notre Dame, now Professor of Philosophy at Calvin College), think that there is a tension between notions of human cognitive reliability and a standard account of evolutionary development. The reasoning goes something like this:

1. Assume that a typical evolutionary account of the development of human cognitive abilities is true.

2. Evolution works by selecting traits that allow organisms to survive and reproduce successfully.

3. There seems to be no necessary (or even probabilistic) connection between this and cognitive abilities that tend toward truthfulness.

4. Therefore, we have no warrant for believing that the cognitive processes by which we arrive at (among other things) theories of evolution are reliable.

5. Therefore, if a typical evolutionary account is true, it undermines its own warrant for belief.

That is, the ‘universal acid’ of Darwinism eats away at its own epistemic foundations.

How should a Darwinist respond?

To see why there might be a connection between belief (map) and reality (terrain), consider a robot navigating a terrain. If the internal representation the robot had of the terrain were random, it probably would not be able to navigate the terrain optimally. Rather, a specific kind of representation of the terrain, coupled with a certain interface between the representation and the world, is required for the robot to navigate accurately. Applied to the biological world, navigating accurately is vital to finding resources, and so on.
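The robot analogy can be made concrete with a minimal sketch (the grid, names, and path-planning method here are all invented for illustration). The robot plans over its internal map, but the plan only succeeds if the map corresponds to the real terrain – a random or wrong map yields plans that fail in the world:

```python
from collections import deque

def plan_path(internal_map, start, goal):
    """Breadth-first search over the robot's internal map (not the real terrain)."""
    rows, cols = len(internal_map), len(internal_map[0])
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and internal_map[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route exists according to the map

def execute(terrain, path):
    """A planned path only 'works' if every step is passable in the real terrain."""
    return path is not None and all(terrain[r][c] == 0 for r, c in path)

# The real terrain: 0 = open, 1 = obstacle.
terrain = [[0, 1, 0],
           [0, 1, 0],
           [0, 0, 0]]

accurate_map = [row[:] for row in terrain]  # representation matches the world
wrong_map = [[0, 0, 0],
             [0, 0, 0],
             [0, 0, 0]]                     # misrepresents the world: the wall is missing

print(execute(terrain, plan_path(accurate_map, (0, 0), (0, 2))))  # True
print(execute(terrain, plan_path(wrong_map, (0, 0), (0, 2))))     # False
```

The point of the sketch: selection pressure acts on `execute` (did the organism reach the resource?), and that pressure indirectly rewards maps that correspond to the terrain.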

The way in which a representational system points is the interface between it and the world. So, a representation will have different significance (‘meaning’) if it is applied to the world through a different interface. As a maxim, ‘beliefs’ or representations do not exist in a vacuum. Plantinga seems to believe that they do – i.e., we can talk about an organism having a belief, where the ‘content’ of that belief is completely separate from how that organism interacts with the world. As far as evolution is concerned, this seems highly questionable, because the meaning of a representation, and therefore its truthfulness, is in part determined by how the organism containing the representation interacts with the world in relevant contexts.

However, the above Plantinga-esque critique of Darwinism gains some of its bite from the realization that we have (presumably evolved) cognitive systems that are in some ways misleading. Consider an area where many people say we have a false representation of the universe: the representation of the Earth as not moving. Here, it seems that what is true and what is evolutionarily useful diverge. Just so, it might be that theories of evolution themselves are based on useful but misleading cognitive systems.

I think a more accurate way to understand something like our representation of the Earth as not moving is to say that our representation of the world as stationary is true in the relevant context. That is, to understand how representational systems are truthful, we have to understand their applications: any representational account has to be interpreted or applied. It is only when we start to apply a representational system designed for navigating on the Earth as if it were designed for navigating the solar system that it starts to mislead. (Even there, in this case it rather straightforwardly plugs into heliocentric models by way of a transformation algorithm.)
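The 'transformation algorithm' mentioned above can be illustrated with a toy sketch (the two-dimensional positions are invented for the example). An Earth-centred representation is not simply false; it is the heliocentric representation viewed through a different interface, recoverable by a simple change of origin:

```python
# Toy 2-D solar system: heliocentric positions in AU, Sun at the origin.
heliocentric = {
    "sun":   (0.0, 0.0),
    "earth": (1.0, 0.0),
    "mars":  (0.0, 1.5),
}

def to_geocentric(body):
    """Translate a heliocentric position into Earth-centred coordinates.

    The same state of affairs, represented through a different interface:
    subtract Earth's heliocentric position from the body's.
    """
    bx, by = heliocentric[body]
    ex, ey = heliocentric["earth"]
    return (bx - ex, by - ey)

print(to_geocentric("sun"))   # (-1.0, 0.0): from Earth, the Sun appears to move around us
print(to_geocentric("mars"))  # (-1.0, 1.5)
```

Neither coordinate system is 'the false one'; each is apt for a context, and a mechanical transformation carries one into the other.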

So, not only is a correspondence between map and terrain obviously useful for purposes of navigating that terrain, but science properly understood has not shown that our (presumably evolutionarily derived) representational systems tend to be unreliable. Rather, it has sharpened our understanding of the scope of their appropriate application. Indeed, these sorts of considerations bring into question the scope of applicability of certain cognitive mechanisms underlying typical scientific ways of representing the universe, and so we probably are warranted in emphasizing careful consideration of the limits of the epistemic mechanisms we use to build scientific representations. Here I think there is something very useful about Plantinga's sort of critique, but this is not the conclusion he tries to draw.

So, these sorts of considerations don’t show an incoherence between evolutionary processes and reliable cognitive processes. If anything, they lead one to think there certainly should be correspondence between representations and terrains, properly interpreted.

See here.

Ultimate meaning, eternity

Matt Fradd quotes from William Craig here:

“If each individual person passes out of existence when he dies, then what ultimate meaning can be given to his life?

Does it really matter whether he ever existed at all?

It might be said that his life was important because it influenced others or affected the course of history. But this only shows a relative significance to his life, not an ultimate significance.

His life may be important relative to certain other events, but what is the ultimate significance of any of those events? If all the events are meaningless, then what can be the ultimate meaning of influencing any of them? Ultimately it makes no difference.

[…] But it is important to see that it is not just immortality that man needs if life is to be meaningful. Mere duration of existence does not make that existence meaningful. If man and the universe could exist forever, but if there were no God, their existence would still have no ultimate significance.”

Craig’s argument can only be thought to work by emphasizing the epithet ‘ultimate’. For it is obvious that human life has meaning without its events being eternal and without a cognizance of God – i.e., in a given moment, there is meaning in various experiences.

When people feel most alive, when life has most meaning, they are doing things that are in themselves meaningful. ‘Flow’ experiences are one type of these. If one thinks of meaning as an aspect of natural existence, one can see how certain experiences in life bring meaning to one’s life – i.e., we are designed to find certain sorts of experiences meaningful. Indeed, the art of game design is building systems that generate meaning by leveraging these naturally existing mechanisms. There is a goal, there is progress toward it, and so on. When done correctly, people can find these sorts of experiences meaningful.

However, Craig’s argument is partially correct as far as this goes. Contemporary society can reduce meaning (think meaningless office jobs, for example), and a Christian view of various things can add meaning to those events (such-and-such wasn’t just a random occurrence, but happened for a reason, or meaningful existence doesn’t end with death, for examples).

The Nietzschean maneuver of thinking ‘there is no God, therefore we create our own meaning’ is essentially an error. Meaning is part of a natural human process. It is found in various sorts of experiences we have, but it is not arbitrary. This is also part of why Christianity is interesting – it posits significantly more meaning in certain aspects of our lives than there would be in various kinds of non-Christian views.

In other words, Craig is overplaying his hand – but there is enough truth in what he is saying to make it worthwhile to at least ponder. Indeed, computer games often purchase their meaning by having ‘high scores’, which are records of deeds that persist after the game. There is no question that these sorts of devices can add meaning to a game, just as a dimension of eternity can add meaning to human actions.

Consider Craig’s quotation from Jacques Monod:

“Man finally knows he is alone in the indifferent immensity of the universe.”

We might ask: why does this matter? First, humans are social animals. Second, the proposition is that man is alone among indifference. Yet a moment’s reflection will show this is not true – humans are surrounded by other humans, and many of those humans are not indifferent. Christianity adds further benevolent beings (angels, and a tripartite God) who are also not indifferent, but the principle is the same.

So although it is a truism that actions can’t have ‘ultimate meaning’ if ultimate meaning is defined as something eternal and in some way related to God, and if there is no God or eternal aspect of reality, it’s not clear that the conclusion one gets from lacking this sort of ultimate meaning is the one Craig thinks follows. Events and actions here in finite time can be meaningful in themselves – Christianity posits a possible eternity of meaningful experience, but those experiences in turn are also meaningful in themselves. The difference is one of scope and possibly magnitude, not of relevant type.

Science, consciousness, explanation, theism

In a discussion with Alister McGrath (formerly Professor of Historical Theology at Oxford and now Professor of Theology at King’s College, London) Richard Dawkins (formerly Professor for Public Understanding of Science at Oxford) says (19:45):

“I totally agree with you that there is deep, deep mystery at the base of the universe, and physicists know this as well as anybody, questions like ‘What, if anything, was there before time began?’* Perhaps it’s because I’m a biologist who has been impressed through my whole career by the power of evolution, that the one thing I would not be happy about accepting in those deeply mysterious preconditions of the universe is anything complex. I could easily imagine something very hard to understand at the base of the universe, at the base of the laws of physics, and of course they are very hard to understand, but the one thing that seems to me clearly doesn’t help is to postulate anything in the nature of a complicated intelligence. There are lots of things that would help, and physicists are working on them […], but a theory that begins by postulating a deliberate, conscious intelligence seems to me to have sold the pass right before you even start, because that’s one of the things that science has so triumphantly explained. Science hasn’t triumphantly explained, yet, the origin of the universe, but I feel, I have a very strong intuition and I wish I could persuade you of it, that science is not going to be helped by invoking conscious, deliberate intelligence, whatever else preceded the universe, whatever that might mean, it is not going to be the kind of thing which designs anything [.]”

*This question is nonsensical, as ‘before’ in this context is a temporal relation. One can ask “What exists non-temporally?”, and presumably something like this is what Dawkins is asking, and he suggests something similar near the end of the quotation.

Explaining phenomenal consciousness in materialist (‘scientific’) terms is in contemporary philosophy of mind often referred to as ‘the hard problem’. It is so called because a significant percentage of those who study the issue believe it is conceptually very difficult – that we do not know how phenomenal consciousness could be explained (and some believe it cannot be explained) by an account of the universe resembling the one we have in current physics.

It is true, however, that evolutionary biology has given a general account of how biological organisms (including brains) might have arisen, and it is true that we understand more about the brain and behaviour than before (for example, we can build computer-based ‘neural networks’, which use certain aspects of human brain functioning as an analogy, and which can accomplish certain behavioural tasks in ways that are somewhat similar to how humans do them).

So, it seems reasonable to assume that what Dawkins has in mind here is an explanation of the brain and behaviour, or of ‘functional consciousness’ as opposed to ‘phenomenal consciousness’. Science has started to give an explanation of how human brains do certain things (i.e., behaviour or functionality), and these explanations in turn seem to fit into a larger story of the development of organisms in general through evolutionary processes. I.e., we have begun to develop a plausible causal story which starts from simplicity and builds complexity relevant to brain functionality.

Yet, if there is a conceptual conundrum between phenomenal consciousness and materialist accounts of the universe, as indicated above, we also know that there is a tight linkage between phenomenal consciousness and behaviour. For example, I can talk about the contents of my phenomenal consciousness and act in other ways on them, and brain activity seems to be highly correlated with certain kinds of phenomenal states. If science doesn’t seem near an explanation of phenomenal consciousness, and if certain behaviour or brain activity (i.e., ‘functional consciousness’) seems dependent on or in a tight causal relationship with phenomenal consciousness, then to what extent does it make sense to say that ‘deliberate, conscious intelligence’ is something we have triumphantly explained? It seems rather the opposite – it is one of the things that has stymied science the most, and has led contemporary materialist philosophers into contortions in the attempt to explain it.

Prediction and choice

The trick of determinism is that it makes it sound like the initial conditions ‘choose’ the end result – they determine what happens. Yet this is in a sense not really accurate – one may be able to predict the end result from the initial conditions, but there are intermediaries (in this case, humans) who choose, and it is these choices that lead to the result. These choices are of course based on reasons and information – if not, what good would they be as choices? So, yes, one might be able to predict that someone will choose an apple instead of a spoonful of dirt, but that hardly undermines the autonomy of the chooser – what it means to say we can predict this choice is that the chooser is able to take in information and act on good reasons for doing x instead of y. This is the whole point of being able to choose – it’s why organisms have the ability in the first place. That the prediction comes true may require choosers as intermediaries.

However, I think another reason determinism seems incompatible with choice is that it seems to go along with reductionism – that ‘good reasons’ don’t really exist, but rather there are just mindless fundamental causal processes that can be used to describe a situation. So, when determinist thought experiments are set up, they sometimes involve a description of the universe in terms of fundamental, mindless causal processes. From these, so the thought experiment stipulates, one can predict some end result, that will have as intermediaries ‘choices’. Yet, this thought experiment implies, those choices in a sense don’t really matter – just the basic, mindless causal processes that go along with them do.

The most obvious response to this is: it is hypothetical, and not entailed by scenarios which involve prediction of certain sorts. It very well may not be the case that the universe works in that way, i.e., that there is no such thing as a conscious mind that can affect decisions, but rather only mindless, causal processes which are then in some way reflected in the conscious mind (i.e., the mind is epiphenomenal). The idea that the universe does work this way in its entirety is somewhat speculative, and indeed there is significant evidence that it is not true.

If one sets aside this consideration, however, and considers the thought experiment as specifying initial conditions that lead to choosers that have conscious experiences, are able to draw on memories, and then coordinate their actions based on that information and those reasons, then this implicit nihilism is no longer present, and I think the seeming incoherence between ‘determinism’ and free choice is reduced. To distinguish these kinds of cases, perhaps ‘predictionism’ would be a more useful term for the general case, which can include the causal role of conscious choosers beyond mindless, causal processes.

Of course, it might be the case that everything in the universe is determined by mindless, causal processes, and that ‘minds’ as we understand them are epiphenomena (or are abstractions of these mindless, causal processes – a kind of eliminativism). But here the ‘problem’ as far as whether ‘we’ ‘choose’ isn’t so much compatibilism (i.e., the compatibility of prediction and free choice) as a kind of reductionism.

Also see here.

The locality of causes

How do we determine where something is?

We look at where there are effects, and then postulate a cause that brings about those effects.

That is, to say that a thing has a location in science is to say that it has effects on things that have locations, plus an inference that, therefore, the cause also has a location.

This is how causes in science come to be thought to be in some place, i.e., spatial.

What are exceptions?

The exception would be if something seemingly brute happens at a locale. One gets an effect, but can detect nothing separate from that effect at that locale. There are a few options:

1. It is a truly brute effect, i.e., there is no reason for it occurring, and so no local cause. (This is perhaps the idea behind ‘truly random’ quantum effects.)

2. There is a cause, but the only way we can interact with it is through the effect already detected, and so we can investigate it no further. (This could be contingent, due to the technical apparatus we have for investigating the cause, or it could simply be how that cause works.)

3. There is a cause, which can be investigated further, but it originates somewhere else, and if we could get there we could investigate it further. (So, there is a sort of limited one-way causal pattern in the place of the effect. Here, there is usually a medium through which the cause leads to the effect.)

4. There is a cause, but it is ‘outside’ space, and so the only way we can investigate it while ‘in’ space is indirectly. (For example, if the universe had a beginning, then does it have a cause, and if so, what caused it? It is reasonable to postulate a cause which is ‘before’ or ‘outside’ the creation of space, and therefore non-spatial.)

Here is the question, however: how do we distinguish between 4., 3., 2., and 1.? How, methodologically speaking, would we distinguish?

Also see here.

How can divine revelation make sense?

One of the more ridiculous notions in Christianity to a typical secularist is that of divine revelation. Whereas arguments from personal experience can be made fairly directly, arguments made from scripture are much more difficult to justify. Therefore, much of Christian discourse seems ridiculous, because it is couched in or buttressed by references to certain scripture as divine revelation.

‘Divine revelation’ has three main components – that there is a divine reality, that there can be revelation, and that the divine reality can be the cause of revelation. Christianity includes the idea that we have good reasons to believe that certain writings – such as the Gospel of Luke – contain divine revelation. How might these components be made to seem a little more plausible?

1. That there is inspiration. Often, writers will say that when they write it ‘flows’ – the words just ‘come to them’. Similarly in music, or in problem solving – a solution ‘comes to oneself’. In typical discourse, we might call this ‘inspiration’ – it is a robustly evidenced phenomenon.

2. That there is revelation. The first question is: “What is ‘revelation’?” ‘Revelation’ and ‘reveal’ have the same root, and revelation basically means a ‘revealing’ of information. (For example, “New revelations about such-and-such case!”) The basic idea is the same as with inspiration. Some information – an idea, a text, music, a solution – comes to oneself, i.e., is ‘revealed’. ‘Revelation’ just suggests a ‘fuller’ or more detailed sense than ‘inspiration’. Again, (non-theistic) writers often say that what they wrote seemed as if they were merely transcribing it. That there are these sorts of fuller or more detailed experiences of inspiration is also robustly evidenced – they simply happen more rarely than inspiration in the more general sense.

3. That revelation can reveal important information. This follows fairly simply from 2. Sometimes, people are ‘inspired’ and write nonsense or things that turn out to be false. Other times, however, a piece of text, music, or a solution to a problem comes to them and it turns out to be veridical, beautiful, or useful. I.e., it is straightforward to note that in some cases of inspiration or revelation, the content is true.

This leads to a simple question: where does the information which leads to inspiration or revelation come from? Those who think there is such a thing as divine revelation think that some of it comes from a divine source. This in turn leads to two problems as seen from a typical secularist’s viewpoint: is there really a divine aspect of the universe, and can this divine aspect really be the source of inspired or revealed information?

4. That there is a divine aspect. This can be made to seem a little more plausible by citing the large amount of evidence in people’s experiences which suggests there is. Consider here, where 3 sources of evidence are cited.

5. That the divine aspect can be a source of information in some cases of inspiration or revelation. The 3 sources of evidence considered at the link are not just evidence for a divine aspect, but evidence that this aspect can affect or interface with (or, perhaps, be a part of) human brains or minds.

A next question is then whether and to what degrees we have good evidence for divine revelation in specific claimed instances.

The basic idea with religious ‘faith’

‘Faith’ in religion, and particularly in Christian religion, is a central word, and as with most central words in, say, a language more broadly speaking (such as ‘know’), contains multiple meanings and works in various directions.

For the word to start to make some sense to a secularist, however, it might be useful to start at this point: faith can refer not to belief that God exists, but rather to belief in a specific version of God’s character. This is a more natural way of using the word in everyday language – as in ‘I have faith that someone will show up at a certain time, because I have had repeated experiences in the past where they have done so’. The basic idea here is one that basically everyone acts on in day to day life – habit or character is inferred from behaviour, and so one has warranted belief that someone will probably act in such-and-such a way in the future.

This leads to the next question: how would one know that God has a character or something like it (is person-like) and discern what that character is?

There are 3 main areas of evidential support typically cited in Christianity, as far as one’s own experience goes. The first is providence – the idea that, usually in retrospect, one can see a pattern or logic to events in one’s life, even though at the time it might have seemed like there wasn’t. The typical Christian idea here is that God has an intention – a forethought or will – for how things will turn out (but that the actual turning out of things in that way depends on human choices). Through repeated experiences of these sorts of things, and by developing a better ability to listen and communicate with God (through various practices), one can then better align oneself with and allow for God’s providence, and so build a sense of the sort of God there is.

The second is ‘coincidence’ – seemingly unlikely events where things come together in a certain way. These are distinguishable from providence in typically being noticeable in the moment. Christians attribute these sorts of coincidences to God, or to angels which are proximate representatives of God’s will. Similar to providence, through repeated experiences and developing an ability to notice God’s feedback to one’s own thoughts through these sorts of coincidences, one starts to build an evidenced concept of what sort of character God has.

The third main source of evidence is ‘religious experience’ – experiences of the ‘light of Christ’ or the ‘Holy Spirit’, for example, or even just of a general sense of ‘goodness’ that is perceived to indicate the presence of some divine aspect.

Although the three sources of evidence discussed above have to do with the specific nature of God or divine reality, they also work as evidence that there is a God – another reason why the word ‘faith’ is often run together on these issues. Interestingly, in Kreeft and Tacelli’s Handbook of Christian Apologetics, of the 20 arguments for the existence of God, only the third source of evidence above is mentioned, in argument 17.

‘Faith’, then, in the context discussed above more properly refers to the character of a relationship – having faith in a specific notion of God, say, because of evidence of his character that has built up in the past.


An analogy for being ‘outside’ the natural world

If the universe had a beginning (not eternal), then that gives a reason for thinking there is something that caused it to begin. That thing would be ‘outside’ the universe, so to speak. What would it be like for that thing (such as the Christian God) to be outside the universe, including outside time?

Since so many of our concepts and so much of our language is conditioned by or grows out of our experience of passing time, it is difficult to even talk about what it is like to not be in passing time. Perhaps, however, a useful analogy would be to viewing a series of tapestries. The tapestries in question, say, represent a procession of events through time. Say the Unicorn Tapestries, which illustrate the capturing of a unicorn.

To us, all the events are ‘happening’ simultaneously – we experience no procession of time from one event to the next while looking at all of them simultaneously. Now consider interacting with the events. We can modify a given tapestry, yet we are not doing so inside the temporal events depicted in the tapestries.

When people talk about a non-natural cause affecting the universe, perhaps this is what the situation would be like, or would be a useful conceptual tool for thinking about the situation.

Natural law and non-natural intervention

In Handbook of Christian Apologetics, Kreeft and Tacelli write (p. 109):

“A miracle is a striking and religiously significant intervention of God in the system of natural causes. [… T]he concept of miracles presupposes, rather than sets aside, the idea that nature is a self-contained system of natural causes. Unless there are regularities, there can be no exceptions to them.”

Consider that the natural system (featuring causal processes unfolding in time and space) has a beginning. What caused it to begin? It is reasonable to posit that something caused it to begin. This something would be in some sense prior to (but not temporally prior, as time is a feature of the system) the natural system. If that something can create the natural system, then it doesn’t seem implausible, a priori, that it would be able to affect the system at various points. A simple metaphor for this might be someone who creates a game with its own rules that can play out (such as Conway’s Life) but then also steps in at various points to change the configuration of the game.
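The Conway's Life metaphor can be sketched directly (this is a standard minimal Life implementation; the particular cells are invented for the example). The `step` function is the system's own 'natural law', running endogenously; the final line is the designer reaching in from outside those rules:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Life: the system's own 'natural law'.

    A dead cell with exactly 3 live neighbours is born; a live cell
    with 2 or 3 live neighbours survives; all others die.
    """
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A 'blinker' oscillates forever under the rules alone.
world = {(0, 0), (0, 1), (0, 2)}
for _ in range(4):
    world = step(world)
assert world == {(0, 0), (0, 1), (0, 2)}  # the rules ran endogenously

# An 'intervention': the configuration is edited from outside the rules.
world.add((5, 5))
```

Note that from inside the grid, the new cell at (5, 5) is just another event; nothing in the update rule marks it as externally caused, which is exactly the epistemic difficulty the section goes on to raise.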

So far, this makes some sense. It is not conceptually problematic, in the broadest of brushes, to think this possible. The next question then becomes whether miracles (so defined) have, in fact, occurred.

This is where things get problematic. If miraculous causal events are those which aren’t determined by the system of natural causes running endogenously, but rather (at least partially) through an externally (so to speak) originating cause, then how could we tell which causes are of this latter type?

Consider Kreeft in The Philosophy of Tolkien (p. 54):

“It is easy to identify miracles when we see them[.]”

He does not elaborate. Yet, is it obvious? How can one distinguish between causes that originate external to the system and those that are part of the system? Consider the notion that a sufficiently advanced technology could be considered ‘magic’ by those who don’t understand the technology, or the general conceptual backdrop for the technology. Since we don’t understand how the natural causal system works (and probably aren’t even close), why should we think we can tell which events are natural causal events and which are not?

Detecting immaterial causes

What would an immaterial cause look like?

That is, by which criteria will we decide that a cause of some effect in the physical causal network is immaterial?

Since science works by detecting effects, and then inferring causes, how would science distinguish a material from an immaterial cause?

My guess: there is no way. Science isn’t about ‘material’ causes, but about causes. Put another way, science isn’t about the ‘physical’ world, but about prediction. If there is some cause, but it isn’t predictable, will it be classified as immaterial? No – science will simply focus on how to make predictions about its unpredictability.

Consider the following passage by Edward Feser, where he is discussing the “Mechanical Philosophy” prominent in early science (The Last Superstition, 2008, p. 179):

“The original idea was that the interactions between particles were as “mechanical” as the interactions of the parts of a clock, everything being reducible to one thing literally pushing against another. That didn’t last long, for it is simply impossible to explain everything that happens in the material world on such a crude model, and as Newton’s theory of gravitation, Maxwell’s theory of electromagnetism, and quantum mechanics all show, physical science has only moved further and further away from this original understanding of what “mechanism” amounts to. [… T]here is by now really nothing more left to the idea of a “mechanical” picture of the world than the mere denial of Aristotelian final causes[.]”

That is, things that wouldn’t have been considered ‘material’ in the past are now routinely thought of as so – as paradigms of material processes. The reason is that science is opportunistic – it finds effects, and tries to create models that explain them. If there are causes, traditionally understood as ‘immaterial’, then in the limit science will have to account for them, and will not think it is describing something immaterial in doing so.

Consider this from Daniel Dennett (Freedom Evolves, 2003, p. 1):

“One widespread tradition has it that we human beings are responsible agents, captains of our fate, because what we really are are souls, immaterial and immortal clumps of Godstuff that inhabit and control our material bodies […] But this idea of immaterial souls, capable of defying the laws of physics, has outlived its credibility thanks to the advance of the natural sciences.”

So, how would we tell that there are immaterial causes to our material behaviour? There wouldn’t be a sign blazing down from the sky saying ‘that was an immaterial cause – physics defied!’ Rather, we would have effects in the brain (say), and we would then infer causes. “There’s something there, causing these effects.” We would then develop a model of what that thing is. It would then come under the rubric of the physical sciences.

That is, natural science says there aren’t immaterial causes, but that’s because science rules out the possibility of an immaterial cause on conceptual grounds – to be in natural science is to affect the physical world, and to affect the physical world is to be physical.

Determinism and an action being ‘up to oneself’

Consider the following (presented in Freedom Evolves, 2003, Daniel Dennett, p. 134):

“A popular argument with many variations claims to demonstrate the incompatibility of determinism and (morally important) free will as follows:

1. If determinism is true, whether I Go or Stay is completely fixed by the laws of nature and events in the distant past.

2. It is not up to me what the laws of nature are, or what happened in the distant past.

3. Therefore, whether I Go or Stay is completely fixed by circumstances that are not up to me.

4. If an action of mine is not up to me, it is not free (in the morally important sense).

5. Therefore, my action of Going or Staying is not free.”

Is there a problem with the argument, and if so where is it? I think 4. conflates two senses.

In one sense, the action in the present is up to oneself, i.e., one is taking the action, and this action stems from one’s past actions. So it is not clear whether 4. is saying

“If an action of mine is not up to me at the time or up to my actions preceding it in some relevant sense, it is not free.” (or something similar to this)

or whether 4. is saying

“If an action of mine does not descend from causal factors ultimately up to me, it is not free.”

In the former sense, the move from 4. to 5. isn’t warranted. In the latter sense, it is, but it is not clear if the latter sense is intuitively correct.

As an analogy, consider a computer agent that has a decision function. There is some input presented to the computer agent’s decision function. The computer agent then selects an output based on which output ranks highest according to the computer agent’s criteria. To put it in more familiar terms, the computer agent reviews the possibilities, and selects the one based on its belief about what is the best choice.

The computer agent’s choice might be entirely deterministic, yet it is still making a meaningful choice. That is, the decision is done by the computer agent, and how the computer agent evaluates the choices is important for the result.
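To make the analogy concrete, here is a minimal sketch of such a deterministic decision function (the names and the scoring criterion are hypothetical, just for illustration):

```cpp
#include <cstdlib>
#include <vector>

// A toy 'computer agent' decision function: it scores each option with a
// fixed, deterministic criterion and returns the highest-ranked one. The
// outcome is fully determined, yet it is still produced by the agent's
// own evaluation of the options.
int chooseBest(const std::vector<int>& options) // assumes options is non-empty
{
    // Hypothetical criterion: prefer the option closest to a target of 10.
    auto score = [](int option) { return -std::abs(option - 10); };

    int best = options.front();
    for (int option : options)
        if (score(option) > score(best))
            best = option;
    return best;
}
```

Change the scoring criterion and the agent makes a different choice, even in the same situation – which is the sense in which how the agent evaluates the options matters for the result.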

Which is to say, if determinism is true, one’s decision making process is still a necessary part of the causal equation for one’s action. Furthermore, one’s earlier actions may also be necessary parts of the causal equation. That is to say, the previous events outside of oneself which allow for prediction of one’s future action are sufficient only in a sense. That is, 1. should be rewritten as:

1. If determinism is true, whether I Go or Stay is completely fixed by the laws of nature and events in the distant past, plus the ‘running forward’ of reality such that the laws of nature combined with the events in the distant past lead to the creation of an ‘I’, which in turn creates various evaluative capacities which in turn allow for comparison of options, and so on, which then in turn lead to this I taking action.

which more accurately reflects how one’s decision making is a necessary part of one’s actions, even given a deterministic universe.

A conceptual tool for understanding necessity here might be a computer simulation. The current state of a computer (‘events in the distant past’), plus the function to advance the state one step (say) (‘the laws of nature’), don’t actually necessitate the state being advanced. Rather, the function actually has to be run to advance the state. In this case (to keep things analogous), in so doing many new functions are created when the state advances (decision making functions by new computer agents). If these new functions weren’t created, then there wouldn’t be the choices made by the computer agents, and there wouldn’t be their outputs (i.e., actions) in the simulation.

Which is to say, even if a computer agent doesn’t decide on the initial state of the computer, and doesn’t decide on the function to advance the state one step, and doesn’t decide whether the function actually is advanced however many steps, the computer agent is created, and its decision making function is necessary for whatever the computer agent does once the computer agent is created, and the outputs of the decision making function are ‘up to’ the computer agent, in the sense that the computer agent has a function that reviews the options, ranks them according to some criteria, and then selects the best one.
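A minimal sketch of this conceptual tool (assuming nothing beyond the analogy itself): the initial state plus the step function alone produce no agent outputs; agents and their decision functions only appear as the state is actually advanced.

```cpp
#include <functional>
#include <vector>

// The 'world' is a state plus whatever decision functions (agents) have
// been created so far by running it forward.
struct World
{
    int state = 0;
    std::vector<std::function<int(int)>> agents;
};

// Advancing the state one step ('the laws of nature' actually being run)
// creates a new agent whose decision function depends on the state at the
// moment of its creation. Without the run, there are no agents, no choices,
// and no outputs.
void step(World& w)
{
    ++w.state;
    int birthState = w.state;
    w.agents.push_back([birthState](int input) { return input + birthState; });
}
```

Before any step is run, `agents` is empty – the outputs in the simulation depend on the state actually being advanced, not merely on the initial state and the rule.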

I don’t know if this sense of ‘up to oneself’ is sufficient to satisfy 4., but the actions certainly are ‘up to oneself’ in some sense (i.e., oneself is real and is really making decisions, and these decisions are necessary, or the action won’t occur).

Shaun Nichols, intuitions about reasons for action, and free will

Shaun Nichols – a philosopher and cognitive scientist at the University of Arizona – recently wrote an article in the popular-science magazine Scientific American Mind entitled Is Free Will an Illusion? What struck me was how poor the reasoning was.

The crux of the argument seems to me thus:

“Yet psychologists widely agree that unconscious processes exert a powerful influence over our choices. In one study, for example, participants solved word puzzles in which the words were either associated with rudeness or politeness. Those exposed to rudeness words were much more likely to interrupt the experimenter in a subsequent part of the task. When debriefed, none of the subjects showed any awareness that the word puzzles had affected their behavior. That scenario is just one of many in which our decisions are directed by forces lurking beneath our awareness.

Thus, ironically, because our subconscious is so powerful in other ways, we cannot truly trust it when considering our notion of free will. We still do not know conclusively that our choices are determined. Our intuition, however, provides no good reason to think that they are not. If our instinct cannot support the idea of free will, then we lose our main rationale for resisting the claim that free will is an illusion.”

So, the argument seems to run: 1. We have some reasons to doubt our conscious intuitions about the reasons for the decisions we make. 2. Therefore, no conscious intuition about our decisions provides a good reason for anything. 3. Therefore, our conscious intuition about free will does not provide a good reason for believing there is such a thing as free will that guides our decisions.

Granted, Nichols’ article was written with brevity, and for a popular-level kind of discourse. Yet, this reasoning leaves me baffled. (It reminds me of Randolph Clarke’s statement that the idea our intuition about free will could tell us something about the causal nature of the universe was ‘incredible’.)

Response: of course intuitions about the causes of our decisions can be incorrect. Indeed, one can see this in some cases by honest, conscious introspection on the real cause of our actions: “Did I make that comment about so-and-so because I really believe it, or as retribution for an earlier comment they made?”, and so on. Recent empirical evidence showing how certain of our conscious beliefs about the reasons for our actions seem to conflict with subconscious reasons is interesting. Yet, it is only suggestive when applied to something specific like intuitions about the causal efficacy of something we call free will.

An intuition is, prima facie, a reason to believe something. If other methods of reasoning and evidence suggest the intuition is incorrect, then we can investigate how to reconcile the views. Then, if ultimately we decide to go with the ‘counter-intuitive’ evidence, we can conclude that the intuition isn’t a good reason. Yet, we can’t get to that conclusion about the veracity of conscious intuitions about the nature of free will from mere speculation based on evidence about the nature of certain other kinds of conscious intuitions about reasons for action.

Words and conceptual change

How do words achieve their function of allowing communication between two speakers of a language?

Words gain meaning through work. That is, one must create the concept associated with a word, and make sure two speakers share enough of the concept that they can communicate. (One can imagine how this might occur piecemeal.)

In novel or rare situations for word usage, a consensus on the concept may not be there. That is, the ‘core’ concept may not speak directly to it.

Take ‘dog’. Now imagine that you start changing the genetics of things we take to be dogs today. At what point does it stop being a dog? There is probably no sharp consensus, because this is a novel situation that the concept hasn’t been created to deal with.

If it became important, we could then innovate on the concept, creating new distinctions so that the word ‘dog’ could be used to communicate relatively clearly. (We would ask: “What is important to us about the concept ‘dog’?”)

This makes many debates in philosophy moot. Consider what the word ‘know’ means. People use the word to do things, in situations that come up every day. If, as a philosopher, you conjure some strange situation and then ask people whether someone ‘knows’ something or not in such a situation, there might be no worked-out consensus on the concept. You might get differing intuitions, or people might say they don’t know.

In so far as these sorts of thought experiments are doing anything, what the philosopher is doing might be seen as instigating new conceptual construction. “Let us innovate on this concept, so we can solve this conceptual problem …” (such as at what point in genetic change we should stop calling something a ‘dog’). That is, most debates about what the meaning of words ‘are’ are actually debates about how to change the concept of a word. “What is the way to innovate on this concept so as to solve this problem?”

(Of course, some concepts refer to things, and the conflict is about some aspect of that thing. For example, if we want the concept ‘dog’ to refer to animals with a genetic code similar to the animals we paradigmatically call dogs, what kind of genetic code is that? If we don’t already know, we will have to investigate to find out.)

The purpose of asceticism

From The Catholic Encyclopaedia‘s article on free will by Michael Maher (1909):

“Our moral freedom, like other mental powers, is strengthened by exercise. The practice of yielding to impulse results in enfeebling self-control. The faculty of inhibiting pressing desires, of concentrating attention on more remote goods, of reinforcing the higher but less urgent motives, undergoes a kind of atrophy by disuse. In proportion as a man habitually yields to intemperance or some other vice, his freedom diminishes and he does in a true sense sink into slavery. He continues responsible in causa for his subsequent conduct, though his ability to resist temptation at the time is lessened. On the other hand, the more frequently a man restrains mere impulse, checks inclination towards the pleasant, puts forth self-denial in the face of temptation, and steadily aims at a virtuous life, the more does he increase in self-command and therefore in freedom. The whole doctrine of Christian asceticism thus makes for developing and fostering moral liberty, the noblest attribute of man. William James’s sound maxim: “Keep the faculty of effort alive in you by a little gratuitous exercise every day”, so that your will may be strong to stand the pressure of violent temptation when it comes, is the verdict of the most modern psychology in favour of the discipline of the Catholic Church.”

Before reading this, the idea of asceticism had never made sense to me. It seemed like an odd, backward thing – much like the depiction of the camp of Christians in Brave New World – irrational.

Yet, with this it suddenly makes sense. Will power is like a muscle – it increases with increased use. Asceticism is basically a training course for the muscle of will power. The end goal (among others) is therefore to create a freedom – to amplify free will through a developed will power.

(Not surprisingly, then, the word asceticism comes from the Greek askesis which means practice, bodily exercise, and more especially, athletic training.)

Contrariwise, the lack of an ascetic sense in significant parts of contemporary society (and instead the cultivation of ‘mere impulse’, as seen through much advertising, say) means that many people have lost a significant amount of freedom in an important sense.

So, asceticism in reality is primarily a constructive practice: the point is to create more will power, which when combined with free will allows one to choose the right or the good more often (such as long-term good over short-term good).

Also see here.

Randolph Clarke on the evidence for non-deterministic theories of free will

In an article in The Stanford Encyclopaedia of Philosophy, Randolph Clarke discusses the evidence for an incompatibilist account of free will. (Incompatibilism is the view that free will isn’t compatible with determinism.)

“It is sometimes claimed […] that our experience when we make decisions and act constitutes evidence that there is indeterminism of the required sort in the required place. We can distinguish two parts of this claim: one, that in deciding and acting, things appear to us to be the way that one or another incompatibilist account says they are, and two, that this appearance is evidence that things are in fact that way. [… E]ven if this first part is correct, the second part seems dubious. If things are to be the way they are said to be by some incompatibilist account, then the laws of nature—laws of physics, chemistry, and biology—must be a certain way. […] And it is incredible that how things seem to us in making decisions and acting gives us insight into the laws of nature. Our evidence for the required indeterminism, then, will have to come from the study of nature, from natural science.”

I don’t understand the reason for the ‘incredible’ claim, and no reason is given for it in the article.

Yet, it seems that there is a pretty straightforward empirical argument that how things seem to us in various mental events or processes can in theory give us insight into the laws of nature. Basic idea: reflecting on what happens in one’s mind can give one (correct) predictions about what is happening in the brain (say), which in turn involves natural laws.

More detailed: The way things seem to us has already given us insight into cognitive or neurological events or processes. That is, we have an experience of something working in our mind, then we look for a correlate in brain processing, and in certain cases we have found correlates. (This is the basis of the belief that the mind is, in some important sense, the brain.) There must be natural laws which are compatible with these brain processes, if the brain is part of nature. Therefore, how things seem to be working in the mind can in theory give insight into natural laws.

The question is just how the sense of decision making maps onto nature. Are we really able to peer into the workings of nature (or something very closely related to it) on the inside, or is that not how the mind works? One view in the ‘hard problem’ of consciousness, for example, is similar to the former: through reflecting on consciousness, one can get a glimpse of the inner nature of physical reality.

Newcomb’s Paradox: A Solution Using Robots

Newcomb’s Paradox is a situation in decision theory where the principle of dominance conflicts with the principle of expected utility. This is how it works:

The player can choose to take both box A and box B, or just take box B. Box A contains $1,000. Box B contains nothing or $1,000,000. If the Predictor believes that the player will take both boxes, then the Predictor puts $0 in box B. If the Predictor believes that the player will take just B, then the Predictor puts $1,000,000 in box B. Then the player chooses. The player doesn’t know whether the Predictor has put the $1,000,000 in box B or not, but knows that the Predictor is 99% reliable in predicting what the player will do.

Dominance reasoning says for the player to take both boxes. Here’s why:

If the Predictor predicted that the player will choose just one box, then if the player picks just box B the player gets $1,000,000, but if the player picks both boxes the player gets $1,001,000. $1,001,000 > $1,000,000, so in this case the player should pick both boxes.

If the Predictor predicted that the player will choose both boxes, then if the player picks just box B the player gets $0, but if the player picks both boxes, the player gets $1,000. $1,000 > $0, so in this case the player should pick both boxes.

So, no matter what the Predictor did, the player is better off choosing both boxes. Therefore, says dominance reasoning, the player should pick both boxes.

Expected utility reasoning, however, says for the player to take just box B:

If the player picks both boxes, expected utility is 0.99*$1,000 + 0.01*$1,001,000 = $11,000. If the player picks just box B, expected utility is 0.99*$1,000,000 + 0.01*$0 = $990,000. Expected utility is (much) higher if the player picks just box B.
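The two expected-utility calculations can be written out directly (a sketch of the arithmetic above, assuming the stipulated 99% Predictor reliability; the function names are just for illustration):

```cpp
// Expected utility of taking both boxes, given a 99%-reliable Predictor.
double expectedUtilityBothBoxes()
{
    // 99%: Predictor foresaw two-boxing, so box B is empty -> $1,000 (box A only).
    // 1%:  Predictor was wrong, box B is full -> $1,001,000 (both boxes).
    return 0.99 * 1000.0 + 0.01 * 1001000.0;
}

// Expected utility of taking just box B, given a 99%-reliable Predictor.
double expectedUtilityJustBoxB()
{
    // 99%: Predictor foresaw one-boxing, so box B is full -> $1,000,000.
    // 1%:  Predictor was wrong, box B is empty -> $0.
    return 0.99 * 1000000.0 + 0.01 * 0.0;
}
```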

The problem is called a ‘paradox’ because two decision making processes that both sound intuitively logical give conflicting answers to the question of what choice the player should make.

This description of Newcomb’s Paradox is actually ambiguous in certain respects. First, how does the Predictor predict? If you don’t have any idea, it could be difficult to figure out what’s going on here. The second (and related) ambiguity is how the player can choose. Can they choose randomly, for example? (If they choose in a completely random way, it is difficult to understand how the Predictor predicts correctly most of the time.)

Instead of addressing the ambiguous problem above, I decided to create a model of the situation that clarifies the exact mechanics. This model, then, might not address certain issues others have dealt with in the original problem, but it adheres to the general parameters above. Any solutions derived from the model apply to at least a subset of the formulations of the problem.

It is difficult to create a model with humans, because humans are too complex. That is, it is very difficult to predict human behaviour on an individualized basis.

Instead, I created a model involving robot agents, both player and Predictor.

This is how the model works (code at bottom of post):

time = 1

Player is either Defiant Dominance (DD) or Defiant Expected Utilitarian (DE). What this means is that

if player is DD, then % chance player picks both boxes = 99%.

if player is DE, then % chance player picks just box B = 99%.

time = 2

The Predictor checks the player’s state:

if player is DD, then Predictor puts no money in box B

if player is DE, then Predictor puts $1,000,000 in box B

time = 3

Then the player plays, based on its state as either DD or DE, as described above.

It follows that the Predictor will get it right about 99% of the time in a large trial, and that the DE (the player that consistently picks the expected utility choice) will end up much wealthier in a large trial.

Here are some empirical results:

trials = 100, average DD yield = $1,000, average DE yield = $1,000,000

trials = 10,000, $990.40, $1,000,010.20

trials = 100,000, $990.21, $1,000,010.09

Yet, to show the tension here, you can also imagine that the player is able to magically switch to dominance reasoning before selecting a box. This is how much the players lost by not playing dominance (same set of trials):

trials = 100, total DD lost = $0, total DE lost = $100,000

trials = 10,000, $96,000, $9,898,000

trials = 100,000, $979,000, $98,991,000

What this shows is that dominance reasoning holds at the time the player chooses. Yet, the empirical results for yield for the kind of player (DD) that tends to choose dominance reasoning are abysmal (as shown in the average yield results earlier). This is the tension in this formulation of Newcomb’s Paradox.

What is clear, from looking at the code and considering the above, is that the problem isn’t with dominance reasoning at time = 3 (i.e., after the Predictor makes his prediction). A dominance choice always yields a better result than an expected utility choice, in a given environment.

The problem, rather, is with a player being a DD kind of player to begin with. If there is a DD player, the environment in which a player chooses becomes significantly impoverished. For example, here are the results for total rewards at stake (same trials):

trials = 100, total with DD = $100,000, total with DE = $100,100,000

trials = 10,000, $10,000,000 ($10M), $10,010,000,000 (> $10B)

trials = 100,000, $100,000,000 ($100M), $100,100,000,000 (> $100B)

DD is born into an environment of scarcity, while DE is born into an environment of abundance. DE can ‘afford’ to consistently make suboptimal choices and still do better than DD because DE is given so much in terms of its environment.

Understanding this, we can change how a robot becomes a DE or a DD. (Certainly, humans can make choices before time = 2, i.e., before the Predictor makes his prediction, that might be relevant to their later choice at time = 3.) Instead of simply being assigned to DD or DE, at time = 1 the robot can make a choice using the reasoning as follows:

if expected benefits of being a DE type > expected benefits of being a DD type, then type = DE, otherwise type = DD
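A sketch of that time = 1 comparison under the model’s payoffs (names hypothetical; the expected values follow from the model: a DE type always gets $1,000,000 placed in box B and one-boxes 99% of the time, a DD type always gets an empty box B and two-boxes 99% of the time):

```cpp
enum PlayerType { DD, DE };

// The robot's time = 1 choice: compare the expected yield of being each
// type, before the Predictor inspects it, and become the better type.
PlayerType chooseType()
{
    // DE: box B always full; 99% take just B ($1,000,000), 1% take both ($1,001,000).
    double expectedDE = 0.99 * 1000000.0 + 0.01 * 1001000.0;
    // DD: box B always empty; 99% take both ($1,000), 1% take just B ($0).
    double expectedDD = 0.99 * 1000.0 + 0.01 * 0.0;
    return expectedDE > expectedDD ? DE : DD;
}
```

Note that `expectedDE` works out to $1,000,010 and `expectedDD` to $990 – matching the average yields in the empirical results above – so the robot always chooses to become a DE.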

This does not speak directly to the rationality of dominance reasoning at the moment of choice at time = 3. That is, if a DE robot defied the odds and picked both boxes on every trial, it would do strictly better than the DE robot who picked only box B on every trial. (Ideally, of course, the player could choose to be a DE, then magically switch at the time of choice. This, however, contravenes the stipulation of the thought experiment, namely that the Predictor accurately predicts.)

By introducing a choice at time = 1, we now have space in which to say that dominance reasoning is right for the choice at time = 3, but something that agrees with expected utility reasoning is right for the choice at time = 1. So, we have taken a step towards resolving the paradox. We still, however, have a conflict at time = 3 between dominance theory and expected utility theory.

If we assume for the moment that dominance reasoning is the rational choice for the choice at time = 3, then we have to find a problem with expected utility theory at time = 3. A solution can be seen by noting that there is a difference between what a choice tells you (what we can call ‘observational’ probability) and what a choice will do (what we can call ‘causal’ probability).

We can then put the second piece of the puzzle into place by noting that observational probability is irrelevant for the player at the moment of a choice. Expected utility theory at time = 3 is not saying to the player “if you choose just box B then that causes a 99% chance of box B containing $1,000,000” but rather in this model “if you chose (past tense) to be a DE then that caused a 100% chance of box B containing $1,000,000 and also caused you to be highly likely to choose just box B.” I.e., expected utility theory at time = 3 is descriptive, not prescriptive.

That is, if you are the player, you must look at how your choice changes the probabilities compared to the other options. At t = 1, a choice to become a DE gives you a 100% chance of winning the $1,000,000, while a choice to become a DD gives you a 0% chance of winning the $1,000,000. At t = 3, the situation is quite different. Picking just box B does not cause the chances to be changed at all, as they were set at t = 2. To modify the chances, you must make a different choice at t = 1.

Observational probability, however, still holds at t = 3 in a different way. That is, someone looking at the situation can say “if the player chooses just box B, then that tells us that there is a very high chance there will be $1,000,000 in that box, but if they choose both boxes, then that tells us that there is a very low chance that there will be $1,000,000 in that box.”

Conclusion:

So, what is the Paradox in Newcomb’s Paradox? At first, it seems like one method, dominance, contravenes another, expected utility. On closer inspection, however, we can see that dominance reasoning and expected utility reasoning are each correct, but in different domains.

First, there are two different decisions that can be made, in our model, at different times (time = 1 and time = 3).

player acts rationally at time = 1, choosing to become a DE

player thereby causes Predictor to create environmental abundance at time = 2

but player also thereby causes player to act irrationally at time = 3, choosing just box B

The benefits of acting rationally at time = 1 outweigh the benefits of acting rationally at time = 3, so “choosing just box B” is rational in so far as that phrase is understood as meaning to choose at time = 1 to be a DE, which in turn leads with high probability to choosing just box B.

Second, there are two different kinds of probability reasoning that are applicable: causal reasoning for agents (the player in this case), on the one hand, and expected utility for observers, on the other. Causal reasoning says at time = 1 to choose to be a DE, and at time = 3 to choose both boxes.

At neither time does causal reasoning conflict with dominance reasoning. Expected utility reasoning is applicable for observers of choices, while causal reasoning is applicable for agents making the choices.

Therefore, Newcomb’s Paradox is solved for the limited situation as described in this model.

Applying this to humans: For a human, there is probably no ‘choice’ to be a DD or DE at time = 1. Rather, there is a state of affairs at time = 1, which leads to their choice at time = 3. This state of affairs also causes the Predictor’s prediction at time = 2. The question of “free will” obscures the basic mechanics of the situation.

Since a human’s choice at t = 3 is statistically speaking preordained at time = 2 (per the stipulations of the thought experiment) when the Predictor makes his choice, all the human can do is make choices earlier than t = 2 to ensure that they in fact do pick just box B. How a human does this is not clear, because human reasoning is complex. This is a practical psychological question, however, and not a paradox.

Notes:

One lesson I took from solving Newcomb’s Paradox is that building a working model can help to ferret out ambiguities in thought experiments. Moving to a model using robots, instead of trying to think through the process first-person, helped significantly in this respect, as it forced me to decide how the players decide, and how the Predictor predicts.

Creating a model in this case also created a simpler version of the problem, which could be solved first. Then, that solution could be applied to a more general context.

It only took a few hours to get the solution. The first part, that there is a potential choice at time = 1 that agrees with expected utility reasoning, came first, then that what matters for the player is how their choice causally changes the situation. After thinking I had solved it, I checked Stanford’s Encyclopaedia of Philosophy, and indeed, something along these lines is the consensus solution to Newcomb’s Paradox. (Some people debate whether causality should be invoked because in certain kinds of logic it is more parsimonious to not have to include it, and there are debates about the exact kind of causal probability reasoning that should be used.) The answer given here could be expanded upon in terms of developing an understanding of the agent and observer distinction, and in terms of just what kind of causal probability theory should be used.

Code:

UnicodeString NewcombProblem()
{
int trialsCount = 100000;

enum
{
DefiantDominance, // i.e., likes to use dominance reasoning
DefiantExpectedUtilitarian
}playerType;

// setup which type of player for this trial
// time = 1;

//playerType = DefiantExpectedUtilitarian;
playerType = DefiantDominance;
double totalPlayerAmount = 0.0;
double totalPlayerAmountLost = 0.0;
double totalAmountAtStake = 0.0;
int timesPredictorCorrect = 0;

for (int trialIdx=0; trialIdx<trialsCount; trialIdx++)
{
// Predictor makes his decision
// time = 2;

bool millionInBoxB = false;
if (playerType == DefiantExpectedUtilitarian)
millionInBoxB = true;

// player makes their decision
// time = 3;

double chancePicksBoth = playerType == DefiantDominance ? 99 : 1;

// now results …
// time = 4;

bool picksBoth = THOccurs (chancePicksBoth); // THOccurs: helper that returns true with the given percent chance

// now tabulate return, if !millionInBoxB and !picksBoth, gets $0
if (millionInBoxB)
totalPlayerAmount += 1000000.0;
if (picksBoth)
totalPlayerAmount += 1000.0; // box A always has $1,000

totalAmountAtStake += 1000.0;
if (millionInBoxB)
totalAmountAtStake += 1000000.0;

if (!picksBoth)
totalPlayerAmountLost += 1000.0;

if (picksBoth && !millionInBoxB)
timesPredictorCorrect++;
if (!picksBoth && millionInBoxB)
timesPredictorCorrect++;
}

double averageAmount = totalPlayerAmount/(double)trialsCount;
double percentagePredictorCorrect = (double)timesPredictorCorrect/(double)trialsCount*100.0;

UnicodeString s = "Trials: ";
s += trialsCount;
s += ", ";
s += playerType == DefiantDominance ? "DefiantDominance" : "DefiantExpectedUtilitarian";
s += " - Average amount: ";
s += averageAmount;
s += ", Total amount lost because didn't use dominance reasoning at moment of choice: ";
s += totalPlayerAmountLost;
s += ", Total amount at stake (environmental richness): ";
s += totalAmountAtStake;
s += ", Percentage Predictor correct: ";
s += percentagePredictorCorrect;
s += "%";
return s;
}
//—————————————————————————

Practice and proposition

Seth Roberts, Professor Emeritus of Psychology at U.C. Berkeley, says:

It is better to do an experiment than to think about doing an experiment, in the sense that you will learn more from an hour spent doing (e.g., doing an experiment) than from an hour thinking about what to do. Because 99% of what goes on in university classrooms and homework assignments is much closer to thinking than doing, and because professors often say they teach “thinking” (“I teach my students how to think”) but never say they teach “doing”, you can see this goes against prevailing norms.

Religion isn’t just a set of propositions, but is more a set of practices. Intellectuals like to focus on the propositions, because they are good at manipulating abstract symbols, arguing about them, and synthesizing them with other sets of abstract symbols (or showing that there are seeming contradictions between the sets, say).

The problem for intellectuals is two-fold:

1. Religion is more about a practice, like learning a musical instrument, than it is a set of propositions. To understand religion, then, one must do, but this is scary for intellectuals because a) it is often outside their area of core strength, and b) it entails the possibility of changing who they are.

2. Many of the propositions associated with something that is largely a practice are, to an extent, nonsensical until one starts the practice. Only then do the propositions begin to make sense. This is because the practice involves creating new conceptual categories, and so on. When learning singing, for example, one’s teacher may say all sorts of things that use English words and are grammatical, but which one does not really understand … until one starts doing the various practices, at which point the propositions start to take on a (new) meaning.

Some of the best academic work involves the academic doing: an example of this is Mary Carruthers’ work on medieval memory, where she undertook to learn various techniques about which she was writing. This process changed her understanding of the plausibility of the techniques, and helped guide her in understanding the meaning of what the people using the techniques were saying.

If an academic wants to collect data about a religious practice, he must either begin the practice himself, or rely on what people who have done the practice themselves say. If the latter, he probably won’t really understand what they are talking about, but it is at least a step closer to figuring out the truth than logic chopping an unfamiliar set of abstract symbols on his own.

Also see here.

Science and agency

Jim Kalb writes that:

“Scientific knowledge is knowledge of mechanism that enables prediction and control. If you treat that kind of knowledge as adequate to all reality, which scientism has to do to be workable, then human agency disappears.”

I think it is more accurate to remove “of mechanism.” Science is about prediction and control. If one can predict that one cannot predict (for example, if something is random), though, then that is also a part of science.

Currently, human agency hasn’t disappeared in the scientific worldview – rather, when trying to understand or explain human agency, scientists tend to work within the current repertoire of scientific concepts. Right now, those aren’t classical mechanisms, but various electromagnetic phenomena or cause-effect processes from quantum physics, say. Are all these things mechanisms? Yes, in the sense that there is some sort of cause-effect relationship which can be described, and technologies created based on that description.

Consider creating robots: the robots can move about a room, decide whether to go left or right, and so on. There is agency there in some sense, and it all appears to be explicable in terms of contemporary science. So, when scientists are trying to understand human agency, they might look at how computer agents work. This might not be plausible (humans probably work in very different ways from any robots nowadays), but that’s not the point: the agency does not ‘disappear’ – it is just explained in terms that might work. If we find out that those terms don’t work, then scientists will postulate other ways in which human agency works.
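The point can be put in miniature with a toy example (a hypothetical sketch, not a claim about how any actual robot works): a few lines of code can exhibit decision-making of a minimal sort, and nothing about the agency ‘disappears’ just because the mechanism is fully specified.

```python
def decide(sensor_left, sensor_right):
    # A toy 'agent': its left/right decision is fully explicable
    # as a cause-effect process, yet it is decision-making in
    # some minimal sense.
    return "left" if sensor_left > sensor_right else "right"

print(decide(0.8, 0.3))  # -> left
```

If this mechanism turned out not to fit how humans work, scientists would postulate another – the agency would be re-explained, not eliminated.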

I think Kalb underestimates how flexible ‘science’ has been – it changes once we figure out that certain representations don’t work. If there is something like an unpredictable human agency, then it will be included like other unpredictable phenomena in science. So, there might be new concepts developed to describe this. Nothing rests on this.

Kreeft, angels, and what’s unscientific

Peter Kreeft – Professor of Philosophy at Boston College – writes (Angels and Demons, 1995, pp. 32-3):

“Isn’t the supernatural unscientific?

Yes. Science can’t prove it.

But it’s not antiscientific, because science can’t disprove it either. All the reasonings about all the observations of all the events within the system of nature can’t disprove something outside of nature [. …] If angels can cause changes in nature – if angels can stop a speeding car in two seconds or appear and disappear in bodies – then the human senses can detect their effects […] But the cause is invisible, since angels are pure spirits. Science cannot observe angels. They don’t reflect light.” (original italics)

This isn’t quite right. Science can only detect effects. It then postulates causes based on the effects. So, angels wouldn’t be unscientific in this regard.

So, what makes angels seem unscientific? Is it that they might exist outside of space and time, and so are non-natural in a sense? I think that’s getting ahead of ourselves: there is another reason why many people consider postulating angels to be unscientific.

First, the phenomena that Kreeft mentions above are erratic – like many natural phenomena, such as meteorites falling to earth, they are difficult to replicate in a laboratory. This means that, until one gets very good reason to believe such a phenomenon occurs, it is easy to disregard reports or to postulate closer-to-hand explanations – especially when taking the reports at face value would require a (hypothetically) large metaphysical shift from the normal kinds of explanations.

An analog to this is bug hunting in coding. Sometimes bugs are highly replicable (and therefore usually easy to solve). Other times, though, one gets an “erratic bug”: it seems to occur at unusual times, without any apparent reason. Often one can’t even replicate the supposed bug – rather, a customer claims to be experiencing it. One could jump to the conclusion that the bug isn’t real, or that it is not in the code (but rather a problem with, say, the customer’s operating system). Yet often that’s not the right conclusion to jump to: the first culprit is usually something within the code. Even so, figuring out the cause of the erratic effect may be difficult.
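To illustrate (a hypothetical, simplified case – the function and the numbers are invented for the example): a bug can look ‘erratic’ from the outside simply because the conditions that trigger it are rare in testing but common for a customer.

```python
def average(samples):
    # Buggy: divides by a fixed window size of 10 instead of
    # len(samples), so it is correct only when there happen to
    # be exactly 10 samples.
    return sum(samples) / 10

# In-house testing always uses 10 samples, so the bug hides:
print(average(list(range(10))))  # -> 4.5 (correct)

# A customer with 5 samples sees wrong output, seemingly 'at random':
print(average([2, 2, 2, 2, 2]))  # -> 1.0 (should be 2.0)
```

Once the triggering condition is found, the bug is perfectly replicable – the erraticness was in our ignorance of the cause, not in the effect.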

Similarly, most scientists probably think of angelic phenomena like Kreeft mentions above in a similar way: it’s difficult to say what, exactly, is going on. For the time being, goes the line of thought, let’s assume it’s something occurring within the established natural framework of causes and effects, and save more radical hypotheses about it for once we’ve eliminated the closer to hand possible causes.

(This is combined with the question: does it make sense to focus on trying to explain these phenomena right now? Similarly, one might not focus on an erratic bug, because it seems more likely one will make more progress by focusing on replicable bugs instead. Perhaps it will be solved along with solving something else, or perhaps at a later point one can return to it, but for now, there are other things to do, goes the thinking.)

Second, the phenomena Kreeft doesn’t mention but which seem more replicable – angels guiding humans (bringing them messages) – also seem explainable by something else: namely, part of the brain (or so the presumption goes). (Neurotheology investigates effects like this.)

In both cases, there’s nothing unscientific about postulating angels per se, except that it seems there are easier explanations at hand, given the picture of the world that science already has. Both this and the first consideration are instances of Occam’s Razor. Perhaps, though, upon close inspection of the phenomena, there don’t seem to be any good conventional explanations at hand, and something like an angelic being outside of space and time does start to seem reasonable – just as sometimes, when learning about a bug, it starts to seem likely that something outside the code is causing it: perhaps the customer’s operating system, or hardware, or some other program running at the same time. In that case, it becomes reasonable, and scientific, to start postulating other causes. There’s no general rule here, but rather an informed sense of what could be going on.

Having said all this, Kreeft is onto something here. Warranted belief in a ‘supernatural’ cause is, in a sense, not possible in science, because “to be physical” = “to be in the physical causal network,” where something is judged to be in the physical causal network if it has effects on other things in that network. So, if one has good evidence of effects in the physical causal network, then the postulated causes become linked to the physical causal network – i.e., they become ‘physical’ also. Insofar as whatever becomes part of the physical causal network thereby becomes ‘natural’, causal evidence for ‘supernatural’ processes is impossible.

(Consider the idea that the universe was ‘birthed’ from another universe, and that there are many universes in existence. Is the original universe ‘supernatural’? That’s not my intuition – it’s natural, and physical, while existing outside of this universe.)

The corollary of this conclusion is that the definition of what is physical changes to fit whatever can be posited to cause effects in the physical causal network. Electromagnetic causal processes, for example, would not have been considered ‘natural’ or ‘physical’ at some point in the past – but now they are paradigmatic.

Also see here.

Epistemic momentum

Bruce Charlton writes:

“The root reason why modern atheists are incredulous about Christianity is that they (and my former self) deny the *possibility* of the soul, the supernatural, God/ gods, revelation, miracles, prophecies etc.”

This is basically correct. Yet it’s more that they have a project, and that project is advancing. The project’s core metaphysical assumptions don’t include the soul (and so on). As long as the project advances, so they think, any problems with explanation don’t matter so much in the short term. “It will be sorted out later.” Or, “that’s a pseudo-problem now that we’ve gotten better concepts.” Or they might believe they have solved various problems (when they probably haven’t).

Take the soul. We have explained parts of the mind in materialistic terms. So, the thinking goes, we will go on to explain everything about the mind – and, by extension, the soul – in materialistic terms. There are various problems (the most obvious is probably the ‘hard’ problem of consciousness), but the point is: the unease these unresolved or “perhaps-problems” cause is reduced as long as the project is advancing.

It is like an economy – as long as it is growing, various concerns about the structure of the polity, damage to culture, and so on, move to the back-burner.

Philosophical Zombies, Chalmers, and Armstrong

In his paper What is consciousness? (1981), D.M. Armstrong – Professor Emeritus at the University of Sydney and one of the founders of functionalism – brings up an example of what sometimes happens to long-distance truck-drivers, to distinguish between ‘perceptual consciousness’ and ‘introspective consciousness’:

“After driving for long periods of time, particularly at night, it is possible to ‘come to’ and realize that for some time past one has been driving without being aware of what one has been doing. The coming-to is an alarming experience. It is natural to describe what went on before one came to by saying that during that time one lacked consciousness [i.e., lacked some kind of consciousness different from perceptual consciousness].” (p. 610, Philosophy of Mind, ed. John Heil, 2004)

He then introduces the notion of introspective consciousness:

“Introspective consciousness, then, is a perception-like awareness of current states and activities in our own mind.” (p. 611)

How does this tie into ‘philosophical zombies’? He continues at a later point in the paper:

“There remains a feeling that there is something quite special about introspective consciousness. The long-distance truck-driver has minimal [a technical term Armstrong uses to mean some sort of mental activity] and perceptual consciousness. But there is an important sense, we are inclined to think, in which he has no experiences, indeed is not really a person, during his period of introspective unconsciousness. Introspective consciousness seems like a light switched on, which illuminates utter darkness. It has seemed to many that with consciousness in this sense, a wholly new thing enters the universe.”

The language Armstrong uses here is reminiscent of the language David Chalmers, writing in The Conscious Mind (1996), uses to describe philosophical zombies:

“A [philosophical] zombie is just something physically identical to me [i.e., acts the same, talks the same, and so on – is the same as far as we can physically tell], but which has no conscious experience – all is dark inside.” (p. 96)

The idea is not that philosophical zombies have an experience of darkness – rather, it is a figurative way of speaking about a lack of what I would call subjective experience, or what Chalmers calls ‘conscious experience’ above. Yet, it is suggestive of a link between how Armstrong is conceptualizing ‘introspective consciousness’ and how Chalmers is conceptualizing ‘conscious experience’.

Armstrong seems to be conflating subjective experience with introspective consciousness. Chalmers picks up on this in his 1996 book The Conscious Mind:

“Armstrong (1968), confronted by consciousness [i.e., subjective experience] as an obstacle for his functionalist theory of mind, analyzes the notion in terms of the presence of some self-scanning mechanism. This might provide a useful account of self-consciousness and introspective consciousness, but it leaves the problem of phenomenal experience to the side. Armstrong (1981) talks about both perceptual consciousness and introspective consciousness, but is concerned with both only as varieties of awareness, and does not address the problems posed by the phenomenal qualities of experience. Thus the sense in which consciousness is really problematic for his functionalist theory is sidestepped, by courtesy of the ambiguity in the notion of consciousness.”

I think Chalmers is right here, but he makes it sound like Armstrong uses the ambiguity in order to sidestep the problem. My sense from reading Armstrong 1981, rather, is that there is some sort of implicit identity between certain functional states and subjective experience, and so for him the conceptual distinction carries less weight. However, it seems Armstrong is also working through the conceptual muddle that many other philosophers and scientists were working through at the time, and hasn’t clearly distinguished the two aspects.

When Chalmers (Facing up to the problem of consciousness, 1995) pulls apart consciousness into ‘consciousness’ proper (i.e., subjective experience) and ‘awareness’, this is doing heavy work:

“Another useful way to avoid confusion […] is to reserve the term ‘consciousness’ for the phenomena of experience, using the less loaded term ‘awareness’ for the more straightforward phenomena described earlier [i.e., causal or functional phenomena, such as the ability to discriminate, categorize, and react to environment stimuli, the integration of information by a cognitive system, and so on].” (p. 619, Philosophy of Mind, ed. John Heil, 2004)

The conflation of these two elements – in ‘consciousness’ and related psychological or mental terms – occurs throughout the literature, and it creates a ‘side-stepping’ potential for various proposed solutions, of which Armstrong’s is but one. It is only by making the conceptual distinction explicit, and then showing how an identity (here, a functional one) is problematic, that the real problem becomes apparent.

Ned Markosian and Mereological Nihilism

Motivation:

If there are physical objects at “higher levels” in a strong metaphysical sense, then this suggests a solution to one aspect of the problem of subjective experience vis-à-vis the physical universe: namely, how there can be a complex thing such as subjective experience that is physical.

Mereological Nihilism:

In a recent talk, Ned Markosian, Professor of Philosophy at Western Washington University, said that the main problem with “mereological nihilism” – the view that the only physical objects properly speaking are simples – is its counter-intuitiveness. Instead, he offered an alternative called “regionalism.” (Mereology is the study of part and whole – from meros meaning part.)

How is a mereological nihilist to respond?

The first step is to make a distinction between objects in the everyday sense of the term, and objects in a philosophical or metaphysical sense of the term.

In the everyday sense, objects are identified based on things like whether they hang together in an identifiable way, whether there is some particular use for which they will be picked out, and so on. Scientific uses follow along similar lines.

In this sense of an object, a mereological nihilist need not deny that there are everyday or scientific objects. In this way, the counter-intuitiveness of the nihilist’s position is reduced.

However, the nihilist must add, it turns out that these “objects” don’t have an existence beyond the arrangement of the simples. They are useful conceptual (or perhaps perceptual) devices – shortcuts to help in interacting with the world.

The reason for believing this is Occam’s Razor: to predict how the everyday objects behave, we don’t need to postulate anything more than simples moving in concert, say. We could say that there are ontologically strong objects above and beyond the simples, but why not just say that there are conceptually useful but ontologically weak (i.e., not real, i.e., mere device) objects instead?

To motivate a position like Markosian’s regionalism over mereological nihilism, then, we would need grounds beyond the “intuitive” reasons, which the nihilist can handle via the definitional split outlined above. What is required, I think, is motivation for believing that there are higher-level physical objects in a strong sense for other-than-causal reasons – since physical science locates objects using causal criteria (to be is to be causal, according to physical science), and the causal story seems to be covered at the lower level.

Subjective experience is reason for believing that there are higher-level physical objects, but is it sufficient? The alternatives are: deny subjective experience is real, posit that subjective experience is fundamental (and so a “simple”) in some sense, or say that subjective experience isn’t physical.

Intrinsic and Extrinsic

In discussing strategies for avoiding epiphenomenalism (the idea that subjective experience is causally irrelevant), David Chalmers lists one option as (The Conscious Mind, 1996, p. 153):

4. The intrinsic nature of the physical. The strategy to which I am most drawn stems from the observation that physical theory only characterizes its basic entities relationally, in terms of their causal and other relations to other entities. Basic particles, for instance, are largely characterized in terms of their propensity to interact with other particles. Their mass and charge is specified, to be sure, but all that a specification of mass ultimately comes to is a propensity to be accelerated in certain ways by forces, and so on. Each entity is characterized by its relation to other entities, and these entities are characterized by their relations to other entities, and so on forever (except, perhaps, for some entities that are characterized by their relation to an observer). The picture of the physical world that this yields is that of a giant causal flux, but the picture tells us nothing about what all this causation relates. Reference to the proton is fixed as the thing that causes interactions of a certain kind, that combines in certain ways with other entities, and so on; but what is the thing that is doing the causing and combining? As Russell (1927) notes, this is a matter about which physical theory is silent.

One might be attracted to the view of the world as pure causal flux, with no further properties for the causation to relate, but this would lead to a strangely insubstantial view of the physical world. It would contain only causal and nomic relations between empty placeholders with no properties of their own. Intuitively, it is more reasonable to suppose that the basic entities that all this causation relates have some internal nature of their own, some intrinsic properties, so that the world has some substance to it. But physics can at best fix reference to those properties by virtue of their extrinsic relations; it tells us nothing directly about what those properties might be. We have some vague intuitions about these properties based on our experience of their macroscopic analogs – intuitions about the very “massiveness” of mass, for example – but it is hard to flesh these intuitions out, and it is not clear on reflection that there is anything to them.

There is only one class of intrinsic, nonrelational property with which we have any direct familiarity, and that is the class of phenomenal properties. It is natural to speculate that there may be some relation or even overlap between the uncharacterized intrinsic properties of physical entities, and the familiar intrinsic properties of experience. Perhaps, as Russell suggested, at least some of the intrinsic properties of the physical are themselves a variety of phenomenal property? The idea sounds wild at first, but on reflection it becomes less so. After all, we really have no idea about the intrinsic properties of the physical. Their nature is up for grabs, and phenomenal properties seem as likely a candidate as any other.

It doesn’t matter if one postulates a “pure causal flux” or not. The reason is that the way we understand both things and their relations (or causality) is representational. If one is going down this road, then subjective experience could be identified with the ‘hidden’ aspect of the things, or the ‘hidden’ aspect of the relations.

In physical representation, one has an abstract, quantitative representation of things in a space, say. The relations are abstract as well as the things. That is to say, our representations of extrinsic properties presumably correlate to something real, but there is ‘room’ to put subjective experience ‘in’ behind the representations, as well.

This is to say, if we think that we have a direct line to (in some sense) non-representational knowledge of our own subjective experience, then it is not clear why that subjective experience can’t be identified with either the extrinsic, intrinsic, or both sorts of physical properties.

So when Chalmers says that there “is only one class of intrinsic, nonrelational property with which we have any direct familiarity, and that is the class of phenomenal properties,” this may be wrong. Chalmers is assuming a ‘relevant transparency’ when it comes to representations of relations, but not to things. Phenomenal properties may turn out not to be ‘intrinsic’ or ‘nonrelational’.

Gods and Goddesses

Various conceptions of gods and goddesses map partially to aspects of the subconscious or unconscious. In this sense, one can think of a typical pantheon as a (antiquated) set of tools for interacting with and remembering aspects of the (subconscious or unconscious) mind.

In The Iliad, for example, there are various scenes where gods or goddesses appear to humans. In reality, there is a spectrum of sorts of appearances of information from the subconscious or unconscious to the conscious, and the appearance of gods or goddesses can be understood as standing in this spectrum (along with problem solving, the ‘muse’, and so on).

Understanding gods, say, or muses, as coming through part of the brain (or what have you) can lead to a possibly incorrect conclusion: the brain explains the phenomena, i.e., the phenomena are nothing but the brain acting in a certain way.* To see that this is problematic, consider an example: if I am looking at a tree, presumably there is some correlate processing occurring within my brain. Yet, it would be incorrect to conclude that trees are brain processing. The information being presented is of something external to my ‘self’, and is in some sense veridical.

*It is another question what the ‘brain’ is – i.e., the concept ‘brain’ is a specific kind of representation of something, i.e., a ‘scientific’ representation. See here.

So, evaluating the veridical nature of purported experiences of gods, say, requires looking at the causal chain which leads to that experience. Are there really things outside of ourselves which have these mental properties and these abilities? (In a way, the unconscious is a mental thing outside our ‘selves’.)

For a question like “Do gods (say) exist?”, it is easy to say ‘no’, but a more careful evaluation brings certain puzzlements to light. Consider the question “Do (did) dragons really exist?” The concept ‘dragon’ originated from fossil evidence, and in this sense refers to something real. If that is what is meant by whether there ‘really are’ dragons, then the answer is ‘yes’. Yet usually something more is meant: just how accurate the concept is. Here there isn’t a yes-no answer, but rather a sliding scale of accuracy. (And it isn’t quite that simple: one concept could get A right but B wrong, another A wrong but B right.) For example, moving from ‘dragons’ to ‘dinosaurs’ was (on the whole) a step forward in accurately conceptualizing the nature of those things to which the fossils belonged. This would favour the idea that dragons don’t exist. However, we have continued to change our concept ‘dinosaur’, in minor and major ways. So we would be put in the position of continually saying “The ‘dinosaurs’ they believed in 5 years ago don’t exist,” which is an odd way to talk. We don’t say ‘dinosaurs’ no longer exist every time we make a conceptual change. (See here.)

(Because we invented a new term for those things to which the fossils belonged, a radical conceptual dimorphism can develop (i.e., our concept attached to the word “dinosaur” can become much different from the concept attached to the word “dragon”). If we had used the same word, we would have had a conceptual change in what was associated with it, which would incline us less towards thinking: ‘old dragons’ didn’t exist, but ‘new dragons’ do – rather, we would be more inclined to think dragons are real, but our concept of them has changed. This can be seen by looking at all sorts of more quotidian concepts, where we have retained the name but our understanding of the nature of the phenomenon has changed significantly.)

Why do scientific representations work?

What defines a scientific representation? These representations are designed to predict certain other representations (eventually resulting in manifest representations via our sensory apparatus). The network of these models refers to what we think constitutes the “physical universe.”

The nature of the representations, then, is to work to predict. In this sense, they are essentially ‘causal’, but we could instead just say ‘sequential’. Not much rests on the word ’cause’ in an ontological sense, here.

So, why do scientific representations work? If they are essentially abstract, quantitative representations, what about this allows them to work?

The reason that the representations work is that they correspond to things (or ‘states of affairs’) in the universe. That is, they are placeholders that in some sense correspond to things. In what way do they correspond?

That is where things become difficult. Reductive science tends to posit that the ‘smaller’ things are the ‘real’ things. Yet consider: a leaf in my subjective experience can have a correlate. Yet the leaf is not independent. My experience is not a ‘mere’ conglomeration of leaf experiences. Rather, there is an experience, and the leaf is in some sense a part of it.

Our instinct in reductive science is to ‘get rid of’ higher-level entities if we can model them successfully in terms of more abstract, ‘lower’-level entities. We say: the higher-level entity was an illusion; what there really is, is these lower-level entities. (More carefully, we should say: our higher-level representation did not give us ontological-type correlates – i.e., placeholders – that are as accurate as they could be, and the lower-level representation is better in this sense.)

How does this work with subjective experience? The problem is that there are parts and wholes. Subjective experience isn’t reducible to its components (a ‘mere’ conglomeration). Rather, there is a whole, and we can analyze it into parts.

This would only be reflected in scientific representation if this whole-part relationship affected what is reflected in the sequential predictions of ‘physical’ representations. That is, the whole-part relationship will only turn up in science if it makes a causal difference.

It probably does make a causal difference, as it seems that what occurs in a subjective experience makes a difference. For example, I can reflect on it and say “It is united.” This is an effect, and so, that it is united seems to be causally important. Yet, it seems conceivable that it might not be causally important. Science can look for its effects by looking for phenomena that are unified but have parts. A concept like a ‘field’ in physics, for example, might reflect this state of affairs. (In this case, something like a ‘field’ would be the representation, the subjective experience would be the things.)

What does abstract, quantitative representation say?

If we understand the problem of qualia not to be of how to reduce qualia to abstract, quantitative representations but rather how to represent qualia in terms of abstract, quantitative representations, then that seems rather easy in a way. We can make a start on representing colour experiences as indicated here, for example. This then leads to a question: why is abstract, quantitative representation useful? The basic idea with science is that this sort of abstract representation reveals something important about the universe. So what does it reveal?

What is quantity? Quantity at its root is distinction. For example, we can quantify the length of the side of a rectangle by making distinctions – places – along it. We have a way to record these distinctions – 1, 2, 3, and so on, are placeholders for these distinctions. We could just as easily use different symbols – a, b, c, and so on. Mathematics is largely ways to understand the relationships between placeholders as such. That is, what can we say about placeholders, to the degree that they’ve been abstracted?
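The point that numerals are just one choice of placeholder can be put in a minimal sketch (the symbols and functions here are invented for illustration):

```python
# The placeholders 1, 2, 3, ... can be swapped for a, b, c, ...
# without losing the structure: what matters is the relations
# between the placeholders, not the symbols themselves.
symbols = "abcdefghij"

def succ(s):
    # 'Successor', defined purely on the placeholders.
    return symbols[symbols.index(s) + 1]

# Marking off three distinctions along a length, starting from 'a':
mark = "a"
for _ in range(3):
    mark = succ(mark)

print(mark)  # -> d (the same structure as 0 + 3 = 3)
```

Arithmetic done on ‘a, b, c, …’ with this successor relation is the same arithmetic as that done on ‘1, 2, 3, …’ – which is the sense in which mathematics concerns placeholders as such.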

The problem with science is that reality isn’t abstract. Reality is concrete. My subjective experience instantiates certain things that can be ‘captured’ to an extent with abstract, quantitative representation. For example, my visual experience of a rectangle can be ‘divided’ up. These distinctions can then be mapped to abstract placeholders. (The trick is that many people mistake scientific representations for concrete things. “The deer is really a bunch of molecules,” for example, where the molecules are taken to be concrete. They aren’t – they are abstract representations of what was formerly represented by the ‘deer’ concept.)

Instead of thinking of science in this abstract, placeholder sense as revealing something, it is more like it is remembering something – in a symbolic language that can be ‘re-converted’ later on – much like letters can be converted to meaning later on, if one has some basic grasp of the meaning of letters arranged in such-and-such an order to begin with.

Certainly, science does reveal things. We use tools to perceive things hitherto unperceived. Yet, the language in which science records these things requires an interpreter on the other end, who can decode the ‘meaning’ of them (‘meaning’ here is understood in a disparate sense).

The same goes for scientific representations of subjective experience. The attempt to eliminate subjective experience as a phenomenon in itself, and to put in its place only symbolic representations of it, is misguided. Rather, the symbolic representations can help us to understand the things (in various ways) – they don’t replace them, except when we are moving from one representational scheme to another. I.e., we can abandon one representational scheme for another in reductive science, where the new or ‘lower-level’ scheme is in some way more useful. However, even there both schemes refer to something ‘real’, and so in a certain ontological sense of reference neither is primary.

In the case of fitting subjective experience into the physical universe, we are not moving from one mere representational scheme to another – rather, we have things (i.e., subjective experiences, say) and are looking for some possible physical representation of this that ‘fits’ with our other physical representations in some relevant sense.

Quantity, Quality, and a Materialist Pandora’s Box

Science (in an area like physics) works largely through abstract, quantitative representations. In philosophy of mind, there is a problem with ‘qualia’. Qualia are qualitative properties of subjective, experiential states. This is a problem because ‘qualities’ don’t seem reducible to quantity.

It is a straightforward matter to note that one can map from quantities to qualities. For example, one can understand (part of) the structure of human colour vision as correlating with three quantitative ‘dimensions’, A, B, and C. Each dimension holds a value from 0 to 255, say. The three numbers combined produce a location in an abstract, quantitative ‘space’.

This is how a colour is often represented in computer code – as an abstract, quantified representation. More carefully: this is how coders represent the internal state of the computer which, combined with a causal chain involving a monitor, produces experiences of certain sorts of colours in people under certain standard conditions. So, the three quantitative dimensions A, B, and C are usually called R, G, and B, which map to the phenomena of red, green, and blue colour experiences. By combining these three dimensions, one can produce a full spectrum of regular human colours in a human who is looking at something like a monitor, because the monitor is built to receive inputs corresponding to locations in the abstract quantitative space and to produce certain optical phenomena as a result.
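The quantitative colour ‘space’ described above can be sketched in a few lines of code. This is only an illustration, not a claim about any particular graphics system; the names `make_colour` and `to_hex` are hypothetical helpers, and the 0–255 range follows the example in the text.

```python
# A minimal sketch of the abstract, quantitative colour 'space' described above:
# each colour is just a point (R, G, B), with each dimension holding a value 0..255.

def make_colour(r, g, b):
    """Return an (R, G, B) triple, checking each quantitative dimension's range."""
    for name, value in (("R", r), ("G", g), ("B", b)):
        if not 0 <= value <= 255:
            raise ValueError(f"{name} must be in 0..255, got {value}")
    return (r, g, b)

def to_hex(colour):
    """Encode the triple as the hex string monitor-driving code commonly uses."""
    r, g, b = colour
    return f"#{r:02X}{g:02X}{b:02X}"

# Locations in the abstract space that standardly correlate with red, green,
# and blue colour experiences under normal viewing conditions:
RED = make_colour(255, 0, 0)      # "#FF0000"
GREEN = make_colour(0, 255, 0)    # "#00FF00"
BLUE = make_colour(0, 0, 255)     # "#0000FF"
```

Note that nothing in the code is a colour experience: it is only a position in an abstract, quantitative space, which produces colour experiences only via the causal chain running through a monitor and a human observer.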

This is to say, subjective experience of colour has certain correlates in an abstract, quantitative ‘space’.

If scientific representation is taken to not be ‘relevantly transparent’ (contra someone like Daniel Dennett), then one can think of abstract, quantitative scientific representation as a kind of Pandora’s box which, upon being opened, might allow subjective experience ‘into’ the scientific picture. More precisely: it might allow subjective experience behind the abstract, quantitative picture that science ostensibly gives.

A seemingly coherent response to eliminativist materialists such as Dennett is: reality goes from subjective experience in certain cases (say) to abstract, quantitative representations that occur in humans, so to speak. It is like the coder example, but in the other direction. That is, certain sorts of physical phenomena are subjective experiences, which humans represent in an abstract, quantitative way.

If the above is right, though, then it also suggests that the problem of reducing qualia to quantity is confused: what we are doing in physics (say) isn’t reducing ‘things’ to other ‘things’, but rather introducing abstract quantitative symbols for things and then, in the case of reduction, replacing one set of symbols with another, where the nature of these symbols is useful for various reasons.

Truth, religion, and science

The fall of theism among the elite in Western society preceded the theory of natural selection (see here). What, then, was the cause? In part, it was the introduction of a new system of truth that could potentially conflict with (or corroborate) the truth claims of extant systems (i.e., various forms of Christianity, folk traditions, and so on).

Yet, the notion of truth is not static through this process. The introduction of scientific truth changed what ‘truth’ was – i.e., its ontology changed the notion of what could be ‘real’. This has led contemporary academic philosophers, such as Daniel Dennett, to deny that there is such a thing as subjective experience. Science trades in abstract, quantitative ‘things’ which are left as such. So, the logic Dennett might employ goes, subjective experience is therefore a mere illusion – it does not exist.

Yet, this leads to cognitive dissonance – the apparent fact of subjective experience is constantly before one. It is fairly straightforward to see that scientific representation is representation, which is symbolic, and so must be a symbol of something. Someone like Dennett takes the abstract symbols of science’s ontology to be transparently understood things in some relevant sense. So a response can be made: deny ‘transparent understanding’ vis-à-vis science’s things. If subjective experience is real and science is comprehensive in the relevant sense, then the symbols employed in science must at some point be representations of subjective experience. So, in order to pursue this intuition of ‘facts’ and ‘reality’, one can say that Dennett posits an understanding of scientific things that is not actually obtained.

(One cannot say that subjective experience is not real just because current scientific ontology does not allow for it to be real, say. Physical science has consistently expanded or modified its ontology upon discovery or investigation of phenomena with robust evidentiary bases.)

The Problem of the Subjective Self

I sometimes hear people describe humans as a “bunch of molecules,” or something to that effect. For example, “Humans are ultimately just a bunch of molecules.” This is false, unless a strong emphasis is on ‘bunch’.

The reason is that the most obvious aspect of humans is our subjective experience. (‘Experience’ here does not mean ‘experience’ as in memory and practical skills developed, it means experience as in “I am having an experience right now.” ‘Subjective’ here is not used in the sense of something distorted due to personal bias, but rather as in a mode of being.)

This subjective experience is complex, unified, and real. For example, I might have a visual experience of a meadow. There are many entities in the visual experience (grass here, sky there, and so on), so it is complex in a certain sort of way. These things are in a visual experience, so they are unified. The experience itself is real (i.e., it exists), as opposed to it being an arbitrary grouping of external things (such as may be the case when we talk of a ‘bunch of molecules’).

There is something which must account for the unity of an experience. To one who believes that humans are nothing but ‘molecules’, there does not appear to be anything at hand that is a good explanatory fit.

This problem interconnects with several other problems, including: the unity of experience through a series, subjective being, and the qualitative nature of experience. The above problem of the unity of an experience is symptomatic – someone who describes humans as a ‘bunch of molecules’ will probably be unable to adequately answer any of these problems. This is why someone like Colin McGinn says that the qualitative nature of experience is inexplicable. McGinn is better than many, in that he acknowledges that there are difficult problems at hand, for which the conceptual resources of physicalism do not seem able to afford an answer.

It might be important to note that the concept ‘physical’ has changed dramatically over the past, say, 400 years. With our current conception of ‘physical’, my working guess is that McGinn is right that the problem of the subjective self is an insoluble mystery for physicalists. That is to say, if one wishes to remain a ‘physicalist’ and solve these problems, something must change within one’s conceptualization of what ‘physical’ things can include.