Low-hanging fruit and scientific innovation

One response to the claim that there is a slowing rate of significant technological innovation (and, one can infer, therefore scientific innovation) is that there is less ‘low-hanging fruit’ now than, say, 150 years ago. It therefore requires more investment to get a similar return, and in fact in many areas it seems to require much more effort to get even less return than people were getting 150 years ago.

One problem with this response is that it assumes a static field of inquiry. Imagine an analogy in geographical exploration. Marginal exploration involves exploring parts of an already discovered continent. If one is only discovering within that continent, then it is easy to imagine how, in the beginning, explorers were able to make large discoveries relatively easily. It then would get more difficult to make new discoveries on the same continent on a similar scale, and at some point, it would become impossible.

Yet, what if genius opens up new fields of inquiry (i.e., the field is not static)? This would be analogous to figuring out how to find new continents (or beyond).

Consider those who were at the beginning of science – was their work easier than that of scientists now? Well, once you open up a new field for inquiry, there usually are low-hanging fruit, and the people first in an area are more likely to discover those. Yet, the people at the beginning of science were also those who were creating those fields.

So, if you are in a conventional field of, say, physics, it might be that the low-hanging fruit has been picked. Yet, it seems genius is involved in creating new fields of physics, hitherto unimagined. To some extent, we can say that genius simply is the ability to create new fields of inquiry – to create new conceptual frameworks that open up new ways of investigating.

The debate then seems to fall back on what reasons we have for believing that we have pretty much figured out everything, such that there isn’t much left to discover. This seems more an issue of judgment, and difficult to resolve.

The information economy vs. the know-how economy

The term ‘information economy’ is often used to describe what countries like Canada, the U.S., and so on are supposedly in or entering. Yet, when I look around, what sort of jobs do I see that are in demand? Skilled trades.

The term ‘information’ is too vague – what is in demand is more like ‘know-how’. Know-how means acquisition of skill, which takes time. It manifests in the form of a lawyer, doctor, electrician, undersea welder, and so on.

Non-skilled or lesser-skilled trades seem to be becoming less common due to a) outsourcing and b) roboticization and computerization.

The appropriate response is not to build a ‘broad base of vague information’ but instead ‘focused skilled trade + learning techniques’.

How should evidence be built?

In “An Evidence Free World,” Bruce Charlton has a post on, among other things, “Evidence Based Medicine” (EBM), which often involves large randomized controlled trials (RCTs).

I largely agree. The moniker is typically misleading, as EBM often involves ignoring large amounts of evidence. Rather, a large RCT is one tool, which will be properly used in certain cases to investigate certain phenomena. Like any tool, an RCT may or may not be the best tool to use at a given time, and even when it is, may be constructed or implemented to varying degrees of effectiveness.

Similarly, a large RCT isn’t the start of evidence, but typically a tool used later on.

Instead, the beginning is probably an individual noticing something unusual. Then, collecting anecdotes (similar experiences), either in one’s own experience or others’. Then, if it makes sense given the subject area, starting to construct a more formal probability framework to see if such experiences are inexplicable given current standardly posited cause-and-effect relationships. Something like a large RCT would probably most usefully come later, be carefully designed, and be just one more (although often important) line of evidence used to draw a conclusion about certain cause-and-effect relationships.
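To make the ‘more formal probability framework’ step concrete, here is a minimal sketch (not anything from Charlton’s post – the baseline rate and the anecdote counts below are entirely hypothetical) of asking whether collected anecdotes are surprising under the standardly posited cause-and-effect story:

```python
# Minimal sketch: are the collected anecdotes surprising, given the rate the
# standard causal account would predict? All numbers here are hypothetical.
from math import comb

def binomial_tail(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

baseline_rate = 0.02   # rate expected under the standard account (assumed)
observed = 7           # anecdotes in which the unusual effect appeared (hypothetical)
collected = 40         # total anecdotes collected (hypothetical)

p_value = binomial_tail(observed, collected, baseline_rate)
print(f"P(at least {observed}/{collected} by chance) = {p_value:.2e}")
# A very small probability suggests the anecdotes are hard to explain under the
# standard account – which is when a carefully designed trial might earn its keep.
```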

I.e., a large RCT is more the tail than the dog of evidence. It probably should come once we already have good reasons to believe there is a causal relationship, and where we understand the causal area sufficiently not to design an RCT that is misleading.

Science is something anyone can do (as noted above, most science starts with someone noticing something that isn’t quite right or is unusual). A modern notion is that science is the work of very expensive trials and experiments – but more often than not, these trials or experiments are not only not that effective at figuring out cause-and-effect relationships, but waste a huge amount of money (something Charlton suggests as well).

My suspicion is that a reliance on large RCTs is more the product of the professionalization and attempted bureaucratization of certain areas of inquiry, where an ‘amateur’ mindset and motivational structure would often be better suited (and be more flexible in terms of discerning which tools are best suited for figuring out a given cause-and-effect relationship).

Biology as technology

The idea is old, and it is fashionable to deride it in a sense (i.e., Paley’s arguments, commonly thought to have been refuted).

Yet, it seems so obviously right in a sense (Dennett picks up on this, and starts using ‘design’ and so on to describe biological systems, where design is understood as a process without some intentional thought process behind it – but design nonetheless).

A good trick in design is to start simple (as it is commonly believed organisms did) and then iterate, testing the design at multiple steps along the way. This seems to be what happens with organisms through evolutionary development, as far as our limited understanding of these things goes.

Furthermore, it seems like a very useful metaphor, because technology is a predominant aspect of our time.

Organisms are technological ‘artifacts’, yet of unfathomed complexity and probably making use of unknown causal ‘mechanisms’ (understood broadly as cause-and-effect systems – for example, before electromagnetism was discovered, its use by certain organisms would not have been known to biologists of the time).

Understanding this might help people to understand what biology encompasses.

 

Buildings as organisms

Buildings as organisms. This thought occurred to me while visiting Sainte-Chapelle, where they are repairing and replacing the glass, lead, and statuary.

Just as with an organism, there are iterations of building (painting, and so on) which retain or enhance the building’s form.

Alternately, you could think of buildings as part of an ‘extended phenotype’ of organisms (us).

Either way, they participate in processes similar to organisms.

The study of traffic

Why aren’t more people studying traffic?

Consider that the limit on transportation speed is no longer the vehicle’s engine, but things like traffic jams, ‘rush hour’, and so on. I.e., the major problem in innovation isn’t about a car’s engine, but about ‘human’ related factors like traffic congestion and safe speed considerations. Consider how much time people spend driving from one place to another. Consider how many injuries and fatalities are driving-related in places like Canada.

So, why aren’t more of our brightest minds studying traffic?

The way roads work as part of a user interface

If you start thinking of the way roads work as part of the user interface for driving, then it is straightforward to see why occasional one-way streets are problematic. Consider how clickable things worked in Windows 7. Sometimes, to get something to ‘go’ in the relevant sense, you would click twice on the thing. Other times, you had to click just once. This caused significant confusion, because it created an inconsistent interface. Similarly, when driving, if roads usually interface one way, but then occasionally switch to another, it can cause similar confusion and frustration for the user (i.e., the driver).

Poorly designed stove tops

A couple months ago, I started using a new stove. Instead of the traditional electric coil-top, it features a glass top.

The basic interface isn’t that complicated. Corresponding to the four circular burners, there are four knobs that control the heat.

This is where the first major design error is. The back-left burner is smaller than the front-left one, and so is positioned to the left of the front-left one.

Naturally, therefore, one would expect the left-most dial to control the back-left area, and the second-leftmost dial to control the front-left one.

This is not so – instead, these are reversed. The situation is similar on the right side of the stove.

The second obvious design error is with the indicator lights. There is one light on the left of the stove top that indicates that the heating area is hot. There is another light on the right of the stove top that indicates that a burner is on. The only things that distinguish these lights are where they are (left or right side of the stove) and the small text next to each. The left one says “Hot Surface,” the right one says “Element On.”

The first problem is that it’s intuitive to think that a given light is indicating something about the burners on its side. The second problem is that the lights are the same colour. Since the only way to keep them straight at a distance (because the text is small) is to remember that one is on the left side and the other on the right – which is not easily remembered – an obvious solution to this design problem is to colour-code the lights. Perhaps yellow and red could be used to distinguish them.

In a way, it’s baffling that these basic elements of the stove user interface could have been gotten wrong.

To fix a problem, do you need to know what causes it?

To fix a problem, do you need to know what causes it?

In a word, no.

In complex systems – such as human bodies, ecosystems, and societies – it might be very difficult to pinpoint ‘the cause’. This is because many different things can often be changed to in turn change the effect, and similarly, many things interact with other things in various feedback loops, such that it is often difficult or impossible to disentangle cause and effect in any simple, point-of-contact way.

For example, if someone has a health problem, you can attempt to pinpoint exactly what mechanism is ‘misfiring’ so as to cause the symptom. This is often simpler from the patient’s perspective, if it can be achieved. Another way to do it is to apply very broad ‘rules of thumb’ – things that are in general good for a person and very well may cure the problem (in addition to other possible health benefits).

If someone came to me for health advice, and if upon consideration I were unable to delineate a simple cause-and-effect solution, I would recommend very general things that tend to put people into better health – consistently good sleep, certain kinds of exercise, getting outside, good amounts of sunlight, certain things to eat and not to eat, decreasing stress, and so on.

In some cases, these suggestions could ‘fix the problem’. In a sense, if they do then one knows ‘what caused it’, but in another sense one does not.

Extraordinary claims require extraordinary evidence

“Extraordinary claims require extraordinary evidence” is usually just another way of saying “That seems difficult for me to believe.”

The question is then: why is it difficult to believe?

A few key points are relevant.

1. Much evidence can be re-interpreted, given a theory choice.

For example, much of the evidence that supports a geocentric theory can be re-interpreted to support a heliocentric theory, given a few key, conceptual shifts. A claim may seem extraordinary because those few key, conceptual shifts are not there. Alternatively, the evidential debate will then be thrust back upon the evidence for or against some proposed conceptual shift that allows for a re-interpretation of existing data.

What this means is that the evidential ‘core’ of a theory, when compared to a competing theory, might be much slimmer than supposed.

2. Often, people are simply unfamiliar with certain evidence. The person who proposes a theory may not be familiar with certain evidence, or the person who finds it difficult to believe may not be familiar with certain evidence.

3. Often, people who think a theory is correct might be less critical at evaluating evidence that seems to corroborate it. This can be thought of as a ‘founder’ effect.

For example, is a recent tornado further evidence of global warming? One way to evaluate that is to check the frequency of tornados against the historical record. Yet, if one already believes there is global warming, one might be less inclined to rigorously test that hypothesis, and more inclined to rigorously test competing hypotheses.
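As a toy illustration of checking a frequency against the historical record (the counts below are made up, and treating yearly counts as Poisson is itself an assumption), one might ask how surprising this year’s count is:

```python
# Hypothetical sketch: how surprising is this year's tornado count, given the
# historical mean? Counts are invented; the Poisson model is an assumption.
from math import exp, factorial

historical_counts = [61, 54, 70, 66, 59, 63, 72, 58]   # hypothetical past years
this_year = 75                                          # hypothetical current count

mean = sum(historical_counts) / len(historical_counts)

def poisson_tail(k: int, lam: float) -> float:
    """P(X >= k) for X ~ Poisson(lam)."""
    return 1.0 - sum(exp(-lam) * lam**i / factorial(i) for i in range(k))

tail = poisson_tail(this_year, mean)
print(f"historical mean = {mean:.1f}, P(count >= {this_year}) = {tail:.3f}")
# A large tail probability means this year is unremarkable – weak evidence,
# on its own, of any change in the underlying frequency.
```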

Science fiction as science

Coming across a popular science-and-technology magazine by chance recently, I was struck by how much the articles on science and technology weren’t about the actual achievements of the scientists and engineers, but about what those achievements might be in the future. This is something echoed in many other journalistic accounts of science. I.e., much ‘science’ reporting is actually a kind of science fiction, where the hopes and beliefs of the writers are projected onto what has actually been achieved in the science.

The basic projected narrative is: “We are constantly making significant scientific and technological breakthroughs, which are continually transforming society. This is just the latest one.”

That this is a fiction can easily be seen retroactively – almost always, what is reported doesn’t happen. The supposed achievements fade away, to be replaced with other breathless accounts of breakthroughs, and so on.

Of course, sometimes significant breakthroughs do happen, but it is difficult to sort these out a priori. Indeed, my sense is often these changes occur more quietly – they just arrive one day, and work, and change (albeit often in minor ways) some aspect of everyday society.

Considerations on the Argument from Natural Evil

The argument from natural evil typically goes something like this:

  1. God is omnipotent, omniscient, and omnibenevolent.
  2. This kind of God would not allow natural evil to exist, where natural evil is understood as pain or suffering that isn’t caused by human choice.
  3. Natural evil exists.
  4. Therefore, there is no God as so defined.

How should a Christian respond?

If you think that Christianity should be, and properly considered is, robustly empirical and practical, then it is reasonable to wonder whether these sorts of theological arguments, important as they may be in pointing to conceptual inadequacies or tensions, are in some way missing the point of what is important about the phenomenon in question.

Consider. It is as if someone, noting the belief in gravity, also noted the seeming theoretical incompatibility between relativity theory and quantum theory, and concluded that, therefore, gravity did not exist. The correct response, it seems, would be to say that, whatever gravity turns out to be, what is relevantly important about it is real and therefore overcomes such theoretical puzzles as are involved in relativity theory and quantum theory. It might be that gravity turns out to be multiple phenomena, or it might be that our current conception of the ultimate nature of gravity is incorrect in some other way (and this is true of most everything). Regardless, our notion of gravity does important work. We know that gravity in some important sense exists, whatever it might turn out to be.

Consider that notions of God’s goodness, foresight, and benevolence are built up out of Christian experiences of providence, non-chance coincidence, the ‘Holy Spirit’, and so on. Arguments against Christianity of the above sort gain much of their perceived import from the mistaken notion that Christianity is primarily built on abstract theological speculation, when that sort of theology is, properly and as a matter of historical development, rather a result of consideration of a significant empirical base.

I.e., even if an argument such as the one above succeeded, it would only succeed in displacing a theological aspect, not the evidentiary bases that undergird that theological movement.

Can evolutionarily acquired cognitive traits be reliably considered as truthful?

Some, such as Alvin Plantinga (former Professor of Philosophy at Notre Dame University, now Professor of Philosophy at Calvin College), think that there are problems between notions of human cognitive reliability and a standard account of evolutionary development. The reasoning goes something like this:

1. Assume that a typical evolutionary account of the development of human cognitive abilities is true.

2. Evolution works by selecting things that allow for organisms to survive and reproduce successfully.

3. There seems no necessary (or even probabilistic) connection between this and cognitive abilities that tend toward truthfulness.

4. Therefore, we have no warrant for believing that the cognitive processes by which we arrive at (among other things) theories of evolution are reliable.

5. Therefore, if a typical evolutionary account is true, it undermines its own warrant for belief.

That is, the ‘universal acid’ of Darwinism eats away at its own epistemic foundations.

How should a Darwinist respond?

To see why there might be a connection between belief (map) and reality (terrain), consider a robot navigating a terrain. If the internal representation that the robot had of the terrain were random, it probably would not be able to navigate the terrain optimally. Rather, a specific kind of representation of the terrain, coupled with a certain interface between the representation and the world, is required for the robot to navigate accurately. Applied to the biological world, navigating accurately is vital to finding resources, and so on.
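A minimal simulation can make the point vivid. The following sketch is purely illustrative (the grid world, obstacle rate, and greedy walker are all invented for the purpose): a ‘robot’ that plans with an internal map matching the terrain reaches its goal far more often than one planning with a random map.

```python
# Illustrative grid-world: compare how often a walker guided by an accurate
# internal map vs. a random internal map reaches the goal without stepping on
# an actual obstacle. Entirely hypothetical setup.
import random

SIZE, OBSTACLE_P, TRIALS = 8, 0.25, 500

def make_grid():
    grid = [[random.random() < OBSTACLE_P for _ in range(SIZE)] for _ in range(SIZE)]
    grid[0][0] = grid[SIZE - 1][SIZE - 1] = False   # keep start and goal clear
    return grid

def walk(terrain, internal_map):
    """Greedy walk toward the far corner, stepping only where the MAP says it
    is clear; the walk fails if the TERRAIN actually has an obstacle there."""
    x = y = 0
    visited = {(0, 0)}
    for _ in range(4 * SIZE):
        if (x, y) == (SIZE - 1, SIZE - 1):
            return True
        options = [(a, b) for a, b in [(x + 1, y), (x, y + 1), (x - 1, y), (x, y - 1)]
                   if 0 <= a < SIZE and 0 <= b < SIZE
                   and not internal_map[a][b] and (a, b) not in visited]
        if not options:
            return False                                # the map offers nowhere new to go
        x, y = max(options, key=lambda p: p[0] + p[1])  # head roughly goal-ward
        if terrain[x][y]:
            return False                                # map said clear; the world disagreed
        visited.add((x, y))
    return False

random.seed(1)
accurate = sum(walk(t, t) for t in (make_grid() for _ in range(TRIALS)))
random.seed(1)
rand = sum(walk(make_grid(), make_grid()) for _ in range(TRIALS))
print(f"accurate map: {accurate}/{TRIALS} runs reach the goal")
print(f"random map:   {rand}/{TRIALS} runs reach the goal")
```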

The way in which a representational system points is the interface between it and the world. So, a representation will have different significance (‘meaning’) if it is applied to the world through a different interface. As a maxim, ‘beliefs’ or representations do not exist in a vacuum. Plantinga seems to believe that they do – i.e., we can talk about an organism having a belief, where the ‘content’ of that belief is completely separate from how that organism interacts with the world. As far as evolution is concerned, this seems highly questionable, because the meaning of a representation, and therefore its truthfulness, is in part determined by how the organism containing the representation interacts with the world in relevant contexts.

However, the above Plantinga-esque critique of Darwinism gains some of its bite from the realization that we have (presumably evolved) cognitive systems that in some way are misleading. Consider an area where many people say that we have a false representation of the universe: the representation of the Earth as not moving. Here, it seems like what is true and what is evolutionarily useful diverge. Just so, it might be that theories of evolution themselves are based on useful but misleading cognitive systems.

I think a more accurate way to understand something like our representation of the Earth as not moving is to say that our representation of the world as stationary is true in the relevant context. I.e., to understand how representational systems are truthful, one has to understand their applications. I.e., any representational account has to be interpreted or applied. It is only when we start to apply a representational system designed for navigating on the Earth as if it were designed for navigating the solar system that it starts to mislead. (Even there, though, in this case it rather straightforwardly plugs into heliocentric models by use of a transformation algorithm.)
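The parenthetical point about a ‘transformation algorithm’ can be illustrated with a deliberately simplified sketch (circular, coplanar orbit; hypothetical numbers): a position recorded relative to the Earth converts to a Sun-centred position just by adding the Earth’s own position relative to the Sun.

```python
# Toy geocentric-to-heliocentric conversion. The circular, coplanar orbit and
# the example numbers are simplifications for illustration only.
from math import cos, sin, tau

AU_KM = 1.496e8  # mean Earth-Sun distance in km

def earth_position(day_of_year: float):
    """Earth's (x, y) relative to the Sun, assuming a circular orbit."""
    angle = tau * day_of_year / 365.25
    return AU_KM * cos(angle), AU_KM * sin(angle)

def geocentric_to_heliocentric(obj_xy, day_of_year):
    """Shift an Earth-centred coordinate into the Sun-centred frame."""
    ex, ey = earth_position(day_of_year)
    return obj_xy[0] + ex, obj_xy[1] + ey

# e.g. an object observed 384,400 km from the Earth along the x-axis on day 100:
print(geocentric_to_heliocentric((384_400.0, 0.0), 100))
```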

So, a correspondence between map and terrain is obviously useful for purposes of navigating that terrain, say, and science properly understood has not shown that our (presumably evolutionarily derived) representational systems tend to be unreliable – rather, it has sharpened our understanding of the scope of their appropriate application. Indeed, these sorts of considerations bring into question the scope of applicability of certain cognitive mechanisms underlying typical scientific ways of representing the universe, and so we probably are warranted in emphasizing careful consideration of the limits of the epistemic mechanisms we use to build scientific representations. Here I think there is something very useful about Plantinga’s sort of critique, but this is not the conclusion he tries to draw.

So, these sorts of considerations don’t show an incoherence between evolutionary processes and reliable cognitive processes. If anything, they lead one to think there certainly should be correspondence between representations and terrains, properly interpreted.

See here.

Truth and interpretation

Any map of reality is a map because of a purpose. That is, the map was designed to be used for a purpose – otherwise you could just substitute the reality, and forget the map.

Consider a topographical map, which shows elevation lines roughly corresponding to the actual geography. Now consider that the map might also have various colours on it. A person, looking at the map, might think: this map is claiming that the terrain in this spot is coloured such-and-such. However, if so construed, the map’s claim might be false – when looking at the terrain from an airplane (say), a person might not typically see the terrain as coloured the way the map has it. Does this mean the map is false in this respect?

No. Any map comes with an intended way it is to be used – what appears on the map has to be translated and applied in a certain way. It is only if it is reasonable to interpret this kind of map as claiming that its colours are what a human would see when looking at the terrain in such-and-such a way that the terrain’s not having those colours would be equivalent to the map saying something untrue. Any representation requires interpretation, and this means an understanding of the representation’s intended limitations.

The term ‘map’ can be used as shorthand for any account, or representation, of reality. Consider historical work as presented by certain ancient Greek writers. Sometimes, they would present a scene or event in a way that they thought would best capture its emotional significance or presumed import. (Often, people creating movies will do something similar, combining characters or changing chronology in order to better capture what they think is important about a story in the format they have. It’s not unique to ancient Greek historians.) If the reader thought they were presenting things in precisely the chronological way it happened, then they might think the writer was saying things that were false. Yet, if the writer did not intend for it to be so received, and if a typical reader of the day would not so receive it, then it follows that the historical account was not saying things that were false – rather, it is being badly interpreted by the contemporary reader.

This is a problem in general with old writing, where what the writers (and transmitters) of the account might have thought important may not be the same as what we might tend to think is important.

Ultimate meaning, eternity

Matt Fradd quotes from William Craig here:

“If each individual person passes out of existence when he dies, then what ultimate meaning can be given to his life?

Does it really matter whether he ever existed at all?

It might be said that his life was important because it influenced others or affected the course of history. But this only shows a relative significance to his life, not an ultimate significance.

His life may be important relative to certain other events, but what is the ultimate significance of any of those events? If all the events are meaningless, then what can be the ultimate meaning of influencing any of them? Ultimately it makes no difference.

[...] But it is important to see that it is not just immortality that man needs if life is to be meaningful. Mere duration of existence does not make that existence meaningful. If man and the universe could exist forever, but if there were no God, their existence would still have no ultimate significance.”

Craig’s argument can only be thought to work by emphasizing the epithet ‘ultimate’. For it is obvious that human life has meaning without those things being eternal or without a cognizance of God, i.e., in a moment, there is meaning in various experiences.

When people feel most alive, when life has most meaning, they are doing things that in themselves are meaningful. ‘Flow’ experiences are a type of these. If one thinks of meaning as an aspect of natural existence, one can see how certain experiences in life bring meaning to one’s life – i.e., we are designed to find certain sorts of experiences meaningful. Indeed, the art of creating games is creating systems which create meaning by leveraging these naturally existing systems. There is a goal, there is progress toward it, and so on. When done correctly, people can find these sorts of experiences meaningful.

However, Craig’s argument is partially correct as far as this goes. Contemporary society can reduce meaning (think meaningless office jobs, for example), and a Christian view of various things can add meaning to those events (such-and-such wasn’t just a random occurrence, but happened for a reason, or meaningful existence doesn’t end with death, for examples).

The Nietzschean maneuver of thinking ‘there is no God, therefore we create our own meaning’ is essentially an error. Meaning is part of a natural human process. It is found in various sorts of experiences we have, but it is not arbitrary. It is also part of why Christianity is interesting – it posits significantly more meaning in certain aspects of our lives than there would be in various kinds of non-Christian views.

In other words, Craig is overplaying his hand – but there is enough truth in what he is saying to make it worthwhile to at least ponder. Indeed, computer games often purchase their meaning by having ‘high scores’, which are records of deeds that persist after the game. There is no question that these sorts of devices can add meaning to a game, just as a dimension of eternity can add meaning to human actions.

Consider Craig’s quotation from Jacques Monod:

“Man finally knows he is alone in the indifferent immensity of the universe.”

We might ask: why does this matter? First, humans are social animals. Second, the proposition is that he is alone among indifference. Yet, a moment’s reflection will show this is not true – humans are surrounded by other humans, and many of those humans are not indifferent. Christianity adds more benevolent beings (angels, and a tripartite God) who are also not indifferent. Yet, it is the same principle.

So although it is a truism that actions can’t have ‘ultimate meaning’ if ultimate meaning is defined as something eternal and in some way related to God, and if there is no God or eternal aspect of reality, it’s not clear that the conclusion one gets from not having this sort of ultimate meaning is the one Craig thinks follows. Events and actions here in finite time can be meaningful in themselves – Christianity posits a possible eternity of meaningful experience, but these in turn are also meaningful in themselves. The difference is in scope and possibly magnitude, not relevant type.

Stained glass windows and the language of light

One development that often goes unremarked in the evolution of church buildings is lighting. Consider how lighting interplays with a common architectural feature of churches, stained glass windows. When there is low lighting inside a church but daylight (say) outside, the stained glass windows are illuminated, creating (usually) beautiful pictures. When there is more lighting inside the church, this effect is reduced.

Historically, churches would have been lit by candles, torches, and so on. Not only would there be beautiful images lit up by the external light (which is symbolic in the context as well as beautiful), but inside the church would be candles and so on. This internal sort of lighting is generally more conducive to spiritual contemplation, and so on – the kinds of states churches generally are created for.

Now consider a common type of ‘modern’ church. It is lit up by rows of fluorescent lights, which often make it feel like an office, say. What is the language of such illumination saying to the person inside the church? Now consider the lighting at a typical Starbucks – which is better? Why?

Science, consciousness, explanation, theism

In a discussion with Alister McGrath (formerly Professor of Historical Theology at Oxford and now Professor of Theology at King’s College, London) Richard Dawkins (formerly Professor for Public Understanding of Science at Oxford) says (19:45):

“I totally agree with you that there is deep, deep mystery at the base of the universe, and physicists know this as well as anybody, questions like ‘What, if anything, was there before time began?’* Perhaps it’s because I’m a biologist who has been impressed through my whole career by the power of evolution, that the one thing I would not be happy about accepting in those deeply mysterious preconditions of the universe is anything complex. I could easily imagine something very hard to understand at the base of the universe, at the base of the laws of physics, and of course they are very hard to understand, but the one thing that seems to me clearly doesn’t help is to postulate anything in the nature of a complicated intelligence. There are lots of things that would help, and physicists are working on them [...], but a theory that begins by postulating a deliberate, conscious intelligence seems to me to have sold the past right before you even start, because that’s one of the things that science has so triumphantly explained. Science hasn’t triumphantly explained, yet, the origin of the universe, but I feel, I have a very strong intuition and I wish I could persuade you of it, that science is not going to be helped by invoking conscious, deliberate intelligence, whatever else preceded the universe, whatever that might mean, it is not going to be the kind of thing which designs anything [.]“

*This question is nonsensical, as ‘before’ in this context is a temporal relation. One can ask “What exists non-temporally?”, and presumably something like this is what Dawkins is asking, and he suggests something similar near the end of the quotation.

Explaining phenomenal consciousness in materialist (‘scientific’) terms in contemporary philosophy of mind is often referred to as ‘the hard problem’. It is so called because a significant percentage of those who study this issue believe it is conceptually very difficult – that we do not know how it could be explained, and some believe it cannot be explained, by an account of the universe that is something like the one we have in current physics.

It is true, however, that evolutionary biology has given a general account of how biological organisms (including brains) might have arisen, and it is true that we understand more about the brain and behaviour than before (for example, we can generate computer-based ‘neural networks’, which use certain aspects of human brain functioning as an analogy for the computer programs, which in turn can accomplish certain behavioural tasks in ways that are somewhat similar to how humans do them).

So, it seems reasonable to assume that what Dawkins has in mind here is an explanation of the brain and behaviour, or of ‘functional consciousness’ as opposed to ‘phenomenal consciousness’. Science has started to give an explanation of how human brains do certain things (i.e., behaviour or functionality), and these explanations in turn seem to fit into a larger story of the development of organisms in general through evolutionary processes. I.e., we have begun to develop a plausible causal story which starts from simplicity and builds complexity relevant to brain functionality.

Yet. If there is a conceptual conundrum between phenomenal consciousness and materialist accounts of the universe, as indicated above, we also know regardless that there is a tight linkage between phenomenal consciousness and behaviour. For example, I can talk about the contents of my phenomenal consciousness and act in other ways on them, brain activity seems to be highly correlated with certain kinds of phenomenal states, and so on. If science doesn’t seem near an explanation of phenomenal consciousness, and if certain behaviour or brain activity (i.e., ‘functional consciousness’) seems dependent on or to act in a tight causal relationship with phenomenal consciousness, then to what extent does it make sense to say that ‘deliberate, conscious intelligence’ is something we have triumphantly explained? It seems rather the opposite – it is one of the things that has stymied science the most, and led contemporary materialist philosophers into contortions in an attempt to explain it.

Anti-culture

Edward Feser discusses act and potential, among other things, in these two lectures:

http://www.youtube.com/watch?v=_1Dkp1U9pek

http://www.youtube.com/watch?v=-O40N4nNGUc

What’s most interesting to me isn’t Feser’s arguments for a kind of goal-directedness in physics, say, or the deconstruction of New Atheist conceptions of certain classical arguments for God, but rather the surrounding decor. The room is incredibly bland. The architecture, the lack of sculpture or paintings – there is nothing. This is not unique – it is typical of the standard environment in which university classes are held. It is as if all culture has been removed from the academic environment. It is as if those in the room are surrounded not by a culture, but by an anti-culture.

Prediction and choice

The trick of determinism is that it makes it sound like the initial conditions ‘choose’ the end result – they determine what happens. Yet, this is in a sense not really accurate – one may be able to predict the end result based on the initial conditions, but there are intermediaries (in this case, humans) who choose, and it is these choices that lead to the result. These choices are of course based on reasons and information – if not, what good would they be as choices? So, yes, one might be able to predict that someone will choose an apple instead of a spoonful of dirt, but that hardly undermines the autonomy of the chooser – what it means to say we can predict this choice is that the chooser is able to take in information and act on good reasons for doing x instead of y. This is the whole importance of being able to choose – it’s why organisms have the ability to choose. That the prediction comes true may require choosers as intermediaries.

However, I think another reason determinism seems incompatible with choice is that it seems to go along with reductionism – that ‘good reasons’ don’t really exist, but rather there are just mindless fundamental causal processes that can be used to describe a situation. So, when determinist thought experiments are set up, they sometimes involve a description of the universe in terms of fundamental, mindless causal processes. From these, so the thought experiment stipulates, one can predict some end result, that will have as intermediaries ‘choices’. Yet, this thought experiment implies, those choices in a sense don’t really matter – just the basic, mindless causal processes that go along with them do.

The most obvious response to this is: it is hypothetical, and not entailed by scenarios which involve prediction of certain sorts. It very well may not be the case that the universe works in that way, i.e., that there is no such thing as a conscious mind that can affect decisions, but rather only mindless, causal processes which are then in some way reflected in the conscious mind (i.e., the mind is epiphenomenal). The idea that the universe does work this way in its entirety is somewhat speculative, and indeed, there is significant evidence that it is not true.

If one sets aside this consideration, however, and considers the thought experiment as specifying initial conditions that lead to choosers that have conscious experiences, are able to draw on memories, and then coordinate their actions based on that information and those reasons, and so on, then this implicit nihilism is no longer present, and I think the seeming incoherence between ‘determinism’ and free choice is reduced. To distinguish these kinds of cases, perhaps ‘predictionism’ would be a more useful term for the general case, which can include the causal role of conscious choosers beyond mindless, causal processes.

Of course, it might be the case that everything in the universe is determined by mindless, causal processes, and that ‘minds’ as we understand them are epiphenomena (or are abstractions of these mindless, causal processes – a kind of eliminativism). But here the ‘problem’ as far as whether ‘we’ ‘choose’ isn’t so much compatibilism (i.e., the compatibility of prediction and free choice) as a kind of reductionism.

Also see here.

Neutrality and objectivity

Neutrality is not objectivity. It is quite possible, and common, that being neutral between two positions entails being non-objective.

Objectivity means making the relevant facts clear, giving relevant interpretations and the reasons for these, and then also giving one’s own interpretation of those facts, and the reasons for that interpretation. These last two steps are important so that a person can better evaluate what is presented – this notion of objectivity is why, in some scientific journals, scientists disclose any possible conflicts of interest. In so doing, a reader may better evaluate what is presented.

So, if two people are debating, and one is engaging in propaganda and intentionally trying to obscure facts, while the other is less so, then being neutral between the two debaters is not to be objective – rather, if one is being objective, one should point out the former debater’s obfuscations, even though this entails no longer being neutral.

What I find frustrating is the pretense of objectivity. This is found, often, in textbooks or other books intended for an academic audience, say. The authors often do not present their own views, cloaking these under the pretense of objectivity. Yet, it is precisely the opposite of objectivity to do so. I would much rather have a textbook where the author states their views, and even argues extensively for them. To be non-objective, on the other hand, would be to intentionally distort or occlude relevant facts, omit important responses to a given position, and so on. None of this is incompatible with presenting one’s view. (The pretense of objectivity also often makes for more boring reading.)

Arguments, in terms of their actual merit, often gain considerably by being more objective – the arguer is forced to understand the relevant facts, the responses to various arguments, the counter-responses, and so on. (Rhetoric is another issue.)

Marginal technological change

One of the ideas prevalent today is that there is unprecedented technological change. Much of this, however, is actually marginal technological change related to existing core technologies, i.e., in effect a playing out of existing core technologies.

Consider cars. The basic structure of the automobile hasn’t changed in over 100 years. There is an energy source (typically a combustion engine nowadays, although more and more so a battery or combination), which causes something to move, which in turn causes wheels to move. Once we had a technological breakthrough which allowed for sufficient portable power to be generated (whether through a battery or a controlled explosion of hydrocarbons), there were no consequent, widely utilized, and revolutionary changes to cars in the basic technology relevant to transportation. Energy source -> something moves -> wheels move.

Once we had the basic technological core of cars, advances in the main function of a car (speed of transportation) diminished and then eventually stopped, and in some cases reversed (due to new problems that weren’t being solved relevant to driving, such as rush hour traffic). This is a common theme in contemporary technologies – air travel is similar, where initial increases in speed of transportation eventually diminished, and then began to reverse.

Computer technology is where things seem to be advancing rapidly, and this area seems to be the main example of why we have exceptional technological change, but this sense is not due to the continual introduction of new, revolutionary core technologies. Rather, it is typically the refinement or marginal advancement of technologies related to the existing core technologies, which are electric current running through logic gates. More so than with car technology, the fundamental technologies involved had huge and diverse potential, and so we continue to make significant innovations using the same core technologies. This, however, should not be confused with introducing new, fundamental advances in technologies (which in turn usually require advances in the underlying science).

In other words, much of the technological advance today is actually a playing out of advances that were made well over 100 years ago (electricity, combustion engine, radio, logic gates, and so on). The playing out of advances is not something unique to our era – in most times, there were certain technologies which were seeing relatively constant innovation. These considerations suggest that the impression of exponential, contemporary technological change may be overstated.

Representation, truth, and art

Selected quotations from a conversation here:

“If you look at [the representationalist work ...], in a sense it’s a lie. This is coloured paste on canvas that is trying to represent something that it is not. It’s a falsehood, it’s an illusion.”

“You’re right, what’s on the left is a lie – it’s something trying to be something it’s not. While on the right [an abstract expressionist work], it literally is the painting. The painting is what you’re trying to see [...]”

“And so there is a kind of fundamental truth that was upending 2,000 years of tradition. How radical, how brave, how heroic is that?”

As far as reflecting a certain justification for abstract expressionism, this sounds plausible. Let’s stop pretending, and get down to what these objects (of art) are. On reflection, however, I think a more appropriate question is “How silly is that?”

Consider “This is coloured paste on canvas that is trying to represent something that it is not.” This is basically tautological, i.e., a representation almost always is about something it is not. Consider that when I say “The sun is shining” there is a representation achieved through the movement of air over vocal cords, leading to vibrations in the ambient air, which in turn lead to complex causal processes within a listener’s ear and brain, and so on. What determines whether I am saying something true or not is whether this phrase corresponds to a state of affairs – i.e., whether the sun is shining. It is irrelevant whether the movement of air and so on is the same thing as the sun shining. The sort of statement above misses the whole point of representation, i.e., thinking or talking about or understanding things without having to have the actual things present. That’s why we represent.

Of course, thinking of art objects in terms of those objects, and not in terms of what they are representing, might have some interest. It is not, however, because they are more ‘truthful’. If anything, they are less truthful, because they tend to be less capable of expressing complex ideas or scenarios which in turn are capable of being true or false.

On art

Preliminaries: ‘art’ is a word, attached to a concept (or number of concepts). If a word is used in a certain way, the concept attached to that word will reflect that usage. If you start to change how a word is used, the associated concept will start to change as well. Of course, you can reject a certain usage.

(One trick in science (i.e., figuring out the cause-and-effect processes in the universe) that has evolved is creating new words. Scientists typically do this by combining Latin or Greek base words to form a new compound. The advantage to this is that you don’t get confusing cases where existing concepts might be mistaken for the new concepts, the latter of which are created in light of new discoveries and theories about how that part of the universe works. (It’s not this simple.))

So, if there is a debate over whether something is art, the debate is not about what a word could mean (it could mean anything) or what it does mean (although this is relevant), but rather what it should and can mean. To ask what it should mean is to refer to some purpose for the word. To refer to a purpose is to ask what’s important about the world for us, and how a word might be used relevant to that.

So, a debate about a word like ‘art’ is often a debate about what is important about the world. What we think is important in the world is informed by what we believe to be true or real. So, if people have differing views on what is real, it isn’t surprising if they think a word should be used in different ways. The other main reason is if people have differing interests related to how a word is used – for example, if using a word one way helps one person, but is neutral or hinders another person, then their views on how the word ought to be used may diverge.

What is the primary importance of art? Is it ‘refocusing ideas’? Or is it perhaps conveying notions of the Good – of Truth, Beauty, and Virtue? Where one stands on this to an extent will depend on what one thinks is real, and where one’s interests lie. For example, much of contemporary art holds to the idea of only subjective truth, is intentionally ugly (while saying it isn’t), and intentionally tries to destroy the sorts of things that would tend classically to be considered as virtuous.

So, if you think that a primary purpose of art is to explore, express, or better understand the Good, then much of contemporary art is either not art or is poorly done art. However, if you eschew these notions as ‘fuddy duddy’ ideas, or what have you, and instead think that the primary purpose of art is contextual novelty, say, then these sorts of works may be considered not only veritable art but art of high quality.

In the end, then, many debates over definitions come down to ontology (theories of what there is) and interest politics (what is advantageous to whom). Since many art critics, in this case, may be wrong as a group in terms of their ontology, and have certain interests which diverge from many others, it may not always be the soundest idea to listen to their theories of what art is supposed to be.

The locality of causes

How do we determine where something is?

We look at where there are effects, and then postulate a cause that brings about those effects.

That is, to say that a thing has a location in science is to say that it has effects on things that have locations, plus an inference that, therefore, the cause also has a location.

This is how causes in science come to be thought to be in some place, i.e., spatial.

What are exceptions?

The exception would be if something seemingly brute happens at a locale. One gets an effect, but can detect nothing separate from that effect at that locale. There are a few options:

1. It is a truly brute effect, i.e., there is no reason for it occurring, and so no local cause. (This is perhaps the idea behind ‘truly random’ quantum effects.)

2. There is a cause, but the only way we can interact with it is through the effect already detected, and so can investigate it no further. (This could be contingent and due to the technical apparatus we have for investigating the cause, or it could be simply how that cause works.)

3. There is a cause, which can be investigated further, but it is originating somewhere else, and if we could get there we could investigate it further. (So, there is a sort of limited one-way causal pattern in the place of the effect. Here, there is usually a medium through which the cause leads to the effect.)

4. There is a cause, but it is ‘outside’ space, and so the only way we can investigate it while ‘in’ space is indirectly. (For example, if the universe had a beginning, then does it have a cause, and if so, what caused it? It is reasonable to postulate that there is a cause which is ‘before’ or ‘outside’ the creation of space, and therefore non-spatial.)

Here is the question, however: how do we distinguish between 4., 3., 2., and 1.? How, methodologically speaking, would we distinguish?

Also see here.

How can divine revelation make sense?

One of the more ridiculous notions in Christianity to a typical secularist is that of divine revelation. Whereas arguments from personal experience can be made fairly directly, arguments made from scripture are much more difficult to justify. Therefore, much of Christian discourse seems ridiculous, because it is couched in or buttressed by references to certain scripture as divine revelation.

‘Divine revelation’ has three main components – that there is a divine reality, that there can be revelation, and that the divine reality can be the cause of revelation. Christianity includes the idea that we have good reasons to believe that certain writings – such as the Gospel of Luke – contain divine revelation. How might the first three of these claims seem a little closer to plausible?

1. That there is inspiration. Often, writers will say that when they write it ‘flows’ – the words just ‘come to them’. Similarly in music, or in problem solving – a solution ‘comes to oneself’. In typical discourse, we might call this ‘inspiration’ – it is a robustly evidenced phenomenon.

2. That there is revelation. The first question is: “What is ‘revelation’?” ‘Revelation’ and ‘reveal’ have the same root, and revelation basically means a ‘revealing’ of information. (For example, “New revelations about such-and-such case!”) The basic idea is the same as with inspiration. Some information – an idea, text, music, solution – comes to oneself, i.e., is ‘revealed’. ‘Revelation’ just suggests a ‘fuller’ or more detailed sense than ‘inspiration’. Again, (non-theistic) writers often say that what they wrote seemed as if they were merely transcribing it. That there are these sorts of fuller or more detailed experiences of inspiration is also robustly evidenced – however, they happen more rarely than the more general sense of inspiration.

3. That revelation can reveal important information. This follows fairly simply from 2. Sometimes, people are ‘inspired’ and write nonsense or things that turn out to be false. Other times, however, a piece of text, music, or a solution to a problem comes to them and it turns out to be veridical, beautiful, and/or useful. I.e., it is straightforward to note that in some cases of inspiration or revelation, what comes is true.

This leads to a simple question: where does the information which leads to inspiration or revelation come from? Those who think there is such a thing as divine revelation think that some of it comes from a divine source. This in turn leads to two problems as seen from a typical secularist’s viewpoint: is there really a divine aspect of the universe, and can this divine aspect really be the source of inspired or revealed information?

4. That there is a divine aspect. This can be made to seem a little more plausible by citing the large amount of evidence in people’s experiences which suggests there is. Consider here, where 3 sources of evidence are cited.

5. That the divine aspect can be a source of information in some cases of inspiration or revelation. The 3 sources of evidence considered at the link are not just evidence for a divine aspect, but evidence that this aspect can affect or interface with (or, perhaps, be a part of) human brains or minds.

A next question is then whether and to what degrees we have good evidence for divine revelation in specific claimed instances.

The basic idea with religious ‘faith’

‘Faith’ is a central word in religion, and particularly in the Christian religion, and as with most central words in a language more broadly speaking (such as ‘know’), it contains multiple meanings and works in various directions.

For the word to start to make some sense to a secularist, however, it might be useful to start at this point: faith can refer not to belief that God exists, but rather to belief in a specific version of God’s character. This is a more natural way of using the word in everyday language – as in ‘I have faith that someone will show up at a certain time, because I have had repeated experiences in the past where they have done so’. The basic idea here is one that basically everyone acts on in day-to-day life – habit or character is inferred from behaviour, and so one has warranted belief that someone will probably act in such-and-such a way in the future.

This leads to the next question: how would one know that God has a character or something like it (is person-like) and discern what that character is?

There are 3 main areas of evidential support typically cited in Christianity, as far as one’s own experience goes. The first is providence – the idea that, usually in retrospect, one can see a pattern or logic to events in one’s life, even though at the time it might have seemed like there wasn’t one. The typical Christian idea here is that God has an intention – a forethought or will – for how things will turn out (but that the actual turning out of things in that way depends on human choices). Through repeated experiences of these sorts of things, and by developing a better ability to listen and communicate with God (through various practices), one can then better align oneself with and allow for God’s providence, and so build a sense of the sort of God there is.

The second is ‘coincidence’ – seemingly unlikely events where things come together in a certain way. These are distinguishable from providence in typically being noticeable at the moment. Christians attribute these sorts of coincidences to God, or angels which are proximate representatives of God’s will. Similar to providence, through repeated experiences and developing an ability to notice God’s feedback to one’s own thoughts through these sorts of coincidences, one starts to build an evidenced concept of what sort of character God has.

The third main source of evidence is ‘religious experience’ – experiences of the ‘light of Christ’ or the ‘Holy Spirit’, for example, or even just of a general sense of ‘goodness’ that is perceived to indicate the presence of some divine aspect.

Although the three sources of evidence discussed above have to do with the specific nature of God or divine reality, they also work as evidence that there is a God – another reason why the word ‘faith’ is often run together on these issues. Interestingly, in Kreeft and Tacelli’s Handbook of Christian Apologetics, of the 20 arguments for the existence of God, only the third source of evidence above is mentioned, in argument 17.

‘Faith’, then, in the context discussed above more properly refers to the character of a relationship – having faith in a specific notion of God, say, because of evidence of his character that has built up in the past.

 

Church as symbolic

Much of what occurs at church (in a Mass, say) is symbolic. (It can also be sacramental, which is at least symbolic.) Therefore, to understand what is going on in a Mass, it helps to understand the symbols.

Symbols point. So, much of church architecture, or of, say, a Mass, is referring to things beyond itself. It is like a book that can be read (and read with different interpretations).

So, in a church a candle is not simply to generate light – it usually symbolizes something else, such as the light of God that dwells in a human. And so on. Often, the symbols are compounded – that is, one symbol takes on meaning in reference to another symbol (for example, incense gains some symbolism because it rises upwards – but that refers to another symbol, that of upwards as where divine reality is, and so on).

If one does not know that much of what is occurring is symbolic, and does not then have some understanding of the symbolic language, churches or Masses, say, lose much of their meaning. It is like reading a book where one does not understand the alphabet or language in which it is written – or where one doesn’t even realize that the ink splotches on the pages refer to anything.

It is not only like reading a language, it is also like learning a language – building up a conceptual repertoire and connecting those to symbolic references to the things those concepts are about.

An analogy for being ‘outside’ the natural world

If the universe had a beginning (not eternal), then that gives a reason for thinking there is something that caused it to begin. That thing would be ‘outside’ the universe, so to speak. What would it be like for that thing (such as the Christian God) to be outside the universe, including outside time?

Since so many of our concepts and so much of our language are conditioned by or grow out of our experience of passing time, it is difficult to even talk about what it is like to not be in passing time. Perhaps, however, a useful analogy would be to viewing a series of tapestries. The tapestries in question, say, represent a procession of events through time. Say the Unicorn Tapestries, which depict the capture of a unicorn.

To us, all the events are ‘happening’ simultaneously – we experience no procession of time from one event to the next while viewing them all at once. Now consider interacting with the events. We can modify a given tapestry, yet we are not doing so inside the temporal events depicted in the tapestries.

When people talk about a non-natural cause affecting the universe, perhaps this is what the situation would be like, or would be a useful conceptual tool for thinking about the situation.

Is religion important?

Religions are, basically, answers to the question “How ought I to live?” or similarly “What is the Good life?” Every television show, movie, book, and so on, contains some (implicit or explicit) answers to these questions. This is one reason why media often is at odds with various kinds of religious ideas – they are overlapping in what they are doing.

Religions store up a body of ideas, beliefs, and practices that are believed to be answers to those questions. Within a religion, there are usually many different kinds of traditions, churches, services, and so on – many different ways to answer that question – although the answers within a given religion are typically more similar to one another than they are to answers from some other tradition.

It seems to me that the way to figure out how good an answer a given approach is (and different approaches may be better for different people in different circumstances) is to look at the people practicing it. (I am thinking here of more organized answers, such as religions give, and then secular answers as well.) Basically, does it work well on the whole – does it work better than the alternatives available? Does it work well for this sort of person, but not for that sort?

This requires some exploring – actually seeing what the fabric of a given person’s or people’s lives is like, and then comparing one approach (whether ‘religious’ or not) to another. It is often not easy to do this, and it is largely empirical rather than theoretical. That is, one must go out and see whether certain approaches work, how well, and for what kinds of people.

Natural law and non-natural intervention

In Handbook of Christian Apologetics, Kreeft and Tacelli write (p. 109):

“A miracle is a striking and religiously significant intervention of God in the system of natural causes. [... T]he concept of miracles presupposes, rather than sets aside, the idea that nature is a self-contained system of natural causes. Unless there are regularities, there can be no exceptions to them.”

Consider that the natural system (featuring causal processes unfolding in time and space) has a beginning. What caused it to begin? It is reasonable to posit that something caused it to begin. This something would be in some sense prior to (but not temporally prior, as time is a feature of the system) the natural system. If that something can create the natural system, then it doesn’t seem implausible, a priori, that it would be able to affect the system at various points. A simple metaphor for this might be someone who creates a game with its own rules that can play out (such as Conway’s Life) but then also steps in at various points to change the configuration of the game.

So far, this makes some sense. It is not conceptually problematic, in the broadest of brushes, to think this possible. The next question then becomes whether miracles (so defined) have, in fact, occurred.

This is where things get problematic. If miraculous causal events are those which aren’t determined by the system of natural causes running endogenously, but rather (at least partially) through an externally (so to speak) originating cause, then how could we tell which causes are of this latter type?

Consider Kreeft in The Philosophy of Tolkien (p. 54):

“It is easy to identify miracles when we see them[.]”

He does not elaborate. Yet, is it obvious? How can one distinguish between causes that originate external to the system, and those that are part of the system? Consider the notion that a significantly advanced technology could be considered ‘magic’ by those who don’t understand the technology, or the general conceptual backdrop for the technology. Since we don’t understand how the natural causal system works (and probably aren’t even close), why should we think that we can tell which causal events are natural and which are not?

The basic idea with scientific methodology

Bruce Charlton posts the deleted ending to his forthcoming book on the corruption of science, here.

All the points in the 8-point list are interesting. One point in particular I found useful was:

“6. Derive methods from your problem (not vice versa). What you actually do – your methods - will depend on your talents, your interests, your opportunities – these will arise from the interaction between yourself as an individual and the ‘problem’ you are tackling. Your methods might be theoretical or empirical. If theoretical they might be critical or constructive. If empirical they might be statistical, observational, experimental. And so on. It is hard to be more precise than that.”

Much has been written about the methodology of science. If the above is correct – and I basically agree with it – then the general ‘methodology’ of science is not technical but ‘in spirit’, or as Charlton says:

“Only do science if you are genuinely motivated to discover the truth and will practice the habit of truth.”

That’s about it. The methodology of science is ‘genuinely try to figure out what’s going on.’ The rest is all contingent, and difficult to convey in simple maxims. It is a domain-based craft, where a significant amount of the ‘know-how’ is intuitive or not easily reducible to simple algorithmic advice. It involves, more or less, a large range of problem-solving techniques which vary based on the situation. Different approaches may work in different situations, where we won’t necessarily know in advance which approach will work.

 

Robust domains

Bruce Charlton writes:

“That all complex adaptations (functionality) in biology and all the diversity of species, and indeed the reality of species differences could be explained by natural selection is a metaphysical assumption [...] and un-provable by common sense.

But also: that such and such a breed of pig or dog – with a relatively specified appearance and behavior and functionality transmissible by heredity – was produced by the breeding experiments of Farmer Giles or Mr Smith… this is a matter of appropriate knowledge: common experience evaluated by common sense.”

This is a more specific but similar point to one I make here, where I say that when evaluating scientific claims, an important question is:

“What is the robust domain of this? [...] Being robust means that the findings have been tested and confirmed extensively for a given set of phenomena. Very often, we find out that a theory that worked well to explain one domain doesn’t work as well when expanded to other domains.”

This is the main question with parts of evolutionary theory. We know that certain mechanisms for evolutionary change work in certain domains. The question is whether they are capable of the universal explanation some proponents claim. They probably aren’t – our current consensus view of how evolutionary change works is probably partial and, in some cases, wrong.

Charlton continues:

“Science properly works in this area of specific and local – constructing simplified models that are understandable and have consequences and ‘checking’ these models by using them in interacting with the world, to attain human purposes.”

An important point here is the checking. Without the ability to check, re-check, and so on, a scientific model and how it applies to the world, our confidence in it should decrease significantly. This is because human reasoning (and therefore theory) is weak and easily misled.

The same goes for computer models, say. People are easily misled by computer models typically because they don’t understand how they work. A computer model is typically only as good as the theory that goes into it, yet people often use the model results as evidence for the theory. This is fine with retrodictions, where we know already what the result is. If the computer model fits with the result, then that is confirmation of some validity in the model. Its robust domain, however, is therefore extended only to the past. The more interesting question is usually how it predicts. Running a computer model where its robust domain is in the past, in order to predict what is going to happen in the future, is moving outside of its robust domain (unless you are dealing with temporally uniform phenomena – typically simple systems where no relevant changes in the future are to be reasonably expected as compared to the past). Therefore, the probability that computer models in such cases are making correct predictions should be weighted accordingly.

Economic arguments

The non-monetized economy involves more value than the monetized economy – substantially more value. Many arguments are made in terms of ‘economic’ – i.e., monetized economic – benefit or detriment. It follows that this sort of argument will be incomplete, and often will be significantly incomplete – i.e., most of the loss or gain following from such-and-such policy may be in the non-monetized economy.

The problem with the non-monetized economy is that it is difficult to quantify. The monetized economy is relatively easy to quantify (measure the money being exchanged). If there were a way to estimate and then quantify in a comparable way the non-monetized economic impact, that would make responding to monetized economic arguments easier, it seems.

Detecting immaterial causes

What would an immaterial cause look like?

That is, by which criteria will we decide that a cause of some effect in the physical causal network is immaterial?

Since science works by detecting effects, and then inferring causes, how would science distinguish a material from an immaterial cause?

My guess: there is no way. Science isn’t about ‘material’ causes, but about causes. Put another way, science isn’t about the ‘physical’ world, but about prediction. If there is some cause, but it isn’t predictable, will it be classified as immaterial? No – science will simply focus on how to make predictions about its unpredictability.

Consider the following passage by Edward Feser, where he is discussing the “Mechanical Philosophy” prominent in early science (The Last Superstition, 2008, p. 179):

“The original idea was that the interactions between particles were as “mechanical” as the interactions of the parts of a clock, everything being reducible to one thing literally pushing against another. That didn’t last long, for it is simply impossible to explain everything that happens in the material world on such a crude model, and as Newton’s theory of gravitation, Maxwell’s theory of electromagnetism, and quantum mechanics all show, physical science has only moved further and further away from this original understanding of what “mechanism” amounts to. [... T]here is by now really nothing more left to the idea of a “mechanical” picture of the world than the mere denial of Aristotelian final causes[.]“

That is, things that wouldn’t have been considered ‘material’ in the past are now routinely thought of as so – as paradigms of material processes. The reason is that science is opportunistic – it finds effects, and tries to create models that explain them. If there are causes, traditionally understood as ‘immaterial’, then in the limit science will have to account for them, and will not think it is describing something immaterial in doing so.

Consider this from Daniel Dennett (Freedom Evolves, 2003, p. 1):

“One widespread tradition has it that we human beings are responsible agents, captains of our fate, because what we really are are souls, immaterial and immortal clumps of Godstuff that inhabit and control our material bodies [...] But this idea of immaterial souls, capable of defying the laws of physics, has outlived its credibility thanks to the advance of the natural sciences.”

So, how would we tell that there are immaterial causes to our material behaviour? There wouldn’t be a sign blazing down from the sky saying ‘that was an immaterial cause – physics defied!’ Rather, we would have effects in the brain (say), and we would then infer causes. “There’s something there, causing these effects.” We would then develop a model of what that thing is. It would then come under the rubric of the physical sciences.

That is, natural science says there aren’t immaterial causes, but that’s because science rules out the possibility of an immaterial cause on conceptual grounds – to be in natural science is to affect the physical world, and to affect the physical world is to be physical.

Schopenhauer and innovation

“As the biggest library if it is in disorder is not as useful as a small but well-arranged one, so you may accumulate a vast amount of knowledge but it will be of far less value than a much smaller amount if you have not thought it over for yourself.” – Arthur Schopenhauer, The Art of Literature, chapter 5, 1891

The problem with the internet is that it has the potential to generate a vast cacophony of echoes distorting echoes.

The promise: by having more communication, we can build solutions to problems more rapidly.

1. One way to do this is by communicating sub-solutions, where one agent has produced one part of the solution, another a different part. This requires coordination of sub-solutions to achieve a solution.

2. Another is by allowing rapid feedback on an idea. “Can anyone see a problem with x? Can anyone see what is right with x?” An example of this would be getting comments on an article or post one has written. This requires evaluation of the feedback.

3. Another way is by allowing a given agent to find the information they need to generate a (sub-)solution themselves. For example, finding a book or article by using a search technology.

Some things to consider:

a) In any communications system, the static:signal ratio is important. Increasing ease of feedback, for example, might increase the static such that it outweighs the advantages from the increase in signal.

b) Similarly, with more information available, the ease by which to distract oneself with irrelevant writing increases. For example, one goes to find an article on x, only to get sidetracked by an article on y. Sometimes, such sidetracking is useful, but my guess is that the large majority of the time it is not.

More subtly, the important information may not be as readily apparent, so one may spend time reading things that wouldn’t have been available before precisely because they aren’t as important.

c) Someone must do some new thinking for genuinely novel solutions, and often the depth of thought required of some individual will be very great.

d) There is something to thinking something through for oneself. When one thinks through something for oneself, one gains an understanding that often doesn’t obtain when reading someone else’s thoughts.

For both c) and d), increasing availability of things to read might decrease time spent thinking things through for oneself, which can actually result in a decrease in future solutions.

So, what is the right balance? Schopenhauer suggests that a “man should read only when his own thoughts stagnate at their source, which will happen often enough even with the best of minds.” For most complex and creative areas of enquiry – even ruggedly empirical ones – my guess is that this applies to a significant extent.

Unless one figures out how to sift, focus (using search technologies, say, to find relevant information more quickly), and limit, a technology like the internet may diminish significant scientific or technological progress, instead of increasing it.

This post was sparked by Bruce Charlton’s post.

Determinism and an action being ‘up to oneself’

Consider the following (presented in Freedom Evolves, 2003, Daniel Dennett, p. 134):

“A popular argument with many variations claims to demonstrate the incompatibility of determinism and (morally important) free will as follows:

1. If determinism is true, whether I Go or Stay is completely fixed by the laws of nature and events in the distant past.

2. It is not up to me what the laws of nature are, or what happened in the distant past.

3. Therefore, whether I Go or Stay is completely fixed by circumstances that are not up to me.

4. If an action of mine is not up to me, it is not free (in the morally important sense).

5. Therefore, my action of Going or Staying is not free.”

Is there a problem with the argument, and if so where is it? I think 4. conflates two senses.

In one sense, the action in the present is up to oneself, i.e., one is taking action, and this action stems from one’s past actions. Therefore, it is not clear whether 4. is saying

“If an action of mine is not up to me at the time or up to my actions preceding it in some relevant sense, it is not free.” (or something similar to this)

or whether 4. is saying

“If an action of mine does not descend from causal factors ultimately up to me, it is not free.”

In the former sense, the move from 4. to 5. isn’t warranted. In the latter sense, it is, but it is not clear if the latter sense is intuitively correct.

As an analogy, consider a computer agent that has a decision function. There is some input presented to the computer agent’s decision function. The computer agent then selects an output based on which output ranks highest according to the computer agent’s criteria. To put it in more familiar terms, the computer agent reviews the possibilities, and selects the one based on its belief about what is the best choice.

The computer agent’s choice might be entirely deterministic, yet it is still making a meaningful choice. That is, the decision is done by the computer agent, and how the computer agent evaluates the choices is important for the result.

Which is to say, if determinism is true, one’s decision-making process is still a necessary part of the causal equation for one’s action. Furthermore, one’s earlier actions may also be necessary parts of the causal equation. That is to say, the previous events outside of oneself which allow for prediction of one’s future action are sufficient only in a sense. That is, 1. should be rewritten as:

1. If determinism is true, whether I Go or Stay is completely fixed by the laws of nature and events in the distant past, plus the ‘running forward’ of reality such that the laws of nature combined with the events in the distant past lead to the creation of an ‘I’, which in turn creates various evaluative capacities which in turn allow for comparison of options, and so on, which then in turn lead to this I taking action.

which more accurately reflects how one’s decision making is a necessary part of one’s actions, even given a deterministic universe.

A conceptual tool for understanding necessity here might be a computer simulation. The current state of a computer (‘events in the distant past’), plus the function to advance the state one step (say) (‘the laws of nature’), don’t actually necessitate the state being advanced. Rather, the function actually has to be run to advance the state. In this case (to keep things analogous), in so doing many new functions are created when the state advances (decision making functions by new computer agents). If these new functions weren’t created, then there wouldn’t be the choices made by the computer agents, and there wouldn’t be their outputs (i.e., actions) in the simulation.

Which is to say: even if a computer agent doesn’t decide on the initial state of the computer, doesn’t decide on the function to advance the state one step, and doesn’t decide whether the function actually is advanced however many steps, the computer agent is nevertheless created, and its decision making function is necessary for whatever the computer agent does once it is created. The outputs of the decision making function are ‘up to’ the computer agent, in the sense that the computer agent has a function that reviews the options, ranks them according to some criteria, and then selects the best one.
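As a rough illustration of this analogy, here is a minimal Python sketch (my own, with hypothetical names such as Agent and advance_state): a fully deterministic ‘simulation’ in which actually running the state-advance rule creates an agent, and that agent’s decision function is still a necessary link in producing its action.

```python
# A minimal, hypothetical sketch: a deterministic 'universe' whose step
# function creates an agent; the agent's decision function is still a
# necessary part of the causal chain that produces its action.

class Agent:
    def __init__(self, criteria):
        self.criteria = criteria  # deterministic evaluation function

    def decide(self, options):
        # Review the options, rank them by the agent's criteria,
        # and select the highest-ranked one.
        return max(options, key=self.criteria)

def advance_state(state):
    # 'Laws of nature': a fixed rule for moving the state forward.
    # Advancing the state is what creates the agent in the first place.
    if state["agent"] is None and state["t"] >= 2:
        state["agent"] = Agent(criteria=lambda x: -abs(x - state["target"]))
    state["t"] += 1
    return state

# 'Events in the distant past': the initial state.
state = {"t": 0, "target": 7, "agent": None, "action": None}

# The rule plus the initial state don't by themselves produce an action;
# the simulation actually has to run, creating the agent along the way.
for _ in range(5):
    state = advance_state(state)

state["action"] = state["agent"].decide(options=[3, 7, 11])
print(state["action"])  # 7 -- determined, yet selected by the agent's own decision function
```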

I don’t know if this sense of ‘up to oneself’ is sufficient to satisfy 4., but the actions certainly are ‘up to oneself’ in some sense (i.e., oneself is real and is really making decisions, and these decisions are necessary, or the action won’t occur).

Shaun Nichols, intuitions about reasons for action, and free will

Shaun Nichols – a philosopher and cognitive scientist at the University of Arizona – recently wrote an article in the popular-science magazine Scientific American – Mind entitled Is Free Will an Illusion? What struck me was how poor the reasoning was.

The crux of the argument seems to me thus:

“Yet psychologists widely agree that unconscious processes exert a powerful influence over our choices. In one study, for example, participants solved word puzzles in which the words were either associated with rudeness or politeness. Those exposed to rudeness words were much more likely to interrupt the experimenter in a subsequent part of the task. When debriefed, none of the subjects showed any awareness that the word puzzles had affected their behavior. That scenario is just one of many in which our decisions are directed by forces lurking beneath our awareness.

Thus, ironically, because our subconscious is so powerful in other ways, we cannot truly trust it when considering our notion of free will. We still do not know conclusively that our choices are determined. Our intuition, however, provides no good reason to think that they are not. If our instinct cannot support the idea of free will, then we lose our main rationale for resisting the claim that free will is an illusion.”

So, the argument seems to run: 1. We have some reasons to doubt our conscious intuitions about the reasons for the decisions we make. 2. Therefore, no conscious intuition about our decisions provides a good reason for anything. 3. Therefore, our conscious intuition about free will does not provide a good reason for believing there is such a thing as free will that guides our decisions.

Granted, Nichols’ article was written with brevity, and for a popular-level kind of discourse. Yet, this reasoning leaves me baffled. (It reminds me of Randolph Clarke’s statement that the idea that our intuition about free will could tell us something about the causal nature of the universe was ‘incredible‘.)

Response: of course intuitions about the causes of our decisions can be incorrect. Indeed, one can see this in some cases by honest, conscious introspection on the real cause of our actions: “Did I make that comment about so-and-so because I really believe it, or as retribution for an earlier comment they made?”, and so on. Recent empirical evidence showing how certain of our conscious beliefs about the reasons for our actions seem to conflict with subconscious reasons is interesting. Yet, it is only suggestive when applied to something specific like intuitions about the causal efficacy of something we call free will.

An intuition is, prima facie, a reason to believe something. If other methods of reasoning and evidence suggest the intuition is incorrect, then we can investigate how to reconcile the views. Then, if ultimately we decide to go with the ‘counter-intuitive’ evidence, we can conclude that the intuition isn’t a good reason. Yet, we can’t get to that conclusion about the veracity of conscious intuitions about the nature of free will from mere speculation based on evidence about the nature of certain other kinds of conscious intuitions about reasons for action.

Business, Science, and 10Xers

Authors Jim Collins and Morten Hansen write:

“Recently, we completed a nine-year research study of some of the most extreme business successes of modern times. We examined entrepreneurs who built small enterprises into companies that outperformed their industries by a factor of 10 in highly turbulent environments. We call them 10Xers, for “10 times success.”

[...]

The 10X cases and the control group both had luck, good and bad, in comparable amounts, so the evidence leads us to conclude that luck doesn’t cause 10X success. The crucial question is not, “Are you lucky?” but “Do you get a high return on luck? [ROL]”

[...]

This ability to achieve a high ROL at pivotal moments has a huge multiplicative effect for 10Xers. They zoom out to recognize when a luck event has happened and to consider whether they should let it disrupt their plans.”

I wonder to what extent this is applicable to scientific discovery. We already know that many major scientific discoveries were ‘luck’ – something unexpected happened, and then a scientist was able to notice it and follow it up.

I think one of the key phrases in the above quotation is ‘let it disrupt their plans’. The 10X cases were able to not only recognize that something important had happened, but to take massive action on it:

“Getting a high ROL requires throwing yourself at the luck event with ferocious intensity, disrupting your life and not letting up.”

To what extent does the typical bureaucratic funding structure make this sort of process in science difficult? If a scientist notices something strange, do they have the flexibility to pursue it? Often, for significant scientific breakthroughs (’10X’ scientific progress), the current theory can make the theory suggested by the new evidence seem ‘not to make sense’. So similarly, are they able to get a grant to pursue the implications of a possible ’10X’ chance event?

What is the cause of the obesity epidemic?

One of the more spectacular failures of modern science and technology has been its inability to solve the obesity problem. What is the cause of the obesity epidemic? There are probably a significant number of causes, some of which might include:

  • reduction in average will power
  • rise of certain kinds of fast food
  • decline in home cooking
  • rise in relatively sedentary occupations
  • rise of television and other relatively sedentary pastimes
  • introduction of sugar to more foods (brought on in part by the rise of cheap sugar), and increase in sugar in general (including fruit and fruit juices)
  • misguided nutritional advice (for example, eat less fat, and instead make relatively high-glycemic-index carbs the basis of the diet, or the calories-in calories-out theory)
  • more driving, including longer commute times and more generally suburban environments
  • increase in hedonic ethic (through advertising, for example)

and so on.

Not only are there plausibly a significant number, but many of these causes are tangled. It’s not easy to separate one from another – they often interact in circles of causality.

That is why asking: “What is the cause of the obesity epidemic?” isn’t always that useful of a question. The human mind is drawn to uncomplicated answers, but in a complex system there might not be a simple causal story that is adequate.

A better question might be: “What things can we realistically change, that will significantly decrease the levels of obesity?”

In some cases, a solution is all-or-nothing, and might require multiple parts coming together to solve the puzzle. In this case, though, it seems plausible we can get partial solutions that in themselves are significant.

If we are talking about how to change things, then a useful concept might be that of a ‘causal lever‘: something that, given the background causal situation, can, when added, cause a large difference. There might be multiple ones in any given situation. In this case, my guess is that the most at-hand causal lever that will make a significant difference is changing the standard theories about weight gain or loss (the ‘misguided nutritional advice’ above), because this will then ripple out to a large number of other areas of society.

Also see here.

Tyler Cowen and scientific progress

One of the recommendations Tyler Cowen – Professor of Economics at George Mason University – gives in The Great Stagnation (2011) for reversing the (supposedly) declining trend in technological innovation is to:

“Raise the social status of scientists.

[...] If we are going to see further major technological breakthroughs, it is a big help if people love science, care deeply about science, and science attracts a lot of the best [...] minds. The practice of science has to yield social esteem, and teams of scientists should have a strong esprit de corps and feel they are doing something that really matters.

When it comes to motivating human beings, status often matters at least as much as money. [...] Right now, scientists do not earn enough status and appreciation. [...] Science doesn’t have the cachet of law, medicine, or high finance.

[...] I don’t want a bunch of extra science prizes [...]; what I want is that most people really care about science and view scientific achievement as a pinnacle of our best qualities as leaders of Western civilization.” (pp. 83-5)

What Cowen is arguing for is similar to what Sir Francis Galton argued for (and what subsequently was achieved, relatively speaking) near the end of the 19th century:

“As regards the future provision for successful followers of science, it is to be hoped that, in addition to the many new openings in industrial pursuits, the gradual but sure development of sanitary administration and statistical inquiry may in time afford the needed profession. These and adequately paid professorships may, as I sincerely hope they will, even in our days, give rise to the establishment of a sort of scientific priesthood throughout the kingdom, whose high duties would have reference to the health and well-being of the nation in its broadest sense, and whose emoluments and social position would be made commensurate with the importance and variety of their functions.” (English Men of Science, 1873, p. 259)

Galton was writing this in the exact year that, according to Jonathan Huebner (whose work Cowen references), technological innovation per capita peaked, i.e., 1873. Yet, since Galton’s time scientists’ emoluments and social position in broader society have risen significantly. So, over precisely this period where scientists’ status has increased, innovation has fallen. The sort of achievements made by a handful of men in Galton’s day have not been replicated in more and more cases per capita as more people have become scientists.

Compare the above with what Bruce Charlton writes in ‘The main reason science has declined‘:

“When scientists believe in reality and are motivated to seek the truth about it, then science will work.

That is all that is needed.

Therefore real science is very, very simple.

Questions of scientific methods are irrelevant, questions of organization are irrelevant - such real scientists will find a way.

But, since the pre-requisites are rare (not many reality-based people are truly truth-seeking and truthful), and the pressures for corruption are so strong, that real science is both rare and fragile.”

Increasing emoluments or social position increases the pressure for corruption, i.e., tends to attract people who then have an interest in those things instead of just figuring out the truth. My guess is that this is one reason why ‘amateur science‘ has been so successful – the people practicing it do so more to find things out than for status.

On reasoning

A simple way to see one aspect of the fragility of human reasoning is through joint probabilities.

P(1 and 2 and … and n) = P(1) * P(2) * … * P(n)

So, if I have an argument whose conclusion relies on 3 premises, and the argument is valid, then the chance that the argument is sound is:

probability (first premise being true) * probability (second premise being true) * probability (third premise being true)

(Assume independence of probabilities.)

Let’s say I’m ‘fairly sure’ about each premise, assigning each one a probability of 0.8 (i.e., 80%). That is, at each point along the argument, I’m fairly sure that it is right. At the end, I may then have the feeling that I should be fairly sure about the argument. Yet, this is not the case.

0.8 * 0.8 * 0.8 = 0.512, i.e., the argument is almost as likely to be unsound as sound.
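To make the arithmetic concrete, here is a minimal Python sketch of the calculation (an illustration only, assuming the premises’ probabilities are independent):

```python
# Joint probability that a valid 3-premise argument is sound,
# assuming the premises' probabilities are independent.
premise_probs = [0.8, 0.8, 0.8]

p_sound = 1.0
for p in premise_probs:
    p_sound *= p

print(round(p_sound, 3))  # 0.512 -- "fairly sure" premises, yet a nearly 50/50 conclusion
```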

Also see here.

Jonathan Huebner, John Smart, and the rate of technological change

In Tyler Cowen’s book The Great Stagnation (2011), he argues that the U.S. has been in an economic plateau since approximately 1973, and one of the main reasons is a slowing of technological innovation. In particular, he references a graph by Jonathan Huebner, a physicist, which took major technological innovations as presented in The History of Science and Technology (2004, Bryan Bunch and Alexander Hellemans, including 7,198 significant innovations from 1453 to recently before the book was published), and plotted them over time relative to global population (‘A possible declining trend for worldwide innovation’, Technological Forecasting and Social Change, 72:980-986, 2005).

Huebner’s result is a graph plotting the rate of important innovations per year, relative to world population, over time (the graph itself is not reproduced here).

The peak, in terms of the modified Gaussian distribution, would be 1873.

In a previous post, commenting on Bruce Charlton’s hypothesis that the absolute amount of scientific advancement is falling, I said:

“Near the end of the 19th century, dramatic technological changes involving electrical power, the internal combustion engine, airplanes, and wireless telegraphy (i.e., radio), to name a few, were taking place. My working guess for a peak for technological change in terms of how it affects people would be then.”

So, Huebner’s graph corresponds to an extent with my guess based on reading about historical technological changes, and comparing that to my experience of technological change nowadays. My interest in technological change comes from my hunch that scientific ‘output’ is falling relative to ‘input’, i.e., given the resources devoted to it, we are getting less scientific progress than in, say, the 19th century. This is combined with the idea that significant technological advance is often driven by significant scientific advance, so a way to measure significant scientific advance is through significant technological advance.

John Smart, a systems theorist, has responded to Huebner (‘Measuring Innovation in an Accelerating World’, Technological Forecasting & Social Change, 72:988-995, 2005). Smart’s most interesting rejoinder, as I see it, is as follows:

“[T]echnological innovation may be becoming both smoother and subtler in its exponential growth the closer we get to the modern era. Perhaps this is because since the industrial revolution, innovation is being done increasingly by our machines, not by human brains. I believe it is increasingly going on below the perception of humans who are catalysts, not controllers, of our ever more autonomous technological world system.

Ask yourself, how many innovations were required to make a gasoline-electric hybrid automobile like the Toyota Prius, for example? This is just one of many systems that look the same “above the hood” as their predecessors, yet are radically more complex than previous versions. How many of the Prius innovations were a direct result of the computations done by the technological systems involved (CAD-CAM programs, infrastructures, supply chains, etc.) and how many are instead attributable to the computations of individual human minds? How many computations today have become so incremental and abstract that we no longer see them as innovations?

 [... Our brains] seem to be increasingly unable to perceive the technology-driven innovation occurring all around us.”

The basic idea here seems to be that much innovation is occurring ‘under the hood’. That is, we just don’t realize how much innovation is involved to do things like create a gasoline-electric hybrid car. So, it might seem like technology isn’t advancing as much as one would expect (given increases in the world’s population), but that isn’t really the case because it’s ‘hidden’.

I’m sure this is true to an extent – there is a large amount of innovation nowadays that we don’t really recognize as it’s hidden in the technological artifacts. Yet, what Huebner is trying to gauge isn’t ‘innovation’, but ‘important innovation’ (“For the purposes of this paper, the rate of innovation is defined as the number of important technological developments per year divided by the world population.”, p. 981). We don’t care how much more complex something is, or how many brute computations went into it, but whether it solves a relevant problem in a way that’s significantly better than the previous solution.

That is, what isn’t important is ‘complexity’. Any competent designer can tell you that complexity often impedes good solutions instead of moving them forward. To say that cars are more ‘complex’ nowadays than before is not to say that the solutions are much better – indeed, it hints that the solutions are marginal and lacking in significant advances in the underlying science.

The structure of causality

People often talk as if A causes B, but rarely is that the case. Rather, it is A, combined with R, S, T, …, that causes B. That is, sufficient causes are almost always complex (having many elements). ‘A causes B’ is a kind of shorthand, which can be paraphrased as ‘A is the relevant cause of B’, i.e., in the given context, A is the thing which makes sense to add or remove so as to bring about or stop B.

So, when arguing over whether A or R causes B, both arguers can be correct. That is, if A is added, B will occur, or if R is added, B will occur. (Or, depending on the context, if A is removed, B will cease, or if R is removed, B will cease.)

This becomes especially important when discussing things like economics, sociology, or medicine (i.e., complex systems) where there are lots of variable parts. In these cases, it might be the case that two seemingly contrary theories about ‘the cause’ are both correct (if A or R are removed, B will cease, say), and so the real debate is just about which approach should be pursued.

My hunch is that this basic aspect of causality isn’t appreciated enough in much debate in these sorts of fields.

Words and conceptual change

How do words achieve their function of allowing communication between two speakers of a language?

Words gain meaning through work. That is, one must create the concept associated with a word, and make sure two speakers share enough of the concept that they can communicate. (One can imagine how this might occur piecemeal.)

In novel or rare situations for word usage, a consensus on the concept may not be there. That is, the ‘core’ concept may not speak directly to it.

Take ‘dog’. Now imagine that you start changing the genetics of things we take to be dogs today. At what point does it stop being a dog? There is probably no sharp consensus, because this is a novel situation that the concept hasn’t been created to deal with.

If it became important, we could then innovate on the concept, creating new distinctions so that the word ‘dog’ could be used to communicate relatively clearly. (We would ask: “What is important to us about the concept ‘dog’?”)

This makes many debates in philosophy moot. Consider what the word ‘know’ means. People use the word to do things, in situations that come up every day. If, as a philosopher, you conjure some strange situation and then ask people whether someone ‘knows’ something or not in such a situation, the concept may not have been worked out to the point where there is a consensus. You might get differing intuitions, or people might say they don’t know.

As far as these sorts of thought experiments are doing anything, what the philosopher is doing might be seen as instigating new conceptual construction. “Let us innovate on this concept, so we can solve this conceptual problem …” (such as at what point in genetic change we should stop calling something a ‘dog’).  That is, most debates about what the meaning of words ‘are’, are actually debates about how to change the concept of a word. “What is the way to innovate on this concept so as to solve this problem?”

(Of course, some concepts refer to things, and the conflict is about some aspect of that thing. For example, if we want the concept ‘dog’ to refer to animals with a genetic code similar to the animals we paradigmatically call dogs, what kind of genetic code is that? If we don’t already know, we will have to investigate to find out.)

What should the current picture of science tell us?

If our scientific picture is changing rapidly, and large parts of what we currently believe in science are wrong or incomplete, then why rely on the current state of scientific knowledge?

The answer: it’s better than nothing, and it’s the best we have.

This is true in some areas. Yet, for a large swath of science (nutritional science, psychology, biology, archaeology, and so on) there are other significant sources of possible knowledge. The first and most obvious, which applies to an area like psychology, is our intuitions about our own mental life. In areas like nutritional science, there are traditions about how to eat. In archaeology, there are myths or stories which involve archaeological aspects (consider the possible discovery of The Iliad‘s Troy, after critical history had deemed it a legend). Beyond this, there are large amounts of anecdote (an + ecdote = un + published) or written works which impinge on various areas science studies, but haven’t been developed into ‘scientific’ case studies, say (Darwin’s reading of an animal breeder’s pamphlet to suggest ideas on change in animal form is an example of this in action).

(Indeed, the way science changes is often by noticing things that are outside science, and then working to incorporate them. If science is largely wrong or incomplete, then the importance of such things as a set seems greater than the current scientific beliefs.)

In these latter sorts of cases, the question becomes more complex: how much do we weight the current picture of science, and how much do we weight these other sources of possible information?

I think there is no simple answer, but if there’s a seeming divergence between science and some other seemingly important source of information, there seem to be a few questions that can be used as rules of thumb when evaluating scientific claims:

1. What is the robust domain of this? For example, Newtonian physics is robust when applied to certain kinds of causal processes on the surface of the planet, say. However, going from that to “the universe runs on Newtonian processes” is probably unwarranted (and as it turns out, we no longer believe this). Being robust means that the findings have been tested and confirmed extensively for a given set of phenomena. Very often, we find out that a theory that worked well to explain one domain doesn’t work as well when expanded to other domains. (In some cases, it turns out there is no robust domain – it’s just mistaken.)

2. What is the space between evidence and theory or presentation? (Pretty much every presentation of evidence contains some theory about what the evidence means.) Consider ‘brain maps’ which show neurological activity: how exactly are these maps created? What exactly are they showing? How much do the researchers’ own beliefs affect the presentation of the so-called evidence?

3. Following on that, given that there is always a leap from evidence to theory, could there be another way to interpret or explain this evidence?

Combining this with what the other source of evidence might be telling us can lead to a better view of things than just the current scientific view alone.

The purpose of asceticism

From The Catholic Encyclopaedia‘s article on free will by Michael Maher (1909):

“Our moral freedom, like other mental powers, is strengthened by exercise. The practice of yielding to impulse results in enfeebling self-control. The faculty of inhibiting pressing desires, of concentrating attention on more remote goods, of reinforcing the higher but less urgent motives, undergoes a kind of atrophy by disuse. In proportion as a man habitually yields to intemperance or some other vice, his freedom diminishes and he does in a true sense sink into slavery. He continues responsible in causa for his subsequent conduct, though his ability to resist temptation at the time is lessened. On the other hand, the more frequently a man restrains mere impulse, checks inclination towards the pleasant, puts forth self-denial in the face of temptation, and steadily aims at a virtuous life, the more does he increase in self-command and therefore in freedom. The whole doctrine of Christian asceticism thus makes for developing and fostering moral liberty, the noblest attribute of man. William James’s sound maxim: “Keep the faculty of effort alive in you by a little gratuitous exercise every day”, so that your will may be strong to stand the pressure of violent temptation when it comes, is the verdict of the most modern psychology in favour of the discipline of the Catholic Church.”

Before I read this, the idea of asceticism had never made sense to me. It seemed like an odd, backward thing – much like the depiction of the camp of Christians in Brave New World – irrational.

Yet, with this it suddenly makes sense. Will power is like a muscle – it increases with increased use. Asceticism is basically a training course for the muscle of will power. The end goal (among others) is therefore to create a freedom – to amplify free will through a developed will power.

(Not surprisingly, then, the word asceticism comes from the Greek askesis which means practice, bodily exercise, and more especially, athletic training.)

Contrariwise, the lack of an ascetic sense in significant parts of contemporary society (and instead the cultivation of ‘mere impulse’, as seen through much advertising, say) means that many people have lost a significant amount of freedom in an important sense.

So, asceticism in reality is primarily a constructive practice: the point is to create more will power, which when combined with free will allows one to choose the right or the good more often (such as long-term good over short-term good).

Also see here.

Randolph Clarke on the evidence for non-deterministic theories of free will

In an article in The Stanford Encyclopaedia of Philosophy, Randolph Clarke discusses the evidence for an incompatibilist account of free will. (Incompatibilism is the view that free will isn’t compatible with determinism.)

“It is sometimes claimed [...] that our experience when we make decisions and act constitutes evidence that there is indeterminism of the required sort in the required place. We can distinguish two parts of this claim: one, that in deciding and acting, things appear to us to be the way that one or another incompatibilist account says they are, and two, that this appearance is evidence that things are in fact that way. [... E]ven if this first part is correct, the second part seems dubious. If things are to be the way they are said to be by some incompatibilist account, then the laws of nature—laws of physics, chemistry, and biology—must be a certain way. [...] And it is incredible that how things seem to us in making decisions and acting gives us insight into the laws of nature. Our evidence for the required indeterminism, then, will have to come from the study of nature, from natural science.”

I don’t understand the reason for the ‘incredible’ claim, and no reason is given for it in the article.

Yet, it seems that there is a pretty straightforward empirical argument that how things seem to us in various mental events or processes can in theory give us insight into the laws of nature. Basic idea: reflecting on what happens in one’s mind can give one (correct) predictions about what is happening in the brain (say), which in turn involves natural laws.

More detailed: The way things seem to us has already given us insight into cognitive or neurological events or processes. That is, we have an experience of something working in our mind, then we look for a correlate in brain processing, and in certain cases we have found correlates. (This is the basis of the belief that the mind is, in some important sense, the brain.) There must be natural laws which are compatible with these brain processes, if the brain is part of nature. Therefore, how things seem to be working in the mind can in theory give insight into natural laws.

The question is just how the sense of decision making maps onto nature. Are we really able to peer into the workings of nature (or something very closely related to it) on the inside, or is that not how the mind works? One view in the ‘hard problem’ of consciousness, for example, is similar to the former: through reflecting on consciousness, one can get a glimpse of the inner nature of physical reality.

Epistemic humility

A pretty straightforward inductive argument: our scientific picture of the universe is changing rapidly (on an historical scale). Therefore, large parts of what we currently believe are likely wrong.

A reasonable inference from this is: there is much more that we don’t know than that we do – our scientific (and therefore academic) views of the universe are nowhere near complete.

Therefore, one should focus on establishing whether something is in fact the case, rather than concluding that it could or could not happen because one’s scientific or academic theories say so.

Newcomb’s Paradox: A Solution Using Robots

Newcomb’s Paradox is a situation in decision theory where the principle of dominance conflicts with the principle of expected utility. This is how it works:

The player can choose to take both box A and box B, or just take box B. Box A contains $1,000. Box B contains nothing or $1,000,000. If the Predictor believes that the player will take both boxes, then the Predictor puts $0 in box B. If the Predictor believes that the player will take just B, then the Predictor puts $1,000,000 in box B. Then the player chooses. The player doesn’t know whether the Predictor has put the $1,000,000 in box B or not, but knows that the Predictor is 99% reliable in predicting what the player will do.

Dominance reasoning says for the player to take both boxes. Here’s why:

If the Predictor predicted that the player will choose just one box, then if the player picks just box B the player gets $1,000,000, but if the player picks both boxes the player gets $1,001,000. $1,001,000 > $1,000,000, so in this case the player should pick both boxes.

If the Predictor predicted that the player will choose both boxes, then if the player picks just box B the player gets $0, but if the player picks both boxes, the player gets $1,000. $1,000 > $0, so in this case the player should pick both boxes.

So, no matter what the Predictor did, the player is better off choosing both boxes. Therefore, says dominance reasoning, the player should pick both boxes.

Expected utility reasoning, however, says for the player to take just box B:

If the player picks both boxes, expected utility is 0.99*$1,000 + 0.01*$1,001,000 = $11,000. If the player picks just box B, expected utility is 0.99*$1,000,000 + 0.01*$0 = $990,000. Expected utility is (much) higher if the player picks just box B.
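As a quick check of the arithmetic, here is a minimal Python sketch (my own illustration, not part of the original problem statement) of the two expected-value calculations under the 99% reliability assumption:

```python
# Expected utility of each strategy, assuming the Predictor is 99% reliable
# and the payoffs described above ($1,000 in box A; $0 or $1,000,000 in box B).
reliability = 0.99

# If the player takes both boxes, the Predictor most likely foresaw this
# and left box B empty.
ev_both = reliability * 1_000 + (1 - reliability) * (1_000_000 + 1_000)

# If the player takes only box B, the Predictor most likely foresaw this
# and put $1,000,000 in it.
ev_one = reliability * 1_000_000 + (1 - reliability) * 0

print(ev_both)  # 11000.0
print(ev_one)   # 990000.0
```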

The problem is called a ‘paradox’ because two decision making processes that both sound intuitively logical give conflicting answers to the question of what choice the player should make.

This description of Newcomb’s Paradox is actually ambiguous in certain respects. First, how does the Predictor predict? If you don’t have any idea, it could be difficult to figure out what’s going on here. The second (and related ambiguity) is how the player can choose. Can they choose randomly, for example? (If they choose in a completely random way, it is difficult to understand how the Predictor predicts correctly most of the time.)

Instead of addressing the ambiguous problem above, I decided to create a model of the situation that clarifies the exact mechanics. This model, then, might not address certain issues others have dealt with in the original problem, but it adheres to the general parameters above. Any solutions derived from the model apply to at least a subset of the formulations of the problem.

It is difficult to create a model with humans, because humans are too complex. That is, it is very difficult to predict human behaviour on an individualized basis.

Instead, I created a model involving robot agents, both player and Predictor.

This is how the model works (code at bottom of post):

time = 1

Player is either Defiant Dominance (DD) or Defiant Expected Utilitarian (DE). What this means is that

if player is DD, then % chance player picks both boxes = 99%.

if player is DE, then % chance player picks just box B = 99%.

time = 2

The Predictor checks the player’s state:

if player is DD, then Predictor puts no money in box B

if player is DE, then Predictor puts $1,000,000 in box B

time = 3

Then the player plays, based on its state as either DD or DE, as described above.

It follows that the Predictor will get it right about 99% of the time in a large trial, and that the DE (the player that consistently picks the expected utility choice) will end up much wealthier in a large trial.
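The original post’s code is not reproduced here, but the following is a minimal Python sketch of the same mechanics (the names run_trial and average_yield are mine, not from the original):

```python
import random

# A minimal sketch of the robot model described above (not the original
# post's code): the player's type is fixed at time 1, the Predictor reads
# that type at time 2, and the player chooses at time 3.

def run_trial(player_type):
    # time = 2: the Predictor fills box B based on the player's type.
    box_b = 1_000_000 if player_type == "DE" else 0

    # time = 3: the player chooses according to its type.
    # DD picks both boxes 99% of the time; DE picks just box B 99% of the time.
    if player_type == "DD":
        picks_both = random.random() < 0.99
    else:
        picks_both = random.random() < 0.01

    return (1_000 + box_b) if picks_both else box_b

def average_yield(player_type, trials=100_000):
    return sum(run_trial(player_type) for _ in range(trials)) / trials

print(average_yield("DD"))  # roughly $990
print(average_yield("DE"))  # roughly $1,000,010
```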

Here are some empirical results:

trials = 100, average DD yield = $1,000, average DE yield = $1,000,000

trials = 10,000, $990.40, $1,000,010.20

trials = 100,000, $990.21, $1,000,010.09

Yet, to show the tension here, you can also imagine that the player is able to magically switch to dominance reasoning before selecting a box. This is how much the players lost by not playing dominance (same set of trials):

trials = 100, total DD lost = $0, total DE lost = $100,000

trials = 10,000, $96,000, $9,898,000

trials = 100,000, $979,000, $98,991,000

What this shows is that dominance reasoning holds at the time the player chooses. Yet, the empirical results for yield for the kind of player (DD) that tends to choose dominance reasoning are abysmal (as shown in the average yield results earlier). This is the tension in this formulation of Newcomb’s Paradox.

What is clear, from looking at the code and considering the above, is that the problem isn’t with dominance reasoning at time = 3 (i.e., after the Predictor makes his prediction). A dominance choice always yields a better result than an expected utility choice, in a given environment.

The problem, rather, is with a player being a DD kind of player to begin with. If there is a DD player, the environment in which a player chooses becomes significantly impoverished. For example, here are the results for total rewards at stake (same trials):

trials = 100, total with DD = $100,000, total with DE = $100,100,000

trials = 10,000, $10,000,000 ($10M), $10,010,000,000 (> $10B)

trials = 100,000, $100,000,000 ($100M), $100,100,000,000 (> $100B)

DD is born into an environment of scarcity, while DE is born into an environment of abundance. DE can ‘afford’ to consistently make suboptimal choices and still do better than DD because DE is given so much in terms of its environment.

Understanding this, we can change how a robot becomes a DE or a DD. (Certainly, humans can make choices before time = 2, i.e., before the Predictor makes his prediction, that might be relevant to their later choice at time = 3.) Instead of simply being assigned to DD or DE, at time = 1 the robot can make a choice using the reasoning as follows:

if expected benefits of being a DE type > expected benefits of being a DD type, then type = DE, otherwise type = DD
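In terms of the model code at the bottom of the post, such a time = 1 choice might look something like this (a sketch only, not part of the original code; the expected yields used here are the per-trial figures implied by the model's payoffs):

    // Sketch of a possible time = 1 type choice (not in the original model code):
    // the robot compares the expected per-trial yield of each type, given the
    // model's payoffs ($1,000 in box A; $1,000,000 in box B only if the Predictor
    // predicts one-boxing).
    double expectedYieldDE = 0.99 * 1000000.0 + 0.01 * 1001000.0; // $1,000,010
    double expectedYieldDD = 0.99 * 1000.0 + 0.01 * 0.0;          // $990
    playerType = (expectedYieldDE > expectedYieldDD) ? DefiantExpectedUtilitarian
                                                     : DefiantDominance;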

This does not speak directly to the rationality of dominance reasoning at the moment of choice at time = 3. That is, if a DE robot defied the odds and picked both boxes on every trial, it would do significantly better than a DE robot that picked only box B on every trial. (Ideally, of course, the player could choose to be a DE, then magically switch at the time of choice. This, however, contravenes the stipulation of the thought experiment, namely that the Predictor accurately predicts.)

By introducing a choice at time = 1, we now have space in which to say that dominance reasoning is right for the choice at time = 3, but something that agrees with expected utility reasoning is right for the choice at time = 1. So, we have taken a step towards resolving the paradox. We still, however, have a conflict at time = 3 between dominance theory and expected utility theory.

If we assume for the moment that dominance reasoning is the rational choice for the choice at time = 3, then we have to find a problem with expected utility theory at time = 3. A solution can be seen by noting that there is a difference between what a choice tells you (what we can call ‘observational’ probability) and what a choice will do (what we can call ‘causal’ probability).

We can then put the second piece of the puzzle into place by noting that observational probability is irrelevant for the player at the moment of a choice. Expected utility theory at time = 3 is not saying to the player “if you choose just box B then that causes a 99% chance of box B containing $1,000,000” but rather in this model “if you chose (past tense) to be a DE then that caused a 100% chance of box B containing $1,000,000 and also caused you to be highly likely to choose just box B.” I.e., expected utility theory at time = 3 is descriptive, not prescriptive.

That is, if you are the player, you must look at how your choice changes the probabilities compared to the other options. At t = 1, a choice to become a DE gives you a 100% chance of winning the $1,000,000, while a choice to become a DD gives you a 0% chance of winning the $1,000,000. At t = 3, the situation is quite different. Picking just box B does not cause the chances to be changed at all, as they were set at t = 2. To modify the chances, you must make a different choice at t = 1.

Observational probability, however, still holds at t = 3 in a different way. That is, someone looking at the situation can say “if the player chooses just box B, then that tells us that there is a very high chance there will be $1,000,000 in that box, but if they choose both boxes, then that tells us that there is a very low chance that there will be $1,000,000 in that box.”
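To put the two readings side by side (a rough formalization; the notation is mine, not part of the model): for an observer at t = 3, P($1,000,000 in box B | player picks just box B) ≈ 0.99, which is the observational reading. For the player at t = 3, the causal question is whether picking just box B changes that probability, and it does not, since the contents of box B were fixed at t = 2. The only probabilities the player can change are those attached to the t = 1 choice of type: in this model, P($1,000,000 in box B | player becomes a DE) = 1 and P($1,000,000 in box B | player becomes a DD) = 0.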

Conclusion:

So, what is the Paradox in Newcomb’s Paradox? At first, it seems like one method, dominance, contravenes another, expected utility. On closer inspection, however, we can see that both dominance and expected utility are correct, each in its proper place.

First, there are two different decisions that can be made, in our model, at different times (time = 1 and time = 3).

player acts rationally at time = 1, choosing to become a DE

player thereby causes Predictor to create environmental abundance at time = 2

but player also thereby causes player to act irrationally at time = 3, choosing just box B

The benefits of acting rationally at time = 1 outweigh the benefits of acting rationally at time = 3, so “choosing just box B” is rational in so far as that phrase is understood as meaning to choose at time = 1 to be a DE, which in turn leads with high probability to choosing just box B.

Second, there are two different kinds of probability reasoning that are applicable: causal reasoning for agents (the player in this case), on the one hand, and expected utility for observers, on the other. Causal reasoning says at time = 1 to choose to be a DE, and at time = 3 to choose both boxes.

At neither time does causal reasoning conflict with dominance reasoning. Expected utility reasoning is applicable for observers of choices, while causal reasoning is applicable for agents making the choices.

Therefore, Newcomb’s Paradox is solved for the limited situation as described in this model.

Applying this to humans: For a human, there is probably no ‘choice’ to be a DD or DE at time = 1. Rather, there is a state of affairs at time = 1, which leads to their choice at time = 3. This state of affairs also causes the Predictor’s prediction at time = 2. The question of “free will” obscures the basic mechanics of the situation.

Since a human’s choice at t = 3 is, statistically speaking, preordained at time = 2 (per the stipulations of the thought experiment) when the Predictor makes his prediction, all the human can do is make choices earlier than t = 2 to ensure that they in fact do pick just box B. How a human does this is not clear, because human reasoning is complex. This is a practical psychological question, however, and not a paradox.

Notes:

One lesson I took from solving Newcomb’s Paradox is that building a working model can help to ferret out ambiguities in thought experiments. Moving to a model using robots, instead of trying to think through the process first-person, helped significantly in this respect, as it forced me to decide how the players decide, and how the Predictor predicts.

Creating a model in this case also created a simpler version of the problem, which could be solved first. Then, that solution could be applied to a more general context.

It only took a few hours to get the solution. The first piece, that there is a potential choice at time = 1 that agrees with expected utility reasoning, came first; then came the realization that what matters for the player is how their choice causally changes the situation. After thinking I had solved it, I checked the Stanford Encyclopedia of Philosophy, and indeed, something along these lines is the consensus solution to Newcomb’s Paradox. (Some people debate whether causality should be invoked, because in certain kinds of logic it is more parsimonious to not have to include it, and there are debates about the exact kind of causal probability reasoning that should be used.) The answer given here could be expanded upon in terms of developing an understanding of the agent and observer distinction, and in terms of just what kind of causal probability theory should be used.

Code:

UnicodeString NewcombProblem()
{
    int trialsCount = 100000;

    enum
    {
        DefiantDominance,           // i.e., likes to use dominance reasoning
        DefiantExpectedUtilitarian
    } playerType;

    // set up which type of player for this run of trials
    // time = 1;

    //playerType = DefiantExpectedUtilitarian;
    playerType = DefiantDominance;

    double totalPlayerAmount = 0.0;
    double totalPlayerAmountLost = 0.0;
    double totalAmountAtStake = 0.0;
    int timesPredictorCorrect = 0;

    for (int trialIdx = 0; trialIdx < trialsCount; trialIdx++)
    {
        // Predictor makes his decision
        // time = 2;

        bool millionInBoxB = false;
        if (playerType == DefiantExpectedUtilitarian)
            millionInBoxB = true;

        // player makes their decision
        // time = 3;

        double chancePicksBoth = playerType == DefiantDominance ? 99 : 1;

        // now results ...
        // time = 4;

        // THOccurs is a helper that returns true with the given percentage chance
        bool picksBoth = THOccurs(chancePicksBoth);

        // now tabulate return; if !millionInBoxB and !picksBoth, player gets $0
        if (millionInBoxB)
            totalPlayerAmount += 1000000.0;
        if (picksBoth)
            totalPlayerAmount += 1000.0; // box A always has $1,000

        totalAmountAtStake += 1000.0;
        if (millionInBoxB)
            totalAmountAtStake += 1000000.0;

        if (!picksBoth)
            totalPlayerAmountLost += 1000.0; // gave up box A at the moment of choice

        if (picksBoth && !millionInBoxB)
            timesPredictorCorrect++;
        if (!picksBoth && millionInBoxB)
            timesPredictorCorrect++;
    }

    double averageAmount = totalPlayerAmount / (double)trialsCount;
    double percentagePredictorCorrect = (double)timesPredictorCorrect / (double)trialsCount * 100.0;

    UnicodeString s = "Trials: ";
    s += trialsCount;
    s += ", ";
    s += playerType == DefiantDominance ? "DefiantDominance" : "DefiantExpectedUtilitarian";
    s += " - Average amount: ";
    s += averageAmount;
    s += ", Total amount lost because didn't use dominance reasoning at moment of choice: ";
    s += totalPlayerAmountLost;
    s += ", Total amount at stake (environmental richness): ";
    s += totalAmountAtStake;
    s += ", Percentage Predictor correct: ";
    s += percentagePredictorCorrect;
    s += "%";
    return s;
}
//---------------------------------------------------------------------------
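The helper THOccurs is not included in the listing above. A minimal sketch of what it might look like, assuming it simply returns true with the given percentage chance (the original implementation is not shown in the post):

    #include <random>

    // Hypothetical sketch of the THOccurs helper: returns true with the given
    // percentage chance (e.g., THOccurs(99) returns true about 99% of the time).
    static bool THOccurs(double percentChance)
    {
        static std::mt19937 gen(std::random_device{}());
        std::uniform_real_distribution<double> dist(0.0, 100.0);
        return dist(gen) < percentChance;
    }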

Charlton on questions

Bruce Charlton writes:

“Q: Is asking questions good or bad?

A: Neither: it depends on the reason for asking.

[...]

I can perceive that the skeptical and questioning stance is a reaction to the amount of nonsense and dishonesty in the world; but it is the wrong answer.

What we should do about nonsense and dishonesty is ignore them[.]”

He continues:

“The proper motivation for questioning is not from skepticism but from belief.

The essence of proper questioning happens when we question authorities that we trust, to discover more concerning that which we believe.”

I think there is a corollary to this: the essence of proper answering happens when we answer people who trust us, and who are asking to discover more concerning that which they believe. Otherwise, the best tactic for the would-be answerer – as Charlton notes in the case of the questioner above – is probably to ignore the questions.

Practice and proposition

Seth Roberts, Professor Emeritus of Psychology at U.C. Berkeley, says:

It is better to do an experiment than to think about doing an experiment, in the sense that you will learn more from an hour spent doing (e.g., doing an experiment) than from an hour thinking about what to do. Because 99% of what goes on in university classrooms and homework assignments is much closer to thinking than doing, and because professors often say they teach “thinking” (“I teach my students how to think”) but never say they teach “doing”, you can see this goes against prevailing norms.

Religion isn’t just a set of propositions, but is more a set of practices. Intellectuals like to focus on the propositions, because they are good at manipulating abstract symbols, arguing about them, and synthesizing them with other sets of abstract symbols (or showing that there are seeming contradictions between the sets, say).

The problem for intellectuals is two-fold:

1. Religion is more about a practice, like learning a musical instrument, than it is a set of propositions. To understand religion, then, one must do, but this is scary for intellectuals because a) it is often outside their area of core strength, and b) it entails the possibility of changing who they are.

2. Many of the propositions associated with something which is largely a practice are often, to an extent, nonsensical until one starts the practice. Only then do the propositions begin to make sense. This is because the practice involves creating new conceptual categories, and so on. When learning to sing, for example, one’s teacher may say all sorts of things that use English words and are grammatical, but which one does not really understand … until one starts doing the various practices, at which point the propositions start to take on a (new) meaning.

Some of the best academic work involves the academic doing: an example of this is Mary Carruthers’ work on medieval memory, where she undertook to learn various techniques about which she was writing. This process changed her understanding of the plausibility of the techniques, and helped guide her in understanding the meaning of what the people using the techniques were saying.

If an academic wants to collect data about a religious practice, he must either begin the practice himself, or rely on what people who have done the practice themselves say. If the latter, he probably won’t really understand what they are talking about, but it is at least a step closer to figuring out the truth than logic chopping an unfamiliar set of abstract symbols on his own.

Also see here.

Science and agency

Jim Kalb writes that:

“Scientific knowledge is knowledge of mechanism that enables prediction and control. If you treat that kind of knowledge as adequate to all reality, which scientism has to do to be workable, then human agency disappears.”

I think it is more accurate to remove “of mechanism.” Science is about prediction and control. If one can predict that one cannot predict (for example, if something is random), though, then that is also a part of science.

Currently, human agency hasn’t disappeared in the scientific worldview – rather, when trying to understand or explain human agency, scientists tend to work within the current repertoire of scientific concepts. Right now, those aren’t classical mechanisms, but various electro-magnetic phenomena or cause-effect processes from quantum physics, say. Are all these things mechanisms? Yes, in that there is some sort of cause-effect relationship which can be described, and technologies created based on that description.

Consider creating robots: the robots can move about a room, decide whether to go left or right, and so on. There is agency there in some sense, and it all appears to be explicable in terms of contemporary science. So, when scientists are trying to understand human agency, they might look at how computer agents work. This might not be plausible (humans probably work in very different ways from any robots nowadays), but that’s not the point: the agency does not ‘disappear’ – it is just explained in terms that might work. If we find out that those terms don’t work, then scientists will postulate other ways in which human agency works.

I think Kalb underestimates how flexible ‘science’ has been – it changes once we figure out that certain representations don’t work. If there is something like an unpredictable human agency, then it will be included like other unpredictable phenomena in science. So, there might be new concepts developed to describe this. Nothing rests on this.

Poetry and technique

While studying poetry, I found two singular facts:

1. I found most of the contemporary poetry that I was studying to be poor – lacking in an important sense – even that which was dazzling in its use of various techniques. It was, more or less, a waste of time except in terms of the negative lesson it taught (whatever that might be).

2. As suggested, there was an intense focus on poetry as changing technique – how can we experiment and create new forms by changing the techniques? Let us follow these seemingly arbitrary criteria to create a new poem about this random subject.

Yet, there was little discussion as to the root and nature of poetry – what does (did) it do? What is it supposed to do? I.e., what’s the point? To merely say that there is a point is to say something which from a certain perspective is dangerous, I suppose, because it is normative.

Thinking more about techniques, however, led me back to the reasons for those techniques. Why does poetry pulse? Why does it alliterate? Why does it invoke vivid visual, auditory, tactile, and so on, imagery? How did this come about – how was it used traditionally?

The importance of the techniques is in changing the state of the reader or listener. Alliteration, imagery that draws on the senses, and so on, all combine to put the reader in a specific sort of state where they can then experience or understand the ideas the poet has captured in his poetic writing. What kind of state? I seem to know this intuitively, but it is difficult to describe – it is a state where certain kinds of truths can be grasped. What kind of truths? Important ones. Mythical ones, perhaps.

My guess is that poetry in its purest form is a part of religion (or vice versa). I think that this is why I sensed that contemporary poetry was lacking: it has largely attempted to detach from religion (understood in a very broad sense – religious exploration as has occurred throughout human history and pre-history) – from exploring the ideas and experiences that religion is about (such as truth, beauty, virtue, and connected, sensing-of-divine, or even holy, sanctified, or mystical states), and from doing so in any coherent tradition of religious thought or symbolism.

In short, I feel like contemporary poetry, like much of the arts, is cheating people of the most important potential aspects of the craft. To the extent that art in general has done this (and often instead veered toward art for the form of art), this explains why the quality of cultural artifacts has largely declined in many spheres – the artists have detached themselves from what art is about, so of course we get poor art as a consequence.

(In reality, there is probably a large amount of high-quality art being made right now – but it is obscured due to the way that a person like myself hears about new art. Basically, the standard gate-keepers keep the really interesting art out.)

Mere myth, literalism, and truth

Is it literally true that the second person of the Trinity rose from the dead? Rose where – towards the atmosphere?

What does it even mean to claim that these sorts of things are literally true – is it rather that the question is misguided?

I think: the way that these things are comprehended must in some sense be symbolic …

Yet, that sounds like it is presuming a non-symbolic way of knowing which is standard. Do we have a dominant and non-symbolic, ‘literal’ way of knowing against which this can be contrasted? Note that scientific representations are necessarily representations, or symbols. In this sense, scientific models aren’t literally true. Yet, of course, they are literally true. It is literally true that the cat is composed of molecules. But both ‘cat’ and ‘molecule’ are symbols, an interface for dealing with reality.

I’m not sure what work the word ‘literal’ is doing in many cases. Consider: a) Is it true that the cat is made of molecules? b) Is it literally true that the cat is made of molecules? Is there a difference? Not usually: when we say that the cat is made of molecules, we usually mean that it is literally made of molecules.

What is the word ‘literal’ adding in some cases? It seems that the word ‘literal’ now often operates as something that could be paraphrased as: is this claim something that, when understood by interpreting the words in a conventional scientific sense, is true? For example, if the second person of the Trinity rose from the dead, we are to understand this in terms of the physical models of the universe that we have developed through science. The person must be made of something which has a corresponding representation in physical science (molecules, electrons, and so on), and the word ‘rose’ must map on to some other corresponding representation in physical science, such that we can say that something ‘literally’ moved somewhere.

Yet, I doubt very much that most people intend this, when they say that the second person of the Trinity rose from the dead. Therefore, the phrase isn’t intended to be ‘literally’ true. Yet it isn’t intended to be ‘mere’ myth or metaphor – it is meant to be true. In what sense?

It is meant to be true in terms of a complex set of interlocking models and representations which have developed over thousands of years (probably longer than that), and broadly belong to the domain of ‘religion’, among other things. Most contemporary, secular intellectuals are not very familiar with these models. They think, rather, that to be true is to be ‘literally’ true. Since these claims don’t seem to be literally true, they are nonsense. The third option is that they are mere myths (symbolic for something that is literally true). Yet, most intellectuals don’t consider that they could be ‘mythical truth’ – something just as true as (or more so than) literal truth, yet employing a different set of representations or models.

(This is not to argue for a separation of mythical truth and literal truth. They must, somewhere, impinge and inter-map. The question is just where exactly this is to happen, and how to move from one set of representations to another.)

This is just to say that understanding certain things ‘mythically’ may be to create a better, stronger, more robust, and so on, relationship with certain kinds of truths, where these truths aren’t second-hand for ‘literal’ truths (although the opposite may be so in some cases).

So why do people often consider this movement to myth to be a retreat from truth? Probably because they are under the impression that people understood these things to be literally true in the past, but, in the face of disconfirmation through science, that people no longer consider them to be literally true. The problem with this theory is that, to a significant extent, there was no such thing as what is contemporarily understood as literal truth in much of the past, because the contemporary sense of ‘literal’ requires the various scientific representations that have been built up over the last several hundred years. Since these did not exist in the past, people couldn’t have understood these claims in a ‘literal’ way. Now that I think about this, it seems obvious.

Yet, there is something more to say about this. As peoples’ understandings of various words started to change to what we now call a ‘literal’ sense (a physical, scientific sense), they probably did start to carry that in all sorts of directions. Then, as science continued to develop, it became clear that these things weren’t the case in a ‘literal’ sense. So then they either had to disown the claims or say that they were “mere myth.” Yet, as we’ve seen, this is rather merely to internalize an epistemic error (the error of thinking that these stories, as originally told, were intended ‘literally’ in the contemporary sense, when they couldn’t have been, as the contemporary models that undergird what the word ‘literally’ now means didn’t exist).

Also see here.

Kreeft, angels, and what’s unscientific

Peter Kreeft – Professor of Philosophy at Boston College – writes (Angels and Demons, 1995, pp. 32-3):

“Isn’t the supernatural unscientific?

Yes. Science can’t prove it.

But it’s not antiscientific, because science can’t disprove it either. All the reasonings about all the observations of all the events within the system of nature can’t disprove something outside of nature [. ...] If angels can cause changes in nature – if angels can stop a speeding car in two seconds or appear and disappear in bodies – then the human senses can detect their effects [...] But the cause is invisible, since angels are pure spirits. Science cannot observe angels. They don’t reflect light.” (original italics)

This isn’t quite right. Science can only detect effects. It then postulates causes based on the effects. So, angels wouldn’t be unscientific in this regard.

So, what makes angels seem unscientific? Is it that they might exist outside of space and time, and so are non-natural in a sense? I think that’s getting ahead of ourselves: there is another reason why many people consider postulating angels to be unscientific.

First, the phenomena that Kreeft mentions above are erratic – like many natural phenomena, such as meteorites falling to earth, they are difficult to replicate in a laboratory. This just means that, until one has very good reason to believe such a phenomenon occurs, it is easy to disregard reports or to postulate closer-to-hand explanations, especially when taking the reports at face value would require a (hypothetically) large metaphysical shift away from the normal kinds of explanations.

An analog to this would be bug detecting in coding. Sometimes, bugs are highly replicable (and therefore usually easy to solve). Other times, though, one gets an “erratic bug.” It seems to occur at unusual times, without any seeming reason. Often, one can’t even replicate the supposed bug – it’s rather a customer that is claiming to have one. One could jump to the conclusion that the bug isn’t real, or that the bug is not part of the code (but rather a problem with, say, the customer’s operating system). Yet, often that’s not the right conclusion to jump to. Rather, the first culprit is usually something going on within the code. Even so, though, figuring out the cause of the erratic effect may be difficult.

Similarly, most scientists probably think of angelic phenomena like Kreeft mentions above in a similar way: it’s difficult to say what, exactly, is going on. For the time being, goes the line of thought, let’s assume it’s something occurring within the established natural framework of causes and effects, and save more radical hypotheses about it for once we’ve eliminated the closer to hand possible causes.

(This is combined with the question: does it make sense to focus on trying to explain these phenomena right now? Similarly, one might not focus on an erratic bug, because it seems more likely one will make more progress by focusing on replicable bugs instead. Perhaps it will be solved along with solving something else, or perhaps at a later point one can return to it, but for now, there are other things to do, goes the thinking.)

Second, the phenomena Kreeft doesn’t mention but which seem more replicable – angels guiding humans (bringing them messages) – also seem explainable by something else: namely, part of the brain (or so the presumption goes). (Neurotheology investigates effects like this.)

In both cases, there’s nothing unscientific about postulating angels per se, except that it seems that there are easier explanations at hand, given the picture of the world that science already has. Both this and the first consideration are instances of Occam’s Razor. Perhaps, though, upon close inspection of the phenomena, there don’t seem to be any good conventional explanations at hand, and something like an angelic being outside of space and time does start to seem reasonable – just as sometimes, when learning about a bug, it starts to seem likely that it is something outside the code causing it: perhaps it’s in the customer’s operating system, or hardware, or some other program running at the same time. In this case, it becomes reasonable, and scientific, to start postulating other causes. There’s no general rule here, but rather an informed sense of what could be going on.

Having said all this, Kreeft is on to something here. Warranted belief in a ‘supernatural’ cause is, in a sense, not possible in science, because “to be physical” = “to be in the physical causal network,” where something is judged to be in the physical causal network if it has effects on other things in the physical causal network. So, if one has good evidence of effects in the physical causal network, then the postulated causes become linked to the physical causal network, i.e., they become ‘physical’ also. In so far as whatever becomes part of the physical causal network thereby counts as ‘natural’, causal evidence for ‘supernatural’ processes is impossible.

(Consider the idea that the universe was ‘birthed’ from another universe, and that there are many universes in existence. Is the original universe ‘supernatural’? That’s not my intuition – it’s natural, and physical, while existing outside of this universe.)

A corollary of this conclusion is that the definition of what is physical changes to fit whatever can be posited to cause effects in the physical causal network. So, electromagnetic causal processes would not have been considered ‘natural’ or ‘physical’ at some point in the past – but now they are paradigmatic.

Also see here.

Are chairs real?

300 years ago, chairs were thought to have a very different fundamental nature than the one we think they have now.

Yet, we don’t say that therefore chairs aren’t real, or that people who talked about chairs 300 years ago were deluded. Nor would we say that we are engaging in delusion whenever we talk about chairs nowadays, even though we probably have the fundamental nature of chairs wrong.

Why?

In coding, there is an approach called object-oriented programming (OOP). OOP collects data and functions in an ‘object’. When working with the object, one can work not directly with the internal data and functions, but with what’s known as an interface. This interface encapsulates the ‘private’ data and functions of the object – it makes it so that you as a coder can work with the object without worrying about its fundamental implementation. Therefore, the private implementation can be changed while the interface remains the same, which is useful.
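A minimal illustration in code (a hypothetical example, purely for the analogy):

    // Callers use only the public interface, so the private 'implementation'
    // can be swapped out without changing any code that uses the class.
    class Temperature
    {
    public:
        void   SetCelsius(double c) { kelvin = c + 273.15; }
        double GetCelsius() const   { return kelvin - 273.15; }
    private:
        // the object's 'fundamental nature': this could be changed to store
        // Fahrenheit (or anything else) and callers would never notice
        double kelvin = 0.0;
    };

Everything callers rely on sits in the public section; what is below the private: line can be reworked freely.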

Similarly, the concept ‘chair’ is centrally about an everyday interface with chair objects, where the fundamental ‘implementation’ of a chair (its fundamental nature) isn’t that relevant. The interface works, regardless of how we change our concept of what chairs fundamentally are, because the interface is about the everyday macroscopic interactions people have with these things we call ‘chairs’.

So, why do we say that some things that we used to say exist don’t exist anymore, even though both today’s and the historical concepts refer (at least in part) to the same (real) things? I think the answer is that, for whatever reasons, the relevant interfaces for the two concepts are significantly different.

Take the idea of dragons and dinosaurs. We say that dragons don’t exist, even though ‘dragon’ referred (in part) to fossil evidence shared with some of what we now call dinosaurs. Understood in terms of reference, these were the same (sub-)set of things, and so dragons did exist, just as dinosaurs did exist. Yet, we don’t say that. Why not? Since the primary way that we interact with dinosaurs is in a paleontological setting, the concept is determined by the relevant paleontological interface. Yet, ‘dragons’ don’t have a primarily paleontological interface, but rather are interfaced through stories, where they have various attributes that are incompatible with dinosaurs (such as talking, casting spells, living contemporaneously with humans, and so on).

Consider the idea that certain illnesses were caused by demons, that now we believe are caused by bacteria. Why didn’t we identify demons with these bacteria? Both are invisible, enter into the body, and cause us to become sick. One answer is that the relevant interface for dealing with the illness changed, i.e., to deal with demons, you don’t apply antibiotics.  In order for an identity to have occurred, those using the concept ‘demon’ would have had to make a massive change to various parts of it, so it fit with what we were learning about these certain types of bacteria. That didn’t occur, and so we now say that demons don’t cause these certain illnesses, rather certain kinds of bacteria do, even though the concept ‘demon’ used to refer (in part, it turns out) to those certain kinds of bacteria.

Also see here and here.

Kreeft on academics

To the question:

If Christianity is intellectual, why aren’t more academics Christians?

(and presumably the question mentions something about being practical, logical, and scientific), Peter Kreeft answers (transcribed from here):

“Because academics are not practical, and not logical, and not scientific. In fact, if I were to list the 100 most absurd, illogical, impractical ideas in the history of the world, the one thing common to most of them is: you have to have a Ph.D. to believe them. [...] To be rational is to think about reality. Reason is an instrument, like light, for bumping off of things. But academics are notoriously [...] in-bred, self-referential, thinking about thinking, thinking about each other, thinking about theories. [...] Academics are also very intelligent and very clever, and therefore they’re much better at hiding from themselves than ordinary people, because you can invent all sorts of little tables in your mental laboratory to do your little experiments on and hide your mind from the fact[s].”

There is another reason why academics tend to get things wrong: they tend to live separated from most of human society, and therefore are largely in the dark about a large swath of human experience, psychology, and social realities. Because they are unfamiliar with these sorts of things, it is easy for them to make mistaken inferences that are based on how things are occurring around them or what people around them think.

Even in the best of cases, reason is a fragile instrument. Put another way, a chain made by reason easily breaks, and sometimes it is difficult to even see what could be wrong about one’s given chain of thought. Coding offers a limited test-field for this: working within an environment made explicitly for human logic, it is still extremely easy to overlook mistakes in that logic. One can look at a piece of code several times, and be certain that there are no mistakes. Then try to compile or run it and – there’s a mistake! With coding, however, there is a fairly close relationship between the logic (the code) and reality (does the code do what it’s supposed to do when it’s run?). Most beliefs academics have are not tested in a similar sort of way.

What is at stake?

If Christianity is nonsense to you – not merely false, but something that you are largely unable to understand – then what is the consequence of that?

It seems fairly obvious that you will therefore be unable to understand the large bulk of Western culture – the literature, the paintings, the music. It will be boring and tedious, or just odd, or perhaps you will catch a glimpse here and there of what the author meant, but not much more.

If this happens, then barring another culture of equivalent stature replacing the Christian one that you cannot understand, you perforce enter into a cultural dark age.

Yet, there is no replacement culture – none that is Western, anyway – to supplant the cathedral of Christian thought and expression which has grown over the last millennia. The reason is that the new culture – that created in the wake of Christianity – is meagre and generally of very poor quality.

To be unable to understand Christian thought and expression – for it to consist of mere delusion and nonsense – is then to lose a great deal. There is therefore much at stake: either, one must be able to re-interpret Christianity so it starts to make ‘sense’, or one is lost in a dark age. It follows that most intellectuals nowadays are in a cultural dark age.

A mistaken inference based on technological progress

Because science and technology have been advancing, people gain a bias for what is more recent: the more recent the book, blog post, and so on, the more valuable it is thought to be.

The mistake: this does not apply to art or the humanities in general, i.e., it is fairly obvious that in literature, painting, music, and so on, things are not getting better. If anything, they are tending towards devolution.

A contemporary, pre-modern model for relations between the sexes

What is the traditional arrangement between the sexes? Some would look to what things were like in the 1950s, say, where the man worked in an office or factory, and the woman raised children (until they went to compulsory schooling) and looked after the domestic sphere.

Is this really traditional? A cursory glance shows: it is not. The 1950s model was a brief, transitional phase, brought on by urbanization and industrialization, among other things. 50 years (or so) earlier, when most people farmed, instantiations of this sort of model were relatively rare. It would be curious to call something that was part of a brief, transitional shift brought on by singular aspects of modernization ‘traditional’.

So what does a more traditional model look like? To go back to what was in play for thousands of years, that would be something like the farming model. Here, although there was division of roles between the sexes that emerged from more natural differences, both sexes were engaged in an endeavour which centred around the property. Both ‘worked’ and both spent significant time with the children, but in different roles.

What is a modern analogue to this? One answer: a family-owned business, operating out of the home, where the children are also involved.

Epistemic momentum

Bruce Charlton writes:

“The root reason why modern atheists are incredulous about Christianity is that they (and my former self) deny the *possibility* of the soul, the supernatural, God/ gods, revelation, miracles, prophecies etc.”

This is basically correct. Yet. It’s more that they have a project. That project is advancing. The project’s core metaphysical assumptions don’t include the soul (and so on). As long as the project advances, so they think, any problems with explanation don’t matter so much in the short term. “It will be sorted out later.” Or, “that’s a pseudo-problem now that we’ve gotten better concepts.” Or, they might believe that they have solved various problems (when they probably haven’t).

Take the soul. We have explained parts of the mind in materialistic terms. So, the thinking goes, we will go on to explain everything about the mind and, by analogy, the soul, in materialistic terms. There are various problems (the most obvious is probably the ‘hard’ problem of consciousness), but the point is: the unease these unresolved or “perhaps-problems” cause is reduced as long as the project is advancing.

It is like an economy – as long as it is growing, various concerns about the structure of the polity, damage to culture, and so on, move to the back-burner.

Demographic shift

One of the most important things that will happen in the next 50 years is probably global population decline.

Most of the people I talk to think that the Earth is set for a population explosion, but that mode of thinking is several decades out of date. By about 2050 (in less than 40 years), if current trends continue, population will begin to decline. My guess is that it will occur sooner than that – perhaps 2040. So, as far as we can tell, marginal indigenous population collapses like those we already see happening in countries like Japan, Korea, Germany, and Russia will be happening all over the globe in about a generation.

The following documentary called Demographic Winter is an introduction (from a specific point of view) to the general topic. It was published in 2008 as far as I can tell:

Part 2
Part 3
Part 4
Part 5
Part 6
Part 7
Part 8
Part 9
Part 10
Part 11

Bruce Charlton’s Psychology of Political Correctness

Bruce Charlton – Professor of Theoretical Medicine at the University of Buckingham – has started publishing draft sections from a book on which he is working. The first installment starts:

Political correctness, or PC, is now pervasive and dominant in the West.

PC is not a joke, it is extremely powerful and extremely widespread – indeed hardly anybody among the intellectual elite, the ruling class, is immune – most are deeply complicit, even those who laugh at what they regard as the absurdities and excesses of PC.

(In ten years time these same individuals will be zealously defending these absurdities and regarding the excesses as middle of the road mainstream).

This is my sense as well. It is easy, when one hears that one is supposed to say “personholes” instead of “manholes”, say, to dismiss it as absurd – until one finds oneself, several years later, saying the new term.

I tend to focus on how political correctness is a contingent result of various interest groups exerting political power, but I think that Charlton is right to see a more fundamental thread to the general phenomenon.

Mature content

Much of the entertainment that is marked as supposedly for “mature audiences only” rather reflects an immature mindset, i.e., an obsession with graphic violence and sexual imagery is conducive not to mature thinking on matters but rather to tawdry and sensationalistic presentations.

Instead of a “Warning: mature content” it should rather read “Warning: juvenile content”. The rise of sexual imagery and graphic violence in various entertainment products is part of a larger trend towards juvenile behaviour and thought-patterns on the part of supposed adults in society.

In this sense, “mature content” is a joke. Once you get that it is really immature, you can dismiss the explicit or implicit claims made by it for what they really are (unfortunately, one cannot dismiss its impact).

Personal Science vs. Amateur Science

I have written about the amazing success of the ‘amateur science’ model here. An amateur scientist is simply someone researching but not being paid money to do so (they have a separate income source). The basic idea is that, when a researcher is freed from certain constraints tied to remuneration (constraints: scope of research tied to what will lead to more money, bureaucracy, having a background that is more or less required for the typical career track), he can become more effective in certain ways.

Seth Roberts – Professor Emeritus of Psychology at U.C. Berkeley – has a post on what he calls ‘personal science’ – which he defines as “science done to help the person doing it.”

Almost all the reasons Roberts gives for thinking that personal science will grow apply to amateur science:

  1. Lower cost.
  2. Greater income. People can afford more stuff.
  3. More leisure time.
  4. More is known. The more you know, the more effective your research will be. The more you know the better your choice of treatment, experimental design, and measurement and the better your data analysis.
  5. More access to what is known.
  6. Professional scientists unable to solve problems. They are crippled by career considerations, poor training, the need to get another grant, desire to show off (projects are too large and too expensive), and a Veblenian dislike of being useful. As a result, problems that professionals can’t solve are solved by amateurs.

So what is the difference between personal science and amateur science? In some cases, there is no difference – amateur science can be done to help the person doing it. However, amateur science is a more general category – for example, someone might be an amateur scientist simply because he enjoys the learning or exploration or hypothesizing, or for peer status. However, personal science can also be done by someone being paid money to do science.

So, if one were to imagine a Venn diagram, personal science’s circle would overlap with both the ‘paid science’ and ‘amateur science’ circles. My guess, however, is that the overlap will be significantly larger in the amateur science circle.

The Doctrine of Natural Equality and Contemporary American Politics

Reading Lothrop Stoddard’s The Revolt Against Civilization (1922), I came across an interesting passage. He says:

“The doctrine of natural equality was brilliantly formulated by Rousseau, and was explicitly stated both in the American Declaration of Independence and in the French Declaration of the Rights of Man. The doctrine, in its most uncompromising form, held its ground until well past the middle of the nineteenth century.” (p. 38)

This is an interesting claim – I had assumed that the United States’ Declaration of Independence’s line:

“We hold these truths to be self-evident, that all men are created equal[.]”

was a reference to a moral equivalence, similar to the Christian belief that God created all men’s souls in some sense equal. A little more searching around suggests that Stoddard is correct, and this was meant in a physical (or natural) sense. If so, then the evidence we now have suggests that the claim implicit in the subordinate clause is false if understood as it was intended.

Cause and compliance

If someone asks: “For whom are you going to vote?” and one answers “I’m not going to vote, as my vote won’t make a difference to who’s elected,” one might frequently hear the response “Yes, but if everyone thought that way …” This appears beside the point: one’s voting or not won’t cause everyone to think that way, and so it still won’t make a difference.

This conclusion, however, is a little beside the point. The situation is not really about whether one’s actions are the difference that can make the difference, although that’s often how it’s consciously couched. Rather, what is occurring is an example of a compliance mechanism for results requiring coordinated behaviour.

This compliance mechanism cuts across a large class of actions: every ‘cause’ where one’s actions aren’t going to make a relevant difference (most voting, recycling, foreign assistance, fighting in a war, and so on). In each of these situations, one can respond to requests for an action with the same response as the voting example above: my action won’t make a relevant difference.

This compliance mechanism is actually two major compliance mechanisms:

1. A social cost for non-compliance, or gain for compliance. So, people won’t invite you to their cocktail parties, will be angry with you, and so on, or will invite you to their cocktail parties, be nice to you, and so on. This mechanism operates through other people’s reactions to your (non-)compliance. Knowing this increases compliance.

2. A personal gain for compliance (one feels good about doing something to help a cause, even knowing the action isn’t the difference that makes the difference). Knowing one will feel good about taking action increases one’s chance of compliance.

I think this compliance mechanism is properly biological, i.e., wired into the human psyche, at least in some cases – we can posit that this psychological mechanism arose for an evolutionary reason (groups with it tended to prosper). Regardless, the point here is that it is fundamental, not an arbitrary aspect of over-zealous people in one’s society. Therefore, the dismissive response to voting in the example given at the start is technically correct, but misses the thrust of what’s occurring – a biologically founded compliance mechanism for achieving results that require coordinated human action.

In terms of a straightforward hedonic theory of utility, game theory would have to include these ‘payouts’ (1. and 2. above) to arrive at a more accurate picture of the rationality of a given course of action in a social context.
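As a toy illustration of what such an accounting would look like (all numbers hypothetical): suppose the chance of casting the deciding vote is p = 1 in 1,000,000, the value to you of your preferred outcome is B = $100,000, the cost of voting is c = $10, the social payoff of compliance is s = $15, and the personal feel-good payoff is w = $5. Without the compliance payouts, the expected value of voting is pB - c = $0.10 - $10.00, which is negative; with them, it is pB - c + s + w = $10.10, which is positive. On a straightforward hedonic account, it is the compliance payouts, not the chance of being decisive, that make voting worthwhile.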

Also see here.

Why did the gods live on a mountain?

In Greek mythology, most of the major gods lived on a mountain. I wonder if the mountain was meant as a symbol for something approaching omniscience? (When one gets to the top of a mountain with a clear view in all directions, a large amount of information about the surroundings becomes available to one.)

James Watt’s Schooling

James Watt was one of the greatest practical scientists the world has seen, his achievements including the invention of the separate condenser for steam engines (which enabled significant parts of the Industrial Revolution). The watt, the unit of power, is named after him.

So, what sort of formal schooling did this eminent scientist and inventor have? 13 years (the typical kindergarten-primary-secondary amount nowadays)? 17 years (add an undergraduate degree)? 21 years (for a Ph.D.)?

Watt was largely homeschooled. From Andrew Carnegie’s James Watt (1905):

“[James] was so delicate [in health] that regular attendance at school was impossible. The greater part of his school years he was confined to his room. [... His mother] taught him to read. [...] He was rated as a backward scholar at school, and his education was considered very much neglected. [... His father] taught him writing and arithmetic, but also provided a set of small tools for him in the shop among the workmen.” (pp. 10-13)

At 17, Watt aimed to become a mathematical instrument maker, and so began working for an optician (of sorts) in Glasgow. At the same time, a brother of a school friend, Professor John Anderson, gave Watt unrestricted access to his own library.

He then left Glasgow, moving to London, where he eventually secured an apprenticeship with a mathematical instrument maker, at which he spent a year. After becoming ill, he returned home to recuperate. “His native air, best medicine of all for the invalid exile, soon restored his health, and” at 20 “to Glasgow he then went, in pursuance of his plan of life early laid down, to begin business on his own account.” (p. 35) After procuring work on astronomical instruments from the university, he set up shop in a room provided by the university (because a local guild would not allow him to work without 7 years of apprenticeship – the university was exempt from this). He was able to meet various eminent scientists at the university.

A couple themes:

1. Little formal schooling (much less the 21 years most Ph.D.’s now receive). See here for comparison to eminent English men in science in the late 19th century (Watt was active in the 18th and early 19th centuries).

2. An ‘amateur’ scientist. Watt had an autonomous income. See here for more on the amateur scientist model. While still being an ‘insider’ in terms of his ability to meet eminent scientists, he was an outsider in certain respects (this is similar to Faraday’s early work, for example). See the end of this post for more on the insider-outsider idea.

One thing I liked about the section of Carnegie’s book from which this biographical information comes is passages like the following, describing the tenor of Watt’s early years:

“[V]isits to the same kind uncle “on the bonnie, bonnie banks o’ Loch Lomond,” where the summer months were spent, gave the youth his happiest days. Indefatigable in habits of observation and research, and devoted to the lonely hills, he extended his knowledge by long excursions, adding to his botanical and mineral treasures. Freely entering the cottages of the people, he spent hours learning their traditions, superstitions, ballads, and all the Celtic lore. He loved nature in her wildest moods, and was a true child of the mist, brimful of poetry and romance, which he was ever ready to shower upon his friends.” (p. 16)

If everybody …

Here is part of a comment to Angelique Chao‘s post on eating humanely-raised animals:

“Because the bottom line is this: regular consumption of meat and fish just isn’t sustainable. [... If] everyone on the planet ate that way starting … NOW! … we’d be out of land, food, animals, everything in about 2 minutes. It’s really as simple as that.”

Let’s grant that, if everyone on the planet started eating certain kinds of meat or fish all at once, it would not be sustainable. How does that affect whether a given person should eat meat at some point in time? It seems irrelevant, because the given person isn’t going to cause – by their act of eating – everyone else to act in the same way.

Consider an analogy: you might be sitting in a chair right now. If everyone on the planet sat in that chair starting now, almost all of them would be crushed to death. Therefore, should you not sit in the chair?

Obviously, there’s been a mistake in logic somewhere if that’s your conclusion. What’s relevant in considering the morality of an action can’t be what would happen if everyone did it.

The immorality of vegetarianism?

This article by Angelique Chao, arguing that eating humanely-raised meat is a morally acceptable choice, got me thinking about vegetarianism.

One argument against eating meat is “One is doing harm against this animal by purchasing this product. Therefore, one shouldn’t purchase this product.” When scrutinized, however, this claim doesn’t seem very straightforward.

Consider: will eating this particular chicken leg (say) cause harm? Not to the chicken, who is already dead. The chicken was killed regardless of whether I eat it. Perhaps the argument, then, is “One is causing harm to be done against a future animal by purchasing this product. Therefore, one shouldn’t purchase it.” This is because one is supporting a system of animal production.

Again, the reasoning here isn’t as straightforward as it seems: if one is dealing with a typical industrial poultry operation (say), then one’s actions are minuscule compared to the scale of production. Is the system really sensitive enough to one’s actions to scale production up or down based on the purchase of one chicken leg (say)?

The response may be: “One’s actions in aggregate are causing harm to be done against a future animal(s) by purchasing this product. Therefore, one shouldn’t purchase it.” The idea here seems to be that cumulative purchases cause an increase in chicken production, say, and that there is a specific point at which one causes an increase in production, but it happens only because one also made purchases before.

Either this or the previous argument might be made into a probabilistic argument: “For any given purchases of these sorts of products, there is a chance that one is causing harm to be done against a future animal. Therefore, one shouldn’t purchase them.”
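To make the probabilistic version concrete (the numbers are purely hypothetical): if a supermarket restocks chicken in batches of 1,000, then any single purchase has roughly a 1-in-1,000 chance of being the one that triggers an order for another 1,000 chickens, so the expected effect of one purchase is on the order of one additional chicken raised and killed.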

There are two things left out of this argument: first, the net effects for some future animal(s), and second, the net effects for all agents concerned.

Let’s imagine that the purchase of this chicken leg sets off an increase in chicken production, which leads to whatever harm is involved in raising a future chicken and killing it. Is this the whole equation? No. The relevant question here is rather: is the new chicken’s life worse than no life at all? The vegetarian might have a case here for some kinds of chicken production, but it seems weaker in other cases (are cows or sheep that spend their days grazing in a field free of natural predators really having a net-negative life?).

However, even this isn’t an adequate analysis, because the chicken isn’t the only part of the equation. There are various other possible effects of the action, involving harm and good. How does the action affect the poultry company workers, other people in your society, other animals, and, of course, yourself? There is now a welter of possible effects that can be considered.

So, the argument can be given as: “For any given purchases of these sorts of products, there is a chance that one is causing net harm to the future agents affected by the purchase. Therefore, one shouldn’t purchase them.”

Suddenly, it doesn’t seem automatically intuitive that the argument is sound. In particular, various purchases of meat might trigger net harm, but others might be neutral or trigger net good. For example, what if purchasing meat in some cases caused an increase in idyllic cow lives?

If there were such cases, then following the logic of the argument, it would be immoral in those cases to be a vegetarian.

Frans de Waal, Robert Wright, and whether one should act ‘morally’

In a dialog, Robert Wright says to Frans de Waal, a primatologist (transcribed from here, 33:07):

Robert Wright: “If somebody says to you, ‘Look, there’s no God, so what’s wrong with me just killing this person? You tell me, why is it wrong to make someone else suffer? If it makes me better off by making someone suffer, what’s wrong with that?’, what do you say to them?”

Frans de Waal: “Well, I would say, ‘I, as a representative of the community, I don’t agree with that kind of behaviour, because next time it’s going to be me, or next time it’s going to be my family, and so I have good reasons, selfish reasons, to object to this kind of behaviour which we don’t tolerate in this community, and we will punish you for that.’ And I think that’s how morality came into being, communities with a single voice would tell you how they felt you needed to behave.”

In other words, according to de Waal, if one can do something ‘wrong’ without the community catching one, then there’s no negative consequence to doing something wrong (however that is defined), i.e., it is rational to do the wrong thing when refraining from it would conflict with self-interest and one can get away with it.

So we can rephrase the question put to de Waal as an anonymous note slipped into de Waal’s mailbox by our hypothetical bad-doer:

“I’m going to do something you and the rest of the community object to (call ‘wrong’), but I’m going to do it in a way that I won’t be caught. Can you give me a good reason not to do it?”

Based on what de Waal says in the dialog, the answer is: “no.”

Yet, the whole point of morality is to make it so that it doesn’t matter if one is caught, which is why de Waal’s reasoning seems like a giant step backward. What de Waal is proposing, rather, is called legislation, police, and courts.

This can be contrasted with a more traditional kind of Christian morality. In that kind of case, there is no way to get away with it. The moral laws aren’t so much things that can be broken as things that one can break oneself against. This is because doing something wrong leads to estrangement from God (say), which in itself is a bad thing (either in this world or afterwards, through Hell or Heaven).

In this case, when our anonymous hypothetical bad-doer asks: “Can you give me a good reason not to do this bad thing?” the person can say “Because you will be worse off for it, in this world or the next.”

Also see here and here.

Philosophical Zombies, Chalmers, and Armstrong

In his paper What is consciousness? (1981), D.M. Armstrong – Professor Emeritus at the University of Sydney and one of the founders of functionalism – brings up an example of what sometimes happens to long-distance truck-drivers, to distinguish between ‘perceptual consciousness’ and ‘introspective consciousness’:

“After driving for long periods of time, particularly at night, it is possible to ‘come to’ and realize that for some time past one has been driving without being aware of what one has been doing. The coming-to is an alarming experience. It is natural to describe what went on before one came to by saying that during that time one lacked consciousness [i.e., lacked some kind of consciousness different from perceptual consciousness].” (p. 610, Philosophy of Mind, ed. John Heil, 2004)

He then introduces the notion of introspective consciousness:

“Introspective consciousness, then, is a perception-like awareness of current states and activities in our own mind.” (p. 611)

How does this tie into ‘philosophical zombies’? He continues at a later point in the paper:

“There remains a feeling that there is something quite special about introspective consciousness. The long-distance truck-driver has minimal [a technical term Armstrong uses to mean some sort of mental activity] and perceptual consciousness. But there is an important sense, we are inclined to think, in which he has no experiences, indeed is not really a person, during his period of introspective unconsciousness. Introspective consciousness seems like a light switched on, which illuminates utter darkness. It has seemed to many that with consciousness in this sense, a wholly new thing enters the universe.”

The language Armstrong uses here is reminiscent of the language David Chalmers, writing in The Conscious Mind (1996), uses to describe philosophical zombies:

“A [philosophical] zombie is just something physically identical to me [i.e., acts the same, talks the same, and so on - is the same as far as we can physically tell], but which has no conscious experience – all is dark inside.” (p. 96)

The idea is not that philosophical zombies have an experience of darkness – rather, it is a figurative way of speaking about a lack of what I would call subjective experience, or what Chalmers calls ‘conscious experience’ above. Yet, it is suggestive of a link between how Armstrong is conceptualizing ‘introspective consciousness’ and how Chalmers is conceptualizing ‘conscious experience’.

Armstrong seems to be conflating subjective experience with introspective consciousness. Chalmers picks up on this in his 1996 book The Conscious Mind:

“Armstrong (1968), confronted by consciousness [i.e., subjective experience] as an obstacle for his functionalist theory of mind, analyzes the notion in terms of the presence of some self-scanning mechanism. This might provide a useful account of self-consciousness and introspective consciousness, but it leaves the problem of phenomenal experience to the side. Armstrong (1981) talks about both perceptual consciousness and introspective consciousness, but is concerned with both only as varieties of awareness, and does not address the problems posed by the phenomenal qualities of experience. Thus the sense in which consciousness is really problematic for his functionalist theory is sidestepped, by courtesy of the ambiguity in the notion of consciousness.”

I think Chalmers is right here, but he makes it sound like Armstrong uses the ambiguity in order to sidestep the problem. My sense from reading Armstrong 1981, rather, is that there is some sort of implicit identity between certain functional states and subjective experience, and so for him the conceptual distinction carries less weight. However, it seems Armstrong is also working through the conceptual muddle that many other philosophers and scientists were working through at the time, and hasn’t clearly distinguished the two aspects.

When Chalmers (Facing up to the problem of consciousness, 1995) pulls apart consciousness into ‘consciousness’ proper (i.e., subjective experience) and ‘awareness’, this is doing heavy work:

“Another useful way to avoid confusion [...] is to reserve the term ‘consciousness’ for the phenomena of experience, using the less loaded term ‘awareness’ for the more straightforward phenomena described earlier [i.e., causal or functional phenomena, such as the ability to discriminate, categorize, and react to environmental stimuli, the integration of information by a cognitive system, and so on].” (p. 619, Philosophy of Mind, ed. John Heil, 2004)

The conflation of these two elements of this and related psychological or mental terms occurs throughout the literature, and it creates a ‘side-stepping’ potential for various proposed solutions, of which Armstrong’s is but one. However, it is only by making the conceptual distinction explicit, and then showing how an identity (functional, in this case) is problematic, that the real problem becomes apparent.

Ned Markosian and Mereological Nihilism

Motivation:

If there are physical objects at “higher levels” in a strong metaphysical sense, then it suggests a solution to one aspect of the problem of subjective experience vis a vis the physical universe – namely, how there can be a complex thing such as subjective experience that is physical.

Mereological Nihilism:

In a recent talk, Ned Markosian, Professor of Philosophy at Western Washington University, said that the main problem with “mereological nihilism” – the view that the only physical objects properly speaking are simples – is its counter-intuitiveness. Instead, he offered an alternative called “regionalism.” (Mereology is the study of part and whole – from meros meaning part.)

How is a mereological nihilist to respond?

The first step is to make a distinction between objects in the everyday sense of the term, and objects in a philosophical or metaphysical sense of the term.

In the everyday sense, objects are identified based on things like whether they hang together in an identifiable way, whether there is some particular use for which they will be picked out, and so on. Scientific uses follow along similar lines.

In this sense of an object, a mereological nihilist need not deny that there are everyday or scientific objects. In this way, the counter-intuitiveness of the nihilist’s position is reduced.

However, the nihilist must add, it turns out that these “objects” don’t have an existence beyond the arrangement of the simples. They are useful conceptual (or perhaps perceptual) devices – shortcuts to help in interacting with the world.

The reason for believing this is Occam’s Razor – to predict how the everyday objects behave, we don’t need to postulate anything more than simples moving in concert, say. We could say that there are ontologically strong objects above and beyond them, but why not just say that they are conceptually useful but ontologically weak (i.e., not real, i.e., mere devices) objects instead?

To motivate a position like Markosian’s regionalism, then, as opposed to mereological nihilism, we would need to motivate it beyond the “intuitive” reasons, which mereological nihilism can handle via the definitional split outlined above. To do this, I think what is required is motivation for believing that there are higher-level physical objects in a strong sense for other-than-causal reasons (as physical science locates objects using causal criteria – to be is to be causal, according to physical science – and the causal story seems to be covered at the lower level).

Subjective experience is a reason for believing that there are higher-level physical objects, but is it sufficient? The alternatives are: deny that subjective experience is real, posit that subjective experience is fundamental (and so a “simple”) in some sense, or say that subjective experience isn’t physical.

Intrinsic and Extrinsic

In discussing strategies for avoiding epiphenomenalism (the idea that subjective experience is causally irrelevant), David Chalmers lists one option as (The Conscious Mind, 1996, p. 153):

4. The intrinsic nature of the physical. The strategy to which I am most drawn stems from the observation that physical theory only characterizes its basic entities relationally, in terms of their causal and other relations to other entities. Basic particles, for instance, are largely characterized in terms of their propensity to interact with other particles. Their mass and charge is specified, to be sure, but all that a specification of mass ultimately comes to is a propensity to be accelerated in certain ways by forces, and so on. Each entity is characterized by its relation to other entities, and these entities are characterized by their relations to other entities, and so on forever (except, perhaps, for some entities that are characterized by their relation to an observer). The picture of the physical world that this yields is that of a giant causal flux, but the picture tells us nothing about what all this causation relates. Reference to the proton is fixed as the thing that causes interactions of a certain kind, that combines in certain ways with other entities, and so on; but what is the thing that is doing the causing and combining? As Russell (1927) notes, this is a matter about which physical theory is silent.

One might be attracted to the view of the world as pure causal flux, with no further properties for the causation to relate, but this would lead to a strangely insubstantial view of the physical world. It would contain only causal and nomic relations between empty placeholders with no properties of their own. Intuitively, it is more reasonable to suppose that the basic entities that all this causation relates have some internal nature of their own, some intrinsic properties, so that the world has some substance to it. But physics can at best fix reference to those properties by virtue of their extrinsic relations; it tells us nothing directly about what those properties might be. We have some vague intuitions about these properties based on our experience of their macroscopic analogs – intuitions about the very “massiveness” of mass, for example – but it is hard to flesh these intuitions out, and it is not clear on reflection that there is anything to them.

There is only one class of intrinsic, nonrelational property with which we have any direct familiarity, and that is the class of phenomenal properties. It is natural to speculate that there may be some relation or even overlap between the uncharacterized intrinsic properties of physical entities, and the familiar intrinsic properties of experience. Perhaps, as Russell suggested, at least some of the intrinsic properties of the physical are themselves a variety of phenomenal property? The idea sounds wild at first, but on reflection it becomes less so. After all, we really have no idea about the intrinsic properties of the physical. Their nature is up for grabs, and phenomenal properties seem as likely a candidate as any other.

It doesn’t matter if one postulates a “pure causal flux” or not. The reason is that the way we understand both things and their relations (or causality) is representational. If one is going down this road, then subjective experience could be identified with the ‘hidden’ aspect of the things, or the ‘hidden’ aspect of the relations.

In physical representation, one has an abstract, quantitative representation of things in a space, say. The relations are abstract as well as the things. That is to say, our representations of extrinsic properties presumably correlate to something real, but there is ‘room’ to put subjective experience ‘in’ behind the representations, as well.

This is to say, if we think that we have a direct line to (in some sense) non-representational knowledge of our own subjective experience, then it is not clear why that subjective experience can’t be identified with either the extrinsic, intrinsic, or both sorts of physical properties.

So when Chalmers says that there “is only one class of intrinsic, nonrelational property with which we have any direct familiarity, and that is the class of phenomenal properties,” this may be wrong. Chalmers is assuming a ‘relevant transparency’ when it comes to representations of relations, but not to things. Phenomenal properties may turn out not to be ‘intrinsic’ or ‘nonrelational’.

The pace of real science

“[Current mass science] does not operate at the pace of real science, but at the pace of management [...]. Six monthly appraisals, yearly job plans, three yearly grants and so on. (All evaluations being determined by committee and bureaucracy, rather than by individuals.)”

So says Bruce Charlton, Professor of Theoretical Medicine at the University of Buckingham, here: http://thestoryofscience.blogspot.com/

The same can be said for academia in general.

In game design, if one has designed a feature before, then if the new feature is fairly similar and one can imagine how to implement it, one can give a rough approximation of the time involved. If the feature is completely terra nova, then there’s no way to know how much time it will take. One can instead start, see how it goes, and get a better idea of how long it will take as one gets further into implementing it.

This is relevant for bureaucratic science, where ‘goals’ are set for what will be discovered by what time, or where certain units of predictable progress are expected. This sort of managerial approach to science is a misunderstanding of what scientific exploration is when it comes to terra nova areas of science.

Using the number of papers to judge scientific progress is similar to a managerial approach used in coding – using lines of code to judge progress. The latter is a pretty poor metric. Often, it just leads to code bloat, bugs, and so on. Similarly, in science (or academia in general), judging things by the number of papers leads to ‘knowledge bloat’ (a high noise-to-signal ratio in terms of valuable things being written) and, I am guessing, to other problems.

Gods and Goddesses

Various conceptions of gods and goddesses map partially to aspects of the subconscious or unconscious. In this sense, one can think of a typical pantheon as a (antiquated) set of tools for interacting with and remembering aspects of the (subconscious or unconscious) mind.

In The Iliad, for example, there are various scenes where gods or goddesses appear to humans. In reality, there is a spectrum of sorts of appearances of information from the subconscious or unconscious to the conscious, and the appearance of gods or goddesses can be understood as standing in this spectrum (along with problem solving, the ‘muse’, and so on).

Understanding gods, say, or muses, as coming through part of the brain (or what have you) can lead to a possibly incorrect conclusion: the brain explains the phenomena, i.e., the phenomena are nothing but the brain acting in a certain way.* To see that this is problematic, consider an example: if I am looking at a tree, presumably there is some correlate processing occurring within my brain. Yet, it would be incorrect to conclude that trees are brain processing. The information being presented is of something external to my ‘self’, and is in some sense veridical.

*It is another question what the ‘brain’ is – i.e., the concept ‘brain’ is a specific kind of representation of something, i.e., a ‘scientific’ representation. See here.

So, evaluating the veridical nature of purported experiences of gods, say, requires looking at the causal chain which leads to that experience. Are there really things outside of ourselves which have these mental properties and these abilities? (In a way, the unconscious is a mental thing outside our ‘selves’.)

For a question like “Do gods (say) exist?”, it is easy to say ‘no’, but a more careful evaluation brings certain puzzlements to light. For example, consider the question “Do (did) dragons really exist?” The concept ‘dragon’ originated from fossil evidence, and in this sense refers to something real. If that is what is meant by whether there ‘really are’ dragons, then the answer is ‘yes’. Yet, usually something more is meant: just how accurate a concept it is. Here, there isn’t a yes-no answer, but rather a sliding scale of accuracy. (It isn’t that simple: for example, one concept could get A right but B wrong, another A wrong but B right.) For example, moving from ‘dragons’ to ‘dinosaurs’ was (on the whole) a step in accurately conceptualizing the nature of those things to which the fossils belonged. This would favour the idea that dragons don’t exist. However, we have then continued to change our concept ‘dinosaur’, in minor and major ways. So we would be put in the position of continually saying “The ‘dinosaurs’ they believed in 5 years ago don’t exist,” which is an odd way to talk. We don’t say ‘dinosaurs’ no longer exist every time we make a conceptual change. (See here.)

(Because we invented a new term for those things to which the fossils belonged, a radical conceptual dimorphism can develop (i.e., our concept attached to the word “dinosaur” can become much different from the concept attached to the word “dragon”). If we had used the same word, we would have had a conceptual change in what was associated with it, which would incline us less towards thinking: ‘old dragons’ didn’t exist, but ‘new dragons’ do – rather, we would be more inclined to think dragons are real, but our concept of them has changed. This can be seen by looking at all sorts of more quotidian concepts, where we have retained the name but our understanding of the nature of the phenomenon has changed significantly.)

Why do scientific representations work?

What defines a scientific representation? These representations are designed to work at predicting certain other representations (eventually resulting in manifest representations via our sensory apparatus). The network of these models refers to what we think constitutes the “physical universe.”

The nature of the representations, then, is to work to predict. In this sense, they are essentially ‘causal’, but we could instead just say ‘sequential’. Not much rests on the word ‘cause’ in an ontological sense here.

So, why do scientific representations work? If they are essentially abstract, quantitative representations, what about this allows them to work?

The reason that the representations work is that they correspond to things (or ‘states of affairs’) in the universe. That is, they are placeholders that in some sense correspond to things. In what way do they correspond?

That is where things become difficult. Reductive science tends to posit that the ‘smaller’ things are the ‘real’ things. Yet consider: a leaf in my subjective experience can have a correlate, but the leaf is not independent. My experience is not a ‘mere’ conglomeration of leaf experiences. Rather, there is an experience, and the leaf is in some sense a part of it.

Our instinct in reductive science is to ‘get rid of’ higher-level entities if we can model them successfully in terms of more abstract, ‘lower’-level entities. We say: the higher-level entity was an illusion; what there really is, is these lower-level entities. (More carefully, we should say: our higher-level representation did not give us ontological-type correlates – i.e., placeholders – as accurate as they could have been, and the lower-level representation is better in this sense.)

How does this work with subjective experience? The problem is that there are parts and wholes. Subjective experience isn’t reducible to its components (a ‘mere’ conglomeration). Rather, there is a whole, and we can analyze it into parts.

This would only be reflected in scientific representation if this whole-part relationship affected what is reflected in the sequential predictions of ‘physical’ representations. That is, the whole-part relationship will only turn up in science if it makes a causal difference.

It probably does make a causal difference, as it seems that what occurs in a subjective experience makes a difference. For example, I can reflect on it and say “It is united.” This is an effect, and so the fact that it is united seems to be causally important. Yet, it seems conceivable that it might not be. Science can look for its effects by looking for phenomena that are unified but have parts. A concept like a ‘field’ in physics, for example, might reflect this state of affairs. (In this case, something like a ‘field’ would be the representation; the subjective experience would be the thing.)

What does abstract, quantitative representation say?

If we understand the problem of qualia not as how to reduce qualia to abstract, quantitative representations, but rather as how to represent qualia in terms of abstract, quantitative representations, then it seems rather easy in a way. We can make a start on representing colour experiences as indicated here, for example. This then leads to a question: why is abstract, quantitative representation useful? The basic idea with science is that this sort of abstract representation reveals something important about the universe. So what does it reveal?

What is quantity? Quantity at its root is distinction. For example, we can quantify the length of the side of a rectangle by making distinctions – places – along it. We have a way to record these distinctions – 1, 2, 3, and so on, are placeholders for these distinctions. We could just as easily use different symbols – a, b, c, and so on. Mathematics is largely ways to understand the relationships between placeholders as such. That is, what can we say about placeholders, to the degree that they’ve been abstracted?

The problem with science is that reality isn’t abstract. Reality is concrete. My subjective experience instantiates certain things that can be ‘captured’ to an extent with abstract, quantitative representation. For example, my visual experience of a rectangle can be ‘divided’ up. These distinctions can then be mapped to abstract placeholders. (The trick is that many people mistake scientific representations for concrete things. “The deer is really a bunch of molecules,” for example, where the molecules are taken to be concrete. They aren’t – they are abstract representations of what was formerly represented by the ‘deer’ concept.)

Instead of thinking of science in this abstract, placeholder sense as revealing something, it is more like it is remembering something – in a symbolic language that can be ‘re-converted’ later on – much like letters can be converted to meaning later on, if one has some basic grasp of the meaning of letters arranged in such-and-such an order to begin with.

Certainly, science does reveal things. We use tools to perceive things hitherto unperceived. Yet, the language in which science records these things requires an interpreter on the other end, who can decode the ‘meaning’ of them (‘meaning’ here is understood in a disparate sense).

The same goes for scientific representations of subjective experience. The attempt to eliminate subjective experience as a phenomenon in itself, and in its place put only symbolic representations of it, is misguided. Rather, the symbolic representations can help us to understand the things (in various ways) – they don’t replace them, except when we are moving from one representational scheme to another. I.e., we can abandon one representational scheme for another in reductive science, where the new or ‘lower-level’ scheme is in some way more useful. However, even here both are referring to something ‘real’, and so in a certain ontological sense of reference neither is primary.

In the case of fitting subjective experience into the physical universe, we are not moving from one mere representational scheme to another – rather, we have things (i.e., subjective experiences, say) and are looking for some possible physical representation of this that ‘fits’ with our other physical representations in some relevant sense.

Quantity, Quality, and a Materialist Pandora’s Box

Science (in an area like physics) works largely through abstract, quantitative representations. In philosophy of mind, there is a problem with ‘qualia’. Qualia are qualitative properties of subjective, experiential states. This is a problem because ‘qualities’ don’t seem reducible to quantity.

It is a straightforward matter to note that one can map from quantity to qualities. For example, one can understand (part of) the structure of human colour vision as correlating with 3 quantitative ‘dimensions’, A, B, and C. Each dimension holds a value from 0 to 255, say. The three numbers combined produce a location in an abstract, quantitative ‘space’.

This is how a colour is often represented in computer code – it is an abstract, quantified representation. More carefully: this is how coders represent the internal state of the computer, which, combined with a causal chain involving a monitor, produces experiences of certain sorts of colours in people under certain standard conditions. So, the three quantitative dimensions A, B, and C are usually called R, G, and B, which map to the phenomena of red, green, and blue colour experiences. By combining these three dimensions, one can produce a full spectrum of regular human colours in a human who is looking at something like a monitor, because the monitor is made to receive inputs corresponding to locations in the abstract quantitative space and produce certain optical phenomena as a result.
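As a minimal sketch of this sort of abstract, quantitative colour representation (assuming the common 0–255 RGB encoding; the particular values and function names below are illustrative only, not taken from anything above):

```python
def clamp(v: int) -> int:
    """Keep a dimension's value within the 0-255 range."""
    return max(0, min(255, v))

def rgb(r: int, g: int, b: int) -> tuple[int, int, int]:
    """Return a location in the abstract, quantitative (R, G, B) 'space'."""
    return (clamp(r), clamp(g), clamp(b))

# These triples are placeholders, not colour experiences: sent to a monitor,
# they typically cause 'red' and 'white' experiences under standard conditions.
red_ish = rgb(255, 0, 0)
white_ish = rgb(255, 255, 255)
print(red_ish, white_ish)  # (255, 0, 0) (255, 255, 255)
```

The point of the sketch is only that the triple is a location in an abstract space – a quantitative correlate of a colour experience, not the experience itself.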

This is to say, subjective experience of colour has certain correlates in an abstract, quantitative ‘space’.

If scientific representation is taken not to be ‘relevantly transparent’ (contra someone like Daniel Dennett), then one can think of abstract, quantitative scientific representation as a kind of Pandora’s box which, upon being opened, might allow subjective experience ‘into’ the scientific picture. More precisely: allow subjective experience behind the abstract, quantitative picture that science ostensibly gives.

A seemingly coherent response to eliminativist materialists such as Dennett is: reality goes from subjective experience in certain cases (say) to abstract, quantitative representations that occur in humans, so to speak. It is like the coder example, but in the other direction. That is, certain sorts of physical phenomena are subjective experiences, which humans represent in an abstract, quantitative way.

If the above is right, though, then it also suggests that the problem of reducing qualia to quantity is confused: what we are doing in physics (say) isn’t reducing ‘things’ to other ‘things’, but rather introducing abstract quantitative symbols for things and then, in the case of reduction, replacing one set of symbols with another, where the nature of these symbols is useful for various reasons.

Truth, religion, and science

The cause of the fall of theism from the elite in Western society preceded the theory of natural selection (see here). What was the cause, then? In part, it was the introduction of a new system of truth, one that could potentially conflict with (or corroborate) the truth claims of an extant system (i.e., various forms of Christianity, folk traditions, and so on).

Yet, the notion of truth is not static through this process. The introduction of scientific truth changed what ‘truth’ was – i.e., its ontology changed the notion of what could be ‘real’. This has led to contemporary academic philosophers, such as Daniel Dennett, denying that there is such a thing as subjective experience. Science trades in abstract, quantitative ‘things’ which are left as such. So, the logic Dennett might employ goes, subjective experience is therefore a mere illusion – it does not exist.

Yet, this leads to cognitive dissonance – the apparent fact of subjective experience is constantly before one. It is fairly straightforward to see that scientific representation is representation, which is symbolic, and so must be a symbol of something. Someone like Dennett posits the abstract symbols of science’s ontology to be transparently understood things in some relevant sense. So a response can be made: deny ‘transparent understanding’ vis a vis science’s things. If subjective experience is real and science is comprehensive in the relevant sense, then the symbols employed in science must be a representation of subjective experience at some point. So, in order to pursue this intuition of ‘facts’ and ‘reality’, one can say that Dennett posits an understanding of scientific things that is not actually obtained.

(One cannot say that subjective experience is not real just because current scientific ontology does not allow for it to be real, say. Physical science has consistently expanded or modified its ontology upon discovery or investigation of phenomena with robust evidentiary bases.)

Would dragons exist if …?

The basis for the concept of ‘dragon’ was largely fossil evidence. People were finding large skeletons of a certain type, and postulating that there existed a type of creature to whom the fossils belonged.

The term “dinosaur” was adopted in the mid-19th century by paleontologists to describe a kind of reptile, based on somewhat similar fossil evidence but situated in a different theoretical context.

Yet, imagine: how would we think of dragons if the people who adopted the name “dinosaur” had decided instead on the name “dragon”?

Would we say that the old ‘dragons’ never existed, being supernatural, but new ‘dragons’ existed? Or would we, rather, say that dragons existed, and we used to have a mistaken conception of certain aspects of dragons? I.e., would we say that we have refined our concept of just what dragons were like?

Consider: the concept ‘dinosaur’ has undergone various theoretical changes: the exact time when the dinosaurs were alive, what they looked like, whether they were warm-blooded, their social behaviours, and so on. Yet, it seems odd to say after every conceptual change: the old ‘dinosaurs’ didn’t exist, but the new ‘dinosaurs’ do. Rather, we say: before we had the conceptual refinements, we were referring to creatures that existed, and we now have a better picture of exactly what they were like.

Why can’t the same argument apply to the change from ‘dragons’ to ‘dinosaurs’? It seems the naming convention is somewhat arbitrary. To be sure, when significant conceptual changes are made, people often pick a new name for something. Yet, they might not. Regardless, the name doesn’t seem to be what’s important to the reference of a term.

The question of reference, however, obscures another issue: we pick new names because they are useful. In particular, they help in gaining a certain sort of conceptual clarity. The point is that the concept ‘dragon’ is significantly confused. The concept ‘dinosaur’ is (we believe) less so. We could simply transform the concept associated with the word “dragon” into the concept we now have associated with the word “dinosaur.” In the context of discovery, though, the introduction of new ‘pegs’ on which to hang the concepts is a significant practical consideration.

What is natural?

George Berkeley, in A Treatise Concerning the Principles of Human Knowledge (1710), gave the following definition:

“By Matter, therefore, we are to understand an inert, senseless substance, in which extension, figure, and motion do actually subsist.” (pt. 9)

Consider the following from Colin McGinn (transcribed from here):

“It’s very difficult to get across to people who are religious, that when you’re an atheist you mean you don’t believe in anything [like God] whatsoever. [...] You don’t believe in anything of that type. Nothing supernatural[.]”

I think that McGinn’s clean conceptual distinction is problematic. “Nature” is tied to what is “physical.” How do we know whether something is physical? My sense is that we know by whether it is part of our paradigmatically physical causal network – that is, whether it interacts with the physical causal network. This is how we come to ascertain that something is “physical.” We do not say that something must conform to a specific understanding we have a priori of, say, regularity, in order to be physical.

Basic idea: Whatever has a robust evidentiary basis and interacts with paradigmatically physical things will come to be classed as “physical,” whether it be by ferreting out the true nature of the phenomenon and discovering it works by recognized physical principles, or by expanding the concept of “physical.”

For example, George Berkeley’s definition of “matter” above is not the current definition. Our concept of matter changed due to physical investigation.

(Also see here.)

Philosophical arguments for the (non-)existence of God, and intuition

In The Varieties of Religious Experience (1902), in Lecture XVIII on Philosophy, William James says:

“If you have a God already whom you believe in, these [philosophical arguments for God's existence] confirm you. If you are atheistic, they fail to set you right. The proofs are various. The “cosmological” one, so-called, reasons from the contingence of the world to a First Cause which must contain whatever perfections the world itself contains. [...]

The fact is that these arguments do but follow the combined suggestions of fact and feeling. They prove nothing rigorously. They only corroborate our preëxistent partialities.” (p. 476)

I think much the same can be said of atheism – often the intuition (a conglomeration of various facts and feelings) that there is no “God” (or, a lack of an intuition that there is a God) comes first, the scientific or philosophical reasons or arguments are consequent – that is, they corroborate preëxisting partialities.

This topic moves into coherence theory and the nature of evidence.

(Also see here.)

McGinn, James, and Empiricism

“In my belief that a large acquaintance with particulars often makes us wiser than the possession of abstract formulas [...]”

from William James, preface, The Varieties of Religious Experience (1902).

Colin McGinn is asked what some of the best reasons are for not believing in God, and responds with (transcribed from here, at approx. 20:00):

“Well, the classic argument against [God] is the problem of evil. Even religious people find this one very uncomfortable. So the argument is simply, God is meant to be a being who is all-knowing, all-powerful, and all-good. So, how come there is suffering and pain in the world? Why does God allow it? God, if he’s all-good, thinks it’s bad that this should occur, would rather it didn’t occur, like any decent person, and yet he lets it occur. Now, that would be okay if he didn’t have the power to change it, but he’s all-powerful, we’re told by religious people he intervenes all the time in various ways. So, why doesn’t he intervene to prevent the death of a child or the torture of a prisoner? He doesn’t do it. So, you don’t want to conclude from that that God is actually quite a bad person – that’s a conceivable conclusion you might draw. But what you conclude from it is that the combination of these two characteristics is inconsistent. He’s all-good and he’s all-powerful – you need all-knowing too, because he has to know what’s going on – but it’s essentially the conflict between all-good and all-powerful and the existence of evil.”

Philosophers like easy ways of showing that something is absolutely a yes or a no based on conceptual arguments, instead of developing probabilistic arguments that rely on various strands of empirical evidence of varying degrees of strength.

For example, most contemporary theists’ conceptions of God probably aren’t centrally that of an omniscient-omnipotent-omnibenevolent being – rather, their concepts of God are centralized around various experiences, anecdotes, and so on, which fit in with daily practices (and so on) that seem to work, which they then might fit with theories they’ve heard or thought up based thereupon.

Arguments, then, against the existence of “God” that attack a particular theological conception must reckon with the fact that, if successful, all they’ve done is knock a somewhat artificial theological add-on off of an empirical body. It is as if I were to show that a contemporary theory of gravity is incoherent, and then declare “Gravity doesn’t exist!” The problem is that objects still fall when I release them, and so on, and these phenomena are what’s important and what give rise to various theories of gravity.

Central idea: what sustains and motivates theism is people’s experiences, not theological constructs like the above, which to a large extent supervene on the experiences, and so on.

Morality and rational self-interest

Christian systems, say, make what is right into what is in your rational self-interest through the causal structure of morality (see here). By making these coincide, they gain a potentially relatively robust motivational element. For example, stealing something might be ostensibly in your rational self-interest. The Christian system postulates that God will punish you for doing so, and reward you for not doing so. Therefore, it is actually in your rational self-interest not to steal (assuming that there is a God, and so on).

Contemporary secular academic philosophers’ moral systems have lost this dynamic. Because they are atheistic and so don’t include a Heaven, Hell, or Purgatory (say), any appeal to rational self-interest must be made in ‘this world’. Often, however, self-interest and the morally right thing are in conflict in this world. Realizing this, secular academic philosophers simply jettison rational self-interest as having to do with morality (for example, see here).

Instead, doing what isn’t in your rational self-interest is celebrated, and in particular they conflate what is ‘right’ at a societal level with what is right at a personal – or, properly speaking, moral – level. They thereby gain the appearance of having a personal motivation for overcoming self-interest, when they have really just introduced semantic confusion (for example, see here, or consider Rawls’ ‘veil of ignorance’ as a purported guide to determining the moral assessment of a situation, perhaps). Yet, because it is not plausible that a system which does not appeal to self-interest in some way (whether directly or indirectly) can sufficiently motivate people to moral action where what is right conflicts with apparent self-interest, their moral systems become, in a word, ‘academic’ and, indeed, parasitic on existing moral systems that do have relatively robust motivational elements.

To come up with a workable moral system, one must curb ostensibly self-interested behaviour that incurs a societal cost by showing how rational self-interest coincides with various societal goods. Not to have a motivational component leads, effectively, to moral nihilism.

Two levels of decision: moral and political

There are two levels relevant to many situations. The first is a personal level (and what in some cases is a moral level, properly speaking), the second is a societal level (a political level).

One can see the distinction by looking at something like voting. At the personal level, how one votes in a federal election (say), statistically speaking, will make no difference to who is elected. In this sense, it is irrational to vote. However, one might belong to an organization composed of thousands of members, who meet to decide which candidate to support. The organization’s decision, i.e., at the societal level, about which candidate to support very well may make a difference in terms of who is elected.

Conflating these levels leads to confusion. For example, sometimes I ask people why one should vote in, say, a federal election. My reasoning is: one’s vote will not make a difference to the outcome of the election. The response is often: if everyone thought that way, then it would affect the election. My response is then: how one votes doesn’t affect how everyone votes. Usually, there is no response after this point. There might be various other reasons to vote, and perhaps, depending on one’s situation, it is therefore rational to vote, but not because it will affect who is elected.

That one’s vote will not make a difference does not commit one to thinking voting, in general, is irrelevant. Rather, it means that one’s focus should be directed to the relevant, i.e., societal, level, if one in fact wants to change which candidate is elected. It is only through the amplificatory effects of a social organization that one’s efforts can lead to this result. A one-citizen, one-vote system precludes this possibility vis a vis one’s vote itself.

We can say: the relevant level of decision making for various problems is the societal one, not the individual one. To take voting again, in Canada voter turnout is about 60-65%. In Australia, a relevant decision was made at the societal level (a law was passed) such that, if one does not register at the voting booth, one is theoretically given a small fine. Voter turnout in Australia is around 95%, largely because of this policy.

This sort of logic extends beyond voting, to any situation where one’s action will not make a relevant difference, but a group of people’s actions will make a relevant difference.

Utilitarians make a similar error. They think that what is ‘morally right’ is what is best for the most in the long term. Yet if one responds, “Why should I act in such a way as to do what is best for the most?”, they have no response, except to repeat themselves.

For example, consider what Robert Wright says in a conversation with academic philosopher Peter Railton (transcribed from here):

“To me the moral premise is that human welfare, happiness, and so on, are better than the alternative. These are moral goods [...] and so I basically come out as a kind of utilitarian, thinking the more of that stuff the better, and [...] with that as a premise, you can go a long way. But if people say to me, “Where do you get that premise?” I have to stammer a little bit. On the one hand, I can say “Wait, you’re telling me you don’t think human welfare and happiness are better than the alternative?” and you rarely run into somebody who says “No, I don’t think that.” On the other hand, in a very technical, philosophical sense [... there is a question of] whether you can get a foothold that’s secure in a technical, philosophical sense, to do this kind of boot-strapping, and it leads to [the topic of] moral realism.”

With the distinction above, it is easy for one to agree that human welfare is better than not having human welfare, without thinking that actions that increase human welfare for the most people (i.e., Wright’s position) are stipulative of what is morally right. This tension can be resolved by noting that Wright is talking about a societal (political) level of decision, but describing it as if it were moral at the personal level. In the sense developed above, he is talking not about what is the morally (personally) right decision (i.e., what action is the right one for me to take?), then, but what might be a politically right decision (i.e., what action is the right one for a given social organization to take?).

Moral intuitions and universalism

Where do moral intuitions come from? According to the standard scientific worldview which many secular academic moral philosophers tacitly accept, they are evolved, and exist because of selection pressures in our near to distant evolutionary past. That is, moral intuitions come from the process of surviving, and in particular of surviving as social animals in a certain kind of context.

Understood in this way, moral intuitions tell us something important about social existence as a human being. The problem, however, is that the conditions in which we live nowadays are very different from the conditions under which those moral intuitions were selected for, and did most of their work.

What were those social conditions like? It seems that humans existed in small group environments, where practically everyone around us was someone we knew at some level, and who was part of our ‘group’. That is, almost all interactions involved a cause-and-effect relationship that was pretty direct and significant, with people with whom we would have repeated interactions, to whom we were closely related, and whom we knew somewhat.

This is why I am skeptical of applying moral intuitions, so understood, to various problems in the world today that involve us in very different circumstances. Where the cause-and-effect relationship of our actions in social situations was usually clear-cut and direct, very often now it is diffuse and involves difficult-to-understand causal situations. Where it was with people with whom we would have repeated interactions, now there are often one-off interactions. Where the people with whom we would interact were people we knew, now they are typically people to whom we are not closely related and whom we do not know. It is plausible to say that, once you change all these circumstances, our moral intuitions are no longer applicable.

Consider an example used by various contemporary academic philosophers: the drowning child in the pond. Say that on the way to work you are passing by a child in a pond who is drowning, and that the cost to you of saving the child will be that you ruin your suit ($50, say). If you don’t save the child, no one else will. Do you have a moral obligation to save the child?

If you answer ‘yes’, then an argument can be made: there is an analogous situation, that of a child dying from an easily preventable disease in some foreign land. For no greater cost ($50), you can save that child’s life. If you don’t, no one else will. Therefore, if you answer ‘yes’ to the first case, the argument goes, you are committed to saying that you have a moral obligation to help this distant child.

A simple way to see that there is something awry with this analogy is to note numbers. In the first case, it is presented as an unusual situation where you save a drowning child (which is what it almost always is in real life – children drowning in ponds one is walking by do not occur very often). In the distant example, however, in reality there is not one child, but rather a multitude. To make the first situation more like the implicit aspects of the second situation, one would need to revise it to something more like: every day while on your way to work, you pass by millions of children drowning in ponds. Do you have a moral obligation to save one? Two? As many children as you can? Just for today? Today and tomorrow? Every day?

What is happening here? We are moving away from a situation our moral intuitions have been designed to guide us in (someone near us in imminent danger) to a situation that starts to boggle the mind, and which our moral intuitions weren’t designed to navigate.

The move from applying moral intuitions primarily to a small group of people one has repeated interactions with, to applying them to people in (say) one’s city, to then applying them to people everywhere (or all animals everywhere), is the trajectory of a kind of universalism. That is, the position that our moral intuitions are supposed to apply to possible interactions with everyone, no matter how distant or tenuous our interactions with them might be. Some people see the move towards this kind of universalism as a tangibly good thing. It might be good in some sense, but it is (in terms of what the intuitions were ostensibly designed to do in a narrow sense) a misapplication of our moral intuitions.

One can still recognize that certain situations will be better for people involved if x is done instead of y, say, and work towards that. Yet, this sort of situation is not a moral one in the same sense. It is a political one, say.

Therefore, as one moves away from the situations our moral intuitions were designed for, it is reasonable to deny that the new situations are moral ones, properly speaking, within the standard secular context of thinking about morality. This is not an argument against kinds of universalism that do not rely upon a typical evolutionary understanding of the origin and purpose of moral intuitions. Rather, it is to protect oneself from the badgering of secular universalists.

Put another way, it seems that one must make a choice: accept the limited sphere of the applicability of moral intuitions, properly speaking, or come up with a different framework for moral intuitions. To do the former is unpalatable to the universalist ambitions of many secularists, to do the latter is to go out on a limb scientifically.

Exclusion and community

Some people think that exclusion is ‘bad’. I was discussing the topic with someone the other day, and his point was that exclusive communities are founded on acts of (emotional, social, or physical) violence. Therefore such communities are bad in some relevant sense.

My response was fairly straightforward: if ‘violence’ is defined in a weak enough sense, then the initial point is granted. Exclusion requires some method to exclude, and ultimately this usually depends on some mechanism to compel others away.

Yet, it is an incomplete analysis: to determine whether such exclusion is good or bad also requires looking at what value the communities so established are creating.

For example, if you want to create a successful orchestra, you have to exclude certain people from being hired. Turning down applicants probably will involve some disappointed hopes, feelings of sadness, and so on. This especially will be true as the organization gains status and more and more people want to join it. Yet, if one were not to exclude anyone, in very short order the orchestra would be ruined by poor players, or people who couldn’t play at all.

General idea: exclusion is required for the creation of most kinds of value.

This applies not only to human communities – take a human body: the developing organism must exclude large numbers of things, and create barriers to protect itself from those things.

Corollary: if one has a valuable community, as soon as one loses the will to selectively exclude in a relevant sense, one will start to lose the respective value that is in one’s community.

This does not mean that all kinds of exclusion make sense. If the orchestra above were to exclude good oboe players, say, that might not be a good idea (depending on how many they already had, and other factors).

My basic point can be summed up as: If you want to create value, there has to be some cost somewhere. This applies for almost anything of value, beyond just communities.

(It is relevant to note that various forms of ‘inclusion’ are in fact forms of exclusion. In many cases the number of people in an organization is relatively fixed, or the number of applicants exceeds the number of places available. In these cases, then, debates about ‘inclusion’ are more properly understood as debates about the precise form of exclusion.)

Ad Hoc Rationales and Legal Trends

If you have similar effects, it makes sense to look for a common cause.

Take legal trends. If you see seemingly coordinated legal changes in different countries (such as laws on abortion, pornography, homosexual marriage, and so on), it makes sense to infer a common cause. Typically, legal arguments are ostensibly based in particular laws (in the U.S., it might be the constitution, say). Yet, when the same trends are occurring in all sorts of countries, is this really a sufficient explanation? Instead, it seems that the constitution (say) is an ad hoc rationale for the change.

What is the common cause? These are ideas or sentiments which accrue in a legal elite and reach a critical threshold in those societies where the changes occur. This happens, then, by two distinct mechanisms: ideas or sentiments that flow between certain sorts of elites, or the coming into power of certain sorts of elites who have pre-existing ideas or sentiments.

Take homosexual marriage. It is being legalized in various countries not because of the particularities of the legal traditions in those countries, but because a significant portion of the legal elite have decided that it is a good thing. Once this sentiment occurs, these people then start looking around for ways to change the law, typically using whatever legal arguments are at hand.

The upshot of this is: arguing about these sorts of issues based on the constitution, or what have you, is really fighting a symptom. The cause is the prior ideas or sentiments which the group of elites have. If you want to change the trajectory of jurisprudence in a country, you need to look beyond the ad hoc rationales, towards the education or composition of the relevant elites.

Longevity as a marker of societal success

Sometimes increases in longevity are pointed to as a reason to believe that things are getting better – so, how robust is this reasoning?

Imagine two rectangles, one with a width of 90 and one with a width of 100. Which rectangle has a greater area? The answer is: you don’t know, unless you make assumptions about the heights of the rectangles. We can think of the width as longevity, the height as average quality, and the area as total value. Therefore, saying that one society has higher longevity rates than another says little about which society has more value, unless you make assumptions about average quality. More pithily, longevity in itself is an empty vessel.
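A minimal numerical sketch of the rectangle analogy (the figures are purely hypothetical, chosen only to illustrate the arithmetic, not data about any actual society):

```python
def total_value(longevity: float, avg_quality: float) -> float:
    """Total value = longevity (width) * average quality (height)."""
    return longevity * avg_quality

# Hypothetical societies: B lives longer, but A has higher average quality.
society_a = total_value(longevity=90, avg_quality=1.0)   # area = 90.0
society_b = total_value(longevity=100, avg_quality=0.8)  # area = 80.0
print(society_a > society_b)  # True: lower longevity, yet greater total value
```

The comparison flips or holds entirely depending on the assumed heights, which is the point: the width alone settles nothing.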

In a simple utilitarian calculus, the relevant question is: how good is the person’s life over that time? Increases in longevity could conceal that things are getting worse, so measured, in two specific ways:

a) It is possible that the marginal increases in people’s longevity at some point involve less quality, and

b) It is possible that degradations in the quality of people’s lives on the whole are occurring, in such a way that there is a net decrease in the sum of quality despite increases in longevity.

So increases in longevity, alone, not only don’t show that things are linearly getting better relative to the increase in longevity, but they don’t even show that things are getting better in general in one’s society. The question then is whether one has reason to believe that things are getting worse either marginally or in general in terms of quality.

The Problem of the Subjective Self

I sometimes hear people describe humans as a “bunch of molecules,” or something to that effect. For example, “Humans are ultimately just a bunch of molecules.” This is false, unless a strong emphasis is on ‘bunch’.

The reason is that the most obvious aspect of humans is our subjective experience. (‘Experience’ here does not mean ‘experience’ as in memory and practical skills developed, it means experience as in “I am having an experience right now.” ‘Subjective’ here is not used in the sense of something distorted due to personal bias, but rather as in a mode of being.)

This subjective experience is complex, unified, and real. For example, I might have a visual experience of a meadow. There are many entities in the visual experience (grass here, sky there, and so on), so it is complex in a certain sort of way. These things are in a visual experience, so they are unified. The experience itself is real (i.e., it exists), as opposed to it being an arbitrary grouping of external things (such as may be the case when we talk of a ‘bunch of molecules’).

There is something which must account for the unity of an experience. To one who believes that humans are nothing but ‘molecules’, there does not appear to be anything at hand that is a good explanatory fit.

This problem interconnects with several other problems, including: the unity of experience through a series, subjective being, and the qualitative nature of experience. The above problem of the unity of an experience is symptomatic – someone who describes humans as a ‘bunch of molecules’ will probably be unable to adequately answer any of these problems. This is why someone like Colin McGinn says that the qualitative nature of experience is inexplicable. McGinn is better than many, in that he acknowledges that there is a difficult problem (or set of problems) at hand, where the conceptual resources of physicalism do not seem able to afford an answer.

It might be important to note that the concept ‘physical’ has changed dramatically over the past, say, 400 years. With our current conception of ‘physical’, my working guess is that McGinn is right that the problem of the subjective self is an insoluble mystery for physicalists. That is to say, if one wishes to remain a ‘physicalist’ and solve these problems, something must change within one’s conceptualization of what ‘physical’ things can include.

McGinn, Atheism, and Workable Moral Systems

Colin McGinn, responding to a question about whether morality needs a God, says the following (from here – I’ve transcribed it):

“Some people often think that if you’re an atheist, you don’t believe in God, then you can’t have a morality. There’s no foundation to morality and morality’s in question and so forth. [...] It’s amazing to me that people in the current world still think that way, because that view was refuted 2.5 thousand years ago by Socrates, in the Euthyphro argument, where Socrates made the point that you can’t define goodness or rightness as what God commands, because the reason God commands it is that it’s right, it’s not right because God commands it. [...] So, God cannot be the foundation of morality in that sense.”

This is good as far as it goes, perhaps, but in a Christian moral system (say), God isn’t just invoked as the revealer of the moral rules. What McGinn doesn’t address is why one would act according to the moral principles, and how an atheist can replace God in this sense.

Replacing God in this sense requires the following two things:

1. Rewards or punishments for following or not following the moral rules.

2. A way to enforce the rewards or punishments.

This is because moral rules might conflict with self-interest. In response to the question “Why should I do what is right?”, McGinn has no response (see here). In contrast, one of the principal functions of God in a Christian system is to enforce the moral rules (either in this world, or by sending someone to Heaven, Hell, or Purgatory).

If it were up to McGinn, it is as if it were sufficient for a society to just discover and write up good laws for regulating social conduct, but then not write up any punishments for disobeying the laws, and not have a justice system capable of enforcing the punishments!

Some system.

(Also see here.)

Railton’s “near morality”

In a recent article, after outlining an evolutionary origin for moral intuitions (i.e., from various selfish and us-ish (inclusive fitness) mechanisms), Peter Railton – an academic philosopher from the University of Michigan, Ann Arbor – then concludes with:

“We still must struggle continuously to see to it that our widened empathy is not lost, our sympathies engaged, our understandings enlarged [...]”

My initial response is: on what basis? How do we arrive at these valuations, according to Railton’s view of morality, such that “widened empathy” is a moral good? What does it even mean to say that they are morally right things, except in some tautological sense (i.e., let us define what is right as that which expands certain fashionable ‘rights’ and so on)? There is no basis here for these value judgments, except secular whimsy or, indeed, simply an appeal stemming from these evolved mechanisms whose purpose has been to enhance (inclusive) fitness. On reflection, how can either be an adequate basis for normative claims like Railton is trying to make?

McGinn’s Moral System

From Colin McGinn’s Moral Literacy (1992):

“Some readers may be wondering, sceptically, why they should bother to be virtuous at all. Why not be a bad person? What reason is there for being a good person? The answer is, there is no reason – or no reason that cuts deeper, or goes further, than the tautology ‘because goodness is good’. The reason you should be virtuous and not vicious is just that virtue is virtue and vice is vice. Ultimately, what you get from virtue is simply … virtue. Virtue may also get you health, wealth, and happiness, but there is certainly no guarantee of that – definitely not – and in any case that isn’t the reason you should be virtuous.” (p. 95)

Let’s dig a little deeper into McGinn’s reasoning:

“Moral justification, like all justification, comes to an end somewhere. At some point we have simply to repeat ourselves, possibly with a certain emphasis, or else just remain silent. Virtue is, if you like, its own justification, its own reason: you can’t dig deeper than that. To the question ‘Why should I care about others as well as myself?’ the best answer is another question: ‘Why should you care about yourself as well as others?’ In the case of the latter question, the right reply is, ‘Because you are a person to be taken into account too’; and in the former case, the right reply is, ‘Because they are persons to be taken into account too.’”

McGinn here ignores indexicality, as opposed to, say, how some Christian moral system might incorporate it. Another answer to the question McGinn poses (‘Why should I care about others as well as myself?’) is: you will be rewarded for your virtue (or punished for your vice), sooner or later. This is possible because moral guidelines in such a Christian system are tied into a causal system that matters – i.e., the moral system is intended to be used, and so works with basic human psychology and action.

McGinn seems to pride himself on conjuring up a moral system that doesn’t matter, but this is a little too much like the academics who pride themselves on useless research because it can confer higher status. It turns out that their research is useful in a sense (for their own status). Perhaps what McGinn is trying to do is develop a moral system which is practical, in that exercising or expounding it shows that one is higher status. “I don’t even need appeals to my own well-being to motivate moral action!”, perhaps.

In this sense, McGinn’s moral system does have a reward (see here for a basic structure of moral systems): higher status is the practical, psychological justification for virtue in his system. To the extent a moral system like his will work, he needs something like this. The problem is that, as soon as people see the preening of a moral system like his for what it is, it starts to lose its higher status.

What makes a moral system work?

There are three major elements to the basic structure of a workable moral system:

1. A way to figure out what to do and not to do. Usually, this includes ideas or rules for long-term over short-term thinking, or ways to regulate social conduct so as to make social life better, or ways to cultivate an appreciation of or connection to aspects of the universe suggestive of divinity.

2. Rewards or punishments for following or not following 1.

3. A mechanism to enforce 2.

Christianity, for example, tends to implement these 3 elements as follows:

1. The 10 Commandments, and more generally the Bible and exegesis, and one’s conscience.

2. Heaven, Hell, and Purgatory, and statistical natural consequences in this life of following or not following 1.

3. God’s omniscience and ability to send one to Heaven, Hell, or Purgatory, and statistical natural consequences of following or not following 1.

2. and 3. are important for a moral system to have practical psychological import. Without them, a person asks “Why act according to these guidelines or rules?” and there is no reply. They end up not following the rules.

Darwin’s Idea?

I sometimes read that Charles Darwin’s theory of natural selection, or some later variation on it, was the primary cause of the rise of atheism among the elites in European society.

Yet, consider the following passage by John Herschel, one of the most prominent English astronomers of his day (Preliminary Discourse on the Study of Natural Philosophy, 1830):

“Nothing, then, can be more unfounded than the objection which has been taken, in limine, by persons, well meaning perhaps, certainly narrow-minded, against the study of natural philosophy, and indeed against all science, – that it fosters in its cultivators an undue and overweening self-conceit, leads them to doubt the immortality of the soul, and to scoff at revealed religion. Its natural effect, we may confidently assert, on every well constituted mind is and must be the direct contrary.”

Herschel’s book was written well before Darwin’s On the Origin of Species was published (1859). That he felt the need to spend a significant part of the first chapter of his book arguing for a link from ‘natural philosophy’ to ‘certain received religious ideas’ seems to suggest that he thought it a serious issue, despite his ostensible beliefs about their complementarity.

This suggests that the rise of atheism among the scientific elite started prior to Darwin’s theory. This further suggests that the rise in atheism after Darwin was, in some sense, driven by the same underlying cause, whatever the proximate causes may have been.