Non-monetized economies

Non-monetized economies are more important than monetized economies. When economists or game theorists ignore this, they make mistakes in their analysis of rationality.

A monetized economy is the sum of the transactions where money is exchanged. The money is a simple, quantified representation of an estimate of value. Person r pays $y for thing x, where statistically there is some association between the price y and the value of thing x. This makes it convenient to track estimated value.

The non-monetized economy includes everything from parent-child interactions to having a dinner party to gazing at the starry sky. In all these cases, there is creation of value, but there isn’t as simple a way to quantify an estimate of it. Yet, these non-monetized transactions or activities form the majority of what is valuable in a person’s life.

Consider the Ultimatum Game (see here). Part of the tension in the analysis is resolved once one realizes that the non-monetized payoffs can be much more important than the monetized payoff, and so therefore seemingly irrational behaviour can be rational (and, vice versa, seemingly rational behaviour when only looking at the monetized payoffs becomes quite irrational when looking at both monetized and non-monetized payoffs).

The problem with monetized economics is that it is radically incomplete. When people worry about monetized economics, they are worrying about only a part of the overall economy.

Science, Black Swans, and Insider-Outsiders

Nassim Taleb writes in the prologue to The Black Swan (2nd edition, 2010):

“Black Swans being unpredictable, we need to adjust to their existence (rather than naively try to predict them). There are so many things we can do if we focus on antiknowledge, or what we do not know. Among many other benefits, you can set yourself up to collect serendipitous Black Swans (of the positive kind) by maximizing your exposure to them. Indeed, in some domains – such as scientific discovery and venture capital investments – there is a disproportionate payoff from the unknown, since you typically have little to lose and plenty to gain from a rare event. We will see that, contrary to social-science wisdom, almost no discovery, no technologies of note, came from design and planning – they were just Black Swans. The strategy for the discoverers and entrepreneurs is to rely less on top-down planning and focus on maximum tinkering and recognizing opportunities when they present themselves.”

To an extent, this comports with my reading of a large number of important scientific discoveries. They come from a combination of a) preparedness, b) sensateness, and c) curiosity. The curiosity (c) is the tinkering Taleb refers to above. The sensateness (b) is being aware when something unexpected happens, i.e., recognizing opportunities when they present themselves. The preparedness (a) is a background of knowledge and skills that enables you to recognize something as unexpected and helps you investigate it.

Consider Taleb on the following:

“Think about the “secret recipe” to making a killing in the restaurant business. If it were known and obvious, then someone next door would have already come up with the idea and it would have become generic. The next killing in the restaurant industry needs to be an idea that is not easily conceived of by the current population of restaurateurs. It has to be at some distance from expectations. The more unexpected the success of such a venture, the smaller the number of competitors, and the more successful the entrepreneur who implements the idea. The same applies to […] any kind of entrepreneurship. The same applies to scientific theories – nobody has interest in listening to trivialities. The payoff of a human venture is, in general, inversely proportional to what it is expected to be.”

This reminds me of Seth Roberts’ insider-outsider idea in science – the idea that insiders (people with expert knowledge in an area) who are also outsiders (have markedly different personal resources than most experts in the area*) have an advantage. If a more-than-marginal breakthrough tends to come from “an idea that is not easily conceived of by the current population of” experts in a field, then it makes sense why an insider-outsider might have an advantage.

*Roberts defines ‘outsider’ as someone who has freedom due to differences in their career. I expand the concept here, then: having different freedom because of one’s career seems to be just one instance of being an ‘outsider’, i.e., someone who has markedly different personal resources than most experts in the area.

Percy Spencer’s curiosity was helped by his lack of typical, formal schooling in his area of expertise. In this sense, he was an insider-outsider – such that his “preparedness” and “curiosity” were different from most other engineers. This seems to have directly contributed to his success.

Sir Francis Galton’s Scientific Priesthood

Seth Roberts writes that:

[Doing experiments oneself] is what the Protestant Reformation was about: Speaking directly to God rather than waiting for “definitive studies” by experts that “quantify the risks and benefits”.

Consider Sir Francis Galton in English Men of Science (1874):

“As regards the future provision for successful followers of science, it is to be hoped that, in addition to the many new openings in industrial pursuits, the gradual but sure development of sanitary administration and statistical inquiry may in time afford the needed profession. These and adequately paid professorships may, as I sincerely hope they will, even in our days, give rise to the establishment of a sort of scientific priesthood throughout the kingdom, whose high duties would have reference to the health and well-being of the nation in its broadest sense, and whose emoluments and social position would be made commensurate with the importance and variety of their functions.” (p. 259)

Galton’s Men of Science and Contemporary Science

Sir Francis Galton was an English polymath of the 19th century. While doing research into the education of remarkable scientists of that period, I found out that he wrote a book called English Men of Science (1874). It largely consists of the results of a detailed survey he carried out of 180 of the most pre-eminent men of science then living in the United Kingdom (‘men of science’ is adopted in the book as a technical term for these men).

Part of his summary of the survey results as it applies to education is as follows:

“[M]y returns show that men of science are not made by much teaching, but rather by awakening their interests, encouraging their pursuits when at home, and leaving them to teach themselves continuously throughout life. Much teaching fills a youth with knowledge, but tends prematurely to satiate his appetite for more. I am surprised at the mediocre degrees [i.e., poor grades] which the leading scientific men who were at the universities have usually taken, always excepting the mathematicians. Being original, they are naturally less receptive; they prefer to fix of their own accord on certain subjects, and seem averse to learn what is put before them as a task. Their independence of spirit and coldness of disposition are not conducive to success in competition: they doggedly go their own way, and refuse to run races.” (p. 257)

Much of what Galton says here fits with the conclusions I have been drawing from my ongoing inquiry into the educations of various historical scientists and engineers (Charles Darwin, Michael Faraday, John Herschel, Percy Spencer, and most recently Alexander Bell). I was struck by the pattern forming from my very limited sampling: little or no formal schooling in their areas of research, absenteeism when they were in school, and either dropping out of school or being removed from school by their parents due to problems (Darwin and Bell due to lack of interest, Faraday after being beaten by a school teacher following consistent problems between himself and the schoolmaster).

When Galton says that “[m]uch teaching fills a youth with knowledge, but tends prematurely to satiate his appetite for more,” this probably happens because it tends to destroy a sense of curiosity that might develop more naturally about something, such as chemistry (Faraday), astronomy (Herschel), biology (Darwin), acoustics (Bell), or radio (Spencer), where the sparks for curiosity and personal motivations are diverse and not conducive to being established by fiat by compulsory schooling. If the curiosity is already developed, then my guess is that some limited, voluntary schooling in that area may be a (significant) asset. (My guess is that even here, meeting like-minded individuals and accessing expensive equipment may be one of the biggest advantages of schooling in this sense – not the curricula, or grading, and so on.) Compare where Galton says “men of science are not made by much teaching” to our current day, where a typical scientist has 20 years of ‘teaching’, i.e., schooling, if not more.

The idea that more schooling = more scientific advancement, or that more funding for schooling is the answer to the question of how to increase scientific advancement, is more and more clearly becoming fatuous to me. The low “advancement : money + time” ratio we now experience relative to 19th century England is probably partially due to the rise of formal schooling in the early 20th century, and is now perpetuated by entrenched interests (an interesting counter-development is the relatively rapid increase in home-schooling rates in the United States very recently).

Percy Spencer, Education, and Technological Advancement

In testing the idea that there has been more change in the day-to-day technological reality from, say, 1860 to 1910 as opposed to 1960 to 2010 (see the last four paragraphs here for a discussion of this), one possible change that I thought of in favour of the latter was the microwave oven. I imagined them appearing in the mid-1970s.

Not quite: microwave heating was discovered in 1945, and the first commercial microwave oven was invented in 1947. It was initially sold to commercial food operators due to its costs and immense size, but the first domestic microwave oven came on the market in 1955. The first compact microwave oven was introduced in 1967. So, it may just marginally count – a person in 1960 probably would have heard of microwave ovens, but they weren’t that common.

From this, I found out about Percy Spencer, the discoverer of microwave heating. Spencer was one of the most accomplished American engineers of his time, and was Senior Vice-President of Raytheon Corp. What was his schooling like? The answer is: he had several years of conventional schooling, which didn’t occur until he was 18 or 19.

His education (primary source: Reader’s Digest, August 1958, p. 114, here):

– Born in 1894. His father dies when Percy is 18 months old.

– Is raised by aunt and uncle on rural farm in Maine.

– Learns how to chop wood, hoe, saddle a horse, help with the preserving, skin a deer, saw a straight line, and improvise solutions to the various problems that arise on a farm.

– At 12, he starts working at a spool mill.

– At 16 (1910), although he has no formal knowledge of electricity, he signs up to be one of three men to install a new electricity system at the local paper mill. He learns by trial and error, and becomes a competent electrician.

– At 18 (1912), he joins the U.S. Navy with the intent of learning wireless telegraphy, gets hold of textbooks (from which supposedly he teaches himself in significant part while standing watch at night), and enters the Navy’s radio school.

– He gets a position at Wireless Specialty Apparatus Co.

– He gets a position at Raytheon Corp., where his experiments bring him into contact with many of the best physicists at M.I.T.

Several aspects of his accomplishments I found particularly interesting:

1. Spencer led Allied efforts to produce working models of combat radar equipment during World War II, which at that time was the highest-priority U.S. military research project after the Manhattan Project.

2. In the story of how he discovered the heating properties of microwaves, he was visiting a lab where experiments with magnetrons (tubes that generate high-frequency radio waves) were being done. He noticed that a chocolate bar in his shirt pocket had begun to melt. Other scientists had noticed a similar sort of phenomenon when working with the tubes, but Spencer followed up by doing further experiments specifically aimed at rapidly heating food.

This is similar to another incident. While Spencer was experimenting with photoelectric vacuum tubes, one developed a small leak. Instead of simply discarding the tube, he was curious what effects the leak would have on the functioning of the tube. From this he discovered that the tube’s photoelectric quality had increased roughly ten times.

These sorts of examples point to a hypothesis about scientific discoveries: they come about from a specific combination of preparedness, sensateness, and curiosity. Which leads to the next point:

3. Relatedly, much of Spencer’s work was marked by an insatiable, boyish curiosity.

I have my doubts as to whether this sort of curiosity is supported by typical, extended contemporary schooling. Consider this comment about Spencer:

“[A]n M.I.T. scientist explained to me how Spencer operates: ‘The educated scientist knows many things won’t work. Percy doesn’t know what can’t be done.’”

I think this is backwards. From all accounts, Spencer was better educated than most peer engineers, and that is one of the reasons why he didn’t develop mistaken beliefs such as those the M.I.T. scientist was alluding to.

4. He meets pre-eminent scientists not by becoming an academic scientist, but by going to work for a company where in turn he is exposed to scientists from M.I.T.

This is similar to Faraday, who first met pre-eminent Continental scientists not by being a pre-eminent scientist himself (at this point), but by being the amanuensis to Sir Humphry Davy while Davy traveled the Continent.

Charlton and the Scientific Bubble

I have put forward, in passing, the idea that the ratio of “advancement : money + time” is lower than it was in the 19th century in a place like England. This is based on the astounding discoveries made by a relative handful of men in that time. I have put forward a few reasons why that may be so: education, flexibility in terms of research and exploration (and in particular how this allows for focusing on problems a scientist is more personally motivated about), and less bureaucracy. (See here for more on these reasons – the last comes from Bruce Charlton, and is discussed in passing here.)

Charlton puts forward a more striking hypothesis, however: it’s not just that the ratio is lower, but the absolute amount of scientific advancement may be lower.

Because we have more professional scientists and more science journals, and more journal articles, we assume there is more advancement.

“[T]he expectation of regular and frequent publication would only make sense if it was assumed that scientific knowledge was accumulating in a predictable fashion.”

How would we really know, however?

“Who could evaluate whether change in science and increased amounts of self-styled scientific *stuff* actually corresponded to more and better science?”

We can’t use the growth in the professionalization of science, or the growth in journals, as direct evidence, unless we make various assumptions about these things and their relation to advancement.

Charlton says we can think of journal publications as paper money – just as in an economy, paper money can grow and grow while not maintaining equivalent value, just so in science: the publications, which are supposed signs of value, may become detached from actual scientific value, i.e., advancement.

Charlton thinks science is in a similar situation to a bubble economy:

“In science, what masquerades as growth in knowledge (to an extent which is unknown, and indeed unknowable except in retrospect) is not growth in knowledge but merely an expansion of *stuff*, changes in the methods of counting, and so on.”

He continues:

“Multiple counting is rife: progress is claimed when a grant is applied for and also when a grant is awarded, and even when the work is still happening – since scientific progress is assumed to be predictable – a mere function of resources, capital and manpower; credit for a scientific publication is counted for all of its (many) authors, for all the many forms in which the same stuff is published and republished, for the department and also for the university where it was done, and also the granting agency which provided the funds and for the journal where it was published – everyone grabs a slice of the ‘glory’.

Credit is given for the mere act of a ‘peer reviewed’ publication regardless of whether the stuff is true and useful – or false and harmful.”

I continue to quote at length:

“If science is really *hard*, then this fact is incompatible with the professionalization of science – with the idea of scientific research as a career. Since science is irregular and infrequent, science could only be done in an amateur way; maybe as a sideline from some other profession like teaching, practicing medicine, or being a priest.

Professional science would then be intrinsically phony, and the phoniness would increase as professionalization of science increased and became more precisely measured, and as the profession of science expanded – until it reached a situation where the visible products of science – the *stuff* bore no relationship to the reality of science.

Professional scientists would produce stuff (like scientific publications) regularly and frequently, but this stuff would have nothing to do with real science.

Or, more exactly, the growing amount of stuff produced by the growing numbers of professional science careerists, whose use of hype would also be growing – the amount of this stuff would be so much greater than the amount of real science, that the real science would be obscured utterly.

This is precisely what we have.”

He then ties this into the recent economic bubble:

“The economy was collapsing while the economic indicators improved; and science can be collapsing while professional science is booming.”

He then makes a further move:

“But if science is very difficult and unpredictable, and if the amount of science cannot be indefinitely expanded by increasing the input of personnel and funding, then perhaps the amount of real science has not increased *at all* and the vast expansion of scientific-stuff is not science.

If so, then the amount of real science (intermittent, infrequent, unpredictable) has surely not stayed constant but will have actually declined due to the hostile environment. At the very least, real science will be de facto unfindable since the signal is drowned by ever-increasing levels of noise.”

He concludes:

“But when the scientific bubble bursts, what will be left over after the explosion? Maybe only the old science – from an era when most scientists were at least honest and trying to discover the truth about the natural world.

[…] At the very least, science would be set back by several decades and not just by a few years. But it could be even worse than that.”

I think Charlton is right in saying that gauging scientific advancement by looking at the number of people working in science, or the number of journals or publications, can be misleading – these could even be negative indicators depending on one’s views of professionalization or the problems of signal-to-noise ratios, as Charlton notes.

What would be a more useful metric? I don’t think there are simple demarcations between technology and science, but technology is caught up in science to some degree. Technology is where scientific ‘knowledge’ is put to the test, and often refined.

Many people talk about how technological change is speeding up, and in part this is because journalists like talking about supposedly new things. In our time, though, pretty much everything in our day-to-day technological reality would be familiar to someone from around 50 years ago. The big exception is the development of computers and the Internet. Near the end of the 19th century, dramatic technological changes involving electrical power, the internal combustion engine, airplanes, and wireless telegraphy (i.e., radio), to name a few, were taking place.

My working guess for the peak of technological change, in terms of how it affects people, would be around then.

Faraday’s Education and the Scientific Dark Age

Michael Faraday was the leading experimental scientist of his time, and along with James Clerk Maxwell was one of the founders of electromagnetism. Similar to Darwin, his education was nothing like the standard schooling route used by most people who now become scientists. It is my hypothesis that the standard schooling route nowadays contributes to the very low “advancement : money + time invested” ratio we now have in science. Consider Faraday’s education:

– Goes to a day-school for several years.

– At 13 becomes the errand-boy for a local bookbinder, Mr. Riebau.

– At 14 becomes an apprentice of the bookbinder.

– During this time he reads from the books he is working with that interest him, including ones on reasoning, chemistry, and electricity.

– Curious about some of the experiments he has read about, he begins his own experiments in chemistry and electricity in the kitchen of his own house. (He later delivers his first lecture from the end of his kitchen table to friends.)

– At the same time, he meets a Frenchman lodging with the bookbinder, M. Masquerier, who is an artist. Masquerier teaches Faraday perspectival drawing, and lends him his books.

– One day while out walking Faraday sees a notice for lectures on natural philosophy by Mr. Tatum. He can’t afford it, but his brother, a blacksmith, gives him enough money to attend. There he meets a variety of like-minded people, and chats and strikes up correspondences.

– He meets Mr. Dance, a member of the Royal Institution and customer of the bookbinder, who takes Faraday to hear four lectures by Sir Humphry Davy, one of the leading scientists at the Royal Institution.

– At 20, his apprenticeship expires. He writes Davy, seeking employment at the Royal Institution. Soon after, Davy’s laboratory assistant gets in a brawl, and is discharged. Faraday is interviewed, and is hired as Chemical Assistant at the Royal Institution.

– Faraday promptly nearly gets killed in an experiment, as Davy is working with “chloride of nitrogen”, which explodes at seemingly random points.

– He is admitted as a member of the City Philosophical Society, a group of 30-40 men who meet to discuss various topics.

– Mostly from this group, he draws half a dozen or so people to form a “mutual improvement plan”; they meet in the attics of the Royal Institution, among other places. These people

“met of an evening to read together, and to criticise, correct, and improve each other’s pronunciation and construction of language. The discipline was very sturdy, the remarks very plain and open, and the results most valuable.” (p. 22)

– After 7 months as laboratory assistant, Davy decides to travel to the Continent (having received a special pass from Emperor Napoleon), and offers to take Faraday as his amanuensis (transcriber). For 1.5 years, they travel from France to Italy to Switzerland, and then return by the Tyrol, Germany, and Holland, doing experiments, meeting natural philosophers, and absorbing the culture. As Gladstone puts it:

“This year and a half may be considered as the time of Faraday’s education; it was the period of his life that best corresponds with the collegiate course of other men who have attained high distinction in the world of thought. But his University was Europe; his professors the master whom he served, and those illustrious men to whom the renown of Davy introduced the travelers. It made him personally known, also, to foreign savants, at a time when there was little intercourse between Great Britain and the Continent; and thus he was associated with the French Academy of Sciences while still young, his works found a welcome all over Europe, and some of the best representatives of foreign science became his most intimate friends.” (p. 27)

(Extracted from Michael Faraday, by J.H. Gladstone (1872).)

The pattern seems to be a development of autonomous interest in scientific topics and experimenting – not compulsory schooling.

The Incredible Success of the “Amateur” Scientist Model

In the prologue to the 2nd edition of The Black Swan (2010), Nassim Taleb says:

“In the past, for better or worse, those rare philosophers and thinkers who were not self-standing depended on a patron’s support. Today academics in abstract disciplines depend on one another’s opinion, without external checks, with the severe occasional pathological result of turning their pursuits into insular prowess-showing contests. Whatever the shortcomings of the old system, at least it enforced some standard of evidence.”

This applies to science. Consider Michael Partridge’s introduction (1966) to John Herschel’s A Preliminary Discourse on the Study of Natural Philosophy (1830), where Partridge says:

“Herschel studies mathematics for the joy of it, not caring that his profession was to be the law! Science might appeal even to a man without such training, and the English scientific amateur of that day and age did not need to be a dilettante; he might be a serious contributor, but as free, as erratic, as eccentric as he pleased, for he had only himself to please.”

Herschel was one of the most accomplished astronomers of his day, but he was an English scientific amateur (i.e., he was not paid to do science, was financially independent, and had other sources of status). Charles Darwin, who was influenced by Herschel’s writing, was also a scientific amateur. Both of them studied, researched, and innovated in their leisure time.

The early 19th century, particularly in Britain, was one of the greatest times for science in history (per dollar and hour of research it was much more effective than contemporary science), and this was due in significant part to the “amateur” scientists (like Herschel or Darwin) of the time. Why does the amateur scientist model work? Here are three possibilities:

1. There is more flexibility in research and exploration, and in particular a scientist can work on a problem he is more personally motivated about. (See Seth Roberts’ paper linked here for a discussion of some of these sorts of considerations.)

2. It involves less bureaucracy.

3. An amateur scientist can have more and different influences on him.

To expand upon 3., most Professors of Philosophy follow a certain kind of trajectory: 12 years in elementary and high-school, a B.A. in Philosophy, a possible M.A. in Philosophy, then a Ph.D. in Philosophy, then teaching and research as a Post-Doc in academic philosophy, then Assistant Professor in Philosophy, and so on, until full Professor or Research Professor in Philosophy. This creates a poverty of personal resources in the profession. The same goes for almost any academic (including scientific) research field.

For a comparison, here is Charles Darwin’s early career path (this is from work by David Leff):

– Did a lot of hiking as a child.

– Spent time with his brother in a make-shift back-yard chemistry lab.

– Practically flunked out of the equivalent of high-school (his father removed him because of poor grades).

– Went to Medical School in Edinburgh. Was bored, and dropped out.

– During this time and outside of medical school, though, Darwin:

– learned how to stuff animals (taxidermy), simultaneously meeting someone from South America,

– read The Natural History of Selborne while hiking northern Wales,

– hung around the Natural History Museum and befriended the curator,

– joined the Plinian Society (a society which debated the merits of various scientific approaches),

– and took long walks with Zoology Professor Robert Grant.

– After dropping out of Medical School, he went to Cambridge University to study to join the clergy.

– He skipped most of his lectures, instead shooting birds, wandering the countryside collecting and classifying beetles (entomology), and going to dinner parties where, among other things, he met the Reverend Professor John Henslow.

– At the dinner parties the Professor gave informal lessons to the assembled upper class students on matters of science, and invited Darwin to his botany lectures. Darwin then went with Henslow on scientific excursions to the countryside.

– During this time Darwin also went with Professor Adam Sedgwick on a geological tour of North Wales in the summer.

– He read various extra-curricular books that excited his imagination about possible scientific discovery.

Darwin’s time at Cambridge is humorously summarized on his Wikipedia page as “Studies at the University of Cambridge encouraged his passion for natural science” (and this uses the same reference as linked to above), when it was precisely his extra-curricular activities that did this.

Darwin did not spend 12 years in elementary and high-schooling; he did not get a B.Sc. (he rather received a B.A.), nor an M.Sc., nor a Ph.D. (the closest equivalent to the latter was, of course, spending 5 years traveling around the world in a boat).

The standard formal schooling route used by most people who then become scientists is probably sub-optimal as a means to innovation and discovery.

Charlton, a New Pragmatism, and the Philosophy of Investigation

I was once in a talk with the philosopher of science Susan Haack, and someone asked: “What is your definition of science?” Her reply was something like: “Doing your d***ed best to figure things out.”

Bruce Charlton says many interesting things about epistemology and philosophy of science in his post here. The views he outlines could be called a new pragmatism.

Consider:

“[T]he failure to answer philosophical questions has led to the arbitrary manufacture of ‘answers’ which are then imposed by diktat. So that a failure to discover scientific methodology led to the arbitrary universal solution of peer review (science by vote), the failure to understand medical discovery led (inter alia) to the arbitrary imposition of irrelevant statistical models (p < 0.05 as a truth machine).”

(Also see here for a specific discussion of Randomized Controlled Trials.)

He continues:

“Yet, science is not a specific methodology, nor is it a specific set of people, nor is it a special process, nor is it distinctively defined by having an unique aim – so that common sense does lie at the heart of science, as it does of all human endeavor.”

‘Science’ is an instance of investigation, and has much to learn from pursuits such as computer coding, cooking, figuring out what the problem is with a motorbike, or, really, figuring anything out. What’s important about science can be informed from many different pursuits, and people are doing ‘science’ all the time without calling it such. In a way, there’s nothing distinct about science.

Some scientists don’t like this, because it makes what they do sound less rarefied, and less ‘high status’.

Science – when it succeeds – is much more about what one could call ‘common science’ – figuring things out using techniques and tools which are common to and come from a large number of things we do, and not from a refined epistemology.

Charlton continues:

“It is striking that so many writers on science are so focused on how it is decided whether science is true, and whether the process of evaluating truth is itself valid. Yet in ‘life’ we are almost indifferent to these questions – despite that we stake our lives on the outcomes.”

“How do we decide who to marry? How do we decide whether it is safe to walk down the street? How do we decide whether or not something is edible – or perhaps poisonous?

Such questions are – for each of us – more important, much more important, than the truth of any specific scientific paper – yet (if we are functional) we do not obsess on how we know for certain and with no possibility of error that our answers are correct.

[…] The result of detached, isolated, undisciplined philosophical enquiry is usually to undermine without underpinning.”

He concludes with:

“Epistemology currently functions as a weapon which favours the powerful – because only the strong can impose (unanswerable) questions on the weak; and ungrounded and impatient epistemological discourse is terminated not by reaching an answer – but by enforcing an answer.”

The charge that something “isn’t scientific” – to the extent it is a useful phrase – is probably better rephrased as “this doesn’t fit with other things we’re pretty sure about, so how can you square them?” or “have you really tried to figure out if this is true?” or “there’s an obvious competing theory that can explain this, why do you prefer this theory instead of that one?” Phrasing the question this way moves science away from a hallowed – and somewhat inaccessible – realm, and into the chaotic, muddy, life-and-death world of everyday investigation.

Cooking as science

In an e-mail I sent recently, I said that “cooking is prototypical science.” Then while trying an experiment involving caramelizing some onions (my hypothesis didn’t work out that well), I thought to myself: “That’s not quite right. Cooking isn’t prototypical science. Rather, cooking is science.”

Cooking is a branch of science. Why? Because it is a form of investigation of the natural world. Consider a standard definition of science: “systematic knowledge of the physical or material world gained through observation and experimentation.”

Does cooking pass this test? Cooking involves observation (“my mushrooms are sticking to the pan”). It also involves large amounts of experimentation (“does a little bit of butter cause them to not stick to the pan?”). It is of the physical world (I can verify by sticking my hand on the pan). It can be, and is, systematized (personal cooking books, published cooking books, cooking schools, and so on).

Just because cooking involves “higher-level” entities (foods) doesn’t mean it isn’t science. Is chemistry not a science because it deals with higher-level processes than certain branches of physics?

It might be objected that cooking’s object isn’t knowledge of the universe, but rather that this knowledge is a mere by-product. Does one say that medical research isn’t science because its object is to help people, and not first and foremost to gain knowledge of the universe?

Cooking is a particularly good science, because a) it can be done on a personal scale (no big bureaucracies required), b) experiments can be repeated and modified relatively easily (lots of trial and error rapidly and with little cost), and c) there is an immediate practical benefit to it (see Seth Roberts’ paper on science linked to here for a discussion of these sorts of factors).

I can be a little more precise. There are the experiments, bold hypotheses, crushing failures, soaring victories, and increases in knowledge that form one part of cooking. Then there is the use of this knowledge to make meals, which is more technology than science.

Human capability, exploration, and Charlton’s hypothesis

Bruce Charlton, Professor of Theoretical Medicine at the University of Buckingham, puts forward the provocative thesis that:

“human capability reached its peak or plateau around 1965-75 – at the time of the Apollo moon landings – and has been declining ever since.”

He argues for this based, among other things, on the fact that humans have not been to the moon since 1972. As he continues:

“I presume that technology has continued to improve since 1975 – so technological decline is not likely to be the reason for failure of capability. […] But since the 1970s there has been a decline in the quality of people in the key jobs in NASA, and elsewhere […]. And also the people in the key jobs are no longer able to decide and command, due to the expansion of committees and the erosion of individual responsibility and autonomy.”

This may be true – that the quality of people at NASA has declined since the 1970s for a variety of reasons, and that the increase in bureaucracy has made it more difficult to carry out missions like sending people to the moon. I’m not qualified to judge.

My objection to this particular example, however, is simple: the difference is precisely about the advance in technology. Charlton is right that technology hasn’t declined, but it is exactly because technology has advanced that sending humans has become outmoded. That is, we no longer send humans because robots do a better job relative to cost, and this now includes setting down and exploring on the ground. It is the advances in robotics and computerization since the period Charlton looks back on that have made robots a better choice for this kind of exploration.

It is true that after the 1970s, there was a significant period of time during which the U.S. did not send robots to the moon. This ended with missions in 1994 and 1998. The U.S. isn’t the only country to have recently sent robots to the moon: Japan, Europe, China, and India have all sent robots in the 2000s.

Not only has the U.S. sent robots to the moon since the 1970s, it has sent robots that have touched down and then actively explored the surface of Mars. This includes the Spirit Rover and the Opportunity Rover, the latter of which is actively exploring the surface of the red planet.

If relevant technologies get better, then robots get better and better as choices for these sorts of missions. Humans may not return to the moon in the near future, but an explanation for this is at hand outside of the reasons Charlton gives.

The Uselessness of French – Schooling and Post Hoc Justifications

Many parents tell their children that schooling is important. Otherwise, why would they be forced to go?

French is taught in elementary and high-schools in Canada in large part because the country is officially bilingual. Although decisions are made largely on a provincial basis, the federal government provides funding incentives for provinces to encourage bilingual education. Not surprisingly, then, the decision to teach French is largely a political one, not one based on the immediate practical concerns of students who might be spending hours upon hours learning it.

Parents, for their part, engage in post hoc justifications for this. (A post hoc justification is one made after the fact.) Why are their children being forced to learn French? “You are forced to spend hours learning this language not because of a political reason, but because it’s useful.”

French is useful the way learning any language is useful. The large majority of people in Canada outside of Québec don’t speak French. In a place like Vancouver, for example, it is more useful to know Mandarin, Cantonese, Punjabi, Korean, Japanese, Spanish, and probably several other languages before one gets to French. That is, relatively speaking, learning French is useless for someone living or working in Western Canada.

Consider: one might be able to have some conversations with people in one’s own province in French once in a while. One might be able to travel to a country and better navigate the place or have conversations with people. One might be able to read literature one otherwise would not be able to. Yet, these sorts of considerations could be said for almost any language, and in many cases more so, depending on someone’s situation.

The claimed usefulness of schooling in general is also often a sort of post hoc rationalization. Many parents find the idea of having several hours of free time away from their children attractive (especially with something that has no extra marginal cost, such as public schooling), as this will allow them to make more money, clean the house, and so on.

Since they like this idea, they then look around for justifications for sending their children to school: it teaches important and useful things in a better way (largely false – there are much better ways to spend one’s time); it’s important for proper socialization (largely false); it’s required so you can get into university (nowadays, this is false); and university is important so that you can make good amounts of money (again, largely false – what universities teach is largely inapplicable to generating money, and there is a large selection effect which skews average incomes for people with university degrees upward, i.e., people who get into university tend to be more motivated and so on, and so tend to make more money – the causal chain is largely the opposite of what most people think).

Reclaiming Rationality and the Ultimatum Game

The Ultimatum Game (here) is as follows:

There are two players, and an amount of money they can divide. The first player proposes how to divide the sum between the two players, and the second player can either accept or reject this proposal. If the second player rejects, neither player receives anything. If the second player accepts, the money is split according to the proposal. The game is played only once so that reciprocation is not an issue.

Let’s say the amount is $100. If player-1 proposes a split of $99 for player-1 and $1 for player-2, is it irrational for player-2 to reject the offer?

The reasoning ‘yes’ usually goes as follows: if player-2 accepts, he gets $1. If he rejects, he gets $0. $1 > $0. Therefore, it is rational for him to take the $1.

The obvious problem is: the payout includes emotions, which are not included in the explicit analysis of the payouts of the game above. Rejecting an offer such as 1% of the total sum in the case above may lead to a feeling of retribution, for example. A feeling of retribution is a good. This is not included in the calculus of the payouts above.

Rationality comes from ratio, which involves comparing two things. In this case, the relevant ratio is “utility if accept” : “utility if don’t accept”. Emotion can impede an accurate assessment of the ratio, but here the feeling of retribution is part of the value on one side of the ratio. It is no longer $1 : $0, but $1 + emotional payout : $0 + emotional payout. It is easy to imagine how the right side of the ratio might become a lot bigger than the left side.
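To make the comparison concrete, here is a minimal sketch (my own illustration, not part of the game’s standard formulation) that treats a player’s total payoff as the monetized payoff plus a non-monetized, emotional payout. The dollar amounts come from the example above; the emotional values are made-up numbers purely for illustration.

# Minimal sketch: Ultimatum Game payoffs with a hypothetical emotional component.
# The emotional values below are illustrative assumptions, not measured quantities.

def total_payoff(money, emotional_payout=0.0):
    """Total payoff = monetized payoff + non-monetized (emotional) payout."""
    return money + emotional_payout

# Monetized payoffs only: accepting $1 beats rejecting ($0).
accept = total_payoff(1)                       # 1
reject = total_payoff(0)                       # 0
print(accept > reject)                         # True -> "rational" to accept

# With hypothetical emotional payouts: accepting an insulting offer carries a
# cost (resentment), while rejecting it carries a benefit (retribution).
accept = total_payoff(1, emotional_payout=-5)  # -4
reject = total_payoff(0, emotional_payout=+3)  #  3
print(accept > reject)                         # False -> rejecting now wins

On these (made-up) numbers the ratio flips: once the emotional payouts are counted, rejecting the $1 is the higher-payoff choice.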

The bigger question I have, then, is: is a player being irrational if he accepts the offer? Based on the various payouts not included in the explicit description, my guess is yes, he is being irrational in accepting the specific kind of offer above. Not only is it not irrational to reject the $1, but he will be much better off rejecting the offer.

The above is about emotions in the payout, but what about being emotional while assessing the payout? Although being emotional when assessing payouts can interfere with one’s assessment, emotion can also inform one’s assessment: emotion is basically a mechanism we have for synthesizing (large amounts of) information – information that would be intractable for the ‘rational’ part of the brain to work through, or that we have gained (presumably) through an evolutionary process. A feeling of retribution is telling us important information, in this case about what the appropriate response is to such a proposed sharing of resources in a more natural situation. Although in this situation the feeling of anger at such an offer may be outmoded, the feeling of retribution one gains from spurning the offer is nonetheless still a good in itself.

Why do we have a feeling of retribution, or other emotional aspects of a payout either way? Humans are social animals, so fair sharing is an important part of surviving, and emotions related to that are an important part of survival. We can’t turn these emotions on or off at will. Therefore, they need to be included in an analysis of the payouts from games like these. Why is money important? Because it brings goods that are usually emotional, such as feeling good when drinking a coffee, or feeling good because one has higher financial status, and so on.

Reclaiming rationality means thinking more about emotion in payouts and assessment.

Thanks to Sacha for bringing the Ultimatum Game to my attention.

The Case of the Missing Category Error

Much discussion about fictional literature implicitly invokes a category error. A category error is where you take a concept developed to work with one kind of thing and apply it to another kind of thing, where there might be important differences. In this case, most of the concepts discussed in literature have been developed to apply to biological entities (i.e., humans) or real situations, but they are being applied to fictional ones.

For example, in a fictional murder mystery, did suspect so-and-so really murder so-and-so? This question has no answer. You can answer: what the author intended, what fictional ‘facts’ presented cohere better with one hypothesis or another, and so on. Yet, there is no answer to the question, because ‘murder’ is meant to apply to a real-world situation, and this is fiction.

Debates like these are like a joke. The setup is a debate about some part of a fictional book or play and so on, and the punch-line is the realization that one is making a category error.

Being aware of category errors could be useful in thinking about various thought-experiments in philosophy, because thought-experiments are fiction.