Why is a Raven Like a Writing Desk?

(An Allowable Psychologism?, pt. 1)

In Alice’s Adventures in Wonderland, the Mad Hatter asks Alice, “Why is a raven like a writing desk?” After several pages of delay the Hatter finally admits, “I haven’t the slightest idea.” Cracked characterizes this as “one of the biggest dick moves in literature.” Apparently, Lewis Carroll was plagued with demands for an answer to the riddle. In response to the numerous accusations of a “dick move” (or the Victorian equivalent thereof), he addressed the issue, saying, “Because it can produce a few notes, tho they are very flat; and it is never put with the wrong end in front!” This is disappointing. However, “never” was apparently originally written by Carroll as ‘nevar’, which is indeed ‘raven’ “with the wrong end in front” (i.e. ‘nevar’ is ‘raven’ backwards). This is a little better, but still not satisfying. WiseGeek suggests that the answer is simply that neither is made of cheese, which, if this post looks TL;DR, is close enough. But if you’d like to see the specifics of my answer, and the possible resolution of some philosophical paradoxes, then please, read on.


Paradoxes, so many paradoxes

It doesn’t seem likely that the answer to one mathematical paradox and one logical paradox (and a few others besides) could be illustrated by, and ultimately help solve, a flippant, throwaway riddle from Alice’s Adventures in Wonderland. That being said, stranger things have happened in Wonderland, and truth is stranger than fiction, so here we go.

Paradoxically, I want to start my discussion of the aforementioned mathematical and logical paradoxes with another paradox entirely. Epimenides’ Paradox is a version of what has come to be known as the Liar’s Paradox. Epimenides lived in Knossos, Crete, around 600 BC, and is credited with the original version of this paradox. Doug Hofstadter (1979)[1], in his book Gödel, Escher, Bach, puts the paradox like this:

Epimenides was a Cretan who made one immortal statement: “All Cretans are liars.”

This is a paradox of self-reference. If Epimenides is telling the truth, then all Cretans are liars; but Epimenides is a Cretan, so he must be lying, and his statement cannot be true. It is with some amusement that I note that Hofstadter’s book was originally published with the tagline “a metaphorical fugue on minds and machines in the spirit of Lewis Carroll.” Curiouser and curiouser.

This paradox of self-reference helps give us some traction on Russell’s Paradox, which Bertrand Russell uncovered in 1901, whilst laying the groundwork for what would become his Principia Mathematica. Russell was trying to ground mathematics in rigorous logical terms using set theory. One example of this rigour: all empty sets are the same empty set, for a set with nothing in it has nothing to differentiate it from other sets with nothing in them. An empty set is zero, by definition, and there is only one empty set, so from zero we have derived one… and so on, for several hundred pages… across three volumes.

If we concern ourselves with sets, as Russell did, we come to a paradox of self-reference when sorting sets into categories. A set is normal unless it contains itself, in which case it is abnormal. That being said, it seems as though the only sets that could contain themselves are sets that are descriptions of things, rather than actual things, i.e. self-reference. To borrow from Russell (1919, p. 136)[2], “normally a class [set] is not a member of itself. Mankind, for example, is not a man.” So, now, what of the set of all normal sets (R)? Is it normal or abnormal? If R were normal it would be a member of the set of all normal sets (R), thereby making it abnormal; but if it were abnormal it would no longer be a member of the set of all normal sets (R). Thus R is neither normal nor abnormal; R is both R and not-R, and this is a paradox: Russell’s Paradox. Curious-R and curious-not-R.
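The loop can even be caricatured in code. In the sketch below (entirely my own encoding, not Russell’s formalism), a ‘set’ is modelled as a membership predicate, and ‘normal’ means ‘does not contain itself’; asking whether normal is itself normal simply never settles:

```python
# Model a "set" as a membership predicate: s(x) is True when x is a member of s.
# A set is "normal" when it does not contain itself.
def normal(s):
    return not s(s)

# Is the set of all normal sets a member of itself? Answering normal(normal)
# requires evaluating "not normal(normal)" -- the question never settles.
try:
    normal(normal)
    verdict = "resolved"
except RecursionError:
    verdict = "no consistent answer"

print(verdict)
```

Python gives up with a RecursionError, which is as close as a computer gets to shrugging at a paradox.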

Russell’s Paradox, to me at least, bears a striking resemblance to both Epimenides’ Paradox, as outlined above, and Hempel’s ‘Paradox of the Ravens.’

Bertrand Russell


The Paradox of the Ravens

The Paradox of the Ravens was put forward by Carl Gustav Hempel (not Jung), and arises from the confluence of two logical rules, Nicod’s Principle and The Equivalence Condition. The Paradox reads something like this:

  1. All ravens are black.
  2. Everything that is not black is not a raven.
  3. Nevermore, my pet raven, is black.
  4. This green (and thus not black) thing is an apple (and thus not a raven).

Statements 1 and 2 are logically equivalent. 3 is evidence for 1, and 4 is evidence for 2; but because 2 is equivalent to 1, 4 is also evidence for 1.
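That statements 1 and 2 really are one hypothesis can be checked mechanically. Here is a small sketch over an invented toy world (the objects and their properties are made up purely for illustration):

```python
# A toy world of (name, is_raven, is_black) triples -- invented for illustration.
world = [
    ("Nevermore", True, True),
    ("a second raven", True, True),
    ("green apple", False, False),
    ("writing desk", False, False),
    ("black cat", False, True),
]

# Statement 1: every raven is black.
all_ravens_black = all(black for _, raven, black in world if raven)
# Statement 2: everything that is not black is not a raven.
all_nonblack_nonraven = all(not raven for _, raven, black in world if not black)

# The two formulations always agree: one hypothesis in two dresses.
assert all_ravens_black == all_nonblack_nonraven

# A single counterexample falsifies both at once.
world.append(("albino raven", True, False))
assert not all(black for _, raven, black in world if raven)
assert not all(not raven for _, raven, black in world if not black)
```

Whatever world you build, the two checks can never disagree; that is all ‘logically equivalent’ means here.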

To expand on this a bit, under Nicod’s Principle[3], if we state that ‘All ravens are black’ (All Rs are B), and we encounter an instance of a raven that is black, we have, to some extent, supported the hypothesis. Famously, of course, it was thought that all swans were white, until black swans were discovered in Australia. So, as with the scientific method, inductive logic, and Bayesian reasoning, each instance of a case that supports the hypothesis lends incrementally more credence to it, but certainty is seldom absolute. Indeed, courtesy of Karl Popper, the scientific method now requires that a hypothesis be falsifiable (able to be shown untrue) in order to have any use as a hypothesis. As such, the hypothesis ‘All ravens are black’ can be falsified by an instance of a raven that is not black, such as an albino raven. This being said, the definition of ‘swan’ could include the fact that they are white, such that black swans are in fact not swans at all. Conversely, an albino raven is still a raven, albeit not black, and, because we know what albinism is, not ultimately a case which falsifies the hypothesis.

With the Equivalence Condition, whatever can be confirmed by a statement can also be confirmed by an equivalent statement. For example, if we were to say that ‘All ravens are not writing desks’ (All Rs are not-WD), we could logically say that ‘No writing desks are ravens’ (No WDs are R). From this, each instance of a writing desk not being a raven also supports the idea that no raven is a writing desk. But we expected that.

In combination, then, Nicod’s Principle and the Equivalence Condition should allow us to say ‘All ravens are black’ (All Rs are B), and that its equivalent statement is ‘All non-black things are non-ravens’ (All not-B are not-R). Which seems fair enough on the face of it. The problem arises when you realize that the hypothesis ‘All ravens are black’ would now be, at least to some extent, supported by any instance of a non-black thing that also happened to not be a raven. Curious-R and Curious-B.


It’s amusing to note that the raven in this paradox is called ‘Nevermore’. Another solution to the riddle ‘Why is a raven like a writing desk?’, courtesy of WiseGeek, that actually does seem to answer the question in a satisfying way, is also a reference to Edgar Allan Poe: Poe wrote on both. Cracked credits yet another solution to the American chess puzzle composer Sam Loyd: they both have inky quills.

These aren’t bad, but not where I’m going with this.

The Paradox of the Ravens is supposed to illustrate a conflict between inductive logic (as exemplified by Nicod’s Principle and the Equivalence Condition) and intuition (as illustrated by our reaction to the paradoxical conclusion that the greenness of an apple in any way supports the hypothesis that all ravens are black). So it seems that the paradox is somehow a product of some flaw in human thought. But does that flaw extend to logic itself, or is that an unallowable psychologism?


Mentally processing a negative

There is a truism in self-help literature that when trying to stop doing something, like smoking or drinking, we do better by mentally representing what we will do, rather than what we will not do. Saying that we will not do something forces us to represent all of the alternatives; saying that we will do something else, something that is not the thing we are trying to avoid, is a more successful way of not engaging in the unwanted behaviour.

Likewise, if we are trying to remember things, or make logical inferences about things, it helps to represent these things in positive, and thus concrete, terms. We will more quickly get to the correct state of affairs if we remember ‘The door was closed’ rather than ‘The door was not open’ (Kaup, Lüdtke & Zwaan, 2006[4]). The delay is greater, and means of representation different, if the negation opens up an even wider scope of possibilities than the binary represented by the door (open or closed). “Not wearing a pink dress” (Kaup & Zwaan, 2003[5]), for example, gives rise to everything from an amber dress to a yellow dress.

Along the same lines, matching a picture to a sentence describing that picture takes longer, and is more error-prone, when using negatives (Carpenter & Just, 1975[6]; Clark & Chase, 1972[7]; Trabasso, Rollins & Shaughnessy, 1971[8]). Interestingly, negated items are slower to be recalled (Kaup, 2001[9]; MacDonald & Just, 1989[10]), which means that you will be slower to think of a white bear when you’re told not to, but think of it you will (Winerman, 2011[11]).

So, with respect to the Paradox of the Ravens, is the fact of representing not one negative, but two – not a raven, and not black – just something our brains balk at? Are we actually trying to mentally represent all of the alternatives, or even just all of the plausible (seemingly relevant) alternatives? Or, is the problem maybe that our brains are trying to deal with the description, as given, negatives and all, and the result does not seem like it can logically be related to black ravens?


The Conjunction Fallacy

If a negative is a shorthand way of describing absolutely all possible counter-examples of a given situation or thing, what problems, aside from delayed access to relevant information, could that give rise to? The conjunction fallacy may give us some clues to the possible answer. The classic depiction of the conjunction fallacy is due to Tversky and Kahneman (as cited in Kahneman, 2012[12]):

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?

  • Linda is a bank teller.
  • Linda is a bank teller and is active in the feminist movement.

This is not a paradox, this is an outright fallacy. The number of feminist bank tellers must be less than both the number of active feminists and the number of bank tellers. So it is more probable that Linda is merely a bank teller than that she is both a bank teller and an active feminist. The reason that the idea of Linda being both is so attractive is that we know she is definitely a bank teller (given the two choices) and, courtesy of the representativeness heuristic[13], we feel that a bright, female, philosophy major with concerns about discrimination and social justice simply must be a feminist. As Stephen Jay Gould said,

I am particularly fond of this example [the Linda problem] because I know that the [conjoint] statement is least probable, yet a little homunculus in my head continues to jump up and down, shouting at me— “but she can’t just be a bank teller; read the description.”

More on the conjunction fallacy here.
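The arithmetic underneath the fallacy is simple and unforgiving: the probability of a conjunction can never exceed the probability of either conjunct. A quick sketch with invented numbers (these probabilities are illustrative, not survey data):

```python
# Invented probabilities, for illustration only.
p_teller = 0.05    # P(Linda is a bank teller)
p_feminist = 0.30  # P(Linda is an active feminist)

# Even if the two traits were perfectly correlated, the conjunction is
# capped by the smaller marginal; if the traits were independent, it would
# be smaller still.
p_both_at_most = min(p_teller, p_feminist)
p_both_if_independent = p_teller * p_feminist

assert p_both_at_most <= p_teller
assert p_both_if_independent < p_teller
```

However loudly the homunculus shouts about the description, ‘teller and feminist’ can never overtake plain ‘teller’.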

Is a negative an inherent source of a conjunction fallacy, or even multiple conjunction fallacies? Is the source of the problem with the Paradox of the Ravens our inability to represent millions of alternatives to blackness and ravenness being compounded by our inability to account for the overlaps between these? Or is it just, as the conjunction fallacy illustrates, that our brains are really unintuitive when it comes to probabilities? Or is the use of the word “not” an obtuse kind of self-reference that leads to paradoxes of self-reference, just as Epimenides’ “lie” and “normal” sets do?


Wason and Confirmation Bias

Also discussed in Kahneman’s ‘Thinking, Fast and Slow’ are Peter Wason’s experiments from 1960. Wason asked people to identify the rule represented by a number sequence (2, 4, 6), and then to ask him whether number sequences they generated also fit the rule he had in mind. They were to do this as many times as they felt necessary to confirm that they knew the rule. Most people would then proceed to provide sequences of consecutive even numbers (e.g. 6, 8, 10); some might provide a sequence of even numbers with gaps (e.g. 4, 8, 12). Few, if any, would present odd-numbered sequences, or even numbers in reverse numerical order, or anything that deviated too far from the most obvious; almost nobody offered a sequence designed to falsify the rule they had assumed. Wason’s rule was simply numbers in ascending order, which would include such examples as ‘3, 7, 9’ and ‘10, 100, 1000’.
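Wason’s trap can be simulated. In the sketch below (my own encoding of the rules), every ‘consecutive evens’ probe also satisfies his real rule, so a confirming probe can never tell the two hypotheses apart; only a probe that the assumed rule forbids is informative:

```python
def wasons_rule(seq):
    """The experimenter's actual rule: strictly ascending numbers."""
    return all(a < b for a, b in zip(seq, seq[1:]))

def assumed_rule(seq):
    """The rule most subjects jump to: consecutive even numbers."""
    return (all(n % 2 == 0 for n in seq)
            and all(b - a == 2 for a, b in zip(seq, seq[1:])))

# Confirming probes: everything satisfying the assumed rule also satisfies
# Wason's rule, so a "yes" answer can never separate the two hypotheses.
for probe in [(2, 4, 6), (6, 8, 10), (20, 22, 24)]:
    assert assumed_rule(probe) and wasons_rule(probe)

# A probe the assumed rule forbids is the informative one -- and it fits anyway.
assert wasons_rule((3, 7, 9)) and not assumed_rule((3, 7, 9))
```

A subject who never tries something like (3, 7, 9) can collect ‘yes’ answers forever without ever learning they are wrong.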

So we should be looking for disconfirmation; we should be looking to falsify our hypotheses. Non-black ravens falsify our hypothesis; black non-ravens neither confirm nor deny it; and non-black non-ravens support it.


Why IS a Raven like a Writing Desk?

A raven is more obviously like other corvids, such as crows, rooks, and jackdaws, and a writing desk is more obviously like other man-made wooden objects, such as bureaus and dressers. This having been said, is a wooden chess piece, which happens to be called a rook, more like a raven, or a writing desk? What about a piece of cheese? The moon? And what about the word ‘raven’ written on a piece of paper? Is that more like the raven it names, or the writing desk upon which it was written?

We’ve established that we’re not very good at probabilities, even with something as simple as Linda’s job in combination with her political orientation.

We’ve established that negating a statement may give rise to an infinity of alternatives, but even where it only gives rise to one alternative it takes longer for our brains to represent than the same statement expressed in the positive.

And we’ve established that the greenness of apples adds to our certainty that ‘All Ravens are Black’.

Except that it still “feels” wrong, doesn’t it?

What if I were to suggest that the problem is the way the Equivalence Condition makes you think of equivalency in the wrong way? Encountering a black raven does directly lend extra weight to the hypothesis that all ravens are black. But then the hypothesis specifically mentions black ravens. The greenness of an apple also lends credence to the hypothesis that all ravens are black, but to a very, very, very much smaller degree. A green apple means that you have one less thing that is both not black and not a raven, but that is only one less thing from a very long list of things. Indeed, all of the things!

The statement ‘All ravens are black’ IS logically equivalent to ‘All non-black things are non-ravens’, but the weight a confirming instance of each lends to the hypothesis, the degree to which it adds to your certainty as to the truth of the statement, is not equivalent. Indeed, 1/∞ is nowhere near 1/10,000,000 (no actual stats as to the likely world raven population were discoverable, by me at least, at the time of writing). Now, 1/10,000,000 is not a massive incremental increase in our certainty of the hypothesis that all ravens are black, so it’s odd that our brains, which have trouble with numbers of that size, are still so certain that green apples do not help us to confirm that hypothesis, but that black ravens do. Then again, if ravens are black, by definition, we don’t need an incremental increase in our certainty on that point, so the logical relationship between black ravens and green apples is irrelevant. Which really does seem like self-reference by the back door.
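This asymmetry of evidential weight can be made concrete with a crude counting model. The numbers below are entirely invented (as noted, no raven census was available to me), but the ratio is what carries the point:

```python
# Entirely invented counts: a guess at the raven population, and a vast
# under-estimate of the number of non-black, non-raven things.
n_ravens = 10_000_000
n_non_black_non_ravens = 10**15

# Under a simple sampling model, a black raven confirms "all ravens are
# black" on the order of one part in n_ravens, while a green apple confirms
# it on the order of one part in n_non_black_non_ravens.
weight_black_raven = 1 / n_ravens
weight_green_apple = 1 / n_non_black_non_ravens

# Both weights are positive -- the apple really is evidence --
# but the raven's weight is about a hundred million times greater.
assert weight_green_apple > 0
assert abs(weight_black_raven / weight_green_apple - 1e8) < 1e-3
```

So our intuition that the apple tells us ‘nothing’ is, to a rounding error, correct; the paradox is in mistaking ‘almost nothing’ for ‘nothing at all’.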

So, the solution to the Paradox of the Ravens can be illustrated by using it to solve Lewis Carroll’s riddle, “Why is a raven like a writing desk?” A raven is like a writing desk in that they are both unlike far more things than they are unlike each other. They also both have names that seem to unequivocally denote what they are, and many more that denote what they are not. The fact that ravens seem unlike writing desks to us is, statistically speaking, merely a rounding error: they are, in fact, virtually identical, and notably unlike a piece of cheese.



[1] Hofstadter, D. (1979). Gödel, Escher, Bach: An Eternal Golden Braid (a metaphorical fugue on minds and machines in the spirit of Lewis Carroll). New York, NY: Basic Books.

[2] Russell, B. (1919). Introduction to Mathematical Philosophy, London: George Allen and Unwin Ltd, and New York: The Macmillan Co.

[3] Nicod, J. (1930). Foundations of Geometry and Induction, P. P. Wiener (trans.), London: Harcourt Brace.

[4] Kaup, B., Lüdtke, J. & Zwaan, R. A. (2006). Processing negated sentences with contradictory predicates: Is a door that is not open mentally closed? Journal of Pragmatics, 38(7), 1033-1050.

[5] Kaup, B., & Zwaan, R. A. (2003). Effects of negation and situational presence on the accessibility of text information. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29(3), 439.

[6] Carpenter, P. A. & Just, M. A. (1975). Sentence comprehension: A psycholinguistic processing model of verification. Psychological Review, 82(1), 45-73.

[7] Clark, H. H., & Chase, W. G. (1972). On the process of comparing sentences against pictures. Cognitive Psychology, 3(3), 472-517.

[8] Trabasso, T., Rollins, H., & Shaughnessy, E. (1971). Storage and verification stages in processing concepts. Cognitive Psychology, 2(3), 239-289.

[9] Kaup, B. (2001). Negation and its impact on the accessibility of text information. Memory & Cognition, 29(7), 960-967.

[10] MacDonald, M. C., & Just, M. A. (1989). Changes in activation levels with negation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15(4), 633-642.

[11] Winerman, L. (2011). Suppressing the ‘white bears’. Monitor on Psychology, 42(9), 44. 

[12] Kahneman, D. (2012). Thinking, Fast and Slow. London: Penguin.

[13] https://en.wikipedia.org/wiki/Representativeness_heuristic

An Allowable Psychologism?

This is the first in an intermittent series where I will use psychology to illuminate philosophy. (I do have a degree in psychology, but am merely a hobbyist philosopher.) Specifically, I intend to explore the possibility that understanding psychology can actually illuminate logic, mathematics, and reason.

Doing this is, in some people’s eyes, an unconscionable thing, and they call it psychologism.

The Stanford Encyclopedia of Philosophy defines ‘psychologism’ thus:

“Many authors use the term ‘psychologism’ for what they perceive as the mistake of identifying non-psychological with psychological entities. For instance, philosophers who think that logical laws are not psychological laws would view it as psychologism to identify the two.”[1]

So psychologism is the idea that our understanding of something reflects our psychology, to a greater extent than it reflects the thing that we seek to understand. By analogy, consider growing up with an undiagnosed cataract: you can see, but there is a distortion in your vision. You were born with it, so as far as you’re aware, what you see is normal. As you grow up, your brain adapts to the distortion, in the same way that you quickly adapt to wearing glasses with stripes on the front of them that should block your vision, like those made famous by Kanye West. Studying the cataract is psychology; taking your view of the world through the cataract to be in some sense true is psychologism; removing the cataract from the equation is the answer, apparently.

Neon Shutter Shades – significantly cooler than Kanye West, and their inclusion here neatly avoids having to have a picture of him on my blog.

Over the last few decades psychology has gone some way to separating our understanding of the world from our brains and the distortions that are inherent in its structure. Unlike cataracts, though, we all have these distortions to a greater or lesser extent. Indeed, the main way in which we can be less impacted by these distortions is to be aware of them (unlike the stripes on those shutter shades). In the case of the human mind, the single biggest cataract surgery has been the work of Daniel Kahneman and Amos Tversky on heuristics and biases. Indeed, I would suggest that Kahneman’s ‘Thinking, Fast and Slow’ is one of the most important books of popular psychology ever written, certainly the most important this century.

Heuristics are simple rules that our brains use to resolve complex problems. These rules are right most of the time… but not all of the time. For example, if we see a shadow in the corner of our eye, we will treat it as a potential source of danger. Biases, on the other hand, are the consistent and predictable results of relying on heuristics. These are by no means the only distortions that arise from the way our brains evolved. Evolution is pragmatic, balancing costs (needs for fuel/food) against benefits (survival advantage). So we have numerous talents that helped us survive, but that hamper our ability to get at the truth. In evolutionary terms, knowing that the movement in the corner of your eye is a large, aggressive mammal is less important than already being on the run by the time you discover it is a large, aggressive carnivore; and the occasional life-preserving sprint occasioned by a harmless herbivore is a small price to pay.

An example of where our psychology may have been what was being described rather than the reality “out there” is Plato’s Idealism. Platonic Idealism is the idea that transcendent ideal things exist in another sphere of existence above our mundane world. Is this a reflection of reality, or a reflection of the fact that the human mind (probably) is a connectionist neural network (relying on prototypes by which to define things)? Can you countenance the idea that a transcendent Platonic world exists, and contains, for example, and because this is the internet, an ideal cat from which all earthly cats derive their ‘cat-ness’? Or does it seem more likely that, having been exposed to a great many cats, we have stored in our memory a prototype that best encapsulates ‘cat-ness’, and from which we can decide whether some quadrupedal mammal in our environment is a cat or not?
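The prototype idea can be caricatured in a few lines: average the examples you have encountered, then classify newcomers by their distance to that stored average. The features and numbers below are invented purely for illustration:

```python
# Feature vectors: (body size, leg count, whisker length) -- invented scales.
cats_seen = [(3.0, 4, 6.0), (3.5, 4, 7.0), (2.8, 4, 5.5)]
dogs_seen = [(8.0, 4, 1.0), (12.0, 4, 0.5), (6.5, 4, 1.5)]

def prototype(examples):
    """The stored 'ideal' is just the average of experience."""
    return tuple(sum(feature) / len(examples) for feature in zip(*examples))

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

cat_prototype = prototype(cats_seen)
dog_prototype = prototype(dogs_seen)

def looks_like_a_cat(animal):
    """Classify a newcomer by its distance to each stored prototype."""
    return distance(animal, cat_prototype) < distance(animal, dog_prototype)

# A cat-ish quadruped lands nearer the cat prototype than the dog prototype.
assert looks_like_a_cat((3.2, 4, 6.2))
assert not looks_like_a_cat((10.0, 4, 0.8))
```

No transcendent realm required: the ‘ideal cat’ here is nothing but a running average of cats encountered.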


Plato’s cats, from Midnight Media Musings

More particularly, psychologism has at its heart the question of whether logic is a sub-discipline of psychology or not, as mentioned above. Answering this question hasn’t been helped by the fact that psychology was a branch of philosophy until about the same time as the psychologism argument arose in earnest with the work of Frege[2] and Husserl[3], in the late 19th century. One solution is to suggest that psychology is the study of how we do think, and logic is the study of how we should think; but of course Ethics is both the study of how we do think about right and wrong behaviour, and how we should think about right and wrong behaviour, and there is no long-standing argument about whether that is psychology. That being said, whilst Ethics is an area of philosophical discussion, it is increasingly encroached upon by psychology (indeed, Richard Carrier has called for a psychologically-based ‘ethicology’[4], and Sam Harris claims that science can be applied to all questions of ethics, so long as ethics is framed as a discussion about the well-being of conscious creatures[5]).

So, psychologism is the application of our understanding of human thought processes to the study of things that don’t seem to have anything to do with human thought processes. For myself, I think that we need an intentional psychologism as a tool with which to disentangle what is studied from the means by which it is studied, a means of bracketing out (as Husserl would have it) the vagaries of the human mind from the vagaries of the world it reports on. In other words, are you describing a thing in the world, or are you describing the way in which that thing (or those things) is represented in your mind? Are they phenomena, or is it phenomenology? From Plato’s idealism to almost everything that has arisen from rationalist philosophy (aka armchair science), this seems necessary. So I shall proceed as though psychologism is a tool, rather than a philosophical debate.

It seems I’m not the only one. The rest of the quote from The Stanford Encyclopedia of Philosophy that I opened with reads like this:

“Other authors use the term in a neutral descriptive or even in a positive sense. ‘Psychologism’ then refers (approvingly) to positions that apply psychological techniques to traditional philosophical problems (e.g. Ellis 1979, 1990).”


Free-floating rationales

Dan Dennett (1984), in ‘Elbow Room: The Varieties of Free Will Worth Wanting’, introduced the idea of the free-floating rationale, and it strikes me as a useful tool for this intentional psychologism. A free-floating rationale is a reason why a particular response to a particular problem is inherent in the elements of the problem; all that is waited upon is the tools by which the solution can be deployed. For example, the ability of the squid or chameleon to change colour to match its surroundings can be seen as a response to the need to hide quickly. A means by which that reason can be arrived at is not necessary for the animals that display it (neither the squid nor the chameleon, so far as we are aware, decides to change colour, and the ability to do so was arrived at via evolution, not a congress of chameleons, nor a senate of squids). It so happens, though, that arriving at reasons for things is a skill that humans do have. That’s why humans have developed sonar similar to a bat’s echo-location, and many other things that echo solutions arrived at in nature. The solutions exist, and we discover them. What we invent is the means by which to manifest that solution. This may also explain the pervasive belief that the universe is intelligently designed: we perceive the reason, and we perceive the reason to be out there, correctly, but assume that the reason is also articulated, out there, which it is not (a case of mistaking the thing for the label we have for it).

I see the concept of a free-floating rationale as yet another in a long line of claimed human inventions that are really human discoveries. Numbers don’t exist, but they describe relationships between things; centres of gravity don’t exist, but describe the relationships between the particles within, and the totality of, a thing; and free-floating rationales don’t exist, but nevertheless describe relationships between things, and states of affairs in the world; and logic is a distinct set of patterns of inter-relationship to which things regularly conform… inherent in the elements of the problem.

Is mathematics demonstrative of human rationality, or is it merely a free-floating rationale? Is reason phenomenology, or a phenomenon? Is logic an invention, or a discovery?

Can psychologism be used to distinguish between free-floating rationales and human-centric biases in the way we see the universe? I think so, and I’m going to explore that idea over the coming posts.


Next Instalment: ‘How is a Raven Like a Writing Desk?’


[1] http://plato.stanford.edu/entries/psychologism/

[2] http://plato.stanford.edu/entries/psychologism/#FreAntArg

[3] http://plato.stanford.edu/entries/psychologism/#HusAntArg

[4] https://books.google.co.uk/books?id=oFdMzq56qyEC&pg=PA335&lpg=PA335&dq=Richard+Carrier+Ethicology&source=bl&ots=HiKypvJaL_&sig=3821RM8fpBz27a1IwgPXuGM8JSk&hl=en&sa=X&ved=0ahUKEwiklfro75jMAhUF5SYKHXP7BvYQ6AEIMzAD#v=onepage&q=Richard%20Carrier%20Ethicology&f=false

[5] https://books.google.co.uk/books?id=5FRW30QaDQwC&printsec=frontcover&dq=Sam+Harris+Moral+Landscape&hl=en&sa=X&ved=0ahUKEwipkpy18JjMAhVE5CYKHaGVDrMQ6AEIHTAA#v=onepage&q=Sam%20Harris%20Moral%20Landscape&f=false

Does anybody really think?, pt. 2 (Does any body really feel?)

The first part of this blog detailed what I saw as flaws in the methodology, and in the presentation of the statistics, that were the basis of the documentary ‘What British Muslims Really Think.’ In this second part I want to look at what I see as a primary flaw in virtually all discourse regarding the Middle East, and the Muslim diaspora more generally.

What follows is a segment taken from Bruce E. Wexler’s book, ‘Brain and Culture: Neurobiology, Ideology, and Social Change.’ I present a 1000-word case study describing what it feels like to be an immigrant. From this, I have removed any clue as to race, creed, religion, country of origin, country of destination, and even the period of history to which it relates, in an effort to have you, the reader, insert your own experiences, or those of people that you know and care for, to see if it’s possible to walk a mile in this person’s shoes. I will then relate this content back to the first instalment of this blog.

[Name] was 13 years old when [they], [their] parents, and [their] 9-year-old sister left [old country] to emigrate to [new country]. Growing up as part of a [culture] family in [old city] after [traumatic event], [they were] in many ways on the margin of [old country] society and the object of more than a few exclusionary and critical comments by [their] peers. Yet this was [their] only world, external and internal, until [they] and [their] family [travelled to new country]. As [they] explained, “the country of my childhood lives within me with a primacy that is a form of love… It has fed me language, perceptions, sounds, the human kind. It has given me the colours and the furrows of reality, my first loves. The absoluteness of those loves can never be recaptured. No geometry of the landscape, no haze in the air, will live in us as intensely as the landscape that we saw as the first, and to which we gave ourselves wholly, without reservations”. [Name] recalls that, walking around [large city] shortly before leaving, “I burst into tears as I pass a nondescript patch of garden, which, it turns out, holds a bit of myself.” Standing on the [transport to new country], “I feel that my life is ending”. “When the [traditional band] near the [transport to new country] strikes up the [local musical style] of the [national anthem], I am pierced by a youthful sorrow so powerful that I suddenly stop crying and try to hold still against pain. I desperately want time to stop, to hold the [transport] still with the force of my will.” How clearly [they describe] the formative effects of the environment into which [they] happened to be born, the connection between [their] internal and external worlds, and the impossibility – in [their] situation – of keeping the internal world together with the external world by which it was shaped and to which it was matched.

On [their] third night in [new city] [they had] “a nightmare in which I’m drowning in the ocean while my mother and father swim farther and farther away from me. I know, in this dream, what it is to be cast adrift in incomprehensible space; I know what it is to lose one’s mooring. I wake up in the middle of a prolonged scream. The fear is stronger than anything I’ve ever known.” A short while after [they wonder], “what has happened to me in this new world? I don’t know. I don’t see what I’ve seen, don’t comprehend what’s in front of me. I’m not filled with language anymore, and I only have a memory of fullness to anguish me with the knowledge that, in this dark and empty state, I don’t really exist.” Aware when [they were] leaving [their] [old country] that part of [themselves] was being left behind, because [they were] losing the external match to [their] internal self, in the [new country] [they feel] loss, discomfort, terror, and confusion when [they are] surrounded by an environment that does not match the inner world [they had] brought with [them].

While those mourning a deceased spouse may feel that part of themselves has died, this transplanted immigrant felt as empty as if [they] no longer existed. When encouraged by those around [them] to try and forget what [they] left behind, [they wonder] “Can I really extract what I’ve been from myself so easily?” When [they attempt] to take in [their] new environment, the requisite internal structures are lacking or the old structures are obstructing. … “The city’s unfocused sprawl, its inchoate spread of one-family houses, doesn’t fall into any grid of mental imagery, and therefore it is a strain to see what is before me. Even on those days when the sun comes out in full blaze and the air has the special transparency of [new country], [it] is a dim world to my eyes, and I walk around it in the static of visual confusion.” When [they look] at others who are farther along than [them] in the adjustment to a new world, [they worry] that even for them “insofar as meaning is interhuman and comes from the thickness of human connections and how richly you are known, these successful immigrants have lost some of their meaning.”

The change in language associated with many immigrations further disrupts the links between the self and others, and between internal neuro-psychological processes and external social processes. Two days after their arrival [Name] and [their] sister are taken to school and given new names […]. After the teacher introduced them to the class, mispronouncing their last name in a way they had never heard before, “we make our way to a bench at the back of the room; nothing much has happened, except a small, seismic mental shift. The twist in our names takes them a tiny distance from us. …Our [old country] names didn’t refer to us; they were us as surely as our eyes or hands. Those new appellations, which we ourselves can’t yet pronounce, are not us … Make us strangers to ourselves.” Their original names were, of course, assigned to them by others. But the assignment of new names after years of hearing the original names in association with themselves, and the internalisation of those names in important neural structures, is no small matter. [Name] quickly learns [new language] but “the words … don’t stand for things in the same unquestioned way they did in my native tongue. …This radical disjoining between word and thing is a desiccating alchemy, draining the world not only of significance, but of its colours, striations, nuances – its very existence. It is a loss of a living connection.”

I think this piece eloquently sums up what one might feel in just a normal, or as we seem to be saying these days “economic”, migration, and gives a good base from which to extrapolate what might be the case in a refugee or forced migration (because from a young person’s perspective, any migration is forced).

The case study contains a number of direct quotes from an adult reflecting on their adolescent experience of immigration. Now consider that 33% of the Muslim population was aged 15 years or under in 2011, compared to 19% of the population as a whole. This is likely to have held steady in the intervening four years. Only 4% of the Muslim population is over 65 years, compared to 16% of the overall population.


The survey that was the basis of this documentary was meant to be a “representative sample.” It’s very hard to get consent to interview children under 16, due to ethical concerns. If such interviews were undertaken (and there is no indication that they were), given that we’ve established that Muslims tend to hold more conservative views, what is the likelihood that a youth would, with their parents present (because that’s how such an interview would have to take place), go against their parents’ views? But, as I said, there is no reason to believe that such interviews took place. What this means is that the 4% of the Muslim population that is 65+ have had the prevalence of their views statistically doubled because of the way the survey was carried out. It is a truism of aging that one’s views become more conservative as one gets older. Note the difficulty, in the case study, that the author had in translating the language and landscape of childhood into the present realities of a new country. That’s an adolescent, whose brain is still highly plastic (able to change more readily in response to external stimuli), noting these difficulties. The problems would be much more severe, and require numerous coping mechanisms, in older adults. This is culture shock, writ large.

To illustrate even more forcefully what culture shock can do: immigration is a recognised risk factor in the diagnosis of schizophrenia (McGrath, Saha, Welham, El Saadi, MacCauley & Chant, 2004[1]), and especially so across all ethnic minority groups (Fearon, et al., 2006[2]). People living in developing nations are more at risk, as are people living in urban areas. Curiously, though, the risk of schizophrenia increases for the children of immigrants. New immigrants are 2.7 times more likely to develop schizophrenia than the native English population, but the next generation are 4.5 times more likely (Cantor-Graae & Selten, 2005[3]). So it may be that environmental factors in the country of origin, or changes in environmental factors between countries, are influential; but that only explains prevalence in new immigrants, not the second generation. So the question then becomes: is this due to integration, or the lack thereof? Cantor-Graae and Selten (2005, p. 101) have suggested that what they call “social defeat” is a causative factor in subsequent diagnosis of schizophrenia:

Since both migrants and city residents are exposed to high levels of social competition, the long-term experience of social defeat, defined as a subordinate position or as ‘outsider status’, is a viable candidate. This is compatible with the recent meta-analysis of studies on migrants, which showed greater effect sizes for migrants from developing countries than for those from developed countries, and greater effect sizes for the second generation than for the first. A bigger increase in the second generation is expected, because outsider status would be even more humiliating for individuals who feel entitled to the status conferred by their birthright. Since discrimination would certainly contribute to the migrant’s experience of defeat, it is noteworthy that a prospective study in The Netherlands found that perceived discrimination was a risk factor for the development of psychotic symptoms (Janssen et al, 2003[4]). The risks for immigrant groups known for their strong family networks, for example Asian immigrants to the UK and Turkish immigrants to The Netherlands, are not nearly as high as those for Caribbean immigrants to the UK or Moroccan immigrants to The Netherlands. Moreover, the incidence in minority ethnic groups is smaller when they comprise a greater proportion of the local population (Boydell et al, 2001[5]). A plausible interpretation of these findings is that social support protects against the development of schizophrenia and this accords well with the social defeat hypothesis.

So here we see that discrimination is a causal factor on the one hand, while strong family networks and a larger local population of similar cultural origin are protective factors on the other.

In the case study quoted at the start of this blog, it is mentioned that the author was something of an outsider in their home country. Could they be describing the experiences of a Sunni youth from Iran, where 90-95% of the population is Shi’a? Or maybe a Shi’a youth from Jordan, positioned as it is between Israel, Iraq, Saudi Arabia, and Syria, where the population is 95% Sunni, and 3-4% Christian?

The “rigorous survey” that was the basis for ‘What British Muslims Really Think’ didn’t actually make a distinction between Sunni and Shi’a Muslims. Amongst British Muslims, 85% are Sunni, and 14.8% Shi’a. This approximately reflects worldwide demographics that suggest Sunni are between 75 and 90% of the world’s Muslims, and that 10 to 20% are Shi’a. I’m sure such a failure would not occur in distinguishing Catholics from Protestants, and the different attitudes they hold. Why, if we know that a population’s religious identity is important to them, when we’re trying to advocate active integration, would we then turn around and ignore that identity? It is at least conceivable that the difference between Sunni and Shi’a, as illustrated by the two examples I just gave, could be a part of the reason for emigrating in the first place.

As much as we might try and empathise with someone when they are going through something traumatic, and migration is traumatic, it’s hard to do so when we’ve never migrated ourselves. It is even more difficult when the person with whom we are trying to empathise seems radically different to us, and thus the basis for empathising – which most of us like to think we’re pretty good at, I’m sure – is just that bit harder to find. It is an unfortunate truism of the way our brains work that we notice difference first, and have to be reminded of similarity. Those of us who rely more on gut reaction will therefore, in general, tend to be less empathic towards noticeably different strangers than those who stop and think about the commonalities we nevertheless share as humans.

Let me re-quote what I think is one of the most affecting lines from the case study above. Notice the beautiful use of language at the very moment its author is mourning the loss of language. This, in a way, is both the power and the shortcoming of words:

“the words … don’t stand for things in the same unquestioned way they did in my native tongue. …This radical disjoining between word and thing is a desiccating alchemy, draining the world not only of significance, but of its colours, striations, nuances – its very existence. It is a loss of a living connection.”

Do you care to guess the specifics of the individual in that case study, the author of those words? In an attempt to illustrate just how universal such things are, I’ve not quoted a Muslim.


[Scroll down…]



[…Have a guess…]




[…Oh, go on…]





The subject of that piece, whose quotes were written by her as an adult woman, was a thirteen-year-old Jewish girl who moved from Krakow, Poland, to Vancouver, Canada, in 1959.

[Reactions in the comments section, please.]

[1] McGrath, J., Saha, S., Welham, J., El Saadi, O., MacCauley, C., & Chant, D. (2004). A systematic review of the incidence of schizophrenia: the distribution of rates and the influence of sex, urbanicity, migrant status and methodology. BMC medicine, 2(1), 1.

[2] Fearon, P., Kirkbride, J. B., Morgan, C., Dazzan, P., Morgan, K., Lloyd, T., … & Mallett, R. (2006). Incidence of schizophrenia and other psychoses in ethnic minority groups: results from the MRC AESOP Study. Psychological medicine, 36(11), 1541-1550.

[3] Cantor-Graae, E. & Selten, J. P. (2005). Schizophrenia and migration: a meta-analysis and review. American Journal of Psychiatry, 162, 12-24.

[4] Janssen, I., Hanssen, M., Bak, M., et al (2003). Discrimination and delusional ideation. British Journal of Psychiatry, 182, 71-76.

[5] Boydell, J., van Os, J., McKenzie, K., et al (2001). Incidence of schizophrenia in ethnic minorities in London: ecological study into interactions with the environment. BMJ, 323, 114.

Does anybody really think?, pt. 1

I want to take a closer look at the recent Channel 4 programme ‘What British Muslims Really Think’, now described on My4 as “Trevor Phillips presents the results of a rigorous survey of the views of British Muslims.” To call this survey, or at least this presentation of it, “rigorous” is overselling it. The language used in several places to present the findings is, I think unintentionally, inflammatory. And the description of the methodology leaves me feeling as though there was rather a lot of question-begging going on. What is odd, though, is that the conclusion is nevertheless broadly in line with my own sentiments, and so, after the 45-minute rollercoaster ride to get there, I don’t see how the conclusion follows from the presentation as a whole.

I started writing this piece with the intention of using a quote from a book that looks at neurobiology, ideology, and social change, which seems germane to the problems of integration, but now I will save that for part two. In this first part I will pick through the results, as presented in this documentary, and highlight issues with the methodology, the results, or the presentation thereof. Where applicable I have provided time-stamps for the portion of the documentary that the quote relates to.

After a general introduction to the intent of the documentary, we are advised of some aspects of the methodology used to run the survey that is the basis of it. For example, “ICM decided that the best way to get a fully representative sample of Muslim opinion was to concentrate on areas where at least one fifth of the population is Muslim” (5:11-5:19) and, “ICM interviewed 1081 British Muslims face-to-face” (6:22-6:26). Starting at 8:37 we get our first hint of the results. One third of Muslims think polygamy is acceptable, compared with one tenth of the general public. One-fifth of Muslims (18%, in fact) think homosexuality should be legal, whereas four-fifths of the general public do (73%, in fact, which is nearer to three-quarters, for the purposes of accuracy, clarity, AND brevity). Finally, sympathy for political violence and suicide bombing was 4% in the Muslim population, as compared to 1% in the general population. For a documentary which, in its conclusion, talks about integration, to kick off with a very stark ‘Us vs. Them’ presentation of results seems, well, unhelpful.

Martin Boon, Director of ICM, the company that carried out this research, characterises that 1% of the general population as “no more than a handful.” So here we come to our first bit of lazy, and potentially inflammatory, presentation, and from the Director of the company that carried out the research. The Muslim population is only 4.8% of the total population, and it is 4% of that group that is sympathetic to political violence and suicide bombing. Much mileage was made of the fact that this was something like 100,000 people, but not much mileage was made of the fact that it was to “fight injustice” (see image below). The thing is, only 1% of the non-Muslim population has this sympathy, and only 0.2% of the total population both has this sympathy and is Muslim. So, as presented, the rate of these sympathies is four times higher among Muslims than among “non-Muslims”, but in the context of broader society five times as many “non-Muslims” as Muslims actually hold them. That’s why the saying ‘Lies, Damned Lies, and Statistics’ was coined. Of the 63.2 million people in the UK (as at the last census, 2011), and based on these percentages, there are around 110,000 Muslims who have these sympathies… and 630,000 or so “non-Muslims” who also do.

Screen Shot 2016-04-19 at 16.07.26.png
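To make the arithmetic behind that comparison explicit, here is a minimal sketch using figures quoted in this post: the 2011 census count of 2,707,000 Muslims (it appears in a table further down), a UK population of 63.2 million, and the 4% and 1% sympathy rates from the survey.

```python
# Figures from the 2011 census and the survey, as quoted in this post.
uk_population = 63_200_000
muslims = 2_707_000                            # 2011 census count
non_muslims = uk_population - muslims

sympathetic_muslims = muslims * 0.04           # 4% of British Muslims
sympathetic_non_muslims = non_muslims * 0.01   # 1% of everyone else

print(round(sympathetic_muslims))              # roughly 108,000
print(round(sympathetic_non_muslims))          # roughly 605,000
print(round(sympathetic_non_muslims / sympathetic_muslims, 1))  # roughly 5.6x
```

On these figures the “around 110,000” and “630,000 or so” in the text are in the right ballpark; the exact totals shift a little depending on whether you take the Muslim population as the census count or as 4.8% of 63.2 million, but either way roughly five times as many “non-Muslims” as Muslims hold these sympathies.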

It’s not so much that Muslims are being stereotyped as suicide bombers, but that suicide bombers are being stereotyped as Muslim. The documentary tells us that a disturbingly large number of Muslims have “some form of sympathy with violent acts”, which is then ramped up to “…sympathize with Islamist terrorism.” OK. What does “sympathy” mean? Can we determine the difference between “violent acts” and “Islamist terrorism”? The presentation specifically noted that this sympathy was in response to people fighting injustice, but this key motivation for sympathy is missing from the presentation of the results; indeed, I only know about the “injustice” bit because it was on screen (above) – it did not rate a mention in the commentary or narration. I’ll come back to the point about what sympathy means later.

We’re then told that 21% of Muslims have been to the home of a non-Muslim only once in the last year, and that another 21% have never been to a non-Muslim’s home. There is a significant problem with this question, and I will quote the documentary to highlight it: “ICM decided that the best way to get a fully representative sample of Muslim opinion was to concentrate on areas where at least one fifth of the population is Muslim” (5:11-5:19). So, people are more inclined to make friends with people who are more like them when they are in areas where they are spoiled for choice? Shocking! Also, is it the Muslims not visiting the non-Muslims, or is it the non-Muslims not inviting the Muslims? There didn’t appear to be a question about whether non-Muslims had visited the homes of Muslims. One of the interviewees, Anjum Anwar, makes this point forcefully: “So, if you have a child that goes to a school that is wholly Asian, who lives in an area that is predominantly Asian… Where would that child meet children and people of other faiths? They’re restricted, aren’t they?”

Trevor Phillips then tells us that, “Equality of women, social tolerance, freedom of expression are now all taken for granted as features of the British way of life” (13:57). By contrast, homosexuality should be illegal according to 52% of British Muslims, compared to 10% of the general population. In other words, 1.35 million Muslims have the same beliefs as around six million non-Muslim Britons. That being said, Muslims make up 4.8% of the population, and YouGov estimates that homosexuals make up around 6%.

The next question to be addressed was that of anti-Semitism. 35% of Muslims hold at least some anti-Semitic views, as opposed to 9% of the non-Muslim population – that’s around 900,000 Muslims as against more than five million non-Muslim Britons.

A sizable 39% of British Muslims believe that ‘wives should always obey their husbands,’ compared with 5% of non-Muslim Britons; again, in absolute numbers, far more non-Muslims hold this view than Muslims do. At this point the view on polygamy is reiterated. Whilst not polygamy, and certainly not about assuming that women should do what they’re told, polyamory is a small and growing subculture in the UK. There is even a “non-monogamous” option on OKCupid (and you can’t really call it cheating if you’re open about it); the existence of that option at least implies that polyamory is a big enough deal for OKCupid to cater to it. So we have a minority view that many Muslims hold that some non-Muslims might have at least some sympathy with. I don’t want to get sidelined into a discussion about the difference between polyamory and polygamy. Suffice to say that media coverage, such as this piece in the Independent, often focuses on one-male/two-female polyamorous triads – though that might say more about media prurience than about polyamory. What I wanted to introduce was the idea that some people hold views that are at least nominally or partially compatible with those of Muslims, and those people aren’t targeted for having them, albeit that they are still somewhat fringe at the moment.

Now, let’s look at the statistics about homosexuality in a little more depth. Where only 18% of all Muslims think that homosexuality should be legal, 28% of British Muslims aged 18-24 agree, as compared to 2% of those over 65. Homosexuality was decriminalized in the UK in 1967, so there is no direct comparison on legality to be drawn with modern British attitudes, but it’s probably fair to say that if you’re for the legality of homosexuality, you’re probably pro-same-sex marriage (the vice is definitely versa). In a society where same-sex marriage has been legalized, albeit only in 2013, once someone has made the decision to be for freedom of sexual orientation, they’re under pressure from broader society to make the relatively short jump to being for the legal recognition of relationships that arise from that orientation. As such, a comparison between a YouGov poll of the general population from mid-May 2013, regarding same-sex marriage, and views on the legality of homosexuality amongst Muslims may put things in perspective. In this we find that only 54% of non-Muslims support gay marriage, with 37% opposing; amongst Conservative supporters, that drops to 45% in favour, with 48% opposing; UKIP supporters swing further still, 38% in support, and 53% opposing.

Ignoring politics for the moment, the sentiments of 18-24-year-old Muslims are trending towards those of Britons aged over 60. (I put very approximate reciprocal ‘Oppose’ numbers into the table below, just to illustrate the general trend.) The law enabling same-sex marriage, as finally passed, states that no religious organization can be compelled to perform these ceremonies. It’s hard to see, therefore, how highly religious Muslims are failing to fit into British society.

SAME-SEX MARRIAGE    Lib Dem   Labour   Conservative   UKIP
Support              72        57       45             38
Oppose               24        31       48             53

                     UK (60+)  UK (40-59)  UK (25-39)  UK (18-24)
Support              28        58          70          74
Oppose               63        32          21          17

LEGAL HOMOSEXUALITY (MUSLIM)   18-24   65+
Support                        28      2
Oppose (approx.)               ~70     ~95


Now, consider that in times of high stress, uncertainty, and instability, people become more religious (Hogg, Kruglanski & Bos, 2013; Paul, 2009), and less reliant on government (Kay, Shepherd, Blatz & Chua, 2010). See also Gregory Paul’s further work related to his Successful Societies Scale. Consider, also, that strong religiosity is linked to more conservative political views (Altemeyer, 2006). Additionally, note that Muslim families are almost twice as likely to have small children as the general population (whilst Muslims are 4.8% of the overall population, 8.1% of all school-age children are Muslim). There is also a very strong link between parenting and conservative views (Altemeyer, 2006; Hohman, 2015), which must link, at least in part, to the high stress and uncertainty mentioned above.

One can expect certain behaviours in line with heightened religiosity; we see it in the US, but we’ve been (mostly) spared it here in the UK. According to the survey, 18% of British Muslims sympathize with violence against those who mock the prophet. There’s that word “sympathize” again. Though how you sympathize with violence, I’m not sure. You might sympathize with people who commit violence in the service of fighting injustice… but another question arises: how did the person answering the question perceive “mockery”? Whilst we’re noticing the religiousness of Muslims, it’s appropriate to point out that the blasphemy law was only struck from the books, in the UK, in 2008. How quickly we forget, and get self-righteous about our newly enlightened position.

Speaking of hypocrisy and stereotypes, Martin Boon tells us that only two-thirds of Muslims condemn stoning for adultery, compared to nearly all members of the general British public. This, of course, is a problem. Meanwhile, the 40+% of the population who claim to adhere to some form of Christianity believe in the importance of a book that advocates the exact same thing. Any instance of this actually being carried out in the UK lies even further back in history than the abolition of the blasphemy law. How fortunate! Unfortunately, section 54 of the Coroners and Justice Act 2009, which came into law in October 2010, allows infidelity to be used as a defence for murder (see Horder & Fitz-Gibbon, 2015, for a discussion of the impact of this law). So, again, where’s the vast chasm of differing attitudes?

The tone of the presentation gets worse at 25:55:

“It’s clear that I, and many others involved in the policy-making field, just got the aspirations of British Muslims wrong. Our mistake was to imagine that because historically other minority communities – Hindus and Sikhs, for example – had gradually moved to adopt some of the behaviours of the majority, that Muslims would follow the same pattern. But our survey suggests a significant number of British Muslims don’t want to change, and don’t want to move to adopt the behaviours of the majority. … Many British Muslims would rather that non-Muslim Britain changed its ways to accommodate their way of life.”

There are so many things wrong with this. But they can all be summed up by pointing out that there are many significant differences between the histories of the relationships between the English and Hindus and Sikhs, as compared to that between the English and Muslims. English occupation of Hindu and Sikh territories mostly ended quite a while ago. And whilst the echoes of empire are doubtless still felt in those places, the impact of British colonialism, and British support of US programmes of interventionist politics in the Middle East and Pakistan, and an illegal war or two, may have more than a little influence on Muslim sympathies with their countrymen (and women), and may well have influence on who has fled those countries to come to Britain. I am not a historian, so I don’t want to get caught up in a long discussion about the history of the region that was home to so many of the Muslims that now live in the UK, whether in this generation, or generations past. I’m not a statistician, either, but I think the statistics that this documentary set out to present need some context and balance, so here goes…

British Hindus and Sikhs, combined, are half the total numbers of British Muslims, and so are less likely to have that many communities where they make up 20% or more of the local population. According to the 2011 census, half of all British Hindus live in London. In other words, 400,000 Hindus live in a city of around eight million. So in London, Hindus make up around 5% of the population, on a par with the overall population of Muslims in the UK.

Year   Hindu     Growth   Sikh      Growth   Muslim      Growth
1961   30,000             16,000             50,000
1971   138,000   360.0%   72,000    350.0%   226,000     352.0%
1981   278,000   101.4%   144,000   100.0%   553,000     144.7%
1991   397,000   42.8%    206,000   43.1%    950,000     71.8%
2001   559,000   40.8%    340,000   65.0%    1,600,000   68.4%
2011   817,000   46.2%    423,000   24.4%    2,707,000   69.2%

According to the 2011 census, the London boroughs of Tower Hamlets, Newham, Redbridge, Waltham Forest, Brent, Enfield, Ealing, and Haringey, along with the City of Westminster, were home to the majority of Muslims in London. These are the areas in London that were amongst the top 20 for the largest Muslim populations, per capita, as at 2011. As such this is a good shortlist for the communities selected by ICM for this survey. They total 581,997. Along with smaller populations around London, Muslims make up 12.4% of London’s population (and Londoners account for 40% of the UK’s Muslims). That is starkly different to the 50% of British Hindus who make up 5% of the London population. So why are we surprised that Muslims have a different social trajectory? Especially given the recent socio-psychological and geopolitical issues that relate to that movement, much of which is ongoing.

The implication from Phillips’ comment is that policy-makers were surprised when a population that is between two and six times the size of the reference population(s) didn’t behave in the same way. Muslim population growth is taking much longer to regress to the general population’s mean. Between 2001 and 2011 population growth in the UK was around 7%. Without Hindus, Muslims, and Sikhs, it drops to 4.8%. Not only is the Muslim population itself more than double that of the reference groups (see table below where “Both” is Hindu and Sikh combined), but Muslim population growth is almost double, too (33% of the Muslim population was aged 15 years or under in 2011, compared to 19% of the population as a whole). This makes a very strong case for educating our policymakers in statistics and demography or, I don’t know, actually using the ONS to interpret statistics for policy decisions.

Year   Both        Growth   Muslim      Growth
1961   46,000               50,000
1971   210,000     356.5%   226,000     352.0%
1981   422,000     101.0%   553,000     144.7%
1991   603,000     42.9%    950,000     71.8%
2001   899,000     49.1%    1,600,000   68.4%
2011   1,240,000   37.9%    2,707,000   69.2%
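As a check on the “almost double” claim, the decade-on-decade growth rates in the table above can be recomputed directly from the raw census counts. A minimal sketch (the counts are those given in the table; “Both” is Hindu and Sikh combined):

```python
# Census counts from the table above ("Both" = Hindu and Sikh combined).
both = {1961: 46_000, 1971: 210_000, 1981: 422_000,
        1991: 603_000, 2001: 899_000, 2011: 1_240_000}
muslim = {1961: 50_000, 1971: 226_000, 1981: 553_000,
          1991: 950_000, 2001: 1_600_000, 2011: 2_707_000}

def decade_growth(counts):
    """Percentage growth from each census to the next, keyed by the later year."""
    years = sorted(counts)
    return {later: round(100 * (counts[later] - counts[earlier]) / counts[earlier], 1)
            for earlier, later in zip(years, years[1:])}

print(decade_growth(both)[2011])    # 37.9
print(decade_growth(muslim)[2011])  # 69.2
```

For the 2001-2011 decade this gives 37.9% growth for Hindus and Sikhs combined against 69.2% for Muslims, which bears out the point: the Muslim population is growing at nearly twice the rate of the reference groups.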

A strong thread throughout this “documentary” is the concern about the lack of integration, and the ‘us vs. them’ mentality that arises from, and is strengthened by, this lack of integration (ironically not helped by the presentation to this point). But sometimes it was the very fact of adopting a British attitude that was demonised, a kind of ‘damned if you do, damned if you don’t’ dichotomy. At 33:11 we get this gem:

“…’live and let live’ is probably the most commonly accepted expression of British tolerance. Usually accompanied with a sort of ‘well, what can you do?’ shrug. But there is a problem with this ‘live and let live’ laissez-faire approach: our survey revealed that the more people hankered after a separate life, the more sympathetic they were to violence and extremism, and that really does matter. … When it came to exploring attitudes to violence, the survey asked British Muslims what actions they would take if they knew someone who was involved with supporting terrorism in Syria. Just a third, 34%, said they would report it to the police. There may be several reasons for not shopping would-be jihadists… one, of course, is that you might be sympathetic to their cause.”

I find this reasonable-sounding “several reasons”, followed by the less reasonable “one, of course, is that you might be sympathetic to their cause”, to be inflammatory, and the sort of thing one expects from Fox “News”. According to this version, your average Muslim is supposed to live and let live, except where it comes to other Muslims having sympathies for terrorism. That seems fair enough, on the face of it. But given the conflation of Muslims with terrorism in the popular conception (only increased by this very presentation), Muslims also have a pretty strong motivation to say nothing, and to hope that they’re wrong about the suspected terrorist sympathizer, so that the stereotype is not perpetuated and they are not personally associated with it. Additionally, and problematically for the validity of this survey, “terrorism IN Syria”, which is what the question asked about, is very different from terrorism in general. Syria has a dictator, and insurgents backed by numerous world powers. In Syria, terrorism is about the only means some people have to fight back, caught between the oppressive regime of their own government and the oppressive insurgence of someone else’s utopian dream of a Caliphate, supported by various vested interests in an international proxy war.

I have pointed out the use of the root word “sympathy” a number of times. Martin Boon addresses this, saying, “There is no right or wrong of measuring sentiment on the use of violence, but we decided to use the word sympathy – the expression of sympathy toward violent questions or sensitivities – as the best way of dealing with it, because it has been used in similar surveys.”

The primary meaning of sympathy, according to that most British of institutions, the Oxford English Dictionary, is “Feelings of pity and sorrow for someone else’s misfortune”; the secondary definition is “Understanding between people; common feeling”; but the survey seems to be using the third, and thus least common, meaning – “The state or fact of responding in a way similar or corresponding to an action elsewhere.” The usage may be a conflation of one or other of the first two with this last definition. Asking about “sympathy”, which by its common usage people take to mean “feelings of pity and sorrow for someone else’s misfortune”, and then using the third definition as a lens through which to interpret the results, is begging the question – not a good thing in a “rigorous survey.”


“What we do know from the survey is that Muslims who have sympathy for violence are significantly more likely to hold illiberal views on issues like gay rights and women’s equality than those who don’t. So what the survey is showing us is the emergence of what you might describe as a nation within the nation, where many hold very different values and behaviours from the majority. I’d say that hardly anybody wants to see that happen, but the question is: what are we going to do about it?” (37:05-37:37)


How do you hold a behaviour? A behaviour is, most often, a course of action predicated on a value. You can actually hold a value, you know, like ‘justice.’ This comment seems to be indicative of an underlying bias in the reporting of these results. Holding different values isn’t necessarily bad, and it’s not always easy to know what a person’s values are. But behaviour? Behaving differently is much easier to portray as bad – the use of the word here just seems to add to the bias apparent throughout this presentation. I’ve repeatedly illustrated that many other Britons hold similar views to Muslims, and in greater numbers; they’re just a little harder to pick on, demographically.

Yasmin Alibhai-Brown then makes an extremely important point, one that, in combination with my point about the socio-psychology and geopolitical realities of Muslim immigration, is the crux of the matter:

“Increasingly – and this really interests me – I’m getting young Muslims writing to me who hate the lives they’re living. They hate it. Some of them are gay. Some of them – men and women – have been forced into marriages. Some of them are lost, because they feel no affinity to anything or anybody, ‘cause they’ve never been allowed to. You know, it’s just this thing about being a Muslim – one of them said to me, ‘I am a Muslim, but I am so much more than that.’” (39:01-39:34)


Phillips’ narration continues:

“Those of us who are not Muslims shouldn’t be telling those who are how to live their lives or how to meet the needs of their faith. And nobody likes the old idea of assimilation where people abandon their cultural identity in order to blend in to some kind of mainstream. But that doesn’t mean we do nothing. Many people, including me, believe that we can create a set of policies that promote integration, make clear that there are some things on which the society will not compromise, and would support liberal trends in all parts of society. We call it a policy of active integration” (39:57-40:37).


I won’t directly quote the next segment, but it suggests that desegregating schools such that no ethnic group can be more than 50% of the roll would be a swift and decisive means to increasing everybody’s exposure to each other. This is “active integration,” and it seems to be working well in one school in Oldham where it was tried. Finally! Something I can wholeheartedly agree with! It only took 40 minutes.

So, Phillips closes with:

“Britain faces a huge challenge: adopting a policy of active integration may give rise to some ideas that make you, me, Muslims, non-Muslims, everyone, feel pretty uncomfortable. But what is our choice? We could cross our fingers, close our eyes, and hope that the segregation, the tensions, the periodic outrages, and the backlash that follows will, somehow, simply vanish. Or, we could seize the initiative; take steps to support those Muslims who do want their communities to change – in their attitudes towards women, towards lesbian and gay people and, indeed, towards violence. I know which of those I would choose. A policy of active integration must be the first step on the path towards those shared values that will come to define what it means to be British, for Muslims and non-Muslims alike.” (45:45-46:42)


I’m honestly flabbergasted that a documentary that spent so much time misrepresenting statistics about Muslims, ignoring the very real socio-psychological and geopolitical realities of the Muslim diaspora, nevertheless came to an appropriate conclusion. I get the strong impression that the entire thing was an exercise in promoting “active integration,” which is just a new name for an old idea (as far as I’m concerned). I’m aware that multi-culturalism has come to mean a kind of segregated coexistence of peoples (i.e. a kinder, gentler apartheid), and it is that multi-culturalism that the pundits tell us has failed. But the multi-culturalism I adhere to, which is no doubt coloured by my being raised in New Zealand, is the one where people share spaces, and their ideas and traditions with each other, as openly as their personalities allow. So, apparently, that is called Active Integration now, in the UK at least.

So, having dealt with significant issues of statistics, context, and bad journalism/sloppy presentation, my next post on this documentary, which will be much shorter, is going to simply look at the issue of integration from the point of view of developmental psychology, neurobiology, ideology, and social change. The general idea being that no-one will be hurt by having a better, more empathic understanding of what it’s like to change countries, and how that change impacts the individual.

The Road to Monotheism

Here’s the approximate script for a talk I did recently. Unfortunately, for one reason and another, the videoing did not happen, so my YouTube channel will have to languish that little bit longer.


Good evening, everyone. I’d just like to take the opportunity, at the outset, to thank the organisers for giving me the opportunity to speak with you all, tonight.

Speaking of the organisers, I’d like to quote the description of this talk, as it appears on Meet-Up, because it’s what Ed wrote (with some minor changes by me), as a paraphrase of the overly verbose description I originally sent him, and because I want to try and keep to it as closely as possible:

“Belief in the divine is widespread across many cultures and this may be because the belief reflects reality. An atheist thus needs to explain why, from their perspective, belief in the divine has arisen erroneously.”

The description goes on to say that I “will illustrate how one very important social-cognitive skill gives rise to empathy (and thus morality?). From this [I] will then seek to explain the experience of the divine as an attempt to grapple with the moral problems of an increasingly large social environment, and the non-moral problems of the general environment.”

In keeping with this description, I will briefly describe the social-cognitive skill mentioned, along with some related psychology that helps put it in perspective. I will then detail how this primary skill relates to both morality and belief in gods. After a few closing comments, we’ll launch into the question and answer session.


Social-Cognitive Ability:

First off, let’s get to grips with this social-cognitive ability that is the lynchpin for this discussion. It was originally called Theory of Mind, back in 1978, when it was first discussed (with regard to chimpanzees). Since 1978 it has been called many other things: Mentalizing, Mind-Reading (the non-magical kind), Folk Psychology, and The Intentional Stance. These different names are all quite descriptive of what this ability entails…


Theory of Mind – Precursors

Reading facial expression/body posture:

Many of us habitually adopt the same facial expression and/or body posture as the person we’re talking to, so as to better understand their meaning, or to empathize with them. Notice that people who do this in an unaffected way often make us feel more comfortable speaking with them. The almost unavoidable feeling is that they’re ‘our kind of people’ (of course this may depend on what your threshold for ‘unaffected’ is).

Now, consider the fact that people who hold a pen clenched between their teeth, thereby adopting many of the facial characteristics of a smile, will rate cartoons that they then read as funnier than if not doing so, and much more so than if holding a pen between pursed lips, and thus adopting a frown. This is an application of the facial feedback hypothesis, which suggests that some of what we know about the status of our bodies comes from noticing the body itself. The feedback merely reinforces the feeling, rather than being its initiating cause.

However, notice that once we understand our own facial expressions and what they mean, we can better notice and understand them in others. By a series of steps, you can abstract from a particular facial expression or bodily posture what a person is likely to be thinking, especially if you are in the same context as them. From here you can begin to anticipate an individual’s behaviour. You can augment this ability using knowledge you have about the person based on:

  • how well you know them/how similar they are to you, or;
  • whether they conform to a stereotype you have for them.


Theory of Mind – Peculiarities

Reading actions in the environment:

As a species, we are embedded in a highly social environment. In many respects, understanding each other has become more important than understanding the general environment in which the social environment itself is embedded. Other people are a more immediate danger to us – they might steal our food, or aggress against us. In the general environment, seasons are more predictable, and earthquakes and thunderstorms are less frequent; in the modern general environment, bear attacks are few and far between.

As such, once this skill was developed, it made pragmatic sense for it to lead to assumptions about people we know, or know of. So when something unexpected but positive occurs in our environment we ascribe benign intent, i.e. something done by someone we know, someone who is good; but when something unexpected but negative occurs in our environment we ascribe malign intent, i.e. something done by someone we don’t know, someone who is bad. You will recognise the Us vs. Them mentality in this, and more.

Notice, here, that a physical body is no longer the basis for our theorizing; we have climbed a few rungs on the ladder of abstraction. Now disembodied “behaviour” is the basis for our guesses about intent in our environment. Indeed, actual Theory of Mind is triggered by surprising or unexpected occurrences (behaviour in our environment), not explicitly human occurrences. This is predicated on the simple expedient that a false positive is less dangerous than a false negative: thinking the shadow in the bushes is a potential assassin or burglar is a lot safer than thinking that the actual assassin or burglar in the bushes is just a shadow.

To give two quick examples of how readily we engage Theory of Mind, and how quickly this leads to the anthropomorphization of non-human entities, consider computers and cars:

If your computer is working fine, it’s just a computer. However, if it glitches or crashes it’s a “stupid” computer. “Stupid”? Really?

If you’ve ever had the unabashed joy of owning an old car, how likely is it that you referred to how it runs as “temperamental”? Indeed, I’d be prepared to bet that the cars that have been given names are the ones that don’t work that well, or are owned by people who don’t really understand cars (making almost all “behaviour” unexpected). Even the lads on Top Gear predictably finish their reviews of cars that are “mad” or “bonkers” with, “…but you know what? I absolutely love it!”

Some might complain that these are things of human design, so of course they are the focus of Theory of Mind. OK, fair enough. What about the weather? The weather, like the second-hand car can be temperamental, too. Storms can be violent. Weather can be sultry (admittedly I’m cheating there… sultry is a description of weather applied to people, illustrating that the anthropomorphization has come full circle).

Just so we move away from any idea that empathy is all things good and pure and perfect, by default, let’s not ignore the fact that understanding someone else’s pain leads to the ability to take pleasure in someone else’s pain BECAUSE it is not your own pain.


Theory of Mind – Factors

Other aspects of human psychology:

System 1/System 2:

We have a skill that started off as the ability to read physical, which is to say facial and bodily, cues in the environment, but over time this skill became more abstract, able to read symbols in the environment detached from human bodies… and requiring more cognitive effort. If you’ve read Daniel Kahneman’s ‘Thinking, Fast and Slow’ you probably know where I’m going with this – some of the skill is of ‘System 1’, which is to say intuitive, unreasoned, fast, effortless, but prone to errors. ‘System 2’ is more cognitive, reasoned, slow, effortful, and less prone to errors (and indeed seeks to correct System 1), but sometimes relies on the output of System 1, and thus can perpetuate errors.


Executive Functions:

One of those errors is, as I mentioned, being triggered by surprising stimuli, rather than actual human stimuli. This is not a failure of Theory of Mind, per se, but a failure of Executive Functions. According to a classic paper on the topic, Executive Functions are “general-purpose control mechanisms that modulate the operation of various cognitive subprocesses and thereby regulate the dynamics of human cognition” (Miyake et al., 2000, p. 501). These include the abilities to:

  1. shift between sets and domains of data;
  2. update and monitor information in multiple domains, and;
  3. inhibit inappropriate responses within and across domains

So, you can see that when Theory of Mind is triggered by events in the general environment, rather than in the social environment, it is a failure of all three. It should come as no surprise that Executive Functions, being complex and new in the scheme of things, are amongst the first to be negatively impacted by primal responses such as fear or anxiety. One of the benefits of being a social species is the greater protection from sources of fear, and thus anxiety, by being in a tribe… but this leads to other sources of fear and anxiety.


Dunbar’s number:

How well you know someone comes down to how much time you spend with them, in the social environment – thus, in general, you know your family and close friends, your “tribe”, best of all. There is a limit to how many people you can know well – and there are several different ways to tackle this issue. And it IS an issue, because how well you know someone defines, at least to some extent, how you deploy your Theory of Mind. With people you know well (to whom you have more than 10,000 hours of exposure, say) you are an expert in their likely response to situations. This knowledge has become mostly intuitive.

According to Robin Dunbar, the maximum number of people you can know well is around 150. This is predicated on the amount of neocortex humans have as a ratio to the rest of their brain, as compared to other primates. What Dunbar found was that the smaller the ratio, the smaller the “tribe” that the primate is naturally found in. Brain size relates to how many tribe members it is feasible to groom in order to maintain social closeness, and to the brain-space required to maintain the information gained from grooming. Note, here, that we’re back to the physical precursor to Theory of Mind, whereas humans use symbolic language (a cognitive skill) to maintain social relationships, sometimes over great distance.

Some people disagree with Dunbar’s Number, and alternatives have been proposed, such as the Bernard–Killworth number, which is 1.5-2 times Dunbar’s. I’m not especially concerned about which number we agree is correct, or even whether these numbers have any meaning at all, as I will explain in a moment. The difference between the numbers may just be down to the strength of the social ties represented (and this may be one means of managing larger tribes). Indeed, Dunbar’s work looked at groupings in modern Western culture (such as military corps), but also historical anthropological work on Amazonian, New Guinean, and African tribes. It is interesting to note that tribes that were larger than 150 were so because of the number of children living amongst them – these tribes often split as the children reached adulthood.


Psychological Distancing:

Whether or not either of these numbers is relevant, one additional thing to note is that we can derive something very similar, though less specific, by noticing three aspects of our relationships with people in our tribe/social group:

  • how physically and psychologically similar to us someone is;
  • how physically close to someone we are;
  • how well we know someone.

The phrase ‘out of sight, out of mind’ is disturbingly true. Daniel Kahneman also presents the idea of ‘What You See Is All There Is’. Physical distance has a very real effect on our relationships with people. We have to work harder to maintain long-distance relationships with people we ostensibly care about, and we are less likely to strike up relationships with people that we might otherwise care about, if only they were closer. In other words, distance can, passively or actively, affect the way we engage with people. We can, actively or passively, replicate the impact of physical distance with psychological distance. If you don’t consider the person right next to you to be important, they could as easily be 1000 miles away. And if your partner or lover is 1000 miles away, they could as easily be by your side. Things like similarity to yourself, whether physical or mental, impact the effort you will make to bring someone psychologically closer.



Stereotypes are a means by which we depict a group of people, usually people that are at some physical distance, and thus with whom our interactions are only fleeting, and stereotypes tend to focus on difference, not similarity. Stereotypes are incredibly useful cognitive tools and, despite the bad press they get, are often highly accurate depictions of groups (I will add the caveat that this does depend upon whether the source of the stereotype is ideologically driven).

The primary problem with stereotypes is where an interaction moves from being at the group level to being at the individual level, and the degree to which the stereotype assumptions are held, despite contrary evidence from the individual. The continued holding of a stereotype about a person with whom you are directly interacting is a form of psychological distancing.

Despite, or maybe, in some cases, because of, their utility in creating distance, stereotypes are also used as a means to classify oneself to oneself. Notice that people who rely on too few self-stereotypes are able to distance themselves from their own pretty abominable behaviour (for example, people who define themselves by their gender, their race, their country of birth, and so forth, but little else).


So this brings us to morality. But first let me briefly recap what I’ve said about Theory of Mind…


Recap on Theory of Mind:

  1. Theory of Mind is predicated on the ability to read facial expressions and body posture.
  2. Adopting other people’s facial expressions and body posture will often make what they’re saying easier to understand… and impact our own thoughts and feelings. This is the basis for empathy, and by extension, morality (or at least moral discourse).
  3. With any ability, as we become more practiced (as an individual, as a culture, as a species), the skill relies on less explicit content and becomes more abstract.
  4. In the case of Theory of Mind, this includes being able to discern motive from an action that is disembodied in space and time.
  5. Theory of Mind can be triggered by unexpected events in the general environment, not just the social environment, and can be applied to non-people, such as stereotypes, and non-assassins, such as shadows in the bushes.



So, to head off (or possibly create) discussion on my use of the word morality just then, here’s a definition from the Stanford Encyclopedia of Philosophy:

The term “morality” can be used either

  1. descriptively to refer to some codes of conduct put forward by a society or,
    1. some other group, such as a religion, or
    2. accepted by an individual for her own behavior or
  2. normatively to refer to a code of conduct that, given specified conditions, would be put forward by all rational persons.

I think it would be uncontroversial to say that codes of conduct put forward by a society, or a religion, as accepted by an individual, are JUST that society’s, or that religion’s documented code for a given specified condition, or, more often, a command that is supposed to be relevant across all conditions (e.g. Thou Shalt/Shalt Not). The individual, on the other hand, may be swayed by their society or religion, and/or they may be a rational person who normatively adopts a code of conduct under specified conditions (such as those not contemplated by a religion’s or society’s code).

So, what I would like to do now is discuss, in very general terms, the types of societies that humans have been part of, and the gods that those societies gave rise to, and the impact on moral discourse.


The Evolution of Monotheism:

Humans evolved the skill of Theory of Mind in an environment of small tribes, in which everybody knew everybody they interacted with regularly. So much so that they generally interacted with people they were related to. Their tribe was like them, in every plausible way. By contrast, they could only have folk knowledge about their general environment. So what phenomena are going to be both extremely important to understand, and almost constantly surprising, to such people?

The animals, the trees, indeed the very earth.

If you use your Theory of Mind – your abstracted self – to try to understand these things, you will necessarily imbue them with your ‘self’, your “spirit”. If in a state of fear, your Executive Functions will fail to remove the social aspect of that cognitive process, so the information will still “feel” social. So I’ve just described animism: animals and trees and the like with individual spirits, as well as the spirit of the forest, and maybe an overarching sky god or world spirit.

Notice that animist societies are never agricultural. When a society becomes agrarian, it signals that its people have come to understand enough about their environment that aspects of it are less surprising. Successful agriculture and domestication of animals comes with a population explosion and specialization within that population – farmers, shepherds, hunters, and so on. Specialization leads to power structures, and thus politics and hierarchy. Gods, then, become more overtly anthropomorphized, with links to distinct animals and natural phenomena and, most importantly, they start to have their own hierarchy.

Note, here, that highly successful agricultural civilizations got large enough, quickly enough, that social inertia stopped some of the gods they worshipped, as a society, from progressing beyond being part animal and part human. Examples of this, not surprisingly, are the fertile river deltas, such as the Nile and the Indus valley, which gave rise to the Egyptian and Hindu pantheons.

Civilisations that were a little slower to flourish, or whose geography was less conducive to large, more homogeneous populations, instead have humanoid pantheons (though many of these gods have animal forms); foremost examples in the West being Greece and Rome, but also the Scandinavian, Slavic, Sami, and Celtic pantheons. It seems that flood-plains, being broad, flat expanses, lend themselves to greater inter-personal connection, more frequent (and less violent) interaction, and thus greater tolerance of difference, and gods that reflect the scope of human experience.

Clearly, the fertile region (if not a floodplain itself) that bucks this trend, is the Levant. Here civilizations rose and fell with various iterations of humanoid gods, but monotheism has been a recurring theme, from Zoroastrianism to the Abrahamic faiths. So the question is: Why?

The answer is probably going to sound very familiar, given modern woes, particularly after I mentioned self-stereotypes based on race and country of birth: Immigration and Trade.

The Levant is, very approximately, the crossroads between Africa, Europe, and Asia, and there is evidence that this has been the case for millennia (in fact, almost constant for the last 1.8 million years, with evolutionarily relevant migrations as recent as 40,000 years ago). So the indigenous people of that region were constantly assailed by people from Africa, Asia, and Europe; all with their different epistemic commitments, and their desert gods, hill gods… and iron chariots.


From polytheism to monotheism:

So how do we get from the expansive and inclusive ideas of polytheism, and its subtypes, henotheism and monolatrism, to monotheism?

Polytheism is the belief in multiple gods. Some are gods of certain human activities, with their related moral codes. Others are gods of things in the environment (usually ones that humans rely on happening, or rely on not happening – from harvests to hurricanes – as such they are gods of human interaction with the general environment).

  • Henotheism is where each individual worships one particular god, of the multitude available, whilst accepting that there is a multitude, and adopting the moral code relevant to that god.
  • Monolatrism takes this idea, and has all people worshiping a high god, whilst still worshiping their preferred lesser deity, thereby bringing moral discourse into some kind of unity, whilst still having moral preferences, as exemplified by other gods, in the discussion.

Monotheism, of course, does away with all of the other gods, worshipping a single god, and ascribing all morality to that god.


Now let’s draw a comparison with stereotypes (please excuse the dreadful coinages, which I use only to make the comparison explicit):

If Polytypism is the idea that there are humans who engage in the multiple activities available to them in society:

  • Then Henotypism is the idea that you engage in one of those activities, whilst accepting that there is a multitude.
  • And Monolatypism is the idea that you engage in one activity, but are, simultaneously, part of some unifying group, or society.

Thus, Monotypism is the idea that your unifying group is more important than anything else.

Recall that I said that stereotypes are predicated on Theory of Mind and the recognition of difference.

So, does Monotheism bear comparison to a monotypism like nationalism?

The Biblical God hardened Pharaoh’s heart, and then killed the sons of Egypt, he condoned and/or aided in the destruction of the Canaanites, the Amalekites and the Moabites (sometimes to the point of directing the killing of women, children, and livestock). Egypt, Canaan, Amalek, and Moab, were nations apart from Israel.



The conclusion that I want you to draw from this is that stereotypes are, generally, explicit constructs used to describe other people by virtue of their difference from you (or your theory of your mind), and that gods are generally implicit constructs used to describe your people by virtue of their similarities to you (or your theory of minds). The construct under which people unite and differentiate themselves from others most readily is race and/or nation.

What I am saying is that any god is a metaphorical construct used to describe a group of people in shorthand. After all, if you wanted to describe your tribe/race/nation you would want to highlight its gifts, such as the goodness, wisdom, and strength of its people. Monotheism is explicitly just such a national grouping; what has happened, however, is that the Abrahamic God has become detached from Israel – because omni-benevolence, omniscience, and omnipotence were such generic descriptions on their own – and so can be ported into a Christian American landscape, or an Islamic caliphate.

To illustrate that one’s religion is just an abstraction from one’s country (itself an abstraction from tribe), at least conceptually, consider sedition, rebellion, and treason (or insurrection), and notice their relationship to heresy, blasphemy, and apostasy:

  • Sedition – Acts intended to promote disorder
  • Rebellion – Resisting authority
  • Treason – Betraying or attempting to overthrow one’s government
  • Insurrection – An uprising or revolt.


  • Heresy – Opinion contrary to orthodox doctrine.
  • Blasphemy – Speaking sacrilegiously about God.
  • Apostasy – The renunciation of a religious belief.


In this light consider the Christian martyrs. Were they spreading the good news to other countries, or were they encouraging the residents of those countries to adopt the cultural norms of an enemy state?

If you accept my postulate, then significant arguments between the Christian and the Skeptic dissolve, foremost of which is the argument on the source of human morality… if God is a metaphor for some given group of people, then god and humanity are BOTH the source of a morality, because they are one and the same thing considered in different ways.

Christianity, as a step on from Judaism, takes the non-human thing-ness of God and fleshes out the stereotype, creating an idealized person. As such many denominations aspire to be Christ-like. The deficiency of reliance on a stereotype is evident in the bigotry of certain denominations; a stereotype can’t be both male and female, for example.

As such, and in this light, consider Galatians 3:28

There is neither Jew nor Gentile, neither slave nor free, nor is there male and female, for you are all one in Christ Jesus.

The Big Questions: Does evidence undermine religion?

As mentioned previously, I attended the filming of The Big Questions on Sunday, January 11th. The topic for this episode was, ‘Does evidence undermine religion?’ I’ve done a preamble explaining the different degrees of evidence; now I’m going to illustrate how this differentiation impacts theistic claims. I’m going to pick on the theists, for the most part.

I’m going to follow the order of the programme, which you can view here, and I’ll supply the time-stamps for each bit. Where I quote the panelists I will edit out irrelevancies… and I’ll resist the urge to point out faulty English, because at least one of the panelists has English as a second language.

Nicky Campbell says, “…you [Robert Feather] believe that you’ve actually discovered the mountain where the commandments were meant to have been handed down.”

Robert Feather replies, “Probably, yes. The exact mountain, in fact.”

So, we start off with equivocation. He started with the reasonable “Probably, yes”, but immediately switched to the overblown “The exact mountain, in fact.” So we’re off to a bad start. The main thrust of Feather’s argument seems to be that because he found a mountain that conforms to some Biblical descriptions, Moses was real, and, by extension, the Bible is true. My response to that: I highly recommend a book called ‘The Historian’, which goes into great detail, both historical and more current, about Istanbul and Budapest, and therefore Dracula is real.

We then have a lesson in what a ‘degal’ is. What’s interesting here is that Feather claims that this number was mistranslated as 1000, when it is a much lower number. He plumps for around 50-60. According to Strong’s Concordance a ‘degal’ is a banner or a standard. Given that the Exodus is effectively an origin story, and given the numerous other instances of hyperbole when translating these stories (I recommend looking up how big Solomon’s temple actually was), would it not be easiest to suggest that each banner was a family with a man at its head? That makes any Exodus 605 families (of between two and six people, say), and that might explain a lack of archaeological evidence for any Exodus whatsoever. Alternatively, could this be a story about 605 families who were followers of Akhenaten (the first Egyptian to be a monotheist), who fled Egypt when Akhenaten died, to the relative safety of the outskirts of the Empire (i.e. Canaan)? I only mention this because Feather has advocated for the Akhenaten thesis in the past (see his book The Mystery of the Copper Scroll of Qumran: The Essene Record of the Treasure of Akhenaten). Or, either of these could be the case, but involving 605 individual men who went on to start their own families… which is more plausible still. Or, and this seems even more likely, it’s just a story.

Is it now that I point to the recent study that found that children who are presented with Biblical stories as historical facts are less able to distinguish truth from fiction when presented with non-Biblical fairy stories?

Professor Stavrakopoulou (hereafter, Francesca) corrects a number of Feather’s overstatements (e.g. “the exact mountain”), and in response Feather goes on the offensive. Interestingly, he claims that Francesca has been “overtaken by a flood of archaeological and textual information.” He then proceeds to ask whether she is aware of Beno Rothenberg. This is an interesting question, and one that I would turn back on Feather. Beno Rothenberg died in 2012. Furthermore, aside from a couple of papers on metallurgy, the bulk of Rothenberg’s work is from last century. Is Feather suggesting that information that is more than a decade old can be characterized as current? Feather also makes a big deal about the Merneptah Stele. Is this part of his “flood of archaeological and textual information” by which Francesca has been overtaken? The Merneptah Stele was uncovered in 1896, and the translation, which questionably includes the key word, Israel, was from the following year (Petrie & Spiegelberg, 1897).

At this point the conversation shifted to a Jewish interpretation, courtesy of Rabbi Miriam Berger. There’s really not much to say, here. The Rabbi accepts that the story is likely metaphorical (though I would go further and say that Yahweh is, too), and says that she would get shivers if some element of the story could be shown to be rooted in a real-world location, but that this is not necessary for the identity that her faith provides. Great. Perfectly sensible.

Next up is Doctor Radica Antic. Now I am going to admit that I found this man intensely annoying, and this will likely come out in what I say. Indeed, I’ll get my complaint off my chest now. This man is a doctor? Of what? Errant nonsense and condescension?

“To make archaeology the measure of all truth is so wrong. And it simply, it does not stand…”

He says a lot more, but I want to get to grips with the above, because this underscores my complaint about definitions of evidence. First, Antic sets up a strawman: no one has claimed that archaeology is the fount of all knowledge. Indeed, claims to a single source of all truth are more commonly made by theism. Richard Carrier, a well-known historian, places history (his own field), as a means to knowledge, behind reason, science, and experience, not least because establishing historical fact requires reason, science, and experience. So Antic is right, archaeology is not the measure of all truth, but as he was the only one making this claim, he has merely demolished an argument of his own invention.

“…To impose atheistic interpretation on the Biblical text it would be like imposing Biblical or theistic understanding on some atheistic work.”

Antic, using “atheistic” to mean ‘scientific and/or materialist’, fails to note that the Bible IS making claims about the nature of reality. Claims that we know to be false. Whether you read the creation as seven actual days (despite the sun not being present until the fourth), or seven epochs, the story itself is still unequivocally wrong. The Noachian flood, as a genetic bottleneck, just makes it more wrong. Indeed, without the claims in Genesis there is no need for the New Testament. If there is no Adam and Eve, and no Fall, then there is no need for Atonement in the person of Jesus Christ, end of story (quite literally).

“First of all, I believe there is God. And IF there is God…”

It amuses me how often theists say ‘IF’, only to assume the conclusion in everything they go on to say. No, let’s stop at ‘If there is God’ and point out that you don’t know if there is, you can’t prove that there is, and your entire worldview (as shown above) is predicated on that IF. I hasten to remind Antic of Matthew 7:26 (And everyone who hears these words of mine and does not do them will be like a foolish man who built his house on the sand). “IF” is linguistic sand, whereas science’s ‘This is what we know, so far’ is, whilst maybe not rock, certainly the driving of piles down into the rock through the ‘IF’ sand.

“The atheistic community, they have no answer, how the universe-cosmos came into existence. Not at all, they are telling us that something comes from nothing. This is an offence to the common sense…”

By “atheistic community” Antic means the ‘scientific community’, so we have a conflation/red herring here, again, which now becomes the basis of a genetic fallacy. He prefers that which comes to us from common sense over careful observation. Common sense has an incredibly poor record for delivering truth (as Adam Rutherford says in response). Indeed, most religious texts are written as common sense for their region, and then stray into global or universal concerns. But let’s deal with this argumentum ad populum (appeal to popularity) on this specific topic…

First, God also comes from nothing, or has always existed, which amounts to the same thing. So there is fundamentally no difference between the two positions. Of course, on a deeper reading, many scientists have a different definition of nothing (e.g. Lawrence Krauss). Second, an appeal to common sense doesn’t work, because most religions (including Christianity) agree with science that we only know of one universe for certain, the one we’re in. In order for something to be common sense we need to have been in a position to witness it repeatedly, and to have deduced the correct response accordingly, so common sense simply can’t provide us with an answer (Arif Ahmed makes much the same point, but with reference to statistical probability). Those who clapped at Antic’s comments here were applauding willful ignorance, and should be ashamed of themselves.

“…200 constants in the Universe… and if only one, if only one of these constants is changed, nothing would exist… then how did life started. Dawkins is telling us pure, sheer chance. …if there is God, then he speaks, and he speaks also in the Bible, then your questions about Noah’s Ark… because there are miracles all around us.”

As is the case with many theists, Antic treats discussion of biological evolution as equivalent to cosmic evolution (an equivocation), presumably because the Bible considers these events in the same chapter and/or because they both use the word evolution. They are, of course, not alike, and have around ten billion years separating them (rather than seven days/epochs in which to occur).

That being said, Antic unwittingly provides exactly the same definition for miracle that I do: a lack of understanding of the underlying mechanics makes something seem ‘amazing and inexplicable’. Once you have even a slightly better understanding of what’s going on, then you lose the ‘inexplicable’ and are left with just ‘amazing’. Indeed, it is only through believing the creationist account, and the flood, that you can believe in miracles. Once you lose belief in these fairy stories everything becomes more explicable, and thus less miraculous.

This, by the way, is the main claim that I am making with my own research, that religion is self-perpetuating, in that it encourages belief in easy-to-believe stories, but that belief makes the world a scarier and more surprising place, and fear (or at least anxiety) is a fundamental driver of religious belief.

This is the bit that makes me think that Antic is a condescending fool: in response to Adam’s perfectly sensible and (importantly) circumspect comments, he said, “Of course you don’t know [how the universe began] [more cheering from the peanut gallery].” You don’t know either, Antic. You believe some ill-founded guesswork from 3000 years ago, which was superseded, at about the same time, by the nascent science of the Greeks, which you choose to ignore. I suspect that Antic is, intentionally or otherwise, unaware of the sheer volume of work that has been done in science. Whilst science can’t discount the possibility of a god, it can certainly discount the God of the Bible.

And now we get onto Hamza Tzortzis, another self-impressed apologist, but this time for Islam (I want to point out that I very much enjoyed listening to Maajid Nawaz on the previous episode, before anyone jumps up and accuses me of Islamophobia). Tzortzis turns to (I presume) Adam Rutherford and, in a manner that you would adopt if speaking to a toddler (i.e. not humble), claims that we need to have “epistemic humility.” I’m going to go ahead and guess that he recently discovered The Stanford Encyclopedia of Philosophy and read the bit about ‘Wisdom’.

“The point is, is that scientists are limited to the observations they have at hand, there can be a future observation that denies previous conclusions. It’s in flux. This is the beauty of science.”

This is actually an excellent point, at least on the surface (and before he goes on to ruin it with what else he has to say). The implication, in a deeper reading, is that the entirety of science can be overturned by a new observation. This is incorrect. Science, like the human mind, is a recursive process; as such, it is hugely common to be garden-pathed by a particular line of enquiry or theorising. (An example being the geocentric model, and the heliocentric (Copernican) revolution.) However, once findings get to a certain point there is nothing that can overturn them: for example, Einstein’s theories of relativity refined Newton’s; they didn’t overturn them. Certainly there is no finding that will make any creationist account true (if anyone wants to challenge me on that I’ll happily explain why, but it’s beyond the scope of this writing). Likewise, no finding will suddenly show that the Qur’an’s discussion of embryology is true. Indeed, the Qur’an relied upon a subset of what was known from the work of Galen and others, and science has moved beyond that. Scientific knowledge has only improved, and most new findings refine existing theories rather than overturning them. Most of those that were overturned outright (like geocentrism and the theory of female sperm) were based in religious views… which is bad news for what Tzortzis is trying to achieve with his (faux) epistemic humility.

“Are you going to use science as a yardstick for absolute truth? No! No sincere scientist would say that because we’re bound to change. We’re limited human beings, one day we look at the horizon, we think it’s flat. Next minute learn about maths, and know it’s round. So the point is, let’s have epistemic humility.”

No scientist would lay claim to absolute truth, because that’s a religious claim. “Epistemic humility” is built into science, not least because science doesn’t lay claim to a personal relationship with the creator of the universe. Note, also, how Tzortzis makes my point by talking about the flat horizon, the discovery of maths, and the knowledge that the earth is round. Does he honestly think THAT observation is going to be overturned?

“The issue is this, why are we imprisoned, from an epistemic perspective? Why is it only science? What about philosophy, reason, maths, logic, other forms to truth? Because what we’ve done we’re presuming a scientism here, and scientism is limited.”

Right. Hamza is sitting opposite two philosophers, Peter Cave and Arif Ahmed, both of whom adopt knowledge from science to inform their philosophizing. There are philosophers of the various sciences who play a very active role in the fields they philosophize about, and whose input into those sciences is significant. Of course, reason, maths, and logic actually form the very basis of science, so Tzortzis’s plea that these be used instead of science makes no sense; they are not alternatives to it. Various theisms have attempted to employ these things to strengthen their arguments – that seldom works out well:

“Reason is a whore, the greatest enemy that faith has; it never comes to the aid of spiritual things, but more frequently than not struggles against the divine Word, treating with contempt all that emanates from God.”
― Martin Luther

“Credo quia absurdum” (paraphrase)

“I believe because it is absurd.”
― Tertullian

Now we have Vince Vitale, jumping in after a direct question from Ahmed to Antic.

“In terms of the explosive force of the Big Bang, if you just conceptualise it… it’s the slightest bit stronger, it literally disperses into thin air. …The slightest bit weaker and it all collapses back in on itself.”

I find it amusing that this guy encourages us to conceptualise it, and then fails to conceptualise it himself. For starters, the contents of the Big Bang will NOT “disperse into thin air”, because there is no “thin air” for it to disperse into (and ‘void’ has the benefit of being only one word, not two). This illustrates the anthropocentric nature of theism. A devout theist seems only able to conceptualize things from a human perspective (and hence an anthropomorphic God, in shape of body and/or shape of thought). It is outside the grasp of their imagination to remove the human element from the picture and to act purely as a passive observer. What’s even more absurd is that Vitale thinks he’s delivered a knock-out argument as to why the universe must be finely tuned, but instead he’s delivered a knock-out argument as to why neither of the universes that he’s described is the one we live in. There may well have been prior iterations of this universe (or other regions of spacetime), whether due to a Big Bounce or to Lee Smolin’s cosmological natural selection, that fit his description… but if there were, neither we, nor anything like us, would live in them.

Then Antic rejoins the conversation:

“If ever, if ever there are enough evidences [to prove evolution true]… I would lose my faith in God, yes. If there is enough evidences, but there are no evidences.”

Antic is incorrect, for two reasons:

  1. there are mountains of evidence, he has just systematically avoided being presented with it, or paying attention when presented with it, or, and this seems most likely, doesn’t understand it.
  2. the evidence merely supports the theory of evolution, evolution itself is a fact, for which the theory is our best explanation.

So Antic is ignoring the fact, the theory, AND the evidence for that theory. I think it safe to call that ignorance. And in this forum he is arguing that his ignorance is better than someone else’s hard-won knowledge. Of course, because that knowledge doesn’t come from personal experience (of evolution itself), or from hearing a story about someone else’s personal experience, it does not count as evidence to him. Recall the three definitions of ‘evident’ I gave previously, and notice how Antic is relying solely on the weakest one.

Antic then goes on to show how truly, gob-smackingly hypocritical he is in his response to Adam Rutherford:

“More humbleness would help you… what we know is very, very little.”


Is this the face of a humble man?

The lack of humility in this demand for humility is astonishing. His lack of self-awareness about his lack of humility is saddening (and this is a common problem with fervent and fundamentalist theists). There can be nothing more humble than asking the universe for the answer, and actually listening to the reply, and Antic would do well to remember this. Instead, all of his knowledge, or rather, beliefs, have been gained from listening to other people, and believing them. This might make him a half-decent friend, but a lousy scientist or philosopher.

Here, Vince Vitale scores an excellent own goal, and doesn’t realize it.

“His [audience member’s] point is that… if evolution is the sole guiding principle of human development, that is aimed at survival, not at truth. And if that’s the case, it sounds a bit like we get on the scale and think it should tell us the time. Why should we believe that our thoughts, our beliefs…”

And the audience member clarifies:

“I think my point was missed out. Um, look, you can believe in evolution, and believe in a creator, there’s not contradiction between the two. That wasn’t my point. My point was purely from an atheistic paradigm, right, there is no God, there is no intelligence behind this universe… assuming Darwinian evolution is true (even though science is based on induction – it can be wrong)… assuming it’s true, how can you trust your mind, when your mind is a product of a blind evolutionary process, which doesn’t have an end goal. If the end goal is pure rationality, then we should have the same rationality as…”

I suppose we should be grateful that Vince characterizes evolution somewhat accurately, although that scale/clock metaphor is just bizarre. What he and the audience member in question fail to observe is that we are getting better at detecting actual truth because truth is ultimately better for our survival (as Ahmed says). Humans have established themselves as flexible survivalists with a pragmatic understanding of their surroundings. We assume, for example, that the thing we half see from the corner of our eye is dangerous, and we turn to confirm or disconfirm that. As such, our basest urges and reflexes are indeed pragmatic and survival-oriented. But our meta-cognitive functions evolved to enable us to break deadlocks between two pragmatically equivalent drives.

We often hear of the fight/flight/freeze response. There are incredibly few cases where, when in danger, freezing is appropriate. It is reasonable to assume, therefore, that the two basic options are just fight or flight. Freezing occurs when the two options are deadlocked. Thanks to our evolved capacity for breaking this deadlock we now know (or at least we would, had we read the appropriate survival handbook) when it is appropriate to freeze, run, shin up a tree, make lots of noise and flap our hands around, and when it is appropriate to attack. Notice that I just listed five reactions to what would previously have been the three of fight, flight, or freeze, and that’s before we get to brandishing a flaming torch, flinging a spear, or shooting the threat between the eyes. That is what our intelligence is for, and that is, very roughly, how it works.

…and it’s at this point, about halfway through the episode, that I admit defeat, both on the basis of available time and of sheer mental exhaustion. The devout theists in this episode held the floor for longer than any of the pragmatic theists or atheists did (I apologise to Rabbi Miriam Berger and Professor Joan Taylor for making the distinction in that way, but it really was the only one I could think of). Whilst the theists held the floor they said a great deal, whilst also saying very little. And the atheists (in the main) spent more time correcting them than making their own points. As illustrated above, the theists consistently presented opinion as fact, denied any opposing facts presented to them by people who were in the right field of study to contradict them, and then demanded humility whilst displaying none.

What this episode illustrates is that people who are fundamentalist or fervent in a given religion are blind to their own shortcomings and deaf to contrary evidence. Those that are a little more open-minded are only so in a sophist fashion; they continue to argue the theistic point, but with a veneer of plausible-sounding philosophy and quasi-scientific language, and usually in a rehearsed fashion. Only the theists who treat their religion as metaphorical, or as an organizing framework for more esoteric thought, sound sensible. This seems to be because they also use science to organize their lives, a fact which allows them the luxury of considering their esoteric thoughts in the first place.

My research suggests that the monotheistic God is a construct derived from human social thought, as a reaction to the overwhelming number of people in our social world. This overwhelms the minds of some, and makes it impossible to view the world in anything but human terms. The switching off of the social module is no longer possible in this overwhelmed state. This process is reinforced by stories and mythologies from religion, not because of the stories themselves, but because in believing those stories, much more about the world comes as a surprise. Surprise leads to fear, or at least anxiety, and both reduce cognitive abilities. Anxiety is the equivalent of running on three out of four cylinders, semi-permanently, and fear is the equivalent of running on one or two cylinders, over short periods of time (this fact giving birth to the absurd trope that ‘there are no atheists in foxholes’).

Watching this episode, it was easy to see whose ability to use social thinking to monitor their own behaviour was impaired. It was also easy to see who, when made anxious about the veridicality of their beliefs, lost the ability to string a sentence together without self-contradiction or fallacious reasoning. Others had the defence mechanism of only listening for keywords, and reacting to what they thought they heard, often with a rehearsed spiel. None of these behaviours was evident from the atheists or pragmatic theists, who responded to the question asked, or position stated, with a considered reply…

This is our modern world, in microcosm.

The Big Questions: Evidence

On Sunday, January 11th, I had the great good fortune to be in the audience for a filming of The Big Questions. The topic for this episode was, ‘Does evidence undermine religion?’ A great deal arose from the comments of the various panelists for this, and I will address those in part 2 of this blog (after the episode has aired, January 18th). In this installment, though, I’m going to take a look at the concept of evidence with a particular emphasis on how it relates to religious claims.

For a discussion on whether evidence undermines religion it was unfortunate that the definition of ‘evidence’ was not itself discussed. Then again, that would have made a much less interesting hour-long show. So, in a show such as The Big Questions, it is to be expected that the focus be on the big AND interesting questions. That being said, a working definition of ‘evidence’ (such as the one below), provided to the panelists ahead of time, might have produced quite different results (or led to the less interesting episode I just mentioned).

There is a vast difference between folk theories (or so-called common sense) about what constitutes evidence, and scientifically and philosophically literate theories of such things. In the context of a discussion, each panelist can only gradually work out what the other panelists’ positions are on that underlying question. Whilst one is assessing that position, and until one has accurately deduced it, one is necessarily talking past the other. Over the course of this essay I will explain why, referencing what different individuals tend to consider appropriate as regards evidence, and the impact of interpretation on evidence after the fact. The best place to start is with a (Chambers) dictionary definition, but, rather than evidence, let’s start with what it means to be ‘evident’:

Evident: that can be seen; clear to the mind; obvious…*

Dictionaries generally order their definitions such that the most common usage is first, and subsequent definitions can add clarity, whether by comparison or contrast. This is well illustrated above. Firstly, “that can be seen” has a modern, scientific, empiricist slant. By comparison, “clear to the mind” is an older, but still relevant, philosophically rationalistic view (indeed it calls to mind Descartes’ extended discussion of ‘clear and distinct’ ideas in the Discourse on the Method). Finally, there is “obvious,” which is problematic: what is obvious to one person may not be obvious to another, and for a whole host of reasons. I think it fair to say that, in a discussion about science and religion, the more scientific, and those (like me) with a passing understanding of the history of philosophy, are working with the first definition, and sometimes the second. The more religious tend to use the last two. Some religious people might take umbrage at my saying this, so let me be clear, I used the word ‘tend’ for a reason, and I would point to the ‘evidence’ for God using testimony and ‘the witness of the holy spirit’. (See Christian apologist William Lane Craig’s defense of the Christian God “by the self-authenticating witness of God’s Holy Spirit” and arguments against that.) The use of the word “witness” is itself problematic, as it is a witnessing that seldom involves the senses – there is no earwitnessing or eyewitnessing – indeed one might suggest that miracles are the provision of corroborating external sense data. Unlike miracles, witnessing is an emotional (and internal) experience. As such, something that is “clear to the mind” certainly is obvious – to you – but not necessarily obvious to anyone else. So you’ll need some other kind of evidence:

Evidence: that which makes things evident; means of proving an unknown or disputed fact; support (e.g. for a belief); indication; information in a law case; testimony; a witness or witnesses collectively…*

Evidence is that which makes something evident, but as discussed, what is evident to one is by no means evident to another. As such, witnessing and testimony, as employed in religious circumstances (and indeed in legal ones), is not proof of the claim, but proof of the witness’s belief in that claim (assuming that they’re not lying, but we’ll touch on intentional falsehood later). Does belief prove an unknown or disputed fact?

Psychologist Elizabeth Loftus has shown us that a witness’s testimony can be affected by something as simple as the way in which a question is asked about an event. For example, in a famous experiment (Loftus & Palmer, 1974), participants were shown footage of an automobile accident. After viewing the footage, participants were assigned one version of the question, “About how fast were the cars going when they (smashed / collided / bumped / hit / contacted) each other?”. The resulting estimates varied between ‘contacted’ (the lowest estimate, at around 32mph) and ‘smashed’ (the highest estimate, at around 41mph). The participants’ responses were biased by the version of the question they saw, such that their estimates varied by around 25% (which I’m sure you’ll agree is quite a lot, considering they viewed the exact same footage, not merely the same event). This serves to make the point about memory and, without being diverted by too much further detail, human cognition is riddled with similar flaws in the receiving, processing, understanding, and recalling of information, as highlighted in the work of Amos Tversky and Daniel Kahneman (1974), and many, many other psychologists.
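For the numerically inclined, that ‘around 25%’ figure is easy to check: it is the spread between the two extreme estimates, taken relative to their midpoint. A quick sketch (using the approximate means quoted above, not the exact values reported in the paper):

```python
# Approximate mean speed estimates (mph), as quoted above, for the two
# extreme verb conditions in Loftus & Palmer (1974).
contacted, smashed = 32.0, 41.0

# Spread between the highest and lowest estimates, relative to their
# midpoint: same footage, different verb, ~25% difference in estimate.
spread = (smashed - contacted) / ((smashed + contacted) / 2)
print(f"{spread:.0%}")  # → 25%
```

The same arithmetic applied to the exact means in the paper gives a broadly similar figure, which is the point: the wording of the question, not the footage, drove a quarter of the variation.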

It is likely that these very flaws in cognition are why science has been so successful in describing the natural world, as compared to other methods that rely on unaided human cognitive faculties. Instead of developing folk theories about a phenomenon, or asking someone else about their folk theories, we have sufficient humility to ask the universe itself. We attempt to re-enact the scenario in which the relevant phenomenon occurs (or predict its occurrence and observe it more closely), and measure the outcome. That measurement, made as objective as we can make it, is evidence.

The existence of evidence, even where it is agreed upon, does not mean that differences of interpretation can’t occur, even between incredibly smart people. The Einstein-Bohr debates at the birth of quantum mechanics make that evident. But, just as the way in which a question is asked can alter the answer given (without the respondent being aware), so too can exposure to ideas change the way in which you receive subsequent information. The Bible, for example, has variously been used to support slavery, mostly in the past, and to condemn slavery, more recently. As such, if one considers the Bible to be evidence (rather than the claim), what has changed is the set of extra-Biblical facts (or at least beliefs) that the Bible-believing populace hold to be true. This is, in effect, hermeneutics (the interpretation of texts), and it is instructive that hermeneutics was born out of Biblical textual analysis. It was subsequently recognised (first in philosophy, and then in Biblical criticism) that hermeneutics had to include an understanding of the “social, historical, and psychological world”* of the time in which the original text was written (I’m sure Professor Stavrakopoulou – one of the panelists – would correct me if she happened to read this, and I happened to be wrong on that latter point).

The Bible, very generally, is a collection of testimonies about events, claims about the nature of reality in light of those events, and claims about the impact of that reality on the social world, and so on. As such, within the Bible, there is a great deal of interpretation of prior work that is also within the Bible – there is no clear distinction between older and newer writing. My understanding is that the Qur’an contains a great many of the stories that are contained in the Bible, and this seems likely to be due, at least in part, to the impact of the Jewish and Christian knowledge of Muhammad’s cousin-in-law, Waraqah ibn Nawfal, and others, in interpreting and writing down Muhammad’s revelations (themselves the product of the social and religious environment of the time).

In the case of the Qur’an it is often said that it must be read in the original Arabic, and no translation is a true Qur’an. Much the same was said of the Bible, when it was still in Latin, and attempts to translate the Bible into English were met with death threats. This is no longer the case, and as such there are now hundreds of versions of the Bible… and all of them are at least subtly different.

The Bible and the Qur’an are, at some level, claims made by people about events. In the case of the Bible, those claims are voiced either by the authors themselves, or by the protagonists of the Bible story in question – as such there are at least one or two levels of interpretation involved. The Qur’an, by contrast, is a claim by one person, Muhammad, about the nature of a set of revelatory experiences, which may or may not have been recontextualised by the input of various scribes, family members, and followers, depending on their own knowledge of the Torah, Tanakh, and Christian Biblical writings, and other socially relevant historical matters. Needless to say, depending on the impact of that knowledge of Jewish and Christian scripture on the Qur’an, the layers of interpretation may move from one layer deep to three, four, five, or more layers deep. These interpretations of interpretations of interpretations are presented with a human voice (as opposed to a divine one), and they are about very human concerns, such as life, love, death, and meaning… and this fact leads me to my final point.

Most people take other people at their word, unless they have reason not to. The reason not to may be because the individual has been found to be a false witness in the past, but bearing false witness is different from being mistaken. I am often surprised by how readily people who claim deep religious faith will call someone that makes an opposing claim a liar – calling someone a liar is very different from saying ‘I disagree with you.’ Likewise, saying someone may be mistaken in their interpretation is not the same as calling someone a liar.

As discussed above, religious texts have very strong human themes, and as such it is unsurprising that some people will engage with these in a very human way, especially if they have been raised to do so. If you’ve been raised to not contradict your elders and, by extension, to accept religious authority, with little or no question, then the issue is the way in which you are engaging with the evidence. If your continued exposure is to a limited subset of religious claims, delivered emphatically by a priest or imam, then your engagement with the claims will continue to be social and emotional, not rational.

“As Loftus puts it, ‘just because someone says something confidently doesn’t mean it’s true.’ Jurors can’t help but find an eyewitness’s confidence compelling, even though experiments have shown that a person’s confidence in their own memory is sometimes undiminished even in the face of evidence that their memory of an event is false.”

In this modern scientific age, even the most ardent believer will have been affected by their exposure to both science and technology – not least the democratisation of information on the internet. With this exposure, and access to both good and bad information, skepticism is a necessary skill. The realization that a modern teenager knows far more about how the world works than the authors of any ancient holy book did, should give believers pause for thought, whether they believe humans were conduits for, or interpreters of, the divine word.

Nice people treat people they meet with respect until given cause to do otherwise. Extending this courtesy to long-dead people who were short on good, evidenced information does not make one nice, it makes one gullible (which is nice, if you’re a sociopath looking for people to use). Using one’s belief in the words of the long-since deceased as the basis for being rude to someone who is right in front of you rather makes a mockery of the religious claim to humility. Assuming that your assessment of the claims of the long-since deceased is correct, and that someone else’s assessment of other, contradicting evidence is therefore wrong, is arrogant. Of course, most religions suggest humility in the face of evidence. Unfortunately for many believers, what constituted evidence at the time those words were written has changed, because we now know how fallible we humans really are… then again, doesn’t your omniscient being of choice know that, too?

*Definitions of both ‘evident’ and ‘evidence’: The Chambers Dictionary (13th Ed.)

*Definition of ‘hermeneutics’ and related quote (“social, historical, and psychological world”): Blackburn, S. (2008). Hermeneutics. In Oxford Dictionary of Philosophy (Second Edition (Revised), p. 165).

For an excellent read about human memory I recommend Charles Fernyhough’s Pieces of Light: The New Science of Memory.