Does anybody really think?, pt. 1

I want to take a closer look at the recent Channel 4 programme ‘What British Muslims Really Think’, now described on My4 as “Trevor Phillips presents the results of a rigorous survey of the views of British Muslims.”

To call this survey, or at least this presentation of it, “rigorous” is overselling it. The language used in several places to present the findings is (I think unintentionally) inflammatory, and the description of the methodology leaves me feeling as though there was rather a lot of question-begging going on. What is odd, though, is that the conclusion is nevertheless broadly in line with my own sentiments; given the 45-minute rollercoaster ride to get there, I don’t see how that conclusion follows from the presentation as a whole.

I started writing this piece with the intention of using a quote from a book that looks at neurobiology, ideology, and social change, which seems germane to the problems of integration, but now I will save that for part two. In this first part I will pick through the results, as presented in this documentary, and highlight issues with the methodology, the results, or the presentation thereof. Where applicable I have provided time-stamps for the portion of the documentary that the quote relates to.

After a general introduction to the intent of the documentary, we are advised of some aspects of the methodology used to run the survey that is the basis of it. For example, “ICM decided that the best way to get a fully representative sample of Muslim opinion was to concentrate on areas where at least one fifth of the population is Muslim” (5:11-5:19) and, “ICM interviewed 1081 British Muslims face-to-face” (6:22-6:26). Starting at 8:37 we get our first hint of the results. One third of Muslims think polygamy is acceptable, compared with one tenth of the general public. One fifth of Muslims (18%, in fact) think homosexuality should be legal, whereas four fifths of the general public do (actually 73%, which is nearer three quarters, for the purposes of accuracy and clarity). Finally, sympathy for political violence and suicide bombing was at 4% in the Muslim population, as compared to 1% in the general population. For a documentary which, in its conclusion, talks about integration, to kick off with such a stark ‘Us vs. Them’ presentation of the results seems, well, unhelpful.

Martin Boon, Director of ICM, the company that carried out this research, characterises that 1% of the general population as “no more than a handful.” So here we come to our first bit of lazy, and potentially inflammatory, presentation – and from the Director of the company that carried out the research. The Muslim population is only 4.8% of the total population, and it is 4% of that population that is sympathetic to political violence and suicide bombing. Much mileage was made of the fact that this was something like 100,000 people, but not much was made of the fact that the sympathy was with those fighting “injustice” (see image below). The thing is, only 1% of the non-Muslim population has this sympathy, while only 0.2% of the total population both has this sympathy and is Muslim. So, as presented, four times as many Muslims as “non-Muslims” have these sympathies, but in the context of broader society five times as many “non-Muslims” as Muslims have these sympathies. That’s why the saying ‘Lies, Damned Lies, and Statistics’ was coined. Of the 63.2 million people in the UK (as at the last census, 2011), and based on these percentages, there are around 110,000 Muslims who have these sympathies… and 630,000 or so “non-Muslims” who also do.
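The arithmetic here is simple enough to check for yourself. Below is a quick sketch (in Python) using the figures quoted above – the 63.2 million population, the 4.8% Muslim share, and the 4% vs. 1% sympathy rates. Note that the rounding differs slightly from the figures in the text, depending on whether you take 1% of the whole population or of non-Muslims only, and whether you use the census count of 2.7 million Muslims or 4.8% of 63.2 million:

```python
# Sanity-checking the "four times vs. five times" arithmetic, using the
# figures quoted above: UK population ~63.2 million (2011 census),
# Muslims ~4.8% of that, sympathy at 4% (Muslim) vs. 1% (non-Muslim).

uk_population = 63_200_000
muslim_share = 0.048

muslims = uk_population * muslim_share            # ~3.03 million
non_muslims = uk_population * (1 - muslim_share)  # ~60.2 million

muslim_sympathisers = muslims * 0.04          # ~121,000 ("around 110,000" above)
non_muslim_sympathisers = non_muslims * 0.01  # ~602,000 ("630,000 or so" above)

# As presented in the documentary: 4% vs. 1%, i.e. "four times as many" Muslims.
# In absolute numbers: roughly five times as many non-Muslims.
print(f"Muslim sympathisers:     {muslim_sympathisers:,.0f}")
print(f"Non-Muslim sympathisers: {non_muslim_sympathisers:,.0f}")
print(f"Ratio: {non_muslim_sympathisers / muslim_sympathisers:.1f}")
```

Both framings are arithmetically true of the same data, which is rather the point.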

[Screenshot from the documentary, showing the survey question in which the sympathy measured is with those who use violence to “fight injustice”.]

It’s not so much that Muslims are being stereotyped as suicide bombers, but that suicide bombers are being stereotyped as Muslim. The documentary tells us that a disturbingly large number of Muslims have “some form of sympathy with violent acts”, which is then ramped up to “…sympathize with Islamist terrorism.” OK. What does “sympathy” mean? Can we determine the difference between “violent acts” and “Islamist Terrorism”? The presentation specifically noted that this sympathy was in response to people fighting injustice, but this key motivation to sympathy is missing from the presentation of the results (indeed I only know about the “injustice” bit because it was on screen (above) – it did not rate a mention in the commentary or narration). I’ll come back to the point about what sympathy means later.

We’re then told that 21% of Muslims have been to the home of a non-Muslim only once in the last year, and that another 21% have never been to a non-Muslim’s home. There is a significant problem with this question, and I will quote the documentary to highlight it: “ICM decided that the best way to get a fully representative sample of Muslim opinion was to concentrate on areas where at least one fifth of the population is Muslim” (5:11-5:19). So, people are more inclined to make friends with people that are more like them when they are in areas where they are spoiled for choice? Shocking! Also, is it the Muslims not visiting the non-Muslims, or is it the non-Muslims not inviting the Muslims? There didn’t appear to be a question about whether non-Muslims had visited the homes of Muslims. One of the interviewees, Anjum Anwar, makes this point forcefully: “So, if you have a child that goes to a school that is wholly Asian, who lives in an area that is predominantly Asian… Where would that child meet children and people of other faiths? They’re restricted, aren’t they?”

Trevor Phillips then tells us that, “Equality of women, social tolerance, freedom of expression are now all taken for granted as features of the British way of life” (13:57). By contrast, homosexuality should be illegal, according to 52% of British Muslims, compared to 10% of the general population. In other words, around 1.35 million Muslims hold the same belief as around six million non-Muslim Britons. That being said, Muslims make up 4.8% of the population, and YouGov estimates that homosexuals make up around 6%.

The next question to be addressed was that of anti-Semitism. 35% of Muslims hold at least some anti-Semitic views, as opposed to 9% of the non-Muslim population – that’s around 900,000 Muslims as against more than five million non-Muslim Britons.

A sizable 39% of British Muslims believe that ‘wives should always obey their husbands,’ compared with 5% of non-Muslim Britons – once again, vastly more non-Muslims hold this view than Muslims do. At this point the view on polygamy is reiterated. Whilst not polygamy, and certainly not about assuming that women should do what they’re told, polyamory is a small and growing subculture in the UK. There is even a “non-monogamous” option on OKCupid (and you can’t really call it cheating if you’re open about it, which at least implies that polyamory is a big enough deal for OKCupid to cater for it). So we have a minority view that many Muslims hold, and that some non-Muslims might have at least some sympathy with. I don’t want to get sidelined into a discussion about the difference between polyamory and polygamy. Suffice to say that media coverage, such as this piece in the Independent, often focuses on one-male/two-female polyamorous triads – though that might say more about media prurience than about polyamory. What I wanted to introduce is the idea that some people hold views at least nominally or partially compatible with those of Muslims, and those people aren’t targeted for having them, albeit that such views are still somewhat fringe at the moment.

Now, let’s look at the statistics about homosexuality in a little more depth. Where only 18% of all Muslims think that homosexuality should be legal, 28% of British Muslims aged 18-24 agree, as compared to 2% of those over 65. Homosexuality was decriminalized in the UK in 1967, so there is no direct comparison to be drawn to modern British attitudes, but it’s probably fair to say that if you’re for the legality of homosexuality, you’re probably pro-same-sex marriage (the vice is definitely versa). In a society where same-sex marriage has been legalized, albeit only in 2013, once someone has made the decision to be for freedom of sexual orientation, they’re under pressure from broader society to make the relatively short jump to being for the legal recognition of relationships that arise from that orientation. As such, a comparison between a YouGov poll of the general population from mid-May 2013, with regard to same-sex marriage, and views on the legality of homosexuality amongst Muslims, may put things in perspective. In this we find that only 54% of non-Muslims support gay marriage, with 37% opposing; amongst Conservative supporters, that drops to 45% in favour, 48% opposed; UKIP supporters swing further still, 38% in support, and 53% opposing.

Ignoring politics for the moment, the sentiments of 18-24-year-old Muslims are trending towards those of Britons aged over 60. (I put very approximate reciprocal ‘Oppose’ numbers into the table below, just to illustrate the general trend.)

The law enabling same sex marriage, as finally passed, states that no religious organization can be compelled to perform these ceremonies. It’s hard to see how highly religious Muslims are failing to fit in to British society, therefore.

SAME SEX MARRIAGE (BY PARTY)    Lib.Dem.   Labour   Conservative   UKIP
Support                            72         57         45          38
Oppose                             24         31         48          53

SAME SEX MARRIAGE (BY AGE)        60+       40-59      25-39       18-24
Support                            28         58         70          74
Oppose                             63         32         21          17

LEGAL HOMOSEXUALITY (MUSLIMS)    18-24       65+
Support                            28          2
Oppose (approx.)                  ~70        ~95


Now, consider that in times of high stress, uncertainty, and instability, people become more religious (Hogg, Kruglanski & Bos, 2013; Paul, 2009), and less reliant on government (Kay, Shepherd, Blatz & Chua, 2010). See also Gregory Paul’s further work related to his Successful Societies Scale. Consider, also, that strong religiosity is linked to more conservative political views (Altemeyer, 2006). Additionally, note that Muslim families are almost twice as likely to have small children as the general population (whilst Muslims are 4.8% of the overall population, they are 8.1% of all school-age children). There is also a very strong link between parenting and conservative views (Altemeyer, 2006; Hohman, 2015), which must be connected, at least in part, to the high stress and uncertainty mentioned above.

One can expect certain behaviours in line with heightened religiosity – we see it in the US, but we’ve been (mostly) spared it here in the UK. According to the survey, 18% of British Muslims sympathize with violence against those who mock the prophet. There’s that word “sympathize” again. Although, how you “sympathize” with violence, I’m not sure. You might sympathize with people who commit violence in the service of fighting injustice… but another question arises: how did the person answering the question perceive “mockery”?

Whilst we’re noticing the religiousness of Muslims it’s appropriate to point out that the blasphemy law was only struck from the books, in the UK, in 2008. How quickly we forget, and get self-righteous about our newly enlightened position.

Speaking of hypocrisy and stereotypes, Martin Boon tells us that only two thirds of Muslims condemn stoning for adultery, compared to nearly all members of the general British public. This, of course, is a problem. Meanwhile, the 40+% of the population who claim to adhere to some form of Christianity believe in the importance of a book that advocates the exact same thing. If there were any instances of this being carried out in the UK, they are even further back in history than the abolition of the blasphemy law. How fortunate! Unfortunately, section 54 of the Coroners and Justice Act 2009, which came into law in October of 2010, allows infidelity to be used as a defence for murder (see Horder & Fitz-Gibbon, 2015, for a discussion of the impact of this law). So, again, where’s the vast chasm of differing attitudes?

The tone of the presentation gets worse at 25:55:

“It’s clear that I, and many others involved in the policy-making field, just got the aspirations of British Muslims wrong. Our mistake was to imagine that because historically other minority communities – Hindus and Sikhs, for example – had gradually moved to adopt some of the behaviours of the majority, that Muslims would follow the same pattern. But our survey suggests a significant number of British Muslims don’t want to change, and don’t want to move to adopt the behaviours of the majority. … Many British Muslims would rather that non-Muslim Britain changed its ways to accommodate their way of life.”

There are so many things wrong with this, but they can all be summed up by pointing out that the history of the relationship between the English and Hindus and Sikhs differs significantly from that between the English and Muslims. English occupation of Hindu and Sikh territories mostly ended quite a while ago. Whilst the echoes of empire are doubtless still felt in those places, the impact of British colonialism, British support of US programmes of interventionist politics in the Middle East and Pakistan, and an illegal war or two, may have more than a little influence on Muslim sympathies with their countrymen (and women), and may well influence who has fled those countries to come to Britain.

I am not a historian, so I don’t want to get caught up in a long discussion about the history of the region that was home to so many of the Muslims that now live in the UK, whether in this generation or generations past. I’m not a statistician, either, but I think the statistics this documentary used to present a contrast between Hindus/Sikhs and Muslims need some context and balance, so here goes…

British Hindus and Sikhs, combined, number half the total of British Muslims, and so are less likely to have many communities where they make up 20% or more of the local population. According to the 2011 census, half of all British Hindus live in London. In other words, 400,000 Hindus live in a city of around eight million. So in London, Hindus make up around 5% of the population, on a par with the overall proportion of Muslims in the UK.

Year    Hindu      Growth    Sikh       Growth    Muslim       Growth
1961    30,000               16,000               50,000
1971    138,000    360.0%    72,000     350.0%    226,000      352.0%
1981    278,000    101.4%    144,000    100.0%    553,000      144.7%
1991    397,000    42.8%     206,000    43.1%     950,000      71.8%
2001    559,000    40.8%     340,000    65.0%     1,600,000    68.4%
2011    817,000    46.2%     423,000    24.4%     2,707,000    69.2%

According to the 2011 census, the London boroughs of Tower Hamlets, Newham, Redbridge, Waltham Forest, Brent, Enfield, Ealing, and Haringey, along with the City of Westminster, were home to the majority of Muslims in London. These areas were amongst the top 20 in the country for the largest Muslim populations, per capita, as at 2011, so this is a good shortlist for the communities selected by ICM for this survey. Their Muslim populations total 581,997. Along with smaller populations around London, Muslims make up 12.4% of London’s population (and Londoners account for 40% of the UK’s Muslims). That is starkly different to the 50% of British Hindus who make up 5% of the London population. So why are we surprised that Muslims have a different social trajectory? Especially given the recent socio-psychological and geopolitical issues that relate to that movement, much of which is ongoing.

The implication from Phillips’ comment is that policy-makers were surprised when a population that is between two and six times the size of the reference population(s) didn’t behave in the same way. Muslim population growth is taking much longer to regress to the general population’s mean. Between 2001 and 2011 population growth in the UK was around 7%. Without Hindus, Muslims, and Sikhs, it drops to 4.8%. Not only is the Muslim population itself more than double that of the reference groups (see table below where “Both” is Hindu and Sikh combined), but Muslim population growth is almost double, too (33% of the Muslim population was aged 15 years or under in 2011, compared to 19% of the population as a whole). This makes a very strong case for educating our policymakers in statistics and demography or, I don’t know, actually using the ONS to interpret statistics for policy decisions.

Year    Both         Growth    Muslim       Growth
1961    46,000                 50,000
1971    210,000      356.5%    226,000      352.0%
1981    422,000      101.0%    553,000      144.7%
1991    603,000      42.9%     950,000      71.8%
2001    899,000      49.1%     1,600,000    68.4%
2011    1,240,000    37.9%     2,707,000    69.2%
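For anyone who wants to verify the decade-on-decade growth figures in the two tables above, here is a minimal sketch of the calculation, using the census-era counts exactly as quoted:

```python
# Decade-on-decade percentage growth, computed from the census counts
# quoted in the tables above: growth = (new - old) / old * 100.

counts = {
    "Hindu+Sikh": [46_000, 210_000, 422_000, 603_000, 899_000, 1_240_000],
    "Muslim":     [50_000, 226_000, 553_000, 950_000, 1_600_000, 2_707_000],
}
years = [1961, 1971, 1981, 1991, 2001, 2011]

for group, series in counts.items():
    # pair each decade's count with the next decade's count
    growth = [(new - old) / old * 100 for old, new in zip(series, series[1:])]
    rates = ", ".join(f"{y}: {g:.1f}%" for y, g in zip(years[1:], growth))
    print(f"{group} -> {rates}")
```

Running this reproduces the Growth columns above, including the persistent gap between ~38-49% recent growth for Hindus and Sikhs combined and ~68-69% for Muslims.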

A strong thread throughout this “documentary” is the concern about the lack of integration, and the ‘us vs. them’ mentality that arises from, and is strengthened by, this lack of integration (ironically not helped by the presentation to this point). But sometimes it was the very fact of adopting a British attitude that was demonised – a kind of ‘damned if you do, damned if you don’t’ dichotomy. At 33:11 we get this gem:

“…’live and let live’ is probably the most commonly accepted expression of British tolerance. Usually accompanied with a sort of ‘well, what can you do?’ shrug. But, there is a problem with this ‘live and let live’ laissez-faire approach: our survey revealed that the more people hankered after a separate life the more sympathetic they were to violence and extremism, and that really does matter. … When it came to exploring attitudes to violence, the survey asked British Muslims what actions they would take if they knew someone who was involved with supporting terrorism in Syria. Just 1/3, 34% said they would report it to the police. There may be several reasons for not shopping would-be jihadists… one, of course, is that you might be sympathetic to their cause.”

I find this reasonable-sounding “several reasons” followed by the less reasonable “one, of course, is that you might be sympathetic to their cause” to be inflammatory, and the sort of thing one expects from Fox “News”. According to this version, your average Muslim is supposed to live and let live, except when it comes to other Muslims having sympathies for terrorism. That seems fair enough, on the face of it, but given the conflation of Muslims with terrorism in popular conception (only increased by this very presentation), Muslims also have a pretty strong motivation to say nothing, and to hope that they’re wrong about the suspected terrorist sympathizer, in order that the stereotype not be perpetuated, and that they not be personally associated with it. Additionally, and problematic for the validity of this survey, “terrorism IN Syria,” which is what the question asked about, is very different from terrorism in general. Syria has a dictator, and insurgents backed by numerous world powers. In Syria, terrorism is about the only means some people have to fight back, caught between the oppressive regime of their own government and the oppressive insurgence of someone else’s utopian dream of a Caliphate, supported by various vested interests in an international proxy war.

I have pointed out the use of the root word “sympathy” a number of times. Martin Boon, addresses this, saying, “There is no right or wrong of measuring sentiment on the use of violence, but we decided to use the word sympathy – the expression of sympathy toward violent questions or sensitivities – as the best way of dealing with it, because it has been used in similar surveys.”

The primary meaning of sympathy, according to that most British of institutions, the Oxford English Dictionary, is “Feelings of pity and sorrow for someone else’s misfortune”; the secondary definition is “Understanding between people; common feeling”; but the survey seems to be using the third, and thus least used, meaning – “The state or fact of responding in a way similar or corresponding to an action elsewhere.” The usage may possibly be a conflation of one or other of the first two with this last definition. Asking about “sympathy” – which, by its common usage, people take to mean feelings of pity and sorrow for someone else’s misfortune – and then using the third definition as a lens through which to interpret the results, is begging the question. Not a good thing in a “rigorous survey.”


“What we do know from the survey is that Muslims who have sympathy for violence are significantly more likely to hold illiberal views on issues like gay rights and women’s equality than those who don’t. So what the survey is showing us is the emergence of what you might describe as a nation within the nation, where many hold very different values and behaviours from the majority. I’d say that hardly anybody wants to see that happen, but the question is what are we going to do about it?” (37:05-37:37)


How do you hold a behaviour? A behaviour is, most often, a course of action predicated on a value. You can hold a value. This comment seems indicative of an underlying bias in the reporting of these results. Holding different values isn’t necessarily bad, and it’s not always easy to know what a person’s values are. But behaviour? Behaving differently is much easier to portray as bad – the use of the word here just seems to add to the bias apparent throughout this presentation. I’ve repeatedly illustrated that many other Britons hold similar views to Muslims, and in greater numbers; they’re just a little harder to pick on, demographically.

Yasmin Alibhai-Brown then makes an extremely important point, one that, in combination with my point about the socio-psychology and geopolitical realities of Muslim immigration, is the crux of the matter:

“Increasingly – and this really interests me – I’m getting young Muslims writing to me who hate the lives they’re living. They hate it. Some of them are gay. Some of them – men and women – have been forced into marriages. Some of them are lost, because they feel no affinity to anything or anybody, ‘cause they’ve never been allowed to. You know, it’s just this thing about being a Muslim – one of them said to me, ‘I am a Muslim, but I am so much more than that.’” (39:01-39:34)


Phillips’ narration continues:

“Those of us who are not Muslims shouldn’t be telling those who are how to live their lives or how to meet the needs of their faith. And nobody likes the old idea of assimilation where people abandon their cultural identity in order to blend in to some kind of mainstream. But that doesn’t mean we do nothing. Many people, including me, believe that we can create a set of policies that promote integration, make clear that there are some things on which the society will not compromise, and would support liberal trends in all parts of society. We call it a policy of active integration” (39:57-40:37).


I won’t directly quote the next segment, but it suggests that desegregating schools such that no ethnic group can be more than 50% of the roll would be a swift and decisive means to increasing everybody’s exposure to each other. This is “active integration,” and it seems to be working well in one school in Oldham where it was tried. Finally! Something I can wholeheartedly agree with! It only took 40 minutes.

So, Phillips closes with:

“Britain faces a huge challenge: adopting a policy of active integration may give rise to some ideas that make you, me, Muslims, non-Muslims, everyone, feel pretty uncomfortable. But what is our choice? We could cross our fingers, close our eyes, and hope that the segregation, the tensions, the periodic outrages, and the backlash that follows will, somehow, simply vanish. Or, we could seize the initiative; take steps to support those Muslims who do want their communities to change – in their attitudes towards women, towards lesbian and gay people and, indeed, towards violence. I know which of those I would choose. A policy of active integration must be the first step on the path towards those shared values that will come to define what it means to be British, for Muslims and non-Muslims alike.” (45:45-46:42)


I’m honestly flabbergasted that a documentary that spent so much time misrepresenting statistics about Muslims, and ignoring the very real socio-psychological and geopolitical realities of the Muslim diaspora, nevertheless came to an appropriate conclusion. I get the strong impression that the entire thing was an exercise in promoting “active integration,” which is just a new name for an old idea (as far as I’m concerned). I’m aware that multi-culturalism has come to mean a kind of segregated coexistence of peoples (i.e. a kinder, gentler apartheid), and it is that multi-culturalism that the pundits tell us has failed. But the multi-culturalism I adhere to, which is no doubt coloured by my being raised in New Zealand, is the one where people share spaces, and their ideas and traditions with each other, as openly as their personalities allow. So, apparently, that is called Active Integration now, in the UK at least.

So, having dealt with significant issues of statistics, context, and bad journalism/sloppy presentation, my next post on this documentary, which will be much shorter, is going to simply look at the issue of integration from the point of view of developmental psychology, neurobiology, ideology, and social change. The general idea being that no-one will be hurt by having a better, more empathic understanding of what it’s like to change countries, and how that change impacts the individual.


The Road to Monotheism

Here’s the approximate script for a talk I did recently. Unfortunately, for one reason and another, the videoing did not happen, so my YouTube channel will have to languish that little bit longer.


Good evening, everyone. I’d just like to take the opportunity, at the outset, to thank the organisers for giving me the opportunity to speak with you all, tonight.

Speaking of the organisers, I’d like to quote the description of this talk, as it appears on Meet-Up, because it’s what Ed wrote (with some minor changes by me), as a paraphrase of the overly verbose description I originally sent him, and because I want to try and keep to it as closely as possible:

“Belief in the divine is widespread across many cultures and this may be because the belief reflects reality. An atheist thus needs to explain why, from their perspective, belief in the divine has arisen erroneously.”

The description goes on to say that I “will illustrate how one very important social-cognitive skill gives rise to empathy (and thus morality?). From this [I] will then seek to explain the experience of the divine as an attempt to grapple with the moral problems of an increasingly large social environment, and the non-moral problems of the general environment.”

In keeping with this description, I will briefly describe the social-cognitive skill mentioned, along with some related psychology that helps put it in perspective. I will then detail how this primary skill relates to both morality and belief in gods. After a few closing comments, we’ll launch into the question and answer session.


Social-Cognitive Ability:

First off, let’s get to grips with this social-cognitive ability that is the lynchpin for this discussion. It was originally called Theory of Mind, back in 1978, when it was first discussed (with regard to chimpanzees). Since 1978 it has been called many other things: Mentalizing, Mind-Reading (the non-magical kind), Folk Psychology, and The Intentional Stance. These different names are all quite descriptive of what this ability entails…


Theory of Mind – Precursors

Reading facial expression/body posture:

Many of us habitually adopt the same facial expression and/or body posture as the person we’re talking to, so as to better understand their meaning, or to empathize with them. Notice that people who do this in an unaffected way often make us feel more comfortable speaking with them. The almost unavoidable feeling is that they’re ‘our kind of people’ (of course this may depend on what your threshold for ‘unaffected’ is).

Now, consider the fact that people who hold a pen clenched between their teeth, thereby adopting many of the facial characteristics of a smile, will rate cartoons that they then read as funnier than if they were not doing so, and much funnier than if they were holding a pen between pursed lips, thus adopting a frown. This is an application of the facial feedback hypothesis, which suggests that some of what we know about the status of our bodies comes from noticing the body itself. Here the feedback merely reinforces the feeling rather than being its initiating cause.

However, notice that once we understand our own facial expressions and what they mean, we can better notice and understand them in others. By a series of steps, one can abstract, from a particular facial expression or bodily posture, what a person is likely to be thinking, especially if you are in the same context as them. From here you can begin to anticipate an individual’s behaviour, and you can augment this ability using knowledge you have about the person based on:

  • how well you know them/how similar they are to you, or;
  • whether they conform to a stereotype you have for them.


Theory of Mind – Peculiarities

Reading actions in the environment:

As a species, we are embedded in a highly social environment. In many respects, understanding each other has become more important than understanding the general environment in which the social environment itself is embedded. Other people are a more immediate danger to us – they might steal our food, or aggress against us – whereas in the general environment seasons are predictable, earthquakes and thunderstorms are infrequent, and, in the modern world, bear attacks are fewer and further between.

As such, once this skill was developed, it made pragmatic sense for it to lead to assumptions about people we know, or know of. So when something unexpected but positive occurs in our environment we ascribe benign intent, i.e. something done by someone we know, someone who is good; but when something unexpected but negative occurs in our environment we ascribe malign intent, i.e. something done by someone we don’t know, someone who is bad. You will recognise the Us vs. Them mentality in this, and more.

Notice, here, that a physical body is no longer the basis for our theorizing; we have climbed a few rungs on the ladder of abstraction. Now disembodied “behaviour” is the basis for our guesses about intent in our environment. Indeed, actual theory of mind is triggered by surprising or unexpected occurrences (behaviour in our environment), not explicitly human occurrences. This is predicated on the simple expedient that a false positive is less dangerous than a false negative: thinking the shadow in the bushes is a potential assassin or burglar is a lot safer than thinking that the actual assassin or burglar in the bushes is just a shadow.

To give two quick examples about how quickly we engage theory of mind and how quickly this leads to anthropomorphization of non-human entities consider computers and cars:

If your computer is working fine, it’s just a computer. However, if it glitches or crashes it’s a “stupid” computer. “Stupid”? Really?

If you’ve ever had the unabashed joy of owning an old car, how likely is it that you referred to how it runs as “temperamental”? Indeed, I’d be prepared to bet that the cars that have been given names are the ones that don’t work that well, or are owned by people who don’t really understand cars (making almost all “behaviour” unexpected). Even the lads on Top Gear predictably finish their reviews of cars that are “mad” or “bonkers” with, “…but you know what? I absolutely love it!”

Some might complain that these are things of human design, so of course they are the focus of Theory of Mind. OK, fair enough. What about the weather? The weather, like the second-hand car can be temperamental, too. Storms can be violent. Weather can be sultry (admittedly I’m cheating there… sultry is a description of weather applied to people, illustrating that the anthropomorphization has come full circle).

Just so we move away from any idea that empathy is all things good and pure and perfect, by default, let’s not ignore the fact that understanding someone else’s pain leads to the ability to take pleasure in someone else’s pain BECAUSE it is not your own pain.


Theory of Mind – Factors

Other aspects of human psychology:

System 1/System 2:

We have a skill that started off as the ability to read physical, which is to say facial and bodily, cues in the environment, but over time this skill became more abstract, able to read symbols in the environment detached from human bodies… and requiring more cognitive effort. If you’ve read Daniel Kahneman’s ‘Thinking, Fast and Slow’ you probably know where I’m going with this – part of the skill belongs to ‘System 1’, which is to say intuitive, unreasoned, fast, effortless, but prone to errors. ‘System 2’ is more cognitive, reasoned, slow, effortful, and less prone to errors (and indeed seeks to correct System 1), but it sometimes relies on the output of System 1, and thus can perpetuate errors.


Executive Functions:

One of those errors is, as I mentioned, being triggered by surprising stimuli, rather than actual human stimuli. This is not a failure of Theory of Mind, per se, but a failure of Executive Functions. According to a classic paper on the topic, Executive Functions are “general-purpose control mechanisms that modulate the operation of various cognitive subprocesses and thereby regulate the dynamics of human cognition” (Miyake et al., 2000). These include the abilities to:

  1. shift between sets and domains of data;
  2. update and monitor information in multiple domains; and
  3. inhibit inappropriate responses within and across domains

So, you can see, that when Theory of Mind is triggered by events in the general environment, rather than in the social environment, it is a failure of all three. It should come as no surprise that Executive Functions, being complex and new in the scheme of things, are amongst the first to be negatively impacted by primal responses such as fear or anxiety. One of the benefits of being a social species is the greater protection from sources of fear, and thus anxiety, that being in a tribe affords… but this leads to other sources of fear and anxiety.


Dunbar’s number:

How well you know someone comes down to how much time you spend with them, in the social environment – thus, in general, you know your family and close friends, your “tribe”, best of all. There is a limit to how many people you can know well – and there are several different ways to tackle this issue. And it IS an issue, because how well you know someone defines, at least to some extent, how you deploy your Theory of Mind. With people you know well (to whom you have more than 10,000 hours of exposure, say) you are an expert in their likely response to situations. This knowledge has become mostly intuitive.

According to Robin Dunbar, the maximum number of people you can know well is around 150. This is predicated on the amount of neocortex humans have as a ratio to the rest of their brain, as compared to other primates. What Dunbar found was that the smaller the ratio, the smaller the “tribe” that the primate is naturally found in. The ratio relates to how many tribe members it is feasible to groom in order to maintain social closeness, and the brain-space required to maintain the information gained from grooming. Note, here, that we’re back to the physical precursor to Theory of Mind, whereas humans use symbolic language (a cognitive skill) to maintain social relationships, sometimes over great distance.
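For the curious, the relationship Dunbar reported is a log-log regression of group size on neocortex ratio. The coefficients below are the commonly cited values from his 1992 paper, reproduced here as an illustration rather than re-derived:

```python
import math

# Dunbar (1992) regressed primate group size on neocortex ratio
# (neocortex volume / volume of the rest of the brain). The commonly
# cited fit is log10(N) = 0.093 + 3.389 * log10(ratio); the exact
# coefficients are taken on trust from the literature.
def predicted_group_size(neocortex_ratio: float) -> float:
    return 10 ** (0.093 + 3.389 * math.log10(neocortex_ratio))

# Humans have a neocortex ratio of roughly 4.1, which yields the
# famous "Dunbar's Number" of about 150.
print(round(predicted_group_size(4.1)))  # → 148
```

The steepness of the power law is the point: a modestly larger neocortex ratio predicts a much larger natural group.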

Some people disagree with Dunbar’s Number, and alternatives have been proposed, such as the Bernard-Killworth number, which is 1.5-2 times Dunbar’s. I’m not especially concerned about which number we agree is correct, or even whether these numbers have any meaning at all, as I will explain in a moment. The difference between the numbers may just come down to the strength of the social ties represented (and this may be one means of managing larger tribes). Indeed, Dunbar’s work looked at groupings in modern Western culture (such as military companies), but also historical anthropological work on Amazonian, New Guinean, and African tribes. It is interesting to note that tribes that were larger than 150 were so because of the number of children living amongst them – these tribes often split as the children reached adulthood.


Psychological Distancing:

Whether or not either of these numbers is relevant, one additional thing to note is that we can derive something very similar, though less specific, by noticing three aspects of our relationships with people in our tribe/social group:

  • how physically and psychologically similar to us someone is;
  • how physically close to someone we are;
  • how well we know someone.

The phrase ‘out of sight, out of mind’ is disturbingly true. Daniel Kahneman also presents the idea of ‘What You See Is All There Is’. Physical distance has a very real effect on our relationships with people. We have to work harder to maintain long-distance relationships with people we ostensibly care about, and we are less likely to strike up relationships with people that we might otherwise care about, if only they were closer. In other words, distance can, passively or actively, affect the way we engage with people. We can, actively or passively, replicate the impact of physical distance with psychological distance. If you don’t consider the person right next to you to be important, they could as easily be 1000 miles away. And if your partner or lover is 1000 miles away, they could as easily be by your side. Things like similarity to yourself, whether physical or mental, impact the effort you will make to bring someone psychologically closer.



Stereotypes:

Stereotypes are a means by which we depict a group of people, usually people at some physical distance, and thus with whom our interactions are only fleeting, and stereotypes tend to focus on difference, not similarity. Stereotypes are incredibly useful cognitive tools and, despite the bad press they get, are often highly accurate depictions of groups (I will add the caveat that this does depend upon whether the source of the stereotype is ideologically driven).

The primary problem with stereotypes is where an interaction moves from being at the group level to being at the individual level, and the degree to which the stereotype assumptions are held, despite contrary evidence from the individual. The continued holding of a stereotype about a person with whom you are directly interacting is a form of psychological distancing.

Despite, or maybe in some cases because of, their utility in creating distance, stereotypes are also used as a means to classify oneself to oneself. Notice that people who rely on too few self-stereotypes are able to distance themselves from their own pretty abominable behaviour (for example, people who define themselves by their gender, their race, their country of birth, and so forth, but little else).


So this brings us to morality. But first let me briefly recap what I’ve said about Theory of Mind…


Recap on Theory of Mind:

  1. Theory of Mind is predicated on the ability to read facial expressions and body posture.
  2. Adopting other people’s facial expressions and body posture will often make what they’re saying easier to understand… and impact our own thoughts and feelings. This is the basis for empathy, and by extension, morality (or at least moral discourse).
  3. With any ability, as we become more practiced (as an individual, as a culture, as a species), the skill relies on less explicit content and becomes more abstract.
  4. In the case of Theory of Mind, this includes being able to discern motive from an action that is disembodied in space and time.
  5. Theory of Mind can be triggered by unexpected events in the general environment, not just the social environment, and can be applied to non-people, such as stereotypes, and non-assassins, such as shadows in the bushes.



So to head off (or possibly create) discussion on my use of the word morality, just then, here’s a definition from the Stanford Encyclopedia of Philosophy:

The term “morality” can be used either

  1. descriptively to refer to some codes of conduct put forward by a society or,
    1. some other group, such as a religion, or
    2. accepted by an individual for her own behavior or
  2. normatively to refer to a code of conduct that, given specified conditions, would be put forward by all rational persons.

I think it would be uncontroversial to say that codes of conduct put forward by a society, or a religion, as accepted by an individual, are JUST that society’s, or that religion’s documented code for a given specified condition, or, more often, a command that is supposed to be relevant across all conditions (e.g. Thou Shalt/Shalt Not). The individual, on the other hand, may be swayed by their society or religion, and/or they may be a rational person who normatively adopts a code of conduct under specified conditions (such as those not contemplated by a religion’s or society’s code).

So, what I would like to do now is discuss, in very general terms, the types of societies that humans have been part of, and the gods that those societies gave rise to, and the impact on moral discourse.


The Evolution of Monotheism:

Humans evolved the skill of Theory of Mind in an environment of small tribes, in which everybody knew everybody they interacted with regularly. So much so, that they generally interacted with people they were related to. Their tribe was like them, in every plausible way. By contrast, they could only have folk knowledge about their general environment. So what phenomena are going to be both extremely important to understand, and almost constantly surprising to such people?

The animals, the trees, indeed the very earth.

If you use your Theory of Mind – your abstracted self – to try to understand these things, you will necessarily imbue them with your ‘self’, your “spirit”. If in a state of fear, your Executive Functions will fail to remove the social aspect of that cognitive process, so the information will still “feel” social. So I’ve just described animism: animals and trees and the like with individual spirits, as well as the spirit of the forest, and maybe an overarching sky god or world spirit.

Notice that animist societies are never agricultural. When they become agrarian it signals that they’ve come to understand enough about their environment that aspects of it are less surprising. Successful agriculture and domestication of animals comes with a population explosion and specialization within that population – farmers, shepherds, hunters, and so on. Specialization leads to power structures, and thus politics and hierarchy. Gods, then, become more overtly anthropomorphized, with links to distinct animals and natural phenomena, and most importantly, they start to have their own hierarchy.

Note, here, that highly successful agricultural civilizations got large enough, quickly enough, that social inertia stopped some of the gods they worshipped, as a society, from progressing beyond being part animal and part human. Examples of this, not surprisingly, are the civilizations of fertile river valleys, such as the Nile and the Indus, which gave rise to the Egyptian and Hindu pantheons.

Civilisations that were a little slower to flourish, or whose geography was less conducive to large, more homogeneous populations, instead have humanoid pantheons (though many of these gods have animal forms); the foremost examples in the West being Greece and Rome, but also the Scandinavian, Slavic, Sami, and Celtic pantheons. It seems that flood-plains, being broad, flat expanses, lend themselves to greater inter-personal connection, more frequent (and less violent) interaction, and thus greater tolerance of difference, and gods that reflect the scope of human experience.

Clearly, the fertile region (if not a floodplain itself) that bucks this trend is the Levant. Here civilizations rose and fell with various iterations of humanoid gods, but monotheism has been a recurring theme, from Zoroastrianism to the Abrahamic faiths. So the question is: Why?

The answer will probably sound familiar, echoing very modern woes, particularly after I mentioned self-stereotypes based on race and country of birth: Immigration and Trade.

The Levant is, very approximately, the crossroads between Africa, Europe, and Asia, and there is evidence that this has been the case for millennia (in fact, almost constant for the last 1.8 million years, with evolutionarily relevant migrations as recent as 40,000 years ago). So the indigenous people of that region were constantly assailed by people from Africa, Asia, and Europe; all with their different epistemic commitments, and their desert gods, hill gods… and iron chariots.


From polytheism to monotheism:

So how do we get from the expansive and inclusive ideas of polytheism, and its subtypes, henotheism and monolatrism, to monotheism?

Polytheism is the belief in multiple gods. Some are gods of certain human activities, with their related moral codes. Others are gods of things in the environment (usually things that humans rely on happening, or rely on not happening – from harvests to hurricanes – as such they are gods of human interaction with the general environment).

  • Henotheism is where each individual worships one particular god, of the multitude available, whilst accepting that there is a multitude, and adopting the moral code relevant to that god.
  • Monolatrism takes this idea and has all people worshipping a high god, whilst still worshipping their preferred lesser deity, thereby bringing moral discourse into some kind of unity, whilst still having moral preferences, as exemplified by other gods, in the discussion.

Monotheism, of course, is the doing away with all of the other gods, and worshipping a single god, and ascribing all morality to that god.


Now let’s draw a comparison with stereotypes (please excuse the dreadful coinages, which I use only to make the comparison explicit):

If Polytypism is the idea that there are humans who engage in the multiple activities available to them in society.

  • Then Henotypism is the idea that you engage in one of those activities, whilst accepting that there is a multitude.
  • And Monolatypism is the idea that you engage in one activity, but are, simultaneously, part of some unifying group, or society.

Thus, Monotypism is the idea that your unifying group is more important than anything else.

Recall that I said that stereotypes are predicated on Theory of Mind and the recognition of difference.

So, does Monotheism bear comparison to a monotypism like nationalism?

The Biblical God hardened Pharaoh’s heart, and then killed the sons of Egypt, he condoned and/or aided in the destruction of the Canaanites, the Amalekites and the Moabites (sometimes to the point of directing the killing of women, children, and livestock). Egypt, Canaan, Amalek, and Moab, were nations apart from Israel.



The conclusion that I want you to draw from this is that stereotypes are, generally, explicit constructs used to describe other people by virtue of their difference from you (or your theory of your mind), and that gods are generally implicit constructs used to describe your people by virtue of their similarities to you (or your theory of minds). The construct under which people unite and differentiate themselves from others most readily is race and/or nation.

What I am saying is that any god is a metaphorical construct used to describe a group of people in shorthand. After all, if you wanted to describe your tribe/race/nation you would want to highlight its gifts, such as the goodness, wisdom and strength of its people. Monotheism is explicitly just such a national grouping; what has happened, however, is that the Abrahamic God has become detached from Israel – because omnibenevolence, omniscience, and omnipotence are such generic descriptions on their own, the construct can be ported into a Christian American landscape, or an Islamic caliphate.

To illustrate that one’s religion is just an abstraction from one’s country (itself an abstraction from tribe), at least conceptually, consider sedition, rebellion, and treason (or insurrection), and notice the relationship to heresy, blasphemy, and apostasy.

  • Sedition – Acts intended to promote disorder
  • Rebellion – Resisting authority
  • Treason – Betraying or attempting to overthrow one’s government
  • Insurrection – An uprising or revolt.


  • Heresy – Opinion contrary to orthodox doctrine.
  • Blasphemy – Speaking sacrilegiously about God.
  • Apostasy – The renunciation of a religious belief.


In this light consider the Christian martyrs. Were they spreading the good news to other countries, or were they encouraging the residents of those countries to adopt the cultural norms of an enemy state?

If you accept my postulate, then significant arguments between the Christian and the Skeptic dissolve, foremost of which is the argument on the source of human morality… if God is a metaphor for some given group of people, then god and humanity are BOTH the source of a morality, because they are one and the same thing considered in different ways.

Christianity, as a step on from Judaism, takes the non-human thing-ness of God and fleshes out the stereotype, creating an idealized person. As such many denominations aspire to be Christ-like. The deficiency of reliance on a stereotype is evident in the bigotry of certain denominations; a stereotype can’t be both male and female, for example.

As such, and in this light, consider Galatians 3:28

There is neither Jew nor Gentile, neither slave nor free, nor is there male and female, for you are all one in Christ Jesus.


The Big Questions: Does evidence undermine religion?

As mentioned previously, I attended the filming of The Big Questions on Sunday, January 11th. The topic for this episode was, ‘Does evidence undermine religion?’ I’ve done a preamble explaining the different degrees of evidence; now I’m going to illustrate how this differentiation impacts theistic claims. I’m going to pick on the theists, for the most part.

I’m going to follow the order of the programme, which you can view here, and I’ll supply the time-stamps for each bit. Where I quote the panelists I will edit out irrelevancies… and I’ll resist the urge to point out faulty English, because at least one of the panelists has English as a second language.

Nicky Campbell says, “…you [Robert Feather] believe that you’ve actually discovered the mountain where the commandments were meant to have been handed down.”

Robert Feather replies, “Probably, yes. The exact mountain, in fact.”

So, we start off with equivocation. He started with the reasonable, “Probably, yes”, but immediately switched to the overblown “The exact mountain, in fact.” So we’re off to a bad start. The main thrust of Feather’s argument seems to be that because he found a mountain that conforms to some Biblical descriptions, that Moses was real, and by extension, the Bible is true. My response to that: I highly recommend a book called ‘The Historian’, which goes into great detail, both historical and more current, about Istanbul and Budapest, and therefore Dracula is real.

We then have a lesson in what a ‘degal’ is. What’s interesting here is that Feather claims that this number was mistranslated as 1000, when it is a much lower number; he plumps for around 50-60. According to Strong’s Concordance a ‘degal’ is a banner or a standard. Given that the Exodus is effectively an origin story, and given the numerous other instances of hyperbole in the translation of these stories (I recommend looking up how big Solomon’s temple actually was), would it not be easiest to suggest that each banner was a family with a man at its head? That makes any Exodus 605 families (of between two and six people, say), which might explain the lack of archaeological evidence for any Exodus whatsoever. Alternatively, could this be a story about 605 families who were followers of Akhenaten (the first Egyptian monotheist), who fled Egypt when Akhenaten died, to the relative safety of the outskirts of the Empire (i.e. Canaan)? I only mention this because Feather has advocated for the Akhenaten thesis in the past (see his book The Mystery of the Copper Scroll of Qumran: The Essene Record of the Treasure of Akhenaten). Or, either of these could be the case, but involving 605 individual men who went on to start their own families… which is more plausible still. Or, and this seems even more likely, it’s just a story.
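For clarity, the arithmetic behind these competing readings can be laid out explicitly (the two-to-six family-size range is, as above, my own illustrative assumption):

```python
# Back-of-envelope arithmetic for the competing readings of 605 'degals'.
DEGAL_COUNT = 605

# Reading 1: each 'degal' (mis)translated as 1,000 men
traditional = DEGAL_COUNT * 1000  # 605,000 men

# Reading 2: Feather's 50-60 people per 'degal'
feather_low, feather_high = DEGAL_COUNT * 50, DEGAL_COUNT * 60  # 30,250-36,300

# Reading 3: one family of 2-6 people per banner (my illustrative range)
family_low, family_high = DEGAL_COUNT * 2, DEGAL_COUNT * 6  # 1,210-3,630

print(traditional, (feather_low, feather_high), (family_low, family_high))
```

The smaller the reading, the less archaeological trace one would expect such a migration to leave, which is the point being made above.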

Is it now that I point to the recent study that found that children who are presented with Biblical stories as historical facts are less able to distinguish truth from fiction when presented with non-Biblical fairy stories?

Professor Stavrakopoulou (hereafter, Francesca) corrects a number of Feather’s overstatements (e.g. “the exact mountain”), and in response Feather goes on the offensive. Interestingly, he claims that Francesca has been “overtaken by a flood of archaeological and textual information.” He then proceeds to ask whether she is aware of Beno Rothenberg. This is an interesting question, and one that I would turn back on Feather. Beno Rothenberg died in 2012. Furthermore, aside from a couple of papers on metallurgy, the bulk of Rothenberg’s work is from last century. Is Feather suggesting that information that is more than a decade old can be characterized as current? Feather also makes a big deal about the Merneptah Stele. Is this part of his “flood of archaeological and textual information” by which Francesca has been overtaken? The Merneptah Stele was uncovered in 1896, and the translation, which questionably includes the key word, Israel, was from the following year (Petrie & Spiegelberg, 1897).

At this point the conversation shifted to a Jewish interpretation, courtesy of Rabbi Miriam Berger. There’s really not much to say, here. The Rabbi accepts that the story is likely metaphorical (though I would go further and say that Yahweh is, too), and says that she would get shivers if some element of the story could be rooted in a real-world location, but that this is not necessary for the identity that her faith provides. Great. Perfectly sensible.

Next up is Doctor Radica Antic. Now I am going to admit that I found this man intensely annoying, and this will likely come out in what I say. Indeed, I’ll get my complaint off my chest now. This man is a doctor? Of what? Errant nonsense and condescension?

“To make archaeology the measure of all truth is so wrong. And it simply, it does not stand…”

He says a lot more, but I want to get to grips with the above, because this underscores my complaint about definitions of evidence. First, Antic sets up a strawman by claiming that anyone has suggested that archaeology is the font of all knowledge. Indeed, that is more commonly a claim from theism. Richard Carrier, a well-known historian, places history (his own field), as a means to knowledge, behind reason, science, and experience, not least because establishing historical fact requires reason, science and experience. So Antic is right, archaeology is not the measure of all truth, but as he was the one making this claim, he has merely knocked down his own strawman.

“…To impose atheistic interpretation on the Biblical text it would be like imposing Biblical or theistic understanding on some atheistic work.”

Antic, using “atheistic” to mean ‘scientific and/or materialist’, fails to note that the Bible IS making claims about the nature of reality. Claims that we know to be false. Whether you read the creation as seven actual days (despite the sun not being present until the fourth), or seven epochs, the story itself is still unequivocally wrong. The Noachian flood, as a genetic bottleneck, just makes it more wrong. Indeed, without the claims in Genesis there is no need for the New Testament. If there is no Adam and Eve, and no Fall, then there is no need for Atonement in the person of Jesus Christ, end of story (quite literally).

“First of all, I believe there is God. And IF there is God…”

It amuses me how often theists say ‘IF’, only to assume the conclusion in everything they go on to say. No, let’s stop at ‘If there is God’ and point out that you don’t know if there is, you can’t prove that there is, and your entire worldview (as shown above) is predicated on that IF. I hasten to remind Antic of Matthew 7:26 (And everyone who hears these words of mine and does not do them will be like a foolish man who built his house on the sand). “IF” is linguistic sand, whereas science’s ‘This is what we know, so far’ is, whilst maybe not rock, certainly the driving of piles down into the rock through the ‘IF’ sand.

“The atheistic community, they have no answer, how the universe-cosmos came into existence. Not at all, they are telling us that something comes from nothing. This is an offence to the common sense…”

By “atheistic community” Antic means the ‘scientific community’, so we have a conflation/red herring here, again, which now becomes the basis of a genetic fallacy. He prefers that which comes to us from common sense over careful observation. Common sense has an incredibly poor record for delivering truth (as Adam Rutherford says in response). Indeed, most religious texts are written as common sense for their region, and then stray into global or universal concerns. But let’s deal with this argumentum ad populum (appeal to popularity) on this specific topic…

First, God also comes from nothing, or has always existed, which is the same thing. So there is fundamentally no difference between the two positions. Of course, on a deeper reading, many scientists have a different definition of nothing (e.g. Lawrence Krauss). Second, an appeal to common sense doesn’t work, because most religions (including Christianity) agree with science that we only know of one universe for certain, the one we’re in. In order for something to be common sense we need to have been in a position to witness it repeatedly, and to have deduced the correct response accordingly, so common sense simply can’t provide us with an answer (Arif Ahmed makes much the same point, but with reference to statistical probability). Those who clapped at Antic’s comments here are applauding willful ignorance, and should be ashamed of themselves.

“…200 constants in the Universe… and if only one, if only one of these constants is changed, nothing would exist… then how did life started. Dawkins is telling us pure, sheer chance. …if there is God, then he speaks, and he speaks also in the Bible, then your questions about Noah’s Ark… because there are miracles all around us.”

As is the case with many theists, Antic treats discussion of biological evolution as equivalent to cosmic evolution (an equivocation), presumably because the Bible considers these events in the same chapter and/or because they both use the word evolution. They are, of course, not alike, and have around nine billion years separating them (rather than seven days/epochs in which to occur).

That being said, Antic unwittingly provides exactly the same definition for miracle that I do: a lack of understanding of the underlying mechanics makes something seem ‘amazing and inexplicable’. Once you have even a slightly better understanding of what’s going on, then you lose the ‘inexplicable’ and are left with just ‘amazing’. Indeed, it is only through believing the creationist account, and the flood, that you can believe in miracles. Once you lose belief in these fairy stories everything becomes more explicable, and thus less miraculous.

This, by the way, is the main claim that I am making with my own research, that religion is self-perpetuating, in that it encourages belief in easy-to-believe stories, but that belief makes the world a scarier and more surprising place, and fear (or at least anxiety) is a fundamental driver of religious belief.

This is the bit that makes me think that Antic is a condescending fool: In response to Adam’s perfectly sensible and (importantly) circumspect comments, he said, “Of course you don’t know [how the universe began] [more cheering from the peanut gallery].” You don’t know either, Antic. You believe some ill-founded guesswork from 3000 years ago, which was superseded, at about the same time, by the nascent science of the Greeks, which you choose to ignore. I suspect that Antic is, intentionally or otherwise, unaware of the sheer volume of work that has been done in science. Whilst science can’t discount the possibility of a god, it has certainly discounted the God of the Bible.

And now we get onto Hamza Tzortzis, another self-impressed apologist, but this time for Islam (I want to point out that I very much enjoyed listening to Maajid Nawaz on the previous episode, before anyone jumps up and accuses me of Islamophobia). Tzortzis turns to, I presume Adam Rutherford and, in a manner that you would adopt if speaking to a toddler (i.e. not humble), claims that we need to have, “epistemic humility.” I’m going to go ahead and guess that he recently discovered The Stanford Encyclopedia of Philosophy and read the bit about ‘Wisdom’.

“The point is, is that scientists are limited to the observations they have at hand, there can be a future observation that denies previous conclusions. It’s in flux. This is the beauty of science.”

This is actually an excellent point, at least on the surface (and before he goes on to ruin it with what else he has to say). The implication, in a deeper reading, is that the entirety of science can be overturned by a new observation. This is incorrect. Science, like the human mind, is a recursive process, and as such it is hugely common to be garden-pathed by a particular line of enquiry or theorising (an example being the geocentric model, and the heliocentric (Copernican) revolution). However, once findings get to a certain point there is nothing that can overturn them; for example, Einstein’s theories of relativity refined Newton’s, they didn’t overturn them. Certainly there is no finding that will make any creationist account true (if anyone wants to challenge me on that I’ll happily explain why, but it’s beyond the scope of this writing). Likewise, no findings will suddenly reveal that the Qur’an’s discussion of embryology is true. Indeed, the Qur’an relied upon a subset of what was known from the work of Galen and others, and science has moved beyond that. Scientific knowledge has only improved, and most new findings refine existing theories rather than overturning them. Most of those that were overturned outright (like geocentrism and female sperm) were based in religious views… which is bad news for what Tzortzis is trying to achieve with his (faux) epistemic humility.
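The Einstein-refines-Newton point can be made concrete with a little arithmetic: special relativity’s correction factor (the Lorentz gamma) is indistinguishable from 1 at everyday speeds, so Newtonian mechanics survives as the low-speed limit rather than being ‘overturned’. A minimal sketch:

```python
import math

C = 299_792_458.0  # speed of light in a vacuum, m/s

def lorentz_gamma(v: float) -> float:
    """Relativistic correction factor; Newtonian mechanics is gamma = 1."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# At motorway speed (~70 km/h, about 19.4 m/s) gamma differs from 1
# by a few parts in a quadrillion: Newton is effectively exact.
print(lorentz_gamma(19.4))      # ≈ 1.0

# At half the speed of light the correction is large: Einstein needed.
print(lorentz_gamma(0.5 * C))   # ≈ 1.1547
```

New observations forced a refinement at extreme speeds, but they could not, even in principle, un-discover the fact that Newton’s equations work at everyday ones.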

“Are you going to use science as a yardstick for absolute truth? No! No sincere scientist would say that because we’re bound to change. We’re limited human beings, one day we look at the horizon, we think it’s flat. Next minute learn about maths, and know it’s round. So the point is, let’s have epistemic humility.”

No scientist would lay claim to absolute truth, because that’s a religious claim. “Epistemic humility” is built into science, not least because science doesn’t lay claim to a personal relationship with the creator of the universe. Note, also, how Tzortzis makes my point by talking about the flat horizon, the discovery of maths, and the knowledge that the earth is round. Does he honestly think THAT observation is going to be overturned?

“The issue is this, why are we imprisoned, from an epistemic perspective? Why is it only science? What about philosophy, reason, maths, logic, other forms to truth? Because what we’ve done we’re presuming a scientism here, and scientism is limited.”

Right. Hamza is sitting opposite two philosophers, Peter Cave and Arif Ahmed, both of whom adopt knowledge from science to inform their philosophizing. There are philosophers of the various sciences who play a very active role in the science of which they are philosophers, and whose input into that science is significant. Of course reason, maths, and logic form the very basis of science, so Tzortzis begs the question in asking for these to be used instead of science. Various theisms have attempted to employ these things to strengthen their arguments – that seldom works out well:

“Reason is a whore, the greatest enemy that faith has; it never comes to the aid of spiritual things, but more frequently than not struggles against the divine Word, treating with contempt all that emanates from God.”
― Martin Luther

“Credo quia absurdum” (paraphrase)

“I believe because it is absurd.”
– Tertullian.

Now we have Vince Vitale, jumping in after Ahmed puts a direct question to Antic.

“In terms of the explosive force of the Big Bang, if you just conceptualise it… it’s the slightest bit stronger, it literally disperses into thin air. …The slightest bit weaker and it all collapses back in on itself.”

I find it amusing that this guy encourages us to conceptualise it, and then fails to conceptualise it himself. For starters, the contents of the Big Bang will NOT “disperse into thin air”, because there is no “thin air” for it to disperse into (and ‘void’ has the benefit of being only one word, not two). This illustrates the anthropocentric nature of theism. A devout theist seems only able to conceptualize things from a human perspective (and hence an anthropomorphic God, in shape of body and/or shape of thought). It is beyond the grasp of their imagination to remove the human element and act purely as a passive observer. What’s even more absurd is that Vitale thinks he’s delivered a knock-out argument as to why the universe must be finely tuned; instead he’s delivered a knock-out argument as to why neither of the universes he described is the one we live in. There may well have been prior iterations of this universe (or other regions of spacetime), whether due to a Big Bounce or Smolin’s cosmological natural selection, that fit his description… but if there were, neither we, nor anything like us, would live in them.

Then Antic rejoins the conversation:

“If ever, if ever there are enough evidences [to prove evolution true]… I would lose my faith in God, yes. If there is enough evidences, but there are no evidences.”

Antic is incorrect, for two reasons:

  1. there are mountains of evidence; he has just systematically avoided being presented with it, failed to pay attention when presented with it, or, and this seems most likely, doesn’t understand it.
  2. the evidence merely supports the theory of evolution; evolution itself is a fact, for which the theory is our best explanation.

So Antic is ignoring the fact, the theory, AND the evidence for that theory. I think it safe to call that ignorance. And in this forum he is arguing that his ignorance is better than someone else’s hard-won knowledge. Of course, because that knowledge doesn’t come from personal experience (of evolution itself), or from hearing a story about someone else’s personal experience, it does not count as evidence to him. Recall the three definitions of ‘evident’ I gave previously, and notice how Antic is relying solely on the weakest one.

Antic then goes on to show how truly, gob-smackingly hypocritical he is in his response to Adam Rutherford:

“More humbleness would help you… what we know is very, very little.”


Is this the face of a humble man?

The lack of humility in this demand for humility is astonishing. His lack of self-awareness about his lack of humility is saddening (and this is a common problem with fervent and fundamentalist theists). There can be nothing more humble than asking the universe for the answer, and actually listening to the reply, and Antic would do well to remember this. Instead, all of his knowledge, or rather, beliefs, have been gained from listening to other people, and believing them. This might make him a half-decent friend, but a lousy scientist or philosopher.

Here, Vince Vitale scores an excellent own goal, and doesn’t realize it.

“His [audience member’s] point is that… if evolution is the sole guiding principle of human development, that is aimed at survival, not at truth. And if that’s the case, it sounds a bit like we get on the scale and think it should tell us the time. Why should we believe that our thoughts, our beliefs…”

And the audience member clarifies:

“I think my point was missed out. Um, look, you can believe in evolution, and believe in a creator, there’s no contradiction between the two. That wasn’t my point. My point was purely from an atheistic paradigm, right, there is no God, there is no intelligence behind this universe… assuming Darwinian evolution is true (even though science is based on induction – it can be wrong)… assuming it’s true, how can you trust your mind, when your mind is a product of a blind evolutionary process, which doesn’t have an end goal. If the end goal is pure rationality, then we should have the same rationality as…”

I suppose we should be grateful that Vince characterizes evolution somewhat accurately, although that scale/clock metaphor is just bizarre. What he, and the audience member in question, fail to observe is that we are getting better at detecting actual truth because truth is ultimately better for our survival (as Ahmed says). The human species has established itself as a flexible survivalist with a pragmatic understanding of its surroundings. We assume, for example, that the thing we half see from the corner of our eye is dangerous, and we turn to confirm or disconfirm that. As such, our basest urges and reflexes are indeed pragmatic and survival-oriented. But our meta-cognitive functions evolved to enable us to break deadlocks between two pragmatically equivalent drives.

We often hear of the fight/flight/freeze response. There are incredibly few cases where, when in danger, freezing is appropriate. It is reasonable to assume, therefore, that the two basic options are fight or flight, and that freezing occurs when the two are deadlocked. Thanks to our evolved capacity for breaking this deadlock we now know (or at least we would, had we read the appropriate survival handbook) when it is appropriate to freeze, run, shin up a tree, make lots of noise and flap our hands around, and when it is appropriate to attack. Notice that I just listed five reactions in place of the original three of fight, flight, or freeze, and that’s before we get to brandishing a flaming torch, flinging a spear, or shooting the threat between the eyes. That is what our intelligence is for, and that is, very roughly, how it works.

…and it’s at this point, about half way through the episode, that I admit defeat, both on the basis of available time, and sheer mental exhaustion. The devout theists in this episode held the floor for longer than any of the pragmatic theists or atheists did (I apologise to Rabbi Miriam Berger and Professor Joan Taylor for making the distinction in that way, but it really was the only one I could think of). Whilst the theists held the floor they said a great deal, whilst also saying very little. And the atheists (in the main) spent more time correcting them than making their own points. As illustrated above, the theists consistently presented opinion as fact, denied any opposing facts that were presented to them by people who were in the right field of study to be able to contradict them, and then demanded humility, whilst displaying none.

What this episode illustrates is that people who are fundamentalist or fervent in a given religion are blind to their own shortcomings and deaf to contrary evidence. Those that are a little more open-minded, are only so in a sophist fashion; they continue to argue the theistic point, but with a veneer of plausible-sounding philosophy and quasi-scientific language, and usually in a rehearsed fashion. Only the theists that treat their religion as metaphorical, or as an organizing framework for more esoteric thought, sound sensible. This seems to be because they also use science to organize their lives, a fact which allows them the luxury of considering their esoteric thoughts in the first instance.

My research suggests that the monotheistic God is a construct derived from human social thought, as a reaction to the overwhelming number of people in our social world. This overwhelms the minds of some, and makes it impossible to view the world in anything but human terms. The switching off of the social module is no longer possible in this overwhelmed state. This process is reinforced by stories and mythologies from religion, not because of the stories themselves, but because in believing those stories, much more about the world comes as a surprise. Surprise leads to fear, or at least anxiety, and both reduce cognitive abilities. Anxiety is the equivalent of running on three out of four cylinders, semi-permanently, and fear is the equivalent of running on one or two cylinders, over short periods of time (this fact giving birth to the absurd trope that ‘there are no atheists in foxholes’).

Watching this episode, it was easy to see whose ability to use social thinking to monitor their own behaviour was impaired. It was also easy to see who, when made anxious about the veridicality of their beliefs, lost the ability to string a sentence together without self-contradiction or fallacious reasoning. Others had the defence mechanism of only listening for keywords, and reacting to what they thought they heard, often with a rehearsed spiel. None of these behaviours was evident from the atheists or pragmatic theists, who responded to the question asked, or position stated, with a considered reply…

This is our modern world, in microcosm.


The Big Questions: Evidence

On Sunday, January 11th, I had the great good fortune to be in the audience for a filming of The Big Questions. The topic for this episode was, ‘Does evidence undermine religion?’ A great deal arose from the comments of the various panelists for this, and I will address those in part 2 of this blog (after the episode has aired, January 18th). In this installment, though, I’m going to take a look at the concept of evidence with a particular emphasis on how it relates to religious claims.

For a discussion on whether evidence undermines religion it was unfortunate that the definition of ‘evidence’ was not discussed. Then again, that would make a much less interesting hour-long show. So, in a show such as The Big Questions, it is to be expected that the focus be on the big AND interesting questions. This being said, a working definition of ‘evidence’ (such as the one below), provided to the panelists ahead of time, might have produced quite different results (or lead to the less interesting episode I just mentioned).

There is a vast difference between folk theories (or so-called common sense) about what constitutes evidence and scientifically and philosophically literate theories of such things. In the context of a live discussion, each panelist can only gradually work out where the other panelists stand on that underlying question; whilst one is still assessing another’s position, and until one has accurately deduced it, one is necessarily talking past them. Over the course of this essay I will explain why, by referencing what different individuals tend to consider appropriate as regards evidence, and the impact of interpretation on evidence after the fact. The best place to start is with a (Chambers) dictionary definition, but, rather than evidence, let’s start with what it means to be ‘evident’:

Evident: that can be seen; clear to the mind; obvious…*

Dictionaries generally order their definitions such that the most common usage is first, and subsequent definitions can add clarity, whether by comparison or contrast. This is well illustrated with the above. Firstly, “that can be seen” has a modern, scientific, empiricist slant. By comparison, “clear to the mind” is an older, but still relevant, philosophically rationalistic view (indeed it calls to mind Descartes’ extended discussion of “vivid and clear” mental imagery from ‘Discourse 2’). Finally, “obvious,” which is problematic. What is obvious to one person may not be obvious to another, and for a whole host of reasons. I think it fair to say that, in a discussion about science and religion, the more scientific, and those (like me) with a passing understanding of the history of philosophy, are working with the first definition, and sometimes the second. The more religious tend to use the last two. Some religious people might take umbrage at my saying this, so let me be clear, I used the word ‘tend’ for a reason, and I would point to the ‘evidence’ for God using testimony and ‘the witness of the holy spirit’. (See Christian apologist, William Lane Craig’s defense of the Christian God “by the self-authenticating witness of God’s Holy Spirit” and arguments against that.) The use of the word “witness” is itself problematic, as it is a witnessing that seldom involves senses – there is no earwitnessing or eyewitnessing – indeed one might suggest that miracles are the provision of corroborating external sense data. Unlike miracles, witnessing is an emotional (and internal) experience. As such, something that is “clear to the mind” certainly is obvious – to you – but not necessarily obvious to anyone else. So you’ll need some other kind of evidence:

Evidence: that which makes things evident; means of proving an unknown or disputed fact; support (e.g. for a belief); indication; information in a law case; testimony; a witness or witnesses collectively…*

Evidence is that which makes something evident, but as discussed, what is evident to one is by no means evident to another. As such, witnessing and testimony, as employed in religious circumstances (and indeed in legal ones), is not proof of the claim, but proof of the witness’s belief in that claim (assuming that they’re not lying, but we’ll touch on intentional falsehood later). Does belief prove an unknown or disputed fact?

Psychologist Elizabeth Loftus has shown us that a witness’s testimony can be affected by something as simple as the way in which a question is asked about an event. For example, in a famous experiment (Loftus & Palmer, 1974), participants were shown footage of an automobile accident. After viewing the footage, each participant answered one version of the question, “About how fast were the cars going when they (smashed / collided / bumped / hit / contacted) each other?”. The resulting estimates varied between ‘contacted’ (the lowest, at around 32mph) and ‘smashed’ (the highest, at around 41mph). The participants’ responses were biased by the version of the question they were given, such that their estimates varied by around 25% (which I’m sure you’ll agree is quite a lot, considering they viewed the exact same footage, not merely the same event). This serves to make the point about memory and, without being diverted by too much further detail, human cognition is riddled with similar flaws in the receiving, processing, understanding, and recalling of information, as highlighted in the work of Amos Tversky and Daniel Kahneman (1974), and many, many other psychologists.
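To put a number on that wording effect, here is a minimal sketch using the published group means from Loftus & Palmer (1974); the speed figures are from the paper, while the variable names and the calculation itself are just illustrative arithmetic.

```python
# Published mean speed estimates (mph) from Loftus & Palmer (1974),
# one group of participants per verb used in the question.
mean_estimate = {
    "smashed": 40.8,
    "collided": 39.3,
    "bumped": 38.1,
    "hit": 34.0,
    "contacted": 31.8,
}

lowest = min(mean_estimate.values())   # 31.8 ("contacted")
highest = max(mean_estimate.values())  # 40.8 ("smashed")

# Relative spread: how far the wording alone moved the estimates.
spread = (highest - lowest) / lowest
print(f"{spread:.0%}")  # prints "28%" – the "around 25%" spread mentioned above
```

Identical footage, yet the single verb in the question shifted the group means by roughly nine miles per hour.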

It is likely that these flaws in cognition are exactly why science has been so very successful in describing the natural world, as compared to other methods that rely directly on human cognitive faculties. Instead of developing folk theories about a phenomenon, or asking someone else about their folk theories, we have sufficient humility to ask the universe itself. We attempt to re-enact the scenario in which the relevant phenomenon occurs (or predict its occurrence and observe it more closely), and measure the outcome. That measurement, as objective as we can make it, is evidence.

The fact of evidence, even where it is agreed upon, does not mean that differences of interpretation can’t occur, even between incredibly smart people; the Einstein-Bohr debates at the birth of quantum mechanics make that evident. But, just as the way in which a question is asked can alter the answer given (without the respondent being aware), so too can exposure to ideas change the way in which you receive subsequent information. The Bible, for example, has variously been used to support slavery (mostly in the past) and to condemn it (more recently). As such, if one considers the Bible to be evidence (rather than the claim), what has changed is the set of extra-Biblical facts (or at least beliefs) that the Bible-believing populace hold to be true. This is, in effect, hermeneutics (the interpretation of texts), and it is instructive that hermeneutics was born out of Biblical textual analysis. It was subsequently recognised (first in philosophy, and then in Biblical criticism) that hermeneutics had to include an understanding of the “social, historical, and psychological world”* of the time in which the original text was written (I’m sure Professor Stavrakopoulou – one of the panelists – would correct me if she happened to read this, and I happened to be wrong on that latter point).

The Bible, very generally, is a collection of testimonies about events, claims about the nature of reality in light of those events, and claims about the impact of that reality on the social world, and so on. As such, within the Bible, there is a great deal of interpretation of prior work that is also within the Bible – there is no clear distinction between older and newer writing. My understanding is that the Qur’an contains a great many of the stories that are contained in the Bible, and this seems likely to be due, at least in part, to the impact of the Jewish and Christian knowledge of Muhammad’s cousin-in-law, Waraqah ibn Nawfal, and others, in interpreting and writing down Muhammad’s revelations (themselves the product of the social and religious environment of the time).

In the case of the Qur’an it is often said that it must be read in the original Arabic, and no translation is a true Qur’an. Much the same was said of the Bible, when it was still in Latin, and attempts to translate the Bible into English were met with death threats. This is no longer the case, and as such there are now hundreds of versions of the Bible… and all of them are at least subtly different.

The Bible and Qur’an are, at some level, claims made by people about events. In the case of the Bible, those claims are voiced either by the authors themselves, or by the protagonists of the Bible story in question – as such there are at least one or two levels of interpretation involved. The Qur’an, by contrast, is a claim by one person, Muhammad, about the nature of a set of revelatory experiences, which may or may not have been recontextualised by the input of various scribes, family members, and followers, depending on their own knowledge of the Torah, Tanakh, and Christian Biblical writing, and other socially relevant historical matters. Needless to say, depending on the impact of that knowledge of Jewish and Christian scripture on the Qur’an, the layers of interpretation may move from one layer deep to three, four, five, or more layers deep. These interpretations of interpretations of interpretations are presented with a human voice (as opposed to a divine one), and they are about very human concerns, such as life, love, death, and meaning… and this fact leads me to my final point.

Most people take other people at their word, unless they have reason not to. The reason not to may be because the individual has been found to be a false witness in the past, but bearing false witness is different from being mistaken. I am often surprised by how readily people who claim deep religious faith will call someone that makes an opposing claim a liar – calling someone a liar is very different from saying ‘I disagree with you.’ Likewise, saying someone may be mistaken in their interpretation is not the same as calling someone a liar.

As discussed above, religious texts have very strong human themes, and as such it is unsurprising that some people will engage with these in a very human way, especially if they have been raised to do so. If you’ve been raised to not contradict your elders and, by extension, to accept religious authority, with little or no question, then the issue is the way in which you are engaging with the evidence. If your continued exposure is to a limited subset of religious claims, delivered emphatically by a priest or imam, then your engagement with the claims will continue to be social and emotional, not rational.

“As Loftus puts it, ‘just because someone says something confidently doesn’t mean it’s true.’ Jurors can’t help but find an eyewitness’s confidence compelling, even though experiments have shown that a person’s confidence in their own memory is sometimes undiminished even in the face of evidence that their memory of an event is false.”

In this modern scientific age, even the most ardent believer will have been affected by their exposure to both science and technology – not least the democratisation of information on the internet. With this exposure, and access to both good and bad information, skepticism is a necessary skill. The realization that a modern teenager knows far more about how the world works than the authors of any ancient holy book did, should give believers pause for thought, whether they believe humans were conduits for, or interpreters of, the divine word.

Nice people treat those they meet with respect until given cause to do otherwise. Extending this courtesy to long-dead people who were short on good, evidenced information does not make one nice, it makes one gullible (which is nice, if you’re a sociopath looking for people to use). For individuals to use their belief in the words of the long-since deceased as the basis for being rude to someone who is right in front of them rather makes a mockery of the religious claim to humility. Assuming that your assessment of the claims of the long-since deceased is correct, and that someone else’s assessment of other, contradicting evidence is therefore wrong, is arrogant. Of course, most religions suggest humility in the face of evidence. Unfortunately for many believers, what constituted evidence at the time those words were written has changed, because we now know how fallible we humans really are… then again, doesn’t your omniscient being of choice know that, too?

*Definitions of both ‘evident’ and ‘evidence’: The Chambers Dictionary (13th Ed.)

*Definition of ‘hermeneutics’ and related quote (“social, historical, and psychological world”): Blackburn, S. (2008). Hermeneutics. In Oxford Dictionary of Philosophy (Second Edition (Revised), p. 165).

For an excellent read about human memory I recommend Charles Fernyhough’s Pieces of Light: The new science of memory


Non-believers and American politics…

It is hard to ignore two psychosocial features of religious observance: First, that with increased material wealth (or, to put it less hedonistically, with increased protection from privation), religiosity declines (Paul, 2009). This is not just personal wealth, or societal wealth, but an interaction of the two. Second, with increased learning (at the personal level, as well as the societal) comes decreased religiousness; more precisely, this seems to be an interaction between intelligence (e.g. Zuckerman, Silberman & Hall, 2013) and education (Martinez, 2014). These factors interact; as such, the highly educated and most wealthy are the least likely to be religious (Paul, 2009). This can be seen in the reduced religiousness of wealthy but equitable nations (such as the Scandinavian states, and Japan), in the hyper-religiosity of the US (a wealthy but inequitable nation), and in the religious zeal of the poor and uneducated nation states of the third world. Such tendencies, at the population level, must have some degree of expression in individuals. As such, a significant number of US politicians (and their primary donors), who are undeniably wealthy and well educated, must also be non-religious, despite the wealth disparity in the US, and despite their claims of religiosity.

“The average age of Members of the House at the beginning of the 112th Congress was 56.7 years; and of Senators, 62.2 years. The overwhelming majority of Members have a college education. The dominant professions of Members are public service/politics, business, and law. Protestants collectively constitute the majority religious affiliation of Members. Roman Catholics account for the largest single religious denomination, and numerous other affiliations are represented.” – Manning, 2011

Given the unavoidable fact that some of these politically motivated individuals MUST be non-religious (despite there being no professing unbelievers in the houses at this time), the question then arises: Are they claiming religion for personal gain, or are they claiming religion for personal gain? Clearly this is a non-question, but under the surface a genuine question lurks: Does this belief inure the individual to the fundamental injustice of their wealth at the expense of their fellow Americans, or do they adopt an outwardly religious pose to assure the masses that they are merely there by the grace of God? I would suggest that this dichotomy is no dichotomy at all – both are hypocritical. Try as I might, I see no third option; at most, I see variants of these two.

It is interesting to note that the two political parties have, in general terms, qualitatively different approaches to the issue of wealth. Liberals don’t really have a coherent narrative regarding wealth, aside from occasional attempts to enact progressive taxation and wealth redistribution. There is also a notable propensity amongst Democratic donors to be known for their humanitarian giving (e.g. Warren Buffett and George Soros). Republicans, on the other hand, have rationalised wealth as indicative of moral worth (Lakoff, 2002), conflating merit with money. Of course, it’s hard to maintain this position of moral authority when the majority of members of Congress (of both parties) are from very well paid professions that seem to attract those high in psychopathic tendencies (CEO and Lawyer are listed at first and second, respectively, and Civil Servant is listed at tenth).

“Psychopathy is a personality disorder that has been variously described as characterized by shallow emotions (in particular reduced fear), stress tolerance, lacking empathy, coldheartedness, lacking guilt, egocentricity, superficial character, manipulativeness, irresponsibility, impulsivity and antisocial behaviors such as parasitic lifestyle and criminality.”

On the matter of religiosity, the parties also have different approaches. The Republicans claim religiousness at every turn, appealing directly to the religious right (obviously), and lying about their statistically likely lack of religiousness. Liberals, by comparison, avoid mentioning religion wherever possible, except where politically expedient. This is a difference of degree, not type, and the reason is simple. The populace generally treat sins of commission more harshly than sins of omission (Harris, 2010), but where that sin of commission is a claim of religiosity, it is more readily overlooked, particularly by the religious right:

“On September 20, 2006 an independent Congressional-watch organization called Citizens for Responsibility and Ethics in Washington released its second annual “Most Corrupt Members of Congress Report.” Three senators and seventeen members of the House were named, most of them hold-overs from the first annual report (although the news release noted with some glee that two of the previous winners were already on their way to jail).”

In other words, corruption and overtly irreligious (one might say sinful) behaviour are not sufficient to damage a politician’s chances, but proof of atheism will be. The highly religious will put up with an incredible amount of widely reported “unChristian” behaviour (although that term seems to have lost all meaning in US politics), but they will not countenance an atheist in office (Gervais, Shariff & Norenzayan, 2011). So, let’s find (and out) atheist Republicans, as this will have one of two effects – it will either improve the stock of atheists with the highly religious (and atheists are currently less trusted than rapists (ibid.)), or it will remove atheists of low integrity from the political landscape… I see both as positive outcomes.

Late addition: here are the religiosity stats for the current Congress.


The Right to Die as a Human Right

Two commentators on the recent appeal to the Supreme Court over the right to die, Baroness Jane Campbell (former Commissioner of the Equality and Human Rights Commission, 2006-2008) and Lord Carlile (Liberal Democrat Member of the House of Lords), present a stilted view of the situation. The position put forward by both is flawed, both through poor reasoning and through a failure to focus on the individuals concerned as rational and emotional beings who happen to be physically incapacitated.

In a stellar piece of fallacious reasoning Baroness Campbell said:

The main reason given for wishing to die is not wanting to become a burden, whether their family would see it that way or not. Against this background, the “quick-fix” of an assisted death appears attractive.

It is precisely because that is the majority view that we must continue to oppose it. There is no better evidence of the negativity with which terminal illness, chronic illness, and disability is viewed than that we might be better off dead.

Notice how in the first paragraph she speaks of the main reason given by those wanting assisted death for themselves, but in the second paragraph she states that the majority view must be opposed. So we must resist the majority view of those most directly affected by any such future law? I think she meant to say that if the majority view of the able-bodied were pro-right-to-die, against the wishes of the incapacitated, then we must resist it. And she would be right, if that were the case, but, by her own words, that is not the case. If the majority of those that would be in a position to opt for the right to die are in favour – and I suspect that the majority of empathic individuals who are not personally affected by the issue would, likewise, be in favour – then it is a law that must be seriously looked at. Perhaps the Baroness would like to clarify her point (and cite her sources whilst she’s at it).

For some reason, the Baroness thinks that an individual who is suffering through a debilitating illness is somehow no longer able to make rational decisions based on their own empathy for the dilemma of their loved ones. Maybe, as an incapacitated person who herself has a fierce desire to live, she is unable to empathise with someone who does not have that drive, or who expresses other, equally strong, but still perfectly human drives. An able-bodied person makes decisions about what they will and won’t do based, in part, upon the impact of those actions on the people around them. In the case of many of the terminally ill and incapacitated, this element of self-determination has been taken away from them, and the Baroness would seek to extend this by perpetuating the removal of this element of their legal personhood.

In a similar vein, after the ruling came down against the right to die, Lord Carlile said:

There are other people involved in these cases. The subject may wish to commit suicide, but his or her children may see things in a completely different way. They may value every minute of the rest of that person’s life, and so rights are not merely in the mind of the person, of the individual concerned, there are other people involved too, and we need to take a broader view than is sometimes advocated.

Why, if someone is of sound mind, is that person’s right to self-determination removed, and instead ceded to their relatives? We are concerned about the dehumanising of the terminally ill and incapacitated, but here we have rendered them down to the legal status of chattel. So in trying to offer dignity (which is, I assume, the goal of Campbell and Carlile), instead we offer ignominy. Why can the relatives not “value every minute” of a finite and determined time with their loved one? That is something that the families of cancer patients are not afforded. Why is the family’s emotional pain allowed to trump the mental, physical, and emotional pain of the individual?

Indeed both Campbell and Carlile seem to be treating incapacitated humans as merely physical beings, due to their physical condition, and not thinking of their emotional well-being at all. This is best illustrated by another comment from Lord Carlile:

There are very few people who die in agony. Some do, but there are very few people who die in agony as a result of an inability to treat pain.

When did the physical become the sole determiner of an individual’s quality of life? What if a previously very physical person is suddenly incapable of movement? Daniel James, the rugby player who took his life after becoming paralysed from the chest down, felt that he could not thrive after the loss of physical movement. Another rugby player, Matt Hampson, saw his paralysis as a challenge. Can anyone say which of these two men is “right”? The only ‘right’ here, is the one of self-determination.

On this matter Baroness Campbell also sets up a false dichotomy by saying:

We help those with suicidal thoughts look for positives in their lives. I believe chronically ill and disabled people deserve that “right”, to be helped by us all to live their lives.

Indeed. Most calls for right to die legislation also call for appropriate counselling, and for checks and balances to be in place to minimise abuse. Counselling is, or is called upon to be, a mandatory part of the process for gender re-assignment, and even for relatively minor plastic surgery. Campbell points to the media exposés of abuse in care homes as potentially indicative of the way the incapacitated are treated, but counselling, properly undertaken, would uncover such problems, as well as any duress being applied.

In conclusion, Baroness Campbell said:

I didn’t want Tony Nicklinson to die and I don’t want Paul Lamb to die. I respect and value them. I want them to carry on disagreeing with me for as long as possible.

Why, Baroness Campbell, does what you want have any bearing on what they do? It is called self-determination for a reason. You can determine your course for yourself, something which you have shown yourself amply capable of doing, but these brave individuals should be allowed the same right for themselves. If you respect them, you should respect their decision, even if you disagree with it.

To conclude, Baroness Campbell is understandably nervous about right to die legislation that is too broad, too permissive, too open to abuse. So, I imagine, is anyone who is pro-right to die. Her position, however admirable its motives, is not helped by misrepresenting her own facts and by using fallacious reasoning. And it is fatally undermined by Lord Carlile’s startling lack of empathy. The right to die is a logical extension of the right to self-determination. Counselling, in an effort to determine the state of mind from which the decision is being made, is clearly a necessity, and such checks and balances can be put in place, through consultation, when the right to die is written into law. The drafting of such a law is the context in which these discussions should be had – not here, not now, actively preventing any such law from being formulated in the first place.


Religious thinking is not the same as religious thought…

The main problem with religions (and this is pretty much all religions) is that the words of a few have been painted as revealed (God-given) wisdom, and as such are unimpeachable. If those words were wrong (and they were), a given religion will never improve in any appreciable way until the founding utterances are discarded and the fallibility of the utterers conceded. But because everyone in a position of power within the religion has a vested interest in keeping the enterprise going, no such acceptance of what is patently obvious to anyone outside the fold will occur. Hence, I have more time for the clergy who have joined The Clergy Project than I do for sophisticated followers who hope against hope (aka pray) for a change in ‘the system.’

Apart from anything else, the concept of “revealed wisdom” is patently a misunderstanding of what we now call intuition, which we know to be the product of non-conscious (which is to say evolutionarily old) processes. Intuition is the product of available (crystallised) knowledge, deeply processed, so these intuitions can only reflect knowledge as it stood at the time. For example, many Muslims point to the embryology of the Qur’an as being ahead of its time, but not only is that embryology wrong, it is also similar to the work of Galen (b. 129 CE) – a physician of Roman citizenship based in Pergamon (modern Bergama, Turkey) – written centuries before Mohammed.

There’s no doubt that religious people have occasionally made inspired guesses. For example, Maimonides, in describing the making of the world in Judaic terms, came very close to describing the Big Bang as starting as a mustard seed and spreading out. But that is clearly a poetic device that almost captures the truth of the Big Bang whilst actually describing Genesis. A Christian example is the work on memory by St Augustine, which just last year was noted as highly prescient with regard to modern neuroscience on memory and mental time-travel (links below).

It surely can’t escape anyone’s notice that followers of any given religion can point out the flaws of other religions, but often fail to see those same flaws in their own faith. Some, a very few, can see the flaws in their own religious organisation and the way in which it administers doctrine, but abide nevertheless. The issue is not the mode of thought, per se; the issue is the focus of that thought. If a religiously inclined thinker is open-minded about science, they will be a valuable asset to any committee on scientific ethics. But if a religiously inclined thinker applies that same mode of thought with archaic beliefs about the world as its base (as many of the most outspoken religious individuals are inclined to do), then one can only expect archaic (and occasionally post-archaic, but certainly not modern) outcomes.
