Thee or Me? (Part 1) Morality & The Prisoner’s Dilemma

One of the most interesting consequences of the Covid pandemic has been the public displays of both prosocial and antisocial sentiment. We have seen the best and worst of humanity.

There’s been hoarding and profiteering of essential supplies and protective equipment, acts of flagrant aggression and selfishness, and the worst of denialism and conspiracy-theory mongering.

Alongside this we’ve seen many wonderfully altruistic and kind gestures made to friend and stranger alike. More broadly there has been society-wide cooperation at personal cost to protect the vulnerable. Still, there’s also been non-cooperative, angry opposition to such broad-based community policies and directives. 

How can the ‘same’ circumstances bring about such different responses?  Well, it is obvious that we are complex creatures and that there is a diversity of personalities and characters at play.  At the same time, there are diverse environments that ‘nurture’ us, and these weigh heavily in determining our behaviours.

This and the next podcast will tease apart some of these issues, to shed light on this complexity and to draw out some implications for living well with each other.  Along the way we’ll explore our evolutionary nature, views from economics and the development of character.

Let’s start with the economist Adam Smith. In ‘The Theory of Moral Sentiments’ he put forward the revolutionary idea that our moral ideas and actions are a product of our nature, not of our reason.  This is highly consistent with evolutionary psychology, which views altruism as a hard-wired feature of our species – one that is critical for our survival.  It’s also consistent with what we know about the heritable nature of personality.

Smith proposed there was an ‘impartial spectator’ – an imaginary figure in our mind that guides us.  Let’s call it our conscience. 

Through experience we gradually build up a personal system of behavioural rules – morality.  In this way, society, using public ethics, approves of and rewards acts that benefit society broadly, and disapproves of and punishes acts that harm it.

Nature, for its part, has equipped us with appetites and aversions that promote the continued existence of our species and our society.  Two such mechanisms are self-interest and altruism.

This is similar to Smith’s invisible hand (his famous metaphor for market economies) that guides us in our balancing of self and mutual interest.

So, what is this invisible hand for human beings?  And why have some people behaved ‘immorally’ and others ‘morally’ in the current pandemic? Predictably, the answer is ‘partly nature and partly nurture’.

One way of thinking of this is that our genetically endowed personality provides us with a ‘loaded deck’.  But nurture (including knowledge, experience, incentives, rewards and punishments) actually deals the cards.

We are all endowed with morality-seeking mechanisms, but unequally. Put bluntly, it is harder for some personality types to behave altruistically, to be ‘good’.

For example, people who are extremely low on the traits of agreeableness and conscientiousness are much more likely to be sociopathic.  They will intuitively tip the moral balance towards self-interest.

On the other hand, those who are extremely high on agreeableness and conscientiousness will tend to offer better ‘trades’ to others, often sacrificing self-interest in the service of others’ interests, again driven by implicit cost-benefit analyses. They will tend to be prosocial.  We are born with these kinds of built-in preferences.

Both ‘types’, however, will develop their moral compass throughout their lives, tempered by the rewards and punishments of their ‘village’ and calibrated against the pressures of a public ethics and the sanctions of society.

This moral compass is also influenced by information and knowledge – for example, education in civics, or an understanding of one’s own psychology and personality-driven biases.

How does this play out in everyday life? Can circumstances be engineered so that people become more prosocial or more antisocial?  Again, predictably, the answer is yes and no.  Groups can be swayed, but predicting individuals is notoriously difficult.

One very widespread view of our species is as Homo economicus, or economic man (sic). This presents an impoverished picture of our nature in terms of amoral self-interest, ignoring our natural moral sentiments. According to this view, we each rationally calculate what is in our best interest, and then do it.

However, this view has significant predictive and explanatory failures, and these have led to the development of the very influential field of behavioural economics – quite simply because people do not, in fact, behave only according to this kind of selfish calculation.   We are complex, if not messy!

Samuel Bowles, in The Moral Economy, has fascinating things to say about how implicit messages in incentives, rewards and punishments can ‘crowd out’ or ‘crowd in’ our civic motives.  

For example, when behavioural experiments test this view of rational selfishness (using monetary rewards, punishments and other incentives), it does not emerge that we are routinely selfish.

However, public policies informed by this view of economic man do tend to ‘crowd out’ ethical and generous motives, and therefore tend to backfire. They actually produce selfishness, ‘crowding in’ our self-interest.

Surprisingly, when we offer financial or other rewards to encourage moral behaviour, it can make people less moral.  It is not the incentives that are the culprit. It is the implicit message of mistrust conveyed by fines and rewards. 

People intuit that self-interest is expected – that they are expected to be lazy, or cannot be trusted to contribute to the public good. This is why policies and business practices that do not speak to the moral and generous side of human nature often fail.

On the other hand, well-designed incentives can ‘crowd in’ the civic motives on which social cooperation depends. These appeals to shared moral interests, as against punitive consequences, shift brain processing away from regions that process threats and frontal-lobe cost-benefit analyses, towards regions concerned with social connection and values-based thinking.

Decades ago, behaviourism demonstrated that you can’t inculcate desired behaviour by punishment; you can only restrict unwanted behaviour.  Only by positive reinforcement can desired behaviour be cultivated.  Bowles takes this insight further: the type of incentive also counts. We should assume trustworthiness, because this elicits trustworthiness.  Even ‘positive’ reinforcement can be construed as ‘negative’, activating the corresponding self-interested mental mechanism.

But we should not be naive.  There are people strongly predisposed by both nature and nurture to immorality and bad social action.  All the positive framing and attempts to elicit altruism may, in these cases, fall on deaf ears. When such behaviour manifests, it needs to be controlled as best we can with significant sanctions.

Once a person is operating by pure cost-benefit analysis, to curb antisocial behaviour the cost has to clearly outweigh the benefit. Thus, we need an approach of both carrots and sticks, with a bias to carrots.  Unless we have good reason to the contrary, trust will beget trust, most of the time.

As in the famous Prisoner’s Dilemma, when we are given opportunities to cooperate or to betray others for enhanced self-interest, the best long-term strategy is tit-for-tat.  We should first cooperate.  If the other party betrays that trust, we protect our self-interest at their expense.  If they revert to cooperation, we again engage in altruistic behaviour.  In the long run, life prospers as a nonzero-sum game of reciprocal altruism. That said, there’s no good reason to be a sucker.
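To make the tit-for-tat idea concrete, here is a minimal Python sketch of an iterated Prisoner’s Dilemma. The payoff numbers (3 for mutual cooperation, 1 for mutual betrayal, 5 and 0 when one side exploits the other) and the strategy names are illustrative assumptions from the standard textbook setup, not something specified in this episode.

```python
# A minimal sketch of the iterated Prisoner's Dilemma with a tit-for-tat
# player. The payoff values below are the conventional illustrative ones,
# assumed here for the example rather than taken from the episode.

from typing import Callable, List, Tuple

# Payoffs: (my points, their points) for each (my move, their move).
# 'C' = cooperate, 'D' = defect/betray.
PAYOFFS = {
    ('C', 'C'): (3, 3),  # mutual cooperation
    ('C', 'D'): (0, 5),  # I am the sucker
    ('D', 'C'): (5, 0),  # I betray a cooperator
    ('D', 'D'): (1, 1),  # mutual betrayal
}

Strategy = Callable[[List[str]], str]  # a strategy sees the opponent's past moves

def tit_for_tat(opponent_history: List[str]) -> str:
    """Cooperate first; afterwards, simply copy the opponent's last move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history: List[str]) -> str:
    """The pure 'economic man': betray every round."""
    return 'D'

def play(a: Strategy, b: Strategy, rounds: int = 10) -> Tuple[int, int]:
    """Run an iterated game and return the two players' total scores."""
    history_a: List[str] = []
    history_b: List[str] = []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = a(history_b)   # each player sees only the other's moves
        move_b = b(history_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

if __name__ == '__main__':
    print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual trust prospers
    print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then retaliates
```

In this toy setup, two tit-for-tat players prosper together, while against a constant defector tit-for-tat loses only the opening round and then refuses to be a sucker – which is the point of the strategy.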