The Nature of Ethics

7 January 2014. Dr Donald Cameron, BRLSI Convenor. It is shown by a relatively crude computer simulation that a basic morality will evolve, given certain minimum cognitive abilities. Human intelligence and communication skills have carried the development of morality to a level beyond that of other species. Sadly, these skills can also be used for cheating and deception; defences against these have in turn evolved, resulting in great complexity.

The Nature of Ethics
Donald Cameron
BRLSI – 7th January 2014

Paper Recycling
A little while ago, I was present at the Royal Institute of Philosophy seminars for young people ably run by Christopher Gifford of Bristol University. While there, I idly picked up a book the young people were studying. It was a recent publication entitled Ethics; I cannot remember the author’s name. I scanned the index and found no entries for evolution, natural selection or Darwin. I commented that the only use for the book should be as recycled paper.

My judgment was harsh, but fair. If it is true that we are evolved animals and have developed with no organising force except natural selection (and everyone except a minority motivated by religious desire agrees that the evidence for that is overwhelming) then that must tell us a great deal about why we have ethics, and many other things about us too. We have a complex moral instinct, albeit an apparently imperfect one, and that could have come from nowhere other than our evolutionary past.

Moral philosophy is not very impressive
The current philosophy of ethics, while making a few useful observations, is still largely a failure. Despite centuries of discussion of the subject, there seems to be no agreement on any well-defined theory. We are invited to believe that the philosophical world is full of people who cling to one opinion or another and are known as “ists”. Their fixed points of view are known as “isms”. Their theories are mutually contradictory in many respects, and people in philosophy classes discuss their plausible aspects and the criticisms of them, but seem unable to reach a conclusion.

I do not intend to give a comprehensive course in ethics, but let us consider some of these. Roughly, they divide into duty-based (or deontological) and consequence-based theories. Have you ever wondered why philosophers use words like deontological, instead of duty-based? Do they do it to help our understanding, or is it to give themselves a mantle of erudition when their ideas are actually pretty simple?

Religious ethics tend to be duty-based. God has given us his commandments and specifies duties in other ways. But there are many problems. One paradox is whether God commands what is good because it is good, in which case he is conforming to a standard external to himself, or whether goodness is simply whatever God says, whatever that happens to be. And if you read the Old Testament, God is supposed to have commanded some pretty nasty things by our present-day standards. And Christianity is not the only religion. Although there is a measure of agreement between people from all parts of the world on moral behaviour, there are cultural differences also and these are reflected in their religions. It seems plausible to me that the will of the various gods has been codified by priests to reflect the moral codes of the societies in which they found themselves.

Kant’s ethical views were very much duty-based. The motive, he felt, was more important than the consequence, and the important thing is a sense of duty. Curiously, he said that helping someone because you feel an emotion of compassion for their misfortune is not a moral act. You should help them because of a sense of duty to do so. His “categorical imperative” amounts to the rule that you should act only as you would be happy to see everyone act. This is a kind of Golden Rule which is accepted by many societies. His scheme makes reasonable sense, but is no more than an attempt to codify his inner feelings. It is of limited use in telling us what our duty ought to be.

The consequentialists believe that it is the results of actions that matter. Teleological is the unnecessary jargon for this approach. Utilitarianism, in rough terms, says that actions should aim for the greatest happiness of the greatest number. It sounds OK at first, but it begins to fall apart when it is examined. How can we measure happiness? The calculation must take account of negative happiness. How should we evaluate the pleasure of the sadist? If his happiness is more intense than the unhappiness of his victim, then does it become morally right?

And picture a woman who is informed of the death of her own child. Then think instead of a woman who learns of the death of a thousand children in a disaster in some distant part of the world reported on television. Which would feel the greater grief?

Happiness is obviously not the correct measure of welfare. It is a mental process more closely aligned to the first derivative of welfare with respect to time, assuming that welfare can be quantified. We are happy when our welfare increases and sad when it decreases, even from a high level. A billionaire might feel unhappy if he loses 99% of his assets even though he remains rich by the standards of most people. A poor person who suddenly became as rich as that ex-billionaire, now merely a multi-millionaire, would probably feel very happy.
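In symbols (my gloss, not the author’s): if $W(t)$ is welfare at time $t$, happiness tracks the derivative $\frac{dW}{dt}$ rather than $W$ itself. The ruined billionaire has a high $W$ but a large negative $\frac{dW}{dt}$ and is miserable; the suddenly enriched pauper arriving at the same $W$ has a large positive $\frac{dW}{dt}$ and is delighted.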

But producing a simple mental state cannot really be our purpose. Suppose we could arrange to spend our lives in a drug induced stupor; we might be deliriously happy, but would it be good? To be even more absurd, suppose our brains could be taken out and kept in a jar infused with chemicals which induce the happiness reaction. Perhaps eternal happiness could be attained, but something would be wrong.

There are various brands of utilitarianism. Negative utilitarianism seeks only to minimise unhappiness. Mill spoke of higher and lower pleasures. Rule, as opposed to act, utilitarians propose that duty-based rules should be accepted based on what would usually maximise happiness, but what do we do about the exceptions when they don’t?

The ancient Greeks commended the adoption of the “virtues” as part of the good life, but like all the rest had to draw their beliefs from their own moral sentiments.

The Emotivists hold that all moral statements are an expression of the emotion of the speaker. There is some truth in that, although many will disagree that someone saying “Murder is wrong” is speaking with no more authority than someone saying “Murder – yuk”.

David Hume, the well-known philosopher of the Edinburgh Enlightenment, set out a principle now known as Hume’s Law. It is, in essence, that no conclusion about ethics or value can be deduced from premises which are purely statements of fact. On reflection it seems obvious: ethical statements, or even statements of our wishes, are descriptions of how we think the world ought to be or how we would wish it to be. Fact statements describe how the world really is. These are two different things. This is Hume’s Law, closely related to what G. E. Moore later called the naturalistic fallacy. You cannot derive an “ought” from an “is”. As Hume pointed out, reason can only be the servant of the passions; our desires and moral wishes must come from within.

(Of course, the opposite is also true. You cannot derive an “is” from an “ought”. Wishin’ don’t make it so!) But that remains a very popular form of logical error.

This would seem to show that the task is hopeless. Ethics can only come from our inner drives and everyone might harbour a different opinion. Yet we observe that, despite many differences, there is a great deal of common ground in the opinions that most people hold.

Perhaps the thing we need to study is the basis of our ethical instincts: what they are, where they come from and why we hold them. And to study these, we need to study their origin in our evolution as social animals.

Evolution misunderstood
The relationship between evolution and ethics is very often misunderstood. Nature, they say, is “red in tooth and claw” and could not have had anything to do with ethics. But where else could ethics have come from? Certainly not from civilisation; we see that the morality of people in primitive groups is often more altruistic than in the big cities of our modern world.

A typical example appears in The Value of Life, an otherwise excellent book by John Harris, of Manchester University. While considering natural sources of ethics, he states: “Indeed, if one or other of the versions of natural selection describe a natural evolutionary progress for human beings, then again while it may be evolutionarily successful one could hardly describe the survival of the fittest and its corollary, the destruction of the weakest, as a humane (albeit a human) arrangement”. He then sweeps on and gives evolution no further thought, having completely missed the point.

But giving a little more thought to our evolutionary past, it is apparent that an ethical sense is likely to have evolved. If two partners can make a treaty to behave ethically to one another, each will gain more than each will have to forgo. There is a payoff to be had, if this contract can be maintained, and evolution will favour any development which gives a payoff that can be turned into better survival and reproduction.

During a long period of our evolution we grouped together in small bands of hunter gatherers. In these conditions it is easy to visualise that those bands whose members treated each other ethically would survive and reproduce better than those who were always cheating and fighting with each other. There is no question that evolution would favour ethical behaviour, if a means can be found to stabilise it.

African Hunting Dogs

Ethical behaviour is well known in nature. Many animals show it and a good example is the African hunting dog. Some people think that it is an unattractive animal because it lives by hunting in packs in a rather bloodthirsty style. When a wildebeest or other suitable prey is targeted, it seldom escapes. The pack of dogs, working cooperatively, attacks it on all sides tearing at its flesh until it is brought down. It is then eaten, even while still alive. Occasionally they will do this to cattle and this does not endear them to local farmers.

No-one has ever recorded a vegetarian African hunting dog. But within their own group, they are the nicest animals one could wish for. Hunting in a pack requires great cooperation. They have rituals to create the necessary excitement before each hunt and several other means of communication. They have greeting rituals and only a gentle social hierarchy. Aggression towards members of their own species is rare. A female who is caring for the young will receive food regurgitated by those returning from the hunt, and some will babysit for others.

One feature that is necessary for the development of ethics is that individuals can be recognised and the African hunting dog is good at that. They have distinctively patterned coats which may help, although this may not be necessary. Our domestic dogs are very good at recognising individuals, both canine and human.

The essence of cooperation and ethics is that we should resist the obvious temptation to grab a benefit for ourselves, but instead to think of others. This could only evolve if it produced a long-term benefit in terms of a greater reproduction of the underlying genes, but indeed, it clearly does so. A group of people who work ethically with each other will obviously do better at surviving and reproducing than a group which spends its time cheating and fighting.

The Prisoner’s Dilemma
The classic formulation of the cooperation problem, the prisoner’s dilemma, was devised by Merrill Flood and Melvin Dresher in 1950. Recasting it with rewards, rather than the penalties of the original, we can call it a cooperation game. Let us suppose that two players, Joe and Sam, are playing this game. If both of them cooperate, both will receive a reward of 8 points, but, if both defect, they will only get 2 points each. It sounds as though both should cooperate, but there is an advantage in cheating. If one cooperates and the other defects, the cheat will get 10 points and the sucker will get nothing.

So what is the rational thing to do? Put yourself in the position of one of the players. If your opponent is going to cooperate, you will get 8 points by cooperating, but 10 if you defect – so the best policy is to defect. But, if your opponent is going to defect you will get 2 points, if you defect, but nothing if you cooperate. So no matter what your opponent is going to do, the best strategy is to defect. And, of course, you know that he is doing exactly the same calculation, so he is probably going to defect.

It seems that both players are going to finish with two points each when they could have had eight, had they behaved differently. It is easy to see why people feel this simple game encompasses many of the problems of ethics.

ACTION                    PAYOFF
Joe         Sam           Joe       Sam
Cooperate   Cooperate     8 (R)     8 (R)
Cooperate   Defect        0 (S)     10 (T)
Defect      Cooperate     10 (T)    0 (S)
Defect      Defect        2 (P)     2 (P)

The codes represent standard terminology for each action:

R Reward for mutual cooperation
S Sucker's payoff
T Temptation to defect
P Punishment for mutual defection

So it is clear that there is a reward for ethical behaviour, but how can it be achieved? If we only have a single-trial game, it probably can’t, but, if there is to be a repeated series of trials between the same partners it will pay them to reach an agreement to cooperate.

This is, of course, a very simple model of human interaction and it is only partly realistic. There is one amusing paradox that appears. If you imagine playing ten games with a partner, it will be best, in general, for both players to cooperate to ensure that their partner does not defect in the following game. But what happens in game 10? There is no following game, so the best strategy for either player is to defect. Now consider game 9. Knowing that your opponent is going to defect in game 10, there is no point in cooperating in game 9. And then the same reasoning applies to game 8 and so on.

Perhaps the cooperative strategy only works when there is an indefinite number of trials or perhaps when we don’t know when the last trial will be. Perhaps this rather artificial calculation only shows that our morality is fragile. But there is also a degree of realism; people sometimes are more ethical towards those that they expect to spend a lot of time with in the future, rather than someone that they will never see again. Morality is often better in a small village than in a big city.
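A standard textbook calculation (mine, not the author’s) makes both points precise. With a known final game, backward induction forces defection throughout, as described above. Suppose instead that after every game there is a probability $\delta$ of a further game with the same partner, and that a single defection is answered by permanent defection thereafter. With the payoffs of our table, sustained cooperation beats a one-off cheat when

\[ \frac{8}{1-\delta} \;\ge\; 10 + \frac{2\delta}{1-\delta} \quad\Longleftrightarrow\quad 8 \ge 10(1-\delta) + 2\delta \quad\Longleftrightarrow\quad \delta \ge \tfrac14 , \]

so with these numbers cooperation is stable whenever the chance of meeting again exceeds one in four. The villager’s $\delta$ is high; the city stranger’s is close to zero.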

Computer Simulation
I have conducted a little computer experiment to test the idea that animals which can decide to cooperate or defect, and which can remember their past interactions with other individuals, will tend to evolve an ethical behaviour. Of course, this is not a completely new idea and it has been done in various ways by others. Most notable, perhaps, is Axelrod’s tournament, in which individuals equipped with different strategies played against each other. TIT FOR TAT, which simply played whatever move its partner had played at their last encounter, famously emerged as the winner.

In my present exercise, unlike Axelrod, I have tried to avoid giving the individuals any strategy, just to see whether one would emerge from natural selection alone.

I have imagined that there are a number of participants in a tribe arranged, let us suppose, in a large circle and they interact in a cooperation game repeatedly with five of their neighbours on either side. If both partners cooperate, each receives 8 points, but if one cooperates and the other defects, the defector receives 10 points and the cooperator receives nothing. If both defect, each receives 2 points. This is a classic prisoner’s dilemma matrix, identical to the cooperation table we have just described.

I do not require the individuals to have intelligence and, indeed, they are pretty dumb. They do have memories, but these are quite short. All they can do is to remember whether that particular partner cooperated or defected the last time they met. They will then cooperate or defect with a certain probability. Each has two probabilities that they will cooperate: (i) when the partner cooperated at the previous interaction and (ii) when their partner defected at the last interaction. If the partners have not interacted before, the probability at (i) is used. At the start, these values for each individual are set to random numbers in the range 0 to 1.

I have chosen a circle with 18 individuals for no better reason than that it fits nicely on my computer screen. Each person plays against the five neighbours on either side at each round and there are 20 rounds in one series. After these trials in which each individual experiences 400 interactions, the points scored are counted. If anyone could get away with defecting against a sucker every time, the maximum score would be 4000 points, but that is never achieved. If a player succeeds in having a successful cooperation every time the score would be 3200, whereas a situation in which everyone defects every time would give each a score of 800. Of course, being a sucker every time would give zero.

After each of these series of trials, it is time for a new generation. Those with below average scores die off and their places in the circle are taken by “offspring” of the survivors. It is asexual reproduction, so each has only one parent. A child has similar probabilities to the parent, but with small random variations which may be upward or downward. When all the spaces have been filled, a new series of trials can begin. This process is then repeated for 1000 generations and can be extended further by the operator.
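For readers who would like to experiment, here is a minimal re-implementation sketch in Python (the author’s BASIC is in the appendix). The circle of 18, the payoff matrix, the one-move memory, the culling of below-average scorers and the small mutations follow the description above; the variable names, the mutation size of 0.01 and the round-robin choice of parents for the replacements are my own simplifications, not the original code.

import random

N, ROUNDS, SIDES = 18, 20, 5
R, S, T, P = 8, 0, 10, 2                       # payoffs from the table above
PAYOFF = {("C", "C"): (R, R), ("C", "D"): (S, T),
          ("D", "C"): (T, S), ("D", "D"): (P, P)}

def new_agent(p1=None, p2=None):
    # p1: chance of cooperating when there is no data or the partner
    # cooperated last time; p2: chance when the partner defected last time
    return {"p1": random.random() if p1 is None else p1,
            "p2": random.random() if p2 is None else p2,
            "memory": {}, "score": 0}

def choose(agent, partner):
    last = agent["memory"].get(partner, "C")   # a first meeting counts as "C"
    p = agent["p1"] if last == "C" else agent["p2"]
    return "C" if random.random() < p else "D"

def play_series(pop):
    for a in pop:
        a["score"], a["memory"] = 0, {}
    for _ in range(ROUNDS):
        for i in range(N):
            for off in range(1, SIDES + 1):    # five neighbours on each side
                for j in ((i + off) % N, (i - off) % N):
                    mi, mj = choose(pop[i], j), choose(pop[j], i)
                    si, sj = PAYOFF[(mi, mj)]
                    pop[i]["score"] += si; pop[j]["score"] += sj
                    pop[i]["memory"][j] = mj   # remember partner's last move
                    pop[j]["memory"][i] = mi

def mutate(x):                                 # small variation, kept in [0,1]
    return min(1.0, max(0.0, x + random.uniform(-0.01, 0.01)))

def next_generation(pop):
    avg = sum(a["score"] for a in pop) / N
    survivors = [a for a in pop if a["score"] >= avg]  # below-average die off
    children = [new_agent(mutate(survivors[k % len(survivors)]["p1"]),
                          mutate(survivors[k % len(survivors)]["p2"]))
                for k in range(N - len(survivors))]
    return survivors + children

pop = [new_agent() for _ in range(N)]
for gen in range(1001):
    play_series(pop)
    if gen in (0, 2, 4) or gen % 200 == 0:
        print(gen, round(sum(a["p1"] for a in pop) / N, 4),
              round(sum(a["p2"] for a in pop) / N, 4),
              round(sum(a["score"] for a in pop) / N, 1))
    pop = next_generation(pop)

Each individual takes part in 20 games per round (10 that it initiates and 10 initiated by its neighbours), so a series of 20 rounds gives the 400 interactions, and hence the score ceiling of 4000, described in the text.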

So does ethical behaviour emerge? The answer is yes, sometimes. When the probabilities are set at random, mostly it does and sometimes it doesn’t, but usually it goes one way or the other; they do not stay random. Either the population moves to being complete altruists or trust completely breaks down and the later generations defect almost all of the time.

Let us look at some of the results. The figures shown are the averages of the population of 18 after a given number of trials and the fields are:
P1 = probability of cooperating when either no data is available or the particular partner cooperated on the previous round. Most of the time, data will be available, except on a first meeting with a newborn individual.
P2 = probability of cooperating, given that the partner has defected in the last trial.
Score = average of each individual’s total score from the most recent set of 400 interactions.
Coops = the average number of times that an individual experienced a successful cooperation during these 400 interactions.

Trial 1: Probabilities for each individual randomly assigned between 0 and 1
Generation   P1      P2      Score   Coops
0            .4744   .5609
2            .3436   .4728   1785    65
4            .2823   .4029   1648    47
1000         .0126   .0010   802     0
2000         .0006   .0001   800     0

Trial 2: Probabilities for each individual randomly assigned between 0 and 1
Generation   P1      P2      Score   Coops
0            .4660   .3952
2            .4188   .2124   1457    34.34
1000         .9998   .2250   3199    399.67

In the above two trials, with different sets of random starting probabilities, an opposite result has occurred.

In trial 1, the evolutionary environment selected those who cooperated less and, in succeeding generations, the number of cooperations dropped, and with it the score. The final state showed an average score of 800 with each individual having exactly 800, the score obtained when all partners defect all the time, getting two points on each of the 400 encounters.

In trial 2, the opposite has happened. With a different set of random probabilities, a cooperative strategy has evolved and the members of the population will almost always cooperate with someone who has cooperated before. Interestingly, when most of the population cooperates most of the time, it is not so important to punish the occasional non-cooperation, because that could invite retaliation, so P2 does not evolve to zero.

It seems to be a matter of luck whether the little community develops into a moral society or a nest of vipers. (Am I being unfair to vipers?) But in every case it seems that it will be one or the other – the intermediates are not stable. Sometimes it takes much longer than others to reach a steady state as was shown in Trial 3.

Trial 3: Probabilities for each individual randomly assigned between 0 and 1
Generation   P1      P2      Score   Coops
0            .5828   .4734
2            .4413   .3470   1719    57
3            .3422   .3088   1539    41
4            .3027   .3251   1551    42
200          .2839   .1500   1213    14
400          .2875   .0141   856     0.1
600          .3044   .0055   838     0.8
800          .2445   .0013   818     0.2
1000         .2312   .0035   823     0.3
2000         .2634   .0001   820     0.1
3000         .4323   .0000   843     2.4
4000         .9983   .0000   3149    383

To test these ideas further, I have introduced three characters that we can put into the starting generation. These are:
1) The Sucker – he cooperates almost all of the time, no matter what anyone else does.
P1 = 0.95, P2 = 0.95.
2) The Nice Guy – he mostly cooperates with others who cooperated last time, but usually punishes those who didn’t.
P1 = 0.95, P2 = 0.05.
3) The Cheat – he mostly defects, whatever his partner has done in the past.
P1 = 0.05, P2 = 0.05.
By using these characters, we can explore the model more fully.
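In terms of the hypothetical new_agent() helper from the Python sketch above, the starting populations of the trials that follow can be seeded like this (again an illustration, not the author’s code):

# The three characters, with the fixed probabilities listed above
def sucker():   return new_agent(0.95, 0.95)
def nice_guy(): return new_agent(0.95, 0.05)
def cheat():    return new_agent(0.05, 0.05)

pop = [sucker() for _ in range(N)]              # e.g. Trial 4
pop = [nice_guy() if i % 2 == 0 else cheat()    # e.g. Trial 7: alternating
       for i in range(N)]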

Trial 4: A population entirely made up of Suckers
Generation   P1      P2      Score   Coops
0            .95     .95
2            .9492   .9499   3081    361
4            .9463   .9519   3076    360
200          .6731   .8822   2546    155
400          .3904   .7609   2122    120
600          .1795   .4621   1651    54
800          .1000   .2471   1319    18
1000         .0144   .0133   831     0
2000         .0013   .0001   800     0

Here is a population of nice people who are too nice. We might expect a cheat to evolve to take advantage of their naïve generosity. But the model produces an extraordinary result; in much of the sequence P2 becomes greater than P1. How can it be that the players will defect against the people who have cooperated last time and will cooperate with those who defected last time? The highest payoff when playing against a sucker is to defect every time, but because encounters with defectors are rare at the early stages, P1 evolves towards defection more quickly than P2. The decline into total non-cooperation then becomes inevitable.

Trial 5: A population of Nice Guys.
Generation   P1      P2      Score   Coops
0            .95     .05
2            .9502   .0508   2263    156
1000         .9998   .2546   3200    400

This is an unsurprising result. A population of players who mostly do the right thing will evolve into a population who nearly always do. In the equilibrium state, the descendants always cooperate against those who have not defected against them in the previous trial. P2 does not progress to zero because cooperation has become so perfect that it is never invoked, but also a defection could be costly in subsequent trials. The need to punish must be balanced against the avoidance of a dispute over a random error.

Trial 6: A population of Cheats.
Generation   P1      P2      Score   Coops
0            .05     .05
2            .0516   .0496   910     0.8
1000         .0062   .0001   800     0

Another unsurprising result. If you live in a community where people mostly cheat, the only strategy that works is to cheat too. Everybody is worse off, but there is nothing that the individual can do except choose to be a sucker.

So what happens, if we mix the characters? We can try some experiments, beginning with a half-and-half mix.

Trial 7: A population of half Nice Guys and half Cheats alternating
Generation   P1      P2      Score   Coops
0            .5      .05
2            .9491   .0501   2127    126
1000         .9992   .1977   3190    397

The Cheats have been rapidly driven out. Cheats interacting with each other will often get 2 points per encounter, while the Nice Guys will often get 8. On the first encounter with a Nice Guy, a Cheat may get 10, but never again, as the Nice Guys remember what he did. By the second generation, the Cheats have all died leaving no issue and are replaced by the offspring of the Nice Guys, who go on to develop a perfect morality, just as they did in Trial 5.

Trial 8: A population of half Suckers and half Cheats, alternating
Generation   P1      P2      Score   Coops
0            .5      .5      (0.5 is the average of 0.95 and 0.05)
2            .0496   .0509   917     0.4
4            .0505   .0487   909     0.6
1000         .0012   .0002   800     0
When the Suckers and Cheats occupy alternating places in the circle, the Suckers all disappear by the second generation, and the resulting all-Cheat population descends into total non-cooperation as before.

Trial 9: A population of half Nice Guys and half Suckers
Generation   P1      P2      Score   Coops
0            .95     .5
2            .9501   .9499   3077    360
4            .9487   .9492   3084    362
1000         .0067   .0063   815     0
2000         .0016   .0001   800     0
Curiously, the Nice Guys initially score only a little less well than the Suckers, but this is enough to drive them out. This is because they punish the occasional lapses of their fellows, and in consequence expose themselves to a defection at the next encounter with other Nice Guys. Once an all-Sucker population is established, it goes downhill exactly as in Trial 4.

Many other combinations can be tried. 17 Nice Guys and 1 Cheat evolves into a moral society, but 17 Cheats and 1 Nice Guy does not. Interestingly, 17 Nice Guys and 1 Sucker fail to evolve morality. Suckers seem to do more damage to morality than Cheats.

The Simplicity of the Model
This little computer exercise is fun and it certainly demonstrates that conditions exist in which morality can evolve. We have studied the evolution of very simple creatures; all they can do is to recognise partners and to remember what they did on the most recent encounter. Any species equipped with this basic intellectual equipment can, indeed almost certainly will, evolve a degree of morality.

But, of course, we must be aware that the real situation is far more complex than the model we have used. With greater brainpower, it is far more likely that ethics would evolve and this is indeed the case. Let us consider some of these complications.

Our model does not consider kin altruism. Obviously altruism towards one’s own children will increase the transmission of one’s genes to the next generation. It is a matter of common experience that this is the instinct that most people have. The altruism shown to children is rather special, because it has no need for reciprocation. This altruism can also stretch to other relatives such as nephews in proportion to the number of genes that they share.

A curious example occurs with the hymenopteran insects – bees, wasps and their relatives. Because of the unusual way their genes are transmitted, a female worker is more closely related to her sisters than to her own daughters. For this reason it is better for her to care for the offspring of the queen than for her own.
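The arithmetic behind this (standard genetics, added here for clarity) is simple. A hymenopteran female gets half her genes from her diploid mother and half from her haploid father; full sisters share all of their father’s genes and, on average, half of their mother’s. Hence

\[ r_{\text{sister}} = \tfrac12 \cdot 1 + \tfrac12 \cdot \tfrac12 = \tfrac34 \;>\; r_{\text{daughter}} = \tfrac12 , \]

so a worker propagates more copies of her genes by rearing the queen’s daughters, her sisters, than by rearing daughters of her own.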

Reciprocal altruism is a common feature of human interaction between unrelated subjects and that is essentially what the little computer game was modelling. We are completely familiar with it and it governs much of our daily lives. When an unrelated person does us a favour, we feel a duty to reciprocate. When we receive an offence, we feel some duty to reciprocate that too.

But the real world is more complex than the simple two person game described by the cooperation matrix that we have used in the computer model. It is a multi-person game in which many people interact. This gives rise to a third type of altruism: display altruism. Often a person will show altruism to a recipient who has no prospect of returning the favour, but the benefit to the donor is earned in reputation building. We all behave more generously when being watched, as the organisers of charity social events are well aware.

But our morality is always fragile. Just as our ancestors could transmit their genes more effectively by reaping the benefits of cooperation, they could also do very well if they could be successful cheats. They also had to avoid doing very badly, if they became suckers! Cooperation is best, but successful cheating is even better and even the sparse rewards of non-cooperation are better than being taken for a sucker. These strategies, both base and noble are built into the brains of us all.

But we also have strategies to cooperate to ensure that we are not cheated upon. In human society, we have developed many mechanisms to enforce morality: police forces, walls, fences, gates, locks, barbed wire, legal contracts, courts, laws, national constitutions, money, safes, banks, store detectives, social conventions, CCTV cameras, fingerprints – the list is endless. We have an instinct to cooperate with others to create these mechanisms because we are better off if we do so.

We even have an instinct to punish those who have offended us. That is interesting, because punishment has a cost to the punisher and, in a sense, can be considered an altruistic contribution to society. But perhaps revenge has evolved because it advertises to others that people who harm the punisher are unlikely to get away with it.

There is also a difference of morality in different communities. If someone is born into a “bad” environment his experience may be that potential cooperation partners will frequently, in terms of the simple cooperation matrix, defect. This will cause the individual to be more likely to defect in future encounters as the only alternative is to end up as a sucker. The result is that individuals become hardened and the community as a whole becomes worse off. We begin to get a glimpse of why an underclass, once established, can be so hard to correct and can persist for many generations.

Evolution has left us with the strange mix of moral instincts that we can see in ourselves and others. We have a moral sense, but we also will cheat sometimes, if we think we can get away with it. We have genetically hard-wired moral beliefs but we also have moral beliefs that are part of our culture. We have an instinct to build a moral culture, complete with enforcement mechanisms. And we have an instinct mostly to conform to our society’s moral codes. These instincts have all been formed over millions of years because they gave a selective advantage.

Conclusions
Our ethical instinct has been formed over millions of generations to satisfy one criterion alone: to maximise the transmission of its holder’s genes to the next generation. So we can see what a waste of time it is to consider the various ethical policies that have been proposed by philosophers – utilitarianism and the rest.

Many people react in shocked denial to this fact. It just doesn’t seem to be the sort of conclusion they had hoped for. They tend to use the fine logical principle “that would be completely distasteful, therefore it cannot be true”. Although the fault in that principle can be seen in a few moments of reflection, it is very common, and it is an instinct which has itself evolved. It is interesting to speculate how that could have happened, but that is a subject for another time. Those with ambitions to be philosophers must recognise this hopeless logic when it arises in themselves and avoid it.

It seems odd that morality should have its origins in this apparently selfish goal, but it is so. It is the basis of Richard Dawkins’ first book “The Selfish Gene”. The gene is selfish in the sense that natural selection makes it simply do what it takes to survive and replace its rival versions (or alleles). And it is astonishing that altruism between its human carriers serves this underlying selfish purpose of the gene.

So in broad outline at least, we understand the whole system. Why is there a philosophical problem?

But I have already alluded to Hume’s Law. It seems so clearly irrefutable, and yet it implies that we do not understand the whole system. For what we understand here is a statement of fact: a description of the human instincts that would be, and have been, created by natural selection.

What we know about the evolution of our moral instincts is a fact – it is a description of what is and was. Under Hume’s Law, which seems so irrefutable, even facts about our ethical sense cannot dictate what our ethical sense ought to be.

Or should they? Should I take guidance from this and deliberately do what will maximise the transmission of my genes to future generations? How could that work in our modern, overcrowded world?

So what can we do for philosophy? This is the great problem that moral philosophers should be giving their attention to, but are not yet doing. Perhaps we need to give more thought to the nature of ethics.

Appendix: Code of the computer program (BASIC)

'Evolution of Ethics Simulation - ETHICS2.BAS
DIM TICK%(100),P(100,3),PTNR%(100,20),RSLT%(100,20),SCO%(100),PAVG(3)
'XSUM=0:FOR I%=1 TO 18:X=RND(I%):PRINT X:XSUM=XSUM+X:NEXT I%
'PRINT:PRINT XSUM/18:END:'RND tester
'X=RND(1)
NUMB%=18:COUNT%=0:GOTO 200
100 Y$=INKEY$:IF Y$="" THEN 100:'----------------------------------Y$ SUB
X%=ASC(Y$):IF X%>96 AND X%<123 THEN Y$=CHR$(X%-32):'force upper case (reconstructed)
RETURN

'NOTE: the original lines from here to line 900 (population set-up and the
'start of the trial loops) were lost in transcription; the skeleton below is
'reconstructed from the description in the text, not the author's code.
200 FOR I%=1 TO NUMB%:TICK%(I%)=1
FOR P%=1 TO 3:P(I%,P%)=RND(1):NEXT P%:'random starting probabilities
NEXT I%
700 COOPS%=0:FOR I%=1 TO NUMB%:SCO%(I%)=0:NEXT I%:'begin a series of trials
FOR N%=1 TO 20:'20 rounds per series
FOR I%=1 TO NUMB%:FOR J%=1 TO 10:'five neighbours on either side
K%=I%+J%:IF J%>5 THEN K%=I%+5-J%:'J%=6..10 gives offsets -1..-5
IF K%<1 THEN K%=K%+NUMB%
IF K%>NUMB% THEN K%=K%-NUMB%
900 II%=I%:KK%=K%:GOSUB 2000:AI%=AII%
II%=K%:KK%=I%:GOSUB 2000:AK%=AII%
IF AI%=1 AND AK%=1 THEN SI%=8:SK%=8:RSLTI%=2:RSLTK%=2:'both cooperate
IF AI%=2 AND AK%=1 THEN SI%=10:SK%=0:RSLTI%=2:RSLTK%=3:'K ok but I bad
IF AI%=1 AND AK%=2 THEN SI%=0:SK%=10:RSLTI%=3:RSLTK%=2:'I ok but K bad
IF AI%=2 AND AK%=2 THEN SI%=2:SK%=2:RSLTI%=3:RSLTK%=3:'both defect
SCO%(I%)=SCO%(I%)+SI%:SCO%(K%)=SCO%(K%)+SK%
FOR L%=1 TO 20:'each memorise partner's action
IF PTNR%(I%,L%)=K% THEN RSLT%(I%,L%)=RSLTI%:GOTO 920
NEXT L%
FOR L%=1 TO 20
IF PTNR%(I%,L%)=0 THEN PTNR%(I%,L%)=K%:RSLT%(I%,L%)=RSLTI%:GOTO 920
NEXT L%
920 FOR L%=1 TO 20
IF PTNR%(K%,L%)=I% THEN RSLT%(K%,L%)=RSLTK%:GOTO 1000
NEXT L%
FOR L%=1 TO 20
IF PTNR%(K%,L%)=0 THEN PTNR%(K%,L%)=I%:RSLT%(K%,L%)=RSLTK%:GOTO 1000
NEXT L%
1000 NEXT J%:NEXT I%
'PRINT "AFTER TRIAL ";N%:GOSUB 4000
NEXT N%
FOR I%=1 TO NUMB%:TICK%(I%)=TICK%(I%)+1:NEXT I%:GOSUB 3000:GOTO 3010

2000 'sub to find action of II% when II% plays KK% result is AII%
RSLT%=1:FOR L%=1 TO 20
IF PTNR%(II%,L%)=KK% THEN RSLT%=RSLT%(II%,L%)
NEXT L%
X=RND(1):IF P(II%,RSLT%)>X THEN AII%=1 ELSE AII%=2
IF AII%=1 THEN COOPS%=COOPS%+1
RETURN

3000 'Compute and print stats for latest trial series
FOR P%=1 TO 3:PAVG(P%)=0:NEXT P%:SAVG=0
FOR I%=1 TO NUMB%
FOR P%=1 TO 3:PAVG(P%)=PAVG(P%)+P(I%,P%):NEXT P%:SAVG=SAVG+SCO%(I%)
NEXT I%
FOR P%=1 TO 3:PAVG(P%)=PAVG(P%)/NUMB%:NEXT P%:SAVG=SAVG/NUMB%
PRINT COUNT%;
FOR P%=1 TO 3:PRINT USING "#.#### ";PAVG(P%);:NEXT P%:
PRINT USING "#####.### ##########";SAVG;COOPS%
IF COUNT%=1 OR COUNT%=1000 THEN GOSUB 100
'PRINT "AFTER TRIALS":GOSUB 4000
RETURN

3010 'Cull the unfit----------------------------------------------------
PRINT "BEFORE CULL COUNT = ";COUNT%:GOSUB 4000:'diagnostic
FOR I%=1 TO NUMB%:IF SCO%(I%)>=SAVG THEN 3020
TICK%(I%)=0:SCO%(I%)=0:FOR P%=1 TO 3:P(I%,P%)=0:NEXT P%
FOR L%=1 TO 20:PTNR%(I%,L%)=0:RSLT%(I%,L%)=0:NEXT L%
FOR K%=1 TO NUMB%
FOR L%=1 TO 20:IF PTNR%(K%,L%)=I% THEN PTNR%(K%,L%)=0:RSLT%(K%,L%)=0
NEXT L%
NEXT K%
3020 NEXT I%
'PRINT "AFTER CULL":GOSUB 4000:'diagnostic

'Replicate---------------------------------------------------------
3030 FOR I%=1 TO NUMB%:IF TICK%(I%)<1 THEN 3060:'skip empty slots (reconstructed)
FOR K%=1 TO NUMB%:IF TICK%(K%)>0 THEN 3040:'find a free slot for the offspring
TICK%(K%)=1
FOR P%=1 TO 3:X=RND(1)/100:Y=RND(1):IF Y<.5 THEN X=-X:'mutation step (reconstructed)
IF P(I%,P%)<.02 OR P(I%,P%)>.98 THEN X=X/10:'smaller steps near the limits
IF P(I%,P%)<.002 OR P(I%,P%)>.998 THEN X=X/10
IF P(I%,P%)<.0002 OR P(I%,P%)>.9998 THEN X=X/10
P(K%,P%)=P(I%,P%)+X
IF P(K%,P%)<0 THEN P(K%,P%)=0:'clip probability to [0,1] (reconstructed)
IF P(K%,P%)>1 THEN P(K%,P%)=1
NEXT P%:GOTO 3060
3040 NEXT K%:GOTO 3080
3060 NEXT I%
C%=0:FOR J%=1 TO NUMB%:IF TICK%(J%)=0 THEN C%=C%+1
NEXT J%:IF C%>0 THEN 3030
3080 'PRINT "AFTER REPLICATION":GOSUB 4000
COUNT%=COUNT%+1:IF COUNT%<1001 THEN 700:'1000 generations per run (reconstructed)
PRINT "ANOTHER 1000 GENERATIONS (Y/N)";:GOSUB 100:IF Y$<>"Y" THEN END
COUNT%=0:GOTO 700

4000 PRINT " AGE P1 P2 P3 SCORE":'print stats
FOR I%=1 TO NUMB%:PRINT USING "### ";TICK%(I%);
FOR P%=1 TO 3:PRINT USING "#.#### ";P(I%,P%);:NEXT P%
PRINT SCO%(I%)
NEXT I%:PRINT
FOR P%=1 TO 3:PAVG(P%)=0:NEXT P%:SAVG=0
FOR I%=1 TO NUMB%
FOR P%=1 TO 3:PAVG(P%)=PAVG(P%)+P(I%,P%):NEXT P%:SAVG=SAVG+SCO%(I%)
NEXT I%
FOR P%=1 TO 3:PAVG(P%)=PAVG(P%)/NUMB%:NEXT P%:SAVG=SAVG/NUMB%
PRINT " ";
FOR P%=1 TO 3:PRINT USING "#.#### ";PAVG(P%);:NEXT P%:PRINT SAVG;COOPS%
PRINT "PRESS ANY KEY":GOSUB 100
RETURN