Is a Utilitarian Society a Happy One?
Is Rule Utilitarianism Distinct from Act Utilitarianism?
The moral theory of utilitarianism is founded upon the so-called ‘Greatest Happiness Principle’ (GHP), whereby the morally right action in any circumstance is that which, when evaluated against all possible alternative actions, leads to the ‘greatest happiness for the greatest number’. It is in the practical evaluation of actions against this criterion that two alternative interpretations of utilitarianism arise: act utilitarianism (AU) and rule utilitarianism (RU). In this essay we shall examine whether these alternative viewpoints are in fact practically distinct – i.e. whether AU and RU can conceivably recommend different actions in identical circumstances, or whether RU must inevitably reduce to AU in real-world application.
We shall argue that, when a single moral decision is considered in isolation, RU is extensionally equivalent to AU: correctly applied, the two generate the same recommendations. We will show how RU addresses some, but not all, of the problems inherent in AU. Like all moral theories, RU is not without its difficulties, and it is always possible to devise pathological cases that lead to moral absurdities. Drawing on some recent work in game theory, we shall show how reliable moral rules of the kind required by RU can arise spontaneously in a society motivated primarily by self-interest.
Firstly, what is meant by utilitarianism, and how exactly do AU and RU differ in their approach? Simply put, utilitarianism states that an action is right to the degree that it promotes the greatest good for the greatest number. Utilitarianism is thus an attempt to define a quantitative theory of morality, expressed as the maximization of some measurable utility. In its early form, as promoted by Jeremy Bentham (1748–1832), this utility was defined purely in terms of pleasure and the avoidance of pain. For Bentham, pleasure was not only desired but desirable in itself, the worthy end to which all human endeavour should be directed. But this somewhat bleak hedonistic viewpoint does not provide the sought-for moral justification for an action. After all, Bentham’s utilitarianism could just as well be applied to the actions of pigs competing for food and shelter in the farmyard. Any true theory must surely recognize that only rational animals can be moral agents – humans can foresee the consequences of their actions, weigh up the alternatives and act accordingly. As moral agents, humans can not only conceive of a future in which they will experience the consequences of their actions but also empathise with those affected by them.
In Utilitarianism [1], John Stuart Mill (1806–1873) attempted to elevate Bentham’s theory above the level of the farmyard by elaborating on what is meant by ‘happiness’. On the Benthamite view, all measures are strictly quantitative – there being no difference in kind between mere sensual gratification of the body and higher pleasures such as intellectual or artistic achievement or appreciation. Mill, however, distinguished not just quantitatively but also qualitatively between various forms of pleasure – not all pleasures being equally worthy. To Mill, the goal of utilitarianism is a “well-being” which must include the pleasures of the mind and soul as well as those of the body. Particularly valued is the fulfillment of potential – the satisfaction of achieving some intellectual or artistic goal.
Given some agreed concept of ‘greatest happiness’ (and the exact definition is not crucial to the following discussion), how should the prospective utilitarian decide on the right action? A practitioner of act utilitarianism (AU) will evaluate a possible action according to the favorable or unfavorable consequences that will result from that particular action, and not be concerned with the general case involving actions of a similar type. AU simply involves the direct evaluation of the utility inherent in each action. For the act utilitarian, morality is decided on the fly, on a case-by-case basis, and there cannot be any absolute or general action-guiding rules.
At first sight, AU would appear to offer a simple deterministic method for ‘doing the right thing’. But for AU to be a true moral theory, there must exist a convenient and practical decision procedure able to select the utility-maximising winner from the (potentially infinite) set of candidate actions. Here we immediately encounter a number of practical difficulties in applying utilitarianism to a complex society of free-thinking individuals.
1. The Scope Problem: exactly whose happiness should the utilitarian be concerned with? Utilitarian theory holds that each person’s happiness counts for one in the total score. But surely my own happiness and that of my family must be weighted more heavily in my personal cost/benefit analysis? What reason can the utilitarian have to put someone else’s happiness on their agenda?
2. The Measurement Problem: how can happiness or pleasure be meaningfully quantified as part of a utilitarian cost/benefit analysis? Any action will originate a causal chain of events which propagates far into the future. How is it possible for all significant consequences at any future time to be both foreseen and quantified with regard to their effect on general utility? Bentham appealed to some poorly defined ‘felicific calculus’ – but this is either illusory or, even if possible in principle, would prove impossibly complex and time-consuming in any practical situation. However, this is not a devastating objection to AU, since surely all that is required is the relative ranking of candidate actions rather than their absolute numerical evaluation against some felicific scale.
3. The Computational Problem: any evaluation of the potential consequences of an action would be not only impractically complex but, in principle, infinite in duration. Even in a Newtonian universe, the computation of all future events resulting from a single cause rapidly leads to a combinatorial explosion. In actuality, we live in a universe where absolute accuracy and knowledge are precluded by the strictures of quantum theory.
4. The Corruption Problem: since the greatest good for the greatest number is described in aggregate terms, that good may be achieved in ways that are harmful to some, so long as the harm is outweighed by a greater good. This is clearly an open vehicle for justifying all sorts of abhorrent activities (such as slavery, racial subjugation or the disenfranchisement of women). If the act utilitarian is driven solely by the end goal, then all forms of corrupt and evil acts may be justified along the road to maximal utility. Similarly, AU would potentially lead to the breakdown of society, since it positively promotes the wholesale flouting of normative rules which promote trust and social cohesion (such as ‘keep your promises’) whenever opportunistic circumstances arise. For example, even if lying is judged wrong in 99% of cases, it might be permissible, and indeed moral, to lie if doing so saves one’s child from certain death.
5. The Liberty Problem: AU would appear to allow individual rights to be violated for the sake of the greatest good. For example, the murder of an innocent person would seem to be condoned if it served the greater number.
Given these serious objections to AU, is there a better moral theory that preserves the end goal of utilitarianism? Rather than evaluating the consequences of individual acts, rule utilitarianism (RU) is concerned with following moral rules that reason about classes of similar acts. RU involves selecting the rule which, if followed, will have the happiest consequences. RU is thus a two-step process: (1) select the set of appropriate moral rules; (2) select the action recommended by the rule that produces the best outcome.
For RU to work, each rule of moral action is such that society as a whole will benefit more by its members universally adopting that rule than not. Take the rule ‘do not lie’, for example. While at any particular time it might be better for an individual to lie, society as a whole would immediately suffer if the majority began to tell untruths. Within a strict RU society, lying is always an immoral action – regardless of the individual consequences.
How does RU stand up against the difficulties of AU as listed above?
1. The Scope Problem: One can easily conceive of viable RU rules in which preferential treatment of oneself and one’s close kin is advocated whilst still taking into account the welfare of society as a whole. Surely a more stable, contented and productive society will result where individuals devote most of their energy towards the happiness of their immediate circle rather than towards strangers encountered at random in the street?
2. The Measurement and Computational Problems: RU immediately removes the computational burden inherent in AU because the moral agent need only decide between a relatively small number of self-evident rules. The rules represent the accumulated moral wisdom of society, inasmuch as they aggregate the individual moral decisions taken by earlier generations. In effect, the RU rules provide the ‘hard-coded’ results of innumerable AU cost/benefit calculations in a handy, easy-to-use package.
3. The Corruption and Liberty Problems: At first sight, RU seems to offer a defence against the perversion of utilitarianism by obviously evil acts which nevertheless increase the general happiness – we can formulate our moral rules to preclude classes of action that harm the individual for the sake of the greater good. But RU rules could also easily be drafted to allow, for example, slavery, if such a rule were found to be beneficial to society as a whole. It is intuitively wrong that such a rule counts as moral – and yet it is exactly in accord with the moral principles of RU. RU therefore offers no real protection against the opportunistic manipulation of rules to encompass actions that seem intuitively immoral.
Clearly then, RU offers some improvement over AU but remains subject to corruption, condoning acts which intuitively seem either immoral or simply perverse. This problem has been used to justify the claim (discussed by R. B. Brandt [3]) that RU cannot practically exist as distinct from AU: even if claiming to practice RU, a moral agent will always act in a way indistinguishable from another agent applying AU in the same situation. Let us examine the argument.
Suppose we are devout rule utilitarians and encounter a moral dilemma where our RU rule set indicates the right action is A, whereas it is obvious that, in this particular case, a different action B has far happier consequences for all concerned. What do we do?
If we adhere strictly to our rule then surely we are merely blind followers of rules and not utilitarians, since we are violating the sacred utility principle itself. Any utilitarian theory which denies the maximization of utility cannot be a true utilitarian theory.
Perhaps, therefore, we take the pragmatic approach and break the rule – choosing action B over A. But now we have reverted to pure AU behaviour, where the immediate utility is all that matters and our RU rules are clearly irrelevant.
Lastly, we could modify our rule to accommodate the current situation: always do A except in situation S, where it is better to do B. But here we have generated another rule which is subject to exactly the same problem in a different circumstance. We are forced to entertain a potentially infinite number of elaborations to our rule to account for each practical circumstance. Clearly, this is logically equivalent to AU, since adherence to a set of unbreakable rules has disappeared and each case is decided individually against the principle of utility.
It follows then that RU and AU have the same extension over the space of possible actions and are thus indistinguishable for practical purposes – i.e. RU must reduce to AU in the real world. RU is clearly prey to the same issues that bedevil any attempt to codify complex behaviour in the real world as a set of rules. Early attempts to produce ‘artificial intelligence’ in computers used rule-based systems to codify human problem-solving expertise. Such systems were, however, quickly overwhelmed by the mass of relevant factors which must be taken into account when dealing with real-world problems. The so-called frame problem prevents any such system of rules from producing acceptable behaviour in anything other than simplistic ‘blocks world’ simulations inside a computer.
But is RU always reducible to AU? It is relatively easy to create artificial situations where AU and RU would select different actions. Consider the following situation, where a group of people in a closed room each has a button. If everyone resists pressing their button for thirty minutes then everyone gets £1000. However, if any one person presses their button then they get £100 and everyone else gets nothing. What is the right thing to do in this situation? From an AU viewpoint, clearly I should press my button as soon as possible because, in this way, the overall outcome is that I get £100 and no-one else really suffers. If I do not press my button then surely someone else will; the overall utility is the same, but my personal utility is much reduced. Even though by pressing my button I do not reach the theoretical maximum utility (£1000), I must take into account the likely actions (and hence the consequences for me) of my fellow competitors. From an RU viewpoint, however, I should clearly follow the general rule that, if obeyed by everyone, results in maximal utility. Thus I should not press my button if I am an RU moralist (and believe that everyone else is also a rule utilitarian). So, as shown in a similar example by Allan Gibbard [2], in a single isolated moral decision RU and AU can recommend different actions.
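To make the act-utilitarian calculation explicit, the following minimal sketch (in Python) compares the expected payoff of waiting against pressing. The payoffs (£1000, £100, £0) come from the example above; the probability that someone else presses first is a free parameter introduced purely for illustration and is not part of the original example.

```python
# A minimal sketch of the button game described above. The payoffs come from
# the text; the probability that at least one other player presses within the
# thirty minutes is an illustrative assumption.

def expected_payoff_wait(p_other_presses: float) -> float:
    """Expected payoff if I wait: £1000 only if nobody else presses."""
    return (1.0 - p_other_presses) * 1000.0

def expected_payoff_press() -> float:
    """Payoff if I press first: a guaranteed £100 (ignoring races)."""
    return 100.0

if __name__ == "__main__":
    for p in (0.0, 0.5, 0.9, 0.95):
        wait, press = expected_payoff_wait(p), expected_payoff_press()
        better = "wait" if wait > press else "press"
        print(f"P(someone else presses)={p:.2f}: wait={wait:7.1f}, press={press:6.1f} -> {better}")
```

On these figures, waiting is only the better bet if the chance of anyone else pressing is below 90%. The rule utilitarian who trusts a room full of fellow rule utilitarians effectively sets that chance to zero, which is exactly where the two theories part company.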
But how does this change if we consider the same individuals making repeated moral decisions which affect the same group of people – just as happens in a real human society such as a village? If RU always reduces to AU in practical application, why is it that we appear to have a largely viable and widely accepted set of moral rules which purport to guide our actions within society? How have these rules arisen, and what part do they play in our moral behaviour?
Consider a society containing the moral rule ‘do not tell lies’. Suppose our society is populated initially entirely with pious RU fanatics who resolutely obey all moral rules – everyone always tells the truth. Such a society is vulnerable to invasion by liars – either from another society with different RU rules, or spontaneously from within via some mutation in the moral fibre of a rogue individual. In a society of inveterate truth tellers (TTs), the single liar is at a huge advantage in maximizing his own happiness. Liars will prosper at the expense of TTs and, human nature being what it is, some of the TTs will begin to emulate the more successful strategy of their mendacious colleagues. Now we have a society where the proportion of liars grows until everyone is persistently economical with the truth – a situation already realized within our own political institutions. But clearly now there can be no incremental benefit in lying – everyone is at the same game. In such a corrupt society, an individual who reliably tells the truth is likely to be more successful – people will trust them with their business and affairs and return to them in preference to their cheating competitors. Now our model society is ripe for invasion by increasing numbers of insufferably smug TTs. Clearly we have the potential for society to oscillate wildly between all-liars and all-TTs. In reality, a stable equilibrium would be reached where the relative advantages and disadvantages of liars and TTs defecting to the other side are in balance. At such an equilibrium, any slight increase in the proportion of liars would be quickly compensated for by an increase in the number of TTs – and vice versa.
In our real society we have the situation where most people tell the truth most of the time – but not all of the time. The balance point is reached when, averaged over the population, people lie only as much as gives them the best overall outcome given that everyone else is pursuing the same strategy. Thus, to succeed, each individual must take into account the actions of others also trying to maximize their own personal utility. In short, I can benefit by lying some of the time – but if I become known as a liar then I will be ostracized and my happiness decreased. It follows that the pursuit of self-interest and personal utility does not result in a wholly selfish and dastardly society, for such a society would overall be less successful than one where people mostly tell the truth (and similarly conform to other moral rules). In evolutionary theory, this situation is known as an evolutionarily stable strategy (John Maynard Smith [4]): no alternative strategy, if adopted by the whole of society, can produce a more successful outcome. In economics, the Nash equilibrium is a closely related concept.
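The equilibrium story can be made concrete with a toy replicator dynamic. In the sketch below the payoff functions are invented for illustration (they appear in none of the sources cited): lying pays well while liars are rare and poorly once they are common, and the population drifts until the two strategies do equally well – on these particular numbers, when roughly a third of the population lies.

```python
# An illustrative replicator-dynamics sketch of the liars vs truth-tellers
# story above. The payoff functions are assumptions made up for this example.

def payoff_liar(x: float) -> float:
    """Payoff to a liar when a fraction x of the population lies."""
    return 3.0 - 4.0 * x          # lucrative when rare, punished when common

def payoff_truth_teller(x: float) -> float:
    """Payoff to a truth-teller in the same population."""
    return 2.0 - 1.0 * x          # steadier, eroded slightly by widespread lying

def step(x: float, rate: float = 0.05) -> float:
    """Shift the liar fraction towards whichever strategy is doing better."""
    gap = payoff_liar(x) - payoff_truth_teller(x)
    return min(1.0, max(0.0, x + rate * gap))

if __name__ == "__main__":
    x = 0.01                       # start with almost everyone telling the truth
    for _ in range(200):
        x = step(x)
    print(f"liar fraction settles near {x:.2f}")  # where the two payoffs are equal
```

The precise resting point depends entirely on the assumed payoffs; the qualitative claim is only that a mixed population, rather than a pure one, is the stable outcome.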
A famous example from game theory also illustrates the point nicely. In the well-known prisoner’s dilemma, two prisoners, held in isolation from each other, must each choose either to give evidence against the other (defect) or to remain silent (co-operate). If both remain silent, each receives a short sentence; if one informs while the other stays silent, the informer goes free while his partner receives a long sentence; if both inform, both receive a moderate sentence. Whatever the other prisoner does, the individually best strategy is to inform on your fellow prisoner – even though mutual silence would produce a better outcome for both. The dilemma arises because, although both parties would benefit by co-operating and remaining silent, each side is individually better off by adopting the selfish attitude and defecting. In a single one-off situation, then, selfish behaviour (i.e. act utilitarianism) delivers the best guarantee of a happy outcome. However, when the same two prisoners act out the dilemma repeatedly, a strategy in which the parties co-operate will produce the overall best result. In one successful strategy – called ‘Tit for Tat’ – the parties co-operate until one side defects. This defection is punished by the other side also defecting until the first partner returns to co-operating, whereupon the second partner does likewise.
But even Tit for Tat is not the best strategy for building the most successful society – i.e. the one with the highest aggregate happiness or utility over time. If one side defects by mistake, the two parties become locked into an unproductive round of retaliatory defections with neither side winning. A better strategy is one which takes a more trusting view and allows for a certain amount of cheating – but not too much. Playing Tit for Two Tats, we allow the other side to defect once without punishment; two defections in a row, however, and we also defect. To avoid this naïve strategy being exploited by the other side, we should not be magnanimous on every occasion – forgiving roughly a third of defections generally produces the best results in computer simulations (Axelrod [5]).
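A small simulation makes the comparison concrete. The sketch below plays an iterated prisoner’s dilemma between Always Defect, Tit for Tat and Tit for Two Tats, with a small chance of an accidental defection; the payoff values and the 1% noise level are illustrative assumptions rather than figures taken from Axelrod’s tournaments.

```python
# A rough sketch of the iterated prisoner's dilemma strategies discussed above.
# Payoffs: 3 each for mutual co-operation, 1 each for mutual defection,
# 5/0 for unilateral defection (assumed, conventional values).
import random

COOPERATE, DEFECT = "C", "D"
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(my_history, their_history):
    return DEFECT

def tit_for_tat(my_history, their_history):
    return their_history[-1] if their_history else COOPERATE

def tit_for_two_tats(my_history, their_history):
    # Defect only after two consecutive defections by the other side.
    if len(their_history) >= 2 and their_history[-1] == their_history[-2] == DEFECT:
        return DEFECT
    return COOPERATE

def play(strategy_a, strategy_b, rounds=200, noise=0.01):
    """Play an iterated game; 'noise' is the chance a move flips by mistake."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        if random.random() < noise:
            move_a = DEFECT if move_a == COOPERATE else COOPERATE
        if random.random() < noise:
            move_b = DEFECT if move_b == COOPERATE else COOPERATE
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    random.seed(1)
    strategies = [("AllD", always_defect), ("TitForTat", tit_for_tat),
                  ("TitForTwoTats", tit_for_two_tats)]
    for name_a, a in strategies:
        for name_b, b in strategies:
            sa, sb = play(a, b)
            print(f"{name_a:>14} vs {name_b:<14}: {sa:4d} / {sb:4d}")
```

Run repeatedly, the more forgiving pairings tend to recover from accidental defections and so score closer to the mutual co-operation maximum, which is the intuition behind Tit for Two Tats and, more generally, behind measured forgiveness.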
Whilst there is much more that could be said about the spontaneous emergence of co-operative and seemingly altruistic behaviour in society, the general result seems clear: short-term self-interest as proposed by AU is not the most successful strategy in the longer term. Once we take into account ‘the shadow of the future’ (Axelrod), ‘nice’ strategies have the edge over ‘nasty’ ones.
In summary, then, we must conclude that from the viewpoint of the individual, RU does indeed reduce to AU and will advocate the same outcome whenever the only criterion is the immediate utility at that point in time. However, co-operation, reciprocity and altruism will tend to arise spontaneously over time simply because, in a society where we meet the same individuals frequently, it pays, in general, to be nice rather than nasty. Here then is the paradox of moral behaviour in society: by trying to behave in a purely self-interested AU fashion, individuals give rise to a society that appears to be following a set of RU-style moral rules that have the good of the community as a whole as their end. Given that moral rules are an emergent property of any rational society, we must conclude that the practice of AU by selfish individuals leads inevitably to the practice of RU by society as a whole.
References
1. John Stuart Mill, Utilitarianism, Everyman Library, 1996.
2. A. Gibbard, Is Rule Utilitarianism Merely an Illusory Alternative?, Australasian Journal of Philosophy, 1968.
3. R. B. Brandt, Toward a Credible Form of Utilitarianism, in Morality and the Language of Conduct.
4. J. Maynard Smith and G. R. Price, The Logic of Animal Conflict, Nature 246:15–18, 1973.
5. Robert Axelrod, The Evolution of Co-operation, Basic Books, New York, 1984.