(Original post by grumballcake)
But how will you know that it's the most likely course, even if you could define what 'good' is? So you have two levels of pure guesswork with no rational underpinning.
I don't agree at all. For one thing, since we're arguing about utilitarianism, we've already defined what "good" is, though I admit we could formulate it either as happiness or as the preferences of persons.
Working out what is most likely to maximise the greatest good in a given situation is certainly not "two levels of pure guesswork." We have a vast array of information available to us about the world in which we live, with which we can anticipate consequences: the effects that any given action is likely to cause. Of course, not knowing the nature of all things within the cosmos, we can't predict perfectly, but demonstrably this does not reduce choosing actions to pure guesswork. All of our choices of action throughout our lives are based on a tacit judgement of probable outcomes. The very act of typing this post rests on the judgement that I will probably type up some words, click on 'post', and thus have the words posted, rather than click on 'post' and have the computer spontaneously explode. Such a judgement of probability is entirely necessary, entirely possible, entirely commonplace and not 'pure guesswork'. If judging which actions will bring good outcomes were 'pure guesswork', there would be no reason why everyone should not simply lie motionless on the floor indefinitely, as there would be no grounds for guessing that this would fail to bring about the most preferable outcome.
I don't understand at all what you mean by saying that judging the consequences of actions has "no rational underpinning." This process, which is implicit in all action all the time, seems obviously straightforward.
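To put the point more concretely: the tacit judgement I'm describing is just informal expected-value reasoning. A minimal sketch in Python, with probabilities and utilities invented purely for illustration:

```python
# A minimal sketch of the tacit probability judgement behind everyday action.
# The probabilities and utilities here are invented purely for illustration.

def expected_utility(outcomes):
    """Sum each outcome's utility weighted by its estimated probability."""
    return sum(p * u for p, u in outcomes)

# Action: type this post and click 'post'.
post = expected_utility([
    (0.999, 10),   # the words get posted, as experience suggests they will
    (0.001, -50),  # the computer spontaneously explodes instead
])

# Action: lie motionless on the floor indefinitely.
lie_still = expected_utility([
    (1.0, -5),     # nothing preferable is brought about
])

# Imperfect knowledge notwithstanding, this is not 'pure guesswork':
assert post > lie_still   # 9.94 > -5.0
```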
(Original post by grumballcake)
You don't seem to understand the difficulties. Are you arguing that people's preferences should override the utility of the others? You seem to be setting up an imaginary calculus which will allow you to juggle these cases, but I can show that such a calculus will necessarily be internally incoherent.
What difficulties? You haven't specified any. All you've done is imagine an imaginary calculus and announce that it will be incoherent. In any case, I don't understand what you mean by asking "are you arguing that people's preferences should override the utility of the others?" I don't even understand what you take me to have meant.
(Original post by grumballcake)
we've modelled that if 1/3 give up one year of productive life, then 2/3 will gain 1 year of productive life. Even if we reduce the gain to 0.51 years of productive life, the equation still favours the genocide. If you don't like those figures, we can always manipulate the population size to get the correct expected values. The only way you can avoid this is to give the life of the victim a value disproportionate to the value of the one who benefits, but why should that be? If we swapped their roles, why wouldn't it swing the opposite way?
That isn't the case that we've modelled. Rather, you modelled:
If by the painless genocide of 1 million people, you could improve the lives of 2 million others, would the genocide be ethically justified?
I responded by noting that it wouldn't, because killing 1/3 of the population, painlessly or not, would be a considerable harm. The 1 million people could live their lives without being massacred, which they clearly prefer, or else they would massacre themselves. Alternatively they could die and "improve the lives" of the 2 million. The massacre would only maximise good if the losses of the 1/3 of the population were outweighed by the gains of the 2 million, which is incredibly unlikely to be the case, given that the preference of the 1/3 to continue living is doubtless greater than the preference of the 2/3 to "improve their lives" by acquiring extra space, some more holiday homes, or whatever benefit they derive from slaughtering 1/3 of the populace.
As I noted in my earlier reply, the outcome might be different were the situation modelled differently: for example, were the scenario set in a life raft or hot air balloon, such that the death of the smaller number would directly prevent the death of the larger.
I don't see what your new formulation is about. If it is supposed to be related to the model we've been discussing, then it is simply misleading, because it doesn't accurately describe the situation or the choices at all. Even if we do relocate this scenario to a life raft, such that depriving someone of x years of life directly benefits another with x years of life, the scenario is still not simply a case of "life-transfer," whereby it makes no real difference who lives and who dies. To reach such an inaccurate conclusion you have to ignore the fact that a "genocide" is occurring; otherwise you would reach the obviously correct conclusion that killing 1/3 of the population doesn't maximise the greatest good at all.
(Original post by grumballcake)
The only way you can avoid this is to give the life of the victim a value disproportionate to the value of the one who benefits, but why should that be?
It isn't disproportionate if you consider the scenario accurately. As your own quote highlights, the comparison is between, on the one hand, the loss of the life of the victim and, on the other, the "value of the one who benefits."
The scenario posits a choice: either 1 million deaths and an increased benefit for 2 million, or no deaths for the 1 million and no increased benefit for the 2 million. Thus the question is simply whether the million deaths are outweighed by the benefit derived from them. The point is that, except in a very unusual situation, the benefit for the 2 million persons will not outweigh the loss of 1 million lives. If the situation were symmetrical, such that the choice was between 1 million losing their lives and 2 million doing so, then ceteris paribus the 1 million deaths would be preferable. It seems radically unlikely, however, that the benefit to the 2 million persons in the situation we've specified will actually be so astounding that it is better that a person die than that two persons be deprived of this improved lifestyle.
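To make the weighing explicit, here is a rough sketch. The preference weights are purely illustrative assumptions on my part, not measured quantities; the point is only that the victims' preference to go on living dwarfs the beneficiaries' preference for an improved lifestyle:

```python
# A rough sketch of the weighing in the genocide scenario.
# All weights are illustrative assumptions, not measured quantities.

VICTIMS = 1_000_000
BENEFICIARIES = 2_000_000

# Assumed preference weights (arbitrary units):
LOSS_OF_LIFE = 100    # a victim's preference to go on living
LIFESTYLE_GAIN = 1    # a beneficiary's preference for extra space, holiday homes, etc.

harm = VICTIMS * LOSS_OF_LIFE             # 100,000,000
benefit = BENEFICIARIES * LIFESTYLE_GAIN  #   2,000,000

# Except in a very unusual situation, the massacre does not maximise good:
assert benefit < harm

# The conclusion flips only if one person's lifestyle gain is assumed to be
# worth more than half of another person's entire remaining life:
#   benefit > harm  requires  LIFESTYLE_GAIN > LOSS_OF_LIFE / 2
```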
(Original post by grumballcake)
It's one reason why utilitarianism is so deeply flawed from an ethical point of view. The calculus will always allow atrocities.
The whole point is of course that utilitarianism will maximise the greatest good. Thus the only way to suggest a flaw in utilitarianism is to suggest that the greatest good is, or could require, an atrocity, in which instance the notion of 'atrocity' is deprived of its force. Alternatively one can incorrectly calculate the greatest good and thus claim that 'the greatest good [calculated incorrectly] isn't the greatest good', which is what you've been doing by trying to suggest that committing genocide would be the best way to make everyone happy.