
    Who decides that the greater good is in fact "greater"?

    What happened to individualism and the protection from the "tyranny of the majority"?
    Since we're discussing ethics and not politics, the question is one of what actually is right. Utilitarianism is in any case clearly the most universally and rationally acceptable of ethical formulations, as it affords each person's interests equal consideration.

    Further, utilitarianism takes account of "who decides" and the potential for the "tyranny of the majority": it is entirely plausible that in many situations the utilitarian 'solution' would be democratic government, since this may maximise the greater good.

    As with the previous 'genocide' scenario Grumballcake suggested, fears of a 'tyranny of the majority' are born only of a misconception of utilitarianism.

    Individualism is quite frankly nonsense: in this context it either doesn't conflict with utilitarianism, or else it has to be defined specifically as an 'individual good that makes things worse for a greater number of individuals to a greater extent'.

    (Original post by TCovenant)
    Since we're discussing ethics and not politics, the question is one of what actually is right. Utilitarianism is in any case clearly the most universally and rationally acceptable of ethical formulations, as it affords each person's interests equal consideration.
    Not really. Since utilitarianism relies on the good of the greater number, it cannot be removed from some kind of authority; whether that be government or otherwise is irrelevant. What is "right" is that the individual himself decides what is good for himself (as long as his decision does not harm others - Mill), not that a group of people, government or otherwise, decides for him under the pretext of the good of the greatest number.

    Further, utilitarianism takes account of "who decides" and the potential for the "tyranny of the majority": it is entirely plausible that in many situations the utilitarian 'solution' would be democratic government, since this may maximise the greater good.

    As with the previous 'genocide' scenario Grumballcake suggested, fears of a 'tyranny of the majority' are born only of a misconception of utilitarianism.
    Again, fears of the tyranny of the majority are present as a consequence of democracy, not in spite of it. Hence there need to be limitations on the power of government, and a strengthening of the sovereignty of the individual, so that tyranny of the majority is not the result.

    Individualism is quite frankly nonsense: in this context it either doesn't conflict with utilitarianism, or else it has to be defined specifically as an 'individual good that makes things worse for a greater number of individuals to a greater extent'.
    Utilitarianism in that sense is nonsense, for it compromises the individuality and uniqueness of each and every one of us by making the arrogant claim that whatever is good for the greatest number translates to the good of everyone, when such is quite clearly not the case. It may be in the good of the greater number that black people be repatriated to their country of origin, but it is definitely not in the interests of those people. Under utilitarian theory, they would be sent back because it was the good of the greatest number. That, sir, is nonsense.

    “Since utilitarianism relies on the good of the greater number, it cannot be removed from some kind of authority; whether that be government or otherwise is irrelevant.”
    That is completely false, for the same reasons previously noted: utilitarianism is a theory of ethics, namely of what actually is right. There is therefore absolutely no need for an authority.

    “What is "right" is that the individual himself decides what is good for himself (as long as his decision does not harm others - Mill), not that a group of people, government or otherwise, decides for him under the pretext of the good of the greatest number.”
    “Right” thus defined is of course a different matter, again. Regardless, as Mill notes, the individual defines their personal good by virtue of their preferences. The greatest good, therefore, is undeniably the greatest amount of good thus defined. This has nothing whatsoever to do with the question of “a group of people deciding the greatest good for him.” The utilitarian solution would undeniably judge whether this hypothetical group were good or not by whether they actually served to maximise the greatest good.
    “Again, fears of the tyranny of the majority are present as a consequence of democracy, not in spite of it. Hence there need to be limitations on the power of government, and a strengthening of the sovereignty of the individual, so that tyranny of the majority is not the result.”
    If any given social rule were actually to maximise the greatest good, then it would be for the greatest good. Hence citing the “tyranny of the majority” is no point whatsoever; as I stated previously, none of these 'objections' constitute any sort of conflict unless they actually touch upon the question of the greatest ethical good, which they don't at all.

    “Utilitarianism in that sense is nonsense, for it compromises the individuality and uniqueness of each and every one of us by making the arrogant claim that whatever is good for the greatest number translates to the good of everyone, when such is quite clearly not the case.”
    That's obviously an incoherent claim. Utilitarianism starts and ends with the acknowledgement of the good of individuals: citing “individuality” and “uniqueness” as important is only a valid criticism if it can be demonstrated that these two things are more important than the net preferences of all persons. Thus the only way in which you could conceivably criticise utilitarianism would be to demonstrate a good that outweighs the good of the individuals you began with - a demonstrably nonsensical assertion.
    “It may be in the good of the greater number that black people be repatriated to their country of origin, but it is definitely not in the interests of those people. Under utilitarian theory, they would be sent back because it was the good of the greatest number. That, sir, is nonsense.”
    Yes, it is nonsense. Admittedly, if any situation is actually for the greatest good then it is for the greatest good. But creating some hypothetical scenario wherein it is said that the greatest good is served by some outlandish act which clearly would not maximise the greatest good is clearly an invalid manner of arguing against utilitarianism. The fact is that in no realistic situation would genocide, mass deportation or gang-rape maximise the greatest good, and as such in any realistic ethical situation utilitarianism correctly demonstrates why they are bad ideas. If one posits a parallel existence wherein everything is upside down, then quite clearly utilitarianism provides an upside-down answer, which is precisely what happens in this “send the blacks home, then everyone will be happy” scenario.

    Such a result could only be accepted if one accepts that a racist actually suffers such anguish from the presence of a black person that their suffering far outweighs the harm caused by stripping a minority group of their dignity and property and sending them to a foreign land.

    (Original post by TCovenant)
    We have a vast array of information available to us about the world in which we live,
    You're confusing data with information. We can't even agree whether reducing CO2 emissions will achieve anything. Some of the world's foremost experts disagree as to whether global warming exists. And that's just in the physical sciences. Economists propose mutually exclusive models of how countries should be run. Theologians postulate completely different models of God (including no God at all). In the midst of all this, you're suggesting that we do have a complete handle on ethics, which is one of the slipperiest subjects around?
    not knowing the nature of all things within the cosmos we can’t predict perfectly,
    We can't even predict where all the snooker balls will end up after the break. That's just 15 reds, 7 colours and a cue ball on a flat, constrained surface where every ball's starting position and mass is known. But you reckon we can accurately estimate the behaviour of 6 billion people scattered around the globe? Have you any understanding of chaos theory?
    All of our choices of action throughout our lives are based on a tacit judgement of probable outcomes.
    I agree, but I disagree that predicting a small number of limited probabilities somehow can be extrapolated into the bigger picture, just by multiplication of the smaller decisions.
    If judgement of which actions will bring good outcomes were ‘pure guesswork’ there would be no reason why every-one should simply not lie motionless on the floor indefinitely, as there would be no reason to guess that this action will not bring about the most preferable outcome.
    Indeed so. For some historical people, such an action would have been preferable to their actual behaviour.
    I don’t understand at all what you mean by saying that judging consequences of actions has “no rational underpinning.” This process which is implicit in all action all the time, seems obviously straightforward.
    What I'm saying is that you have no basis for your extrapolation. Let's take a specific example. For many years it was thought that babies should be placed on their fronts in cots, so that, if they were sick, the vomit would drain away. Lots of mothers followed this advice and the result was that cot deaths rose. Now it's believed that babies should be placed on their backs.

    So what went wrong? The earlier advice was based upon a rational line of argument that was simply wrong. They didn't know that it was wrong and they certainly didn't intend babies to die as a result of that advice. They were acting in the interests of the greatest good, but it did not have that effect.

    No-one (excluding God) knows enough to predict what will produce the greatest good. Nihilism isn't an answer either, since inaction is no more likely to produce the common good and it's certainly very boring to do.

    Anyone who's been a parent knows that it's exceptionally difficult to know what actions to take when a teenager is acting in a way that you perceive as sub-optimal and even potentially dangerous. How much more complex is it when you don't even know the people concerned?
    What difficulties, you haven’t specified any?
    Sorry, I thought they were obvious. Let's set some out:

    a) You don't know what 'good' is.
    b) You don't know that a cause will produce an effect.
    c) You can't say why a single greater good should rank higher than lesser goods (except by tautology).
    d) You cannot formulate a coherent calculus for good.

    Start with those and I'll come up with some more, if you manage to solve those problems.
    I responded by noting that it wouldn't, because killing 1/3 of the population - painlessly or not - would be a considerable harm. The 1 million people could live their lives without being massacred, which they clearly prefer, or else they would massacre themselves.
    I agree it's a harm, but you haven't yet shown why it's not a necessary harm. Utilitarianism assumes that some must lose if others are to gain.
    100% of the population would prefer to be rich and healthy and have long, fulfilling lives. If that's not possible because of constrained resources, how will we decide the best way to share out what there is? Do we give some people 100% of their desires and some 0%, or do we give everybody 10% (i.e. lives which are "nasty, brutish and short")? How will utilitarianism decide, and what makes it ethically right to do so?
    The massacre would only maximise good if the losses for the 1/3rd of the population were outweighed by the gains of the 2 million, which is incredibly unlikely
    As you've noted, what if there's only enough food to provide sustenance to 2/3 of the population? If everybody gets "not enough" then all die. So the 1/3 die to save the others. That's an easy case to deal with, which is why I made it less defensible in my example. When we're dealing with ethics, it's generally all about the difficult cases. Utilitarianism leads you to believe that genocide could be justified, unless you shore it up with another ethical system, which values the rights of the individual over the majority for some issues. But then, it's no longer utilitarianism, but something else, isn't it?
    The scenario posits a choice between either: 1 million deaths and increased benefit for 2 million, or no deaths for 1 million and no increased benefit for the 2 million. Thus the question is simply whether the million deaths are outweighed by the benefit derived from them. The point is that except in a very unusual situation, the benefit for the 2 million persons will not outweigh the loss of 1 million lives.
    That's part of my challenge. Let's talk about utiles (a measure of utility, or benefit) for a moment. Let's say that having a new MP3 player has a utility of 1 utile and dying tomorrow has a disutility of -1,000,000 utiles. Does that mean that it's ethically OK to kill someone, if a million people would get a new MP3 player as a result?

    Trivial? Of course it is, but it's this sort of calculus which utilitarians rely upon. You must be able to quantify every part of your value system on to a utility scale. Yet how would you even start to compose such a scale? Is a human life to have infinite utility, for example? If so, how will you ever fight a war, or stop an armed criminal? Feel free to set out such a scale, in order to counter my argument.
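    The utile arithmetic posed in this challenge can be made concrete with a small sketch. The `net_utility` helper and all numeric values are hypothetical illustrations taken from the post itself, not part of any established calculus:

```python
# A toy sketch of the "utile" calculus described in the challenge above.
# All numbers are hypothetical illustrations from the post, not a claim
# about real utility values.

def net_utility(effects):
    """Sum the per-person utility changes for one course of action."""
    return sum(effects)

mp3_utile = 1              # assumed benefit of a new MP3 player (1 utile)
death_utile = -1_000_000   # assumed disutility of dying tomorrow
population = 1_000_000     # people who would each receive an MP3 player

# Option 1: one person dies so a million people each get an MP3 player.
act = net_utility([mp3_utile] * population + [death_utile])

# Option 2: do nothing -- nobody gains, nobody dies.
refrain = net_utility([0] * population)

# With these deliberately contrived numbers the two options tie exactly,
# which is the point of the challenge: the verdict hinges entirely on how
# the scale itself is constructed.
print(act, refrain)
```

    On these assumed weights the two options come out level, so one more MP3 player would tip the balance either way: the conclusion is only as good as the scale.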

    You seem to be trying to limit the field of your decision making to the area with which you're comfortable; to an area of 'common-sense', but I contend that ethics have to be able to deal with far more complex situations.
    Thus the only way to suggest a flaw in utilitarianism is to suggest that the greatest good is, or could require, an atrocity, in which instance the notion of atrocity is deprived of its force.
    I agree that utilitarianism would support the logic of the Holocaust (that ordinary Germans would be better off if the Jews were eliminated). It only recognises the 'greater good', and 'atrocity' can have no meaning as long as it serves that goal. That is one of its greatest and most fundamental flaws as an ethical system.

    (Original post by TCovenant)
    Thus the only way in which you could conceivably criticise utilitarianism would be to demonstrate a good that outweighs the good of the individuals you began with - a demonstrably nonsensical assertion.
    That's exactly what we are doing. We're arguing that you're falsely excluding real-world examples because you don't like them. Suppose there's a population with 10 million As and 1 million Bs. The As decide that they hate Bs, so they systematically wipe them out. Thus 10 million got their first choice and only 1 million didn't. The remaining population are 100% happy with their choice. Is that ethical?

    Your only escape is to say that the wishes of 1 million Bs not to die somehow outweigh the wishes of 10 million As that they should. Since we're only talking preferences, why should one set of preferences outweigh another? It seems that you're arguing that B's wish not to die is worth more than A's wish that B should die. While you think that's just common-sense, it's far from trivial to prove. Will you allow some measure of quality or duration? If so, please suggest a scale.

    If you appeal to the theory that committing genocide will actually diminish the A population, then to what are you appealing? It's certainly outside utilitarianism.
    The fact is that in no realistic situation would genocide, mass deportation or gang-rape maximise the greatest good,
    Yet in the last 100 years we've seen a significant number of real-world examples where that's exactly what's been intended to happen. Genocides occur because those who do it think they'll be better off.

    Another criticism of utilitarianism (Mill's) is the distinction between higher and lower pleasures: it doesn't make any sense to say some pleasures are higher than others. There's an analogy of the watches. A man wants to know the time and he asks three people with watches. One person says it's 1:30, another says it's 1:45, while the third says it's 1:15. The question is: what makes one time more reliable than the others? Is it because one watch is made of quartz while another is electronic?

    (Original post by grumballcake)
    You're confusing data with information. We can't even agree whether reducing CO2 emissions will achieve anything. Some of the world's foremost experts disagree as to whether global warming exists. And that's just in the physical sciences. Economists propose mutually exclusive models of how countries should be run. Theologians postulate completely different models of God (including no God at all).
    I kind of agree with your point, although I have trouble seeing how any ethical system can avoid this problem - any attempt to focus on the 'intrinsic value' of an action is going to run into problems of definition, and even if you get past that you'll probably end up with something unpalatably inflexible. However, I have to take issue with your examples.

    Firstly, while there are certainly some experts who express scepticism about global warming, they are in a minority, and a shrinking one at that. You'll find dissenters for any scientific theory, particularly one with considerable political and economic significance. There are significant differences of opinion about the extent of global warming and the most effective way of dealing with it, which isn't the same thing.

    Economics is a new science, and one that deals with incredibly complex systems, but there has been significant progress made over the last century or so, and there are grounds for optimism on how developed our understanding can become. In particular, treatment of economics under the umbrella of complexity theory looks pretty promising.

    Theologians are a different matter altogether.

    “You're confusing data with information. We can't even agree whether reducing CO2 emissions will achieve anything. Some of the world's foremost experts disagree as to whether global warming exists. And that's just in the physical sciences. Economists propose mutually exclusive models of how countries should be run. Theologians postulate completely different models of God (including no God at all). In the midst of all this, you're suggesting that we do have a complete handle on ethics, which is one of the slipperiest subjects around?
    We can't even predict where all the snooker balls will end up after the break. That's just 15 reds, 7 colours and a cue ball on a flat, constrained surface where every ball's starting position and mass is known. But you reckon we can accurately estimate the behaviour of 6 billion people scattered around the globe? Have you any understanding of chaos theory?
    I agree, but I disagree that predicting a small number of limited probabilities somehow can be extrapolated into the bigger picture, just by multiplication of the smaller decisions.”
    The cosmos may be complex, but luckily, as I stated a couple of posts ago, we don't need perfect knowledge. Further, even if you could demonstrate that the cosmos was so hopelessly complex that we have no basis for any action at all, it would not disprove utilitarianism. Preferences would still be exactly as preferable even if no-one could ever predict what actions would bring them happiness.

    Even more fortunately, action is nowhere near as inscrutably complex as you describe; you yourself are doubtless, even at this moment, acting according to your presuppositions about what can be expected to bring about desired results. There are, unsurprisingly, very noticeable trends in human action, and obvious ways in which net happiness will be increased or decreased.

    Incidentally, if you actually believe that we suffer from this horrendous lack of information, why do you continue to act, and on what basis do you do so? Also, why do you believe that some people are generally happy and others not - is the fact that someone is unhappy unrelated to the terrible disaster that befell them?

    “Lots of mothers followed this advice and the result was that cot deaths rose. Now it's believed that babies should be placed on their backs…
    Anyone who's been a parent knows that it's exceptionally difficult to know what actions to take when a teenager is acting in a way that you perceive as sub-optimal and even potentially dangerous. How much more complex is it when you don't even know the people concerned?”
    How difficult the decision is doesn't matter - so long as it can be agreed that babies not dying is probably good, the only question is how best to achieve this. With a choice between front and back, all that could or ought to be done is whatever is most likely to be good.
    Faced with a situation where action will affect lots of people in a complex way, the only response is to act in the manner most likely to bring a better result. What alternative can be offered to acting in the best possible manner - flipping a coin, arbitrarily choosing a point of view some other way? Clearly the course most likely to bring good is the one most likely to bring good; of course the world is very complex and action or inaction might have a negative effect, but acting this way is clearly less likely in general (no more likely, at worst) to bring a bad result than deciding actions on some basis other than that likely to do good.

    “a) You don't know what 'good' is.
    b) You don't know that a cause will produce an effect
    c) You can't say why a single greater good should rank higher than lesser goods (except by tautology).
    d) You cannot formulate a coherent calculus for good.”
    I think I’ve already addressed these, but I’ll offer a summary.
    A) Persons have preferences; it is good that these are fulfilled. The greatest good therefore is the greatest fulfillment of preferences.
    B) Nevertheless one has reason to suspect that there is a correlation between causes and effects which can be acted upon: I like apples, therefore I will eat one and enjoy it.
    C) Tautology is pretty sound.
    D) One needn’t model a ‘calculus.’ As in one’s personal life, one can postulate as to what means will bring about desired ends to a greater extent than alternative means. Exactly the same applies with regards to other persons, without prejudice.

    “If that's not possible because of constrained resources, how will we decide the best way to share out what there is? Do we give some people 100% of their desires and some 0%, or do we give everybody 10% (i.e. lives which are "nasty, brutish and short")? How will utilitarianism decide, and what makes it ethically right to do so?

    As you've noted, what if there's only enough food to provide sustenance to 2/3 of the population? If everybody gets "not enough" then all die. So the 1/3 die to save the others. That's an easy case to deal with, which is why I made it less defensible in my example. When we're dealing with ethics, it's generally all about the difficult cases. Utilitarianism leads you to believe that genocide could be justified, unless you shore it up with another ethical system, which values the rights of the individual over the majority for some issues. But then, it's no longer utilitarianism, but something else, isn't it?”
    I can't determine what decision would be made, as I lack the requisite knowledge of the situation, but clearly with knowledge of the resources available it would be possible to approximate what can typically provide for a basic level of happiness. Obviously utilitarianism would necessitate the best situation - assuming that all the persons are equal (and if not, applying triage), the greatest level of happiness would be the order of the day. This in practice would be the largest number surviving with a level of happiness that was preferable to death. Killing persons could be justified, but only if doing so were to save a greater number. As Mill and Singer have pointed out, the utilitarian solution necessarily favours the preservation of the life of a person with a basic level of contentment over adding extra happiness to someone who is already happy. Such a formulation is not an added bias to sort out unsavoury conclusions; rather it is the natural result of accurately applying utilitarianism - the preference of a single person to continue living far outweighs the preference of a person for supplementary happiness.

    This is of course without noting the fact that in most scenarios there are ancillary reasons why attacking some minority for the pleasure of a greater number is not for the greater good. Invariably, actions which do not reinforce the fact that individuals are valued within society, and which give the impression that one could be 'used' against one's will for societal benefit, have devastating effects upon society - the greater good is not served by creating a society where everyone is in perpetual fear of being dragged off in the night to be placed in the panopticon, or some such.

    “That's part of my challenge. Let's talk about utiles (a measure of utility, or benefit) for a moment. Let's say that having a new MP3 player has a utility of 1 utile and dying tomorrow has a disutility of -1,000,000 utiles. Does that mean that it's ethically OK to kill someone, if a million people would get a new MP3 player as a result?

    Trivial? Of course it is, but it's this sort of calculus which utilitarians rely upon. You must be able to quantify every part of your value system on to a utility scale. Yet how would you even start to compose such a scale? Is a human life to have infinite utility, for example? If so, how will you ever fight a war, or stop an armed criminal? Feel free to set out such a scale, in order to counter my argument.

    You seem to be trying to limit the field of your decision making to the area with which you're comfortable; to an area of 'common-sense', but I contend that ethics have to be able to deal with far more complex situations.”
    I'm in no sense trying to limit the field to 'common sense' or to an area where I'm comfortable; I'm merely stating the (usually very obvious) utilitarian solution to your various imaginary scenarios.

    In any case utilitarian formulations need not rely on a grand abacus of utiles. In any given situation, such as the ‘new MP3’ one, it is intuitively obvious that extra MP3 players would at no point outweigh the innumerable harms that would result from this murder. One doesn’t need to represent the dilemmas numerically in deciding the greater good any more than one needs to assign numerical values to purely personal choices.

    As for the armed criminal and war, obviously the value attached to a purely basic level of happiness necessitates a situation which safeguards this, but clearly combating armed criminals is necessary for any number of related reasons - the more general preservation of life, for example.

    “I agree that utilitarianism would support the logic of the Holocaust (that ordinary Germans would be better off if the Jews were eliminated). It only recognises the 'greater good', and 'atrocity' can have no meaning as long as it serves that goal. That is one of its greatest and most fundamental flaws as an ethical system.”
    That the greater number automatically define the greater good is a great over-generalisation. You’re clearly not of the opinion that the completion of the Holocaust would actually raise net happiness?

    “We're arguing that you're falsely excluding real-world examples because you don't like them. Suppose there's a population with 10 million As and 1 million Bs. The As decide that they hate Bs, so they systematically wipe them out. Thus 10 million got their first choice and only 1 million didn't. The remaining population are 100% happy with their choice. Is that ethical?

    Your only escape is to say that the wishes of 1 million Bs not to die somehow outweigh the wishes of 10 million As that they should. Since we're only talking preferences, why should one set of preferences outweigh another? It seems that you're arguing that B's wish not to die is worth more than A's wish that B should die. While you think that's just common-sense, it's far from trivial to prove. Will you allow some measure of quality or duration? If so, please suggest a scale.”
    One needn’t propose a numerical scale as I stated above, it would simply be an approximate representation, in any case, of what we’re actually discussing, namely the relative puissance of the preferences in question.

    Obviously it is actually impossible to prove the strength of preferences, as it is to prove preferences. We have to assume that humans act in various ways on the basis of psychological indicators, our genetically and socially derived intuition, behaviourism etc. Fundamentally, I can't prove that people don't like being murdered, but this is nothing to do with utilitarianism; it is just a fact of life. Some very basic facts should be apparent when considering the A-B scenario: for one thing, most persons (seemingly) value the lives of themselves and those close to them very strongly indeed, and as a comparative measure would suffer the loss of many things of value to protect them. Conversely, the hatred of the B's seems to be of a different order of significance; persons who hate other groups more than they value their own lives are few and far between, with such examples occurring (if they occur at all) in short bursts of passion (a battlefield, say) or in psychological scenarios where the hatred takes a monomaniacal form. Accepting that no-one can prove what another thinks to an absolute extent, it seems very improbable that whatever has given rise to this hypothetical hatred will, if satisfied, bring happiness, or if denied, bring suffering, over and above that of the slaughter of the B's. Of course, once again there are other factors to consider, for example the fact that a society that en masse slaughters parts of itself will likely give rise to any number of future contradictions and potential problems, if this slaughter is allowed to occur.
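    The weighting point above - that headcounts alone are not the utilitarian measure - can be illustrated with a toy comparison. The counts come from the scenario under discussion; the preference-strength weights are purely hypothetical assumptions, chosen only to show how weighting differs from headcounting:

```python
# Toy illustration of counting heads versus weighting preference strength.
# The strength weights below are hypothetical assumptions, not measured
# quantities.

A_COUNT = 10_000_000   # As, who prefer that the Bs be gone
B_COUNT = 1_000_000    # Bs, who prefer to stay alive

WISH_TO_LIVE = 1000    # assumed strength of a B's preference to live
WISH_B_GONE = 1        # assumed strength of an A's hatred-preference

# Counting heads alone, the As win.
by_headcount = A_COUNT > B_COUNT

# Weighting each preference by its strength, the Bs' preference to live
# dominates despite their smaller number.
by_weighted = A_COUNT * WISH_B_GONE > B_COUNT * WISH_TO_LIVE

print(by_headcount, by_weighted)
```

    With these assumed weights the two tallies disagree, which is the crux of the dispute: everything turns on whether a wish not to die may be weighted more heavily than a wish that another should die.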

    Of course if the A’s actually do constitute, for example, a new species, who have the capacity to suffer so from their hatred of the B’s (which is wholly irrational and thus no solution can possibly resolve it other than satiation) that their suffering is stronger than that of the to-be-slaughtered group, then it would be preferable that their suffering be abated.

    “Yet in the last 100 years we've seen a significant number of real-world examples where that's exactly what's been intended to happen. Genocides occur because those who do it think they'll be better off.”
    Once again your words indicate a crucial distinction, however: the genocides occur because "those who do it think they'll be better off." If any of these situations were considered from a utilitarian perspective, it is clear that the greater good would occur through non-murderous means - mutual co-operation, a society wherein the greatest personal preferences were achievable, etc.

    Notably, of course, the point of the ethical discussion is to determine which ethical system is right, not which would bring about good results were given people to follow it - hence it doesn't matter that various persons might misinterpret utilitarianism and commit genocide. From a utilitarian perspective it is conceivable (if unlikely) that it would be better that people were inculcated with a simpler ethical rule - "don't kill anyone, or else," for example - but this does not change either the correctness of utilitarianism or the fact that any of those simpler ethical assertions would only be good insofar as they serve greater net preferences.
    Offline

    0
    ReputationRep:
    (Original post by wanderer)
    while there are certainly some experts who express scepticism about global warming, they are in a minority, and a shrinking one at that.
    My belief is that the number of dissenters is actually growing. It's also true that for every scientific advance, the supporters of the existing theories were adamant that the old way was best. Priestley went to his grave defending the phlogiston theory, even after he'd discovered oxygen (which he called dephlogisticated air).

    You're actually supporting my point here anyway. If we can't be sure of simple physical facts and trends, what hope do we have of predicting what's best for society? Let's take an emotional issue. The 1960s saw a number of deaths in back-street abortions, so in 1967 David Steel and others secured the passage of a bill which legalised abortion where the life or mental health of the mother was in jeopardy. It was seen as the lesser of two evils for extreme cases. No-one envisaged that this would mean the deaths of 120,000 foetuses a year, and if they had foreseen it, the bill would never have passed. Whatever your view of abortion, it's a clear application of utilitarianism - the child is sacrificed for the preferences of the mother. It's justified only by denying that the unborn child is entitled to consideration.

    That's not to open the abortion debate, which usually generates more heat than light in online forums. However, it's a clear example of where society will deny rights to a minority for its own convenience. Those people would have been considered to have rights if they had lived for less than a year more.
    Offline

    2
    ReputationRep:
    (Original post by dragonlance)
    another criticism of utilitarianism (Mill's) is the distinction of happiness. It doesn't make any sense to say some pleasures are higher than others. There's an analogy of the watches. A man wants to know the time and he asks three people with watches. One person says it's 1:30, another says it's 1:45, while the third says 1:15. The question is: what makes one time more reliable than the others? Is it because one watch is made of quartz while another is electronic?
    cool, I did utilitarianism revision yesterday

    good question :rolleyes:

    I've never heard of that analogy :confused:

    the only one I've heard is Paley's, for the teleological argument
    Offline

    0
    ReputationRep:
    (Original post by TCovenant)
    Preferences would still be exactly as preferable even if no-one could ever predict what actions would bring them happiness.
    So you're saying that ethics are actually just based upon whim? You approximate the 'right' action by taking the average whim of how many? The whole world?
    There are, unsurprisingly, very noticeable trends in human action, and obvious ways in which net happiness will be increased/decreased.
    I can only conclude that you've studied little or no anthropology. It's a massive over-simplification designed for rhetorical purposes. If it were all so obvious, how come no-one does it? Are you saying that your system is being deliberately obscured by people? If so, how come your system cannot explain their contrary behaviour?
    why do you continue to act, and on what basis do you do so?
    I think you're missing the point. Acting under uncertainty doesn't make me a utilitarian. It makes me human. My system of ethics accepts that I won't know best what to do and, as a theist, I don't have to make all those distinctions. However, I am required to act, as best as I can in accordance with my conscience, within a particular ethical framework.
    Faced with a situation where action will affect lots of people in a complex way, the only response is to act in a manner most likely to bring a better result.
    I'm not disagreeing, I'm simply saying that your ethics do not have a rational basis. They rely on being able to assess 'good', the 'greatest good' (which necessarily implies a calculus) and only then decide on actions. If we allow uncertainty to all, then you still have the problem of the first two of these. You seem to be studiously avoiding any attempt to discuss the calculus which is actually the central plank of the argument.
    A) Persons have preferences, it is good that these are fulfilled. The greatest good therefore is the greatest fulfillment of preferences.
    B) Nevertheless one has reason to suspect that there is a correlation between causes and effects which can be acted upon: I like apples, therefore I will eat one and enjoy it.
    C) Tautology is pretty sound.
    D) One needn’t model a ‘calculus.’ As in one’s personal life, one can postulate as to what means will bring about desired ends to a greater extent than alternative means. Exactly the same applies with regards to other persons, without prejudice.
    I'm rather disappointed here. Assuming that you can ascertain preferences is a rather grand project all on its own. I'm not sure what your methodology would be, let alone the mechanism for ensuring that any data gathered is based upon fully informed consent. After all, if you don't know what my preferences are, how will you ensure that I get them?

    B) is weak. Again you're trying to extrapolate to a complex situation from a trivial one. It's the sort of logic which makes books like "The One Minute Manager" popular and leads to Dilbertian situations.

    C) is laughable. You seriously think that tautology adds explanatory value?

    D) Oh, but you do need a calculus unless you're saying that all these decisions are largely arbitrary and subject to mood. Or, in other words, unfair. How will you ensure consistency of application otherwise?
    assuming that all the persons are equal (and if not applying triage) then the greatest level of happiness would be the order of the day.
    If wishes were horses...
    As Mill and Singer have pointed out, the utilitarian solution necessarily favours the preservation of the life of a person with a basic level of contentment over adding extra happiness to some-one who is already happy.
    Why necessarily? That's why you need a calculus. After all, what's 'happy'? What is a 'basic level of contentment'? How would I establish such a thing to make sure I don't take a life unnecessarily?
    the preference of a single person to continue living far outweighs the preference of a person for supplementary happiness.
    Unless, of course, they haven't been born yet ... or if they're Jewish (or Other, if you prefer) ... or mentally handicapped ... or severely ill.

    So, is it reasonable to spend the NHS budget on hip replacements or on multiple transplants for a single war criminal? If not, why not?
    the greater good is not served by creating a society where every-one is in perpetual fear of being dragged off in the night to be placed in the panopticon, or some such.
    Yet we know of lots of societies which have behaved exactly like that and have argued that they were pursuing the greater good. Look at Mao's quotes on the famine in China, for example. You can disagree with him, of course, but what makes your judgement so much better from an ethical point of view?
    One doesn’t need to represent the dilemmas numerically in deciding the greater good any more than one needs to assign numerical values to purely personal choices
    Why not? What are you proposing as an alternative? It all seems so woolly to me. You keep dismissing examples, but without saying what you're actually going to use to make these decisions. It's all vague handwaving as if the 'greater good' were inherently obvious to all. Which takes me back to my point - if it's so obvious, why do so many well-meaning people have such difficulty in finding it?
    You’re clearly not of the opinion that the completion of the Holocaust would actually raise net happiness?
    No, but that's for other moral reasons which are rooted in a belief in an absolute scale of morality. It has nothing to do with a utility calculus.
    One needn’t propose a numerical scale as I stated above, it would simply be an approximate representation, in any case, of what we’re actually discussing, namely the relative puissance of the preferences in question.
    Is this puissance associative, transitive and commutative? For example, if I prefer A to B, and I prefer B to C, do I necessarily prefer A to C? (Hint: here be dragons)
    Conversely the hatred of B’s seems to be on a different order of significance, persons who hate other groups more than they value their own lives are few and far between, with such examples occurring (if they occur at all), in short bursts of passion (a battlefield say) or in psychological scenarios where the hatred takes a monomanical form.
    They merely have to hate the person more than they value that person's life, if they are the majority. Can't you see that? You keep arguing as if this 'greater good' were clear and obvious to all. Human history implies strongly that it is not.
    if any of these situations were considered from a utilitarian perspective it is clear that the greater good would occur through non-murderous means
    Only if you redefine utilitarianism to recognise some non-utilitarian concept of what 'greater good' means. If it's the average preferences of a population, then that's consistent. If it's actually an appeal to a Platonic ideal, then it isn't. You seem to me to be conflating the two, but perhaps I'm simply confused.
    Offline

    2
    ReputationRep:
    so we're on the topic of utilitarianism

    is it Mill who came up with the Hedonic Calculus?
    Offline

    0
    ReputationRep:
    (Original post by grumballcake)
    “So you're saying that ethics are actually just based upon whim? You approximate the 'right' action by taking the average whim of how many? The whole world?”
    I’m saying that ethics is based upon the fact that within the context of human experience, some things are preferred over others. What these preferences are based upon is beyond the necessary scope of ethics, if you want to call it whim, do so, as it is irrelevant. Also, yes “the whole world,” there’s no possible basis for any limit on ethics.

    “I can only conclude that you've studied little or no anthropology. It's a massive over-simplification designed for rhetorical purposes. If it were all so obvious, how come no-one does it? Are you saying that your system is being deliberately obscured by people? If so, how come your system cannot explain their contrary behaviour?”
    I fail to see how you take the fact that I state that there are “trends in human action,” and proceed to criticize my knowledge of anthropology, not to mention your other assertions. One needn’t establish a totalising Grand Unified Theory of humanity; all you need to do, taking my statements at their word, is notice “trends in human action” and “obvious ways in which net happiness will be increased/decreased”. I hardly see how it is necessary to offer justification for such assertions but, taking your own example of bringing up children, it is surely a safe bet that parenting them will increase happiness over beating them and then leaving them to fend for themselves.
    “I think you're missing the point. acting under uncertainty doesn't make me a utilitarian. It makes me human. My system of ethics accepts that I won't know best what to do and, as a theist, I don't have to make all those distinctions. However, I am required to act, as best as I can in accordance with my conscience, within a particular ethical framework.”
    How am I missing the point by asking my question? In any case you’ve certainly missed my point, as your first sentence demonstrates- my point is that, as a conscious, sentient being, you have to act, and since you have desires and goals you will necessarily act with aims in mind. Being a theist is quite apart from this matter. I find it implausible that you act without taking account, consciously or not, of the consequences of your actions. You may remember my question was in response to your couple of paragraphs detailing the extent of our lack of knowledge; I refuse to believe, however, that you do not take account of the consequences of the actions you specified, or that you in any sense hold to the radical doubt you profess.
    “I'm not disagreeing, I'm simply saying that your ethics do not have a rational basis. They rely on being able to assess 'good', the 'greatest good' (which necessarily implies a calculus) and only then decide on actions. If we allow uncertainty to all, then you still have the problem of the first two of these. You seem to be studiously avoiding any attempt to discuss the calculus which is actually the central plank of the argument.”
    This has no bearing whatsoever on utilitarianism having a rational basis. It is clearly possible to assess good and degrees of good, otherwise action would be both impossible and futile. This does not “imply (or necessitate) a calculus.” When choosing between actions or ends, all one need do is compare the preferentiality of those which present themselves; clearly a formal calculus is not required. Similarly, as you add degrees of complexity, a formal numerical hierarchy is not required, as consideration of any aspect of daily life should immediately make obvious. What would, in your mind, constitute a “rational basis” for choosing between two courses of action?

    “Assuming that you can ascertain preferences is a rather grand project all on its own. I'm not sure what your methodology would be, let alone the mechanism for ensuring that any data gathered is based upon fully informed consent. After all, if you don't know what my preferences are, how will you ensure that I get them?”
    All your assertions denote is that a perfect system is impossible, which I agree with, but this changes nothing. We are faced with a situation wherein good or bad ends can occur from our actions, whether we can ascertain preferences is irrelevant, we cannot prove that other persons have preferences at all, after all. Nevertheless it seems reasonable to acknowledge the possibility that other persons do, and to act accordingly, otherwise there would be no reason to not simply murder every-one- after all who can say what their preference is?

    In short I can’t assure that I will maximise your preferences, or that my actions will maximise my own, nevertheless this is a fact of action, not a flaw in utilitarianism. The utilitarian assertion is simply that the preferences of all others ought to be given equal weight, in acting to bring about the most preferable ends.

    “B) is weak. Again you're trying to extrapolate to a complex situation from a trivial one. It's the sort of logic which makes books like "The One Minute Manager" popular and leats to Dilbertian situations.”
    Weak compared to what? Human action may indeed be complex, but if one acknowledges that the suffering of others is a bad thing relative to their preferences, then the best/greatest thing to be done is to act so as to maximise the preferred over suffering. Sure, I don’t know that “a cause will lead to an effect,” but there seems to be a trend and thus reason to act in accordance with what seems best; weak, maybe, but by definition the best available action.

    “C) is laughable. You seriously think that tautology adds explanatory value?”
    It’s not necessary to add explanatory value to a tautology. What would be necessary would be to demonstrate the possibility that the “greater good is not greater than lesser goods.” How can I conceivably add weight to a tautological truth?

    “d) Oh, but you do need a calculus unless you're saying that all these decisions are largely arbitrary and subject to mood. Or, in other words, unfair. How will you ensure consistency of application otherwise?”
    A calculus would necessarily be an artificial approximation; it adds nothing. It is not the case that decisions not based on a calculus are in any sense arbitrary or subject to mood. If faced with a group of children, it is fair to say, based upon our past observations etc., that it would probably be a bad thing to kill them, since they’ll probably overall prefer not to be killed. Applying a number to each of them and calculating would not add anything to the situation; it would simply be a representation of our view of likely effects. If you disagree and say that on the weight of your experience it’s probably best to kill them, there’s nothing I can do to prove otherwise, merely appeal to you to come round to my view.
    “Why necessarily? That's why you need a calculus. After all, what's 'happy'? What is a 'basic level of contentment'? How would I establish such a thing to make sure I don't take a life unnecessarily?”
    A calculus would be arbitrary; all I can do is advance arguments to try to convince you of probable outcomes. It is clear (though not ultimately provable, as we cannot ultimately prove the existence of other humans) as a trend from observation that a person with little wealth etc. typically values their life a great deal. Clearly people appreciate their sofas and CD collections too; nevertheless a clear trend exists that people generally dislike the prospect of losing their life much more than the prospect of losing their CDs.

    “Unless, of course, they haven't been born yet ... or if they're Jewish (or Other, if you prefer) ... or mentally handicapped ... or severely ill.”
    That wouldn’t be utilitarian. Though yes, if faced with a triage situation of some-one chronically ill against some-one young, with an acute but easily curable injury, there is basis for choice, but that choice is made without privilege of one over the other, except insofar as their actual capacity for preference is reduced.
    “So, is it reasonable to spend the NHS budget on hip replacements or on multiple transplants for a single war criminal? If not, why not?”
    I don’t understand the dilemma, sorry.

    “Yet we know of lots of societies which have behaved exactly like that and have argued that they were pursuing the greater good. Look at Mao's quotes on the famine in China, for example. You can disagree with him, of course, but what makes your judgement so much better from an ethical point of view?”
    Without a basis for proving ethical views instantly incontrovertibly right, this argument applies to all ethical assertions. Short of finding a ‘Big Book of Ethics- with all the answers’ there is no basis for me to demonstrate Mao is wrong, any more than you could.

    Clearly the fact that one can make a mistake in the name of an ethical theory is unimportant. That Mao thought he was pursuing the greater good is irrelevant; if not on the basis of a contrary view of the greater good, then upon what basis could he be disagreed with?
    “Why not? What are you proposing as an alternative. It all seems so woolly to me. You keep dismissing examples, but without saying what you're actually going to use to make these decisions. It's all vague handwaving as if the 'greater good' were inherently obvious to all. Which takes me back to my point - if it's so obvious, why do so many well-meaning people have such difficulty in finding it?”
    How can you ask ‘why not?’ Clearly, in approximating what actions will fulfil one’s preferences, one need not numerically model probable outcomes. There is utterly no reason why this changes when you take into account the probable preferences of persons other than yourself. As we’ve established, it is impossible to be certain of preferences, but if we assume that other persons matter then the only alternative is to simply ignore their preferences. Approximating what other persons want is difficult and vague, and one can make mistakes, but this doesn’t change the fact that they hold equal weight to your own.

    Neither the greater good nor the means to achieve it is obvious, nevertheless there is clear basis for preferentiality within this context, as your own behaviour would suggest- assuming you’re not a murderer or some such.

    “No, but that's for other moral reasons which are rooted in a belief in an absolute scale of morality. It has nothing to do with a utility calculus.”
    That you think the Holocaust is wrong is because you hold a belief in an absolute scale of morality (which it would be handy/interesting if you’d explicate), but the fact that you think it wouldn’t raise net happiness is surely to do with a utility (defined as happiness) calculus?

    “Is this puissance associative, transitive and commutative? For example, if I prefer A to B, and I prefer B to C, do I necessarily prefer A to C? (Hint: here be dragons)”
    If the three are responses to a single situation then they can, all else being equal, be placed in such a hierarchy.
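    The transitivity worry is a real one: even when each individual's preferences are transitive, the majority preference relation need not be. This is the Condorcet paradox. A minimal illustration (the voters and rankings here are invented purely for the example):

    ```python
    # Condorcet's paradox: three voters, each with perfectly transitive
    # individual preferences, yet pairwise majority voting produces a cycle.
    # Hypothetical voters and rankings, for illustration only.

    # Each voter ranks options A, B, C from most to least preferred.
    voters = [
        ["A", "B", "C"],
        ["B", "C", "A"],
        ["C", "A", "B"],
    ]

    def majority_prefers(x, y):
        """True if a strict majority of voters rank x above y."""
        wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
        return wins > len(voters) / 2

    print(majority_prefers("A", "B"))  # True (voters 1 and 3)
    print(majority_prefers("B", "C"))  # True (voters 1 and 2)
    print(majority_prefers("C", "A"))  # True (voters 2 and 3) - a cycle
    ```

    Here A beats B, B beats C, yet C beats A, so "the preference of the greater number" does not always yield a coherent ranking - which is presumably the dragon being hinted at.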
    “They merely have to hate the person more than they value that person's life, if they are the majority. Can't you see that? You keep arguing as if this 'greater good' were clear and obvious to all. “
    I don’t see that, because it is clearly wrong. B may prefer death over not-death, but the very point is that A prefer life over not life to a much greater extent, and therefore outweigh B’s preferences, as the suffering caused by A’s death clearly is greater than the suffering caused to B by A’s continued existence, unless as I qualified we are speaking of a new order of being who actually possesses the capacity to hate to such an extent. Clearly since B are not taking their own lives to avoid the suffering that A’s presence in the cosmos inflicts upon them, their preference is less puissant than A’s desire not to die.

    “Human history implies strongly that it is not.”
    Humans have behaved immorally, but unless you are arguing for moral nihilism this doesn’t constitute a particularly searching attack on utilitarianism. The fact that humans choose their own good over the good of others suggests that getting people to behave morally may be tricky, but it does not change the fact that treating others as yourself is the fundamental good.
    “Only if you redefine utilitarianism to recognise some non-utilitarian concept of what 'greater good' means. If it's the average preferences of a population, then that's consistent. If it's actually an appeal to a Platonic ideal, then it isn't. You seem to me to be conflating the two, but perhaps I'm simply confused.”
    The greater good, in a utilitarian sense, is neither of the two described above. The greater good would be the fulfilment of preferences to the greatest extent possible; namely the best pursuit of preferences, without bias towards any set of preferences over another. Thus if a majority group wants to kill a minority to steal their possessions, then it would clearly not be justified even if the majority group prefers pillage over not.

    Such a concept can only be arrived at by the actors if they consider the situation without privilege; thus it is not reducible to that which benefits the larger group, if in so doing it reduces benefit to the other group to a greater extent. Thus the greatest good is not reducible to the conflicting preferences of the group wherein everybody wants the best for themselves personally- they conflict, and one party wins what they want to the detriment of others.

    The greatest good is the ‘best fulfilment’ of their net preferences, as would be conceptualised from an omniscient situation. That one cannot attain such an ideal situation is irrelevant, because if one acknowledges that all other persons have preferences that have equal weight, per se, to your own, then the only option is to act according to what is most likely to fulfil these preferences to the best extent.
    Offline

    0
    ReputationRep:
    (Original post by TCovenant)
    the fact that within the context of human experience, some things are preferred over others
    ...at some times, in some places, by some people. There is no universal referent for this. That's why I said you can't know much anthropology. Many post-modernist anthropologists would deny that there are any unifying themes in human culture and would hold that societies are incommensurable. So there simply aren't the "trends of human action" which will help us. We might find common tendencies such as preferring not to murder, but these are not universal. If you don't have a universal set of preferences, you don't have anything to base a universal ethic upon. If you deny absolute referents then you're simply on ever-changing sands. No decision can be ethical or unethical, since there is no standard against which to compare it. I chose the word 'whim' carefully.
    “obvious ways in which net happiness will be increased/decreased”.
    That's my point. It's not obvious in all but the most trivial of cases.
    I hardly see how it is necessary to offer justification for such assertions but, taking your own example of bringing up children, it is surely a safe bet that parenting them will increase happiness over beating them and then leaving them to fend for themselves.
    Well, if you can only come up with trivial examples, it's hard to see how you'll apply this further. Is it reasonable to beat children at all? The answer has been 'obvious' to different cultures - but not the same answer. In Biblical times, it was a sign of love to beat children if they misbehaved. In our current culture, you will be vilified and possibly go to jail. Which one is ethical? How will you judge? I'm willing to bet your first response would be that it's obvious that it's wrong. Yet what controlled trials have you done to establish your view?
    find it implausible that you act without taking account, consciously or not, of the consequences of your actions. You may remember my question was in response to your couple of paragraphs detailing the extent of our lack of knowledge, I refuse to believe however that you do not take account of the consequences of your actions specified, or that you in any sense hold to the radical doubt you profess.
    You mistake me. I do not hold to radical doubt, nor do I have to. I have a reference point and a rationale for my actions which allows for my own uncertainty. The problem is that you're trying to borrow my clothes. You believe in 'right' actions but define them by reference to some external ideal which is not established using your method. A utilitarian denies that there are right actions, except those driven by the utility calculus. That's why it cannot avoid what would normally be regarded as atrocities.
    This has no bearing whatsoever on utilitarianism having a rational basis. It is clearly possible to assess good and degrees of good, otherwise action would be both impossible and futile.
    OK, as it's so clear, it should be no problem to define 'good' then. Many philosophers have struggled with this, so I'm all ears.
    This does not “imply (or necessitate) a calculus.” When choosing between action or ends all one need to is compare the preferentiality of those which present themselves, clearly a formal calculus is not required.
    You use 'clear' when it seems beyond you to explain it in detail. It's not clear to me how you "compare the preferentiality" at all. Can you give me a worked example?
    What would, in your mind constitute a “rational basis” to choosing between two courses of action?
    As my PhD involves decision theory and the problems with Expected Value (EV), you don't really want to go there.
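    One standard illustration of the difficulties with naive Expected Value gestured at here is the St. Petersburg game (an assumed example; the post itself names no specific problem): the game's expectation diverges, yet few would stake much to play. A minimal sketch:

    ```python
    # St. Petersburg game: toss a fair coin until the first head; if that
    # head lands on toss n, the payout is 2**n. Each outcome contributes
    # P(n) * payout(n) = (1/2)**n * 2**n = 1 to the expectation, so the EV
    # grows without bound - a classic problem for raw Expected Value.
    # Illustrative sketch only.

    def partial_expected_value(max_tosses):
        """EV contribution of the first max_tosses possible outcomes."""
        return sum((0.5 ** n) * (2 ** n) for n in range(1, max_tosses + 1))

    print(partial_expected_value(10))   # 10.0
    print(partial_expected_value(100))  # 100.0 - unbounded as tosses grow
    ```

    Since the truncated EV grows linearly with the horizon considered, a pure EV-maximiser should pay any finite price to play, which almost no-one would - one reason EV alone is a poor model of preference strength.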
    The utilitarian assertion is simply that the preferences of all others ought to be given equal weight, in acting to bring about the most preferable ends.
    So you're happy with an ethical system where holocausts are justified because the majority prefer it? If not, you need to set out some method as to how you weight preference and outcomes, without appealing to an external referent.
    It’s not necessary to add explanatory value to a tautology. What would be necessary would be to demonstrate the possibility that the “greater good is not greater than lesser goods.” How can I conceivably add weight to a tautological truth?
    You haven't said how it's greater. While it's true that 3x > 2x, that presumes an arithmetical relationship. Yet you simultaneously deny that such a relationship exists or can be formulated. That's why I think 'greater' is misleading in this context. You actually mean something different, and "Greater Good" is not the sum of "lesser goods" when you're using it in argument. So you've made things apples and oranges for comparison purposes.
    A calculus would necessarily be an artificial approximation, it adds nothing.
    It's my contention that there's nothing there in the first place. You can't admit to a calculus because it leads to reductio ad absurdum. Yet, without that calculus, your thesis is little more than hand-waving platitudes. Of course people prefer not to be killed, but so what? If the cause demands it...
    It is not the case that decisions not based on a calculus are in any sense arbitrary or subject to mood. If faced with a group of children, it is fair to say based upon our past observations etc, that it would probably be a bad thing to kill them, since they’ll probably overall prefer not.
    It seems that you're now retreating to a Golden Rule approach (i.e. do as you would be done by) rather than utilitarianism per se.
    If you disagree and say that on the weight of your experience it’s probably best to kill them, there’s nothing I can do to prove otherwise, merely appeal to you to come round to my view.
    But would my actions be ethical? If I can show a utilitarian calculus in its support, are my actions justified? After all, aren't those people simply being selfish if they deny my logic?
    Clearly the fact that one can make a mistake in the name of an ethical theory is unimportant. That Mao thought he was pursuing the greater good is irrelevant; if not on the basis of a contrary view of the greater good, then upon what basis could he be disagreed with?
    Fair enough. We're getting somewhere. So you now agree that Mao's callous choice to let 50m Chinese starve was simply 'one of those things' and that we cannot condemn him for it. He was pursuing the Greater Good and that's enough.
    That you think the Holocaust is wrong is because you hold a belief in an absolute scale of morality (which it would be handy/interesting if you’d explicate), but the fact that you think it wouldn’t raise net happiness is surely to do with a utility (defined as happiness) calculus?
    Well, as I said, I'm a theist. The Holocaust was wrong because some things are right and others are wrong. Which is which is defined by God as the creator and sustainer of the universe. They're the rules of this game, if you like.

    The Holocaust was wrong because it's morally wrong to murder people. It's morally wrong because God says so. That's the absolute standard, in my world view.
    I don’t see that, because it is clearly wrong. B may prefer death over not-death, but the very point is that A prefer life over not life to a much greater extent, and therefore outweigh B’s preferences, as the suffering caused by A’s death clearly is greater than the suffering caused to B by A’s continued existence, unless as I qualified we are speaking of a new order of being who actually possesses the capacity to hate to such an extent.
    We don't need hate - we simply need indifference. People don't hate third-world slaves, they simply prefer to have cheap jeans (or perhaps cocaine/heroin).
    The greater good would be the fulfilment of preferences to the greatest extent possible. Namely the best pursuit of preferences, without bias towards any set of preferences over another. Thus if a majority group wants to kill a minority to steal their possessions then it would clearly not be justified even if the majority group prefers pillage over not.
    I think you need to re-read the stuff I wrote on the calculus. You're wearing its clothes, but denying its warmth. What you're doing is to privilege some preferences over others.
    (Original post by grumballcake)
    “...at some times, in some places, by some people. There is no universal referent for this. That's why I said you can't know much anthropology. Many post-modernist anthropologists would deny that there are any unifying themes in human culture and that societies are incommensurable. So there simply aren't the "trends of human action" which will help us. We might find common tendencies such as preferring not to murder, but these are not universal. If you don't have a universal set of preferences, you don't have anything to base a universal ethic upon. If you deny absolute referents then you're simply on ever-changing sands. No decision can be ethical or unethical, since there is no standard against which to compare it. I chose the word 'whim' carefully.”
    To say that “things are preferred over others” does not require the identification of a totalisable Master-preference for all mankind. Nor does the existence of trends of action, necessitate the existence of eternal rules of action.

    You don’t need a universal set of preferences; the fact is that preferentiality (things being preferred) exists. If nothing is preferred, and no action or result is to be preferred, then there is no ethics. If things are preferred, then by definition that which is preferred is preferable. Things are preferred, and there are trends in ends and means which can be identified. That these aren’t universal, eternal or known perfectly (since we lack absolute knowledge) is unimportant. The fact remains that things are preferred and it is preferable that preferences are fulfilled; thus the greatest outcome is that preferences are satisfied to the greatest extent.

    “That's my point. It's not obvious in all but the most trivial of cases.”
    That isn’t a point against utilitarianism, of course, because even if there were no basis upon which we could judge whether our actions will bring beneficial or negative consequences, our actions would still result in consequences that were beneficial or negative. In the extreme version of this situation, where we cannot tell to any extent at all what will result from our actions, any ethics of action would be obsolete, because it would be utterly impossible to choose between actions.

    “Well, if you can only come up with trivial examples, it's hard to see how you'll apply this further. Is it reasonable to beat children at all? The answer has been 'obvious' to different cultures. In Biblical times, it was a sign of love to beat children if they misbehaved. In our current culture, you will be villified and possibly go to jail. Which one is ethical? How will you judge? I'm willing to bet your first response would be that it's obvious that it's wrong. Yet what controlled trials have you done to establish your view?”
    You’re correct to guess that I haven’t got any findings from reports I’ve commissioned into child-beating to hand, but it doesn’t matter. Even if these examples are what you define as ‘trivial’, they work, and thus utilitarianism is entirely viable. If you believe that action is so inscrutable that we must limit our ethical judgements to “Don’t torture any children because it will make them unhappy”, the theory still holds.

    In any case, this doesn’t matter, because I could conceivably do research upon the impact of child-beating and draw some conclusions to guide my decision. Similarly, we could quite easily discuss the various angles of it, as to whether it will bring benefits to the children and society or not, upon the basis of whether it will actually bring benefit. The point is that child-beating would be right/wrong based on whether it brought more benefit than the alternative, benefit being defined as ‘that which would be preferred most (without privileging any preferring-agent over another)’.

    “You mistake me. I do not hold to radical doubt, nor do I have to. I have a reference point and a rationale for my actions which allows for my own uncertainty. The problem is that you're trying to borrow my clothes. You believe in 'right' actions but define them by reference to some external ideal which is not established using your method. A utilitarian denies that there are right actions, except those driven by the utility calculus. That's why it cannot avoid what would normally be regarded as atrocities.”
    You mistake me; I was demonstrating precisely that you don’t hold to radical doubt. That is, you were criticizing utilitarianism on the basis that it is impossible to predict which actions will bring beneficial consequences (and latterly, impossible without a calculus), whereas you clearly act throughout your life on the basis of actions bringing benefit or suffering.

    I believe in “right” actions, defined as actions which bring the greatest benefit; a utilitarian denies there are right actions except those which bring the greatest benefit, so there’s no conflict. Utilitarianism only allows that which would “normally” be regarded as atrocities if the utilitarianism is incorrectly applied (someone incorrectly believes that an action will bring the greatest good/least suffering) or if the term atrocity is incorrectly applied (something is called an atrocity even when it brings the least suffering/most benefit).

    “OK, as it's so clear, it should be no problem to define 'good' then. Many philosophers have struggled with this, so I'm all ears.”
    Good is that which is preferable.

    “You use 'clear' when it seems beyond you to explain it in detail. It's not clear to me how you "compare the preferentiality" at all. Can you give me a worked example?”
    I have a cooling drink on a warm sunny day, I suspect, though I could be wrong, that I would enjoy drinking it and consequently proceed to do so, since the alternative of simply holding it seems likely to bring less pleasure.
    I see you approach with another tasty drink. I suspect, though I could be wrong, that I’ll like that one too and that you probably like it also. I am faced with a plethora of choices, including whether to try to make off with your beverage, thus depriving you of it. I also note that perhaps you are here to steal my drink, and consider whether, without further ado, I should run away or immobilize you with a pre-emptive strike to protect mine. I consider the probabilities of success of each of these options, and decide that you’re probably not going to steal my drink, being the charitable soul that you are.

    I then consider the ethical situation of stealing your drink: I will gain one drink, but you will lose one drink. Both of us probably have a similar desire for our respective drinks, so perhaps the theft itself would be neutral, but experiencing the theft of your drink will likely cause you surplus harm, and since I already have plenty of drink anyway, I will probably benefit little from the surplus drink. Additionally, stealing your drink will bring further surplus harm, as it may embitter you against your fellow man, decrease your happiness and maybe encourage you to steal someone else’s drink, making life bad for the whole community.

    Consequently, though the theft would benefit me, I treat the preferences of those likely to be affected as equal to my own, and thus do not follow through with the diabolical scheme; even though I don’t know for certain that your drink isn’t poisonous (in which case the theft would actually bring you benefit), based on experience that seems an unlikely result of my action.

    “As my PhD involves decision theory and the problems with Expected Value (EV), you don't really want to go there. ”
    Ok. Though you concede that we could still reasonably discuss a reasonable basis for choice between actions?

    “So you're happy with an ethical system where holocausts are justified because the majority prefer it? If not, you need to set out some method as to how you weight preference and outcomes, without appealing to an external referent.”
    What the majority prefer is irrelevant. All that is important is that which is most preferable overall. To determine this you don’t just have a quick vote to see what people prefer as a course of action; rather, you compare the weight of net preferences (happiness derived versus suffering derived), which invariably comes out against any form of massacre. Offering these scenarios in which, for example, people suffer more from the knowledge that another group isn’t being massacred than the massacred group itself suffers, is continually misleading.

    “You haven't said how it's greater. While it's true that 3x > 2x, that presumes an arithmetical relationship. Yet you simultaneously deny that such a relationship exists or can be formulated. That's why I think 'greater' is misleading in this context. You actually mean something different and "Greater Good" is not the sum of "lesser goods" when you're using it in argument. So you've made things apples and oranges for comparison purposes.”
    I haven’t said how “it’s greater” because thus far we’ve only defined it: “the greater good” in terms of being “the greater good.” Accepting good as that which is preferred, the greater good is that which is preferred to a greater extent. Since we’ve only defined a tautology, not two distinct things, I fail to see how you think there’s been a change to “something different.” If person 1 prefers scenario 1, and person 2 prefers scenario 2, then the greatest good is simply the scenario or end which is most preferred.

    “It's my contention that there's nothing there in the first place. You can't admit to a calculus because it leads to reductio ad absurdum. Yet, without that calculus, your thesis is little more than hand-waving platitudes. Of course people prefer not to be killed, but so what? If the cause demands it...”
    That which is there is that ‘that which is preferred is preferred’; so long as things are preferred, there is “something there”, not merely platitudes: the net preferences of all persons, without bias. There’s no question of the “cause”; the fact is that only that which is preferable can be preferable. There are no causes, only preferences. If persons preferred to kill people against their preferences, then it would only be acceptable if the preferences of the murderers held stronger weight than the preferences of the victims not to be killed, i.e. were their suffering as a result of not killing greater than the suffering of those being killed.
    “It seems that you're now retreating to a Golden Rule approach (i.e do as you would be done by) rather than utilitarianism per se.”
    It would be a “bad thing to kill them since they’d probably prefer not to”: that is definitively utilitarian. The fact that the Golden Rule carried to its logical extreme is logically identical to utilitarianism is beside the point.

    “But would my actions be ethical? If I can show a utilitarian calculus in its support, are my actions justified? After all, aren't those pople simply being selfish if they deny my logic?”
    Ethical decisions can be made by people who don’t have perfect knowledge and thus be made incorrectly based on the best (imperfect) knowledge that they have to hand. Likewise ethical discussions can occur between two people who have imperfect knowledge of ethics, as in the example given.
    Whether the person who thinks he should kill the children to bring about the greatest benefit is correct or not depends on whether killing the children will actually bring about the greatest benefit; in my view, almost certainly not. Whether he is acting ethically is solely determined by whether he thinks he’s acting morally; if he sincerely thinks he’s acting morally, then he is. Obviously, without access to omniscience it is impossible to prove which of the two is right; all they can, and have to, do is act to the best of their ability, based on their knowledge.

    “Fair enough. We're getting somewhere. So you now agree that Mao's callous choice to let 50m Chinese starve was simply 'one of those things' and that we cannot condemn him for it. He was pursuing the Greater Good and that's enough.”
    What do you mean, “one of those things?” In my view it is pretty likely that Mao didn’t sincerely think he was pursuing the greater good, but not being a historian or a psychologist or Mao, I can’t prove whether he was acting in a moral fashion- namely doing the best he could- what he thought was the best thing.

    His actions almost certainly wouldn’t have maximized the greater good (though without omniscience I can’t say absolutely; who knows what chaos theory might have thrown out). I’m not looking to condemn anyone ethically; all that is relevant is judging which actions are the best ones to be carried out, and then carrying them out.

    “Well, as I said, I'm a theist. The Holocaust was wrong because some things are right and others are wrong. Which is which is defined by God as the creator and sustainer of the universe. They're the rules of this game, if you like.

    The Holocaust was wrong because it's morally wrong to murder people. It's morally wrong because God says so. That's the absolute standard, in my world view.”
    Fair enough, plenty to be discussed tomorrow. That said, you still have to decide between actions which conflict, insofar as they will be right/wrong to various degrees: how do you decide?

    “We don't need hate - we simply need indifference. People don't hate third-world slaves, they simply prefer to have cheap jeans (or perhaps cocaine/heroin).”
    One “needs” indifference in order for the thing to occur, in that were there no indifference, such practice would not be allowed. But indifference doesn’t make the action morally justifiable; the net suffering defines its moral status. The slaves’ preference for freedom and good living standards outweighs our preference to avoid the suffering of expensive jeans.
    “I think you need to re-read the stuff I wrote on the calculus. You're wearing its clothes, but denying its warmth. What you're doing is to privilege some preferences over others.”
    I’m not privileging some preferences over another, as stated in the quoted paragraph.
    Whoo! I'm just here for the atmosphere :p:
    (Original post by monty mike)
    Whoo! I'm just here for the atmosphere :p:
    hehe

    hope you enjoy it here!
    (Original post by TCovenant)
    If things are preferred then by definition: that which is preferred is preferable. Things are preferred, and there are trends in ends and means which can be identified. That these aren’t universal, eternal or known perfectly, since we lack absolute knowledge is unimportant. The fact remains that things are preferred and it is preferable that preferences are fulfilled; thus the greatest outcome is that preferences are satisfied to the greatest extent.
    Preferred by whom? I agree that if I prefer A to B then A is preferable to B for me, but so what? That doesn't mean that A is preferable for anyone else. Or even that because I prefer A it is preferable at all. Suppose I prefer to torture animals (I don't, before you ask) is it therefore ethical to torture animals? It's obvious from common sense that it isn't, but it's not at all obvious from your formulation. You seem to have a mix of situational ethics in with utilitarianism. If a society prefers to abuse children (examples available from real societies on request), can you condemn them?
    The point is that child-beating would be right/wrong based on whether it brought more benefit than the alternative, benefit being defined as ‘that which would be preferred most (without privileging any preferring-agent over another)’.
    So, if a society prefers child-beating, that's a benefit?
    I believe in “right” actions, defined as actions which bring the greatest benefit, a utilitarian denies there are right actions except those which bring the greatest benefit- there’s no conflict.
    Your definition is yet another tautology. If benefit is the outcome of a right action, then a right action is one which produces benefit. Yet what does that add in explanatory terms? It's like saying that red paint is paint which is red.
    Utilitarianism only allows that which would “normally” be regarded as atrocities if the utilitarianism is incorrectly applied (some-one incorrectly believes that an action will bring the greatest good/least suffering) or if the term atrocity is incorrectly applied (something is called an atrocity even when it brings the least suffering/most benefit).
    So you now have a neat system which allows no external critique either. If we show that it does allow atrocities, then we must be wrong because it doesn't, by its own definition. The old

    Rule 1: I am always right
    Rule 2: If I appear to be wrong, see rule 1

    I'm afraid that cuts no ice with me. You can't say that we're "incorrectly applying" the rules just because you don't like the outcome.
    What the majority prefer is irrelevant. All that is important is that which is most preferable overall. To determine this you don’t just have a quick vote to see what people prefer as a course of action; rather, you compare the weight of net preferences (happiness derived versus suffering derived), which invariably comes out against any form of massacre.
    Except that you haven't yet produced any evidence of this, or a system to back up this bald assertion.

    Your appeal is now to ignore/avoid preferences and appeal to "that which is most preferable overall", but how will you determine this? Preferable to whom, and measured how?
    I haven’t said how “it’s greater” because thus far we’ve only defined it- “the greater good” in terms of being “the greater good.” Accepting good as that which is preferred, the greater good is that which is preferred to a greater extent.
    If I put on my teenager hat for a moment: "Well, duh!". Right, that's got that out of the way, so what on earth are you talking about? How will I know when I've found the "greater good" and how will I tell it from a "lesser good" in real, concrete terms? Will everyone else agree with this definition?

    Let's go back to classical utilitarianism for a moment. Are you advocating act utilitarianism (Bentham), or rule utilitarianism (Mill)? At times you've veered towards eudaimonism, since you've talked about happiness as the greatest good a few times. I'm just trying to get a feel for where you're going.
    That which is there, is that ‘that which is preferred is preferred,’ so long as things are preferred then there is “something there” not merely platitudes, the net preferences of all persons, without bias.
    You'd feel right at home with Derrida.
    If persons preferred to kill people against their preferences then it would only be acceptable if the preferences of the murderers held stronger weight than the preferences of the victims to not be killed- i.e. were their suffering as a result of not killing to be greater than the suffering of those being killed.
    So how will you objectively establish this weighting? After all, it's central to your thesis.
    The fact that the Golden Rule carried to its logical extreme is logically identical to utilitarianism is aside from the point.
    It's also not true - it isn't "logically identical" at all.
    Whether he is acting ethically is solely determined by whether he thinks he’s acting morally, if he sincerely thinks he’s acting morally then he is.
    So Fred West was acting morally? He wanted to kill people and he thought that on the balance of probabilities he'd be happier if he did. He also considered that his happiness outweighed the needs of the others to live. QED.
    In my view it is pretty likely that Mao didn’t sincerely think he was pursuing the greater good,
    He obviously did, unless you think he was psychotically deranged. People usually act for what they see as the greater good, it's just that their view of what that is, will usually be tinged by selfishness. So Mao's vision of the greater good (i.e. power and wealth for him and China) allowed him to ride over the peasants' vision of the greater good (i.e. not starving).
    One “needs” indifference, in order for the thing to occur, in that were there no indifference such practice would not be allowed. But indifference doesn’t make the action morally justifiable, the net suffering defines its moral status- the slaves preference for freedom and good living standards outweigh our preference to avoid the suffering of expensive jeans.
    I'd argue that indifference is the default state and that it's an absence of something, rather than an object in itself. If we're simply unaware that cheap jeans are bought by the suffering of others, then are we morally culpable? We're happy (in your terms) so there's clearly a good, but is it a moral good? Kant argues against happiness as a greater good, since he believed that happiness needed to be earned.
    (Original post by rahmara)
    so we're on the topic of utilitarianism

    is it Mill who came up with the Hedonic Calculus?
    No. Bentham started it. See http://philosophy.lander.edu/ethics/calculus.html for a decent explanation.
    (Original post by grumballcake)
    ...at some times, in some places, by some people. There is no universal referent for this. That's why I said you can't know much anthropology. Many post-modernist anthropologists would deny that there are any unifying themes in human culture and that societies are incommensurable. So there simply aren't the "trends of human action" which will help us. We might find common tendencies such as preferring not to murder, but these are not universal. If you don't have a universal set of preferences, you don't have anything to base a universal ethic upon.
    There's more disagreement among anthropologists than you indicate. Donald E. Brown gives a list of ~100 human 'surface universals', which include only characteristics found in all known cultures - there are considerably more 'near-universals' found in all but a few cultures. These include 'copulation normally conducted in privacy', 'consciousness of economic inequalities', 'generosity admired', 'healing or attempting to heal the sick', 'hospitality', 'incest prevention or avoidance', 'law (rights and obligations)', 'law (rules of membership)', 'leaders', 'marriage', 'mourning', 'murder proscribed (afraid you were wrong there)', 'preference for own children and close kin', 'promises', 'rape proscribed', 'revenge', 'sanctions for crimes against the collective', 'concept of fairness', 'fear of death', 'pride', 'sexual jealousy', and 'territoriality'.

    Those could all be relevant, although they're probably not strong enough to make the kind of argument you were rejecting. Still worth pointing out.
 
 
 
Updated: September 14, 2010