
Ethics: Utilitarianism, Part 2

Julia Markovits (Cornell University) gives an introduction to the moral theory of utilitarianism. Utilitarianism is the view that the right moral action is the one that maximizes happiness for all.

Speaker: Dr. Julia Markovits, Associate Professor, Sage School of Philosophy, Cornell University


Want to join the conversation?

  • Anil
    If the people watching the World Cup found out about Jones (which would be inevitable), they would be mad at the TV company or whoever for NOT shutting off the transmitter, so doesn't utilitarianism actually support helping Jones?
    (8 votes)
  • Enn
    Does the Theory of Right Action say that it is better to do something that is expected to give happiness when the outcome is uncertain?
    (4 votes)
  • Tejas
    To solve this problem, couldn't you just change your value from happiness to something that will weight a person's life more heavily?
    (2 votes)
  • Sarah
    Perhaps minor inconveniences are not worth the same as actual suffering, because the long-term consequences for Jones will be more severe than for those people. They may be mad for a week or so, but Jones could be traumatized for the rest of his life.
    (3 votes)
  • John.h.leiman
    Or... perhaps it is just happiness that has value. I've considered this endlessly, especially after reading Aldous Huxley's Brave New World. If we could all be happy, and eternally high on drugs, why shouldn't we be? Of course, there's the argument that doing so would eventually lead to unhappiness, but avoiding that is still utilitarian reasoning, since it minimizes unhappiness. But if we could really be maximally deluded, high, and happy forever, setting aside the practicality of any such scenario, why would it be immoral to do so?

    I believe this is not necessarily true, as happiness in this sense is not necessarily quantifiable. While quantities of deaths can sometimes be compared easily with other quantities of deaths (though only sometimes), comparing the happiness derived from one outcome with "events" of a different nature, and therefore a different "quality", introduces factors that resist any algebraic treatment, such as the value of life and liberty. That is not to say that the rights to life and liberty are necessarily sacred (consider, for example, killing to prevent a massacre, or imprisonment; by "sacred" I mean unjust to violate under literally any circumstances), but I do believe they are of virtually infinite value compared with any quantity of headaches.

    To anyone who reads this, what do you think?
    (3 votes)
  • J D
    Turning off the World Cup game wouldn't hurt anyone else, though. It would annoy them, sure, but they would not suffer physical pain like Jones.
    (2 votes)
  • Nikhil Roy
    I disagree with the attempt to assign happiness a value, because in many circumstances there is no telling whether one outcome provides more or less happiness than another, even before considering emotional attachments. Utilitarianism, however, applied to the first concept you proposed, can also be interpreted as opposing this "eternally high" life, because the drugs would still affect mental health; is that true happiness, or a false pretence of happiness? It's much like the idea in an earlier video that you enjoy your friends' company even though they don't really like you. It creates a somewhat false sense of happiness, which the utilitarian principle should address.
    What is everyone else's opinion?
    Nikhil Roy
    (1 vote)

Video transcript

(intro music)

Hi, I'm Julia Markovits, and I'm an associate professor of philosophy at MIT. Today, I'm going to talk about utilitarianism.

We can break the utilitarian thesis up into two parts: a theory of what is valuable, and a theory of right action given what's valuable. First, the theory of what's valuable. It says that the only thing that's valuable in its own right is happiness and the absence of suffering. Second, the theory of right action. The right action is the one that maximizes, produces the most of, what's valuable, or if that's uncertain, that produces the most expected value. If you put those two pieces, the theory of what's valuable and the theory of right action given what's valuable, together, you get utilitarianism.

Utilitarianism has a lot going for it. It's a very simple, theoretically elegant theory that has universal application. It's built on a value, happiness, that is at least extremely widely shared. Almost everyone, in fact, values happiness. In a way, it's completely egalitarian. In the utilitarian calculus, each person's happiness counts for as much as anyone else's. There's something very intuitive about the thought that happiness is valuable, and the more we make of what's valuable, the better. And as we've seen, embracing these thoughts led Bentham, at least, to important moral insights at a time when many around him were blind to those insights.

But both parts of the utilitarian thesis also raise some worries. One set of worries concerns the utilitarian theory of value. A lot of people have disputed that only happiness is valuable and only suffering disvaluable. Couldn't we be happy even though we're massively deluded about our lives? Maybe the people we think are our friends really despise us, and the work we think is a success is really widely derided. In that case, we might still be happy, but surely our lives would be lacking much that is valuable. These worries can be avoided to some extent by revising the utilitarian theory of value. Maybe it's not just happiness, but well-being more broadly understood, that's valuable. It's a tricky problem to figure out exactly what's valuable, but I will set that problem aside here.

I want to focus instead on a problem facing the second half of the utilitarian thesis, the theory of right action. This part of the thesis looks particularly hard to question. Once we've agreed on what's valuable, how could we deny that it's better morally to secure more of what's valuable? This looks very plausible, but it's proved to be surprisingly problematic.

Consider this example, due to the philosopher T. M. Scanlon. Suppose that Jones has suffered an accident in the transmitter room of a television station. Electrical equipment has fallen on his arm, and we can't rescue him without turning off the transmitter for fifteen minutes. A World Cup match is in progress, watched by many people, and it will not be over for an hour. Jones's injury won't get any worse if we wait, but his hand has been mashed, and he's receiving extremely painful electrical shocks. Should we rescue him now or wait until the match is over? Does the right thing to do depend on how many people are watching, whether it's one million or five million or a hundred million? To put a finer point on the problem, in fact, over one billion people watched the last World Cup Final. Must a utilitarian conclude that poor Jones should be left to his fate? Consider this.
From a utilitarian perspective, preventing one death is a very good thing, but surely preventing very many severe mutilations can relieve more suffering, and so produce more value, than preventing a single death. And surely preventing a still larger number of somewhat less severe mutilations is more valuable than preventing the comparatively smaller number of severe mutilations, and so on. Following this reasoning, we'll eventually arrive at a comparatively minor harm, a headache, say, that, if suffered by a vast enough number of people, will be worse than a comparatively much smaller, but still vast, number of somewhat more serious harms, maybe sprained ankles.

But notice that "less valuable than" is transitive. If A is less valuable than B, and B is less valuable than C, then A is less valuable than C. So we seem to have arrived at the conclusion that preventing a vast enough number of headaches can produce more value than saving a life, and, to get back to poor Jones, that avoiding fifteen minutes of frustration for one billion soccer fans may be more valuable than preventing an additional hour of pain for Jones. If the second half of the utilitarian thesis, the theory of right action, is correct, and the morally right action is the one that maximizes value, it seems we're morally obligated to leave poor Jones to suffer. Can that possibly be right?

It sounds wrong, but it's worth noting that we in fact make tradeoffs like this all the time. We raise the speed limit for the sake of minor convenience for millions of people, even though it means more deaths on the highway. We fund research into athlete's foot treatments when we could instead pay for research into the cure of some very fatal but very rare disease. We direct some aid money into programs that target deworming, which benefits a lot of people a little, rather than programs that prevent death for a much smaller number of people.

But in spite of this, many people will feel that utilitarianism has advised us wrongly in the Jones case. There are some things, they will think, which we may not do, or allow to happen to people, even for the sake of maximizing total value. In other words, people have a right not to have their interests sacrificed for the greater good in some circumstances. Such people might also object, for example, to the use of torture to get potentially life-saving intelligence.
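The two formal moves in the lecture, choosing the action with the greatest expected value and chaining "less valuable than" through transitivity, can be written out symbolically. Here is a minimal sketch in LaTeX; the notation (v, p, the relation \prec, and the counts n_1, ..., n_k) is mine, not the lecture's:

% Theory of right action under uncertainty: v(o) is the value of outcome o,
% and p(o | a) is the probability of outcome o given action a. The right
% action a* is the one with the greatest expected value:
\[
  a^{*} = \arg\max_{a} \sum_{o} p(o \mid a)\, v(o)
\]
% The spectrum argument: write X \prec Y for "preventing X produces less
% value than preventing Y." For large enough counts n_1, n_2, ..., n_k:
\[
  1\ \text{death} \prec n_1\ \text{severe mutilations}
    \prec n_2\ \text{milder mutilations} \prec \cdots
    \prec n_{k-1}\ \text{sprained ankles} \prec n_k\ \text{headaches}
\]
% Since \prec is transitive,
\[
  (X \prec Y) \wedge (Y \prec Z) \Rightarrow X \prec Z,
\]
% the chain collapses to the conclusion the lecture finds troubling:
\[
  1\ \text{death} \prec n_k\ \text{headaches}
\]

On this rendering, resisting the conclusion requires denying either that each link in the chain holds for some sufficiently large count, or that \prec is transitive. The Jones case has the same structure, with fifteen minutes of frustration for a billion fans playing the role of the headaches.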