Ethics: Utilitarianism, Part 2

Video transcript

(intro music) Hi, I'm Julia Markovits, and I'm an associate professor of philosophy at MIT. Today, I'm going to talk about utilitarianism. We can break the utilitarian thesis up into two parts: a theory of what is valuable, and a theory of right action given what's valuable. First, the theory of what's valuable. It says that the only thing that's valuable in its own right is happiness and the absence of suffering. Second, the theory of right action. The right action is the one that maximizes, produces the most of, what's valuable, or if that's uncertain, that produces the most expected value. If you put those two pieces, the theory of what's valuable and the theory of right action given what's valuable, together, you get utilitarianism.

Utilitarianism has a lot going for it. It's a very simple, theoretically elegant theory that has universal application. It's built on a value, happiness, that is at least extremely widely shared. Almost everyone, in fact, values happiness. In a way, it's completely egalitarian. In the utilitarian calculus, each person's happiness counts for as much as anyone else's. There's something very intuitive about the thought that happiness is valuable, and the more we make of what's valuable, the better. And as we've seen, embracing these thoughts led Bentham, at least, to important moral insights at a time when many around him were blind to those insights.

But both parts of the utilitarian thesis also raise some worries. One set of worries concerns the utilitarian theory of value. A lot of people have disputed that only happiness is valuable and only suffering disvaluable. Couldn't we be happy even though we're massively deluded about our lives? Maybe the people we think are our friends really despise us, and the work we think is a success is really widely derided. In that case, we might still be happy, but surely our lives would be lacking much that is valuable.
These worries can be avoided to some extent by revising the utilitarian theory of value. Maybe it's not just happiness, but well-being more broadly understood, that's valuable. It's a tricky problem to figure out exactly what's valuable, but I will set that problem aside here.

I want to focus instead on a problem facing the second half of the utilitarian thesis, the theory of right action. This part of the thesis looks particularly hard to question. Once we've agreed on what's valuable, how could we deny that it's better morally to secure more of what's valuable? This looks very plausible, but it's proved to be surprisingly problematic.

Consider this example, due to the philosopher T. M. Scanlon. Suppose that Jones has suffered an accident in the transmitter room of a television station. Electrical equipment has fallen on his arm, and we can't rescue him without turning off the transmitter for fifteen minutes. A World Cup match is in progress, watched by many people, and it will not be over for an hour. Jones's injury won't get any worse if we wait, but his hand has been mashed, and he's receiving extremely painful electrical shocks. Should we rescue him now or wait until the match is over? Does the right thing to do depend on how many people are watching, whether it's one million or five million or a hundred million? To put a finer point on the problem, in fact, over one billion people watched the last World Cup Final. Must a utilitarian conclude that poor Jones should be left to his fate?

Consider this. From a utilitarian perspective, preventing one death is a very good thing, but surely preventing very many severe mutilations can relieve more suffering, and so produce more value, than preventing a single death. And surely preventing a still larger number of somewhat less severe mutilations is more valuable than preventing the comparatively smaller number of severe mutilations, and so on.
Following this reasoning, we'll eventually arrive at a comparatively minor harm, a headache, say, that, if suffered by a vast enough number of people, will be worse than a comparatively much smaller, but still vast, number of somewhat more serious harms, maybe sprained ankles. But notice that "less valuable than" is transitive. If A is less valuable than B, and B is less valuable than C, then A is less valuable than C. So we seem to have arrived at the conclusion that preventing a vast enough number of headaches can produce more value than saving a life, and to get back to poor Jones, that avoiding fifteen minutes of frustration for one billion soccer fans may be more valuable than preventing an additional hour of pain for Jones.

If the second half of the utilitarian thesis, the theory of right action, is correct, and the morally right action is the one that maximizes value, it seems we're morally obligated to leave poor Jones to suffer. Can that possibly be right?

It sounds wrong, but it's worth noting that we in fact make tradeoffs like this all the time. We raise the speed limit for the sake of minor convenience for millions of people, even though it means more deaths on the highway. We fund research into athlete's foot treatments when we could instead pay for research into a cure for some deadly but very rare disease. We direct some aid money into programs that target deworming, which benefits a lot of people a little, rather than programs that prevent death for a much smaller number of people.

But in spite of this, many people will feel that utilitarianism has advised us wrongly in the Jones case. There are some things, they will think, which we may not do or allow to happen to people, even for the sake of maximizing total value. In other words, people have a right not to have their interests sacrificed for the greater good in some circumstances. Such people might also object, for example, to the use of torture to get potentially life-saving intelligence.
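The aggregation argument above can be made concrete with a little arithmetic. The sketch below is purely illustrative: every numerical magnitude in it is an assumption invented for the example (the lecture assigns no numbers), and it only shows how summing harm across persons, plus the transitivity of "less valuable than," yields the counterintuitive conclusion.

```python
# Illustrative utilitarian aggregation. All harm magnitudes below are
# invented assumptions on an arbitrary scale, not figures from the lecture.

def total_disvalue(people, harm_per_person):
    """Utilitarian aggregation: total suffering is summed across persons."""
    return people * harm_per_person

# Assumed per-person harm magnitudes (arbitrary units).
death = 1_000_000.0
severe_mutilation = 10_000.0
sprained_ankle = 10.0
headache = 0.1

# Step 1: enough severe mutilations outweigh one death.
assert total_disvalue(200, severe_mutilation) > total_disvalue(1, death)

# Step 2: enough sprained ankles outweigh those mutilations.
assert total_disvalue(500_000, sprained_ankle) > total_disvalue(200, severe_mutilation)

# Step 3: enough headaches outweigh the sprained ankles.
assert total_disvalue(100_000_000, headache) > total_disvalue(500_000, sprained_ankle)

# By transitivity of "less valuable than," the chain closes:
# preventing the headaches produces more value than preventing the death.
assert total_disvalue(100_000_000, headache) > total_disvalue(1, death)

# The Jones case, with assumed magnitudes: a billion fans' fifteen minutes
# of frustration versus one extra hour of Jones's severe pain.
frustration = 0.5       # per fan (assumed)
jones_pain = 50_000.0   # one extra hour of electrical shocks (assumed)
fans = 1_000_000_000
print(total_disvalue(fans, frustration) > total_disvalue(1, jones_pain))
```

On these (stipulated) numbers the final comparison comes out in the fans' favor, which is exactly the result the objection targets: no matter how small the per-person harm, some number of people makes the aggregate dominate.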