Wednesday, October 12, 2011

Don't Limit Utilitarianism

In class today, my philosophy of law professor repeatedly defined utilitarianism as the family of philosophies holding that a) the welfare of sentient beings matters tremendously for ethical thinking, and b) nothing else matters at all. That's a good, accurate definition (I'm paraphrasing Part A slightly with the language about sentient beings, but it's the gist of what he said). The problem is what he did with it. He claimed, for instance, that under a utilitarian view one would have to approve of two people twisting a child's arm, inflicting pain on that child, just because the two of them get a kick out of it. Two people having fun, one person in pain: a net win, so he says. But there's a serious problem with this, because in making that assertion he's using the word "utilitarianism" in a much more restrictive sense than his definition warrants.
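
To make his arithmetic explicit, here is a minimal sketch of that calculus in Python. The specific numbers are my own invention, purely for illustration; nothing here came from class:

    # The strictly additive calculus: sum welfare changes across everyone
    # affected, and approve anything with a positive total.
    def additive_utility(welfare_changes):
        return sum(welfare_changes)

    # Hypothetical values: +1 for each of the two who enjoy it,
    # -1 for the child whose arm is twisted.
    arm_twisting_for_fun = [+1, +1, -1]
    print(additive_utility(arm_twisting_for_fun))  # 1, a "net win" on this calculus

Notice that his "net win" only goes through on this particular way of tallying the score.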

In order to reach the kinds of conclusions he reached, you have to define a kind of utilitarianism that is, roughly speaking, strictly additive. That is to say, you're talking about an ethical calculus in which you just say: X net well-being for this person, Y for that one, Z for the next, and so on down the line; add up all of those values, and then maximize the result. That is, to be sure, a utilitarian kind of ethics. But it is not the only one. The only requirement of a utilitarian ethics, as I see it, is that your utility function be a function of the welfares of sentient beings, and nothing else. That gives you a lot of discretion! For one thing, we haven't yet assumed that suffering and happiness are to be measured in the same units. Maybe we adopt a utility function that highly prioritizes the avoidance of pain. In that case, you shouldn't twist the kid's arm, because our utility function doesn't value your pleasure as highly as it values avoiding the kid's pain. In fact, we could have a utility function on which an increase in pain for one individual can never be justified by an increase in pleasure for another individual, but only by a commensurately large reduction in pain for others. On that view, twisting the kid's arm to save the world, or even to save one life, would be justified; but even if all seven billion people in the world would get a tremendous kick out of seeing this little kid's arm twisted, that's not enough to justify it. Again, we're caring about nothing but the welfares of sentient beings, and we've just avoided his result.
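
Here is a sketch of two such utility functions. Everything in it is an illustrative assumption of mine: the representation of each person's welfare change as a (pain, pleasure) pair, the weight of 10, and the particular numbers. Both functions look at nothing but the welfares of sentient beings, both refuse to twist the arm for fun, and the second still permits twisting it to prevent greater pain:

    # Each affected being gets a (pain_delta, pleasure_delta) pair; a
    # negative pain_delta means more pain, a positive one means pain relieved.

    def pain_weighted_utility(effects, pain_weight=10.0):
        # Still additive, but each unit of pain counts ten times as much
        # as a unit of pleasure (the weight is purely illustrative).
        return sum(pain_weight * pain + pleasure for pain, pleasure in effects)

    def pain_priority_utility(effects):
        # Lexicographic rule: compare total pain first, and let pleasure
        # break ties only. No amount of pleasure ever outweighs new pain,
        # but pain relieved elsewhere can.
        total_pain = sum(pain for pain, _ in effects)
        total_pleasure = sum(pleasure for _, pleasure in effects)
        return (total_pain, total_pleasure)  # tuples compare lexicographically

    twist_for_fun = [(0, +1), (0, +1), (-1, 0)]   # two amused, one child hurt
    twist_to_spare = [(-1, 0), (+10, 0)]          # child hurt, someone spared worse pain
    do_nothing = [(0, 0)]

    print(pain_weighted_utility(twist_for_fun))   # -8.0: worse than doing nothing
    print(pain_priority_utility(twist_for_fun) > pain_priority_utility(do_nothing))   # False
    print(pain_priority_utility(twist_to_spare) > pain_priority_utility(do_nothing))  # True
    # Even seven billion (0, +1) spectators leave total pain at -1, so the
    # lexicographic verdict on arm-twisting-for-fun never flips.

The tuple comparison is just one convenient way to encode "pain first, pleasure only as a tiebreaker"; the point is only that such a rule is still a function of welfares and nothing else.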

Simply saying the word "utilitarian" really doesn't commit you to very much about your ethical philosophy. All it commits you to is this: never argue that X is good unless you can translate that into an argument that X will increase the welfare of at least some sentient beings, and can plausibly be claimed to increase the welfare of sentient beings in the aggregate; and never argue that Y is bad unless you can translate that into the corresponding argument that Y will decrease the welfare of sentient beings. That's all. The specific rule for adjudicating between competing interests is not settled simply by our deciding to be utilitarians. Within the big tent of utilitarianism we can have reasonable arguments about how to tally up the score, and about when X outweighs Y. But we are not committed to the view that we should twist that little child's arm, not even remotely.
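
If it helps, the whole shared commitment can be put as a type signature rather than as any particular formula. This is my gloss, not standard terminology: a utilitarian rule is any function whose only input is the profile of welfare effects, with the body of the function left entirely open for argument:

    from typing import Callable, Sequence, Tuple

    # (pain_delta, pleasure_delta) per affected being, as in the sketches above.
    Effects = Sequence[Tuple[float, float]]

    # The whole utilitarian commitment, expressed as a type: a verdict must
    # be computed from welfare effects and nothing else. The return type is
    # loose on purpose -- anything comparable will do, which is why the
    # additive rules and the lexicographic tuple rule above all qualify.
    UtilityRule = Callable[[Effects], object]

The signature is the commitment; which function fills it in is exactly what utilitarians get to argue about.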
