Knowledge, Reality, and Value Book Club Replies, Part 4
Here are my reactions to your latest round of comments.
The more I’ve discussed moral philosophy with people, the more I suspect lots of people are just intuitionists in denial. Scott Alexander, who calls himself a utilitarian, was admirably upfront about this when he wrote:
It seems to boil down to something like this: I am only willing to accept utilitarianism when it matches my moral intuitions, or when I can hack it to conform to my moral intuitions. It usually does a good job of this, but sometimes it doesn’t, in which case I go with my moral intuitions over utilitarianism.
Which raises the obvious question – why not just say you’re an intuitionist then and save yourself all the effort? A lot of brainpower is spent by very smart people trying to find ways to hack their ethical theories in the way Scott describes. Most of these efforts just seem like obvious attempts at reverse engineering an explanation to get to the desired and intuitive conclusion, and the obviousness of this backwards reasoning only further discredits that ethical theory to anyone who doesn’t share it.
I’d say, and I think Huemer would agree, that intuitionism is a meta-ethical theory while utilitarianism is a first-order ethical theory. So technically, you could be an intuitionist utilitarian or a non-intuitionist non-utilitarian. Substantively, though, you’re right. In the Huemerian framework, the correct formulation is that most so-called utilitarians are “moderate deontologists in denial.” And as your “reverse engineering” point suggests, utilitarianism frequently degrades the quality of social science, because one of the easiest ways for utilitarians to retain their ethical theory is to embrace implausible factual beliefs.
It’s hard to see how demandingness is a particularly strong objection to utilitarianism. All moral theories can be extremely demanding in certain circumstances.
Yes. The point is that utilitarianism is demanding in almost all real-world situations, not just odd hypotheticals. It’s not implausible that you would have extreme moral duties in an emergency. What’s implausible is that you have extreme moral duties almost all the time. (Alternatively, that we’re almost always in a moral emergency.)
John’s child is dying from cancer. He cannot afford to fly her to a country offering pioneering treatment. Let’s assume it has a reasonably high chance of saving her life. Can John hack into Bill Gates’ bank account and steal the money he needs?
I take it that Caplan thinks this would be wrong; and yet what could possibly be more demanding than a moral principle which requires you to step back and watch your own child die from cancer? In fact, utilitarianism would probably be less demanding in this case, since you may well increase overall utility by stealing the money.
Actually, I don’t think this would be wrong. Like Huemer, I endorse a moderate deontological view that allows rights violations when the benefits vastly exceed the costs.
On utilitarianism, Huemer writes: “To a first approximation, you have to give until there is no one who needs your money more than you do.” But, no, this is not a consequence of utilitarianism. Huemer is overlooking the fact that in utilitarianism future people count just as much as present people. If I invest my surplus wealth, I will benefit future people, probably doing more good than if I gave it away to people who will consume it immediately.
First of all, this objection only works if you’re choosing between investment and charity. It does not work if you’re choosing between consumption and charity. So utilitarianism still requires you to bring your consumption down to near-zero.
Second, as long as people in the future are likely to be much richer than people today, the law of diminishing marginal utility means that it could still easily be better to give to charity today. (This is also the least-bad objection to Robin Hanson’s view that effective altruists should take all the money they would have given to charity and invest it, à la Benjamin Franklin, in order to help a vastly larger number of recipients in the far future.)
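To make the diminishing-marginal-utility point concrete, here is a minimal back-of-the-envelope sketch. The logarithmic utility function, the investment return r, and the recipients’ consumption growth rate g are illustrative assumptions of mine, not figures from Huemer or the comment. A dollar given today to a recipient consuming c_0 raises utility by roughly u'(c_0) = 1/c_0. A dollar invested at return r for T years and then given to a future recipient, whose consumption has meanwhile grown to c_0(1+g)^T, raises utility by (1+r)^T / [c_0(1+g)^T]. Comparing the two:

\[
\underbrace{\frac{1}{c_0}}_{\text{give now}}
\;\gtrless\;
\underbrace{\frac{(1+r)^T}{c_0\,(1+g)^T}}_{\text{invest, give later}}
\qquad\Longleftrightarrow\qquad
g \gtrless r .
\]

Under these assumptions, giving today beats investing whenever recipients’ consumption grows faster than investments do, which is precisely the “much richer future people” scenario.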
(Also, given the inherent self-centeredness of virtually all people, the expectation of giveaways would undermine the recipients’ incentives to be productive.)
Almost all utilitarians would agree that government shouldn’t tax people into penury, because doing so creates bad incentives. But these bad incentives only exist because almost no one is a consistent utilitarian! A consistent utilitarian would keep working hard despite the lack of selfish payoff.