Tag Archives: Philosophy

What is Ethics good for?

A classic question in ethics is ‘why be moral?’. My usual answer has been ‘because you want to’. A correct understanding of the nature of morality makes ‘why be moral?’ sound like ‘why like ice-cream?’. Morality is not some alien creation that we need an independent reason to accept. It is a system that subsists within our desires.

‘X is good’ is pretty much shorthand for ‘I desire X to obtain’. Moral language is a projection of our attitudes onto the world, allowing us to reason about them as if they were properties of the world. Moral reasoning is simply us figuring out how best to achieve what we want. It says, for example: if you don’t like little girls dying, then don’t like setting orphanages on fire, because that causes little girls to die. Morality gets more abstract the more you think about it. You might go on to decide that it’s not little girls dying per se that you don’t like; it’s the loss of a distinct mind. Realising this, you will want to start disliking things that you never considered before, such as the death penalty.

Our desires have logical consequences that we can (in theory) discover. Morality is the sum of all our values and their logical consequences. There are objective moral truths of which we are ignorant, in the same sense as there are objectively winning games of chess that no one has ever played. They exist out there in logic land. At its best, Ethics allows us to work out this logic and get more of what we want (such as fewer dead girls).

Ethics is not about dutifully submitting to something outside you; it’s about rationally maximizing what you want. In fact I think morality is just a special subset of practical rationality, one that deals with concerns like helping others rather than concerns like getting to work on time.

However, there are complications with the above account. There are two ways something can be good. Firstly, there is simple projection: ice cream makes me feel warm and fuzzy, so, as well as being cold and organic, ice cream is ‘good’. Secondly, there are the various extrapolations we make from such simple intuitions. For example, I might examine why ice cream is good and conclude that the reason ice cream is good is that pleasure is good.

Note that ‘reason’, as used in the last sentence, does not mean ‘cause’. When we say things like ‘the reason why ice cream is good is that pleasure is good’ we are providing a justificatory, normative reason, explaining what it is we value about ice cream. However, ‘pleasure being good’ is not what really makes you value ice cream, i.e. the causal reason. That is some long story involving stimulation of the olfactory system and evolution.

What causes ice cream’s goodness (the causal reason), i.e. our projected liking, is thus a different type of thing from the explanation we give for its goodness (the normative reason). Ethical theory trades in the normative. A consequence of this is that reasoning can deliver normative ‘goods’ that we don’t actually desire, because they fail to pull the correct levers in our psychology. This messes with my original story about ethics being a way to get what you want. We can end up with a break between what we feel is good and what we reason is good.

I basically accept preference utilitarianism as the correct normative account of morality, yet I don’t feel much motivation to follow its prescriptions. I put this down to the badness of human (or at least my) psychology. Another response would be to say I’m wrong about utilitarianism and just strip my normative theory down to match my feelings, as moral particularists do. Either way, ethical reasoning doesn’t look that good: with the first approach Ethics is inconsequential, with the second it is redundant.

This is probably too strong though. I don’t think I have shown ethics is completely useless, rather that Plato was wrong that to know the good is to will the good. We in fact end up with two systems of morality: the set of our intuitive moral judgments, and the more formal ethical system that arises from it. This second system may have its uses, but it will work quite differently from the first. My wanting to alleviate the pain of the little girl on fire in front of me is quite different from my wanting to alleviate the pain of future persons by donating 10% of my income to tech research. The first is brute desire, which demands little explanation. With the second it makes sense to ask ‘why be moral?’, where ‘why’ is used in the practical, deliberative sense.

So what can formal ethics do? Although I doubt it can generate or suppress natural moral passions, it may come into play where our desire is ‘to do the right thing’ but the right thing is not specified. In other words, when we simply want to ‘be moral’ rather than act on any particular moral impulse like pity. For example, accepting utilitarianism won’t make me wince with guilt whenever I spend a dollar on ice cream, but if I feel like giving to charity so I can feel virtuous, ethics may lead me to choose to feed starving humans rather than re-home kittens. I will say more about this in the next post.


Hedging your Ethical Bets

When trying to do good, people often take actions they expect to be pointless because there is a small possibility of a great good. Taking seriously a report of poison in the water supply, though it is likely a hoax, is an example. Here we are confident about our moral judgment, that people dying is bad, but uncertain about the state of the world.

There are also cases where we are confident about the state of the world but not about our moral beliefs. On the face of it, we should be more willing to act according to low-probability moral principles when the cost of being wrong is high. Take animal welfare as an example. I don’t feel bad about how we use animals, nor am I much convinced by the particular arguments against doing so. On the other hand, I don’t have any strong principled reason against counting animals as moral patients. If I am wrong, then I may be committing a grave error, similar in kind to that of those who disregarded other races in the past. If animal welfarists are wrong, then they are committing a much less important error by forgoing many of the useful goods animals provide.* There is thus a case for, say, avoiding Halal meat on moral grounds without believing it to be wrong.
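The asymmetry here is essentially an expected-value calculation over moral uncertainty. A minimal sketch of that arithmetic, where the credence and the cost figures are entirely made-up illustrative assumptions rather than anything the argument commits to:

```python
# Treat moral uncertainty like ordinary uncertainty and compare the
# expected moral cost of each policy. All numbers are illustrative.
p_animals_matter = 0.10        # my (low) credence that animal suffering counts

harm_if_wrong_eating = 100     # grave error: ignoring real moral patients
cost_if_wrong_abstaining = 5   # minor error: forgoing useful goods

# Expected moral cost of each policy, given the credence above.
ev_eating = p_animals_matter * harm_if_wrong_eating
ev_abstaining = (1 - p_animals_matter) * cost_if_wrong_abstaining

# Even at 10% credence, abstaining carries the lower expected moral cost,
# because the stakes are so lopsided.
print(ev_eating, ev_abstaining)
```

Nothing hangs on the particular figures; the point is only that a large enough asymmetry in stakes can outweigh a low credence.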

Unfortunately, this is harder to do than the first kind of moral mathematics. Moral ‘belief’ works differently from belief about the world. I can quite comfortably expect rain with 20% confidence, clouds without rain at 30%, etc. However, moral thinking is about reconciling our feelings with our talk about morality. Driven on by cognitive dissonance, it tends to jump between reflective equilibria rather than assign probabilities. I psychologically can’t ‘10%-care’ about animals on realising that I haven’t countered all of Peter Singer’s arguments. Either I’m convinced or I’m not. The best I might manage is an intellectual understanding that, given my other values, I ‘should’ care about animal welfare, even if I can’t represent that emotionally.

Taking our uncertainty more seriously would have some interesting consequences:

– We might adopt less plausible but cheaply enacted principles over more plausible but more stringent ones.

– Given all the disagreement in ethics, we would likely end up with a very messy pluralism. In my example I was thinking of a consequentialist factoring in the chance that animal suffering has some value when deciding what to do; a relatively simple matter. However, suppose I was also slightly impressed by deontological arguments against using animals as means. Different normative systems are a lot harder to integrate than simply adding an extra value to your consequentialist calculations. Certain concerns are incommensurable and will require a leap, perhaps drastic, one way or the other.

– While still arguing about which theories are best, ethicists could in the meantime spend some time developing meta-procedures to deal with the uncertainty in a principled way.

*I’m thinking of a weak position in this example: that animal mental states have value > 0. Many pro-animal positions, especially those based on rights, are very costly when logically followed through.