When trying to do good, people often take actions they expect to be pointless because there is a small chance of doing great good. Taking seriously a report of poison in the water supply, though it is likely a hoax, is an example. Here we are confident in our moral judgment (that people dying is bad) but uncertain about the state of the world.
There are also cases where we are confident about the state of the world but not about our moral beliefs. On the face of it, we should be more willing to act according to low-probability moral principles when the cost of being wrong is high. Take animal welfare as an example. I don’t feel bad about how we use animals, nor am I much convinced by the particular arguments against doing so. On the other hand, I don’t have any strong principled reason against treating animals as moral patients. If I am wrong, then I may be committing a grave error, similar in kind to that of those who disregarded other races in the past. If animal welfarists are wrong, they are committing a much less important error by forgoing the many useful goods animals provide.* There is thus a case for, say, avoiding Halal meat on moral grounds, without believing it to be wrong.
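The asymmetry here is just an expected-value calculation. A minimal sketch, with entirely hypothetical magnitudes (the 10% credence and the cost figures are illustrative, not claims about the actual stakes):

```python
def expected_moral_cost(p_animals_matter, cost_if_wrong, cost_of_abstaining, abstain):
    """Expected moral cost of a policy, given some credence that animals matter."""
    if abstain:
        # Abstaining forgoes the useful goods regardless of the moral truth.
        return cost_of_abstaining
    # Using animals is only a moral cost if animal suffering in fact matters.
    return p_animals_matter * cost_if_wrong

# Hypothetical numbers: 10% credence, a grave error if wrong (100),
# a mild cost of abstention (5).
cost_of_using = expected_moral_cost(0.10, 100, 5, abstain=False)   # 10.0
cost_of_abstaining = expected_moral_cost(0.10, 100, 5, abstain=True)  # 5.0
```

On these numbers, abstaining minimises expected moral cost even though I think it more likely than not that animals don’t matter; the conclusion is only as good as the assumed magnitudes.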
Unfortunately, this is harder to do than the first kind of moral mathematics. Moral ‘belief’ works differently from belief about the world. I can quite comfortably expect rain with 20% confidence, clouds without rain at 30%, and so on. Moral thinking, however, is about reconciling our feelings with our talk about morality. Driven on by cognitive dissonance, it tends to jump between reflective equilibria rather than assign probabilities. I psychologically can’t ‘10%-care’ about animals upon realising that I haven’t countered all of Peter Singer’s arguments. Either I’m convinced or I’m not. The best I might manage is an intellectual understanding that, given my other values, I ‘should’ care about animal welfare, even if I can’t represent that emotionally.
Taking our uncertainty more seriously would have some interesting consequences:
- We might adopt less plausible but cheaply enacted principles over more plausible but more stringent ones.
- Given all the disagreement in ethics, we would likely end up with a very messy pluralism. In my example I was thinking of a consequentialist factoring in the chance that animal suffering has some value when deciding what to do; a relatively simple matter. However, suppose I were also slightly impressed by deontological arguments against using animals as means. Different normative systems are much harder to integrate than simply adding an extra value to your consequentialist calculations. Certain concerns are incommensurable and will require a leap, perhaps drastic, one way or another.
- While still arguing about which theories are best, ethicists could in the meantime spend some time developing meta-procedures to deal with the uncertainty in a principled way.
*I’m thinking of a weak position in this example: that animal mental states have value greater than zero. Many pro-animal positions, especially those based on rights, are very costly when followed through logically.