Here is a link to an excellent hour-long radio show about morality that was recommended to me by Aeneas in a comment on this blog.
WNYC.org-RadioLab: Morality (April 28, 2006)
The show starts off by posing two moral problems that appear to be the same but are consistently decided in opposite ways by 90% of those asked. Later in the show they return to the lab that does functional brain-scan studies using these kinds of moral questions, and they look at the results of posing a true moral paradox, one on which respondents' decisions split about 50/50.
What the interviewers emphasized about the moral problems at the beginning of the show was the specific similarity in how the problems are posed, not the key distinction between them. Here's the gist of the problems:
There are five workers on a set of train tracks and a train is coming, but they are completely unaware of its approach, unable to become aware of it, and you have no means of communicating with them. You are certain that they will all be killed if the train hits them.
Variation 1: You have a lever for a track switch that will divert the train onto a siding, but there is one worker on that track who will be killed.
Question: Do you pull the lever?
Variation 2: You are on an overpass over the train tracks, between the workers and the train, with a very large man who happens to be standing on the edge of the overpass, such that if you push him over he will fall to his death on the tracks but stop the train from killing the five workers.
Question: Do you push the man?
As I said, the interviewers emphasized the idea that these problems are similar, as if what really matters is simply the math of sacrificing one to save five. This is a mistaken view of what is important in deciding a moral question, and it makes the result that 90% of respondents answer "yes" to the first question and "no" to the second seem odd. The mistake is in assuming, as the Western Philosophical Tradition has for a very long time, that we are mere rule-following moral calculators: if you can calculate the cost of every action against the cost of all the alternatives, then you can arrive at a solution that maximizes the return on the moral risks taken.
Our moral concepts are complex and, as pointed out in the second part of the show, derive from an evolutionary history that preceded our analytical abilities. They are based on experiences of well-being and the most effective ways to achieve those experiences in a variety of situations whose logical structures may be mutually exclusive. Thus we are able to use the moral-calculation metaphor to guide our thoughts when people we do not know are in a situation in which we cannot interact with them and our only possible intervention is an isolated lever-pulling action. But when we can sense the presence of a real person with whom we can sympathize, our thoughts are guided by deep, unconscious conceptual metaphors that dictate against sacrificing the immediate, compelling interest of our compatriot on the overpass in continuing to live.
In the moral paradox presented later in the program, the choice is between the death of your whole village, who are hiding together from enemy soldiers, and killing your own sick baby, whose coughing will alert the soldiers to the village's hiding place. Respondents split about 50/50 between killing the baby and letting the whole village die.
I predict that the first train scenario would elicit something closer to a 50/50 response if it were posed as pulling a lever that kills your mother on the one track versus your five best friends on the other. Likewise, if the second scenario presented the choice of pushing Hitler, Osama bin Laden, or another infamous mass killer off the overpass to save your five best friends, the distribution of answers might well be different.
What counts in moral deliberation is not the objective "facts" of the situation, but the understanding of the person in the situation at the moment they are called to make a decision. If they understand the problem as pulling a lever to save a lot of people, with no further information, then it's easy. If they understand that they have to look their own child in the eyes and then kill him or her, then it doesn't matter what else is at stake; normal people will resist making that choice. What is probably most important in the decision-making process is how the person imagines their experience of taking the actions they understand are available, and, secondarily, what consequences they can then imagine following from those imagined actions.
If we want to help people make morally appropriate choices, then we have to help them develop better ways to assess their own understanding of the situations they face and to better imagine themselves taking courageous, kind, compassionate, or otherwise virtuous courses of action. This is a substantial change from the traditional Western Philosophical view that we, as moral calculators, just need the right set of rules and an appropriate system of punishments and rewards for obedience to them. Looking at the 90% of people who answer the first two questions in opposite ways, the only problem is in the questions, not the answers. The questions activate very different experiences in the imagination of the decision maker, and they provide very limited choices; real life is rarely, if ever, that simple. While the questions are undoubtedly good for brain-imaging studies that want to isolate different moral-processing activity in the brain, they are not very good for gaining insight into real moral decision-making skills, nor for helping people improve them.