Computerized philosophers

A week ago they had a researcher on the radio talking about computational morality. I don’t remember if that’s what they called it, but that’s the gist of it: using logic to solve moral dilemmas. The example they gave was this: a train is running out of control down a track toward a group of people. You are in a position to flip a switch that will send it down another track, where there is only one person. If you act, one person will die. If you do nothing, many people will die. When posed with this question, most people say they would flip the switch.

However, if you change the question just slightly, people’s answers change. Instead of a switch, there is a very large person standing in front of you. If you push him into the train’s path, his body will stop it before it hits the other people, but he will die.

The computer sees these two cases as morally equivalent: in both, it is a choice between the death of one person and the deaths of several. Most people, on the other hand, would not actively kill one person to save the lives of several.
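To make that equivalence concrete, here is a minimal sketch, in Python, of the comparison the computer is doing. The function and the numbers are mine, for illustration; the point is that both dilemmas reduce to the same pair of inputs, so the code literally cannot tell them apart:

```python
def utilitarian_choice(deaths_if_act: int, deaths_if_wait: int) -> str:
    """Pick whichever option kills fewer people, ignoring how the deaths happen."""
    return "act" if deaths_if_act < deaths_if_wait else "do nothing"

# The switch and the push both reduce to the same inputs: one death vs. many.
print(utilitarian_choice(deaths_if_act=1, deaths_if_wait=5))  # "act", in either version
```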

The researcher went on to talk about how computers could help make people more rational by pointing out inconsistencies like this one. Now, my professional life is filled with statistics and probability, and I spend a lot of time arguing for rationality. But when it comes to morality, this is a case of garbage in, garbage out.

Humans are exceptionally well adapted to their environment, especially when it comes to surviving an emergency. So are rabbits, squirrels, and even squid. Millions of years of survival of the fittest tend to hone a brain pretty well. And for social creatures, the calculus extends to saving the lives of others.

The logic of the computer is pretty simple. Saving one life versus saving many. It’s so easy, an ant could do it with a single neuron. So why the different answers?

It all comes down to certainty. The question is posed as a choice between two certain outcomes. But in life there is no certainty. The certainty itself puts the question into the realm of the hypothetical, and brains are optimized for real-world use, not hypothetical situations.

The history of artificial intelligence is riddled with robots that could plot a perfect path from point A to point B but couldn’t tell a shadow from an obstacle, or that became perplexed when a sensor malfunctioned. Even in the most controlled conditions, it’s hard to completely eliminate ambiguity and its buddy, uncertainty.

In a real-world train situation, there’s no guarantee that the people standing on the tracks would die. They might manage to get out of the way, or they might only be injured. And when you act directly to cause a death, as with pushing the man, you are far more likely to succeed than when the death is merely a side effect of your action, as with flipping the switch.
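Since my day job is probability, here is a back-of-the-envelope sketch of how the arithmetic changes once the certainty is removed. Every probability below is invented purely for illustration:

```python
# Expected deaths in the two dilemmas, with invented probabilities.
P_TRACK_PERSON_DIES = 0.6   # someone standing on a track may get out of the way
P_PUSHED_MAN_DIES   = 0.95  # shoving someone into a train's path almost surely kills him
P_BODY_STOPS_TRAIN  = 0.3   # and his body might not even stop the train
GROUP_SIZE = 5

# Doing nothing is the same in both versions: the group takes its chances.
e_wait = GROUP_SIZE * P_TRACK_PERSON_DIES

# Switch version: the lone person on the side track faces the same odds as
# anyone standing on a track, and the group is saved.
e_switch = P_TRACK_PERSON_DIES

# Push version: the man dies almost surely, and if his body fails to stop
# the train, the group is still in danger.
e_push = P_PUSHED_MAN_DIES + (1 - P_BODY_STOPS_TRAIN) * GROUP_SIZE * P_TRACK_PERSON_DIES

print(f"do nothing:   {e_wait:.2f} expected deaths")    # 3.00
print(f"flip switch:  {e_switch:.2f} expected deaths")  # 0.60
print(f"push the man: {e_push:.2f} expected deaths")    # 3.05
```

With those made-up numbers, flipping the switch is clearly better than waiting, while pushing the man is no better at all. The moment certainty leaves the room, the two dilemmas stop being equivalent, and the “irrational” human answer starts to look like exactly the kind of probabilistic reasoning brains evolved to do.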