I don’t predict what is going to happen when I watch a film. It isn’t like I can’t, it just doesn’t occur to me. When the bad guy turns out to be a good guy (or vice versa) my friends will say “Well that was obviously going to happen”. But it wasn’t obvious to me.
If you had described the salient facts to me, given me a plot summary, I would be able to make the correct prediction, I’m sure, but something about the way my brain works stops me making the leap from the level of experience to the level of description. I am stuck just experiencing the events of the film, and not representing them in a way that would allow me to draw obvious conclusions.
Let’s call this ability ‘narrative extraction’.
I’ve got some smarts, sure, but I think I’ve got a deductive kind of smarts. This is the kind of smarts that can take a set of facts, or axioms, and crunch through the consequences until you get to the inevitable result. I’m good at maths and most logic puzzles. I think narrative extraction requires a different kind of smarts. It is the ability to pick an appropriate set of facts, or an appropriate method of description, which will provide you with an answer that serves your purposes.
For the film example you need to do more than just experience the characters: you need to classify them by their types, the film by genre, the plot by template, and from all that infer what would be the most likely thing to happen in an exciting film.
Moral reasoning requires the same kind of smarts. There’s a famous test of moral reasoning by Kohlberg, where children are presented with vignettes (“Your wife is sick and you cannot afford medicine. Should you break into the pharmacy and steal it?” type things). Kohlberg ranked children’s moral reasoning, giving the most credit to moral reasoning which invoked logical deductions from abstract moral frameworks.
Gilligan, in her book “In a Different Voice”, has a powerful critique of Kohlberg’s system, on the grounds that it gave credit to one kind of reasoning – abstract logical deduction or calculus (e.g. “Stealing is wrong, but letting your wife die is worse, so I should therefore steal the medicine”) – and not to another, more contextually sensitive kind of reasoning (e.g. “If I break into the pharmacy then I might get caught and then I won’t be able to help my wife, so I should find another way of getting the medicine”). This sensitivity to what is not in the question – what is not explicitly stated – is a part of narrative extraction.
There is another important, perhaps more primary, way in which narrative extraction is required for moral judgement. Kohlberg’s vignettes are not just logic problems, which can be solved well or badly; they are also descriptions of the world. Thus they do one of the major tasks of moral reasoning for you – that of going from the nebulous world of experience to the concrete world of categories and actions.
As soon as you describe the world you massively constrain the scope for moral reasoning. You can still make the wrong judgement, but you have made moral reasoning possible by the act of description using moral categories.
Milgram demonstrated scientifically the banality of evil: that normal people could do inhuman things. Did those people who thought they were delivering lethal electric shocks make an incorrect moral judgement? Did they weigh “doing what you are told” against “the life of an innocent” and choose the former? My intuition is that they did not, not explicitly. Yes, they made the wrong choice (we too would probably have made the wrong choice), but I believe that they were so caught up in the moment, in the emotion of the situation, that they did not move to the necessary level of description. We, reading this in comfort, are given the moral categories, and the right choice is so obvious that we have difficulty empathising with their situation. The narrative extraction has been done for us, so the right thing seems obvious. But it isn’t.