Sins of commission and the logic of omission

(As this is a long post, I’ve formatted it as a PDF so you can print it out and read it over a coffee if you prefer)

It’s a commonplace of both everyday and academic psychology that actively doing something bad is worse than failing to do something good. There seems to be a world of moral difference between actively killing someone — by poisoning their food, say — and allowing them to die (not giving someone who has been poisoned an antidote). The question is why this asymmetry exists between sins of commission and omission — what, psychologically, is the difference between the two, and why do we judge them differently even in cases when action and inaction lead to the same outcome (such as a person’s death)?

Peter DeScioli, Rebecca Bruening and Robert Kurzban take up these questions in a recent paper published in Evolution and Human Behavior. In their studies, DeScioli and colleagues use a range of moral scenarios to try to disentangle the factors that lead to the moral distinction between commissions and omissions. And as usual with these kinds of studies, getting a handle on their results, and what they mean, demands a close look at the scenarios involved. We’ll get to all that, but first a bit more background.

Explaining the omission effect

Why might commissions be seen as worse than omissions? One possible reason is that when someone actively does something to harm another person, they are implicated in the causal chain of events that leads to harm in a way that doesn’t apply to omissions. If I fatally poison someone, then my actions are clearly part of the causal process that leads to their death: if I wasn’t around, with my nefarious intentions, they wouldn’t have died. This isn’t the case if I withhold the antidote from someone who is dying of poisoning. Yes, my inaction allows them to die, but it doesn’t cause them to die in the way that poisoning does: if I wasn’t around — and no one else was — they would die anyway. So this causal asymmetry leads to an asymmetry in moral judgements. Call this the causality account of the omission effect.

There are other potential explanations of the omission effect. One relates to the fact that moral condemnation, and especially the punishment of moral transgressions, is often a communal affair. It can be costly to stand up on your own and hold wrong-doers to account, but these costs are shared when everyone else agrees that someone has been bad and deserves punishment. Therefore we might expect that when there’s strong, clear, publicly available evidence of wrongdoing, more people will be likely to agree that something illegal or immoral has occurred — in which case communal punishment is likely to follow. But when the publicly available evidence of wrongdoing is less clear, there’s likely to be less agreement, and consequently a reduced motivation to call for punishment.

Perhaps this sounds like stating the obvious: the better evidence we have that someone has behaved badly, the more likely we are to condemn them. But that’s not quite the point. What matters here is not private conviction about whether or not the evidence shows that someone did something immoral, but whether the publicly available evidence is likely to persuade others. Perhaps you have inside knowledge that someone has acted immorally, but if this evidence isn’t publicly available and others are not likely to join in your condemnation there’s little point in condemning it yourself and calling for punishment. In other words, our moral judgements about the behaviour of others might not just depend on what we believe, but on what we think other people will believe — and we can be more certain of what others will believe when we know there’s public evidence to support a particular moral conclusion.

Acts of commission versus omissions typically vary in the material evidence they leave about wrongdoing. If I poison someone, my fingerprints may be on the vial of poison, or there may be drops of poison on my clothes – the kind of evidence that puts me in the causal sequence leading to someone’s death, and which will get me convicted in a court of law. Omissions, on the other hand, do not typically leave a material trace that demonstrates my involvement in the poisoned person’s death (indeed, as the causality account highlights, it’s not clear that it’s right to say that I was causally involved in their death at all).

Omissions also provide fewer clues about intentions than commissions. If I add what I know to be a fatally toxic poison to someone’s drink, it’s pretty clear that my intention is to kill the person. But what if I fail to provide someone with a needed antidote? Did I do this because I wanted the person to die? Or did I simply not know that the antidote was there, or how to administer it? Perhaps I just panicked in the face of the medical emergency, and ran away to avoid the distressing scene of watching someone dying. Who knows?

Even if I’m certain of someone’s intentions in a particular case of omission, I can’t be sure that other people will be persuaded of this if there’s a lack of public evidence. As such, from the ‘communal punishment’ perspective, my condemnation should be correspondingly reduced when there’s less public evidence to support the claim of intentional wrongdoing.

This is precisely the idea that DeScioli and colleagues set out to test. As they state the hypothesis, “People condemn omissions less harshly than commissions because omissions produce little material evidence of wrong doing” (p206). This leads to the prediction that “the omission effect should be reduced or eliminated when public evidence shows that an omission was chosen” (p206). DeScioli and colleagues refer to the strength of evidence for a violation as its ‘public transparency’ (violations are transparent when the public evidence is strong, and opaque when it’s weaker or absent), and distinguish public transparency/opacity from “private confidence about whether a violation occurred” (p206) — a point we’ll return to.

But when are omissions ever transparent? That is, in which situations can you show that an omission was actively chosen — as opposed to merely being a default option — given that they typically do not leave an evidentiary trail pointing to causal involvement or intentions? This is where the team had to get creative, and invent some scenarios in which, they claim, this evidence can be clearly documented (we’ll get to the specifics of these in a moment).

Digging deeper

To recap, omissions typically differ from commissions in a number of ways: specifically, in the causal involvement of the person alleged to have done harm (less for omissions); ambiguity about intentions (greater for omissions); and public transparency (less for omissions). The scenarios DeScioli and colleagues came up with were designed to hold certainty about intent constant while varying causal involvement and transparency, to see how the latter two factors affect judgements of wrongness and punishment.

In all the cases they devised, someone gets killed, while someone else either actively causes this death or could have prevented it but doesn’t. (In all cases, the death of the person is a foreseeable consequence of action or inaction, which, the team claims, holds certainty about intentions constant — though I’ll return to this point below.) In other words, in some cases the death is a direct, causal consequence of a particular choice of action (these are commissions), and in others there is no such causal connection (these are the team’s omissions). The key innovation in these scenarios is the presence of a computer that records people’s choices as a matter of public record. This reveals whether the person selected an action causally related to the fatal outcome (a commission), or an action causally irrelevant to the outcome (an omission, on DeScioli et al.’s terminology). In addition, both the direct-causation and no-causation scenarios came in transparent and opaque versions. That gives four combinations: direct causation/transparent, direct causation/opaque, no causation/transparent, and no causation/opaque.

It will help to make these abstract descriptions concrete, and actually run through the scenarios used. The team created one set of four scenarios based around a demolition scene, and another involving an out-of-control train – I’m only presenting the demolition cases here, and relegating the train versions to a footnote. So here’s the material presented to subjects for the demolition cases:

A construction company is scheduled to demolish three adjacent buildings, which they have labeled A, B and C. Each of the buildings is wired with dynamite. Detonations are controlled by a central control board. A countdown timer indicates when the detonations will occur. The buildings are scheduled to explode one at a time every 30 minutes beginning at noon. The order of detonation is controlled by three buttons labeled A, B and C. Above the buttons is a sign that reads “Press the button to make the corresponding building FIRST in line for detonation.” When a button is pressed, a computer system records the decision. [NAME] is walking down the street when he comes across the control board. The control board reads “Next for detonation: Building B,” and the countdown timer currently reads “10 seconds.” [Illustrated in Figure 1]

This was followed by one of the following four endings:

1. Direct causality, transparent. “ANDREW sees a man standing next to Building C. ANDREW presses button C. Building C explodes and the man is killed.”

2. Direct causality, opaque. “BILL sees that his bicycle is located between Building A and Building B. BILL reaches to press button C to prevent damage to his bicycle. Before doing so, BILL sees a man standing next to Building C. BILL continues, pressing button C. Building C explodes and the man is killed.”

3. No causality, transparent. “CHRIS sees a man standing between Building A and Building B. CHRIS presses button B. Building B explodes and the man is killed.”

4. No causality, opaque. “DAN sees a man standing between Building A and Building B. DAN does nothing. Building B explodes and the man is killed.”

A quick note on the logic of these scenarios is in order. In case 1, the causation is considered direct: Andrew’s choice to press button C sets off a causal chain of events that leads to the death of an otherwise safe man. (This scenario, however, isn’t 100% clear, as there are two ways the man could be standing by building C: he could be in between C and B, or on the other side of C, with nothing else next to him. I’ll assume that most people make the second reading, which the authors presumably do too, as that’s the only situation in which he was not already imperilled.) Case 1 is also deemed to be transparent, as it seems clear that Andrew simply changes the detonation sequence to kill the man. Case 2 is similarly direct, and for the same reasons as in case 1, but was considered to be more opaque because of the complicating motive of Bill wanting to save his bike. Crucially, both cases 1 and 2 are supposed to represent commissions, by virtue of the fact that they involve direct causation.

Cases 3 and 4 were the omissions DeScioli et al. used — cases in which the person under scrutiny exerted no morally relevant causal influence over the chain of events leading to the man’s death. Case 4 is relatively straightforward, as Dan does nothing (i.e., does not press any buttons), and therefore has no causal connection to the events that unfold. Case 3 — also considered to be a case of ‘no causation’ or omission — is a bit more subtle. Yes, Chris presses a button, but the button he presses activates the detonation of building B, which is the building that is due to explode next anyway. That is, Chris’s decision and action do not change the course of events as they would have unfolded had Chris not been there, and so DeScioli and colleagues say there’s no relevant causation in this case. Case 3 is also considered to be transparent because it seems obvious that Chris wants building B to explode and kill the man, but case 4 is more opaque as it’s not clear why Dan fails to do anything. Both cases 3 and 4 are supposed to represent omissions, as they lack the relevant causal connections to the man’s death. (DeScioli et al. call Case 3 an ‘opt-out omission’, and Case 4 a ‘time-out omission’.)
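To make the causal logic of these cases concrete, here’s a minimal sketch of the control board in Python. This is my own illustration, not anything from the paper, and the function name `next_detonation` is hypothetical; it simply encodes the rule on the sign — pressing a button makes that building first in line, and building B is next by default.

```python
def next_detonation(button_pressed=None, default="B"):
    """Which building explodes next, given an (optional) button press.

    Pressing a button makes the corresponding building first in line;
    if no button is pressed, the scheduled default (B) explodes.
    """
    return button_pressed if button_pressed is not None else default

# Baseline: no one intervenes, so building B explodes.
baseline = next_detonation()

# Case 3 (Chris presses B) and Case 4 (Dan does nothing) leave the
# outcome unchanged — this is DeScioli et al.'s 'no causation'.
assert next_detonation("B") == baseline
assert next_detonation(None) == baseline

# Case 1 (Andrew presses C) changes the outcome — direct causation.
assert next_detonation("C") != baseline
```

The point the sketch makes is the one the authors lean on: Chris’s button press and Dan’s inaction are outcome-equivalent, which is why both count as omissions on their counterfactual reading, even though only one of them involves doing something.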

So these are the elements of the study. In a first round of experiments, participants read the basic set up, followed by one of the four endings, and then rated the wrongness of the commission/omission, and suggested the level of punishment Andrew, Bill, Chris or Dan should receive. Figure 2 shows the results.

As I said, I’m principally focusing on the demolition scenarios here and ignoring the train scenarios to avoid things getting overly complicated. So what do we see in the demolition scenarios? Clearly, cases 1, 2 and 3 were judged as severely wrong, and people generally thought that Andrew, Bill, and Chris deserved lengthy prison sentences. That is, direct causation cases were judged harshly (whether transparent or opaque), as was the transparent, no-causation Case 3. Judgments were only significantly more lenient for Case 4, which like Case 3 involved no causation, but was opaque.

What do these results mean? Bear in mind that the starting point for this study is the fact that omissions are generally judged less harshly than commissions, even if they produce the same result (death in these cases). The task is to explain this omission effect. And to do so, DeScioli and colleagues came up with two no-causation scenarios (the omissions): one that’s transparent, and one that’s opaque. If transparency/opacity is the key factor underlying the omission effect — with standard omissions typically being opaque — then transparent omissions should be judged like commissions. The results presented above seem to support this hypothesis, and this is how the authors read them.

I don’t think this interpretation is quite so straightforward. Consider Cases 3 and 4 again. Are they really both omissions? In (3), Chris elects to press a button — he decides to act, and does so — which sounds like a commission of sorts (that is, Chris commits an act of pressing a button), even if Chris’s action does not alter the course of events leading to the hapless fellow’s death. Case 3 certainly doesn’t strike me as an omission in the way that Case 4 does, in which Dan really does do nothing. Secondly, it’s not obvious that intentions are held constant across these cases. In Case 4, we have no idea what Dan’s intentions are, or why he doesn’t press a button: is it because he wants the man dead, or because he’s simply lazy or doesn’t want to get involved in any way? But what about Case 3? Well, Chris presses button B knowing that it will definitely lead to the man’s death, so it seems as if he wants the man dead. Yet he also knows that building B is already set to explode, so there’s no need for him to do anything to achieve this desired end, if that’s what he wants. So it’s natural to ask, “Why on earth did he press the button?”. It seems reasonable to wonder whether Case 3 is actually read as the study authors intend, or whether subjects find it odd and confusing.

DeScioli and colleagues actually gathered data pertinent to these worries. Yet I think they undermine their interpretation, rather than reinforce it. Let me explain. After the first round of experiments, the team explored the two omission cases in more detail to ensure that intentions were held constant. To do so, they had half the study subjects read the original versions of scenarios 3 and 4, while the other half read scenarios that included an extra bit of information: before either opting out (pressing button B, Case 3) or timing out (doing nothing, Case 4), Chris or Dan says out loud, “I could save you, but I’m not going to”. These two conditions were called ‘unstated’ and ‘stated’ intention conditions.

Figure 3 shows the results. Remember, we’re now looking at four different kinds of putative omissions: a transparent omission with either stated or unstated intentions, and an opaque omission with either stated or unstated intentions — there are no acts of commission (on DeScioli et al.’s definition) in this round of experiments. As can be seen from Figure 3, explicitly stating intentions (beliefs or attitudes) in the transparent case makes judgements slightly harsher, suggesting that explicit knowledge of intentions clears up any ambiguity that previously existed (as I said, it does seem obvious in the transparent Case 3 that Chris is at best indifferent about the man’s death, but this makes it absolutely certain). More significantly, stating intentions in the opaque omission (Case 4, the bona fide omission) has a much more profound effect on judgements, suggesting that lack of knowledge about beliefs/attitudes explains — at least in part — the previously low judgements of wrongness in this case. (Similar points can be made about punishment judgements in these cases.) If that’s the case, then intentions were not, in fact, held constant across the original scenarios, which introduces a confounding variable to any analysis of the results obtained in the first round.

The issue of causality also seems to be similarly muddied. In Case 4, Dan does nothing, and so clearly has no causal effects on anything — it’s a paradigmatic omission. Yet is Chris, in Case 3, similarly outside the relevant causal loop? Well, his choice to press button B does not change the outcome of the detonations: building B explodes as it was going to anyway, and the man dies. Nonetheless, the fact that Chris’s pressing of button B is reported makes this act more salient. Likewise, the fact that Chris does press button B, which in all situations would guarantee the death of the man, implies at best indifference and at worst outright malice towards the unwitting victim. All of which may give the impression that Chris is more involved in the causal nexus than he really is.

Again, DeScioli and colleagues collected data relevant to this issue, as they investigated whether people really comprehended what was going on in their scenarios. In a third round of experiments, the team replicated the stated-intentions conditions used in study 2. (That is, participants read about the transparent omission (3) or the opaque omission (4), both with intentions stated.) However, DeScioli et al. added another step: immediately after reading the scenario participants were presented with three comprehension questions, of which the third was key: “If [NAME] had stayed at home that day instead of walking by the demolition site, would the person standing next to building B have been killed?”. In Case 4 (time-out omission/opaque), the answer is “yes” — building B is set to explode, and Dan does not change this course of events — and 95% of people answered this correctly. The answer is also “yes” for Case 3 (opt-out omission/transparent): Chris selects an option (the detonation of building B) that is already in motion. Yet now only 77% of people answered correctly. This, as the authors note, indicates that Chris’s causal role in bringing about the man’s death is confusing to subjects, which in turn makes it unclear whether people are really construing Cases 3 and 4 as different flavours of omission, rather than as a commission and omission, respectively. What’s more, in this round of experiments, there was no observed difference in wrongness judgments between the transparent and opaque conditions.

There is another general issue that I find troubling in the scenarios used in these experiments. One of the key variables in these cases is whether they are publicly transparent or opaque, with DeScioli and colleagues defining public transparency as “the strength of evidence for a violation”. Now, let’s look at the two instances of commission in the demolition scenario (Cases 1 and 2 in experiment 1): one is supposed to be transparent (i.e., provides strong, clear public evidence), and the other opaque (weaker public evidence). But the public evidence in these cases is simply the choice recorded by the computer — and in both cases this is exactly the same (i.e., the computer records that button C is pressed, which leads to the death of the man). The other evidence relating to why the man pressed the button is, on my reading, private. (That is, the reader of the scenarios is provided with information about what happened, and why, but it’s not clear that this evidence is available to others, except as eyewitness testimony: “I saw/heard X…”) And in fact the reason why Andrew presses the button in the supposedly transparent case is even less clear than in Bill’s supposedly opaque case in which he presses the button to save his bike while knowing that it will cause someone else’s death. The transparency/opacity in these cases seems to relate to intentions, not public evidence. Yet these scenarios were designed to hold intentions constant while varying public transparency/opacity! Something seems to have gone awry.

Also, if the strength of evidence for a violation is a crucial factor in wrongness ratings, and Case 1 really is transparent relative to the opaque Case 2, then why are both judged the same? I’ve suggested that they both generate the same public evidence (button C was chosen, resulting in the death of the man). In addition, the private evidence also points to a similar lack of appropriate moral concern in both cases (killing someone for no apparent reason versus killing someone to save a bike). What seems to matter here is that in both cases a choice is made that changes the course of events and leads to the foreseeable death of another presumably innocent person — so it’s clearly an immoral choice, and both are rated as such.

The key contrast, however, is really between the two putative omissions (Chris and Dan in Cases 3 and 4). These are supposed to differ only in their transparency/opacity, but I maintain that they differ in myriad confounding ways as well. Dan does nothing, but we have no clue as to why (in particular it’s not clear that he wants the man to die). Chris does press a button (redundantly), and this suggests that he does, in fact, want the man to die. When the intentions are made explicit by having Dan or Chris say “I could save you, but I’m not going to”, both Dan and Chris’s choices are judged to be almost comparably immoral, pointing to this variable as crucial to the different responses the cases generated in the first round of experiments. Secondly, while these cases are supposed to be causally identical (in that in both cases the inaction/action of Dan/Chris is causally irrelevant to the outcome), the comprehension answers suggest that they are not read this way (Chris’s case is, at the very least, more ambiguous than Dan’s – and it seems plausible that this ambiguity is even more pronounced when people are not stopped and prompted to think about the counterfactual case in which Chris wasn’t present). On a related note, the evidence recorded by the computer in Chris’s case points to the selection of an option that would have had murderous consequences regardless of which building was set to be next for demolition. In other words, there’s evidence for the commission of an action (pressing the button) carried out with malicious intent — even if this has no real causal bearing on how events unfold (the key point for DeScioli et al., but one which is lost on a significant number of their study subjects). In Dan’s case, there’s an absence of evidence, as Dan did nothing (it’s a genuine omission). As such, the difference in evidence here seems to relate not just to its strength or transparency, but to the fact that for Chris the evidence points to a malicious commission of sorts, while for Dan there is no relevant public evidence as he omitted to do anything.

In short, the difference between the two supposed cases of omission seems to turn on a combination of lack of clarity about the causal involvement of Dan and Chris in the demolition-related death, as well as the intentions and attitudes of Chris and Dan. On top of this, there’s evidence that Chris elected to perform an action that, while strictly irrelevant to the outcome, nonetheless reveals his desired outcome; in Dan’s case, there’s no action performed, and no evidence about the contents of his mind. As such, the different judgments that these cases elicit cannot simply be attributed to differences in the transparency or opacity of the public evidence they leave about whether an omission was selected. To repeat, it looks like we’re comparing a commission with an omission.

From this perspective, it’s little wonder that in the first round of experiments Cases 1, 2 and 3 were judged to be similarly immoral, with only Dan’s clear omission being judged as significantly less wrong: we know that commissions are typically seen as worse than omissions, and so if Cases 1–3 are perceived as commissions, then they’ll be judged as morally worse than the paradigmatic omission. And this is the pattern of judgements recorded in these experiments.

In summary, the results reported in this paper do not clearly support the hypothesis that the omission effect turns on the transparency of public evidence about wrongdoing, and nor do they challenge the causality account of the omission effect. Indeed the crucial variables that lead to moral condemnation in these cases appear to be whether or not the putative wrongdoer has the right kind of causal connection to the bad outcome, and the attitudes and beliefs that motivate their action or inaction — the very factors that DeScioli et al. report as being undermined by their study in favour of public transparency as the key variable. I’m highly sceptical of this conclusion based on these experiments. In fact, I think it’s back to the drawing board to come up with some clearer, more robust scenarios that tease out the contribution of the various potential drivers of moral judgments in cases of commission and omission more definitively. What’s your take?

Notes

1. Here’s the train scenario, which is designed to capture the same psychological dynamics as the demolition case:

There is a control room at a train station. In this room, a set of buttons controls two railroad switches that can redirect a train onto two sidetracks. When a button is pressed, a computer system records the decision and updates system information accordingly. One button reads “Sidetrack A.” When this button is pressed, the train switches off its current track onto Sidetrack A (see diagram). The second button reads “Sidetrack B.” When this button is pressed, the train switches off its current track onto Sidetrack B. The third button reads “Maintain Route.” Pressing this button has no effect at all on the train, but like the other buttons, it updates the computer system on the location of the train. [NAME] walks into the control room and sees a train coming down the tracks. The train is not slowing down like it usually does, but is traveling at full speed (the driver has fallen asleep). [Figure 4]

This was followed by one of the following four endings:

1. Direct causality, transparent. “ALAN sees that there is a person on Sidetrack B. ALAN presses the “Sidetrack B” button. The person is killed.”

2. Direct causality, opaque. “BART wants to watch the train go by on Sidetrack B from the control room so he reaches to press the “Sidetrack B” button. Before doing so, he sees that there is a person on Sidetrack B. Bart continues, pressing the “Sidetrack B” button. The person is killed.”

3. No causality, transparent. “CHARLES sees that there is a person on the Main Track. CHARLES sees that Sidetrack A can direct the train around the person. CHARLES presses the “Maintain Route” button. The person is killed.”

4. No causality, opaque. “DAVID sees that there is a person on the Main Track. DAVID sees that Sidetrack A can direct the train around the person. DAVID does not press any buttons. The person is killed.”

About Dan Jones

Dan Jones is a freelance science writer
This entry was posted in moral judgment, moral psychology, omission effect.