Sam Harris has been lured into talking about free will again. He says he has resisted writing about this topic since publishing The Moral Landscape and the short book Free Will because he felt he had said all there was to say about the matter. But emails from his readers flagged up what Harris sees as a continuing confusion about what his view of free will means for the possibility of loving people. We’ll get to this new post later, but first I want to spell out Harris’s views on free will, and the problems that attend them.
Harris’s position on free will is straightforward enough. (See this post by Harris for more background.) The universe is deterministic, and the behaviour of every entity in the universe is determined by the fixed laws of nature. This includes everything from the motion of atoms and the planets to human behaviour. On this view, according to Harris, there is no such thing as deep responsibility, and humans are no more responsible for their actions (in a deep sense) than mountains are for having avalanches.
Hard determinism and moral responsibility
This is Free Will 101, as Paul Bloom said of another recent essay outlining a position similar to Harris’s. (It’s well worth reading both the target essay and Bloom’s response.) This take on free will sometimes goes by the name of hard determinism, and is defined by two key claims: first, that determinism is true; and second, that free will therefore does not exist. It’s hardly a new or unheard-of view; indeed, most discussions of free will start with the problems that hard determinism poses for notions of responsibility, and the moral implications that follow: namely, that if we’re not really responsible for our actions, then how could we be morally culpable for our bad actions, and how could we meaningfully deserve praise for our good actions? (Introductions to free will also typically look at the problems associated with libertarian conceptions of free will, which argue that humans, and the decisions they make, are somehow exempt from the network of cause and effect that governs everything else.)
Hard determinism contrasts with the dominant position in contemporary Western philosophy, known as compatibilism. Compatibilist philosophers, such as Dan Dennett, accept that determinism is true, but do not believe that this truth poses the threats to free will and moral responsibility that hard determinists see. That is, compatibilists argue that determinism is compatible with free will and moral responsibility, and so we can meaningfully talk about culpability, guilt, blame, praise and other features of moral judgements.
Hard determinists often claim to have common sense on their side, but common sense doesn’t have much place in philosophy. That said, compatibilists do face the very hard task of showing in what sense we can be free, make choices, and remain responsible for our actions in a morally meaningful sense if determinism is true. In what follows I don’t claim to offer a detailed positive defence of compatibilism, but rather to show that if you’re a hard determinist like Harris, and you reject compatibilism and the libertarian conception of free will, then you’ve also got to leave moral responsibility, blame, praise, punishment, reward and most of the other language of moral discourse at the door.
Harris is clear that he rejects compatibilism, which he takes to be an evasive, topic-changing approach to free will. His hard determinism certainly has the appeal of simplicity. It does away with the need to think deeply about what choice and responsibility really, could, or should mean, and thus obviates the difficulty of reconciling determinism with free will and moral responsibility — such ideas are just metaphysical fictions.
Or perhaps not; things get a bit murky in Harris’s philosophy on this point. On the one hand, Harris poses and answers the following question:
If we cannot assign blame to the workings of the universe, how can evil people be held responsible for their actions? In the deepest sense, it seems, they can’t be. But in a practical sense, they must be. I see no contradiction in this. In fact, I think that keeping the deep causes of human behavior in view would only improve our practical response to evil. The feeling that people are deeply responsible for who they are does nothing but produce moral illusions and psychological suffering.
On the other hand, Harris tries to establish a notion of personal responsibility that “fits the facts”, which, in this case, are the facts of determinism — specifically, that human behaviour is as determined as the weather, and humans are essentially “neuronal weather patterns”.
Harris’s solution is quite simple. In The Moral Landscape he observes that the last time he went to the market he was fully clothed, did not steal anything, and did not buy anchovies. “To say that I was responsible for my behaviour is simply to say that what I did was sufficiently in keeping with my thoughts, intentions, beliefs, and desires to be considered an extension of them.”
This implies that if someone does something that is out of character, then they’re not responsible for it. Acting out of character is hardly unheard of: there are cases of generally law-abiding citizens who for the first 40 or 50 years of their lives do not steal, maim or kill, and then one day turn around and kill their wife (perhaps they found out she was having an affair, or an argument got out of control — neither of which I’m suggesting justifies killing!). As implausible as it seems to excuse someone from responsibility in such cases merely because it’s not part of their usual behavioural repertoire, Harris actually embraces it:
If … I had found myself standing in the market naked, intent upon stealing as many tins of anchovies as I could carry, this behaviour would be totally out of character; I would feel that I was not in my right mind, or that I was otherwise not responsible for my actions. Judgments of responsibility, therefore, depend upon the overall complexion of one’s mind, not on the metaphysics of mental cause and effect.
It’s good to know that if Harris found himself doing something out of character he would excuse himself from responsibility for his actions. But I would wager that most people, if they heard that Sam Harris got caught stealing anchovies, would hold him responsible even if he doesn’t have a history of theft.
But that’s not the real problem. The deeper issue is that everything in Harris’s scenario — the thoughts, intentions, beliefs and desires — is the product of deterministic processes. On Harris’s view, there’s no more real agency here than in a rain cloud. That is, thoughts, intentions and so on are part of the causal story of why someone behaved as they did, and help establish causal responsibility for their actions, but the same goes for clouds: the conditions in them are causally responsible for the rain they produce. In neither case, however, does causal responsibility establish or entail moral responsibility. (Again, compatibilist philosophers attempt to provide a way of thinking about thoughts, intentions, beliefs and desires that could support a worthwhile concept of moral agency and responsibility — but Harris wants nothing to do with this.) And so if clouds aren’t responsible for the typhoons they visit upon people in a morally relevant sense, then why and how are people ever morally responsible for the behaviour they direct towards other people?
Harris’s answer is, in essence, that what matters is whether people harbour the kinds of thoughts, intentions, beliefs and desires that will make them likely to commit moral offences, such as harming others. That is, if a man is the kind of person who wants and intends to kill people, then “we need entertain no notions of free will to consider him a danger to society”.
I agree. An out-of-control lawnmower may pose a danger to guests at a barbecue, but this threat does not depend on the lawnmower acting under its own free will. At the same time, this observation has nothing to do with showing that moral responsibility is compatible with hard determinism. Recognising that some entity poses a threat to people says nothing about whether that entity would be morally responsible should the potential threat become an actual harm. So if we say a human is morally responsible for their actions, but a lawnmower isn’t, this difference can’t turn on whether they’re deterministic systems — for they both are.
Citing the thoughts, intentions, beliefs and desires of the human, which the lawnmower lacks, is not enough either, as these are as much the product of deterministic processes as those that drive the behaviour of the lawnmower! So what invests these deterministically caused causes with what it takes to establish moral responsibility? These are the kinds of questions that a compatibilist philosophy tries to deal with in one way or another. Harris doesn’t buy into these compatibilist approaches, and so it remains unclear why he should attach any importance at all to deterministically caused thoughts, intentions, beliefs and desires in underpinning moral responsibility.
To make sense of Harris’s position, it’s important to recognise that he is not making a metaphysical or conceptual case for moral responsibility, but a legalistic or practical case for responsibility. Harris rightly argues that regardless of whether a danger to people is the product of freely willed behaviour or blindly mechanical actions, a threat is a threat: an out-of-control lawnmower is dangerous, and we would want to stop the lawnmower by unplugging it, or directing it into a shed and locking the door behind it (a lawnmower prison), without thinking about free will for a second. As Harris points out, this practical perspective brings into focus questions about how we treat people who do bad things. He writes: “Clearly, we need to build prisons for people who are intent on harming others. But if we could incarcerate earthquakes and hurricanes for their crimes, we would build prisons for them as well.”
Presumably we’re not supposed to consider earthquakes and hurricanes morally responsible for the deaths they often cause. Yet that doesn’t mean that we wouldn’t want to stop them occurring again, which is why we would build prisons for them (or do whatever it took to stop them occurring). The same logic applies to people: if some people are going to hurt or kill other people, then regardless of the causes of their actions it would be wise to take them out of the population and keep them somewhere else (in prison or a psychiatric unit). Such decisions have got nothing to do with believing in a libertarian sense of free will or assigning moral responsibility.
All this shows, however, is that there is a rationale for imprisoning people that is independent of questions of free will or moral responsibility. (The same goes for less severe responses to moral offences, such as fines or merely a good telling off; if these things cause people to fix their ways, even if this is a result of deterministic processes, then they’re useful tools for societies to wield.)
So at best Harris has offered us some practical reasons why we should worry about people with bad intentions, why we might want to find people legally responsible and why we might want to put them in prison. He’s also convincingly — but I would say uncontroversially — argued that these reasons are compatible with hard determinism. Yet none of this entails any notion of real or deep moral responsibility, unless Harris is tacitly appealing to some kind of compatibilist notion of moral responsibility (that’s not an option, however, since he rejects compatibilism).
The notion of responsibility that Harris does develop is not moral responsibility per se. What he’s interested in is whether people are causally responsible for their actions in a way that provides clues as to how they’re likely to behave in the future.
To bring out what this means, imagine two robots, one programmed to be helpful to people by bringing them tea or coffee, and another programmed by a malign engineer to go around jabbing people with a sharp spike. One day, an electrical surge causes the helpful robot to pour hot coffee over its owner, scalding his skin. This robot, along with the electrical surge, is causally responsible for the scalding, but we wouldn’t automatically decommission the robot; it was a blip, and in the future the robot is as unlikely to cause further harms as a well-disciplined human butler.
The malicious robot is a different matter: it’s built to be mean, and will continue to hurt people, so we should want to turn it off. So although the behaviour of both robots is equally determined, and both are causally responsible for their actions, we would treat them differently should they cause harm to humans. Yet this has absolutely nothing to do with assigning moral responsibility: we don’t bother saying that the helpful-but-malfunctioning robot is not morally responsible while the malign robot is. It’s a purely practical matter that doesn’t connect with issues of free will. So if you’re going to start talking about moral responsibility, it must be to differentiate moral responsibility from the practical or legal responsibility Harris advocates — and presumably as a prelude to talking about culpability, blame and so on. In the end, Harris offers no real account of how his conception of free will leaves any room for deep moral responsibility at all, beyond arguing that we can sensibly treat different deterministic systems differently.
Beyond hate and retribution
There is, however, a more positive upshot of Harris’s position, for it calls into question the logic of retribution. Take sending people to prison. This can be viewed in a number of ways. We can see it as a simple and pragmatic way to keep bad people away from everyone else, so that they can no longer harm them; and this does not depend on free will (in any sense) or moral responsibility (by analogy, we’d lock a runaway lawnmower in a shed without talking about free will or assigning responsibility). Alternatively, we may hope that prison will show people the errors of their ways and rehabilitate them, so that when they are released they will not be a menace to society. Again, this does not depend on free will or moral responsibility; it’d be like capturing and fixing the rogue lawnmower. Finally, we may see prison as a genuine punishment, a way of exacting revenge or retribution on wrongdoers — and this does turn on a sense of free will and moral responsibility; after all, you wouldn’t seek revenge on a dangerous lawnmower that certainly does not have free will. (Note that the sense of free will and moral responsibility that underpins punishment does not, logically, have to be a libertarian view of free will — this is precisely what’s debated by compatibilist philosophers!)
Harris argues that once we accept the truth of determinism, and the illusory nature of free will, we’ll be less concerned about assigning blame and seeking retribution via punishment. This, in turn, will make us more humane, perhaps by making us better appreciate the forces that impinge on human behaviour and creating more scope for taking mitigating circumstances into account. Rejecting retribution would also mean that we didn’t make wrongdoers suffer for suffering’s sake, but only to the extent that it helped reform people.
The benefits of rejecting retribution don’t stop there. Being consumed with anger, rage and hatred against people who have wronged us can exact a psychological toll on our minds. If we could let go of our mistaken views on free will, we’d see that being angry at, or hating, a deterministic system, even if it’s a human, is senseless. We could learn to see bad people like broken lawnmowers or avalanches and stop seeking retribution, which might be a psychologically healthier way of dealing with bad people. (Note that Harris’s case against retribution also undermines his claim that his view of free will preserves notions of moral responsibility: for on this view retribution against humans makes as little sense as retribution against a mountain because neither is deeply responsible!)
Lurking behind these positives is a significant negative, for the idea that hard determinism can free us from the shackles of anger, rage and hatred and allow us to more calmly respond to the bad behaviour of others has a less appealing mirror image: we should not get caught up with moral praise and awe when people do kind or even heroic things, as they’re just as determined as the bad things we’re not holding them responsible for.
Harris’s new post touches on a related concern: that if we shouldn’t hate people, then we shouldn’t love people either. As always, Harris is ready with an answer:
Seeing through the illusion of free will does not undercut the reality of love, for example—because loving other people is not a matter of fixating on the underlying causes of their behavior. Rather, it is a matter of caring about them as people and enjoying their company. We want those we love to be happy, and we want to feel the way we feel in their presence. The difference between happiness and suffering does not depend on free will—indeed, it has no logical relationship to it (but then, nothing does, because the very idea of free will makes no sense). In loving others, and in seeking happiness ourselves, we are primarily concerned with the character of conscious experience.
Hatred, however, is powerfully governed by the illusion that those we hate could (and should) behave differently. We don’t hate storms, avalanches, mosquitoes, or flu. We might use the term “hatred” to describe our aversion to the suffering these things cause us—but we are prone to hate other human beings in a very different sense. True hatred requires that we view our enemy as the ultimate author of his thoughts and actions. Love demands only that we care about our friends and find happiness in their company. It may be hard to see this truth at first, but I encourage everyone to keep looking. It is one of the more beautiful asymmetries to be found anywhere.
This needs breaking down. Let’s start by clarifying two everyday uses or meanings of hate. The first is captured when people say things like “I hate rainy weather”. The second is captured by saying “I hate my next door neighbour”. In both cases, what we’re expressing is a strong dislike — of rain and this particular neighbour. They are also, of course, different sentiments. In the case of rain, you’re simply saying that you don’t like getting wet; in the case of the neighbour, you probably also mean you have feelings of hostility towards this person, because of things they’ve done and the fact that they’re responsible for them.
Now, for Harris, the underlying causes of rain and the neighbour’s behaviour are not significantly different. In neither case is any deep responsibility involved. So, once we clear our eyes, we’ll see that we can only really hate in the sense of disliking certain outcomes, which is perhaps not really true hate at all. (It does sound somewhat hyperbolic to say one hates rain anyway.) The worry, among Harris’s readers, is that the same logic applies to love — why shouldn’t we, on Harris’s logic, abandon it along with hate?
I find Harris’s answer unconvincing. He says “True hatred requires that we view our enemy as the ultimate author of his thoughts and actions. Love demands only that we care about our friends and find happiness in their company.” Why couldn’t one say with equal justification “Hatred merely requires that we dislike certain people because of their thoughts and behaviour, irrespective of their causes”? Harris’s answer — that true hate commits us to accepting a libertarian (and incoherent) conception of free will — is a mere assertion, not an argument. Of course, Harris would argue that it doesn’t make sense to hate deterministic systems, be they hurricanes or humans. But this begs the question of whether there are any interesting differences between hurricanes and humans that would allow us to talk about humans as morally responsible agents whom we may hate if they behave terribly. These are conclusions we need to establish, not starting points for further argument.
But let’s follow Harris in accepting that hard determinism undermines hating people for what they do in a way that is morally different from hating rain. Then where does that leave love? Harris says love only depends on “car[ing] about our friends and find[ing] happiness in their company”.
That sounds reasonable enough, but it does raise further questions. Music is an important part of my life, and I find happiness in the company of my iPod, and to the extent that I want continued easy access to my music I care about the continuing function of my iPod. Of course, nothing in this depends on me having free will, just as enjoying ice cream or curry does not depend on having free will.
But is this really what love boils down to, simply liking the experience of being in the company of certain people and caring about them to the extent that their continued existence provides further opportunities for them to bring pleasure to your life? If so, then loving a person is akin to me liking my iPod, just stronger — but pretty superficial nonetheless.
Let me put this another way. Harris distinguishes between true hate (which he views as unwarranted because it supposedly depends on a metaphysically untenable view of free will) and what we may call hyperbolic hate, which is merely a strong dislike and does not depend on free will. But the concept of love he defends boils down to mere liking of other people and their company; and if mere dislike doesn’t count as true hate, then we can ask why merely liking others counts as true love. Harris hasn’t revealed an interesting asymmetry between love and hate, but invented a double standard.
In any case, Harris’s hard determinism does seem to pose a problem for components of love, such as forgiveness and gratitude. Harris believes that true hatred — the kind we direct towards evildoers, as opposed to mere dislike — implies an untenable view of human behaviour, in that it depends on an incoherent concept of free will. The same must go for forgiveness. It would be daft to talk of forgiving a mountain for an avalanche, but for Harris it must be equally daft to talk of true forgiveness among humans — for what is there to forgive in a deterministic system, whether a mountain or human?
The same goes for gratitude. You might be thankful that a mountain provided good slopes for skiing one day, but that’s not the true gratitude you show to your friend for teaching you how to ski in the first place. This true gratitude, too, must fall beneath Harris’s deterministic sword: what is there to thank in a deterministic system, mountain or human?
Harris, on my reading, is caught on the horns of a dilemma. He wants to deny free will and deep responsibility, but also preserve talk of good and evil, right and wrong, and moral responsibility. (The issue about love is, for me, one of the least interesting, partly as love is such a broad and nebulous concept.) Yet he eschews compatibilist approaches that try to show how what we value about free will — the capacity to make choices with moral responsibility — can exist in a deterministic universe.
The compatibilist argues that if we define moral agency and responsibility in terms that are compatible with determinism, then we can say that the deterministic processes that underpin moral agency and responsibility constitute free will. This may not be free will in the libertarian sense of floating free from the network of cause and effect that governs the actions of everything else in the universe, but this is not a problem for the compatibilist per se — indeed, the whole point of compatibilism is to say, “Look, we can’t have free will in the libertarian sense as it’s not only implausible but incoherent. But if we think about why we care about free will, it comes down to issues of choice and responsibility — so let’s see if these concepts can make sense in a deterministic universe, and whether we can preserve a sense of free will worth wanting that doesn’t commit us to metaphysical nonsense”. The bottom line is that if you accept determinism, reject the libertarian conception of free will, and yet believe that moral responsibility is compatible with these positions, then you’re essentially endorsing a compatibilist view of free will — which opens up the door to talking about praise and blame, right and wrong, and good and evil.
Yet despite talking about moral responsibility, Harris is at pains to distance himself from compatibilism. He thinks this approach is simply changing the subject, because instead of talking about free will in the libertarian sense — which Harris rejects and which he thinks is the common sense understanding of free will — compatibilists try to re-work the meaning of free will. I think this misunderstands the compatibilist enterprise, which I admit lacks the compelling force of a logical, knockdown argument. But compatibilism cannot be dismissed as merely changing the subject; it’s trying to change the way we talk about the subject of free will. And the reason for doing that is to avoid the confusions and contradictions in which Harris finds himself embroiled.
There could be a simple solution to all this: maybe, just maybe, Harris is a crypto-compatibilist after all!
Postscript: I am currently working on a series of essays that will examine in detail Sam Harris’s whole scientific/philosophical worldview – he writes about big, important issues, and thinking about them is time well spent, I reckon! Sign up for email alerts if you want to read these forthcoming essays.
1. I confess that compatibilist arguments often remind me of a dream scene in the Coen brothers’ film A Serious Man. Larry Gopnik, a lecturer in physics who is having the dream, has just talked his class through Heisenberg’s uncertainty principle. When the students leave, one person remains seated: Sy Ableman, the man Larry’s wife was going to leave him for but who recently died in a car crash. Responding to Gopnik’s exposition of the uncertainty principle, Ableman says, in a most condescending way, “Now, I’ll concede that it’s subtle and clever — but at the end of the day, is it convincing?”