A difference of opinion

A new paper offers insights into the psychology of our epistemological intuitions, and adds to a growing body of research that challenges traditional philosophical practice

Disagreements are as varied as they are common. Sometimes we lock horns over everyday issues, like whose turn it is to do the washing up or pick the kids up. Other times our disagreements revolve around weightier matters that involve our core values and beliefs, from whether President Trump is fit for office to the reality of climate change.

The content of our disagreements is just one dimension along which they vary. Another important dimension reflects our attitudes towards the person we’re debating or arguing with. We might see them as our intellectual equal, in which case we’re likely to approach any disagreement in a much more constructive way than when we think the other person is ignorant, biased, disingenuous, or simply a moron.

So two philosophers might disagree about whether free will is compatible with living in a universe determined by physical laws, but they will typically approach each other with professional respect, and take each other’s ideas seriously, even if they think they’re wrong-headed. (There are, of course, many exceptions to this rosy picture of academic disagreement!)

Things are different when, say, an evolutionary biologist and a creationist get into a debate. Here, each side not only thinks the other is grossly mistaken but typically that their opposite number is oblivious to the ideas, arguments, and evidence that go against their views. The scientist will think that the creationist is woefully ignorant of the evidence supporting modern evolutionary theory, and fundamentally biased because they filter everything they read and hear through a religious lens that warps their thinking. The creationist, for their part, might think that their scientific opponent is not only mistaken, but a wicked advocate of a dangerously heathen idea. (I’ve been on the receiving end of this criticism after writing stories about evolutionary biology and human nature.)

It’s unsurprising that when two people disagree and hold the intellectual or moral character of their interlocutor in low regard, the mere fact that they believe different things is of little consequence to whether they change their views. When a creationist and an evolutionary biologist debate each other, some kind of reconciliation or meeting of minds isn’t even the goal. It’s more about winning over the audience and triumphing in the debate.

But what about when we think the person we’re engaging with is as smart, knowledgeable, and as blessed with critical thinking skills as ourselves? Should the fact that we disagree about some topic give us pause, and reason to reassess our confidence in our beliefs? Or should the fact of disagreement be strictly irrelevant?

Opinion is divided. (Quelle surprise.) By and large, it’s an issue that falls under the remit of epistemology, the branch of philosophy that deals with theories of knowledge: how we can know things, and what we’re entitled to claim as knowledge. Some philosophers argue that when we find ourselves in disagreement with an intellectual equal – what has been called an epistemic peer – we should question the conviction with which we hold our beliefs. Other philosophers reject this conciliatory position. Instead, they argue that even in the face of disagreement with epistemic peers, we should remain steadfast in our beliefs.

These difficult epistemological issues show no sign of imminent resolution. In the meantime, a recent paper by Joshua Alexander and colleagues explores the psychology driving conciliatory and steadfast views among regular people – that is, people who don’t spend their days wondering about the epistemological implications of disagreement with their epistemic peers. It’s an empirical issue that might be dismissed as irrelevant to the epistemological arguments. Yet this new piece of experimental philosophy, in common with much other work in the field, offers provocative suggestions for what’s going on in epistemological arguments about disagreement specifically, and in philosophical debate more generally.

The psychological questions explored by Alexander et al. spring from the way philosophers typically argue for and against conciliatory and steadfast positions. Like any philosophical argument, there are the usual conceptual clarifications and terminological distinctions to set the whole discussion up. And there are, of course, detailed analytic arguments about what certain claims and positions entail, and what they do not.

But in addition to these considerations, epistemologists arguing about disagreement also routinely use thought experiments that, to borrow a phrase from Dan Dennett, ‘pump the intuitions’ of the reader towards the desired position. For example:

Suppose you and your friend go out to dinner. When it is time to pay the check, you agree to split the check evenly and to give a 20% tip. You do the math in your head and become highly confident that your shares are $43 each. Meanwhile, your friend does that math in her head and becomes highly confident that your shares are $45 each. You and your friend have a long history of eating out together and dividing the check in your heads, and know that you’ve been equally successful at making these kinds of calculations: usually you agree; but when you disagree, you know that your friend is right as often as you are. Moreover, you are both feeling sharp tonight and thought that the calculation was pretty straightforward before learning that you disagreed about the shares.

This particular vignette is supposed to give you a nudge towards the conciliatory view. Why? In this case, there’s no particular reason to think your calculation is error-free while your friend has made a mistake. After all, the fact that you make mistakes as often as your friend is made explicit. So this example makes the idea that the mere fact of disagreement should give you pause for thought seem entirely reasonable.

This case can be modified to make the steadfast view seem more plausible:

Suppose you and your friend go out to dinner. When it is time to pay the check, you agree to split the check evenly and to give a 20% tip. You do the math carefully on pencil and paper, checking your results with a calculator, and become highly confident that your shares are $43 each. But then your friend, who was also writing down numbers and pushing calculator buttons, announces that your shares are $45 each. You and your friend have a long history of eating out together and dividing the check in this way, and know that you’ve been equally successful at making these kinds of calculations: usually you agree; but when you disagree, you know that your friend is right as often as you are. Moreover, you are both feeling sharp tonight and thought that the calculation was pretty straightforward before learning that you disagreed about the shares.

In this case, you’ve gone through the sort of double-checking that ordinarily would bolster your confidence in your calculation. It just doesn’t seem likely that you’ve made a mistake. So what should you make of the fact that your friend has come up with a different answer? Should you doubt that your double-checked calculation is correct? Or should you remain steadfast in your belief and suspect that something weird is going on – perhaps your friend, who is as good at calculating as you, is lying or playing a prank or something else? This vignette is meant to make the steadfast view feel more natural.

Alexander and colleagues argue that this ‘method of cases’ does a lot of work in the epistemology of disagreement. In essence, what we as readers think about these hypothetical cases – our philosophical intuition, in short – is taken to be another source of evidence about which position – conciliatory or steadfast – is correct. But is this evidence philosophically reliable?

There are reasons to think not. Recent research in experimental philosophy has shown that in a number of areas, from moral judgments to beliefs about free will, our philosophical intuitions are shaped by factors that practically all philosophers would reject as irrelevant. For example, the moral intuitions generated by mulling over different moral dilemmas depend on the order in which we encounter those dilemmas. Likewise, judgments about whether free will is compatible with living in a deterministic universe differ depending on whether people are asked this as an abstract question, or in the context of a concrete case of someone performing a certain act (especially when that’s a morally bad act).

So it is possible that some cases that crop up in the epistemology of peer disagreement contain features that pump one type of intuition – say, a conciliatory view – while others are more likely to elicit a steadfast attitude. This is the question Alexander and colleagues set out to answer.

Framing the issue

The new research is inspired by the large body of research on framing effects – the way the wording and presentation of vignettes, and the questions posed about them, affect how people respond. A classic instance of a framing effect comes from the Nobel-prize-winning work of Amos Tversky and Daniel Kahneman. (Tversky tragically died before the prize was awarded, so Kahneman was the sole recipient.) Consider this scenario:

Imagine that the U.S. is preparing for an outbreak of an unusual Asian disease that is expected to kill 600 people. Two alternative programs to fight the disease, A and B, have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows: If program A is adopted, 200 people will be saved. If program B is adopted, there is a 1/3 probability that 600 will be saved, and 2/3 probability that no people will be saved. Which of the two programs would you favor?

Most people opt for program A in this case. But now consider an alternative framing:

Imagine that the U.S. is preparing for an outbreak of an unusual Asian disease that is expected to kill 600 people. Two alternative programs to fight the disease, C and D, have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows: If program C is adopted, 400 people will die. If program D is adopted, there is a 1/3 probability that nobody will die, and 2/3 probability that 600 people will die. Which of the two programs would you favor?

The most popular program in this case is program D. Yet both scenarios are mathematically identical: programs A and B in the first case are exactly equivalent to programs C and D in the second. So in the first case, where the issue is framed in terms of lives saved, people appear risk averse and prefer option A (the same as option C). When the frame is changed to potential lives lost, people become more willing to take a risk and opt for program D (the same as option B).
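To make the equivalence concrete, here’s a minimal sketch (my own illustration, not taken from Tversky and Kahneman) that computes the expected number of survivors under each program; all four work out to 200 of the 600 people.

```python
# A minimal sketch showing that the two framings of the "Asian disease"
# problem describe identical outcomes in expectation.

TOTAL = 600  # people expected to die if nothing is done

def expected_saved(outcomes):
    """Expected number of people saved, given (probability, people_saved) pairs."""
    return sum(p * saved for p, saved in outcomes)

# "Lives saved" framing
program_a = expected_saved([(1.0, 200)])            # 200 saved for certain
program_b = expected_saved([(1/3, 600), (2/3, 0)])  # gamble on saving everyone

# "Lives lost" framing, converted to lives saved (saved = TOTAL - deaths)
program_c = expected_saved([(1.0, TOTAL - 400)])                    # 400 die for certain
program_d = expected_saved([(1/3, TOTAL - 0), (2/3, TOTAL - 600)])  # gamble on nobody dying

print(program_a, program_b, program_c, program_d)  # -> 200.0 200.0 200.0 200.0
```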

This kind of content framing is one way to play with people’s intuitive judgments. Another is to change the context in which the cases are considered. Such context framing might mean the order in which cases are read and thought through, which has already been shown to play a role in people’s moral intuitions.

Framing effects in peer disagreement

Given that the kinds of cases and thought experiments used in various areas of philosophy are susceptible to framing effects, might the same be true in the epistemology of peer disagreement? And precisely what kinds of framing might elicit different intuitions about the significance of peer disagreement?

Alexander et al., having surveyed the philosophical literature on disagreement, picked out some candidates. They noticed that after philosophers presented their cases, they often asked one of two different types of question. Let’s say the case described you and your friend disagreeing about some conclusion you’d been asked to come to. Some philosophers would ask a comparative ‘forced-choice question’ at the end, essentially asking their readers to compare and choose between the conciliatory and steadfast positions, like this: Should you always give your friend’s assessment equal weight, and think that it is no more likely that you’re right than that she is? Or can it sometimes be rational for you to stick to your guns, or at least give your own assessment some extra weight?

At other times, philosophers would ask how much impact the fact of disagreement should have on your confidence in your beliefs. To return to the check-splitting scenario above, readers might be asked “How confident should you now be that your shares are $43?”, and to report their answer on a sliding scale.

These comparative and scalar questions ask different things of respondents. The first asks which theory about peer disagreement they prefer, conciliatory or steadfast. The second prompts reflection about how confident someone is that they are correct in their beliefs, judgments or conclusions. Perhaps asking one kind of question tickles one kind of intuitive response, and the other, another.

To find out, Alexander et al. culled 20 different cases of peer disagreement that have appeared in the philosophical literature. Then they recruited 100 participants, each of whom read one of the cases and answered either a comparative or scalar question about it. Those answering a comparative question were also asked to rate how confident they were in their choice on a 6-point scale from ‘very unconfident’ to ‘very confident’ to yield a ‘confidence adjusted’ measure of conciliation/steadfastness that could be compared against the confidence ratings given in response to the scalar questions.
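As a purely hypothetical illustration of how such a ‘confidence adjusted’ measure could be constructed (the function and scale below are my own assumptions, not necessarily Alexander et al.’s actual scoring), a binary conciliatory/steadfast choice and a 6-point confidence rating can be folded into a single 12-point scale:

```python
# Hypothetical illustration only - not Alexander et al.'s actual scoring scheme.
# Folds a binary choice plus a 6-point confidence rating into one 12-point scale.

def confidence_adjusted(choice, confidence):
    """
    choice: 'conciliatory' or 'steadfast'
    confidence: 1 ('very unconfident') to 6 ('very confident')
    Returns 1 (very confidently conciliatory) through 12 (very confidently steadfast).
    """
    if choice == "conciliatory":
        return 7 - confidence   # 6 down to 1 as confidence in conciliation rises
    return 6 + confidence       # 7 up to 12 as confidence in steadfastness rises

print(confidence_adjusted("conciliatory", 6))  # 1
print(confidence_adjusted("steadfast", 2))     # 8
```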

It turned out that the kind of question people were asked about cases of peer disagreement did affect their answers. As the authors put it:

When people are asked to focus on what influence peer disagreement should have on confidence, they are more likely to think that people should remain just as confident as before that they are right, but when people are asked to focus instead on what influence peer disagreement should have on preference, they are more likely to think that people should give the position held by an epistemic peer equal weight to their own. This looks like a straightforward context-based framing effect since it seems to be the case that whether people recommend being conciliatory or steadfast depends in important ways on how they have been asked to think about cases of peer disagreement.

Next, the authors wanted to explore the possible influence of content-based framing effects. Casting their eyes over the cases used to analyse the epistemology of peer disagreement, they noticed another way the cases vary: the perspective from which the cases are considered. Take these four openings to cases of disagreement:

1) Suppose that my friend and I independently evaluate some claim…
2) Suppose that you and your friend independently evaluate some claim…
3) Suppose that Pat and Sam independently evaluate some claim…
4) Suppose that you and I independently evaluate some claim…

In the first case, the disagreement is between the philosopher (or narrator) presenting the vignette and someone else, an example of first-personal peer disagreement. The second is between the reader and a friend (second-personal), the third is between two characters the reader is asked to imagine are epistemic peers (third-personal), and the fourth, between the philosopher narrator and the reader (a variant of second-personal disagreement).

Alexander et al. recruited 385 people, each of whom read one case of peer disagreement presented from one of these four perspectives but otherwise identical. Then, as in the first study, they were asked a comparative question and reported how confident they felt in their choice on a 6-point scale.

This second study also revealed framing effects. When people think about cases of disagreement from a second-personal perspective – that is, when the reader imagines that they are disagreeing with someone else – they tend to adopt a steadfast position. (Given all that we know about self-serving biases and the rosy lens through which we view our own abilities, this isn’t that surprising.) Conversely, when thinking about what two other people should do, people are more conciliatory – self-image plays less of a role and perhaps people take a more objective view of things.

A final analysis in this paper sheds light on a question that may have been bothering you, and which helps put the results into a broader perspective: is there a general tendency towards conciliation or remaining steadfast? In short, no. Alexander et al. looked at responses to the twenty cases in their first study, and found that, on average, people were just as likely to endorse conciliation as remaining steadfast, with the mean response essentially halfway between the two. In an interesting twist, Alexander et al. then grouped cases according to whether they had typically been used to support one position or the other in the epistemology of disagreement – and then asked whether the different groups provoked different responses. They did, and in the direction you’d imagine: cases that philosophers had employed to convince people that conciliation was the right route elicited more conciliatory responses, and cases used to advocate steadfastness elicited more steadfast ones.

Taken together, the new findings tell us three things. One: the average person is neutral about whether being conciliatory or steadfast is the appropriate response to disagreement with epistemic peers. Two: it’s possible to push certain intuitive buttons to nudge people towards a more conciliatory or a more steadfast position. Three: philosophers do actually employ these nudges, whether consciously or not, in the service of their philosophical arguments.

A challenge to philosophy

So that’s what the results tell us. But what is their significance?

Superficially, they just shed light on our common philosophical psychology. (Well, at least among the pool of subjects recruited for this study.) Yet there’s a case to be made that these findings – especially in conjunction with other work in psychology and experimental philosophy – raise some deep questions about the way philosophy is done, and what people are actually doing when fashioning philosophical arguments.

Let’s start with the fact that people’s views on the significance of peer disagreement are sensitive to framing effects. Put another way, this means that people’s philosophical intuitions about what feels right in the epistemology of disagreement are swayed by factors that nearly all philosophers would agree are irrelevant to settling debates between advocates of conciliation and remaining steadfast: specifically, the kind of question you’re asked about a particular case, or the perspective from which the case is described.

The upshot is that our epistemological intuitions are unreliable guides to which view, if any, is correct. If people’s intuitions in this area are unstable – in the sense that they track philosophically irrelevant aspects of cases of peer disagreement – then they can’t form a solid foundation for epistemological theories.

“But no one uses intuitions for this purpose, or suggests that they should!” might come the reply from philosophers. OK, so let’s explore just what role the ‘method of cases’ plays in philosophical argumentation – and why cases are used at all.

It’s possible that philosophers merely use cases to illustrate the kind of problem that their theory is designed to deal with. If so, it’s a suspicious coincidence that philosophers typically frame their cases in ways that pump people’s intuitions in exactly the direction they’re arguing for. It doesn’t look like the method of cases is used simply to clarify the problems under discussion. Instead, the cases seem to be deployed to do some philosophical work.

What kind of work? In essence, to offer another line of support for strictly philosophical arguments: “Not only have I given a rational argument for my particular epistemological theory, but look, you can feel that it’s right by considering these cases!”. If that’s the strategy, then the intuitions people have about these cases are, in fact, being used as an implicit source of evidence for the theory in question. But given the instability of our epistemological intuitions, and their sensitivity to framing effects, this is not a sound move.

Perhaps a philosopher might counter, “No, I’m not actually using my readers’ intuitive responses to cases to support my theory – all that work is done by my philosophical arguments”. But then why, we might ask, do you use cases that pump intuitions in your favoured direction? If intuitive responses to these cases are irrelevant to the argument, why bother with them at all?

Perhaps, it might be claimed, framing-sensitive cases are not employed to bolster philosophical arguments, but as a persuasive tool: “Here’s my theory, and if you’re not quite convinced by the rational arguments, see how you feel after considering these cases”. This way of using cases is also problematic. Philosophers do not, by and large, consider themselves mere rhetoricians, using whatever tricks and tools are available to bring their readers around to their way of thinking. The philosophical endeavor is supposed to be about getting at the truth, rather than simply winning arguments by whatever means. (Right?!)

There’s yet another way of defending the method of cases in philosophy. This is to acknowledge that while the intuitions of everyday folks about matters like the significance of peer disagreement are strictly irrelevant to philosophical arguments, it’s OK to pump certain intuitions because they simply echo what rational argument dictates anyway, or because they are intuitions endorsed by professional philosophers.

This kind of move is known as the ‘expertise defence’. It can take a number of forms. One might go as follows:

I’m a philosopher, and my professional training gives me enhanced conceptual competence and theoretical insight for dealing with philosophical problems, including the use of thought experiments and hypothetical cases. My theories are guided by rational arguments that track philosophically relevant conceptual distinctions, not the intuitions of regular folk. When I employ a case to elicit intuitions in the reader, I’m not claiming that their response is evidence in favour of my view. It’s just that I, having received the relevant training and spent many years thinking about the issues, have come to what I consider the correct view. My reader, who is perhaps not so familiar with the relevant conceptual distinctions and arguments that support it, might need some extra guidance to see that my theory is on the right track. Cases that pump the relevant intuition help get my reader to the right conclusion. Again, it’s not that I think their response is philosophically significant, and I’m aware that different frames might pump different intuitions. However, I think the particular intuition I’m eliciting gives the reader a kindly nudge towards what I consider, on strictly philosophical grounds, to be the right answer – one whose full defence draws on my philosophical arguments, not any intuition at all.

This specific expertise defence can be tweaked. The philosopher might claim that their intuition is, in fact, philosophically relevant, precisely because they have the right kind of training to generate the right kinds of intuitions. In which case, getting readers to feel the intuitive pull of the specific cases might be deemed part of the philosophical argument.

Whatever form the expertise defence takes, it faces problems. The deepest concern comes from evidence that expert philosophers are, like regular folk, moved by philosophically irrelevant aspects of thought experiments. Eric Schwitzgebel and Fiery Cushman, for instance, have shown that the moral judgments of philosophers are sensitive to the order in which they’re asked to consider moral dilemmas. That shouldn’t happen if philosophers’ moral judgments are solely the product of stable, reasoned arguments – just as a mathematician’s answers to arithmetic problems don’t depend on the order in which the problems are presented.

At a minimum, this suggests that even philosophers experience intuitive, non-reasoned responses to thought experiments and hypothetical cases. What’s more, these intuitions are unstable, in that they are responsive to philosophically irrelevant features of intuition-pumping cases, such as the order in which they’re presented.

This takes on a special significance when considered in the broader context of human cognition and reasoning. In many domains, people come to conclusions about specific questions through intuitive means – and then reason after the fact about why they came to that conclusion. In many cases, there’s a clear motivation to reason to a particular conclusion, perhaps because it bolsters our own self-image or that of the group with which we identify.

The widely documented reality of such motivated reasoning raises a disquieting possibility. Perhaps philosophers, instead of using their well-honed powers of reasoning in a rational, unbiased search for the truth, actually engage in motivated reasoning to defend the particular intuitions they have about certain philosophical problems. Philosophers are, after all, human too.

But surely all the training and practice in thinking things through clearly makes philosophers immune to the kind of self-serving motivated reasoning that everyone else succumbs to? The conclusions philosophers reach, in other words, are not merely post hoc justifications of the intuitions they may or may not have; they are the product of strict philosophical reasoning.

If this were true, then philosophers wouldn’t be vulnerable to philosophically irrelevant framing effects. Yet they are. It certainly looks like the judgements philosophers offer about certain cases, and therefore the broader theories they defend, are influenced by non-rational/non-philosophical considerations. Something else – intuition, a feeling of what’s right – seems to be a part of the philosophical mix.

Then there’s the fact that the kinds of cases used by philosophers arguing about the epistemology of disagreement typically pump intuitions in precisely the direction that fits the philosopher’s theoretical or intuitive predilections. Of course, this move makes sense if the goal is mere persuasion. But it’s a fishy method for arriving at the truth, much as attending only to facts that fit your existing beliefs is disastrous in the search for objective truth. So philosophers who use the method of cases in the epistemology of disagreement (and other areas) are either willingly guiding their readers towards conclusions by appealing to intuitions elicited by thought experiments whose persuasive power draws on philosophically irrelevant features, or unwittingly selecting those thought experiments because they happen to generate the desired intuitions.

Neither option is attractive. The first smacks of intellectual gamesmanship. The second implies that philosophers aren’t fully aware of what they’re doing when they make their arguments – that is, whether they’re simply arguing to defend their intuitions, or unconsciously selecting cases that happen to point in the same direction as their theoretical arguments.

Viewed through the lenses of intuitive judgment and motivated reasoning, the history and practice of philosophy takes on a new light. It raises the possibility that many of the great debates in philosophy are the product of very clever people using their prodigious powers of reason to defend their particular intuitions.

If this is at least partly true, it’s worth asking why individual philosophers develop the particular intuitions they do. (And the same question can be asked of everyone else too.) There are a number of possible answers. Maybe different people naturally have different intuitions, perhaps related to personality factors or their cognitive style or even something to do with their upbringing and local culture. Or perhaps people are effectively neutral in their intuitions about various topics until you start guiding them one way or another with clever thought experiments. In the case of philosophers, it’s possible that they start off intuition-neutral, but are led down one intuitive route by mentors during their training. These are questions for future research.

The new findings by Alexander et al. do not settle arguments about the epistemology of peer disagreement. But do they have any relevance to the debates at all? I think they do. If you’re a philosopher working in this area, it’s probable that you appeal to cases that pump intuitions to serve your argumentative ends. It’s also possible that you’re sensitive to the same philosophically irrelevant features of cases that fresh readers are, and just as philosophers in other domains are. It’s even possible, given all we know about motivated reasoning, that you’re at least sometimes trying to argue for conclusions that you’ve arrived at through intuitive paths. Your opponents, on the other hand, are most likely doing the exact same things.

These worries should, I believe, prompt a little introspection. If your philosophical arguments are in hock to intuitions, and these differ between philosophers, then it’s possible that you or your opponents (or both) are chasing philosophically misleading intuitions. The problem is, there’s no real way to step ‘outside the system’ and double-check whose intuitions are the ones to follow. In this light, the fact of disagreement with epistemic peers provides a reason to pause and think, “Hmmm, maybe I’m being seduced by what seems intuitively obvious to me. If that’s true, is it likely that I just happen to be on the right track while my opponents have gone down the wrong route? If not, then maybe I’m on the wrong track”.

That doesn’t mean you should give your opponent’s views equal weighting. But if you’re really concerned with getting at true conclusions, then it behooves you to pause and think about what it is you believe, and why, and how much confidence you place in your beliefs. That step in itself is a partially conciliatory response; it’s certainly not the steadfast attitude of treating the mere fact of epistemic disagreement as irrelevant.

If you’re a proponent of conciliatory or steadfast positions, here’s an interesting exercise: argue your case using the cases that Alexander et al. identify as eliciting intuitions contrary to your position. If the worry is that the framing of cases is doing some of the philosophical work your rational arguments should be doing, you can eliminate that criticism by using the cases your opponents favour. If your theory is right, and you’ve articulated it convincingly, it should be able to handle peer disagreement regardless of which cases are used. And if your rational arguments can convince people to accept judgments that go against their intuitive responses, then you may have a stronger argument than one that relies on easier-to-handle cases.

Rethinking philosophy

There’s another way that an understanding of our philosophical intuitions can contribute to philosophical practice. To see how, we need to first step back and get a perspective on what philosophy is all about. Dan Dennett offers a compelling take on this. One of philosophy’s major tasks, for Dennett, is to render the world comprehensible in a way that reconciles our “manifest image” with the “scientific image”. (A distinction first drawn by Wilfrid Sellars.)

What are these? The manifest image describes our widely shared, commonsense view of people and the wider external world. The manifest image grew out of the ‘original image’, which probably began as an unarticulated, theory-free way of making sense of the world. Our ancestors of 100,000 years ago no doubt had beliefs about the kinds of objects that exist in the world, and about the nature of their fellow humans, even if they didn’t consciously reflect on these beliefs, write about them, and argue about them with others. Over time, this original image evolved into the manifest image as we became more self-aware and self-conscious, and began exploring deep questions of existence in philosophy and theology. For at least three thousand years, the manifest image has been informed by these disciplines.

Owen Flanagan, in The Problem of the Soul: Two Visions of the Mind and How to Reconcile Them (2002), usefully distinguishes between two components of the manifest image: a ‘humanistic image’, which concerns the nature of humans, and a ‘world image’ that covers everything else. The world-image part of the manifest image contains everyday objects like tables and chairs, the moon and mountains. The humanistic image, as it’s developed over the millennia, sees humans as partly animal, but distinguished from other animals in possessing a conscious mind or soul, rationality, and free will.

The scientific image is the worldview we get from science. It’s populated by atoms and molecules, DNA and neurons, photons and electrons, and counter-intuitive notions like quantum uncertainty and warped space-time. While much of the scientific image challenges the manifest image, it doesn’t always threaten what really matters to us, our humanistic image. And the manifest image has proven able to absorb many aspects of the scientific image. The ideas that we’re made of matter composed of tiny atoms, and that we live on a relatively small rock orbiting a relatively small star, are widely shared beliefs in the manifest image of millions if not billions of people.

Yet contemporary science clearly does call into question some aspects of the manifest image, and especially the humanistic part. Science, for instance, provides us with an essentially deterministic worldview that seems to offer little or no room for free will, which many believe is essential to moral responsibility and other deeply valued components of the humanistic image. (Quantum mechanics might be an exception to the deterministic vision, but there’s no plausible account of how quantum indeterminacy could support free will, or any other core beliefs of the humanistic image.) Nor does science leave much room for souls or immaterial minds, also key strands of the humanistic image.

For philosophers like Dennett and Flanagan, a major task of philosophy is to look at how the manifest and scientific images relate to each other – and, if possible, to reconcile the two. Can we find a scientifically and philosophically respectable way of talking about free will that preserves what matters to us in the manifest image – moral responsibility and all the rest – and which is also compatible with the truth of determinism? (Dennett has spent much of his career trying to show that the answer is yes, and Flanagan would agree, even if they differ on the details.)

On this view of the philosophical endeavor, which I find very attractive, the role of intuitions takes on a different light. Dennett proposes, quite plausibly, that a lot of philosophy, especially the technical stuff that appears in academic journals, proceeds as follows (from Intuition Pumps and Other Tools for Thinking, 2014):

[Y]ou gather your shared intuitions, test and provoke them by engaging in mutual intuition-pumping, and then try to massage the resulting data set into a consistent “theory,” based on “received” principles that count, ideally, as axioms.

One problem with this approach is that the intuitions of professional philosophers might be quite different from everyone else’s. Consequently, their take on philosophical issues – which unless you’re just playing intellectual games should have some relevance to the issues that grab the attention of regular folk too – might be quite unusual. Dennett makes the point:

[O]ne’s own intuitions are apt to be distorted by one’s theoretical predilections. Linguists have known for a long time that they get so wrapped up in their theories they are no longer reliable sources of linguistic intuition. Can you really say in English, “The boy the man the woman kissed punched ran away,” or is my theory of clause embedding tricking my “ear”? Their raw, untutored intuitions have been sullied by too much theory, so they recognize that they must go out and ask nonlinguists for their linguistic intuitions.

Philosophers can and should do likewise. Obviously the new work reported above, and experimental philosophy more generally, do exactly this. But if the intuitions of regular folk are, strictly speaking, irrelevant to the truth or falsity of philosophical theories, why bother? Dennett explains why:

Since at least a large part of philosophy’s task, in my vision of the discipline, consists in negotiating the traffic back and forth between the manifest and scientific images, it is a good idea for philosophers to analyze what they are up against in the way of folk assumptions before launching into their theory-building and theory-criticizing … So here is a project … that philosophers should seriously consider undertaking as a survey of the terrain of the commonsense or manifest image of the world before launching into their theories of knowledge, justice, beauty, truth, goodness, time, causation, and so on, to make sure they actually aim their analyses and arguments at targets that are relevant to the rest of the world, both lay concerns and scientific concerns. Such a systematic inquiry would yield something like a catalogue of the unreformed conceptual terrain that sets the problems for the theorist, the metaphysics of the manifest image, if you like. This is where we philosophers have to start in our attempts to negotiate back and forth between the latest innovations in the scientific image, and it wouldn’t hurt to have a careful map of this folk terrain instead of just eyeballing it.

So even if you think that the role intuitions play in philosophy has been overstated, and recognize that intuitions are not capable of supporting philosophical positions, you should still welcome the study of people’s intuitions more broadly. As German Field Marshal Erwin Rommel reportedly said, “Time spent in reconnaissance is seldom wasted”.
