A post by Margot Strohminger.
I’m walking to the office and decide to take a slightly different route from usual. Normally I take High Street up to Cornmarket Street and then go straight. It takes me around twenty-five minutes. This time I turn off High Street much earlier. I’m wondering if I will still reach the office in under thirty minutes.
I seem to have two different ways of reaching a verdict on the conditional
(1) If I take the new route, then I will reach the office in under thirty minutes.
The first way consists of a series of inferences. For example, I might believe that the distance travelled via the new route is roughly the same as the distance via the old route and that my old route only takes twenty-five minutes. I use these beliefs in an inference to (1). The inference I use is not deductive, but it is an inference all the same.
There is also a second way that uses the imagination. I imagine myself taking the new route and then consider by what time I would reach the office. I fill in various details of the hypothetical scenario. One of these details may be that the walk takes me no longer than usual. When I imagine the hypothetical scenario as one in which the walk takes less than thirty minutes, I come to believe (1).
We might ask under what circumstances beliefs like my belief in (1) constitute knowledge. The question I’ll explore in this post is whether I have just presented you with two fundamentally different methods for reaching knowledge or just one. I’ll suggest the answer is ‘two’.
There is a way of motivating the answer that there is just a single method, contrary to appearances. The idea is that to the extent that I get to know (1) using either of the routes I just sketched, I am merely performing a series of inferences from beliefs I held already. Take a case where I suppose that I take the new route, fill in the details in such a way that the walk takes less than thirty minutes, and judge (1). Moreover, let’s suppose, I know (1) by this process. According to the view, I must have performed a series of inferences from beliefs (together with some temporary assumptions I made). If I had drawn the inference from those beliefs without the imagining, I still would have come to know (1). I imagine a hypothetical scenario, but the imagining is incidental to my getting knowledge. For obvious reasons, we might call this view ‘inferentialism’. On the crudest version of inferentialism, the tacit inference looks just like the inference I performed in the first case. I tacitly already believe both that the distance travelled via the new route is roughly the same as the distance via the old route and that my old route only takes twenty-five minutes, and use these in a tacit inference to (1). But the inference may be considerably more complex.
Inferentialism underlies an influential view of thought experiments in the philosophy of science, which John Norton defends in a series of papers.[i] On Norton’s view, whenever someone learns the intended conclusion of a thought experiment, this derives entirely from the fact that she has performed a series of inferences from beliefs she held already, prior to conducting the thought experiment. Norton only discusses the epistemology of scientific thought experiments, but the view naturally extends to beliefs reached as a result of imaginings even when we don’t use the label ‘thought experiment’ to describe the process. When I imagine myself taking the new route, there is nevertheless a good sense in which I am performing a thought experiment: the process is fundamentally similar to the processes used for thought experiments in physics or philosophy.
The inferentialist can still leave some room for the imagination in characterizing the process by which we obtain knowledge in the second case. In particular, she can allow that the imagination facilitates and in some cases even enables us to draw certain inferences. My imagining the scenario in which I take the new route to work might allow me to draw certain inferences that I otherwise couldn’t. For the inferentialist, there is an inference that yields knowledge of (1) from beliefs I have. If I made that inference without imagining the hypothetical scenario, I would still get knowledge of (1). But as it turns out, the inferentialist might add, it isn’t possible for me to perform the inference without imagining the scenario. There are psychological limitations on creatures like us that mean that sometimes we can’t draw the relevant inference without imagining the scenario.
If inferentialism is true, then either we have a lot less knowledge than we might have initially thought, or else inference is much more widespread. In the case of (1), either I can’t use the second route to know it or, if I can, then I am—despite appearances—just drawing a series of tacit inferences. The second option seems preferable: very often we reach judgments by imagining. Having to say that these judgments constitute knowledge only when they are caused by inferences that on their own suffice for knowledge is a steep cost.
Even so, there is a worry for the inferentialist that we simply don’t have enough beliefs to form the basis for an inference. When we go through the imaginative process, we draw on representations that we do not—at least not clearly—believe.[ii] By way of illustration, suppose that something like a modular picture of the mind is correct: some mental representations are stored in informationally encapsulated modules. Yet we can use these representations to help fill in the details of scenarios we imagine; think of how we might use visual representations in visualizing scenarios, for example.
Perhaps inferentialism can account for many cases of knowledge by imagining a hypothetical scenario. Sometimes we engage in a complex process that involves imagining what happens in a hypothetical scenario as well as a tacit series of inferences. Maybe even the case I started with is a case like that. But the story doesn’t look like it will work across the board.
There is a clear contrast between the two methods with which we started. Inferentialism provides an interesting way of challenging that contrast, even though in the end I doubt it can work.
[i] See especially J. Norton, ‘Are thought experiments just what you thought?’, Canadian Journal of Philosophy 26 (1996): 333–66, and J. Norton, ‘On thought experiments: is there more to the argument?’, Philosophy of Science 71 (2004): 1139–51.
[ii] For discussion of this idea within the context of the epistemology of counterfactual conditionals, see T. Williamson, The Philosophy of Philosophy (Blackwell, 2007): 141–55.