The Junkyard

Scientific models, fiction, and imagination

Kathleen Stock is a Reader in Philosophy at the University of Sussex, UK. She’s most recently the author of Only Imagine: Fiction, Interpretation and Imagination (Oxford 2017), and blogs about fiction and imagination at www.thinkingaboutfiction.me

A post by Kathleen Stock.

Scientific models can be physical objects: wind tunnels, scale models, and so on. Equally, they can be presented via descriptions, diagrams, and equations, without being materially instantiated. Examples include Galileo’s famous description of an object moving down an inclined frictionless plane, used to show the effect of gravity on free-falling bodies; Alan Turing’s model of a symmetrical ring of cells, used to make a point about the mammalian embryo (1952); and John Stuart Mill’s self-interested, exclusively wealth-directed, and wholly rational chooser, ‘economic man’ (or ‘Homo Economicus’). In all these cases there is no such thing, physically, nor could there be. Put simply, there are no frictionless planes, ring-shaped embryos (Turing 1952: 56), or wholly rational, exclusively wealth-directed choosers.
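For readers who want the idealisation made concrete, here is a minimal sketch of the standard Newtonian treatment (the function name and parameter values are mine, purely for illustration): a body sliding down a plane inclined at angle θ, with kinetic friction coefficient μ, accelerates at a = g(sin θ − μ cos θ); setting μ = 0 is Galileo’s ‘frictionless’ idealisation, leaving a = g sin θ, the component of gravity alone.

```python
import math

def incline_acceleration(theta_rad, mu, g=9.81):
    """Acceleration of a body sliding down a plane inclined at theta_rad
    radians, with kinetic friction coefficient mu:
        a = g * (sin(theta) - mu * cos(theta))
    """
    return g * (math.sin(theta_rad) - mu * math.cos(theta_rad))

# Galileo's idealisation: mu = 0, so only gravity's component along the
# plane remains (a = g * sin(theta)).
frictionless = incline_acceleration(math.radians(30), mu=0.0)

# A 'realistic' plane (mu = 0.3 is an illustrative value, not a measured
# one) yields a smaller acceleration, muddying gravity's contribution.
realistic = incline_acceleration(math.radians(30), mu=0.3)
```

The point of the idealisation is visible in the code: with μ = 0, the single remaining term isolates exactly the effect Galileo wanted to exhibit.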

How should we further account for the nature of ‘non-physical’ models such as these? At least two stories present themselves. The first, which I’ll call ‘Models As Fictions’ (or MAF), says that such models are physical after all – for they are to be identified with the sets of descriptions, diagrams, and equations offered in each case. These, in turn, are to be understood as fictions, or props for the imagination, in a sense well known from the work of Kendall Walton (1990). On the second view, ‘Models as Fictional Objects’ (or MAFO), the models are to be identified with fictional objects. The fictional objects in question are specified by the descriptions (etc.), but aren’t identical to them. Depending on which of several competing accounts of fictional objects one adopts, various versions of MAFO are potentially available, according to which models are concrete spatio-temporally located possibilia, abstract individuals, or abstract types or roles.

MAF is deflationary: there’s no need to posit fictional objects in addition to the descriptions (etc.). Sentences such as ‘there is a frictionless plane’ should be interpreted, depending on context, either as: a) (in the mouth of Galileo) an instruction to hearers or readers to imagine that there is a frictionless plane; or (in the mouth of a reader of Galileo) either b) a ‘pretence’ claim, expressive of imagining what one is prescribed to imagine; or c) elliptical for the claim ‘according to the model, there is a frictionless plane’ (i.e. as a claim about the model, not the world).

According to MAF, models, like novels and short stories, explicitly generate certain fictional truths (‘primary fictional truths’) but also generate implied fictional truths, via what Walton calls ‘principles of generation’. As is fairly well known, Walton, following David Lewis (1983), identifies two principles which between them allegedly govern the generation of most implied fictional truths (1990: 144–161). The first is the ‘Reality Principle’, which says, very roughly but accurately enough for our purposes, that for a given fiction F, we should take as implied fictional truths for F any states of affairs which are actually the case and which do not contradict any primary fictional truths in F. The second is the ‘Mutual Belief Principle’, which says, again roughly, that we should take as implied fictional truths in F any states of affairs widely believed to be the case at the time of F’s writing, and which do not contradict any primary fictional truths in F.

MAF has it that learning about further non-explicit aspects of a given model is effectively the act of learning about what is implied as fictionally true, according to that model. I like this idea, not least because it gives us access to a familiar framework with which to talk about cases where models, understood as fictions, implicitly or explicitly contain harmful biased assumptions. Take Homo Economicus again, severely criticised by Katrine Marçal (2016) for prioritising characteristics such as atomisation, isolation, and selfishness over values such as connection, warmth, and altruism, and ruling out women’s traditional occupations as worthless in the process. Just as the stories we tell our children can influence their developing values, sometimes for the worse, so too does it seem likely that the models we choose with which to understand the world can skew our sense of what’s important in a harmful way. Equally, just as there is ‘imaginative resistance’ to some fictions (Stock 2017, Ch. 4), so too the phenomenon might arise in the setting up of models. Imaginatively resisting a gendered model of DNA which describes it as the ‘master molecule’ that ‘controls’ and ‘determines’ is one way of reading Evelyn Fox Keller’s ground-breaking work on metaphors in science (1985).

Michael Weisberg (2013) complains that Waltonian ‘principles of generation’ don’t help us understand how some ‘non-physical’ models work in practice. He focuses on the Lotka-Volterra model of predator-prey interaction. His complaint is, basically, that applying the Reality Principle to the primary fictional truths generated by this model won’t get us to the right set of implied assumptions, well understood by those who use the model. The Reality Principle gives us all real-world facts as true in the model, and this isn’t good enough because a correct version of the Lotka-Volterra model leaves many real-world facts out (2013: 59). Equally, the Mutual Belief Principle wouldn’t ‘sufficiently restrict theorists’ imaginations’ either (2013: 60).
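For readers unfamiliar with it, the Lotka-Volterra model couples a prey population x and a predator population y through two differential equations: dx/dt = αx − βxy and dy/dt = δxy − γy. A minimal numerical sketch (simple Euler stepping; the parameter values are illustrative only, not drawn from Weisberg) makes his point vivid: the model’s content is exhausted by these two equations, and real-world facts like seasonality, disease, and habitat are simply absent, rather than imported wholesale as the Reality Principle would have it.

```python
def lotka_volterra(prey, pred, alpha, beta, delta, gamma, dt, steps):
    """Euler integration of the Lotka-Volterra equations:
        d(prey)/dt = alpha*prey - beta*prey*pred
        d(pred)/dt = delta*prey*pred - gamma*pred
    Note everything the model leaves out: seasons, disease, space,
    other species - the real-world facts Weisberg says the Reality
    Principle would wrongly pull into the model.
    """
    history = []
    for _ in range(steps):
        history.append((prey, pred))
        d_prey = alpha * prey - beta * prey * pred
        d_pred = delta * prey * pred - gamma * pred
        prey += d_prey * dt
        pred += d_pred * dt
    return history

# Illustrative parameters and initial populations only.
trajectory = lotka_volterra(prey=10.0, pred=5.0, alpha=1.1, beta=0.4,
                            delta=0.1, gamma=0.4, dt=0.01, steps=100)
```

Users of the model know which omissions are intended and which are not; the question is what principle of generation captures that knowledge.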

To this, the following response is available, however. Never mind the Lotka-Volterra model: the Reality and Mutual Belief Principles in fact work very badly for novels and stories too! (For a plethora of examples, see Stock 2017, Ch. 2.) Other theories of fictional truth are available, including a Gricean one which focuses on the reflexive intentions of the author: what did the author or joint authors of the fiction intend readers to imagine? (Stock 2017). Working this out will occur against a background of shared knowledge about the conventions governing the specific genre in question, and about what counts as implying what, by what means. Indeed, on the face of it, an intentionalist theory of fictional truth generation would lend itself to a plausible treatment of the Lotka-Volterra model: what counts as fictionally true is what the authors of the model intended to be imagined, given the assumption of a commonly understood knowledge base, including knowledge of the conventions of model construction and reception in this particular area.

Assuming MAF can be saved in this way, a new question now arises: namely, what is the relation between a given model and the ‘real-world’ target system being modelled? Two explanatory challenges have been raised here. Explanatory Challenge 1 (EC1) focuses on the fact that many models describe systems which don’t or couldn’t exist in real life. To some, such models might look like they offer:

..‘a description of a missing system’: they look just like a description of some actual, concrete object, and yet we realise that there are no such systems that would satisfy this description.. how [then should] we .. understand our subsequent talk about our model, which appears to assume that there is an object that satisfies that description[?] (Toon, 2010: 303)  

Explanatory Challenge 2 (EC2), meanwhile, takes a step further back: in order to talk about degrees of ‘resemblance’ (or lack of them), or even just ‘comparison’, between what a model describes and a real-world target system, don’t we need to posit two objects, and not just one: a fictional object and a real-world system?

Resemblance, whatever exactly it comes down to, is a relation between objects (or events, or at any rate, bona fide things). But on the make-believe view, the model is not a thing in any robust sense—it is only a set of prescriptions for the imaginations of scientists. How then are models to be compared with targets (or, for that matter, with one another)? (Levy 2015: 789)

A problematic response to EC1 and EC2 is available. It would say that, for a given model M and real-world target system S1, M directly represents some state of affairs other than S1 (call this other state of affairs S2), and, moreover – and this is the problematic bit – S2 represents S1. Curiously, this view is at one point gestured at by Roman Frigg (2010), despite his explicitly embracing a deflationary position about fictional objects (2010: 264). However, he also says:

A representation, by definition, is a prop in an authorised game of make-believe. On this view, the text of a novel and the description of a model system are representations. But in science the term ‘representation’ is also used in a different way, namely to denote a relation between the model system and its target (and, depending on one’s views about representation, also other relata like users and their intentions). These two senses of ‘representation’ need to be clearly distinguished, and for this reason I call the former ‘p-representation’ (‘p’ for ‘prop’) and the latter ‘t-representation’ (‘t’ for target). Using this idiom, the two acts mentioned in the introduction can be described as, first, introducing a p-representation specifying a hypothetical object and, second, claiming that this imagined object t-represents the relevant target system. (2010: 264; my italics)

Taken at face value, this passage raises pressing questions. How could an ‘imagined object’ with no metaphysical reality represent (in any sense) some target system? When I imagine that there is a frictionless plane, there is not something concrete I am thinking of, which in turn represents some real-world system, as a physical model might represent some aspect of the world. Nor do I imagine a-frictionless-plane-that-represents-a-real-world-system (in fact, it isn’t clear how one could do this).

A different approach says that we should think of a model, understood as a fiction, as directly representing its target real-world system, rather than any intermediate object (Toon 2010; Levy 2015). This would seem to meet EC1 as follows: even a model which describes a system unlike the real world in various ways nonetheless directly describes it, albeit inaccurately. Such models ask us to imagine of a given real-world system that it is different in some respects (just as I might, in having an amazing goal scored by footballer Sergio Aguero explained to me, imagine of the pepper pot on the table in front of me that it is Aguero, and of the fork that it is a goal mouth). And this approach also apparently meets EC2, insofar as, on this view, there is no need for a comparison of two things, for there is only one thing: the real-world system. One imagines of this real-world system that it is otherwise.

Nonetheless, this approach faces other problems. One is that, in some cases, the degree of disparity between model and target looks so great that it seems implausible to think that the target could have been intended by the model-maker to be represented directly at all. A further, perhaps more general, problem is that an ascription of ‘imagining of’ seems to work well only with concrete objects of acquaintance, present or past. For instance, returning to Turing’s model, what is it to imagine ‘of’ a mammalian embryo not in one’s current environment or memory, that ‘it’ has properties ‘it’ does not in fact have? In fact, this must be quite unlike imagining of a pepper pot that it is Sergio Aguero, for in the latter case there’s an established practice of perceptually matching physical bits of the object in front of one with bits of the imagined object (‘the top of the pot is Aguero’s head..’, etc.). Most of us can’t do this with any embryo. This case looks more like a case of imagining that an embryo is a ring-shaped organism; a characterisation which returns us to face EC1 and EC2, at least ostensibly.

A much better solution, to my mind, is to return to Walton. For a given model M and a target system S1, M represents some other state of affairs S2 (a frictionless plane, a ring-shaped organism), where one ‘pretends’ to compare S2 and S1. But this too has been seen as problematic:

Walton suggests paraphrasing comparative [‘transfictional’] statements such that they are seen as either pretend-comparisons or statements that are in fact about the text and/or the rules of generation, rather than statements about the apparent target of comparison. Neither option will do in the context of modeling, understood indirectly, since comparisons are meant to generate knowledge (rather than pretend-knowledge) about targets (and not about the rules of the game). (Levy 2015: 789)

In response, let’s first distance ourselves from talk of ‘pretending to compare things’ and also of ‘make-believe’, for this can imply that the practice isn’t serious. Let’s say instead that in such cases we ‘compare two things in imagination’. Comparing two things in imagination can quite obviously lead to knowledge in a straightforward way. Consider: I imagine my sporty friend Kate competing in a triathlon, and then imagine my sofa-loving friend Margaret doing so, and I conclude that Kate would do better than Margaret. Or: I compare, in my mind, a memory of public speaking with an imagining representing me public speaking, in order to judge whether I would feel any better about it now. It’s important to remember here that in imagining generally, one might include as imaginative content lots of things one also believes, because one believes them (Stock 2017: Section 6.5). Where the goal of the imaginative exercise is to produce knowledge, this is certainly what one will do: one won’t draw one’s conclusions randomly. Counterfactual imagining is like this: imagining in the service of working out what would or could be true, were other things true. When one counterfactually imagines, one includes as imaginative content lots of things one believes are the case. Often one will non-coincidentally come up with a useful answer as a result.

To conclude: what resources does the Waltonian view have to answer the two explanatory challenges articulated earlier? In response to EC1, the lack of real-world reference inherent in many models doesn’t force us to posit any intermediate fictional object between the model – understood as a fiction – and the real-world target system. EC1 apparently assumes that where one imagines that there is an X, there must be some X one is thinking of: either an object in the real world, albeit misrepresented (as in Toon’s view above); or some fictional object. Yet this is a false dichotomy. Imagining is a mental activity, such that when you are imagining that there is an X, you are in some real sense committed to thinking that this state of affairs exists or obtains. Granted, this is not the commitment of belief; but it is a temporary commitment of a kind, nonetheless. But equally, imagining that there is an X is such that, as soon as you stop imagining, you can easily recognize that X doesn’t or may not exist, or obtain. You need no longer in any sense be committed to X’s existence. Within the scope of imaginative thought, X exists and can be compared with other things. Outside the scope of this thought, it does not. This is what imagining is. This fundamental fact about it, to my mind, stands in no need of further explanation or justification, via the positing of stable and persistent ‘objects’ for such thoughts. It is an irreducible fact about imagination, and one compatible with imagination being a respectable tool in the pursuit of knowledge, nonetheless.

Equally, in response to EC2, in order to talk about ‘comparison’ between two objects in imagination, we don’t need to posit two existent objects. You can imagine that S1 and S2 exist, and make an informative comparison between ‘them’, without there being any S1 there to compare with S2. This sort of comparison is not a comparison between existent objects. It is still a form of genuine comparison.

Understanding and comfortably accepting this aspect of imagining is key to ceasing to feel the urge to posit fictional objects, in addition to a) scientific models, understood as instructions to imagine certain things, and b) real-world systems, which the fictions help us to better understand. I think it is probably the key to ceasing to posit fictional objects generally, but that, as they say, is another story.

*    *    *

This post is based on a presentation given at the ‘Imagination in Science workshop’, University of Leeds, June 2017. I’m very grateful to Alice Murphy and Steven French for organizing it, and to the audience for great questions.


Bibliography

Fox Keller, Evelyn (1985) Reflections on Gender and Science. Yale University Press.

Frigg, Roman (2010) Models and fiction. Synthese 172: 251–268.

Hartmann, Stephan and Frigg, Roman (2012) Models in science. In: Zalta, Edward N. (ed.) The Stanford Encyclopedia of Philosophy. Stanford, CA: Stanford University.

Levy, Arnon (2015) Modeling without models. Philosophical Studies 172: 781–798.

Lewis, David (1983) Truth in fiction. Reprinted in his Philosophical Papers, Vol. 1. Oxford: Oxford University Press, pp. 261–280.

Marçal, Katrine (2016) Who Cooked Adam Smith’s Dinner? Trans. Saskia Vogel. Portobello Books.

Mill, John Stuart (1836) On the definition of political economy and the method of investigation proper to it. Reprinted in Collected Works of John Stuart Mill, Vol. 4. Toronto: University of Toronto Press, 1967.

Stock, Kathleen (2017) Only Imagine: Fiction, Interpretation and Imagination. Oxford University Press.

Toon, Adam (2010) The ontology of theoretical modelling: models as make-believe. Synthese 172: 301–315.

Turing, A. M. (1952) The chemical basis of morphogenesis. Philosophical Transactions of the Royal Society of London, Series B 237(641): 37–72.

Walton, Kendall (1990) Mimesis as Make-Believe. Harvard University Press.

Weisberg, Michael (2013) Simulation and Similarity: Using Models to Understand the World. Oxford University Press.