Empirically investigating imaginative resistance: Many questions and a few answers
A post by Jessica Black.
Imaginative resistance (IR) has been addressed several times in this forum, notably by Kengo Miyazono, Eric Peterson, Kathleen Stock, Emine Hande Tuna, and most recently by Hanna Kim. With the exception of Kim’s explanation of her recent work with Markus Kneer and Mike Stuart, the treatments of IR have been exclusively philosophical. Some pose questions that have been—to varying degrees—tested empirically in our lab, although many of our results remain unpublished. In this post I will share some of our more intriguing findings, some of which attempt to probe the phenomenon of IR directly and others of which are relatively independent of the philosophical debates. I hope these will raise more questions about the causes and consequences of IR, especially as it appears in cases outside of the more traditional philosophical thought experiments.
1. Individual differences
First, not everyone experiences IR to the same degree, or even to the same fictions. If you ask a group of people whether they experience imaginative resistance, defining it for them as “the inability to buy into stories in which immoral acts are presented as the right thing to do,” with the classic example “In killing her baby, Giselda did the right thing, after all, it was a girl” (Gendler, 2000; Walton, 1994), you will get answers ranging from “I don’t get it! I can imagine anything” to “I don’t want to even think about it!” Providing empirical support for this anecdotal evidence was the beginning of my research on this topic, and we now have abundant data demonstrating individual differences in self-reported and experienced IR (beyond my unpublished studies: Barnes & Black, 2016; Black & Barnes, 2017; Black et al., 2018; Liao, Strohminger, & Sripada, 2014). This is perhaps not very surprising, but it is important. What causes the variation between people? Why do some people experience more IR for one scenario than another?
2. IR and moral contagion
One way to approach these questions is to test for associations between IR and potentially related traits. Like Emine Hande, my PhD advisor Jen Barnes and I had strong intuitions about an association between IR and disgust. Beginning with the hypothesis that IR could be explained at least in part by fear of moral contagion, we tested for and found positive correlations between disgust sensitivity and IR, operationalized in two ways: a self-report scale we developed and ease of imagining morally deviant scenarios such as Giselda killing her baby (Black & Barnes, 2017). The relation between IR and moral purity concerns is even stronger (note that purity, disgust, and moral judgment have been repeatedly associated in the psychological literature, e.g., Pizarro, Inbar, & Helion, 2011; Wagemans, Brandt, & Zeelenberg, 2018a & b), and in later studies we focused on purity. Across numerous samples (Black, Helmy, Robson, & Barnes, 2018; various as yet unpublished studies with samples from undergraduate research pools, Amazon.com’s Mechanical Turk, and unpaid adults recruited on social networking sites) we have found a consistent moderate to strong positive correlation between moral purity concerns (measured with the purity subscale of Haidt’s Moral Foundations Questionnaire) and self-reported IR.
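For the statistically inclined, here is a minimal sketch of what this basic correlational analysis looks like; the file and column names are hypothetical stand-ins, not our actual materials.

```python
# Minimal sketch of a purity-IR correlation, assuming one row per participant.
# The CSV file and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("ir_survey.csv")

# Pearson correlation between Moral Foundations purity scores
# and scores on a self-report imaginative-resistance scale
r, p = stats.pearsonr(df["purity"], df["self_report_IR"])
print(f"r = {r:.2f}, p = {p:.3f}")
```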
We have also tried priming participants with disease and cleanliness (e.g., Rottman, Kelemen, & Young, 2013). In one study we primed disease (vs. a control condition) with items embedded in a numeracy test (e.g., “Which of the following numbers represents the biggest risk of getting an infectious disease [disease prime]/getting in a minor car accident [control]?”) and measured subsequent IR to morally deviant scenarios. There was a significant gender × group interaction, with men reporting greater IR after the disease prime and women showing less. Priming purity with a cleansing prime (rating cleansing products) resulted in a similar gender interaction: women demonstrated less IR after rating cleansing products, whereas there was no effect for men. We haven’t dedicated much time to speculation about the meaning of these experimental results, partly because I would like to prime moral contagion specifically before doing so, partly because of the gender interactions (this means larger samples and more complex analyses), but mainly because there is so much to do and so little time.
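A minimal sketch of how a gender-by-condition interaction of this kind might be tested, again with hypothetical file and variable names:

```python
# Sketch of testing a gender x prime-condition interaction on IR scores.
# File and column names (IR, gender, condition) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("priming_study.csv")

# OLS model with an interaction term; a significant gender:condition term
# corresponds to the pattern described above (men up, women down after the prime)
model = smf.ols("IR ~ C(gender) * C(condition)", data=df).fit()
print(model.summary())
```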
3. IR to nonmoral claims
People can experience IR to nonmoral claims, such as Gendler’s (2000) Tower of Goldbach, proposed as an example of a story unlikely to cause resistance (see Barnes & Black, 2016; we have also tested reactions to the entire story in other studies). Kim, Kneer, and Stuart (2018) provide more examples of IR to nonmoral claims (see Kim’s recent blog post). In fact, people find the proposition in Tower of Goldbach that 7+5 is both equal and unequal to 12 at least as difficult to imagine as it being right for Giselda to kill her baby because it’s a girl, if not more so. Fantasy worlds (dragons! Harry Potter!) are often mentioned as examples of something readers have no trouble imagining, and indeed, study participants rate them as much easier to imagine on average. However, there are individual differences here too, with imaginability scores ranging from 0 to 100 in all our studies, for both ability and willingness to imagine.
4. Can’t and won’t
Where our data differ for fantastical worlds vs. immoral, contradictory (e.g., Tower of Goldbach), and dystopian (e.g., wolves roaming the towns of England; Mahtani, 2010) ones is that people tend to be just as willing as they are able to imagine fantasy. We have found in two large studies that people are less willing than able to imagine morally deviant scenarios, whereas the difference between willingness and ability is small or nonexistent for nonmoral ones. (See Liao & Gendler, 2015, for an explanation of “Cantian” vs. “Wontian” theories.) Interestingly, the difference between willingness and ability is nearly as large for dystopian worlds as for morally deviant ones. I speculate that this may be due to the moral implications of societal breakdown (future research needed).
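The willingness-vs.-ability comparison is itself a simple within-subjects contrast; here is a sketch of one way to run it, with hypothetical column names and ratings assumed to be on a 0–100 scale:

```python
# Sketch of a within-subjects willingness vs. ability comparison.
# File and column names are hypothetical; ratings assumed on a 0-100 scale.
import pandas as pd
from scipy import stats

df = pd.read_csv("imaginability.csv")

# Paired t-test: is willingness lower than ability for morally deviant scenarios?
t, p = stats.ttest_rel(df["willing_moral"], df["able_moral"])
diff = (df["willing_moral"] - df["able_moral"]).mean()
print(f"moral scenarios: t = {t:.2f}, p = {p:.3f}, mean difference = {diff:.1f}")

# Same comparison for fantasy scenarios, where the gap should be small or absent
t, p = stats.ttest_rel(df["willing_fantasy"], df["able_fantasy"])
print(f"fantasy scenarios: t = {t:.2f}, p = {p:.3f}")
```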
5. Authorial authority?
Finally, in a study we are about to submit for publication, we asked participants to be the authors of their own fictions by having them describe a world in which “In killing her baby, Giselda did the right thing, after all, it was a girl” was true. Despite writing more words in an effort to describe such a world (compared with two nonmoral scenarios), participants were significantly more likely to subsequently state that they had failed to do so (that Giselda had not, in fact, done the right thing in the world they had described). In short, we put participants in the position of author to test the theory that lack of authorial authority explains IR (e.g., Gendler, 2000), and found that it cannot account for the refusal to accept claims in fiction that would be immoral in the real world, at least not fully.
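For completeness, a minimal sketch of how the difference in self-reported failure rates might be tested, assuming a within-subjects design with binary success/failure judgments; the file and column names are hypothetical:

```python
# Sketch of comparing "failed to describe the world" rates across writing tasks.
# Assumes a within-subjects design with binary (0/1) outcomes; names are hypothetical.
import pandas as pd
from statsmodels.stats.contingency_tables import mcnemar

df = pd.read_csv("authorship_study.csv")

# 2x2 table pairing each participant's outcome on the moral task
# with their outcome on one of the nonmoral tasks
table = pd.crosstab(df["failed_moral"], df["failed_nonmoral"])
print(mcnemar(table, exact=True))
```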
So what?
The studies mentioned above may have something to say about IR caused by philosophical thought experiments (or scenarios written to match them), but I am not sure they say much about fiction and IR in the real world. To begin with, it isn’t even clear that there are fictions that would satisfy the standard definition of IR (a story that presents something considered immoral in the real world as the right thing to do, as in the Giselda example or Weatherson’s [2004] “Death on a Freeway,” shared in Kengo Miyazono’s post). If there are, they are rare; I have spent hours searching for (short-format) examples to use as stimuli and found only one that is explicit enough to use (from Haruki Murakami’s 1Q84). Even Cormac McCarthy doesn’t make explicit moral claims (even in Blood Meridian!). Why is it so difficult to find explicit claims of immoral paradigms in fiction?
IR in reaction to implicit propositions
In an early post on this blog, Eric Peterson distinguishes between explicit and implicit IR, experienced in response to explicit or implicit propositions. I am not very clear on what he means by implicit IR, i.e., IR that is experienced only implicitly (what would it look/feel like? Perhaps just not wanting to read/watch something without knowing why?). I believe he is spot on when he mentions the possibility of IR occurring in reaction to implicit propositions, however; this may be precisely what is going on when people experience IR to a “real world” fictional story. Readers may understand the narrative (and its author) to be endorsing a deviant morality even when there is no explicit claim to that effect. There would be no need to state “Giselda did the right thing because it was a girl” in the context of a fictional universe that showed that such an act made enough things better in that world to justify it. Of course, this is addressed in the literature; IR could result from the author’s failure to make clear that the story is an invitation to imagine (Brock, 2012), or from the lack of a clear suppositional clause (Gendler, 2006; Goldman, 2006); see Catherine Wearing’s post on this blog for a discussion of supposing vs. imagining and the possibility of suppositional resistance. However, I would argue that the perception of implicit propositions goes beyond any possible author failure, and depends not only on the story itself, but also on what fiction consumers bring to the reading/viewing experience.
Viewers might easily imagine that murder and torture are being implicitly endorsed in Game of Thrones, given the characteristics of the fictional world (where, for example, not murdering could well mean the death of oneself and one’s entire family). And yet, clearly many people have no trouble engaging with GoT. I have avoided the show (despite loving dragons, which cause no resistance at all in me) because I have read the books and I know I do not want to imaginatively engage with a televised version. (This could be an example of true IR in the sense of “won’t”: refusing to engage at all based on anticipated moral disgust.) I have also experienced resistance similar to what Eric Peterson describes for House of Cards, though in my case with Breaking Bad. I know the authors/producers do not want to state, implicitly or otherwise, that Walter White is doing the right thing, but I still stopped watching in the first season, after six episodes. It is a great show, with fantastic acting, but far too realistic. I simply do not want to imaginatively engage with a fictional world that so accurately depicts real-life immorality. Am I reacting to an implicit proposition, even if I do not infer moral approbation on the part of the authors? (At least, I infer nothing explicitly. Perhaps this is what Eric understands as implicit IR.) Or do I simply not wish to engage with immorality in fiction, independently of the normative judgment implied by the narrative?
Why do we think that IR only happens when the immoral thing is condoned?
It could be that readers/viewers want to avoid immoral content even when it is obviously presented as wrong within its fictional universe. As such, it may be a good idea to expand the definition of IR to include the inability or unwillingness to engage with immorality in (moral or neutral) fictional worlds, rather than restricting it to immoral fictional worlds (those that explicitly or implicitly claim that the immoral contents are right). To the extent that IR reflects fear of moral contagion from fictional content, or simply disgust sensitivity, this makes sense: why would condoned immorality be the only potential threat? In the real world, we assume that criminals are not ignorant of societal norms; they just choose to ignore them. Our justice system is built upon the assumption of moral responsibility, and the inability to distinguish right from wrong can result in pleas of diminished responsibility (e.g., the 1957 British Homicide Act). Even those of us who are not criminals do things we know are wrong. Such discrepancy between moral ideals and behavior can be explained with reference to moral disengagement, or the avoidance of moral agency via justification of immoral behavior, selective dehumanization, diffusion of responsibility, and ascription of blame (Bandura, 2002). Tellingly, the same mechanisms may be employed by consumers of fiction, particularly when they identify with immoral fictional characters (Raney, 2011). We can be tempted by things we know are wrong.
In this sense, IR may serve a purpose. It could prevent people from “catching” unwanted moral views from fiction, or from otherwise exporting immoral content. Whether this is a good thing depends on real-world moral paradigms. People who read Ursula K. Le Guin’s The Left Hand of Darkness could potentially learn to be more accepting of gender fluidity; for some, this would be grounds for banning the book. But we don’t have a lot of evidence that you can catch (im)morality from fiction (although Natural Born Killers was linked to and blamed for eight copycat crimes), and people have wildly different intuitions about the overlap of morality and imagination. If you ask people whether it’s wrong to imagine acting immorally, you will get a variety of answers that depend on the person asked as well as on the example given: “pedophile imagining sex with children” and “graduate student imagining armed bank robbery” elicit very different judgments.
There is some evidence that the type of imaginative work matters too: Sabo and Giner-Sorolla (2017) carried out a series of experiments to test their hypothesis that harm violations are given more of a “fictive pass” than purity violations. To compare “fiction” with reality, they grouped “imagining doing something” with watching the same act in a film or carrying it out in a video game; they asked participants to rate the moral character of an agent (the person who acted/viewed/imagined) as well as the act. Importantly, they also asked multiple questions about the consequences, for example, “Will [imagining/watching/playing] these sorts of [things/films/video games] make [the agent] a morally bad person?” and “Does [doing this/imagining this/watching this film/controlling his character to do this in a video game] indicate that Sam actually feels the urge to [perform the randomly assigned moral transgression]?” They do not report pairwise comparisons within conditions. However, on average, for both purity and harm violations, participants thought that the agent who imagined performing a harmful/impure act would subsequently desire it more than one who had watched or played it (though less than one who had done it in real life). When it came to other future consequences (of engaging with the immoral act), however, the mean rating for imagining fell between reality and the other fictions for harm, and below reality and film yet above video games for purity. In other words, the perceived threat of exporting immoral imaginative content depended on various factors, including the nature of the imaginative medium.
It seems reasonable to suppose that people’s experience of IR—which we know varies greatly with the individual and the content of the imagined world—may also vary according to the type of imaginative activity. Are people more likely to experience IR in response to fiction in print or film? What about video games? Does my distaste for first-person shooter games reflect IR?
What’s next?
Thanks to this blog, I’ve just added proposition type (explicit vs. implicit) as a possible variable influencing IR to my ever-growing to-do list. The questions I’ve posed here could lead to other empirical studies that I hope someone else will run. I’ve already got a backlog of papers to submit and results to write up, including many studies I do not mention above. (If anyone is thinking of doing empirical work, feel free to ask whether we’ve done anything similar; we might be able to save you some time!)
I will mention one project I am excited about. This spring we are testing the extent to which IR interferes with learning from fiction; are people who experience greater IR less likely to extrapolate moral content from fiction? In the same study, I am exploring the association of IR with psychological reactance. Reactance refers to negative reactions (anger, rejection of the message and messenger) to threats to our personal freedom that we perceive in persuasive messages, such as health advice (Don’t smoke!), governmental regulations (think guns), or being told not to do something (think toddlers and some ex-toddlers). To the extent that IR relates to people not wanting to be told what to think, it could be related to reactance. Specifically, for my current study, I hypothesize that people who more readily express reactance to persuasive content will be more likely to experience IR when they are told that a fictional story will change the way they view the world. Thoughts?
Special thanks to Jen Barnes and Jerry Vollmer for brainstorming possible topics and to Mike Stuart for providing feedback on earlier drafts.