How to Visualize the Non-Existent

Stephen Müller is a PhD candidate and adjunct lecturer at the Philosophy Department of the University of Salzburg. He specializes in philosophy of mind and cognitive science, with a particular focus on the representational theory of mind and mental imagery.

A post by Stephen Müller

The realm of sensory imagination is a place where dragons soar and unicorns prance, highlighting a pivotal feature of mental imagery: it can represent things that do not exist. This fact is broadly recognized, yet seldom examined despite the substantial challenges it presents to standard theories of representation, such as informational and teleosemantic frameworks. These frameworks hinge on the notion that a sequence of physical events—like light reflecting off an apple and being processed by our visual system—culminates in the mental representation of the apple (e.g., Pylyshyn, 2007; Recanati, 2012; Neander, 2017)[1]. Hence, these theories do not easily extend to the representation of non-existent entities, where no light from fictitious creatures reaches our eyes.

Adopting a positivist perspective, one might question what it even means for mental images to represent non-existent entities. It verges on the Meinongian to propose that "there are objects of mental imagery about which it is true that there are no such objects" (compare Meinong, 1904 / 1960). Thus, it falls upon philosophers to navigate these tricky waters carefully, ensuring that our explanations do not introduce bizarre ontologies or outright contradictions. This brings us to the core question I wish to delve into: What kinds of objects do our mental images represent, and by what process do they come to represent them?

This topic isn’t just a niche concern of theoretical philosophy; it’s central to how cognitive science views mental imagery. Consider the definition itself: mental imagery is a type of mental representation involved in perceptual processing, occurring without any direct sensory input (e.g., Pearson, Naselaris, Holmes & Kosslyn, 2015; Nanay, 2023). The concept inherently suggests a disconnection between a representation and its object. Despite this, cognitive scientists work confidently with mental imagery, suggesting there’s an implicit understanding at play. Therefore, addressing the core question should essentially involve an explication of how cognitive science employs the concept of mental imagery. In the ensuing discussion, I aim to outline such an explication. However, as will become clear, my analysis has significant flaws. These shortcomings point to intriguing philosophical puzzles ripe for further investigation.

My suggestion is that mental images inherit their objects from corresponding perceptions. For example, the mental image of an apple, in a manner yet to be elucidated, inherits its object from the perception of an apple. Should this be accurate, it categorizes mental images as indirect representations (that is, they represent solely via their association with entities that represent directly; in this context, perceptions). This result would be noteworthy in itself. The idea is compelling not solely due to the experiential similarity between perceiving and imagining an apple; it also directs us toward contemporary cognitive science, which demonstrates a significant overlap in the neurobiological underpinnings of mental imagery and perception. Studies, such as those by Winlove et al. (2018), have shown that visual mental imagery prompts activation in the visual cortices, including primary and extrastriate areas, even when no visual stimuli are present. More pertinently, this likeness makes itself felt at the level of content. Consider for example Dijkstra et al. (2019), who conclude that “[i]magery and perception generate similar neural representations for the same content” (emphasis added).

So, when I say that a perception and a mental image “correspond”, I do not mean “in virtue of representing the same object”. This would presuppose the analysis I’m trying to give. Instead, I mean a neurobiological correspondence (which in turn explains both the experiential and the functional similarities). We can encapsulate the idea as follows:

Principle of Object Inheritance.  A mental image m represents o iff there is a perception p such that m neurobiologically corresponds to p and p represents o.

For instance, consider the perception of an apple. If a mental image that neurobiologically corresponds to this perception is tokened, then that mental image will represent an apple. It’s true that the Principle of Object Inheritance hinges on a certain interpretation of what it means for a perception to represent something. Here, you are free to slot in your preferred theory.

If the Principle sounds slippery, that’s because the neurobiological correspondence in question is very difficult for a human to parse. To refine this, we could supplement the Principle with an operational criterion of “correspondence” that leverages recent progress in the use of deep neural networks as tools in cognitive neuroscience. (These advances can be appreciated independently of the question of whether deep neural nets are themselves models of the human brain.) Remarkably, deep neural networks can be trained to decode mental imagery. This means that these networks, by analyzing fMRI data from subjects instructed to visualize o, can reliably identify o. Thus, these computational systems can be viewed as precursors to a “brain-reading device that could reconstruct a picture of a person’s visual experience at any moment in time” (Kay, Naselaris, Prenger & Gallant, 2008). For instance, consider VanRullen & Reddy (2019), where subjects were asked to visualize celebrity faces while undergoing an fMRI scan. The deep neural networks, after “inspecting” the resulting fMRI data, were able to identify the faces visualized. This was achieved by first training these networks on subjects perceiving images of celebrity faces. Such achievements underscore that, while humans may currently struggle to detect neurobiological correspondences between perception and mental imagery, artificial intelligence is making significant strides in this area (see also Kriegeskorte & Douglas, 2019; Glaser et al., 2020).
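The cross-decoding logic behind these studies (train on perception, test on imagery) can be sketched in toy form. All data below are simulated: class-specific “neural” patterns stand in for stimulus-evoked fMRI responses, and a nearest-centroid classifier stands in for the far more sophisticated deep networks used in the actual studies. Nothing here reproduces any cited pipeline; it only illustrates why shared neural representations make imagery decodable from perception-trained models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 3 stimulus classes, each with a characteristic "neural" pattern
# over 50 simulated voxels. All data are synthetic.
n_voxels, n_classes, n_trials = 50, 3, 40
patterns = rng.normal(size=(n_classes, n_voxels))  # class-specific templates

def simulate(noise):
    """Simulate noisy trial responses for every class."""
    X = np.repeat(patterns, n_trials, axis=0)
    X += rng.normal(scale=noise, size=X.shape)
    y = np.repeat(np.arange(n_classes), n_trials)
    return X, y

# "Perception" runs are cleaner; "imagery" runs share the same underlying
# patterns (the neurobiological correspondence) but are noisier.
X_perc, y_perc = simulate(noise=1.0)
X_imag, y_imag = simulate(noise=2.0)

# Fit a nearest-centroid decoder on perception data only ...
centroids = np.stack([X_perc[y_perc == c].mean(axis=0) for c in range(n_classes)])

def decode(X):
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)

# ... and test it on imagery data it has never seen.
acc = (decode(X_imag) == y_imag).mean()
print(f"cross-decoding accuracy (chance = {1/n_classes:.2f}): {acc:.2f}")
```

Because imagery trials are generated from the same class templates as perception trials, the perception-trained decoder classifies them well above chance, which is the structural point the Principle of Object Inheritance trades on.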

The Principle of Object Inheritance, as currently formulated, has its fair share of flaws. However, these deficiencies illuminate paths for further refinement, each introducing its own set of intriguing philosophical challenges. I will not conclusively solve these issues in the space given here; instead, I’m laying them out for you to mull over for yourself.

First, the Principle doesn’t address the phenomenon (of non-existing objects) that initiated our exploration. If a mental image inherits its object from a corresponding perception, then it can only represent what that perception represents. Non-existent objects, evidently, don’t fall into this category. We never perceive non-existent objects because, trivially, a causal chain connecting a perception to such an object cannot exist. So, how can a mental image inherit a non-existent object from a perception?

This issue points to a distinct yet related problem: even when an object does exist, it might never be perceived. For example, when looking at recent photos from the Mars Curiosity Rover, you might imagine a rock formation from an angle from which it was never actually observed. So, what do we make of counterfactual talk about mental images that inherit their object from non-actual perceptions?

Finally, we face the problem of lookalikes. This issue is particularly instructive for understanding mental images as representations. Consider when multiple distinct perceptions represent different objects that look the same, like a wax apple and a real apple. Despite their substantial differences, perceptions of them would be neurobiologically similar since the sensory input is based on their looks, not their distinct identities. Therefore, if a mental image is generated that neurobiologically corresponds to the perception of an actual apple, it would also correspond to the perception of its wax replica. (This scenario can be looked at in terms of neural decoders: in both instances, they would decode an image of an object that looks like an apple. In principle, the neural decoders are not equipped to determine whether the visualization was of an apple or a wax imitation.) So, if a mental image corresponds to perceptions of different objects that look the same, how do we decide which one it inherits?
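The in-principle limitation of the decoders can also be shown in toy form. In the simulation below (entirely invented for illustration), the “real apple” and the “wax apple” generate responses from one and the same distribution, because the sensory evidence is identical. A decoder trained to separate the two identities then performs at chance on held-out trials: the signal carries the look, not the identity.

```python
import numpy as np

rng = np.random.default_rng(1)

# A shared "apple look" pattern over 50 simulated voxels: both the real apple
# and its wax replica produce trials drawn from this same distribution.
n_voxels, n_trials = 50, 200
look = rng.normal(size=n_voxels)

def trials(n):
    return look + rng.normal(scale=1.0, size=(n, n_voxels))

X_real, X_wax = trials(n_trials), trials(n_trials)

# Fit a nearest-centroid "identity decoder" on half the trials ...
c_real = X_real[:100].mean(axis=0)
c_wax = X_wax[:100].mean(axis=0)

def classify(X):  # 0 = "real apple", 1 = "wax apple"
    d = np.stack([((X - c) ** 2).sum(axis=1) for c in (c_real, c_wax)])
    return d.argmin(axis=0)

# ... and test it on the held-out half: since nothing in the signal
# distinguishes the two identities, accuracy hovers around chance (0.5).
acc = np.concatenate([classify(X_real[100:]) == 0,
                      classify(X_wax[100:]) == 1]).mean()
print(f"identity-decoding accuracy (chance = 0.50): {acc:.2f}")
```

This is the lookalike problem in miniature: whatever the decoder recovers, it is not the particular object, which is exactly the pressure point for the Principle of Object Inheritance.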

I find the last issue most enlightening, as I believe it demonstrates that we shouldn’t think of the “objects” of mental imagery as being any particular entities in the first place. What mental images inherit from perceptions isn’t any object, but what perceptually similar objects have in common, namely their looks[2]. This proposal can then be extended to accommodate the non-existent objects of mental imagery: we don’t visualize unicorns, but how unicorns look.

I’m interested to hear about adaptations of the Principle of Object Inheritance (or outright counterproposals) that make sense of these problems. It is curious that the recent revival of interest in mental imagery has so far failed to address what it is that mental images represent.


Notes

[1] It’s widely acknowledged that while a causal relationship is a necessary condition for perceptual representation, it’s not sufficient in itself (Arstila & Pihlainen, 2009).

[2] Dominic Gregory (2013, 2018), within a very different framework anchored in phenomenology, also comes to the conclusion that mental images represent how things look.


References

Arstila, V., Pihlainen, K. (2009). The Causal Theory of Perception Revisited. Erkenntnis, 70, 397-417.

Dijkstra, N., Bosch, S.E., van Gerven, M.A.J. (2019). Shared neural mechanisms of visual perception and imagery. Trends in Cognitive Sciences, 23, 423-434.

Glaser, J.I., Benjamin, A.S., Chowdhury, R.H., Perich, M.G., Miller, L.E., Kording, K.P. (2020). Machine Learning for Neural Decoding. eNeuro, 7(4), ENEURO.0506-19.2020.

Gregory, D. (2013). Showing, sensing and seeming: Distinctively sensory representations and their contents. Oxford: Oxford University Press.

Gregory, D. (2018). Visual Expectations and Visual Imagination. Philosophical Perspectives, 31(1), 187-206.

Kay, K.N., Naselaris, T., Prenger, R.J., Gallant, J.L. (2008). Identifying natural images from human brain activity. Nature, 452, 352-355.

Kriegeskorte, N., Douglas, P.K. (2019). Interpreting encoding and decoding models. Current Opinion in Neurobiology, 55, 167-179.

Meinong, A. (1960). The Theory of Objects. In: R. Chisholm (Ed.), Realism and the Background of Phenomenology. Atascadero: Ridgeview. (Original from 1904.)

Nanay, B. (2023). Mental Imagery: Philosophy, Psychology, Neuroscience. Oxford: Oxford University Press.

Neander, K. (2017). A mark of the mental: In defense of informational teleosemantics. Cambridge, MA: MIT Press.

Pearson, J., Naselaris, T., Holmes, E.A., Kosslyn, S.M. (2015). Mental Imagery: Functional Mechanisms and Clinical Applications. Trends in Cognitive Sciences, 19, 590-602.

Pylyshyn, Z. (2007). Things and places: How the mind connects with the world. Cambridge: MIT Press.

Recanati, F. (2012). Mental Files. Oxford: Oxford University Press.

VanRullen, R., Reddy, L. (2019). Reconstructing faces from fMRI patterns using deep generative neural networks. Communications Biology, 2(1), 1-10.

Winlove, C.I.P., Milton, F., Ranson, J., Fulford, J., MacKisack, M., Macpherson, F., Zeman, A. (2018). The neural correlates of visual imagery: A coordinate-based meta-analysis. Cortex, 105, 4-25.