Can AI imagine?

Mike Stuart is a lecturer at the University of York. He works on scientific imagination, scientific understanding, the aesthetics of science, and artificial intelligence. If you’re interested, you can access all of Mike’s work here: www.michaeltstuart.com.

A post by Michael T. Stuart

Yes, obviously. Some algorithms already do, or soon will. No, of course it can’t, don’t be silly, though it might be good to pretend it does. Who cares?

Let’s go through these different attitudes to the question one by one. In the end, I’ll suggest what I think is a better attitude.

“Yes, obviously”

It’s the hottest day of the year. Walking down the street, you notice someone giving water to a very thirsty-looking stray cat. In that moment, you seem to see the kindness of the action, as well as the gratitude of the cat, even though kindness and gratitude never hit your retinas. Some philosophers claim that when we “see” things like kindness and gratitude, it’s because imagination is injecting extra content into perception. Can AI “imagine” in this way? In a sense, it already does. For example, AI image generators can already “fill in” aspects of scenes based on their “previous experience” with training data.

Here’s a different sense of imagination. Suppose after seeing that cat, you begin wondering what it would be like for you to adopt it. In your mind, you’re already giving the cat a bath, hearing its happy meows, feeling its deceptively small frame soaped up in your hands, and then seeing the excited faces of your family members over a surprise video call. Could this kind of imagination be found in an algorithm? Again, it seems so, as there are already AI algorithms that “represent” possible states of affairs to themselves in ways that enable the “offline” production and processing of what we might call sensory input, in order to explore possibilities and make decisions.
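To make the idea concrete, here is a toy sketch of that “offline” sense of imagination: an agent consults a stand-in world model to simulate candidate plans without acting on any of them. This is not a description of any real system; the `world_model` and `imagine` functions, and the numeric “states” and “actions”, are all hypothetical simplifications.

```python
def world_model(state, action):
    """Hypothetical transition model: predicts the next state.

    Here it is just addition, standing in for a learned predictor.
    """
    return state + action

def imagine(start_state, plans):
    """Simulate each candidate plan 'offline': no real action is taken."""
    outcomes = {}
    for name, actions in plans.items():
        state = start_state
        for action in actions:
            state = world_model(state, action)  # imagined, not executed
        outcomes[name] = state
    return outcomes

# Compare two imagined futures from the same starting state.
print(imagine(0, {"adopt": [1, 2], "walk_away": [0, 0]}))
# prints {'adopt': 3, 'walk_away': 0}
```

The point of the sketch is only the structure: the same machinery that processes real input can be run on model-generated input instead, which is one (deflationary) reading of imagining possibilities before choosing among them.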

Here’s one last sense of imagination. Suppose after seeing the cat you ask yourself: What if that person hadn’t given it any water? Would the cat be okay? Without using any mental imagery, you try to reason it out. “Humans can last about a week without water, but probably less than that in very hot temperatures. Cats are smaller, and covered in fur, so they might not last as long. There might be a source of clean water nearby, but the cat might have been too dehydrated to get to it…etc.” Could this kind of imagination be found in an algorithm? Again, in a sense, yes. Game-playing AI algorithms like DreamerV3 “entertain” various possible moves and “play them out” to ascertain their value.
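Here is one way to sketch that kind of “entertaining and playing out” in a few lines: evaluate each legal move in a toy counting game (“race to 21”, where whoever brings the total to exactly 21 wins) by simulating random continuations and averaging the outcomes. This is an illustrative rollout sketch, not how DreamerV3 actually works (DreamerV3 learns a world model and plans within it); the game and the function names are invented for the example.

```python
import random

def legal_moves(total):
    """Moves add 1, 2, or 3 to the running total, without overshooting 21."""
    return [m for m in (1, 2, 3) if total + m <= 21]

def rollout(total, to_move):
    """Play the rest of the game randomly; return 1 if player 0 wins."""
    while total < 21:
        total += random.choice(legal_moves(total))
        if total == 21:
            return 1 if to_move == 0 else 0
        to_move = 1 - to_move  # alternate turns
    return 0

def best_move(total, n_rollouts=500):
    """'Entertain' each legal move and 'play it out' to estimate its value."""
    scores = {}
    for m in legal_moves(total):
        if total + m == 21:
            scores[m] = 1.0  # immediate win: no simulation needed
        else:
            # After our move, it is the opponent's (player 1's) turn.
            wins = sum(rollout(total + m, 1) for _ in range(n_rollouts))
            scores[m] = wins / n_rollouts
    return max(scores, key=scores.get)
```

With the total at 18, for instance, simulating continuations reveals that adding 3 (winning outright) scores best. The imagery-free reasoning in the paragraph above has the same shape: propose a possibility, run it forward, and see how it fares.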

We just considered three different senses of imagination and found them roughly implemented in AI. The argument might then be that we could do the same for every other sense of imagination.

“Of course not”

Sure, AI can do things that look imaginative, but “real” imagination requires intentionality, agency, and interaction with an external environment, at least.

Maybe, optimists reply, but AI can have functional analogs of these too. For example, AI can ground symbolic meanings by interacting with the world using cameras, microphones, and other sensors, and AI can make choices between options in an unforced way. Pessimists are unlikely to be impressed. They will accuse optimists of over-ascribing mental states and capacities to machines, perhaps due to anthropomorphizing tendencies.

Maybe what’s happening in the background is something like this. Pessimists see imagination as essentially good; it’s what makes us special and different from everything else. If computers ever replicated it, this would bring them to our level, or demote us to theirs. So, we defend ourselves by allowing that even if an algorithm could instantiate the more descriptive features that are often linked to imagination (e.g., imagination is quarantined from action, has a greater degree of freedom than other cognitive processes, and is able to mirror other cognitive processes in an “offline” way, etc.), it won’t satisfy the more evaluative features that are often linked to imagination (e.g., spontaneity, responsibility for creativity and empathy, anarchy, etc.). Optimists can reply that those evaluative features aren’t really necessary, or they could argue that algorithms can instantiate (versions of) those features, too.

After going back and forth like this for a while, you might be feeling a bit like,

“Who cares?”

In other words, maybe it doesn’t matter whether AI can imagine or not.

But there are good reasons to care. Every day, artists and scientists are trying to produce ideas that help solve complex problems and find meaning, and many of them do this by creating their own AI tools, specifically in ways that appear to extend the human imagination. The better we understand imagination, the better they can make these algorithms, and probably, the better their artistic and scientific work will be, which is good for all of us. In turn, the better these algorithms get at doing things that look like human imagination, the more we can learn about (a general sense of) imagination. So, something about AI and imagination matters. Okay, but in that case, what’s the right attitude to take?

“Can’t stop, won’t stop”

I went to a talk last week given by Devin Gouvêa, who reminded us of W. B. Gallie’s notion of “essentially contested concepts.” Examples of such concepts might include democracy, art, consciousness, violence, race, health, and science. Concepts like these are always in danger of fracturing into many more specific and more manageable subconcepts, in part because they always admit of many different interpretations (both across time and at any one time) and they tend to be internally complex, which allows us to focus on different aspects of the concept. At the same time, we want to keep each contested concept together as a single concept, because such concepts are organized around clear exemplars that we all understand, and they matter for drawing attention to significant achievements.

I think imagination is an essentially contested concept. If that’s true, how can we decide if AI (or anything else) instantiates an essentially contested concept? One way to proceed is to collect every characterization of imagination that we can find, and ask whether, for each of these, AI could (or already does) satisfy that definition. Interesting though that might be, it would ignore the essentially contested nature of imagination.

To see why, consider other essentially contested concepts, like art, democracy, or consciousness. Can AI produce “real” art? Does AI strengthen or weaken democracy? Is AI conscious? How should we answer these questions? Suppose we made a list of all the different characterizations of each of them, and we all agreed on all the definitions. We still wouldn’t agree on whether any given characterization was satisfied by any current or future AI, because each of us has (and recognizes that we have) our own characterization of that concept which colours our interpretation of all the others. At the same time, each of us recognizes that our characterization is not the only one, and that there are many others which are justified by similarly compelling exemplars, and yet we find ourselves unwilling to give up our own characterization.

In this kind of situation, it doesn’t make sense to ignore the contested nature of the concept by dogmatically asserting which characterization is correct or insisting that we wait until everyone agrees on a single characterization. The problem isn’t that we’re not being inclusive enough, or pluralist enough. The problem is just that these concepts are essentially contested. The only way to make this go away is to remove the underlying reasons for contestation. In other words, change who we are and what we care about.

If this is right, we’re never going to agree about whether AI can imagine. But importantly, that doesn’t make the exercise a waste of time. Instead, it can prompt reflection on the underlying beliefs, values, and practices that motivate our willingness to categorize something as instantiating imagination or not. What matters to you about imagination might not be what matters to me about imagination, and that’s not always something we should try to fix. What we want to know is, (how) can AI help to promote those underlying values?