A post by Isabelle Wentworth
The capacity for creativity has long been used to index humanity—it’s part of what distinguishes us from the rest of the animal kingdom. This logic—like all forms of categorisation—has both inclusionary and exclusionary force. We’ve seen it finance archaeological expeditions in search of hominid art, and also become an instrument of oppression and empire—recall Thomas Jefferson’s justification for the expulsion of emancipated slaves from the United States: “among the blacks there is misery enough, God knows, but no poetry” (1781).
AI creativity looks set to challenge this logic. Fears about what artificial creativity means for us have surfaced before—bolstered by AI story-grammar storytelling models in the early 2000s—but they have taken on new urgency since the creation of AI that can not only analyse data but learn its underlying patterns to generate original material.
Of course, it’s important to know whether AI is genuinely creative, or whether its output just mimics creativity. That answer will depend on which theory of creativity we decide to use, but before we can make that decision it’s important to ensure we’ve addressed all the factors of ‘creativity’ in the first place. Here we might look to Mel Rhodes’ conceptual model of the dimensions of creativity. In this model, we have the product (that is, the actual creative output, such as the text or artwork); the person (the creator of the art); and the process (the activity through which the work comes into being) (Rhodes, 1961).
Conceptual work on AI creativity has focused in large part on the person: the fact that AI texts come from a machine rather than a human. This is, intuitively, very important—reading something you know came from an AI generator as opposed to a person feels quite different: instead of a moment of human connection, the co-construction of meaning becomes an exercise in the uncanny. So analysing this feeling by looking at the characteristics of the creator figure makes a lot of sense. For example, author Bernard Erik Torres suggests that the difference is that AI creativity comes from a being which “lacks the emotional depth and authenticity” of humans (Torres, 2023). Niklas Hageback describes AI as crucially lacking the creative spark. Thomas Hajdu (2023) and Stephen Marche (2023) have both looked at this question through the idea of “authorship”. Framed in this way, AI creativity presents a sequel to Barthes’ death of the author: the robot rebirth.
Or, as many literary critics have done, you can focus on the product: an object-centered view of creativity. You might see whether or not the output has the features—narrativity, coherence, originality, whatever aesthetic criteria you might want to impose—necessary to count as creative.
Considered on their own, both these approaches have problems. Looking only at the ‘person’ reifies the creator figure as an isolated actor, an aesthetics of “autonomy and genius”, as Hannes Bajohr (2022) puts it. Equally, looking only at the product is going to run into trouble as soon as AI is able to produce narratives which are impossible to distinguish from those written by a competent author—which is, arguably, where we’re at right now.
Something to note is that provenance and product are what you might call static dimensions of creativity. Yet creation itself is an inherently temporal process: the fact that it takes place over time is a defining rather than contingent aspect of creativity. One impulse is to look at the moment of creative inspiration, the generation of the idea, as a sui generis act. But like all other cognitive acts, creativity is situated, embodied, and – importantly – enduring: it takes time, over a number of intersecting temporal scales. On an unconscious level, creativity requires information processing, happening over the minimal temporal extensions required for thought and perception. You might call this the “microlayer” of time. On a conscious level, lived situations and experienced narratives constituting memory and learning can be defined by their broader temporal horizons—hours, days, and years (what you might call a “macrolayer” of time). These two layers interact, so that even the so-called creative moment is not reducible to a single point in time.
Scholars of AI have suggested that AI will be able to do away with the durational labour of creativity: AI can do the legwork of the creative process, leaving humans free to just come up with the good ideas. But such a gap between ideation and execution is illusory. The ebb and flow of creativity are influenced by the passage of time and the evolving nature of the creative work (Harris, 2021). The artist works both with and through the medium – the paintbrush, the pen, the keyboard, the technology of the text itself. These are not just records of thought: rather, the outside world is recruited in the creative process. It’s very hard to draw boundaries around such a creative ecosystem: even beyond the individual scale, creativity is intertwined with social and cultural forces which themselves occur over different temporal scales. Its complex temporal infrastructure is partly why creativity cannot be formalized and abstracted.
The importance of time to creative process suggests this temporal dimension ought to be included in the decision of whether something counts as creative or not. Maybe we need to ask not where AI texts come from, but when.
This is a sticking point for AI creativity: interestingly, it has been described as atemporal in a number of important ways. Its training data is, as scholars like Kate Crawford and Hannes Bajohr have pointed out, decontextualised and thus dehistoricised—indeed, Michele Elam calls this ‘algorithmic ahistoricity’ (2023). This does not mean that algorithms cannot be trained on historically accurate data, but rather that, for the purpose of programming, the data itself is treated as not having a history.
To explain this, we need a little more background. The architecture behind most modern language models is known as a transformer, which relies on two key mechanisms: positional encoding and self-attention. Each token (that is, a quantum of language in the training data) is assigned a vector that marks its position within a sequence. Self-attention then lets the model weigh each token against every other token in the sequence, so that it grasps not only a token’s identity and location but also its relationship with all the other tokens. Thus, the significance of any word arises from its connection to the position of every other word (a kind of radical Saussurean linguistic relativity).
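For readers curious about the mechanics, the two ideas above can be shown in miniature. The following sketch is deliberately toy-sized—hand-rolled vectors rather than a trained model—but it implements the standard sinusoidal positional encoding and a single step of scaled dot-product attention, which are the real mechanisms transformers use:

```python
import math

def positional_encoding(pos, d_model):
    # Sinusoidal positional encoding: each sequence position gets a
    # unique vector, so "where a token sits" becomes part of its value.
    return [
        math.sin(pos / 10000 ** (i / d_model)) if i % 2 == 0
        else math.cos(pos / 10000 ** ((i - 1) / d_model))
        for i in range(d_model)
    ]

def attention_weights(query, keys):
    # Scaled dot-product attention: the query token's relation to every
    # token in the sequence, as a softmax over dot-product scores.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy sequence: tokens 0 and 2 are "the same word" at different positions.
d_model = 4
tokens = [[1.0, 0.0, 0.0, 0.0],
          [0.0, 1.0, 0.0, 0.0],
          [1.0, 0.0, 0.0, 0.0]]
inputs = [[t + p for t, p in zip(tok, positional_encoding(pos, d_model))]
          for pos, tok in enumerate(tokens)]

# Identical tokens now have different vectors, because position is
# baked in; attention relates the first token to the whole sequence.
weights = attention_weights(inputs[0], inputs)
print(weights)
```

The point of the demo is the one made in the paragraph above: a word’s representation is never just the word itself, but the word in relation to the position of every other word in the window.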
Training data are processed at a granular level, within a local context window. So while the mechanisms of positioning and self-attention allow the model to consider nearby tokens, they don’t inherently capture the global historical context of the entire document or text. All AI “knows” about the history of its data is what is visible within a limited window. Additionally, AI’s training data is often sampled in a way that doesn't preserve the original chronological order of its sources. Therefore, the model doesn't learn the temporal relationships or historical order of the documents in its training data.
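The loss of chronology described here can also be seen in miniature. Real training pipelines are far more elaborate, but this hedged sketch of a typical preprocessing step—concatenate documents, cut the text into fixed-size windows, shuffle the windows—shows how document boundaries, dates, and original order simply disappear before the model ever sees the data (the corpus and window size are invented for illustration):

```python
import random

# A toy "corpus": documents in historical order, with dates attached.
corpus = [
    ("1850 novel", "it was the best of times it was the worst of times"),
    ("1920 essay", "the centre cannot hold mere anarchy is loosed"),
    ("2020 blog", "the model simply predicts the next likely token"),
]

# Typical preprocessing: concatenate everything, split into fixed-size
# context windows, then shuffle the windows before training.
window_size = 4
text = " ".join(body for _, body in corpus).split()
windows = [text[i:i + window_size]
           for i in range(0, len(text), window_size)]
random.shuffle(windows)

# Each training example is now just a run of nearby tokens: the dates,
# the document boundaries, and the historical sequence are all gone.
for w in windows:
    print(w)
```

Nothing in the shuffled windows records which document a token came from or when it was written—only local co-occurrence survives, which is the sense in which the data is treated as not having a history.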
A final feature of AI’s atemporality relates to its probabilistic prediction. You might think, on the one hand, that the way AI uses its data to make predictions or create content isn’t so different to how humans use experience, practice, and learning to create content. But the flow of information in human learning is profoundly bidirectional—new information is constantly being used to update beliefs, in ways which actually modify memory. This helps to explain why our memory is routinely fallible: new theories suggest that forgetting and revising our memories help us to adapt to new situations and environments (Vecchi & Gatti, 2020). On the other hand, while AI gets some feedback—either through reinforcement training or prompts—there is a huge asymmetry in the amount of updated information going in vs. the existing training data, which remains largely unchanged. This is how, as Cathy O’Neil has put it, generative AI codifies the past, “injecting yesterday’s prejudice into tomorrow’s decision-making” (O’Neil, 2021). This ahistoricity is partly why, as Kate Crawford (2021) points out, generative AI has 'baked in' biases of the kind that humanities scholars have been engaged in critiquing for decades.
So, given the importance of temporality to creativity, and the difference between the temporal dynamics of AI and human creativity, it doesn’t seem like AI is about to supplant human creativity just yet. But this story of creativity might be upended by instances of hybrid co-creation: in cases like the AI program Sudowrite, which acts as a kind of co-author, helpfully providing everything from descriptions for characters to resolutions of the narrative arc. What happens when creative autonomy is genuinely distributed over both human and AI systems? Is this a difference in kind from other examples of human–technology collaboration—where the artist or writer recruits their pen, paper, or keyboard—or just degree? Both enactive and ecological approaches to creativity have often challenged the common idea of creativity as a feature of some autonomous Enlightenment human. Maybe the question of whether AI is truly creative is missing the point. Perhaps the real issue is, are we?
References
Bajohr, H. (2022). The Paradox of Anthroponormative Restriction: Artistic Artificial Intelligence and Literary Writing. CounterText, 8(2), 262–282. https://doi.org/10.3366/count.2022.0270
Elam, M. (2023). Poetry Will Not Optimize; or, What Is Literature to AI? American Literature, 95(2), 281–303. https://doi.org/10.1215/00029831-10575077
Harris, D. (2021). Creative Agency. Springer International Publishing. https://doi.org/10.1007/978-3-030-77434-9
O’Neil, C. (2021). Ace at Any Age (T. Willcocks, Interviewer) [Medium]. https://medium.com/@tashwillcocks/cathy-oneil-ace-at-any-age-1b47f1a9d34d
Rhodes, M. (1961). An Analysis of Creativity. The Phi Delta Kappan, 42(7), 305–310.
The Creative Penn. (2023, May 12). Intentionality, Beauty, and Authorship: Co-Writing With AI with Stephen Marche [Video]. YouTube. https://www.youtube.com/watch?v=LMPxe9eKaCU
Vecchi, T., & Gatti, D. (2020). Memory as Prediction: From Looking Back to Looking Forward. MIT Press.