A post by Melvin Chen.
Do androids dream? That is the question that Philip K. Dick’s protagonist Rick Deckard asks himself. Human beings (unless aphantasic) are able to conjure up mental images of the sheep that they count before sleeping. Can machines or programs imagine, daydream, and dream? Mahadevan (2018) proposes that we are on the cusp of imagination science, one of whose primary concerns will be the design of imagination machines.
Programs have been written that are capable of generating jokes (Kim Binsted’s JAPE), producing line-drawings that have been exhibited at such galleries as the Tate (Harold Cohen’s AARON), composing music in several styles reminiscent of such greats as Vivaldi and Mozart (David Cope’s Emmy), proving geometry theorems (Herb Gelernter’s IBM program), and inducing quantitative laws from empirical data (Pat Langley, Gary Bradshaw, Jan Zytkow, and Herbert Simon’s BACON). In recent years, Dartmouth has been hosting Turing Tests in creativity in three categories: short stories, sonnets, and dance music DJ sets. A question that might have seemed far-fetched to Philip K. Dick’s 1968 readership no longer does: are we entering the age of the imagination machine? In this post, I will provide a brief and non-exhaustive survey of some plausible responses to these imagination machines and of the related prospects for our understanding of the creative imagination.
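To make the idea of law induction concrete: BACON searched for simple invariants in numerical data. The following toy sketch (an illustration only, not Langley and colleagues’ actual system) recovers Boyle’s law from pressure–volume data by noticing that the product of the two variables is constant:

```python
# Toy illustration of BACON-style law induction: given numeric data for two
# variables, test simple candidate combinations (ratio, product) for invariance.
# This is a didactic sketch, not the actual BACON system of Langley et al.

def induce_law(xs, ys, tol=1e-6):
    """Return a description of an invariant relation between xs and ys, if any."""
    candidates = {
        "x/y is constant": [x / y for x, y in zip(xs, ys)],
        "x*y is constant": [x * y for x, y in zip(xs, ys)],
    }
    for name, values in candidates.items():
        # An invariant: the candidate quantity barely varies across observations.
        if max(values) - min(values) <= tol * max(abs(v) for v in values):
            return name, values[0]
    return None

# Boyle's law data: pressure halves as volume doubles, so P*V is constant.
pressures = [8.0, 4.0, 2.0, 1.0]
volumes = [1.0, 2.0, 4.0, 8.0]
print(induce_law(pressures, volumes))  # ('x*y is constant', 8.0)
```

Crude as it is, the sketch shows why such discovery is mechanizable in principle: the search over candidate laws and the invariance test are both fully specifiable procedures.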
The Humans-as-Machines Response. If mechanical systems are capable of these creative feats of invention and discovery on the artistic, musical, mathematical, and scientific fronts, then perhaps human beings are no more than elaborate bits of clockwork (at least on the creative imaginative front). Can computers do everything the human mind can? This is the question that Penrose poses in The Emperor’s New Mind (1989). Although Penrose replies in the negative, one might hold that JAPE, AARON, Emmy, and BACON demonstrate that facets of human thinking having to do with the creative imagination can be emulated by sufficiently sophisticated machines. If algorithms are appropriately specified and sufficiently sophisticated, creative processes that employ the imagination can be mechanized in machines that technologically implement those algorithms. If humans are machines, then they will be subject to whatever limitations (e.g. Gödelian limitations) apply to these machines (Nilsson, 2010). The creative imagination will be de-mystified, and the Romantic myth of the creative genius will be dissolved once and for all.
The Many-Roads-Lead-to-Rome Response. It is traditionally held that consciousness, intelligence, and the imagination supervene on certain physical and biochemical properties: only the right kind of stuff (viz. neuroprotein) can support creative imagining. If, however, silicon-and-metal-based machines are capable of creative output, then we appear to be led to the multiple realizability thesis, according to which the same mental state of creatively imagining can be realized by different physical states. Just as many roads lead to Rome, mental states involving creative uses of the imagination ought not to be reduced to brain states. Naturalistic approaches to the philosophy of imagination have thus far sought integration with cognitive science (Nichols & Stich, 2003; Weinberg & Meskin, 2006). They might be advised to extend their gaze to salient developments within AI research as well.
The What-You-See-Is-Not-What-You-Get Response. While the output of JAPE, AARON, Emmy, and BACON might be ostensibly creative, what you see need not be what you get. For starters, we could still distinguish between the creative output and the creative process (as is typically done in creative cognition research). These machines remain the product of human ingenuity, and the algorithmic processes that are implemented in these machines cannot properly count as creative imaginative processes. After all, since these machines can only do what they are programmed to do, they fail to take us by surprise and cannot achieve any novelty (Lovelace, 1953). The What-You-See-Is-Not-What-You-Get Response is highly popular, and it is supported by, inter alia, the following distinct objections:
The Easy Dupe Objection. One could also deny that the ostensibly creative output generated by these programs is really creative. If machines are able to pass the Dartmouth-based Turing Tests in creativity, it is not because their output meets some creativity threshold; rather, human beings are easily duped. According to this skeptical line of reasoning, the Turing Tests in creativity and the more standard Turing Test confirm neither the creativity of the output generated by programs nor the conversational intelligence of programs, but merely the more general fact that human beings are easily deceived into drawing unwarranted conclusions.
The Lovelace Objection. As machines can only do what they have been programmed to do, the relevant creative process is merely derivative in these machines. If the ostensibly creative output is indeed creative, then credit ought properly to be assigned to the human algorithm designers and AI researchers rather than to the machines. The Lovelace Objection is more plausible if AI continues to be understood in the classical computationalist sense, in which algorithms are designed with handwritten decision-making rules. In a machine-learning context, however, algorithms have the capacity to define or modify their decision-making rules autonomously. As algorithms come to rely increasingly on machine-learning capacities, human input will be correspondingly minimized.
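The distinction between handwritten and learned rules can be made concrete with a minimal sketch (a deliberately simplified illustration, not any particular AI system): in the first classifier below the decision rule is fixed by the programmer, while in the second the threshold is induced from labelled data, so the final rule was never written down by a human in advance.

```python
# Illustrative contrast: a handwritten decision rule vs. one derived from data.

def classify_handwritten(value):
    # Classical approach: the threshold (10) is fixed by the human designer.
    return "high" if value > 10 else "low"

def fit_threshold(examples):
    # Learning approach: place the threshold midway between the largest "low"
    # example and the smallest "high" example, so the final rule depends on
    # the training data rather than on the programmer's choice.
    highs = [v for v, label in examples if label == "high"]
    lows = [v for v, label in examples if label == "low"]
    return (min(highs) + max(lows)) / 2

data = [(3, "low"), (5, "low"), (20, "high"), (30, "high")]
threshold = fit_threshold(data)  # 12.5, induced from the data

def classify_learned(value):
    return "high" if value > threshold else "low"

print(classify_handwritten(12), classify_learned(12))  # prints: high low
```

The two classifiers disagree on the value 12 precisely because one rule was authored and the other induced; the Lovelace Objection must say whether credit for the induced rule still traces back, without remainder, to the humans who wrote the learning procedure.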
The Vagueness Objection. Turing (1950) and Minsky (2006) have notoriously given short shrift to the questions ‘Can machines think?’ and ‘Are machines capable of consciousness?’. They would have dismissed Deckard’s question on behaviorist grounds. Thinking, consciousness, dreaming, and imagining are vague, ambiguous, mysterious notions, and it simply will not do to pose questions that include these notions. If JAPE, AARON, Emmy, and BACON are capable of generating creative output, we can only conclude that they exhibit at least some of the behavioral dispositions of which human producers of creative output are capable. Whether or not these machines and programs can exercise creative processes that employ the use of the imagination is a question we should ultimately refrain from answering, given the gappiness of the notion of the creative imagination.
A final thought: when encountering the creative output of human beings, it is relatively uncontroversial for us to make a causal inference in favour of the existence of an underlying mechanism for this creativity (typically involving creative uses of the imagination) in these individuals. If we resist making the same inferential move when encountering the ostensibly creative output of machines, do we do so simply because of our species chauvinism or anthropocentric bias? How do we avoid being invidiously discriminatory in our responses to these imagination machines? To what extent will these imagination machines alter our more general understanding of the imagination (whether by their heroic failure or successful emulation of the uses we make of our creative imagination)? I would love to hear your thoughts!
Update: a modified version of this post has been published in AI & Society. You can find the published version here.
This should come as no surprise to us: Dartmouth College could well lay claim to being the spiritual home of AI. The term ‘AI’ was originally coined to distinguish the subject matter of a Dartmouth-based summer 1956 research workshop from automata studies and cybernetics research. This Dartmouth-based ‘Summer Research Project on Artificial Intelligence’ workshop would in turn set the agenda for AI research and usher in the classical AI era.
Besides (and as things stand), there remains a worrying lack of consensus about assessment criteria for creativity. Having addressed this elsewhere (Chen, 2018), I do not intend to revisit my arguments here.
 Weizenbaum (1966) provides a similar objection to the Turing Test.
Chen, Melvin. 2018. ‘Criterial Problems in Creative Cognition Research,’ in Philosophical Psychology, Vol. 31 No. 3, pp. 368-82
Lovelace, Ada. 1953. ‘Notes on Menabrea’s Sketch of the Analytical Engine Invented by Charles Babbage,’ in Faster than Thought, ed. Bertram Vivian Bowden, London: Sir Isaac Pitman & Sons
Mahadevan, Sridhar. 2018. ‘Imagination Machines: A New Challenge for Artificial Intelligence,’ AAAI Conference, link available at <https://people.cs.umass.edu/~mahadeva/papers/aaai2018-imagination.pdf>
Minsky, Marvin. 2006. The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind, New York: Simon & Schuster
Nichols, Shaun & Stephen Stich. 2003. Mindreading: An Integrated Account of Pretence, Self-Awareness, and Understanding Other Minds, Oxford: Oxford University Press
Nilsson, Nils. 2010. The Quest for Artificial Intelligence, Cambridge: Cambridge University Press
Penrose, Roger. 1989. The Emperor’s New Mind, Oxford: Oxford University Press
Turing, Alan M. 1950. ‘Computing Machinery and Intelligence,’ in Mind, Vol. 59 No. 236, pp. 433-60
Weinberg, Jonathan & Aaron Meskin. 2006. ‘Puzzling over the Imagination: Philosophical Problems, Architectural Solutions,’ in The Architecture of the Imagination: New Essays on Pretence, Possibility, and Fiction, ed. Shaun Nichols, Oxford: Oxford University Press, pp. 175-202
Weizenbaum, Joseph. 1966. ‘ELIZA – A Computer Program for the Study of Natural Language Communication Between Man and Machine,’ in Communications of the ACM, Vol. 9 No. 1, pp. 36-45