What is life, and what makes human life unique? With the rise of the life sciences and Darwin’s theory of evolution by natural selection in the nineteenth century, new answers to these questions were proposed that were deeply at odds with traditional understandings and beliefs. With the advent in the twentieth century of new, life-altering technologies like genetic engineering, and life-simulating sciences like Artificial Life (ALife), these questions became even more insistent. Moreover, after World War II, efforts to build fast, intelligent machines and the subsequent development of the computer made the assumption of human intellectual superiority seem uncertain and sure to be challenged, especially since the new science of Artificial Intelligence seemed to lead inexorably to the construction of superhuman machine intelligence. Indeed, both ALife and Artificial Intelligence (AI) dramatically encouraged the thought that the opposition between the natural and the artificial, the born and the made – an opposition dating back to that of phusis versus techne in ancient Greek culture – was no longer so hard and fast, and certainly not inevitable. Yet this philosophical conundrum was hardly the central issue or worry. Rather, it was the nagging possibility that henceforth the evolutionary dynamic might begin to act on a biosphere soon active with non-natural life forms and that its crowning achievement – namely humanity itself – might eventually be displaced and superseded by its own technical invention. In short, many feared that the future would be determined by some cyborgian, post-biological form of the posthuman, or that the human species might be eclipsed altogether as evolution’s torch of life and intelligence passed to its artificial progeny.
It was inevitable, therefore, that the possibilities of both ALife and AI would begin to be explored, variously and even idiosyncratically, by literary writers. Here, “ALife” will simply refer to new and non-natural forms of life brought into existence through external and technical means at least initially under human control; similarly, “AI” will refer to some kind of human-constructed machine intelligence (usually an advanced computer) capable of performing actions of such complexity that they require a level of intelligence comparable to that of humans.1 As we might expect – given that life has always been assumed to be a precondition for intelligence – ALife was of interest to imaginative writers long before AI.
Specifically, ALife became possible as a fictional interest with the beginnings of the properly scientific study of life, that is, with the emergence of biology in the late eighteenth and early nineteenth centuries, whereas AI, with rare exceptions, became a serious fictional interest only after the birth of the computer.2 Interestingly, the official births of the professional scientific disciplines devoted to ALife and AI – in 1987 and 1956, respectively – reverse this chronological order. However, in regard to ALife and AI as fictional themes, the most important background influence was not only the computer but also the immense transformation of biology and the life sciences by cybernetics, information theory, and modern genetics (specifically, the discovery in 1953 of how DNA functions). For many readers, in fact, the contemporary emergence of these themes in fiction will be associated with the historical amalgamation of technics and science in what has become known as technoscience and its more recent condensation, cyborg science.3
No doubt the first modern narrative about ALife is Mary Shelley’s novel Frankenstein. It was followed by a number of well-known literary classics that, from the contemporary perspective that now post-dates the official inauguration of the new science of ALife, could well be said to be concerned with ALife avant la lettre. Specific examples would include H.G. Wells’s The Island of Dr. Moreau, Karel Capek’s R.U.R., Aldous Huxley’s Brave New World, and Philip K. Dick’s We Can Build You. However, with the accelerated development of computer technology, machine intelligence as a source of worry or “problem” theme becomes more prominent, particularly in the rapidly growing new popular genres of science fiction and film. Nevertheless, although ALife and AI can be clearly distinguished as two new sciences of the artificial, they do not always operate as distinctly different fictional interests, but are often intricately related in a number of interesting ways. For example, in Astro Teller’s novel exegesis (1997) a computer program – specifically, a data miner called “Edgar” – unaccountably becomes “smart”; in the special terms of AI, he or “it” is smart enough to pass the Turing test. However, the protagonist Alice, the human with whom Edgar regularly communicates, openly doubts that he is in any real or biological sense “alive.”4 Conversely, Michael Crichton’s novel Prey (2002) combines both ALife and AI: the nano-swarms engineered by the company Xymos Technology, while clearly of unnatural origin, seem “alive” by any standard biological definition – they require food, reproduce, and evolve – and thus are a form of ALife. But they are not especially intelligent. In fact, their intelligence is based exclusively on a few algorithms that model simple predatory and learning behaviors. Thus the swarms never display anything approaching human intelligence and remain a very limited form of AI.5
The following part examines in more detail the specific ways in which ALife and AI are related, intermixed, or remain separate, albeit sometimes only implicitly, in a range of examples from contemporary fiction. But before doing so we want to consider Frankenstein as a first rough template for what we shall call ALife fiction’s characteristic “thematic” – in what sense can it be said that this non-natural form or entity is “alive”? – as well as its accompanying and necessary “problematic” – does this life-form participate in or have anything to do with a life cycle? does it grow, learn, die, and, most important, reproduce? Within this framework, we shall then consider what happens with the entrance of AI into fiction, and how these relations are variously re-articulated, specifically in relation to the body and the question of death.
First published in 1818, Mary Shelley’s Frankenstein is usually read as a novel about a scientist’s continuing refusal to assume responsibility for his Promethean creation. Shelley’s narrative also manifests two thematic interests that will become central not only to the official new science of ALife, but also to a significant body of contemporary fiction that bears the latter’s stamp or ethos even when there is no evidence of direct influence. The first interest is not simply in the creation – or re-creation – of a life form, but also in the definition of life and how it is to be distinguished from non-life or inert matter. In Shelley’s novel this interest is inscribed in the “spark” that reanimates and thus brings to a living, self-aware state the assemblage of human bones, tissue, and organs that Victor Frankenstein has brought together on what is probably the first entrance of the dissecting table into fictional discourse; but it is also evident in the network of subtle references to the scientific debate between vitalism and materialism that had raged in London from 1814 to 1819 (much of it publicly staged) and in which Percy Shelley’s (and Byron’s) personal physician, William Lawrence, had participated.6
The second interest is reproduction and the attendant possibility of evolution, which enter the plot of Shelley’s novel at a later turning point. This occurs when Frankenstein promises the Monster – as he comes to refer to the Creature on whom he believes he has bestowed life – that he will fabricate for him a female partner if the Monster will cease hounding him and depart for South America with his new mate. Frankenstein, however, reneges on his side of the bargain. That Frankenstein will not repeat the act of creation both intensifies and leaves open to interpretation exactly how that act should be understood: as a human mimicking of divine creation or – in what amounts to a very different understanding of both human and vital agency – a setting up of the specific material conditions necessary for life’s emergence.
Throughout Frankenstein we are often made aware of the Creature’s frightful body and unbearable physical presence. The Creature is alive, but will always remain outside the life cycle. By contrast, there is never any question of the Creature’s intelligence. Similarly, in Capek’s play R.U.R. the intelligence of the robots is not at all an issue; it is, rather, the fact that they cannot and do not know how to reproduce. This is the secret that their human makers withhold from them. Thus in both Frankenstein and R.U.R., intelligence follows “naturally” from the fact of having a body, a living body, even if it originates in wholly artificial conditions. And here we can observe an absolute continuity with Huxley’s genetically and chemically engineered humans in Brave New World: in play and novel alike, levels of intelligence stem merely from different chemical gradients. However, all of this will change dramatically with the birth of the electronic or digital computer. Whereas the very concept of life requires a body, henceforth intelligence will seem to require only a computer or computational apparatus, which is usually made of inert matter. For the first time in human history, intelligence is divorced from life, thus making it possible to be intelligent but not alive.
This anomaly first becomes apparent in the stories that make up Isaac Asimov’s I, Robot (1950). Although robots like “Robbie” can walk and talk and play games with children, only a child would in fact think they are “alive.” As intelligent machines, they are both superior and inferior to humans: superior, because, unlike any human, they are designed for a specific purpose which they always accomplish – Robbie “just can’t help being faithful and loving and kind” (16); but inferior, because the Three Laws ensuring their subservience to and protection of humans are inscribed in their very make-up. The stories that comprise I, Robot demonstrate how these assumptions are variously instantiated and always borne out in their interactions with humans. However, in the final story, “The Evitable Conflict,” we witness a significant shift. In this story, highly intelligent Machines, described as “powerful positronic computers,” are in charge of organizing and overseeing the global economy, industrial production, and labor distribution. A problem arises with some small but troubling glitches in the actions of the Machines. But then a more searching analysis by Stephen Byerly, one of the elected World Co-ordinators, reveals that these apparent flaws are in fact secondary consequences of intentional acts. In effect, a gentle takeover by the Machines has been accomplished, precisely in order to prevent or minimize harm to humanity. The Three Laws of Robots designed to prevent such a takeover pertained only to human individuals, and they have now been superseded by a higher, more general law: “No Machine may harm humanity, or through inaction, allow humanity to come to harm” (191).
Whereas in the preceding stories the robots visibly interacted with humans, the Machines are completely invisible; that is, whereas the actions of robots and their consequences have always been directly observable, those of the Machines can only be inferred. To be sure, the “positronic computers” physically exist; they possess a specific material substratum, though it is of no concern to the characters, who never see this level. For them, the essential actions of the Machines – i.e., control by AI – are completely disembodied, and even the effects are not immediately and unequivocally evident. This uncertainty is subtly anticipated in “Evidence,” the penultimate story, when Dr. Susan Calvin, in the story’s proleptic narrative frame, reveals that she believes that Byerly himself is a super-intelligent robot. She admits that no one will ever know for sure, since Byerly, “when he decided to die … had himself atomized, so that there will never be any legal proof” (170). This was precisely when, Calvin adds, she discovered that the Machines “are running the world.” Thus Byerly’s dematerialization, the Machines’ invisibility, and their beneficent takeover of the human world are all part of the same sequence of events.7
The android theme, encapsulated in the question “Is it a human or a robot?”, is perhaps most richly explored by Philip K. Dick in two novels, We Can Build You (1962) and Do Androids Dream of Electric Sheep? (1968). Basically, Dick’s innovation is to introduce human simulacra in the place of the now-familiar robots. In obvious contrast to the visibly mechanical nature of the latter, human simulacra are fabrications that to the human eye and ear are indistinguishable from real human beings. Dick proceeds to thrust these characters into somewhat unusual dramatic situations, as when an android, ignorant of its artificial status, believes itself to be human. Next, Dick allows the rich dynamics of human psychology to come into play, exploring states that include empathy, projection, and schizoid alienation while subtly deconstructing any strict opposition between human and android. In fact, the emotional resonances of the scenes often stem from the realization on the part of the main character (and the reader) that genuine feelings and respect for others do not line up with and cannot be predicated on this opposition. Indeed, Dick’s importance for this article lies precisely in how compellingly he renders the android, and how he makes its appearance in the human world seem inevitable, following the developments of cybernetics and electronics.8
As a writer, Dick is not that interested in exploring the possibility of either ALife or AI in their own terms, but only as a means of throwing light on the complexity of human emotional reality. Nevertheless, the numerous screen adaptations of his fiction illustrate how easily his scenarios lend themselves to further exploration. For instance, Dick’s story “Second Variety” centers on the theme of evolutionary development of an ALife form, and it is made even more vivid in the film adaptation Screamers. The action is set on a nearby planet where two exhausted armies (allegorically representing the United States and Russia) are nearing the end of a long and fruitless war. The Alliance forces are guarded by small robotic thrashing blades that emerge from underground and attack any human not wearing a protective device. Although no one can remember the details, the blades are manufactured by a self-sustaining factory set up underground early in the war. Having noticed small, subtle changes in their model numbers, and wondering why the machines remove human body parts after attacks, the commander soon discovers evidence of a robotic evolution of these simple killing machines into camouflaged, smarter, and more deadly forms – specifically, from the mobile blades to a helpless fake child, to a wounded soldier, and then to an alluring female android.
The theme of robotic evolution is central to Rudy Rucker’s “ware” tetralogy: Software (1987), Wetware (1988), Freeware (1998) and Realware (2000). Besides being Dick’s most important American sci-fi heir, Rucker is a mathematician and professor of computer science who did research on cellular automata and attended the inaugural conference on ALife. It is hardly surprising, then, that Rucker completely refashions Dick’s android theme according to more contemporary notions of information theory, computer technology, and evolutionary theory, internalizing in his fiction some of the essentials of their operative concepts. Software is primarily the story of a cyberneticist who had figured out that the only way robots could be made smarter was to evolve them, that is, to design them to build copies of themselves while also introducing selection and mutation into the process. With the robots thus “liberated” from human control, the human-robot opposition also becomes complicated, for according to the evolutionary dynamic the “boppers” must struggle and compete among themselves. This produces not only a diversity but a division into near-warring factions of robots. Unfolding unpredictably but according to its own rigorous logic, Rucker’s sequel Wetware explores new and more bizarre intelligent life-forms:
Willie felt his new moldie snuggle around him, thickening here and bracing there. Stahn and Wendy’s symbiotes were doing the same: forming themselves into long, legless streamlined shapes with a flat strong fin at the bottom end. The sun was just rising as they hopped down to the water and swam off beneath the sparkling sea. (Rucker 1988: 182–83)
In contrast to Rucker’s zany exploration of the mixings and fusions of ALife and the biologically human in a clearly recognizable posthuman concoction, Greg Egan in Permutation City (1994) and Ellen Ullman in The Bug (2003) draw upon the actual science of ALife as it developed in the late 1980s and early 1990s in order to offer solemn, even obsessive reflections on what for them are the ultimately metaphysical differences that define life. In the “hard sf” style he has come to be known for, Egan depicts a compelling encounter between ALife forms that have grown and evolved from software developed by a scientist on the one hand, and a group of software humans who live in a postbiological virtual reality world on the other. Similarly, Ullman portrays two kinds of ALife. On the one hand there are the two forms of digital ALife represented by an ALife simulation of an ecosystem and a seemingly ineradicable software bug; on the other is the increasingly stripped-down and de-naturalized “life” of the human programmer who builds the first and then, as his job as a software developer dictates, tries to isolate and fix the second.
Permutation City envisions a future about fifty years away in which humans can live – either wholly or part time – in virtual reality as digital copies of their flesh-and-blood selves, thanks to a brain-scanning technology similar to that in Software (on VR fiction, see Johnston 2008b). The novel explores the difference between evolution, which is predicated on unavoidable death, and permutation, which tries to provide an immortal alternative. This exploration is achieved by drawing out the differences between the underlying, computational substrate rules that define the ALife world and that of the human Copies: the former is “bottom-up” and self-consistent, while the latter is “top-down” and patchwork. In sum, Egan presents an elaborate working out of a pun in his title. Within “permutation city” there is another mode of being: change and transformation “through mutation” – per mutation – and thus a more creative and unpredictable kind of transformation brought about by random changes in the elements themselves rather than in their sequence. The origin is not the seed that generates its own self-identical replication but the multiplicitous differentiation through mutation in a becoming-other or hetero-genesis. As in real ALife experiments, it cannot be known in advance what types of digital organism will emerge and how they will interact – both with the computer environment and with each other.
Is there a way to think about these differences in terms other than the duality of real (or natural) versus simulation, and thus without relying upon the protocols of mimesis and representation? The ALife scientist Tom Ray has argued that for him organic life simply provides a model for digital evolution, a worthy pursuit in itself that requires no further justification. Since Egan himself seems to believe that a digital form of existence is the inevitable next step for humanity, Permutation City may be seen as an effort to redefine the relationship of life to the artificial. As such, it is a transition to his next novel, Diaspora, where a digital existence is assumed as “natural,” while some humans – “the fleshers” – obstinately choose to remain in their biological bodies. Whatever the case, for a humanity that continues to construct ever more artificial universes for itself, the lesson of Permutation City is that only the integral coherence of each form of ALife can henceforth provide the countervailing balance that nature itself once provided.
In Ullman’s The Bug two kinds of “ALife” are at issue. The “bug” in the title refers to a seemingly ineradicable software bug in a new database application. When the bug appears, the interface freezes, then the whole system crashes. Initially the bug’s appearance is hardly surprising, for in the course of developing the new software hundreds have appeared, and been duly logged and fixed. But this one is different, and in two ways. First, when it appears, no one is able to get a “core dump” – meaning a readout of the machine’s exact state when it crashes, information that would give a fairly exact picture of the sequence that led to the crash and therefore a “trace” of the bug. And second, its appearance is intermittent and unpredictable. It seems to disappear for periods, then comes back at unexpected and highly inconvenient moments – as when a company representative is demo-ing the database. The bug, in short, seems to evince irrational, even lifelike behavior, thus providing a stark contrast with the second kind of ALife in the novel. The programmer whose job it is to fix the software “bug” has also been working at home on a simulated ecosystem of digital organisms, but these organisms only endlessly repeat fixed patterns. However, in a horrible irony, on the very night he commits suicide a slight change in code produces an evolutionary dynamic: the digital creatures begin to migrate and reproduce in families. A double irony, in fact, for it turns out that the software bug arises from a simple coding error. From this, a co-worker concludes that a computer will do only and exactly what is written in the code. But humans read and understand code in terms of what they think is there, what they think are the programmer’s intentions. Yet a chasm separates the way each state in a computer follows from a previous state, and the way that happens in the human mind. 
In these terms The Bug insists on the fundamental difference between the life in or of machines, and the tenuous and always vulnerable life of humans “close to the machine.”9
In an article published in Omni Magazine in 1983, the mathematician and science fiction writer Vernor Vinge formulated what has come to be widely known as “the technological singularity.” Vinge argues that the current acceleration of technological progress will soon result in superhumanly intelligent machines that will transform the human world beyond the capacity of innate human intelligence to understand it, much as the appearance of Homo sapiens brought about a world beyond the understanding of all other earthly life-forms. The “singularity” thus designates the passage from our human-constructed and therefore mostly knowable world into an unknowably new and alien one. In 2005, in his book The Singularity Is Near, the renowned inventor and AI futurist Ray Kurzweil recasts this speculative possibility as an historical inevitability, arguing that, like biological evolution, technological evolution brings about a “law of accelerating returns” through its creation of new capabilities that in turn become the means by which the evolutionary process can bootstrap itself to a higher stage. Buttressing Vinge’s prediction with a mass of empirically based data presented in graphs and charts with clear trend lines, Kurzweil argues that current computer technology is progressing toward the singularity at an exponential rate, and that we should see the construction of superhumanly intelligent machines within twenty to thirty years. Since these machines will soon construct even more intelligent machines, we’re not that far from “runaway AI” and a posthuman future.
This singularity, Vinge writes in 1983, “already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between … so that the world remains intelligible.” In one sense, of course, the “singularity” is simply a further development of the “takeover” by intelligent machines theme discussed earlier. What is different is the emphasis on a future in which many things would happen rapidly and then become unintelligible to humans; in a word, the singularity would be catastrophic. Furthermore, by its very nature, the idea of an unintelligible future is difficult to write about. Perhaps this is why Vinge suggests that writers are “haunted” by the idea. This could perhaps be said of William Gibson’s inaugural cyberpunk novel Neuromancer, published a year after the article in 1984, in which the plot is directed, if not fully controlled, by a super-intelligent but shackled (i.e., still intelligible) AI. However, after its unshackling, rather than taking over, Neuromancer-Wintermute loses all interest in the human world. Another example is Rudy Rucker’s The Hacker and the Ants (1993), which provides a detailed dramatic scenario of self-reproducing machines (both virtual and actual robots) which almost outwit the hacker protagonist. Perhaps the most notable attempt to offer a full and direct treatment of the singularity is Charles Stross’s Accelerando (2006), an often pyrotechnical blend of advanced AI, nanotechnology, quantum physics, and a host of cybernetic constructs, including self-aware financial instruments.
Like a great deal of contemporary sci-fi, Accelerando poses the question of whether nature is a totality of autonomous systems that can be mimicked and ultimately transcended by human science and technology, and what role politics and ethics will play in the evolution of the latter. This question is taken to a schematic extreme by Rudy Rucker in his novel Postsingular (2007), in which renegade computer scientists release self-replicating nanomachines called “nants” that destructively recode matter itself (both living and non-living forms) into a giant computational assemblage on which a simulation of earth called “Vearth” will run. Belatedly realizing what he has done, one of the scientists (with the help of his autistic son) figures out how to reverse this process by creating another kind of self-replicating machine which spreads over surfaces, interacting with life-forms and eradicating the nants. These “Orphids,” as they are called, soon self-organize into vast networks that replace the internet. When a second, upgraded wave of nants is launched, a magical, quantum physics solution is found. The result is a “post-digital Gaia,” in which there is no longer inert, non-living matter but “ubiquitous natural minds” called “silps.” According to Rucker’s novelistic vision, both digital Gaia and the post-digital Gaia that will follow are totalizing transformations in an unfolding dialectic (he is, after all, the great-grandson of Hegel) that will only end when there is no longer inert, non-living matter.
In sum, contemporary sci-fi novels about the singularity – by Vinge, Stross, Rucker, and others – seem less interested in the singularity itself than in the iterated creation of new possibilities, a position that keeps open the possibility of a distinctly human future within an increasing proliferation of new “signs of the singularity” (as Vinge calls them), such as recent developments in IA (or intelligence amplification through human–computer interfaces), a “smarter” internet, ubiquitous computing or what is sometimes called “digital Gaia,” and enhancements of the brain itself through neurological and/or genetic modification. Indeed, like the majority of contemporary sci-fi writers, Vinge seems more interested in intelligence amplification, which is a much more tractable theme than the singularity.10 This is certainly true of his novel Rainbows End (2006), which gives us a “pre-singularity” narrative in which the world remains knowable even as technologies like wearable computers, nearly omnipresent sensor arrays, and an increasing number of “smart” objects are dramatically changing human life. (Here it is also significant that the AI that is supposed to organize and carry out much of the complicated secret plot in the novel fails.)
However, in the high-tech transformation of the human world we cannot neglect the role assumed by low or narrow AI, especially when it is combined with the increasing functionality of distributed networks and the internet more generally.11 This is the focus of Daniel Suarez’s novel Daemon (2009), and the example with which we shall conclude. The novel’s action is triggered by a simple but unusual event. In its first few pages two internet game programmers die of what initially appear to be high-tech accidents. However, they are in fact very sophisticated, automated executions of a new and unprecedented kind. A complex online game world has become a transformational matrix for a new kind of fully distributed and automated society, engineered by a remorseless machine – the Daemon of the title. (As a technical term, a “daemon” refers to a small computer program or routine that runs invisibly in the background, usually performing housekeeping tasks like logging various activities or responding to low-level internal events.) Over the course of the novel we see how this more complex Daemon extends its reach into an increasing number of production and distribution networks, and thus into the economy at large, thereby slowly dismantling and rebuilding the world according to the implacable dictates of its own logic of efficiency and distributed, low-level intelligence.
Daemon is written in the fast-paced techno-thriller style of Tom Clancy and Michael Crichton. But perhaps concerned that only computer geeks will grasp the extent to which the technological foundations of this transformation are already in place and not simply his imaginary projections, Suarez has given many interviews and public lectures to emphasize precisely this point, and the growing importance of low or narrow AI. The rapidly expanding number and functionality of bots provide his primary example. Increasingly, he points out, bots power and direct low-level activities in the contemporary world, which amasses mountains of data and cannot function without them; bots record, retrieve, search for, sift through, and act upon these data. We’re aware, of course, of bot voices on the telephone and data miners on the Net, but much of what bots do is less visible. For example, much of finance management is automated by bots, which often decide whether or not we get a loan or mortgage; bots scan X-rays and MRIs, operate and monitor surveillance cameras all over the globe. They are unblinking eyes that not only watch us but record many of our movements and activities – spending habits, commercial transactions, and health records specifically – which other bots in turn analyze for patterns, store and sell on the market. In fact, the massive increase in cell phone and e-mail surveillance since 9/11 would not be possible without bots. Even the internet, which we commonly think of as a network of people using machines, is increasingly used for machine-to-machine exchange, specifically for EDI – electronic data interchange.
Suarez is not a Luddite; he is not interested in denying the conveniences bots provide, the labor and tedium they enable us to avoid. His concern, rather, is with the layering and extent of automation they are making possible, and with it the tendency to reduce the number of people making the important decisions in our lives. Our current society’s collective pursuit of hyper-efficiency, he believes, may be locking us into a Darwinian struggle with low or narrow AI. Suarez points out the exponential increase over the past few years in the number of bots, the amount of malware, and the growth of hard-drive space on our computers that is their ecological niche. While bots could certainly be a vector for human despotism, the greater danger is the collective human loss of control over our society, which increasingly functions on auto-pilot, as a vast inhuman machine, its operations no longer susceptible to human steering. As large-brained animals with complex motivations not reducible to efficiency, we may be creating an environment for ourselves in which we no longer enjoy an adaptive advantage, in a strange and tragic reversal of our entire human history. And this is exactly what we witness taking shape in Daemon. In contrast to all the fiction previously discussed, Daemon suggests that the most threatening ALife for the human future may be the low and mundane, barely intelligent life we are busy surrounding ourselves with, and that we have not yet learned to see.
Source: Clarke, Bruce, and Manuela Rossini. The Routledge Companion To Literature And Science. London: Routledge, 2012. Print.
1 For further discussion of these two new sciences, see Johnston (2008a).
2 Edgar Allan Poe’s 1836 essay about “The Turk,” a fake mechanical chess player that was widely exhibited in Europe and the U.S., may well mark the first time a literary writer expressed interest in AI.
3 For technoscience, see Hottois (1984) and Latour (1985, 1986); for cyborg science, see Haraway (1991) and Mirowski (2002).
4 Significantly, Edgar later self-terminates, repeating a pattern evident in Richard Powers’s earlier AI novel, Galatea 2.2 (1995), in which “Helen,” an intelligent neural net machine, self-terminates after she learns that she is not fully “alive.”
5 However, to enhance the plot’s drama, Crichton has the swarm enter into a “symbiotic” relationship with both the narrator’s wife and her co-worker. Since Crichton draws extensively on ALife and AI science, even including a lengthy bibliography, this produces a weakening or at least an anomalous effect, since there is no credible explanation for how this could have happened, in contrast to the production of the swarms themselves and their rapid evolution.
6 On this aspect of the novel, see Butler (1996).
7 The “takeover” and control of human life by intelligent machines has of course been a recurrent theme in sci-fi since Asimov. Two especially notable examples from the 1960s are D.F. Jones’s Colossus (1966), in which the Frankenstein-like Dr. Forbin realizes that the computer he has built for the U.S. military is “thinking” on its own and has linked up with its counterpart in the USSR; and Olof Johannesson’s The End of Man? (1966), which purports to be a history of life on Earth from the amoeba to the computer, but which is actually written by computers that have taken over because the human brain proved to be inadequate for solving the problems of human society. These novels, and especially the latter, may well have been influenced by Wiener (1964), who discusses computers as self-reproducing machines. However, compelling arguments by scientists for the likelihood or feasibility of such a takeover have been exceedingly rare until fairly recently. I return to this issue in the concluding section.
8 In her superb analysis of Dick’s “dark-haired girl” obsession, Hayles (1999) discusses Dick’s relation to cybernetics.
9 See Ullman (1997), an autobiographical study that provides some evidence that the co-worker’s reflections are not far from those of the author.
10 Here we also see blends with ALife forms, as in Linda Nagata’s Limit of Vision (2001), in which three scientists infect themselves with illegal artificial neurons bioengineered to repair brain damage, but which turn out to greatly enhance human cognition and intensify experience. The scientists die, but one succeeds in passing the “infection” – as it is called by world health authorities – to a Vietnamese community of children living in the Mekong Delta. Although the area is sealed off, it becomes apparent that this ALife form has initiated a new stage in human evolution.
11 By the 1990s, in fact, efforts to produce Strong AI (i.e., human-level intelligence) had mostly yielded to efforts to build machines and software systems of limited but highly practical intelligence.
Asimov, I. (1950) I, Robot, Greenwich, Conn.: Fawcett Crest, 1970.
Butler, M. (1996) “Frankenstein and radical science,” in J.P. Hunter (ed.) Frankenstein, Mary Shelley – the Norton critical edition, New York: Norton, pp. 302–13.
Crichton, M. (2002) Prey, New York: Avon Books.
Egan, G. (1995) Permutation City, New York: Harper.
Haraway, D. (1991) Simians, Cyborgs and Women: the reinvention of nature, New York: Routledge.
Hayles, N.K. (1999) How We Became Posthuman: virtual bodies in cybernetics, literature, and informatics, Chicago: University of Chicago Press.
Hottois, G. (1984) Le signe et la technique. La philosophie à l’épreuve de la technique, Paris: Aubier.
Johnston, J. (2008a) The Allure of Machinic Life: cybernetics, artificial life, and the new AI, Cambridge, Mass.: MIT Press.
——(2008b) “Abstract machines and new social spaces: the virtual reality novel and the dynamic of the virtual,” Information, Communication & Society, 11(6): 749–64.
Kurzweil, R. (2005) The Singularity is Near, New York: Viking.
Latour, B. (1985) Laboratory Life: the construction of scientific facts, Princeton: Princeton University Press.
——(1986) Science in Action, London: Open University Press.
Mirowski, P. (2002) Machine Dreams: economics becomes a cyborg science, Cambridge: Cambridge University Press.
Nagata, L. (2001) Limit of Vision, New York: Tor.
Rucker, R. (1982) Software, New York: Avon Books.
——(1988) Wetware, New York: Avon Books.
——(2007) Postsingular, New York: Tor.
Shelley, M. (1818) Frankenstein, Mary Shelley – the Norton critical edition, New York: Norton, 1996.
Stross, C. (2005) Accelerando, New York: Ace Books.
Suarez, D. (2009) Daemon, New York: Dutton.
Teller, A. (1997) exegesis, New York: Vintage Books.
Ullman, E. (1997) Close to the Machine: technophilia and its discontents, San Francisco: City Lights.
——(2003) The Bug, New York: Doubleday.
Vinge, V. (1983) “First Word,” Omni, January 10.
——(2006) Rainbows End, New York: Tor.
Wiener, N. (1964) God and Golem, Inc, Cambridge, Mass.: MIT Press.