Table of Contents

Sources and Acknowledgments

Introduction. Thought Experiments: Science Fiction as a Window into Philosophical Puzzles (Susan Schneider)

Part I: Could I Be in a “Matrix” or Computer Simulation?
1. Brain in a Vat (John Pollock)
2. Are You in a Computer Simulation? (Nick Bostrom)
3. Excerpt from The Republic (Plato)
4. Excerpt from The Meditations on First Philosophy (René Descartes)
5. The Matrix as Metaphysics (David J. Chalmers)

Part II: What Am I? Free Will and the Nature of Persons
6. Where Am I? (Daniel C. Dennett)
7. Personal Identity (Eric Olson)
8. Divided Minds and the Nature of Persons (Derek Parfit)
9. Who Am I? What Am I? (Ray Kurzweil)
10. Free Will and Determinism in the World of Minority Report (Michael Huemer)
11. The Book of Life: A Thought Experiment (Alvin I. Goldman)

Part III: Mind: Natural, Artificial, Hybrid, and “Super”
12. Robot Dreams (Isaac Asimov)
13. A Brain Speaks (Andy Clark)
14. The Mind as the Software of the Brain (Ned Block)
15. Cyborgs Unplugged (Andy Clark)
16. Consciousness in Human and Robot Minds (Daniel C. Dennett)
17. Superintelligence and Singularity (Ray Kurzweil)

Part IV: Ethical and Political Issues
18. The Man on the Moon (George J. Annas)
19. Mindscan: Transcending and Enhancing the Human Brain (Susan Schneider)
20. The Doomsday Argument (John Leslie)
21. Asimov’s “Three Laws of Robotics” and Machine Metaethics (Susan Leigh Anderson)
22. Ethical Issues in Advanced Artificial Intelligence (Nick Bostrom)

Part V: Space and Time
23. A Sound of Thunder (Ray Bradbury)
24. Time (Theodore Sider)
25. The Paradoxes of Time Travel (David Lewis)
26. The Quantum Physics of Time Travel (David Deutsch and Michael Lockwood)
27. Miracles and Wonders: Science Fiction as Epistemology (Richard Hanley)

SOURCES AND ACKNOWLEDGMENTS

Part I

Chapter 1, “Brain in a Vat” (John Pollock, Chapter 1, “The Problems of Knowledge,” in Contemporary Theories of Knowledge, Rowman & Littlefield Publishers, Inc., 1986, pp. 1-3, reprinted by permission of the publishers); Chapter 2, “Are You in a Computer Simulation?” (Nick Bostrom, “The Simulation Argument: Why the Probability that You Are Living in a Matrix is Quite High,” in Times Higher Education Supplement, 16 May 2003, pp. 1-5, reprinted by permission of Times Higher Education and Nick Bostrom); Chapter 3, “Excerpt from The Republic” (Plato, The Republic, trans. Benjamin Jowett, P.F. Collier & Son, Colonial Press, 1901); Chapter 4, “Excerpt from The Meditations on First Philosophy” (René Descartes, Meditation I, trans. John Veitch, The Classical Library, 1901); Chapter 5, “The Matrix as Metaphysics” (David J. Chalmers, reprinted by permission of the author).

Part II

Chapter 6, “Where Am I?” (Daniel C. Dennett, Brainstorms, Bradford Books, 1978, pp. 356-64); Chapter 7, “Personal Identity” (Eric Olson, in Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy, Winter 2008); Chapter 8, “Divided Minds and the Nature of Persons” (Derek Parfit, in Mindwaves, ed. Colin Blakemore and Susan Greenfield, Basil Blackwell, 1987, pp. 351-6, reprinted by permission of Blackwell Publishing); Chapter 9, “Who Am I? What Am I?” (Ray Kurzweil, The Singularity is Near: When Humans Transcend Biology, Viking, 2005, pp. 382-7); Chapter 10, “Free Will and Determinism in the World of Minority Report” (Michael Huemer); Chapter 11, “The Book of Life: A Thought Experiment” (Alvin I. Goldman, “Actions, Predictions and Books of Life,” American Philosophical Quarterly, 5.3 (1968), pp. 22-3).

Part III

Chapter 12, “Robot Dreams” (Isaac Asimov, in Robot Dreams, Byron Preiss Visual Publications Inc., 1986, pp. 25-50); Chapter 13, “A Brain Speaks” (Andy Clark, from Being There: Putting Brain, Body and World Together Again, MIT Press, 1996, pp. 223-7, © 1996 Massachusetts Institute of Technology, by permission of MIT Press); Chapter 14, “The Mind as the Software of the Brain” (Ned Block, from An Invitation to Cognitive Science, ed. D. Osherson, L. Gleitman, S. Kosslyn, E. Smith, and S. Sternberg, MIT Press, 1995); Chapter 15, “Cyborgs Unplugged” (Andy Clark, from Natural Born Cyborgs, Oxford University Press, 2007, pp. 13-34, by permission of Oxford University Press, Inc.); Chapter 16, “Consciousness in Human and Robot Minds” (Daniel C. Dennett, from Cognition, Computation and Consciousness, Oxford University Press, pp. 1-11, by permission of the publishers); Chapter 17, “Superintelligence and Singularity” (Ray Kurzweil, Chapter 1 in The Singularity is Near: When Humans Transcend Biology, Viking, 2005, pp. 7-33).

Part IV

Chapter 18, “The Man on the Moon” (George J. Annas, from American Bioethics: Crossing Human Rights and Health Law Boundaries, Oxford University Press, 2004, pp. 29-42); Chapter 19, “Mindscan: Transcending and Enhancing the Human Brain” (Susan Schneider); Chapter 20, “The Doomsday Argument” (John Leslie); Chapter 21, “Asimov’s ‘Three Laws of Robotics’ and Machine Metaethics” (Susan Leigh Anderson, from Proceedings of the AAAI Fall Symposium on Machine Ethics, ed. Anderson); Chapter 22, “Ethical Issues in Advanced Artificial Intelligence” (Nick Bostrom, in Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, vol. 2, ed. I. Smith et al., Institute of Advanced Studies in Systems Research and Cybernetics, 2003, pp. 12-17).

Part V

Chapter 23, “A Sound of Thunder” (Ray Bradbury, from Collier’s Weekly, The Crowell-Collier Publishing Company, 1952, pp. 1-9); Chapter 24, “Time” (Theodore Sider, from Riddles of Existence, Oxford University Press, 2008, pp. 44-61, by permission of Oxford University Press); Chapter 25, “The Paradoxes of Time Travel” (David Lewis, from American Philosophical Quarterly, 13 (1976), pp. 145-52); Chapter 26, “The Quantum Physics of Time Travel” (David Deutsch and Michael Lockwood, from Scientific American, March 1994, pp. 68-74, reprinted with permission. Copyright © 1994 by Scientific American, Inc. All rights reserved); Chapter 27, “Miracles and Wonders: Science Fiction as Epistemology” (Richard Hanley).
 
Every effort has been made to contact owners of copyright material. In the event of any oversight, please contact the publisher so that errors or omissions can be rectified at the earliest opportunity.

INTRODUCTION
Thought Experiments: Science Fiction as a Window into Philosophical Puzzles
Susan Schneider
 
 
 
Let us open the door to age-old questions about our very nature, the nature of the universe, and whether there are limits to what we, as humans, can understand. But as old as these issues are, let us do something relatively new - let us borrow from the world of science fiction thought experiments to fire the philosophical imagination. Good science fiction rarely disappoints; good philosophy more rarely still.
Thought experiments are imagination’s fancies; they are windows into the fundamental nature of things. A philosophical thought experiment is a hypothetical situation in the “laboratory of the mind” that depicts something that often exceeds the bounds of current technology or even is incompatible with the laws of nature, but that is supposed to reveal something philosophically enlightening or fundamental about the topic in question. Thought experiments can demonstrate a point, entertain, illustrate a puzzle, lay bare a contradiction in thought, and move us to provide further clarification. Indeed, thought experiments have a distinguished intellectual history. Both the creation of relativity and the interpretation of quantum mechanics rely heavily upon thought experiments. Consider, for instance, Einstein’s elevator and Schrödinger’s cat. And philosophers, perhaps even more than physicists, make heavy use of thought experiments. René Descartes, for instance, asked us to imagine that the physical world around us was an elaborate illusion. He imagined that the world was merely a dream or, worse yet, a hoax orchestrated by an evil demon bent on deceiving us. He then asked: how can we really be certain that we are not deceived in either of these ways? (See Descartes’ piece in this volume, Chapter 4.) Relatedly, Plato asked us to imagine prisoners who had been shackled in a cave for as long as they could remember. They face a wall. Behind them is a fire. Between the prisoners and the fire is a pathway where men walk, carrying vessels, statues, and other objects (see the figure below).
[Figure: Plato’s cave]
As the men walk behind the prisoners, they and the objects they carry cast shadows on the cave wall. The prisoners are thus not able to see the actual men and objects; their world is merely a world of shadows. Knowing nothing of the real causes of the shadows, the prisoners would naturally mistake these shadows for the real nature of things. Plato then asked: is this analogous to our own understanding of reality? That is, is the human condition such that our grasp of reality is only partial, catching only the slightest glimpse into the true nature of things, like the prisoners’ world of shadows?1
Intriguingly, if you read science fiction writers like Stanislaw Lem, Isaac Asimov, Arthur C. Clarke and Robert Sawyer, you are already aware that some of the best science fiction tales are in fact long versions of philosophical thought experiments. From Clarke’s 2001, which explored the twin ideas of intelligent design and artificial intelligence gone awry, to the Wachowski brothers’ Matrix films, which were partly inspired by Plato’s Cave, philosophy and science fiction are converging upon a set of shared themes and questions. Indeed, there is almost no end to the list of issues in science fiction that are philosophically intriguing. It is thus my modest hope that this short book isolates a number of key areas in philosophy where the interplay between philosophy and science fiction is especially rich. For instance, you might have seen the films AI or I, Robot (or you may have read the stories they are derived from). And you might have asked:
Can robots be intelligent? Should they have rights?
Is artificial intelligence even possible?
Or you might have read a time travel story, such as H. G. Wells’s The Time Machine, and asked:
Is time travel possible? Indeed, what is the nature of space and time?
In this book, we delve into these questions, as well as many others, such as:
Could I be deceived about the external world, as in The Matrix, or Vanilla Sky?
What is the nature of persons? For instance, can my mind survive the death of my body? Can I ‘upload’ my memories into a computer and somehow survive? (E.g., as in Mindscan.)
Do we ever act freely, or is everything predetermined? (E.g., see Minority Report.)
Should we enhance our brains, and even change our very nature?
So let us see, in more detail, where our reflections will lead.

Part I: Could I be in a “Matrix” or Computer Simulation?

Related Works: The Matrix; Permutation City; The 13th Floor; Vanilla Sky; Total Recall; Animatrix

You sit here in front of this book. You are as confident that the book exists as you are of the existence of any physical object. The lighting is good; indeed, you feel the pages pressing on your hands - this is no illusion. But think of stories like The Matrix or Vanilla Sky. How can you really be sure that any of this is real? Perhaps you are simply part of a computer-generated virtual reality, created by an omnipotent supercomputer of unthinkable proportions. Is there some way to rule out such a scenario?
Our first section explores the aforementioned issue of the reality of the external world. Does the world around you - the people you encounter, the book you are now reading, indeed, even your hand - really exist? Answers to this question are a central focus of the sub-field of philosophy known as “epistemology,” or “theory of knowledge.” We begin with a brief science fiction story written by a philosopher, John Pollock, who depicts a “brain in a vat” scenario. Pollock’s thought experiment, like the related works listed above, invites reflection on a philosophical position known as “external world skepticism.” The skeptic about the external world holds that we cannot know that the external world we believe is around us really exists; instead, we may be in a dream, in virtual reality, and so on. Represented in this section are the aforementioned ideas of both Plato and Descartes; these provide essential philosophical background for this topic. While reading the pieces in the section, as well as other sections of the book, readers may wish to view or read one or more of the science fiction works named in the section titles. (Relatedly, instructors using this book for their courses may want their students to do so. In particular, they may consider screening the Star Trek episodes I list, as they are short, leaving time for in-class discussion.)
The next piece in the section develops the issue of external world skepticism in a stunning new direction, suggesting that virtual reality science fiction thought experiments depict science fact. For philosopher Nick Bostrom has recently offered an influential argument that we are, in fact, in a computer simulation. He observes that, assuming a civilization survives long enough to become technologically sophisticated, it would likely be very interested in running simulations of entire worlds. In that case, there would be vastly more computer-simulated worlds than the one real world. And if this is so, there would be many more beings who are in a simulation than beings who are not. Bostrom then infers that, given this, it is more likely than not that we are in a simulation. Even the seasoned philosopher will find Bostrom’s argument extremely thought-provoking. Because the argument claims that it is more likely than not that we are in a simulation, it does not rely on remote philosophical possibilities. To the skeptic, the mere possibility of deceit means that we cannot know the external world exists; for the skeptic holds that we must be certain of something in order to truly say that we know it. On the other hand, opponents of external world skepticism have argued that just because a skeptical scenario seems possible, it does not follow that we fail to know the external world exists. For knowledge doesn’t require certainty; the skeptic places too strong a requirement on knowledge. But Bostrom’s argument bypasses this anti-skeptical move: Even if you reject the claim that knowledge requires certainty, if his argument is correct, then it is likely that we are in a simulation. That the world we know is a computer simulation is no remote possibility - more likely than not, this is how the world actually is.
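To get a feel for the counting behind Bostrom’s inference, consider a minimal illustration (the specific figures are assumptions chosen purely for the arithmetic, not Bostrom’s own estimates). Suppose a single technologically mature civilization runs 1,000 simulations of worlds like ours. Then, among all observers with experiences like yours, the proportion who are simulated is

Pr(simulated) = 1,000 / (1,000 + 1) ≈ 0.999

So if nothing in your experience tells you which kind of world you occupy, the odds heavily favor the simulation; the force of the argument lies in this ratio, not in certainty about any particular number.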
Part I also features a related piece by philosopher David J. Chalmers. In his “The Matrix as Metaphysics,” Chalmers uses the Matrix films as a means to develop a novel position on external world skepticism. Interestingly, Chalmers does not dispute Bostrom’s argument. Instead, he aims to deflate the skeptical significance of learning that we are in a simulation. Chalmers asks: why would knowing that we are in a simulation prove that the external world skeptic is correct? He writes:
I think that even if I am in a matrix, my world is perfectly real. A brain in a vat is not massively deluded (at least if it has always been in the vat). Neo does not have massively false beliefs about the external world. Instead, envatted beings have largely correct beliefs about their world. If so, the Matrix Hypothesis is not a skeptical hypothesis, and its possibility does not undercut everything that I think I know. (Chapter 5, p. 35)
Chalmers is suggesting that being in a simulation is not a situation in which we fail to know that the external world around us really exists. Suppose that we learn we are in a matrix. According to Chalmers, this fact tells us about the nature of the external world: it tells us that the physical world around us is ultimately made of bits, and that our creators were creatures who allowed our minds to interact with this world of bits. But upon reflection, learning a new theory of the fundamental nature of the universe is just learning more physics. And while intriguing, this is not like proving that skepticism is true. For Chalmers contends that there is still a “physical world” with which we interact; what is different is that its fundamental physics is not about strings or particles, but bits. Furthermore, learning that there is a creator outside of space and time who allowed our minds to interact with the physical world, while obviously of great metaphysical and personal import, is akin to learning that a particular religious view is true. This would be an earth-shattering revelation, but it does not mean that we are not situated in the external world that we believe we are in.
Suggestively, a very basic brain in a vat was recently developed at the University of Florida in the laboratory of Thomas De Marse. It is now sophisticated enough to successfully fly a flight simulator.2 Bostrom would likely say that this is further evidence that we are in a simulation; for when we start turning our own basic simulations on, this is, in effect, evidence that advancing societies have an interest in doing so. It also indicates that we are nearing the point at which we are capable of surviving the technological age long enough to develop more advanced simulations. Indeed, I find De Marse’s development to be yet another telling example of the convergence between science fiction and science fact. Some of the most lavish science fiction thought experiments are no longer merely fictions - we see glimpses of them on the technological horizon.

Part II: What Am I? Free Will and the Nature of Persons

Related Works: Software; Star Trek, The Next Generation: Second Chances; Mindscan; The Matrix; Minority Report

Part I left us with the question: Is reality, at rock bottom, just a pattern of information in an unfathomably powerful supercomputer? If one lives with this question long enough, one will likely also wonder: am I, being part of this larger reality, merely a computational entity - a certain stream of information or computer program? Indeed, this could even be the case if we are not living in a simulation. Many cognitive scientists suspect that the brain is a kind of computational system, and that, relatedly, the person is fundamentally a sort of computational being. As the futurist Ray Kurzweil suggests in his piece for this section (Chapter 9), using language reminiscent of the ancient Greek philosopher Heraclitus, “I am rather like the pattern that water makes in a stream as it rushes past the rocks in its path. The actual molecules of water change every millisecond, but the pattern persists for hours or even years” (Kurzweil, Chapter 9, p. 100). For Kurzweil this “pattern” is construed in computational terms: the pattern is the pattern of information processing that your brain engages in - the particular numerical values and nodes characterizing your neural network, down to the last detail. Let us call this view of the nature of persons “informational patternism.”
Indeed, this view of the nature of persons is developed in many philosophically oriented science fiction works. Consider, for instance, Jake Sullivan, the protagonist of Robert Sawyer’s Mindscan, who, hoping to avoid death, scans his brain and attempts to upload his mind into an artificial body. In a similar vein, Rudy Rucker’s Software features an aging character who uploads his pattern into numerous devices, including a truck, in a last-ditch effort to avoid death. This common science fiction theme of scanning and “uploading” one’s mind is predicated on the idea of copying one’s informational pattern - one’s memories, personality traits, and indeed, every psychological feature - into a supercomputer. The survival of one’s pattern is supposed to be sufficient for the survival of the individual, across a storyline of extraordinary shifts in underlying matter.
Informational patternism is essentially a version of a leading theory of the nature of persons in metaphysics, a view commonly called the “Psychological Continuity Theory.” According to this view, you are essentially your memories and your ability to reflect on yourself (a position associated with John Locke) and, in its most general form, your overall psychological configuration - what Kurzweil referred to as your “pattern.” Informational patternism is also closely related to the leading view of the nature of mind in both philosophy of mind and cognitive science. The view is, more explicitly, the following:
Computational Theory of Mind (CTM). One’s mind is essentially the “program” running on the hardware of the brain, where by “program” is meant the algorithm that the mind computes, something that is in principle discoverable by cognitive science.3
Because, at least in principle, the brain’s computational configuration can be duplicated in a different medium - in silicon as opposed to carbon - with the information-processing properties of the original neural circuitry preserved, the computationalist rejects the idea that a person is essentially her body (including, of course, her brain).4 Instead, a person is something like an embodied informational pattern.
But is informational patternism correct? The plausibility of informational patternism and other theories of personal identity is pursued throughout the section. The first piece in the section (Chapter 6) is a science fiction tale by the well-known philosopher Daniel Dennett. Dennett’s piece, “Where Am I?” boggles the mind. Dennett is sent on a bomb-defusing mission by NASA, and his out-of-body adventures test the limits of leading theories of personal identity, especially informational patternism. Eric Olson follows up with a useful survey of the major theories of the nature of persons; the reader may enjoy turning back to Dennett’s story to reflect on which ones were invoked. Then, employing the classic science fiction pseudotechnology of the teleporter and the example of split brains from actual neuroscience cases, Derek Parfit’s piece (Chapter 8) raises problems for both informational patternism and the popular soul theory of personal identity, suggesting that they are both incoherent.
Finally, any discussion of persons should at least touch upon the related topic of the nature of free will. After all, as you reflect on your very nature, it is of great import to ask whether any of the actions which you seem to choose are really selected freely. Consider that from the vantage point of science, there is a sense in which every intentional action seems to be determined by either genetics or environment, or a combination of both. And every physical event in the brain has, at least in principle, a causal explanation in terms of the behavior of fundamental particles. In light of this, one wonders whether there really is a plausible sense in which individuals’ intentional actions are free. Do they “break free” of the laws of nature? And, come to think of it, what does it mean to “break free” of the laws? Further, recalling our earlier discussion of informational patternism, if persons are, at rock bottom, computational, are they even capable of being free? In his thought-provoking “Free Will and Determinism in the World of Minority Report,” Michael Huemer uses the film Minority Report as a means of reflecting on the age-old topic of free will.

Part III: Mind: Natural, Artificial, Hybrid, and “Super”

Related Works: 2001; Blade Runner; AI; Frankenstein; Terminator; I, Robot

Perhaps our universe is, or will be, science fiction-like in the sense that it will be populated by many distinct kinds of minds. We are all biological entities, and with the exception of the rare individual with a brain implant, all parts of our brains are natural, that is, “non-artificial.” But this will soon change. As neuroscience uncovers the algorithms underlying the brain’s computations, scientists are increasingly coming to regard brains as computational entities. Some of the younger readers may eventually be like the cyborgs that Bruce Sterling, William Gibson, and other writers in the cyberpunk genre explore. If so, they would have “hybrid” minds, being part natural and part artificial. And perhaps scientists will reverse-engineer the human brain, creating AI creatures that run the same algorithms as human brains do. Other AI creatures could have minds that are entirely different, borrowing from sensory modalities that other animals have (e.g. echolocation), featuring radically enhanced working memory capacity, and so on. Existing human brains could be enhanced in these novel ways as well. In sum, a plurality of distinct sorts of artificial minds could be “sculpted.”
Numerous developments in cognitive science strongly support the aforementioned computational theory of mind (CTM). They also seem to support the related doctrine of informational patternism. However, it is important to note that while the brain may be a computational device, one’s mind might be something more. Perhaps, for instance, our brains can be mapped out in terms of the language of a completed computational neuroscience, yet we nonetheless have souls. Are these two things really inconsistent? Or perhaps consciousness is a non-physical, non-computational feature of the brain. The debate rages on in philosophy of mind. In this section, we explore some of these issues, raising thought-provoking points of contact between science fiction, philosophy of mind, and science fact.
Isaac Asimov’s “Robot Dreams” leads the section. There is perhaps no better example of philosophically rich science fiction available than Asimov’s robot tales - especially in light of the connection to contemporary robotics (as the next section shall discuss). The second piece in this section is also a work of science fiction. In “A Brain Speaks,” philosopher Andy Clark writes from the vantage point of his brain. The brain explains the concept of “functional decomposition” - how it is a blend of different functional subcomponents, each of which computes its own algorithm to carry out a specialized function. The different subcomponents are wired together by evolution and experience to do important tasks. The next few pieces provide essential background for understanding and critiquing the computational approach to the mind. Ned Block’s piece explores computational intelligence and functional decomposition. His discussion is followed by an excerpt from Andy Clark’s Natural Born Cyborgs, a project that argues that we are already seamlessly interwoven with the technologies around us and that the path toward becoming cyborgs does not lead us to become essentially different from what we already are. Human minds are already both computational and integrated with the larger technological world around us. Such is our cyborg nature.
Now consider the android Rachael in Philip K. Dick’s Do Androids Dream of Electric Sheep? (the basis for the film Blade Runner), or consider David, the android boy in Spielberg’s AI. These characters in effect push at the boundaries of our ordinary understanding of a person. The audience ponders whether such creatures can really understand, or be conscious. Intriguingly, if our own minds are computational, or if a person is just an embodied informational pattern, then perhaps there is no difference in kind between us and them. John Searle would suggest otherwise. As Block’s chapter discusses, Searle, in his classic “Chinese Room” thought experiment, argues against the very idea that we are computational and the related idea that machines could think. On the other hand, Daniel Dennett presents a very different picture of artificial intelligence in his piece “Consciousness in Human and Robot Minds.” And like Dennett’s, Ray Kurzweil’s vision of the nature of mind is diametrically opposed to Searle’s. In his book The Singularity is Near, he sketches a future world in which we (or perhaps our children or grandchildren) become cyborgs, and eventually entirely artificial beings. The creation of “superintelligent” AI brings forth beings with such advanced intelligence that solutions to the world’s problems are generated, rapidly ending disease and resource scarcity. “Superintelligence and Singularity” is not a work of science fiction, however; it is Kurzweil’s prediction of the shape of the near future, based on our current science.

Part IV: Ethical and Political Issues

Related Works: Brave New World; Gattaca; Terminator; White Plague

Minds have many philosophical dimensions: the epistemic - what they know; the metaphysical - what they are; the ethical - whether their actions are right or wrong. The first few sections have looked at the epistemology and metaphysics of selves and their minds; now, in Part IV, we consider certain ethical issues. Having closed the previous section with Kurzweil’s utopian perspective, we may find it intriguing to recall, in contrast, Aldous Huxley’s sobering dystopian satire, Brave New World (1932). Inspired by Huxley’s sentiments about American culture, Brave New World depicts a technologically advanced society in which everyone is complacent, yet where the family has withered away and childbearing is no longer a natural process. Instead, children are bred in centers where, via genetic engineering, five distinct castes are produced. Only the top two exhibit genetic variation; the other castes are multiple clones of one fertilization. All members of society are trained to strongly identify with their caste, and to appreciate whatever is good for society, especially the constant consumption of goods and, in particular, the mild hallucinogen Soma that makes everyone blissful.
Brave New World is a classic dystopian science fiction novel, gravely warning us of the twin abuses of rampant consumerism and technology in the hands of an authoritarian dictatorship. Like Huxley, George Annas is intensely concerned with the social impact of genetic engineering and other enhancement technologies. His chapter employs themes from science fiction to motivate his case against genetic engineering. A major concern of his is the following:
We constantly compare the new genetics to “putting a man on the moon,” but if history is a guide, this genetic engineering will not lead to a sterile publicity stunt like the moon landing, but instead will inevitably lead to genocide: the “inferiors” killing off the “superiors” or vice-versa.
Annas contrasts sharply with Kurzweil and other “transhumanists.” Transhumanism is a cultural, philosophical, and political movement which holds that the human species is only now in a comparatively early phase and that future humans will be radically unlike their current selves in both mental and physical respects. They will be more like certain cyborg and virtual creatures depicted in science fiction stories (Bostrom 2003). While Annas advocates an international treaty banning specific “species-altering” techniques, many transhumanists, in contrast, believe that species alteration is justified insofar as it advances the intellectual and physical life of the individual. Indeed, according to transhumanism, some future humans may be “uploads,” living immortal and virtual lives on computers, being superintelligences, and indeed, in many ways being more akin to AI than to unenhanced humans (Bostrom 2003).
Bostrom, another leading transhumanist, discussed the notion of “substrate independence” in his earlier piece in Part I, a concept closely wedded to both CTM and informational patternism, positions that many transhumanists adopt. In Susan Schneider’s piece (Chapter 19), she considers whether informational patternism really supports the technoprogressive’s case for radical human enhancement. As exciting as transhumanism may be to science fiction enthusiasts, Schneider stresses that the transhumanists, who generally adopt informational patternism, have yet to provide a plausible account of the nature of persons. In particular, there is no feasible sense in which this notion of a person allows that a person can persist through radical enhancements, or even mild ones. Although she considers various ways that the transhumanist might furnish patternism with better conceptual resources, her suspicion is that informational patternism is itself deeply flawed.
A common point of agreement between transhumanists and bioconservatives who oppose enhancement is a concern that the development of artificial intelligence, biological weapons, advanced nanotechnology and other technologies brings forth global catastrophic risks, that is, risks that carry the potential to inflict serious damage to human well-being across the planet. Here, these issues go well beyond the interplay between science fiction and philosophy, but readers are encouraged to read Garreau (2005) for an extensive overview of cultural and technological issues, and Bostrom and Cirkovic (2008) for an excellent series of papers focusing on just the topic of global catastrophic risk. In Chapter 20, philosopher John Leslie provides a brief version of his “doomsday argument,” a probabilistic argument that attempts to predict the future lifetime of the human race given an estimate of the total number of humans born thus far. The final two pieces of the section turn to the pressing issue of the ethical dimensions of artificial intelligence and the existential risks its development may bring. 2001’s HAL has stayed with us so long precisely because the film depicts a very possible future - a situation in which the ethical programming of an extremely intelligent artificial being crashes, creating a psychotic computer. As HAL’s vacuum tubes are slowly pulled out, the audience listens to HAL’s bewildered machine voice report his diminishing memories and sensations. Stanley Kubrick thereby orchestrates a believable scene in which HAL “dies,” pumping the intuition that, like us, HAL is a conscious mind. Indeed, philosophers and computer scientists have recently become concerned with developing adequate “ethical programming” for both sophisticated intelligences and simpler programs that could be consulted as ethical advisors. Susan Anderson’s intriguing piece discusses these issues, using Asimov’s famous three laws of robotics and his story “The Bicentennial Man” as a springboard. She ultimately rejects Asimov’s three laws as a basis for ethical programming in machines; Asimov would surely agree.
The next piece explores ethical issues involving superintelligence. If humans construct AI, it may be that AI itself engineers its own future programming, evolving into a form of intelligence that goes well beyond human intelligence. Like the evolved Mecha of the distant future that find David frozen in the ice at the very end of Spielberg’s AI, perhaps superintelligent AI will supplant us. Or perhaps our descendants will be cyborgs that themselves upgrade to the level of superintelligence. In any case, a superintelligent being could engage in moral reasoning and make discoveries that are at a higher or different level than ours, and which we cannot grasp sufficiently to judge. This is one reason why the issue of ethical programming must be debated now, in the hope that the motivations originally programmed into AI will be preserved as it evolves into a superintelligence that is indeed benevolent. In “Ethical Issues in Advanced Artificial Intelligence,” Nick Bostrom surveys some of these ethical issues, as well as investigating whether the development of such machines should in fact be accelerated.

Part V: Space and Time

Related Works: Twelve Monkeys; Slaughterhouse Five; The Time Machine; Back to the Future

The final section begins with Ray Bradbury’s well-known time travel tale about a business called “Time Safari, Inc.” that takes travelers back in time to hunt prehistoric animals. Because even the slightest change in the past can alter the future in momentous ways, travelers are instructed to use extreme diligence to leave the environment undisturbed. For instance, they are not allowed to take trophies; they are only permitted to shoot animals that are about to die; and they are required to stay on a path hovering a bit above the ground. Needless to say, things go awry.
Time travel tales such as Bradbury’s raise intriguing issues concerning the nature of time. For one thing, what is it to travel back through time? To answer this, one needs to first ponder the classic question, “What is the nature of time?” On the one hand, time is one of the most familiar elements of our lives. On the other, as Ted Sider explains in his chapter, this age-old question has no easy answer. Sider outlines different answers to this question, cleverly uncovering problems for the major views of the nature of time.
One might wonder if time travel is really possible. Indeed, MIT students recently held a “time travel party,” announcing the event in national papers to attract people from the future. And while a raging costume party was had, their low-tech experiment in time travel discovery unfortunately failed to reveal any genuine time travelers. Of course, the partygoers, like the rest of us, are all time travelers of a boring sort - we merely press forward in time, minute by minute. But perhaps the partygoers’ disappointment is due to some sort of inbuilt limitation; that is, maybe time travel is somehow contrary to the laws of physics or even the laws of logic. While some physicists, like Kip Thorne, have argued that time travel is compatible with the laws of physics (see Thorne 1995), philosophers and physicists have long been worried about the “Grandfather Paradox.” Suppose that Maria constructs a time machine, and that she goes into the past to visit her grandfather when he was a boy. Unfortunately, her instruments are so accurate that the machine lands right on him, and she unwittingly kills him. Now, her own father hasn’t been conceived yet, so it seems that, since her grandfather will not survive to father her father, she would never have existed to unwittingly cause her time machine to kill him.
Clearly, something strange is going on. For if time travel is compatible with the laws of physics, and if machines can carry human-sized objects back in time, why couldn’t she change the past in a way that would eliminate her own eventual existence? As philosopher David Lewis once jokingly remarked: is there a time policeman who races after her machine to stop her from altering the past in certain ways? Perhaps time travel is conceptually incoherent. The pieces by David Lewis and coauthors Michael Lockwood and David Deutsch both attempt to respond to the Grandfather Paradox. While Lewis uses philosophical resources to do so, Lockwood and Deutsch employ the many-worlds interpretation of quantum mechanics to attempt to dissolve the paradox. They argue that Maria actually goes into a parallel universe where she does not, in fact, kill her grandfather. Instead, she kills his counterpart in the parallel universe. Finally, philosopher Richard Hanley considers the issue of miracles. Would radically advanced technologies, such as time travel, be, from our vantage point at least, miracles? After all, consider Arthur C. Clarke’s Third Law: “any sufficiently advanced technology is indistinguishable from magic” (Clarke 1961). Hanley’s fun piece blends together science fiction themes from various parts of the book, discussing Chalmers’ and Bostrom’s papers on being in a simulation, Edwin Abbott’s Flatland: A Romance of Many Dimensions, and more.

Conclusion

So this is where we are going. It is my hope that, if you are new to philosophy, you will see fit to revisit these issues again and again, gaining philosophical sophistication with each visit. I believe you will find that your position on one topic helps to shape your perspective on some of the others. Always there - and enhanced by your years of reflection - is an understanding that these topics represent some of life’s great mysteries. And it is my hope that seasoned philosophers, cognitive scientists, and others working in fields that touch upon these issues will gain a heightened awareness of some new philosophical developments (e.g. new challenges to external world skepticism) and, especially, of the multiple challenges posed by neural enhancement and artificial intelligence technologies. As many of the readings emphasize, these issues call for detailed philosophical work at the interface of epistemology, philosophy of mind, metaphysics, and neuroethics. The questions posed by this book have no easy answers; yet it is the human condition to ponder them. Perhaps our cyborg descendants will ponder them too, maybe by uploading their philosophy books directly into their memory systems. Perhaps, after numerous upgrades, both the problem space and the solution space will even reshape themselves.
It is fitting to end our survey with a science fiction thought experiment. It is AD 2300, and some humans have upgraded to become superintelligent beings. But suppose you resist any upgrades. Having conceptual resources beyond your wildest dreams, the superintelligent beings generate an entirely new budget of solutions to the philosophical problems that we considered in this book. They univocally and passionately assert that the solutions are obvious. But you throw your hands up; these “solutions” strike you and the other unenhanced as meaningless gibberish. You think: Who knows, maybe these “superintelligent” beings were engineered poorly; or maybe it is me. Perhaps the unenhanced are “cognitively closed,” as Colin McGinn has argued, being constitutionally unable to solve major philosophical problems (McGinn 1993). The enhanced call themselves “Humans 2.0” and claim the unenhanced are but an inferior version. They beg you to enhance. What shall you make of our epistemic predicament? You cannot grasp the contents of the superintelligent beings’ thoughts without significant upgrades. But what if their way of thinking is flawed to begin with? In that case, upgrading will surely not help. Is there some sort of neutral vantage point or at least a set of plausible principles with which to guide you in framing a response to such a challenge? Herein, we shall begin to reflect on some of the issues that this thought experiment raises.
We clearly have a lot to think about. So let us begin.

Notes

1. Great Dialogues of Plato: Complete Texts of the Republic, Apology, Crito, Phaedo, Ion, and Meno, Vol. 1 (Warmington and Rouse, eds.), New York: Signet Classics, 1999, p. 316. For a discussion of Plato’s theory of forms see chapter eleven of Charles Kahn’s Plato and the Socratic Dialogue: The Philosophical Use of a Literary Form, Cambridge University Press, 1996.
2. De Marse and Dockendorf (2005).
3. Thus, by “CTM,” in this context, I do not just mean classicism. Computational theories of mind can appeal to various computational theories of the format of thought (e.g. connectionism, dynamical systems theory, symbolicism, or some combination thereof). See Kurzweil (2005). For philosophical background see Block’s piece in this volume (Chapter 14) and Churchland (1996).
4. This commonly held but controversial view in philosophy of cognitive science is called “multiple realizability”; Bostrom calls it ‘Substrate Independence’ in his 2003.

References

Bostrom, Nick (2003), Transhumanist Frequently Asked Questions: A General Introduction, Version 2.1, World Transhumanist Association, accessed Dec. 1, 2008.
Bostrom, Nick and Cirkovic, Milan (2008), Global Catastrophic Risks, Oxford: Oxford University Press.
Churchland, P. (1996), The Engine of Reason, the Seat of the Soul, Cambridge, MA: MIT Press.
De Marse, T. B. and Dockendorf, K. P. (2005), “Adaptive flight control with living neuronal networks on microelectrode arrays,” Proceedings of the International Joint Conference on Neural Networks, 3, 1548-51.
Garreau, Joel (2005), Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies - and What It Means to Be Human, New York: Doubleday & Company.
McGinn, Colin (1993), Problems in Philosophy: The Limits of Enquiry, Oxford: Blackwell.
Thorne, Kip (1995), Black Holes and Time Warps: Einstein’s Outrageous Legacy, W. W. Norton & Company.

Part I
COULD I BE IN A “MATRIX” OR COMPUTER SIMULATION?
Related Works
The Matrix
Permutation City
The 13th Floor
Vanilla Sky
Total Recall
Animatrix

1
BRAIN IN A VAT
John Pollock
 
 
 
It all began that cold Wednesday night. I was sitting alone in my office watching the rain come down on the deserted streets outside, when the phone rang. It was Harry’s wife, and she sounded terrified. They had been having a late supper alone in their apartment when suddenly the front door came crashing in and six hooded men burst into the room. The men were armed and they made Harry and Anne lie face down on the floor while they went through Harry’s pockets. When they found his driver’s license one of them carefully scrutinized Harry’s face, comparing it with the official photograph and then muttered, “It’s him all right.” The leader of the intruders produced a hypodermic needle and injected Harry with something that made him lose consciousness almost immediately. For some reason they only tied and gagged Anne. Two of the men left the room and returned with a stretcher and white coats. They put Harry on the stretcher, donned the white coats, and trundled him out of the apartment, leaving Anne lying on the floor. She managed to squirm to the window in time to see them put Harry in an ambulance and drive away.
By the time she called me, Anne was coming apart at the seams. It had taken her several hours to get out of her bonds, and then she called the police. To her consternation, instead of uniformed officers, two plain clothed officials arrived and, without even looking over the scene, they proceeded to tell her that there was nothing they could do and if she knew what was good for her she would keep her mouth shut. If she raised a fuss they would put out the word that she was a psycho and she would never see her husband again.
Not knowing what else to do, Anne called me. She had had the presence of mind to note down the number of the ambulance, and I had no great difficulty tracing it to a private clinic at the outskirts of town. When I arrived at the clinic I was surprised to find it locked up like a fortress. There were guards at the gate and it was surrounded by a massive wall. My commando training stood me in good stead as I negotiated the 20 foot wall, avoided the barbed wire, and silenced the guard dogs on the other side. The ground floor windows were all barred, but I managed to wriggle up a drainpipe and get in through a second-story window that someone had left ajar. I found myself in a laboratory. Hearing muffled sounds next door I peeked through the keyhole and saw what appeared to be a complete operating room and a surgical team laboring over Harry. He was covered with a sheet from the neck down and they seemed to be connecting tubes and wires to him. I stifled a gasp when I realized that they had removed the top of Harry’s skull. To my considerable consternation, one of the surgeons reached into the open top of Harry’s head and eased his brain out, placing it in a stainless steel bowl. The tubes and wires I had noted earlier were connected to the now disembodied brain. The surgeons carried the bloody mass carefully to some kind of tank and lowered it in. My first thought was that I had stumbled on a covey of futuristic Satanists who got their kicks from vivisection. My second thought was that Harry was an insurance agent. Maybe this was their way of getting even for the increases in their malpractice insurance rates. If they did this every Wednesday night, their rates were no higher than they should be!
My speculations were interrupted when the lights suddenly came on in my darkened hidey hole and I found myself looking up at the scariest group of medical men I had ever seen. They manhandled me into the next room and strapped me down on an operating table. I thought, “Oh, oh, I’m for it now!” The doctors huddled at the other end of the room, but I couldn’t turn my head far enough to see what they were doing. They were mumbling among themselves, probably deciding my fate. A door opened and I heard a woman’s voice. The deferential manner assumed by the medical malpractitioners made it obvious who was boss. I strained to see this mysterious woman but she hovered just out of my view. Then, to my astonishment, she walked up and stood over me and I realized it was my secretary, Margot. I began to wish I had given her that Christmas bonus after all.
It was Margot, but it was a different Margot than I had ever seen. She was wallowing in the heady wine of authority as she bent over me. “Well Mike, you thought you were so smart, tracking Harry here to the clinic,” she said. Even now she had the sexiest voice I have ever heard, but I wasn’t really thinking about that. She went on, “It was all a trick just to get you here. You saw what happened to Harry. He’s not really dead, you know. These gentlemen are the premier neuroscientists in the world today. They have developed a surgical procedure whereby they remove the brain from the body but keep it alive in a vat of nutrient. The Food and Drug Administration wouldn’t approve the procedure, but we’ll show them. You see all the wires going to Harry’s brain? They connect him up with a powerful computer. The computer monitors the output of his motor cortex and provides input to the sensory cortex in such a way that everything appears perfectly normal to Harry. It produces a fictitious mental life that merges perfectly into his past life so that he is unaware that anything has happened to him. He thinks he is shaving right now and getting ready to go to the office and stick it to another neurosurgeon. But actually, he’s just a brain in a vat.
“Once we have our procedure perfected we’re going after the head of the Food and Drug Administration, but we needed some experimental subjects first. Harry was easy. In order to really test our computer program we need someone who leads a more interesting and varied life - someone like you!” I was starting to squirm. The surgeons had drawn around me and were looking on with malevolent gleams in their eyes. The biggest brute, a man with a pockmarked face and one beady eye staring out from under his stringy black hair, was fondling a razor sharp scalpel in his still-bloody hands and looking like he could barely restrain his excitement. But Margot gazed down at me and murmured in that incredible voice, “I’ll bet you think we’re going to operate on you and remove your brain just like we removed Harry’s, don’t you? But you have nothing to worry about. We’re not going to remove your brain. We already did - three months ago!”