
The World as a Game

What is a game? Ludwig Wittgenstein famously chose this nebulous concept to illustrate what he meant by “family resemblance,” where the members of a class fulfill no set of necessary and sufficient conditions for admission, but instead each shares some traits with some others in the class, and other traits with still others. Yet we can at least identify two types of game, which seem not just distinct from one another but very nearly opposite. One class of games, which includes peek-a-boo, charades, and musical improvisation as representative instances, is characterized by free expressivity. It is the manifestation of what Friedrich Schiller called the Spieltrieb, the “play-drive,” which is innate in all human beings insofar as they are free. The other class includes chess, fencing, and wargames as its representative instances. If there is still some dose of freedom operating in this sort of game, it is freedom under severe constraints. The purpose here is to win, and one does so by means of strategy aforethought. In such games, serendipity and spontaneity are disadvantages. While some such games may, like Schillerian free play, be “fun” (especially when you win and the other guy loses), at their outer edge they shade over into a domain of human endeavor that has little to do with leisure at all. At their most serious they can determine the fate of the world.

It is this latter sort of game alone that machines are capable of “understanding.” Strategy games, in other words, are essentially algorithmic. A good portion of the history of computing has been dedicated in fact to training machines up algorithmically in such narrow domains as chess, then Go, and more recently a full array of natural-language processing tasks. This training has facilitated the gradual progress of the machines from domain-specific “weak artificial intelligence” — the ability to master all the possible moves within a given narrow field — to something at least approaching “general AI” — the ability to competently execute the whole range of tasks that we associate with human intelligence.

Yet curiously, as the machines draw closer to this peculiar ideal of humanness, the society they have been brought in to structure has grown correspondingly more inhuman. As the scope of algorithm-based applications in social reality has expanded over the past decades, we have by the same measure been conditioned to approach ever more fields of human life as if they were strategy games. The technology was honed in such narrow domains as chess and video poker. By the 2010s it had come to shape public debate as well, in the transfer of the greater part of our deliberative efforts to social-media platforms that are, in effect, nothing more than debate-themed video games, where one racks up points in the form of likes and followers by “gaming the algos,” just as one racks up points in Super Mario Bros. by smashing turtles. Soon enough algorithms bled out from behind the screen too, and began to transform the three-dimensional world of labor, first in “disruptive” app-based companies such as Uber, where gamified structures determine each “move” a driver makes, and then in domains such as warehouse work, where the laborer might never have used an app before signing up, and might not have understood the revolutionary significance of the top-down imposition of gamified structures for the measurement of his work productivity.

Finally, the development of social-credit systems for the algorithmic measurement of the civic standing of individuals living under authoritarian regimes extends the gamification process to the entirety of social reality. Life becomes, quite literally, a game, but emphatically not the kind of game that is expressive of our irreducible freedom. On the contrary, the spread of gamified structures to the entirety of life has brought about the near-total eclipse of the one sort of game by the other, the near-total loss of a distinctly human life of Schillerian free play, and the sharp ascendancy of a machine-centered conception of play: play as strategy, as constraint; the extremity of play where it shades over into never-ending high-stakes work and struggle.

It is against this background that we must understand the recent popularity in some circles of the “simulation argument,” according to which all of reality is, or is likely to be, an algorithmically structured virtual system of essentially the same sort we know from our encounter with video games such as Pac-Man or Minecraft. This is by now an argument with a bit of history behind it. The philosopher Nick Bostrom has been promoting it for a few decades, and by means of it has gained the attention of such tech mandarins as Bill Gates and Elon Musk, who unsurprisingly are attracted to the idea that the world itself resembles the very products that they are in the business of selling.

The “simulation argument” has more recently been taken up by the philosopher David J. Chalmers in his book Reality+: Virtual Worlds and the Problems of Philosophy. It is worthwhile to consider this work in some detail, because it is a rich document of our gamified world. The “technophilosophy” that it expounds is perhaps the clearest expression yet of a new sort of hybridism between industry and the academy, in which it is decidedly the first member of this unnatural pair that forms the head of the resulting beast. Philosophy, though in its most enduring expressions it always stands apart from the era in which it emerges, also has a long history of subordinating itself to whatever appears in its day as the biggest game going, as the most dazzling center of concentrated power and influence. Once it was theology, then it was science, now it is Big Tech, whose ascendancy required the prior work of the scientists even as it now operates independently of any faithfulness to truth-seeking and has nothing other than capital accumulation as its logic and life-breath.

Before we turn to Chalmers’ central claims and arguments, it is worth noting the interesting characteristics of our moment that are revealed in his style and method. It is instructive indeed to see what gets through the filter of the big trade publishers these days. One striking feature of the prose in this particular piece of Norton-managed content is that even though the book includes a special online supplement that presumably may be opened in a browser window right next to a search engine such as Google, it presupposes no shared background knowledge whatsoever between author and reader, nor any ability or motivation on the part of the reader to undertake a quick search that would supplement any informational gaps. Thus Plato is “the ancient Greek philosopher Plato,” and Queen is “the British rock group Queen,” headed up by “lead singer Freddie Mercury” (identified a few lines later as “Mercury”). The opening of Chapter One seeks to draw the reader in by making the most profound questions of philosophy appear close to home, by reference to what must be for the author a treasured lyric from Queen’s hit “Bohemian Rhapsody”: “Is this the real life? / Is this just fantasy?” Now that the tone has been set, the philosopher seeks, as best he can, to flesh out the backstory. “These questions have a history,” he writes. “Three of the great ancient traditions of philosophy — those of China, Greece, and India — all ask versions of Mercury’s questions.”

 

Such a framing is significant beyond what it reveals about one man’s musical taste. Rather than understanding pop-cultural artifacts such as “Bohemian Rhapsody” as lying at the end of a tradition of romance ballads, faintly echoing that tradition’s themes in words whose large and original meanings have been mostly forgotten, we are instead invited to see pop culture, or at least the pop culture of Chalmers’ childhood, as the pinnacle of tradition, as bringing to its fullest expression what could only be more crudely attempted in centuries past. Here it is the past that asked “Mercury’s questions,” rather than “Mercury” channeling that past with only dim knowledge that this is what he is doing. And this order of things, which characterizes Chalmers’ approach to philosophy in general, largely obviates any need for him to dwell on what I, for my part, take to be the total debt that our present conceptual possibilities owe to the historical legacies that shaped them.

In the book Chalmers dispenses with style and metaphor in order to speak in plain language, indeed in the simplest sentences possible. But this is not clarification; it is a conceit, and one that does not really free him, in the way he might hope, from the perceived pitfalls of a more stylish language. For in truth his whole project rides on unacknowledged metaphor: a captivating image of reality taken for an account of reality. The simplicity of his style confers upon Chalmers’ argument an unearned air of realism, and emboldens him to push for conclusions unmoored from the evidence of history and anthropology as if they were plain common sense. This is how the analytic tradition in modern philosophy has long operated: through the abjuration of “erudition” in favor of “straight talk,” the rejection of the properly humanistic disciplines, embedded in centuries-old traditions, in favor of pandering and “relatable” pop-culture references from Star Trek, Black Mirror, and Queen.

Thus do analytic philosophers deliver their truths in the mode of naïveté, though they are not of course the first philosophers to do this. Descartes (“the French philosopher René Descartes”) worked his way into this mode too, if only as an exercise of so-called “radical doubt” in the first of his Meditations, pretending that he never learned any truths from any book or authority, even though what follows in the next five Meditations is rich with the learning that he could only have received under the tutelage of the Jesuits, with whom he studied, among many other guiding lights, Saint Augustine (“the North African saint Augustine”). As far as I can tell, Chalmers, by contrast with Descartes, is a true naïf, “just some guy,” as Keanu Reeves’ Neo memorably describes himself in The Matrix — one of Chalmers’ touchstones, naturally.

We get an intimate sense of the scope of this philosopher’s inner life on nearly every page. Here he is, for example, describing himself at leisure: “During the pandemic, I’ve… met up once a week with a merry band of fellow philosophers in VR. We’ve tried many different platforms and activities — flying with angel wings in Altspace, slicing cubes to a rhythm in Beat Saber, talking philosophy on the balcony in Bigscreen, playing paintball in Rec Room,” and so on. There is a well-established pattern in analytic philosophy, to which Chalmers has conformed in his career with something almost approaching grace, by which the refusal to put away childish things helps to establish the aura of a certain kind of philosophical brilliance. I have been told by someone who once shared a vacation home with the renowned analytic metaphysician David Lewis that the reading material he brought along for the summer consisted entirely of a giant stack of magazines for model-train hobbyists. When asked by my informant why these toys interested him, Lewis glared back as if the answer were self-explanatory.

The pop-culture encyclopedia on which so much recent analytic philosophy has drawn yields flat-footed examples at best, and at worst it betrays a cavalier indifference to the depth of the subjects it grazes. It is, in a word, slumming. Now I am generally the last person to condemn “appropriation.” Who could possibly make sense of the unpredictable and scattershot and often demagogic way in which calls are made for switching out terms of art deemed to be “problematic,” or to have been originally deployed in another cultural sphere from which they are allegedly not ours to borrow? A perfectly anodyne-seeming term can be targeted for elimination — recently I observed a philosopher catching heat from another philosopher for hosting a podcast called “Unmute,” which was seen as an ableist belittling of the hearing-impaired — while another term that appears, at least to me, far more open to criticism just keeps going, year after year, in innocent ignorance of its huge semantic charge. Consider, for example, the use in analytic philosophy of the figure of the “zombie.” This use precedes Chalmers, but he did more than anyone else to make it familiar in his book The Conscious Mind, and he continues to deploy it in Reality+. Of course I see no particular problem with the figure of the zombie in American mass entertainment since the mid-twentieth century. I recognize that George Romero’s Night of the Living Dead belongs at least to some sort of canon, and I have myself enjoyed Jim Jarmusch’s recent spoof of the zombie genre, The Dead Don’t Die. But the genre in question is what is sometimes called “exploitation,” and it seems to me that if philosophy should not distance itself from this genre altogether, it should at least be cautious about deriving its thought-experiments from it.

What is the zombie genre exploiting, exactly? For one thing, it exploits its audience, enticing the viewer into a few hours of thoughtless titillation that could also have been spent in edifying contemplation; for another, it exploits a complex system of folk-beliefs and practices, in this case one that developed over the centuries among the African diaspora of Haiti, and does so without acknowledging any of the depths of experience or the internal cultural logic from which these beliefs and practices emerged. For Chalmers, a zombie is simply “a complete physical duplicate of a conscious human being [or animal], with the same brain structures but no subjective experience.” He acknowledges that this is essentially the same thing as what Descartes called an “automaton” (“Descartes thought that dogs were mere automata, or zombies”), but he cannot resist the lure of the new, or the temptation to break with history.

Some contemporary philosophers attempt to make this figure of thought more precise by calling it a “philosophical zombie,” but what this misses is that the character from Haitian folk-belief is already itself philosophical: it is a representation by which a group of people make sense of and navigate the world. The folk-belief in zombies is not only, and not principally, concerned with the bodily zombie of interest to analytic philosophers. Their zombie is only half of the story; there is also the corresponding soul zombie, which an evil priest keeps in a bottle throughout the duration of a corpse’s interment, and then deftly opens up under the corpse’s nose in order to bring it halfway back to life — quickened enough to do the priest’s will, and most importantly to work the fields in some living person’s stead, but not enough to remember who it was before or to contemplate its unhappy plight. Belief in zombies is thus heavily dualistic. It is not that there remains for the zombie no locus of subjective identity, only that this stays under the priest’s control while the body, also under his control, is elsewhere. The theory seems to have emerged, as it is not hard to imagine, in the encounter of Roman Catholicism, ancestral African beliefs about body and soul, and the grueling inhumanity of slave labor.

The idea of the separability of body and soul in fact has a long and complex history in both European and African folk-belief, and race is thematized with surprising frequency at many of this history’s key moments. In 1690, for example, the French Jesuit Gabriel Daniel published a satire of Cartesian dualism under the title Voyage du Monde de Descartes, which tells the story of an African servant in Europe who falls asleep while sitting in a field under a tree. A white maiden has been dishonored nearby, and a lynch-mob sets out to find the culprit. Little do they know when they find the servant that he is also an adept of the secret art of the Cartesian sect, by which an initiate can go out as his body sleeps for a little ambulatio animae, traveling around the earth and even into outer space as a disembodied soul. The mob kills what it takes to be the boy but is in fact only a “zombie.” When the soul attempts to return, it finds it has no corporeal home to return to, and so it floats around as a specter, and ultimately befriends the disembodied soul of the long-deceased Descartes. It is at least possible that Daniel is drawing on some knowledge of the proximity of Cartesian dualism to what by the end of the seventeenth century may have appeared as a distinctly racialized cluster of folk-beliefs, of the sort that would later be associated with the figure of the zombie.

 

None of this is to say that there is anything misguided about the particular thought-experiment analytic philosophy enjoys contemplating, but only that the choice of the zombie as this experiment’s vehicle reminds us of the limitations intrinsic to a philosophical tradition that considers, say, the oeuvre of George Romero part of its general culture, its universe of references and illustrations, but not, say, ethnographic reports from rural Haiti, or indeed the profoundly learned anthropology of the Jesuit intellectual tradition.

While it is simply naïve to start a reflection on zombies as if they were invented from scratch in twentieth-century American popular culture, it is positively self-defeating to start a reflection on the prospect that reality is itself virtual as if the very notion had to await our current VR technologies in order to be entertained. Perhaps under pressure from editors to give hasty shout-outs to non-Western ideas — the sort of shout-outs that are now de rigueur in Anglophone philosophy, which congratulates itself for being “inclusive” and then goes right back to doing what it would have been doing anyway — Chalmers duly catalogs in his book not only Nāgārjuna’s Buddhist anti-realism about the external world, but also the famous dream of Zhuangzi (in which the Chinese philosopher believes himself to be a butterfly), as well as the even more famous account of dreaming provided by Descartes. In this mechanical nod to the history of philosophy (in the book’s acknowledgments he provides a long list of the historians of philosophy who helped him to execute the nod) he seems close to recognizing that our brains themselves, for about eight hours a day, furnish us sufficient material for philosophical reflection in the remaining sixteen on the possibility that reality is in some way virtual. Chalmers even acknowledges early on that “[a] dream world is a sort of virtual world without a computer.” But soon enough this concession to the timelessness of the problem in question gives way to a bold claim of the problem’s novelty: “any virtual world,” we are now told, “involves a computer simulation.”

The possibility that we come pre-stocked with the sort of experiences generally thought to be novelties of the era of VR goggles looms as a threat to the entire project of what Chalmers calls “technophilosophy.” He writes that “we haven’t developed dream technology as thoroughly as we’ve developed VR technology, so Descartes’ dream argument is less affected by technological change than his illusion argument.” (The latter argument concluded that Descartes’ own mind is in fact awake, but is being systematically deceived about the existence of an external world by an “evil genius.”) But it is not at all self-evident what ought to count as “dream technology,” and an older and more capacious approach to the philosophy of technology, as distinct from Chalmers’ technophilosophy, never lost sight of the fact that tekhnê, in its original sense, included not only gadgets and other objects of human invention, but also, crucially, practices. And in this sense it is important to note that “dream technology” has in fact been well developed in certain places and times: practices for the collective interpretation, social processing, and pragmatic management in waking life of dream experience.

In cultures where such dream technology is developed, it is typical to find very different philosophical commitments concerning the objects and beings encountered in dreams, and concerning the relation of these objects and beings to those encountered in waking life. The most common view of what dreams are, if we consider them from a cross-cultural and trans-historical perspective, is that they are either as real as, or more real than, waking life. Significantly, the urgency of proving that this is not the case takes hold as a central task of modern philosophy, as in the work of Descartes, at precisely the moment when European missionaries, some of whom are in Descartes’ own epistolary network, are encountering groups of people, in the Americas for example, whose practical rationality is largely governed by experiences had in sleep. Descartes is anxious to contain these experiences, to keep them cordoned off from the waking life whose basic constituents — the self, God, the external world — are alone worthy candidates for his project of epistemological foundationalism. This is indeed one way of going about things, but it is not the default way of humanity.

Repeatedly, in fact, what Chalmers takes to be the default way of humanity turns out to be only a local road taken by modern philosophy in the centuries since the scientific revolution, often in an effort to remain faithful to the apparent implications of key scientific discoveries. But this faithfulness has frequently gone far beyond what is actually implied by science itself, and has forced modern philosophy into accepting as the default account of reality commitments that fall apart under scrutiny. Chalmers arrives on the scene at the moment when this falling apart is becoming impossible not to notice, and interprets it as evidence for philosophy’s linear progress, rather than philosophy’s return to other widely available and well-tested alternatives. This is particularly clear in his evocation of the current moment’s “fall from Eden.” Thus he reflects that “rocks in Eden were Solid, full of matter all the way through without any empty space. They had an absolute Weight, which did not vary from place to place.” But now, subatomic physics has forced us to throw out this model. “People in Eden had Free Will,” similarly. “They could act with complete autonomy and their actions were not predetermined.” But now, physics and neuroscience together have significantly challenged this belief.

What is missing here is any acknowledgment that this Eden was never long-lived and never universal. The belief in irreducible external matter and the belief in free will come with distinct local histories, emerging originally as answers to specific contextual problems (as, in the case of free will, accounting for the causes of sin). And so when Chalmers suggests that a further instance of the fall is currently transpiring, from an idea of reality as inherently physical to one that acknowledges the virtual as well, one cannot help but wish for further genealogy and elucidation of the supposedly Edenic stage of the representation of reality which is supposedly now coming to a close.

And now to the heart of the matter. What is the nature of the increasingly widespread contention that reality may be “virtual”? A great deal in our culture and our society hangs on the answer to this question. In Chalmers’ book, the argument sets out from the observation that we, today, have become adept at running simulations of many things, from possible chess moves to possible pathways and chronologies of early human migration from Asia to the Americas. We simulate paleoclimates and the possible pathways of future climate change. We simulate hydrodynamic flow and the risks of nuclear escalation. We also simulate regions of the world, from, say, a digital model of Paris in 1789 to the Amazon rainforest. In time, it is reasonable to anticipate that we will have a fully immersive model of the entirety of the observable universe. In scientific research, in defense initiatives, and in leisurely gaming, we as a species have begun digitally to reproduce the world into which we have been cast, and even to generate new possible worlds that have the power to reveal to us the general form of the future.

Now, you might anticipate that as world-simulations become increasingly fine-grained, they will begin to include not just individual human beings but also the smallest details in the constitution of these human beings, notably the number and the arrangement of their individual neurons. But if these are simulated faithfully and exhaustively, supporters of the simulation argument contend, then the simulated beings are likely to come to have the same conscious experiences as those generated by the neurons of a physical brain. Given our own tendency to run huge numbers of simulations for anything in the physical world that is of interest to us, it is reasonable, the argument in turn holds, to suppose that any beings in “base reality” (that is, the physical reality in which simulations are generated) that develop the ability to run fine-grained simulations of human brains will bring into being vastly more simulated human beings than human beings who have ever existed in base reality. That means, we are told, that if you find yourself thinking, and experiencing your life as a human being, and you acknowledge the possibility that our descendants may develop the ability to run exhaustively detailed simulations of their ancestors, it is highly probable that your own thinking is the thinking of a simulated human being and not of a human being in base reality.

This conclusion rests on a few fairly substantial presuppositions. One of them is what Bostrom, in an influential article in 2003 entitled “Are We Living in a Computer Simulation?,” calls “substrate-independence.” This is the view, widely but not universally shared among philosophers of mind, that the organic substrate in which human consciousness is realized is a contingent and not a necessary condition of this consciousness, which could just as well be realized in a silicon substrate, or in a substrate of toilet-paper rolls and string, or in anything, really, that faithfully reproduces the organization of the neurons in the brain. One implication of substrate-independence is that it would be in principle possible for each of us to “upload” our consciousness into a computer and thereby to achieve immortality.

If substrate-independence is not true, then the simulation argument is a non-starter. This is indeed a big if, and you might think that anyone who offers a version of the simulation argument would also feel compelled to make a convincing case for the truth of the claim on which it depends. But Bostrom simply presupposes it, a move that might be excused in an article in the name of succinctness. Chalmers, for his part, dilates in his tome on every question of interest to him, often repeating the same basic claims several times — and yet the argument that he offers for the substrate-independence thesis is hasty and unconvincing, and sometimes seems to be one he would rather not have to make at all.

Early in the book the technophilosopher states the case for the consciousness of simulated beings (or “sims”) hypothetically: “At least if we assume that a simulated brain would have the same conscious experience as the brain it’s simulating…”; “Under reasonable assumptions, … sims will have conscious experiences that are the same as those of the nonsims they simulate.” His fifteenth chapter, called “Can there be consciousness in a digital world?,” seeks finally to justify these “reasonable assumptions” — but it consists of a short introductory section recounting an episode of Star Trek: The Next Generation that explores the philosophical problem of the android Data’s contested consciousness, followed by a section entitled “The problem of consciousness,” in which Chalmers summarizes the contributions of his previous book on the philosophy of consciousness, where he argued that “no explanation of consciousness in purely physical terms is possible.” Chalmers does not reject this view in Reality+, though it does seem to be in some tension with the simulation argument: if base-reality consciousness is not to be explained by existing fundamental properties of the physical world, then it is not clear, at least to me, how we can be confident that a computer-based model of the structures we find in the physical brain will be conscious. If we don’t know what the relationship between the physical brain and consciousness is, we would seem to be on even weaker ground in attempting to account for the relationship between a simulated brain and consciousness.

But Chalmers quickly sets the “hard problem” aside and moves on to another section, called “The problem of other minds,” in which he runs through the familiar problem of skepticism about the consciousness of other biological beings such as humans and non-human animals. When it comes right down to it, for all we know it is not just machines that lack qualia — inner subjective states, something it is “like” to be them; other humans and animals may lack them as well. We have no access to the inner experience of other beings, and so in a strict sense other naturally generated minds leave us with the same problems as artificially constructed “minds.” But this problem is not directly relevant to settling the narrower question of the possible consciousness of AI systems, and in this section again the philosopher of virtuality postpones the promised answer to the question that serves as this chapter’s title. We finally get to the heart of the matter in the penultimate section, “Can machines be conscious?,” and in a short coda entitled “Consequences.” The stakes are high: without substrate-independence, you will remember, the entire simulation argument fails to get off the ground.

So, then, can machines be conscious? Chalmers focuses on one type of machine: “a perfect simulation of a brain, such as my own,” that is, a brain simulation that is “a digital simulation running on a computer.” The initial attempt to characterize such a simulation has a troubling air of circularity to it: “How would simulating a brain work?” Chalmers asks. And he answers: “We can suppose that every neuron is simulated perfectly.” Alright, one wants to say, but how does it work? Chalmers acknowledges that it might not work: some believe, after all, that consciousness is not an algorithmic process at all. But even this obstacle, he supposes elsewhere in the book, might be got around eventually by simulating it, whatever it is, on an analog quantum computer, when, and if, such a technology becomes available. But how again, now, using known technologies, would a simulated brain work? In what appears to me a startling bit of legerdemain, Chalmers moves from an apparently sincere concern to answer this question to what I take to be a reiteration of the presumption that a simulated brain would work (that is, would be conscious), and then proceeds to tell us what “one big advantage” of such a simulated brain would be (that “it raises the possibility that we might become the machine”), and, next, to tell us his preferred strategy for going about such a simulation (“the safest way to become a simulated brain is to become one in stages”). But this strategy for “gradual uploading” is not an argument for the view that uploading is possible. It is a proposal for how to go about testing, someday, whether it is possible or not. And this is all we get in the way of an answer to the question: “Can there be consciousness in a digital world?”

Beyond the simple non-delivery of the promised answer, there is a troubling conflation of two different kinds of models of the brain. Chalmers begins the penultimate section of Chapter 15 proposing to discuss a particular kind of machine, namely, again, “a digital simulation [of a brain] running on a computer.” But then he goes on, with the example of gradual uploading, to describe something that looks a lot more like the successive removal of cells from the actual brain and their successive replacement by implants that “interact, via receptors and effectors, with neighboring biological cells.” Now, it might well be possible to preserve the full functionality of a biological brain when it is partially, or even perhaps entirely, replaced by physical implants that do the same job as brain cells or neurons. But this does not seem to me to answer the question whether a separate computer simulation of a brain — separate in the same way that a computer model of, say, the hydrodynamics of a river is separate from the river — could become conscious. In the case of brain implants, there is a clear respect in which the artificial implants are “like” the cells they replace, and in which both the artificial and the natural entities share in the same approximate nature, notwithstanding their distinct causal histories. In the case of the computer simulation, it is not at all clear to me that the simulated brain cell, even if it is a limit-case atom-for-atom simulation, shares the relevant properties with the biological brain cell, such that we may be able to anticipate that it is capable of facilitating consciousness — no more in fact than we might anticipate that a computer-based hydrodynamic model of a river, if it were to reach a sufficiently fine-grained degree of accuracy, would become wet.

That is just not something we can expect to happen inside a computer, no matter how much the computer is able to reveal to us about wetness, and I have seen no real argument that consciousness is relevantly different from wetness in this regard. Until I see such an argument, I must withhold a commitment to substrate-independence, and this means that I am also going to decline to take the simulation argument seriously, since it depends entirely on substrate-independence in order to work. Or at least, as with creation science and other similar deviations, I am going to take it seriously as a social phenomenon, and try to understand its causes, while refusing to take it seriously on the terms it would like to be taken.

Though we are both philosophers, Chalmers and I belong to different discursive communities, and most of my criticisms here (though not the criticism of his discussion of consciousness), I recognize, may be considered as “external” to his project as he conceives it. Ordinarily I believe projects should be criticized on their merits, according to the aims and the scope that those who undertake them have chosen. But in this case I am motivated by a concern about what gets to count as philosophy, and why. Although I am a philosopher, my preoccupation with such things as the ethnography of folk-beliefs about revenants has led to a general perception among my peers that I have strayed from the discipline, that I have let too much of the actual world seep into my thinking, that I have wandered off into mere “erudition,” a term that is always used by analytic philosophers as a back-handed compliment to signal that the erudit in question is cultivating a lesser skill, one that compensates for the lack of any natural aptitude in what really counts: the art of rational argumentation and distinction-making. And so the result is that what gets to count as philosophy is often generated in a vacuum, ignorant of its sources, of the contingency and localism of what it takes to be its self-evident starting-points, and destined to be as ephemeral as the pop-culture that nearly exhausts its universe of references.

Chalmers may well be living in a simulation, but not the one that he imagines. He is simulating for himself a world that is not inhabited by scholars and critics adept at exposing the ideological forces that shape a given historical era’s conception of reality; a world not inhabited by anthropologists and the people who inform them of models of the world inspired by objects of particular cultural value, just as the video game inspires Chalmers’ model; a world in which there are no other ways of representing reality than those of a highly specialized caste in the learned institutions of Europe, India, and China, the latter two admitted as full members of the philosophical community only recently and begrudgingly, and at the expense of other traditions that could now more confidently be cordoned off as “non-philosophical.” Ideology yields simulations too, and the highest goal of the philosopher, now as ever, ought to be a search for “signs” that might lead us out of this simulation. These signs will not be “glitchy” cats that walk by, revealing their virtual nature as a result of some defect in the program, but rather doubts that might arise, for example while reading a pro-VR book such as Reality+. A philosopher who has no interest in even acknowledging the way in which ideological structures shape our worldviews has no business presenting himself as an authority on the question whether the world is a simulation or not.

Somewhat surprisingly, Chalmers favorably invokes Jean Baudrillard’s well-known postmodernist explorations in Simulacra and Simulation, which appeared in 1981. It is likely that this work entered Chalmers’ universe of references primarily because it was, famously, featured as an Easter egg in an early scene of The Matrix. Lily Wachowski, one of the original movie’s two creators, claimed in 2020 that the film was born of “rage at capitalism,” while the critic Andrea Long Chu would a year later make the case that the film is an allegory of transgender identity (an identity both Wachowski siblings would claim some years after the film appeared). But plainly these are high-theory retrofittings upon what was in its original form mostly a piece of standard-fare science-fiction fun, in which the Baudrillardian flourish adds or explains next to nothing. And in any case the extent of Chalmers’ use of the French theorist involves little more than a technical distinction that Baudrillard makes between simulation and representation.

It is ironic that Baudrillard should find his way at all into a book arguing that physical reality itself may be a simulation, since Baudrillard’s concern was with the way in which our picture of social reality is shaped and mediated in large part through media technologies. His famous (or notorious) declaration that “the Gulf War did not take place” was not, by his own lights, a denial that anyone actually died in Iraq or Kuwait in the early 1990s, but only that the idea that a typical American, and perhaps a typical European, formed in association with the phrase “the Gulf War” was excessively shaped by media forces, particularly the new uninterrupted onslaught of images on cable news networks such as CNN. And when you understand a war to be something that happens on your screen rather than in the world, this significantly constrains your capacity to arrive at a mature and sober analysis of war’s moral and human costs. Baudrillard’s analysis of simulation drew him toward the conclusion that our attachment to digitally mediated images of reality, an attachment that is pushed on us by the profit-seeking interests of the media companies, fundamentally weakens our ability to engage critically with reality itself. He was an enemy of simulation. He would have considered the “simulation hypothesis” — that what we think of as reality is in fact a virtual world of the sort with which we are most familiar from our screen- or goggle-mediated games — to have been a resounding victory for the forces of which his work is meant to be a condemnation. It is hard to think of any idea that would more gravely damage our sense of reality than the idea that it is virtual or simulated.

The gamification of social reality is a political matter, and not, in the first instance, a metaphysical one. To conceptualize reality as a whole on the model of the algorithmic gaming technologies that have so enraptured us in our own age is to contribute to the validation of a particular form of social reality: namely, the model of reality in which gamified structures have jumped across the screen, from Pac-Man or Twitter or whatever it is you were playing, and now shape everything we do, from dating to car-sharing to working in an Amazon warehouse. The “simulation argument” is nothing but an apology for algorithmic capitalism.

The spirit of this new economic and political order has extended from Chalmers’ philosophical writing into other para-academic projects, most notably the social-media network PhilPeople.org, which he co-directs with David Bourget, and which largely duplicates the structures of Facebook or LinkedIn for a global network of professional philosophers. As if to demonstrate the great distance that AI has yet to traverse before it achieves anything that might be called “intelligence” in a non-equivocal sense, I have more than once had to write to PhilPeople to request that false information in AI-generated stub profiles of me be taken down. Still today, there is a stub that appears to indicate that I am an “undergraduate” at Eastern University. There is no accounting for this, nor, evidently, given that it is still there, is there any human being willing to be held accountable.

Such a landscape of artificial stupidity, in which there is a glut of undifferentiated information and misinformation issuing forth from machines that could not care less about the distinction between the two, is, much more than the possible dawning of machine consciousness, the real story of our most recent technological revolution. That we human beings are compelled to submit to the terms and the constraints laid out by thoughtless machines — for example that we are expected to groom and update AI-generated stub profiles of ourselves that we never asked for in the first place, lest misinformation about us spread and we “lose points” in the great game of our professional standing — is, quite obviously, an encroachment on our freedom, and therefore, again, an encroachment on the one sort of play by the other. Play is now left to the very youngest of us: those too young to understand what screens are, too young to discern the world that lies behind and beyond them. Adolescence begins, perhaps, when we learn to channel our innate playfulness into competition. The comprehensive gamification of adulthood, in this light, has the condition of permanent adolescence as its corollary.

PhilPeople is in the end a boutique affair, greatly overshadowed by such large-scale projects as ResearchGate, Google Scholar, or Academia.edu, which aggressively metricize scholarly output, and effectively transform the assessment of a scholar’s work, even of a philosopher’s work, into formulae so crude that even a machine, even a dull vice-dean with a background in business administration, can understand them. Universities now regularly take such metrics as the number of downloads an open-access article has received to be decisive for promotion and tenure, and there is no reason not to expect, in such a gamified landscape, that soon enough professors up for advancement will respond to this absurd predicament by paying an off-shore click-farm for bulk downloads of published work. In time we might expect to outsource the work of both scholarship and scholarship-evaluation to the machines, which would really just be the perfection of a system already emerging, in which the only real job left is the work of managing our online profiles, while the machines do everything else.

If you wish to set yourself up in this world as a poet, your plight is substantially the same as that of a scholar: you create an account on Submittable, “the social impact platform,” and you manage it. You also have to write some poems at some point, of course, but what can poetry hope to be in the age of “streamlined social-impact initiatives so you can reach your goals faster,” as Submittable describes itself? In nearly every domain of public life in which I have any investment at all, I also have an online portal to it, and automated messages telling me that I need to update the information in my portal. In most of these domains, the activity in these portals is being tracked, and is taken as an ersatz measure of my commitment to the domains themselves. My experience is limited, admittedly, but I have trouble believing it is not representative of our contemporary situation, or that if I were a truck driver, or a restaurateur, or a cosmetician, I would not be doing much the same thing as I am in my actual life: updating my passwords, checking my stats, pumping my metrics — feeding the machines.

That these are the real challenges of our current technological conjuncture, rather than, say, the search for glitchy cats that might show us the way out of the Matrix, is a sober fact that at least some professional philosophers are prepared to acknowledge. Daniel Dennett, a great influence on and inspiration for Chalmers, has been increasingly outspoken in the view that we should not be wasting time speculating about the dawn of conscious machines when this time would be much better spent coming up with practical policy measures to ensure that machines be prevented from encroaching into distinctly and irreducibly human spheres of existence. Other philosophers are interested in the serious dangers of algorithmic bias, where machines that lack consciousness nonetheless discriminate against groups of people, with no awareness that this is what they are doing, and no accountability for doing so; and the even more serious dangers of algorithmic defense systems, where machines that lack consciousness are imbued with the gravest responsibility of all, one that they could fail as a result of a simple technical malfunction, and once again with no real accountability. Sober philosophy, in sum, recognizes our human responsibility as the makers and stewards of the machines, rather than imagining that our entire reality is a virtual simulation produced by a machine.

At least some readers might expect a book co-written by Henry Kissinger and former Google CEO Eric Schmidt (along with Daniel Huttenlocher) to be more susceptible to ideological infection than a book about technology written by an academic philosopher. But these are strange times, and I must grudgingly report that in their recent work The Age of AI and Our Human Future these authors are surprisingly lucid about the actual challenges that we face at present. This lucidity extends both to practical risks and to philosophical questions about the nature of the new systems that create these risks. For them, AI’s ability to process information about aspects of the environment that remain undetected by human beings fundamentally transforms the nature of several domains, notably warfare, as it adds a tremendous element of incalculability. But this information-processing capability is in the end only a further development of the same sort of computation that machines have been doing for many decades now, since they first began to be trained up on the rules of checkers and chess. No matter how far and wide their training extends, the ability of machines to process a wide range of moves more quickly and exhaustively than a human being ever could is by no means an indication that they are moving towards any sort of intelligence, let alone consciousness, of the sort that human beings experience. Indeed the fact that they are so much better at processing certain bodies of information should itself be taken as an indication that they are not at all doing something comparable to what we are doing in our imperfect, limited way. Information-processing, no matter how vast, is not the same activity as judgement.

It is the presumption that human beings are, in their nature, algorithmic “problem-solvers,” as Karl Popper was already saying in the 1950s, that leads to such poorly thought-out efforts at the integration of machines into human society as we are seeing in the present day. Already in the early 1960s, Norbert Wiener discerned the most serious challenges of the digital revolution, where simply training machines to execute evidently minor tasks already awakens threats that can be foreseen in their concrete form, and forestalled in good time, only with great difficulty. “To turn a machine off effectively,” Wiener warned, “we must be in possession of information as to whether the danger point has come. The mere fact that we have made the machine does not guarantee that we shall have the proper information to do this. This is already implicit in… the checker-playing machine [that] can defeat the man who has programmed it.” Our greatest challenge today is not that machines may gain consciousness, and still less that we are ourselves conscious machines, but that the machines may defeat us, and do not require consciousness in order to do so.

The real prospect of our total defeat arose in the middle of the twentieth century at the same moment that we began to take strategy games, even such trivial pastimes as checkers, to be paradigmatic models of the core endeavors of human life. The intellectual historians of the last century regard “game theory” as one of its great achievements, but it is past time that we regard it critically and recognize the poverty of its understanding of human motivation and human action. We have conferred too much prestige upon games, just as we have mistaken algorithms for play. We should be highly wary today of anyone who continues to take games, even such trivial and seemingly harmless pastimes as the VR-mediated fun of Rec Room, as any sort of key for grasping the human difference and its place in our fragile and frightening world.