When you log into ChatGPT, the world’s most famous AI chatbot offers a warning that it “may occasionally generate incorrect information,” particularly about events that have taken place since 2021. The disclaimer is repeated in a legalistic notice under the search bar: “ChatGPT may produce inaccurate information about people, places, or facts.” Indeed, when OpenAI’s chatbot and its rivals from Microsoft and Google became available to the public early in 2023, one of their most alarming features was their tendency to give confident and precise-seeming answers that bear no relationship to reality.

In one experiment, a reporter for the New York Times asked ChatGPT when the term “artificial intelligence” first appeared in the newspaper. The bot responded that it was on July 10, 1956, in an article about a computer-science conference at Dartmouth. Google’s Bard agreed, stating that the article appeared on the front page of the Times and offering quotations from it. In fact, while the conference did take place, no such article was ever published; the bots had “hallucinated” it.

Already there are real-world examples of people relying on AI hallucinations and paying a price. In June, a federal judge imposed a fine on lawyers who filed a brief written with the help of a chatbot, which referred to non-existent cases and quoted from non-existent opinions. Since AI chatbots promise to become the default tool for people seeking information online, the danger of such errors is obvious.

Yet they are also fascinating, for the same reason that Freudian slips are fascinating: they are mistakes that offer a glimpse of a significant truth. For Freud, slips of the tongue betray the deep emotions and desires we usually keep from coming to the surface. AI hallucinations do exactly the opposite: they reveal that the program’s fluent speech is all surface, with no mind “underneath” whose knowledge or beliefs about the world are being expressed.

That is because these AIs are only “large language models,” trained not to reason about the world but to recognize patterns in language. ChatGPT offers a concise explanation of its own workings: “The training process involves exposing the model to vast amounts of text data and optimizing its parameters to predict the next word or phrase given the previous context. By learning from a wide range of text sources, large language models can acquire a broad understanding of language and generate coherent and contextually relevant responses.” The responses are coherent because the AI has taught itself, through exposure to billions upon billions of websites, books, and other data sets, how sentences are most likely to unfold from one word to the next.
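What “predicting the next word” means can be made concrete with a deliberately crude sketch. The Python toy below is an illustration of the general technique, not OpenAI’s actual method: it is a minimal bigram model over a tiny invented corpus, whereas ChatGPT is a neural network with billions of parameters trained over subword tokens. But the objective is the same in kind: given the words so far, produce a statistically plausible next word.

```python
# A toy next-word predictor, in the spirit of the mechanism described
# above. An illustrative sketch, not how ChatGPT actually works: it
# records which word follows which in a tiny made-up corpus, then
# generates text by sampling an observed continuation at each step.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count the observed successors of every word.
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(start, length=8):
    """Extend `start` by repeatedly choosing a plausible next word."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:  # no observed continuation: stop
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the dog sat on the mat and the cat"
```

Even a model this crude emits locally fluent strings without containing any notion of what a cat or a mat is; scaled up by many orders of magnitude, the same objective yields prose that is far more polished but no less referent-free.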
You could spend days asking ChatGPT questions and never get a nonsensical or ungrammatical response. Yet awe would be misplaced. The device has no way of knowing what its words refer to, as humans do, or even what it means for words to refer to something. Strictly speaking, it doesn’t know anything. For an AI chatbot, one can truly say, there is nothing outside the text.

AIs are new, but that idea, of course, is not. It was made famous in 1967 by Jacques Derrida’s Of Grammatology, which taught a generation of students and deconstructionists that “il n’y a pas de hors-texte.” In discussing Rousseau’s Confessions, Derrida insists that reading “cannot legitimately transgress the text toward something other than it, toward a referent (a reality that is metaphysical, historical, psychobiographical, etc.) or toward a signified outside the text whose content could take place, could have taken place outside of language.” Naturally, this doesn’t mean that the people and events Rousseau writes about in his autobiography did not exist. Rather, the deconstructionist koan posits that there is no way to move between the realms of text and reality, because the text is a closed system. Words produce meaning not by a direct connection with the things they signify, but by the way they differ from other words, in an endless chain of contrasts that Derrida called différance. Reality can never be present in a text, he argues, because “what opens meaning and language is writing as the disappearance of natural presence.”

The idea that writing replaces the real is a postmodern inversion of the traditional humanistic understanding of literature, which sees it precisely as a communication of the real. For Descartes, language was the only proof we have that other human beings have inner realities similar to our own. In his Meditations, he notes that people’s minds are never visible to us in the same immediate way in which their bodies are. “When looking from a window and saying I see men who pass in the street, I really do not see them, but infer that what I see is men,” he observes. “And yet what do I see from the window but hats and coats which may cover automatic machines?” Of course, he acknowledges, “I judge these to be men,” but the point is that this requires a judgment, a deduction; it is not something we simply and reliably know.

In the seventeenth century, it was not possible to build a machine that looked enough like a human being to fool anyone up close. But such a machine was already conceivable, and in the Discourse on Method Descartes speculates about a world where “there were machines bearing the image of our bodies, and capable of imitating our actions as far as it is morally possible.” Even if the physical imitation were perfect, he argues, there would be a “most certain” test to distinguish man from machine: the latter “could never use words or other signs arranged in such a manner as is competent to us in order to declare our thoughts to others.” Language is how human beings make their inwardness visible; it is the aperture that allows the ghost to speak through the machine. A machine without a ghost would therefore be unable to use language, even if it were engineered to “emit vocables.” When it comes to the mind, language, not faith,