When you log into ChatGPT, the world’s most famous AI chatbot offers a warning that it “may occasionally generate incorrect information,” particularly about events that have taken place since 2021. The disclaimer is repeated in a legalistic notice under the search bar: “ChatGPT may produce inaccurate information about people, places, or facts.” Indeed, when OpenAI’s chatbot and its rivals from Microsoft and Google became available to the public early in 2023, one of their most alarming features was their tendency to give confident and precise-seeming answers that bore no relationship to reality.

In one experiment, a reporter for the New York Times asked ChatGPT when the term “artificial intelligence” first appeared in the newspaper. The bot responded that it was on July 10, 1956, in an article about a computer-science conference at Dartmouth. Google’s Bard agreed, stating that the article appeared on the front page of the Times and offering quotations from it. In fact, while the conference did take place, no such article was ever published; the bots had “hallucinated” it.

Already there are real-world examples of people relying on AI hallucinations and paying a price. In June, a federal judge imposed a fine on lawyers who filed a brief, written with the help of a chatbot, that cited non-existent cases and quoted from non-existent opinions. Since AI chatbots promise to become the default tool for people seeking information online, the danger of such errors is obvious.

Yet hallucinations are also fascinating, for the same reason that Freudian slips are fascinating: they are mistakes that offer a glimpse of a significant truth. For Freud, slips of the tongue betray the deep emotions and desires we usually keep from coming to the surface. AI hallucinations do exactly the opposite: they reveal that the program’s fluent speech is all surface.