The Technology of Bullshit

Apart from being sent to bed early, the worst part about being the youngest member of my family was that everyone around me could read except me. Even though I wasn’t born into a bookish family, I could intuit the power of the written word. It allowed my mother to remember what she had to buy in the market. Notes passed between my brothers could elicit laughter. Note to self: written squiggles can tickle. I knew my father often stayed up late immersed in a novel. I remember staring at my brother for hours while he was doing homework, his eyes darting across the textbook in front of him, the pencil in his hand bobbing over the notebook page, leaving mysterious symbols behind. I felt excluded from what I knew was a world of meaning. “When can I learn how to read?” I asked on my first day of school.

Words enable us to read minds. Through them, we can communicate, with varying degrees of precision, our innermost thoughts and our most visceral feelings. We can travel through space and time. Words allow us to learn from the dead and convey our knowledge to those who come after us; they allow us to overcome the geographical and temporal limits of our bodies. Words are vehicles through which we can plan and coordinate with others. Sentences and paragraphs are tools through which we enhance our cognitive capacities. Written language is one of our defining skills as human beings.

In 1950, the computer scientist Alan Turing imagined that, one day, written language might not be exclusive to human beings. In his “imitation game,” human judges would hold five-minute text-based conversations with a computer and a person, both hidden from view. Could the judges reliably tell which was the computer just from reading its text? If they couldn’t, might that mean that the computer was thinking? This became known as the Turing Test. It was never meant to be a literal test for intelligence; it was a thought experiment. But it was suggestive enough to become a landmark. For several decades, the businessman Hugh Loebner funded an annual Turing Test event known as the Loebner Prize, which enacted the imitation game. The contest stopped, for financial reasons, after Loebner’s death, only a few years before the development of large language models.

Large language models disrupted the world in November 2022. OpenAI made ChatGPT available, and in a matter of days it was the main topic of conversation at every lunch and dinner I had, whether with friends or colleagues. What was unclear was whether much of the excitement was illusory — little more than impressionable human beings feeling dazzled by the latest tech trick — or whether it was the product of glimpsing the sprouts of a revolution that will radically alter how we work and how we interact with technology. The question remains unanswered.

The shiny side of large language models includes the astounding feeling that one is talking to another person, and the hope that these imitators could work for us. College students salivated at the prospect of having them write their essays, and professors felt guiltily tempted to use them to mark those essays. Lawyers figured that they could use them to draft legal briefs (it turned out to be a bad idea). Doctors hypothesized about using them to write notes on their appointments with patients. And who wouldn’t want an AI to answer the billions of inane emails that we send one another? The main argument in favor of these systems, it seems, is the promise of greater efficiency and therefore greater productivity.
In this sense, it is not different in kind from other mechanical devices that were invented to make life easier.

Except that it is different in kind. Delegating language to a statistical tool built by a company has its own special shadows. One concern is that, by automating more of our lives, we are losing skills to AI. “Use it or lose it,” my Latin teacher said every time he assigned homework. He was right: decades later I have lost it entirely.

I’m writing these words on an airplane. It is cold and windy outside, and the pilot has mentioned the possibility of turbulence. If there is an emergency, I wonder, does the pilot have enough flying experience to know how to navigate it successfully?

On Continental Connection Flight 3407 in 2009, there was no mechanical failure. The captain had been distracted talking with the first officer. As they prepared for landing, they continued chatting, forgetting to monitor the airplane’s airspeed and altitude. By the time the captain realized that they were in trouble, it was too late. No one on board survived. Similarly, Asiana Airlines Flight 214 crashed in 2013 because the pilots were not proficient at landing without the use of high-level automation. That day, the part of the airport’s instrument landing system that helps guide planes to the runway was out of service for repairs.

As flying has become more automated, pilots have been losing certain skills required to fly manually, such as navigating by reference to landmarks, calculating their speed and altitude, and visualizing the plane’s position. They don’t get to practice these skills enough. What’s more, with automation they have fewer details to worry about during flights; this induces boredom and mind-wandering, which in turn cause mistakes. When automation fails or requires human input, distracted pilots are less able to overcome risky situations.

Artificial intelligence is the ultimate kind of automation. The aspiration is to create a kind of intelligence that can take over as many of our tasks as possible. As we increasingly rely on AI in more spheres of life — from health and policing to finance and education, and everything in between — it is worth asking ourselves whether increased automation will lead to a loss of expertise, and to what extent that might be a problem. And the concern that technology might degrade our cognitive
