Grids, Glass, and More Glass

    I have started thinking of them as spaceships to nowhere. In my city, another one is always on the way; the latest touches down at 213 Bowery this fall. The last to arrive at that address, the SANAA-designed New Museum, was finished in 2007, the year of Obamamania and the iPhone and the first gentle rustling of the Great Recession. Like its predecessor, the new annex is a sleek, politely glowy object. It disrupts the skyline but not too much, making a statement but not too loudly. Designed by Rem Koolhaas and Shohei Shigematsu, it took three years to build, doubles the original floorspace, and cost something like sixty million dollars, or what Manhattan developers now call a “bargain.” The style is the kind still optimistically described as futuristic, though the reopening of a new New in 2025 is a reminder that this future is close to twenty years old, a throwback to a time when smart design and calm authority were still widely believed to be capable of saving the world. The first exhibition on the calendar is titled, too perfectly, “Memories of the Future.” 

    The timing is always strange with these things. 2025 has been a terrible year for most American museums, the year rivers of cash went dry: millions lost for the Clyfford Still, the Berkeley, the Pennsylvania Academy of the Fine Arts, and dozens of others, because the National Endowment for the Arts would no longer provide them; and millions more in international tourism, because suddenly nobody wants to fly here and take a Nighthawks selfie. It was the year the National Endowment for the Humanities cut major funding to museums. 

    This was also the year the White House went snarling after the Smithsonian, and when suited Beltway goons paid the National Gallery a visit to discuss its “legal status.” It was the year an executive order blasted the National Museum of African American History and Culture for encouraging its visitors to learn something about racism, and the year Trump squeezed out the National Portrait Gallery’s director for daring to think that diversity was on the whole not such a bad thing. It was the year hundreds gathered in Washington, D.C., to defend the Smithsonian, chanting, “Hands off our history,” though it was also the year protesters booed the Brooklyn Museum’s latest round of layoffs, and others crowded the lobby of the Whitney to condemn the cancellation of a pro-Palestinian performance, and others danced outside the Museum of Natural History to condemn board members who profit from oil. It was the year climate activists stayed in jail for throwing soup on Van Gogh’s Sunflowers and the name “Warren Kanders” stayed in the Whitney lobby. 

    Depending on who you talk to, in sum, 2025 was the year museums were the victims or the problem or part of the problem. What is beyond dispute is that 2025 was the year America’s big museums got bigger. Every year is. 

    The New Museum; the Studio Museum, which reopens this fall in a new three-hundred-million-dollar package by Adjaye Associates and Cooper Robertson; the Frick Collection, recalled to life after a two-hundred-million-dollar renovation by Selldorf Architects; the Met, seventy million dollars poorer but one twinkly Michael C. Rockefeller wing richer. 2025 is no anomaly. In 2015, the Whitney moved to a new four-hundred-million-dollar Renzo Piano building; in 2019, MoMA reopened after a years-long, block-darkening, four-hundred-fifty-million-dollar expansion, its third in as many decades. COVID paused the growth for a while but did nothing to challenge the trustees’ confidence that growth is good. The rest of the 2020s will add fifty thousand square feet of waterfront property to the Tampa Museum of Art (a little ominous, given the state of the Atlantic, but hopefully Florida knows what it’s doing), a hundred thousand square feet to the Portland Art Museum in Oregon, sixty thousand to the Portland Museum of Art in Maine. Upward and outward they swell: palaces of art covered in endless pricy lifts and implants and transplants, not so different from the kind the sponsors lavish on their own bodies. 

    If you want a clear x-ray of an era, every triumph and delusion crisply rendered, you can always study its art. In the case of the United States in 2025, however, it might be more revealing to study its art museums. Such anxious, blustery things! By the time a new renovation has hatched, a successor is already pecking through the shell. The final products slant and shift their weight as though aware that there is nothing final about them: not a chance in a society that relishes moving fast and breaking things, including itself. 

    Is it ungrateful, in Trump Part II Year One, to be skeptical of the art museums that have managed to keep expanding, thanks to billionaire largesse? The American system of private cultural philanthropy has a lot to answer for, but at least it provides some cushion from POTUS 47’s whims. The better question might be: given the rain-or-shine ballooning of these buildings, and the municipal taxes that help make them possible, and the unaffordable restaurants, and the thirty-dollar tickets, and the shady land rights deals, and the write-offs, and the walls covered in donor names so that your eyes start to burn well before you reach the paintings, and the gift shops of deluxe crud, and the gentrifying neighborhoods that make the restaurants look affordable, and the galas — given all this, what, exactly, does museum expansion have to do with art? 

    The concept of growth, I am not the only one to notice, is having a rough twenty-first century. Blame the housing bubble, the overextended American empire, the mallification of urban centers, the net worth of the plutocracy, the greenhouse gas emissions, or all of them, since they may be symptoms of the same sickness. At least among people without summer houses, growth is reckless, boorish, decadent, cancerous, inherently suspicious; growth is the needle tower that could wipe out homelessness but stays empty fifty-one weeks of the year; growth is the rising tide that exclusively lifts yachts. Even its cooler friend, sustainable development, may only be growth with a better PR agent. 

    In the midst of this, museum growth seems to enjoy something like the benefit of the clergy. Not always, and not all museums — many journalists have wondered how much of 53rd Street MoMA will swallow before its stomach stops growling — but an expanding American art museum is still innocent until proven guilty, as an Amazon headquarters or a McMansion is not. In the clash between love of art and skepticism of growth, love triumphs. Art museums are sacred spaces where many visitors have the closest thing to a religious experience they will ever feel. What could be wrong with making room for more worship? 

    Start with the simplest justification for museum expansion: structural necessity. Some of these places are a century old, and nobody can worship if the walls crumble. All big buildings require repairs: ceilings blotch, plumbing and heating go on the fritz. These are the kinds of problems that irk every museum, and, traditionally, the kinds trustees have no interest in paying to fix. Upkeep is as important as it is unglamorous — no ribbon-snipping ceremony welcomes the new roof tiling, and nobody wants their name on a radiator, though to be fair I did see a named fire escape on a recent visit to MASS MoCA. It is a curious side-effect of tycoon psychology that a museum director may have an easier time scraping together fifty million for a new building than half a million for new toilets: vanity being vanity, unnecessary expansion is one of the shrewdest ways of funding necessary repair. 

    Who cares about necessity, as long as the results are beautiful? Some of the recent museum growth in New York, where I live, certainly is lovely. The cantilevered staircase that connects the two floors of the Frick has an elegance that doesn’t overpower; the Breccia Aurora marble somehow splits the difference between the flowery Bouchers upstairs and the pale chill of the Reception Hall below, so that the transition rivals either destination. I have heard sniggers about the Gilder Center at the Museum of Natural History, but I think its oozy granite interiors are built to last in the most important sense: while other, more self-consciously tasteful buildings are doomed to look more like the 2020s with each passing year, the Gilder will go on looking like itself. 

    For every triumph, though, there are multiple museum makeovers that inflame my inner Peggy Lee. Is that all there is? I thought this when I visited the new Whitney a decade ago. The design was far from terrible; an outright terrible building would have been so much less perplexing. The eastern façade resembled four or five façades stacked together while their architect, recently dubbed “our Brunelleschi” by one of America’s leading magazines, decided where to put them. The western side, facing the Hudson, resembled a ship with a white sail, if the sail was a hunk of Styrofoam and the ship was sinking. North had lots of exposed pipes that somebody must have found pleasant to look at, and south was so utterly, breathtakingly okay it could only have been the work of a renowned architect dozens of museums deep in his career. 

    And for everything that goes up in a city this snug, something else must be knocked down. The Frick’s Music Room was one of the most ravishing places in New York before Selldorf’s renovation scrapped it to make way for temporary exhibitions (some excused the act by saying the venue was too small, as though this wasn’t half the charm of the Music Room, not to mention the rest of the building — architectural victim-blaming). In 2014, as an amuse-bouche before its next meal, MoMA chewed up the American Folk Art Museum, having bought the building and decided that Williams and Tsien’s bronze façade disagreed with its house style of grids, glass, and more glass. That the world’s most influential modern art museum has taken to junking work that stands too far outside the aesthetic norm is a sick joke I will leave hanging. 

    Who cares about beauty, as long as the results are bigger? At least in press releases, the rationale for museum expansion is fiercely utilitarian: more space equals more wall area, which allows for the display of more art and a greater bang-for-buck for the common museumgoer’s eyeballs; more space also means more floor area and elevators and stairwells, which work together to relieve congestion. All very sensible on paper — funny, though, how the tiny museums that would benefit most from additional acreage cannot afford any and the museums that already own football fields of it seem to get more congested with growth, as any MoMA visitor knows. But only a fraction of the new layout goes to art, and the total amount may go down — at the moment LACMA is wrapping up a new six-hundred-fifty-million-dollar building by Peter Zumthor with ten thousand fewer square feet of galleries. 

    Even when the new space is bigger and one hundred percent art-devoted, it is unclear why a mega-museum gets intrinsically better with more stuff on the walls. “One cannot enjoy a pure aesthetic sensation,” as Kenneth Clark terrifyingly put it, “for longer than one can enjoy the smell of an orange.” The purpose of going to the Met should not be to huff every orange on the tree, nor should it be the Met’s duty to pelt visitors with as much citrus as possible. In point of fact, large museums never come close to displaying everything they own and instead rotate their permanent collections in and out of storage. The Met’s collection includes close to two million works, only about five percent of which fit in the building; for MoMA, the number is somewhere around ten percent; for the Guggenheim, three. Making room for everything will always be a quixotic cause — besides, if given the choice between a shinier museum with marginally more on the walls and an already massive museum that doesn’t cost a family a hundred dollars to visit, which one would our mythic common museumgoer choose? 

    The question is theoretical, needless to say. One of the tartest ironies of this era of nine-figure art museum philanthropy must be how little of the money reaches the consumer: across America, pay-what-you-can entry has been scaled back to free weekends, free weekends to seasonal free weekdays, and seasonal free weekdays to free parking. With every hundred-million-dollar expansion, free museum admission looks more like a weird twentieth-century fossil. Administrators cite study after study proving that cheaper tickets have no measurable effect on the size of museum audiences, and so — ah, terrible shame! — they might as well charge twenty or thirty dollars. I am less interested in the studies than I am in why free admission is now posed as a question instead of a right, economics be damned. Had the same stern, penny-pinching scrutiny been applied to the hundreds of new wings and annexes of recent years, I wonder how many of them would exist. 

    Fear not, though: museum expansion is of debatable value to the visitor but of enormous value to somebody else. If there is a central reason why museums keep growing, it may be that the donors like a guarantee that their gifts will remain on permanent display and not be buried in storage — more space, more guarantees. (Philip Guston’s daughter Musa Mayer, for example, made the Met a present of 220 of her father’s works on the condition that at least half be on display at all times.) When done right, philanthropy pets the ego and pads the wallet. There is the psychological reward of knowing your art collection will be ogled long after you are dead, plus the charitable tax deduction, plus all the ancillary ways of pocket-lining. A high-end museum drives up property values, attracts tourists, fills up hotels and department stores, and generally enriches the sort of people who populate museum boards to begin with. Conflicts of interest are, of course, discouraged — why else would it say so in the code of ethics? Gentlemen, I’m shocked, shocked to find that profiteering is going on in here! 

    Still, reasons only take you so far. Follow utilitarian logic through to the end and, usually, you arrive at some humorless “well . . . because.” Few multi-millionaires and billionaires are famous for the practicality of their wants; probably not even they know why a rising tide delights them so much. Why have America’s museums kept getting bigger, then? To fit a few thousand more artworks next to fifty thousand others, certainly. Fundamentally, though, museums expand because expansion does not need a reason: to the people who make the decisions, it justifies itself, like life or happiness or, for a few hopeless fogeys, art. For a long time now, the signature style of the contemporary art world has been something like real estate aestheticism — growth for growth’s sake. 

    Even though I’m the other kind of aesthete, my first instinct is to say, let the tycoons do what they want. There are worse things to do with money than burn it, and every million dollars spent on a modern art wing that nobody likes is a million dollars not spent on predatory loan marketing or the reelection of some moussed, drooling climate change denier. 

    If museum expansion warped buildings and buildings alone, I could laugh it off, but it has a way of warping what’s inside them, too. Make a quick list of the glitteriest art careers of the past twenty years or so, and you find a few genuine talents and a rollcall of mediocrities with a gift for ritzing up the vast gray interiors in which museums increasingly abound: Ai Weiwei, scatterer of porcelain seeds at the Tate Modern; Yayoi Kusama, wallpaperer of the same institution and dozens of others; KAWS, whose giant brown dolls I am doomed to pass every time I find myself in the lobby of the Brooklyn Museum. The first task for these people is to fill up lots and lots of space, and at this they succeed brilliantly, since their work consists of a few simple elements (seeds, dots, dolls) that can babble on to whatever degree is required of them. When museumgoers walk in and ooh at the dots disappearing into the distance, they are oohing at the giant space that hosts them, handsome in its bright new costume. Space is boss, and art does what it says. 

    The premier filler-upper artist of the decade so far must be Jeffrey Gibson, the MacArthur genius and proud occupant of the American pavilion at last year’s Venice Biennale. With the help of a stable of assistants, he assembles hundreds of thousands of rainbow beads into sculptures, paintings, and costumes, none of which exhibit the slightest grace or facility with color, unless turning on the entire spectrum full blast is your idea of chromatic wizardry. Glance one-eyed at a Gibson and you absorb the whole thing along with most of the others. The best test of this is the impressive forgettability of his work — not long ago I spent a while in POWER FULL BECAUSE WE’RE DIFFERENT, his installation at MASS MoCA. I am still pondering that fire escape, but today I would be hard-pressed to say if this dress was bright yellow or bright blue, if that bit of wall was bright orange or bright pink, or much else beyond the fact that the room was big and everything was bright. But this would seem to be part of what museums love about Gibson, and why at the time of this writing his beads are being slobbered over from sea to shining sea: they dress up anywhere because they don’t say too much of anything. As with Kusama’s dots, there is something superficially innocent about them that gives the most bloated museum hangars a sweet glaze of populism. 

    Bloat and populism are having spectacular twenty-first centuries. It seems strange that both should be doing so well simultaneously, but here we are. Bloat won economics, while populism seems to have won aesthetics some time ago. (Politics is the usual tug o’ war between them.) Peacocking displays of wealth are so common that our senses have numbed to them, but “elitist” has been one of the filthiest words in the English language for as long as I have spoken it. Nobody gets in trouble for selling out anymore, but the idea of making art that might alienate some of its audience has become vaguely impolite, to the delight of some and the horror of others. (The inevitable Mark Fisher quotation: “The assault on cultural elitism has gone alongside the aggressive restoration of a material elite.”) Nowhere do bloat and populism clash with such matter-antimatter explosiveness as they do in museums: the new spaceships to nowhere are for everybody, and they are toys and tools for billionaires. The harder they strain to seem down-to-earth, the more bloat they hide. 

    The sociologist András Szántó bottles the bloat and the down-to-earth-ness and the rest of contemporary arts administration culture in The Future of the Museum. A collection of twenty-eight interviews with museum directors, all conducted in the early months of COVID, the book is a quietly amazing compendium of the ways in which art people — but not artists — think about art. I read and reread it like a novel. There are twenty-eight main characters, half men and half women and all fluent in their regional dialects of bureaucratese. (The fungal creep of the word “immersive” in the last decade or so has spared few museum directors.) Their institutions are scattered across fourteen countries on every continent but Antarctica. Together they preside over some seven million objects and an annual budget of nearly a billion dollars. Their average age is forty-nine. Many studied art history in college, though one is an ex-Louis Vuitton executive and another is an ex-child star. Part of the pathos and the comedy of this novel is that nobody is allowed to say what’s really on their mind, but sometimes they are so determined not to say X that X is clearly the only thing they are thinking. 

    The first thing I noticed, reading The Future of the Museum in this bumper year of buildings, was that nobody fesses up to wanting a bigger museum. In hindsight, at least some of these people were speaking to Szántó in between frantically rescheduling the new sculpture wing, yet the subject of expansion goes all but unmentioned for three hundred and seventeen pages — like the dog in the Sherlock Holmes story, it doesn’t bark because it recognizes its master. Instead of growth, museum directors would like to talk about community. It would be impossible for me to overstate how badly they would like to do this. The executive director of the M+ Museum in Hong Kong believes in the importance of community. So does the director of the Garage Museum of Contemporary Art in Moscow. So does everybody — “the term ‘community’ is bandied about too much,” says the director of the Brooklyn Museum after bandying it about too much. Not that anyone can really be anti-community, but the tic-like repetitions suggest a guilty conscience. The more I read the word “community,” the more vividly I pictured a big concrete slab named for a Sackler. And it is strange to see communities praised on page after page with so few mentions of what they are communities of. 

    To put it another way: this is a book of conversations with twenty-eight of the world’s most educated and powerful arts administrators in which almost nobody speaks with passion, or even much warmth, about art; in which everybody remembers to praise community but nobody rhapsodizes about a painting or a sculpture or a film or a tapestry or a drawing. At times, some of these people seem almost sheepish about managing such flimsy things. One director does speak at length about the value of his museum’s collection, but he is talking about its cash value, which apparently is five billion dollars. (“The conversation about how the liquidity trapped in artworks can be used has been a very unnuanced one.”) In the book’s most touching and depressing moment, the Brooklyn Museum’s director confesses that she wonders if she should have gone into politics instead. 

    These interviews were conducted in a pandemic year, to be fair, and perhaps it struck Szántó’s subjects as insensitive to extoll Rembrandt in dark times, though it might have struck them that in 2020 some of us needed Rembrandt, who lost his lover to plague, more than ever. Americans consider art “a luxury rather than a necessity,” as the poet and one-time NEA chair Dana Gioia wrote in 1991, well before the pandemic or the dismembering of the NEA. By “Americans,” Gioia meant people who have not chosen to devote their lives to art, but this book made me wonder if some art bureaucrats are hiding the same sneer. If you didn’t believe paintings to be of vital importance during the COVID-19 pandemic, you don’t really believe them to be of vital importance at all. 

    To understand a tribe, anthropologists say, it is not enough to pay attention to what the tribespeople talk about. Truth lies also in what they are not talking about: all the thoughts they consider too self-evidently absurd to mention. The big unspoken subject in The Future of the Museum, even bigger than museum expansion, is pleasure. Museum directors differ in their attitudes toward retail or political neutrality, but on pleasure, and the possibility that a museum might afford its visitors some, they sing the same silent song. Twenty-eight times Szántó asks what a museum is for, and almost every interviewee replies with something about education or activism or building community — all admirable goals, but lifeless when the central one goes missing. The director of the de Young in San Francisco says in four words what everyone else in this book says in zero: “We are not entertainment.” 

    His grimness would have amused Alfred Barr, MoMA’s first director, who felt his museum’s purpose was to help people “enjoy, understand, and use the visual arts of our time.” We can imagine what would happen if the art museum directors of the early twenty-first century had to agree on their own definition, though actually we don’t need to imagine anything: in September 2019, the International Council of Museums proposed that museums “are participatory and transparent, and work in active partnership with and for diverse communities to collect, preserve, research, interpret, exhibit, and enhance understandings of the world, aiming to contribute to human dignity and social justice, global equity and planetary well-being.” A revised version appeared three years later, with “enjoyment” tossed in at the end like a pack of gum in the checkout line. I have nothing against planetary well-being, and you may quote me as saying so. How bizarre, though, to hold entertainment, one of a handful of things that makes this planet bearable, in such low esteem, like an opera house that proclaims its commitment to justice but forgets to mention music. 

    The final twist of The Future of the Museum is that it is full of odes to the power of the image, just not the kind of image you would expect museum workers to praise. There are millions of people who will forgo sleep, sex, sun, water, and food to keep staring at screens, and clearly arts administrators have taken envious notice. Things go from bizarre to sinister here: the museum directors of the early twenty-first century look at the red-eyed consumers and the companies selling their own dopamine back to them and think, “How can we be more like that?” “How can we better understand the motivations and intentions of the kinds of experiences that people are seeking through online games,” wonders the director of the Singapore Art Museum, “so that we may use these as a way to steer them toward, as well as complement and enhance, the experiences museums can offer?” “This idea of the museum as photo backdrop arrived here early,” adds another director. “We spend a lot of time thinking about how to turn this inexorable urge into something productive.” (Notice he doesn’t say, “Something pleasurable.”) “People love serialized content,” the nuanced liquidity guy opines. “Imagine if museums found a way to have each program build off the previous one, and if we figured out a way to distribute that through digital media in a way that was binge-worthy. That is a digital future I would like to imagine.” 

    Binges and gamifications and inexorable urges — behold the museum directors’ glorious dream! What disturbs me more is that they claim to be dreaming in our names. 

    Pleasure, you have surely noticed, is having a spectacular time of late, and a terrible one. Some audiovisual thrill is always available, provided your devices stay charged, but if you have never felt the ache of all this bottomless fun, bully for you. There are whole clinics clotted with people who got such a kick out of online games or porn or other pixelated delights that they no longer feel much of anything; and their undiagnosed kin absent-mindedly run the world. It is telling that the concept of the guilty pleasure has almost disappeared from the culture — now there are only different pleasures for different folks. Neuroscience concurs, cheapening the feeling to a chemical squirt. 

    Some dour economic principle seems to be at work: mint too much pleasure too fast and it is devalued into the merest itch. As though to keep its stock trading high, meanwhile, fine art gets cashed in the stabler currency of community or duty or self-improvement or self-advancement — “something productive,” as that wise museum director might put it. Hence all the books insisting that the function of great literature is to make us nicer (Céline? Hamsun?); hence all the op-eds my grandfather used to mail me about how a humanities degree could help me get a job at McKinsey. 

    There is something I have not yet mentioned but should. I get enormous pleasure from art museums, not only the underfunded ones, but also the gray lugs I have been complaining about. And not only the art that hangs in them; I mean also the lines, the selfies, the gross kid-friendly installations in the lobby, the humid elevators of tourists, the tour groups, the wall texts written in something that somewhat resembles English. Of course I also get sick of every one of these things, but I believe that any real love for the exhilarating, exhausting art museum involves some irritation, a healthy mix of because and in spite of that is stronger than because alone. The kind of pleasure I get from museums, I suppose, is the kind I get from communities (we shouldn’t let the art bureaucrats ruin the word), and from art, and from almost anything else that is intrinsically worthwhile. 

    The question with which I began was not entirely rhetorical: what does museum expansion have to do with art? Very little, but also everything. Unless you happen to be wealthy enough to buy masterpieces yourself, to experience art means to experience it with a pack of strangers in a shiny new room named after people you couldn’t stand much more than they could stand you. The people who make such places happen seem to think of museumgoers as dopamine junkies, utility maximizers who will of course want to see more things since more things equal more dopamine. You can, if you like, play along with this and try to binge as you might binge on serialized content. You can also slow down, choose a handful of works, and go swimming in them. I recommend option two, not because I have any illusions that it measurably alters the world but because I believe that real pleasure exists outside measurement, and because I believe that real pleasure needs no reason to exist. If you require one, though, might I suggest the satisfaction of not acting like the sheep that art bureaucrats would like us to be? If we need help, we can always consult artists. 

    The artist Johann Zoffany has been of some help to me, even though I can’t always convince myself that he really existed. His work hangs in the Tate, and the ZOFFANY, JOHANN (1734/5–1810) entry in my edition of The Oxford Companion to Art is respectably long. Still, ask yourself, does this sound like a person or a literary character? Born Johannus Josephus Zauffaly near Frankfurt, he moved to Rome at seventeen and reinvented himself as Zoffani. In his twenties he got himself a court painter gig in Würzburg, but three years later he ran off to London, leaving his wife behind. In Georgian London, he changed his name to Zoffany, took a mistress whom he passed off as his wife, and befriended the greatest actor of the era, David Garrick. For a while he had the favor of Queen Charlotte, but by the 1780s he was cash-strapped and resolved to sail to India to start again yet again. On the voyage back to England he was shipwrecked on the Andamans and, he claimed, ate a sailor to survive. I have no idea why someone would say this if it weren’t true. I have no idea why someone would say it if it were. Start self-mythologizing as a teenager, I suppose, and you never stop. 

    “Indifferent artistic merit” is what the Oxford Companion has to say about Zoffany’s work. I’m not so sure myself, and neither, for that matter, is the Oxford Companion, which gives Zoffany’s Tribuna of the Uffizi pride of place on the front cover. I have an odd relationship with this image, having never seen the original at Windsor Castle but glancing at the little ink reproduction most days for the last five or six years. Ordinarily, I would not write about a work of art I had never seen with my own eyes, but given that the Queen instructed Zoffany to travel to Florence in order to paint the most important room in the city’s most important museum, cramming his canvas full of tiny reproductions of works she had never seen with her own eyes, it seems forgivable somehow. 

    Zoffany spent six years doing the cramming, and it shows: every kind of image and sculpture can be found floating somewhere in this swollen gut of a painting. It is true that museums before the twentieth century displayed art frame-to-frame, but even by this standard the Uffizi that we are shown by Zoffany is a mess — compare it with a calmer gallery interior like Samuel Morse’s Gallery of the Louvre, completed a few decades later, and you see how far Zoffany is overstepping the curatorial rules of his own era, not just ours. More is more, and still not enough. Works that ordinarily hung elsewhere in the Uffizi were rushed into the Tribuna for the Queen’s delight. So were works that ordinarily hung in other museums. There are so many things here that some cannot fit on the walls and need to be carried or dumped on the floor: Rubens’ Consequences of War, a sculpture of baby Hercules, Titian’s Venus of Urbino, an Etruscan urn, a Holbein, a Correggio, a few Raphaels . . . 

    Charlotte hated it. She had expected a Tribuna overflowing with paintings and sculptures; instead she got one overflowing with paintings, sculptures, and people. These are tourists, in the original sense of the word: educated and wealthy men hitting the last stop on their Grand Tour of the European continent. We can imagine the Queen’s anger at this unsolicited reminder that mere gentlemen had been to the Uffizi and she had to be content with copies. This is a particularly bovine bunch, too — “a flock of traveling boys,” Horace Walpole thought, “and one does not know nor care whom.” Look how they swarm and gawk, sticking their noses and fingers where neither belong. The painter Thomas Patch pokes Titian’s Venus but doesn’t look at it — he is too taken with the homoerotic The Two Wrestlers. Zoffany himself makes an appearance on the painting’s left: he is the one grinning too widely as he holds up Raphael’s Niccolini-Cowper Madonna, to the fascination of everyone around him. Even Pietro Bastianelli, the Uffizi’s curator, seems unenlightened by his daily exposure to the sublime: he’s got his greasy digits on the Titian, too.

    But to look at The Tribuna of the Uffizi a quarter of a millennium later is to breathe easier and, dare I say, to believe in art slightly more. If he was anything at all, Zoffany was a skilled copyist. His miniature Rubens preserves the meaty writhe of the original, and, adjusting for superficial things like clothes, he more or less copied the feel of any big museum in the twenty-first century, too. There are few problems with the contemporary art world that were not also problems in the 1770s. Hopeless commercialism? Zoffany added paintings to The Tribuna of the Uffizi because his friend was trying to sell them to George III. Congestion? You can barely scratch your cheek in this room. Distractable tourists? Mr. Patch cannot keep his eyes on a Titian. The cheapening of artworks into lifestyle props? The only reason most of these posh yahoos are here is because the Grand Tour is an experience that they are supposed to collect — a pretty accessory for a life of foxhunting and gout. Art only matters because someone looks. The more renowned the art, the greater the number of clueless lookers, joyless collectors, donors in search of tax breaks, and steroidal museums. It’s the muck that clings to most worthwhile culture. It is not going anywhere, and neither is art. 

    We are all in the muck, to slightly paraphrase a writer who was serious about pleasure, but some of us are looking at the stars. One of the few figures in The Tribuna of the Uffizi who shows some glimmer of life in his eyes is a young man toward the painting’s left side whom the professors identify as the painter and politician Charles Loraine Smith. He is one of the few people in the scene who is seated, which would seem to mean he intends to be there a while, and he is the only one who is making something — sketching on a little pad — instead of gulping things down. Not his face but his whole body points at an ancient sculpture of Cupid and Psyche, and one imagines him taut with his own fervid staring. A cloud of contagious distractions hangs over his right shoulder, but somehow he is immune: Zoffany and his friends could walk away, but Charles would keep sketching. A grenade could go off and he wouldn’t wince. But the bigger miracle is the boy hunched behind Charles: given the choice between the loud, louche circle and the artist quietly sketching, the boy chooses the artist. He squats, trying to feel whatever pleasure keeps Charles seated — the grenade might not startle him, either. Under the right conditions, attention can be more contagious than distraction.

    The Nonsense of ‘Neoliberalism’

    A Conceptual Trash Heap

    Toni Morrison was wrong when she intoned that language is violence. But let’s give her this: the reckless use of words can do violence, idiomatically speaking, to clear thinking and therefore to political analysis. Slinging about words whose meaning is muddled, misleading, or tendentious — or whose usage is meant to oversimplify or to inflame — makes it impossible to think rationally, coherently, and productively.

    It is a tall order in this age of slogans and shibboleths to select one word to expunge from our political vocabulary, but if asked to do so I would nominate “neoliberalism.” A coinage of the late 1970s and early 1980s, the term remained fairly limited in its use for two decades, gaining currency at first in academic circles and then exploding in popularity after the financial crash in 2008 and Bernie Sanders’ rise to celebrity. Then, just when it was fading from overexposure, it surged back into fashion. Critics, scholars, consultants, and commentators now finger neoliberalism as the reason for practically all our political problems, especially the Democrats’ failure to keep the presidency out of the hands of Donald Trump. 

    “What Trump is attacking is neoliberalism. Economic neoliberalism underpins the past seventy years of Western economic and cultural order,” declares America’s most overrated senator, Chris Murphy, who alleges that neoliberalism has bequeathed a “very real epidemic of American unhappiness.” (Struggling with his cognitive dissonance over a concept he doesn’t quite understand, Murphy added: “Though it contains the word liberal, neoliberalism was devised by libertarian-conservative economists.”) Ro Khanna, another ambitious, out-of-his-depth operator, calls for “the rejection of neoliberalism. For forty years, we made a mistake. Frankly, it was both parties.” (Forty? Wasn’t it seventy? But what are a few decades among friends?) The Hewlett Foundation, which bankrolls efforts to replace neoliberalism with something else — the left hates billionaires except when they fund the left — defines neoliberalism as “free-market fundamentalism” and “the free-market, anti-government, growth-at-all-costs approach to economic and social policy.” Search the horizonless steppes of the internet and you will find countless pundits, politicians, and even ostensibly knowledgeable policymakers invoking the bogeyman of neoliberalism to explain where the Democrats and America went wrong. 

    The promiscuous use of the word “neoliberalism” has plagued our discourse since well before Trump. Over the years several intrepid explicators have pointed up its semiotic bankruptcy. Back in 2009, in an academic article titled, “Neoliberalism: From New Liberal Philosophy to Anti-Liberal Slogan,” the political scientists Taylor Boas and Jordan Gans-Morse concluded that “neoliberalism has become a conceptual trash heap capable of accommodating multiple distasteful phenomena without much argument as to whether one or the other component really belongs.” A decade later, the fine intellectual historian Daniel Rodgers warned that “the success of ‘neoliberalism’ is a measure of its substantive hollowness” and noted “four distinctly different phenomena” that fly under its banner: an economic theory; a set of economic policies; the capitalist economy itself; and — take a breath — “the hegemonic force of the culture that surrounds and entraps us.” The journalist Jonathan Chait meanwhile traced how “neoliberal” morphed into an off-the-shelf slur used to denigrate regular Democrats. “The ubiquitous epithet is intended to separate its target — liberals — from the values they claim to espouse,” he shrewdly observed. “By relabeling self-identified liberals as ‘neoliberals,’ their critics on the left accuse them of betraying the historic liberal cause.” In his Substack newsletter, Matthew Yglesias continues valiantly to puncture what he calls “anti-neoliberal” thinking. Yet for all these debunkings, the term has only gotten more popular, leaping out of academic tracts and leftist polemics and into the vernacular. 

    As it is used today, “neoliberalism” contains at least three assumptions that its users hope to promulgate but which are, in fact, wrong. The first concerns what historians call periodization: reliance on neoliberalism as a historical framework depends on the flawed premise that in or about 1980, with the election of Ronald Reagan, the American ethos changed. Second, the invocation of neoliberalism incorporates a critique of liberals and Democrats, who, it is insinuated, supinely acquiesced in Reaganism, creating a “Washington consensus” by jettisoning the party’s historic commitment to using government to better people’s lives. Third, the neoliberal mantra implies that the economic policies pursued by Democrats when they had power were an economic, political, and even moral failure. 

    Each of these ideas may contain kernels of truth. But none holds up as an overarching and empirically demonstrable proposition. If we want to understand liberalism and liberal governance over the last half century — and there is no denying that it is now facing a crisis — we should start by euthanizing this unenlightening word. The sooner we clarify our thinking about our recent economic and political history, the more intelligently we can debate what should come next. 

    The Origins of Neoliberalism 

    To understand where “neoliberalism” came from, we must return to the 1970s, when American voters were repudiating liberalism — known ominously in those days as “the L word” — in droves. 

    By the late 1970s, the enormous achievements of Lyndon Johnson’s Great Society had become clear — but it was no less clear that they had failed to stanch the spread of social maladies such as divorce, out-of-wedlock births, drug use, and violent crime. The civil rights movement had secured formal equality for black Americans and invigorated efforts to do likewise for women, gays, and other groups, but liberals suffered when they counseled more intrusive governmental measures to guarantee not only political and legal equality but also economic and social equality. Keynesian policies that had fueled prosperity since World War II proved powerless to combat the beast of stagflation, and the rise of a post-industrial economy — which had shifted away from heavy manufacturing and toward white-collar jobs that demanded a college education for the new hordes of “symbolic analysts” — triggered a long series of painful geographic and professional dislocations. In foreign policy, the Vietnam War tarnished the luster of liberal internationalism, leaving many Americans leery of wielding power abroad and voters wary of trusting the Democrats as a younger generation of leaders slouched toward isolationism. 

    The political wreckage was immense. During the presidency of Richard Nixon — who, though loathed by liberals and already tainted by Watergate, cruised to reelection in 1972 — the Democratic Party hemorrhaged support from key constituencies, including white Southerners, blue-collar workers, Catholics, and the intellectuals soon to be known as neoconservatives. Watergate allowed the Democrats a brief reprieve, but Jimmy Carter’s hapless White House sojourn propelled more voters rightward. In 1980 and 1984, Ronald Reagan twice routed the Democrats, while the Republicans also seized the Senate for the first time since the 1950s. Between 1968 and 1988, Democrats lost every presidential election but one, almost all in landslides. “Unless they recover their partisan energies and intellectual vigor, the Democrats could enter a long historical passage of declining influence and relevance,” warned Lance Morrow of Time magazine in 1980, “becoming the political equivalent of some of the decaying cities of the Northeast, once flourishingly productive, the exuberant places where the modern Democratic Party originated.” 

    Projects arose to ask where the Democratic Party had gone astray. Politicians and analysts drew up new strategies and policies that they hoped could restore confidence in an affirmative if more realistic vision of government’s capacities. These efforts are commonly described as designed to steer the Democratic Party to the political center. Exhibit A is the founding in 1985 of the Democratic Leadership Council, a group led by Southern moderates aiming to win back Reagan Democrats by stressing values such as patriotism, religion, work, discipline, and responsibility. But the call for internal reform did not come only from centrists; it was heard across the center-left spectrum, urged by liberal stalwarts as well as middle-of-the-roaders. In the late 1970s, Edward Kennedy, the liberal lion, took up airline decontrol and criminal sentencing reform, breaking with recently enshrined left-wing orthodoxies. In the 1980s, his aide Paul Kirk, as Democratic Party chairman, implemented a platform emphasizing “traditional values.” Barney Frank, another quintessential liberal, wrote a book called Speaking Frankly urging Democrats to swallow their unease about brandishing their patriotism or condemning criminals. The civil rights hero John Lewis, elected to Congress in 1986, prodded his fellow Georgian Sam Nunn — maybe the most conservative Democrat in the Senate — to run for president in 1988. Lewis also attended DLC events, offering the insurgent group advice on forging biracial coalitions. Notwithstanding its sobriquet as the “Southern White-Boys Caucus,” the DLC included many pragmatic dyed-in-the-wool liberals who, like Lewis, wanted to build a big tent in order to win again — including prominent African Americans such as Tom Bradley, Maynard Jackson, Kurt Schmoke, Andrew Young, Mike Espy, Floyd Flake, Bill Gray, Doug Wilder, and Ron Brown. Refashioning the party’s public philosophy, in other words, was a goal pushed by Democrats of all stripes. 

    This crisis was what gave rise to the impulses that came to be known as “neoliberalism.” Apart from the DLC, the most prominent group of reformers in these years were those who hoisted the neoliberal flag. (The DLC included some neoliberals, such as Al Gore and Dick Gephardt, but the two groups were not identical.) The word itself was invented around 1979 by Charlie Peters, majordomo of the Washington Monthly, a scrappy little policy magazine, and popularized by Peters and Randall Rothenberg, who wrote a defining article and book on the topic. (Peters’ and Phillip Keisling’s A New Road for America: The Neoliberal Movement and Rothenberg’s The Neoliberals: Creating the New American Politics are the ur-texts for understanding the phenomenon.) Even then, the meaning was vague. No hard-and-fast set of doctrines united neoliberals. “There are no meetings, no dues, no constitution,” said Gephardt, a Missouri congressman who was among those tagged with the label. Voting patterns in Congress revealed neoliberals to be no more conservative than other Democrats. 

    Despite the lack of a membership roster, the same people typically appeared in discussions of the movement: officials such as Gephardt, Gore, Gary Hart, Bill Bradley, Paul Tsongas, and Bruce Babbitt; academics such as Lester Thurow, Robert Reich, and Amitai Etzioni; and the journalists trained by Peters at the Washington Monthly, including James Fallows, Nicholas Lemann, and Michael Kinsley. The New Republic, then a weekly magazine at the center of Washington debates, published neoliberal policy proposals alongside critiques of the movement. Of course these people often disagreed about policies, candidates, and even principles. But a few commonalities among the neoliberals could be discerned. 

    For the most part, neoliberals focused not on cultural issues or foreign policy or judicial fights but on economics. Reacting to the crises of the 1970s, they called for policies suited for the emerging post-industrial landscape centered on technology and information. In the 1970s, many on the left had hailed an “age of limits” and called for relinquishing the hope of ever-rising living standards. Neoliberals, without forsaking the goal of economic fairness, reemphasized growth as a cornerstone of their agenda and message. 

    Neoliberals also extolled efficiency. They excoriated bureaucracy, public and private, and allowed themselves to defy their allied interest groups such as government workers, unions, public-interest lawyers, and pro-regulation lobbyists. They favored investments in education and research and development. Many championed what was clunkily called “industrial policy,” or having the government select up-and-coming sectors of the economy for support. Technology captivated them, giving rise to the phrase “Atari Democrats.” They foresaw that high-tech innovation could help maintain America’s global competitiveness. They were far-sighted, too, in acknowledging the tightening interdependence of nations — a condition that spawned the word “globalization,” a close cousin of neoliberalism — and the need to adapt. In the 1970s, in deference to the unions, congressional Democrats had begun discarding liberalism’s traditional commitment to free trade; but most neoliberals, underscoring the folly of protectionism, countered that lowering trade barriers and opening markets would help both the United States and its international partners. 

    The philosophy just described bears scant resemblance to the caricatures proffered by Chris Murphy, Ro Khanna, and their ilk. Contrary to current mythology, the neoliberals were not libertarians, conservatives, free-marketeers, supply-siders, rampant deregulators, Reaganites, Thatcherites, Friedmanites, Hayekians, or enemies of the New Deal or the welfare state. More than other liberals, they saw a role for markets in their new policies, but they rejected the axiom that the market was all wise. “First of all — and most important of all — we are liberals,” Peters explained, noting “large areas” of policy in which neoliberals scarcely differed from other liberals. “We criticize liberalism not to destroy it but to renew it.” Babbitt defended the “welfare state” from the Republicans who would gut it, calling for sustaining “an activist federal government in areas such as environmental matters, health, and entitlements.” Morton Kondracke of The New Republic in 1980 called neoliberalism “an attempt to combine the traditional Democratic compassion for the downtrodden and outcast elements of society with different vehicles than categorical aid programs . . . or new federal bureaucracies.” Neoliberals sometimes derided their liberal forbears: “We are not a bunch of little Hubert Humphreys,” Gary Hart famously railed. But more often they affirmed the values and the principles that had animated twentieth-century liberalism — coupled with a desire to devise new ways to meet the demands of a new economic reality. Far from Reaganites, neoliberals were practical-minded welfare-state anti-Reagan liberals seeking to adjust their means to meet their traditional ends. Neoliberalism was a revision that took place within the liberal tradition. This may be why many of the detractors of neoliberalism on the left and the right are really just old-fashioned enemies of liberalism. 

    So why do so many people misunderstand neoliberalism? Why is it now equated with what we normally call economic conservatism? For that, as we shall see, the fault lies, at least partly, with Michel Foucault. 

    A Little Knowledge Is a Dangerous Thing 

    By the early 1990s, as Bill Clinton emerged as the Democrats’ standard-bearer, the word “neoliberalism” took a backseat to a more capacious label: “New Democrat.” Clinton had not often been listed among the neoliberals and didn’t quite fit the bill. He appears in neither Rothenberg’s nor Peters’ books. In the Democratic primaries in 1992, on economic issues Clinton ran to the left of his closest rival, the card-carrying neoliberal Paul Tsongas, contrasting his own pledge to protect Social Security with Tsongas’ dour fixation with trimming entitlements. Some Clinton aides, such as Robert Reich and Ira Magaziner, were called neoliberals, and as a governor and a presidential candidate Clinton had found promise in neoliberal ideas about growth, high-tech investment, government reform, and globalization. But he balanced his technocratic side with a visceral economic populism and a critique of Reaganomics for catering to corporations and the rich. Clintonism was a synthesis of several strands of liberal reformism, of which neoliberalism was only one. A chairman of the DLC, Clinton stressed the values of community, opportunity, and responsibility. He also captured the loyalty of a diverse mix of other groups: the black community, the nation’s governors, assorted academics and intellectuals. During his presidency, Clinton’s program was described not as neoliberal but as that of a New Democrat or, starting in his second term, as a “Third Way” — a label also used by center-left leaders in Britain and Germany. 

    In short, Clinton’s ascent rendered “neoliberalism” obsolete as a taxonomic category. A different strain of updated liberalism — call it Clintonism — now held sway. Yet just as Washington journalists were retiring “neoliberalism,” it got picked up, by sheer coincidence, by European leftists — people who had no familiarity with the legislation once bandied about by Bradley, Gephardt, Hart, and the others; who were not well-versed in American policy debates about military reform or education reform or “reinventing government”; who hadn’t read the neoliberal books and journals. Some of them probably had not even kept up with the decades-old shift in the meaning of the word “liberalism” itself, which in the nineteenth century had meant an assertion of individual rights, including economic rights, against the state, but in the Progressive Era had also come to encompass a belief in an active governmental role in the economy. Tethering liberalism to its former and now-antiquated meaning, these left-wing European academics thus felt none of Chris Murphy’s addlement in applying a word rooted in liberalism to a non-liberal philosophy. 

    These European academics glommed onto “neoliberalism” to name a school of conservative or right-of-center economic thought that they traced back to the 1930s. It turned out that neoliberalism — or more precisely the French néo-libéralisme — had been fleetingly applied in 1938 to a group of intellectuals who attended a conference in Paris called, charmingly, the “Colloque Walter Lippmann,” which debated the ideas in the American journalist’s book The Good Society. Like Lippmann, the convener of the conference, a French philosopher named Louis Rougier, hoped to develop an “essentially progressive” alternative both to rigid nineteenth-century laissez-faire doctrines and to socialism. To this end, Rougier invited twenty-six thinkers, ranging from the liberal humanist Raymond Aron to the free-market economists Friedrich Hayek and Ludwig von Mises, for a long weekend in Paris in late August. But Rougier’s dreams went unrealized. The discussion in Paris “remained vague and broad,” according to the historian Angus Burgin’s well-researched account, “because of both the relative brevity of the individual contributions and a general sense of uncertainty about whether . . . [to] focus on a reexamination of foundational principles or . . . practical policies.” A follow-up symposium the next year was canceled after Hitler and Stalin invaded Poland. So much for néo-libéralisme

    Enter Foucault, four decades later, who appears to have been the first European to misapply the resurrected term “neoliberal” not simply to the Paris conferees of 1938 but specifically, and inaccurately, to the conservatives in attendance — Hayek, von Mises, and their intellectual allies. Foucault did so in a series of lectures from 1979, published in 2004 as The Birth of Biopolitics, which included an account of the Colloque Walter Lippmann. (Biopolitics describes the — inevitably sinister — workings of political and governmental power upon the body and organic life more generally, as states manage their populations through policies relating to reproduction, sexuality, public health, and the like.) Foucault’s core point was a reasonable one: that unlike the nineteenth-century apostles of pure laissez-faire, who had theorized a weak state, these economists of the 1930s believed that governments had to take an active role in underwriting any market-based system. “The problem of neo-liberalism,” Foucault argued in one of his lectures, “was not how to cut out or contrive a free space in the market within an already given political society, as in the liberalism of Adam Smith and the eighteenth century. The problem of neo-liberalism is rather how the overall exercise of political power can be modeled on the principles of a market economy.” His blunder in choosing the label “neoliberal” — which was just then coming into circulation in the United States with a categorically different and indeed nearly opposite meaning — can be understood when we recall that he was resuscitating a forgotten French term and was surely unaware of neoliberalism’s contemporary American meaning. 

    Foucault’s application of this appellation to twentieth-century free-market economists such as Hayek, Ludwig von Mises, and (later) Milton Friedman was historically ignorant — and triply so. First, Foucault seemed not to have known that, as Burgin tells us, neoliberalism as “a formal designation” for the ideas at the Lippmann Colloquium was “raised and rejected” at the time. Second, the Hayekians in fact did not call themselves neoliberals; those who had briefly flirted with that name were those on the center-left, like Lippmann and Rougier, not those on the right. Finally, for most of the century nobody else called these conservatives neoliberal either. Foucault’s was thus a highly peculiar and misleading usage. Yet just as with some of his other dubious theories, he got away with it. 

    It took time for this weird use of “neoliberalism” to catch on and still longer for it to reach American shores. By the 2000s, books by the eccentric British Marxist geographer David Harvey, the barrister Daniel Stedman Jones, and then Angus Burgin, along with a zillion academic articles and conference papers, had fused the idea of “neoliberalism” to market-based economics — and specifically to a genealogical narrative centered on Hayek, von Mises, Friedman, and their kind that ran from the Lippmann colloquium to the Mont Pelerin Society of the 1940s (a Switzerland-based hub of conservative thought) to the University of Chicago in the 1960s and 1970s. Interestingly, there were a few superficial points of overlap between Washington Monthly neoliberalism and Mont Pelerin pseudo-neoliberalism. As the historian Kevin Schultz remarks in his new book, Why Everyone Hates White Liberals (Including White Liberals): “Both prioritized economic growth. Both hated excessive government intrusion. Both were attempts, in rhetoric at least, to expand individual freedoms. But the Democratic ‘neo-liberals’ were more welcoming to social welfare programs, national allegiance, and government intervention to assist people.” More importantly, the Foucault/Harvey/Stedman Jones/Burgin conception of neoliberalism had no actual historical or intellectual connection to the standard meaning of neoliberalism in American political analysis. That the same word was used for both was a deeply confusing coincidence. 

    Yet perhaps unavoidably, the two were confused, and conflated and commingled. That commingling created a conceptual error that has since warped our discourse. Imagine dusting off the old meaning of “filibuster” — originally from the Dutch word for “freebooter,” used to refer to eighteenth-century pirates in the Caribbean — and concluding that today’s speechifying U.S. senators are all sword-swinging buccaneers. Or merging two meanings of “gay,” so that all happy people are deemed homosexual or all homosexual people are deemed happy. A half-knowledgeable Washington observer could see the absurdity of saddling a genuine neoliberal such as Gary Hart with the views of Milton Friedman. But the Europeans and academics bruiting about the label were not knowledgeable, or even half-knowledgeable, about these matters. And after 2000, the political climate made the merging of the two meanings of “neoliberalism” irresistible to some. Cursory understandings of the concept allowed left-wing critics to brand Obama as a neoliberal because he had bailed out the banks. Clinton’s support for the North American Free Trade Agreement (even though it was negotiated by his predecessors) and his repeal of the Glass-Steagall Act (enacted in 1933 to separate commercial banking from investment banking) were cast as pivotal moments when Democrats surrendered to market forces and set us on a path to where we are now. The misnomer stuck. 

    The leftist academics who tossed about “neoliberalism” almost always used it as a pejorative. As it bled into popular usage and the commingling continued, it became a malaprop cocktail to lob at anyone associated with the post-1980s intellectual ferment among Democrats. This meant that Bill Clinton, Al Gore, Robert Rubin, Larry Summers, Gene Sperling, and the rest of the Clinton economic team were not only branded “neoliberals” but cast as the ideological progeny of Hayek, von Mises, and Friedman. Multisyllabic and Latinate, “neoliberalism” posed as a sophisticated idea harboring profound and subtle analyses, but by the 2010s it had hardened into a blunt rhetorical tool, a form of invective, with which anti-capitalist writers could bash anyone they deemed to have betrayed the cause. This tendency reached its delicious reductio ad absurdum in 2017 in an online contretemps between Cornel West and Ta-Nehisi Coates, with the former charging the latter with possessing a “myopic political neoliberalism” and the latter responding by quitting Twitter. 

    Witting or unwitting, the wrongheaded conflation of neoliberalism with free-market conservatism has continued to flourish. The practice yokes together two groups who are clear ideological enemies. A category that embraces such stark opposites as Ronald Reagan and Bill Clinton, or Hillary Clinton and Donald Trump, can only obfuscate. And, besides, good names already exist for market-friendly economics: free-market conservatism, economic libertarianism, classical liberalism, laissez-faire. But leftists prefer “neoliberal” because it enfolds liberal Democrats in their blunderbuss critique. If to a hammer everything looks like a nail, then to a Marxist every non-Marxist looks like a neoliberal. One suspects, as Jonathan Chait has written, that “the whole trick is to bracket the center-left together with the right as ‘neoliberal,’ and then force progressives to choose between that and socialism.” 

    The Periodization Problem 

    If neoliberalism is a hot mess as a category of economic thought and political classification, it also flops as a tool of historical analysis. Here we come to the question of periodization, the way in which historians segment the past into units. Those who hold up neoliberalism as a tool for organizing recent events insist that it has hegemonically governed our era. But, as we can see from Chris Murphy’s and Ro Khanna’s failure to get their stories straight, there’s no consensus on when this supposed hegemony began or ended, or indeed if it has ended at all. Some place the beginning in the Clinton ’90s. Others point to the late 1970s. Most will say Reagan’s election in 1980 was the turning point, since he came into office preaching lower taxes and smaller government (even though he followed through much less than is supposed), and people talked about a Reagan Revolution as if seismic changes were underway. 

    But was Reagan’s ascendancy really the major break point of the recent past? To organize our recent history around Reagan’s rise — that is, around Reaganomics — enshrines a crude economicist mentality. It subordinates events of manifestly greater historical magnitude to trends in economic thought and policymaking. (There are people who believe that Watergate was a less significant episode in Nixon’s presidency than the end of the gold standard.) Specifically, the 1980s-centric periodization ignores the most transformational decade of the post-World War II era — the 1960s, when dramatic changes occurred in culture, foreign policy, law, and society, though somewhat less so in economics. Before the vogue for the neoliberal periodization came along, historians agreed that the span of the late 1960s and early 1970s was a hinge in American history. That was when the Cold War began to thaw, when Vietnam shattered belief in American virtue, when old manners and morals were overturned, when cultural backlash politics scuttled dreams of expanding the Great Society, when so-called hard hats beat up antiwar protesters, when lifelong Democrats gave Nixon his landslide, when the liberal vision fell on hard times. Debates today about the Democrats’ electoral struggles to recapture the working class should recall that those struggles, too, date to the early 1970s and not to the later “neoliberal” period. 

    Even by strictly economicist measures, the historical focus on Reagan instead of Nixon fails key explanatory tests. Today’s critics blame neoliberal policies for “hollowing out” manufacturing communities, sending onetime Democrats into the Republican column. But manufacturing began collapsing long before Reagan. The steel and auto industries faced competition from Japan and West Germany in the 1960s. By the 1970s, magazine stories, think-tank studies, and congressional hearings proliferated about plant closures and job losses in Rust Belt cities. When neoliberals came along in the 1980s, they were reacting to manufacturing losses, not driving them. Neoliberalism’s detractors have their chronology backwards. 

    The Reagan–Bush years were not the start of an historical era but the end of one. With Bill Clinton’s election in 1992, a long stretch of Republican dominance ended. Presidential politics became competitive again. Since 1992, Democrats have lost the popular vote just twice. Divided government has reigned, with control of the White House and Congress seesawing between the parties. Our talk of red states and blue states and polarization dates to the year 2000 and the knife’s-edge contest between Al Gore and George W. Bush. 

    The 1990s amounted to a break, too, in America’s economic fortunes. The wage stagnation now blamed on neoliberalism actually occurred in the 1970s and 1980s, not in the 1990s. The Clinton years sparked a run of higher productivity and wage growth, along with stiffer taxes on the rich, reductions in poverty, and growth that has outpaced Europe’s. Whatever was happening in the 1990s, it marked a sharp reversal from the doldrums of the 1970s and the uneven recovery of the 1980s — historical shifts that the neoliberal periodization does not take the trouble to accommodate. 

    The Myth of the Washington Consensus 

    Once we recognize that the 1990s constituted a departure from — much more than a continuation of — the 1980s, more problems with neoliberalism as an operating concept emerge. Related to the claim that no real economic policy differences have separated the two parties is the corollary that Democrats guzzled the Reagan Kool-Aid, joining in a “Washington consensus” by ditching liberalism’s commitments to the welfare state, progressive taxation, regulation, and helping blue-collar workers. 

    This is more nonsense. The last quarter of a century has been defined not by consensus but, famously, by polarization. Americans sorted into red and blue camps, telling one another that each election was the most important of our lives, with each contest fought as if the entirety of the republic’s fate hung in the balance. In part, these stark and inflamed partisan divisions have been about sociocultural issues such as abortion, gay rights, racial progress, and immigration, as well as about political-legal questions such as civil liberties, civil rights, and the scope of presidential power. But they have also been about economics. Since Clinton’s presidency, knock-down, drag-out fights have occurred over core differences in fiscal and regulatory policy. The Democrats press for progressive taxation, increased social provision, and restraints on business; the Republicans seek to cut taxes, domestic spending, and restrictive rules. Clinton’s first major action as president was to raise taxes on the rich. Bush’s was to cut them. Obama then ended Bush’s tax cuts. Trump passed new ones, and then passed them again. Equally bright lines have separated the parties over health care, Social Security, and the whole litany of kitchen-table issues. Only on one major issue — trade — have the parties’ leaders been relatively united. (More on that below.) Yet the enveloping partisan rancor of our times is seldom noted by tellers of the neoliberalism tale, since they have no way to account for it in their fantasy of a seamless elite bipartisan comity. 

    To see how badly some people misremember even recent history, consider Clinton’s campaign in 1992. Jennifer Harris, who served in the Biden White House and now works for the Hewlett Foundation, recently wrote in Foreign Affairs that Clinton “won election in 1992 in part by stressing his adherence to Reagan’s free-market dictums.” Come again? The opposite is the case. Here is Clinton debating George Bush before seventy million viewers in 1992: “We’ve had twelve years of trickle-down economics. We’ve gone from first to twelfth in the world in wages. We’ve had four years where we’ve produced no private-sector jobs. Most people are working harder for less money than they were making ten years ago. It is because we are in the grip of a failed economic theory.” Clinton’s economic plan, published as Putting People First, excoriated Reaganism, promising instead a bottom-up path to growth including higher taxes on the rich, universal health care, and investment in transportation and communication infrastructure — which became key pieces of his blueprint for governing. 

    Neoliberalism is said nowadays to denote the rejection of New Deal economics, but neither under Clinton nor in the years after did the Democratic Party allow the dismantling of the postwar liberal tentpoles of a mixed economy, progressive taxation, robust regulation, and a welfare state. Consider the case of regulation. Contrary to the neoliberal mythology, Democrats since Clinton have reliably lined up against the Republicans’ anti-government agenda. To be sure, Clinton did loosen some constraints on business, as in repealing Glass-Steagall. His “Reinventing Government” initiative trimmed other requirements, too — not to allow business a free hand but to prune bureaucracy so that the public would again trust the government to be efficient and responsive. (It is hard to come up with any regulations phased out by the Reinventing Government project that anyone misses today.) Overall, however, the pattern of the past three decades shows Democrats fairly consistently promoting environmental protection, workplace safety, civil rights safeguards, public health, and financial oversight. In 2010 Obama gave us the Dodd-Frank legislation, which imposed tougher capital and oversight requirements on big banks and created the Consumer Financial Protection Bureau. 

    Count the pages in the Federal Register, which lists new governmental rules. Under Ronald Reagan, the count fell from 87,000 to 53,000. Under Clinton it climbed from 63,000 to 77,000, and under Obama from 80,000 to 98,000. “It’s pretty clearly true,” Matthew Yglesias observes, “that the overall scope of regulation is larger in 2024 than it was in 1974.” Think about it: if the 1990s and 2000s had been such an orgy of slash-and-burn, how could Trump’s first term have witnessed such a barrage of headlines about the trashing of vital protections? “E.P.A. to Lift Obama-Era Controls on Methane, a Potent Greenhouse Gas.” “Consumer Bureau Scraps Restrictions on Payday Loans.” “How the White House Rolled Back Financial Regulations.” “Trump Says His Regulatory Rollback Already Is the ‘Most Far-Reaching.’” Whose rules do they think Trump was undoing? 

    One can tick off the different policy areas. Fiscal policy? Democrats in the 1990s raised the minimum wage, boosted taxes on the rich, and expanded the Earned Income Tax Credit — all over Republican objections. Social provision? As he had in his primary race against Tsongas, Clinton continually prioritized the protection of Social Security: “Save Social Security first,” he vowed in his State of the Union address in 1998, revealing his intention for his newfound budget surpluses. He bested Newt Gingrich and Robert Dole in budget battles mainly by defending Medicare and Medicaid. Clinton and Obama both made universal health care a top priority — Clinton unsuccessfully, Obama triumphantly. Investment in infrastructure, science, and technology? Clinton poured money into building the internet and mapping the human genome; Obama’s stimulus package in 2009 was so sweeping that the journalist Michael Grunwald wrote a book about it called The New New Deal. Labor? The Clinton administration enacted the Family and Medical Leave Act, fought right-to-work laws, appointed union allies to the National Labor Relations Board, and curtailed sweatshop labor. Obama backed a controversial bill to let workers unionize without a secret ballot. In none of these cases did Democrats receive much Republican help. Partisan division — not consensus — was the Washington norm. 

    Believers in the fiction of neoliberalism-as-Reaganism sometimes think they have a smoking gun in Clinton’s statement in 1996 that “the era of big government is over.” But never has a sentence been more grossly distorted. Rarely quoted is the next line, a rebuke to the laissez-faire ideologues: “But we cannot go back to the time when our citizens were left to fend for themselves.” Clinton’s original draft had contained the punchier “But we can’t go back to ‘every man for himself.’” That formulation, however, was deemed sexist and rewritten, and the graceless new iteration got dropped from headlines and soundbites. In any case, the snippet was never meant as a death knell for government’s role in helping citizens; it was an acknowledgment that ambitious Great Society–style projects such as Clinton’s failed health-care initiative were, given the congressional log-jams and insuperable deficits, unlikely to be forthcoming. As John Lewis said at the time, the “era” of big government might be over — the climate of opinion that had birthed programs like Medicare in the 1960s was now in the past — but the “role” of big government was not going to change. 

    In each of the policy realms noted above, Republicans firmly opposed the Democrats’ agenda and vice versa: it was a Washington dissensus. In one realm, though, bipartisan majorities did exist: trade. And when you peer closely at the charges of Democratic perfidy, they usually boil down to the fact that Clinton, Obama, and other party leaders backed NAFTA in 1993, and permanent most-favored-nation status for China in 2000, and the Trans-Pacific Partnership in 2016. In these cases, a hefty majority of Republicans and a sizable minority of Democrats came together in favor of a freer trade regime. 

    A few complicating points bear mention. First, there is near-unanimity among economists about the benefits of free trade, just as there is among public health officials on the danger of lead exposure and among education researchers on the worth of early-childhood schooling. Although certain constituencies have over the decades called for tariffs, leading to pitched political fights, the recent trade deals all had strong expert justification and scholarly support. This reality vitiates the charge that Democrats were cravenly acquiescing in Republican dogma. If conservatives today were to stop questioning the danger of a warming planet, would they be capitulating to a Democratic ideology? Or would they simply be grounding their policymaking in an accurate, objectively established set of facts? We should at least entertain the idea that the Democrats backed free trade because it was good policy. The alternative is to understand policy and politics only cynically. 

    Relatedly — and here the periodization problem again rears its head — the neoliberal era is alleged to have begun in the 1980s or 1990s. But support for free trade had been the standard liberal position since the nineteenth century. One of Woodrow Wilson’s first steps as president in 1913 was to sign the Underwood Act reducing tariffs. Franklin Delano Roosevelt produced the Reciprocal Trade Act in 1934 and the Bretton Woods Agreement in 1944. Harry Truman signed the General Agreement on Tariffs and Trade in 1947. John F. Kennedy enacted the Trade Expansion Act of 1962. If any position represents an abandonment of liberal principles, it is the protectionism that some Democrats began adopting under pressure from organized labor in the 1970s. Thus, the Democrats’ support for trade, too, turns out to be a flimsy peg on which to hang the weighty conceptual behemoth of “neoliberalism.” 

    The Fiction of Liberal Failure 

    Neoliberalism, as we have seen, can no longer be said to accurately describe a coherent body of economic thought. Nor does it designate a clear-cut political affiliation. Nor does the notion of an “age of neoliberalism” linked to Reagan’s rise survive scrutiny. And the critique implicit in today’s pejorative use of “neoliberalism” — that contemporary Democrats junked their values for pro-market cheerleading — also unravels once we review the countless policy conflicts that have riven the two parties in our fevered times. 

    But let us allow that, even if it has been grossly overstated, the Democrats of the 1990s and 2000s did tilt their party in a somewhat more pro-market direction. That’s true enough. What of the criticism that their modifications to their party’s governing philosophy wrought horrendous economic damage, especially to society’s lower strata? Even if the Clinton–Obama agenda wasn’t the brainchild of Milton Friedman, even if it does not deserve the opprobrious epithet “neoliberalism,” didn’t it nonetheless buoy the rich and oppress the poor? 

    Here, too, history undermines the anti-neoliberal arguments. The Clinton and Obama presidencies boasted some of the strongest economic records of recent times. The Clinton numbers are so phenomenal, so jaw-droppingly enviable, that they beggar belief. Clinton presided over the longest continuous peacetime economic expansion in history, with growth averaging 4 percent annually. Unemployment fell from 7.3 to 4 percent and inflation stayed low. The stock market boomed, but prosperity also extended to the lowest rungs of the ladder: poverty fell by nearly one quarter, from 15.1 to 11.3 percent, and the two lowest-income quintiles saw their earnings increase nearly 17 percent. Real median household income grew by 13.9 percent. Blacks and Hispanics made especially strong gains. All of this was achieved as once-crippling budget deficits turned into record surpluses and Americans’ trust in government spiked for the first time since the 1960s. As Hillary Clinton later said when her husband’s record came under fire, “I always wonder what part of the 1990s they didn’t like: the peace or the prosperity?” 

    These policies succeeded politically, too. Clinton wooed many Reagan Democrats back into the fold. In both of his races, he drew more than 40 percent of the working-class white vote — a quantum leap over Carter, Mondale, and Dukakis, and a high-water mark that no subsequent Democratic presidential nominee would match. These voters had been drifting from the Democratic column before Clinton and would drift away again afterward, but Clintonomics was not the reason for their defection. 

    Obama’s economic legacy, though not as strong, also holds up well. His presidency kicked off an expansion that, while less robust than Clinton’s, lasted longer, extending into Trump’s first presidency until the pandemic hit in 2020. Taking office just after the 2008 financial crash, Obama, in Rooseveltian fashion, followed through on the bank rescue and the auto industry rescue. Those efforts, including Obama’s huge stimulus bill, constitute, along with the Affordable Care Act, his most important achievements. Whatever name we affix to his economic policy, it, too, worked. Unemployment fell; inflation remained modest; median household income rose by 5.3 percent. Politically, Obama’s performance was also a bit weaker than Clinton’s; his reluctance in the early days to rhetorically balance the bailouts with a dose of Clinton-style left-populism gave an emergent right-wing proto-Trump movement, the Tea Party, room to grow. But Obama’s economy still performed well enough to win him reelection in 2012, thanks to a decent showing among the white working class, especially in states like Michigan and Ohio where the auto bailout saved jobs. To be sure, the slow growth of Obama’s second term hurt Hillary Clinton in 2016, a year when the economy merely inched along. But his was hardly an economic program geared toward the superrich. 

    Given this mostly admirable economic record, can it really be said that the last thirty-five years have amounted to failure — especially on the Democrats’ part? It is worth addressing two economic failures of the recent era that have been especially salient. In both cases we can fairly criticize Democratic governance, although in neither case more so than Republican governance. First is the fallout from the trade regime of the twenty-first century. While globalization benefited Americans overall, fueling growth and lowering consumer costs, the downsides hit hardest in the de-industrializing regions. The stories of constricted job opportunities, impoverished civic life, family dysfunction, and drug and alcohol abuse in these communities are legion and heartbreaking. In post-industrial cities and towns, rural areas, low-income suburbs, and other lagging regions from Appalachia to swaths of the South, the toll has been severe. 

    The second problem is also one of inequality, but on a broader societal level. We have all seen the statistics that portray the yawning gaps between the top 1 percent and everyone else, the growing chasms between CEO pay and the going hourly wage. Inequality has also deepened a sense of deprivation among the working and middle classes. As important, it has meant that a large segment of Americans has been prosperous enough to shoulder the high costs of child care, health care, housing, college, and retirement, but that a much bigger group has watched those elements of the American Dream recede from their grasp. 

    These hardships must not be minimized. They pose urgent challenges — of politics and policy, of solidarity and sympathy — for both parties. The Democrats as well as the Republicans failed to do enough to address the privations and the struggles that, while not new to our times, continued to afflict the de-industrializing regions into the 2000s. Both parties also failed to deliver effective solutions to the skyrocketing costs of big-ticket life events such as health care and housing. But the critics of “neoliberalism” imply that remedies were readily at hand for Clinton and Obama and other Democratic leaders, who turned away from them. Yet no such obvious remedies existed (or exist today). For one thing, these inequality trends stem mainly from factors other than public policy. The manufacturing decline long preceded the controversial trade deals, and its recent acceleration derives more from automation, technological advances, and turbocharged worker productivity than from Chinese imports. Drawing an analogy, the Harvard economist Robert Lawrence notes that the number of agricultural jobs in the United States has plummeted not because of trade but because of a transformation in farming technology. 

    Wealth inequality, similarly, has widened not primarily owing to any public policy decisions but owing to the huge spikes in stock market and real estate valuations. Democrats, left, liberal, or centrist, have generally wanted to do more to address these serious inequities, but since the Reagan years we have been hobbled by divided government. We have had no period like the 1930s or the 1960s when one party could work its will; the congressional majorities that Clinton, Obama, and Biden all briefly enjoyed were never large enough to overcome the threat of the filibuster. (Alas, we may now be embarking on such a period, led by the other side.) Democrats may pass redistributionist taxes or expand spending programs, but when the Republicans return they get blunted or reversed. Most of the time, it is simply impossible to pass a large-scale social program in the first place. That is why we say that the era of big government is over. It is a description of reality, not a wish. 

    Of the two parties, the Democrats are the ones who have consistently favored measures to mitigate the burgeoning inequality. Their efforts, unfortunately, have not helped them much politically; ironically, the hard-hit communities in places such as Arkansas, West Virginia, and rural Wisconsin have gravitated toward the GOP — worse, toward the MAGA GOP. But these voters are not moving rightward because the Republicans are delivering higher wages or more bountiful health insurance. There are many other reasons for this realignment, rooted in values, culture, identity, and style. Politics never consists entirely in economics. The Democrats’ noble words about economic fairness will not win them elections in the absence of creative and effective new ideas. (Kamala Harris’ campaign proposal of $25,000 handouts for down payments isn’t going to cut it.) But if working-class and non-college-educated voters have been abandoning the Democrats because of their economic record, they are not going to find the Republicans’ solutions any more congenial. 

    The acute suffering in these afflicted communities demands our attention. It also creates rhetorical space for the continued bashing of Democratic policies. It has provided justification, for example, for the Hewlett Foundation to pour millions into a project groping for a “post-neoliberal” vision that it hopes will amount to a reverse DLC for the 2020s. The Hewlett project — regrettably based on the sort of murky understanding of “neoliberalism” that pervades our discourse — was expected to bear fruit under Biden. Yet despite a lot of hyperventilating in early 2021 about a “transformational” Biden presidency and absurd comparisons of his decidedly non-radical program to the New Deal, Biden governed mostly in the same center-left mode as Clinton and Obama, albeit less effectively. 

    Biden touted a purportedly new economic vision, saying he would build the economy “from the middle out.” He failed to acknowledge that Obama had used and popularized the exact same phrase, and that Clinton had propounded the same basic idea. Biden also hyped the value of his child tax credit, which was more generous than past iterations, but which also had first been implemented by Clinton and then expanded by succeeding presidents. Biden talked up antitrust actions against the tech giants, but this, too, was something Clinton had pioneered with a lawsuit against Microsoft, the Goliath of its day. Apart from keeping some of Trump’s tariffs, Biden’s main claim to policy innovation was to jack up the domestic outlays in his spending bills to dwarf even Obama’s $800 billion Recovery Act of 2009 — something that he could do because we were stumbling out of the pandemic. Unfortunately, just the year before, Trump had signed the gargantuan CARES Act, and, on top of that, Biden’s two huge spending bills combined with pandemic-related shortages to produce inflation rates higher than they had been since 1981 — one of the main reasons that Harris lost the election in 2024. If his approach was designed to improve on the “neoliberalism” of his Democratic predecessors, it failed. 

    The historian Tara Zahra has written about the backlash against progressivism and globalization in the aftermath of World War I. Where goods and people had moved freely across borders, restrictions now limited exchange. Governments framed migration not as an opportunity but as a threat to national strength and social cohesion. Nations pulled back from international bodies and treaties. Democracies and dictatorships alike preached self-reliance. This inward turn promised order amid chaos, rootedness in place and tradition, and protection from the dislocations of global capitalism. Fascism, communism, and antisemitism flourished. It was not the age of Trump, but it was the age of Ford, Lindbergh, Coughlin, and Mussolini. The worst war in history followed. 

    Now, too, an anti-globalization backlash is in flower. Liberal democracy is regularly derided. Elites are demonized. Strongmen are admired. Pluralism is regarded as weakness. Trade is blamed for poverty. Borders are walled and fortified. Illegal immigrants are targeted. On the left, voters flock to fantastic promises of free rent, free buses, and free food. Or they hear prophecies of a future liberated from work, so that we can all enjoy a government-provided universal basic income. On the right, Trump recklessly plays around with tariffs, wreaks economic havoc, and impulsively decimates government agencies. These are only a few of the latest proposed replacements for what has come to be disparaged as neoliberalism. If we persist now in trashing the many things that liberals, whatever their failings and flaws, have done rightly and reasonably well, we will breathe life into the poisonous ideologies that liberalism once rose up to defeat.

    Other Canons, Other Wars

    In the summer of 1981, the novelist Italo Calvino published an article on the great books in the Roman weekly news magazine L’Espresso. “Why Read the Classics?” is classic Calvino: playful, charming, erudite, skeptical, humane. It consists of fourteen “suggested definitions” of a classic that deliberately contradict each other. Per definition one, the classics are books you are always rereading, even if you are discovering them for the first time, per definition five; or they are books you have yet to read because you are still waiting for the opportune conditions to enjoy them, per definition two. Classics are pre-selected for us by the group: “they come to us bearing the aura of previous interpretations, and trailing behind them the traces they have left in the culture” (definition seven), they generate “a pulviscular cloud of critical discourse” (definition eight), and are often known through “hearsay” before they are known by experience (definition nine). But they are also chosen by the individual reader for personal reasons: “‘your’ classic is a book to which you cannot remain indifferent” (definition eleven). Ancient or modern, a classic is a book that “relegates the noise of the present to a background hum” (definition thirteen) and at the same time one that “persists as background noise even when a present that is totally incompatible with it holds sway” (definition fourteen). 

    The implication being: a classic is impossible to define. Rather, it is a designation relative to an individual reader’s position in a particular culture at a particular moment in history. In his scholium to definition fourteen, Calvino gives a reason for this. The proliferation of books in “all modern literatures and cultures” has led to “the dissolution of the library,” such as the one inherited by Giacomo Leopardi, the reclusive nineteenth-century poet and philosopher who was one of the last people who could plausibly confuse his thorough education in European literature, philosophy, history, and science with the totality of knowledge. The “eclecticism” characteristic of late twentieth-century culture is the result of its inescapable awareness of the contemporary, on the one hand, and the global, on the other. Just as the books of the past and the present are indispensable to understanding each other, Calvino told the readers of L’Espresso, the classics of his language and culture, such as Leopardi’s Canti, “are indispensable to us Italians in order to compare them with foreign classics, and foreign classics are equally indispensable so that we can measure them against Italian classics.” That we will “never be able to draw up a catalogue of classic works to suit our own times” was not a cause for worry, in his view. He proposed that each of us replace the catalogue or list model of the great books with “our own ideal library” consisting of works that have been meaningful to us and those that have been meaningful to others, making sure to leave “a section of empty spaces for surprises and chance discoveries” as we accumulate new experiences over the course of a lifelong relationship with the written word. 

    The following winter, a rather less cheerful assessment of this state of affairs appeared in the pages of National Review. In “Our Listless Universities,” Allan Bloom diagnosed “an easygoing American nihilism” among students at the country’s top schools. Already “socialized” as historicists and cultural relativists, incoming freshmen viewed “the comprehensive truth about man” as at best “opinion,” at worst “prejudice,” and in any case “unavailable” to knowledge — and nothing about their four years at the university was likely to disabuse them of this “dogma.” Encouraged by their professors, according to Bloom, students in the humanities were unwilling to acknowledge that “one culture is superior to another,” that the “old books” of the Western canon were any “better than any others” being produced in the present, let alone ones that might “contain the truth.” As a result, classics such as the Bible and Plutarch — to use his examples — no longer made up the “furniture” of the “souls” who were bypassing the liberal arts altogether for degrees in the hard sciences, where at least the aspiration to truth-finding was integral to the program of study, and the professional schools, where at least there were material rewards to be had upon graduation. In the name of an “equality of values,” Bloom concluded, students had lost the ability to discriminate in their moral and aesthetic judgments; in the name of “openness,” they had become closed-minded. The only remedy — a sustained encounter with the great books — was the one that was being foreclosed by the usual suspects: structuralists, deconstructionists, Marxist humanists, and those professors who would introduce course requirements in non-Western civilizations and cultures. 

    Although Bloom shared Matthew Arnold’s view that culture is “the best which has been thought and said in the world,” his denunciation of relativism is less Arnoldian in spirit than Calvino’s endorsement of it. The apocalyptic tone of Bloom’s invective causes him to make absurd claims, some of which, like his animus towards rock music, are comically square, while others, such as his claim that among his students “it is almost respectable to think and even do the deeds of Oedipus,” cross the line into hysteria. The special contempt he reserves for feminists — whose demands for equality in the workplace, the domicile, and the culture he holds responsible for the destruction of everything from the family to eroticism to literature — is downright sinister. 

    On the last point, Bloom has this to say: “In the absence (temporary, of course) of a literature produced by feminism to rival the literature of Sophocles, Shakespeare, Racine, and Stendhal, students are without literary inspiration.” It is neither here nor there, but off the top of my head I can think of dozens of female writers who are more deserving of our attention today than Racine, starting with his contemporary Madame de La Fayette. Where the canonical status of Stendhal (and, by extension, the force of that parenthetical) is concerned, I would just like to add that, as Calvino points out, when the author of Le Rouge et le Noir was still alive, he was dismissed by none other than Leopardi as the sort of faddish litterateur, admired by his sister, whose work would never stand the test of time. 

    Yet what “Our Listless Universities” lacked in Arnoldian “sweetness and light,” it made up for in popular appeal. Encouraged by his friend Saul Bellow, Bloom expanded the essay into The Closing of the American Mind, which became a surprise bestseller when it was published in 1987. The ensuing “Canon Wars,” which pitted conservative defenders of “dead white men” against “multiculturalists, feminists, and postmodernists,” were misnamed: they were more like a theater or a front in a far broader political conflict. They helped to establish a pattern whereby the intellectual habits, political views, and sexual mores of eighteen- to twenty-two-year-olds at a handful of elite universities were opportunistically turned into full-blown moral panics outside them by conservative activists, whose concern about the curricular “corruption of the youth” has proved less sincere than their desire to destroy the institutional independence of the university, a four-decade-long siege that now appears to be in its final stages. The Closing of the American Mind was nothing less than “the opening shot of the culture wars,” in the words of Camille Paglia, who meant it as a compliment. 

    For better or worse, the debate about the great books was my introduction to American intellectual life: well-thumbed copies of bestsellers such as Harold Bloom’s The Western Canon, from 1994, and David Denby’s Great Books, from 1996, could be found among the precocious-naïve collection on my high-school self’s bookshelves. (The Closing of the American Mind and Paglia’s Sexual Personae I read only later — at the insistence of my father and an ex-girlfriend, respectively.) The debate was still raging when I enrolled at Columbia in the fall of 2001 to take Literature Humanities and Contemporary Civilization, the mandatory survey courses in Western literature and philosophy that formed the core of what the university calls its Core Curriculum. During my senior year, I was one of the student representatives to the Committee on the Core Curriculum, a position that had been created in the aftermath of the campus occupations of 1968, to give students a seat at the table of university governance, along with faculty and administrators. I soon came to understand that the representation was merely symbolic and the governance was entirely nominal. The few meetings of the committee that I attended took place in one of the administrative offices in Low Library, the stately dome that is the architectural centerpiece of the upper Manhattan campus. They were largely taken up, I recall, by the same activity that was always taking place among undergraduates there: arguing about which books did or did not belong on the curriculum. 

    This is a cultural habit that is neither original nor exclusive to the West; it is simply the byproduct of any educational system that is based on a finite set of books. Such an education will be one that necessarily includes disagreement about which ones are selected, why they were selected, and what the value of reading them is to individuals as they are, to society as it is, and to both as we might prefer them to be. Definitively resolving these disagreements cannot be the aim of education, since to do so would end the debate and thus the education itself — in other words, the disagreeing is in no small part where the educating happens. Not long ago, in his review of Rescuing Socrates, Roosevelt Montás’ memoir of his time as the director of the Core Curriculum, the poet John Michael Colón concluded that the Canon Wars were a missed opportunity. For Colón, the way the debate about the great books was framed by its defenders and its critics alike presented a “false choice between two impossible options”: to treat “as the world’s sole inheritance traditions whose claim to universalism we know is false, or to live . . . without any deep connection to the past that created us.” The way out of this impasse, he wrote, was not to throw out the baby of canonicity with the bathwater of Western chauvinism, but to create a canon that was genuinely global. To the claim that a particular set of books ought to be considered canonical because it, rather than some other set, is the best which has been thought or said in the world, the first question a well-educated person ought to ask is: how do you know? 

    In 1754, when Columbia University was in its first year of existence as King’s College in the Province of New York, a man named Wu Jingzi died in Yangzhou. Born to a prosperous family of late Ming and early Qing officials from Anhui province, about three hundred miles inland from Shanghai, Wu seemed to have a promising future ahead of him when he passed the preliminary civil service exam at the age of twenty-two. But money burned a hole in Wu’s pocket: he gave it away to anyone who asked. He was also a bit of a bon vivant, spending his time in tea houses, taverns, and brothels. Subsequent examination attempts ended in failure. In his early thirties, he moved his small family to Nanjing, where he eked out what would later become known as a bohemian existence, surrounding himself with a circle of writers, philosophers, and actors. 

    In the culturally vibrant “southern capital” of the Empire, Wu wrote poetry and published a now-lost commentary on the Book of Songs, one of the Five Classics, which, along with the Four Books, comprise the core of the Confucian canon. He built enough of a reputation as an independent scholar to be personally invited to Beijing to sit for a special round of exams, but for reasons that are unclear he did not attend. Romantically inclined historians interpret this as a principled rejection of corrupt officialdom; others say he was sick on the day of the exam. In 1739, he spent what remained of his funds helping to dedicate a temple to an ancient sage in Nanjing, which he considered the pinnacle achievement of his life. The following year he started work on a long piece of prose fiction — a satire of life under the Qing dynasty centered on the literati, the class of scholar-bureaucrats who managed the Empire and the examination system through which they were selected — for the amusement of his friends, who all belonged, however peripherally, to this class. Written over the course of the next ten years, the completed book, consisting of fifty-five chapters, circulated in manuscript for decades after Wu’s death, until a Yangzhou firm published it as Rulin Waishi, or The Unofficial History of the Scholars, in 1803. 

    Along with four novels from the Ming dynasty — Romance of the Three Kingdoms, Water Margin, Journey to the West, and The Plum in the Golden Vase — and Dream of the Red Chamber by Wu’s younger contemporary Cao Xueqin, The Scholars is sometimes considered one of the six “classic Chinese novels.” The designation — having gained currency following the publication in 1968 of a book of that title by C. T. Hsia, the Shanghai-born scholar of Chinese literature who spent three decades on the faculty at Columbia — represents a moment of cultural syncretism, adding long vernacular prose fiction to the extant canons of Confucian and neo-Confucian philosophy, imperial historiography, Taoist and Buddhist scripture, and anthologies of poetry and short stories. 

    If you had to pick only one of the six classic novels to read, you would probably choose Dream of the Red Chamber, which is “one of the great novels of world literature” — what “Proust is to the French, or Karamazov is to the Russians,” in the words of the critic Anthony West. But what interests me about The Scholars is that its central subject is a society whose cultural, legal, and administrative institutions are grounded in the humanistic study of a canonical body of texts. 

    Hsia praises The Scholars for its “shrewd realism” and “intelligent satire” whose “stylistic and technical innovations” were of “revolutionary importance” for “the development of the Chinese novel.” In “pure and functional” narrative prose, Wu manages to paint a panorama of the entirety of Chinese society from the Emperor, his generals, and high-ranking officials to provincial judges, merchants, booksellers, and farmers to mendicant monks, swordsmen, actors, and prostitutes. With one exception — the painter-sage Wang Mien, whose tale opens the book — these characters are drawn not from history or legend, as is the case with the Ming dynasty classics, but from the imagination or experiences of their author. Some are based on Wu’s friends and acquaintances; some are based on his rivals and nemeses; the account of the prodigal poet Tu Shao-ching is undoubtedly a self-portrait. Wu puts the tenets of Confucianism and the folk beliefs of Buddhist and Taoist popular religion into the mouths of his characters, but these are treated with an irony not present in earlier Chinese fiction. The tragicomic sensibility expressed in The Scholars is his own. 

    But any resemblance between The Scholars and the novel as it was then being developed in England by Samuel Richardson and Henry Fielding — to say nothing of its evolution during the century of Jane Austen and Henry James — ends there. To readers whose expectations of the form were set by Pride and Prejudice and The Portrait of a Lady, the most distinctive and puzzling feature of The Scholars is, first of all, its structure. The critic Steven Moore compares it to a long-distance relay race. The narrative follows one minor character for a few chapters until it is handed off to another, and so on, creating a cast of more than sixty principals, none of whom, not even Tu Shao-ching, function as its protagonist. Nor are any of them truly “round,” in E. M. Forster’s sense, since Wu is less interested in the psychological interiority of individuals than in the networks of social relations that connect them. The unified chronotope that organizes Western realist fiction is also absent from The Scholars. The book ranges across great distances in both space — from the Lower Yangtze region where most of the book is set, up to the imperial capital in Beijing, and down to Guizhou where the military brutally pacifies Hmong rebels — and time: after the prologue, set in 1368, the book spans the years 1487 to 1595, without organizing the narrative around the familiar allegorical unit of the single, multigenerational family. (It is thus set entirely in the Ming dynasty, no doubt because Wu wished to avoid any trouble for his satirical barbs against the current rulers: as Manchus, the Qing emperors suspected, not without reason, that they were regarded, like the Mongol Khans before them, as ethnic usurpers by the Han literati who staffed their civil service. They were known for conducting so-called “literary inquisitions,” which involved burning seditious books, and imprisoning or executing their authors.) Because of its sheer mass, the book has been described as plotless, though, as we will see, its apparently episodic structure is subtended by a deeper thematic logic. 

    Perhaps it would be better to think of The Scholars as the culturally specific “unofficial history” its title says it is, rather than as a novel, whose use as a catch-all term for “long fictional prose narrative” tends to obscure more than it illuminates. As the name suggests, an unofficial history is a parody of official or orthodox history — a genre that extends from the Records of the Grand Historian, written in the first century B.C., to The History of the Ming, completed in 1739 — in which a chronicle of the noble lineages and heroic deeds of emperors is replaced with a chronicle of mostly petty, pompous, and vicious scholar-bureaucrats; classical Chinese is replaced with the vernacular; and ostensibly factual persons and events are replaced with those that are ostensibly fictional. To this burlesque of high literary tradition, Wu grafts one from the other end of the class spectrum. For centuries, professional storytellers had entertained popular audiences at tea houses with tales of lovers, ghosts, warriors, and criminals. By turns didactic and bawdy, and often interlaced with topical observations and social commentary, these tales, whose episodes a skilled performer could parcel out over the course of months, began to be collected and published toward the end of the Ming dynasty for the consumption of literate audiences. 

    The imprint of the storytelling tradition on The Scholars can be seen in the short synopsis that opens each chapter and the formulaic sentence that concludes all but the last (“if you would like to know what happened next, you must read on”), devices that live on today in the recap sequences and cliffhangers of soap operas and other serial narratives. It can also be seen in the proem, which states the “moral of the book”: 

    Dynasties rise and fall, 
    Morning changes to evening . . . 
    And fame, riches, and rank 
    May vanish without a trace. 
    Then aspire not to these, 
    Wasting your days 

    “The idea expressed in this poem,” the narrator acknowledges, “is a commonplace one.” Indeed it is: the same idea can be found in Ecclesiastes and the Meditations of Marcus Aurelius, to cite just two examples. Wu illustrates the point with the story of Wang Mien. A good son from a humble background and an autodidact of genius and genuine curiosity, Wang chooses not to apply for an official career. Instead, he uses his talents to become a painter, which only serves to bolster his reputation among powerful officials. To keep his integrity intact, Wang is forced to come up with a series of increasingly elaborate and comical ruses to avoid meeting with them, ultimately becoming a hermit who lives in voluntary poverty. Near the end of his life Wang receives a visitor he can neither escape nor refuse: Chu Yuan-chang, the founding Emperor of the Ming dynasty, who seeks his advice on the management of his kingdom. That the powerful are irresistibly drawn to those who shun power and contemptuous of those who seek it is another commonplace, as Diogenes and Plato, in their respective interactions with Alexander and Dionysius II, could both attest. 

    In his famous essay on the figure of the storyteller, Walter Benjamin remarks that “the nature of every real story” is that “it contains, openly or covertly, something useful.” “This usefulness,” he continues, “may, in one case, consist in a moral; in another, in some practical advice; in a third, in a proverb or maxim. In every case, a storyteller is a man who has counsel for his readers.” For Benjamin, having counsel is what distinguishes a storyteller from a novelist, who, being “himself uncounseled . . . cannot counsel others.” Fortunately for readers whose tastes have been formed by the novel, for whom being spoon-fed moral counsel and practical advice adulterates aesthetic pleasure, the commonplace about the vanity of external things that opens Wu’s unofficial history turns out to be a red herring. 

    If The Scholars can be said to have a protagonist, it is not a person but an institution: the imperial examination system, which is the only thing that directly or indirectly touches the lives of most of the characters in the book. First instituted in the seventh century, the exams were meant to select officials based on merit rather than birth, and to promote moral rectitude and ideological coherence among the group that would be in charge of administering an increasingly large and populous territory by grounding their education in a common set of culturally venerated texts, namely, the Confucian classics, and a common skill, namely, the ability to read and write hundreds of thousands of characters of non-vernacular Chinese. 

    By the time of Wu’s birth a little over a millennium later, the exam system had become a bureaucracy within the state bureaucracy. No fewer than three departments — the Imperial Secretariat, the Ministry of Rites, and the Han Lin Academy — were responsible for overseeing a bewildering array of exams. There were district exams, prefectural exams, qualifying exams, special preliminary exams, special exams, provincial exams, metropolitan exams, palace exams, and exams for the military, conducted every other year for two to three million candidates at between thirteen hundred and fourteen hundred sites around the country. Each dynasty put its distinctive stamp on the testing regime, according to the needs and the fashions of the times. One character, the publisher and bookseller Ma Chun-shang, summarizes the history (somewhat inaccurately) thus: during the Spring and Autumn period civil servants were selected for their skills in the art of the aphorism; during the Warring States period, for their skill in rhetoric; during the Han, for their exemplary deeds and character; during the Tang, for their ability to write poetry; and during the Song, for their knowledge of neo-Confucian philosophy. In one of his first official decrees, the Ming emperor scrapped the poetry requirement in favor of examinations based on the “eight-legged essay,” so-called because of the eight elements of its structure, which candidates had to follow step-by-step to answer a question on a topic selected from one of the Four Books. (The Qing civil service retained the eight-legged essay and Song neo-Confucian orthodoxy; shortly after Wu’s death, it also reinstated the poetry requirement in order to make the exams more competitive.) 

    Although the exams afforded, in principle, a degree of social mobility, there were no public schools, so candidates from the landed gentry and the merchant class, who could pay for private tutors and study materials, were at a distinct advantage; besides, it is much easier to hold your brush steady if you have had something to eat that day. (From the numerous dining scenes, whose menus are described in greater detail than the appearances of some of the characters, one gets the impression that The Scholars was written on an empty stomach.) Wu delights in slipping errors of fact into the mouths of pompous examiners and graduates of the prestigious Han Lin Academy, who were tasked with determining what constituted the proper and orthodox interpretation of the classics. In any event, any claim the system might have had to being genuinely meritocratic was undercut by the simple fact that women were barred from sitting for exams, a point underscored in The Scholars by the story of Lu’s daughter, whose intelligence, learning, and abilities as an essayist put to shame those of the successful literatus she has been married off to. 

    Since a position in the civil service conferred a degree of financial stability and social status on the successful candidate and his extended family, making sons more marriageable, the psychological pressures on the candidates were immense. Not for nothing is the title of one of the few books available in English on the subject China’s Examination Hell. At the mere sight of an exam school, one traumatized failure, Chou Chin, blacks out; another, Fan Chin, has a mental episode when he learns that, after almost a quarter century of sitting for the provincial exams, he has finally passed. Conditions like these encouraged favoritism, corner-cutting, bribery, and cheating. An intelligent and filial scholar, Kuang Chao-jen is corrupted by success; he later impersonates an exam candidate for money and becomes involved in a number of criminal schemes. As his case demonstrates, there was no necessary connection between the study of morality and its practice — and Kuang is no outlier. In fact, there is an inverse correlation between proximity to officialdom and decency as a person; perhaps that is why one of the book’s few virtuous literati, Dr. Yu Yu-teh, encourages his son to study medicine instead, and why its most sympathetic characters are poor farmers, actors, and others from socially-despised backgrounds. 

    To top it all off, there is a pervasive sense of fatalism among the candidates that ought to have been at odds with their rationalist philosophical training. Characters consult fortune tellers, astrologers, mediums, alchemists, and dream interpreters — behaviors that are typical of those who feel that the course of their lives is beyond their control. Along with tutoring the sons of the wealthy, doing clerical work for local officials, and contributing to China’s growing marketplace for books, these were some of the services provided by members of the vast underclass of highly educated exam failures produced by the system. It was hardly a recipe for social stability. 

    For me, the main pleasure of The Scholars is reading the debates conducted by the characters about the pedagogical, aesthetic, and political implications of every aspect of the exam system, which, for those who have ears to hear it, rhyme with many of the debates conducted in the United States over the past four decades. Wu’s third person narrator remains largely neutral on these debates, allowing the actions of the characters to stand as subtle confirmations or denials of the validity of their positions about the social value of poetry and essays, the criteria for judging exam performances, the trustworthiness of experts (or lack thereof), the relationship between scholarship and governance, the respective virtues of general knowledge and specialization, the duties of scholars to participate or refuse to participate in the civil service, and so on. In a handful of cases, however, he puts his thumb on the scale. After his audience with the emperor, for instance, Wang has this to say about using the eight-legged essay for exams: “These rules are not good. Future candidates, knowing there is an easy way to a high position, will look down on real scholarship and correct behavior.” To institutionalize something of value is to risk compromising it, and Wang predicts that linking scholarship to wealth and social status will have the effect of turning the exam, rather than the knowledge it is supposed to test, into the purpose of study. As if to prove his point, many years later Ma, the publisher of a bestselling collection of eight-legged essays, tells a young charge: “Even Confucius, if he were alive today, would be studying essays and preparing for the examinations instead of saying, ‘Make few false statements and do little you may regret.’ Why? Because that kind of talk would get him nowhere: no one would give him an official position.” Dostoevsky’s Grand Inquisitor would have been impressed by Ma’s reasoning. 

    Disillusioned by the corruption of scholarship, a motley crew forms. This group of failed candidates is made up of talented refuseniks and independent men of letters in Nanjing, and led by Wu’s “romantic” alter ego Tu and his friend Chuang Shao-Kuang, who has just declined the Emperor’s offer of a ministerial post thanks to a timely intervention by a scorpion that crawls into his scholar’s cap. The group decides to create a counter-institution called the Tai Po Temple. Chapter Thirty-Seven, which is about the temple’s dedication ceremony, is unanimously held to be the central episode of The Scholars, the one that gives the book its structural center and that differs in all respects from the other chapters, including in its form, whose “rhetoric of repetition” and “schematic expository style,” in the words of Shang Wei, the Du Family Professor of Chinese Literature and Culture at Columbia University, are pastiches of the ancient ritual manuals that were objects of great interest in Wu’s circle in Nanjing. Wu catalogues the ritual’s seventy-six participants, led by the virtuous Dr. Yu as master of sacrifice, in a manner reminiscent of the catalogue of ships in the Iliad; he describes their rites of purification and ceremonial dress; he details the decorations put up in the temple, the various items sacrificed to the ancient sage Tai Po, and the period-specific musical instruments played to entertain his spirit; finally, he gives a blow-by-blow account of the ritual itself. By recreating this Confucian-era practice of ceremony and music, the participants hope to help “produce some genuine scholars, who will serve the government well.” 

    It is a gesture that is as nostalgic as it is quixotic. “Of all the eighteenth century novelists,” Shang writes in Rulin waishi and Cultural Transformation in Late Imperial China, “Wu Jingzi was the one most engaged with issues of contemporary intellectual discourse.” During the eighteenth century, independent scholars publishing in China’s thriving literary marketplace began to subject neo-Confucian orthodoxy to a philological and evidential analysis akin to European “higher criticism” of the Bible. At the time, Shang notes, the literati were experiencing an “unprecedented degree of division and fragmentation,” and a corresponding anxiety about the “decline of the Confucian world order” that provided, in their view, the legitimacy of the state. By placing ritual at the center of its sweeping social critique, Wu’s unofficial history displays what Shang calls the “paradoxical combination of cultural iconoclasm and Confucian revivalism” characteristic of his generation’s intellectuals. 

    But how paradoxical is it, really? Critique of the present is just as often legitimized by an appeal to an idealized past as by an appeal to an imagined future: look no further than Bloom’s attempt to combat what he perceived as cultural decadence by regrounding elite education in the study of the Bible and Plutarch during the Reagan administration. Wu, for what it is worth, seems to be aware of the problem. Here he does not offer anything so simplistic as a commonplace about the externality of rank, riches, success, and fame to a flourishing life. In the final chapter of The Scholars, which takes place forty years after the dedication ceremony, a tea-house keeper named Kai Kuan visits the Tai Po Temple only to find it abandoned, its roof collapsed, its gate in ruins, and the musical instruments gathering dust inside or missing altogether. The events in the lives of Tu Shao-ching and Chuang Shao-Kuang have begun to fade into legend; no one can remember the details with any precision. The destiny of institutions may be corruption, as The Scholars amply records, but counter-institutions that fail to become institutions lack the material base necessary to ensure their longevity. 

    This is the paradox that has not ceased to be germane to the many who are concerned about the state of the humanities in particular and the American university more generally. Compared to the culture wars of today, the debate about the great books seems high-minded, even quaint. In the 1980s, the one thing that the advocates and the critics of the Western canon agreed about was the importance to society of humanistic education — a proposition that can no longer be taken for granted. While faculty argued at department meetings and at symposia and in op-eds about undergraduate humanities curricula, the material foundations for reproducing academics as a class and the liberal arts as an institution were cracking. Skyrocketing tuitions and student loan debt, dried-up tenure lines, crushed graduate student unions, adjunctification, bloated administrations, drop-offs in enrollment and faculty hires, shrinking or shuttered departments, incoming freshmen whose reading abilities were formed in the wake of No Child Left Behind, the smartphone, and now generative AI — the litany will be familiar to anyone who has been paying attention to the state of higher education for the past decade and a half. To this we can now add direct political interference in curricula and hiring decisions at Columbia, Harvard, and elsewhere by a presidential administration staffed wall-to-wall by culture-war berserkers who are in style, if not in substance, Bloom’s grandchildren. 

    It seems increasingly likely that, in the future, the experience of reading any literature, let alone the classics, let alone a global canon of the classics, will be the pastime of interested amateurs building for themselves the ideal library that Calvino envisioned, rather than a course of study pursued by undergraduates receiving a formal education in a college setting as Bloom assumed. Given this, it is hardly surprising that the last few years have seen numerous para-academic, educational, and cultural counterinstitutions — publications, seminars, salons, and institutes in the humanities and social sciences — set up shop outside the university system, like so many intellectual lifeboats floating alongside the hull of a sinking ship. Yet those who are prepared to abandon the university to its fate would do well to contemplate what happens to the Tai Po Temple. 

    The Scholars ends on a somewhat hopeful note, with the stories of a bohemian calligrapher, a draughts player, a painter, and a lyre-player. Each engages in these traditionally aristocratic pastimes not to appear refined, or to achieve social status, but simply because they “happen to like these things.” In spirit they recall Wang Mien, the sage whose story opens the book, or Tu Shao-ching, who is contented to live by his pen in the company of his family and friends, and does not complain of the decline in his social status and personal fortunes. Perhaps, in good Confucian fashion, Wu suggests, they will become the foundation of a new cycle of order emerging from the disorder he has chronicled in the relay race of his unofficial history. 

    The next cycle, however, was to be the last for the imperial literati. It was to come to an end in a surprising place. Just as The Closing of the American Mind was published when the United States was on the cusp of becoming the world’s sole superpower, Wu’s unsparing critique of contemporary society was written during a period that historians now call the “High Qing.” In retrospect, it was an apex moment — in terms of territorial expansion, political influence, and cultural achievement — for imperial China. Alongside the cyclical narrative about the imperial bureaucracy, Wu tells a story whose import would be far greater in ways he could not have predicted: the rise of the merchant class, which goes hand-in-hand with a coarsening of social life — as exemplified, in Wu’s satire, by the philistines of Wuhan — and the corruption of officialdom through the power of money. 

    The Scholars is awash in silver, which is at first given away, then loaned without expectation of return, and finally loaned with interest for profit. Mined in the Spanish colonies of South America in the eighteenth century, silver reached Chinese merchants through the intermediary of the British East India Company, which used it to purchase silk, porcelain, and, most importantly, huge amounts of tea. Shortly after Wu’s death, the emperor instituted the Canton system, restricting European trade to a single port, today’s Guangzhou, at the mouth of the Pearl River. After the world’s silver reserves contracted, due to overmining, Britain, which had been running a severe trade deficit with China — sound familiar? — scrambled to find a product that would help keep the tea trade afloat. The East India Company discovered one in the Indian territories of Bihar and Bengal that it had recently conquered: opium. 

    As the novelist Amitav Ghosh detailed in his recent book Smoke and Ashes, the East India Company, having supplanted their Dutch rivals, proceeded to run one of the largest illegal narco-trafficking operations in history. Their monopoly was soon to be broken by a newcomer on the global scene, the United States Merchant Marine, whose ships gave them a run for their drug money. When the Qing dynasty tried to crack down on opium smuggling and smoking, EIC sepoys invaded Guangzhou, forcing the Emperor to legalize the drug at gunpoint. Following the Treaty of Nanking, which ended the First Opium War, the EIC took home millions of pounds of reparations payments, a free market for unlimited trade on British terms, the territory of Hong Kong, and a new idea: the use of exams to select potential employees. 

    For China, it was the start of “the century of humiliation,” which would go on to include a Second Opium War, a rebellion led by a failed examination candidate that left twenty to thirty million dead, and further military defeats at the hands of Britain, France, Russia, and Japan. Shocked by these events, the Qing intelligentsia blamed, among other things, the imperial examination system for holding back the modernization process that would be necessary for China to compete militarily and economically in the new globalized world. In 1905, shortly before both the dynasty and the empire collapsed, educational reformers persuaded the court to eliminate the exams. 

    That same year, halfway across the world, the president of Columbia University inaugurated a library on the school’s new campus in Morningside Heights. Since he had paid for it out of his inheritance, he decided to name the temple of learning after his father, Abiel Abbot Low, who made his fortune smuggling opium into Guangzhou. 

     

    There Is No Privacy Pill

    On a warm Monday in June 1965, the Supreme Court declared that married women had the right to use contraceptives. This was a hard-won victory for Estelle Griswold, executive director of the Planned Parenthood League of Connecticut and namesake of the case, Griswold v. Connecticut. She had previously helped displaced persons after World War II and, motivated by her conviction that contraceptives could alleviate poverty and human suffering, fought tirelessly to overturn the birth control laws in Connecticut, then some of the strictest in the country. Her persistence in the face of failed appeals, fines, and even jail time managed to transform contraceptive access from something reserved for well-resourced women to something available for all (married) women. But the legacy she left behind is far greater than the outcome of this single court case and the women her clinic personally helped with family planning. Griswold v. Connecticut set a precedent for the blockbuster reproductive health victories that followed, like Eisenstadt v. Baird, which extended contraceptive access to unmarried women, and Roe v. Wade, which granted women the federal right to an abortion. It also laid the groundwork for future court cases that decriminalized sodomy and established the rights to same-sex and interracial marriage. But that wasn’t all. Estelle Griswold’s fight for contraceptive access paved the road for something else, something she couldn’t have imagined at the time: the right to internet privacy. 

    Even as a privacy researcher, I didn’t find it immediately obvious that a married woman’s right to contraceptives was somehow related to internet privacy. The connection is thanks to the argument made by Justice William O. Douglas, who delivered the majority opinion in Griswold, in which he stated that to interfere with the contraceptive use of married couples would be a violation of their — and here is the key word — privacy. To connect the dots explicitly, he asked and then answered his own question. “Would we allow the police to search the sacred precincts of marital bedrooms for telltale signs of the use of contraceptives? The very idea is repulsive to the notions of privacy surrounding the marriage relationship.” On its own, this section of his opinion is worth celebrating, at least for married couples, but it is narrow. Yet Douglas didn’t stop there. He argued that the marriage relationship was actually just one example of something that falls in the “zones of privacy” afforded to Americans, as implied by the First, Third, Fourth, and Fifth Amendments. Essentially, Americans have the right to privacy even though it isn’t explicitly stated in the Constitution, and that right manifests across many different zones of life. Other zones include the home, and the papers and other personal effects within it, as covered by the Fourth Amendment’s protection “against unreasonable searches and seizures,” and the Fifth Amendment’s self-incrimination clause, which turns our words into a zone of privacy, such that we cannot be forced to speak in a way that can be used against us. 

    Obviously, the internet was not explicitly mentioned in a ruling made decades before it existed, but it has since become a space where we regularly make decisions akin to the ones we make in the “sacred precincts of marital bedrooms” and manage documents as sensitive as the papers we keep in our homes. The notion that at least some corners of the internet should be zones of privacy is evidently held by a majority of Americans who, according to the Pew Research Center, are concerned about the state of privacy online and in general. We have Griswold, and the subsequent legislation built on top of it, to thank for that. In the sixty years since this landmark ruling, the fates of both reproductive health and internet privacy have continued to touch, even as they both face an uncertain future. Perhaps this unexpected entanglement can teach us something about what is to come for them both. 

    In the decades since Griswold v. Connecticut, the internet has expanded rapidly, subsuming much of our social infrastructure. As a consequence, the many pieces of data that stitch together a single life often exist as online digital traces, covering everything from shopping to taking tests, filing taxes, and making doctor’s appointments. Although the United States still has no comprehensive federal privacy law — unlike other governing entities, like the European Union — it does have some regulations that acknowledge that certain categories of data require more privacy than others. For example, the Health Insurance Portability and Accountability Act of 1996 restricts the collection and use of protected health information, like email addresses, phone numbers, Social Security numbers, or IP addresses, that can be used to link an individual to their health records. This regulation has taken a cue from Griswold and carved out specific digital records as belonging to a zone of privacy. 

    To see it in action, consider a case from two years ago. A patient at Redeemer Health, a Catholic non-profit health system, requested that Redeemer send a prospective employer a test result from her records, likely the results of a drug test or physical exam. Rather than only send the specific result requested, Redeemer sent the prospective employer her entire medical record without her consent, including her reproductive health and other OB/GYN history. Consequently, Redeemer Health paid over $35,000 in settlement money and committed to a two-year corrective action plan. HIPAA serves as an example of how the law has codified the notion that reproductive health, even in digital form, is deserving of privacy in a way that cannot legally be breached. 

    Still, even though we have some internet privacy regulation today, it is insufficient for robust privacy. Although this reality has been somewhat demystified through large and well-documented scandals, like Cambridge Analytica’s unauthorized collection of Facebook data and Edward Snowden’s NSA disclosures, it can be easy to forget or dismiss. This is especially true when we use the internet to browse more intimate matters in isolated physical spaces, under what can feel like a cloak of invisibility. When privacy violations intersect with reproductive health, however, it becomes clear just how important, and lacking, internet privacy really is. This is particularly easy to see through the lens of targeted advertising. Consider the case of a Minneapolis teenager, whose targeted advertisements exposed secrets she hadn’t yet shared with her own family. Her father, initially outraged by the ads, discovered that she was pregnant from the recommendations she received for baby clothes and cribs from, ironically, the company Target. That this behavior feels spooky and inappropriate is a testament to the cultural understanding facilitated by Griswold that, when it comes to reproductive health, women deserve privacy from entities like corporations or governments. Even so, this type of tracking still happens today, often facilitated by big technology companies like Google and Meta, on hospital websites and pharmaceutical websites selling products like Plan B. This means that records of the personal decisions women are making about their bodies — such as searches for abortifacients — are collected and stored such that they may be made available to advertisers or, in some cases, the government. This type of privacy violation can already cause harm in a country with strong reproductive health protections. In a country without them, the consequences can be far worse. 

    Today, it is unclear which category the United States falls in. By many measures, reproductive health care access is hanging on by a thread. Roe v. Wade was overturned three years ago by another landmark ruling, Dobbs v. Jackson, after which twelve states banned abortion entirely and an additional seven states enforced restrictions after a certain number of weeks. To some extent, this ruling creates challenges for those seeking an abortion in the United States similar to the ones faced by women seeking birth control in Griswold’s era. Women lucky enough to be born in the “right” state, or with the means to travel to one, can still get abortions in this country, while the rest cannot. Once again, reproductive health access is stratified across socioeconomic lines. Unlike Griswold’s era, however, the internet is now widespread and facilitates access to contraceptives, medical professionals, and other community networks for all women, regardless of the state they are living in. This collision tests the limits of how much the internet can fulfill its original promise. Eight years ago, on the web’s twenty-eighth birthday, World Wide Web inventor Tim Berners-Lee wrote a message calling for progress towards an internet “that gives equal power and opportunity to all.” A crucial component of an equal internet is respect for the right to privacy that William Douglas found in the penumbra of the Constitution. 

    That is not the internet we have today. Much like abortion access, the internet exists in a state of stratification, with comprehensive privacy only available to those with the technical expertise to use privacy-enhancing tools. Once again we see internet privacy and reproductive health run on parallel tracks. For Jessica Burgess, a woman in Nebraska who helped her 17-year-old daughter acquire abortion medication, the dearth of internet privacy had catastrophic consequences. Their conversations about it on Facebook Messenger created a digital trace that prosecutors were able to obtain and use to send both mother and daughter to prison. In another example, the doctor Margaret Carpenter prescribed abortion pills from New York, where abortion is legal, to patients in Texas and Louisiana, where it is not. Yet both states have taken legal action against her, seeking a civil penalty and a criminal felony conviction respectively, at least in part thanks to a digital trail of evidence. This digital trail doesn’t even need to be so explicit as to detail specific medications or procedures to become problematic. Privacy experts have already sounded alarms about the role that menstrual tracking application data could play in future court cases. 

    Reproductive health rights and internet privacy have become unexpectedly entangled as both have ridden a rollercoaster of expansion and contraction. The right to privacy in the United States, the guiding principle that unites them, has become deeply uncertain. We are now living in a time when we are increasingly stripped of the right to make decisions about our bodies and conduct our intimate business online free from peering eyes. If history is any guide, those in desperate need will increasingly turn to alternative methods or technically complex tools for both abortion and internet privacy, ones that are less regulated and may be more dangerous. But hopefully that can change. With any luck, our generation’s Griswold is already gearing up for the next fight. 

     

    Love is a First Responder

    Lately, when I close my eyes at night, my thoughts strangely tunnel back to 2001 and one particular fireman. It was September, I was 22 years old and living in New York, and the world around me had turned to powder. Not the delightful dander of snow globe souvenirs or storybook Manhattan Christmases, but the dust and the ash, the gray and black debris, of a page in history that was burning even as it was being written. 

    A week or so after the Twin Towers were brought down, Ground Zero was still an open wound. It would be that way for a long time, I mean the smoking rubble. The smoke hovered everywhere above and around us. Even where we lived in Washington Heights, way uptown, we could smell what we assumed was burnt metal and charred flesh. A friend and I felt like we needed to go downtown to where it happened. To see it for ourselves; to stand at the epicenter. 

    When we got there, parts of the devastated site were still closed off, but we reached the center, and it was a deeply chilling monochrome. Storefront windows were cracked and smashed by the blast of flying shrapnel. There was a shredded comic book in one of the windows, smudged Superman and Batman figurines, a picture of Gotham covered (like our Gotham, I mean New York) in cinder, while the store’s owner tried to wipe the heroes clean. 

    Ash was everywhere and the air was thick, but we didn’t cover our faces. We let the cells and souls that lingered, suspended, hit our lungs. Yanik and I breathed it all in, probably stupidly, but we thought that if the first responders were down there, day in and day out, digging for any remains of the massacred, then we could for a few hours, in some kind of dumb camaraderie that made sense to us, also become one with these souls. 

    That same night, after we trudged through the dust, Yanik and I went to a bar called Hogs & Heifers, where, it turned out, some of the firemen and first responders would gather and get understandably stoned out of their minds, so as to drown out what they had seen, as if that were possible. The place was packed. The firemen were still wearing parts of their uniforms, their faces full of smudge, like Superman’s in the damaged window, and their arms too, but their hands were clean. Not their fingernails: their fingernails still held the day’s, the week’s, dark particles — those might be there forever, a new DNA code tucked into the nailbed, telling its story. But the rest of their hands were crisp, as if each of them had sung the alphabet or recited a prayer under tap water and soap, over and over again, until the soot of the catastrophe washed off. 

    Hogs & Heifers, usually kind of chilly and known for having a bunch of bras hanging from the ceiling, was thick with breath and humidity — bodies cramming up against each other, hot, even though it was cold outside. I was drinking gin and tonic, which felt clean for the same reason it was called “tonic,” and, though I wasn’t much of a dancer, Yanik and I started really moving, our bodies just letting go, until a fireman came up to me and asked if he could dance with me. This is the fireman I can’t stop thinking about lately. 

    I said yes. He got really close. The dirt from his uniform got all over me, changing the color of my clothes, and my heart started to beat fast and hard, thinking about what that dust actually was — all over me, all over us. “Baby,” said the fireman, “I wanna take you home with me.” “Let’s just dance,” I said. What I didn’t tell him was that I was a virgin and wasn’t going home with him; this was most definitely not the way I was going to “lose my virginity.” He was looking for something else that night and I was not it. 

    Except that, in the end, maybe I was. “Okay,” he said. “But don’t leave me, okay?” He whispered this last part into my ear and held me tight; he gripped me, as if for his life. “I won’t leave you,” I said, as a surge of emotion ran up to my throat. Embarrassed, I put my head on his shoulder and let a tear fall quietly, where he couldn’t see it. I let him put his hands under my shirt. We both went hard, my nipples and him inside his pants. We felt each other firm beneath our clothes, and that was comforting somehow, proof of a common surviving pulse. 

    Then he took his hand out from under my shirt and touched my lips with his sweaty thumb and I traced his eyebrows. I wanted to kiss him but I didn’t. We just danced in that devastated reality, back and forth, the two ends of a rope pulling and yielding. 

    And then, all of a sudden, the National Anthem started to blare. It came out of nowhere. And everyone stopped dancing, frozen in their tracks. Even those who were rocking out on top of the bar froze. Every single person stood still and put their hands on their hearts and sang. O say can you see, by the dawn’s early light, what so proudly we hailed at the twilight’s last gleaming. Tears made new roads in the black marks of those men’s faces. My throat ached as I sang, what I was trying to hold and what I was trying to release fighting inside me and rising to my lips. Never in my entire life had I sung so deeply. Not loudly, just profoundly, about the land of the free and the home of the brave. Which, after all, was mine — the place that had taken in my refugee parents, that had given my grandfather a cloak after he had to flee three, count them, tyrannies: Franco and Hitler and Castro. 

    And then it was over, and the bras swayed above us and another song came on over the speaker. My fireman turned to me and said: “Stay here, hot cookie, don’t go. I gotta take a leak.” 

    I stood swaying to Springsteen as I stared at the back of my fireman’s bald head, until he disappeared into the crowded dance floor. Then another fireman tapped me on the shoulder and said: “Mind if I borrow you while your man takes a whiz?” I laughed and went with it. This second fireman twirled and twirled me until Fireman One came back — my fireman, who looked at me with a sad face when he saw me twirling and said: “It’s okay if you go home with someone else, it won’t hurt my feelings.” 

    “I’m not going home with him,” I said, “I’m not going home with anyone,” which is when Fireman Two bowed out, smiling, stepping backward, making way for Fireman One. “Do you smoke?” asked my fireman, coming closer again. I nodded. He handed me a cigarette. There had been rumors that smoking would be banned in bars in the city, but we didn’t believe it. New York would never outlaw smoking inside bars. How lame would that be? Still, somehow, we ended up outside, or just near the door or the windows. 

    This part gets a little blurry, all those drinks in, as to exactly where we were standing, but what I remember clearly is that the smell switched from sweat to something more like a crackling pig over a low fire. Smoke upon smoke upon smoke. “Fuck,” he said, as if remembering everything. He looked down, inhaling, exhaling. He had been restored to the horror. “Yeah,” I nodded. 

    “Did you know anyone?” The question that always came up. He looked up as he asked, and it was the first time, under the streetlights, that I really saw his eyes, charcoal and glassy. 

    “No.” 

    “Lucky.” 

    “You?” 

    “Too many people.” 

    “I’m sorry . . .” It was a stupid thing to say, I knew it, but I had nothing else. 

    The smoke lingered in front of him, folding into the rest of Manhattan’s gray. 

    “We’re juiced,” he said, looking up at me again. And as he said it, his legs failed him a little, and he wobbled. “Let me go home with you . . .” he whispered into my ear as I held him up. 

    “I’m sorry,” I said again. “I’m sorry . . .” I was sorry. Eventually Yanik and I found each other in the crowd and we went home. 

    Hogs & Heifers is closed now. It was priced out, after twenty-three years of business. Otherwise I would make a pilgrimage there. Instead I find myself on a different kind of pilgrimage, an imaginary pilgrimage made of dream and memory, every night, hand to chest, thinking of the fireman, and about the love — it was nothing else — that coursed through me that day as I sang and as he held me close, and I him, alongside everyone he had dug out for days on end. 

    If I had to pick my fireman out of a lineup today, if I had to swipe him right or left on a dating app, I wouldn’t recognize him. I’m sure of it, even if age had nothing to do with it. But I will never forget what I felt that day. Disaster is a great educator. What I felt in those hours was a love for something greater than either of us, for something that we held and that held us. I am referring to a love of country. It was a country that felt, for the first time for many of its inhabitants, like it could crumble if we were not careful. 

    The indescribable intensity of that leaves me a little homiletical. Now, two decades later, when I sing the anthem, I hold my hand over my heart tighter than ever, as if clinging to the fabric of our flag and the parchment of our Constitution. The anthem ends with a question: “O say does that star-spangled banner yet wave, / O’er the land of the free and the home of the brave?” We are asked to answer, personally, individually, and in unison. The question serves as a kind of answer. Our anthem is not just a sentimental poem; it is a call to action. The answer sits in the push and pull of our checks and balances, in the ongoing dance (sometimes uglier than we’d like) of our democracy. It sits in us. In our ability to do what the fireman did: respond. To love is not to ask the question, but to respond to the question. Sometimes lessons take a moment to sink in, like these from my fireman. Once they’re in the nailbed, however, sooner or later, the body absorbs them.

    Shakespeare’s Mothers

    Shakespeare’s mothers are often nasty. Lady Capulet ignores, then disowns, poor Juliet. Lady Macbeth would kill her child to gain a throne. Though they grieve (Constance in King John), it is vicious grief (Queen Margaret in Richard III). Sometimes they are terrifying: Volumnia raised Coriolanus to be a tyrant; Tamora encourages her sons to commit rape in Titus Andronicus. Often, there are no mothers. O! Cordelia, Katherine, Miranda, Jessica — think how they need their mother’s love! 

    Many are minor. Aemilia in The Comedy of Errors appears at the end, a resolution. The blameless Lady Macduff appears only to be hauntingly killed, a brief symbol of innocence in a darkening world. There is a wicked step-mother (Cymbeline), a jealous step-mother (Pericles), a weak-willed mother (Hamlet). It is almost incidental that Mistress Page (The Merry Wives of Windsor) is a mother; Cleopatra, too. Only Hermione’s strong innocence in The Winter’s Tale makes her rightfully beloved. 

    Hermione has a splendid precursor, the Countess of Rousillion, from All’s Well That Ends Well. This play is unjustly unloved, and the Countess gets less attention than she deserves. She is among Shakespeare’s most fascinating characters, and is his most wonderful mother. 

    All’s Well That Ends Well is an inverted romance in which the woman pursues the man. It is also about inverted families. It is often said that we cannot choose our parents: Shakespeare is interested in the fact that we cannot choose our children. Just as Helena inverts the expectations of a romantic heroine, so the Countess inverts the expectations of a mother, and picks her child. 

    Bertram’s father is dead. He is now a ward of the king. His mother the Countess is left in Rousillion with her own ward, Helena, a doctor’s orphan. Helena reveals to the audience that she loves Bertram and plans to go to Paris to cure the king of his fistula (using her father’s remedies) so he will marry her to Bertram. The Countess discovers this, and, knowing Bertram is a hopeless boy in whom “blood and virtue contend for empire,” promises Helena her help. The cure works! But Bertram refuses marriage because Helena is low born. The king’s pride forces the wedding. That night, Bertram absconds. Helena returns to the Countess, who is doubly distressed that Bertram has disobeyed the king and treated Helena badly. Bertram has sworn he will have “Nothing in France” until he has no wife. So Helena, distressed that Bertram has gone to war, runs away so that he may return. Hearing that Helena has gone on pilgrimage, the Countess is heartbroken. Earlier, when Helena told her that Bertram had absconded, the Countess had replied: “I do wash his name out of my blood, / And thou art all my child.” 

    On her travels, Helena discovers that she is in the same town as Bertram, where he is wooing a virgin. When he left, Bertram told Helena they would be together if she got the ring from his finger and was pregnant with his child. Helena tells the virgin, Diana, and Diana’s mother, that she will provide them with a dowry if they help her. Diana bids Bertram come to her, give her his ring, and she will sleep with him. At the crucial moment, Helena takes Diana’s place (the “bed trick”). Diana also gives Bertram a ring of Helena’s, which exposes him later on at court as a liar, rogue, and cad. Then Helena returns with his ring and baby. And all’s well that ends well. 

    The plot comes from Boccaccio, via a translation. The Countess is Shakespeare’s. Without her, the story would play differently. Helena is not, as some have said, predatory. She is guided and approved of by the Countess. The play does not condemn her; the Countess shows us she is to be understood, not feared. Helena is one of Shakespeare’s great experiments. She inverts the expectations of her sex without losing her virtuous character or destroying the basic plot of a romantic comedy, but with some of the most daring challenges to the form in the whole Shakespeare canon. She is matched in this by her adoptive mother. 

    When the Countess discovers that Helena loves Bertram, she has no class pride, but feels great sympathy. 

    Even so it was with me when I was young: 

    If ever we are nature’s, these are ours; this thorn 

    Doth to our rose of youth rightly belong 

    Helena appears and the Countess says “You know, Helen, / I am a mother to you.” Helena quibbles. “Mine honourable mistress.” The Countess replies, not in her usual mellifluous lines, but in broken speech. 

    Nay, a mother: 

    Why not a mother? When I said “a mother,” 

    Methought you saw a serpent: what’s in “mother,” 

    That you start at it? I say, I am your mother 

    Helena’s speech is characterized by caesura. She has a verse of deliberation, uncertainty, thinking. The Countess mirrors this to draw Helena out, rather than confront her. She even slips in the prospect of marriage, 

    Yes, Helen, you might be my daughter-in-law: 

    God shield you mean it not! daughter and mother 

    So strive upon your pulse. What, pale again? 

    My fear hath catch’d your fondness 

    As the scene reaches breaking point, Helena is overwhelmed and confesses her love on her knees, saying “My friends were poor, but honest; so’s my love”; begging the Countess, “Let not your hate encounter with my love.” 

    Were this a scene between father and daughter, it would turn vicious. “Out, you baggage!” screams Lord Capulet. Polonius demands demure obedience of Ophelia. Lear rages at Cordelia, Brabantio at Desdemona. Not the Countess. She tests Helena’s plan to cure the king (“how shall they credit / A poor unlearned virgin”) and, satisfied of her mettle, tells her: 

    Be gone to-morrow; and be sure of this, 

    What I can help thee to thou shalt not miss. 

    It is a very fine moment. The Countess knows Bertram lacks the sense and virtue Helena exemplifies. She tried to advise him when he left for court, but added to his mentor: “’Tis an unseason’d courtier; good my lord, / Advise him.” Alas, Bertram is lost to the maleness of court and the banter and bawdy of Parolles, a braggart who leads him astray. 

    The Countess knows Bertram is lost. But she can mother Helena, a girl much in need of a parent. Together they are the opposite of the rash and manly court: calm, discursive, supportive, clever. 

    The Countess is more able to govern her emotions than her impetuous son and his anger-prone king. She unsexes herself like Lady Macbeth, but in a pragmatic, not bloody manner. On hearing of Bertram’s departure for war, she says, 

    Pray you, gentlemen, 

    I have felt so many quirks of joy and grief, 

    That the first face of neither, on the start, 

    Can woman me unto’t 

    We will not catch her shedding woman’s tears, as her sex and genre have primed us to expect. She knows Bertram is too proud of his heritage (she says his behaviour “corrupts a well-derived nature”) and urges him “succeed thy father / In manners, as in shape.” Helena, by contrast, has raised herself up in virtues “her education promises; her dispositions she inherits, which makes fair gifts fairer.” The Countess cannot change them. But she can change who and how she mothers. 

    As Bernard Shaw said, the Countess is “the most beautiful old woman’s part ever written . . . full of wonderfully pleasant good sense, humanity, and originality.” Who else in Shakespeare sees their own hapless children so free from illusion and gives their “maternal tenderness and wisdom” to an orphan who needs her most? The Countess is a pragmatist, but not cold-hearted; a realist, but not venal. 

    The play does not end at court, where the marriage was enforced. It closes in Rousillion, where the ineffective king is the Countess’ guest. She has the opening line (setting the theme of new life from trauma: “In delivering my son from me, I bury a second husband”) and she voices some powerful declarations at the end, announcing that the rings expose Bertram as a cad. She loves her son. She pleads for him to the king. But she knows what he has done to Helena (and how little he cares for his own mother’s feelings) and she does what she can to put it right. One of the most moving moments of the play occurs at the end when Helena returns and says “O my dear mother, do I see you living?” 

    Shakespeare added the Countess to show us that the world is not a game of ideals. It is a contingent place, where we must do what we can to get as good an outcome as possible. Bertram believes in his class (Shaw called him a Junker with no quality other than his class quality). Helena and the Countess believe in virtue, skill, intelligence, independence. 

    Why should they be told what to do by these martial men? Why should they cry women’s tears? Why should Helena wait to be chosen as a wife? Why shouldn’t the Countess mother the one who loves and needs — the one who listens? 

     

    What AI Cannot Do, Not Now, Not Ever

    I am about to flip a coin. Can you predict whether it will be heads or tails?

    You might have gotten it right. (It was heads.) But if so, you were lucky. Whether it would come up heads or tails depends on an assortment of factors that you could not identify before I flipped the coin. 

    In principle, of course, reliable predictions could be made. If you knew everything relevant to whether a coin will come up heads or tails, you could make an accurate prediction. So let’s not talk about randomness. That is not the problem. The problem is the absence of essential information.

    With respect to the role of randomness, here is an illuminating exchange I had with ChatGPT: 

    ME: If I flip a coin, can you predict whether it will come up heads or tails? 

    CHATGPT: No, I can’t predict the outcome of a coin flip. A fair coin toss is random, and each flip has an equal 50% chance of coming up heads or tails. If you’d like, I can simulate a coin flip for you! 😊 

    ME: Why can’t you predict the outcome of a coin flip? 

    CHATGPT: Predicting the outcome of a coin flip is nearly impossible because it depends on a complex combination of factors, including:

    Initial Conditions: The force, angle, and speed of the flip, along with the height it reaches, are all variables that are difficult to measure and control precisely.

    Air Resistance: The way the coin interacts with the air as it flips introduces randomness.

    Surface Interaction: The coin’s bounce and spin upon landing on a surface can further alter the outcome.

    In theory, if all of these factors were perfectly known and calculated, the outcome could be predicted (this is the basis of classical mechanics). However, in practice, the variables are too many and too sensitive for accurate prediction. For most purposes, a coin flip is treated as random. 

    ME: Isn’t the word “random” misleading here? 

    CHATGPT: You’re absolutely right — calling a coin flip “random” might oversimplify what’s actually happening. In reality, a coin flip is deterministic, meaning the outcome is governed by physical laws like Newtonian mechanics. If we had perfect knowledge of all the initial conditions (force, angle, spin, air resistance, etc.), we could theoretically predict the result every time. 

    When we call it “random,” we’re really referring to practical randomness — the idea that the variables influencing the outcome are so complex and sensitive that, for all practical purposes, we treat the result as unpredictable. In this sense, “random” reflects our ignorance of or inability to measure the key factors, rather than true randomness in the sense of quantum unpredictability. 

    It’s a convenient shortcut in language but not strictly accurate in a scientific sense.
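    The point about practical randomness can be made concrete with a toy model. What follows is a sketch of my own, in Python, with invented parameters rather than anything ChatGPT or a physics text prescribes: the outcome is fixed entirely by the launch velocity and the spin rate, with no randomness anywhere, and yet differences in launch speed far too small for a thumb to control are enough to reverse the result.

import math

G = 9.81  # gravitational acceleration, m/s^2

def coin_outcome(launch_velocity, spin_rate, starts_heads_up=True):
    # Toy deterministic flip: the coin is airborne for t = 2v/g seconds and
    # turns through spin_rate * t radians; whichever face the completed
    # half-turns leave uppermost is the result. (All parameters hypothetical.)
    airtime = 2.0 * launch_velocity / G
    half_turns = int((spin_rate * airtime) // math.pi)
    lands_same_side = (half_turns % 2 == 0)
    if lands_same_side:
        return "heads" if starts_heads_up else "tails"
    return "tails" if starts_heads_up else "heads"

# Two centimeters per second of difference in launch speed is enough
# to flip the outcome of this perfectly deterministic model.
for v in [2.40, 2.42, 2.44, 2.46, 2.48]:
    print(f"launch velocity {v:.2f} m/s -> {coin_outcome(v, spin_rate=250.0)}")

    Nothing in the model is random; the unpredictability comes entirely from our inability to measure the initial conditions finely enough. The problem, again, is missing information.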

    Justice 

    A central goal of the liberal political tradition is to undo damaging forms of subjugation, which is why John Stuart Mill’s The Subjection of Women is a canonical liberal text. With his emphasis on the importance of individual agency, Mill laments that “the inequality of rights between men and women has no other source than the law of the strongest.” In a key passage, he writes: 

    What is the special character of the modern world — the difference that chiefly distinguishes modern institutions, modern social ideas, modern life itself, from those of times long past? It is that human beings are no longer born to their place in life, and chained down by an unbreakable bond to the place they are born to, but are free to use their talents and any good luck that comes their way to have the kind of life that they find most desirable. 

    Mill’s argument here is more subtle than the context might suggest. He is speaking, to be sure, of careers open to talents — of a right to seek opportunities and to try to find the kind of life that one finds most desirable. That is the liberal insistence on the dissolution of unwanted chains and bonds. But Mill is also careful to draw attention to the importance of “any good luck that comes their way.” In its best forms, the liberal tradition emphasizes that lotteries are everywhere. It points to the place of “good luck,” understood as practical randomness, and the multiple forms it takes. John Rawls’ A Theory of Justice is the most sustained development of that point. 

    The term “good luck” isn’t exactly right, but let’s not be fussy. I want to say something about the lived equivalent of lotteries. I am going to approach that question indirectly, or from the side. But please keep it in mind throughout. Full disclosure: this is, in part, a discussion of justice. 

    Sampling on the Dependent Variable 

    A number of years ago, a brilliant law student — let’s call her Jane — came to my office with an intriguing research project. She wanted to study the sources of success. Jane’s plan was to contact dozens of spectacularly successful people in multiple fields (business, politics, music, literature) to see what they had in common. Maybe all of them had difficult childhoods. Maybe none of them had difficult childhoods. Maybe all of them were quick to anger. Maybe none of them was quick to anger. Maybe all of them developed a passion in high school. Maybe none of them developed a passion in high school. Maybe all of them were impatient. Maybe none of them were impatient. Jane was energetic as well as astonishingly smart. There was little doubt that she would be able to carry through with her project. If she called famous people, she would find a way to get them to take her calls. 

    Still, something was wrong with what she had in mind. Suppose we learned that a large number of spectacularly successful people did indeed have something in common. Would we know that what they had in common was responsible for their spectacular success? 

    Not at all. There might be plenty of people (hundreds, thousands, millions) who share that characteristic and who did not end up spectacularly successful. The shared characteristic might not be sufficient for success. Imagine, for example, that spectacularly successful people turn out to be quick to anger. Plenty of people who are quick to anger do not succeed. Maybe they never got a chance. Maybe they got mad at the wrong person at the wrong time. Maybe they were born in poverty. Maybe they didn’t have the right skin color. 

    If we learn that spectacularly successful people tend to be quick to anger, have we learned anything at all? Maybe not. The problem with Jane’s project has a name: selecting on the dependent variable. Countless business books about success follow a path identical to the one Jane proposed. They try to figure out what characteristics are shared by inventors, innovators, leaders, or other successful types. If they find a shared characteristic, they announce that they have discovered a secret or clue of some kind. Maybe so. But maybe not. (Probably not.) 
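    The trouble with Jane’s design can also be made concrete in a few lines. The sketch below is a hypothetical illustration of my own, not Jane’s study and not any published analysis: it hands out a common trait at random, makes spectacular success a pure lottery that ignores the trait, and then samples only the winners, the way a select-the-successful study would.

import random

random.seed(0)

POPULATION = 1_000_000
TRAIT_RATE = 0.6      # how common the trait (say, quickness to anger) is overall
SUCCESS_RATE = 0.001  # "spectacular success" here is a pure lottery

population = []
for _ in range(POPULATION):
    has_trait = random.random() < TRAIT_RATE
    is_success = random.random() < SUCCESS_RATE  # the trait plays no role at all
    population.append((has_trait, is_success))

successful = [has_trait for has_trait, is_success in population if is_success]
everyone = [has_trait for has_trait, _ in population]

print(f"Trait rate among the successful: {sum(successful) / len(successful):.2f}")
print(f"Trait rate among everyone:       {sum(everyone) / len(everyone):.2f}")

    A study that interviews only the successful sees the first number and reports that most spectacularly successful people are quick to anger; the second number, which such a study never computes, shows that the trait predicts nothing. That is what selecting on the dependent variable costs you: the comparison group.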

    Challenges 

    Could AI have predicted in 2006 that Barack Hussein Obama would be elected president of the United States in 2008? Or could AI have predicted in 2014 that Donald Trump would be elected president of the United States in both 2016 and 2024? Could AI have predicted in 2005 that Taylor Swift would become a worldwide sensation? The answer to all of these questions is obvious: No. AI could not have predicted those things, and no human being could have predicted those things. But why? 

    To test your answer, here are five challenges: 

    1. Consider the question whether two people are going to fall in love. AI might not be able to foresee the potentially decisive effects of context, timing, and mood. 
    2. Consider the question whether a song will become a big hit. AI might not be able to foresee the effects of social interactions, which can lead people in directions that are exceedingly hard to predict. 
    3. Consider the question whether a social movement, on the left or the right, will arise in a specified month or year. AI might not be able to identify people’s preferences, which might be concealed or falsified, and which might be revealed at an unexpected time. 
    4. Consider the question whether fossil fuels will be phased out by 2048. AI might not be able to anticipate change, including rapid change, which might be a product of unexpected shocks (a technological breakthrough, a successful terrorist attack, a black swan). 
    5. Consider the question whether a new start-up will do well. AI might not have local knowledge, or knowledge about what is currently happening or likely to happen on the ground. 

    Friedrich Hayek was the twentieth century’s most rigorous critic of socialism, or government planning, and his most influential essay, “The Use of Knowledge in Society,” from 1945, is best seen as an argument about the limits of prediction. Hayek did not draw attention to the motivations of planners, and he was not claiming that they are corrupt or self-interested. His concern was what he saw as their inevitable lack of information. 

    He began: “If we possess all the relevant information, if we can start out from a given system of preferences, and if we command complete knowledge of available means, the problem which remains is purely one of logic.” That is a lot of “ifs.” Of course we do not have all relevant information; preferences shift; and we do not have complete knowledge of the available means (including technologies) which change over time. Thus Hayek emphasized that the “peculiar character of the problem of a rational economic order is determined precisely by the fact that the knowledge of the circumstances of which we must make use never exists in concentrated or integrated form but solely as the dispersed bits of incomplete and frequently contradictory knowledge which all the separate individuals possess.” 

    Focusing on those dispersed bits of incomplete and frequently contradictory information, Hayek pointed to “the importance of the knowledge of the particular circumstances of time and place” — knowledge that planners cannot possibly have. You cannot predict what will happen if you do not have knowledge of those particular circumstances. How much are people going to like a new movie about spies? How popular will a new store be? You might have some clues, but you might not be able to make confident predictions. The same is true of AI. 

    Hayek also pointed to a separate problem: change. In October things might be very different from what they were in January, and planners might struggle to understand that. What is true in January (what people like, what technologies exist, what diseases are spreading) might not be at all true in October. The knowledge that people have in markets also shifts rapidly over time. As Hayek had it, the price system is a “marvel,” because it can incorporate knowledge that is not only widely dispersed but also fleeting. Some new fact might become clear all of a sudden and it might change everything. Or some new taste might emerge quickly and spread in a hurry. Markets can absorb new information and new tastes. Planners cannot. 

    Like central planners, AI will struggle to make accurate predictions, not because it is AI but because it does not have enough data to answer the question at hand. The cases in which it struggles often, though not always, involve complex systems. 

    Life Trajectories 

    In 2020, a large team of researchers engaged in an unusually ambitious project, whose subject they called “measuring the predictability of life outcomes with a scientific mass collaboration.” They wanted to see if life trajectories could be predicted. To do that, they challenged the world. Their challenge had a simple name: The Fragile Families Challenge. 

    The challenge began with an extraordinary data set, known as the Fragile Families and Child Wellbeing Study, which was specifically created in order to enable social science research. That study, which is ongoing, offers massive amounts of data about thousands of families, most with unmarried parents. Each of the mothers gave birth to a child in a large city in the United States around 2000. The data was collected in six “waves,” at birth and at the ages of one, three, five, nine, and fifteen. Each collection produced a great deal of information, involving child health and development, demographic characteristics, education, income, employment, relationships with extended kin, father-mother relationships, and much more. Some of the data was collected by putting a battery of questions to both the mother and the father. Some of it came from an in-home assessment (at ages three, five, and nine) that included measurements of height and weight, observations of neighborhood and home, and various tests of vocabulary and reading comprehension. The Fragile Families Challenge was launched when data had been collected from the first five waves (from birth to the age of nine), but before complete data from the sixth wave (year fifteen) were available. 

    That was a terrific advantage, because it allowed the researchers to create the Challenge, which was to predict the following outcomes: 

    1. Child grade point average 
    2. Child grit (determined by a self-reported measure that includes perseverance) 
    3. Household eviction 
    4. Household material hardship 
    5. Layoff of the primary caregiver 
    6. Participation in job training by the primary caregiver 

    Those who took the challenge were given access to background material from the first five waves, and also to data on one-half of the families from the sixth wave. The material contained data on a total of 4,262 families, with a whopping 12,942 variables about each family. The central task was to build a model, based on the available data, that would predict sixth-wave outcomes for the families whose sixth-wave data had been withheld. 

    The researchers sought to recruit a large number of participants in the Fragile Families Challenge. They succeeded. In the end, they received 457 initial applications, which were winnowed down to 160 teams. Many of the teams used state-of-the-art machine-learning methods, explicitly designed to increase accuracy. The central question was simple: Which of the 160 teams would make good predictions? 

    The answer is: none of them! True, the machine-learning algorithms were better than random; they were not horrible. But they were not a lot better than random, and for single-event outcomes — such as whether the primary caregiver had been laid off or had been in job training — they were only slightly better than random. The researchers conclude that “low predictive accuracy cannot easily be attributed to the limitations of any particular researcher or approach; hundreds of researchers attempted the task, and none could predict accurately.” 

    Notwithstanding their diverse methods, the 160 teams produced predictions that were pretty close to one another — and not so good. As the researchers put it, “the submissions were much better at predicting each other than at predicting the truth.” A reasonable lesson is that even with the aid of AI, we really do not understand the relationship between where families are in one year and where they will be a few years hence. Seeming to draw that lesson, the authors of the Fragile Families Challenge suggest that their results “raise questions about the absolute level of predictive performance that is possible for some life outcomes, even with a rich data set.” You can learn a great deal about where someone now is in life, and still you, or AI, might not be able to say very much at all about specific outcomes in the future. 
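
    The structure of the Challenge can be made concrete. What follows is a minimal sketch, not the researchers’ actual pipeline: it uses scikit-learn and invented stand-in data (the real Fragile Families data are restricted), with far fewer variables than the actual 12,942, simply to show how a team’s out-of-sample accuracy gets compared with the simplest possible benchmark, predicting the mean. With only a weak signal in the data, even a sensible model does little better than that benchmark, which is roughly the pattern the Challenge found.

```python
# A minimal, hypothetical sketch of the prediction task the Challenge posed.
# The real Fragile Families data are restricted; here we use random stand-in
# data with many variables and a deliberately weak signal, which is enough to
# show how out-of-sample R^2 is measured against a predict-the-mean baseline.

import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_families, n_variables = 4262, 500          # far fewer than the 12,942 real variables
X = rng.normal(size=(n_families, n_variables))
# An outcome (say, grade point average) driven only weakly by the measured variables:
y = 0.1 * X[:, 0] - 0.05 * X[:, 1] + rng.normal(size=n_families)

# Half of the sixth-wave outcomes were released for training; half were held out.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

model = RidgeCV(alphas=np.logspace(-2, 3, 20)).fit(X_train, y_train)

baseline = np.full_like(y_test, y_train.mean())   # the "predict the mean" benchmark
print("model R^2:   ", round(r2_score(y_test, model.predict(X_test)), 3))
print("baseline R^2:", round(r2_score(y_test, baseline), 3))   # the gap is small
```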

    As GPT-4o put it in 2024, 

    At the time, the challenge highlighted the difficulty of predicting these outcomes, even with advanced machine learning techniques. One of the key findings was that models, while powerful, struggled with out-of-sample prediction. Many researchers found that human lives are influenced by so many complex and often random factors that standard models didn’t perform as well as anticipated. 

    That sounds a lot like Hayek, writing a long time ago. Asked whether AI could do better today, it responded, “predicting human behavior and life outcomes remains incredibly challenging due to the inherent complexity and unpredictability of life events. AI models, no matter how advanced, would still struggle with randomness, unmeasured variables, and ethical concerns regarding fairness and bias.” 

    Let us put to one side the point about randomness. Let us also (temporarily!) put aside ethical considerations, which do not bear on predictive accuracy. Let us focus on unmeasured variables, or the absence of relevant data, which are Hayek’s concern as well. Consider a girl named Susan, who is ten years old, and learn everything you can about her: her family, her demographics, her neighborhood, her schooling, her sports. Now predict various things about her life at the age of twenty-one. Will she be a doctor? A lawyer? A computer scientist? Do you have much confidence in your prediction? 

    You shouldn’t. The number of variables that can move a life in one direction or another is very high, and it is not possible to foresee them in advance. Someone might break a leg at a crucial moment, meet an amazing music teacher, find a new friend, hear a song on the radio on Sunday morning, or see something online or on the news that changes everything. 

    Love and Romance 

    Can AI algorithms predict whether you will fall in love with a stranger? Can AI help people to find romantic partners? 

    Thus far, the results on such counts are not promising. Samantha Joel and colleagues, in a study of “machine learning applied to initial romantic attraction,” find that AI algorithms struggle to predict “the compatibility elements of human mating . . . before two people meet,” even if one has a very large number of “self-report measures about traits and preferences that past researchers have identified as being relevant to mate selection.” Again sounding a lot like Hayek, Joel and her colleagues suggest that romantic attraction may well be less like a chemical reaction with predictable elements than “like an earthquake, such that the dynamic and chaos-like processes that cause its occurrence require considerable additional scientific inquiry before prediction is realistic.” 

    What are “dynamic and chaos-like processes”? It is worth pondering exactly what that phrase means. Most modestly, it might mean that AI needs far more data in order to make accurate predictions — far more, at least, than is provided by self-report measures about traits and preferences (“considerable additional scientific inquiry”). Such measures might tell us far too little about whether one person will be attracted to another. Perhaps we need more information about the relevant people, and perhaps we should focus on something other than such measures. It is possible that AI cannot make good predictions if it learns (for example) that Jane is an extrovert and that she likes football and Chinese food. It is possible that AI algorithms would do a lot better if they learned that Jane fell for John, who had certain characteristics that drew her to him, and also for Tom and Frank, who had the same characteristics. If so, perhaps she is most unlikely to fall for Fred, who has none of those characteristics, but quite likely to fall for Eric, who shares those characteristics with John, Tom, and Frank. 

    On this view, the right way to predict romantic attraction is to say, “If you like X and Y and Z, you will also like A and B, but not C and D!” Or perhaps we should ask whether people who are like Jane, in the relevant respects, are also drawn to Eric. Of course it would be necessary to identify the relevant respects in which people are like Jane, and that might be exceedingly challenging. Maybe AI could be helpful in that endeavor. Maybe not. 
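
    The “if you like X and Y and Z” idea is, in essence, what recommender systems call collaborative or similarity-based filtering. Here is a minimal sketch of that logic, with invented names, invented trait vectors, and invented past reactions; it is an illustration of the idea, not a claim about how any real matchmaking system works.

```python
# A minimal, hypothetical sketch of the "if you like X and Y and Z" idea:
# score a new candidate by his similarity to the men Jane has already liked.
# Names, traits, and ratings are invented for illustration.

import numpy as np

# Trait vectors: [extroverted, loves football, witty, much older]
men = {
    "John":  np.array([1.0, 1.0, 1.0, 0.0]),
    "Tom":   np.array([1.0, 0.5, 1.0, 0.0]),
    "Frank": np.array([0.5, 1.0, 1.0, 0.0]),
    "Fred":  np.array([0.0, 0.0, 0.0, 1.0]),
    "Eric":  np.array([1.0, 1.0, 0.5, 0.0]),
}
jane_liked = {"John": 1.0, "Tom": 1.0, "Frank": 1.0, "Fred": 0.0}  # past outcomes

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def predicted_liking(candidate):
    # Similarity-weighted average of Jane's past reactions.
    sims = {name: cosine(men[candidate], men[name]) for name in jane_liked}
    total = sum(abs(s) for s in sims.values()) + 1e-9
    return sum(sims[name] * jane_liked[name] for name in jane_liked) / total

print("Eric:", round(predicted_liking("Eric"), 2))  # high: he resembles John, Tom, Frank
# What this cannot see: the mood, the weather, the joke as the sandwiches arrive.
```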

    More radically, we might read the findings by Joel and her colleagues to suggest that romantic attraction is not predictable by AI algorithms for a different and more Hayekian reason: it depends on so many diverse factors, and on so many features of the particular context and the particular moment, that any form of AI will not be able to do very well in specifying the probability that Jane will fall for Eric. The reference to “dynamic and chaos-like processes” might be a shorthand way of capturing current mood, weather, location, time of day, background sounds, and a large assortment of other factors that help produce a sense of romantic connection or its absence. 

    Think a bit, if you would, about what made you feel a romantic spark. Did someone look at you in a certain way, at a certain moment, and then boom? What led to that boom? Do you even know? 

    Jane might smile at a specific time at a street corner, and look Eric in the eye, and Eric’s heart might flutter. Or Jane might not look Eric in the eye at that moment, because she is distracted by something that happened in the morning. Eric might say something witty as sandwiches arrive, because of something he read in the paper that morning, and that might initiate a chain of events that culminates in marriage and children. Or Jane might make a bad joke at a bad time, and Eric might think, “This really will not work.” 

    For romance, so much may depend on factors that cannot be identified in advance. This is the sense in which AI is sometimes like centralized planners: it does not have relevant information about time and place. Even the largest language models will still lack crucial data. (To continue the Hayekian analogy: for AI, there is nothing like the price system to supply the missing, dispersed knowledge.) 

    Careful 

    We have to be careful here. AI might be able to say that there is essentially no chance that Jane will like Carl, because there are things about Carl that we know, in advance, to be deal-breakers for Jane. Jane might not be drawn to short men or to tall men. She might not be attracted to much older men or to much younger men. She might not be attracted to men. An algorithm might be able to say that there is some chance that Jane will like Bruce; there is nothing about Bruce that is a deal-breaker for her, and there are some clear positives for her. Perhaps an algorithm can specify a range of probability that Jane will fall for Bruce; perhaps the probability of a romantic connection (suitably defined) is more than ten percent but less than seventy percent. An algorithm might be able to say that Eric is within the category of “it might well happen” for Jane, because Eric is in some sense “her type.” 

    Perhaps an algorithm can specify a range of probability for Jane and Eric; perhaps the probability of a romantic connection (suitably defined) is more than twenty percent but less than eighty percent. The real question is whether and to what extent AI will eventually be able to do much better than that. We might speculate that the importance of particular factors — the concrete circumstances — is so great that there are real limits on AI’s predictive power (even if AI might be able to outperform human beings, whose own predictive power is sharply limited in this context). 

    The topic of romantic attraction is intriguing in itself, and it can be seen as overlapping with an assortment of other prediction problems: whether you will enjoy living in Paris; whether you will become friends with a co-worker; whether you will like a new job; whether a pandemic will occur in the next five years; whether a recession will occur in the next six months; whether a new movie will make a specified amount of money; whether a new book will hit the bestseller list. We have seen that in stable environments with fixed rules, AI algorithms, armed with a great deal of data, are able to make pretty good predictions. But if the future is unlikely to be like the past, there is a real question whether, where, and when AI algorithms will do well, or even outperform human beings. 

    The Hayekian problem might be the sheer number of relevant factors, not knowable in advance, that might produce one or another outcome; this is why the case of romantic attraction has general lessons. (Friendship is similar.) Another problem might be an external shock or unexpected event, which might turn everything around (a technological innovation, a terrorist attack, a pandemic, or in the context of romance, a recent loss for Jane, which opens her heart). Yet another problem might be how factors interact with each other (lasagna followed by chocolate cake might spell romance). We are speaking here of the essentially unpredictable nature of many events, because of the absence of necessary data. 

    Revolution Is in the Air 

    In an important work called Private Truths, Public Lies, which predated the rise of AI, the economist Timur Kuran urged that revolutions were unpredictable by their very nature. Kuran argued that an underlying problem lies in “preference falsification”: people do not disclose their preferences, which means that we cannot know whether they will, in fact, be receptive to a revolutionary movement. If we do not know what people’s preferences are, we will not know whether they might be willing to participate in a rebellion given the proper circumstances. 

    I asked GPT-4o whether AI could predict revolutions, and it said this: 

    Predicting revolutions or large-scale social upheavals is extremely challenging, even for modern AI. While AI has advanced significantly in its ability to analyze complex data, revolutions are driven by a mix of structural factors (e.g., economic inequality, political repression) and unpredictable triggers (e.g., sudden political decisions, individual actions, or random events). The difficulty lies in the dynamic, multi-layered nature of these events. . . . Revolutions often result from unforeseen or seemingly minor events that AI may not anticipate — so-called “black swan” events. For instance, the Arab Spring was sparked by the self-immolation of a Tunisian street vendor, an act that set off widespread protests across the Middle East. . . . AI relies heavily on historical data to make predictions, but reliable data on political discontent, underground movements, or the true state of public sentiment is often scarce, especially in authoritarian regimes. . . . The most AI can do is flag societies at risk and provide real-time alerts to possible flashpoints. 

    It is true that we, and AI, might be able to learn something about when a revolution is improbable in the extreme, and also about when a revolution is at least possible. For one thing, we might be able to make at least some progress in identifying private preferences — for example, by helping people feel safe to say that they dislike the status quo, perhaps by showing sympathy with that view, or perhaps by guaranteeing anonymity. AI might well be able to help on that count. Kuran wrote before the emergence of social media platforms, which give us unprecedented opportunities to observe hitherto unobservable preferences — for example, via prompts, posts, and Google searches, which might reveal widespread dissatisfaction with the current government. 

    Perhaps AI can say something about probabilities, based on data of this kind. But if GPT is right, AI will not be able to tell us a whole lot, because its knowledge of preferences and thresholds (the point at which a person, seeing enough others act, becomes willing to join) will be limited, and because it will not be able to foresee social interactions. The general analysis should not be limited to revolutions. Preference falsification, diverse thresholds, and social interactions — one or more of these are in play in many domains. 

    When will marriages break up? When will employees engage in some kind of revolt? When will we see something like #MeToo? When will a populist movement emerge and succeed? AI might be able to tell us something, but not nearly everything. 
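
    Kuran’s account is often formalized as a threshold model, in the spirit of Mark Granovetter’s work on collective behavior: each person joins only once enough others have joined, and the thresholds themselves are hidden. A minimal sketch, with invented numbers, shows why two societies that look identical from the outside can behave entirely differently, and why the difference is invisible in advance.

```python
# A minimal sketch of a Kuran/Granovetter-style threshold cascade.
# Each person joins only if the current number of protesters meets
# their (hidden) threshold. The thresholds below are invented.

def cascade_size(thresholds):
    protesters = 0
    while True:
        joiners = sum(1 for t in thresholds if t <= protesters)
        if joiners == protesters:      # no one new is willing to join
            return protesters
        protesters = joiners

society_a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]   # a perfect chain of thresholds
society_b = [0, 1, 2, 3, 5, 5, 6, 7, 8, 9]   # one person's threshold is 5, not 4

print(cascade_size(society_a))   # 10: the whole society revolts
print(cascade_size(society_b))   # 4: the cascade stalls, almost invisibly
```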

    Hits! 

    Consider the question whether books, movies, or musical albums are likely to succeed. Of course we might know that a new album by Taylor Swift is likely to do well, and that a new album by a singer who is both terrible and unknown is likely to fail. A few decades ago I was part of a rock group called Serendipity. You haven’t heard of us, and we were terrible; there was no chance that we could succeed. You don’t need AI to know that. But across a wide range, a great deal depends on social interactions and apparent accidents, and on who says or does what exactly when. In such circumstances, AI might not be able to help much. 

    This point clearly emerges from research from a number of years ago, when Matthew Salganik, Duncan Watts, and Peter Dodds investigated the sources of cultural success and failure in what is known as their Music Lab experiment. Their starting point was that those who sell books, movies, television shows, and songs often have a great deal of trouble predicting what will succeed. Even experts make serious mistakes. Some products are far more successful than anticipated, whereas some are far less so. This seems to suggest, very simply, that those that succeed must be far better than those that do not. But if they are so much better, why are predictions so difficult? Why do the best analysts fail? No one anticipated the success of the Harry Potter series; the Beatles couldn’t get a record deal; the rise of Donald Trump was a shock. 

    To explore the sources of cultural success and failure, Salganik and his coauthors created an artificial music market on a preexisting website. The site offered people an opportunity to hear forty-eight real but unknown songs by real but unknown bands. One song, by a band called Calefaction, was called “Trapped in an Orange Peel.” Another, by Hydraulic Sandwich, was called “Separation Anxiety.” The experimenters randomly sorted half of about fourteen thousand site visitors into an “independent judgment” group, in which they were invited to listen to brief excerpts, to rate songs, and to decide whether to download them. From those seven thousand visitors, Salganik and his coauthors could obtain a clear sense of what people liked best. The other seven thousand visitors were sorted into a “social influence” group, which was exactly the same except in just one respect: the social influence group could see how many times each song had been downloaded by other participants. 

    Those in the social influence group were also randomly assigned to one of eight subgroups, in which they could see only the number of downloads in their own subgroup. In those different subgroups, it was inevitable that different songs would attract different initial numbers of downloads as a result of unknown factors. “Trapped in an Orange Peel” might attract strong support from the first listeners in one subgroup, whereas it might attract no such support in another. “Separation Anxiety” might be unpopular in its first hours in one subgroup but attract a great deal of favorable attention in another. 

    The research questions were simple: would the initial numbers affect where songs would end up in terms of total number of downloads? Would the initial numbers affect the ultimate rankings of the forty-eight songs? Would the eight subgroups differ in those rankings? You might hypothesize that after a period, quality would always prevail — that in this relatively simple setting, where various extraneous factors (such as reviews) were highly unlikely to be at work, the popularity of the songs, as measured by their download rankings, would be roughly the same in the independent group and in all eight of the social influence groups. 

    It is a tempting hypothesis, but it is not at all what happened. “Trapped in an Orange Peel” could be a major hit or a miserable flop, depending on whether a lot of other people initially downloaded it and were seen to have done so. To a significant degree, everything turned on initial popularity. Almost any song could end up popular or not, depending on whether or not the first visitors liked it. Importantly, there is one qualification: the songs that did the very best in the independent judgment group rarely did very badly, and the songs that did the very worst in the independent judgment group rarely did spectacularly well. But otherwise almost anything could happen. 

    The apparent lesson is that success and failure in cultural markets are exceedingly hard to predict, whether we are speaking of AI or human beings. Here is one reason: it is exceedingly difficult to know, in advance, whether a cultural product will benefit from the equivalent of early downloads. True, such things are not unknowable in principle. If you knew everything about everything, you would know that, just as if you knew everything about everything, you would know whether a specific coin will come up heads or tails if I toss it in the air in the next second, or whether Carl and Eleanor are going to fall in love if they have lunch next Tuesday, or whether there is going to be a revolution in a specific nation in February of next year. But it is not easy to know everything about everything. 
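
    The dynamic the Music Lab documented is often described as cumulative advantage: early downloads make a song more visible, which attracts further downloads. A minimal simulation sketch, with invented numbers (the social_weight exponent is a made-up knob, not a parameter from the study), shows how identical songs end up with wildly different winners across otherwise identical “worlds.” Run it with different seeds and a different song wins almost every time; set social_weight to zero and the download counts flatten out.

```python
# A minimal sketch of cumulative advantage across parallel "worlds."
# All 48 songs have identical underlying appeal; download counts diverge
# only because early, random downloads make a song more visible later.

import random

def run_world(n_songs=48, n_listeners=7000, social_weight=5.0, seed=0):
    rng = random.Random(seed)
    downloads = [1] * n_songs                      # start each song with one download
    for _ in range(n_listeners):
        # The probability of choosing a song rises with its current download count.
        weights = [d ** social_weight for d in downloads]
        choice = rng.choices(range(n_songs), weights=weights)[0]
        downloads[choice] += 1
    return downloads

for world in range(3):
    counts = run_world(seed=world)
    top = max(range(48), key=lambda s: counts[s])
    print(f"world {world}: hit = song {top}, downloads = {counts[top]}")
```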

    Cute 

    What about business? What about products? Where do people want to travel? (Paris, Berlin, London, Copenhagen, Vienna, Prague, Beijing, Dublin, Amsterdam, Boston, New York?) Where do people want to study? What objects do people like or not like? With respect to products, an experiment modeled on the Music Lab found the same pattern. The experiment involved “Meet the Ganimals,” an online platform where people can generate and curate “ganimals,” which are AI-generated hybrid animals. People can also say how much they like particular ganimals and rate them in terms of cuteness, creepiness, realism, and other variables. 

    As in the Music Lab experiment, people were sorted into independent conditions, in which they made evaluations entirely on their own, and social influence conditions, in which they could see what other people thought. Just as in the Music Lab experiment, participants were randomly assigned to one of multiple online “worlds,” each of which evolved independently of the others. Participants saw only ganimals discovered and votes cast by others in their online world, and the ranking of ganimals was based only on votes in that world. 

    You might think that some ganimals really are adorable and that others really are not, and that in the end the adorable ones would be counted as adorable and the not-adorable ones would be counted as not-adorable. But here again, social influences greatly mattered. In the social influence worlds, outcomes turned out to be more unequal and highly unpredictable. Without social influences, different groups converged in their enthusiasm toward precisely the same set of ganimal features. (If you are curious: ganimals have eyes, a head, and dog-like features.) But with social influences, groups rapidly evolved into diverse local cultures that dramatically diverged from the pattern in the independent judgment conditions. One ganimal could be spectacularly popular in one group and essentially unknown in another. The findings were very similar to those in the Music Lab. 

    Shall we draw a large lesson? Many markets have a lot in common with the market for ganimals. People aren’t going to think that a gruesome ganimal is adorable. If you have something with eyes, a head, and dog-like features, you might be golden. But maybe not. Diverse local cultures can arise, and a fabulous product might get attention in one of them, and no attention at all in another. Could AI predict which products will get attention in which cultures? Maybe so. But maybe not, if social interactions, based on an assortment of factors on which data cannot be obtained in advance, turn out to be crucial. The success of Barack Obama in 2008 and Donald Trump in 2016 depended on such factors, and the same is true for the rise of Jane Austen, The Beatles, Star Wars, and Taylor Swift. 

    Knightian Uncertainty 

    In 1921, the great American economist Frank Knight wrote: “Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated. . . . The essential fact is that ‘risk’ means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomena depending on which of the two is really present and operating.” Knight was referring to what is now called “Knightian uncertainty”: circumstances in which probabilities cannot be assigned to future events. The social theorist Jon Elster offers an example: “One could certainly elicit from a political scientist the subjective probability that he attaches to the prediction that Norway in the year 3000 will be a democracy rather than a dictatorship, but would anyone even contemplate acting on the basis of this numerical magnitude?” 

    Regulators, ordinary people, and AI are sometimes acting in situations of Knightian uncertainty (where outcomes can be identified but no probabilities can be assigned) rather than risk (where outcomes can be identified and probabilities assigned to various outcomes). Some people appear to think that AI creates an uncertain risk of catastrophe, including the extinction of the human race. Consider in this regard a passage from John Maynard Keynes, writing in 1937: 

    By “uncertain” knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty; nor is the prospect of a Victory bond being drawn. Or, again, the expectation of life is only slightly uncertain. Even the weather is only moderately uncertain. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth-owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know. 

    Sounding a lot like Knight, Keynes insisted that some of the time we cannot assign probabilities to imaginable outcomes. “We simply do not know.” 

    Keynes immediately added, however, with evident bemusement, that “the necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability, waiting to be summed.” But how on earth, he wondered, do we manage to do that? Keynes listed three techniques — and they are worth considering when we leave behind the legends about AI and soberly assess its capabilities: 

    1. We assume that the present is a much more serviceable guide to the future than a candid examination of past experience would show it to have been hitherto. In other words, we largely ignore the prospect of future changes about the actual character of which we know nothing. 
    2. We assume that the existing state of opinion as expressed in prices and the character of existing output is based on a correct summing-up of future prospects, so that we can accept it as such unless and until something new and relevant comes into the picture.
    3. Knowing that our own individual judgment is worthless, we endeavor to fall back on the judgment of the rest of the world which is perhaps better informed. That is, we endeavor to conform with the behavior of the majority or the average. The psychology of a society of individuals each of whom is endeavoring to copy the others leads to what we may strictly term a conventional judgment. 

    Keynes did not mean to celebrate those techniques. Actually he thought that they were ridiculous. We might know, for example, that technological innovations have not produced horrific harm in the past, and so we might think that AI will not produce such harm in the future. Or, as a good Hayekian, AI might look at the price signal to assess the risks associated with climate change. Or AI might follow the wisdom of crowds to assess the likelihood of a pandemic. But under circumstances of uncertainty, should we trust any of these methods? “All these pretty, polite techniques, made for a well-paneled Board Room and a nicely regulated market, are liable to collapse,” Keynes declared, because “we know very little about the future.” Those last seven words may seem simple or obvious, but they are not at all obvious to the inventors of and investors in the allegedly predictive algorithmic technologies that are now overwhelming our society. 

    Keynes emphasized the difficulty or the impossibility of assigning probabilities to outcomes, but he also signaled more generally the problem of ignorance, in which we are unable to specify either the probability of bad outcomes or their nature — where we do not even know the kinds or magnitudes of the harms that we are facing. One reason for our ignorance might be that we are dealing with a novel, unique, or non-repeatable event. Another reason might be that we are dealing with a system whose components interact in ways that we cannot know in advance, which makes predictions highly unreliable. 

    Back to the Future 

    There are some prediction problems on which AI will not do well, and the reason lies in an absence of adequate data, and in what we might wisely see as the intrinsic unpredictability of (some) human affairs. In some cases, AI will be able to make progress over time. But in important cases, in which we are dealing with complex phenomena, and the real problem is that the relevant data are simply not available in advance, accurate predictions are impossible — not now, and not in the future. Never. 

    Consider now, if you would, a heartbreaking remark that Benjamin Franklin’s sister Jane addressed to her brother, lamenting the “Thousands of Boyles Clarks and Newtons” who “have Probably been lost to the world, and lived and died in Ignorance and meanness, merely for want of being Placed in favourable Situations, and Injoying Proper Advantages.” Who is placed in unfavourable situations? Who faces disadvantages? Both of these take diverse forms. We might speak of an absence of education; Franklin herself was not allowed proper schooling. We might speak of an absence of economic opportunity. Or we might speak more specifically, and less systematically, about the absence of a mentor, a helping hand, a nod of appreciation, a glimpse of something wondrous, an infusion of money, a year off, a friend or family member who refuses to give up. 

    In the domain of innovation in general, social scientists, sounding a lot like the despondent Jane Franklin, refer to “Lost Einsteins” — in the words of one of them, those “who would have had highly impactful inventions had they been exposed to innovation in childhood.” The emphasis here is on demographic characteristics, such as race, gender, and socioeconomic status, and on the contributions of role models and network effects to success. Countless potential innovators, in science, business, and elsewhere, were subjugated in some way, were born in a particular family, did not find the right role models, or did not benefit from networks. As a result, they never innovated. They lost life’s lottery, or a series of smaller lotteries. 

    There are lost Da Vincis, lost Shakespeares, lost Miltons, lost Austens, lost Dickenses, lost Ellisons, lost Rothkos, lost Scorseses, lost Stan Lees, and lost Bob Dylans. There are lost Edisons and lost Doudnas and lost Teslas. (Nikola, not the car.) There are plenty of them. They have been lost for a thousand and one different reasons. If innovators have been lost, it is not only because of demographic characteristics, but also because of a host of factors, not identifiable in advance, which did not work in their favor. Someone might not have given them a path, a smile at the right time, an infusion of energy, or a contract. 

    That conclusion might seem to point to a tragedy, even to countless tragedies — not only for those who have been lost, but also for those of us who have lost them, perhaps because they were never given an opportunity, perhaps because they were never given attention. In many ways, that is indeed tragic. But it also points to a possibility or perhaps even an inspiration. Lost Einsteins, or lost Shakespeares and Miltons, might be unlost, or found again. In fact they are being found every day. And if we can stay alert to the fact of their existence among us, many fewer will get lost in the first place. And that is a point about justice.

    Two Slogans, Three Presidents, and the Fight for American Foreign Policy

    I

    With war raging in Ukraine indefinitely and instability flourishing in the Middle East and trade wars overwhelming our relations around the globe — and more generally with American leadership in the world deafeningly absent — the world appears to have been so completely transformed by Donald Trump’s foreign policy that precedents may seem irrelevant to our understanding of our current situation. If we lead at all now, it is only to make things worse. It may seem useless to look back even a few years, to the neolithic age when the Democratic Party was in charge of American foreign policy. But look back we must. For the obligations and the challenges of managing America’s international relations will outlast any American government, and one day a Democrat will sit in the Oval Office again. When that happens, a return to what we used to think of as normalcy will be progress enough, though the magnitude of Trump’s destruction of our position beyond our borders will take a long time to correct. But real success — the restoration of a rational and moral calculus of interests and values — will require that the right lessons, including the cautionary lessons, be learned from the Clinton, Obama, and Biden administrations.

    The Trump administration’s international orientation will be a partial guide in this new thinking: it will show us what not to do, unless we seek an isolated and insulated America. Even though there are isolationist currents in the Democratic Party, it is not a party with a tradition of America First. We are for enlightened world leadership. (A phrase that needs unpacking, of course.) We are against denying Vladimir Putin’s responsibility for the invasion of Ukraine. We are for providing Ukraine the military support it needs to defend itself. We are against economic warfare with our friends and allies in Europe and Asia. We are for a strong NATO alliance and a strengthened security architecture with Asian allies, especially Japan, South Korea, Australia, and the Philippines. We still believe — I hope — that American power can be used for good in faraway places, and — I hope again — that the United States must act in the face of various kinds of emergencies and atrocities. And of course we emphasize the set of global issues that includes climate change, global health, and food security.

    When the happy day comes and Trump is back home golfing and hustling, the immediate challenge will be to restore America’s role as a leader, a friend, and an ally. The second Trump administration has in a matter of months squandered the soft power that took seventy-five years for America to create. With its economic warfare against friend and foe alike, as well as its disdainful attitude towards longstanding allies, trust and confidence in Washington’s leadership have been shattered and may never return to their previous heights. His capricious demolition of America’s universally admired programs of foreign assistance amounts to a colossal blow to America’s position in the world, not to mention the immensity of the suffering that it is causing. It is so much easier to destroy than to build. But there will be no third Trump administration, and if the pendulum of American politics swings back towards the Democratic Party, it will be imperative to understand the foreign policies of the post-Cold War Democratic administrations with lucidity. 

    Today America cannot lead because America is alone. Unsplendid isolation, we might call it. Isolationism always fulfills its own dream. When the Trump administration looks behind, it will find few followers. And yet it will at the same time insist upon American supremacy! (I am ignoring, of course, the dizzying inconstancy of Trump’s foreign policy, the daily and even hourly inconsistencies, which makes any generalization about it provisional and may prove to be the administration’s worst failing in foreign affairs. For the next three years, the better part of diplomatic wisdom for the nations of the world will be to try and catch a wave.) The international reputation that the founding fathers of the post-World War II generation created for the United States, Democratic and Republican presidents alike, had a powerful result: the United States was a rare combination of feared, admired, and respected. To be sure, it was also hated, but anti-Americanism did not suffice to overthrow the America-led world order, especially after the collapse of the Soviet Union.

    The erosion of the America-led world order was in fact begun by the United States. The Iraq war accomplished that, and in many ways it remains the gift that keeps on giving. (“Vietnam Syndrome” was nothing compared to this. Ronald Reagan was elected a mere five years after our retreat from Saigon.) Our reputation was damaged for at least a generation, as Dick Cheney and Donald Rumsfeld became the ugly Americans of a new era and the incompetence of the occupation of Iraq undermined America’s reputation for getting the job done. Later, the post-Iraq reluctance to act under the Obama administration further weakened the deterrent effect of America’s military might and taught the nations of the world to call America’s bluff. 

    Any honest assessment of America’s role in the world must start with the recognition that America’s power has been grievously diminished in recent decades — notwithstanding the successful air attacks on Iran’s nuclear facilities and, more than a decade earlier, the unforgettable Special Forces operation that killed Osama Bin Laden. For it is not just our economic, military, and political system that determines power, but also our alliances and friendships around the world: they are what has differentiated America’s role from that of previous hegemons. (We were a hegemon, but we were never an empire; whatever that abused term means, it does not denote a system of alliances.) By rejecting the daily diplomacy of alliance building and management, what former Secretary of State George Shultz nicely called “diplomacy as gardening,” the two Trump administrations have done grievous and utterly self-inflicted harm to the force multiplier effect of America’s alliances. 

    What is the foreign policy of the Democrats? In recent decades it has been deeply contested. The contest was between two ideas that were perfectly encapsulated in two colloquial slogans by two presidents. “Get caught tryin’,” Bill Clinton instructed. “Don’t do stupid shit,” Barack Obama taught. Joe Biden left no epigram of his own, and his administration continued to be wracked by the debate, though in its responses to Russia’s invasion of Ukraine and the war begun by Hamas’ attacks on October 7, 2023, it seemed to be settling the matter with the proper respect for power and principle. 

    II 

    Bill Clinton often expressed regret that he was not president at a more dramatic moment in American history, such as World War II or the Civil War. History in the years of his presidency was not in overdrive, and it was without a trace of the apocalyptic tone. As it turned out, he presided over a remarkable period of peace and prosperity — not exactly a minor achievement. And when, in the Balkans, history demanded not just American prosperity but also American leadership and American action, he led the West (at first slowly and reluctantly) to a moment that The Economist called “the height of European civilization,” owing to the U.S.–NATO decisions to bring a halt to the Serbian atrocities in Bosnia and to prevent a genocide against Kosovo’s Albanian population. In 1999, we combined force and diplomacy and defeated the aggression of Slobodan Milosevic in the heart of Europe. In an act of moral and humanitarian determination backed up by our hard power, we united to thwart an imminent mass slaughter of the Kosovar Albanians. Unlike Rwanda or Bosnia, this was a genocide prevented. 

    Equally important, the way in which Washington led in Kosovo was crucial. Extensive diplomacy led to widespread international support for the war, as the Belgrade government was given every possible chance to accept a peaceful solution. Working with key European allies and Moscow, Washington secured a UN resolution that laid out a postwar plan, with the burden of deployment and reconstruction shared by all. In the end, Kosovo became a functioning emerging democracy and is now an independent state — and a tribute to Bill Clinton’s and Madeleine Albright’s commitment to get caught trying. Kosovo was done for the right reasons and it was done the right way. (As an official at the State Department in those years I played a role in this effort.) Like Bosnia, Kosovo demonstrated that it was possible to use force justly, and to use limited force for a limited goal. This does not mean it can be done in all places, only that it can be done in some places — and that is a lot. 

    The late 1980s and the 1990s were a dazzling moment in world history. Apartheid fell in South Africa. Soviet communism collapsed in Russia and in Eastern and Central Europe. At Tiananmen Square, countless Chinese demonstrated their dedication to democratic values. Democracy was indeed ascendant. But history was not over. (Has there ever been a dumber notion than “the end of history”?) Perhaps because Clinton was elected to focus on the economy and because the Democrats had been out of power for twelve years, the Clinton administration had an inauspicious beginning in foreign policy. Indeed, the combined chaos in Somalia, Haiti, and Bosnia managed to overshadow the outbreak of freedom and democratic change across Europe and Asia. But by the end of Clinton’s first term, the ship of state was righted — beginning with the ouster of Haiti’s dictators as a consequence of diplomacy backed by force and an endorsement by the United Nations, and finally, after far too long, the air strikes against the Bosnian Serbs which led to the Dayton Accords in 1995. Ending the war in Bosnia, which President Clinton did with the help of President Chirac, taught an important lesson, showing how U.S.-led diplomacy backed by military power could be America’s method of leadership. For the Clinton administration, unlike its Republican successors, deemed it important to pay “a decent respect to the opinion of mankind.” 

    We got caught trying, and it worked. And having seen how force and diplomacy could genuinely resolve crises over Taiwan in 1996 (when two Aircraft Carrier Battle Groups were deployed near the Taiwan Straits, prompting Beijing to halt its intimidation of Taipei) and Iraq in 1998 (when Operation Desert Fox temporarily set back Iraq’s weapons of mass destruction programs after Saddam Hussein refused access to UN inspectors), the Clinton administration determined to take the lead in resolving international crises. Applying America’s unique power position, as well as the prestige of the White House lawn, the Clinton administration embraced America’s leadership role. Consider the remarkable diplomatic record of America the Peacemaker in that era. With President Clinton’s help, lasting peace was established in Northern Ireland. In the Middle East, there was peace between Israel and Jordan, the Oslo Accords, and the Hebron and Wye River Accords, all of which saw the Palestinians given substantial autonomy and land and Israelis living in greater security. The challenge of North Korea’s nuclear proliferation was also dealt with effectively in 1994 in the U.S.–North Korea nuclear accord. Secretary Albright was even dispatched to Pyongyang in the final weeks of the administration to get caught trying to stop North Korea from deploying long-range ballistic missiles. 

    And most famously of all, President Clinton himself got caught trying to secure the dream of a permanent Middle East peace at Camp David in 2000, along with Ehud Barak and Yasser Arafat, and later through the Clinton Parameters in his final weeks as president. A peace plan was even put forward by Clinton and Albright to the Syrian dictator Hafez al-Assad at Shepherdstown, West Virginia, which envisioned the return of the Golan Heights to Damascus in exchange for recognition and security for the State of Israel. But just like his fellow Baath Party thug Saddam Hussein, Assad would not take yes for an answer and rejected any reasonable outcome. Considering what later happened to Syria, it seems unimaginably foolish that Assad spurned Clinton’s diplomatic offer back in 1999 and 2000. 

    This was American diplomacy at its most noble, pursuing solutions and working with difficult regimes on behalf of our friends and allies. Along with the prudent use of American power in Iraq, the Taiwan Straits, and the Balkans, it was this relentless pursuit of peace that generated substantial admiration and respect for the United States in those days. It was also during the second Clinton administration that NATO was enlarged to include Poland, Hungary, and the Czech Republic. With Albright leading the diplomacy, a way was found to bring security to Central and Eastern Europe from future Russian revanchism and still negotiate the NATO–Russia Founding Act, which saw Moscow acquiesce in diplomatic arrangements to ease the pain of losing former client states to the NATO alliance. (I recognize that the expansion of NATO is still controversial in some quarters, but the Russian aggression against Ukraine should have settled that dispute.) 

    In retrospect, the most significant mistake of Clinton’s activism abroad was the effort to coax China into becoming a responsible member of the community of nations. At Clinton’s behest, China was made a member of the World Trade Organization, and Beijing’s response was to exploit the West’s openness by growing (and stealing) its way to the point of becoming a rival and a potential threat to the West, as its phenomenal economic boom, its across-the-board adoption of advanced technology, and its massive military build-up now jeopardize America’s technological edge and strategic paradigm. To be fair, our China policy was bipartisan. Leaders in both parties were under the illusion that China would become more and more transparent and abide by the rules of the trade game if given the chance. Indeed, it was during the George W. Bush administration that Deputy Secretary of State Robert Zoellick famously called for helping China to become a “responsible stakeholder” in international arrangements. But after Xi Jinping came to power some years later, nearly all concerned came to rue the day we gave China access to advanced technology and world markets. 

    What difference a Clintonian approach would have made in subsequent decades is, of course, unknowable. We do know that the Balkan model of force and diplomacy was consciously spurned during Bush’s two terms. And President Obama, to my knowledge, never once cited Clinton’s successes in Bosnia and Kosovo as a model worth emulating, even though it would have been natural enough for him to tout the previous Democratic president’s successes. Perhaps he opposed them as a young legislator in Chicago. Many years later, by contrast, during the Syrian catastrophe, when Obama was idling in the White House, a friend (the editor of this journal) did ask Clinton what he would have done differently than Obama in Syria, and he replied: “I don’t know, but I would have been caught tryin’.” 

    III 

    The foreign policy of the Obama years was marked above all by a dire case of Iraq War Syndrome. Obama and his team did all they could to hype his early remark at a rally in Chicago in opposition to Bush’s decision to invade Iraq. It was a stupid war, he said. And that, of course, led to the slogan that defined his foreign policy over two terms. On Air Force One, Obama famously told the White House press corps that his guiding principle was, “Don’t do stupid shit.” The cautionary example of Iraq became a kind of obsession-compulsion for Obama and his policymakers, as if it was all you needed to know about American power and the world. 

    On the surface, by restoring American support for European allies after the war and often deferring to them, the Obama administration undid much of the damage wrought by the Bush administration’s bullying unilateralism. Remember, even before the war in Iraq, Bush and Cheney and Rumsfeld had alienated nearly every single American ally by rejecting a series of arms control treaties with Russia, including the ABM Treaty, the building block of them all, negotiated by President Richard Nixon in 1972, as well as the Chemical Weapons Convention and the Biological Weapons Convention. To round it off, the administration pulled out of the Kyoto climate accords, and the immeasurably haughty Rumsfeld even expressed disdain for the Geneva Conventions regarding the treatment of prisoners of war. Obama’s administration reversed all that, which was a good thing, and worked on climate change and restoring discussions with Russia about strategic nuclear arms. Combined with the fact of twice electing an African American as president, America’s reputation was substantially restored as admiration for Obama the man soared around the world and allied governments found it easy to work with Obama’s diplomatic team, led in the first term by Secretary of State Hillary Clinton, who proved to be enormously popular on the world stage. When U.S. Special Forces finally located and killed Osama Bin Laden in a dramatic raid inside Pakistan in 2011, many observers judged that respect for American leadership “was back.” 

    Yet brilliant as his anti-war stance was politically, since most Americans had turned against the mission after the disasters of the American-led occupation, the ground truth on Obama and Iraq was much more confusing. Although few in the Washington press corps wanted to focus on it, in a moment of candor Obama revealed to one of his most determined admirers, David Remnick of The New Yorker, that if he had been a senator in 2003 and read all of those flawed intelligence reports detailing Saddam Hussein’s extensive possession of chemical and biological weapons, he might well have voted for the second Gulf war, as Hillary Clinton and Joe Biden and John Kerry did. Despite this acknowledgement, however, his campaign team and his White House aides regularly pilloried those senators. When he ran for president, Obama’s sole credential in foreign policy was his progressive opposition to the war from the start, but it turns out that his claim to greater wisdom than the Democratic establishment was not exactly true. While politically Obama got away with hanging the war in Iraq around Hillary Clinton’s neck in the primaries in 2008, he must not have felt his anti-war position too strongly, since it did not inhibit him from appointing pro-Iraq Democrats to be his Vice-President and both of his Secretaries of State and to direct his foreign policy. 

    The confusion in the Democratic Party on this fundamental question surely contributed to the essential confusion at the heart of Obama’s foreign policy. He pulled out of Iraq and then was forced to return American forces with the rise of the Islamic State in the vacuum left by the American withdrawal. He famously surged U.S. forces in Afghanistan after a lengthy policy review and then undercut that surge by establishing a short and public timeline for the withdrawal of the troops that were to be sent in, signaling to the Taliban they should just wait us out. 

    But the real damage done by the Obama administration was in its failure to follow through with the threat to use force in Syria — the infamous red line that wasn’t a red line. The American president was caught bluffing. This stupendous blunder was opposed by nearly all his foreign policy team save for his Chief of Staff Denis McDonough, who walked him around the White House lawn to cook up his last-minute call for a congressional vote on the intervention that he had promised and upon which the global credibility of the United States depended. This prevarication led to the swift intrusion of Vladimir Putin and Russia into Syria and the region — and the Russian government, perhaps fearing for its personnel who at the time were conducting clandestine chemical experiments at Syrian facilities, obligingly arranged for a disarmament program for Syria’s chemical weapons, which allowed Obama to avoid conducting even the limited air attacks that were under consideration. The man who never wanted to be a war president no doubt slept more easily, but Syria descended further into hell, along with the global reputation of the United States. 

    This was stupid shit par excellence. Yes, Syria’s chemical arsenal was dismantled. (The fact that Bashar al-Assad immediately agreed to surrender the entirety of his chemical arsenal — or almost the entirety of it, as his later use of chemical weapons revealed — is evidence of how afraid he was of American military intervention.) But the damage to America’s leadership in the world was done. Their oh-so-clever stratagem proved far more consequential than Obama and his chief of staff could have imagined. America’s friends and allies in Europe, the Middle East, and Asia uniformly argued that they could no longer count on an American president to follow through on his commitment to use military power. Talking to a top French national security official at the time, I was met with a visceral attack on American leadership and furious claims of irreparable damage to Western policy in the Middle East. The Kremlin took Obama’s reversal as a golden opportunity to return to Syria, to deploy its modernized air and ground forces, and show what a real ally does. And with the help of Iran’s Quds Force, Russia applied the brutal lessons learned in Chechnya to flatten rebel cities across Syria. Assad’s savage hold on power was soon re-established, leading to the horror of an estimated half a million Syrians killed and almost an entire population internally and externally displaced, not to mention the additional calamity of Iran gaining suzerainty over the country and a base on the Mediterranean Sea. Syria was the worst case of American bystanderism in our time. 

    Ironically, it was President Trump a few years later who exposed the stupidity of Obama’s fear that the use of air power in Syria would lead to another Iraq-like disaster. Trump had also disparaged the Iraq war, but he had no problem following through on his threats to use American air power when chemical weapons were used against Syrian civilians. Indeed, he launched moderate-sized air attacks on the Syrian military twice in response to their violations of his red lines. And in a twist of fate that still boggles the mind, owing to the proven credibility of Trump’s threats to use force, Assad — and Putin — were deterred by a Trump tweet from finishing off the rebel alliance in Idlib, a small province in the north where rebels and dissidents and refugees were holed up. And it was those very rebels nearly a decade later who launched the rebellion that overthrew Assad this year, allowing the Biden administration and then the Trump administration to support Syria’s transformation into a relatively free country, now no longer under American sanctions and working on a diplomatic settlement with Israel. This extraordinary development was one of the delayed consequences of the calibrated but determined use of American (and later Israeli) power. 

    In retrospect, however, it was the Obama administration’s approach to Putin’s invasion of Crimea and Eastern Ukraine in 2014 that looks to have been even more consequential than the Syrian red line debacle. For the Obama administration essentially abdicated any leadership role in the response to Putin’s invasion, despite the fact that the United States had offered a security assurance to Ukraine in the Budapest Memorandum in 1994 in return for Kyiv giving up its Soviet nuclear weapons after the fall of the Soviet Union. Yes, there was a half-hearted sanctions package imposed on Russia, and some Obama officials did condemn Putin’s brazen invasion. But two decisions stand out ten years later. First, Obama declared publicly that he believed that Russia had “escalation dominance” in Europe. Even if true, signaling such an assumption to Russia was unforgivable, and hard to fathom. It gave Putin a green light. (Nor was it necessarily true.) And all the while the Democrats were still deriding Mitt Romney’s comment in the presidential campaign two years earlier that “Russia is, without question, our number one geopolitical foe.” Such a view, they said, was “so nineteenth-century.” 

    Second, Washington absented itself from the diplomacy that ended the initial Russian assault. It was the French president, François Hollande, and the German Chancellor, Angela Merkel, who negotiated directly with Putin, producing the Minsk Agreement, with its endless arguments and violations. I am absolutely certain that in her time Madeleine Albright would have resigned if her president had instructed her to absent herself from a diplomatic settlement of a Russian military assault in the heart of Europe. Inaction was the American response to Putin’s invasion of 2014, which is why Putin probably never expected the West to act so decisively eight years later when he launched a full-fledged invasion. 

    When it came to humanitarian aid, the Obama administration’s caution and fear of signaling any American involvement became both tragic and farcical. After ruling out military assistance to Ukraine, including purely defensive weapons, the administration also banned non-lethal assistance, such as uniforms and vehicles. The one measure of assistance eventually authorized by Obama was humanitarian assistance — and here the caution became an absurdity. While a hapless Pentagon official was filling a U.S. transport plane with MREs and other emergency supplies to fly to Ukraine, last-minute orders came directly from the White House national security leadership to stand down. They feared how Moscow would interpret the arrival of a military plane loaded with food aid. They did not want to “provoke the Russians.” And so a German shipping company was contracted to deliver the food at far greater cost. 

    Then there was Libya. When an uprising against its dictator exploded, and the Obama administration reluctantly participated in air strikes against the Qaddafi regime, a top Obama aide promulgated an imperishable concept. President Obama and his administration were not ceding leadership to Paris and London, he said; not at all. The United States was “leading from behind.” Given the half-hearted American effort — I remember British officials were livid that the United States was refusing to replenish European aircraft with American weapons after the French and the British ran out of bombs and missiles — it was no surprise when the Libyan intervention became a muddle. Qaddafi was killed, but the chaos and instability persist to this day. Our participation in the Libyan operation is now remembered as a case of excessive American intervention, but in truth it was a case of insufficient American intervention. Having procured a perfect Security Council resolution to justify Western action and lead an international response, the Obama administration declined the opportunity and entered the fray with the overwhelming objective of getting out of it. And so we left Libya to its anarchy. 

    As a result of the debacle in Libya, the red line reversal in Syria, and the determined inaction after Putin invaded Crimea, not to mention Obama’s lukewarm response to the democratic rebellion in Iran in 2009 and his confused response to the democratic rebellion in Egypt in 2011, respect for, and fear of, Washington declined precipitously. Of course, after four years of the first Trump administration, much of the world welcomed the arrival of a new Democratic administration led by Obama’s Vice President, Joe Biden. 

    IV 

    A British prime minister famously declared that it is “events, dear boy, events” that determine a country’s statecraft. Improvisation is at the heart of foreign-policymaking; otherwise your values and your interests stand no chance of being realized. Events during Biden’s term in office did exactly that: they determined his statecraft. The record is mixed. While he made important strategic decisions, for good and for ill, regarding Afghanistan and American policy toward China, Biden’s four years in foreign policy will be remembered mainly for Washington’s response to the invasion of Ukraine in February 2022 and to Hamas’ attack on Israel of October 7, 2023, and the subsequent war in Gaza. Having worked directly for Biden from 1988 to 1993 and collaborated with him extensively afterwards, I can testify that he was uncannily well-suited to understand the threat from Putin’s Russia and to rally European and international support for Ukraine after the Russian invasion. It is hard to think of another plausible Democratic candidate for president in our time who would have done what Biden did to rally the world in defense of Ukraine’s sovereignty and independence — the world and his own wobbly post-Obama party. 

    The Biden team quickly realized that the Trump administration was right in seeking to slow down the threat from China rather than welcoming China’s rise with WTO-type Obama-and-Clinton-style happy talk. While the Biden administration agreed with the Trump administration’s threat assessment, however, its approach was thankfully different. Instead of insulting and bullying crucial allies, Biden and his Secretary of State, Antony Blinken, would build a grand alliance to counter China across the board. The Trump administration thought commercially, but the Biden administration thought strategically. It built up the military capabilities of allies through AUKUS (the pact with Australia and the United Kingdom) and through new arrangements with India, Japan, South Korea, and the Philippines; from critical minerals to high technology, “de-risking” was the term of the hour; and unique efforts were also made to unite European and Asian allies to prevent economic and military coercion by the Chinese Communist Party. An all-of-government effort was established. Biden was determined to prevent China from dominating the Indo-Pacific or displacing America’s world leadership. 

    And by the end of the administration something remarkable was taking place. Europeans had moved from seeking an investment treaty to integrate the European and Chinese economies to working with Washington to penalize Beijing’s decision to go all in on behalf of Russia in what it called a “no limits” partnership. At the same time, Asian allies were increasingly integrated into NATO and supporting Ukraine’s defense against Russian aggression. Biden’s and Blinken’s diplomacy, combined with extensive intelligence briefings on the Chinese military build-up and its extensive support for Russia’s war economy, helped link Asian and European allies in unprecedented ways, so that allies from the Atlantic and the Pacific were coordinating with Washington to contain the threat arising from the Russia–China partnership. 

    For a Democratic president, this was neither obvious nor easy. Not only would European allies be wary if Washington failed to stabilize its relationship with China, but many in the Democratic Party have come around only slowly to the idea that China is somehow a threat to the West. (Some prominent Democrats have yet to assent to that proposition.) That is why Biden and Blinken started by demonstrating responsible management of the crucial diplomatic dance with Beijing. And for similar reasons — managing the threat posed by China and Russia without overstating or inflaming it — Biden began by seeking a summit with Putin to try to stabilize the Russian relationship as he built political and diplomatic support to confront the new Chinese challenge. Of course, that effort came to naught when the bilious Putin continued to spout his grievances regarding everything from NATO expansion to the invasion of Iraq and alleged American interference in Russian political affairs. Soon Putin’s massive conventional build-up around Ukraine rendered any hope for a modus vivendi futile. 

    Meanwhile there was a second Biden decision of enormous consequence: to move forward with Trump’s Afghanistan policy by withdrawing American troops from that country. A failure to quit Afghanistan would have required the re-opening of negotiations with the Taliban, with all the attendant uncertainty. Since I didn’t join the administration until December 2022, I was not privy to the internal discussions and cannot provide an answer to the obvious question of why President Biden did not pursue the approach that Vice President Biden had advocated back in 2009, namely to reduce the American footprint in Afghanistan to a minimal role of countering terrorism but leaving the larger mission of defending the Afghan government from the Taliban insurgency to the Afghans themselves. Presumably, President Biden did not believe that the residual terrorist threat in Afghanistan justified continuing what Trump and many Democratic leaders insisted on calling a “forever war” rather than a long-term deployment like the ones we continue to maintain in strategically significant areas of the world. (This, notwithstanding the difference between monitoring the armistice line in Korea or maintaining bases in Europe and defending against a determined adversary like the Taliban.) 

    The failures of planning and the abysmal coordination in our withdrawal from Afghanistan troubled all Americans, but they were especially hard for a Democrat to watch. As American troops withdrew from Bagram Air Base, the elected government of Ashraf Ghani collapsed and a jubilant Taliban resumed their oppression of many millions of Afghans. And despite heroic efforts to evacuate hundreds of thousands of Afghans, particularly those who had courageously assisted us in our efforts, it is hard to deny that the way we withdrew evoked memories of the disastrous Iraqi occupation ten years earlier and even of Vietnam in 1975, and seriously weakened the reputation of the U.S. military and its political leadership. Until the Kremlin archives are opened, we will never know to what extent, if at all, the botched withdrawal from Afghanistan had any impact on Putin’s calculation to invade Ukraine. My guess is very little, because Putin was under the false impression that his attack would succeed in a matter of days and he could present the world with a fait accompli. 

    Yet Biden and his team surely proved their mettle from the moment Putin’s intention to invade became incontrovertible. First, an extraordinary effort was made to downgrade highly classified intelligence information and make it available to the public to warn Ukraine and America’s allies of the imminent invasion. This exceedingly unusual intelligence downgrade was immensely important. Following the debacle of Saddam Hussein’s alleged arsenal of weapons of mass destruction, the standing of the American intelligence community took a hit. Not only did doubt about the accuracy of intelligence make it harder to win support in a crisis, but the power of intelligence diplomacy to persuade friends and allies of future threats was weakened as well. In a tragic way, Putin’s invasion was a kind of vindication for the U.S. intelligence community. After all, key European allies and many others were convinced that Washington was once again wrong. Working at the OECD in Paris at the time, I can attest to the fact that German and French officials up and down the line refused to believe that Moscow would do something so dangerous and so stupid. By warning them, America restored some of its lost luster and helped Paris and Berlin respond to the aggression with real solidarity, changing their strategy and breaking with their budgetary restraints and their traditional caution. 

    More significant than the intelligence diplomacy was Washington’s overall world leadership after the invasion began and it became clear that Ukraine would not be quickly overrun, as many in the West feared. Given the troubled history of American foreign policy in the years prior to the Ukraine war, and especially given the dogmas and the debates inside the Democratic Party, few would have believed that the Biden administration would be able to galvanize such a solid coalition of nearly every country in Europe and many more in Asia to impose economic sanctions, freeze hundreds of billions of dollars of Russian assets, and provide hundreds of billions of dollars more in concrete military assistance, including modern tanks, aircraft, artillery, and multiple variants of missiles and drones. It is true that, from the standpoint of Ukraine’s needs, the pace of delivery of American and NATO weapons was slow and halting, but the far more important point here is that President Biden rejected the view of his predecessor that Moscow had some kind of “escalation dominance” that required Western countries to stand down or risk defeat in a conventional or even some kind of nuclear confrontation. Yes, Biden moved carefully and cautiously when it came to arming and training the Ukrainian military, but there was an undeniable logic to his strategy: given Putin’s repeated attempts to drive a wedge among European countries and between Europe and the United States by repeatedly threatening nuclear escalation, Biden insisted that the NATO allies cross the Rubicon together. NATO unity was Putin’s nightmare, and the Biden administration wanted NATO to provide more and better weapons as a united alliance. 

    The internal dynamics in the administration developed a certain pattern. The State Department would advocate and the Pentagon would equivocate. Consider two examples. The M1 tank episode was classic. The debate over this weapon system, over whether and when it should be given to the Ukrainians, involved the Pentagon correctly pointing out how difficult it would be to use the weapon, how complicated its logistical requirements were, and how few were available. Meanwhile, since America was not providing its best tank, Germany was refusing to provide its most modern tank, the Leopard. While early transfers of these weapons might not have changed the military balance, they surely would have helped. With the Pentagon stalling, Blinken eventually persuaded the president that the issue was not the tank itself, but whether the threshold of transferring a state-of-the-art tank would be crossed, in which case Germany and other Europeans could furnish the tanks, which were more readily available and did not require the logistical tail of the M1. In the end, Biden decided to move forward with a small American commitment of tanks right away, which was the signal for the German and European transfers to proceed. Then there was continued debate over providing the Army Tactical Missile System, or ATACMS, a medium-range surface-to-surface missile. Here again Blinken had to argue it out with Pentagon officials over the details of the weapon’s utility. Since most Russian aircraft had been moved to airfields beyond the range of ATACMS, and since it was the Russian aircraft that were such a threat, Pentagon officials protested that the ATACMS would have minimal effect in blunting the Russian advantage — until new information emerged that the Russian Air Force envisioned the refueling and refitting of those same aircraft on their way to Ukraine at airfields within the range of ATACMS. 

    Biden’s policy of active and generous and unintimidated support of Kyiv accomplished nothing less than the overcoming of the taboos on American action that were established by the Iraq War. This time the syndrome took the hit. This was a momentous achievement, at least if the United States is not to shrink into a small-minded and selfish country. You might even say that, temporarily at least, Putin’s war brought many Democrats, led by Biden, back to their senses, or more specifically, back to their sense of history, which of course encompasses more than Baghdad and Falluja. And the great irony is that what made Biden’s forceful leadership possible was precisely his age. Among leaders in the Democratic Party, only he had been around long enough to remember and to understand that when Europe and the United States stand in solidarity, ultimately Moscow cannot sustain its aggressive stance. That was the lesson of the Berlin crisis in 1948 and the Cuban missile crisis of 1962, and the reason why the Cold War ended with the collapse of Soviet communism. Biden was not only post-Iraq, he was also post-World War II. His paradigm for American foreign policy was broader and deeper — and more knowledgeable — than certain doctrines of the Democratic Party in our time. He contested those doctrines and banished them to the margins, at least for now. 

    In addition to the Ukraine war, the war against Hamas has had a dramatic effect on perceptions of American power and international leadership, and not in the way most would expect. This is not the place for an extended discussion of what the Biden administration did and did not do in support of Israel from October 7, 2023, until January 20, 2025. I understand that this is the most bitter debate of our day, and the most morally excruciating one. I want only to note that the steadfast American support for Israel against Hamas, Hezbollah, and Iran, demonstrated with the provision of air defense systems such as THAAD (as well as $16 billion in wartime assistance), was noted by leaders in the Middle East and beyond. Those leaders also saw a glimpse of the future when Washington organized European countries and key Arab countries to work with CENTCOM in a comprehensive defense of Israel’s air space. This was the work of an ally and a leader. (The Biden administration hoped that its fierce support of Israel in the aftermath of the Hamas atrocities would give it leverage to moderate Israel’s conduct of the subsequent war in Gaza, but in this it failed.) Having accompanied Secretary Blinken to a number of intense discussions with European and Arab leaders, I saw respect and admiration for America — not least for its prowess and its strength of will — expressed even by hard-boiled figures like Mohammed bin Salman of Saudi Arabia. Bin Salman told us emphatically that Biden’s steadfastness regarding Ukraine and Israel had persuaded him that Saudi Arabia’s long-term future lies with Washington and not with Beijing. 

    So the larger strategy worked. By doing the hard work of diplomacy, by not disparaging friends and allies who may not agree, by standing by our friends in time of crisis, and by meeting the global responsibilities imposed on us by our own power, the Biden administration, despite some missteps, restored American foreign policy to its proper level, to its moral traditions and its strategic strengths. 

    V

    If Barack Obama’s diffident withdrawalism made the nations of the world skeptical of American will and anxious about American reliability as an ally, Donald Trump’s erratic commerce-driven America First-ism has driven them to despair. Will a “snapback” in foreign policy be possible after a second Trump term? Certainly not if a Republican succeeds him. For this reason, Democrats must be lucid about what happened during the last three Democratic administrations, so as to prepare themselves intellectually and politically to maintain America’s role as an indispensable leader in Europe and Asia. Sure, with “burden-sharing” and higher defense spending among our allies. But the real issue is not who pays what, but who leads where. 

    While many commentators are right to lament the trainwreck of alliance relationships wrought by Trump, the larger problem is that many Americans have lost trust in their government since the debacle of Iraq. Skepticism about American power, about the possibility that it can be used for good and not for ill, is now received wisdom among the letter generations (X, Y, and Z), many of whom probably nod their heads sullenly in agreement with Trump’s isolationist tropes. Isolationism, after all, has never been confined to the American right. Why should America defend South Korea? Who cares if China invades Taiwan? Isn’t it a Chinese island? Is Israel still a strategic asset deserving of our support? Is it worth risking World War III to protect those rich Europeans with their cradle-to-grave social services, and why don’t they devote as much national effort to military matters as we do? Isn’t the deployment of American soldiers in foreign lands nothing but imperialism? These questions will increase in force in the coming years, not least owing to the new élan of the progressives within the Democratic Party. It is way too late to count on the perdurability of the foreign-policy worldview that sustained us — and many other nations — before 2003 and the Obama retrenchment. Many Americans probably also agree with Trump’s sensible-sounding “spheres of interest,” whereby Beijing dominates Asia, the Kremlin dominates Eastern Europe and other parts of Eurasia, and the United States brings back the Monroe Doctrine to dominate Latin America, with maybe Canada and Greenland thrown in. 

    What is most important is for the United States and its allies in Europe and Asia and North America to develop a common threat assessment. Difficult times lie ahead; great power rivalries and worse. The combination of Russia’s modern-day war economy and China’s massive build-up of ships, missiles, strategic nuclear weapons, and space and air capabilities poses a threat such as we have never seen before. If we further take into account a ruthless North Korean dictator with nuclear weapons and whatever remains of Iran’s military and terrorist capabilities, we cannot avoid the conclusion — which many Americans would like to avoid — that the so-called Axis of Authoritarians must be countered and — to use the still-valuable Cold War term — contained. 

    We might begin with a concept of “dual containment.” Instead of shying away from the word “containment” in the case of China because of its extensive involvement in the world economy, the next administration should build on the Biden approach with the added candor of using the C-word. Citing “dual containment” in the context of Russia and China will also provide clarity and a sense of purpose to American foreign policy in a chaotic and pitiless world. As long as the next president responsibly manages the relationship with Beijing’s leader, European and Asian allies will not shy away from strengthening the security arrangements necessary to prevent and deter an aggressive CCP-led China from seeking to dominate Asia and the crucial sea lanes in the South and East China seas, not to mention pursuing its “we are a big power” intimidation of our friends and allies around the world. And by banding together when it comes to critical minerals, rare earths, semiconductors, and other emergent technologies, China’s use of economic and technological blackmail can be contained as well. 

    It is past time for a revival of the mid-twentieth-century approach when the founding fathers of containment designed an enlightened global role for the United States — no, it didn’t always work perfectly! — and then developed a rational and empirically based threat assessment and constructed a plan of action to match. The next president can candidly lay out what China seeks, and the phenomenal build-up of Russian and Chinese military and high-technology weapons, to justify a new and firm policy of dual containment. And instead of demanding that the United States bear most of the burden, a renaissance of alliances can be launched. We do not have to do it all, but unless we want an Orwellian world order dominated by the Chinese Communist Party — aided and abetted by Putin’s Russia with a boost from the brutal militarized dictatorship of Kim’s North Korea — we need to lead. MAGAists cannot be expected to understand this, or to grasp that the formation of such essential friendships and partnerships will not be achieved by bullying and trade wars. And so the task of renovating and fortifying our position in the world will fall to the Democrats. But what are the Democrats thinking? 

    “When, later, our adventure has bogged down”

    When, later, our adventure has bogged down, 

    or umbrella’s to an end, and mountains & lakes 

    if any are the friends — 

    and that reminds me of a story — and so does ‘if’ — 

    Einstein — who personal’ never caught my fancy — 

    he took my umbrella once, — 

    or I took his — years gone — but as a soul, 

    brain, that stuff, yes — only his prose 

    I never knew was so good: 

    is getting over his loss of religious faith 

    at..twelve maybe? and he must replace it 

    and says, all the decades later, 

    ‘Men of the same bent & end, past and now, 

    along with what they happened really to discover, 

    were the un-lose-able friends’: 

    the German’s better, I did that stupid English 

    myself. Oh: ‘if’. I meant the air & water 

    peeled off, in a fouled test.

    A Bad Dream

    Yes. That is so. I found she hated then 

    (or even didn’t) her father who left when 

    she was a toddle of three. 

    She hated her mother (I couldn’t like her either) 

    and felt only a fully justified contempt for her one brother. 

    Which into waded: me. 

    Ran on her a morning en route to the Red Owl — 

    a supermarket not a totem pole — 

    not looking good; 

    when she unclad to me that suicide 

    was all she had at heart, & trembled, I tried 

    to, and did, clothe her with us. 

    The marriage came long after; — 

    there’s more here, pal, than ever we let out, but thus: 

    bugged by her ice for papa, I had her trace 

    the man down: — just dead, as no doubt he should; 

    then I made her her cancerous mother’s friend who died happy. I see our son sometimes. 

    I couldn’t help out with Brother. 

    “Waiting. Just waiting, in wet heat. A little more whiskey please”

    Waiting. Just waiting, in wet heat. A little more whiskey please. 

    Turn the fan up. The amenities. 

    No food yet, thank you. 

    I’ll feel better later. It’s too hot to read. 

    I think: do I have everything I need, 

    stomach & mouth? 

    A little more whiskey, please. In this terrible state 

    I hope I’m paying for my sins at any rate. 

    There must be some point to it. 

    It’s very hard to think with the fan so high 

    but I seem to remember times when Henry was happy 

    without particularly deserving it. 

    They say the temperature will drop with dark 

    and after all my lecture rooms are air-conditioned. 

    It’s that the actual brain won’t work 

    before or afterward, so that everything has to be done there. 

    I must be paying for some very special sin 

    this summer. A little more shiskey.

    “Grim Pilgrims gather: ‘Thanks.’ I give thanks too”

    Grim Pilgrims gather: ‘Thanks.’ I give thanks too, 

    as the last leaves fly, that he did not live on 

    but yellow & skin-thin 

    & grinning ceased. True that his harvest due 

    only was beginning, that no sun 

    distracted his widow in 

    her calm dismay; but count up then his gain, — 

    Paris unfallen, Hiroshima tall, 

    millions of Jews walking, 

    Gandhi spinning, treacheries that sprain 

    our hopes unspun, promises unmade all 

    that proved just talking. 

    The ballet of your dying hope no more 

    tortures me with its fool. What childish plan 

    ’s this, keen on living? 

    Embryonal adeno-carcinoma, grade 4. 

    ’Twas in the testes, there since you began. 

    Fume, hiss. Happy Thanksgiving.