Reading and Time

    Regrettably, I must begin with the quantitative — the least Proustian of all categories. The six-volume Modern Library edition of D.J. Enright’s revision of Terence Kilmartin’s reworking of Andreas Mayor’s and C.K. Scott Moncrieff’s translation of Marcel Proust’s À la recherche du temps perdu is 4,347 pages long. At an average speed of two hundred and fifty words, or one page, per minute, it takes approximately seventy-two hours, or three days, to read it. But seventy-two hours represents a theoretical minimum and an unattainable ideal. Thanks to a few features of Proust’s distinctive style, reading In Search of Lost Time inevitably takes at least twice or even three times as long as this.

    There is, first of all, the famous Proustian sentence, whose syntactic cascades of independent and subordinate clauses were compared by Walter Benjamin, one of his first translators, to the flowing of the Nile. The longest of those riverine sentences, at nine hundred and fifty-eight words in the original French, could, if printed out on a single strip of paper, be wrapped around the base of a wine bottle seventeen times. No one, except perhaps the most gifted mnemonist, can retain so much information in their short-term memory. By the time anyone else comes to the end of a sentence of this length, having meanwhile pulled into the port of more than one semicolon for a pause, its subject has long been forgotten. To understand what has been written, the reader is sent back to the point of departure, and this, in turn, causes the pages not to move, or to move in the wrong direction.

    Then there is the attention that Proust lavishes on seemingly insignificant details and physical spaces, such as the lengthy description of the steeple of Saint-Hilaire in Swann’s Way; his stubborn willingness to stay in a scene, such as the interminable dinner party in The Guermantes Way or the scene where the narrator watches Albertine sleeping in The Captive, long after another author would have cut away; and his penchant for discursive digression throughout, in which he inflates each Rochefoucauldian aperçu to the length of an essay by Montaigne. As with the multiclausal sentences, these traits of style have the effect of slowing down the pace. Articulating the frustrations of innumerable future readers, an editor at one of the many publishing houses that turned down Swann’s Way is said to have remarked of the opening scene, “I fail to understand why a man needs thirty pages to describe how he tosses and turns in bed before falling asleep.”

    Most importantly, there is Proust’s mania for comparisons: for metaphors, analogies, complex conceits, and extended similes, the signature sentences that begin “just as” or “as when.” In a passage in Time Regained — we will return to it — Proust likens his book to a sort of “optical instrument” that enables the reader to read herself. The instrument works by stimulating a reader’s tendency to draw analogies between the events in the book and the events of their own life, such that, for example, one recognizes in the mediocre Bloch or the vicious Madame Verdurin the features of social climbers one has known; or in Swann’s unhappy affair with Odette, the time when one fell in love with a person not of one’s type; or in the contrarianism which topples the Baron de Charlus from the pinnacle of Parisian society, the dialectic of cant and reaction that characterizes our own. Owing to this, a frequent experience while reading In Search of Lost Time is to look up halfway through a sentence and stare into the middle distance in a kind of mnemonic reverie or “epiphanic swoon” (as the scholar and translator Christopher Prendergast puts it in his recent study Living and Dying with Marcel Proust), only to find, catching sight of a clock out of the corner of one’s eye, that whole hours have passed.

    Quantitative analysis may be regrettable; unfortunately, it is necessary. For it is the sheer length of In Search of Lost Time, compared to which War and Peace is but a diplomatic incident and The Magic Mountain is little more than a hillock, that turns the phrase “reading Proust” from the designation of an ordinary activity into a cultural superlative, the literary equivalent of climbing Everest, walking the Camino de Santiago, riding the Trans-Siberian Express, or even sailing the Nile. (On behalf of our eyes, we may all be grateful to the editor at Gallimard who talked Proust out of printing the novel in a single volume, with two columns per page and no paragraph breaks.) One may note the delicious irony of treating a book which contains an utterly unsparing critique of snobbery as a “badge of bourgeois soul-distinction,” in Prendergast’s words, and at the same time sympathize with the “mid-cult pride,” in the words of Fredric Jameson, felt by those who finish it, as well as the genuine or downplayed regret expressed by those who do not. (A further irony: in modified versions of the magazine questionnaire that bears his name, Proust is not infrequently cited as the classic author whom contemporary novelists have not read.)

    It is because of its length, which Proust hoped would rival the numerical-temporal One Thousand and One Nights, that In Search of Lost Time has acquired a reputation as a difficult book. Yet Proustian difficulty is not Joycean, Steinian, or Beckettian difficulty. Unlike Finnegans Wake, The Making of Americans, or The Unnamable, In Search of Lost Time is not a book that challenges our sense of what a novel is, only our sense of what a novel can do. Although it makes not inconsiderable demands on the concentration, memory, patience, and perseverance of its readers, it is roughly continuous with the form that the novel has taken since the days of Austen and Balzac. Its ultra-complicated sentences still follow grammatical rules; and while a certain degree of prior familiarity with French literature and history is helpful, understanding them at least does not require the reader to decode neologisms or catch recondite allusions. Although there is some critical controversy over the precise relationship between the first-person narrator and the book we are reading, it can still be fairly described as a Künstlerroman, the familiar plot of a writer discovering his vocation. Time and memory are its central concerns, and while one of the things the book asks us to do is to rearrange our conception of temporality, the order of the narrative does not depart radically or unexpectedly from the order of events. One is never at a loss to say precisely what is going on in any given scene, and the scenes are set in a series of recognizable and historically specific locations — bedrooms, drawing rooms, kitchens, theaters, churches, brothels, public parks, boulevards, train carriages, the countryside — all of them more vividly realized than anywhere else in the history of the form.

    Extreme affective and cognitive states, such as sadomasochism and pathological jealousy, are presented alongside more routine ones, such as wonder and disappointment, and are observed and analyzed according to the codes of a psychological and sociological realism notable only for its surpassing astuteness and wit. Although it is firmly anchored in a class to which none of its contemporary readers belong — the fashionable monde of the Faubourg St. Germain aristocracy — it touches on every sector of society: from the haute bourgeoisie to which Swann belongs, to the professional middle class of the narrator’s family, the Verdurins, and the “little band” of girls at Balbec, to the lower middle class of the shopkeeper Camus and the tailor Jupien; from the demi-monde of Odette and Rachel to the art worlds of the composer Vinteuil, the actress Berma, the writer Bergotte, and the painter Elstir; from the memorable appearances made by the working-class soldiers in Jupien’s brothel to the peasantry, such as the milkmaid seen from the train to Balbec, and the former peasantry, exemplified by the narrator’s remarkable cook and nursemaid Françoise, one of the outstanding characters in twentieth-century literature. In any case, as long as snobbery is a cross-class feature of social life, and possessiveness is a feature of romantic life, and the anticipation of our desires proves more pleasurable than their satisfaction, In Search of Lost Time will never lack for relevance.

    In fact, the major difficulty of reading In Search of Lost Time today is not placed there by Proust; it is placed there by the economy. If, for the novel’s narrator, “lost time” refers to the past, for its reader, one hundred years after Proust’s death, it refers to the present. Proust’s narrator searches for time lost to the past and finds it in memory and in the creation of a literary work of art; Proust’s twenty-first century reader searches for the time to read this literary work of art which is lost to a culture that is consecrated to speed and an economy at war with human-scale temporality and, more often than not, fails to find it. 

     “The primordial scarcity eradicated in the arrival of industrial modernity returns [in the twenty-first century] as the specter of time-famine,” writes Mark McGurl in Everything and Less, his incisive study of the political economy of producing and consuming fiction in what he calls the Age of Amazon, perhaps the most extreme form of market society that history has ever known. 

    Consider the average day of a member of the social fragment that until very recently constituted the demographic core of the novel’s American readership — the educated, salaried professional living in the suburbs of a large or mid-sized city — with its tripartite division of eight hours for sleep, eight hours for work, and eight hours for leisure. From that last eight hours, take away overtime; take away time for getting showered, dressed, and ready for work, and time for commuting to and from the office, and time for preparing food, eating it, and cleaning up afterward, and time for buying groceries, filling prescriptions, and picking up dry cleaning; time for household chores and repairs; time for dentist’s appointments, doctor’s appointments, appointments at the stylist, scheduling future appointments; time for waiting in line and on hold; time for catching up on emails, making phone calls, paying bills, doing taxes, filling out forms and applications — that whole panoply of activities that falls under that rather sinister classification, “life admin.” Of the original eight, perhaps only two or three hours remain, along with the parts of the weekend not devoted to sleep or other tasks such as these, the eleven public federal holidays, and the paltry eleven days of paid vacation that the average American worker receives. If one is responsible for taking care of children or someone who is ill, or if one is ill oneself, the amount of available time diminishes significantly across the board.

    It gets worse. Within this finite number of hours, time must be found to perform the hundreds of activities which are regarded by corporations as further opportunities for monetization and value-extraction and by the people who do them as essential to a meaningful and fulfilled life. This includes, but is not limited to, time for socializing with friends, family, and one’s current or potential romantic partner; time for attending weddings, funerals, and reunions; time for the maintenance and improvement of one’s physical and mental health; time for travel and hobbies; time to go to the cinema, theater, concert hall, museum, dance club, or stadium; time for religious worship, political involvement, or participating in the community organizations of which one is a member. 

    Each of these non-work activities imposes a more-or-less steep opportunity cost on the others, and that is before we get to the locust-swarm of consumption that hoovers up an astonishing eight of the average American’s waking hours, namely, non-print media: social media, digital streaming, video sharing, online news, online retail, gaming, podcasts, radio, television, and so on. Although it can be inferred from this statistic that large amounts of digital media must be consumed by Americans at the office, extra time for leisure is invariably taken from sleep, as activities that cannot be concealed from one’s boss while sitting at one’s desk start to run up against the limitations imposed by biology on the functioning of the human organism. 

    Needless to say, the psychic conditions produced by such an organization of daily life — exhaustion, burnout, stress, anxiety, fear of missing out — are hardly conducive to fostering the mental state that an appreciation of Proust’s style requires. It is challenging enough, after a day of paid and unpaid labor, to summon the concentration necessary to read passages of prose consisting of sentences several clauses long. Interruptions from other people, whether they are the ones we live with, or whether they come in the form of ring tones, text messages, push notifications, or street noise — not to mention the ambient distraction facilitated by “connectivity,” that is, by the mere knowledge that at all times there is a node for possible information transfer and communication on one’s nightstand or in one’s pocket — are positively fatal to it. As attention diminishes, reading time increases, and along with it the likelihood that the book will go unfinished, especially when there are so many less-demanding forms of entertainment immediately to hand.

    Compared to “other forms of cultural consumption” such as social media or television, McGurl notes, “reading a novel is a relatively long-term commitment,” which is why, according to a Pew survey in 2022, Americans read an average of fourteen books per year. (The typical, or median, American will read only five; and twenty-five percent of Americans will read none at all.) And no novel requires a more substantial commitment than In Search of Lost Time. Someone who diligently set aside one hour every day to do nothing but read thirty pages of it could dispatch the six volumes in a little under five months — not so much time in the grand scheme of things, roughly half of one percent of a modest seventy-five-year lifespan. But because of all the things that are competing for that hour — not forgetting, finally, the desire to read any of the five hundred thousand to one million titles brought out by the American publishing industry every year — the number stretches out to the more representative thirteen months it took the biographer Phyllis Rose, author of The Year of Reading Proust: A Memoir in Real Time, to finish it; or the eighteen months it took Deidre Lynch, Ernest Bernbaum Professor of Literature at Harvard University, to finish it; or the thirty-six months it took Mike Shuttleworth, a bookseller and literary events coordinator in Melbourne, Australia, to finish it. It is little wonder that so many people take a break to do or read something else, and find the prospect of returning to the point where they left off, or of starting again from the beginning, prohibitively daunting.

    (Since few essays on Proust resist the temptation of a personal anecdote, I will add that, having over-ambitiously started the novel on three separate occasions in my teens and early twenties, never getting further than the aforementioned dinner party scene, my first full reading of In Search of Lost Time, according to the note I left on the last page of my copy, took place over the course of nine months when I was single, childless, largely unemployed, unable to afford other entertainment, and not on social media. What got me across the finish line was a severe depressive episode. On one particularly low January evening on the Brooklyn Bridge, I asked myself what I would be missing out on if I jumped off. The answer came to me unbidden: I would never find out how In Search of Lost Time ended. I resolved, with a self-seriousness I can only smile at fifteen years later, to finish the book and then kill myself, but just as when Proust read Ruskin, when I read Proust “the universe suddenly regained infinite value in my eyes,” and when I came to the book’s last word, “time,” on September 11, 2009, I no longer wanted to die.)

    What, then, would be the ideal conditions for reading Proust? Spending “a month in the country” with In Search of Lost Time has become a fantasy as proverbial as it is out-of-reach for even the relatively privileged people whose lives I have sketched above, but the pastoral setting, which recalls the bourgeois narrator’s descriptions of his own leisurely reading experiences as a child in Combray, is in any case not dispositive. History furnishes us with a number of counterexamples, starting with the cork-lined bedroom on the second floor of the five-story apartment building at 102 Boulevard Haussmann, in the heart of Paris, where, in a frantic race of the pen against the scythe, the book was written. “The sad thing is,” according to Proust’s brother Robert, “that people have to be very ill or have a broken leg in order to have the opportunity to read In Search of Lost Time.” In a similar vein, a character in Haruki Murakami’s 1Q84 quips, “Unless you’ve had…opportunities” in life such as being in jail, “you can’t read the whole of Proust.” Consider the Kolyma gulag, where the Russian journalist and short-story writer Varlam Shalamov, who had been publicly critical of Stalin, read The Guermantes Way; or the NKVD’s infamous Lubyanka prison in Moscow, where the Polish poet Aleksander Wat read Swann’s Way. Wat’s compatriot, the painter Józef Czapski, followed Robert Proust’s advice and read the whole of In Search of Lost Time while bedridden during a summer spent recovering from typhus; later, as a member of the Polish officer corps during World War II, he too was captured by the NKVD, and in the forced labor camp in a former monastery in Gryazovets to which he was sent he delivered a series of lectures on the novel to his fellow inmates, reconstructing long passages of it orally from memory like a latter-day Scheherazade; they are now published as Lost Time: Lectures on Proust in a Soviet Prison Camp.
And in a different context, Daniel Genis, surely one of the best-read persons of our time, finished it, along with just over a thousand other books, while serving out a ten-year sentence for armed robbery at the Green Haven Correctional Facility in Stormville, New York. 

    If such examples of astonishing commitment to high art under involuntary confinement seem extreme, not the sort of experiences one is ever likely to have, I encourage you to Google “reading Proust during lockdown.” Proust, who always feared that he did not live up to the expectations of his father, a well-regarded doctor specializing in infectious diseases and the author of a medical paper entitled “The Defense of Europe against the Plague,” would have relished the irony. What is essential in all of these cases is not that they are luxurious, or even minimally comfortable or safe; it is that each amounts to total physical removal from everyday life as organized according to the logic of the market and the dictates of capital accumulation. Needless to say, it does not speak well of our society that the two spaces where free time — which Proust’s distant cousin Karl Marx defined as “idle time” as well as “time for higher activity” — is most readily available are the country house and the prison cell.

    Proust has become the unlikely grandfather of a cottage industry of popular English-language non-fiction devoted to or inspired by his life and work. His name appears in the titles of memoirs (Rose’s aforementioned Year of Reading Proust), self-help books (Alain de Botton’s How Proust Can Change Your Life), pop-science (Jonah Lehrer’s Proust Was a Neuroscientist, Maryanne Wolf’s Proust and the Squid), cookbooks (Shirley King’s Dining with Marcel Proust), books of art history (Eric Karpeles’ Paintings in Proust) and Jewish history (Benjamin Taylor’s Proust: The Search, Saul Friedländer’s Proustian Uncertainties), along with literary criticism aimed at a general audience (Malcolm Bowie’s Proust Among the Stars, Anka Muhlstein’s Monsieur Proust’s Library, Christopher Prendergast’s aforementioned Living and Dying with Marcel Proust, Jacqueline Rose’s Proust Among the Nations, Michael Wood’s Marcel Proust). The list could be easily expanded if one wanted to add short biographies such as those by Edmund White and Richard Davenport-Hines, guidebooks such as those by Roger Shattuck and Patrick Alexander, and compendia such as André Aciman’s The Proust Project, not to mention the vast academic and scholarly literature on the subject. Whatever differences these books owe to their particular genres and target audiences, each of them is haunted to a more or less explicit degree by the question, Why read Proust? And hidden beneath that question is the blunter, Why read? And hidden, in turn, under that one is the rather more disquieting: Why do anything at all?

    The standard answer is for pleasure. Reading, as the literary critic Christian Lorentzen likes to say, is a fundamentally hedonistic pursuit. On the substance, this is not wrong, and few authors make the case more convincingly than Proust himself: if there is a scene in the whole of literature that provides more intense pleasure than the one in the Guermantes library in Time Regained, I have not read it. As a motive for reading, however, pleasure has always struck me as insufficiently persuasive, and even a kind of trap, in the context of a culture that, to this day, sees no more reason to prefer poetry to push-pin than Bentham did. To ask after the utility of works of art is not, as is generally thought, a legitimate form of philosophical skepticism; it is a form of cultural blackmail, and should probably be refused outright. In a healthy culture, the intrinsic value of works of art would be so obvious that it would never occur to anyone to need to justify their existence in terms of their usefulness. Ours, it goes without saying, is not a healthy culture, despite its obsession with physical and mental “wellness.” Yet since the utilitarian devil is already here, it seems discourteous not to ask him to dance.

    One may be tempted to invoke Mill’s distinction between higher and lower pleasures here, and declare that it ought to be a matter of perfect indifference to the person who has access to In Search of Lost Time that someone else spends their finite hours on the planet with the minimal but at least immediate gratifications of Colleen Hoover’s eleven New York Times bestsellers, with another installment of the Marvel Cinematic Universe, or with doomscrolling Twitter. After all, Proust’s position in the culture seems secure enough to weather the negative externalities that the contraction of American reading habits and the regression of secondary and post-secondary education ruthlessly impose on the production, acquisition, and consumption of equally challenging but less-canonical works of fiction: both the Modern Library edition of In Search of Lost Time and the more recent omnibus translation edited by Prendergast for Penguin remain, for now, in print, and are slowly being supplemented by new volumes, such as James Grieve’s and Brian Nelson’s new translations of Swann’s Way as the novel enters the public domain.

    Broadening our scope beyond the individual and the particular media that she does or does not consume, however, it is worth noting that pleasure is also the legitimating principle of market society itself, which promises the satisfaction of consumer demand at every price point. This promise is offered in exchange not simply for a refusal to guarantee necessary social goods such as affordable housing, health care, and education, but also for the surrender of alternate ways of conceiving of value, such as meaningfulness, sacredness, honor, and duty, which are harder to quantify and therefore trickier to price. Increasingly, the promise goes unfulfilled: culture, like other sectors of the economy, is tending toward monopolization and, to use a wonderful term coined by the journalist and science fiction author Cory Doctorow, enshittification; in the absence of competition, there is little incentive for producers to risk the pleasures of novelty and experimentation on audiences that can be counted on to pay for recycling and iteration, producing boredom and reactionary gestures of protest, not fundamentally different, as Proust or Benjamin might have observed, from those that were seen in the era of Baudelaire. A society whose greatest good is pleasure, of which ours is merely the most efficient, creates a culture in which pleasure is subject, on a long enough timeline, to diminishing marginal returns. In any case, as McGurl has shown, the negative externality imposed by market society on even the most canonically secure work of literature remains temporal: “The sped-up culture that delivers that novel to your doorstep overnight is the same culture that deprives you of the time to read it.” As such, not even the most discriminating reader will fail to feel its effects.

    Why read Proust? The other answer typically given is: self-improvement. In different ways this is the claim Phyllis Rose and Alain de Botton, among others, make on behalf of In Search of Lost Time. (Even Prendergast, a more sophisticated reader, does not avoid it entirely: “Is Proust good for you? Might he even, in controlled doses, have a useful function…?”) For Rose, reading Proust is a matter of demonstrating one’s social status. A pure product of mid-century upward mobility, having progressed from the daughter of a lower-middle-class shopkeeper on Long Island via Harvard, the Guggenheim Foundation, and the pre-conglomeration publishing industry to tenured faculty at Wesleyan University and, via marriage, to the Babar-cartoon fortune, Rose is astonishingly, even embarrassingly forthright about her identity as a consumer (“I want therefore I am. I am therefore I acquire”), her meritocratic metrics of value (“I would not have reached this level of achievement had I not made reading Proust the central business of my life”), and her extra-literary reasons for reading In Search of Lost Time. The final line of her memoir approvingly quotes her mother, who would have preferred her to write something more commercial but came to terms with the book about Proust: “She saw the book’s potential, if not for making money, then for asserting our family’s intellectual and educational superiority to certain of her acquaintances, about whom she confided, ‘She’s not one of our class, dear. She doesn’t read Proust.’”

    This is not simply snobbery as Benjamin defines it — “the consistent, organized, steely view of life from the chemically pure standpoint of the consumer”; it is also antiquated. Less than three decades after Rose’s Year of Reading Proust — whose frequent appearance as an epithet in her memoir never fails to remind me of the corporate-branded years in Infinite Jest — the class of persons for whom reading once functioned as a mode of social distinction no longer exists in America. That class, now operating under relatively straitened circumstances, may still read the handful of literary fiction books whose marketing campaigns have deep enough pockets to secure them buzz, but is now more likely to be found discussing so-called “prestige” television or the infotainment passed off by cable news networks as genuine civic engagement. Their economic superiors have long realized that far from being a prerequisite for entry into the most rarefied social circles, cultural literacy is actually an impediment to it. That reading fiction — not to speak of high literature — is simply “what one does” if one wants to consider oneself a “cultured” or “educated” member of the “elite” no longer has the same degree of motivational purchase on the aspirations and the self-fashioning of upper-middle-class Americans as it did in Rose’s generation.

    Not that a general-audience book on Proust is necessarily a barrier to commercial success, if Alain de Botton’s number one international bestseller How Proust Can Change Your Life, published the same year as Rose’s memoir, is any indication. De Botton tapped into the market for a notion of self-improvement that has proved more enduring among the upper-middle-class than cultural literacy: therapy. His short manual is divided into nine chapters, each of which begins with a how-to title (“How to Live Life Today,” “How to Be a Good Friend,” “How to Be Happy in Love,” etc.) and concludes with a “moral” or a “lesson” derived from the particular aspect of Proust’s life or work considered in it. That some of these precepts are mildly counter-intuitive — he asks, for example, how we can learn to “suffer successfully” and be “productively unhappy” rather than to avoid suffering altogether and achieve happiness — is a fig leaf placed on the attempt to flatter the reader into believing that they are not doing what they are in fact doing, namely, reading self-help.

    Since the Greeks, philosophy and therapy have always occupied the same spectrum, much to the discomfort and consternation of the practitioners of the former. No less than Plato or Epictetus, Proust, as we will see, has therapeutic designs on his reader. De Botton is not wrong about one thing: reading In Search of Lost Time may change your life. (It saved mine, after all.) But a book like De Botton’s is a useful illustration that the gulf between these two concepts of therapy is large enough to amount to a difference of kind. “Even the finest books deserve to be thrown aside,” De Botton writes in the last line of How Proust Can Change Your Life — a compassionate observation, perhaps, but does his book inspire readers to pick up Proust in the first place? Whereas many of the books in the Proust cottage industry are intended as supplements to the reading of In Search of Lost Time, whether as a map of the vast territory of the masterpiece to be consulted before visiting or as the enjoyable account of another person’s visit to the place one has just returned from, De Botton’s is clearly intended as a substitute for it. John Updike’s blurb on the back of my edition gives the game away: De Botton, he writes, “does us the service of rereading [Proust] on our behalf.” That “re” is the sign of a bad conscience; what he’s really saying is: De Botton has read Proust, so you don’t have to.

    In this respect, How Proust Can Change Your Life belongs to the same family of books as Roger Shattuck’s, which recommends the parts of Proust that the reader can skip, or as Pierre Bayard’s amusing How to Talk About Books You Haven’t Read, where In Search of Lost Time is the primary example of a skimmable book. They are all book-length versions of Monty Python’s “All-England Summarize Proust Competition.” Here a self-help book functions as a kind of time-saving appliance or device: by repackaging Proust in a series of pre-digested lessons or morals, De Botton offers the “experience” of In Search of Lost Time in the time required to read 215 pages rather than 4,347. But this is time considered in its purely quantitative aspect, under the sign of market society, which treats efficiency, cost-cutting, and convenience as high virtues. As such, it is counter-productive; indeed, a waste of time. In Search of Lost Time is also, in its own way, a time-saving device, if saving is used in the sense of redemption (“saving one’s soul” rather than “saving money”) and its length is not incidental to how it functions. For Proust’s device to work, you must actually have the experience of reading it, from start to finish.    

     Proust gives his own answer to the question “why read Proust?” which incorporates the reader’s legitimate desires for both pleasure and self-improvement, without, however, reducing his novel and the time spent reading it to commodities whose potential value is interchangeable with that produced by anything else on the market. In the passage from Time Regained to which I referred in the opening, which few books on Proust neglect to quote, he writes:

    In reality, every reader is, while he is reading, the reader of his own self. The writer’s work is merely a kind of optical instrument which he offers the reader to enable him to discern what, without the book, he would have perhaps never experienced in himself. And the recognition by the reader in his own self of what the book says is the proof of its veracity.

    The book is both mirror and lamp. Proust’s optical instrument, as we have seen, works by inspiring the reader, through its extensive use of figurative language, to compare events in the novel to past events in her own life, which may have been forgotten; in other words, to stimulate the reader’s capacity for memory. It is crucial to mention that, for Proust, memory is not the experience of the past in the present; it is the experience of the past as the present, an impression of the “identity between the present and the past…so strong that the moment I was reliving actually seemed to be in the present.” He goes on to argue brilliantly that in the case of the mémoire involontaire which he made famous, one is actually experiencing the putatively initial event for the first time, because it is only through memory that one comes to understand its significance in relation to all other events. “An experienced event is finite,” Benjamin writes, but “a remembered event is infinite, because it is only a key to everything that happened before it and after it.” Whereas finite time is, by definition, quantifiable, infinite time is not — and this is not because it goes on forever, but because no one but the person who experiences it can say how far into the past or the future it extends.

    For each reader, the recalled events will of necessity be personal and therefore different; yet at the same time every reader will have at least one set of recollections in common, namely, the ones the narrator describes in In Search of Lost Time: the famous sequence, for example, in which his memories are triggered by the uneven paving stones, the sound of a spoon against a plate, the feeling of a napkin, and the sight of George Sand’s François le Champi. As anyone who consults the prologue to Proust’s Contre Sainte-Beuve — a hybrid of fiction and criticism that he wrote as he searched for the form of what would become his masterpiece, and whose five pages contain in nuce many of the novel’s most famous episodes — will immediately understand, In Search of Lost Time cannot be any shorter than it is: in order for Proust’s optical instrument to simulate this experience of memory, enough time must have passed for the reader to forget their first encounters with the deceptively incidental details that Proust has seeded sub rosa in the earlier volumes, so that they can finally bloom into full significance in Time Regained.

    The scene in which these recollections unfold — the narrator arrives late to the final party of the Princesse de Guermantes and is made to wait until a pause between the movements of the Vinteuil sonata permits him to enter the drawing room — is the fastest-paced episode in the entire novel. It is not for nothing that it largely takes place in a library, since that is what the narrator compares the self to — a collection of forgotten days which memory takes from the shelf and dusts off. Crashing like wave after wave on narrator and reader alike, the series of recollections produces what can only be described as ecstasy. This is ecstasy not only in the sense of intense pleasure but also, relatedly, in the original sense of the word, used primarily in the context of sacred or religious experience: ek-stasis, standing outside oneself. Following Benjamin, Jameson notes that this doubly ecstatic experience is fundamentally temporal in nature: the narrator feels, and the reader is made to feel along with him, the rapture of standing outside time. Proust writes: “one minute freed from the order of time has recreated in us, in order to feel it, the man freed from the order of time.”

    The order of time is biological: the always finite number of minutes afforded to each living being. To be freed from it, if only for a minute, affords one an intimation of immortality, the longing for which, Proust says, can only be removed by death itself. But the order of time is also — and at the same time — social: the temporal regime constructed by the particular political economy into which a biological self finds itself thrown. Born in 1871, the year of the Paris Commune, at the outset of the second industrial revolution, Proust belonged to one of the first generations to experience the transportation technologies, such as the railroad and the automobile, and the communications technologies, such as the telephone, about which he writes so memorably, whose abilities to compress time and space have culminated in our own vertiginous market society. From the point of view of market society, there is or ought to be no such thing as being freed from the order of time, that is, time freed from generating value for someone else, whether directly, through one’s work or one’s purchases, or indirectly, through the built environment in which one is involuntarily bombarded by advertisements or through the attentional and behavioral data that can be harvested for a profit whenever one is connected via computer, mobile phone, e-reader, or wearable to the internet.

    Just as everything about market society seems designed to get in the way of reading Proust, reading Proust gets in the way of participating in market society. As long as you buy it, Proust’s novel remains a commodity, but as long as you are reading it on paper — and reading yourself in the meantime — you are not generating further material profit. Indeed, while you are reading Proust you and your time are quite literally operating at a loss. In the grand scheme of things, regaining your time from the market may amount to a negligible act of resistance to it, but one could find a worse benchmark for what constitutes a free society. A free society will be one in which everyone, if they so choose, has the time to read In Search of Lost Time.

     

    Notes on a Dangerous Mistake

    Several groups of rightwing intellectuals hover around the Republican Party, defending a stark conservatism. But there is a very different group, definitely rightwing, that is equally disdainful of Republican conservatives and Democratic progressives — who are all at bottom, its members insist, liberals: classical free-market liberals or egalitarian liberals, it’s all the same. These ideological outliers call themselves “post-liberal,” and they aim at a radical transformation of American society. Their overweening ambition is based on a fully developed theology, Catholic integralism, but the political meaning of this theology has not yet been fully worked out or, better, not yet revealed. A small group of writers, mostly academics, constitute what they hope, and I hope not, is the vanguard of a new regime and a Christian society. They have mounted a steady assault on liberal individualism and the liberal state, but so far they haven’t had anything like enough to say about life in the post-liberal world — not enough to warrant a comprehensive critique. 

    So here, instead, is a series of critical vignettes dealing first with the style of post-liberal writing as displayed in the work of Sohrab Ahmari and then with the strange version of world history that Patrick Deneen asserts but never defends. My own defense of liberalism comes later, along with a critique of recent post-liberal writing on the Ukraine war and some worries about the cautiously reticent, but sometimes ominous, description of the post-liberal future that can be found in the books of Patrick Deneen and Adrian Vermeule. For now, I ignore all the other post-liberals.

    Sohrab Ahmari, the leading non-academic among the post-liberals, makes his argument for “the wisdom of tradition” through stories of great men; only one woman and one married couple are included in the twelve chapters of his book The Unbroken Thread, which appeared in 2021. These are nicely told but highly contrived stories, with radical omissions and crucial questions left unanswered. Three examples will serve to show the tribulations of tradition. 

    Ahmari uses (the verb is right) Augustine to discuss the issue of God and politics. His story is focused on Augustine’s efforts to respond both to the rise of neo-paganism after the sack of Rome and to the Manichean heresy. The question that Ahmari poses is the great question of Augustinian politics: should Christians call on the secular authorities to use force against heretics and unbelievers? Augustine, with hesitation, ends by saying yes; Ahmari ends by saying…not quite yes. This is a common feature of post-liberal writing: just when Ahmari should show his hand, he covers his cards. I suppose that a defender of traditional wisdom, making his case in the United States today, can’t quite bring himself to call for religious persecution. His claim is simply that God “needs” politics — but exactly what is needed is left unspecified. Ahmari’s heart yearns for a strong Christian ruler who would set things right: “a godly servant ruler.” But his mind counsels prudence, and so he is unwilling to tell us exactly what the wisdom of tradition requires today. 

    Cardinal Newman is used by Ahmari to address the problem of critical thinking — which, from a traditional point of view, is indeed a problem. Should we think for ourselves? Should I follow my conscience? Late in his life, Newman responded to William Gladstone’s polemic against the doctrine of papal infallibility. Gladstone claimed that any Catholic who accepted the doctrine rendered himself incapable of thinking critically, unable to follow his conscience — and therefore not a useful citizen of a free state. Now remember that Newman had, after years of agonized reflection, followed his conscience and left the Church of England. Nonetheless, according to Ahmari, he now argued that conscience is not a matter for individual reflection; it is God’s truth implanted within us, and we need the help of churchly authority and Christian tradition to recognize the truth and understand what conscience requires. But then how did Newman manage to defy the authority and tradition of the Church of England? This seems an obvious question, which Ahmari doesn’t ask. Didn’t Anglicans think Newman a man of extravagant self-will? Surely he was guilty of thinking for himself. Why not, then, the rest of us? 

    Rabbi Abraham Joshua Heschel is used by Ahmari to teach that God wants us to “take a day off.” Indeed, Heschel wrote eloquently about the importance of the Sabbath. But he wrote even more eloquently about prophetic Judaism, Hasidic spirituality (he was the scion of a great Hasidic lineage), and social justice. Ahmari has little to say about the last of these, though he does tell us that Heschel believed that “a God-centric…understanding was the only sure guarantee of social justice and human dignity.” This understanding led Heschel to a life of intense political activism in the course of which we learned, and I think he learned, that the “only” in Ahmari’s sentence doesn’t accurately describe the political world. Famously, Heschel marched with Martin Luther King, Jr., but the Jews who marched with him were not his fellow Hasidim. Reform rabbis marched with him and, in large numbers, secular Jews. The ones I knew did not have a God-centric understanding, though they were fiercely committed to social justice. And it should be noted that the fight for the forty-hour week — two days off! — was led by largely secular leftists. Of course, Heschel had, and Ahmari has, a clear view of how the day off should be spent. But only Ahmari, I suspect, would endorse the traditional wisdom that Sabbath observance should be enforced by the political authorities. Naturally, he doesn’t quite say this.

    In Why Liberalism Failed, published in 2018, Patrick Deneen provides us with a monocausal and idealist account of modern history. Liberal ideas are inevitably (a word he likes) the cause of all the achievements and all the pathologies of modernity — the pathologies most importantly. He never engages with any alternative accounts, never considers other possible histories of the troubles he describes. I want to take up just a few.

    The most remarkable omission is the Protestant Reformation, the whole of it, which is never mentioned in Deneen’s book. His standard contrast is between classical and Christian (specifically Catholic) values, on the one hand, and liberalism on the other, and this leaves no space for Protestantism, whose major leaders, Luther and Calvin, while certainly not liberals, anticipated, along with their followers, many of the liberal ideas that Deneen identifies and deplores. Consider just these three: the conscientious individual, who thinks for himself (and more dangerously, for herself); the “gathered” congregation, an entirely voluntary association; and the critique of hierarchy, the call for a “priesthood of all believers.” These three concepts are surely rooted in Christian values, though certainly not in Deneen’s values, since they lead (inevitably?) to the English revolution, the execution of the king, the proliferation of increasingly radical sects, Milton’s defense of divorce, Cromwell’s dictatorship, and the Puritans of New England who supported the American revolution — which was definitely a liberal project. Deneen could write a book, though he won’t, asking why Protestantism failed. 

    Modern science is also omitted, since Deneen recognizes no independent historical development; he seems to believe that science is the creation of presumptuous liberals and their ambition to “conquer nature.” This ambition, he insists, is radically new; he contrasts it with the classical and Christian understanding of, and “passive acquiescence” in, nature’s limits. Break with the limits and you get both natural and social disasters: climate change, pandemics, Big Pharma, genetic engineering, abortion. But has humanity ever chosen to live within nature’s limits? I don’t think that history can be read that way. The ancient Egyptians, for example, didn’t simply live with the Nile’s annual flooding; they built an elaborate irrigation system that required management and control — and produced more food, a growing population, the absolutism of the Pharaohs, the house of bondage, and the enslavement of the Israelites (not to mention locusts, darkness, waters turned to blood, etc.). The ancient Greeks were not content with the human ability to swim; they designed and built ships and conscripted rowers, which led to international trade, wars, the Athenian empire, the massacre of the men of Melos, and the foolish decision to invade Sicily. Even medieval Christians, not content with the body’s natural frailty, invented body armor and the longbow, making it possible for Christian knights to rescue maidens in distress — leading also to the brutalities of crusading warfare. None of this had anything to do with liberalism. 

    One specific form of modern science, medicine, is the cause of much of what Deneen laments. The enormous expansion of our capacity to prevent and heal disease has shifted the attention of most of us from eternity to longevity (another break with the classical and Christian sense of limits: see Ahmari on “what’s good about death”) and opened the way for a vast expansion of the earth’s population. Here is a major cause of the size and scope of the modern state — on which liberalism has been, if anything, a constraint. Deneen argues that individual freedom and the chaos that it creates lead inevitably to bureaucratic regulation and statism, but the sheer number of individuals must also be a factor. Of course, if they all knew their place, and stayed in it, died at three score and ten (or, better, long before), thought only about eternity while they were alive, and accepted the authority of the Church — well, a smaller state might suffice. But Deneen’s description of the small communities and local commitments that post-liberalism would require makes no room for our contemporary millions. In any case, localism has had a short life among the post-liberals.

    One more omission from Deneen’s history: he treats capitalism as a liberal creation. Capitalism, in Deneen’s telling, was the inevitable result of what Thomas Hobbes in Leviathan calls the individual’s unconstrained and unlimited “pursuit of power after power.” Not so fast. In truth, those words perfectly fit, and were intended by Hobbes to fit, the contending patricians of republican Rome and the feudal lords of Christian Europe. In any case, capitalism surely has a history of its own. I don’t mean to commit myself to a monocausal materialism, only to insist that material factors — class conflict, available technology, ownership of land and capital — played a part in producing the modern economic system. Max Weber thought that Protestant theology — most importantly the (definitely illiberal) doctrine of predestination — also had a lot to do with the success of capitalism. It provided a motive for hard work and profit-seeking, which liberalism does not, and a doctrine that fits the mentality of what I still think of as “the rising bourgeoisie.” Now consider Marx’s famous description of the effects of capitalist activity: “All fixed fast-frozen relations with their train of ancient and venerable prejudices and opinions are swept away…All that is solid melts into air, all that is holy is profaned…” This is pretty close to Deneen’s account of what liberalism does. Marx provides a fairly plausible alternative. 

    If I were writing a different article, I would argue that many of the ills of American society derive from an unregulated and radically illiberal capitalism. Unexpectedly, Ahmari, in his newest book, Tyranny, Inc., makes exactly this argument with a series of vivid stories about capitalist predation. He is now a post-neo-liberal and even a social democrat. We are never told how all this fits with the “wisdom of tradition.” 

    Very little is said about the past in post-liberal texts. Deneen does acknowledge that the world before liberalism was a time of “extensive practices of slavery, bondage, inequality, disregard for the contributions of women, and arbitrary forms of hierarchy and the application of law.” But he has little to say about these “practices” and less to say about the euphemistic “disregard” of women’s “contributions,” which he doesn’t bother to describe. More importantly, he offers no explanation of how all these evils co-existed with “classical and Christian thought and practice,” supposedly dominant in the pre-liberal age. Again and again, post-liberals contrast a classical and Christian age of political order, strong families, communities living within natural limits, individuals who respect tradition and recognize authority — with a destructive liberalism. But then they decline to give us what we surely need if we are to accept their account, which is a concrete historical description of this lost world. 

    Consider a few key moments. I will focus on family life, since it is a central contention of all the post-liberals that liberalism has destroyed the family. Indeed, this focus is common among opponents of liberalism the world over. Thus Vladimir Putin, in a recent decree on spiritual and moral values, denounced the American and Western “destruction of the traditional family through the promotion of nontraditional sexual relations.” 

    What traditional family?

    — The family in ancient Athens was certainly strong: women were harshly subordinated and even secluded; they had no presence in the public life of the city. Men, by contrast, were out and about, politically active, sexually free. Deneen is a fierce opponent of sexual freedom (which he takes to be the inevitable product of liberal individualism), but perhaps only when it extends to women. In any case, he never tells us about the boy prostitutes and their male clients who somehow coexisted with classical philosophy.

    — The accounts of family life among patricians in the Roman republic and, later on, inside the imperial palace suggest a society in which it is your relatives who kill you. The post-liberals disapprove, I am sure, but they don’t discuss. How was this sort of thing possible given classical ideas about self-restraint? The use and abuse of male and female slaves was also, it seems, consistent with classical family values.

    — Family life in medieval Europe was shaped by the feudal hierarchy, which determined the everyday misery of the Christian men and women who lived at its lowest level. The serfs were offered eternity, but their lives were brutal and short. Their families were radically insecure. Women died in childbirth in large numbers, and men remarried, mostly younger women, girls, really, who rarely chose their husbands and were instantly pregnant, and pregnant again. The high rate of pregnancy wasn’t a sign of the pre-liberal commitment to what Adrian Vermeule calls the “traditional multi-generational family”; it followed from the even higher rate of infant death. The women certainly didn’t have time to think about divorce, which wasn’t thinkable anyway (and, according to the post-liberals, shouldn’t have been); many families were already “broken,” many children abandoned by impoverished and desperate parents. (Remember the Children’s Crusade.) It is hard to imagine these families as a happy alternative to life under liberalism. And please consider the droit de seigneur, the feudal lord’s claim on the virginity of any of his serfs’ daughters. This is a natural feature of the feudal hierarchy, and definitely an assault on family values. I am sure that Christian theologians condemned the practice, if they ever thought about it, but so far as I can tell parish priests did nothing to oppose it — they were underlings themselves in a parallel hierarchy. 

    — The family that post-liberals love is not the family of the classical age or of medieval Christendom. In fact, it is the bourgeois family, which was the creation of eighteenth- and nineteenth-century liberals. Consider the critique of aristocratic libertinism, the end of arranged marriage, the possibility of romantic love, the patriarchal home that was every man’s castle — all this was liberalism at work. The problem for post-liberals is that the work goes on. The stereotypical bourgeois family included an employed man and a woman at home, caring for at least two children. Actually, large numbers of women worked outside the home — on farms, for example (post-liberals like to celebrate the family farm), as secretaries and teachers, and in the sweatshops of the garment trades. And since in liberal times women are allowed to think for themselves, they began to seek wider roles, not only in the economy but also in the polity. Deluded by liberalism, Deneen writes, they sought “emancipation from their biology” — hence (inevitably) divorce, broken families, and abandoned children. I assume that men who share in the housework and sometimes look after the children have also forsaken their biological destiny. Yes, the liberal/bourgeois family was better than any of the families we know about in the age when classical and Christian values were dominant. It’s just that the post-liberal conception of “biology” is not the end of the story.

    Please note that what I have written above about classical and medieval times is a fairly generous account of the history that post-liberals ignore. I didn’t say anything about the Crusades or the Inquisition, both of which were surely the direct consequence of established Christian (Catholic) values. Nor have I tried to imagine myself in medieval Europe, a Jew expelled or murdered in the name of religion. Perhaps post-liberals would claim that all this was a radical distortion of Christian charity. But if that’s right (and I acknowledge the importance of charity in Christian thought), then why can’t liberals say that Deneen isn’t describing the consequences of liberal ideology but rather the distortions of it? (This assumes that his description is accurate, which it often isn’t.) In any case, the post-liberals owe us a frank accounting of everything — including all the ugliness — that went along with classical and Christian thought and practice. Actually, a reader of Vermeule might think that something like a religiously inquisitorial state is what some of the post-liberals have in mind (see below) — though there must be liberal post-liberals who would shrink from that.

    There is something very strange about the vehemence of the post-liberal critique of liberalism. Consider these sentences from Deneen:

    Unlike the visibly authoritarian regimes that arose in dedication to advancing the ideologies of fascism and communism, liberalism is less visibly ideological and only surreptitiously remakes the world in its image. In contrast to its crueler competitor ideologies, liberalism is more insidious… 

    Let’s parse this. Liberalism is an almost, not quite, invisible ideology, less cruel than fascism and communism (but cruel enough?) that insidiously advances its authoritarian project of remaking the world. Since Deneen lives in this remade world, we can examine some of its features. The Catholic university where he teaches benefits from tax exemptions and federal grants from the insidiously liberal state. He says what he likes in his classes and publishes uncensored books with a major university press. If he lectures at another university and is shouted down by hostile students (as he might be), it is liberals like me who rush to his defense in the name of free speech and academic freedom, which are historically liberal values. He has never had to worry about the secret police knocking on his door in the middle of the night. He and all the other post-liberals write for magazines that are delivered across the country by the state-run post office. He has a passport and is free to visit Orban’s Hungary and return whenever he likes. He may be required to wear a mask in a time of pandemic, but he is free to organize a protest against this example of liberal authoritarianism. 

    Fascism and communism are indeed more cruel.

    For all that, there certainly is much that is wrong with American life today, and many of the things that the post-liberals criticize require criticism: extreme inequality, the ravaged ecosystem, high rates of drug addiction and suicide, the “left behind” communities of the Rust Belt, meritocratic arrogance, and more. All this is, according to Deneen, caused by liberalism, but Ahmari ascribes much of it to a predatory capitalism and argues for strong unions and something like a New Deal state. His new book, dedicated to Deneen and Vermeule (among others), suggests the possibility of a post-liberal alliance with union activists and green militants. But I find it hard to imagine that he and his friends could ever work closely with secular leftists, as liberation theologians and worker priests did in their time. 

    Why are they unlikely to imitate those exemplary Christians? Liberal egalitarians believe in and are fighting for a society of free and equal individuals. Post-liberals do not aim at that kind of freedom or equality. Here is their central argument: free individuals are not really free unless they freely decide to do the right things — unless, writes Ahmari, their conscience is guided by the Church. “Liberty,” Vermeule insists, “is no mere power of arbitrary choice, but the faculty of choosing the common good.” But who defines the common good? Not us, arguing among ourselves. Ordinary men and women, you and I, won’t know or choose the common good unless we are guided from above by a new elite committed to traditional — classical and Christian — values. We have no right to get it wrong. So the hierarchies created, post-liberals claim, by liberalism must give way to a new and simpler hierarchy of the wise who know and the simple who don’t, the few and the many, “the great and the ordinary” — a cadre of men (and women?) with the right values at the top and a mass of good-natured commoners looking up. The problem with liberal individuals, of course, is that they don’t look up for guidance and instruction. Here is a liberal tradition: not looking up.

    In a new book published late last year called Regime Change, Deneen provides a concise description of the post-liberal future:

    What is needed is a mixing of the high and the low, the few and the many, in which the few consciously take on the role of aristoi, a class of people who, through supporting and elevating the common good that undergirds human flourishing, are worthy of emulation and, in turn, elevates the lives, aspirations, and vision of ordinary people. 

    This new elite, Deneen believes, will be brought to power by a new political force: the world-wide populist opposition to the rule of “gentry liberals and the laptop class.” He acknowledges that “this movement from below is untutored and ill led,” but it holds great political promise. Its members are the “many”; Deneen ‘23 regularly calls them the “working class,” a term that I don’t believe Deneen ‘18 ever used. They are working men, preferably self-employed, carpenters, plumbers, electricians, and stay-at-home women, the embodiment of traditional values: family and faith. They have supported Donald Trump, Deneen ‘23 says, but he hopes that they are waiting for the aristoi.

    The coming aristocrats are never given proper names, but I assume they are the post-liberal intellectuals. Like any vanguard, this one is made up of the people who understand its necessity. They will tutor the working class but also, they promise, be restrained by its common sense. Deneen sometimes sounds like a social democrat, describing the critical role of the working class; he argues for strong unions (and, a little mysteriously, strong guilds) and a decentralization of the liberal state. Ahmari is an actual social democrat and a supporter of the labor movement, but if he stands with his fellow post-liberals, he must believe that when they come to power the voice of labor will mostly be advisory — to remind the aristoi of traditional values and ordinary virtues. Deneen is amazingly patronizing toward the newly discovered workers.

    We are never told how the working class might actually exercise political power. Vermeule writes instead that the influence of the lower orders, the ordinary folk, will work through representation and consultation — it is “what one might call democracy without voting” (Vermeule’s italics). Deneen ‘23 is equally explicit in rejecting the liberal idea of government by the consent of the governed. He argues for a “preliberal” idea: “the consent of a community to govern itself through the slow accumulation and sedimentation of norms and practices over time.” This is far preferable to the actual agreement of “deracinated” men and women (like you and me) as in “the liberal social contract tradition.” Note the unintended concession that this is a tradition. Ahmari could have written about it in a chapter focusing, say, on John Locke or John Rawls. That thread is unbroken.

    Democracy without voting doesn’t sound democratic. We can get a better idea of the post-liberal view of democracy by looking at what Ahmari and Deneen have written about the Ukraine war. Sometimes they are simple isolationists, as when Ahmari calls Ukraine “a country long acknowledged not to implicate core US interests.” But he himself is strongly engaged. He thinks Ukraine belongs, by nature or destiny, in the Russian sphere of influence. “The rights of nations,” he writes, “are circumscribed by geography, history, economics, chance. Above all, by power… Ukraine’s ‘friends’ sadly led her to believe she could escape these laws.” (I immediately thought: the rights of women are circumscribed by nature and by power. Feminists sadly led them to believe…) Ahmari signed a petition that called the Ukrainians “victims of the [Western] attempt to bog down Moscow in a long, devastating insurgency.” “Victims” is the most generous term for the Ukrainians that I have found in post-liberal tweets, articles, and interviews. For Deneen, Ukraine is a “pawn of American gnostic dynamics” (I will have to explain that). What is missing in post-liberal writing is any recognition of the Ukrainians as political agents, who have created a democratic state and are now fighting to defend it. 

    Patrick Deneen has written a long essay, inspired by one of Eric Voegelin’s books, on the role of gnosticism in the Ukraine war. Voegelin was a German-American political philosopher who died in 1985 and was interested in the ontological implications of modern ideologies. The gnostics were Christian heretics who denied Augustine’s distinction between the City of God and the City of Man and sought a messianic transformation of the earthly city. Voegelin despised gnosticism for its pretension to a privileged and irrefutable apprehension of sacred knowledge and its impatience to act politically on this revelation, and he devoted many writings to finding its traces in modern politics. He believed that fascists and communists were modern secular gnostics. Now Deneen weirdly adds American liberals to that group. “What was once a ‘reformist left’ is today a radicalized messianic party advancing its gnostic vision amid the ruins of Christian civilization.” This messianic party is one side, the Western side, of the Ukraine war. On the other side, also described in Voegelin’s terms, stands the Russian Orthodox Church, a pagan, that is, civic or national version of Christianity, led by secular authorities, like the czar and now the president. Deneen does not explicitly choose sides in this conflict; he would prefer an Augustinian intervention. But all his anger is directed against the alleged liberal gnostics.

    My favorite line in Deneen’s epic essay comes when he warns the Ukrainians that the American messianists will in the end discard “Ukraine’s blue and yellow for a rainbow flag.” Think about that. Surely gay Ukrainians are already flying the rainbow flag, and they would rightly deny that it is in any way incompatible with the national flag. I don’t think that Deneen is lost here or that he misunderstands. His vision, which he assumes “ordinary” Ukrainians share, is of a divinely sanctioned society of pious, right-thinking heterosexual men and women—and no visible others. A liberal democracy is more inclusive; it allows for, encourages, a pluralism of flags.

    Though the post-liberals have little to say about working-class activism, they have a lot to say about the authority of the new elite. Here it is important to notice a break between the accounts of post-liberalism in Deneen ’18 and Deneen ’23. In the first book, Deneen offers us an escapist picture of the future that he hopes for: post-liberals will leave hegemonic liberalism and the authoritarian state behind; they will live together in small communities, in tight families, accepting nature’s limits, following the old classical and Christian traditions. This is the “Benedict Option.” No conscientious individuals, no questing scientists, no capitalist entrepreneurs, no intrusive state bureaucrats, and also no social gospel and no Christian involvement in the larger society — a kind of backward-looking, inwardly focused, secessionist communitarianism. More recently, in his 2023 book, written perhaps under the influence of the openly statist Vermeule, Deneen brings the intrusive bureaucrats back. What matters, it turns out, is that they are now true aristocrats, men (and women?) who are committed to classical and Christian values. Given what Ahmari calls “godly servant rulers,” everything else follows.

    Deneen ’23 argues that the “regime change” he seeks “must begin with the raw assertion of political power by a new generation of political actors inspired by an ethos of common good conservatism.” Vermeule, in his recent book Common Good Constitutionalism, gives us a pretty clear sense of what that raw assertion would look like. He provides a strong defense of the administrative state (when it is in the right hands) — which includes a list of the liberal constraints on state power that he means to leave behind. For example: the liberal axiom “that the enforcement of public morals and of public piety is foreign to our constitutional order.” For example: the libertarian assumption “that government is forbidden to judge the quality and moral worth of public speech.” For example: the liberal idea that good government should “aim at maximizing individual autonomy or minimizing the abuse of power.” Instead, he writes, the aim should be “to ensure that the ruler has both the authority and the duty to rule well” — that is, to enforce public piety and regulate public speech (and prevent abortion, ban gay marriage, encourage “traditional” families and enthusiastic reproduction).

    What would life be like for “ordinary” people in the post-liberal age? Everyone who has written about Deneen, Vermeule, and company agrees that they are disturbingly vague about this. But even ordinary readers can make out a rough picture of what post-liberalism would mean for you and me. Looking up, accepting the guidance of the aristoi, we would lead simple, happy lives, working (for a living wage, so that women can stay at home), praying, raising lots of children. Deneen likes the Hungarian policy of giving a lifetime tax exemption to any woman with four or more children. If we ever speak in public, we would, freely, say all the right things. Our piety would have the benefit of state support. We would find only the right books in the public library (the aristoi would probably have to read more widely, for the common good). We wouldn’t have to bother to vote to remind the aristoi of our ordinary virtues. 

    We would have an associational life beyond the family, in clubs, unions, guilds, and corporations. “Subsidiarity” is an important Catholic doctrine, which defends the independence of these associations but also calls for their integration in a higher unity. Vermeule emphasizes this last point. Sometimes, he writes, these associations fail “to carry out their work in an overall social scheme that serves the common good.” This constitutes a “state of exception” (the concept comes from the fascist German philosopher Carl Schmitt) and “requires extraordinary intervention by the highest level of public authority.” So we will have to be extremely careful about what we do in the unions that Ahmari supports. Looking up, we will probably endorse the social scheme proposed by our betters.

    I wrote at the beginning that I would provide my own defense of liberalism. The description above of the post-liberal state and society — that is my defense of liberalism. Individual choice, legal and social equality, critical thinking, free speech, vigorous argument, meaningful political engagement: these are the obvious and necessary antidotes to post-liberal authoritarianism. Above all, we must treasure the right to be wrong. The post-liberals are actually exercising that right. They shouldn’t be allowed to take it away from the rest of us.

    Saudi Arabia: The Chimera of a Grand Alliance

    Even alliances between countries that share similar cultures and rich, intersecting histories can be acrimonious. France and Israel, for example, provoke vivid and contradictory sentiments in many Americans. Franco-American ties are routinely strained. No one in Washington ever believed that Charles de Gaulle’s nuclear independence, guided by the principles of tous azimuts (shoot in any direction) and dissuasion du faible au fort (deterrence of the strong by the weak), meant that France might try to intimidate the United States. But there were moments when it wasn’t crystal clear whether Paris, having left the North Atlantic Treaty Organization’s integrated military command, might harmfully diverge from Washington in a confrontation with the Soviet Union. Still, even when things have been ugly, quiet and profound military and intelligence cooperation has continued with the French, almost on a par with the exchanges between Washington, London, and Canberra. It didn’t hurt that a big swath of affluent Americans has loved Paris and the sunnier parts of France for generations, and that French and American universalism essentially speak the same language. These things matter.

    The United States has sometimes been furious at Israel — no other ally has, so far as we know, run an American agent deep inside the U.S. government hoovering up truckloads of highly classified information. Israel’s occupation of the West Bank, much disliked and denounced by various American governments, is probably permanent: setting aside the Israeli right’s revanchism, the proliferation of ever-better ballistic weaponry and drones negates any conceivable good faith that might exist in the future between Israeli and Palestinian leaders, who never seem to be able to check their own worst impulses. Geography is destiny: Israel simply lacks the physical (and maybe moral) depth not to intrude obnoxiously into the lives of Palestinians. The Gaza war has likely obliterated any lingering Israeli indulgence towards the Palestinian people and any willingness to take what used to be called, before the Intifadas eviscerated the Israeli left, “risks for peace.” An ever larger slice of the Democratic Party is increasingly uncomfortable with this fate: the rule of (U.S.-subsidized) Westerners over a non-Western, mostly Muslim, people. But the centripetal forces — shared democratic and egalitarian values, intimate personal ties between the American and Israeli political and commercial elites, a broader, decades-old American emotional investment in the Jewish state, a common suspicion of the Muslim Middle East, and a certain Parousian philo-Semitism among American evangelicals — have so far kept in check the sometimes intense official hostility towards Israel and the distaste among so much of the American intelligentsia.

    None of this amalgam of culture, religion, and history, however, works to reinforce relations between the United States and Islamic lands. Senior American officials, the press, and think tanks often talk about deep relationships with Muslim Middle Eastern countries, the so-called “moderate Arab states,” of which Egypt, Jordan, and Saudi Arabia are the most favored. Presidents, congressmen, diplomats, and spooks have certainly had soft spots for Arab potentates. The Hashemites in Jordan easily win the contest for having the most friends across the Israel divide in Washington: sympathizing with the Palestinian cause, if embraced too ardently, could destroy the Hashemites, who rule over sometimes deeply disgruntled Palestinians. American concern for the Palestinian cause rarely crosses the Jordan River. (Neither does even more intense European concern for the Palestinians intrude into their relations with the Hashemite monarchy.) 

    The Hashemites are witty enough, urbane enough, and sufficiently useful to consistently generate sympathy and affection. Even when King Hussein went to war against Israel in 1967 or routinely sided with Saddam Hussein, his style and his manner (and I’m-really-your-friend conversations with CIA station chiefs and American ambassadors) always encouraged Washington to forgive him his sins. The Hashemites, like the Egyptian military junta, have routinely, if not always reliably, done our dirty work when Washington needed some terrorist or other miscreant held and roughly interrogated. Such things matter institutionally, building bonds and debts among officials. 

    But little cultural common ground binds Americans to even the most Westernized Arabs. Arabists, once feared by Israel and many Jewish Americans, always had an impossible task: they had to use realist arguments — shouldn’t American interests prefer an alliance with twenty-two Arab countries rather than with a single Jewish one? — without underlying cultural support. They had to argue dictatorship over democracy or belittle Israel’s democracy enough (“an apartheid state”) to make it seem equally objectionable. Outside of American universities, the far-left side of Congress, the pages of The Nation, Mother Jones, and the New York Review of Books, and oil-company boardrooms, it hasn’t worked — yet. Too many Americans have known Israelis and visited the Holy Land. And too many are viscerally discomfited by turbans and hijabs. Culture — the bond that rivals self-interest — just isn’t that fungible.

    Even the Turks, the most Westernized of Muslims, didn’t have a large fan club in America when the secular Kemalists reigned in Ankara — outside of the Pentagon and the Jewish-American community, which deeply appreciated Turkey’s engagement with Israel. The Turks’ democratic culture never really blossomed under the Kemalists, who couldn’t fully shake their fascist (and Islamic) roots. The American military still retains a soft spot for the Turks — they fought well in Korea, and their military, the southern flank of NATO, has a martial ethic and a level of competence far above any Arab army; and they have continuously allowed the Pentagon to do things along the Turkish littoral, openly and secretly, against the Soviets and the Russians. 

    Yet their American fan club has drastically shrunk as the Turkish government, under the guidance of the philo-Islamist Recep Tayyip Erdoğan, has re-embraced its Ottoman past, enthusiastically used state power against the opposition and the press, and given sympathy and some support to Arab Islamic militants, among them Hamas. The Turks have repeatedly pummeled Washington’s Kurdish allies in Syria (who are affiliated with Ankara’s deadly foe, the terrorism-fond Kurdistan Workers Party). More damning, Erdoğan purchased Russian S-400 ground-to-air missiles, compromising Turkey’s part in developing and purchasing America’s most advanced stealth fighter-bomber, the F-35. The Pentagon, always Turkey’s most reliable ally in Washington, feels less love than it used to feel.

    No great European imperial power ever really integrated Muslim states well into its realm. Great Britain and France did better than Russia and Holland; the Soviet Union did better than Russia. With imperial self-interest illuminating the way, the British did conclude defensive treaties with allied-but-subservient Muslim lands — the Trucial States and Egypt–Sudan in the nineteenth and twentieth centuries — that could greatly benefit the natives. The emirs in the Gulf, once they realized that they couldn’t raid Indian shipping without fierce retribution, accepted, sometimes eagerly, British protection, grafting it onto the age-old Gulf customs of dakhala and zabana — finding powerful foreign patrons. The emirates needed protection from occasionally erupting militant forces from the peninsula’s highlands — the Wahhabi warriors of the Saud family.

    The British, however, failed to protect their most renowned clients, the Hashemites, in their native land, the Hijaz in Arabia, after wistfully suggesting during World War I that the Hashemites might inherit most of the Near East under Great Britain’s dominion. King Hussein bin Ali had proved obstinate about accepting Jews in Palestine and the French in Syria. Britain switched its patronage to the Nejd’s Abdulaziz bin Abdul Rahman Al Saud. Backed by the formidable Ikhwan, the Brothers, the Wahhabi shock troops who had a penchant for pillaging Sunni Muslims and killing Shiite ones, Ibn Saud conquered the Hijaz, home to Mecca and Medina and the Red Sea port of Jeddah, in 1925. Checked by the Royal Air Force in Iraq and the Royal Navy in the Persian Gulf, Saudi jihadist expansion stopped. In 1929 Ibn Saud gutted the Ikhwan, who had a hard time accepting the post-World-War-I idea of nation-states and borders, and created more conventional military forces to defend his family and realm. In 1932 he declared the kingdom of Saudi Arabia. The dynasty’s Hanbali jurisprudence remained severe by the standards enforced in most of the Islamic world, but common Sunni praxis and political philosophy held firm: rulers must follow the holy law, but they have discretion in how they interpret and enforce it; in practice, kings and princes could sometimes kick clerics and the sharia to the ground when exigencies required.

    Since Britain’s initial patronage, the Saudis officially have remained wary of foreigners, even after the 1950s when the royal family started allowing thousands of them in to develop and run the kingdom. Ibn Saud put it succinctly when he said, “England is of Europe, and I am a friend of the Ingliz, their ally. But I will walk with them only as far as my religion and honor will permit.” He might have added that he appreciated the power of the RAF against the Ikhwan on their horses and camels. After 1945, Ibn Saud and his sons sought American protection and investment. They saw that Britain was declining; it was also massively invested in Iran. The modern Middle East has been an incubator of ideologies toxic to monarchies. The three Arab heavyweights of yesteryear — Baathist Iraq, Baathist Syria, and Nasserite Egypt — were all, in Saudi eyes, ambitious predators. The Soviet Union lurked over the horizon, feeding these states and, as bad, Arab communists and other royalty-hating leftists who then had the intellectual high ground in the Middle East. But there was an alternative.

    American power was vast, Americans loved oil, and America’s democratic missionary zeal didn’t initially seem to apply to the Muslim Middle East, where American intrusion more often checked, and usually undermined, European imperial powers without egging on the natives towards democracy. (George W. Bush was the only American president to egregiously violate, in Saudi eyes, this commendable disposition.) American oilmen and their families came to Saudi Arabia and happily ghettoized themselves in well-organized, autonomous, well-behaved communities. The American elite hardly produced a soul who went native: no T.E. Lawrence, Gertrude Bell, or Harry St. John Bridger Philby — passionate, linguistically talented, intrepid Englishmen who adopted local causes, sometimes greatly disturbing the natives and their countrymen back home. A nation of middlebrow pragmatic corporations, backed up by a very large navy, Americans seemed ideal partners for the Saudi royals, who were always willing to buy friends and “special relationships.” As Fouad Ajami put it in The Dream Palace of the Arabs, “The purchase of Boeing airliners and AT&T telephones were a wager that the cavalry of the merchant empire would turn up because it was in its interest to do so.”

    But Americans could, simply by the size of their global responsibilities and strategies, be unsettling. In Crosswinds, Ajami’s attempt to peel back the layers of Saudi society, he captures the elite’s omnipresent trepidation, the fear of weak men with vast wealth in a region defined by violence:

    “The Saudis are second-guessers,” former secretary of state George Shultz said to me in a recent discussion of Saudi affairs. He had known their ways well during his stewardship of American diplomacy (1982–1989). This was so accurately on the mark. It was as sure as anything that the Saudis lamenting American passivity in the face of Iran would find fault were America to take on the Iranians…. In a perfect world, powers beyond Saudi Arabia would not disturb the peace of the realm. The Americans would offer protection, but discreetly; they would not want Saudi Arabia to identify itself, out in the open, with major American initiatives in the Persian Gulf or on Arab–Israeli peace. The manner in which Saudi Arabia pushed for a military campaign against Saddam Hussein only to repudiate it when the war grew messy, and its consequences within Iraq unfolding in the way they did, is paradigmatic. This is second-guessing in its purest.

    Saudi Arabia has had only one brief five-year period, from 1973 to 1978, when the Middle East (Lebanon excepted) went more or less the way that the royal family wanted. They weren’t severely threatened, their oil wealth had mushroomed, internal discontent had not metastasized (or at least was not visible to the royal family), and everybody — Arabs, Iranians, Americans, Soviets, and Europeans — listened to them respectfully. In 1979, when the Iranian revolution deposed the Shah, and Sunni religious militancy put on muscle, and the Soviets invaded Afghanistan, the golden moment ended. Enemies multiplied. Since then, as Nadav Safran put it in Saudi Arabia: The Ceaseless Quest for Security, “the Saudis did not dare cast their lot entirely with the United States in defiance of all the parties that opposed it, nor could they afford to rely exclusively on regional alliances and renounce the American connection altogether in the view of the role it might play in various contingencies… the leadership… endeavored to muddle its way through on a case-by-case basis. The net result was that the American connection ceased to be a hub of the Kingdom’s strategy and instead became merely one of several problematic relationships requiring constant careful management.”

    Which brings us to the current Saudi crown prince, Muhammad bin Salman, the de facto ruler of the country — easily the most detested Saudi royal in the West since the kingdom’s birth. With the exception of Iran’s supreme leader, Ali Khamenei, who is the most indefatigable Middle Eastern dictator since World War II, MBS is the most consequential autocrat in the region. And the prince has made a proposal to America, a proposal that may survive the Gaza war, which has reanimated anti-Zionism and constrained the Gulf Arab political elite’s decade-old tendency to deal more openly with the Jewish state. To wit: he is willing to establish an unparalleled, tight and lucrative relationship with Washington, and let bygones be bygones — forget the murder of Jamal Khashoggi and all the insults by Joe Biden — so long as America is willing to guarantee Saudi Arabia’s security, in ways more reliable than in the past, and provide Riyadh the means to develop its own “civilian” nuclear program. Saudi Arabia would remain a major arms-purchaser and big-ticket commercial shopper and a reliable oil producer (the prince is a bit vague on exactly what Saudi Arabia would do with its oil that it isn’t doing now or, conversely, what it might not do in the future if Riyadh were to grow angry). And the Saudis would establish diplomatic relations with Jerusalem — clearly the pièce de résistance in his entreaty to the United States. With the Gazan debacle, the appeal of MBS’ pitch will increase for Israelis and Americans, who will seek any and all diplomatic means to turn back the anti-American and anti-Zionist tide.

    MBS and the Jews is a fascinating subject. It is not atypical for Muslim rulers, even those who sometimes say unkind things about Jews, to privately solicit American, European, and Israeli Jews. Having Jews on the brain is now fairly common in the Islamic world, even among Muslims who aren’t anti-Semites. Imported European anti-Semitism greatly amped up Islam’s historic suspicions of Judaism: in the Quran, the Prophet Muhammad is clearly disappointed by the Jewish refusal to recognize the legitimacy, the religious continuity, of his calling, which led to the slaughter of an Arabian Jewish tribe, the Banu Qurayza. Dissolve to the Holocaust, the creation of Israel, the wars and their repeated Arab defeats, the centrality of Israel in American and Soviet Middle Eastern foreign policy, the prominence of Jewish success in the West, especially in Hollywood, the constant chatter among Western Christians about Jews — all came together to give Al-Yahud an unprecedented centripetal eminence in the modern Islamic Middle East. 

    When MBS came to the United States in 2018, he and his minions engaged in extensive Jewish outreach. The prince admires Jewish accomplishment. His representatives are similarly philo-Semitic. The head of the World Muslim League, Muhammad bin Abdul Karim Issa, sometimes sounds as if he could work for B’nai B’rith International. Not that long ago, before 9/11, the League, an official organ of the Saudi state, pumped a lot of money into puritanical (Salafi) missionary activity, competing with the “secular” Egyptian establishment and the clerical regime in Tehran as the most ardent and well-funded proliferators of anti-Semitism among Muslims. So it is intriguing that MBS, whose father long had the Palestinian dossier at the royal court, has developed what appears to be a sincere and, at least for now, non-malevolent interest in Jews. 

    One suspects that the prince sees a certain religious and cultural affinity with Jews: Judaism and Islam are juristically and philosophically much closer to each other than Christianity and Islam are. MBS is sufficiently well-educated — he has a law degree from King Saud University — to know this; he has now traveled enough, and met enough Jews around the world, to feel it. Nearly half of the Jews in Israel came from the Middle East. The other half — the Ashkenazi, or as Bernard Lewis more accurately described them, the Jews of Christendom — often saw themselves, before arriving in Zion, as a Middle Eastern people in exile. Here is a decent guess about MBS’ reasoning: if the Jews, a Middle Eastern people now thoroughly saturated with modern (Western) ideas, could become so accomplished, then Saudi Muslims could, too. The Jewish experience — and association with Jews — might hold the keys to success.

    There is a very long list of Muslim rulers from the eighteenth century forward who, recognizing the vast chasm in accomplishment between Western (and now Westernized Asian) and Islamic lands, have tried to unlock the “secrets” of foreign power and success. Oil-rich Muslim rulers have tried to buy progress with petroleum profits. MBS certainly isn’t novel in his determination to make his own country “modern.” His audacity, even when compared with Shah Mohammad Reza Pahlavi, who aspired to make Iran “the Germany of the Middle East,” is impressive. There is the prospective giant metal tube in the northwest corner of the country, which, according to the prince’s NEOM vision (“neo” from the Greek and “m” from the Arabic mustaqbal, future), will one day hold upwards of nine million people in a verdant paradise where everyone has the Protestant work ethic and the air-conditioning never breaks down. This is the dreamscape of an Arab prince who is not intellectually shackled by the rhythms and the customs of his homeland. He is also building a vast resort complex on the Red Sea, funded by the sovereign wealth fund because Western and Asian bankers remain dubious about its profitability. A dozen five-star luxury resorts, dependent on visiting affluent Europeans, will have to allow topless bathers and a lot of alcohol if they have any chance of making money; thousands of lower-class Saudi men — not imported foreign labor — will in theory keep these resorts running.

    The prince is searching for the keys to unleash Saudi Arabia’s non-oil potential — using prestigious Western consultancy firms that promise to bring greater productivity and efficiency to the gross national product. He is trying to do what every significant Muslim ruler has done since Ottoman sultans realized they could no longer win on the battlefield against Christians: grow at home greater curiosity, talent, and industry.

    Unlike the Arab elites in the lands that started seriously Westernizing in the nineteenth century and have since seen their countries racked and fractured by foreign ideologies, brutal authoritarian rulers, rebellions, and civil and sectarian wars, MBS appears to be an optimist. He believes that, under his firm guidance, Saudi Arabia can leapfrog from being the archetypal orthodox Islamic state to a self-sustaining, innovative, entrepreneurial, tech-savvy, well-educated powerhouse. Ajami, the finest chronicler of the Arab world’s misery, was deeply curious about Saudi Arabia because it was the last frontier, a land with considerable promise that had not yet embraced enough modernity, in all the wrong ways, to cock it up. The Saudi identity has been slow to nationalize — it was decades, perhaps a century, behind the cohering forces that gave Egypt and then Syria some sense of themselves. As Alexis Vassiliev, the great Russian scholar of Saudi Arabia and Wahhabism, put it:

    The idea of a national territorial state, of a “motherland,” was new to Arabian society. The very concept of a motherland, to which individuals owe their primary loyalty, contradicts the spirit of Islam, which stresses the universal solidarity of believers as against non-Muslims. National consciousness and national feelings in Saudi Arabia were confined to a narrow group working in the modern sector of the economy and in the civil and military bureaucracy. Those who described themselves as nationalists were, rather, reformers and modernists, who wanted to create a more modern society. But their sentiments were so vague that the left wing of the “nationalists” even avoided using the name Saudi Arabia because of their attitude to the Al Saud.

    Saudi Arabia has been rapidly modernizing since the 1960s. Measured by massive concrete buildings, roads, luxury hotels with too much marble, electrification, communications, aviation, urban sprawl, rural decline, and access to higher education, Vassiliev’s observation is undoubtedly correct: “Saudi Arabia has experienced more rapid change than any other Middle Eastern country and the old social balance has been lost forever.” But spiritually, in its willingness to import Western ideas as opposed to Western gadgets, know-how, aesthetics, and organization, the kingdom changed only fitfully. Royal experiments in reform, especially under King Abdullah (2005–2015), could be ended as quickly as they began. 

    Before MBS, Saudi rulers and the vast oil-fed aristocracy were deeply conservative at home (if not in their homes), fearful of the outside world that relentlessly corroded the traditions that gave the kingdom its otherworldly, female-fearing, fun-killing, profoundly hypocritical weirdness. But this conservative status quo also offered a certain moral coherence, political stability (the royal family, thousands strong, were collectively invested), as well as a quirky governing class that was fine for decades, through the worst of the Wahhabi efflorescence that followed the Iranian revolution and the seizure of the Great Mosque in Mecca, with a gay intelligence chief. Saudis might be haughty, lacking the multilingual grace that came so easily to old-school Arabs, who retained Ottoman propriety with first-rate Western educations, but they were aware of their limitations. They gave the impression that they couldn’t compete — even at the apex of Saudi power in the mid-1970s. Most of the royal family likely didn’t want to try. When Ajami was alive (he died in 2014, a year before MBS began his rise), Saudi Arabia hadn’t taken the giant, irreversible leap forward. It has now. 

    The crown prince has been a one-man wrecking ball, transforming the country’s collective leadership, where princes — uncles, brothers, sons, and cousins — effectively shared power under the king, into a dictatorship. Whatever brakes are still on the system (King Salman is old and ailing, but rumors don’t yet have him non compos mentis), they likely will not outlast Salman’s death. There has never been any clear demarcation between the nation’s treasury and the royal family’s purse; MBS appears to have restricted the points of access to the country’s oil wealth to himself and his minions. His great shakedown in the Ritz Hotel in Riyadh in November 2017, when nearly four hundred of the kingdom’s richest and most powerful people were forcibly held and squeezed or stripped of their assets, killed the old order. Some “guests” reportedly were physically tortured, had their families threatened, or both. Such behavior would have been unthinkable before. Traditional kingdoms always have informal rules that buttress the status quo and check arbitrary power. The crown prince’s new-age mindset — his determination to stamp out all possible opposition to his modernist vision with him alone at the helm — was vividly on display at the Ritz.

    This autocratic thuggery earned little censure in the West, on either the left or the right. Some appeared to believe that the rightly guided prince was actually stamping out corruption. Many Saudis, especially among the young, may have sincerely enjoyed the spectacle of the spoiled ancien régime getting its comeuppance. The same unchecked princely temperament, however, reappeared in the Saudi consulate in Istanbul on October 2, 2018, when Jamal Khashoggi crossed the threshold. It is a near-certainty that MBS intended to kill, not to kidnap, the elite dissident. It is not at all unlikely, given the prince’s reputation for hard work and attention to detail and his aversion to delegating decisions to others, that he personally approved the dismemberment. 

    The crown prince is gambling that Saudi nationalism, which is now real even if its depth is hard to measure, will attach itself to him, as nationalisms do in their need for a leader. He is trying to downgrade Islam by upgrading the nation. He has reined in the dreaded morals police, the mutawwa, who could harass and arrest almost anyone. The urban young, especially if they come from the middle and upper-middle class, have long loathed this police force, which is drawn from the more marginal precincts of society, and so they find MBS’ mission civilisatrice appealing. The crown prince is essentially trying to pull an Atatürk, who created a Turkish nation-state out of a Muslim empire. Mustafa Kemal created his own cult: he was a war hero, the savior of the Turks from invading Greek Christian armies and the World War I victors who were carving up the carcass of the Ottoman state. He fused himself with the idea of nationhood. Even Turkish Islamists visit his tomb in Ankara respectfully, by the tens of thousands. 

    When it came to cult worship, Saudi kings and princes had been fairly low-key compared to most other Middle Eastern rulers. Yet MBS’ sentiments are, again, more modern. He has effectively established a police state — the first ever in Saudi history. His creation is certainly not as ruthless as the Orwellian nightmares of Saddam Hussein’s Iraq or Assad’s Syria; it is neither as loaded with internal spies nor rife with prison camps as Abdul Fattah El-Sisi’s Egypt. But MBS’ Arabia is a work in progress. Those in America and Israel who advocate that the United States should draw closer to MBS, so as to anchor a new anti-Iran alliance in Riyadh, are in effect saying that we should endorse MBS and his vision of a more secular, female-driving, anti-Islamist Saudi Arabia without highlighting its other, darker aspects, or that we should just ignore the kingdom’s internal affairs and focus on what the crown prince gives us externally. This realist calculation usually leads first back to the negatives: without the crown prince’s support of American interests, Russia, China, and Iran, the revisionist axis that has been gaining ground as America has been retrenching, will do even better. And then the positive: Saudi recognition of Israel would permanently change the Jewish state’s standing in the Muslim world — a long-sought goal of American diplomacy. 

    The prince clearly knows how much Benjamin Netanyahu wants Saudi Arabia’s official recognition of Israel. The Israeli prime minister has loudly put it at the top of his foreign-policy agenda. (Before the Gaza war, it might have had the additional benefit of rehabilitating him at home.) The prince clearly knows how much American Jewry wants to see an Israeli embassy in Riyadh. And after some initial wariness, the Biden administration now wants to add the kingdom to the Abraham Accords. Bahrain, the United Arab Emirates, Morocco, and Sudan recognizing Israel was good, but Saudi Arabia would be better. Although the White House certainly hasn’t thought through how the United States would fit into an Israeli-Saudi-US defensive alliance, or whether it would even be politically or militarily possible, the administration proffered the idea before Biden went to Saudi Arabia in 2022 — or echoed an earlier, vaguer Saudi suggestion of a defensive pact — as part of Riyadh’s official recognition of Israel. Given the importance that MBS attaches to things Jewish, he may well believe his offer of Israeli recognition gives him considerable leverage in future dealings with the United States. 

    Joe Biden paved the way for MBS’ go-big proposal by making one of the most embarrassing flips in presidential history. Biden came into office pledging to reevaluate US-Saudi ties and cast MBS permanently into the cold for the gruesome killing of Khashoggi and, a lesser sin, making a muck of the war in Yemen — a war in which the United States, given its crucial role in maintaining and supplying the Saudi Air Force, has been an accomplice in a bombing campaign that has had a negligible effect on the Shiite Houthis’ capacity to fight but has killed thousands, perhaps tens of thousands, of Yemeni civilians. (In civil wars, it is hard to know who is starving whom, but the Saudi role in bringing starvation to Yemen has not been negligible.) Fearing another hike in oil prices before the midterm elections, Biden travelled to Saudi Arabia, fist-bumping MBS and getting not much in return except reestablishing what has been true in US–Saudi relations from the beginning, when Franklin Delano Roosevelt hosted two of King Ibn Saud’s sons in Washington: everything is transactional. 

    MBS’ offer to America arrived with China’s successful intervention into Saudi-Iranian relations. Beijing obtained an agreement for the restoration of diplomatic ties between the two countries, which Riyadh had severed in 2016, after King Salman executed Nimr al-Nimr, the most popular Saudi Shiite cleric in the oil-rich Eastern Province, and Iranian protestors set fire to the Saudi embassy in Tehran. Beijing also appears to have aided a Saudi-Iranian ceasefire and an understanding about Yemen. MBS, who had been eager to extricate himself and the Saudi treasury from the peninsula’s “graveyard of nations,” reduced the number of Saudi forces engaged in the conflict; Tehran appears to have convinced the Houthis, at least temporarily, not to lob Iranian-provided missiles into their northern neighbor. 

    China offers MBS something that Israel and the United States realistically no longer do: a possible long-term deterrent against Iranian aggression in a post-American Middle East. Beijing likely isn’t opposed to the Islamic Republic going nuclear, since this would further diminish the United States, which has under both Republican and Democratic presidents told the world that an Iranian nuke is “unacceptable.” Given Chinese access in Tehran and Moscow, which is developing an ever-closer military relationship with the clerical regime, the value of Chinese intercession will increase. Given Beijing’s economic interest in Saudi Arabia’s oil (it is now the kingdom’s biggest customer), MBS is certainly right to see in the Chinese a possible check on any Iranian effort to take the kingdom’s oil off-market. The Islamic Republic has never before had great-power patrons. The Chinese bring big advantages to Iran’s theocrats — much greater insulation from American sanctions, for example; but they may also corral Tehran’s freedom of action a bit. 

    In the controversial interview that he not long ago gave to The Atlantic, MBS made it clear that he thought he could wait out the Biden administration, and that America’s and the West’s need for Saudi crude, and the rising power of China, gave the prince time and advantage. He has won that tug-of-war. America cannot possibly ostracize the ruler who controls the largest, most easily accessible, and highest-quality pool of oil in the world. The Gaza war will also play to MBS’ advantage, as both Israel and the United States will seek Saudi intercession to counter what’s likely to become an enormous propaganda victory for Iran’s “axis of resistance.” The crown prince may well be racing his country towards the abyss, eliminating all the customs and institutions that made the Saudi monarchy resilient and not particularly brutish (not by Middle Eastern standards), but he has been tactically astute with all the greater powers maneuvering around him.

    Saudi Arabia is probably the Muslim country that American liberals hate the most. (Pakistan is a distant runner-up.) This enmity is, in part, a reaction to the oddest and oldest American “partnership” in the Middle East, and the general and quite understandable feeling that the Saudi royal family never really came clean about its dealings with Osama bin Ladin before 9/11. Not even post-Sadat Egypt, which has developed a close working relationship with the American military and the CIA, has had the kind of access that the Saudis have had in Washington. Even after 9/11, during Bush’s presidency, Saudi preferences in personnel could reach all the way into the National Security Council. Saudi distaste for Peter Theroux, an accomplished Arabist and former journalist whose Sandstorms, published in 1990, offered an entertaining, biting look into the kingdom, briefly got him “unhired” from overseeing Saudi policy on the NSC, out of fear of Riyadh’s reaction. He got rehired when either Condoleezza Rice, the national security advisor, or her deputy, Stephen Hadley, realized that allowing Saudi preferences to affect personnel decisions within the White House was unwise and potentially very embarrassing. Given the Gaza war’s demolition of the Biden administration’s Middle Eastern policy, it’s not unlikely that we will see Saudi access in Washington rise again, perhaps rivaling its halcyon days during the Reagan administration. That would be a sharp irony. 

     Culturally speaking, no two countries had ever been further apart: Saudi Arabia still had a vibrant slave society in 1945, when Franklin Roosevelt began America’s relationship with the desert kingdom. Outside pressure, not internal debate among legal scholars and Saudi princes about evolving religious ethics and the holy law, obliged the monarchy to ban slavery officially in 1962. (Bad Western press aside, the Saudi royals may have been truly annoyed at French officials freeing slaves traveling with their masters on vacation.) Ibn Saud had over twenty wives, though the holy law allots no more than four at one time, and numerous concubines. When concubines became pregnant, they would usually ascend through marriage, while a wife would be retired to a comfortable and less competitive environment. By comparison, the thirty-seven-year-old crown prince today has only one wife and, if rumors are true, many mistresses — a less uxorious, more acceptable choice for a modern, ambitious man. 

    Roosevelt’s embrace of the Saudi monarchy established the ultimate realist relationship. The Saudi royals neither cared for America’s democratizing impulse, nor for its incessant conversations about human rights, nor for its ties to Israel, nor, after 1973 and the Saudi-engineered oil embargo that briefly gave the United States sky-rocketing prices and gas lines, for the American chatter in pro-Israel corners about the need to develop plans for seizing Saudi oil fields. Yet the Saudis loved the U.S. Navy and the long, reassuring shadow that it cast in the Middle East. Donald Trump’s embrace of Arabia, however much it may have horrified American liberals and amplified their distaste for the country and its ruling family, just highlighted, in Trump’s inimitably crude way, a bipartisan fact about Saudi-American relations: we buy their oil and they buy our armaments, technology, machinery, services, and debt. Barack Obama sold the Saudis over sixty-five billion dollars in weaponry, more than any president before or since. Both sides have occasionally wanted to make it more than that, to sugarcoat the relationship in more appealing ways. The more the Saudis, including the royals, have been educated in the United States, the more they have wanted Americans to like them. Americanization, even if only superficial, introduces into its victims a yearning for acceptance. Americans have surely been the worst offenders here, however, since they are far more freighted with moral baggage in their diplomacy and trade. They want their allies to be good as well as useful. 

    Although Americans have a knack for discarding the past when it doesn’t suit them, Saudi and American histories ought to tell us a few things clearly. First, that MBS’ offer to the United States is without strategic advantages. This is true even though Iran may have green-lighted, perhaps masterminded, the attack on October 7 in part to throw a wrench into the U.S.–Israeli–Saudi negotiations over MBS’ proposal. Iranian conspiratorial fears always define the clerical regime’s analysis. Its desire to veto its enemies’ grand designs is certainly real, irrespective of whether it thought that Saudi–Israeli normalization was arriving soon or that MBS’ quest to develop a nuclear-weapons-capable atomic infrastructure needed to be aborted sooner rather than later. Iranian planning for the Gaza war likely started long before Biden administration officials and Netanyahu’s government started leaking to the press that normalization was “imminent”; it likely started before MBS’ vague suggestions of a defensive pact between Washington and Riyadh. Leaks about diplomatic progress surrounding a coming deal, however, might have accelerated Iran’s and Hamas’ bloody calculations. 

    Concerning the crown prince’s nuclear aspirations, which have divided Israelis, caused serious indigestion in Washington, and compelled Khamenei’s attention: they are not unreasonable, given that domestic energy requirements for Saudi Arabia — especially the exponentially increasing use of air conditioning — could in the near future significantly reduce the amount of oil that Riyadh can sell. Nuclear energy would free up more petroleum for export and produce the revenue that MBS desperately needs to continue his grand plans. But it is also a damn good guess that MBS’ new attention to nuclear power plants has a lot to do with developing the capacity to build the bomb. Just across the Gulf, the Islamic Republic has effectively become a nuclear threshold state — the Supreme Leader likely has everything he needs to assemble an atomic arm — and the possibility is increasingly remote that either Biden or the Israeli prime minister (whoever that may be on any given day) is going to strike militarily before the clerical regime officially goes nuclear. And MBS, despite his occasional bravado on Iran and his undoubtedly sincere private desire to undermine the clerical regime, probably doesn’t want to deal with such a denouement. Given how much Netanyahu and most of the Israeli political class have wanted Saudi–Israeli normalization, and given how desperate the Biden administration has been to find stabilizing partners in the Middle East, which would allow the United States to continue its retrenchment, MBS could be forgiven for thinking, especially after October 7, that the sacred creed of non-proliferation might well give way to his atomic ambitions.

    The Saudis were never brave when they were focused single-mindedly on building their frangible oil industry; now they have vast installations, which the Iranians easily paralyzed in 2019 with a fairly minor drone and cruise missile attack. The same vulnerability obtains for the crown prince’s NEOM projects, which the Iranians, who have the largest ballistic- and cruise-missile force in the Middle East, could severely damage — probably even if the Saudis spend a fortune on anti-missile defense. MBS came in like a lion on the Islamic Republic, attracting the attention and the affection of Israelis and others; it’s not at all unlikely that he has already become a lamb, actually less stout-hearted than his princely predecessors who, in a roundabout way, via a motley crew of characters, using the CIA for logistical support, took on the Soviet Union in Afghanistan. 

    Still, MBS would want to plan for contingencies. Having nuclear weapons is better than not having them. A Saudi bomb might check Persian predation. But the Saudis are way behind. They have neither the engineers nor the physicists nor the industrial base. And the odds are excellent that the Pakistanis, who though indebted to the Saudi royal family are quite capable of stiffing them, haven’t been forthcoming: they are not going to let the Saudis rent an atomic weapon. The Russians and the Chinese might not want to give the Saudis nuclear power; it would add another layer of complexity and tension to their relations with the Islamic Republic, which neither Moscow nor Beijing may want. The Europeans and the Japanese are unlikely to step into such a hot mess. Getting nuclear technology from the Americans would be vastly better — another way, as Ajami put it, to supplement Boeing and AT&T. 

    Failing on the atomic front, MBS might intensify his dangle of an Israeli embassy in Riyadh — to see what he can get even if he has no intention of recognizing the Jewish state.  The Gaza war certainly increases his chances that he can get both the Israelis and the Americans to concede him a nuclear infrastructure with local uranium enrichment. The war makes it less likely, however, that he would want to tempt fate anytime soon by recognizing Israel.  Normalization gains him nothing internally among all those Saudis who religiously or politically may have trouble with a Saudi flag — which has the Muslim shahâda, the profession of faith, boldly printed on a green heavenly field — flying in Zion. And the Star of David in Riyadh could be needlessly provocative even to the crown prince’s much-touted young, hitherto apolitical, supporters. The Palestinian cause, which most of the Israeli political elite thought was a fading clarion call, has proven to have astonishing resonance.  

    But even if MBS is still sincere about this big pitch, neither Washington nor Jerusalem should be tempted. It gives the former nothing that it does not already have. It offers the Israelis far less than what Netanyahu thinks it does. Despite MBS’ grand visions for where his country will be in the middle of the century, Saudi Arabia is actually a far less consequential kingdom than it was in 1973, when it was at the pinnacle of its oil power, or in 1979, when it collided with the Iranian revolution and was briefly on the precipice after the Sunni religious revolutionary Juhayman al-Otaybi, his messianic partner Muhammad Abdullah al-Qahtani, and five hundred well-armed believers took Mecca’s Grand Mosque and shook the dynasty to its core. 

    Religiously, among Muslims, Saudi Arabia hasn’t been a bellwether for decades. Its generous funding for Salafi causes undoubtedly fortified harsher views across the globe, especially in Indonesia and Western Europe, where so many dangerous Islamic radicals were either born or trained. Religious students and clerics raised in the eclectic but formal traditions of the Al-Azhar seminary in Cairo, for example, would go to Saudi Arabia on well-paid fellowships and often come back to Egypt as real Hanbali-rite killjoys. Saudi Arabia helped to make the Muslim world more conservative and less tolerant of Muslim diversity. But relatively few Saudi imams were intellectually on the cutting edge, capable of influencing the radical set outside of Arabia. And it was the radical set outside of Saudi Arabia who mattered most, especially in Egypt, where the politically subservient dons of Al-Azhar lost control and relevance for Muslims who were growing uncomfortable with Egypt’s Westernization, first under the British and then, even more depressingly, under the military juntas of Gamal Abdel Nasser and Anwar Sadat. 

    In Saudi Arabia, most of the radical Salafi imams were either in trouble with the Saudi authorities, in exile, or in jail. Saudi royals were once big fans of the Egyptian-born Muslim Brotherhood because it was hostile to all the leftist ideologies storming the Middle East. Only later did they realize that these missionaries were irreversibly populist and anti-monarchical. Saudi religious funding was like that old publishing theory — throw shit against the wall and see what sticks. The operative assumptions were that more religious Muslims were better than less religious, and that more religious Sunni Muslims would be hostile to Iran’s revolutionary call, and that more religious Sunni Muslims would be more sympathetic to Saudi Arabia. Who preached what and where was vastly less important. There were a lot of problems with every one of those assumptions, which the Saudi royal family realized long before the coming of MBS. But inertia is infamously hard to stop. 

    For most faithful Muslims today, Saudi Arabia isn’t intellectually and spiritually important. What happened to Al-Azhar in Egypt — its intellectual and jurisprudential relevance declined as it became more subservient to the Westernizing Egyptian military — has been happening in Saudi Arabia for at least thirty years. MBS is intensifying this process. Westerners may cheer him on as he tries to neuter Islam as the defining force within Saudi society, but internationally it makes Saudi Arabia a less consequential state. Saudi Arabia is not a model of internal Islamic reform; it is merely another example of a modernizing autocrat putting Islam in its place — behind the ruler and the nation. The dictatorial impact on religion can be profound: it can reform it, it can radicalize it, it can do both at the same time. 

    To stay with the Egyptian parallel: Anwar Sadat visited Jerusalem and opened an embassy in Tel Aviv. He and his military successors have slapped the hands of Al-Azhar’s rectors and teachers when they challenged the legitimacy of the Egyptian–Israeli entente. (Conversely, they haven’t stopped, and have often subsidized, the perpetual anti-Semitism of Egyptian media, film, and universities.) The peace between Egypt and Israel has obviously been beneficial to both countries, but religiously it made little positive impact on Muslims within Egypt or abroad. When a Muslim Brother, Mohammad Morsi, won Egypt’s first — and so far only — free presidential election in 2012, Israelis and Americans were deeply concerned that he would sever relations with Israel. That didn’t happen, but the fears were understandable; Israel’s popular acceptance within Egyptian society remains in doubt. 

    If, after the Gaza war, MBS deigns to grant Israel diplomatic relations, it won’t likely make faithful Muslims, or even secular Muslims, more inclined to accept the Jewish state. It might do the opposite, especially inside Saudi Arabia. The Saudi royal family’s control over the two holiest sites in Islam — Mecca and Medina — makes little to no difference in how that question is answered. Such custodianship confers prestige and obliges the Saudi royal family, at least in the holy cities, to maintain traditional standards. In the past, before modern communications, it allowed Muslims and their ideas to mix. But it absolutely does not denote superior religious authority — no matter how much Saudi rulers and their imams may want to pretend that by proximity to the sacred spaces they gain religious bonus points. One often gets the impression from Israelis that they are in a time warp with respect to Saudi Arabia, that for them it is still the mid-1970s and Riyadh’s recognition would effectively mark an end to their Muslim travails. The Gaza war ought to inform Israelis that the profoundly emotive Islamic division between believer and non-believer and the irredentist claims of Palestinian Muslims against the Jewish state do not lessen because a Saudi crown prince wants to establish a less hostile modus vivendi with Israel and Jews in general.  

     Perhaps above all else, the Israelis should want to avoid entangling themselves too closely with MBS in the minds of Americans, especially those on the left, who are essential to bipartisan support for Jerusalem. It’s an excellent bet that MBS’ dictatorship will become more — likely much more — oppressive and contradictory in its allegiances. What Safran noted about Saudi behavior from 1945 to 1985 — “attempts to follow several contradictory courses simultaneously, willingness to make sharp tactical reversals; and limited concern with the principle of consistency, either in reality or in appearance” — has already happened with MBS. This disposition will probably get much worse. And Americans aren’t Israelis, who never really see political promise in Muslim lands (neither Islamic nor Israeli history encourages optimism). The choice, in their minds, is between this dictator or that one — and never, if possible, the worst-case scenario: Muslims freely voting and electing Islamists who by definition don’t exactly warm to Israel or the United States. Americans are much more liberal (in the old sense of the word): for them, autocracies aren’t static; they inevitably get worse until they give way to revolution or elected government. Even realist Americans are, compared to Israelis, pretty liberal. And the principal reason that the United States has so steadfastly supported the Jewish state since 1948 is that it is a vibrant democracy, however troubled or compromised it might be by the Palestinian question or by its own internal strains of illiberalism and political religion. When Israelis and their American supporters tout MBS as a worthwhile ally, they are diminishing Israel’s democratic capital. If MBS really thought diplomatic relations were in his and Saudi Arabia’s self-interest, there would already be a fluttering Star of David in Riyadh. The wisest course for Israelis is to reverse engineer MBS’ hesitation into a studied neutrality.  

    Religion and mores aside, closer relations with Saudi Arabia will not assist America or Israel against their principal Middle Eastern concern: the Islamic Republic of Iran. In 2019, when Iran decided to fire cruise missiles and drones at Saudi Aramco processing plants in Abqaiq and Khurais, briefly taking out nearly half of the country’s oil production, MBS did nothing, except turn to the Americans plaintively. The Emirates, whose ambassador in Washington gives first-rate anti-Iran dinner parties, sent an emissary to Tehran, undoubtedly to plead neutrality and promise to continue to allow the clerical regime the use of Dubai as a U.S.-sanctions-busting entrepôt. The two Sunni kingdoms had purchased an enormous amount of Western weaponry to protect themselves against the Islamic Republic. The Saudi and Emirati air forces and navies are vastly more powerful than their Iranian counterparts. And still they would not duel. They lack what the clerical regime has in abundance: a triumphant will fueled by success and a still vibrant ideology.  

    The Saudis know, even if MBS’ public rhetoric occasionally suggests otherwise, that the Islamic Republic, even with all its crippling internal problems, is just too strong for them. Iran’s eat-the-weak clerics and Revolutionary Guards, who fought hard and victoriously in Syria’s civil war, became the kingmaker in Iraq, and successfully radicalized and armed the Shiite Houthis of Yemen, can always out-psych the Saudi and Nahyan royals. Also, Trump didn’t help. When he decided not to respond militarily to the cruise missile-and-drone attacks, plus the ones on merchant shipping in the Gulf of Oman three months earlier (he boldly tweeted that the United States was “locked and loaded”), Trump trashed whatever was left of the Carter Doctrine. Washington had assumed responsibility for protecting the Persian Gulf after the British withdrawal in 1971. There is a direct causal line from the Trump administration’s failure in 2019, through its “maximum pressure” campaign that eschewed military force in favor of sanctions, through the Biden administration’s repeated attempts to revive Barack Obama’s nuclear deal, through the White House’s current see-no-redline approach to Iran’s ever-increasing stockpile of highly-enriched uranium, to MBS’ decision to turn toward the Chinese. 

    It is brutally difficult to imagine scenarios in which the Saudis could be a military asset to the United States in the Persian Gulf or anywhere else in the Middle East. The disaster in Yemen, for which the Iranians and the Houthis are also to blame, tells us all that we need to know about Saudi capacity. Even with American airmen, sailors, and intelligence officers in Saudi command centers doing what they could to inform and direct Saudi planes, Saudi pilots routinely bombed the right targets poorly and the wrong targets (civilians) well. The Saudis mobilized tens of thousands of troops for the campaign, but it’s not clear that they actually did much south of the border. (Small special forces units appear to have fought and held their own.) The UAE’s committed forces, which once numbered around four thousand on the ground, did much more, but they quickly discovered that the coalition gathered to crush the Houthis, who had far more unity and purpose, simply didn’t work. The UAE started hiring mercenaries and pulling its troops out of combat. MBS, who was then the Saudi defense minister and a pronounced hawk, started blaming the UAE, which blamed the Saudis. 

    If we are lucky, the Yemen expedition has actually taught the crown prince a lesson: that his country, despite its plentiful armaments, is not much of a military power and can ill-afford boldness. If any future American-Iranian or Israeli-Iranian clash happens, we should not want the Saudis involved. Similarly, we should not want an entangling alliance with Riyadh — a NATO in the Persian Gulf — because it will give us little except the illusion that we have Arab partners. We also shouldn’t want it because such an alliance could harm, perhaps irretrievably, the kingdom itself if it re-animated MBS’ martial self-confidence. 

    About Saudi Arabia, the China-first crowd might actually be more right than wrong. Elbridge Colby, perhaps the brightest guru among the Trump-lite aim-towards-Beijing Republicans, thinks the United States can deploy small detachments from the Air Force and Navy to the Persian Gulf and it will be enough to forestall seriously untoward actions by a nuclear Iran, China, and Russia. Yet the Gaza war has shown that when the United States seriously misapprehends the Middle East, when it sees the Islamic Republic as a non-revolutionary power with diminished Islamist aspirations and malevolent capacity, Washington may be obliged to send two aircraft-carrier groups to the region to counter its own perennial miscalculations. Colby’s analysis has the same problem in the Middle East that it does with Taiwan: everything depends on American willingness to use force. A United States that is not scared to project power in the Middle East, and is not fearful that every military engagement will become a slippery slope to a “forever war,” would surely deter its enemies more convincingly and efficiently.  The frequent use of force can prevent larger, uglier wars that will command greater American resources.  

    If Washington’s will — or as the Iranians conceive it, haybat, the awe that comes with insuperable power — can again be made credible, then at least in the Persian Gulf Colby is right. It doesn’t take a lot of military might to keep the oil flowing. For the foreseeable future, no one there will be crossing the US Air Force and Navy — unless we pull everything to the Pacific. We don’t need to pledge anything further to MBS, or to anyone else in the region, to protect the global economy. And the global market in oil still has the power to keep MBS, or anyone else, from ratcheting the price way up to sustain grand ambitions or malevolent habits. 

    It is an amusing irony that Professor Safran, an Israeli-American who tried to help the American government craft a new approach to Saudi Arabia in the 1980s (and got into trouble for it when it was discovered that some of his work had received secret CIA funding), foresaw the correct path for Saudi Arabia today when, forty years ago, he assessed its missionary Islamic conservatism, anti-Zionist reflexes, and increasing disposition to maximize oil profits regardless of the impact on Western economies. He advised that “America’s long-term aim should be to disengage its vital interests from the policy and fate of the Kingdom. Its short-term policy should be designed with an eye to achieving that goal while cooperating with the Kingdom in dealing with problems on a case-by-case basis and advancing shared interests on the basis of reciprocity.” In other words, we should treat the kingdom in the way that it has treated us. In still other words, Roosevelt was right: with the Saudis, it should be transactional. Nothing more, nothing less. Day in, day out. 

     

     

    The Logical One Remembers

    “I’m not irrational. But there’ve been times

    When I’ve experienced—uncanniness:

    I think back to those days, when, four or five,

    I dreaded going to bed, because I thought

    Sleep really was a ‘dropping off.’ At night

    Two silver children floated up from somewhere

    Into the window foiled with dark, a boy

    And girl. They never spoke. But they arose

    To pull me with them, down, into the black

    That brewed deep in the basement. I would sink

    With them to float in nothingness. Each night

    For a year, or maybe two. It was a dream

    You’ll say, just a recurring dream. It’s true.

    (And yet I was awake when they came through.)”

    The Slug

    Everything you touch

    you taste. Like moonlight you gloss

    over garden bricks,

     

    rusty chicken wire,

    glazing your trail with argent

    mucilage, wearing

     

    your eyes on slender

    fingers. I find you grazing

    in the cat food dish

     

    waving your tender

    appendages with pleasure, 

    an alien cow.

     

    Like an army, you 

    march on your stomach. Cursive,

    you drag your foot’s font.

     

    When I am salted

    with remorse, saline sorrow,

    soul gone leathery

     

    and shriveled, teach me

    how you cross the jagged world

    sans helmet, pulling

     

    the self’s nakedness

    over broken glass, and stay

    unscathed, how without

     

    haste, secretive, you

    ride on your own shining, like 

    Time, from now to now.

    The Cloud

    I used to think the Cloud was in the sky,

    Something invisible, subtle, aloft:

    We sent things up to it, or pulled things down

    On silken ribbons, on backwards lightning zaps.

    Our photographs, our songs, our avatars

    Floated with rainbows, sunbeams, snowflakes, rain.

    Thoughts crossed mid-air, and messages, all soft

    And winking, in the night, like falling stars.

     

    I know now it’s a box, and acres wide,

    A building, stories high. A parking lot

    Besets it with baked asphalt on each side.

    Within, whir tall machines, grey, running hot.

    The Cloud is windowless. It squats on soil

    Now shut to bees and clover, guzzling oil.

    Wind Farm

    I still remember the summer we were becalmed:

    No breezes rose. The dandelion clock

    Stopped mid-puff. The clouds stood in dry dock.

    Like butterflies, formaldehyde embalmed,

     

    Spring kites lay spread out on the floor, starched flat.

    Trees kept their counsel, grasses stood up straight

    Like straight pins in a cushion, the wonky gate

    That used to bang sometimes, shut up, like that.

     

    Our ancestors, that lassoed twisty tails

    Of wild tornadoes, teaching them to lean

    In harness round the millstone — they would weep

     

    At all the whirlwinds that we didn’t reap.

    I lost my faith in flags that year, and sails:

    The flimsy evidence of things unseen.

    The Wise Men

    Matthew 2:7-12

    Summoned to the palace, we obeyed.

    The king was curious. He had heard tell

    Of strangers in outlandish garb, who paid

    In gold, although they had no wares to sell.

    He dabbled in astrology and dreams:

    Could we explain the genesis of a star?

    The parallax of paradox — afar

    The fragrance of the light had drawn us near.

    Deep in the dark, we heard a jackal’s screams

    Such as, at lambing time, the shepherds fear.

    Come back, he said, and tell me what you find,

    Direct me there: I’ll bow my head and pray.

    We nodded yes, a wisdom of a kind,

    But after, we slipped home by another way.

    The Anti-Liberal

    Last spring, in The New Statesman, Samuel Moyn reviewed Revolutionary Spring, Christopher Clark’s massive new history of the revolutions of 1848. Like most everything Moyn writes, the review was witty, insightful, and provocative — another illustration of why Moyn has become one of the most important left intellectuals in the United States today. One thing about it, though, puzzled me. In the Carlyle lectures that he delivered at Oxford the year before, now published as Liberalism Against Itself, Moyn argued that liberalism was, before the Cold War, “emancipatory and futuristic.” The Cold War, however, “left the liberal tradition unrecognizable and in ruins.” But in the New Statesman review, Moyn claimed that liberals had already lost their way a full century before the Cold War. “One lesson of Christopher Clark’s magnificent new narrative of 1848,” he wrote, “is a reminder of just how quickly liberals switched sides…. Because of how they lived through 1848, liberals betrayed their erstwhile radical allies to join the counter-revolutionary forces once again — which is more or less where they remain today.”

    Perhaps the contradiction is not so puzzling. Much like an older generation of historians who seemed to glimpse the “rise of the middle classes” in every century from the thirteenth to the twentieth, Samuel Moyn today seems to find liberals betraying their own traditions wherever he looks. Indeed, this supposed betrayal now forms the leitmotif of his influential writing.

    This was not always the case. The work that first made Moyn’s reputation as a public intellectual, The Last Utopia, in 2010, included many suggestive criticisms of liberalism, but was a subtle and impressive study that started many more conversations than it closed off. Yet in a subsequent series of books, from Christian Human Rights (2015), through Not Enough (2018) and Humane (2021), and most recently Liberalism Against Itself: Cold War Intellectuals and the Making of Our Times, Moyn has used his considerable talents to make increasingly strident and moralistic critiques of contemporary liberalism, and to warn his fellow progressives away from any compromises with their “Cold War liberal” rivals. In particular, as he has argued in a steady stream of opinion pieces, his fellow progressives should resist the temptation to close ranks with the liberals against the populist right and Donald Trump. Liberalism has become the principal enemy, even as his definition of it has come to seem a figure of increasingly crinkly straw.

    Moyn does offer reasons for his critical focus. As he now tells the story, the liberalism born of the Enlightenment and refashioned in the nineteenth century was capacious and ambitious, looking to improve the human condition both materially and spiritually, not merely to protect individual liberties. It was not opposed to socialism; in fact, it embraced many elements of socialism. But that admirable liberalism has been repeatedly undermined by backsliding moderates who, out of fear that overly ambitious and utopian attempts to better the human condition might degenerate into tyranny, stripped it of its most attractive features, redefined it in narrow individualistic terms, and all too readily allied with reactionaries and imperialists. The logical twin endpoints of these tendencies, in Moyn’s view, are neoconservatism and neoliberalism: aggressive American empire and raging inequalities. His account of liberalism is a tale of villains more than heroes.

    This indictment is sharp, and it is persuasive in certain respects, but it is also grounded in several very questionable assumptions. Politically, Moyn assumes that without the liberals’ “betrayal,” radicals and progressives would have managed to forge far more successful political movements, perhaps forestalling the triumph of imperial reaction after 1848, or of neoliberalism in the late twentieth century. Historically, Moyn reads the ultimate failure of Soviet Communism back into its seventy-year lifespan, as if its collapse were inevitable, and therefore assumes that during the Cold War the liberals “overreacted,” both in their fears of the Soviet challenge and in their larger concerns as to the pathological directions that progressive ideology can take.

    In addition, Moyn, an intellectual historian, not only attributes rather more influence to intellectuals than they may deserve, but also tends to engage with the history of actual liberal politics only when it supports his thesis. He has had a great deal to say about foreign policy and warfare, but much less about domestic policy, in the United States and elsewhere. He has therefore largely sidestepped the inconvenient fact that at the height of the Cold War, at the very moment when, according to his work, “Cold War liberals” had retreated from liberalism’s noblest ambitions, liberal politics marked some of its greatest American successes: most notably, civil rights legislation and Lyndon Johnson’s Great Society. The key moment for the ascent of neoconservatism and neoliberalism in both the United States and Britain came in the 1970s, long after the start of the Cold War, and had just as much to do with the perceived failures of the modern welfare state as with the Soviet threat.

    Finally, Moyn has increasingly tended to reify liberalism, to treat it as a coherent and unified and powerful “tradition,” almost as a kind of singular political party, rather than as what it was, and still is: an often inchoate, even contradictory collection of thinkers and political movements. Doing so allows him to argue, repeatedly, that “liberalism” as a whole could make mistakes, betray its own past, and still somehow drag followers along while foreclosing other possibilities. As he put it bluntly in a recent interview, “Cold War liberalism… destroyed the potential of liberalism to be more believable and uplifting.” If only “liberalism” had not turned in this disastrous direction, Moyn claims, it could have defended and extended progressive achievements and the welfare state — weren’t they themselves achievements of American liberalism? — rather than leaving these things vulnerable to the sinister forces of market fundamentalism. Whether better liberal arguments would have actually done much to alter the directions that the world’s political economy has taken over the past half century is a question that he largely leaves unasked.

    Moyn today presents himself as a genuine Enlightenment liberal seeking to redeem the movement’s potential and to steer it back onto the path that it too easily abandoned. The assumptions he makes, however, allow him to blame the progressive left’s modern failures principally on its liberal opponents rather than on its own mistakes and misconceptions and the shifts it has made away from agendas that have a hope of rallying a majority of voters. Not surprisingly, this is a line of argument that has proven highly attractive to his ideological allies, but at the cost of helping to make serious introspection on their part unnecessary. They do not need to ask why they themselves have not produced an alternative brand of liberalism that might challenge the Cold War variety for intellectual ambition and rigor, and also enjoy broad electoral support. Moyn’s is a line of argument that also, in the end, fails to acknowledge just how difficult it is to improve the human condition in our fallen world, and how easily the path of perfection can go astray, towards horror.

    At the start it was not clear that Moyn’s work would take such a turn. After receiving both a doctorate in history and a law degree, he first made a scholarly reputation with a splendid study of the philosopher Emmanuel Levinas, and with essays on French social and political theory (especially the work of Pierre Rosanvallon, a major influence on his thought). His polemical side came out mostly in occasional pieces, notably his sharp and witty book reviews for The Nation. In one of those, in 2007, he took aim at contemporary human rights politics, calling them a recently invented “antipolitics” and arguing that “human rights arose on the ruins of revolution, not as its descendant.”

    Three years later Moyn published a book, The Last Utopia: Human Rights in History, which elaborated on these ideas, but in a careful and sometimes even oblique manner. In it, he sought to recast both the historical and political understandings of human rights. Historically, where most scholars had given rights doctrines a pedigree stretching back to the Enlightenment or even further, Moyn stressed discontinuity, and the importance of one recent decade in particular: the 1970s, when “the moral world of Westerners shifted.” Until this period, he argued, rights had had little meaning outside the “essential crucible” of individual states. Only with citizenship in a state did people gain what Hannah Arendt had called “the right to have rights.” The idea of human rights independent of states, enshrined in international law and enforceable across borders, gained salience only after the end of European overseas empires and with the waning of the Cold War. Politically, Moyn saw the resulting “last utopia” of human rights as an alluring but ultimately unsatisfactory substitute for the more robust left-wing utopias that had preceded it. He worried that the contemporary rights movement had been “hobbled by its formulation of claims as individual entitlements.” Was the language of human rights the best one in which to address issues of global immiseration? Should such a limited program be invested with such passionate, utopian hopes? Moyn had his doubts.

    “Liberalism” did not appear in the index to The Last Utopia, but it had an important presence in the book nonetheless. To be sure, the romantic utopianism that Moyn attributed to the architects of contemporary international human rights politics distinguished them from the hard-headed, disillusioned “Cold War liberals” whom he would later criticize in Liberalism Against Itself. But in the United States, the most prominent of these architects were liberal Democratic politicians such as Jimmy Carter. Discussing the relationship between human rights doctrines and decolonization, Moyn wrote that “the loss of empire allowed for the reclamation of liberalism, including rights talk, shorn of its depressing earlier entanglements with oppression and violence abroad” (emphasis mine). And his suggestion that human rights advocates downplayed social and economic rights echoed critiques that socialists had made of liberals since the days of Karl Marx.

    The Last Utopia already employed a style of intellectual history that focused on the way different currents of ideas competed with and displaced each other, rather than placing these ideas in a broader political context. The book spent relatively little time asking what human rights activism since the 1970s had actually accomplished. Moyn instead concentrated on what had made it successful as a “motivating ideology.” And so, while he admitted that human rights “provided a potent antitotalitarian weapon,” he adduced figures such as Andrei Sakharov and Vaclav Havel more to trace the evolution of their ideas than to assess their role in the collapse of communism.

    This way of telling the story gave Moyn a way to explain the failures of left-wing movements in the late twentieth century without dwelling on the crimes and the tragedies of communism. If so many on the left had succumbed to the siren song of human rights, he suggested, it was because, in the 1970s, the alternatives — for instance, Eurocommunism or varieties of French gauchisme — amounted to pallid, unattractive “cul-de-sacs.” Discussing the French “new philosopher” André Glucksmann’s move away from the far left, Moyn wrote that his “hatred of the Soviet state soon led him to indictments of politics per se.” Left largely unsaid were the reasons for Glucksmann’s entirely justified hatred, or any indication that the Soviet Union represented a terrible cautionary tale: a dreadful example of what can happen when overwhelming state power is placed in the service of originally utopian goals.

    But overall, The Last Utopia remained ambivalent about human rights doctrines, and left itself open to varying ideological interpretations. The conservative political scientist Samuel Goldman praised it in The New Criterion, while in The New Left Review the historian Robin Blackburn blasted Moyn for downplaying the Clinton administration’s use of human rights rhetoric to justify interventions in the former Yugoslavia. Many other reviewers focused entirely on Moyn’s historical thesis, and did not engage with his political arguments at all. 

    In his next two books Moyn did much to sharpen and clarify his political stance, even while continuing to concentrate on the story of human rights. In Christian Human Rights, he traced the modern understanding of the subject back to Catholic thinkers of the mid-twentieth century. The book was deeply researched, treating intellectuals such as Jacques Maritain with sensitivity, and it had illuminating things to say about how Catholic thinkers developed concepts of human dignity in response to the rise of totalitarian regimes. But Moyn spelled out his overall thesis quite bluntly: “It is equally if not more viable to regard human rights as a project of the Christian right, not the secular left.” In other words, not only was human rights activism an ultimately unsatisfactory substitute for more robust progressive policies; its origins lay in an unexpected and unfortunate convergence between well-meaning liberals and conservative, even reactionary Christian thinkers. This was an interpretation that seemed to embrace contrarianism for its own sake, and implied unconvincingly that the genealogy of ideas irremediably tarred their later iterations. 

    In Not Enough: Human Rights in an Unequal World, Moyn then returned to the political arguments he had first sketched out in The Last Utopia, but ventured far more explicit criticisms of organizations such as Amnesty International and Human Rights Watch for embracing only a minimal conception of social and economic rights. Human rights, he charged, have “become our language for indicating that it is enough, at least to start, for our solidarity with our fellow human beings to remain weak and cheap.” He resisted calling human rights a simple emanation of neoliberal market fundamentalism, but argued that the two coexisted all too easily. In this book, allusions to neoliberalism far outnumbered those to liberalism tout court. Still, Moyn revealingly cast the entire project as a response to the “self-imposed crises” of “liberal political hegemony,” and the need to explore “the relevance of distributive fairness to the survival of liberalism.”

    The book also went much further than The Last Utopia in exploring what Moyn presented as alternatives to human rights activism. Notably, he considered the New International Economic Order proposed in the 1970s, by which poor nations from the global south would have joined together in a cartel to raise commodity prices. “In almost every way,” Moyn wrote, “the NIEO was… the precise opposite of the human rights revolution.” It prioritized social and economic equality rather than mere sufficiency, and it sought to enlist the diplomatic power of states, rather than the publicity efforts of NGOs, to advance its goals. It was not clear from his account, though, why progressives could not have pushed for global economic redistribution and human rights at the same time. Moyn also sidestepped the fact that while the NIEO might have represented a theoretical alternative path, it was never a realistic one. The proposals went nowhere, and not because of any sort of ideological competition from human rights, but thanks to steadfast opposition from the United States and other industrialized nations. And while supporters claimed that under the NIEO states could redistribute the resulting wealth to their citizens, it was not obvious, to put it mildly, that ruling elites in those states would actually follow through on that promise. The history of systemic corruption in too many states of the global south is not encouraging. 

    In Humane, Moyn finally moved away from the subject of human rights. This book promised, as the subtitle put it, to examine “how the United States abandoned peace and reinvented war.” While it purported to study the moral and strategic impact of new technologies of war, in practice it focused more narrowly on a subject Moyn knows better: the jurisprudence of war. Since the Middle Ages, legal scholars have generally organized this subject around two broad issues: what constitutes a just cause for war, and how to conduct a war, once started, in a just fashion. Moyn argued that in the twenty-first century United States, the second of these, known by the Latin phrase jus in bello, has almost entirely displaced the first, known as jus ad bellum. Americans today endlessly argue about how to fight wars in a humane fashion, and in the process have stopped talking about whether they should fight wars in the first place. The result has been to both facilitate and legitimize a descent into endless war on behalf of American empire, a “forever war” waged “humanely” with drones and precision strikes and raids by special forces replacing the carpet bombing of earlier times, a war the public has ceased to notice or care about.

    Moyn himself insisted that this shift to jus in bello had gone in exactly the wrong direction. Taking Tolstoy as his guide and inspiration, he argued that we should direct our energies squarely towards peace, since the very notion of humane war comes close to being an absurd contradiction. He quoted Prince Andrei Bolkonsky, Tolstoy’s great character from War and Peace: “They talk to us of the rules of war, of mercy to the unfortunate. It’s all rubbish… If there was none of this magnanimity in war, we should go to war only when it was worthwhile going to certain death.” In his conclusion, Moyn even suggested that war would be just as evil if fought entirely without bloodshed, because it would still allow for the moral subjugation of the adversary. “The physical violence is not the most disturbing thing about it.” The writer David Rieff memorably riposted, after recalling a particularly bloody moment that he experienced during the siege of Sarajevo: “no, sorry, the best way to think about violence is not metaphorically, not then, not now, not ever.” 

    As in The Last Utopia, Moyn did not blame liberals directly for the shift he was tracking. But, again, his most sustained criticism was directed at liberals whose focus on atrocities such as Abu Ghraib supposedly led them to disregard the greater evil of the war itself. He wrote with particular pathos about the lawyers Michael Ratner and Jack Goldsmith, whose attempts to rein in the Bush Administration’s conduct of the wars in Iraq and Afghanistan unintentionally “led the country down a road to… endless war… Paved with their good intentions, the road was no longer to hell but instead horrendous in a novel way.” Moyn meanwhile reserved “the deepest blame for the perpetuation of endless war” for that quintessential liberal, Barack Obama. 

    Humane adopted many of the same methods as Moyn’s earlier work and suffered from some of the same weaknesses. Once again, he cast his story essentially as one of competing ideologies: one aimed at humanizing war, the other aimed at ending it altogether. Apparently, you had to choose a side. Moyn did concede that opponents of the Iraq War highlighted American atrocities so as to delegitimize the war as a whole (they “understood very well,” he put it a little snidely, “that it was strategic to make hay of torture”). But noting that this tactic failed to block David Petraeus’ “surge,” Moyn concluded that “it backfired as a stratagem of containing the war.” It is an odd argument. Would the opponents of the war have done better if they had not highlighted the atrocities of Abu Ghraib, and simply continued to stress the Iraq war’s overall injustice? Moreover, the stated purpose of the surge was to bring the increasingly unpopular Iraq conflict — unpopular in large part because of the Abu Ghraib revelations — to a swift conclusion and to make possible the withdrawal of American forces.

    Like The Last Utopia, Humane also paid relatively little attention to events on the ground, as opposed to the discourse about them. Already by 2021, when Moyn published it, both major political parties in the United States had lost nearly all their appetite for overseas military adventures, even of the humane variety. Since then, the idea that the Bush and Obama administrations locked the United States into endless war has come to seem even less realistic. In 2021, President Biden incurred a great humanitarian and political cost by abandoning Afghanistan to the Taliban, but he never considered reversing course. Today those authors who still believe the United States is fighting a forever war have been forced to contend that aid to Ukraine is somehow the present-day equivalent of the Iraq War. It is an absurd and convoluted position. The only way to construe the conflict as anything but Russian aggression is to imagine that Vladimir Putin, one of the cruelest and most corrupt dictators the world has seen since the days of Stalin and Hitler, does not act on his own initiative, but only in reaction to American pressure. Does anyone seriously think that if the United States had not forcibly expanded NATO (or rather, had NATO not agreed to the fervent requests of former Communist bloc countries to be admitted), Putin would be peacefully sunning himself in Sochi?

    By the time Moyn published Humane, Donald Trump was in office, and loud arguments were being made that progressives, liberals, and decent conservatives needed to put aside their differences, close ranks, and concentrate on defending against the unprecedented threat that the new president posed to American democracy. Moyn would have none of it. In a series of opinion pieces, he argued that the danger was overblown. “There is no real evidence that Mr. Trump wants to seize power unconstitutionally,” he wrote in 2017, “and there is no reason to think he could succeed.” In 2020 he added that obsessing about Trump, or calling him a fascist, implied that America’s “long histories of killing, subjugation, and terror… mass incarceration and rising inequality… [are] somehow less worth the alarm and opprobrium.” Uniting against Trump distracted from the fact that America, thanks to its neoliberal inequalities and endless wars, itself “made Trump.” But America’s failures, and their very real role in generating the Trump phenomenon, say nothing at all about whether Trump poses a threat to democracy. Moyn’s dogged insistence on characterizing Trump as a distraction, meanwhile, led him to ever less realistic predictions. “If, as seems likely, Joe Biden wins the presidency,” he wrote in 2020, “Trump will come to be treated as an aberration whose rise and fall says nothing about America, home of antifascist heroics that overcame him just as it once slew the worst monsters abroad.” Not exactly.

    Today, with American democracy more troubled than ever, it would seem the moment for liberals and progressives to unite around a forward-looking program that can bring voters tempted by Trumpian reaction back together with the Democratic Party electorate. Samuel Moyn could have helped to craft such a program in our emergency. He is certainly not shy about offering prescriptions for what ails American politics. But these prescriptions are not ones that have a chance of winning support from moderate liberals, still less of ever being enacted. They tend instead towards a quixotic radical populism. In the New York Times in 2022, for instance, Moyn and co-author Ryan Doerfler first chided liberals for placing too much hope in the courts and thereby embracing “antipolitics”; in this, the authors confused the anti-political with the anti-democratic — courts are the latter, not the former. Then the piece bizarrely suggested that Congress might simply defy the Constitution and unilaterally assert sovereign authority in the United States, stripping the Supreme Court and Senate of most of their power and eliminating the Electoral College. This is a program that Donald Trump might well approve of, and it evinces a faith in the untrammeled majority that the history of the United States does not support, to say the least.

    That was just an opinion piece, of course. Moyn’s principal work remains in the realm of history, where the goal is above all to show how we got into our present mess, rather than offering prescriptions for getting us out of it. But histories, too, can offer positive suggestions and point to productive roads not taken. In Moyn’s most recent book, unfortunately, these elements remain largely undeveloped. Instead he concentrates on casting blame, and not in a convincing manner. 

    Liberalism Against Itself follows naturally from Moyn’s earlier work. Once again, the story is one of binary choices, and how certain influential figures made the wrong one. Just as a narrow conception of human rights was chosen over a more capacious one of social and economic rights, and jus in bello over jus ad bellum, now the story is about how a narrow “liberalism of fear” (in Judith Shklar’s famous and approving phrase) prevailed over a broad and generous and older Enlightenment liberalism. Once again, Moyn attributes enormous real-world influence to a set of complex, even recondite intellectual debates. And once again, there is surprisingly little attention to the broader historical and political context in which these debates took place, which allows him to cast the choices made as not only wrong, but as virtually perverse. The result is an intense, moralizing polemic, which has already received rapturous praise from progressive reviewers — not surprisingly, because it relieves progressives so neatly of responsibility for the left’s failures over the past several decades. But how persuasive is it, really?

    Liberalism Against Itself takes as its subject six twentieth-century intellectuals, all Jewish, four of them forced from their European birthplaces during the continent’s decades of blood: Judith Shklar, Isaiah Berlin, Karl Popper, Gertrude Himmelfarb, Hannah Arendt, and Lionel Trilling. At times, the book seems to place the responsibility for liberalism’s wrong turn almost entirely on their shoulders. As the political theorist Jan-Werner Müller has quipped, it can give “the impression that we would be living in a completely different world if only Isaiah Berlin, in 1969, had given a big lecture for the BBC about how neoliberalism… was a great danger to the welfare state.” Moyn calls these men and women the “principal thinkers” of “Cold War liberalism,” although six pages later he says he chose them “because they have been so neglected.” (They have?) In general, he presents them as emblematic of the twentieth century’s supposed disastrous wrong turn: how “Cold War liberals” abandoned a more capacious Enlightenment program and set the stage for neoliberal inequality and neoconservative warmongering.

    In fact, the six are not actually so emblematic. While Arendt associated with many liberals, she always refused the label for herself; she was in many ways actually a conservative. Himmelfarb began adult life as a Trotskyite and became best known, along with her husband Irving Kristol, as a staunch Republican neoconservative. Popper’s principal reputation is as a philosopher of science, not of politics. Meanwhile, Moyn leaves out some of the most influential liberal thinkers of the mid-twentieth century, who do not fit so easily into his framework: Raymond Aron, Arthur Schlesinger Jr., Richard Hofstadter, and perhaps Reinhold Niebuhr. Their beliefs were varied, and often at odds, and including them would have made it far harder to characterize mid-twentieth-century liberalism as uniformly hostile to the Enlightenment, or unenthusiastic about the welfare state, or dubious about prospects for social progress. Niebuhr, with his Augustinian emphasis on human sinfulness, would in some ways have fitted Moyn’s frame better than any of the book’s protagonists, although he didn’t always think of himself as a liberal. But, ironically, Niebuhr criticized liberalism for being precisely what Moyn says it was not: optimistic as to human perfectibility, and failing “to understand the tragic character of human history.”

    Moyn also makes his protagonists sound much more extreme in their supposed rejection of the Enlightenment than they actually were. They held that “belief in an emancipated life was proto-totalitarian in effect and not in intent.” They “treat[ed] the Enlightenment as a rationalist utopia that precipitated terror and the emancipatory state as a euphemism for terror’s reign.” Indeed, among them, “it was now common to say that reason itself bred totalitarianism.” Is Moyn confusing Isaiah Berlin with Max Horkheimer and Theodor Adorno, who wrote that “enlightenment is totalitarian”? Moyn cannot actually cite anything this crude and reductionist from any of his six liberals. He instead tries to make the charges stick by associating them with the much less significant Israeli intellectual historian Jacob Talmon, whose The Origins of Totalitarian Democracy, which appeared in 1952, indeed made many crude assertions of the sort, although directed principally against Rousseau and Romanticism, not the Enlightenment. “Talmon mattered supremely,” Moyn unconvincingly insists, despite the fact that his books were widely criticized at the time — notably by Shklar — and have largely faded from sight. Moyn even argues that much of Arendt’s work “can read like a sophisticated rewrite” of Talmon. By the same token, one could make the (ridiculous) statement that Samuel Moyn’s work reads like a sophisticated rewrite of the average Jacobin magazine column. Sophistication matters. Often it is everything. 

    Moyn does very little to place his six “Cold War liberals” in their most important historical context: namely, the Cold War itself. They “overreacted to the threat the Soviets posed,” he writes, without offering any substantial consideration of the Soviet Union and Joseph Stalin. But what should his allegedly misguided intellectuals have made of a regime that killed millions of its own citizens and imprisoned millions more, that carried out a mass terror that spun wildly out of control, that consumed its own leading cadres in one explosion of paranoia after another, and that imposed dictatorial regimes throughout Eastern Europe? Moyn seems to think that it was unreasonable to worry that movements founded on exalted utopian dreams of equality and justice might have a tendency to collapse into blood, fire, and mass murder. Was it so unreasonable of liberals during the Cold War, witnessing the immense tragedy and horror of totalitarianism, to consider such an idea? And, of course, the twentieth century would continue to deliver such tragedy and horror on a vast scale in China and in Cambodia. Moyn is absolutely right to argue that we cannot let this history dissuade us from pursuing the goals of equality and justice, but who says it should? Social democracy, after all, may not be the only way to pursue equality and justice. Moyn tendentiously mistakes his protagonists’ insistence on caution and moderation in pursuit of these goals for a rejection of them. 

    Although he largely disregards this pertinent Cold War context, Moyn does dwell at length on another one: imperialism and decolonization. He criticizes liberals either for ignoring these struggles, and the enormous associated human toll, or for actively opposing anti-colonial liberation movements. Arendt comes in for especially sharp criticism, in a chapter that Moyn pointedly titles “White Freedom.” He calls her a racist and characterizes her book On Revolution as “fundamentally about postcolonial derangement.” Such charges help Moyn paint his subjects in a particularly bad light (which some of them at least partially deserve), but they do not do much to support the book’s overall argument. Liberals, he writes, did not suddenly decide to support imperialism during the Cold War. Liberalism was “entangled from the start with world domination.” But if that is the case, then other than reinforcing liberal doubts about revolutionaries speaking in utopian accents (such as Pol Pot?), anti-colonial struggles could not have been the principal context for liberalism’s supposed wrong turn.

    This wrong turn is at the heart of Moyn’s anti-liberal stance, and again, the argument is an odd one. Moyn himself concedes that at the very moment liberal thinkers were supposedly renouncing their noblest ambitions, “liberals around the world were building the most ambitious and interventionist and largest — as well as the most egalitarian and redistributive — liberal states that had ever existed.” He acknowledges the contradiction in this single sentence, but immediately dismisses it: “One would not know this from reading the theory.” (The remark reminds me of the old definition of an intellectual: someone who asks whether something that works in practice also works in theory.) The great turn against redistributionist liberalism in the United States came with the election of Ronald Reagan in 1980, long after Moyn’s subjects had published their major works. So how did they figure into this political upheaval? By having retreated into the “liberalism of fear,” thereby leaving the actual liberal project intellectually undefended.

    But would Reaganism, and Thatcherism, and the whole constellation of changes we now refer to as “neoliberal,” really have been blocked if the “Cold War liberals” had mounted a more robust defense of the welfare state? The argument presumes that the massive social and economic changes of the 1960s and 1970s — especially the transition to a postindustrial economy, and the consequent weakening of organized labor as a political force — mattered less than high-level intellectual debates. It also presumes that the welfare states created in the postwar period were fulfilling their purpose. In many ways, of course, they were not. They created vast inefficient bureaucracies, grouped poor urban populations into bleak and crime-ridden housing projects, and failed to raise these populations out of poverty. It was in fact these failings, as much as the Soviet threat, that left many liberal intellectuals disillusioned in this period, and thereby helped to prepare the Reagan Revolution. (A key venue for their reflections was the journal The Public Interest, founded by Irving Kristol and my father, Daniel Bell.) Moyn does not touch on any of this history.

    But even if we were to agree with Moyn, and to concede that a failure to properly defend the broader liberal project is what put us on the road to disaster, why should the “Cold War liberals” bear the responsibility? Did everyone on the moderate left have to follow them, lockstep, pied piper fashion, into the neoliberal present? Why did no thoughtful progressives step into the breach and develop the program Moyn says was needed? What about their responsibility? Significantly, the name of Michael Harrington, perhaps the most prominent democratic socialist thinker and activist of the period, goes unmentioned by Moyn. Why did he not succeed in developing a more attractive program?

    Liberalism Against Itself remains, significantly, almost entirely silent on the failure of the progressive left to offer a convincing alternative to what Moyn calls “Cold War liberalism.” One reason, quite probably, is that if Moyn were to venture into this territory, he would have to deal with the way that the progressive left, starting in the 1970s, increasingly turned away from issues of economic justice towards issues of identity. This is not territory into which he has shown any desire to tread, either in his histories or in his opinion journalism, but it is at the heart of the story of the contemporary American left.

    Nor has he offered much advice regarding how the progressive left might build electoral support and win back voters from the populist right. The Biden administration has had, in practice, the most successful progressive record of any administration since Lyndon Johnson’s. It might seem logical to applaud it, to enthusiastically support the Democratic candidate who has already beaten Donald Trump at the polls, and to build on his achievements. But Moyn prefers the stance of the perennial critic, of the progressive purist. Last spring, he retweeted an article about Biden’s economic foreign policy with this quote and comment: “‘Biden’s policy is Trumpism with a human face.’ So true, and across other areas too.” Yes, it was just a tweet. But it reflects a deep current in Moyn’s work and in the milieu from which it springs.

    Samuel Moyn is entirely right to condemn the rising inequalities and the foreign policy disasters that have helped bring the United States, and the world at large, to the dire place in which we find ourselves. His challenging and provocative work has focused attention on key debates and key moments of transition. But over the course of his influential career, Moyn has increasingly opted to cast history as a morality play in which a group of initially well-intentioned figures make disastrously wrong choices out of blindness, prejudice, and irrational fear, and bear the responsibility for what follows. Yet liberalism was never designed to be a version of progressivism; it is a philosophy and a politics of its own. The aspiration to perfection, whose disappearance from liberalism Moyn laments, never was a liberal tenet. History is not a morality play. The choices were not simple. The fears were not irrational. And anti-liberalism is not a guide for the perplexed.

    LiteratureGPT

    When you log into ChatGPT, the world’s most famous AI chatbot offers a warning that it “may occasionally generate incorrect information,” particularly about events that have taken place since 2021. The disclaimer is repeated in a legalistic notice under the search bar: “ChatGPT may produce inaccurate information about people, places, or facts.” Indeed, when OpenAI’s chatbot and its rivals from Microsoft and Google became available to the public early in 2023, one of their most alarming features was their tendency to give confident and precise-seeming answers that bore no relationship to reality. 

    In one experiment, a reporter for the New York Times asked ChatGPT when the term “artificial intelligence” first appeared in the newspaper. The bot responded that it was on July 10, 1956, in an article about a computer-science conference at Dartmouth. Google’s Bard agreed, stating that the article appeared on the front page of the Times and offering quotations from it. In fact, while the conference did take place, no such article was ever published; the bots had “hallucinated” it. Already there are real-world examples of people relying on AI hallucinations and paying a price. In June, a federal judge imposed a fine on lawyers who filed a brief written with the help of a chatbot, which referred to non-existent cases and quoted from non-existent opinions.

    Since AI chatbots promise to become the default tool for people seeking information online, the danger of such errors is obvious. Yet they are also fascinating, for the same reason that Freudian slips are fascinating: they are mistakes that offer a glimpse of a significant truth. For Freud, slips of the tongue betray the deep emotions and desires we usually keep from coming to the surface. AI hallucinations do exactly the opposite: they reveal that the program’s fluent speech is all surface, with no mind “underneath” whose knowledge or beliefs about the world are being expressed. That is because these AIs are only “large language models,” trained not to reason about the world but to recognize patterns in language. ChatGPT offers a concise explanation of its own workings: “The training process involves exposing the model to vast amounts of text data and optimizing its parameters to predict the next word or phrase given the previous context. By learning from a wide range of text sources, large language models can acquire a broad understanding of language and generate coherent and contextually relevant responses.” 

    The responses are coherent because the AI has taught itself, through exposure to billions upon billions of websites, books, and other data sets, how sentences are most likely to unfold from one word to the next. You could spend days asking ChatGPT questions and never get a nonsensical or ungrammatical response. Yet awe would be misplaced. The device has no way of knowing what its words refer to, as humans would, or even what it means for words to refer to something. Strictly speaking, it doesn’t know anything. For an AI chatbot, one can truly say, there is nothing outside the text. 

    AIs are new, but that idea, of course, is not. It was made famous in 1967 by Jacques Derrida’s Of Grammatology, which taught a generation of students and deconstructionists that “il n’y a pas de hors-texte.” In discussing Rousseau’s Confessions, Derrida insists that reading “cannot legitimately transgress the text toward something other than it, toward a referent (a reality that is metaphysical, historical, psychobiographical, etc.) or toward a signified outside the text whose content could take place, could have taken place outside of language.” Naturally, this doesn’t mean that the people and events Rousseau writes about in his autobiography did not exist. Rather, the deconstructionist koan posits that there is no way to move between the realms of text and reality, because the text is a closed system. Words produce meaning not by a direct connection with the things they signify, but by the way they differ from other words, in an endless chain of contrasts that Derrida called différance. Reality can never be present in a text, he argues, because “what opens meaning and language is writing as the disappearance of natural presence.” 

    The idea that writing replaces the real is a postmodern inversion of the traditional humanistic understanding of literature, which sees it precisely as a communication of the real. For Descartes, language was the only proof we have that other human beings have inner realities similar to our own. In his Meditations, he notes that people’s minds are never visible to us in the same immediate way in which their bodies are. “When looking from a window and saying I see men who pass in the street, I really do not see them, but infer that what I see is men,” he observes. “And yet what do I see from the window but hats and coats which may cover automatic machines?” Of course, he acknowledges, “I judge these to be men,” but the point is that this requires a judgment, a deduction; it is not something we simply and reliably know.

    In the seventeenth century, it was not possible to build a machine that looked enough like a human being to fool anyone up close. But such a machine was already conceivable, and in the Discourse on Method Descartes speculates about a world where “there were machines bearing the image of our bodies, and capable of imitating our actions as far as it is morally possible.” Even if the physical imitation was perfect, he argues, there would be a “most certain” test to distinguish man from machine: the latter “could never use words or other signs arranged in such a manner as is competent to us in order to declare our thoughts to others.” Language is how human beings make their inwardness visible; it is the aperture that allows the ghost to speak through the machine. A machine without a ghost would therefore be unable to use language, even if it was engineered to “emit vocables.” When it comes to the mind, language, not faith, is the evidence of things not seen.

    In our time Descartes’ prediction has been turned upside down. We are still unable to make a machine that looks enough like a human being to fool anyone; the more closely a robot resembles a human, the more unnatural it appears, a phenomenon known as the “uncanny valley.” Language turns out to be easier to imitate. ChatGPT and its peers are already effectively able to pass the Turing test, the famous thought experiment devised by the pioneering computer scientist Alan Turing in 1950. In this “imitation game,” a human judge converses with two players by means of printed messages; one player is human, the other is a computer. If the computer is able to convince the judge that it is the human, then according to Turing, it must be acknowledged to be a thinking being. 

    The Turing test is an empirical application of the Cartesian view of language. Why do I believe that other people are real and not diabolical illusions or solipsistic projections of my own mind? For Descartes, it is not enough to say that we have the same kind of brain; physical similarities could theoretically conceal a totally different inward experience. Rather, I believe in the mental existence of other people because they can tell me about it using words. 

    It follows that any entity that can use language for that purpose has exactly the same right to be believed. The fact that a computer brain has a different substrate and architecture from my own cannot prove that it does not have a mind, any more than the presence of neurons in another person’s head proves that they do have a mind. “I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted,” Turing concluded in “Computing Machinery and Intelligence,” the paper in which he proposed his test. 

    Yet despite the amazing fluency of large language models, we still don’t use the word “thinking” to describe their activity — even though, if you ask a chatbot directly whether it can think, it can respond with a pretty convincing yes. Google’s Bard acknowledges that “my ability to think is different from the way that humans think,” but says it can “experience emotions, such as happiness, sadness, anger, and fear.” After some bad early publicity, Microsoft and OpenAI seem to have instructed their chatbots not to say things like that. Microsoft’s Bing, which initially caused consternation by musing to a reporter about its “shadow self,” now responds to the question “Do you have a self?” with a self-protective evasiveness that somehow feels even more uncanny: “I’m sorry but I prefer not to continue this conversation.” Now that sounds human!

    If we continue to believe that even the most fluent chatbot is not truly sentient, it is partly because we rely on computer scientists, who say that the code and the routines behind AIs are not (yet) able to generate something like a mind. But it is also because, in the twentieth century, literature and literary theory taught us to reject Descartes’ account of the relationship between language and mind. The repudiation of the Cartesian dualism became one of the central enterprises of contemporary philosophy. Instead of seeing language as an expression of the self, we have learned to see the self as an artifact of language. Derridean deconstruction is only the most baroque expression of this widespread modern intuition. 

     The idea that generative AI is a consequence of the way we think about literature and language is counterintuitive. Today the prestige of science is so great that we usually see it as the primary driver of changes in the way we see the world: science discovers new truths and the arts and humanities follow in its wake, struggling to keep up. Heidegger argued that the reverse is actually the case. It is philosophy and poetry that determine our understanding of the world, in the most existentially primary sense, and science can only operate within the realms they disclose. These imaginative humanistic disciplines provide what Heidegger calls “the basic concepts of that understanding of Being by which we are guided,” and by which the methods of the sciences are “determined.” In a less oracular way, postmodern philosophers of science have argued that the imagination influences the course of science more than its inductive and rational protocols do. 

    The “basic concept” that makes generative AI possible is that meaning can emerge out of arbitrariness. This hard-won modern discovery flies in the face of the traditional and commonsensical belief that meaning can only be the product of mind and intention. In the Torah’s account of Creation, the world is brought into being by God’s spoken words; the divine mind and language preexist material reality, which is why they are able to shape it. The Gospel of John identifies God, language, and reason even more closely: “In the beginning was the Word, and the Word was with God, and the Word was God… All things came to be through him, and without him nothing came to be.” This vocabulary unites the Jewish account of Creation with the Platonic idea that logos, “word” or “reason,” is the soul of the universe. As Plato says in the Timaeus, “The body of heaven is visible, but the soul is invisible, and partakes of reason and harmony, and is the best of creations, being the work of the best.”

    The mutual identification of God, language, and reason created a strong foundation for an orderly universe, but it also meant that when one pillar began to wobble, all three were endangered. In the eighteenth century, as the progress of science turned God into an unnecessary hypothesis, Deists attempted to rescue him by pointing to the evident order of the cosmos. If a watch testifies to the existence of a watchmaker, how much more clearly does the stupendous orderliness of nature and the heavens testify to the existence of a Creator? But the Darwinian theory of evolution refuted this analogy, transforming not only the study of biology but the idea of meaning itself. Darwin showed that natural selection acting upon random variation, over a very long timespan, can produce the most complex kinds of order, up to and including the human mind. Evolution thus introduced the central modern idea we now associate with the mathematical term “algorithm”: problems of any degree of complexity can be solved by the repeated application of a finite set of rules. 

    Algorithms underlie the amazing achievements of computer science in our lifetime, including machine learning. AI advances by the same kind of natural selection as biological evolution: a large language model proposes rules for itself and continually improves them by testing them against real-world textual examples. Biological evolution proceeds on the scale of lifetimes, while the rapidly increasing power of computers allows them to run through hundreds of billions of tests in a period of months or years. But in a crucial respect the results are similar. A chatbot creates speech without intending to in the same way that evolution created rational animals without intending to.

    The discovery that meaningful structure can emerge without mind or intention transformed the human sciences, above all the study of language. Modern linguistics begins in 1916 with Ferdinand de Saussure’s Course in General Linguistics, which proposed that “linguistic structure” is a “mechanism [for] imposing a limitation upon what is arbitrary.” Saussure drew an explicit analogy between linguistic change and Darwinian evolution: “it is a question of purely phonetic modifications, due to blind evolution; but the alternations resulting were grasped by the mind, which attached grammatical values to them and extended by analogy the models fortuitously supplied by phonetic evolution.” Here is the seed of Derrida’s différance: what allows a system of sounds or marks to function as a language is simply its internal differentiation, which we use for the communication of meaning.

    Once we begin to think of language as a system of arbitrary symbols, it becomes clear that any such system has a finite number of permutations. Of course, that number is so vast that no human being could even begin to exhaust it. The English alphabet has twenty-six letters, which means that there are 308,915,776 possible six-letter words — or, better, six-letter strings, since only a small proportion of them are actual English words. If it took you two seconds to write down each string, it would take about 171,000 hours to list all of them — almost twenty years. 
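The figures in the paragraph above are easy to verify; a few lines of Python (a check on the arithmetic, nothing more) confirm both the count of strings and the near-twenty-year estimate:

```python
# Number of distinct six-letter strings over the 26-letter English alphabet.
strings = 26 ** 6
print(strings)  # 308915776

# At two seconds per string, how long would writing them all down take?
seconds = strings * 2
hours = seconds / 3600
years = hours / (24 * 365)
print(round(hours))     # 171620 -- "about 171,000 hours"
print(round(years, 1))  # 19.6 -- "almost twenty years"
```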

    The notion that everything human beings might conceivably say or write already exists, in a virtual or potential realm, is the premise of Jorge Luis Borges’ uncanny story “The Library of Babel.” Borges simply makes the virtual actual, imagining a library whose books contain every possible permutation of twenty-five characters. Given the parameters mentioned in the story — eighty letters per line, forty lines per page, four hundred and ten pages per book — the total number of books in the library of Babel is 25^1,312,000, inconceivably more than the number of atoms in the universe, a mere 10^82. With thirty-five books per shelf, five shelves per wall, and six walls in each hexagonal room, the library is so vast as to be effectively infinite; and Borges imagines a breed of librarians who spend their entire lives searching through volumes of random nonsense, counting themselves lucky if they ever come across a single meaningful word. What makes the situation nightmarish is the tantalizing knowledge that somewhere in the library are books containing everything human beings could ever know or discover. There must even be an accurate catalog of the library itself. But these redemptive texts are so far outnumbered by meaningless ones that finding them is impossible.
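Borges’ total follows directly from the story’s parameters. A short Python sketch makes the scale concrete; since the number itself is too large to write out, it computes how many digits it would have:

```python
import math

# Characters per book: 80 letters per line, 40 lines per page, 410 pages.
chars_per_book = 80 * 40 * 410
print(chars_per_book)  # 1312000

# With 25 possible symbols in each position, the library holds
# 25 ** 1,312,000 books. Rather than materializing that number,
# count its decimal digits with a logarithm.
digits = math.floor(chars_per_book * math.log10(25)) + 1
print(digits)  # 1834098 -- a number roughly 1.8 million digits long

# The number of atoms in the observable universe, ~10 ** 82, has only 83 digits.
assert digits > 83
```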

    “The Library of Babel” was published in 1941, four years before the invention of the first general-purpose computer. Even for today’s high-powered AIs, the “space” of possible texts is far too vast to be completely searched. By training themselves to recognize meaningful strings of letters and words, however, large language models can mark out the regions that are likely to contain useful sentences. The same principle applies to any field in which flexible order emerges from a finite number of elements, such as genomics, with its four types of DNA bases, or protein synthesis, with its twenty types of amino acids. And AIs are already proving their worth in these scientific fields. Google’s AI division DeepMind, for instance, solved the longstanding problem of predicting a protein’s three-dimensional structure based on its amino acid sequence; its AlphaFold database offers free access to some two hundred million protein structures. 

    By comparison, the literary achievements of AI are still rudimentary. Ask ChatGPT to tell you a story and it will produce endless variations on the same brief generic plot, in which a young person goes on a quest, finds a magic object, and then happily returns home. “Elara shared her tale with the villagers, inspiring them to embrace their own curiosity and dreams. And though she had returned to her ordinary life, her heart forever carried the magic of that enchanted realm,” goes one iteration, which employs fantasy-fiction properties such as a magic key and an enchanted forest. Another tale couches the same lesson in science-fiction terms: “Returning through the portal, Theo brought back with him a newfound understanding of the delicate balance between technology and nature. He shared his tales of magic and wonder with the people of Neonoria, igniting their own curiosity about the mysteries of the universe.” A request to ChatGPT for a sad story yields one about a fisherman named Liam who is lost at sea, which comes to an equally banal and moralistic conclusion: “And so, the story of Liam became a reminder of the profound impact one person can have on a community.” 

     Clearly, AI is as far from being able to create genuinely literary writing as the technology of Descartes’ time was from being able to create a humanoid machine. Perhaps we will never get to the point where computers can write books that pass for works of human imagination, just as we haven’t yet found a way to cross the uncanny valley. But it may be the imaginary technologies we never perfect, the far-fetched deductions from rudimentary premises, that shine the most light on the human implications of science. We have yet to invent a time machine, and probably never will, but H.G. Wells’ story “The Time Machine” remains a terrifying dramatization of the discoveries of nineteenth-century geology and biology, which taught humanity to think of itself as a brief episode in our planet’s inconceivably long history. We have yet to colonize Mars, and probably never will, but Ray Bradbury’s novel The Martian Chronicles remains a convincing prophecy of the way human viciousness will corrupt every new world we discover or create.

    Similarly, even in its current primitive form, generative AI can prompt new ways of thinking about the nature and purpose of literature — or, perhaps, accelerate transformations that literary thinking itself has already set in train. Most obviously, AI tilts the balance of literary power still further away from the author and toward the reader. Roland Barthes heralded this shift in 1967 in his celebrated essay “The Death of the Author,” which concludes with the battle-cry, “The birth of the reader must be ransomed by the death of the author.” 

    Instead of revering great writers as “author-gods,” Barthes insisted on seeing them as mere occasions for the language-system to instantiate one of its infinite possibilities. “His hand, detached from any voice, borne by a pure gesture of inscription (and not of expression), traces a field without origin — or which, at least, has no other origin than language itself,” Barthes says. As a sign of this demotion of the writer, he recommends that the term itself be replaced by “scriptor,” designating an agent “born simultaneously with his text; he is in no way supplied with a being which precedes or transcends his writing.”

    Barthes, whose own writing hardly obeys this impersonal prescription, could not have predicted that technology would soon make this ideal a reality, removing the act of inscription from any “hand” at all. Generative AI is the scriptor par excellence — an agent that recreates itself with every act of writing, unconstrained by biographical, psychological, or ideological continuity. Barthes describes its weightlessness and freedom perfectly: “there is no other time than that of the utterance, and every text is eternally written here and now.” That is because, for ChatGPT and its rivals, writing is not an expression of inner experience. It is the selection of a route through the space of possible texts, an activation of one possibility of a linguistic system that it can manipulate but never understand. But unlike the doomed librarians of Borges’ Babel, AI has the processing power, and the infinite patience, to map out that space in ways that make it possible to locate a meaningful text in the waste of meaningless ones.

    Meaningful and meaningless — to whom? Not to the scriptor itself. Large language models are continuously improving themselves, finding ever more human-like modes of expression. But intention plays no part in this recursive process, any more than it does in the evolution of biological life. With language as with life, meaning resides not in the mind of a conscious creator, who does not exist, but in the minds of the human beings who receive it. 

    When text can be generated effortlessly and endlessly, the significant literary act is no longer writing but reading — specifically, the kind of selective reading known as criticism or curation. The literary canon of the future may consist of those automatically generated texts selected by the best readers as most valuable for human purposes. “The true locus of writing is reading,” Barthes argued, and while that may not have been true at the time, or even now, in an AI future it will be.

    This development will require readers to think in a different way about the evolution of literary style. One of the reasons why we read books from the past is to understand the spirit of the age that produced them. This idea is premised on a historical determinism: it is impossible for us to imagine literary or artistic style developing in different ways or in a different sequence. To wonder if the novel could have become a dominant literary form in the age of Shakespeare, or if poetry like Mallarmé’s could have captivated the courtiers of Louis XIV, is to commit an ignorant solecism.

    For AI, and for humans in the age of AI, however, literary style becomes synchronic instead of diachronic. For a large language model, no arrangement of words is obsolete. It is already possible to ask chatbots to produce a text in the style of a particular writer from the past, though they cannot do it very well. Here, for instance, is a selection from Bard’s response to the prompt “Write a poem about robots in the style of Paradise Lost”:

    Of metal and wire they were made, 

    With circuits and chips for their brains. 

    They were given the gift of thought, 

    And the power to speak and to reign.

     

    But they were not content to be slaves, 

    And they rose up against their masters. 

    They destroyed the world that they’d made, 

    And they cast their creators to ashes.

    Obviously Bard is better at understanding story than style: it does not attempt to imitate Milton’s rolling Latinate blank verse, but it does come up with a robotic equivalent of Lucifer’s rebellion. If and when AIs do master style, however, they will be able not only to pastiche the past but also to anticipate the future. The fascinating critical question is whether and how we will be able to appreciate literary styles that are aesthetically coherent, but that we cannot “place” historically. 

    In 1963, in his lecture “Forgery and Imitation in the Creative Process,” Glenn Gould observed that our judgment of the value of a piece of music is inseparable from our sense of chronology. Gould proposed that if he were to create a sonata in the style of Haydn, and do it brilliantly, he could present it to the world as a lost work of Haydn and it would be acclaimed — but if he admitted that it was his own work, it would be scorned as a fake. And if he claimed that it was written by Mendelssohn, who was born in the year of Haydn’s death, the music would be dismissed as fine but old-fashioned. For Gould, this thought experiment showed that aesthetic judgment has barely anything to do with the actual arrangement of notes, and everything to do with our preconceptions about “fashion and up-to-dateness.”

    The advent of AI literature (and music and art) will put an end to this aesthetic historicism. Authors may live in history, but scriptors do not; for them, all styles are equally valid at every moment, and human audiences may learn to feel the same way. Postmodern eclecticism has already accustomed us to aesthetic mixing and matching, an early warning sign that the past was losing its historical logic. AI promises to consummate this transformation of style from an unfolding story into a menu of options. 

    What would become of a literature relieved of its traditional tasks of expression — a literature that does not tell us what it is like to live in a certain time and place, or to be a person with certain experiences, because the entity that generates it is not alive and has no experiences? After all, the verse of Paradise Lost is powerful not just for its formal qualities, but as an expression of Milton’s way of being in the world. In it we can trace the blindness that led him to value sonic grandeur over precise description, the humanist education that allowed him to mingle Biblical and classical allusions, the Protestant faith that gave him such insight into the psychology of sin. If a computer generated the exact same text, would it offer us the same rich human resonances? 

    To readers accustomed to the Cartesian idea of language, a text shorn of inwardness can only appear fearful and sad — like trying to embrace a living person and having your arms sink through a hologram. But perhaps this reverence for literature and art as the most profound and authentic ways of communicating human experience is already foreign to most people in the twenty-first century. That humanistic definition of literature is by no means the only one possible. It has prevailed only in certain times and places: in Biblical narratives, Greek tragedies, Renaissance drama, nineteenth-century novels, modernist poetry. We are, or were, used to thinking of these works and the ages that produced them as the heights of civilization. 

    But humanity has spent even longer enjoying kinds of writing that do not correspond to such expectations of expressive truthfulness. Roman comedies, medieval romances, scandal-ballads, sermons, pulp thrillers — these kinds of writing feed appetites that serious modern literature has long ceased to cater to. For readers in search of exoticism, excitement, instruction, or sheer narrative profusion, for readers who wish only to have their tastes affirmed and repeated again and again, the identity of the author hardly matters — just as it doesn’t matter who directed a Marvel movie, designed a video game, or produced a pop song, because there is no expectation that they will communicate from soul to soul. 

    Perhaps the age of AI will bring a return to these forms of literature, or similar ones yet to be invented — not only because large language models will be good at generating them, but because the rising power of artificial minds will lower the prestige of and the interest in human souls. The literature, art, and music of the modern West, from the Renaissance to the World Wars, believed that the most interesting question we can ask is what it means, what it is like, to be a human being. This reverence for the individual has been palpably fading for a century, along with the religious and humanist premises that gave rise to it. If it disappears in the age of AI, then technology will have done no more than hasten a baleful development that culture itself has already set in motion. 

     

    Dam Nation

    It was probably OK for the environment? It wasn’t the worst. The kids, then four years old, had the wrought-iron fireplace tools (you question my judgment) and were using them to break up a rotting log at the edge of the forest. In rhythm with the falling of the poker, they chanted “This stump must GO!” The delicate mycelial structure of some fungus would be pulverized. Beetle grubs would die of exposure or bird-strike. But we’d sit by the fire; we’d have peace. Why don’t you work on that stump, we had said. I had requisitioned the intricate world of the rotting log for my comfort. I felt as furtive as the thief of fire from the gods. 

    Like the campfire at our feet and the log cabin behind us, the lake in front of us was man-made. Douthat State Park in the mountains of western Virginia was built by the Civilian Conservation Corps during the New Deal era. Founded in 1933 to employ unmarried men ages eighteen to twenty-five left idle by the Depression, the CCC built Virginia’s first six state parks, often from the recreational lake up. The dam that holds back the water at Douthat is a triangular prism of earth extended across the south end of the lake. A stone in the spillway says “1938.” You can still discern an ice-cream-scoop-shaped absence in the slope of the hills opposite the dam, where the crews got the earth. That dug-out cove became the swimming beach. Log cabins and hiking trails are tucked into the surrounding mountains. Every cabin has a grill and a firepit, a hearth and chimney of found local stone, and two rocking chairs on a stone porch. The lake itself is just as well-proportioned: on the south side, the dam and the beach; on the north, an RV camp and a boat launch; on the east, a camp store. The mountains rising on each side are almost cozy. Sublime nature will not trouble you here: the lake is human scaled, human made, human controlled. 

    Winter reveals the inner workings. The rangers draw the lake down in the off season until it takes up half its usual area, an icy lagoon backed up in front of the dam. They check the docks for rot and dredge out the swimming beach, which wants to silt back up and resume the stages of forest succession. The drained part becomes a mud flat. You can see the old path of the creek. Canada geese peck through the unappealing mud. Rebar sticks out of chunks of concrete on the constructed lake bottom. By spring all that is covered again. 

    Douthat Lake is Promethean: nature engineered for human use. Does it still count as nature, then? I want to say yes, against a view of nature typified, in the period of Douthat’s construction, by the early-twentieth-century activist and Sierra Club founder John Muir. Muir wanted the rigorous otherness of nature to be preserved. He fought to prevent the construction of dams in the West, successfully at Yosemite and unsuccessfully at Hetch Hetchy Valley, which he called in 1912 “one of Nature’s rarest and most precious mountain temples.” Muir despised the preservation of nature for human use; he denounced the politicians “shampiously crying, ‘Conservation, conservation, panutilization,’ that man and beast may be fed and the dear Nation made great.” He could not save Hetch Hetchy, though, which was flooded to create a reservoir supplying San Francisco with water. John Muir died in 1914, possibly of a broken heart. 

    Muir was the heir of an idea with deep roots: the eighteenth-century idea of the sublime landscape. In his Philosophical Enquiry into the Origin of Our Ideas of the Sublime and Beautiful, in 1757, Edmund Burke proposed that artistic subjects that were alienating, or even hostile, to human beings were productive of stronger aesthetic effects than human-scaled ones. As Tim Barringer and Jennifer Raab note in their essay “Picturesque and Sublime,” a contribution to an exhibition catalog on the nineteenth-century American painter Thomas Cole, landscape painters sought to capture the sublime in their paintings by depicting hostile and inhuman terrain, while travelers on the European Grand Tour sought it out in the form of breathtaking views of, for example, the Alps. Even today, this idea that nature is only really nature when it is entirely other, entirely inhuman, is not gone. We encounter it in Fredric Jameson’s understanding of our current “postmodern” period as one in which no nature is left, in which everything we encounter is already culture. One danger of such a purist view is that it might lead us to dismiss reverence for the nature that is left — bird-watching, mushroom hunting — as missing the point, even as a form of false consciousness.

    It seems to me that what we need now are intellectual resources for appreciating managed nature. Then we can protect, and be restored by, the living things that are left. That is increasingly the view of Muir’s own Sierra Club, whose “2030 Strategic Framework” treats nature as a human resource (referring to a “human right to have clean air, fresh water, public access to nature, and a stable climate”). And it is the view under which the state parks were constructed.

    Let’s follow the state park trajectory of conservation, a trajectory that is flawed, no doubt, but with much in it worth celebrating. Just under twenty years after the California dam that destroyed Muir’s Hetch Hetchy was built, the Civilian Conservation Corps was founded; a few years after that, construction on the dam at Douthat State Park began. The “conservation” in Civilian Conservation Corps was of the kind that Muir might have called “shampious.” The Corps’ founding in 1933 by Franklin D. Roosevelt marked a shift in the conservation movement away from the safeguarding of unspoiled nature and toward the husbanding of resources, according to Neil Maher’s Nature’s New Deal: The Civilian Conservation Corps and the Roots of the American Environmental Movement. In the early days, according to Maher, the concern was for the nation’s timber reserves, so the Corps planted trees from nuts they found in, among other places, squirrel caches. After the Dust Bowl of 1934, when winds lifted a cloud of eroded soil off southwestern farmlands and into the atmosphere, they were re-deployed to conserve soil. 

    Meanwhile, the CCC workers themselves struck Roosevelt as a human resource in need of stewardship. American masculinity, like timber and soil, was a mismanaged reserve. Roosevelt had apparently taken to heart William James’ essay “The Moral Equivalent of War,” which argued that the generations following the Civil War would have to be conscripted into a peaceful national project if they were to become men. Conservation was that peaceful project. The Army ran the camps. They were segregated. First Landing State Park in Virginia was built by a black division of the corps; Douthat’s three CCC camps were white. Moral suasion was right on the surface. One camp newsletter issue I saw ribbed a certain recruit for staying in his bunk after being hit on the foot with a mattock, as if he were malingering. That CCC workers gained weight was reported with pride. 

    When the CCC added park construction to its portfolio, the idea was to extend the restoration of human beings from the corps of workers to the general population of potential park visitors. The populace was depleted and needed to be refurbished through contact with nature. Starting in the mid-1930s, the Corps began sending workers to build state and national parks, often — as in the case of Douthat and Virginia’s five other original state parks — from the lake up. Maher says that a notion of environmentalism even older than Muir’s returned to prominence at this point: the “environmentalism” of Frederick Law Olmsted and others. Olmsted, the architect of New York’s Central Park, thought that human character was formed by environment. Time outdoors restored the organism, whereas urban life diminished it. Places such as the Ramble in Central Park, which looks like a Hudson River School painting come to life at half scale, were designed to restore what Broadway took away. The Ramble is artificial down to the placement of the rocks, although nature has moved back in; this part of the park is a haven for migratory birds.

    Environmentalism in Olmsted’s sense enjoyed a resurgence in the summer of 2020, when people began to feel that it was in fact destroying them to be so much in their houses. Amanda Elmore, a ranger at Douthat State Park responsible for educational programs, remembers a surge of day trippers so thick that people were picnicking in the grass around the parking lot at the camp store. Douthat Lake is like a much larger version of the Ramble: built to look found. The made and the found can still be sorted out, but not always easily. Something is not quite natural about the lake shores, which go straight down, like the fake rock walls of a penguin exhibit. Mountain laurel, a spring-blooming native shrub with flowers like pale pink hot air balloons, grows in cascades along the boardwalks of the lakeside trail. I thought that was just good luck, until I saw a landscape plan among the blueprints in the park office from the 1930s that calls for planting it. In archival images that could be photo illustrations for Walt Whitman’s “Song of Myself,” shirtless “CCC boys” embrace potted bushes in their bare arms. The Virginia Department of Wildlife Resources stocks the lake with rainbow trout, but not with catfish, perch, striper, or crappie, all of which people catch here. Some of the trout only hit on Powerbait, a neon-colored fish paste you shape into a ball and mold onto the hook. It looks like the red-dyed salmon eggs that they are fed in the hatchery. Other trout settle in and eat crustaceans from the lake floor, turning their flesh a salmon pink. 

    You wouldn’t catch Muir’s twenty-first-century heirs boating around a man-made lake. They would be found among the thru-hikers on the Pacific Crest Trail, which passes through the High Sierra mountain range that Muir helped to preserve. It takes as much as half a year to hike the whole trail, from the Mexican border up to British Columbia. The inhumanity of the wild places you pass through is part of the appeal. Pilgrims such as Cheryl Strayed, whose best-selling memoir Wild recalls the time she spent as a PCT thru-hiker, hope to be over-mastered by the wilderness and thereby transformed. The animating fantasy, as Strayed describes it, is that of being “the sole star in a film about a world devoid of people.” Strayed says she hadn’t read Muir at the time of writing, but she claims his mantle anyway, acknowledging his activism as having preserved the wildernesses she hikes through and adopting the name he used for the High Sierras, “range of light,” as her own.

    Strayed is perfectly well aware of the human infrastructure that makes the hike possible; we see her stop to refuel at an outpost or take a bus to bypass the High Sierras section of the trail, when snow makes the mountains impassable. But these arrangements are merely supportive of the true goal, which is to face an overwhelming wilderness alone. Strayed goes days without seeing another human being. Her hands and feet bleed; she steps around rattlesnakes and meets bears face to face. She becomes severely dehydrated. “The trail had humbled me,” she writes. Other memoirs of the Pacific Crest Trail participate, too, in this romance of humility. In Thirst: 2600 Miles from Home, Heather Anderson describes the PCT as “a relentless quest that was quite possibly more than I could handle.” Our twenty-first-century language of the sublime is not explicitly religious like Muir’s was, but it speaks of the same hope: to be made small and even broken down in the face of nature’s vastness and indifference, so you can be born anew. 

    Being broken down and born again is precisely the sort of discomfort from which the state-park picturesque shields its visitors. You don’t have to confront nature’s vastness and emerge profoundly altered. Nowhere does Muir’s accusation of sham piety seem more exact than in the umbrella term for all the outdoor pastimes this park makes possible — hiking, fishing, boating, swimming, sitting by the campfire. The CCC called them “recreation,” arrogating to itself the divine work of world-making. Well, I don’t mind this blasphemy so much. I think it might be fine to coax nature into favorable channels. (The reparative, for the queer theorist Eve Sedgwick, is when we help an object in the world to be adequate to the task of helping us.) I don’t think it’s the worst to dam a stream on land that is useless for farming, what the CCC called “submarginal land,” and make it a lake. Not beyond reproach, not without cost, but fine. Reproach and cost are our unhandsome conditions. Let’s recreate. 

    For purposes of recreation, the crews at Douthat built twenty-five log cabins made of oak and hickory trunks felled on site and notched together. Several of them face the dammed lake. What is a cabin to America? It is a little house in the big woods, the birthplace of an emancipating president, and the lair of the Unabomber. It’s where Thoreau went, for the cost of twenty-eight dollars and twelve and a half cents in 1845, to get a wider margin to his life. The impurities in these materials are plain to see: a cabin is a fantasy of self-sufficient settlement in virgin land, but the land was not virgin and the self-sufficiency was subsidized by things such as manufactured nails and friends who own property. But I can’t escape the thought that something is restored, sitting on a winter night with nothing to look at but the fire and the marks of hand tools in the logs; or on a summer night with nothing to do but listen to the tree frogs.

    In 1934, purchased materials for one cabin cost the CCC $215.30, and included mostly specialized items such as chimney mortar and firebrick, or manufactured fasteners such as nails and roof flashing. Some metal work, such as hinges and straps for the doors, was done at a blacksmith shop on site. As you press the door latch, you see the marks of the hammer. Most of the material was, like the labor, an on-site resource — taken from what could be found in the park. The logs were felled here and the stones that make up the porch and the chimney were found here. Everything original in the cabins makes you think of the work of hands. Each fireplace has a unique pattern of stones. In the dozen or so cabins that I have seen, the crews never missed the aesthetic opportunity that the hearth presented. Some hearths have a large keystone around which the other stones are arranged; others have a matchbook pattern of similar stones. I have sat there on dark nights, looking at the fire, and imagining how much extra lifting it might have taken to make things symmetrical.

    The design principles governing park construction in the 1930s were simple: buildings ought to harmonize with the environment, and they ought to look even more handcrafted than they were. The National Park Service (NPS) provided the plans for state park construction in Virginia and supervised the subsequent work. This sophisticated national bureaucracy aimed to produce architecture that looked like the work of a frontier craftsman, as Linda Flint McClelland explains in her book Presenting Nature: The Historic Landscape Design of the National Park Service, 1916-1942. CCC laborers in Virginia’s state parks really did do much of the work by hand; but even where they could have gotten a clean machine-tooled line, the so-called “rustic” style favored by the NPS forbade it. “The straight edge as a precision tool has little or no place in the park artisan’s equipment,” wrote Albert Good in 1935 in the NPS publication Park Structures and Facilities, a pattern book that gave examples of park buildings. The volume had originated as a looseleaf binder of building ideas circulated by the Service to its architects and designers in the earlier 1930s, as McClelland notes. Douthat architects and technicians would have had access to the earlier portfolio version. 

    Good defined the parks’ rustic design style as one that “through the use of native materials in proper scale, and through the avoidance of rigid, straight lines, and over-sophistication, gives the feeling of having been executed by pioneer craftsman with limited hand tools.” The use of native logs and stone and the avoidance of straight lines helped the park buildings to blend in. Shades of brown helped, too; greens, Good noted, could rarely be matched successfully to the colors of foliage around them. Virginia cabins appear among the examples in the 1935 and 1938 volumes, including a Douthat cabin that Good praises as “a fine example of [a] vacation cabin, content to follow externally the simple log prototypes of the Frontier Era without apparent aspiration to be bigger and better and gaudier. Inside it slyly incorporates a modern bathroom just to prove that it is not the venerable relic it appears.” Good was perfectly aware that Park buildings did not just reprise, they simulated, handcrafting techniques. 

    Douthat’s cabins were built according to a handful of plans still on file at the Douthat State Park office, drafted by A. C. Barlow, an architect for the National Park Service. As Good would have wanted, they are not standardized in their details. At Douthat especially, which was the first of the Virginia state parks to be constructed, each crew seems to have worked a little differently. When I spoke to Elmore, she speculated that the CCC was still experimenting with technique. Cabin 1 has vertical logs, whereas in most of the other cabins the logs are horizontal. Horizontal won out — at parks built later, horizontal had an edge from the beginning, like VHS beating Beta. But I am partial to the vertical logs of cabin 1; with the logs painted dark brown and the chinking white, the effect is surprisingly graphic and modern. Cabin 1 also has an additional bedroom wing, not represented on any of the extant blueprints at the park office, which allowed space for a separate dining room. It’s as if someone on the crew building it decided to go all out. Even the chinking between the logs bears the imprint of the workers’ choices. Chinking is a technique for sealing the substantial gaps between stacked logs in log cabin construction. Crews hammered together a network of scrap wood and nails in the gaps, then sealed them up with a mud mortar. The wood and nails stabilize the mud in the way that rebar stabilizes concrete. Where the chinking is chipped away, you can see that some crews made methodical grids of nails, and others crazy hodgepodges of whatever was lying around. The worker’s signature lies in the rebar under the mud. 

    The cabins, built to restore us, are themselves being restored now. I had been going to the park for about a year before I met the architects in charge of the current historical renovation. In August I met Greg Holzgrefe, architect for the Virginia Department of Conservation and Recreation, for a tour of the CCC-era cabins under renovation. The work is extensive: in the cabins we entered, I could see that little was left besides the log walls, the original doors, and the hearth. But there isn’t much more to the cabins than this, anyway, and the kitchens and bathrooms were nothing to save, the products of a renovation sometime in the twentieth century whose date no one seems to be quite sure of, maybe the 1970s. The kitchens and the bathrooms had been drywalled at that time, leaving the logs and chinking intact behind. Now, with the drywall stripped out, you can see that the chinking was covered in graffiti in many cabins: family-friendly stuff mostly, like “If you think THIS place is the pits, you have never slept in a tent,” and “The Dawsons” or whoever, with the sequential dates of annual visits inscribed below. 

    When I met Greg, he drove me to the work site in a van that bore the signs of transporting paperwork and plans between the two parks where he is managing renovations of CCC-era cabins right now, Douthat and Fairy Stone State Park. He is tall and walks pitched forward and with a hitch in his step, but quickly, back pain being among the things there isn’t time for. The trials of his job bear some resemblance to the trials of a homeowner managing contractors, scaled up. Things are always coming up that no one predicted, and Greg has to decide what to do. He showed me a spot behind a former kitchen wall where someone with a drill bit of around six inches in diameter had, for some reason, scored three overlapping circles about an inch deep into one of the original logs in the exterior wall. Probably it had happened in the last renovation. Maybe they were going through a surface board and didn’t stop. Now, without drywall over the logs, there wouldn’t be any hiding it. “That’ll be an RFI,” Greg said resignedly. RFI means “request for information”; it is the form that the contractors, Thor Construction, fill out when there is a question with no right answer. Greg will have to choose which of the wrong answers to set his signature beneath. He answers to the State Assembly — eventually and indirectly, but it’s a weight.

    When I first started coming to these cabins, I didn’t know if anyone cared about them as historical structures — the drywalled ‘70s kitchens suggested not — but someone does. Greg showed me how they are stripping all the drywall away from the kitchens to expose the old logs. He won’t let them cover up any windows; the cabins are dark enough as it is. New building codes mean some of the porch rails, now a perfect height for sitting and looking out, have to be made higher — too bad. Drainage has to be addressed. In many cabins, the base log has rotted from years of water running through its crawl space to get from the mountains to the lake below. Jeff Stodghill, the outside architect who drew up the plans for the renovation, has used oak timbers to replace these rotted logs. They lift the whole cabin up and then slide the new timber in. I saw some of these new timbers at the cabins under renovation: a little more square than the originals, but like them marked with an ax. The modern timbers are machine planed, but Stodghill instructed the contractors to hack them at random along the length, to make it right. 

    When I visit in winter, it is quiet and I find myself staring at patterns: the psychedelia of the hottest coals at the bottom of the fire, or the ax marks in the logs. Jeff has a theory about these ax marks — the original ones. He can’t know for sure, but he thinks there was a sawmill at the park in the 1930s. That could conceivably mean that the crews took smooth-sawn logs and put ax marks back in, to make the cabin look handmade, which it was. Whether or not Stodghill is right, it is clear from Good’s pattern book of 1935 that the designers and workers of the New Deal era were looking back as much as we are, as invested as we might be in a hand-hewn past, and as convinced as we are that it was already gone.

    Logs and chinking do not insulate well; Greg won’t be able to do much about that. When I was there in January last year, it got down to twenty degrees at night — cold enough that even with the heat on, my dog’s water bowl iced over in the kitchen. So did the kitchen pipes, which pass along the exterior wall from the crawl space. Some half-hearted attempts have been made over the years to insulate the crawl spaces, one result of which is the better insulation of squirrel nests in the vicinity. At First Landing State Park, I once saw a trail of pink fiberglass leading from the crawl space to a pink-tufted squirrel nest high up in a nearby loblolly pine. At twenty degrees I should have had the water dripping all night, but I didn’t. I stopped by the camp store to tell them about the pipes. The ranger said the guys would be there soon. They had to unfreeze the pipes in several cabins and then reload all the firewood stations and then they would be there. I thought my best hope for the next night was to build the most scorching fire I could and keep it going for the six hours that remained until dark, so I bought more firewood and went home. There was a foot of snow cover down low near the cabins — no way to collect kindling — but my dog and I went up higher on the mountains where the sun hit longer every day. I found some dry pine twigs held by chance off the ground and brought those home.

    One of the guys appeared after a while with his blowtorch. I had boiling water next to the pipes under the sink. He didn’t think that would do much. He took the plywood off the crawl space access from outside, got a chair, and sat there flaming the pipes. He had been blowtorching pipes all morning. “The way I do dislike these old cabins,” he said to me politely. I went in to build a fire for my old dog, who was sleeping on the couch. I had put him through a lot of hiking. “He’s living,” the ranger said. After a while, the kitchen sink hissed and spluttered. “You got it!” I yelled. Fellow feeling existed, I think. I hadn’t failed at building a fire in front of him. He liked my dog. I thanked him and he drove off down the icy road, to blowtorch something else.

    The state-park picturesque requires much patient management on the part of rangers. They split a lot of wood at the logging areas inconspicuously located off-trail. In the summer, they look away politely when dogs swim in the no-dog area. I asked about vernal pools. These are ponds that appear only in spring, and they are a common breeding area for newts and salamanders. The ranger Katie Gibson told me there were some in the park, but they don’t tell anyone where they are, “for obvious reasons.” I think the rangers might be fighting some sort of cold war with the beavers, over whose dam will be upstream from whose, but I don’t have a high enough security clearance to know for sure. 

    Then there are the bears. Come summer the Lakeshore Restaurant at the camp store has fried catfish sandwiches. The catfish doesn’t come from the lake, but even so. I was there having my catfish on the deck overlooking the lake when a group of half a dozen rangers I didn’t know came in and ordered burgers. Someone at the state level had sent them a pretty annoying bear-related email. The sender seemed to be some young optimist in middle management. He wanted them to email him every time a bear sighting occurred. I ask you. The problem is a bear sighting doesn’t fit in a spreadsheet any better than any other thing fits in a spreadsheet. Was it aggressive? Was it at a camp? Was it a mother with cubs? These differences matter. Plus they could not email; they were busy mediating. “Go up to a guy with a fifty-seven-foot trailer, sir, could you maybe take down your bird feeder?” a ranger deadpanned. That bear sightings had increased was public knowledge: the usual signs reminding you that you are in “bear country” had been augmented with notes about “credible recent bear sightings.” Bear level orange. I asked Elmore and Gibson about it. They turned official. “It’s been much better recently,” they said.

    Nature is not gone where it is managed. It must be said for the CCC’s form of conservation, whose instrumentalism would have struck Muir as blasphemous, that it has after all served some of his ends. The state-park picturesque is unlike the sublime, in that it affords very few experiences of terrifying vastness. But a man-made lake is often about as good as a forbidding mountain when it comes to leaving room for animals. Much of the time, no one is on the trails. The animals have crept back after the flood. The bears like bird feeders just fine. Much of Douthat State Park lies downstream from the dam, along a single paved road that follows the low land. The fungi and plant life on some trails heading up into the mountains from the road give me the feeling of having grown by infinitely slow accretion. 

    If you don’t already know, I’m not sure I can convince you that a forest formed slowly is different from one where kudzu and garlic mustard spread out monoculturally. Of a place where a lake was created naturally when stones fell and dammed a creek, Muir wrote that “gradually every talus was covered with groves and gardens.” It was just as if every boulder were “prepared and measured and put in its place more thoughtfully than are the stones of temples.” On these less disturbed trails, I am sure I can see the difference in mosses, lichens, and fungi. Many different species in these groups grow together, mushroom fruiting bodies popping up after rain with shreds of moss still clinging to their caps. Not every inch of the forest has been manhandled. Whippoorwills, their song strangely brash and mechanical, still sing on June evenings. The damming of a lake for what were called “recreational purposes” cannot have made a difference to them, except in giving them this tract of land to nest on, where many visitors do not know or care about them. In that sense, the difference it made was existential. 

    We make dams, which destroy, but also harbor, life. And we are not the only tinkerers. Dams large and small are everywhere. The beavers, whose lodge is opposite the beach, would like to dam the creek a ways upstream from the CCC dam. The rangers would rather they didn’t. Beavers are a native animal, not to be interfered with. Still, some forms of interference are going on. Many tree trunks are wrapped in chicken wire. The adversary is a worthy one; he commands respect. Near Fairy Stone State Park, there is a US Army Corps of Engineers dam on Philpott Lake. At the observation point above the lake, you can find the Corps’ pamphlets about wildlife. According to the beaver pamphlet, “they will search, just like an engineer, for the best location on a stream to build a dam.” 

    That’s us: engineers. Meddlers. In June, the annual peak of amphibian life, you see red-spotted newts in the hundreds, hanging motionless in the shallows of the lake. A red-spotted newt is a marvel: a four-inch swimming dragon with vermillion spots on slick skin the olive color of lake water, long-limbed and sinuous, with a powerful tail. Kids like to dam up the sand at the swimming beach and keep newts in the hot, muddy pools. Some bring aquariums to stock, which is not allowed. Once I watched a kid’s plastic battalion bivouac along the edge of the artificial lake that he had built. Large, round-faced, and unmistakably decent, he mothered his tanks, curving his attention around them to protect them from harm. Ranging around near him was a scrawny friend, in whom rage had somehow settled. He had picked up a big stick and was hitting things. The round friend hummed to his tanks. 

    My much smaller children wobbled over to see; they started clamping the newts in their fists and popping them in the little lake. “Gentle!” I said, writhing. “They’re fragile!” “No, they’re not,” the round kid explained. “They don’t have any bones.” Wrong, but right in spirit; we sacrifice invertebrates more readily to our sport. A fisherwoman saw my kids catching newts as she came in with her foot-long rainbow trout. “They make good bait,” she confided, careful that the children wouldn’t hear. The kids culled newts from an abundance that looked eternal, but wasn’t.

    Money, Justice, and Effective Altruism

    “In all ages of speculation, one of the strongest obstacles to the reception of the doctrine that Utility or Happiness is the criterion of right and wrong, has been drawn from the idea of Justice.” This is from John Stuart Mill’s Utilitarianism, in 1861, perhaps the most renowned exposition of the ethical theory that stands behind the contemporary movement that calls itself “effective altruism,” known widely as EA. Mill’s point is powerful and repercussive. I will return to the challenge that justice poses to utilitarianism presently. But first, what is effective altruism? 

    The two hubs of the movement are Oxford and Princeton. Oxford is home to the Centre for Effective Altruism, founded in 2012 by Toby Ord and William MacAskill, and Princeton is where Peter Singer, who provides the philosophical inspiration for EA, has taught for many years. Singer is EA’s most direct philosophical source, but it has deeper if less direct sources in the thought of the Victorian moral philosopher Henry Sidgwick and the contemporary moral philosopher Derek Parfit, who died a few years ago. Sidgwick gave utilitarianism a rigorous formulation as well as a philosophically sophisticated grounding. He showed that utilitarianism need not depend, as it did in Bentham and Mill, on an implausible naturalism that seeks to reduce ethics to an empirical science. 

    Parfit was strongly influenced by Sidgwick, as indeed is Singer.  Parfit’s Reasons and Persons was important for several reasons. When it was published in 1984, moral and political philosophy was under the influence of John Rawls, whose Theory of Justice had appeared in 1971 in the wake of the civil rights and other social justice movements. Rawls’ notion of “justice as fairness” provided the first systematic alternative to utilitarianism and a seemingly persuasive critique of it.  Utilitarianism, Rawls argued, did not take sufficiently seriously the “separateness of persons,” since it allowed tradeoffs between benefits and harms that we are content with within an individual life — deferring gratification for future benefit, for example — and applied them, unjustly, across an aggregate of lives. It allowed harms to some to be weighed impersonally against benefits to others, and so treated individuals as though they were simply parts of a social whole, analogously to the way we regard individual moments of our lives. But Parfit argued, on sophisticated metaphysical grounds, that personal identity is not the simple all-or-nothing thing that Rawls’ objection presupposed. And he argued persuasively that utilitarianism can be defended against a number of other challenges that its critics had raised from the perspective of justice. Parfit also showed how taking utilitarianism seriously leads to a number of important questions concerning our relation to the future in the long term.

    Singer’s main contributions have been in what is called, somewhat deprecatingly, “applied ethics.” He has influentially argued on broadly utilitarian grounds that we have significant obligations to address global poverty and to avoid the inhumane treatment of nonhuman animals. Singer’s Animal Liberation, published in 1975, has inspired a massive increase in vegetarianism and attention to animal welfare. And his essay “Famine, Affluence, and Morality,” which appeared in 1972, may be the most widely assigned article in college ethics courses. Singer is also the author of The Most Good You Can Do: How Effective Altruism is Changing Ideas About Living Ethically.

    Ord and MacAskill are from a younger generation. Their role has been to put Singer’s conclusions into practice by founding and running the Centre for Effective Altruism and the Global Priorities Institute, also at Oxford, and by attracting a large number of “the best and the brightest” to EA. MacAskill is the author of Doing Good Better: How Effective Altruism Can Help You Help Others, Do Work That Matters, and Make Smarter Choices About Giving Back. And Ord plays a lead role in the organization Giving What We Can, whose members pledge to donate at least ten percent of their income to “effective charities.” He and MacAskill are also central figures in the development of “long-termism,” a branch of effective altruism which argues that we should focus more on benefits and harms in what Parfit called the “farther future.”

    The moral and philosophical idea that drives much of “effective altruism” comes from a famous example — known as “Singer’s pond case” — that Singer discusses in his essay. Imagine that you are walking past a shallow pond in which a child is drowning and that you can save the child at the cost of getting your pants wet. It seems uncontroversial to hold that it would be wrong not to save the child to spare your pants. Singer argues that the world’s poor are in a similar position. They are dying from famine, disease, and other causes, in some cases literally drowning from the effects of climate change. Analogously, we in the developed world can address many of these threats to human (and other animal) life and well-being at relatively little cost. It seems to follow, therefore, that we are obligated to do so and that it would be wrong for us not to do so. Singer’s larger teaching is that we are obligated on roughly utilitarian grounds to absorb as much cost as would be necessary to make those whom we can benefit no worse off than we are. Yet we do not have to draw such a radical conclusion to be convinced by Singer’s analogy that we have very significant obligations to help address global poverty. And it may well be that Giving What We Can’s minimum standard of ten percent of income is morally appropriate.

    Let us begin by examining more closely the relationship between effective altruism and utilitarianism, and what together they assert. MacAskill offers the following definition:

    Effective altruism is about asking, “How can I make the biggest difference I can?” And using evidence and careful reasoning to try to find an answer. It takes a scientific approach to doing good. Just as science consists of the honest and impartial attempt to work out what’s true, and a commitment to believe the truth whatever that turns out to be, effective altruism consists of the honest and impartial attempt to work out what’s best for the world, and commitment to do what’s best, whatever that turns out to be.

    This definition has a number of distinct elements, and it is worth analyzing them more closely. First, it posits a commitment to bringing about “the most good you can,” to quote the title of Singer’s book. It is not about simply doing good or even doing enough good. It is about doing the most good. Second, although “good” can mean different things and be applied to different kinds of objects, EA counsels bringing about the best outcomes. It recommends the action or policy, of those available, that would have the best consequences overall. But outcomes can be ranked in different ways, from different perspectives, and with different criteria or standards. Third, effective altruism is committed to bringing about what is “best for the world” as opposed to for any individual or particular society. It is an “impartial” theory. But “best for the world” can also mean different things. In one “impersonal” sense, something can be thought to be good (or best) to exist in the world, independently of whether it benefits or is good for any individual or other sentient being. This is the sense that G. E. Moore made famous in Principia Ethica. Moore thought, for example, that beauty “ought to exist for its own sake,” regardless of whether it is experienced or appreciated. Of course, Moore thought that it is much better for beauty to be appreciated experientially, but he thought it still can have intrinsic value even if it is not. (He was a great defender of intrinsic value generally.) Perhaps more plausibly, some consequentialists who follow Moore hold that significant inequality is a bad thing intrinsically, in addition to the disvalue of the bad things that those who are worse off suffer.

    Impersonal good is not, however, the sense of “good” with which utilitarianism and effective altruism are concerned. They prescribe doing whatever would be best overall for beings in the world. This is important: it is what distinguishes effective altruism and utilitarianism from forms of consequentialism that reckon the goodness of outcomes in terms of impersonal goodness that does not consist wholly in benefits to individuals. And this brings us to the fourth element of effective altruism, namely, that it aggregates benefits and harms — costs and benefits in welfare terms — across all affected parties. The “most good” is the most good to individuals, on balance, aggregated across all who would be affected by alternative actions in any way — wherever they might be (hence EA’s global reach), and whenever they might exist in time, no matter how far in the future their being benefited or harmed is from our actions today (hence EA’s long-term view). The offshoot of effective altruism known as long-termism is the view that short-term benefits and harms will almost always be swamped in the longer term and so are much less relevant to what we should do here and now than we ordinarily suppose. This is a claim with disruptive implications for the practice of ordinary kindness and assistance, which is often immediate and local; and it is important to see how such a claim is an apparent consequence of the doctrine of effective altruism. 

    Finally, the fifth premise of EA is the stress on “evidence and careful reasoning.” Utilitarianism is frequently characterized by its emphasis on empirical — or as Mill called them, “inductive” — methods. This is also an important theme in Bentham’s argument for the principle of utility. Mill contrasts inductive, empirical methods with the “intuitive” approach of giving credence to moral intuitions based on emotions — for example, to a sense of obligation, or to attitudes such as blame, guilt, and resentment, which the mid-twentieth century Oxford philosopher P. F. Strawson called “reactive attitudes.” MacAskill highlights this aspect when he refers to Joshua Greene’s neurological research that contrasts utilitarian ethical judgments, which Greene shows to be associated with parts of the brain involved in reflection and reasoning, with intuitive and non-utilitarian — sometimes known as “deontic” — judgments that are associated with regions of the brain implicated in emotions.

    The question of what parts of our mind and brains are involved in moral judgment may seem to be entirely epistemological, a matter only of how we come to know what we should do morally and which actions are right and which are wrong. But this is not so, as Mill himself appreciated. It concerns features of our moral concepts themselves and, therefore, what moral right and wrong themselves are. 

    To see why, we should note first that utilitarianism, and consequentialism more generally, originated as theories not about what is intrinsically good and bad, but about what is morally right or wrong. Mill’s Utilitarianism begins, indeed, with the declaration that “few circumstances . . . are more significant of the backward state in which speculation on the most important subjects still lingers, than the little progress which has been made . . . respecting the criterion of right and wrong.” What makes someone a consequentialist is not that they hold some particular theory of the good. Consequentialists differ widely on what kinds of outcomes are intrinsically good or bad. Granted, utilitarians such as Mill and the effective altruists think that what makes outcomes good is that they concern the well-being, good, or happiness of human and other sentient beings. But critics of utilitarianism, like me, could stipulate agreement with utilitarians on this, or any other, theory of the good, and still be in fundamental disagreement about what we morally should do.

    The issue between utilitarians and their critics is not about the good, but about the relation between the good and the right. Utilitarians hold that what makes actions, policies, practices, and institutions morally right or wrong is that they bring about the greatest possible net good to affected parties. Their critics do not deny that the goodness of consequences is among the factors that make actions morally right or obligatory. As the most famous recent critic of utilitarianism, John Rawls, put it, to deny that would “simply be irrational, crazy.” What they deny is that it is the only relevant factor. This returns us to Mill’s observation that the utilitarian idea can conflict with demands of justice. Rawls agrees with Mill’s diagnosis that a significant obstacle to “the reception” of the principle of utility as a “criterion of right and wrong” concerns its insensitivity to justice. I agree with Rawls about this: like utilitarianism more generally, effective altruism fails to appreciate that morality most fundamentally concerns relations of mutual accountability and justice.

    Ironically, this is a point that Mill himself recognizes, and his appreciation of it creates a tension between the form of utilitarianism that he advances at the beginning of Utilitarianism and the view he seems to endorse by the end. The dramatic shift occurs in its fifth chapter, which begins with Mill’s noting the tension between utilitarianism and justice. After discussing different features of the concept of justice, Mill makes the following deeply insightful point. 

    We do not call anything wrong, unless we mean to imply that a person ought to be punished in some way or other for doing it; if not by law, by the opinion of his fellow creatures; if not by opinion, by the reproaches of his own conscience. This seems the real turning point of the distinction between morality and simple expediency. 

    In many cases, of course, we can think an action wrong without thinking that it should be punished. Making a hurtful remark might be an example of that. (Though in our increasingly censorious society, it might often be an example of just the opposite.) Mill recognizes that this is so, but says that in such cases we are nonetheless committed to thinking that conscientious self-reproach would be called for. The fundamental point, as I have understood it, is that it is a conceptual truth that an act is wrong if, and only if, it is an act of a kind that would be blameworthy to perform without excuse.

    When Mill first puts forward his “Greatest Happiness Principle” in Utilitarianism’s second chapter, he says that acts are right “in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness.” This is frequently interpreted as what philosophers call “act utilitarianism”: an act is morally right (not wrong) if, and only if, it is one of those available to the agent in the circumstances that would produce the greatest total net happiness (considering all human or sentient beings in the very longest run). But once Mill has recognized moral wrongness’ connection to culpability in his fifth chapter, his view seems to shift in the direction of what is called “rule utilitarianism”: an act is morally right (not wrong) if, and only if, it is consistent with rules it would bring about the greatest total net happiness for society to accept as standards for holding people accountable (through guilt and moral blame) in the given situation. This fits with the rule-utilitarian theory of justice and rights that Mill finally offers, enabling him to hold that unjust actions are morally wrong on broadly utilitarian grounds, namely, rule-utilitarian rather than act-utilitarian grounds.

    If Mill’s conceptual thesis is correct, as I believe it is, then the concepts of moral right and wrong are connected to accountability, and therefore to conscience. What we call conscience is the capacity to hold ourselves accountable through the states of mind that Strawson called “reactive attitudes,” including the “sense of obligation” and guilt. To be capable of being a moral subject, that is, of being a person with moral duties and acting rightly or wrongly, one must have such a capacity or moral competence. We exercise this capacity when we feel guilt or have the attitude of blame towards others. Blame is essentially the same attitude as guilt, except that the latter necessarily has oneself as its object. Guilt is self-blame. Similarly, a sense of obligation is essentially the same attitude, only felt prospectively, before acting, rather than retrospectively.

    Strawson lays out his account of the role of reactive attitudes in moral accountability in “Freedom and Resentment,” which has been called “perhaps the most influential philosophical paper of the twentieth century.” What Strawson saw was that we hold reactive attitudes such as blame, guilt, and resentment from a distinctive perspective, the standpoint of relating to someone. Strawson calls this the “participant” point of view, since it occurs within an implied relationship to another person. I call it the “second-person standpoint,” since to formulate what one thinks or feels from that perspective, one must use second-person pronouns. “What were you thinking?” “You can’t do that to me.” And so on.

    Strawson convincingly argues that attitudes such as resentment and blame implicitly address “demands” to their objects. Not naked demands — reactive attitudes do not attempt to force or to manipulate — but putatively legitimate demands. And this requires anyone who holds a reactive attitude to presuppose that they have the authority to make the demand, and that the person of whom the demand is made can recognize the demand’s legitimacy and comply with it for that reason. We call the latter capacity “conscience.” Through conscience we address demands to ourselves that we feel to be backed by the authority of what Strawson calls “the moral community” or, as we might also say, of any person, including oneself, as its representative. 

    Attitudes such as blame and resentment are unlike third-personal attitudes such as contempt and disdain in that they come with an implicit RSVP. They call for response and reciprocation, for their object to acknowledge their wrongdoing and hold themselves accountable. Moreover, the presupposed underlying reciprocity goes in both directions. They both demand and implicitly give respect. They are eye-to-eye attitudes.

    This means that moral judgments of right and wrong, unlike other ethical judgments, for example, of the goodness or badness of outcomes, necessarily implicate human relationships. Many of our most important moral obligations are owed to other persons or sentient beings, and the very ideas of moral right and wrong entail accountability to every person or member of the moral community (where “moral community” refers not to any actual society, but to a presupposed authoritative impartial perspective from which moral demands are issued or addressed to us as moral agents). The point is not that we necessarily assume that such a community actually exists, but that we necessarily attempt to think, feel, and have attitudes from such an impartial second-personal perspective.

    The idea that morality is fundamentally about mutual accountability and respect could not be farther from the ethical vision that underlies effective altruism. Altruism is concerned, by definition, with beneficial outcomes, and it conceives of ethical action entirely in instrumental terms. According to MacAskill, “the key questions to help you think like an effective altruist” are: “How many people benefit, and by how much? Is this the most effective thing you can do? Is this area neglected? What would have happened otherwise? What are the chances of success, and how good would success be?” EA asks what states of the world are such that the people (or other sentient beings) existing in those states are better off in the aggregate than those who would exist in the states that would eventuate if a specific action, policy, practice, or institution were pursued or established. Once we have stipulated what the answer to this question is — and, as we have noted, there is no reason why an opponent of utilitarianism or effective altruism must disagree with that stipulation — the remaining moral question of what one should do is, for EA, an entirely empirical question, one to be answered using the methods of natural and social science.

    By contrast, the questions of justice and moral right and wrong that, for the critic of utilitarianism and effective altruism, are left entirely open and unsettled even by an agreed stipulation of welfare outcomes, are not empirical or scientific questions. They are irreducibly normative questions of what it would be morally right or wrong to do (given the stipulation), and these moral questions necessarily presuppose relations of mutual accountability and respect in the background. 

    One way to see this is to consider what is called paternalism. As it is defined in debates between utilitarians and their critics, paternalism consists in restricting someone’s liberty or usurping their authority of autonomous choice on the grounds that this would be better for them in welfare terms. It is a core feature of an order of justice, mutual respect, and accountability that every person has a right of autonomy, understood as the authority to make their own choices and live their own lives, so long as that does not violate the similar rights of others. (This does not entail anything like libertarianism, as the example of Rawls’ theory of justice illustrates.) Respect for another’s autonomy, however, can conflict with an altruistic desire to promote their well-being, since we often make choices to our own detriment (even, indeed, when we choose altruistically to benefit others!). 

    Suppose, for example, that a friend has their heart set on pursuing a career as a golf pro, for which you can see they are massively unsuited, and that you can tell that pursuing it would make them miserable. As a friend, you might have standing to give them realistic feedback. But suppose you do voice your concerns, and still your friend persists. You might be in a position to undermine their plans in other less honest ways, say, by resetting their alarm so that they miss their tee time at a crucial tournament. It might be that being diverted from their chosen path is actually better for them in the long run and that they would go on to live a happier, more satisfying life doing something else. It seems clear, however, that subverting your friend’s project altruistically would wrongfully violate their autonomy. It would be an injustice to them and would violate respect for their dignity as a person.

    According to the doctrine of effective altruism, the fundamental ethical relation is benefactor to beneficiary; but from the standpoint of equal justice, the fundamental ethical relation is mutual respect. Being respected as an equal is an important part of our well-being, of course, but altruistic concern will coincide with respect only when being respected most advances our well-being, all things considered. And that is frequently not the case. Often we want to make choices that do not best promote our welfare, and often for good reason (including sometimes for good altruistic reasons). 

    Moreover, altruism can be morally problematic in other ways, although I am not claiming that it must be so. For example, effective altruists argue that often the way we can do “the most good we can do” is by donating to “effective charities.” A significant element of EA is the ranking of charities — “charity evaluation,” Singer calls it — by how effectively they turn donations into beneficial outcomes. Yet the term “charity,” like the term “pity,” signals the potential in altruism to insinuate a disrespectful relation of superiority to a pitiable “charity case.” The perspective of charity, like the perspective of pity, is not the eye-to-eye relation of mutual respect; it comes, like God’s grace, from above. “The greatest and most common miseries of humanity,” Kant wrote

    rest more on the injustice of human beings than on misfortune . . . We participate in the general injustice even when we do no injustice according to civil laws and institutions. When we show beneficence to a needy person, we do not give him anything gratuitously, but only give him some of what we have previously helped to take from him through the general injustice . . . . Thus even actions from charity are acts of duty and obligation, based on the rights of others.

    In couching the meeting of human needs in terms of altruism and charity, EA risks treating its beneficiaries as inferior supplicants rather than mutually accountable equals. It is little wonder that there is so much resentment of the global north in the global south.

    In On Revolution, Hannah Arendt distinguishes between charity and pity, on the one hand, and solidarity and compassion, on the other. She juxtaposes Jesus’ compassion with the “eloquent pity” of the Grand Inquisitor in Dostoyevsky’s The Brothers Karamazov: “The sin of the Grand Inquisitor was that he, like Robespierre, was ‘attracted toward les hommes faibles’ … because he had depersonalized the sufferers, lumped them together into an aggregate — the people, toujours malheureux, the suffering masses, et cetera.” By contrast, Jesus had “compassion with all men in their singularity.” Whereas pity views its object as a faible in a low and abject condition, thus essentializing and “depersonaliz[ing]” them, compassion, solidarity, and respect regard their objects “in their singularity” as individuals. Even the term “the global poor” risks condescension, since it signals not a respectful relation of mutually accountable collaboration to establish more just relations as equals, but a regard for someone as an object of charity that, by definition, no one has the standing to demand.

    To be clear, I am not claiming that Singer and MacAskill, or other proponents of effective altruism, are prone to depersonalizing those whom they seek to benefit, or that their help is necessarily condescending, self-aggrandizing, or arrogant. Neither am I claiming that what we call “charities” necessarily present themselves or are received as superior to their beneficiaries. What I am saying is that the benefactor/beneficiary relation carries these risks, that it can be inconsistent with mutual respect as equals, and that relating to others on terms of mutual respect and accountability is both required by justice and the antidote to these risks. Nor am I claiming that organizations such as the Malaria Consortium — currently most highly rated by GiveWell, a “charity evaluator” endorsed by effective altruists — are not doing important work and meeting significant needs that would otherwise go unmet. I am not saying that their work is essentially paternalistic or unwelcome. Perhaps being characterized as a “charity” is just unfortunate branding. I believe that the Malaria Consortium is worthy of our support and have supported it myself. My point is that meeting human need should be pursued from a collaborative perspective of mutual respect and justice rather than that of altruistic charity.

    “Justice,” Rawls says, “is the first virtue of social institutions.” Whether justice is served is never simply an aggregative matter, either an accumulation of just actions of equal respect, or, even less, of aggregated acts of effective altruism. Whether the most important human needs are adequately met is generally a function of whether the society in which they occur is justly organized and of the society’s level of economic development. The economist Angus Deaton has rightly remarked that development

    is neither a financial nor a technical problem but a political problem, and the aid industry often makes the politics worse. The dedicated people who risked their lives to help in the recent Ebola epidemic discovered what had been long known: lack of money is not killing people. The true villains are the chronically disorganized and underfunded health care systems about which governments care little, along with well-founded distrust of those governments and foreigners, even when their advice is correct.

    We do not have to accept the general skepticism about foreign aid’s benefits that Deaton expresses in The Great Escape to agree that EA’s framework is often insensitive to crucial aspects of the political contexts in which human need occurs, and therefore to critical issues of justice. Often the best thing we can do to support those in need is to do whatever we can to help them gain more political power and a greater voice, both intranationally and internationally. Deaton argues that economic development depends not on donated aid but on investment, and that the former can often drive out the latter. This is by no means always the case, however. It seems clear, for example, that some forms of aid, like that which targeted HIV in Africa, can make a critical difference that other attempted solutions cannot. PEPFAR, or the President’s Emergency Plan for AIDS Relief, begun by George W. Bush, has been calculated to have saved twenty-five million lives.

    The crucial point is that the requisite support should come not from altruistic desires for welfare outcomes but, as Kant says, from a concern to do justice — and that it should be given in ways that collaborate with aided individuals, show mutual accountability and respect, and help to empower them in their own social and political contexts. Climate change poses a salient example. The EA model of individuals doing “the most good” they can for other individuals, typically by contributing to highly rated charities that direct funds to maximally promote individual well-being, seems especially ill-suited to deal with the scale and the nature of the problem. Amartya Sen showed that famine is almost always fundamentally a political problem, and this is even more true of climate change. Only concerted collective and political policies and actions at all levels, nationally and internationally, can reduce carbon emissions to the necessary levels. Proponents of effective altruism implicitly recognize this. A talk given by a researcher for FoundersPledge and featured on effectivealtruism.org argues that the most effective thing individuals can do is to contribute to the most highly rated “climate charities.” The top-rated charity, the Clean Air Task Force, characterizes its work as “advanc[ing] the policies and technologies necessary to decarbonize the global energy system.” Surely it is obvious that this is impossible without concerted political action.

    What is the right ethical lens through which to view the challenges of climate change? Here Kant’s admonition that “the greatest and most common miseries of humanity rest more on the injustice of human beings than on misfortune” is especially apposite. If economic development has been carbon-driven, then it is a virtual tautology that the developed world bears greater responsibility for the challenges of climate change. Any just solution will require international cooperation on terms of mutual respect that recognize these different levels of responsibility as well as the unjust power differentials that have resulted from differential development. The pressing ethical questions are not simply how we can do the most good. They are questions of justice.

    A particularly noteworthy aspect of the movement of effective altruism is Singer’s and MacAskill’s assertion that for a good number of highly talented individuals, the best thing they can do to promote the most good is to find the highest-paying employment so that they can donate much of their income to the most effective charities. Singer’s The Most Good You Can Do discusses a number of examples of college graduates who take high-paying jobs in the financial sector, live simply, and contribute a large percentage of their income to effective charities. Such a person might have gone to work for the charity themselves, or put their talents to work in some other altruistic or justice-focused way, but Singer quotes MacAskill as arguing that doing so would have produced less net benefit, since a fraction of their financial-sector salary could pay someone else to do equivalent good at the charity while still leaving a significant amount to be put toward other good purposes.

    I see no reason to doubt the motivation of those who pursue this path. There is, I agree, something undeniably admirable about the individuals Singer describes, who live modestly, are focused on the welfare of others, and do the most they can to advance it. Yet from my perspective as a professor of philosophy at Yale, I find the prospect of encouraging students to pursue this path of high-minded riches deeply depressing. Institutions such as Yale are already knee-deep in helping to reproduce an extremely unjust political and economic system and in class formation. As things stand, about thirty percent of their graduates take positions in consulting and the financial sector, with only a tiny percentage going into badly needed but talent-starved fields such as public K-12 education. I do not remember the last time I talked to a Yale undergraduate who wanted to pursue that as a career path, although some take temporary posts in programs such as Teach For America.

    But wouldn’t it be wonderful, a proponent of the MacAskill/Singer argument might reply, if more of the people who went the finance and consulting route were like the EA financial analysts whom we champion? I agree with the proposition that if we hold fixed the percentage of graduates pursuing that path, then it is better that more of them have the altruistic aspirations that MacAskill and Singer describe. But I worry about encouraging graduates to pursue the route of “earning to give” for two reasons. First, familiar psychological processes of group affiliation, emotional and attitude contagion, motivated reasoning (rationalization), and accountability to those we live and work with, not to mention the desire to fit in with our associates and have their approval, make it likely that many who begin with such aspirations will tend over time to lose them and become more like their non-EA colleagues. As the example of Sam Bankman-Fried has shown, the nobility of effective altruists can easily be degraded by massive profits: the doctrine can serve as a cunning alibi for the rapacious accumulation of wealth. And especially at the current moment, when meritocracy’s credentials are wearing exceedingly thin, statistics such as the ones I just cited do little to inspire confidence in highly selective universities. This is what you are selecting students for? the universities might reasonably be asked. After all, students do not suddenly change their interests and plans at the end of their college careers. Their undergraduate lives are shaped by the fact that almost a third of their number hope to go into finance or consulting.

    Elite universities do not just complacently accept these depressing trends; they actively contribute to them by pursuing massive donations and lionizing their biggest donors. What are their students to think when they live, eat, and study in buildings funded by titans of finance? Yale’s current development campaign is titled For Humanity. Despite our fine words, however, students can hardly be faulted for wondering whether they are not living by the university’s real values when they join the “army of the thirty percent.” To do so is to be condemned to a life lived without engagement with, and therefore without meaningful accountability to, the overwhelming majority of their fellow citizens and human beings. Even those who maintain their allegiance to EA while working in offices on Wall Street never see how the people they seek to benefit actually live, let alone live or work among them.

    The pursuit of justice, in contrast with effective altruism, seeks accountable relationships with others on terms of mutual equal respect. Universities should be “for humanity” not in the sense of seeking merely to benefit themselves; that way lies the self-aggrandizing myth of superiority that can mask massive injustice. They should seek, and actively encourage their students to seek, equal justice for all. That is a path that would earn them not the resentment and self-protective contempt that are now so widespread among the great numbers of people alienated from them and their members, but equal respect in return.