Vladimir Jankélévitch: A Reader’s Diary

    There are writers you do not so much read as live alongside: writers of a depth, a density, a multiplicity of suggestions that resist the sort of encapsulation by which their names wither into the occasion for empty allusions and knowing nods. For nearly twenty years now, the French philosopher Vladimir Jankélévitch has been such a writer for me. I know of few more moving accounts of the tragedy of the human condition than his The Irreversible and Nostalgia. His Pure and Impure has aided me in keeping my distance from many petty fanaticisms fashionable at present. He reminds me that “philosophy is not the construction of a system, but the resolution to look naively in and around oneself,” that the first sincere impulse toward knowledge is the patient articulation of one’s ignorance.

    Born in 1903 in France, the son of Jews from Odesa, he studied in Paris with Henri Bergson, who was the subject of his first book in 1931, and whose ideas would remain central to his philosophical and musicological writings over the following half-century. He fought in the French Resistance in Toulouse, writing tracts encouraging Russian collaborators with the Wehrmacht to abandon their posts and giving the underground lectures in moral philosophy that would form the basis for his three-volume Treatise on the Virtues. Though he had written his dissertation on Schelling, and had even declared in his twenties to a friend that “only the Germans think deeply,” after the end of the Second World War he made an acrimonious public break with German culture (with exceptions for Nietzsche, Schopenhauer, Liszt, and a few others) that extended even to Jewish thinkers writing in German.

    This intransigence, and more specifically his contempt for Heidegger and his relative indifference to Marx, placed Jankélévitch outside the major currents of French thought, though thinkers from Levinas to Derrida acknowledged a debt to him. (He also cared little for Freud, ironically, as his father had translated him into French.) Hence he was little known and little read even as friends and peers — Sartre, Foucault, Derrida — became minor or even major celebrities. It is hard to say how deeply this affected him. Biography is an English and American genre, and sadly, a recent life of Jankélévitch, Françoise Schwab’s Vladimir Jankélévitch: Le charme irrésistible du je-ne-sais-quoi (Vladimir Jankélévitch: The Irresistible Charm of the I-Don’t-Know-What), is uninformative on this and many other matters. He did say that he saw himself more as a teacher than as a writer, and remarked, who knows whether with bitterness or ironic forbearance, “This era and I are not interested in each other. I’m working for the twenty-first century.” He died in 1985.

    In the present diary, I have not wished to arrange into any schema the thoughts of this philosopher who affirmed that his only system was to have no system, for whom philosophy was a living thing rather than a specimen to be preserved in the formalin of empty deliberation. I would only like to share, as though from one friend to another, a sampling of what I have learned — what I am still trying to learn — from him, for the benefit of those who cannot read his many works not yet translated into English, or for others who have yet to make his acquaintance.

    Time

    Time is the medium and ultimate boundary of human freedom. In The Irreversible and Nostalgia, Jankélévitch describes movement as the elementary form of freedom, and locates the basic tragedy of human life in our inability to travel back and forth in time as we can in space. The irreversibility of time is the root of nostalgia, of guilt, and of regret; its unceasing transformation of present into future is the ground of hope; and the inevitable conclusion of this future in death is the origin of anguish and despair. Yet so long as death has not come, it is the endless openness of time, the endless regress of its horizon, that permits an endless rejuvenation of hope, and hope is, and must be, populated by yearnings shaped by the past.

    These considerations seem elementary, and my attempts to share my enthusiasm for Jankélévitch have more than once foundered before the shrugs of people to whom this all appears obvious. I can only respond that its obviousness is not an indictment of its truth, and that to ignore the obvious, perhaps even because it is obvious, because it lacks the airy beguilements of the contemporary and the urbane, is unserious; it suggests that we are justified in living as though the most superficial values pertained, or that by some strange alchemy our frivolous engagements might ripen into significance, or that we have not yet reached the moment when we must look at our life as a whole, and shall do so later, when this or that more pressing business is dispatched. Jankélévitch frequently cites “The Death of Ivan Ilyich” to illustrate the error of this way of thinking. Too much time devoted to subsidiary lives (it is noteworthy that in Tolstoy’s tale life is divided into “family life,” “married life,” “official life,” and so on) impairs an awareness of life taken as a whole. 

    The Adventure, Boredom, The Serious

    Of the many conceivable ways of distinguishing the philo-sophical from the sophistical, the most urgent takes as its point of departure what we already know in our hearts. The heart’s knowledge is like the appeal to the stone in Boswell’s Life of Johnson:

    After we came out of the church, we stood talking for some time together of Bishop Berkeley’s ingenious sophistry to prove the non-existence of matter, and that every thing in the universe is merely ideal. I observed, that though we are satisfied his doctrine is not true, it is impossible to refute it. I never shall forget the alacrity with which Johnson answered, striking his foot with mighty force against a large stone, till he rebounded from it, “I refute it thus.”

    Johnson wasn’t so stupid as not to grasp Bishop Berkeley’s idealism; he simply knew that it was inconsequential outside the realm of idle speculation. We cannot live, he means to say, with the idea of the immateriality of objects; we must live with the intuition that the things we know are real. No knowledge of the limitations of the human eye can disabuse me of the certainty that I see things the way they are; and no matter how persuasively Thomas Metzinger and others argue against the existence of the self, they cannot begin to offer me a way of living that dispenses with it, of living as though you and I do not exist. Being inevitable, the acceptance of these pragmatic certainties, however unstable their foundations, has the character of a duty one can shirk only out of insincerity or irresoluteness.

    Freedom is another such elemental fact that persists irrespective of its confirmation. Let us say that I am a determinist; let us admit that science persuasively argues that we may reduce behavior to its biological and physical determinants. This is fine for my abstract view of human life, but you and I will never grasp it in the innate manner in which we grasp freedom. The truth of determinism is like the existence of dark matter or the unsettling properties of quantum objects: all are incidental to the kind of being we are doomed to be. Freedom reveals itself in the moment when something must be done; its privileged instance is the recognition that only I can decide whether and how I must act. This, and the fact that the burden of responsibility is coeval with consciousness or conscience (the same word is used for both in French, and in Jankélévitch’s thought they are rarely separable), is what makes morality the a priori of all problems, the “chronologically first question.”

    There are three privileged responses to freedom in Jankélévitch’s thought: adventure, boredom, and seriousness. These correspond to the passion for the future, to the contempt for the present, and to the recognition that all that is and will be must elapse. Each of these maintains a more or less sincere relationship to death. The adventure is the passionate expectation of the future; it is by its nature a beginning, which in its continuity will be serious or boring. Erving Goffman’s “anticipated fateful activity” is a nice approximation of what Jankélévitch means by adventure, and Goffman, like Jankélévitch, recognizes the centrality of the sense of risk, which, when taken in its greatest extension, is always the risk of death (“death is at the end of all avenues prolonged indefinitely, no matter where they may lead”). Adventure is the freest possible response to the semi-openness of life: to the brief but indefinite concession to live and to the perpetually postponed imminence of death.

    Adventure, in advancing toward the future, has in a sense sidestepped the present; it is by definition underway. Boredom, meanwhile, remains mired in the moment prior to the decision. Boredom is scarce in vitality: it is the privilege of those who need not concern themselves with the basic tasks of life. Boredom is exemption from the stream of life, the relegation to observer status of one who feels he should be a participant, but cannot participate, or knows not how, or despairs of his capacity to do so. The hostage, the soldier, the base jumper are not bored; the bored person is rather one for whom the surrounding world falls short. It is the fruit of a civilized, convoluted consciousness that does not so much struggle to satisfy its needs as it satisfies them and finds no satisfaction in this satisfaction. Boredom is acquainted with the adventure, but has lost the taste for it; its longing for experience, too enfeebled to seek out new thrills, remains as the “froth” of self-consciousness upon a bodily sensation of unease.

    Boredom that persists into general malaise is a consequence of selfishness. What it lacks is “the tender solicitude of the second person, which alone is capable of filling an entire existence.” What Jankélévitch means here is love, a concept central to his work but one not adequately defined. He calls love a duty, fidelity, the essence of all virtue, the realization of the good, the endless and unconditional obligation to the other, but these seem like attributes or signs of love rather than its sensuous core. Love is not love of self, the love of abstract principles, or love of being in love; it is ecstatic, but falters when it revels in ecstasy at the expense of the beloved. Given Jankélévitch’s assertion that consciousness is always involved, it strikes me that what he means by love is the sincere recognition of others’ presence and the zealous embrace of living alongside them — the channeling of the élan vital into shared rather than “proprietary” happiness.

    The adventure turns intervals into instants, boredom turns instants into intervals. The person of adventure, in frenzied pursuit of he knows not what — because without the element of enchanted ignorance, his pleasure would be the serene one of seriousness or the cloyed hedonism of the bored — gives life no time to open up and take on texture, complexity, resonance. The adventurer moves through life rather than gathering it, whereas the bored person has gathered erroneously, asking time to yield freely to him what can only occur when he lives generously in time. Of him, Jankélévitch writes, “How many years so short are made of hours so long?” The bored person is like the connoisseur of wine, who believes that his attunement to ever finer distinctions is the prelude to recognizing higher sorts of pleasure, when in fact he is teaching himself to enjoy less and less, making the objects of his fancy so rare that it may be finally said that he does not like wine at all. It goes without saying that such a person has entirely missed the point of drinking. 

    What makes adventure and boredom unserious is their failure to reckon with the possibility of death. Jankélévitch is careful to differentiate between the possibility of death and death itself. Death is nothing; death eludes thought; death, in Wittgenstein’s phrase, “is not an event of life” — but the possibility of death is an ever-present reminder that desire, the remit of our freedoms, is bound by time in ways that we cannot understand. Adventure and boredom are unserious in averting their eyes from this boundedness. Adventure mistakes the brief pleasure of insouciance for evidence that we can live without care, and too often, when the taste or the aptitude for adventure is past, the instances in which we might have employed our freedom in the service of care are past as well. Boredom delays beginning in the delusion that the impulse to begin is growing within us, when with every calling — apart from the most frivolous and barren of pleasures — appetite comes with eating.

    Seriousness is wedded to sincerity; it demands an earnest inquiry into what matters, and the courage to pursue it with a steadfastness that avoids the siren song of adventure and the cavalier aloofness of boredom. Seriousness is the reasoned approach to duration. It does not shout carpe diem in the thought that death may come at any time, because it may well not come quickly and we will be stuck with the consequences of our actions; and it does not tell itself there will always be time, because a time will come when there is none. “The serious is an allusion to passion and a call to order” that seeks the just measure in the pursuit of enduring joy.

     

    Virtue 

    Desire places limits on freedom, but it is in responsibility that freedom is realized. Hence the freest gesture is the response to the call of conscience. Conscience is of a piece with consciousness: the stimulus that rouses the mind from its stupor, sharpening the edges of awareness, is at the same time a state of distress, the intimation that something unresolved is at play that we have a role in rectifying. The tragedy of the human condition is that the wrongs that call conscience and consciousness into being can never be fully righted, because they exist in time, which cannot be reversed. And so conscience and consciousness strive together for an impossible but also impossible-to-ignore reconciliation. 

    Fundamental to Jankélévitch’s ethics is the belief that “all consciousness is more or less adhesive,” that all consciousness tends inescapably toward virtue or dissipation. Virtue is fidelity to the moral orientation that suffuses consciousness in the moment of becoming conscious; dissipation, or neglect of this orientation, cannot quell the ache of consciousness or conscience, both of which attach preferentially to the sense of something not right. The person who tries to drown out his scruples in nihilism or forget them by exalting the sensuous over the moral is like a cat trying to run away from its burning tail. “Not only does the a priori of moral valuation anticipate and impregnate all paths of consciousness, but also seemingly, through the effect of an ironic ruse, the rejection of all valuation accentuates its impassioned character: as if, in clandestinity, axiology [the belief in an ordered scale of values] had recovered its strength and acquired a new vitality: repressed, harried, persecuted, it becomes only more fanatical and intransigent.”

    Jankélévitch’s rejection of hedonism puts one in mind of the contrasting meanness and extravagance of certain heroes in Dostoevsky, whose conscience stalks them even in their fits of vice, and who preach morality and religion over banquets of suckling pig and cognac. They are prey to the “anarchic and even contradictory system” of pleasures — but at the same time good deeds exert a depraved temptation upon them, and they mistake this temptation for virtue. What they lack, first of all, is sincerity: the understanding that what they call good in a drunken fit of penitence is merely an “aesthetic intermittency… a luxury article, a supplementary and gratuitous ornament of our nature” coveted by the morally enervated consciousness. They cannot admit that they are not yet ready for “the infinite movement through successive exponents that constitutes moral life.”

    Good deeds bear the same relation to virtue as the lone note to a musical composition. No inherent property of good deeds forces virtue into being. The passerby who, he knows not why, flings a dollar at a beggar’s feet is not virtuous, nor is the abuser who, in a fit of remorse, hysterically showers his victim with gifts. The bad person’s good deeds are “spasmodic,” representing a temporary concession to others’ ideas of virtue or the flareup of a truncated conscience that tries but fails to overtake the whole person. “Virtue (if it exists) must be chronic,” Jankélévitch writes. It reveals its presence in “the occasion”: the test we face to show that our idea of goodness is substantial. 

     

    Pure and Impure

    Jankélévitch contrasts the “relativism of effort” with the “absolutism of perfection.” The latter, a property of Kantian maxims officiously fulfilled, of the realization of all utopias from Plato’s republic to the workers’ paradise of socialism, is instantly repugnant to whoever loves the human. What will there be to do when heaven has come to earth? What will the point of doing be when our acts are no longer of consequence, because there is no more evil to banish nor good to bring about? A moral life demands friction, the possibility of failure, whereas the ossification of virtue into reflex divests it of moral content, yielding a world of good deeds populated by morally vacuous individuals. Virtue cannot be static: it requires the tension of temptation. For this reason, “the moral is, in essence, the rejection of selfish pleasure.”

     

    The je-ne-sais-quoi and the presque-rien

    Rarely does Jankélévitch proceed by telling us what things are. His method is reductive — he glimpses his object, a je-ne-sais-quoi, an I-don’t-know-what, as yet unidentifiable, and peels away the predicates that opinion spuriously attributes to it, until what is left is the pure but elusive intuition of the presque-rien, the almost nothing. Emblematic here, once again, is death, the object of “a crepuscular thought,” a “pseudo-thought,” the center of ruminations that progress not forward but only deeper into themselves. We can say of death only that it is there — but death is nothing, and it is not. An almost-nothing, an “opaque destiny,” it exerts a refractory effect not on our understanding of life but on the feeling of being alive. Let us add to the list of the je-ne-sais-quois time, the self, consciousness, love, being, and all else that the eye fixes upon in a philosophical mood. To say what they are requires saying what they are not, and when all that they are not has been said, what remains — the presque rien — is as elusive as mercury. The je-ne-sais-quoi “is a manner of naming the impossibility of going to the end of things, of digging into their limit,” and a reminder that philosophy is a vocation rather than that laborious enumeration of primitive notions, inference rules, hypothetical cases, and ideal solutions by which the Anglo-American analytic tradition seeks less to philosophize than to render philosophy obsolete. Indeed, if Jankélévitch is right that what is moral cannot be the act itself but only the nature of the consciousness of the act, then the gargantuan ethical cheat-sheet to which utilitarian ethics aspires would mean not the perfection of morals, but their disappearance into rote obedience. The same is true of epistemology: the texture of existence, its bittersweetness, is indistinguishable from the heuristic value of error and uncertainty, and to replace these with facts, axioms, and laws is to divest human consciousness of the very things that make it human.

     

    The organ-obstacle

    The organ-obstacle is an impediment to a desired state and a catalyst that makes possible its attainment. Fear is the organ-obstacle to courage: courage must overcome fear, but without fear what might otherwise be courage would be mere rashness. It is transcending frugality that elevates generosity above extravagance, transcending selfishness that makes altruism altruistic. The body is the organ-obstacle of the soul; words are the organ-obstacle of thought. In the broadest sense, freedom, bound to irrevocable choices, is the organ-obstacle to freedom.

    “The resistance of matter is the instrument-impediment of form that the artist’s hand pulls from the rebelliousness of marble: you cannot sculpt a cloud! Poetry and music, in turn, invent a thousand arduous problems, impose the gratuitous rules of the sonnet and the often arbitrary prohibitions of fugue and counterpoint, enclose themselves in a strict play in order to find their reason for being… to feel free, the artist must find quandaries in his anagrams and calligrams.”

     

    Austerity and Moral Life 

    Jankélévitch denounces the “pseudo-austerity” of moral purism which, driven by a hatred for pleasure, offers its exponent “an aesthetic compensation for ethical disorder.” Pseudo-austerity is a degeneracy that imputes moral value to self-castigation. This malady is especially prevalent now, when so many have learned to prefer the fabrication of virtual moral avatars that will earn them the accolades of others to actual sacrifice on behalf of ethical causes, in the pretense that, if the gospel of self-abasement spreads, the moral order of the universe will be restored. We call this virtue-signaling. Such behavior “attests to the will to power of a spirit in delirium on the lookout for alibis for its basic indigence”; in plainer terms, it is a way of feeling righteous while doing nothing. In this stylization of his own existence, the would-be moral agent “disappears beneath the pathetic characters he finds it opportune to embody, is blurred behind the statue he deems it opportune to sculpt.” Sham mortification, melancholy for public consumption, trauma and despair twice-removed from tragedy: they all masquerade as the prelude to ethical action while burning through the moral impulses that might drive it. The pseudo-austere subject scrutinizes his conscience not for the good of what he does but for the good he tells himself and others he wishes to do, in the illusion that his zeal is a prelude to action when in truth it is a byproduct of his reluctance to act.

     

    Forgiveness

    Forgiveness is not forgetting. It is not exculpation through attenuating factors — for if there is no naked, undiluted fault, then there is nothing to forgive. It is not the oblivion into which the offense vanishes over time, because time cannot affect the moral gravity of wrongs. It is not the expression of an imperturbable magnanimity: rancor being the organ-obstacle of forgiving, the person who does not feel the pain of the offense has never truly been harmed and is thus in no position to forgive. Forgiveness is not a dogma or intellectual disposition that accepts the place of evil in the world: neither theodicy nor determinism has a second person; both concern the “anonymous universality” of third persons, which are creations of the mind with no necessary relation to persons of flesh and blood. Nor can we excuse a person without debasing him morally: forgiveness is the mode of acquittal proper to relations among equals, but we reserve excuses for children, drunks, the senile, the mentally unwell. The mutual recognition of dignity that invigorates equality collapses when I treat myself as master of my decisions and the other as a plaything of destiny. “It is possible that a forgiveness free from any ulterior motive has never been granted here below,” but this is no rationale for surrendering before the “replacement products” of unwarranted grace or forgetting; forgiveness is a presque-rien, but this does not make it a rien.

     

    Music

    Jankélévitch’s writings on music are nearly as numerous as his books of philosophy. In the Russian émigré community in the France of his childhood, he recalls, musical ability was more highly prized than good spelling. Visitors to his apartment invariably mention the two grand pianos and the teetering piles of musical scores; privileged guests were allowed to accompany him as he played his favorite composers. His taste was stuffy, with a pronounced fondness for the romantic: the “anti-hedonism” of the twentieth-century avant-garde lacked the enchantment of evoking “affective reminiscence,” which was, for him, one of music’s primary functions.

    With music, as with death, as with love, his aim is to clear away the discursive ornaments that delude us into thinking we have something to say about it. Being “both expression and constituent element,” music lacks the gulf between thought and its objects of which language is the bridge, and for this reason, it has no communicable meaning. Music is rather a mode of organizing experience. It possesses a vital structure, with a tentative beginning, a moment of plenitude, and an end; but unlike lost and yearned-for days, we can revisit it, and because hearing it the second time is not the same as hearing it the first, re-exposure to it is “not repetition, but the beginning of an order,” an arrangement of the sentiments in some sense analogous to the self. To listen to a favored piece again is like seeing the arrival of the dawn that tells us death has not yet come for us. The passionate energies invested in music, lived in repetition, make of it “a protest against the irreversible.”

     

    Decadence

    Jankélévitch examined decadence explicitly in a brief essay in 1950, but it is alluded to in other works, and his philosophy in its entirety may be described as the attempt to rescue the primary moral intuitions from the distortions that decadence effects upon them. Decadence is “the confusion of pure and simple consciousness with the consciousness of consciousness,” and mistakes conscious involvement in the world for consciousness’s involvement with itself. Decadence produces “two families of monsters: narcissistic monsters of introspection and monsters of excessiveness.” Both of these abound at present, the one busy attuning itself to ever more microscopic violations of pseudo-moral tenets that serve only to browbeat others and exalt the faithful, the other renouncing mutual respect and decorum in the name of a supposed authenticity, as though a self constrained by care and consideration were somehow less valid than the one whose meanest impulses are allowed to run free. The decadent consciousness redoubles on itself infinitely: “In despair at its own ease, the decadent spirit will create imaginary difficulties and invent artificial obstacles in order to salvage by diktat that resistance which is the only thing capable of preserving life from boredom and stagnation; for want of real problems, the spirit takes refuge in charades, riddles, rebuses.”

    The nostalgia for a supposed golden age, almost always a veil for reactionary tendencies, is a symptom of decadence and not its antidote. Decadence is a loss of attunement in moral and aesthetic terms: in the moral realm, it prefers the virtual to the material; in the aesthetic, it opts for the copy over the original or for the mise-en-abîme of ironic parody. Decadence is “crumbling and bloated,” seeking creative unity in vain. But this seeking itself remains a positive force that may announce the spring to decadence’s autumn. In this way, decadence is in fact inherent to progress. What it requires, in the first place, is seriousness: a sense of the brevity of time and the gravity of what is at stake. 

     

    Language

    So the word for

    Did you know her

    You may be thinking

    Are you thinking

    Of someone else

    The red oak survives

    Life in the city

    Feng is wind in Chinese

    Sirocco wind 

    Over the Sahara

    A wind off the desert

    Burdened

    Memory now sand

    A lost ring

    Buried there

    Bells in European towers

    Sound and light shows

    The three pyramids of Giza 

    We knew them in those years

    A few decades

    Restaurants and country fairs

    The O of a lighted Ferris wheel

    The swinging gondolas

    Returning with the word for

    Light in shadows

    The mirage of water

    Puddled in the road ahead

    Liminal

    Lagniappe

    The yet to come

    Immigrants

    Aren’t we all,

    all of us?

    Coming from a world 

    before time and dream,

    a place without time

    a place that does not exist

    into a world that does,

    of time and content. The clock starts

    with a slap, breath,

    an intake of 

    our air, the colors of this world

    and first dreams of what’s ahead.

    Open your eyes. Breathe

    in the spice of your new world. 

    The mountains here 

    are everything new to you,

    the rivers to cross

    whose currents pull

    you to other shores,

    beaches shining with an infinity

    of reflecting grains,

    borders, a geography of constellations,

    stellar borders,

    everything in a single grain,

    just reflecting. I’ve seen you

    in lines outside,

    in the heat, in the cold,

    looking to inhabit beyond 

    these lines. And soon, as the days 

    and years turn over, you’ll again need 

    to begin the journey, the familiar journey,

    the long emigration back

    to the world

    where time is not a dream

    but an airless landscape, without scent,

    at the border where the dream sleeps. 

    No documents of transition. Breathe.

    Afternoon Idyll

    You were dreaming again, of holding her 

    in the failing light of some failing

    stopover or another, some merely broken down 

    town with nothing operative but corruption. 

    The sun like a cavity filling with blood 

    on the western horizon

    made the ocean Pacific, the late afternoon

    dangerous in its willingness to reveal.

    Were you dreaming? The warm beer stamped a ring 

    on the bamboo where you left it, a green dress, 

    moss green, just tossed 

    over the cane chair, a pale dress 

    of cloth, abandon, something — what? Your hand 

    finding for itself a little game with another. 

    While you were dreaming she walked out to the veranda

    wearing your starched white shirt, rolled to the elbows,

    the tails down to her knees. 

    Her feet left small damp marks on the plank flooring.

    You watched as the light dyed her red —

    was she dreaming? In the sink, the emptied shells 

    of crustaceans, three chopsticks and two paper plates. 

    In the clay pot on the railing, she notices 

    a slender vine-like plant tied with twine, 

    staked with a chopstick.

    Dust

    So when I think of you

    there is light.

    There is a window

    that disappears at night

    and returns at sunrise.

    There is the dust of us

    on the slant of incoming rays

    warming the rooms where we were,

    the many rooms, the dust of us

    blended, one sheath of light.

    Why Did Humphrey Bogart Cross the Street?

    This is a small thing, but it happened in a time when we were content to hang on the marvel of moving photography. In 1946, without undue fuss or fraud, the medium could record actual things and say, look, this happened. That’s what we were up for then, the appearance of a changing now. Even if it was just being on a street in Los Angeles and waiting for the afternoon to subside. 

    A man comes out of one bookstore and looks across the street at another: was this the heyday of American civilization? The street is moderately busy, passersby et cetera, and there is subdued Max Steiner music in the air, alert or wary, call it background italic, as if in 1946 such readiness was as detectable as smoke in the city’s crisp fragrance. In a dark suit and a fedora, the man walks across the street. He seems headed for this other bookstore. But as he comes to the far sidewalk he passes a fire hydrant, and then, without a need in the world, but as if he has an inner life we’ll never know, he pats the top of the hydrant and moves on. If you want a glimpse of how good we were then, and what it meant to us — the movie thing — you could find worse than this.

    I forgot to tell you: there is a roll of thunder as the scene unwinds. It could be from out by Pasadena, but getting closer. No, this is not a disaster film about weather, or an earthquake splitting the street. But in a film called The Big Sleep you may wonder in the back of your mind whether some sleeper is stirring. It’s in that back of his mind that a man could think about thunder as he taps a hydrant on its head. Like touching wood for water.

    Or maybe the director Howard Hawks thought, Well, if this fellow is going to cross the street, we need a little extra to fill the time. Get me a dash of thunder, will you? Like putting mustard on a hotdog. But then perhaps the man in the fedora queried the director: Tell me, why am I crossing this street? And Hawks could have answered, Well, we need enough visual to make room for the thunder — and I like to watch you walk.

    We are attending to The Big Sleep, from the Raymond Chandler novel. This actor is Humphrey Bogart and he is playing Philip Marlowe, the private eye. Marlowe is on a case, so you’d assume that this street scene has to be significant — don’t we know that movies are loaded with all the big things about to happen? Isn’t it the rule on screen that every last thing is vital? The details are clues, and that’s how we are always the private eye. The process of a story is us finding something out, and over fifty years or so that became claustrophobic — as if every damn detail was weighing on us. The visual is so single-minded as a construct. It can’t breathe without insisting on focus and action. No one on a film set ever called out, “Inaction!” And yet there were listless streets in Los Angeles, or anywhere, where not much was happening. Certainly not enough for a movie. Think of it as life.

    And that’s a loveliness, like Mrs. Dalloway saying to herself, “What a lark! What a plunge!” as she sets out walking on that summer morning in London to buy the flowers herself. That is a lyrical if unimportant moment, so exciting yet so ordinary, and it’s the kind of thing that is hard to get on film. Oh, the grind of all that relentless purpose! Making sure everything is underlined. When another wonder in photography is how it can be open to the light, to chance, to just the persistence of vision. Open like a window on a good morning.

    If you watched The Big Sleep over the years, you saw that the scene coming up at the Acme bookstore is blissfully unnecessary. That is curious, since it is among the most delectable scenes ever managed, even in the work of Howard Hawks, who loved to be casual yet provocative at the same time.

    It’s a good thing this is a classic, because it would never get made now: two bookstores on one block, and flirtation for its own sake? What happens is that Marlowe goes into the Acme bookstore (it is empty except for a young woman who works there; she has “the look of an intelligent Jewess” in Chandler’s novel). He talks to her; he prevails on her to take off her spectacles and let down her hair (not in the novel). She knows her books and teaches Marlowe that the bookstore across the street is a sham. Not that that matters; we already knew that its owner was a crook. But this woman (she has no name or story, apart from her pliancy and letting her hair down) is persuaded to put up the “Closed” sign, find a bottle, and let the thunderstorm that was coming pass away. We have to imagine what happens next (that’s where the Code was terrific), but this may be disconcerting now because the film takes it for granted that she is smitten and just a lark and a plunge in male fantasy. Most of this is Hawks, not Chandler.

    I don’t mean to forgive the scene. I can’t rule out the possibility that it exists because Hawks had found this young actress, Dorothy Malone, and wanted to have something for her to do, some flowers they could sniff together. But I’m not shocked by it. I’m looking at it to reinhabit the miracle of movie things that hardly need to happen, and the radiance of the nest-like places where they occur. You see, it’s not just that the medium is in a cul-de-sac now, where it cannot condone the male gaze of Howard Hawks or the fantasizing that he lived for. I’m also thinking about the loss of such small scenes and what I’ll call movie day-dreaming. Was there ever a nicer bookstore in a movie? Just watch how Malone moves. And feel the onset of evening.

    That may make the bookstore seem unduly cozy. But it is part of The Big Sleep and movies of that era that the sets were not just plausible; they were done with affection and emotional ownership. Marlowe’s office is a bare waiting room, but Hawks treats it kindly, so when Bogart and Bacall run their delirious telephone routine, the mundane is complicit in the marvelous. She has an itch above her knee so he tells her to scratch it. Is it absurd to see an empire at its pinnacle in that tremor? Just see how the ordinariness of these rooms sustains rapture. Maybe this was as much stage as screen, but Hawks guessed that we liked to imagine ourselves in these plain interiors. That’s how Bogart might become iconic even if his Marlowe has only one suit. (In the book, he’s a bit of a dandy — unthinkable for the movie.)

    Bogart is known still (I hope) as an illustrious tough guy — sardonic, abrasive, a needler, and rough when he had to be, while still willing to be romantic occasionally. And so anxious to be liked: don’t forget how in his two Hawks pictures in 1944–1946 he was talking to this woman who did the fondest thing for him — she answered him back, so he learned he didn’t have to be Warner Bros hardboiled all the time. That happened, let’s say, by chance, and it wasn’t that Hawks hadn’t had his eye on Lauren Bacall first. But Hawks always rejoiced in the principle of impressive guys being taken down. What most preoccupied him on The Big Sleep (and To Have and Have Not, the film that preceded it, the one where the nineteen-year-old Betty Perske slid into being Lauren Bacall) was the way Bogart walked. 

    If you think that’s fanciful, look at the film again and count how often Bogart has to walk across a space, a room or a street, and sink into Hawks’s rapture over this fundamental action. There’s a great deal of daft mystery in The Big Sleep, if you take it seriously or draw timelines to puzzle it out. But while you’re trying to follow the mystery, slip into the enchantment of this un-tall man, very plainly dressed, strolling into mythology. In the same way, in the first pages of Mrs. Dalloway, you feel for Clarissa walking in the June morning, counting off the chimes of Big Ben on her way to the florist. “I love walking in London,” she says. “Really, it’s better than walking in the country.”

    I’m not saying Hawks had a crush on Bogart (or more than Mrs. Woolf adored Mrs. D). But in the age of movies directors loved everyone who moved, and gladly endured a certain amount of story or drama to get into simply photographing the way a Bogart walks and talks and listens, or flat-out exists. He was an actor, of course, so long as no one caught him at it, but he walked with soul, in the way you might touch the top of a hydrant as you crossed a street. You can list Marlowe’s Chandleresque credentials, his career record, his sworn testament, and so on, but The Big Sleep is that hydrant, it is him turning up his hat brim and lisping about the Ben-Hur edition with the erratum slip, and his telephone manner, and the way he catches Carmen Sternwood (Martha Vickers) as she tries to sit down in his lap while he’s still standing, and so many other moments that would have been cut if you were monopolized by plot and solving the mystery. 

    Then there’s the patience with which Hawks was watching him and finding little things for him to do. Plus the way the director did movie after movie without being picturesque, stylish, or what was called cinematic. He had never thought of rivaling Hitchcock, whose incessant unique angles on everything trembled with his fear of life and looking. That’s where the claustrophobia can take you. But Hawks looked at the world and living rooms like a horseback rider in Wyoming.

    He liked life and day-dreaming and the way the movies married the two and left room for pretty women in cameo parts. That is his shady elegance, and the airiness of those years when the world was desperate, circa 1945, but so much calmer than it can be now. Mrs. Woolf killed herself — we know that story — and Mrs. Dalloway culminates in a suicide, but you don’t forget its feeling for the epiphany of London and the florist shop aromas in June. That novel talks to itself about writing a book like Mrs. Dalloway; it is enthralled by the balance of composition and dismay. It’s like Bogart asking Hawks, What’s this scene about, Howard? and Hawks telling him, Well, this is the Sternwood house and you just follow the butler down the hallway, looking at stuff on the wall, and then after seven or eight paces there’s Martha Vickers coming down the stairs to meet you. She’ll goose you.

    What’s she going to do?

    I don’t know yet.

    So it feels like a rehearsal. For all we know, Bogart had not the least idea what he was meant to be doing, or why he was wearing out his shoes on the picture. But look at it now, and you can’t miss the pilgrimage. 

    Books and professors tell you that The Big Sleep is a film noir, a whodunit, a tough guy picture. There are traces of that genre to be sure, but don’t settle for the dead end. The Big Sleep is a chamber work, a screwball comedy so relaxed or evasive that we don’t need to get ready to laugh — it just happens. For all the terse fisticuffs, the offhand shootings, and the corpses left behind, it is a tranquil movie about optimistic motion, standing still and doing as little as you can get away with while giving everyone the eye. No one — least of all Hawks or Bogart — would have dared think this, let alone say it, but the picture is into momentary beauty. The wittiest summary of that aspiration is the credits shot, the silhouettes of a him and a her, both smoking then putting their cigarettes side by side in an ashtray — that is the bed scene in the movie. Hawks delivered it at the start so he could concentrate on talk.

    “Beauty” gets to the heart of the matter. That’s a concept we take for granted now, like oatmeal and other banal staples: it is the hope that helps us negotiate Hiroshima, Syria, and Ukraine; it’s the luster and the sheen that let us get along with cancer, poverty, and the ads on TV. “Beauty” is the code that humanism has been using to contest our ugly social nature. But I’m putting the word in quotes because it would have made Hawks or Bogart wince. To do pictures in their heyday was to come through on schedule and on budget; to have something that grew lines outside the theater; to put away some money and permit the various superiorities that came from making pictures.

    Perish the thought that anyone would look at your movie and say it was beautiful. That cut against the grain of pictures being for everyone. It smelled of Soviet expressionism in the 1920s or French malaise in the 1930s. Something that alarmed Hollywood about Orson Welles and Citizen Kane was how it wanted to bludgeon us with show-off — or as critics learned to say, cinematically. That drop-dead emotional mouth uttering “Rosebud” was the warning shot, something out of nowhere meant to send us into nervous raptures and draw attention to itself. Whereas the keynote of movies, their astonishing habit, was to stay casual. Hawks was not alone in this. His avoidance of expressionism can be seen and felt in the work of Ernst Lubitsch, Preston Sturges, Frank Capra, Mitchell Leisen, Michael Curtiz, and most of the directors trying to keep in work.

    If they were lucky they did not pause to think this through, but there was no need to press beauty on the screen or on audiences, because photography was already there. That was the amazing thing, the knockout; it was the way of confusing life and the lifelike that is the riddle and enchantment of the medium — and it was the quiet bomb that keeps going off. Ginger Rogers was surely pretty. But no one reckoned Fred Astaire was good-looking, or more so than Stan Musial. He had a tough time getting into pictures. But then the truth sank in, that Fred might work out a dance routine in which he and Ginger went all the way across the room in one unbroken shot. No need to cut to close-ups of twinkling feet and all that nonsense. The thing about Fred and Ginger was the blithe spirit of this homely man saying, we can manage this room, just like that, so long as no one thinks to cut or asks how beautiful it is. The joy of their films is not that the dancing is hard — it’s in seeing an ease in which everyone dances as a matter of course, like blessed walking. It’s the non-dancing stuff that is difficult to take.

    A theory of beauty existed in these casual lifelike miracles being put up on a big screen, so that all we had to do was wallow in the warm water of it. Yet it was trickier than that, for it played on our desire to be smart. It’s only a movie, we told ourselves in philosophical delight. Here was a paradox that humanism had not faced before: for while it could be Philip Marlowe out in LA at some five o’clock in the afternoon, on a street resembling the real thing, it was also Bogart, a chronic actor, on a stretch of city that had been designed and fabricated and would likely be folded up once the shot was done, or tactfully repurposed as an avenue in a college musical. It’s the threat buried in that disconnect now (the chance that our dream has been betrayed) that encourages gun-ownership and other panicky behaviors. The guns are gripped so tightly as a way of clinging to the idea of “the frontier.”

    We had been looking at a romance, and we might go mad from the reverie. Yet The Big Sleep was offered as a run-of-the-mill product, and not a great work from a genius. Long before Marshall McLuhan had thought of it, the medium had kidnapped most messages. Movies were such a rapture that no one needed to waste time saying they were beautiful, or getting to be art. Though it might have been useful if someone had thought to ask, Well, sure, I love these pictures too, but should we start wondering what is happening to reality? What are all these women doing — taxi-drivers, hat-check girls, pretty psychopaths, and graduates in bookstores — and what are we meant to make of them? 

    In every historical reckoning, this denial of beauty or the allegedly higher reaches of art was crucial. The movies and Hollywood itself were constructs designed to smother notions of creative elitism. There was only one elite and that was the money. Wasn’t the medium for everyone, as no medium had been before? This was the inflationary boost served up for frightened people trying to get through the Depression, commonplace failure, and the disappointment at how the United States had turned out so far from its ideals. From 1776 on, we were suckers in the frenzy of advertising. The pursuit of happiness? Give us life, liberty… or Liberty Mutual. 

    In Hollywood, the abiding promo saw happy endings and lovely pictures as the rights of man. An end was nigh. As audiences began to stay home, film studies moved into academia, and movies turned self-conscious and sour about themselves. I put it that way because of our obstacle: it becomes more apparent every day that America will destroy itself rather than yield to measured, critical introspection. So it’s easy to argue that The Big Sleep is as antique and as woeful as love songs, handwriting, pitchers who have to hit, and movies that practice flirtation and then dissolve so that we can imagine what Marlowe and that clerk did while it was raining outside. For Chandler, “the big sleep” had meant death; for Hawks, it welcomed dreaming.

    It’s a leap forward, from books on Howard Hawks to the New York Times wondering whether the movies must really be dead if it takes Tom Cruise and Top Gun: Maverick to remind the business of glory days. As if Cruise had any idea in his sixty-year-old triumphalism how Bogart had hesitated in The Big Sleep, considered some drollery, and let it pass by because it would be vulgar to draw attention to it. Long before the technical effects were available, Cruise was a photo-shopped actor, clinging to his grin — just don’t remind him how he did Magnolia once upon a time.

    The movies had been dying for so long. As with flowers, ripeness is only today and tomorrow. Audience numbers began to decline in that age of film academia, no matter that Hollywood had briefly fallen into the hands of young rebels who were making some films that approximated the turmoil of the country, and deplored it. That fierceness couldn’t last — it struck at the virtue of money; and then George Lucas ordained a technological splendor that restored large young audiences and assured them that it was not just possible, but obligatory, to remain young forever, or for as long as that scheme lasted.

    What happened to the movies was that the medium abandoned that delicate and adult task of looking like life, and passed into a realm where, because anything was possible, photography itself was given up.

    That sounds odd, maybe, because we still live according to the homily that movies have been photographed. It is true that images are recorded, and then heavily doctored. But the digitization of appearance and the flood of computer-generated images mean that few filmmaking ventures respect the reality of that humdrum Los Angeles street. Everything can be handled, from stick-figure armies ready to be wiped out, to an actress, Vanessa Kirby, seemingly at full term and having a baby in Pieces of a Woman. One may be more sentimentally inclined to the latter than the former. Kirby certainly seems more compelling or touching than Benedict Cumberbatch blundering around like a blind man in an obstacle course in Doctor Strange in the Multiverse of Madness (some titles can’t help being warnings). But you need to understand the condition whereby many filmmakers (and audiences) now are drawn to their power to surpass appearance. Now actors have to play scenes with spectacles they cannot see: Cumberbatch may stare through vistas of sublime destruction, while Bogart actually crossed the actual street or actually caught the reckless Martha Vickers in that Big Sleep opening. We are at a point where the people in movies have the status and the flexibility of characters in animation. So many of them are only diagrams of humanity.

    Live action and animation involve different degrees of experience, and a different contract with the audience. Chloé Zhao has every right to be a success, and she had a new opportunity after Nomadland, a hit in so many ways, and seemingly pledged to ordinary stuff happening, but still too pious or noble for my pleasure. One understands the creative pressure to make a picture about people who have given up on the world — so many special-effects extravaganzas ride on that dynamic — but it is hard for a mass medium to do that without seeming self-satisfied, fascistic, or religious, while securing its home on the high ground and keeping its own discreet firearms. Nomadland was picturesque and even sanctimonious in the director’s gaze, where her earlier film, The Rider, had been factual, incidental and transfixingly commonplace.

    It may seem archaic or forlorn now to favor the more humane approach, and one has to recognize that some young audiences at picture shows are more inclined to be wowed than moved. I am not knocking the wow: it has always been integral to moviegoing, where the urge to show people something they had never seen before was essential in the enterprise. Watching women talk back to men in Hawks pictures, and silencing them — once that was a wow. With several writers — from the lethally professional Jules Furthman to his drinking pal William Faulkner — Hawks pioneered smart small talk. It felt like a knack that could save the world. And he ran the lines serene in the knowledge that he would seldom be troubled by those policemen of significance named Oscar. 

    Still, it took only a few years for Chloé Zhao to go from that second film, The Rider, to Eternals, made for Marvel Studios. The latter cost $200 million and a piece of that went to Zhao, as it should. But the picture was a disappointment, whereas The Rider — made with occasional actors on a South Dakota prairie, and maybe 1 per cent of the Marvel budget — was one of the best American films of this century. One reason for that was the film’s faith in a world of life and death that goes on between people and horses. Do not forget for how long that transaction was one of the most reliable marriages.

    It may feel contrived to locate a culture and a silver age in Bogart crossing a street in 1946. But twenty-four times a second in those days a photo-chemical reaction occurred with light and silver salts in the emulsion, and then some judicious pushing in the labs. It functioned for all of us who existed then, because we were still in love with the freshness of photography turning life into the lifelike with more enhancement than loss. We felt we might handle that risky in-between. I am not mining mere nostalgia, or saying that things were better then. Our race does not quite deserve “better” as a measuring stick. Let’s just say it felt more ironic to be pretending then, while the gradual loss of amused fun is one of the saddest retreats our United States have taken. That is how computerized infinities have killed the notion of humdrum rooms.

    The Acme bookstore scene is some kind of disgrace now — but it wasn’t in 1946, even if it nurtured resentments that might close down its fun in sixty years. That’s not my point, even if I own up to being a citizen of the disgrace. What’s more significant is regret over how a cherished medium — in telling silly stories to all of us, it was once the hope of the world — walked away from the nearly accidental radiance of small things happening in the light. It sounds farfetched now: how a few people put together a city of bookstores, with a street set, the rumble of thunder, and the unexceptional placing of a fire hydrant. But they found a harmony and a shabby spokesman for our wish to be more confident than we are.

    As I was writing this essay, an issue of Sight & Sound arrived. It reprinted a 1971 interview in which Joseph McBride and Michael Wilmington talked to Howard Hawks. The magazine had enriched the interview with several photographs from its Hawks archive. One was a production still (not a frame from the film) of Angie Dickinson and Ricky Nelson on the Tucson location for Hawks’s Rio Bravo, made in 1959. She is looking off frame to her right, and he is rather meekly following her gaze. He feels like an extra dressed up in cowboy gear but uneasy playing a gunslinger named “Colorado” who helps win the day in that screwball siege story.

    They have a companion, or a question mark, between them. This is a horse with a white blaze on its head, looking at the camera but playing it very cool — I told you, the horse was pivotal in our culture — and edging the romantic pose closer to farce.

    You see, she is dressed in tights and a leotard along with high heels, none of which seems appropriate for semi-desert Arizona at the close of the nineteenth century. But she is wearing this unlikely costume with calm assurance. She may guess she is a natural, an ideal, and hardly the least appealing person ever photographed. Dickinson was another of Hawks’s discoveries, and in Rio Bravo (as “Feathers”) she seems as subtly actual as Bacall (as “Slim”) in To Have and Have Not. The photograph has an aura less of the West than of the “Western.” It is ironic and Hawksian in teasing actuality, and in how hard it had become by 1959 to make authentic Westerns. That window had been eclipsed by the decades of pretend movies. We exhausted our brave dream. You can’t kiss or kill anyone now without referring it to your movie repertoire.

    But there might be heaven in a sly picture about a few straight-faced actors pretending to make such a story.

    What am I doing here in the desert in tights and high heels, Howard? 

    You’re giving that horse something to think about, and when you’re ninety, if we’re lucky, it’ll be the same. 

    Bogey, why did you give that touch to the fire hydrant?

    Did I do that? I don’t know. I wasn’t thinking.

    The Trance in the Studio

The vastness and nuance and intelligent, rough beauty of John Dubrow’s paintings, the rhythmic turmoil which roils their cakes of paint, tempt one to conceive of them as natural wonders. How are such things made? These works sometimes put me in mind of the forces of nature that combine to create hurricanes and mountain ranges. In the deep geography of Dubrow’s works there seems to be no mediation, no polish, no editorial mercy to bridge, for the viewer’s sake, between what Dubrow was moved to make and what Dubrow meant by it. The painter’s long toil — these works require years to complete — is rewarded with an extraordinary immediacy. He does not translate for our sake. We meet him entirely on his ground.

I thought all this before I had ever stood before a proper Dubrow painting. I had seen small oil sketches, the free power of which foreshadowed the force of the full-scale versions. Dubrow’s paintings are enormous, not only in height and width but also in the sculptural thickness of their surfaces and in the demands that they make. A topography of calcified oil rises and falls from one edge of each surface to the other. Seen from the side, uneven protrusions testify to the force of the painter’s impact. The surfaces are like the beaten ground of a paddock in which a wild horse has been penned. The man who made these paintings must have exerted prodigious energy to whip and slice and scratch all this paint so that it seethes the way it does. The edges of Dubrow’s surfaces are never straight because the paint rises and cakes over them; the corners are rounded; there are no clean angles. Like any inch of the natural world, Dubrow’s paintings are not neat. And yet they contain a beautiful order.

John Dubrow’s studio is sheltered in a converted tobacco warehouse twenty minutes outside Manhattan, along with several dozen other artists of various kinds. The complex in which it sits, called Mana Contemporary, strikes the visitor as misleadingly unsuitable for the attainment of transcendent experience: it is made up of several slabs of brick in a gritty, graffitied stretch of Jersey City only a few yards away from the train tracks whose thunder protects against gentrification. The eponymous Mana, Moishe Mana of Moishe’s Moving Business, is perhaps unaware of the condign allusion of his surname. Dubrow’s work is itself like manna, by which I mean it is inexplicable, it seems to have fallen from heaven, and it contains enough to satisfy a variety of appetites. Above all, it must be experienced physically; the effect of its physicality is difficult to describe. The charismatic textures of these dense canvases are a challenge to descriptive language. So is the atmosphere of his studio, with its mixture of solidity and vertigo, of gravity and excitation. Describing it in words is like trying to paint the sensation of tumbling down a flight of stairs. This incongruity is frustrating, and that frustration is precisely what makes the painter’s blood race and his paint hum and bellow. He is trying to describe in paint sensations that can only be felt in life. Something primary, something both sophisticated and atavistic, lives inside that studio. I will try to tell you what this means.

The path from the front doors of Mana Contemporary to the elevator winds past a number of John Chamberlain’s gripping and enigmatic sculptural constructions made from welded scraps of colorful cars. John’s studio is on the fourth floor. The elevator doors opened in the middle of a hallway, on the left side of which a glass sheet covers the wall of a large dance studio in which a choreographer was presiding over a practice session. I turned right and walked down a few narrow, high corridors lined with enormous iron sliding doors until I reached 411. On the other side of the massive iron door, another world. Paint, paint, paint. Speckles of it and streaks of it and mounds of it. Paint piled high on a table that had once served as a palette but has over the years calcified into a many-colored undulating mass. Paint encrusted on empty paint pails and paint brushes and paint tubes. Stains of paint on the ground, on the tables and the chairs. The smell of paint is thick even in the immense space of the studio. The room is huge. The white walls, way above human height, lose their paint stains as they stretch all the way to the industrial ceiling. Enormous windows, the kinds of windows artists pray for, gape on two stretches of the studio, suffusing the immense space in a big, gentle light. On the day of my visit it was pouring rain outside — the light was soft and strong, like the artist.

Where did he come from and how did he get this way? John Dubrow was born in Salem, Massachusetts, in 1958, studied at Syracuse University, Camberwell College of Arts in London, and then the San Francisco Art Institute. Formative years spent in California, Israel, and New York — where he has lived for the past several decades — mark him as a man with many influences and admirations, a man from no place and every significant place. He has painted the landscapes and rooftops of Jerusalem and New York City, in equal parts a creature of the Bible and of Babel. Leaning up against the walls of his studio are portraits — in an especially lyrical picture, the poet Mark Strand peers from behind his folded arms — and landscapes of euphorically vital greens and blues. On some walls crowded cityscapes buzz across from near-abstract combinations of flesh tones. Everything is oil paint.

    Paint is John Dubrow’s air and water and food. His commitment to the substance is absolute, almost monastic. He has spent his whole life studying its capabilities and its effects. While I was working on this essay I dreamed that I had to wash pills down with oil paint — to squeeze a tube into my open mouth and toss back two small capsules. His fanaticism is contagious: for John, paint is medicine.

The paint which clings to the towering stretches of what John says are canvases — the paint wholly obscures the surface on which it settled — imposes itself with great force. The paintings are not violent, but they possess enormous power. Dubrow is very exacting in his observation and in his practice. (He is also slight and soft-spoken.)

John rarely uses a paint brush. The paintings are too rough and large, and a brush’s texture is too limited. As mentioned, he does not use an ordinary palette, but requires an entire table. Near the center of the studio a quarter of a large table is covered in half-empty paint tins. The rest of it glistens with the colors which make up the painting with which he is currently wrestling. Often he uses an enormous palette knife to apply the stuff, but he also uses his hands, covered in blue surgical gloves. The masses of paint spread so roughly across the canvas with his knife form heavy planes. These colored planes establish the painting like bricks or sheets of cement. They are strong, weighty, and deeply intelligent.

    Tension undergirds this intelligence. That tension is fueled by the force within John fighting for figuration and the force pulling him towards abstraction. The process of creating a painting, for John, is made up entirely of these two compulsions, these two antithetical conceptions of what the painting ought to be, which do battle over and through John as he works. The painting is a record of this agonistic tug of war between figuration and abstraction. He builds up one and then breaks it down in service to the other. “That’s why they take me so long, I think.” Every step forward has to be undone. The artwork is a record of new beginnings. Creation and destruction; creation is destruction.

    For hours on that wet, tranquil afternoon we moved from canvas to canvas talking about art. When one asks John questions about his work, he answers in riddles, not because he is being coy but because he is trying to be precise about a process which is largely incommunicable in language. 

    CM: What’s the difference between pure abstraction and what you’re doing? Is it color and texture?

JD: It’s the difference between kinds of space and form and texture. In pure abstraction those things usually stay on the surface of the picture plane with an even frontality. For me, in my own work I’m trying to get things to jump around back and forth from deep to shallow, from solid to fractured — I want things to seem to be one thing but actually be something else. As far as texture goes, I never think about the buildup of my own surfaces. The thickness develops organically while I’m working, although I do notice it. Most contemporary painters who build up textures use them as a decorative or willfully emotional device.

John is a scholar-painter: he has studied the history of his art, as the books scattered about testify. After many decades mired in the vernaculars of the art world, he invokes its language to defy its axioms. Translation is necessary for the uninitiated: when John talks about applying texture decoratively, he is gesturing towards the strides into flatness and abstraction which were sanctified in part by the ferocious hectoring of critics such as Clement Greenberg, who insisted that art history was Hegelian, that it traveled inexorably, with the force of a logic, out of figuration and into abstraction, and that painting should nevermore strive to communicate three-dimensionality. Modern painting, Greenberg decreed, had to distinguish itself, to justify its own existence, by confining itself to the two-dimensionality of the picture plane:

    Three-dimensionality is the province of sculpture. To achieve autonomy, painting has had above all to divest itself of everything it might share with sculpture, and it is in its effort to do this, and not so much — I repeat — to exclude the representational or literary, that painting has made itself abstract. 

    This edict, issued repeatedly by Greenberg, who considered it his job to influence the art world rather than merely describe it, altered how artists — and the rest of us — conceive of art. Dubrow is still defining himself against the framework that the high church of modernism established. So, while purely abstract artists are not tangling with space and form and color all at the same time, John’s post-abstract but incompletely representational pictures are persistently communicating space and establishing three dimensions.

    JD: The only thing I’m interested in is the volume of space, and building and breaking down form at the same time. For instance, this painting [he gestured towards one of two canvases leaning against the wall in front of us] is a painting of two figures falling in Paris.

Eyes wide, I looked intently at the identified object and muttered, “If you say so.” A near-square mass — 50”x58”: John likes large surfaces — of swatches and scrapes of multicolored paint defies narrative explanation. In the bottom right-hand corner a cake has fallen, or been beaten, or been cut away, and a triangular gap of open space yawns between the edge of the canvas and the sheet of paint above it. It looks like the craggy cut of a cliff, so thickly layered is the paint. The painting is composed of colored swatches — some of them solid tones, some vibrating with multicolored flecks. The texture is uniformly uneven, sometimes like gravel, like stucco, like tesserae, like the mottled sides of buildings in beaten-up alleys. There is not a single stretch of smoothness. The dominant, heaviest colors are grayish purples — four of them arranged in slanted, flailing, and stretching masses from the lower left-hand side upwards to the middle of the rightmost edge. A deep blue anchors the left lower corner, grasping up to the upper left. A wide passage of distressed pink presides over the topmost edge. Flesh tones shimmer in fleeting scraps throughout, unsettled, disembodied. Crimson and cadmium scrapes dance in the upper left corner and the bottom right one. The whole thing teems with color in rhythmic, competing movements. This painting of two figures falling is not a painting of two figures falling — it is a painting of the feeling of a fall, the essence of a fall. An impossible thing to put into writing, and an impossible thing to put into paint. John conscripts his viewers into an exacting and mystifying exercise.

    JD: I was walking in Paris with my partner Kaye. We were holding hands walking down from Montmartre and she tripped. So when I came back I did a series of paintings of that experience.

    CM: How does a painting usually start?

    He thought for a moment. 

JD: Kaye and I were in the subway last night going to Lincoln Center. We got caught in a human traffic jam underground, people going in every direction shoulder to shoulder. Not a visual experience, a psychological and physical one. Especially after Covid, that physical experience was like a revelation. I instantly had a flash: this is my next painting. Will it be? Maybe. If so I’ll probably go up there to that spot and try to add visual pieces to the inspiration. At other times it’s those same elements, physical and psychological, but predominantly visual, which is more straightforward. I can follow the visual but sometimes it gets in the way too, as I usually need to break away from the visual to get to the real experience transformed into paint.

    CM: The real experience? You mean the feeling of it?

    He nodded.

    JD: It’s the feeling but it’s also, well, for me it’s very literal. You don’t see two figures falling, but I see the figures clearly. What I’d like to do in all these paintings is have these opposites collide. I want to be building a form and breaking down a form at the same time. The building of volumetric space and the flattening of that space. So it’s like everything is fighting with itself. I’m always describing something very literal, and then I’m breaking down the very literal. So at different points this painting — all these paintings — were very figurative.

    CM: Why are you breaking it down?

    JD: Because the intersection of these two things is what interests me. It’s a Cezannist idea. It basically is the same project as Cezanne’s. How do I build this and also have it fall apart and have it become an abstract patterning? I want both at once, in equal intensity.

John is maniacal about studying the old masters. When I was at the studio a large volume about Giovanni Pisano was open and obviously in mid-study. One wouldn’t recognize their styles in his work, but the spirit of the work, the seriousness, the rigor, the reaching for a breakthrough, is analogous. Any student of art who has had their breath taken away by Cezanne’s simultaneity of structure and impression will recognize something in John’s experiments and intuit the harmony that is his goal.

Greenberg, of course, would have bridled at the pronouncement that these two artists share a project. By Greenberg’s lights, “Cezanne sacrificed verisimilitude, or correctness, in order to fit his drawing and design more explicitly to the rectangular shape of the canvas.” Notice the distinction between Greenberg’s understanding of Cezanne’s project and Dubrow’s conception of the same works, the same choices. Both men recognize that, for the purpose of achieving a desired end, Cezanne moved away from traditional representation. In Greenberg’s mind, he did this because he wanted to create a painting fitted properly for a flat surface. Dubrow’s analysis is precisely the opposite: Cezanne, in order to communicate space as it is felt, needed to repeat elements of the volumetric structures he was looking at but also to alter those structures such that the painting could communicate and even depict the vitality of the object. Communicating terrestrial vitality, for Greenberg, is straying from the ideals of modernism, since a flat surface cannot contain a living thing. But when John looks at a Cezanne, figuration and abstraction both contribute to representation, insofar as representation refers not only to the way the subject looks, but also to the way it feels — to the experience of the energy in the scene. It is a mark in favor of John’s view that Cezanne’s earliest paintings lack the movement that vivifies his later ones. Those early canvases were of darkly imagined scenes — a rape, a murder. Only later in life, when he became Cezanne, did he paint exclusively and fanatically from life.

    CM: Do you feel, like Cezanne, that you are trying to invent a new kind of painting?

    JD: I feel that with every painting I’m constantly trying to push forward beyond what I know I can do. And so they take me several years. This one took me three and a half years because I’ll be working on other paintings simultaneously and also going to see art that will help me break through to a new place in an old work. I went to Italy to see Pisano and when I came back I immediately returned to these figures and these figures became more organic like Pisano’s figures. The movement, there’s nothing static about those statues. And so when this happened here [he gestured towards a part of the canvas] then everything else was very clear to me. 

    CM: Can you show me where the figures are in these paintings?

JD: Well, there are four figures here. This [he pointed to a passage] is the actor Paul Lazar, this is JoAnne Akalaitis, this is the actress Wendy vanden Heuvel, and this is Annie B. Parson, who is a choreographer… And there’s a table over there… And the catalyst was the moment when JoAnne put her hand on Paul’s shoulder so that was there from the very start.

    CM: And would you expect your viewers to think of those four passages as figures?

    JD: I don’t care… People tell me they can’t. They say “I have no idea what you’re talking about.” But for me, they’re all… they’re so specific in my mind. In this regard “people” are just a framing mechanism for determining where to put related masses of color on a canvas. But I’m imagining those specific people all the way through no matter how buried in paint they become.

    But that’s how I look at Old Master paintings, too. It’s always been this way for me, from the time I was twenty when I was obsessed with Titian. I never saw the narrative. The narrative is only important because you can tell by looking at the painting that the painter really cares about the narrative. But the intense attention is what matters, and however that informs the painting doesn’t make a difference.

    CM: Do you want the viewer to be able to see the figures in the paintings? Because I look at this and I want to resist trying to find the figures. I feel that it is inhibiting me from giving myself over to your project.

    JD: Well, I do feel like what I’m trying to do is to create a new construction. I don’t want to give you exactly what you already see, I want to give you something else. I’m reconstructing the world and it’s not going to look like the world as you or I commonly experience it. But one of the things about painting figures in this kind of context is… you know, I set up an abstract context and I can’t just put figures into it. It wouldn’t make sense. So basically I’ve had to learn how to reformulate and reimagine what a figure is.

    CM: And you need both abstraction and figuration in order to make the painting feel like the experience?

    A swift nod. 

    JD: Yes. I need it to feel exactly like the thing. Normally it has to lose the descriptive element in order to feel like the experience. The miracle of Cezanne, and the miracle that I’m looking for, is that there still is a picture box in Cezanne. By which I mean, there is the traditional conception of the canvas as a window through which you see a comprehensible scene, and there is a picture box here too, but I’m denying it simultaneously. 

    The “picture box” conception of a painting engages with the canvas as if it is a stage, a proscenium in receding space, on which a scene is set, as in a play, with intelligible figures that are part of a legible narrative. Abstract painting, of course, is nothing like that. I looked around the studio. John’s earliest works were certainly representational, painted in his own patchy and idiosyncratic (and beautiful) realism. As my gaze drifted from the walls closest to the door to the ones near the center of the studio, the images became increasingly unintelligible, increasingly abstract. I confess that the ones closest to me didn’t look at all like picture boxes, and I strained to understand the distance traversed from door to easel. It seemed to me that over the years his pictorial enterprise has been entirely transformed. The new paintings, the ones that loomed directly over me, were making alien, singular demands of the viewer. They radiated a different energy. I needed to learn how to see them. Syncopating to their rhythm, acquiescing to their constraints and standards, required developing new muscles. How did the hand and mind that made the earlier pictures also make these recent works? 

    I thought for a while about how to formulate a question that might elicit a satisfying answer. 

    CM: When you destroy it, when you break down the figures, do you know what you’re doing?

    JD: No. I have no idea what I’m looking for. I know only what I’m not looking for. I know that I’m looking for the painting to ultimately be a completely free-standing object that reminds me of what I am remembering — because there’s no source material for most of these paintings except my memory. I might do drawings, but I’ll rarely look at those drawings.

    John paused for several beats, apparently hesitant to continue. Finally his eyes flicked up to mine and he had the look of someone who was about to reveal something strange and unexpected. 

JD: The other thing that’s going on is that eight or nine years ago I began going into a weird… not quite a trance state, but my eyes began blinking really rapidly while I was working. I made a recording of it and sent the video to neuroscientists at NYU and they came over. They tested me while I was painting, with blinking and then without blinking, and they told me that I was somehow going into an increased theta wave state while I was working. Theta waves are like a waking daydream state.

    This was startling. But John is not the only artist I know of who painted in a trance. The sculptor Chana Orloff became close friends with Chaim Soutine in the last decade of his short life. (He died in 1943, fifty years old.) She said of him: “He nurtured his idea for a painting for several months and then, when ready, started the work in a fury. He worked with passion, with fever, in a trance, sometimes to the music of some Bach fugue that he played on a phonograph. Once he finished the painting, he was weak, depressed, wiped out.” Soutine himself used to say that, in order for a painting to begin, he needed to be seized irresistibly by a subject. He called this “the miracle.” Between such trances, waiting for the miracle to intercede again, he would stew in sterile agitation.

    Perhaps because Soutine called it “the miracle,” or because John’s studio felt overwhelmingly like a religious sanctuary, I interpreted the details about John’s trance as confirmation that his paintings were made in a state of meta-rational or even mystical inspiration.

    CM: Do you pray?

    JD: No. Well. Right, it’s pretty close. So what happens is that all day long I’m going into that state to block out any rational thought. I used to be a different kind of painter, I used to be a figurative painter. And I’m always fighting that. 

    CM: Do you feel when you’re inside the trance, or maybe when you come out of it because you’re not conscious of what you’re feeling while you’re still in it, are you communing with something outside yourself? Or is it internal?

    JD: It’s inner-directed. The eye blinking is a way to not be able to look out. So I’m looking in at the image that I know is there in the painting.

    CM: You said image just now. Is it an image or is it a feeling?

    JD: It’s a sense. I know a lot about painting. I’ve spent my life not only painting but looking at painting, so when I go to this painting I don’t need to be aware of what I’m doing. All I need is to be present and I can fall back on it.

    I thought about what he said, about how the experience of painting demands that he retreat so deeply inside himself that his body moves to shut out any alien interference. At some point the change in his gaze abolishes the subject-object relationship. John’s testimony paired strangely with what I knew about his obsession with traveling to see paintings. He makes pilgrimages to artworks around the world. 

    CM: When you travel to go look at paintings, how can that help you as an artist who waits for the trance? Is it that you are going as deeply into their paintings as you try to go into yourself while you’re painting?

    JD: Yes, I go into the eye blinking state when I look at other paintings.

    I gasped. So the blinking had to be catalyzed by intense concentration, by some sort of raptness. I was reminded of Simone Weil’s remark that “attention, taken to the utmost degree, is prayer.”

    CM: What must it be like to have a sense of solidarity with great painters so deep that your own body treats their work as if it were a part of you? And does that feel inward or does that feel like going into them?

    JD: It feels like Stendhal syndrome, like a kind of euphoria. It’s this weird mixture of being able to look out but then bringing it inward. Everything is inward.

Stendhal syndrome is a psychosomatic condition wherein exposure to extreme beauty induces physical symptoms, such as chest pains, fainting, and rapid blinking. The condition is named after Stendhal’s report of his visit to the Franciscan church of Santa Croce in Florence, whose sixteen chapels contain an astonishing treasury of Renaissance painting. “I had palpitations of the heart…” Stendhal recalled. “Life was drained from me. I walked with fear of falling.” There are more violent accounts. Six years ago a visitor at the Uffizi Gallery died of a heart attack in the presence of Botticelli’s Venus. I suppose he died happy.

    CM: It strikes me as a kind of mysticism…

JD: Friends often compare it to deep meditation… though of course meditation usually requires stillness, and here I am running around the studio. I think of it as an altered state.

    CM: Can you simulate it?

    JD: Yes. I can turn the blinking on whenever I want. I can turn it on and off. I’m lucky that I can turn it off.

    CM: Does it ever overwhelm you?

    JD: There are times when I’ve been here all day and then I’m walking to my car and it’s still happening, and I have to stop myself. I can’t let that happen while I’m driving.

    CM: In those moments does it feel like intoxication?

    JD: Yeah.

    I have seen a video of John painting. Well, painting isn’t the right word exactly. In the clip he is using his hands, throwing them — and his whole body behind them — up against the canvas. A corporeal assault. It looks like intoxication. It doesn’t look remotely meditative. 

    CM: This area here [I pointed to a passage in the Paris Falling painting] — did you do that with your hands, too?

    JD: No, it’s mostly palette knives. Occasionally I use a brush. I also switched over from dominant hand to non-dominant hand just to have a second painter. You’re using a whole different brain circuitry, so that has been a very important part of this whole shift into this kind of work.

    A second painter. What an extraordinary thought. 

    CM: When did you start doing that?

    JD: I’ve always been right-handed. After an injury fifteen years ago I had to use my left hand till the right hand healed. Then, six or seven years ago, tendonitis forced me to paint with both hands. After several months of that something shifted: I could comfortably switch from left to right and back, and I noticed that not only was I thinking about space differently but my color sensibility was also quite different. The mark-making was also different, but that was just a motor skill variation. 

But I conceive of and treat color and space differently depending on the hand. My color left-handed has a wider range of tone and hue, also a warmer and slightly brighter palette, and a more arbitrary reaching for what seemed to be a more random or less controlled color. The eye blinking certainly helped facilitate that. I think I read somewhere that right hand/left brain is more rational. I followed that model, it seems.

    I thought of the Portuguese poet Pessoa’s heteronyms, the pseudonymous identities he invented, complete with names, biographies, and individual styles, in whose distinct voices he would write. Imagine them collaborating on a single project! 

    CM: When you’re in that state and then come out of it, do you ever feel like you’ve taken the painting away from where it needed to go?

    JD: Well, the whole project is this tug between figuration and abstraction, and so I’m constantly losing my way. 

    CM: So you’re really relying on your past. It’s not just breaking down what you’re putting on, it’s breaking down all the years of study and work that you did before. 

    JD: Right.

    The painters who pushed Western art into modernity all started with a classical education. But most of them moved entirely away from that world. It was a ladder they climbed up and then left behind. Picasso was not fighting with his figuration in his cubism. And he went back into figuration out of cubism. But the two were never alive in him fighting one another in a single work — not the way they are in John’s painting, anyway. 

    CM: What would happen if you moved beyond that bedrock, if you didn’t need it anymore? Could that happen?

JD: I think that’s the interesting thing in my painting: are these opposites dependent on each other? Could I paint the same things without going back and forth the way I do? The poet David Yezzi was talking about the white painting over there [John pointed across the studio to a far wall] and he said that there are painters who would do that in a day or a week but you wouldn’t have the spirit. It wouldn’t make sense for me — it wouldn’t feel the same, even if it looked the same, to make a painting that way. These canvases become a record of their own composition. Really that’s what they are — they are a record.

    CM: The idea that you mentioned earlier that seems central to the whole project is color making space. But then that has nothing to do with narrative. And yet you say that narrative is always central for you. 

    I was thinking about the difference between conceiving of a painting as an instance of chromatic and spatial relations, and conceiving of it as a story. 

    JD: It has to do with narrative because the memory is communicated through the color relations. Narrative isn’t essential in painting, but color and form are. I think of all paintings — even purely figurative, utterly non-abstract paintings — in terms of color and spatial relations. When I go to Italy and I spend time with Duccio and Pisano… they seem to me like what I’m trying to do. Like I’m trying to get to what they’re doing through action painting. It’s the spirit of those paintings but with the method of action painting. 

The critic Harold Rosenberg christened the New York style of painting which emerged in the mid-twentieth century “action painting.” Action painting was wholly new — which is to say, it was not to be found in Paris. Its practitioners, Pollock most consummately, conceived of painting not as representation but as an act in itself, wholly self-contained and self-sufficient, born of its own inner necessity. Rosenberg explained that “what was to go on the canvas was not a picture but an event.” The act of creation was the record of creation, and of the discoveries made in the thick of it. What appeared on the canvas was a surprise, a series of contingent, rhythmic surprises: “There is no point in an act if you already know what it contains.” In action painting, to paint is to experience something entirely different, entirely its own. “The new painting has broken down every distinction between art and life.” (In this way, at least, Cezanne was a kind of ur-action painter. When Emile Bernard asked him, “Aren’t nature and art different?” Cezanne proclaimed, “I want to make them the same.”)

    Is Dubrow a latter-day action painter? No, because in one crucial sense he works programmatically: he refers to the memory that exists outside him, that he falls into. He wants to create the conditions in paint in which he will feel again what he felt before. At least, part of him wants it that way. But John is indeed a kind of action painter because he knows that he cannot reproduce the old feeling perfectly. The painting will tell him what he feels. “I don’t want another drink,” James McMurtry sings, “I only want that last one again.” John recognizes that he cannot have that last drink again. The slash of his palette knife cannot adequately conjure the whip of the wind on his face no matter how deftly he wields it, and half of him doesn’t want it to do so. Half of him thinks in paint. So the act of creation is its own event, its own bizarre and unpredictable act. He sprints outward to the original memory and then inward toward something strange, some visceral painterliness, which transforms the memory into a new, other thing. Back and forth, back and forth. The canvas is stained with his sweat. The painting is the sweat. Accident, intention; creation, destruction; meditation, activity. All of it at once. Two hands, ten minds.

    CM: How much is serendipity part of your work?

    JD: Everything is a surprise while working, every mark, and I’m hyperaware of the accidents that happen. Everything emerges from the paint while holding on to the memory, and the fixed ideas about the memory are very strong — and then the serendipitous stuff is when I’m reaching for a random thing which has nothing to do with it, I’ll think okay I’ll try THIS, and that’s when really interesting things happen.

    CM: But it still feels like the original thing when you’re done with it?

    JD: Yeah. Well, I think it does, but then… when I went back to the place where this memory took place it felt nothing like this. So the memory is transformed through the process of painting. These paintings in a way are all about transformation. They begin with a very concrete memory that then so completely transforms that they become like these living forms. This to me seems like something alive. And even though it’s echoing the original memory, it becomes its own live thing. And it is its own life that has nothing to do with… 

    CM: But that’s what you mean by painting?

    JD: That’s how I think of painting.

    CM: What’s interesting is that your “illegible” canvases are completely coherent. But you have to spend time with them —

    JD: Yes, they are very slow. You have to really look, wait for them to open up, and not try to resolve the canvas into a comprehensible image but to understand it on its own terms, to recognize the internal coherence. What’s interesting is that in the end they somehow seem inevitable. I know it’s done when it feels like it’s become what it wants to be and when it gets to that place… well, it shuts off, I mean it stops inviting me in and psychologically they shrink. 

    CM: Because there are no boundaries while you’re painting it?

    JD: Right. There are no boundaries for my brain. My brain is awakened by it and I can get into it and then when it reaches coherence it shuts me out and I have to stop.

    CM: So you know when it’s done?

    JD: There are a lot of false endings because it’ll shut me out and I’ll stop and then I’ll look at it again and it’s open again. But in the end, when the painting shrinks, and I turn around it, it’s startling to see that suddenly the painting is completely contained —

    CM: Does that feel like a relief? Or does it feel like being shut out?

    JD: No, no. It feels like it’s finally taken on its own life. I’m not part of it anymore. I feel like I’ve taken myself out of it. I am not in there anymore. I have a memory of doing all this stuff to it, but I don’t know how that happened, and whatever force was moving through me, it doesn’t need me anymore.

    A Paschal Homily by Naomi Klein, with a Commentary

    I.

    On the second night of Passover, in the year of our Lord 5784, a seder was held in the streets of Brooklyn, in Grand Army Plaza, a block away from the residence of Senator Chuck Schumer. The event was called the Seder in the Streets to Stop Arming Israel. It was addressed by a number of anti-Israeli, anti-Zionist, and/or anti-Semitic speakers — after the wild blurring of those distinctions in the past year, the burden of clarification falls on the demonstrators, many of whose intense hostility to the existence of the Jewish state, and promiscuous political rhetoric, crossed the line into the ancient foulness a long time ago. Hundreds of protesters attended and hundreds were arrested, thereby reversing the order of the holiday and going from freedom to bondage. Their bondage, of course, did not last long; he is a fortunate man whose bondage is purely gestural.  

    I have not been able to establish whether anything remotely resembling a seder took place at the Seder in the Streets. (It sounds like the name of an old Richard Widmark movie.)  The political director of Jewish Voice for Peace explained at the gathering that “tonight’s Seder in the Streets will be happening on the second night of Passover, a holiday we observe every year that is all about liberation and how our liberations are intertwined with one another.” Well, not all our liberations: later in her statement she declared that “the Israeli government and the United States government are carrying out a genocide of Palestinians in Gaza, over 34,000 people killed in six months in the name of Jewish safety, in the false name of Jewish freedom.”  Here, for a start, was another instance of the popular misuse of the term “genocide,” which has now become a regular feature of progressive discourse. For all of Israel’s cruelties toward the Palestinians, it is a gross historical lie that the Jewish state ever set out to eliminate every last Palestinian and every last vestige of Palestinian culture, so that the people and the culture would disappear from the face of the earth.

Not even the Syrian war, before which the destruction in Gaza pales in grim comparison, was genocidal. Aren’t war crimes, or crimes against humanity, in which the charnel house of Syria abounded, evil enough? “Genocide” has become the term with which to describe the atrocity of which one most disapproves. There certainly are genocides in the world now — the Uyghurs most notably — but the left never marches for them. It never marched for Syria, either. An encampment on campus for the Rohingya? Not a prayer. Scores of thousands of dead Sudanese? It appears that you have to be fighting Israelis or Jews for progressives to bestir themselves on your behalf. Anyway, the definition of genocide is not quantitative: the Hamas savagery of October 7, even though it killed “only” twelve hundred people, was in fact genocidal, owing to the anti-Semitic and eliminationist motivations that are amply and explicitly articulated in Hamas’s literature.

None of this exonerates the Israelis from the high number of non-combatant deaths in Gaza. No, “non-combatant” is too cold: innocent men, women, and children. The retaliation for the Hamas attack has been ruthless; and whereas I have no idea how to compute the proportionality that is demanded by the rules of war, I am quite certain that monstrously disproportionate actions have taken place. We have been witnessing the hell of violations justifying violations justifying violations. The Israeli government — which, to the eternal disgrace of Zionism, includes a few ministers who do think genocidal thoughts — was dragged kicking and screaming to humanitarian assistance to Gaza; it was American pressure, that is to say, an expedient strategic consideration, that prodded the Israeli war cabinet to overcome its plain contempt for the population it was bombing. This was not the best it could have done. The Israeli notion that all Gazans are terrorists is as ludicrous as the Hamas notion that all Israelis are war criminals. The de-civilianization of others is a significant moment in their de-humanization.

Yet the tendentious application of the concept of genocide was not the most egregious bit of the peacenik’s contribution to the seder in the streets. If, as she says, all our liberations are intertwined with one another, why is the name of Jewish freedom false? What was Zionism if not the national liberation movement of the Jewish people? Perhaps someone would like to argue with a straight face that the Jews were not in need of national liberation, but it should not be controversial to suggest that such a person is an imbecile. “Have you ever tried playing the who-suffered-most game with Jews?” Dave Chappelle once remarked. “It’s very hard.” Unfortunately, versions of this malevolent imbecility now proliferate in the ubiquitous disqualification of Jews from the roster of oppressed peoples, as if the success of the Zionist endeavor to create a safe and strong and sovereign parcel for its hounded people should be held against it, and not as a sign of the moral seriousness with which a persecuted people went about rescuing itself.

    The left’s willed obliviousness to the epic history of Jewish victimization is doubly offensive because it is attended by a deep contempt for the equally epic efforts by Jews to put an end to their own victimization. Self-rescue in any group is wholly laudable. A people in pain may be forgiven for impatience, and admired for it if its impatience breeds practicality. Zionism is supremely an ideology of anti-wallowing. It represents an absolute refusal to tolerate the misery of one’s own. In this respect, I have long wished, respectfully and as a steadfast friend of the idea of partition, that the Palestinians would Zionize themselves, that they would be done with historical excuses and come to admire the quickening mentality of state-building. Institutions are nobler than intifadas. The Jewish state existed before it was declared; its sovereignty came last. In a world of Jewish wretchedness, there was no other way. Diplomacy had its place, but agency was everything. Justice for the Jews was justice by the Jews, exactly as justice for the Palestinians will be justice by the Palestinians.

    “At the core of the Passover story is that we cannot be free until all people are free,” the JVP woman declaimed. She was right. The problem is that her own slogans vitiate the universalism of her teaching. Insofar as progressive anti-Zionists reject the legitimacy of a Jewish state, they advocate an Orwellian Passover, for which all people are free except one. There is nothing false about the name of Jewish freedom. This must be granted if the conversation is to proceed. In Grand Army Plaza, of course, the objective was not conversation. The gathering’s immediate purpose was to protest an impending Senate vote on billions of dollars of military aid to Israel — hence its proximity to Schumer’s home. Chuck was Pharaoh. “We’re here to tell Senator Schumer that enough is enough,” the JVP woman asserted, a few thousand rhetorical levels below Moses’ original demands of Pharaoh. As in the original exodus, however, the Israelite bill passed.


II.

Politically, the Seder in the Streets was a failure. Emotionally, it sounds like it was a success. It consisted in the sanctimonious intoning of an entire anthology of progressive platitudes about the Israeli-Palestinian conflict and its meanings. In its secular way, the seder was entirely liturgical. Its call-and-response of political chants by the assembly of bitter Herbs was supplemented by a sermon by Naomi Klein, whose presence in this context was surely more exciting for the participants than a visitation by Elijah the Prophet would have been. On April 24, the Guardian in London published the text of her homily. In its way it is a precious document. I reproduce it here, its lines numbered to assist in a close reading, together with a commentary.

III.

    line 1: It is never a salutary impulse to identify with Moses on his way down the mountain. His rage was terrifying. He smashed God’s own writing. He despised his own people, who had failed yet another test. He kept his compassion from them. As is often the case with holy wrath, corpses ensued. Three thousand people died, brothers killed by brothers. A bad day. Moses fixed things with God, but still a bad day.

line 3: Another unfortunate impulse. There are many ways to read Scripture, many methods of interpretation, but one’s own politics is likely not the most rewarding of them. Why would one want to read this text as an ecofeminist even if one is an ecofeminist? (And what on earth do environmentalism and feminism have to do with the manufacture of an effigy of a calf out of the baubles of high net-worth individuals in the desert?) It is not the purpose of ancient writings to edify modern ideologies. Nor is it a shortcoming of the Torah if it is found to be lacking in ecofeminism. Why would one boast about a bias? The parochialism of the enlightened never fails to amuse. One of the goals of hermeneutics is to encourage the reader to get out more.

line 4: Before the interpreter moves on to arcane methods of interpretation, she should get the literal meaning straight. God’s ferocity is in no plausible way an expression of jealousy. Nothing in the text, or in its larger theology, suggests otherwise. Yahweh has many troubling anthropomorphic quirks, but He is not an idiot. His indignation here is directed at falsehood: He had vouchsafed them a revelation of the truth. Having demonstrated the veracity of His existence to the Israelites with the evidence of their own senses, in miracle after miracle, He is shocked by their reversion to the idolatrous error, to the metaphysical illusions of materialist Egypt. This disappointment is premised on His confidence that the Israelites, who were the first people asked to live with the burdensome intellectual requirements of monotheism, and therefore with its retraction of the emotional gratifications of polytheism, could rise to the spiritual difficulty; and when they fail to meet the challenge, He is volcanically angry. He insists that these people understand the truth. Did He make a bad bet on the spiritual capabilities of the Israelites, of ordinary men and women? To be sure, an intolerance of the frailty of human beings is unbecoming in a Deity, as it is unbecoming in a political movement that speaks in their name; but the crisis at the foot of the mountain was not caused by the injured vanity of an abstract being who was envious of the blingy materiality of a cow.

Nor was it the result of God selfishly hoarding holiness. I do not mean to make God’s apologies, but if He were a hoarder He would not have created the world to compete for His holiness. The medievals discussed this. They believed that creation was therefore the ultimate expression of divine love. Sanctity is not what God keeps but what God dispenses, and in certain schools of thought to the entirety of creation. There is nothing petty about the concept or its history. But Klein’s contrarian questions do not deserve such serious answers. She was just trying to impress her congregation with her skepticism about religion. Her sermon really has nothing to do with theological reflection. She was just intellectually accessorizing.

line 5: False idols? There is no other kind.

    line 6: Splendissima! Down with the material, up with the transcendent! But be careful about denouncing the small; trouble starts that way. The small is where we live. And what precisely is the progressive transcendent?

    line 8: Rabbi, such an assessment of the event is for others to make.

    line 9: Idolatry, in the Jewish tradition, is a very grave charge. The traditional understanding of idolatry is that it consists in the worship of the creation instead of the Creator. It is a misattribution of divinity. It comes in material and immaterial forms. Insofar as it consists in the overestimation of what one admires, it is a common malady.

    line 11: Surprise! But the preacher is correct about one thing, which she and her congregation have abundantly illustrated over the years: one can have an idolatrous relationship to ideas, to ideologies.

    line 13: The history of the acquisition of land by the founding Jewish settlers of the yishuv and by the relevant Zionist institutions is not remotely one of “colonial land theft.” This is an empirical matter. It has been archivally documented. It is certainly the case that some territories were acquired in battle, but the battle was for survival and it was visited upon Israel by neighbors who refused to consider seriously either the historic right of the Jewish people to the land or the sublime compromise of partition. The armistice lines of 1949 left Jewish forces and Arab forces in places that the United Nations had not assigned to them. The presence of a Jewish state in the ancient land is not an occupation. (But its borders should have more to do with security and morality than with antiquity.) At no point in its history did Israel launch a war of conquest, even when its leaders harbored fantasies of territorial expansion. The preacher in Grand Army Plaza should make herself aware of inconvenient facts and do her best to deal honestly with them. Nothing about this conflict is simple. In this way it differs from many of its analysts.

    But what about the occupied territories? The seizure of any or all of the territories that fell under Israeli dominion in the Six Day War was not an Israeli war aim, though a state that has successfully defended itself against many hostile armies cannot be blamed for wishing to end the hostilities in a strategically more advantageous position than when they began. In the aftermath of the Six Day War, the Jewish world was overwhelmed by a great triumphalism, which was only in part an expression of relief at not having been annihilated. There were ideological opportunists, too, who interpreted the unanticipated extension of Israeli rule over the new areas as a providential instrument for their own maximalist ambitions, secular and religious. But there were also some Israelis who, when the inebriation of victory wore off, recognized that sovereignty over the Palestinians in the newly acquired areas was a mortal trap for the Jewish state. I had an Israeli friend who wept with joy and then wept with dread. I remember arguing with my religious Zionist friends, still in high school, that if they really needed to find the hand of God in the new situation, they might consider that the Holy One had benevolently granted them, for the first time in Israel’s embattled history, this: land that Israel could surrender without surrendering itself. Years later I learned that I was groping for the concept of a bargaining chip. The bargain, of course, would be peace.

And so, for fifty-seven years, with increasing intensity and increasing frustration, liberal Zionists in the Israeli Jewish community and the American Jewish community who believe that the survival of Israel depends upon reconciliation with the Palestinians, and that the Palestinians, unlike the Arab states that attacked Israel, have a moral and historical claim that Jews must respect, threw themselves into “peace work,” politically and culturally — work from which today’s progressive anti-Zionists would like us to desist. Our work spoils their paradigm, which is that liberalism and Zionism are incompatible. Historically and philosophically, this proposition is outrageously untrue. What is true is that we doves, or pragmatists, or moderates, or two-staters, are the big losers in present-day Israel. But progressives, of all people, should understand that historical reversal is not the same as philosophical refutation. Stubbornness is sometimes an ingredient of integrity. We are not wrong in our hunger for Israeli-Palestinian reconciliation, we are merely unpopular — for now.

    So why this rush to the exits? Why should liberal Zionists complacently accept defeat instead of persisting in their exertions? Why this progressive counsel of despair, except that it goes so nicely with other dogmas of the post-colonial faith, and that it makes certain people less likely to be despised by the left, which is their idea of perdition? And what is a more principled and more practicable solution to the conflict than the adjacent states of Israel and Palestine? I have not heard one. Justice that is purchased with injustice is injustice, and this goes for all sides. So I do not have the insolence to recommend to the Israelis that for their own good, or because of a wrinkle in critical theory, they should erase themselves. If Israel commits crimes or abuses that must be criticized, we have plentiful grounds, liberal grounds, Zionist grounds, Jewish grounds, universal grounds, on which to criticize it. We need no lessons in the practice of self-criticism from Naomi Klein or Judith Butler. If Israel cured cancer, they would defend cancer.

    Zionism, before it is a historical worldview or a political program, is a conclusion drawn properly from the long history of Jewish weakness, an expression of Jewish self-respect, of Jewish honor. Do I mean to suggest that anti-Zionist Jews, or Jews who advocate for the dissolution of the Jewish state and a return to Jewish weakness, are dishonorable? I think I do.

    One additional animadversion. Why do Klein and her camp followers assume that anybody who is a Zionist and who refuses to entertain the erasure of Israel also supports, and is even elated by, all the carnage in Gaza?

line 14: There was indeed a roadmap that led from Egypt, but it led to Canaan, not to Israel. The command to exterminate the seven nations of Canaan was a ghastly thing — not only was it exceptionally cruel, but it prefigured the same hideous mistake that motivated the anti-Semitic murderers of medieval and modern Europe: that you can kill the belief by killing the believers. In the event, the archeologists tell us, the complete liquidation of the Canaanites never happened. And what about the Egyptians who pursued the Israelites in the desert and drowned in the Red Sea? The ancient rabbis propose that God Himself was incensed about the Israelite celebrations of their destruction. “The works of my hands are drowning in the sea and you come before me with song?!” Most importantly, what does modern Zionism have to do with the Bible? Not nothing, certainly; the inspirations for the restoration of the Jewish commonwealth do not all date from Herzl’s outrage at what they did to Dreyfus. But Klein is deploying the Bible exactly as the settlers in the West Bank deploy it: as some sort of blueprint, some sort of excuse, for modern politics. The Zionist exodus was different from the Biblical exodus. Nobody turned a river into blood or split a sea for the Zionists. There were no signs and wonders, except the wonder that people who staggered out of death camps could find the will to live again and cross the waters in the dark of night and participate in the construction of the secular means of their own salvation. There is a roadmap!

line 15: The promised land was not an idea, it was a place — it was, and is, soil. Before it became a metaphor — the Zion of the black churches in America, for example — it was soil, a place. This is a conflict about geography. For the Palestinians, too, the land is soil. They remember, and teach their children to remember, particular olive trees on particular slopes outside particular villages. (One of the conditions of peace is that they finally choose a state over the recovery of those olive trees.) To treat the promised land as a “transcendent idea” is to elide the earthly fury of this conflict. And also its perpetual danger: it is dangerous, after all, to sacralize soil. We have seen in many countries the catastrophic results of geographical mysticism. I confess that I have myself felt the pull of it. I love that land, whatever the political vicissitudes. I would love it even if an imperial power still ruled it. I love it for its beauty and its poetry. I love it because I am a Jew. And so I am here to bear witness that the love of the land, the vulnerability to the aura of its physical setting, does not make one a murderer. Instead it makes me wish more ardently to see peace in it. Note to progressive theorists: sometimes a concept can make you more heartless than a clump of dirt.

    The de-materialization of the land has become a central idea of Jewish anti-Zionists. In The Necessity of Exile, a memoir of his spiritual instability disguised as a study of ideas, Shaul Magid lavishes praise upon Rabbi Shimon Gershon Rosenberg, a strange right-wing figure known as Rav Shagar, who died in 2007 at the age of fifty-eight. Until not long before his death, he lived in the West Bank and founded and directed a variety of yeshivot. I have been told that Shagar was an extraordinary teacher of Talmud. His contribution to contemporary Jewish thought was twofold: a religious post-Zionism and a fusion of traditional Jewish concepts with post-modernism. (A religious zealot who propounded the indeterminacy of truth! Cool!) Shagar’s anti-foundationalist program, which in my view marks the end of belief itself, included his endorsement of a great nineteenth-century Hasidic master’s view that, in Magid’s words, “Eretz Yisrael is not a place but a state of mind.” Never mind that in his messianic thinking that rebbe spoke quite plainly about the land as a physical location. The excitement of a non-material land for the post-Zionist Magid is that it further enables his tedious swooning over exile, as if the romance of exile were not one of the oldest clichés of modern culture. The prolific Shagar once wrote an essay in which he taught, in Magid’s enthusiastic paraphrase, that “we must fold exile into the state itself.” By this he means more than a state of alienation (which Israel, like all states, long ago found ways to nurture). He is referring to a constitutive sense of ontological Jewish difference that not even the restoration of Jewish sovereignty can alter. We are getting into dark chauvinist waters here, in which any self-respecting progressive should be reluctant to swim. But not Magid; he will not be told that he came too late to be an exilic Jew. (Apparently there is not enough glamor in being only a diasporic Jew.) And so he concludes that “the establishment of the state is not a rejection of exile but rather a dialectical move, even a Hegelian one, that redirects exile into the state itself, and thereby elevates it to its next phase, the phase of the political, to a state of justice and compassion.” Who’s afraid of a dialectical move? Zionism has little to fear from such post-Zionism. But it peddles a distortion of Jewish ethics that must be rectified: the Jewish injunction to pursue justice makes no distinction between political dispensations, between physical locations, between exile and statehood, between homelessness and home. The Jew must seek justice wherever he is. The exilic condition is not ethically privileged, except perhaps in the sense that powerlessness makes many transgressions impractical. A state, by contrast, has the power to commit crimes. There has never been a state that has not committed crimes. A critical and even adversarial stance is therefore a prerequisite of responsible citizenship. Ethically speaking, territoriality is more exacting than extra-territoriality.

    line 17: Having a military does not make you militarist. Militarism is the view that all problems should be solved by force. Sometimes I see evidence of that disastrous view — a despair of diplomacy — in certain Israeli politicians and governments, though not so much in the Israeli army. To defend yourself with a military is also not militarism. Neither is a people’s army, if the security situation warrants it, or universal conscription, all those guns slung jarringly over the shoulders of young men and women who must disrupt their youth to guard their country against its enemies, who are not imaginary. Have Israeli soldiers committed abuses? Of course. But neither is that militarism. Owing to its geopolitical situation, Israel is a Western-style country that cannot suffice with Western-style consumerism as its way of life. (Though the Tel Aviv nights could fool you.) It must also organize itself for its security. Israel has provided an encouraging answer to the question of whether consumerist societies, materialist societies, lifestyle societies, can muster the inner resources that are required for their mobilization in their own defense, though I am not sure how generalizable to other Western societies, to us, its example is. 

    Israel is not an ethnostate, though its politics is now afflicted by a new ethnonationalism, like many other states. Israel is a nation-state built on the old European model of the nation-state, which I call the theory of the perfect fit. In this view, every nation should be incarnated in a state and every state should embody a nation. Ideally, the political borders and the cultural-religious-ethnic borders should coincide. The problem is that they never do, and so there appears what became known as the Problem of Minorities. As long as the minority on the “wrong” side of the border is small, such nation-states are workable. But when the minority grows larger, the majority panics, and one of the manifestations of this panic is a xenophobic infatuation with itself; tolerance can metamorphose into intolerance in no time at all. (The Peel Commission of 1937, which proposed the idea of partition because it concluded that Arabs and Jews living together in a single state was a recipe for disaster, also recommended population transfers, voluntary or otherwise, in and out of the Jewish state and the Palestinian state.) Many European countries are now confronting this problem, or choosing not to confront it. So is the United States, where white panic has become a decisive force in our politics. And so is Israel.

    There are only two possible solutions to the fiction of the perfect fit: the redefinition of the nation in the nation-state as multi-ethnic, or fascism. Such a redefinition, which has become even more urgent in our era of vast migrations, would render the “problem of minorities” moot. In a self-defined multi-ethnic society, there are no natives and no foreigners. The streets flow with legitimacy. Israel is a multi-ethnic nation. (The Jewish people is a multi-ethnic people.) This is the demographic fact. But its right-wing radicals fear this fact, and their fear has been promoted into hatred, and they themselves have been promoted to the upper echelons of Netanyahu’s disgusting government. They have set out to overthrow the Enlightenment values that are enshrined in Israel’s Declaration of Independence. But there is, again, a struggle. And progressives, when they describe Israel as an ethnostate, are pretending that the struggle is over and that the villains have won. Genossen, this is not helpful! 

    Pessimism is not only an analysis, it is also a choice. Why do Jewish anti-Zionists want to pull the plug on the struggle for liberal Zionism and the two-state solution? Because it appears to be losing? But a cause is not a fair-weather activity. When I read Naomi Klein’s book on capitalism and the climate, I did not dismiss it as quixotic, even though the likelihood of her program’s realization seems low. At present there is a strong basis in political and economic reality for pessimism about the renunciation of fossil fuels and the “extractivist” paradigm. Klein’s cause seems doomed. So why doesn’t she give up? For the same reason that I don’t.

    It is true that John Rawls would not have written the Israeli law of return. The law enshrines a prior ethnic preference. But who is so ignorant, or so callous, that they cannot comprehend why there must be a secure place on earth to which Jews may flee in the assurance that they will not be turned away? Fleeing, after all, has been a primary activity of Jews over the centuries. For the same reason, the Jews must always form a demographic majority in Israel, so that no blocking minority or other majority makes Jewish asylum impossible. I admit, as a proponent of equality in an open multi-ethnic nation-state, that this is morally embarrassing. In this respect, my liberal Zionism is not completely consistent, but neither was Camus’ invocation of his mother in his discussion of justice in Algeria. If Jewish experience teaches us that we must be stringent and self-reliant about our survival, so be it. Survival, too, is a moral obligation. Anyway, a Palestinian right of return is a similar preference that one day will be extended by the state of Palestine. I would have thought that the extension of such a privilege as a response to their decades of displacement would be a significant Palestinian incentive for hastening the creation of a Palestinian state.

    line 18: The Nakba was not the case “from the start.” The expulsions that Klein regards as the essence of the Zionist enterprise did not occur until the war of 1948-1949, out of a mixture of strategy, battlefield improvisation, and chaos. I do not mean to excuse them, but such events are hardly unknown in the history of warfare, even in wars of liberation that have met with the approval of the left. The Nakba was also preceded by a protracted period of Arab violence against Jews, and by Arab alliances with the Third Reich. (In the 1920s and 1930s Arab attacks on Jewish towns and villages were accompanied by the cry Itbah al-yahud! or “slaughter the Jews!”) Again, I do not mean to excuse the pain that Jews inflicted on Palestinians; not at all. How could an ethically and historically self-aware Jew excuse it? But it is not too much to ask that people who wish to make grandiose interventions in this debate know some history. The sloppiness is insulting.

    line 23:  I was not aware that Israel is responsible for the Sisi regime. As I recall, not long after a million Egyptians demonstrated against authoritarianism in Tahrir Square, a million Egyptians demonstrated for authoritarianism in Tahrir Square. To be sure, the election of Mohamed Morsi of the Muslim Brotherhood to the presidency of Egypt rattled the Americans, the Israelis, the other Sunni states, and many Egyptians, but we should all have kept our heads, at least if we are serious about democratization as an objective of foreign policy. (Of course progressives regard democratization as a sinister euphemism for American imperialism, but that is for another day.) Israel’s lack of enthusiasm for democratization in the Arab world has roots in the old exilic preference for vertical alliances with rulers over horizontal alliances with populations, for reasons that are not hard to understand, though Yosef Hayim Yerushalmi showed that the vertical alliance, too, was a myth. The harsh disappointments of recent years notwithstanding, I am not prepared to give up once and for all on the hope for democracy in the Arab world. Sisi’s regime is despicable, and it is an ISIS-making machine. But what does Zionism have to do with it, except insofar as Zionism is the cause of all evil?

    line 25: An ugly kind of freedom for whom? I would not treat the achievement of Jewish freedom so lightly, especially at a seder. The plight of the Palestinians is not the entirety of what one needs to know about Israel.

    line 27: The Jew as Pharaoh: there is always a cheap thrill in such an inversion. It unburdens the post-Zionist (and the anti-Semite, though I am not accusing Klein of anti-Semitism) of any special sensitivity to Jewish fate and its implications for politics. Yes, to regard human beings in the generalizations of social science — in this instance, as demographic threats — is inhumane, though policy and politics do so all the time. But the notion that Israelis have concluded from their insistence upon a Jewish majority in Israel that they must slaughter Palestinian children is grotesque. Moreover, if it has been a goal of the diabolical Israelis to diminish Palestinian fertility rates, they have failed miserably. Klein should pause to consider the distinction between criticism and slander. And she can take some comfort that Pharaoh spared the daughters. Maybe he was an ecofeminist, too.

    line 28: No. What has brought us to our present moment of cataclysm is this: three thousand Hamas terrorists attacked Israel on October 7, 2023, and butchered and raped and incinerated men, women, and children. If they had not done so, all the Israelis and Palestinians who died violently since October 7 would still be alive. The absence of any mention of this depravity in Klein’s paschal sermon is despicable. Another example of the left’s universalism minus one. 

    line 35: Every Jewish value? Including holy war, which is also a Jewish value? Or anti-pagan violence? Or family purity? This woman who speaks so categorically about Judaism appears to know little or nothing about it. In American Jewry now, the surest sign of Jewish ignorance is to exalt “the value we place on questioning.” But goyim ask questions, too! And the Zionist tradition is a parade of questions and challenges and quarrels. Oh yes, and the answers. Questioning is the beginning, not the end, of intellectual responsibility. In the Talmudic tradition, the number of subjects about which we must suffice with questions, because we will not have answers until the prophet Elijah delivers them, is exceedingly small. We sometimes confer too much prestige on questions. (At the seder, as Klein points out, questioning is the child’s task.) In any event, Klein is a creature of answers posing as a creature of questions. It is a neat trick, to traffic in certainties and portray yourself as a champion of doubts.

    line 38: How on earth has Zionism betrayed the love we have as a people for text and for education? Pass the maror, please.

    line 41: I had not encountered the term “scholasticide” before. I discovered that on April 18 a group of UN human rights experts in Geneva issued a press release in which they “expressed grave concern over the pattern of attacks on schools, universities, teachers, and students in the Gaza Strip,” raising serious alarm over the systemic destruction of the Palestinian education system. “With more than 80% of schools in Gaza damaged or destroyed, it may be reasonable to ask if there is an intentional effort to comprehensively destroy the Palestinian education system, an action known as ‘scholasticide,’” the experts said. The damage wrought upon education and culture in Gaza by the Israeli campaign must indeed be overwhelming. But there are collateral effects in war, in this war and every other. Not everything that is destroyed in war was targeted for destruction. That is one of the reasons that wars should be avoided: they are wanton. There is no such thing as “surgical” bombing. But there is something demagogic, an agit-prop quality, about the “-icide” construction. Is the destruction of a vineyard vinocide?

    line 48: She flatters herself.

    line 49: The phrase “our Judaism” contains much less authority than Klein thinks it does. Ben Gvir and Smotrich also have “their” Judaism. Their Judaism can indeed be contained by an ethnostate, so to hell with them. Such a Judaism, every customized Judaism, is nothing like the actually existing Judaism, historical Judaism, Judaism in its classical sources, Judaism in all its text-based variations, which is an irreducible alliance of universalism and particularism. The tangle, the incongruity, the simultaneity of all the imperatives, is the point. Every other version is cherry-picked. Klein’s “internationalist” Judaism is similarly an arbitrary doctrine invented in the image of a political desire. Does she really not see the “nationalist” elements in the Bible and the rabbinical tradition? The gravity of her opinion about Judaism is not enhanced by her assumption that she can blithely wave them all away. Her rejection of particularism has no basis whatever in our religion. Even the prophets, the ones with the prooftexts beloved of lion-and-lamb progressives, were particularists; or more precisely, they taught the coincidence of the local with the global. The internationalist Klein should shop elsewhere for precursors.

    line 51: Here is another post-Zionist shibboleth: that Israel is bad for the Jews. But Jewish nationalism never promised a world free of Judeophobia. It promised only a haven from it and a defense against it. And the world without the Jewish state was not exactly good for the Jews.  O, the bliss of subalternity!

    line 52: Neither is “my” Judaism. Solidarity with the downtrodden, including Palestinians, is perfectly compatible with it, and even required by it. Klein’s air of moral superiority is insufferable.

    line 53: Gender? See paragraph thirteen of the Declaration of Independence of the State of Israel, which includes a guarantee of “complete equality” with regard to sexual difference.  In 1948! Klein might also take an interest in the shelter provided in Tel Aviv for Palestinian LGBTQ people who fear for their lives at home.

    These debates discomfit me because they make truthful claims sound like apologetics. The rhetorical situation is rigged against elementary corrections, which come to seem like partisan pleadings. I do not deny my partisanship, obviously; but I insist that objectivity, or the search for it, is the obligatory accompaniment of partisanship. Not perfect objectivity, of course; but the impossibility of perfect objectivity must not provide cover for the whateverist epistemology that now governs our culture. The purpose of objectivity is not to ruin our commitments but to clarify them, to test them, to make them intellectually respectable. A wise philosopher has described our optimal mental situation as “positional objectivity.” There is an easy way to check on positional objectivity: it comes with scars. Those scars are the traces of the positions that one wished to espouse but discovered that one could not, because their usefulness for one’s side could not withstand the honest acknowledgement that they are false. He who finds no fault in his own side, who lives without intellectual dissonance and moral friction, is a liar. (Honesty compels me to add that for this reason I have admired Klein’s withering critique of the environmentalist elite and the “extractivist left.”)

    line 57: As it happens, the Passover seder is a peculiarly bad illustration of the portability of Judaism: it is a service — not a technology! — for a table set with symbolic objects and performed by housed people who are enjoined to imagine empathetically the unhoused existence in the desert and the unhoused generally. It is indeed portable — but so is all of Judaism since the fall of Jerusalem, when a far-seeing rabbi of the first century proclaimed the dissociation of the religion from its capital. The adjustment of Judaism to extra-territoriality was never a choice for extra-territoriality. And there are Passover duties beyond the Passover seder that require a synagogue or meeting place. Even when we wandered, we were not light on our feet.

        (Oh what’s the use?)

    line 61: Was the exodus a revolution? Modern revolutionaries have thought so, but they departed from the ancient model. The Egyptian tyranny was not deposed. The slaves did not replace the masters; the slaves left. (The Jews have never sought to overthrow their oppressors. They sought instead to get beyond their sway, to be left alone to be themselves.) The freedom for which the slaves departed Egypt was not what we mean by political liberty. Instead they were given a new metaphysics of obedience. But there is one respect in which the saga of the Israelite liberation brings to mind our own perplexities about authoritarianism: the mentality of servitude survived the experience of servitude. The riddle of democratization is that you must already know what it is like to live democratically in order to live democratically. How, then, does democracy begin? Memory is a terrible saboteur.   

    line 65: The list of Zionist and Israeli peace plans is long, which is why it makes for melancholy reading. And the list of Zionists and Israelis who opposed those peace plans is also long, which is why it makes for even more melancholy reading. There has never been a Zionist consensus, except perhaps in the early 1940s, when the prospect of extermination concentrated the Jewish mind in favor of statehood. The old Zionist disputations have never been resolved. They will be on the ballot in the next Israeli election.

    line 70: There is no more definitive sign of moral frivolity in the discussion of Israel than to mock Iron Dome.

    line 74: You are the exodus? Then go away.

    line 77: Oh, I see. You have already gone. Somehow we will have to manage without you.

    As for our kids: you remind me of one of the most foolish comments of our time. “Young people are just smarter,” Mark Zuckerberg instructed in 2007. It was a sentiment that updated the most ludicrous generational conceits of the 1960s, as does your little boast about stealing our children. Listen. You do not know our children. You know only the ones who follow you. But we have our little darlings, too, a prodigious number of them, and they live in a world, a world of young and old, that is beyond your grasp – a liberal world, a conservative world, a Jewish world, a Zionist world, a traditional world, a patriotic world, a peace- and decency-seeking world. Their loyalties are not blind and their sentiments are not inauthentic because they are not your loyalties and your sentiments. You are making a mistake. Our kids are not with you now. Those are your kids. I wonder how many future venture capitalists and litigators are among them. Believe it or not, there are larger and deeper places than the quads, places more consequential for the future of the world, and for its betterment, than Grand Army Plaza on the night when you choose to deliver a homily there. No cause will succeed that cannot see beyond itself. For heaven’s sake, woman, look at life from both sides now.

    Like Peeling Off a Glove

    Reflecting on Philip Roth in Harper’s not long ago, the journalist Hannah Gold observes that few of the novelists she read during her high school years “captured my imagination and became my companion throughout adulthood the way Roth did.” It is a moist confession familiar to writers who recall clinging to Little Women in faraway childhood with similar ardor. Yet now, in full maturity, Gold sees this transfiguring devotion as touching on “questions of inheritance as a problem of influence.” And in pursuit of such spoor — directly as reporter, aslant as skeptic, but chiefly as admittedly recovering Roth addict — she recounts her impressions of “Roth Unbound,” a conference-cum-dramatic-staging-cum-fan-tour dubbed “festival” that unfolded in March of last year at the New Jersey Performing Arts Center in Newark, Roth’s native city. Stale though it may be, she calls it, in a rare flash of sinuous phrase, “the physical instantiation of a reigning sensibility.”

     

    What remains in doubt is whether her recovery is genuine, and whether she has, in fact, escaped her own early possession by the dominance of a defined sensibility. The latter-day Newark events she describes mark the second such ceremonial instantiation. The first was hosted by the Philip Roth Society and the Newark Preservation and Landmarks Department, and by Roth himself, in celebration of his eightieth birthday. Unlike on the previous occasion, the honoree in 2023 was in a nondenominational grave at Bard College, but the proceedings were much the same as ten years before: the bus tour of Rothian sites and its culmination at Roth’s boyhood home, the speeches, the critical and theatrical readings, the myriad unsung readers, gawkers, and gossips. With all this behind her — three nights in a “strange bed” in a “charmless” hotel, the snatched meals of chicken parm and shrimp tacos — Gold recalls her fervid homeward ruminations in a car heading back to writer-trendy Brooklyn:

                      

               I saw before me this distinguished son of Newark, his sentences like firm putty in my mind. I wanted to give them some other form, to claim, resist, and contaminate them, then release them back into the world, very much changed. My whole body went warm just imagining it, turning the words inside out over themselves the way that someone — maybe you, maybe me — peels off a glove.

     

    The concluding image echoes an exchange between Mickey Sabbath and a lover named Drenka, taken from Sabbath’s Theater and quoted by Gold in a prior paragraph:

     

               “You know what I want when next time you get a hard-on?”

               “I don’t know what month that will be. Tell me now and I’ll never remember.”

               “Well, I want you to stick it all the way up.”

               “And then what?”

               “Turn me inside out all over your cock like somebody peels off a glove.”

     

    But set all that aside — the esprit d’escalier dream of usurpation, the playing with Roth’s play of the lewd. These contrary evidences and lapses into ambivalence, however pertinent they may be to Gold’s uneasy claim to be shed of Roth’s nimbus, are not central to her hope of unriddling the underlying nature of inheritance and influence. A decade hence, will there be still another festival, and another a decade after that? Influence resides in singularity, one enraptured mind at a time, not in generational swarms. Besides, influential writers do not connive with the disciples they inflame, nor are they responsible either for their delusions or their repudiations.

     

    The power both of influence (lastingness apart from temporal celebrity) and inheritance (reputation) lies mainly in the weight, the cadence, the timbre, the graven depth of the prose sentence. To know how a seasoned reputation is assured, look to the complex, intricate, sometimes serpentine virtuosity of Dickens, Nabokov, Pynchon, George Eliot, Borges, Faulkner, Proust, Lampedusa, Updike, Woolf, Charlotte Brontë, Melville, Bellow, Emerson, Flaubert, and innumerable other world masters of the long breath. But what of the scarcer writers who flourish mainly in the idiom of the everyday — in the colloquial? One reason for the multitude of Roth’s readers, as exemplified by the tour buses, is too often overlooked: he is easy to read. The colloquial is no bar to art, as Mark Twain’s Huck Finn ingeniously confirms; and dialogue in fiction collapses if it misses spontaneity. A novel wholly in the first person, and surely a personal essay, demands the most daring elasticity, and welcomes anyone’s louche vocabulary. (Gold is partial to “cum.”)

     

    Roth’s art — he acknowledges this somewhere himself — lacks the lyrical, despite Gold’s characterization of it as “sequestered in enchantment,” a term steeped in green fields and fairy rings. Elsewhere she speaks of Roth’s “lyrical force,” but only as it manifests in the context of Sabbath’s immersion in Lear; then is it Roth’s force, or is it Shakespeare’s? Roth’s own furies come in flurries of slyness, lust, indirection, misdirection, derision, doppelgangerism, rant. Gusts of rant; rant above all. Gold’s desire to “contaminate” Roth’s sentences would be hard put to match his own untamed contraries. Nor can she outrun the anxiety of his influence in another sense: she is a clear case of imitatio dei — would-be mimicry of her own chosen god, and more than mimicry: an avarice to contain him, to possess him, to inhabit him, to be his glove. It is an aspiration indistinguishable from sentimentality: emotion recollected in agitation. Gold the ostensibly hard-bitten reporter, the wise-guy put-downer, the breezy slinger of slangy apostrophes, is susceptible to self-gratifying — and hubristic — yearnings. “I’d like to possess Roth in ways I’d hope to see more of his readers do as well: to take what creative, licentious force I need, and identify the Lear-ian corners in my own brain.” But this is to mistake both Roth and Lear. Lear’s frenzies are less licentious than metaphysical. Roth’s licentiousness is more grievance-fueled than metaphysical; he is confessedly an enemy of the metaphysical.

     

    Still, the underside of Roth’s satiric bite can be its opposite: a leaning toward extravagance of sympathy. The Roth parents in The Plot Against America, a relentless and not implausible invention of a fascist United States under a President Lindbergh, are imagined in the vein of an uneasy yet naive and pure-hearted goodness. As they tour the historical landmarks of Washington, the father’s instinct for the greatness of America is redolent of a schoolroom’s morning recitation of the Pledge of Allegiance. But while the novel is a brainy and wizardly achievement of conjecture clothed in event heaped on fearsome event, it also sounds the beat of allegory’s orderly quick-march. In “Writing American Fiction,” an essay published in Commentary as early as 1961, Roth was already denying contemporary political allusions in his work. Assessing Nixon, his chief bête noire at the time, he insisted that “as some novelist’s image of a certain kind of human being, he might have seemed believable, but I myself found that on the TV screen, as a real public image, a political fact, my mind balked at taking him in.” A decade later, in savaging Nixon in Our Gang, Roth’s mind, and his fiction, no longer balked. And who can doubt that beneath his fascist Lindbergh lurks a scathing antipathy to George W. Bush and Donald J. Trump?

     

    The heartwarmingly patriotic fictive father whose family is assailed by creeping authoritarianism is not the only Rothian father given to all-American syrup. He emerges again in American Pastoral, where the syrup is fully attested both in the novel’s title and in the person of blue-eyed Seymour “Swede” Levov, a successful Jewish glove manufacturer, Marine veteran, and idolized athlete, a family man married to a beauty pageant queen — in an era when it was requisite for contestants in their swimsuits to prattle American sentiments as proof that they were more than starlets. This unforgiving caricature implodes when Merry, Levov’s daughter, is revealed to be a revolutionary bomber in the style of the 1960s Weathermen.

     

    Close kin to Levov is Bucky Cantor of Nemesis, another accomplished Rothian athlete, and a devoted playground director and teacher during the polio epidemic of the 1940s, when it was known as “infantile paralysis” and had no countering vaccine. He, like the dutiful Roth parents, is one more conscious avatar of spotless good will. His fiancée, a counselor at a children’s summer camp, persuades him to join her there to escape the devastating spread of polio he sees on the playground. And it is by means of this tender exchange, which takes place during an idyllic island holiday, that nemesis arrives, as it must, in the form of the unforeseen. Afflicted as an adult by the crippling disease, and festering with guilt over the likelihood that it was he who carried polio from the playground into the camp, Bucky is a man broken forever. He will never again throw a javelin. He will never marry. But it is just here, in the lovers’ island murmurings, that syrup overtakes not merely the novel but Roth himself. Tenderness is his verbal Achilles heel: an unaccustomed flatness of prose, passages of dialogue that might have been lifted from a romance novelette. Gone is the Rothian irritability, the notion of the commonplace overturned, the undermining wit. In the absence of excess, in the absence of diatribe and rage, the sentences wither. Triteness is caricature’s twin.

     

    As for self-caricature: asked in an interview at Stanford University in 2014 whether he accepted the term “American Jewish writer,” Roth grumbled,

                 

               I flow or I don’t flow in American English. I get it right or I get it wrong in American English. Even if I wrote in Hebrew or in Yiddish I would not be a Jewish writer. I would be a Hebrew writer or a Yiddish writer. The American republic is 238 years old. My family has been here 120 years, or for more than half of America’s existence. They arrived during the second term of President Grover Cleveland, only seventeen years after the end of Reconstruction. Civil War veterans were in their fifties. Mark Twain was alive. Henry Adams was alive. Walt Whitman was dead just two years. Babe Ruth hadn’t been born. If I don’t measure up as an American writer, just leave me to my delusions.

     

    What might Henry Adams say to that? Or Gore Vidal?

     

    And to reinforce his home-grown American convictions, Roth went on (but now in an unmistakably long breath) to invoke the density of the extensive histories that engrossed him: “the consequences of the Depressions of 1783 and 1893, the final driving out of the Indians, American expansionism, land speculation, white Anglo-Saxon racism, Armour and Smith, the Haymarket riot and the making of Chicago, the no-holds-barred triumph of capitalism, the burgeoning defiance of labor,” and on and on, a recitation of the nineteenth century from Dred Scott to John D. Rockefeller. “My mind is full of then,” he said.

     

    But was it? In Roth’s assemblage of family members, fictional and otherwise, his foreign-born grandmother is curiously, and notably, mostly absent. “She spoke Yiddish, I spoke English,” he once remarked, as if this explained her irrelevance. Was this insatiable student of history unaware of, or simply indifferent to, her experiences, the political and economic circumstances that compelled her immigration, the enduring civilization that she personified, the modernist Yiddish literary culture that was proliferating all around him in scores of vibrant publications in midcentury New York? Was he altogether inattentive to the presence of I. B. Singer, especially after Bellow’s groundbreaking translation of “Gimpel the Fool,” which introduced Yiddish as a Nobel-worthy facet of American literature? It cannot be true that writers in Hebrew or Yiddish (in most cases both, plus the vernacular), however secular they might be in outlook or practice, escaped his notice — as Eastern European writers, many of them Jews, whose various languages were also closed to him, did not. Speculation about the private, intimate, hidden apprehensions of Roth-the-Fearless may be illicit, but what are we to make of his dismissal of the generation whose flight from some Russian or Polish or Ukrainian pinpoint village had catapulted him into the pinpoint Weequahic section of Newark, New Jersey? Was it the purported proximity of Grover Cleveland, or the near-at-hand Yiddish-speaking grandmother, who had made him the American he was?

     

    Had Roth lived only a few years more, he might have discovered a vulnerability that, like the Roth family under President Lindbergh, he might have been unprepared to anticipate. Never mind that as the author of Portnoy’s Complaint and the short stories “Defender of the Faith” and “The Conversion of the Jews” he was himself once charged with antisemitism. Married to the British actor Claire Bloom and living in London, he experienced firsthand what he saw as societally pervasive antisemitism. But this, he concluded, was England; at home in America such outrages were sparse. One unequivocal instance was that of poet and playwright Amiri Baraka, né LeRoy Jones, the New Jersey Poet Laureate whose notorious 2002 ditty asked, “Who knew the World Trade Center was gonna get bombed / who told the 4000 Israelis at the Twin Towers / to stay away that day / why did Sharon stay away” — implying that the Jewish state had planned the massacre. Responding to protests, New Jersey removed Baraka by abolishing the post of Laureate. Roth, incensed by a writers’ public letter in support of Baraka, excoriated him as “a ranting, demagogic, antisemitic liar and a ridiculously untalented poet to boot.” So much for one offender a quarter of a century ago; but would proximity to Grover Cleveland serve to admonish the thousands of students across countless American campuses seething with inflammatory banners and riotous placards who traffic in similar canards today?

     

    And here in the shadow of what-is-to-come crouches the crux of the posthumous meaning of Philip Roth. No one alive can predict the tastes, passions, and politics of the future. No critical luminary can guarantee the stature of any writer, no matter how eminent — not even the late Harold Bloom, whose rhapsodic anointment of Roth named him one of the four grandees of the modern American novel. Inexorably, the definitive arbiter, the ultimate winnower, comes dressed as threadbare cliché: posterity.

     

    Some have already — prematurely? — disposed of any viable posterity for Roth, and for Bellow as well, “a pair of writers who strong-armed the culture” and whose hatred and contempt for women (an innate trait of the Jewish male writer?) dooms them, as Vivian Gornick suggests, to the crash of their renown. Yet the charge of misogyny diminishes and simplifies Roth to a one-dimensional figure, as if his work had no other value. Demand that a writer be in thrall to the current prescriptive policies of women’s (and gender) studies departments, and tyranny rules; every consensual relationship deserves punitive monitoring. And must rascally Isaac Babel, a bigamist, also be consigned to eclipse, or was his execution in Stalin’s Lubyanka Prison penalty enough? What of Dickens, who attempted to shut up his discarded wife in a lunatic asylum? Should David Copperfield be proscribed? 

     

    No writer can be expected to be a paragon; writers are many-cornered polygons. Gold is more forgiving than Gornick of Roth’s depiction of female characters. “I have no desire,” she affirms, “to expunge charismatic sexism from the page,” and asks that it “be read as libidinal drive, and a creative force in its own right, without being reduced to righteousness or piety.” But the eventual status of Philip Roth under the aegis of futurity will likely depend neither on sullen antipathies nor on greedy panegyrics. Posterity itself differs from era to era. Is there some universal criterion of lastingness — some signal of ultimate meaning — that can defy the tides of time, change, history? 

     

    Roth found it in mortality. It came after the hijinks, the antic fury, the vilifications of this or that passing political villain, the urge to startle and offend and deride, the floods of social ironies, the gargantuan will to procreate sentences. It came late, when mortality came for him. And so the writer who commanded that no kaddish be permitted to blemish his obsequies ends, after all, in the grip of his most-eluded nemesis — and the most metaphysically acute.

     

    The Olive Branch of Oblivion

    To run out of memory, in the language of computing, is to have too much of it and also not enough. Such is our current situation: we once again find ourselves in a crisis of memory, this time marked not by dearth but by surplus. Simply put, we are running out of space. There is no longer enough room to store all of our data, our terabytes of history, our ever-accumulating archival detritus. As I type, my computer labors to log and compress my words, to convert each letter into a byte, and to file each byte at a hexadecimal “memory address.” This procedure is called “memory allocation,” a process of sifting, sorting, and erasing without which our devices would cease to function. For new bytes to be remembered, older ones must be “freed” — which is to say, emptied but not destroyed — so as to prevent what are called “memory leaks.” Leaks are to be avoided because, wherever they occur, blocks of precious computing memory are forever fated to remember the same stubborn information, and therefore rendered useless. For memory allocation to function smoothly, the start and finish of each memory block must be definitively marked. “In order to free memory, we need to keep better track of memory,” one developer advises. Operating systems, unlike the humans for whom they were designed, are built to tolerate little ambiguity about where memory begins and where it ought to end. 
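
    For any reader who wants the machinic lexicon made concrete, here is a minimal sketch in C — my own illustration, assuming nothing beyond the language’s standard allocator, and no part of the essay’s sources — of a block allocated, a block leaked, and a block properly freed:

        #include <stdlib.h>
        #include <string.h>

        /* A block is "freed" -- emptied but not destroyed -- when its
           address is returned to the allocator for reuse. A "leak" is a
           block whose address has been lost before it could be freed. */
        int main(void) {
            char *word = malloc(16);     /* allocate a 16-byte block */
            if (word == NULL) return 1;  /* allocation can fail */
            strcpy(word, "remembered");  /* fill the block */

            word = malloc(32);           /* LEAK: the first block's address
                                            is overwritten, so its 16 bytes
                                            can never be freed -- fated to
                                            remember the same information */
            if (word == NULL) return 1;

            free(word);                  /* the discipline the developer
                                            advises: keep track of each
                                            block, and free it when done */
            return 0;
        }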

     

    The machinic lexicon is both a site of and a guide to the current memory crisis. We are living through the tail-end of the “memory boom,” immersed in the memory-soaked culture that it coaxed into being, a culture now saturated with information, helplessly consumed by the unrelenting labor of data retrieval, recovery, and storage. Even the computers are confused, for deletion does not mean what it used to: when profiles, usernames, or files are erased they are often replaced by what are called “ghost” or “tombstone” versions of their former selves, and these empty markers of bygone selves haunt and clutter our hard drives. Fifty years ago, memory became a “best-seller in consumer society,” as the great historian Jacques Le Goff lamented. The new prestige of memory, its special authority for us, was evident before the digital era, in culture and history and politics; but today, with the colossus of digital memory added, I suspect that we are watching as memory’s hulking mass begins to collapse under its own weight. 

     

    It is a physical crisis as well as a philosophical one: the overdue reckoning with corrosive memorials — with the contemporary ideal and imperative of memorialization — has not been answered with a reappraisal of what memorials are for and what they can do, but rather with a rapid profusion of new ones. We all belong to the contemporary “cult of apology,” in the words of the architect and scholar Valentina Rozas-Krause, who has observed that we have come perilously close to relying upon the built environment to speak on our behalf, to atone for our sins, to signal our moral transformation. Of course the cult of apology disfigures also our personal and social and political relations. “The more we commemorate what we did, the more we transform ourselves into people who did not do it,” warns the novelist and historian Eelco Runia. A superabundance of bad memories has been answered only with more memory. 

     

    Our spatial coordinates are no longer primarily defined by our relation to physical memorials, municipal boundaries, and national borders, but ultimately by our proximity to data centers and “latency zones,” geographical regions with sufficient power and water to keep us connected to the cloud, to track our live locations and feed our phones directions. (The cloud may be the controlling symbol of our time.) In the United States, the Commonwealth of Virginia is the site of the largest concentration of data centers: these bastions of memory are being built over Civil War battlefields, gravesites, and coal mines, next to schools and suburban cul-de-sacs, beside reservoirs and state parks. In Singapore, the proliferation of data centers led the government to impose a three-year moratorium on further construction. (The ban was imposed in 2019 and lifted in 2022; new data centers are subject to stricter sustainability rules.) In Ireland, which together with the Netherlands stores most of the European continent’s data, similar measures are under consideration. Augustine described memory as a “spreading limitless room,” an undefined space to which memories, things, people, and events are consigned for the sake of preservation, and we have made his theoretical fantasy all too real. These unforgetting archives suck up the water, energy, air, and silence; their server fields buzz, warm, and whir through the night. It is an unsustainable and ugly situation to which a bewildering solution has already been found: by 2030, virtual data will be stored in strands of synthetic DNA. 

     

    How did we get here? We are swimming in memory — sinking in it, really — devotees of what has become a secular religion of remembrance, consumed by the unyielding labor of excavating, archiving, recording, memorializing, prosecuting, processing, and reckoning with conflicting memories. We cannot keep going in this manner, for it is ecologically, politically, and morally unsustainable. There is no need to deploy metaphors here, for we are quite literally smothering the earth under the weight of all our memory. 

     

    What happened is that we forgot how to forget. Along the way, we also forgot why we remember — the invention of one-click data recovery, searchable histories, and all-knowing archives made our already accelerating powers of recollection reflexive, automatic, unthinking, foolproof. I am belaboring these contemporary technological mechanisms of recall because not only have they ensured that remembering has become the default setting of everyday life, but they have also tricked us into believing we can lay claim to a certain kind of forensic knowledge of the past — an illusion of perfect completeness and clarity. It is a dangerous posture, for it is one thing to say, as everyone well knows, that what’s past is always present, and quite another to insist upon experiencing the present as if it is the past, and to attempt to understand the past in the language of the present. 

     

    Our commitment to remembrance at all costs is a historical anomaly: ever since there have been written records and rulers to endorse them, societies have sustained themselves on the basis of cyclical forgetting. Over the past two decades, as memory has become the primary stage upon which politics, culture, and personal life are played out, a handful of voices have attempted to call attention to this aberration. In 2004, the late French anthropologist Marc Augé declared: “I shall risk setting up a formula. Tell me what you forget and I will tell you who you are.” In 2016, David Rieff asked, in a fine book called In Praise of Forgetting, on the political consequences of the cult of memory: “Is it not conceivable that were our societies to expend even a fraction of the energy on forgetting that they now do on remembering… peace in some of the worst places in the world might actually be a step closer?” He understood all too well that “everything must end, including the work of mourning,” for “otherwise the blood never dries, the end of a great love becomes the end of love itself.” In 2019, Lewis Hyde suggested that our inability to forget has crippled our capacity to sufficiently grieve. Reading Hesiod’s Theogony, he observes that Mnemosyne, the mother of the Muses, ushers in both memory and forgetting in the service of imagination and preservation. “What drops into oblivion under the bardic spell is fatigue, wretchedness, and anxiety of the present moment, its unrefined particularity,” Hyde writes, “and what rises into consciousness is knowledge of the better world that lies hidden beyond this one.” A dose of forgetfulness allows us to put aside, if only temporarily, the sheer volume of all that we must mourn, to break the cycle of vengeance, to see through the fog of fury in moments of the most profound loss. 

     

    Prior to any of these pleas for forgetting, the French scholar Nicole Loraux demanded that we look back to the Greek world to rediscover the political power of oblivion. Her interest in the subject, she explains, began when she read of a simple question that an Athenian citizen posed to his warring neighbors after surviving the decisive battle of the civil war that ended the reign of the Thirty Tyrants. The man had sided with the vanquished oligarchs and followed them into exile: he had chosen the side of unfreedom. Facing defeat, he confronted the winning democratic army and asked, “You who share the city with us, why do you kill us?” 

     

    It was an “anachronistically familiar” question for Loraux in 2001 and remains so for us today. How to make the killing cease? How to quell the desire for vengeance? How to relinquish the resentments of old? How to reunite a riven family, city, or nation? Loraux pondered the Greek experience, which has become the paradigmatic example of political oblivion, a collective “founding forgetting” that diplomats and lawmakers would attempt to replicate for centuries to come. For once the Athenian democrats won the war and reclaimed their city, they did not seek to exact vengeance upon everyone who had supported the tyrannical reign, but rather only tried and expelled the Thirty themselves and their closest advisors. All of the Athenians, no matter what side they took in the war, swore an oath of forgetting, promising not to recall the wrongs of a war within the family, a civil war that had led its citizens to kill and jail and disenfranchise one another. They swore never to remember: not to think of, recollect, or remind themselves of evils. Oblivion became an institution of peace: it amounted to a ban on public utterances, a prohibition against vindictive lawsuits and accusations over what occurred before and during the fighting. “After your return from Piraeus you resolved to let bygones be bygones, in spite of the opportunity for revenge,” Andocides writes of this moment. An offering is said to have been made before the altar of Lethe, or oblivion, on the Acropolis; erasures cascaded across Athens as records of the civil war were destroyed, chiseled out, whitewashed. Memory was materially circumscribed, and democracy was re-founded upon the premise of negation. The Athenian approach, Loraux argues, “defined politics as the practice of forgetting.” It ensured that from that moment onwards, “Politikos is the name of one who knows how to agree to oblivion.”

     

    Oblivion: it is tempting to read the word as a mere synonym for “forgetting,” “erasure,” or “amnesty.” In practice, however, it has always been a far more complex commitment. When the Athenians swore never to remember, they were also swearing to always remember that which they had promised to forget. The Athenian example illustrates that the “unforgettable” — the civil war, or stasis, and the ensuing tyranny — is that “which must remain always possible in the city, yet which nonetheless must not be remembered through trials and resentments,” as Giorgio Agamben observed in 2015. The terms of the peace agreement compelled its subjects to behave “as if” a given crime, transgression, or conflict never occurred, but also to always remember that it did occur and may occur again. It was a paradoxical promise to never remember and to always remember. The beauty of oblivion is that it reinforces the memory of the loss while prohibiting it from calcifying into resentment; it sanctions certain acts of vengeance, but also imposes strict formal and temporal limitations upon them, so that recrimination does not go on forever. In short, it mandates forgetting in service of the future. This is the upside of oblivion, and this is why, in our hyper-historicist moment, we must labor to remember its powers in the present, which for us is not easily done. 

     

    Doing so requires excavating the long-forgotten techniques of oblivion that, for centuries, regulated private and public life. A mutual commitment to oblivion was once the premise upon which all peacemaking was conducted, between states as well as between spouses. (“It is undoubtedly the general rule that marriage operates as an oblivion of all that has previously passed,” the New York Supreme Court’s Appellate Division ruled in 1896.) Today, the “right to be forgotten,” which is practiced in a number of countries but not in the United States, is one of oblivion’s most prominent, and promising, contemporary incarnations, providing the grace of forgottenness to those who long ago made full penance for past crimes. It is a testament to oblivion’s power to combat cynicism and stubbornness and vindictiveness, to embrace the evolution of individual identity and belonging. Abiding by its rules, we acknowledge that who we have been is not the same as who we are, or who we may yet become. 

     

    “The only thing left is the remedy of forgetting and of abolition of injuries and offenses suffered on both sides, to erase everything as soon as possible, and proceed in such a way that nothing remains in the minds of men on either side, not to talk about it, and never to think about it.” So spoke the French jurist Antoine Loisel in 1582 in his “Discourse on Oblivion,” a document that has itself been almost entirely swallowed up by time. Loisel reminded his audience of the example of Cicero, who appears to have been the first to translate the Greek ban on remembering into the Latin prescription for “oblivion,” from ob-lēvis, meaning “to smooth over, efface, grind down.” To erode, to erase. It is likely to Cicero that we owe the reconfiguration of the Athenian reconciliation agreement into a grand “Act of Oblivion.” Tasked with reconstituting Rome after the assassination of Caesar, Cicero appears to have studied the terms of the Athenian agreement as a model for reconciling the republic: 

     

    I have laid the foundation for peace and renewed the ancient example of the Athenians, even appropriating the Greek word which that city used in settling disputes, and so I have determined that all memory of our quarrels must be erased with an eternal oblivion.

     

    Cicero recasts the terms of the Athenian reconciliation, and the attendant promise not to recall, as an oblivione sempiterna, an eternal oblivion. The Romans look to the Greeks to find a model for political reconciliation which they adapt to suit their own ends. The oblivion is what erases “all memory” of Rome’s quarrels and allows for the settling of disputes. Oblivion is an instrument of truce and amnesty. 

     

    Cicero turns oblivion into a legislative undertaking: “The senate passed acts of oblivion for what was past, and took measures to reconcile all parties,” Plutarch reports. (Another translation reads: “The senate, too, trying to make a general amnesty and reconciliation, voted to give Caesar divine honors.”) As a result, Brutus and his allies were protected from vengeful reprisals: oblivion becomes a legal, legislative mechanism for forgetting, amnestying, and reconciling. The Roman adoption of the Greek practice suggests that oblivion was not understood as a blanket amnesty, nor as an absolute commandment to forget, but rather something in between, a somewhat ambiguous legal, moral, and material commitment that enabled political communities to come back together while at the same time preserving — memorializing by means of a mandate to forget — the memory of what tore them apart.

     

    Generations of statesmen, Loisel among them, have since followed Cicero in looking back to the Greek example and recasting its “unending oblivion” for their own ends. In 1689, for example, Russia and China signed the Treaty of Nerchinsk, in which Russia gave up part of its northeastern territory in exchange for expanded trade access to the Chinese mainland. The text of the treaty was inscribed upon stones laid along the new boundary line. The third clause of the Latin version of the treaty promises that “everything which has hitherto occurred is to be buried in eternal oblivion.” (Interestingly, this clause does not appear in the Russian or Chinese versions of the treaty; the discrepancies between the different translations were one reason the treaty ultimately had to be revised.) During the early modern period, oblivion was a fixture of diplomatic speech: all over the world, powers swore to consign the grievances of wars and territorial disputes to “eternal oblivion.” Russian rulers swore to vechnoye zabveniye, Germans to ewige Vergessenheit, French to an oubli général. So too did Chinese, Ottoman, and African rulers in treaties with Western powers. The Arabic phrase mazâ-mâ-mazâ, “let bygones be bygones,” appears in Ottoman diplomatic correspondence dating from the thirteenth century as an element of customary law, and persists well into the nineteenth century in Ottoman and Western European diplomatic peace treaties. Oblivion was circulated, translated, and proclaimed as part of the ordinary business of statecraft. Rulers agreed to bury past wrongs as a way of signaling that their states belonged to the family of nations; forgetting the ills that members visited upon one another was a prerequisite for belonging to the family.

     

    Modern states owe their foundations to the pragmatic promise of oblivion. When the newly installed republican government of Oliver Cromwell sought to erase the English people’s memory of the bloody civil war in 1651, his parliament passed an act to ensure “that all rancour and evil will, occasioned by the late differences, may be buried in perpetual oblivion.” And when, nine years later, King Charles II sought to coax his subjects into forgetting the reign of Cromwell, he too declared an oblivion, forgiving everyone for their prior allegiances to the English Commonwealth except the men who beheaded his father, Charles I. (They were tried for treason and executed.) In France, policies of oubliance were widespread in the sixteenth and seventeenth centuries, and the Bourbon restoration of 1814 was marked by a new public law ending investigations into “opinions and votes given prior to the restoration” and stipulating that “the same oblivion is required from the tribunals and from citizens.” In territories that would become the United States and Canada, European powers swore to oblivion in treaties with indigenous peoples as part of the project of imperial expansion. Diplomatic exchanges between indigenous leaders and European emissaries did not merely make mention of “burying the hatchet” or burying wrongs in oblivion — they were centered on these cyclical rituals of forgetfulness. French and English diplomats appealed to past oblivions whenever they desired to solidify an alliance with indigenous peoples, securing their support against the encroachment of other white settler groups.

     

    In the Revolutionary period, oblivions proliferated in the colonies, as the legal scholar Bernadette Meyler has documented. The Continental Congress invoked oblivion in its efforts to resolve a boundary dispute between Vermont and New Hampshire; North Carolina deployed one in 1783 to bring a cadre of seditionist residents back into the fold. Massachusetts passed one in 1766, Delaware in 1778. In 1784, Judge Aedanus Burke, a member of the South Carolina General Assembly, made one of the more forceful arguments for oblivion in American history when he delivered his pseudonymous “Address to the Freemen of the State of South Carolina.” He wrote of how, during the Revolutionary War, he watched as a man walked over the “dead and the dying” bodies of “his former neighbors and old acquaintances, and as he saw signs of life in any of them, he ran his sword through and dispatched them. Those already dead, he stabbed again.” The nature of the violence, he argued, far exceeded the capacity of law. And so a general clemency was the only way forward, for Burke, simply because so many crimes had been committed that fewer than a thousand men in the state, he thought, could “escape the Gallows.” He declared that “the experience of all countries has shewn, that where a community splits into a faction, and has recourse to arms, and one finally gets the better, a law to bury in oblivion past transactions is absolutely necessary to restore tranquility.” Oblivion was the only way that those who had been royalists could possibly still share the same ground with the revolutionaries they had fought: “Every part of Europe has had its share of affliction and usurpation or civil war, as we have had lately. But every one of them considered an act of oblivion as the first step on their return to peace and order.” 

     

    Almost a century later, President Andrew Johnson marshaled similar language in his attempt to restore peace in the aftermath of the Civil War. In his first annual message after Lincoln’s assassination, he advocated for a “spirit of mutual conciliation” among the people, explaining why he had invited the formerly rebellious states to participate in amending the Constitution. “It is not too much to ask,” he argued, “in the name of the whole people, that on the one side the plan of restoration shall proceed in conformity with a willingness to cast the disorders of the past into oblivion, and that on the other the evidence of sincerity in the future maintenance of the Union shall be put beyond any doubt by the ratification of the proposed amendment to the Constitution, which provides for the abolition of slavery forever within the limits of our country.” His speech casts the rewriting of the Constitution and the ratification of the Thirteenth Amendment as itself an Act of Oblivion, a way to “efface” the grounds upon which slavery had been legally sanctioned and defended. 

     

    And yet we live in the ruins of past peace treaties. We do not need to ask whether all these measures of imposed forgetting “worked,” because we know that neither the oblivions nor the ceasefires nor the reconciliations that they were supposed to inaugurate ever held up for long (often for very good reasons). The more interesting question is why oblivion proliferated in the first place, and where the desire revealed by its continual repetition originates. “Oblivion brings us back to the present, even if it is conjugated in every tense: in the future, to live the beginning; in the present, to live the moment; in the past, to live the return; in every case, in order not to be repeated,” Marc Augé writes. The recursive calls for oblivion — pleas for a workable kind of forgetfulness, both legal and moral — can be found wherever people have quarreled, battled, and betrayed one another, only to discover subsequently that, even after all is said and done, they must share the same earth.

     

    On September 19, 1946, as part of a world tour following the end of his first term at 10 Downing Street, Winston Churchill arrived at the University of Zurich and called for “an act of faith in the European family and an act of oblivion against all the crimes and follies of the past.” Standing upon a dais set up outside the university building, he faced thousands of people gathered on the square before him and said: 

     

    We all know that the two World Wars through which we have passed arose out of the vain passion of Germany to play a dominating part in the world. In this last struggle crimes and massacres have been committed for which there is no parallel since the Mongol invasion of the 13th century, no equal at any time in human history. The guilty must be punished. Germany must be deprived of the power to rearm and make another aggressive war. But when all this has been done, as it will be done, as it is being done, there must be an end to retribution. There must be what Mr. Gladstone many years ago called ‘a blessed act of oblivion.’

     

    As he spoke, the guilty were indeed on their way to being punished in occupied Germany, in Japan, and in the Soviet Union, where prosecutors had not waited for the battles to end to begin trying and sentencing German prisoners of war. The International Military Tribunal at Nuremberg was preparing for its 218th day in session, and in Tokyo the prosecution was still making its case. Much was still unknown about the nature and scale of German atrocities. Churchill acknowledged the unprecedented character of the crimes in question and underscored the imperative of punishing their perpetrators. He also established that everyone in the audience, having lived through the horrible years of war, was all too familiar with its nature, and that this familiarity was a kind of shared knowledge among them. Much was still to be discovered, unearthed, proven, and punished, yet everyone who had lived through the war in Europe, who had been proximate to its force, “knew” how it came to be — even those who had profited from it, and those who looked away. Otherwise, he feared, memory might be wielded to perpetuate the absence of peace. 

     

    Churchill did not shy away from retribution (he had once supported the creation of a “kill list” of high-ranking Nazis), but he also saw its limitations. He understood that the desire for vengeance could not be allowed to fester forever because it risked preventing Europeans from imagining a shared future:

    We cannot afford to drag forward across the years to come hatreds and revenges which have sprung from the injuries of the past. If Europe is to be saved from infinite misery, and indeed from final doom, there must be this act of faith in the European family, this act of oblivion against all crimes and follies of the past. Can the peoples of Europe rise to the heights of the soul and of the instinct and spirit of man? If they could, the wrongs and injuries which have been inflicted would have been washed away on all sides by the miseries which have been endured. Is there any need for further floods of agony? Is the only lesson of history to be that mankind is unteachable? Let there be justice, mercy and freedom. The peoples have only to will it and all will achieve their heart’s desire.

    The stakes were high: letting the ills of the past “drag forward” was something that Europeans could not “afford” to do because that would mean “infinite misery” and “final doom” for the already imperiled and devastated continent. The indefinite continuation of exercises in vengeance and recrimination would spell certain death not only for “Europe,” as Churchill saw it, but also for the project of a “United States of Europe” that his speech called for. If the defeat of the Nazis had saved the continent from entering a new “Dark Age,” then the practice of perpetual vengeance, he argued, threatened to bring it there anyway. A “United States of Europe,” he believed, would return the continent to prosperity. But before that could occur, something else had to take place. “In order that this may be accomplished there must be an act of faith in which the millions of families speaking many languages must consciously take part,” Churchill said. That “act of faith” was not a religious or spiritual rite but a political one: an act of oblivion. 

     

    The “Mr. Gladstone” to whom Churchill referred was the liberal politician William Gladstone, who served twelve non-consecutive years as British prime minister between 1868 and 1894. In 1886, Gladstone called for a “blessed oblivion of the past” to bury the memory of British rule in Ireland and restore peaceful relations between England and Ireland. “Gladstone urged the MPs to grant the Irish a ‘blessed oblivion’ and permit them to forget about a tradition of hatred,” the historian Judith Pollmann writes. Calling for oblivion, Gladstone implicitly referred back to the Act of Oblivion that had restored the British monarchy under Charles II. He was suggesting that the same tool that restored the British monarch in 1660 could serve quite the opposite purpose two centuries later, marking the erasure and forgetting of British rule in Ireland.

     

    Oblivion in the aftermath of war and conflict is emotionally very exacting, and Churchill’s remarks were at first poorly received. The Manchester Guardian called the address an “ill-timed speech,” and others thought it insensitive to the still-fresh wounds of war. (The paper’s objection was not that the speech insulted the memory of the slaughtered Jews of Europe, but that it affronted the French, whom Churchill had dared ask to reconcile with the Germans.) Today, however, the speech is regarded as one of the first calls for the creation of the contemporary European Union, and Churchill is celebrated as one of its founding fathers. He called for a new collective commitment to oblivion, yet the half-century that followed was defined not by oblivion but by its opposite. The Nuremberg trials delivered partial justice for a select group of perpetrators, as did proceedings in the Soviet Union, Poland, Israel, Germany, France, Italy, Japan, and elsewhere. Retribution came in fits and starts, and it is still ongoing today. Memorials were erected all over the formerly occupied territories, part of an effort to ensure that passersby would always remember what had occurred there. But memorials also have an odd way of sanctioning forgetfulness: the more statues we build, the more we fortify the supposedly unbreachable gap between past and present. Is this not its own kind of oblivion? 

     

    In a moment of profound rupture, Churchill called for yet another repetition of the Greek model, for a new adaptation of the founding forgetting that supposedly bound the Athenians back together, if only for a short time. His call for an end to memory came far too soon. But his suggestion that, at some point, memory must cede ground to mercy — and, we might add, to the memories of other and not necessarily more recent crimes — is one that we are only now beginning to take up. The “United States of Europe” was ultimately founded not upon an Act of Oblivion but rather upon the myth that its constituent nations were bound together by a commitment to repudiate and remember the past, and to ensure that the atrocities of World War II would “never again” occur. We all know how that went. To consider the possibilities of oblivion requires accepting that there are some forms of memory production — prosecution, memorialization, truth and reconciliation, processing — that may effectively prolong and even exacerbate the wrongs they were intended to make right. 

     

    Oblivion is not a refusal of these efforts but rather a radical recognition of their limitations. It is an invitation not to endlessly participate in the “global theater of reconciliation,” in the instrumentalization of survivor testimony, in what the literary scholar Marc Nichanian has called the “manipulation of mourning.” It provides an opening through which we might attend to the moral ruptures that preceded the acts of wrongdoing; it creates space to engage in the kind of “unhinged mourning” that Nichanian locates “prior to any politics, prior to any foundation or restoration of democracy, prior to every accord, every contract, every pact and every reconciliation.” Oblivion never speaks of forgiveness; indeed, it is the alternative to forgiveness. To forget a transgression is a distinct moral act that liberates its subject from the dueling imperatives either to avenge the wrong or to forgive it. It is, in this sense, an important rejection of the language of reconciliation, of loving one’s enemy. It offers a path forward where this kind of “love” is unimaginable, if not impossible. Oblivion embeds the memory of the crime in the hearts of those whom it forbids from speaking about it. “This,” Nichanian argues, “is what the Greeks, in their obsession, called álaston penthos, mourning that does not pass, which nothing could make one forget.”

     

    Some years ago, I came across a scientific paper announcing that a group of computer scientists in Germany and New Zealand had come up with a “universal framework” that they called Oblivion. Its function was rather straightforward: it could identify and de-index links from online search engines at extreme speed, handling two hundred and seventy-eight removal requests per second. They promised nothing less than to make forgetting “scalable,” as seamless and widespread as possible, and their citations refer to similar programs, including one called “Vanish,” which makes “self-destructing data,” and another, called the “ephemerizer,” which also promised to make “data disappear.” All of these efforts were designed in response to the inauguration, in 2011, of the European Right to Be Forgotten, or, as it is officially called, the “Right to Erasure.” This new European right affords individuals the ability to demand “data erasure,” to require criminal databases and online sources to remove any personal data that is no longer “relevant” or in the “public interest.” 

     

    The law is composed of two distinct but related ideas: first, that we have a “right to delete” the data that we leave behind as we move about the digital world, and second, that we also have a “right to oblivion” that endows us with what the scholar Meg Leta Ambrose calls “informational self-determination” — the right to control what everyone else is able to learn about us without our consent. Minor offenses, arrests, and dropped charges from the past may be deleted from internet articles and websites if they fit these criteria, as in cases where criminal records have been sealed or expunged, and the penalties long ago fulfilled (or where no crime was found to have been committed in the first place). As Jeffrey Rosen has noted, the law derives from the French “‘droit à l’oubli’ — or the ‘right of oblivion’ — a right that allows a convicted criminal who has served his time and been rehabilitated to object to the publication of the facts of his conviction and incarceration.”

     

    The adoption of these new rights marks the most recent transfiguration of the ancient idea of oblivion. The Right to Be Forgotten is both a privacy protection and a rehabilitative mechanism, one which, like the Athenian oath, helps to restore individuals to membership in the civic family. It gives us the freedom to become someone else, to escape the unhappy past, provided that certain criteria are met. This new right extends far beyond the legal realm. For several years, European nations have been expanding the Right to Be Forgotten such that it protects cancer survivors and those with other chronic illnesses from facing penalties from insurance companies, banks, adoption agencies, and more because of their health troubles. It is a commitment to rehabilitation in the most comprehensive sense, a pledge to ensure that no one should be defined by their worst moments or their greatest misfortunes. You could call it a kind of grace. (The Russian word for these kinds of measures is pomilovaniye, derived from the word milyy, meaning “dear,” “darling,” “good.” We wash away wrongs and choose to see only the best in ourselves, and in others.) To honor the right to oblivion is to submit to a particular performance of citizenship, one that may seem strange at first glance, and ubiquitous the next: for who among us cannot be said to be engaged in some studied act of forgetfulness, forgetting unhappy episodes from the past in order to prevent them from overtaking the future?

     

    Like the oblivions of old, the right to be forgotten has a paradoxically memorial function: those who ask for erasure have not yet forgotten their offenses, and their digital rehabilitation cannot alter the facts of their transgressions. I am thinking in particular here of a Belgian man named Olivier G., who killed two people in a drunk-driving accident in 1994. In 2006, he was “rehabilitated” under Belgian law after serving out his sentence on multiple charges. In 2008, he sued a French-language paper for continuing to maintain records of his role in the accident online, and the European Court of Human Rights ultimately ruled that the paper had to delete his name from its past articles and replace it with the letter “X.” Owing to the press coverage of the case, we all know very well that he is “X.” And he himself is unlikely to forget it. 

     

    Yet his case still raises the inevitable question: what does oblivion mean for historical knowledge? By embracing its possibilities, do we also open ourselves up to the erasure of records, of historical truth? In The Interpretation of History, in 1909, Max Nordau lamented the “almost organic indifference of mankind to the past,” and wrote of the “stern law of oblivion” that limits the transmission of memory to no more than three generations. “It is in records, and not in the consciousness of man, that the historical part is preserved,” he observed. And yet, as Nietzsche warned, an over-reliance upon record-keeping, upon archiving, preserving, and documenting — the features of his “superhistorical” person — can also snuff out our will to live in the present, our ability to see the world clearly before us. Every archivist knows that doing the job right requires a balance of preservation and destruction, that it is irresponsible and even unjust to save everything from obliteration. This is especially true in instances where penance has been paid, vengeance taken, time served, justice achieved so fully that it has begun to undermine its own wise and measured conclusions. “For with a certain excess of history, living crumbles away and degenerates,” Nietzsche admonished. “Moreover, history itself also degenerates through this decay.” 

     

    It is a mistake to understand history as operating in opposition to forgetting. Ernest Renan made this error when, in 1882, he famously observed that “the act of forgetting, I would even say, historical error, is an essential factor in the creation of a nation, which is why progress in historical studies often constitutes a danger for nationality.” In fact, history is as much a vehicle for forgetting as it is for remembering: when we remind ourselves that histories are written by the victors, this is what we mean. History is always edited, and oblivion acts as a kind of editorial force on the historical record, though of course history may be edited according to many criteria of significance and some historians may prefer one oblivion to another. To embrace the idea of oblivion, however, is to try to redirect the inevitable erasures of the historical record toward the pursuit of a more just and liberated future — to take moral advantage of the room, and the freedom, that we are granted by forgetfulness.

     

    Besides, every act of forgetting, as Loraux reminds us, “leaves traces.” There can be no absolute forgetting, just as there is no possibility of total memory. Every time I encounter a new Act of Oblivion in the archive, I take it as a marker that someone, somewhere, wanted its historical world to be forgotten. And yet there it is, staring back at me on the table. Almost always, whatever conflict prompted the oblivion in the first place is recounted in fine detail alongside the agreement to let bygones be bygones. 

     

    Where oblivion was once deployed to reconcile states with themselves and one another, today it is most often invoked in order to restore people to full political citizenship, to repair the relation between subject and sovereign. Oblivion has become individualized. To some extent, it always has been. Every oath of forgetting required people to look past the transgressions of their neighbors, but not to forget them completely. Nichanian argues that this amounts to a mere pragmatic performance of reconciliation, which should not be mistaken for absolution. “One should know with whom one is ‘reconciling.’ One should not confuse friendship and reconciliation,” he cautions. “One should be capable of carrying out a ‘politics of friendship’ instead and in lieu of a ‘politics of reconciliation’…one must in any case know what will never be reconciled within reconciliation.”

     

    One must never forget with whom one is reconciling; one must forget what came before the reconciliation. These are the contradictory demands that the oath laid upon its swearers. It aimed to obliterate one form of memory while at the same time consecrating another. “I wonder,” Loraux asks, “what if banning memory had no other consequences than to accentuate a hyperbolized, though fixed, memory?” The people are reconciled, but they see one another for who they were, and what they did, during the period of tyranny. Nothing is forgotten, and much is owed by one side to the other. This relation, Nichanian writes, is “the irony of being-together, the sole surviving language.” What else is there? Oblivion is when one person says to another: I know who you have been, and what you have done, but I will pretend not to remember, and I offer you my friendship, and we will live amicably together. Call it pragmatism, call it decency, call it politics. (Call it quaint.) In the absence of forgiveness, which rarely comes, it may be our only hope.

    The History of My Privileges

    Is it possible to be a historian of your own life? To see yourself as a figure in the crowd, as a member of a generation who shared the same slice of time? We cannot help thinking of our own lives as uniquely our own, but if we look more closely, we begin to see how much we shared with strangers of our own age and situation. If we could forget for a moment what was singular about our lives and concentrate instead on what we experienced with everyone else, would it be possible to see ourselves in a new light, less self-dramatizing but possibly more truthful? What happens when I stop using “I” and start using “we”?

     

    What “we” are we talking about here? Which “we” is my “we”? An old joke comes to mind. The Lone Ranger and Tonto are surrounded by Indian warriors. The situation looks bad. The Lone Ranger turns to Tonto. “What do we do now?” Tonto replies, “What do you mean ‘we’, white man?” The “we” to which I refer, and to which I belong, is the white middle class of my generation, born between 1945 and 1960, and my theme is what we made of our privileges and, once we understood them as such, what we did to defend them.

     

    We were, for a time, really something. We were the biggest birth cohort in history. We made up more than half the population and we held all the power, grabbed as much of the wealth as we could, wrote the novels that people read, made the movies that people talked about, decided the political fate of peoples. Now it’s all nearly over. Every year more of us vanish. We have shrunk down to a quarter of the total population, and power is slipping from our hands, though two of us, both presidents, are squaring up for a final battle. It will be a last hurrah for them, but for us as well, a symbol of how ruthlessly we clung on, even when our time was up.

     

    The oldest among us were born when Harry Truman was in the White House, Charles de Gaulle in the Élysée Palace, Konrad Adenauer in the Chancellery in Bonn, George VI on the throne at Buckingham Palace, and Joseph Stalin in the Kremlin. We were the happy issue of a tidal wave of love and lust, hopes and dreams that swept over a ruined world after a decade of depression and war. My parents, both born during the First World War, met in London during the Second, two Canadians who had war work there, my father at the Canadian High Commission, my mother in British military intelligence. They had gone through the Blitz and the V-2s, fallen for other people, and at war’s end decided to return to Canada and get married.

     

    I once made the mistake of saying to my mother that I envied their wartime experience. It had tragedy in it, and tragedy, to a child, seems glamorous. She cut me short. It wasn’t like that, she said gently, I hadn’t understood. She knew what desolation and loss felt like, and she wanted to spare my brother and me as much as she could. I see now that her reticence was characteristic of a whole generation — for example, the rubble women in Berlin, Hamburg, Dresden, and other German cities, who cleared debris away with their bare hands and never talked about being raped by Russian soldiers; the survivors of the death camps who concealed the tattoo on their forearm; the women who went to the Gare de l’Est in Paris in the summer of 1945, waiting, often in vain, to greet emaciated lovers and husbands returning from deportation. My mother was one of those who waited for a man who never made it back. He was a silent presence in the house throughout my childhood, the man she would have married had he not died in Buchenwald. She kept her sorrow to herself and found someone else — my father — and they brought new life into the world. 

     

    I am the child of their hope, and I have carried their hopefulness with me all my life. Besides hope, they also gave us the houses and apartments we took our first steps in, the schools and universities that educated us, the highway systems we drive to this day, the international system — UN, NATO, and nuclear weapons — that still keeps us out of another world war, the mass air travel that shrank the world, the moon landing that made us dream of life beyond our planet, and the government investments in computing in the 1940s and 1950s that eventually led, in the 1990s, to the laptop, the internet, and the digital equivalent of the Library of Alexandria on our phones. The digital pioneers of my generation — Jobs, Wozniak, Gates, Ellison, Berners-Lee, and so on — built our digital world on the public investments made by the previous generation. 

     

    Thanks to the hospitals and the clinics that our parents built, the medical breakthroughs that converted mortal illnesses into manageable conditions, together with our fastidious diets and cult of exercise, our not smoking or drinking the way they did, we will live longer than any generation so far. I take pills that did not exist when my father was alive and would have kept him going longer if they had. Medicine may be the last place where we still truly believe in progress. Ninety, so our fitness coaches promise us, will be the new seventy. Fine and good, but that leaves me wondering, what will it be like to go on and on and on?

     

    Our time began with the light of a thousand suns over Alamogordo, New Mexico, in July 1945. It is drawing to a close in an era so violent and chaotic that our predictions about the shape of the future seem meaningless. But it would be a loss of nerve to be alarmed about this now. We have lived with disruptive change so long that for us it has become a banality.

     

    My first summer job was in a newsroom echoing to the sound of typewriters and wire-service machines clattering away full tilt, next door to a press room where the lead type flowed off the compositor’s machine down a chute to the typesetting room, where the hands of the typesetters who put the pages together were black with carbon, grease, and ink. Sitting now in a clean room at home, all these decades later, staring into the pale light of a computer screen, it is easy to feel cranky about how much has changed.

     

    But what did not change in our time, what remained stubbornly the same, may be just as important as what did. The New York Times recently reported that in America our age group, now feeling the first intimations of mortality, is in the process of transferring trillions of dollars of real estate, stocks, bonds, beach houses, furniture, pictures, jewels, you name it, to our children and grandchildren — “the greatest wealth transfer in history,” the paper called it. We are drafting wills to transfer the bourgeois stability that we enjoyed to the next generation. This is a theme as old as the novels of Thackeray and Balzac. The fact that we can transfer such a staggering sum — eighty-four trillion dollars! — tells us that the real history of our generation may be the story of our property. It is the deep unseen continuity of our lives.

     

    Our cardinal privilege was our wealth, and our tenacious defense of it may be the true story of white people in my generation. I say tenacious because it would be facile to assume that it was effortless or universal. From our childhood into our early twenties, we were wafted along by the greatest economic boom in the history of the world. We grew up, as Thomas Piketty has shown, in a period when income disparities, due to the Depression and wartime taxation, were sharply compressed. We had blithe, unguarded childhoods that we find hard to explain to our children: suburban afternoons when we ran in and out of our friends’ houses, and all the houses felt the same, and nobody locked their doors. When we hit adulthood, we thought we had it made, and then suddenly the climb became steeper. The post-war boom ground to a halt with the oil shock in the early 1970s, leaving us struggling against a backdrop of rising inflation and stagnant real wages. Only a small number of us — Bezos, Gates, and the others — did astonishingly well from the new technologies just then coming on stream. 

     

    Many of the rest of us who didn’t become billionaires dug ourselves into salaried professions: law, medicine, journalism, media, academe, and government. We invested in real estate. Those houses and apartments that we bought when we were starting out ended up delivering impressive returns. The modest three-bedroom house that my parents bought in a leafy street in Toronto in the 1980s, which my brother and I sold in the early 2000s, had multiplied in value by a factor of three. He lived on the proceeds until he died, and what’s left will go to my children. 

     

    Real estate helped us keep up appearances, but so, strangely enough, did feminism. When women flooded into the labor market, they helped their families to ride out the great stagflation that set in during the 1970s. Thanks to them, there were now two incomes flowing into our households. We also had fewer children than our parents and we had them later. Birth control and feminism, together with hard work, kept us afloat. None of this was easy. Tears were shed. Our marriages collapsed more frequently than our parents’ marriages, and so we had to invent a whole new set of arrangements — single parenting, gay families, partnering and cohabiting without marriage — arrangements whose effect on our happiness may have been ambiguous but which, most of the time, helped us to maintain a middle-class standard of living. 

            

    Of course, there was a darker side — failure, debt, spousal abuse, drug and alcohol addiction, and suicide. The great novelists of our era — Updike, Didion, Ford, Bellow, and Cheever — all made art out of our episodes of disarray and disillusion. What was distinctive was how we understood our own failure. When we were young, in the 1960s, many of us drew up a bill of indictment against “the system,” though most of us were its beneficiaries. As we got older, we let go of abstract and ideological excuses. Those who failed, who fell off the ladder and slid downwards, took the blame for it, while those of us lucky enough to be successful thought we had earned it.

     

    So, as our great novelists understood, the true history of our generation can be told as the history of our property, our self-congratulation at its acquisition, our self-castigation when we lost it, the family saga that played out in all our dwellings, from urban walk-ups to suburban ranch houses, the cars in our driveways, the tchotchkes that we lined up on our shelves and the pictures that we hung on our walls, the luxuriant variety of erotic lives that we lived inside those dwellings, and the wealth that we hope to transmit to our children. 

     

    I am aware that such an account of my generation leaves out a great deal, outrageously so. There was a lot more history between 1945 and now, but for the rest of it — the epochal decolonization of Africa and Asia, the formation of new states, the bloody battles for self-determination, the collapse of the European empires, the astonishing rise of China — the true imperial privilege of those lucky enough to be born in North America and Western Europe was that we could remain spectators of the whole grand and violent spectacle. Out there in the big wide world, the storm of History was swirling up the dust, raising and dashing human hopes, sweeping away borders, toppling tyrants, installing new ones, and destroying millions of innocents, but none of it touched us. We must not confuse ourselves with the people whose misfortune provoked our sympathies. For us, history was a spectator sport we could watch on the nightly news and later on our smartphones. The history out there gave us plenty of opportunity to have opinions, offer analyses, sell our deep thoughts for a living, but none of it threatened us or absolutely forced us to commit or make a stand. For we were safe. 

     

    Safety made some of us restless and we longed to get closer to the action. I was one of those who ventured out to witness History, in the Balkans, in Afghanistan, in Darfur. We made films, wrote articles and books, sought to rouse consciences back home and change policies in world capitals. We prided ourselves on getting close to the action. Hadn’t Robert Capa, the great photographer who was killed when he stepped on a landmine in Vietnam, famously remarked that if your photographs aren’t any good, it’s because you aren’t close enough? So we got close. We even got ourselves shot at. 

     

    In the 1990s, I went out and made six films for the BBC about the new nationalism then redrawing the maps of the world in the wake of the collapse of the Soviet Union. I can report that nothing was more exciting. A Serb paramilitary, whom I had interviewed in the ruins of Vukovar in eastern Croatia in February 1992, fired a couple of random shots at the crew van as we were driving away, and later another group of drunken combatants grabbed the keys out of the van and brought us to a juddering halt and an uneasy hour of interrogation, broken up by the arrival of UN soldiers well enough armed to brook no argument. I had other adventures in Rwanda and Afghanistan, but the Balkans were as close as I ever came to experiencing History as the vast majority of human beings experience it — vulnerably. These episodes of peril were brief. We all had round-trip tickets out of the danger zone. If History got too close for comfort, we could climb into our Toyota Land Cruisers and get the hell out. I can’t feel guilty about my impunity. It was built into the nature of our generation’s relation to History.

     

    Anybody who ventured out into the zones of danger in the 1990s knew there was something wrong with Francis Fukuyama’s fairy tale that history had ended in the final victory of liberal democracy. It certainly didn’t look that way in Srebrenica or Sarajevo. History was not over. It never stopped. It never does. In fact, it took us to the edge of the abyss several times: in the Cuban missile crisis; when King and the Kennedys were shot; in those early hours after September 11; and most recently during the insurrection of January 6, 2021, when wild violence put the American republic in danger. Those were moments when we experienced History as vertigo. 

     

    The rest of the time, we thought we were safe inside “the liberal rules-based international order.” After 1989, you could believe that we were building such a thing: with human rights NGOs, international criminal tribunals, and transitions to democracy in so many places, South Africa most hopefully of all. Actually, in most of the world, there were precious few rules and little order, but this didn’t stop those of us in the liberal democratic West from believing that we could spread the impunity that we enjoyed to others. We were invested in this supposed order, enforced by American power, because it had granted us a lifetime’s dispensation from history’s cruelty and chaos, and because it was morally and politically more attractive than the alternatives. Now my generation beholds the collapse of this illusion, and we entertain a guilty thought: it will be good to be gone.

     

    Smoke haze from forest fires in Canada is drifting over our cities. Whole regions of the world — the olive groves of southern Spain, the American southwest, the Australian outback, the Sahel regions of Africa — are becoming too hot to sustain life. The coral reefs of Australia, once an underwater wonder of color, are now dead grey. There is a floating mass of plastic bottles out in the Pacific as big as the wide Sargasso Sea. My generation cannot do much about this anymore, but we know that we owe the wealth that we are handing over to our children to high life in the high noon of fossil fuels.

     

    At least, we like to say, our generation woke up before it was too late. We read Silent Spring and banned DDT. We created Earth Day in 1970 and took as our talisman that incredible photo of the green-blue earth floating in space, taken by the astronaut William Anders. We discovered the hole in the ozone layer and passed the Montreal Protocol that banned the chemicals causing it. We began the recycling industry and passed legislation that reduced pollution from our stacks and tailpipes; we pioneered green energy and new battery technologies. Our generation changed the vocabulary of politics and mainstreamed the environment as a political concern. Concepts such as the ecosphere and the greenhouse effect were unknown when we were our kids’ age. Almost the entirety of modern climate science came into being on our watch. With knowledge came some action, including those vast lumbering UN climate conferences. 

     

    Look, we say hopefully, the energy transition is underway. Look at all those windmills, those solar farms. Look at all the electric cars. They’re something, aren’t they? But we are like defendants entering a plea in mitigation. The climate crisis is more than a reproach to our generation’s history of property and consumption. It is also an accusation directed at our penchant for radical virtue-signaling followed by nothing more than timid incrementalism. The environmental activists sticking themselves to the roads to stop traffic and smearing art treasures with ketchup are as tired of our excuses as we are of their gestural politics. 

     

    Our children blame us for the damaged world that we will leave them, and they reproach us for the privileges that they will inherit. My daughter tells me that in her twelve years of working life as a theater producer in London, she has interviewed for jobs so many times she has lost count. In fifty years of a working life, I interviewed only a handful of times. The competitive slog that her generation takes for granted is foreign to me; the entitlement, dumb luck, and patronage that smoothed my way are a world away from the grind her cohort accepts as normal. She said to me recently: you left us your expectations, but not your opportunities. 

     

    Like many of her generation, she grew up between parents who split when she was little. Like other fathers of my generation, I believed that divorce was a choice between harms: either stay in a marriage that had become hollow and loveless or find happiness in new love and try, as best you could, to share it with the kids. My children even say that it was for the best, but I cannot forget their frightened and tearful faces when I told them I was leaving. These personal matters that should otherwise stay private belong in the history of a generation that experienced the sexual revolution of the 1960s and took from that episode a good deal of self-justifying rhetoric about the need to be authentic, to follow your true feelings, and above all to be free.

     

    Our children are reckoning with us, just as we reckoned with our parents. Back then, we demanded that our parents explain how they had allowed the military-industrial complex to drag us into Vietnam. We marched against the war because we thought it betrayed American ideals, and even a Canadian felt that those ideals were his, too. Those further to the left ridiculed our innocence. Didn’t we understand that “Amerika” never had any ideals to lose? There were times, especially after the shooting of students at Kent State, when I almost agreed with them.

     

    I was a graduate student at Harvard when we bussed down to Washington in January 1973 for a demonstration against Nixon’s second inauguration. It was a huge demonstration and it changed nothing. Afterwards some of us took shelter at the Lincoln Memorial. Righteous anger collapsed into tired disillusion. I can still remember the hopelessness that we felt as we sat at Lincoln’s feet. Two and a half years later, though, the helicopters were lifting the last stragglers off the roof of the American embassy in Saigon, so we did achieve something. 

     

     Vietnam veterans came home damaged in soul and body, while radicals I marched with ended up with good jobs in the Ivy League. Does that make Vietnam the moment when the empire began to crack apart? The idea that Vietnam began the end of “the American century” remains a narrative that our generation uses to understand our place in history. Behold what we accomplished! It is a convention of sage commentary to this day, but really, who knows?

     

    The colossus still bestrides the world. The leading digital technologies of our time are still owned by Americans; Silicon Valley retains its commanding position on the frontiers of innovation. The United States spends eight hundred billion dollars on defense, two and a half times as much as its European allies and China. America’s allies still will not take a significant step on their own until they have cleared it with Washington. Nobody out there loves America the way they did in the heyday of Louis Armstrong, Ella Fitzgerald, Walt Disney, and Elvis Presley; the universal domination of American popular music, mainly in the form of rap and hip hop, no longer makes America many friends. Yet the United States still has the power to attract allies and to deter enemies. It is no longer the world’s sole hegemon, and it cannot get its way the way it used to, but that may be no bad thing. The stories of American decline give us the illusion that we know which way time will unfold, and encourage us in a certain acquiescence. Fatalism is relaxing. The truth is that we have no idea at all. The truth is that we still have choices to make. 

     

    American hegemony endures, but the domestic crisis of race, class, gender, and region that first came to a head when we were in our twenties polarizes our politics to this day. As the 1960s turned into the 1970s, there were times, in the States but in Europe too, when the left hoped that revolution was imminent and the right dug in to defend its vanishing verities. The assassinations of Martin Luther King, Jr. and Robert Kennedy, followed by the police violence at the Chicago Democratic Convention in August 1968, led some of my generation — Kathy Boudin, Bernardine Dohrn, Bill Ayers, the names may not mean much anymore — to transition from liberal civil rights and anti-Vietnam protest to full-time revolutionary politics. What followed was a downward spiral of bombings, armed robberies, shoot-outs that killed policemen, and long jail time for the perpetrators. Decades later I met Bernardine Dohrn at Northwestern Law School, still radical, still trailing behind her the lurid allure of a revolutionary past, but now an elegant law professor. Her itinerary, from revolution to tenure, was a journey taken by many, and not just in America. In Germany, the generation that confronted their parents about their Nazi past spawned a revolutionary cadre — the Baader-Meinhof gang, better known as the Red Army Faction — who ended dead or in jail or in academe. In Italy, my generation’s confrontation with their parents ended with the “years of lead”: bombings, political assassinations, jail, and, once again, post-revolutionary life in academe.

     

    Those of us who lived through these violent times got ourselves a job and a family and settled down to bourgeois life, and now we resemble the characters at the end of Flaubert’s Sentimental Education, wondering what a failed revolution did to us. For some, the 1960s gave us the values that we espouse to this day, while for others it was the moment when America lost its way. We are still arguing, but both sides carry on the shouting match within secure professions and full-time jobs. Nobody, at least until the Proud Boys came around, wants an upheaval anymore. What changed us, fundamentally, is that in the 1970s we scared ourselves. 

     

    And so we settled for stability instead of revolution, though we should give ourselves some credit for ending an unjustified war and wrenching the political system out of the collusive consensus of the 1950s. My generation of liberal whites also likes to take credit for civil rights, but the truth is that most of us watched the drama on television, while black people did most of the fighting and the dying. All the same, we take pride that in our time, in 1965, America took a long-resisted step towards becoming a democracy for all Americans. Our pride is vicarious, and that may mean it isn’t quite sincere. Our other mistake was in taking yes for an answer too soon. We believed that the civil rights revolution in our time was the end of the story of racial justice in America, when in fact it was just the beginning.

     

    The reckoning with race became the leitmotiv of the rest of our lives. I grew up in a Toronto that was overwhelmingly white. What we thought of as diversity were neighborhoods inhabited by Portuguese, Italian, Greek, or Ukrainian immigrants. The demographers now say that, if I live long enough, I will soon be in a minority in my city of birth. Fine by me, but it’s made me realize that I never grasped how much of my privilege depended on my race. My teenage friends and I never thought of ourselves as white, since whiteness was all we knew. Now, fifty years later, we are hyper-sensitively aware of our whiteness, but we still live in a mostly white world. At the same time, the authority of that world has been placed in question as never before, defended as a last redoubt of security by frightened conservatives, and apologized for, without end, by liberals and progressives.

     

    Some white people, faced with these challenges to our authority, are apt to speak up for empathy, to claim that race is not the limit of our capacity for solidarity, while other white people say to hell with empathy and vote instead to make America great again. Liberals are correct to insist that racial identity must not be a prison, but claims to empathy are also a way to hold on to our privileges while pretending we can still understand lives that race has made different from our own. While I do not regard the color of my skin as the limit of my world, or as the most significant of my traits, I can see why some other people might. 

     

    Nor has whiteness been my only privilege, or even the source of all the others. An inventory of my advantages, some earned, most inherited, would include being male, heterosexual, educated, and well housed, pensioned and provided for, with a wife who cares about me, children who still want to see me, parents who loved me and left me in a secure position. I am the citizen of a prosperous and stable country, I am a native speaker of the lingua franca of the world, and I am in good health.

     

    I used to think that these facts made me special. Privileges do that to you. Now I see how much of my privilege was shared with those of my class and my race. I am not so special after all. I also see now that, while privileges conferred advantages, some of them unjust, they also came with liabilities. They blinded me to other people’s experience, to the facts of their shame and suffering. My generation’s privileges also make it difficult for me to see where History may be moving. My frame of relevant experience omits most of the planet outside the North Atlantic at precisely the moment when History may be moving its capital to East Asia forever, leaving behind a culture, in Europe, where I live, of museums, recrimination, and decline. There is plenty here that I cherish, but I cannot escape a feeling of twilight, and I wonder whether the great caravan may be moving on, beyond my sight, into the distance. 

     

    Everybody comes to self-consciousness too late. This new awareness of privilege, however late it may be, is perhaps the most important of all the changes that History has worked upon my generation. What we took for granted, as ours by inheritance or by right, is now a set of circumstances that we must understand, apologize for, or defend. And defend it we do. We moralized our institutions — universities, hospitals, law firms — as meritocracies, when they were too often only reserves for people like us. When challenged, we opened up our professions to make them more diverse and inclusive, and this makes us feel better about our privileges, because we extended them to others. “Inclusion” is fine, as long as it is not an alibi for all the exclusions that remain. 

     

    As white persons like me edge reluctantly into retirement, our privileges remain intact. Our portion of that money — the eighty-four trillion dollars — that we are going to hand over to the next generation tells us that we have preserved the privilege that matters most of all: transmitting power to our kith and kin. Closing time is nigh and raging against the dying of the light is a waste of time. What matters now is a graceful exit combined with prudent estate planning.

     

    Not all privileges are captured by the categories of wealth, race, class, or citizenship. I have been saving the most important of my privileges for last. 

     

    This one is hidden deep in my earliest memory. I am three years old, in shorts and a T-shirt, on P Street in Georgetown, in Washington, D.C. P Street was where my parents rented a house when my father worked as a young diplomat at the Canadian Embassy. It is a spring day, with magnolias in bloom, bright sunshine, and a breeze causing the new leaves to flutter. I walk up a brick sidewalk towards a white house set back from the street and shaded by trees. I walk through the open door into the house, with my mother at my side. We are standing just inside the door, looking out across a vast room, or so it seems from a child’s-eye view, with high ceilings, white walls, and another door open on the other side to a shaded garden.

     

    The large light-filled room is empty. I don’t know why we are here, but now I think it was because my mother was pregnant with my little brother, and she was looking the place over as a possible rental for a family about to grow from three to four. We stand for an instant in silence, surveying the scene. Suddenly the front door slams violently behind us. Before our astonished eyes, the whole ceiling collapses onto the floor, in a cloud of dust and plaster. I look up: the raw wooden slats that held the ceiling plaster are all exposed, like the ribs on the carcass of some decayed animal. The dust settles. We stand there amazed, picking debris out of our hair.

     

    I don’t know what happened next, except that we didn’t rent the house.

     

    It is a good place to end, on a Washington street in 1950, at the height of the Korean War, in the middle of Senator McCarthy’s persecutions, that bullying populism which is never absent from democracy for long and which had all my father’s and mother’s American friends indignant, but also afraid of Senate hearings, loss of security clearances, and dismissal. I knew nothing of this context, of course. This memory, if it is one at all — it could be a story I was told later — is about a child’s first encounter with disaster. I begin in safety, walking up a brick path, in dappled sunlight. I open a door and the roof falls in. Disaster strikes, but I am safe.

     

    At the very center of this memory is this certainty: I am holding my mother’s hand. I can feel its warmth this very minute. Nothing can harm me. I am secure. I am immune. I have clung to this privilege ever since. It makes me a spectator to the sorrows that happen to others. Of all my privileges, in a century where history has inflicted so much fear, terror, and loss on so many fellow human beings, this sense of immunity, conferred by the love of my parents, her hand in mine, is the privilege which, in order to understand what happens to others, I had to work hardest to overcome. 

     

    But overcome it I did. I was well into a fine middle age before life itself snapped me awake. When, thirty-seven years after that scene in Washington, I brought my infant son to meet my mother, in a country place that she had loved, and she turned to me and whispered, who is this child? recognizing neither me nor her first grandchild, nor where she was, I understood then, in that moment, as one must, that all the privileges I enjoyed, including a mother’s unstinting love, cannot protect any of us from what life — cruel and beautiful life — has in store, when the light begins to fade on the road ahead.

     

    A Prayer for the Administrative State

    In February 2017, Steve Bannon, then senior counselor and chief strategist to President Donald Trump, pledged to a gathering of the Conservative Political Action Conference (CPAC; initiates pronounce it “See-Pack”) that the Trump administration would bring about “the deconstruction of the administrative state.” Bannon’s choice of the word “deconstruction” raises some possibility that he had in mind a textual interrogation in the style of Derrida. Laugh if you want, but Bannon claims an eclectic variety of intellectual influences, and the anti-regulatory movement that he embraced did begin, in the 1940s, as a quixotic rejection of that same empiricism against which Derrida famously rebelled (“il n’y a pas de hors-texte”: there is no outside-text). More likely, though, Bannon was using the word “deconstruction” as would a real estate tycoon such as his boss, to mean dismantlement and demolition. The “progressive left,” Bannon told See-Pack, when it can’t pass a law, “they’re just going to put in some sort of regulation in an agency. That’s all going to be deconstructed and I think that that’s why this regulatory thing is so important.” Kaboom! 

     

    Already the wrecking ball was swinging. Reactionary federal judges had for decades been undermining federal agencies, egged on by conservative scholars such as Philip Hamburger of Columbia Law School, the author, in 2014, of the treatise Is Administrative Law Unlawful? Anti-regulation legal theorists are legatees of the “nine old men” of the Supreme Court who, through much of the 1930s, resisted President Franklin Roosevelt’s efforts to bring regulation up to date with the previous half-century of industrialization. The high court made its peace with the New Deal in 1937 after Roosevelt threatened to expand its membership to fifteen. Today’s warriors against the administrative state see this as one of history’s tragic wrong turns.

     

    As president, Trump attacked the administrative state not to satisfy any ideology (Trump possesses none) but to pacify a business constituency alarmed by Trump’s protectionism, his Muslim-baiting, his white-nationalist-coddling, and all the rest. “Every business leader we’ve had in is saying not just taxes, but it is also the regulation,” Bannon told CPAC. But does the war against the administrative state hold appeal for ordinary Republican voters? The rank and file don’t especially hate government regulation of corporations except insofar as they hate government in general (especially when Democrats are in charge). They certainly don’t wish to succor the S&P 500, which, as Comrade Bannon made clear, is what the war on the administrative state is all about. Between August 2019 and October 2022, a Pew survey found, the proportion of Republicans and Republican leaners willing to say large corporations had a positive effect on America plummeted from fifty-four percent to twenty-six percent. Bannon’s vilification of the administrative state would therefore appear to run in a direction opposite that of Trump voters. The nomenklatura loved it at CPAC, but the words “administrative state” make normal people’s eyes glaze over. 

     

    During his four years in office, Trump achieved only limited success dethroning the administrative state. On the one hand, he gummed up the works to prevent new regulations from coming out. The conservative American Action Forum calculated that during Trump’s presidency the administrative state imposed about one-tenth the regulatory costs imposed under President Obama. But on the other hand, Trump struggled to fulfill Bannon’s pledge to wipe out existing regulations. Trump’s political appointees were too ignorant about how the federal bureaucracy worked to wreak anywhere near the quantity of deconstruction that Trump sought. 

     

    To eliminate a regulation requires that you follow, with some care, certain administrative procedures; otherwise a federal judge will rule against you. The bumbling camp followers that Trump put in charge of the Cabinet and the independent agencies lacked sufficient patience to get this right, and the civil servants working under them lacked sufficient motive to help them. According to New York University’s Institute for Policy Integrity, the Trump administration prevailed in legal challenges to its deregulatory actions only twenty-two percent of the time. Granted, in many instances where Trump lost, as the Brookings Institution’s Philip A. Wallach and Kelly Kennedy observed, he “still succeeded in weakening, if not erasing, Obama administration policy.” But after Biden came in, the new president set about reversing as many of Trump’s deregulatory actions as possible, lending a Sisyphean cast to the deconstruction of the administrative state. In Biden’s first year alone, the American Action Forum bemoaned, he imposed regulatory costs at twice the annual pace set by Obama. 

     

    Clearly the only way Republicans can win this game is to gum up the regulatory works during both Republican and Democratic administrations. Distressingly, Trump will likely take a long stride toward achieving that goal in the current Supreme Court term. The vehicle of this particular act of deconstruction is a challenge to something called Chevron deference, by which the courts are obliged to grant leeway to the necessary and routine interpretation of statutes by regulatory agencies. The Supreme Court heard two Chevron cases in January and is expected to hand down an opinion in the spring. At oral argument, two of Trump’s three appointees to the high court, Neil Gorsuch and Brett Kavanaugh, were more than ready to overturn Chevron, along with Clarence Thomas and Samuel Alito. Chief Justice John Roberts, though more circumspect, appeared likely to join them, furnishing the necessary fifth vote. In killing off Chevron deference, the right hopes not only to prevent new regulations from being issued, but also to prevent old ones from being enforced. This is the closest the business lobby has gotten in eighty-seven years to realizing its dream to repeal the New Deal.

     

    The administrative state is often described as a twentieth-century invention, but in 1996, in his book The People’s Welfare: Law and Regulation in Nineteenth-Century America, William J. Novak, a legal historian at the University of Michigan, showed that local governments were delegating police powers to local boards of health as far back as the 1790s, largely to impose quarantines during outbreaks of smallpox, typhoid, and other deadly diseases. In 1813 a fire code in New York City empowered constables to compel bystanders to help extinguish fires; bucket brigades were not voluntary but a type of conscription. The same law contained two pages restricting the storage and transport of gunpowder, and forbade any discharge of firearms in a populated area of the city. These rules were accepted by the public as necessary to protect health and safety.

     

    In 1872, Benjamin Van Keuren, the unelected street commissioner of Jersey City, received complaints about “noxious and unwholesome smells” from a factory that boiled blood and offal from the city’s stockyard, and mixed it with various chemicals, and cooked it, and then ground it into fertilizer. The Manhattan Fertilizing Company ignored Van Keuren’s demand that it cease befouling the air, so Van Keuren showed up with twenty-five policemen, disabled the machinery, and confiscated assorted parts of it. The fertilizing company then sued Van Keuren for unreasonable search and seizure and the taking of property without due process or just compensation. But the street commissioner was carrying out a duty to address public nuisances that had been delegated to him by the city’s board of aldermen, and the judge ruled in Van Keuren’s favor. 

     

    As the latter example illustrates, government regulation grew organically alongside industrialism’s multiplying impositions on the general welfare. As with industrialism itself, this mostly began with the railroads. Thomas McCraw, in his 1984 book Prophets of Regulation, identified America’s first regulatory agency as the Rhode Island Commission, created in 1839 to coax competing railroad companies into standardizing schedules and fees. Railroads themselves only barely existed; the very first in the United States, the Baltimore and Ohio, had begun operations a mere nine years earlier. At the federal level, the first regulatory agency, the Interstate Commerce Commission, was created in 1887, likewise to regulate railroads. 

     

    From the beginning, the railroads’ vast economic power was seen as a threat to public order. In 1869, Charles Francis Adams, Jr. — great-grandson to John Adams and brother to Henry Adams — complained that the Erie Railway, which possessed “an artery of commerce [Jersey City to Chicago] more important than ever was the Appian Way,” charged prices sufficiently extortionate to invite comparisons to the Barbary pirates. “They no longer live in terror of the rope, skulking in the hiding-place of thieves,” Adams wrote, “but flaunt themselves in the resorts of trade and fashion, and, disdaining such titles as once satisfied Ancient Pistol or Captain Macheath, they are even recognized as President This or Colonel That.” Railroads represented the industrial economy’s first obvious challenge to the Constitution’s quaint presumption that commerce would forever occur chiefly within rather than between the states, and therefore lie mostly outside the purview of Congress. By the early twentieth century, interstate commerce was becoming the rule rather than the exception. 

     

    A parallel development was the passage, in 1883, of the Pendleton Civil Service Act. After the Civil War, an unchecked “spoils system” of political patronage corrupted the federal government, making bribery and the hiring of incompetents the norm. A delusional and famously disappointed office-seeker named Charles Guiteau closed this chapter by assassinating President James Garfield. Guiteau had expected, despite zero encouragement, that he would be appointed consul to Vienna or Paris. He went to the gallows believing that he had saved the spoils system — a martyr in the cause of corruption! — but instead he had discredited it. The Pendleton Act left it up to the president and his Cabinet to determine what proportion of the federal workforce would be hired based on merit as part of a new civil service. As a result, civil servants rose rapidly from an initial ten percent of federal workers in 1884 to about forty percent in 1900 to nearly eighty percent in 1920. Today about ninety percent of the federal civilian workforce consists of civil servants.

     

    With the establishment of a civil service, it quickly became evident that government administration was becoming a professional discipline requiring its own expertise. In 1887, in an influential essay called “The Study of Administration,” Woodrow Wilson (then a newly minted PhD in history and government) argued that administration lay “outside the proper sphere of politics.” Democratic governance, Wilson wrote, 

    does not consist in having a hand in everything, any more than housekeeping necessarily consists in cooking dinner with one’s own hands. The cook must be trusted with a large discretion as to the management of the fires and the ovens.

    If Wilson’s analogy strikes you as aristocratic, recall that he was writing at a time when it wasn’t exceptional for a middle-class family to employ a full-time cook. If Wilson were writing today, he would more likely say that you don’t need to be an auto mechanic to drive your car. His point was that elected officials lacked sufficient expertise to address the granular details of modern government administration. “The trouble in early times,” Wilson explained,

    was almost altogether about the constitution of government; and consequently that was what engrossed men’s thoughts. There was little or no trouble about administration, — at least little that was heeded by administrators. The functions of government were simple, because life itself was simple. [But] there is scarcely a single duty of government which was once simple which is not now complex; government once had but a few masters; it now has scores of masters. Majorities formerly only underwent government; they now conduct government. Where government once might follow the whims of a court, it must now follow the views of a nation. And those views are steadily widening to new conceptions of state duty; so that, at the same time that the functions of government are every day becoming more complex and difficult, they are also vastly multiplying in number. Administration is everywhere putting its hands to new undertakings.

    Government required assistance from unelected officials who possessed expertise, and a new type of training would be required to prepare them. In 1924, Syracuse University answered the call by opening the Maxwell School, the first academic institution in America to offer a graduate degree in public administration.

     

    All this took place before the advent of the Progressive Era, the historical moment when, conservatives often complain, Jeffersonian democracy died. In fact, as Novak argues persuasively in his recent book New Democracy: The Creation of the Modern American State, Congress had been expanding its so-called “police power” (i.e., regulatory power) since the end of the Civil War. The Progressive Era spawned only two new federal regulatory agencies, the Food and Drug Administration in 1906 and the Federal Trade Commission in 1914. What most distinguishes the Progressive Era is that its leading thinkers — among them Woodrow Wilson, Herbert Croly (the founding editor of The New Republic), and John Dewey — articulated more fully than before the rationale for an expanded federal government. Surveying America’s industrial transformation, they concluded that state houses would never possess sufficient means to check the power of corporations. “The less the state governments have to do with private corporations whose income is greater than their own,” Croly observed tartly, “the better it will be for their morals.” It remains true today that state and local government officials are much easier for businesses to buy off or bully than their counterparts in Washington. That is the practical reality behind conservative pieties about the virtues of federalism and small government.

     

    What Croly said of state government could be applied equally to the judiciary. Through the first half of the nineteenth century, if a farm or small business engaged in activity that caused social harm, redress would typically be sought in the courts. Since damages weren’t likely very large, the Harvard economists Edward L. Glaeser and Andrei Shleifer have explained, the offending party had little motive — and, given its small scale, little ability — to “subvert justice,” that is, to bribe a judge. As the offending firms grew in size and wealth, “the social costs of harm grew roughly proportionately, but the costs of subverting justice did not.” To a railroad baron or a garment merchant, judges could be (and often were) bought with pocket change. Wilson noted the problem during his campaign for president in 1912: “There have been courts in the United States which were controlled by the private interests…. There have been corrupt judges; there have been judges who acted as other men’s servants and not as servants of the public. Ah, there are some shameful chapters in the story!”

     

    It took the catastrophe of the Great Depression to establish the countervailing federal power necessary to make rich corporations behave. President Franklin Roosevelt and Congress created more than forty so-called “alphabet agencies.” Most of these no longer exist, but many remain, including the Federal Communications Commission (FCC), the Federal Deposit Insurance Corporation (FDIC), and the National Labor Relations Board (NLRB). During Roosevelt’s first four years in office the Supreme Court limited the creation of such agencies, following three decades of anti-regulatory precedent that sharply restricted federal power under the Commerce Clause. This is commonly known as the “Lochner era,” but that is slightly misleading because Lochner v. New York, a 1905 ruling against a regulation establishing maximum work hours for bakers, addressed power at the state level, where the Commerce Clause does not directly apply. In truth, Lochner-era justices didn’t like regulation, period, and found reasons to limit it in Washington and state capitals alike.

     

    For Roosevelt, matters came to a head in February 1937. Fresh from re-election and miffed that the Supreme Court had struck down the National Industrial Recovery Act, the Agricultural Adjustment Act, and assorted lesser New Deal programs, Roosevelt introduced a bill to pack the Supreme Court with six additional justices. The legislation went nowhere, but the mere threat was apparently sufficient to liberalize the high court’s view of the Commerce Clause, starting with NLRB v. Jones & Laughlin Steel Corp. (1937), which gave Congress jurisdiction over commerce that had only an indirect impact on interstate commerce. A legal scholar would explain this shift in terms of subtle jurisprudential currents, but I find the simpler and more vulgar explanation — Roosevelt’s application of brute force — more than sufficient. After NLRB v. Jones & Laughlin Steel Corp., fifty-eight years passed before the Supreme Court sided with any Commerce Clause challenge, and the few instances where it has done so since were fairly inconsequential.

     

    With that battle lost, conservative criticism of the administrative state shifted away from the broad question of whether Congress possessed vast powers to regulate business to the narrower one of whether it could or should delegate those powers to executive-branch agencies. The critique’s broad outlines were laid down by Dwight Waldo, a young political scientist with New Deal experience working in the Office of Price Administration and the Bureau of the Budget. It was Waldo who popularized the phrase “the administrative state” in 1948 in a book of that name. The Administrative State is an attack on empiricism — or more precisely, the positivism of Auguste Comte, the French social thinker who from 1830 to 1842 published in six volumes his Cours de Philosophie Positive, which inspired Comtean movements of bureaucratic reason around the world.

     

    The Progressive Era had articulated a need for apolitical experts to weigh business interests against those of the public on narrow questions that required some expertise. The New Deal had committed the federal government to applying such expertise on a large scale, creating for the first time a kind of American technocracy. Waldo did not believe that narrow regulatory questions could be resolved objectively. He surrounded the phrase “scientific method” with quotation marks. He complained that writers on public administration showed a bias toward urbanization (when really it was the American public that showed this bias, starting in the 1920s, by settling in cities rather than in rural places). He questioned the notion of progress and scorned what he called “the gospel of efficiency.” To Waldo, it was “amazing what a position of dominance ‘efficiency’ assumed, how it waxed until it had assimilated or overshadowed other values, how men and events came to be degraded or exalted according to what was assumed to be its dictate.” Waldo deplored the influence of Comte. Like positivists, he complained, public administrators “seek to eliminate metaphysic and to substitute measurement.” 

     

    Unlike most critics of the administrative state, Waldo is fun to read, and he was even right up to a point. Public policy is not as value-free as cooking a meal or repairing an automobile. “Metaphysic,” meaning a larger philosophical framework, may have a role to play; there are more things in heaven and earth, etc. But a lot depends on what sort of problem it is that we are trying to solve. The question of how much water should flow through your toilet doesn’t rise to the metaphysical. Waldo’s anti-positivism created the template that industry later adopted against every conceivable regulation: sow doubt about scientific certainty and exploit that doubt to argue that either a given problem’s causes or a proposed solution’s efficacy is unprovable. Shout down the gospel of efficiency with a gospel of uncertainty. Do cigarettes cause heart disease or cancer? Hard to say. Does industrial activity cause climate change? We can’t assume so. Hankering for “metaphysic” can be used self-interestedly to reopen questions already settled to a reasonable degree by science.

     

    Ironically, though, Waldo’s loudest complaint about the administrative state was that its methods too closely resembled those of the same business world that otherwise embraced his critique. To the latter-day corporate blowhard who demands that government be run more like a business, Waldo would say: God forbid! Waldo was especially repelled by Frederick W. Taylor’s theories of scientific management and their influence on public administration. Taylor (1856-1915) famously evangelized for improvements in industrial efficiency based on time-motion studies of workers that were sometimes criticized as dehumanizing. (Charlie Chaplin’s Modern Times is a satire of Taylorism.) “The pioneers,” Waldo protested, “began with inquiries into the proper speed of cutting tools and the optimum height for garbage trucks; their followers seek to place large segments of social life — or even the whole of it — upon a scientific basis.” Waldo likened the displacement of elected officials by government experts to the displacement of shareholders by managers in what James Burnham, seven years earlier, had termed the “managerial revolution,” and what John Kenneth Galbraith, two decades later, would more approvingly call “the new industrial state.” To Waldo, it was all social engineering.

     

    Today, of course, the managerial revolution is long dead. The shareholder, or anyway the Wall Street banks, hedge funds, and private equity funds that purport to represent him, holds the whip hand over managers, employees, and consumers. The shareholder value revolution of the 1980s came dressed up with a lot of populist-sounding rhetoric about democratic accountability, but even at the time nobody seriously believed this accountability would serve anyone but the rich. Its results were beggared investment, proliferating stock buybacks (largely illegal until 1982), and a reduction in labor’s share of national income. Conservative warriors against the administrative state similarly seek to serve the rich by minimizing any restraints that society might impose on their capital. As they push to return as much regulatory power as possible to Congress, they, too, farcically apply the rhetoric of democratic accountability.

     

    You think the administrative state came into being as a logical response to the growing power of industry? Nonsense, argued Hamburger in The Administrative Threat in 2017, a pamphlet intended to bring his legal analysis to a larger audience. Regulatory agencies arose to check the spread of voting rights! The Interstate Commerce Commission was founded seventeen years after the Fifteenth Amendment enfranchised African Americans. The New Deal’s alphabet agencies were created a decade after the Nineteenth Amendment enfranchised women. The Environmental Protection Agency, the Consumer Product Safety Commission, and the Occupational Safety and Health Administration were created within a decade after Congress passed the Voting Rights Act. “Worried about the rough-and-tumble character of representative politics,” Hamburger writes, “many Americans have sought what they consider a more elevated mode of governance.” That would be rule by experts and the cultural elite — the people whom the neoconservatives labeled “the new class.” Hamburger uses the milder term “knowledge class,” but, as with the old neocons and today’s MAGA shock troops, the intent is to denigrate expertise. Never mind that the newly freed slaves were too focused on racial discrimination to spare much thought for rail-rate discrimination, or that Depression-era women were too focused on putting food on the table to fret much about public utility holding companies.

     

    Hamburger’s argument leaned heavily on Woodrow Wilson’s well-known — and obviously repellent — affinity for eugenics. But even Wilson had to know that the great mass of the knowledge class possessed no greater understanding of how to keep Escherichia coli out of canned goods than an unschooled Tennessee sharecropper or an Irish barman. The staggering complexities of industrial and post-industrial society render all of us ignoramuses. That is why we must rely on unelected government experts. Too many of the most urgent policy questions facing us — financial reform, climate change, health care — are just too complicated and detailed and arcane for ordinary citizens to master, and it is not an elitist insult to these ordinary citizens to say so. 

     

    Equally absurd is the notion that the administrative state is unaccountable. Every regulation that a federal agency issues is grounded in a federal statute enacted by a democratically elected Congress. A regulation is nothing more than the government’s practical plan to do something its legislature has already ordered it to do. To put out a significant regulation (i.e., one expected to impose economic costs of at least two hundred million dollars), a government agency will usually start by publishing, in the Federal Register, an Advance Notice of Proposed Rulemaking. (This and most of what follows is required under the Administrative Procedure Act of 1946.) The advance notice invites all parties (in practice, usually business) either to write the agency or meet with agency officials to discuss the matter. The agency then sets to work crafting a proposed rule, incorporating therein an analysis of the rule’s costs and benefits. 

     

    It is amply documented that regulatory cost-benefit analyses, which necessarily rely on information from affected businesses, almost always overstate costs by a wide margin. This is not a new phenomenon, or even an especially partisan one. In 1976, for example, the Occupational Safety and Health Administration, under President Gerald Ford, estimated that a rule limiting workers’ exposure to cotton dust would cost manufacturers seven hundred million dollars per year. But when a slightly modified version of the rule was implemented in 1978, under President Jimmy Carter, it actually cost manufacturers only two hundred and five million dollars per year. That is a significant difference. In 1982, the Reagan administration, opposed philosophically to regulation as a betrayal of capitalism, and convinced that this particular regulation was too burdensome, called for a review. This time, the cost to manufacturers was found to be an even lower eighty-three million dollars per year. 

     

    After the agency in question has (over)estimated the cost of its proposed rule, it submits a draft to the White House Office of Information and Regulatory Affairs (OIRA). Here the draft is subjected to independent analysis, though sometimes that analysis is informed by political pressure applied to the president or his staff by the affected industry. OIRA may then modify the draft. The proposed rule is then published in the Federal Register. The public is given sixty days to submit comment on the proposal. The agency then spends about a year readying a final regulation, which is resubmitted to OIRA and perhaps modified or moderated further. Then the final rule is published in the Federal Register.

     

    At this point the affected industry, recoiling from limits (real or imagined) that the rule will impose on its profit-seeking, will take the agency to court to block or modify it. Congress also has, under the Congressional Review Act (CRA) of 1996, the option, for a limited time, to eliminate the regulation under an expedited procedure. That seldom occurs except at the start of a new administration, because a president will veto any resolution of disapproval against a regulation produced by his own executive branch. If the president’s successor is of the same party, he will also likely veto any such resolution. If, on the other hand, the president is from the opposite party and virulently against regulation — say, Donald Trump — he may go to town on any and all regulations still eligible for cancellation. Trump used the CRA to kill fifteen Obama-era regulations. 

     

    Getting a federal agency to issue a regulation is more complicated and more time-consuming even than getting Congress to pass a law. This is because a regulation gets into the weeds in a way that legislators, who must address a great variety of problems, truly cannot, even in much saner political times than these. There is always the risk of “regulatory capture,” wherein regulators adopt too much of a regulated industry’s point of view, possibly in anticipation of a job. But civil servants and political appointees receive far fewer financial inducements from industry than members of Congress, on whom Hamburger wishes to bestow most if not all regulatory functions. Senators and representatives collect money from the industries they oversee — in the form of campaign contributions — while they are still in government, and nearly two-thirds become lobbyists once they leave Congress, according to a study in 2019 by Public Citizen, a Nader-founded nonprofit. Government-wide revolving-door data for agency officials is less readily available, but about one-third of political appointees to the Department of Health and Human Services between 2004 and 2020 ended up working for industry, according to a study recently published in the journal Health Affairs. The proportion of civil servants who pass through the revolving door is likely smaller still because, unlike political appointees, civil servants don’t work on a time-limited basis. Congress, then, is at least twice as easy to buy off as regulators. That is the real reason administrative-state critics want to increase congressional control over regulation.

     

    For the past four decades the judiciary, under the Supreme Court’s Chevron decision in 1984, has deferred to the expertise of administrative agencies in interpreting statutes. It was inclined to do that anyway, but Chevron formalized that arrangement. After Chevron, the courts could still block regulations, but only in exceptional cases, because jeez, like Woodrow Wilson said, these guys are the experts.

     

    Industry loathes Chevron, and is bent on overturning it. This is ironic, because when it was handed down Chevron was considered pro-business. The New York Times headline was “Court Upholds Reagan On Air Standard.” The case concerned an easing of air pollution controls by the industry-friendly Environmental Protection Agency administrator Anne Gorsuch (who in 1983 remarried and became Anne Burford). Chevron turned on whether the word “source” (of pollution) in the text of the Clean Air Act referred to an entire factory or merely to a part of that factory. In his decision, Justice John Paul Stevens concluded that this was not a matter for a judge to decide. It was the EPA’s job, informed by the duly elected chief executive — even granting (Stevens might have added) that the president in question was on record stating that “eighty percent of air pollution comes from plants and trees.” 

     

    Conservatives cheered Chevron because it gave Reagan a blank check to ease the regulatory burden on business without being second-guessed by some liberal judge. But as the judges got less liberal and Democrats returned to the White House, enemies of the administrative state lost their interest in judicial restraint. Starting in 2000, an ever-more-conservative Supreme Court limited the application of Chevron deference in various ways — for instance, by requiring more formal administrative proceedings. One of the Chevron decision’s fiercest critics, interestingly, is Anne Gorsuch’s own son. Chevron, Neil Gorsuch wrote in a dissent in November 2022, “deserves a tombstone no one can miss.” In Gorsuch’s view, Chevron is a cop-out for judges. “Rather than say what the law is,” Gorsuch wrote, “we tell all those who come before us to go ask a bureaucrat.” 

     

    Bureaucrat. To opponents of the administrative state, that is the worst thing you can be. Even liberals, when they talk about bureaucracy, speak mostly about bypassing it. Granted, bureaucracies can be exasperating — cautious to a fault, obstructionist for no evident reason. But if government bureaucracy were defined only by its vices, we would have jettisoned it a long time ago. Bureaucracy is also, as Max Weber pointed out in The Theory of Social and Economic Organization in 1920,

     

    the most rational known means of carrying out imperative control over human beings. It is superior to any other form in precision, in stability, in the stringency of its discipline, and in its reliability…. The choice is only that between bureaucracy and dilettantism in the field of administration.

     

    On this last point, the presidency of Donald Trump is amply illustrative. Trumpian dilettantism collided with bureaucratic resistance again and again, and it was bureaucracy that (thank God) kept the Trump administration from spinning completely out of control. Most crucially, it was Justice Department bureaucrats who, when Trump disputed the 2020 election, threatened to resign en masse if Trump replaced Acting Attorney General Jeffrey Rosen with Jeffrey Clark, a MAGA sycophant eager to file whatever lawsuit the president desired to hang on to power. The lifers’ threat worked, and Trump backed down. Trump is now plotting his revenge with a scheme to strip career federal employees in “policy-related positions” of all civil service job protections, reviving Charles Guiteau’s fever dream of an unmolested spoils system. “We need to make it much easier,” Trump said in July 2022, “to fire rogue bureaucrats who are deliberately undermining democracy.” In this instance, “undermining democracy” means upholding the rule of law.

     

    Overturning Chevron is every bit as important to the business lobby as overturning Roe was to evangelicals. Even more than tax cuts, the possibility of repealing Chevron, and with it the regulatory burden — which could also be called the regulatory duty — that Chevron represents, was why corporate chiefs held their noses and voted for Trump in 2016. To reduce or eliminate the administrative state’s ability to interpret statutes is to reduce or eliminate regulation, because what is regulation if not the interpretation of statutes? As Justice Antonin Scalia wrote in 1989 (before he, too, changed his mind about Chevron), 

     

    Broad delegation to the Executive is the hallmark of the modern administrative state; agency rulemaking powers are the rule rather than, as they once were, the exception; and as the sheer number of departments and agencies suggests, we are awash in agency “expertise.” …. To tell the truth, the search for the “genuine” legislative intent is probably a wild goose chase.

     

    I would quarrel only with Scalia’s placement of snide quotation marks around the word expertise — was he himself not an expert? — and with his idealized notion of a recent past in which regulators seldom regulated. Four decades ago, conservatives such as Scalia accepted the administrative state as legitimate and necessary because they expected to control it. Now they realize that often they do not control it, so they want to kill it. 

     

    The conservative lie about regulation is that it is an anti-democratic conspiracy to smother capitalism. In truth, the administrative state came into being to allow capitalism to flourish in the industrial and post-industrial eras without trampling democracy. Today we sometimes hear it said that American democracy is in peril, but with six of the world’s ten biggest companies headquartered in the United States (ranked by Forbes according to a combination of sales, profits, assets, and market value) not even conservatives bother to argue that American capitalism is in peril. The war on the administrative state is not a sign that American business is too weak. It is a sign that American business is too strong — so strong that the business lobby, abetted by fanciful legal theories and mythologized history, is tempted to break free of the rules democracy imposes on it.

     

    In a pluralistic society, it is natural for any constituency — even business — to strive for greater power. But it would be immoral and self-destructive for the broader public, acting through its government, to grant such power. Capitalists live and thrive not in isolation but within a society, among people whom they are obliged not to harm. The damage they can do is complex enough to require scrutiny from government bureaucrats. One blessing of a functioning democracy is that we citizens (up to a point) can take for granted that our government will perform this necessary work. That assurance lets the rest of us pursue happiness and get on with our lives. It may soon be imperiled by the Supreme Court and, God forbid, a second Trump term. And so we must pray that we avert such perils, and apply all available tools in our democracy to preserve and protect the administrative state. Long may it rule.