Viral

    1
    Any one of these masked avengers
    might be moonlighting as another Captain Rock,
    might set out not only to censure
    but incinerate a rich farmer dreading his knock

    at midnight, a cowpuncher, a calf-drencher,
    a dweeb journalist, a helmetless jock
    courting death by misadventure,
    a negotiator trying to break the deadlock

    between boss and union, a prominent backbencher
    imagining he’s standing for re-election
    when he’s making a speech from the dock,

    a career civil servant both inured and indentured
    to a twice daily intravenous injection
    of horse piss and poppycock.

    2
    Any one of them might be an insurance underwriter
    taking the tube from Maida Vale,
    an architect pulling an all-nighter
    while she works to scale,

    a private investigator flicking a cigarette lighter
    and putting another nail
    in his coffin, a developer of an antibody titer,
    a Muddy Waters failing to curtail

    a Lightnin’ Hopkins, a bishop adjusting his mitre,
    a baker proffering a baker’s dozen
    of cakes and ale,

    a professional flautist, an amateur fire-fighter,
    a migrant worker who met your forty-second cousin
    at the blading of the kale.

    3
    Any one of these masked avengers
    might be moonlighting as Atticus Finch,
    might be reliant on the kindness of strangers,
    a blue man of the Minch

    crossed with a ginger, a money exchanger
    feeling the pinch,
    a lumberjack, a barista, a Texas Ranger
    tightening the cinch

    on any one of these masked avengers
    moonlighting as a butterfly nun, a lab technician,
    a boxer prolonging a clinch

    rather than putting themself in danger,
    a restorer of Tintoretto or Titian
    taking on the world square inch by square inch.

    A Bull

    Every day putting a fresh spin
    on how he maintains that shit-eating grin
    despite his notoriously thin skin.
    The quagmire of what-might-have-been.

    Every day shouldering an invisible tray.
    Hello, hello. Olé, Olé.
    His musing on how best to waylay
    a hiker passing through a field of Galloways.

    Every day aiming to swat
    the single fly that keeps tying and untying a knot
    before taking another potshot.
    Rolling through the Krishna Valley like a juggernaut.

    Every day trying to err
    on the side of standing firm. Foursquare.
    The singlemindedness of a Berber
    about to take out a French Legionnaire.

    Every day surviving by dint
    of three of his four hooves being knapped flint.
    Hanging out the bloody bandage of his, hint hint,
    barber’s pole. His stick of peppermint.

    Every day his hoofprints in the sand-strewn park
    have enclosed so much in quotation marks.
    Not even Job or Abraham, hark hark,
    is a patch on our patriarch.

    Every day the holy show
    of leather dyed robin egg blue by Tiffany & Co.
    Areas strictly off-limits? Strictly no-go?
    The wilds of Connaught. The stockyards of Chicago.

    Every day rising at 5 am,
    determined to stem
    the flow of misinformation from the well at Zem-Zem.
    His dangle-straw from a crib in Bethlehem.

    Every day fighting shy
    of the possibility his eye
    is a shellac-gouge from an old hi-fi.
    His helmet appropriated from a samurai.

    Every day the mob
    threatening a hatchet job.
    Their hobbling across concrete. Hobnob. Hobnob.
    Their sidelong glances at his thingmabob.

    Every day the urge to rut
    at odds with his yen for whole grain calf nuts.
    The “my-my” and “tut-tut”
    of that bevy of cattle at their scuttlebutt.

    Every day his own cow’s lick
    even more at odds with his almighty mick.
    How come his second cousin, the dik-dik,
    gets to trip the light fantastic?

    Every day taking a bow
    before settling back to plow
    the rowdy-dow-dow
    of a Filipino swamp-buffalo, or carabao.

    Every day plotting how to get even even with the get
    who’s trolling him on the internet.
    Under the vapor trails of the jet set
    the solidity of his silhouette.

    Every day his image picked out in tin
    to signify there being room at the Inn.
    Bottoms up. Chin-chin.
    The gulping of milk punch from a pannikin.

    Every day cruising the main drag
    in anticipation of raising his own red flag
    to the plaza’s rag-tag
    bunch of scamps and scallywags.

    Every day forced to cram
    for some big exam.
    The difference between quondam and quamdam.
    The origins of the dithyramb.

    Every day a razor. Every night a strop.
    Rush tickets for Carmen at the Met cost $25 a pop.
    Get a move on, would you? Chop chop.
    A world in which so much “art” is agitprop.

    Every day taking a hit
    from some little shit
    armed with the latest version of lit crit.
    The fly still looping the loop in his Messerschmidt.

    Every day, it would seem, rekindling a flame
    against the culture of shame
    and its interminable blame game.
    Every day countering a counterclaim.

    Every day forced to pit
    himself against Holy Writ
    and the nitwit
    for whom the Lascaux paintings are counterfeit.

    Every day having to whisk
    away the versifiers averse to risk.
    The ignominy of being supplanted, tsk tsk,
    by a ram on an Egyptian obelisk.

    Every day lying down with the lamb.
    What-might-have-been? More water over the dam.
    Having to meet the future head-on. Wham-bam.
    His muzzle a spermicide-slick diaphragm.

    Every day the thrill
    of balancing a natural proclivity and an acquired skill
    after a walk-on part in Cattle Drive with Chill Wills.
    His tongue turquoise-teal from chlorophyll.

    Every day learning not to pin
    his hopes on there being grain in the bin.
    The situation supposedly win-win
    when he mounts an upholstered Holstein mannikin.

    Every day the likelihood of a snub
    from a warble grub
    even as he rises above the hubbub.
    Every day the flash-freezing of his syllabub.

    Every day busting sod
    whilst straddling a divining rod.
    His permanent disdain for the god squad
    by whom he was once overawed.

    Every day contending with the holier than thou
    attitude associated with the sacred cow,
    “kowtow” and “powwow”
    being terms he’s now obliged to disavow.

    Every day cutting some slack
    to the youths leaping over his back
    in Knossos. His dream of trading endless ack-ack
    for a week on the Concord and Merrimack.

    Every day starting to dig
    with his one obsidian hoof through the rigs.
    A lily-pad where a bigwig
    flies in and out in some sort of whirligig.

    Every day muddling through
    thanks to his tried and true
    ability to rise above the general to-do
    by thinking of it all as déjà-vu.

    Every day chewing gum
    like a Teddy Boy in a bombed-out slum.
    As for his success in rising above the humdrum?
    For a moment only. Only a modicum.

    Every day striking a blow
    against a more-or-less invisible foe.
    A life lived in slo-mo
    ever since a chute opened at the rodeo.

    Every day creating a stink
    against being pushed to the brink
    by the powers that be (nod nod, wink wink)
    with their newspeak and doublethink.

    Every day those massive chords on the synth
    as he’s rabble-roused from his plinth.
    His taking everything to the nth
    degree despite being consigned to a labyrinth.

    Every livelong
    day making of his hide a parish-encircling thong.
    His panko-encrusted balls a delicacy in Hong Kong.
    Subsisting on a diet of mashed kurrajong.

    Every day waiting for someone to deign
    to give him free rein.
    That shit-eating grin. How it’s maintained?
    Running rings around a mill that crushes sugarcane.

    Every day trying to weigh
    in the scales those who still flay
    the burnt offering and those who naysay
    such exaltation of the everyday.

    Every day making a dry run
    for either his moment in the sun
    or an air-injection captive bolt stun gun.
    The china shop of his skeleton.

    Stealing Kisses

    There was a pounding in my dream. Could it be the surging chant of the Crystals’ killer line, “And then he kissed me”? It seemed to me I was about to have the wild gaze and wilder hair of Natalie Wood or Harpo Marx descend on me. But as I awoke I realized that the eager head was Lassie come home.

    Kissing can be pretty nice, you used to hear. Between the ages of ten and thirteen, in the dark, kiss-kiss scenes went from being squirm-making to I can’t get enough of this. There were teenagers in the back row at the Regal and the Astoria who seemed to be going further still. What a time!

    Where has that habit gone? Was it waning at the movies well before Covid? Had “sincere” love stories turned comical, shoved aside by deadpan ironies for kids to sneer at? There was a time when filmgoers wept over love stories. Past the age of fifty, you may recall the collisional rapture of two faces and the overflow of mouths. Did waves peak in oral caves? Tongues winding in the dark. The binge streaming with saliva. Are you beginning to be thrilled or nostalgic in our protocols of distancing? Or does such sensual language feel awkward now? Foreplay rhapsodies from 1959 are generally as dead as On the Beach or Ben-Hur. In that same year, Some Like It Hot served up sex as a buffet of custard and blancmange, but the sweets tasted sour to warn us that the old association with love was demented. Hot could be very cold.

    Still, I love the metamorphosis from Harpo to Lassie, and every station on that line. Even if we’ve forgotten the other creature’s name, we only have to close our eyes and inhale to regain the hesitation and its wondering — will the other mouth be open to us? Real kisses depend on risk.

    So here’s a cue. I won’t tell you yet where it comes from, except to say it seems more or less modern, though that could tell you how old-fashioned I am: “The voice fell low, sank into her breast and stretched the tight bodice over her heart as she came up close. He felt the young lips, her body sighing in relief against the arm growing stronger to hold her. There were now no more plans than if [X] had arbitrarily made some indissoluble mixture, with atoms joined and inseparable; you could throw it all out but never again could they fit back into atomic scale. As he held her and tasted her, and as she curved in further and further toward him, with her own lips, new to herself, drowned and engulfed in love, yet solaced and triumphant, he was thankful to have an existence at all, if only as a reflection in her wet eyes. ‘My God,’ he gasped, ‘you’re fun to kiss.’ ”

    Are you having fun, or imagining it? As I re-read this book during the pandemic time, I felt not just the fun but also the falling. Isn’t this [X] slipping into love, or into his adolescent scheme of that condition? And I felt yes, that’s what it can be like, even if the scene is confined to his point of view. He decides; he makes the move; he tastes her. She is there, no doubt: she curves in towards him; she has and uses her own lips. It’s possible that she detects “fun,” too, but we don’t quite know that as we brim with his feelings.

    I’m not yet saying where the passage comes from. But I feel its behavioral accuracy. Then as I read it over more slowly, I am unsure whether it deserves to be regarded as Serious Literature or romantic pot-boiler. In the age of movie, so many novels turned randy. A bodice is mentioned, and though it isn’t ripped you can feel it yielding. Or does that depend on the reader being male, or as “turned on” as I suspect the author was while writing it?

    The passage comes from a novel, and it does reproduce the awe and the urgency of a man in that situation. But it reminds me of how the movies used to do such scenes. The man is there, aroused, though literary propriety does not allow him a physical tumescence or require that he start to undress her and get to the matter of what was called “sex” for so long. Is he watching himself doing it, as in a movie? Had that cultural weight gathered by the time the passage was written? Maybe today she would undress him, a decisiveness that would once have been as shocking as nakedness on screen.

    You see, there was a time when going to the pictures was finding an impossible window to gaze on provocative strangers or other available views that you might be granted. You would stand before a prairie that could be your ranch, or a suave bedroom for amorous splendor; you might buzz a frisky auto down the highway, or land a single-engine aircraft on a rocky South American plateau. Or you might shift forward, more from desire’s magnetism than real motion, and collapse into the arms of Alain Delon or Doris Day. It cost a quarter to behold Colbert or Lombard in and out of Travis Banton gowns on Park Avenue. The cusp between absurd luxury and common poverty was as taunting as the one between your plain-faced envy and the sated narcissism of a movie star. The medium sighed, all the time: “Don’t you want us?” Can you believe how that innocent longing subdued Depression or war?

    The continent of old movies shines with all its absurd hopes. Isn’t that the special thing with humans – it can work with love and death – that wondering means the most? And so we swim against the current that says you could have him or her, or both of them, or you might die tonight.

    I am looking out on the historic sea of movies, doing my best to be a visionary or a helmsman. At the end of last summer, there were urgings to believe that Christopher Nolan’s Tenet was the film event that would rescue theatrical cinema, a spectacle to draw millions of us back into the dark, no matter that it might be infectious still. In that spirit I wondered about Tenet, the film on which Nolan was able to spend some $200 million, with a palindromic title for a world in which our disasters could shrink back to zero. It had a romance arrangement that was only schematic, in which the technology of reversing time was a truer object of lust than the adjacency of John David Washington and Elizabeth Debicki. What turned Nolan on were the special effects, the culture that has eclipsed the chance of humans being specially affected. This was so unpromising I started imagining another movie to rival it.

    Mine is called Kiss Me .… (The ellipsis is crucial; it gets at the precious hesitation in the act and its air of danger.) I need to admit at the outset that my project is not just risky, or insane; it may even be deemed anti-social, unwholesome, incorrect, and fit to be banned. Are you alarmed, or are we always eager to be stirred up by the prospect of a movie that could be kept away from us? Censorship and denunciation are the Santa Ana winds on the embers of desire.

    My starting point is simply that this has been a time of not touching, not kissing. Millions of us have felt constrained: we are cautious about kissing grandchildren, let alone yielding to the passing possibility of a stranger. Please don’t be horrified at this risky opportunism — but don’t rule out the chance that in your happy marriage or your settled relationship, you might be open to the moment of kissing that stranger, or better, being kissed by him or her, that one, coming down the stairs, or squeezing past in the corridor. I know, you shouldn’t do this illicit thing, but surely that doesn’t mean you stop thinking about it in the stairwell or the corridor. The cinema was always constructed as an uneasy place of delicious wondering. Even in the forbidden summer of Covid, it has occurred to some of us to approach that person in the corridor and hazard a kiss. Of course, this would be horrendously perilous and irresponsible — but those are brinks where we have always waited.

    So in my Kiss Me … I propose a series of brief meetings, with a glance and maybe a few words, and the dark promise of a first kiss. With this extra: there is a CUT before any kiss can be consummated. And the frustrated faces are the treasury of actors who have done kissing scenes in the last hundred years. You want examples? Think of Rudolph Valentino putting his arm and uneasy authority around Angie Dickinson; or Louise Brooks prepared to lure George Clooney out of a snooze with a cool-hand gotcha; or Jeanne Moreau getting into Nabokovian chat with Cary Grant, as they offer each other the wary eyes they always kept as they contemplated doing it; or Katharine Hepburn ready to grab Lupita Nyong’o; or the Julia Roberts from Pretty Woman frolicking in a Beverly Wilshire tub with the monster from Alien; or that youthful Julia (Miss Vivian, the hooker magicked into a social princess) being aroused by the more celebrated Julia from Notting Hill. For far too long in the thundery build-up of narcissism we have swallowed the nonsense that stars cannot play with themselves. They only reflect on themselves. (Nicole Kidman wooing her mirror could be a blockbuster.)

    Please don’t turn naïve on me. Don’t widen your eyes and ask, For heaven’s sake, how can we or they do this? They can do anything they can imagine. Cinema is the playground of fantasists ready to give up the ghost of fact. In our lifetime, photographic naturalism (think of it as fidelity) has been swept aside by the seething promiscuity of technological command, so that imagery of Gary Cooper in Desire in 1936 could be put side-by-side, arm in arm, or mouth to mouth, with the Kristen Stewart in Personal Shopper in 2016. Do you want to see that?

    Please don’t act confounded. This is no more than the fulfilment of the infinite library of the plain, charmingly humble screen that has been prepared to whore away, night after night, with any and every fresh movie you put up on the whiteness.

    I plan to let my Kiss Me … run several hours, enough for a bingeable evening, and since you’re in the habit of four hours a night now you have no pretext for dismay or inconvenience. What do you have left now but wondering in the all night long?

    I foresee a montage rhythm, with scene after scene, stars lined up for their close-up and the moment (as if it was a vaccine), always leading to the CUT that leaves us sighing, oh this time, please, let those mouths mingle. There will be no story, just the motif of people attempting to be expressed, to be riding on desire into love – to put their lips together and blow. The audience can come and go in their home theater. If you miss something — if a friend says you really had to see James Dean with Garbo, together yet in rivalry over being alone — well, you can go to rewind if you want, or just add one more legendary extra to the anthology of every screen coupling in one hundred and twenty-five years of movies. The medium may have kissed more people than it killed. And perhaps you will fall in love with desire again, which seems to me the pressing hope that Tenet ignored.

    You deserve a pause, and a clue. That passage that I offered above was published in 1934, though its kiss had occurred over ten years earlier — at night, in the mountains. Put aside the generosity of this clue (and the fits of frustration it may stimulate — desire needs denial as Dracula thirsts for blood). I want to suggest that the dating of this incident in 1934 is instructive. It’s not that kissing hadn’t been going on a while, but its special dreamy currency had been freed by movies. Suppose that in the mid-1930s a new chance at middle-class love bloomed in the smoky light of cinemas.

    Imagine a great swell of newcomers: essentially poor people but educated to read and to question old norms, able to believe there might be fresh ways of running society so that nobodies could be in love. This was the enormous increase in population after about 1850, the one that has never stopped; and these are the people who recognized the photograph and the movies as a breathtaking horizon of desire. It could be Lauren Bacall urging Bogart that kissing is better when the other person helps: 19 telling 45 that she’ll do anything. Her insolent talk ushered in fantasy gratification, so it might be hard for real companions to stay kissed for decades in the habit known as fidelity. We were kissing strangers and we have never given up on that gamble.  

    You will not find reliable social statistics on this collective promiscuity, but dreams thrive without stats. It may seem innocent to a point of infancy now, but consider how far in the first age of cinema the technology of voyeurism colluded with fantasy situations not just to educate us in how to kiss (no minor thing) but in grasping how the initiation could be a prelude to happiness.

    In the blink of an “open sesame” millions of people learned to behold glamorous strangers possessed by a new code of “beauty” or attraction, and to explore the vicarious state of romance. These screened figures could not take off all their clothes. They could not do “it,” whatever it was; and we need to be respectful of the uncertainty in the early years of movie about how “it” worked. But the restrained, illicit privilege of cinema encouraged the voyeuristic imagination. And it established that this vector was not only a Peeping Tom but a Madeleine (or a Judy) holding her breath to see how far sympathy would go.

    The rapture was there in silent pictures, but its fullest commitment came with sound. Albeit in a hush or a suspension, kissing registers aurally. You can film and record a kiss so that the couple feels removed from our observation. But you can also do it as if from inside the mouth, in a way that allows for breathing, a few swallows, and the undertone of digestion. Tongues need not speak, but they can be as muscular as the shark in Jaws.

    It is possible to track this progress, from Garbo and John Gilbert, all the way to Jimmy Stewart and Margaret Sullavan or Astaire and Rogers — Fred could kiss the woman with the merest touch. Still, kissing was regarded as a liberty requiring Hays Code rules: not prolonged, not too searching, not too abandoned or exultant. Do you recall — can you forget — Jean Arthur and Joel McCrea on the stoop in The More the Merrier? Until, in Notorious in 1946, Hitchcock had Cary Grant and Ingrid Bergman act like real people — like themselves even — as lovers who find a secret place and just kiss and smooch for hours on end, chatting sometimes to draw breath, and knowing that the talk is as carnal as the tongues. And Hitchcock taught us to watch — not simply where to look, but how to grasp human nature by spying.

    So there was an age — not long, but profound — in which spectators came to feel the possibility of love and the beckoning of an abandon that could not be seen because of censorship. That impediment was vital to the arousal. Thus one could be transported by the way Montgomery Clift and Elizabeth Taylor kissed in A Place in the Sun (1951), while knowing the lovely restriction that their embracing could not escape the screen — so he was going to be executed for dispatching or contemplating the murder of an inconvenient prior girlfriend (the whiny Shelley Winters).

    The best love stories in those days did not guarantee a happy ending. Desire was a streetcar that hurried past and gave us the anguish of feeling left behind. In From Here to Eternity (1953), Clift nuzzled Donna Reed (a whore out of the Ladies’ Home Journal) and glimpsed the possibility of peace and bliss instead of being bullied by the Army. In the same film, Burt Lancaster and Deborah Kerr rolled in the surf on a beach in Hawaii, and we agreed that it was the sexiest scene the cinema had ever permitted itself. “Nobody ever kissed me the way you do,” she told him. (I had to pretend to be fourteen to get in.)

    Those two romances would be cut off by the storylines, allowing Kerr and Reed to meet on the ship taking them back to the States in a wistful realization that their men had been pals. Both screen women were more ladylike than James Jones had intended in his novel. But the romance was set for eternity, forever out of reach.

    In the 1950s — a heyday of screen desire — you could feel the pressure building for these beautiful people to do the lovely thing (and the confidence that it was lovely, instead of physiological and elemental). If you were a teenager then, you were so lucky to be alive in the dark. But maybe you were doomed too, imprisoned in the medium of arousal, the cult of desire.

    I’m speaking for myself, and my gender choice, and in those 1950s the diagrammatic set-up for screen desire was very restrictive. Cinema was a show for guys — no matter that Clift for one was uncertain about his own sexual status. So I was vulnerable to the few occasions on which I got to kiss someone intensely and one thing led to another. It is hard to claw your way back from desire’s brink to what might be normal life. Indeed, it was a key strategy of the movies — and the doom of the culture they made — that normalcy was left seeming drab or unworthy.

    The name blanked out in the test quotation above, the [X], is Dick, or Dick Diver, the central character in Scott Fitzgerald’s novel Tender Is the Night. Dick has been in American uniform during the war and then he is a charming psychiatrist, living and working in Europe. The Swiss clinic to which he is attached has a needy patient, Nicole Warren, who was raped by her father. She is sixteen now, and she has noticed Dick and his spiffy uniform. She writes notes to him, half-flirtatious, half-cold, and always childlike. But Fitzgerald makes clear what Dick can hardly admit or forget, that she is very beautiful and sufficiently rich.

    Tender Is the Night moves me more than The Great Gatsby, even if both dwell on disasters that can attend desire. Even so, Tender Is the Night is untidy, not least in the way Nicole is both a presence and an absence. The narrative cannot take its eyes off her, but we never know what she feels or how disturbed she is. She is infatuated with the brilliant young doctor and wants to keep him in attendance. Perhaps she is replacing her father? But we never touch on what that rape did to her, beyond learning that from time to time she cracks up and is the center of “scenes” in bathrooms, fragments of distress heard and seen by alarmed bystanders.

    Does Nicole like to be kissed? Is she “turned on”? Do she and Diver have sex there in the Swiss night on the cold grass? Fitzgerald could no more get to that level than Howard Hawks would shoot Bogart and Bacall naked together in bed (though the public was surely thrilled that after To Have and Have Not the two became lovers, as if to prove that movie fantasy worked). So we are left uncertain whether Nicole becomes addicted to an older lover who has the veneer of a magic that may care for her disturbed mind, or is simply a rich and spoiled young woman who will engineer the course of tragic events through the blunt agency of her money.

    Dick and Nicole marry. This is an inappropriate arrangement misunderstood by onlookers. Dick’s professional partner sees that Nicole’s money could fund a clinic. Nicole’s older sister, “Baby,” disapproves, but she decides to bet on the professional proximity of Dick caring for the unpredictable Nicole. The couple have two children, but the “fun to kiss” passage is their climax of desire, and it is over as soon as it has begun.

    The moral complacency of Dr. Diver is underlined (rather too forcefully) when, once married, Dick falls in love with a teenage screen actress named Rosemary Hoyt and does eventually have sex with her (described very vaguely, as if Fitzgerald was unsure what happened in such transactions or was shy of spelling it out in a Scribner’s novel that might be a big seller). But Rosemary’s screen hit is a romance, a silent film called Daddy’s Girl. There are times when you hardly know whether Fitzgerald is witty and allusive or just lost in the dark. 

    The Divers have a heady season on the Riviera, making a beach fashionable and focusing an indulgent American tourism that came to be branded as “the lost generation.” Dick loses himself in kissing and a party-going lifestyle. But he is on the slide from the start, less a skilled surfer than a man who will drown one day. He dazzles onlookers, but nothing conceals his weakness. When Fitzgerald published the novel, he began it with the Riviera glory and then flashed back several years to the seduction of Nicole. But after the book had disappointed readers, he wondered whether he should have started with the seduction, to clarify the moral compromise. After Fitzgerald’s death, the critic Malcolm Cowley supervised a re-arranged structure, stronger and more tragic, so the book is now available in both versions, letting us assess its composition. The charm that Fitzgerald managed so easily has given way to autopsy.

    Dick turns alcoholic, prematurely middle-aged, and a liability as a clinician. With the effortless calm of even the wounded rich, Nicole discards him for a stupid but handsome second husband. Dr. Diver goes away; he is last heard of in upstate New York, clinging to sparse professional assignments and the women in the vicinity. The book still needs a lot of work, and a better and non-Keatsian title, but Tender Is the Night is unforgiving on the damage done by a reckless kiss in the era when movies were a slick treatment for just about everyone.

    We could guess what was coming by 1960. In Psycho, Hitchcock cut Janet Leigh to ribbons when he was not permitted to undress her fully. The sexual revolution had ways of uncovering intimate violence at the movies. In the next decade censorship broke down in landmark works such as Blow Up, Belle de Jour, Point Blank, Carnal Knowledge, Last Tango in Paris. An edge of hostility underlay the sexual breakthrough. But even the novelty and the naughtiness of those films was incidental if you could see that a tsunami was coming: it, the thing itself, the simulated outrage of pornography, where men and women become slave bodies through the sacrifice of talk, character, and narrative purpose. The cat was out of the bag: the cinema had invited the bourgeoisie to fuck as if it were a new right, free from the encumbrance of love. That required taut bodies, gymnastic agility, and bereft conscience, with the deadpan confidence that emotionally nothing mattered. Fantasists have escaped responsibility. And so it is that our movies these days seldom bother with luxuriant kisses and have given up on poetic desire.

    There was another consequence to all the liberated rutting: that movie romance is no more. (The stale and silly “romcom” formula, from which desire has been almost completely banished, is evidence of the decline.) And the demise of romance could mean the demise of cinema itself.

    We are left stranded, uncertain what healthy bodies and reckless minds are going to do with family, faith, and aging once the first kiss has taken up residence in our lonely mind — the prison of desire.

    The Pluralist Heart

    “Purity of heart is to will one thing,” Kierkegaard famously proclaimed. He was right about purity but wrong to aspire to it. It is a common mistake, made all the more familiar to ordinary people because it is a quality that heroes and fanatics, the characters who spice religious liturgies, history books, novels, poetry, and Netflix often share. Even Dante, no stranger to the complications of life and character, endorses it: “One object, and one object only, is rightly to be loved  ‘with all my mind, with all my soul, and with all my strength.’” Purity is simplifying, and it is romantic, and in an existence as relentlessly variegated as ours it promises a great relief.  

    And so it is tempting to structure one’s life around a single, dominating idea or community, to be fanatically, singularly loyal. But like some of the most irresistible temptations, this one is false. Life will never be simple and people will never be pure. Perhaps it is our very impurity that engenders the myth that purity is a human achievement, medicine for the drabness that is a regular feature of living. But what if purity, were it even attainable, were instead a human failing? What if diversity, of kinds and of qualities, is an unalterable and enriching characteristic of individual and communal experience? It seems almost platitudinous to point out that the individual lives in many realms and has many loyalties. In a single day she may in one realm be a hero, in another a loser, and often just another body standing in line. 

    Our various realms are the settings for the various roles we all play. An individual engages different parts of herself in a museum than in a place of worship, and with her friends than with her family, and with her mother than with her husband. Her priorities, the pattern of her attention, even her tone alters depending on which of her contexts she inhabits at a given moment. She is not faking it, she is still herself, but herself is many things. These shifts and developments are not deceitful; complexity is not synonymous with promiscuity. People cannot live fully in any other way, and we are right to seek fullness. 

    It is in some ways much easier to adopt the Kierkegaardian ideal and devote oneself entirely to a single loyalty. It is easier, but it is not consistent with the mess of human life, which is why such an exclusive commitment breeds dissatisfaction with reality. In extreme cases, for those inclined towards melodrama, the undivided path can become a search for martyrdom. In such cases, the sort that Marianne naively romanticizes in Sense and Sensibility, one sacrifices oneself to a love that was never compatible with reality. These loves are not sublime, they are absurd. And while this sort of sacrifice certainly requires courage, so can folly. It is entirely distinct from the heroism of sacrificing one’s life for others, for the sake of life, which is the heroism that we rightly admire.

    Contrast that type of heroism with Antigone’s sacrifice. Against the order of the king and on penalty of death, Antigone buried her brother’s corpse so that it would not rot above ground. Upon a first reading of Sophocles’ play, she is strikingly noble. Ismene, her sister, acknowledges the integrity of Antigone’s devotion, even while Ismene accuses her of being “in love with impossibility” or, as we might say, out of touch with reality. Indeed, Antigone admits outright that she was able to face certain death in service to her brother’s memory only because she had no interest in staying alive. “For death is gain to him whose life, like mine, is full of misery.” If she had some reason to keep living, she would not have been able to die as she did. Suicide is not a sacrifice for someone who wants to die. Antigone’s single-mindedness proved fatal. To appropriate a contemporary platitude, she chose only one lane, and it led to her destruction.

    Madame de Staël’s intoxicating heroine Corinne suffered from a related contempt for real life. She whipped herself into a long and fatal frenzy in service to a love she knew could never be realized. A perpetual state of excitation was the only kind of loyalty of which she was capable; she could nurture only a single love, and so consumingly that it would kill her. Corinne’s intensity, not her love, demanded the highest sacrifice. It was a product of her temperament, and not based on the object of her love, on her lover’s qualities. The heat all came from her. Reality bored her. She wanted it to be more, or grander, than it was; she could not tolerate the inanities of ordinary existence. “I had learned about life by reading the poets,” she confesses. “It is not like that. There is something barren about reality that it is useless to try to change.” And so she chose to defy it.

    Neither of these women could sustain a loyalty tested by commonplace experience. It was not that their hearts were too strong for the quotidian. They were, more accurately, too weak for it. A single, overwhelming love premised upon perpetual excitement is feeble, not powerful. Corinne and Antigone suffered from the same weakness, and it manifested for both in a similar monomania. For both of them, the simplification of self, its attempted transformation into a single thing, was poisonous. One-lane roads are the most dangerous ones.

    But it is also quite possible to love many loves poorly, and to dramatize all of them the same way that Corinne dramatized hers. Dissolve to Russia. Anna Karenina loved many people and idealized all of them. And despite nurturing multiple obsessions for different people, she was still dominated by a single loyalty: to her own feverish intensity. Actual human life was not enough, and it was too much, for her. Tolstoy’s omniscient narrator knows everything about Anna in each sentence describing her, but he knows only the Anna of the sentence, of the moment, who may not be the Anna who appears a few paragraphs later. Her intensity manifests in part in mutability, in rapid reinvention. She changes constantly and emphatically, in a saga of serial self-simplifications. If she stayed for too long in one place, she would be forced to reckon with the underwhelming horizons of human life. This makes commitment quite complicated for her. When she assures her son that she loves him “best in the world,” she believes it as much as he does. (“I know that,” Seryozha replied.) Later we discover with mild disgust that Anna’s image of Seryozha is dearer to her than the child in front of her: “The son, just like the husband, produced in Anna a feeling akin to disappointment. She had imagined him better than he was in reality. She had to descend to reality to enjoy him as he was.” As she is thrust from the orbit of one love to another, Anna’s center shifts and with it her heart. When she pledges her loyalty, when she tells them she loves them, her lovers are not wrong to believe her. Anna’s sincerity was not false, and there is nothing wrong with shifting centers. Her error was not that she loved many simultaneously, but that she preferred her idea of each one to the flesh-and-blood version. The fantastic intensity of her focus, its magic-making power, was possible for her because it was stimulated by attention that was never submitted to the test of time. 
Anna, like Corinne, preferred poetry to prose.

    Anna’s heart, you might say, was a serial monist. To find a way to escape the drudgeries of human life, she sought single intoxicants, one love and loyalty after another. The consequence of her desperate need to idealize was to shrink herself, to lose her sense of reality by photoshopping it, and stripping it of everything unattractive. 

    She believed that true loyalty must be blind, when the opposite is the case. Every person — and every country and every culture and every religion, for that matter — worth loving is sometimes pathetic, ugly, mean, stupid, and dull. The highest love is love that is not dispelled by lucidity or by criticism. It is the love of an individual capable of evaluating herself and her contexts with some degree of objectivity. Such an individual will not be simplified by love, or worship an “all-consuming” passion. She does not want “all” consumed. She will recognize that she is many things, admirable and not, and so is her lover; and so is her group and her style of life. Inferring from the fact that her own contexts and communities contain both strengths and weaknesses, she will conclude that others too contain strengths and weaknesses. She will discover — not unhappily — that she, and life, is plural. And this recognition of the mottled richness of reality, this plurality of values and moods and experiences and origins, this multiplicity of temperaments and cultures, will make her curious and adventurous. She may begin to wonder about existences and communities unlike her own, and she may decide to study them, and even to join them. She will cross boundaries. She will strengthen herself inwardly by moving outward.

    There are philosophers who argued vehemently against the possibility of this sort of migration. Herder declared that “each nationality contains its center of happiness within itself.” He believed that to become a person in full, to develop completely, one must be firmly rooted within a particular group which will equip its members with a distinct language, worldview, culture, and history. Once incubated within this culture, a person cannot uproot herself and pick a different one. Herder emphasized that every aspect of life is tinctured by one’s origins. Thought itself is a product of where one comes from, and ideas grow like cacti in the desert, rooted in and part of their landscape. This is because language, which he called “the organ of thought,” is developed by nations. A people, with all its predilections, inclinations, and prejudices, made it.

    Herder did not regard these particularities as parochial or stifling. Or rather, he regarded all of them as parochial and none of them as stifling. Parochialism is not a curse for this kind of pluralism, because parochialism is all there is. We all live in our specificities; but this specificity, this parochialism, the enormous power of one’s origins, does not quash the possibility of individual expression. By delving deep into one’s inherited resources, by absorbing and being absorbed by one’s tradition, Herder believed that a person can hope to gain some relationship with transcendent ideas. We already possess what we need for our highest purposes.

    By recognizing the richness in specificity, Herder secured his nationalism against chauvinism, because the specificities are everywhere: all cultures are equally authentic and equally rewarding. While he believed that all cultures are incommensurable with one another, he did not mistake incommensurability for inequality. Herder did not argue for the superiority of any one culture, or that a member of one culture cannot esteem an alien one (though some of his intellectual heirs did). Appreciation for the uniqueness of one’s own form of life inculcates an appreciation for other forms of life. Since one judges one’s own culture against its own standards, one learns not to judge a different culture according to one’s own standards. Membership in a specific group teaches us about membership generally, and so the member of one group who does not respect the particularities of another group is a hypocrite. (In this spirit, we might say also that a cosmopolitan with no love for her own country cannot really love all countries.)

    Yet there is something ironic, or worse, about Herder’s love of the many. It teaches respect across the borders, but it hardens the borders. It is deeply centripetal, and suspicious of travel. Its celebration of difference ordains that we stay with our own. There are many lanes, but we each take only one of them. This thinker who wanted us all to acknowledge the worth of every system did not encourage us to investigate any system but our own. This is a decidedly unadventurous pluralism, which prefers authenticity to curiosity. It precludes the possibility of escape, of seduction, of conversion, even of understanding anything foreign. Differences are universal but there is no universalism. Since we are not poor, none of us, we should make do with our own resources, each of us. 

    In Herder’s view, human life is pluralistic but human experience, individual experience, is not. Surely this is a stunted and inaccurate account. Is the multiplicity of traditions and cultures only something we know about and revere, but do not sample and explore? Of course not. If this were true, any attempt to understand an alien culture, let alone to adopt one, would be impossible. We would all be trapped in our particular idioms, like the punishment at Babel. But the punishment at Babel failed: there are overlaps, there are bridges, there are translations, there are mobilities. Take Herder’s view of the genesis of ideas. In truth they do not grow like plants in the desert. They travel. They are refined and developed and applied far away from where they are born. Indeed, their birthplace may be the most trivial fact about them; and this may be true of people, too. 

    History offers many examples. Consider the American, French, and Haitian revolutions, beginning in 1775, 1789, and 1791, which fell like ideological dominoes, one following the other despite an intervening ocean and cultural and linguistic differences. The idea of political liberty did not belong to any one of these revolutions, because ideas do not belong to anybody. All of these movements borrowed and altered a group of concepts, and changed the way that the previous champions of the concepts understood them. The Haitian revolution naturally inflamed the question of slavery in the United States, underscoring the irony that Americans continued to deprive their own people of the freedoms in the name of which the American revolution had been fought. A concept, no matter where it comes from, can be better honored by strangers than by those who earlier articulated it. The same lesson has been reinforced by recent history: the greatest championing of the American kind of freedom occurred not by Americans on American soil, but by the citizens of Hong Kong waving the American flag in their streets in protest against the most powerful and nefarious contemporary enemy of that symbol. The people of Hong Kong recognized themselves not in Donald Trump’s America, but in the American heroes of 1776. Likewise, the American revolutionaries of the eighteenth century should have recognized themselves in the early days of the French Revolution, as Haitians did.

    Of course not all loyalties are freely chosen; nobody entirely invents herself. Inherited loyalties should not be rejected merely because they are inherited. We do, all of us, in every group, inherit wisdoms. And often one discovers oneself already inside. Family, country, language, culture: these things are not less precious, when they are precious, because they were not freely chosen. But equally precious are the stimulations from outside. Is learning another language leaving one’s lane? And if it is, so what? Is it treasonous to conclude that the philosophy of somebody else’s ancestors is right, or better for oneself? What culture ever developed without cross-fertilizations from other cultures? Does coming from somewhere demand going nowhere?

    Everywhere in human life there are crossings, conversions, and migrations. A convert or an immigrant chooses her loyalties in a way that a born member does not, and so her identity has its own authenticity. Religions have always found converts spiritually glamorous (converts in, not converts out), and Americans, though not recently, have felt the same way about immigrants. A lonely soul — authenticity is no guarantee against loneliness — might wander into a church or a synagogue or a mosque or a classroom or a meeting hall in search of a philosophy or a community, and she might find it there, and good for her. 

    Taking multiplicity seriously is not easy. In a pluralist society, the individual begins with at least two loyalties: to her square in the quilt and to the quilt itself. (Without the quilt there is no square.) But this is just the beginning of her heart’s trajectory. What happens when she is exposed to the other squares? Members of our own pluralist society often dread this possibility. They think that their responsibility as citizens is to ensure that the state secures their square as much freedom and support as possible, and that the society flourishes best when each of its composite parts retreats inwards towards its kin. But this is not pluralism, it is Balkanization. Genuinely pluralistic living is unsettling for the nomadic impulse in all of us. It may quash that impulse or it may encourage it.

    Living in multiplicity is made up of unexpected challenges and inescapable influences and conflicting (or at least many) loyalties. Most of the time the unsettledness of living with conflicting loyalties will offer a healthy challenge, oxygen from our different settings, the distance to evaluate our own biases with a modicum of objectivity. But sometimes the freedom and the variety will be grueling. There will inevitably be moments when one will have to choose one loyalty over another. Camus publicly grappled with a challenge of this sort when, in 1957, after the ceremony in which he was awarded the Nobel Prize in literature, he remarked: “People are now planting bombs in the tramways of Algiers. My mother might be on one of those tramways. If that is justice then I prefer my mother.” (There are many versions of this story, but I take it from the careful appendix to Alice Kaplan’s edition of Camus’ Algerian Chronicles.) He had many commitments and many identities, and while he rejected none of them he still had to choose; and with fortitude, immediately, without a nervous attempt to justify it with what Bernard Williams would have called “one thought too many,” he made his choice. Every person who lives at the nexus of many loyalties will have moments of this kind, if much less melodramatic ones.

    Ideas and the other equipment of the spirit are not owned by people. It sounds silly to have to say, but it does bear saying now, that when I am captivated by an element of a foreign culture, so that I study it and reflect its influence in the work that I do, I am not stealing it, because it is much bigger than its provenance. The same is true of someone who is exercised by, and adopts an element of, my ancestral culture. I thank them for their interest. The calculus of gain and loss is the inverse of the one described in the Shakespearean speech: he enriches himself and does not make me poorer. No one was ever impoverished by being emulated. An idea only grows in influence and sophistication when it travels. If something is true in Chinese, it is also true in Arabic. If a concept was first expressed in English, does that mean non-English speakers can never understand it? Is democracy “Western,” and if it is “Western,” as the post-colonial critics say it is, then are non-Western democrats guilty of cultural appropriation? It is nothing but an expression of respect when a people translates a foreign literature into their language; there are many cultures that date a revolution in their literature to the first translations of Shakespeare into their native tongues. Was Langston Hughes out of his lane when he wrote sonnets? And readers of all genders continue to learn from Flaubert and Tolstoy what it is like to be a woman.

    It is the solemn duty of every citizen, particularly those of a multicultural and multiethnic society, to leave her lane, temporarily or even permanently if she wishes. Nobody can understand the world, or respect what is not familiar to her, while confined to her own lane. This is the lasting imperative of the pluralist heart.

    Where Are the Americans?

    They are begging us, you see, in their wordless way,
    To do something, to speak on their behalf
    Or at least not to close the door again.

    DEREK MAHON

    In foreign policy, the remedial efforts of the new administration, the post-Caligula administration, may come down to this: the position of the United States in the world must be restored, but not too much. Sometimes, when people speak of all the damage that Biden must undo, they talk about giving us a fresh start by getting us back to zero. But zero is zero; and nobody in their right mind, in the terrifying social and economic crisis in which we have been living, would propose zero, a return to 2016, as the proper objective of domestic policy. In social and economic policy we must be ambitious, monumental, transformative, and finally translate the humaneness that we profess into laws and programs and institutions; we must assist and even rescue the weak and wounded millions in our midst. But the Rooseveltian moment is to be confined to our shores. Abroad, I fear, we will rescue nobody. We will be only national humanitarians. We are resolved to “repair our alliances,” as we should — but this leaves the larger question of what we are to accomplish with our alliances, what we and our allies are to do in the world together. We are similarly resolved to “restore American leadership,” but we are also haunted by the prospect of genuine American leadership, grand leadership, leadership with power as well as politesse, unpopular but persuasive leadership, not least because we have distorted the modern history of American leadership into an ugly story, a sordid and simple tale of imperialism and exploitation, which is a calumny that will cripple us for the conflicts that are on their way, and are already here.

    One of the reasons that a return to 2016 will not suffice to recuperate our foreign policy is that the wayward course of the United States, its choice to abdicate global preeminence and to withdraw from decisive historical action, did not begin in 2016. We have been living contentedly in our shrunken version, in an increasingly Hobbesian world, in this springtime for Hitlers, for a dozen years. When historians record the history of American foreign policy in this century, they will be struck by the continuities between the Obama administration and the Trump administration, and thereby discomfit (I hope) many people. There are some differences, of course. Obama’s diplomatic diffidence was sold suavely, like everything he sells: an emotionally exquisite realism, a tender-hearted hard-heartedness, Brent Scowcroft’s policies with Elie Wiesel’s words. It was not, as in the case of Trump, animated by anything as coarse and candidly indifferent as America First, but in practice the callousness was the same. In the Obama era, no country, no ally, no democratic rebellion or dissident movement, no cleansed or genocidally attacked population, could count on America. (There was another difference: Trump, a swindler who hated to be swindled, at least got China right. The good news is that Biden appears to have noticed.) In 2016, in a radio interview, David Remnick, a wholly owned subsidiary of Obama, remarked to Ben Rhodes, another wholly owned subsidiary of Obama, that the president was “asking the American people to accept a tragic view of foreign policy and its limitations, and of life itself.” And he added, unforgettably: “Sometimes a catastrophe is what we have to accept.” What sagacity! But which catastrophes are the acceptable ones? So many atrocities, so little time. It takes a special kind of smugness, and politics, to be stoical about the sufferings of other people.  

    Insofar as the new Biden foreign policy apparatus is the old Obama foreign policy apparatus — are they now the Blob? — there is reason to worry that their former leader’s aversion to conflict, and his soulful patience with the anguish of others, will live on in a busy cosmopolitanism that mistakes itself for a robust internationalism — a genial, worldly, multi-lingual era of good feelings and recovered sanities that will still offer no serious impediment to the designs of rivals and villains. We will soon see how far the return of truth to government will reverse the isolationist foreign policy that was developed during government’s recent adventures with falsehood. Returning from Trump to Obama will not suffice. They knew the truth in the Obama White House, they knew the facts, but it set nobody free.

    Those who are pleased by the reduction of America’s position in the world like to say that America should lead not by power but by example. It is a clever argument, in that it imposes no obligations upon us other than to be ourselves, which is always the laziest imperative of all. Unfortunately for those who recommend this historical leisure, this self-congratulatory lethargy, the City on the Hill is presently in ruins. Who on earth would want to be us now? I exaggerate, of course: we never were Weimar America, and we sent our orange strongman packing, and our Constitution held; but we are miserable. Even in the good times, it was not terribly helpful, and it was even a little insulting, to say to the wretched of the earth, be like us. The only way any of them could be like us was to fight their own fights, in their own communities and in their own cultures, for the opening of their societies, ideally with the expectation that the United States would be there to assist them in their struggle for their particular inflection of the universal value of freedom. There was also another way in which they could be like us: they could come here and join their democratic and economic appetites with ours, which is why we should regard immigration as the definitive way of taking America’s promise seriously; but on immigration, too, we lost our footing years ago, and are a haven no more.

    How can we lead by example if we are not exceptional? But it is the people who despise the idea of American exceptionalism who insist that our example is our only claim to global authority. Their implication is that until we have justice at home, we cannot take an interest in justice abroad. We may as well inform the Uighurs and the Syrians and the Rohingya and the astonishing citizens of Hong Kong that they must wait forever. The worst example of such reasoning — it is one of the most outrageous sentences I have ever read — was Simone Weil’s observation that as long as France had colonies it had no moral authority to fight Hitler. As if ethical action is the duty only of saints. But the struggle for justice, at home and abroad, is always the work of sinners, whose introspection is supposed to catalyze, not paralyze, them. No, there is only one way to win the friendship of people beyond our borders, and it is to help people beyond our borders. We can be big in the world by doing good in the world. Lacking bigness or goodness, we (and not only we) are doomed. 

    I will be accused, at the very least, of a lack of irony. Don’t I know about the innocent blood spilled in the just wars? What about the interventions that went wrong? What about the infringements of sovereignty, that most hallowed of Westphalian principles? And the cocky way I am using that word “good” — good according to whom? These are fine and urgent questions. Naivete is especially unpardonable in discussions of power. Idealists have a special obligation to attend to considerations of costs and benefits; otherwise, as the Latin adage about doing justice warns, the world may perish for their stubbornness. Moreover, the rhetoric of political virtue, of enlightenment and liberation and democracy, was long ago appropriated by modernity’s monsters: they, too, use moral language, words like “good.” 

    But there are no perfect Westphalians: interests of state have regularly overruled the inviolability of states, often for shabby reasons. There is something grotesque about living with immoral and amoral transgressions against the state system but drawing the line at the moral ones. All these historical and philosophical complications persuade me only that we should be intellectually scrupulous, not that we should be practically feckless. The shoals of relativism, the taunts of epistemology, the consistencies of pacifism, pale before the sufferings of individuals and peoples. Their pain is overwhelmingly actual. Its facticity is almost stupefying. I have been reading a beautiful old essay by Ignazio Silone called “The Choice of Comrades,” where I find this: “It is a matter of personal honor to keep faith with those who are being persecuted for their love of freedom and justice. This keeping faith is a better rule than any abstract program or formula. In this age of ours, it is the real touchstone.” Pretty unsophisticated, no?

    It is now a terrible anniversary. It has been ten years since the beginning of the first great disgrace of the twenty-first century. I am referring to the Syrian catastrophe. Except for the dead and the raped and the tortured and the exiled, except for the refugees and the survivors, the dust seems to have settled for everyone else, and so it seems time for Americans to do what Americans do best: move on. Anyway, what can we do? The democratic rebellion in Syria that began in Dara’a in March 2011 was defeated. It was successfully transformed into an ethnic and religious conflict, the direst kind of contemporary war, by the tyrant Bashar al-Assad, who proceeded to destroy his own country and bomb his own people and, when his weakness was showing, deliver his state to the aggressions of the Iranians and their Hezbollah allies under the protection of the Russians, who rushed in where America feared to tread. This was a moral and strategic (look again at the map) failure of staggering proportions. It was a genocide and an invasion and a conquest. We chose to stand idly by, feeling bad and watching it. And the effects of our passivity were not confined to the borders of the ravaged land. As a consequence of the West having done nothing, so that the murderers met no resistance from outside, no force that could obstruct them, the stability of Lebanon and Jordan has been threatened, Turkey has embarked on a dark path, Russia has become a semi-demi-hemi-superpower, Iran has become a regional hegemon, the position of the United States in the world has plummeted, and fascists are coming to power in Europe. Not bad for nothing.

    There are primal historical scenes that leave an indelible imprint upon one’s sense of the world — one’s expectations of it and one’s obligations in it. When I was growing up, there were two such primal scenes, and they generated antithetical views of history and politics. The first was World War II, the second was Vietnam. All I needed to know about an individual was his or her primal scene, and the rest was easily filled in. There was post-war and there was anti-war. People who were postwar, who were imprinted by the effects of American power against fascism in Europe and Asia, and by the testimonies of the victims of totalitarianism who regarded American soldiers as saviors, had a large and admiring view of America’s role in the world, and a verified confidence that American power could be used justly and for justice. Post-war was not pro-war, but it was prepared to use American force in the name of certain values and certain interests — and did, with good, bad, and mixed results. Vietnam, the subsequent primal scene, was supposed to have shattered that confidence, and anti-war people deplored American intentions and interventions, which they viewed as cynical projections of power for power’s sake, and for money’s sake too, and as nothing other than imperialism. These different outlooks were to some extent generationally determined, but not entirely; they were applied not only to the uses of American military power but also more generally to the level of American activism around the world and to the level of American preeminence in the world; and they may be described, if labels be needed, as liberalism and progressivism. (Biden is a post-war become an anti-war, I think.)

    Now there are two new primal scenes, from which two corollaries of historical and strategic understanding similarly flow. The first is Iraq, the second is Syria. Iraq is Vietnam’s successor in the foreign policy of progressives, the transgression from which there is no recovery, the obscene noun that silences all talk of American action. As Obama remarks in his memoir, “of course, I considered the invasion [of Iraq] to be as big a strategic blunder as the slide into Vietnam had been decades earlier.” Iraq was the reason that we did not go into Syria. The poor Syrians had the misfortune of being exterminated after 2003. Their horrors came too late, when the United States was sunk in historical memory. I am not suggesting, of course, that American forgetfulness would have been preferable; only that the infamous lessons of Iraq are not as obvious as every person on every street corner in Washington seems to think. 

    I should confess immediately that I supported the Iraq war. I believed, on what I (and almost everybody except Scott Ritter) regarded as good authority, that Saddam Hussein possessed chemical and biological weapons, and since he had already used chemical weapons against the Kurds in Halabja in 1988, the question of his willingness to employ weapons of mass destruction was not a theoretical one. The use of such weapons, I continue to believe, and the threat of their use, constitutes a global moral emergency. When I realized that the assumption behind the invasion was wrong, that the dictator in Baghdad was bluffing his way to his own destruction, I promptly retracted my support, but I did so in a way that did not please the anti-wars. I wrote that the United States had been taken by its leaders into a major war on the basis of a mistake or a lie, and that this was a great historical scandal — but I expressed no regret about the overthrow of Saddam, and I continued to hope, not without evidence, that democratic progress could be made in the political openings that we — perhaps not by right, but in fact — were creating and supporting. I ardently hoped to see democracy in an Arab country. I was not surprised by the sectarian strife that was released by the collapse of the dictatorship, but this was a problem that Iraq, and other Muslim countries, would sooner or later have to face. In heterogeneous societies, tyranny is, among other things, a stop-gap measure, a deferment of the inevitable confrontation with the political challenges of difference and disharmony.

    I certainly did not come away from the partial debacle in Iraq with the conviction that the United States was henceforth disqualified from international interventions. There were a number of reasons for this. For a start, there is no single event that explains everything, that is all we will ever need to know. Paradigms, and primal scenes too, enslave our thinking, and historical analogies are never precise. (During the Trump years, for example, we never had our Reichstag fire.) Those who forget history are sometimes condemned to repeat it and sometimes not; and those who remember history will know that it never slows down or stops, it offers no ellipses or time-outs, there is no interregnum between crisis and crisis in which we may calmly reflect and attend conferences before we act again. If ever we needed a respite from history, it was in 1945; but events in Europe and elsewhere did not allow it. (The isolationism of the 1930s, and America’s unconscionably slow start in the defense of England and the other democracies, was owed to a similar exhaustion, and to vivid memories of a recent war.) It makes no sense, at least to me, to say that we could have halted the genocide and the occupation of Syria if only we had not intervened in Iraq. The relentlessness of history, and the eruption of evil, is always inconvenient. We are never adequately prepared for it, intellectually and materially, but there it is. If we should have intervened in Syria, we should have intervened in Syria.

    There were many, of course, who did not agree that an intervention in Syria would have been justified. Obama never said so explicitly, but my obsessive study of his foreign policy led me to the conclusion that it was his belief that the United States has no right to make itself in any way responsible, directly or indirectly, for a significant change in another country. (In accordance with the anti-war account of American foreign policy, however, there were three exceptions to this quietism, three countries about which American guilt demanded American action: Vietnam, Cuba, and Iran.) There are many objections, historical and philosophical, that can be made to such a view. This debate must still be engaged. The Obama people, who in the Trump years swanned around like disappointed interventionists, argued that there was nothing, nothing, that we could have done in Syria, and eventually some of them bizarrely had algorithms made to settle the matter. But the important point is that we tried it their way, and it failed. Whereas we do not know what the outcome of American intervention in Syria would have been, we do know what the outcome of American non-intervention in Syria has been. Was sitting on our hands really worth it? We were disgusting. In the Obama years I had the honor of many visits from many Syrian friends who wanted to talk with me on their way to an appointment at the White House. I advised them all the same thing: tell the officials what you know about the situation on the ground, be useful to them, speak as eloquently as you can, appeal to American ideals and American interests, and expect nothing. 

    If Iraq is now one primal scene, Syria is now the other. Syria is the cautionary tale about the stupendous consequences of inaction. Here is another heresy: I have no doubt that the costs of American action in Iraq have been much less than the costs of American inaction in Syria. Governments and peoples everywhere were watching. The governments learned that they can do whatever they wish to their peoples, and the peoples learned that America will not try hard, or at all, to stop their governments. The governments also learned that they could send their troops across borders with impunity, in campaigns of aggression that seize territory and disrupt states, as Iran did in Syria and elsewhere, and Russia did in Ukraine, and China is likely to do in Taiwan and the South China Sea. It is true that we do not have the power to determine the policies of other countries, but we do have the power to inhibit them and complicate them and thwart them. We have the power, if we want it. Anyway, we know all about the limits of American power: it is the foreign policy cliché of our era, our diplomatic catechism. The question before us is which limits to accept, and why. A limit is not a fate. In domestic policy, certainly, we are correctly enjoined to “go big,” and never mind the warnings about what the economy will bear.

    But how does the dispatch of American troops to another country differ from the Russian dispatch of troops to another country? Are our actions acceptable because they are ours? Of course not. All interventions are not the same. We have sometimes abused our power abroad, and we have experienced legal and political and cultural reckonings with those abuses. What makes the difference, plainly, is the purpose. When, in the first Gulf War, we and our allies expelled Iraq’s troops from Kuwait, we were upholding international law and coming to the assistance of an invaded country — but when James Baker, who was asked about the reasons for the war, said “jobs, jobs, jobs,” he put a bit of a dent in its legitimacy. There is a commonly held view that an American presence is no longer welcome in the Muslim world after Iraq. I do not speak or read Arabic, and my evidence is journalistic and anecdotal, but I wonder. It cannot have been lost on many Muslims that most of America’s military campaigns in recent decades were designed partly or wholly to assist Muslims — in Bosnia, in Kosovo, in Kuwait, in Afghanistan, in Libya, even in Iraq. (The Libyan campaign, though, was a model of how anti-interventionists intervene: the objective of the mission almost immediately became to end the mission, and we hastily left Libya to its hell.) Syrians, certainly, were desperate for American intervention; and the one night in ten years that I saw my Syrian friends happy was the night that Trump fired fifty-nine cruise missiles at a Syrian air base in retaliation for the Syrian regime’s sarin attack on the village of Khan Sheykhoun. When I asked H.R. McMaster whether the American operation represented a new policy of engagement or a Tweet with missiles, I angered him; but alas, it was a Tweet with missiles.

    People who need help usually welcome help. They do not ask to see the ideological credentials of those who have come to save their lives. The credentials game is the sanctimonious pastime of those who are not in need of rescue — the American left, for example, which had nothing at all to offer the Syrians, or the Ukrainians. The journals of the American left have established a strange intellectual ritual about human-rights emergencies: they report on them in plangent detail and then deplore any suggestion that we might actually do something to alleviate them. They deify dissidents and their hearts break for the women and the children, and then the ideological prohibitions kick in. In this planet of horrors nothing horrifies them more than the prospect of an American soldier somewhere. It suits many Americans to believe that for the rest of the world we are still the ugly Americans, since the subject of intervention is moot if we are not wanted. 

    We certainly should not go anywhere as conquerors or occupiers, but there may be justifications for an American military presence that have nothing to do with conquest or occupation. We must always be respectful of the “local dynamics,” though there will be occasions when the “local dynamics” are precisely what bring us to a faraway place. But when there is an earthquake in Haiti or a nuclear accident in Japan or an invasion in Ukraine or a genocide in the Balkans or a plague in Liberia, the broken countries generally, and correctly, look to us. We are not the cops of the world, but we do not turn our backs, at least not always. Right now we are hardly in danger of doing too much. This has been a golden age of too little. Soon, if we do not recover our sense of our historical role, the imperiled of the world, and the prudent, will start looking to China. (They will discover the original meaning of attached strings.) During the Obama years, when friends would return from trips abroad and offer reports over drinks, a pattern began to emerge from their observations. Wherever they went, to Europe, the Middle East, South Asia, Japan, or Latin America, in meetings with officials and journalists and politicians and bankers, they were asked the same question — a question there was no need to ask during the Trump years because its answer was repulsively self-evident. The question that hounded them everywhere was, Where are the Americans?

    “O to be discussing this face à face (or mano a mano, in this case) outside Kramerbooks with a stiff drink. Our last conversation there was really memorable.” So wrote a cherished friend not long ago, a man of strong intellect and immense learning, a steadfast liberal. He was right: we needed a long and rigorous conversation. The situation called for a café. We had been corresponding about the question of interventionism, about Syria and Iraq, about what the new administration’s foreign policy should be. We disagreed. Our moral and philosophical premises were the same, but he was shaken by my stubborn confidence that American force could still be used in service to those premises. “Are you not at all chastened by the damage such confidence has caused over the past two decades?” In fact I believe that the “damage” is not remotely the whole story, that the results have been decidedly mixed; and in this vale of tears I cannot scant mixed results. “We and the Syrians,” my friend wrote, not at all complacently or triumphantly, “are paying the price for the Iraq folly.” As a factual matter, he was right.

    And then he expressed another objection, an exceedingly interesting one. “This is not a moral disagreement,” he explained. “What I object to is the reflex to transform political problems into moments of moral self-revelation and self-definition — Malraux moments. ‘Here I stand’ — yes, yes, I know, but sometimes you need to stand over there and pipe down.” About not piping down, I plead guilty as charged, though I fail to see why the other side should not also stifle its ringing certainties. But in these debates I do not really mind the noise: the stakes justify some heat, which hopefully will be generated by some light. “For Zion’s sake I will not hold my peace,” the prophet Isaiah proclaimed. Elsewhere in his letter my friend had indeed impugned me for speaking not liberally but prophetically. And Isaiah’s proclamation may indeed be described as a Malraux moment.

    I understood what my friend meant by this notion. André Malraux was many things, but one of them was a grand self-mythologizer who, from the 1920s to the 1940s, valiantly but also narcissistically, hopped from world-historical crisis to world-historical crisis — China, Cambodia, Spain, Nazi-occupied France — an intellectual who dreamed of being a man of action, turning his participation in those cataclysms into an epic of self-description, and into novels. If he was a hero, it was not least in his own eyes. I had noticed long ago, in others as well as in myself, the grandiloquence that sometimes results from high moral arguments, the way that impassioned participation in a war of ideas can be confused with another variety of historical participation, the inflation of the self that comes from a certain intensity of commitment. (In 2002, during the debates about al-Qaeda and Iraq, Christopher Hitchens, with whom I stood shoulder to shoulder at least on these questions, declared: “You want to be a martyr? I’m here to help.” I remember thinking that his self-conception had crossed the line.) The prophetic feeling is a nice feeling, especially in a land where prophets pay no price.

    “Liberalism,” my friend continued, “should teach an art, or at least see the need for an art, of discerning when a fundamental moral issue is at stake and when it is not.” This is liberalism as a mentality, not as a doctrine. At the café I would have retorted that if intervention to stop a genocide, or to assist a democratic rebellion, is not a fundamental moral issue, I don’t know what is. But his charge of Malrauxism stays with me. He is on to something. Foreign policy must not be a form of self-expression. It must not be, as Americans like to say, about us. When we act abroad, it should not be to confirm a flattering picture of ourselves, or to furnish a sensation of our own rectitude. This is in part because statecraft should be a profoundly empirical activity, based on a sober evaluation of threats and opportunities — on an analytical disinterestedness without which our promotion of our values and our pursuit of our interests may become a menace to the world. The delusions of statesmen have murdered many millions of people. 

    Yet my friend’s warning against Malrauxism provoked me to another conclusion, which he will find perverse in the context of his prescriptions for caution. It is this: that there are circumstances in which our foreign policy, if it is not to be about us, must be about others. I do not mean only that diplomacy and strategy are always to some extent reactive, in that we do not have the capacity to determine all by ourselves, in our own time, based on our own preferences and our own moods, whether a crisis represents, say, a “new cold war.” There is always the not insignificant factor of the behavior of other states, inside and outside their borders. When I say that our foreign policy must sometimes be about others, I mean also something more radical, more swift, more humanistic, more exercised by peoples than by governments: that there are times when we must take action because of the needs of others. 

    The needs of others: we must agree to be distracted and quickened by their plight. They cannot be neglected because we are still discussing humility and hubris. We must be prepared to pause, to look up from the ordinary practices of international relations, for the purpose of support and rescue. People who are not citizens of a country sometimes have a legitimate claim on its power. In the case of refugees, for example, Kant recognized a “cosmopolitan right” that he called “universal hospitality,” which was eventually codified in the Refugee Convention of 1951. “We are concerned here not with philanthropy,” he sternly wrote, “but with right.” In this instance, the rights of strangers impose an obligation upon us. Do people who are being tortured, raped, thrown into concentration camps, and slaughtered so as to erase their identity from the face of the earth — do they have a right to demand that we come to their assistance and even liberate them? Maybe not, but it should not matter. They have a claim on our sympathy, and sympathy is cynical, vain, with no relation to conscience, unless one acts upon it. (There are many kinds and degrees of action: Iraq is hardly the model.) 

    By what right do we help them? It is the wrong question. The right one is, by what right do we abandon them? On certain occasions humanitarian intervention will coincide with our interests, and there is of course the long-term reward of gaining the friendship of the people we help; but sometimes we should use our power only because it is the right thing to do. We had no interests that would have been served by intervening against the genocide in Rwanda, except for our interest in being able to look at ourselves in the mirror.

    There are hard-boiled types who will scoff at all this, and dismiss it as altruistic, and mock it as not the way of the world. Well, so much the worse for the world. The world is the problem, not the solution. There is no shame in altruism, and when it is practiced by a state, by a strong state, by a great power, it may even modify international norms. It is certainly not inconsistent with the toughness that will be demanded of our leaders by the Great Game that has already begun to define this century. And it is certainly not the whole of foreign policy, though the less sentimental problem of world order will also require a new American emboldenment. The opposite of America First is not America Second. It is America in full, unafraid of history’s pace, unembarrassed by its enthusiasm for democracy and human rights, larger than its mistakes and its crimes, comfortable with the assertion of its power in its own defense and in the defense of others, inspired by the memory of its magnitude, repelled by the rumors of its decline. Only we can bring us down and only we can lift us up, and not only us. “We fell victims to our faith in mankind,” wrote Alexander Donat, a survivor of the ghettos and the concentration camps, “our belief that humanity had set limits to the degradation and persecution of one’s fellow man.” There are already too many people in too many places who fell victim to their faith in America.

    Art’s Troubles

    I

    Duly acknowledging that the plural of anecdote is not data, I begin with some stories drawn from the recent history of liberal democracy.

    • In November 2010, the Secretary of the Smithsonian Institution removed an edited version of footage used in David Wojnarowicz’s short silent film A Fire in My Belly from “Hide/Seek: Difference and Desire in American Portraiture” at the Smithsonian’s National Portrait Gallery after complaints from the Catholic League, and in response to threats of reduced federal funding. The video contains a scene with a crucifix covered in ants. William Donohue of the Catholic League claimed the work was “hate speech” against Catholics. The affair was initiated by an article contributed to the Christian News Service, a division of the Media Research Center, whose mission is to “prove — through sound scientific research — that liberal bias in the media does exist and undermines traditional American values.”

    • In October 2015, Dareen Tatour, an Israeli Arab from a village in the Galilee, was arrested. She had written a poem: “I will not succumb to the ‘peaceful solution’ / Never lower my flags / Until I evict them from my land.” A video clip uploaded by Tatour shows her reading the poem, “Resist, my people, resist them,” against the backdrop of masked people throwing rocks and firebombs at Israeli security forces. The day after the uploading, she posted: “The Islamic Jihad movement hereby declares the continuation of the intifada throughout the West Bank…. Continuation means expansion… which means all of Palestine. And we must begin within the Green Line… for the victory of Al-Aqsa, and we shall declare a general intifada. #Resist.” In 2018, Tatour was given a five months’ jail sentence. In May 2019, her conviction for the poem was overturned by the Nazareth District Court, but not the conviction for her other social media posts. The poem, said the court, did not “involve unequivocal remarks that would provide the basis for a direct call to carry out acts.” And the court acknowledged that Tatour was known as a poet: “freedom of expression is [to be] accorded added weight when it also involves freedom of artistic and creative [expression].” The Israeli Supreme Court rejected the state’s motion for appeal.

    • In 2017, the artist Sam Durant made a public sculpture, “Scaffold,” for location in the open grounds of the Walker Art Center in Minneapolis. It was an unpainted wood-and-metal structure, more than fifty feet tall, with a stairway that led to a platform with a scaffold. The work referred to seven executions between 1859 and 2006, including the execution in 1862 of thirty-eight Dakota-Sioux men. Protesters demanded the work’s destruction: “Not your story,” “Respect Dakota People!” “$200.00 reward for scalp of artist!!” Following mediation, the work was surrendered to the activists, who reportedly dismantled it, ceremonially burning the wood. Art critics endorsed the protest: “In general it’s time for all of us to shut up and listen.” “White Americans bear a responsibility to dismantle white supremacy. Let it burn.” The artist himself denied that he had been censored. “Censorship is when a more powerful group or individual removes speech or images from a less powerful party. That wasn’t the case. I chose to do what I did freely.”

    • In April 2019, three Catholic priests in the Polish city of Koszalin burned books that they said promote sorcery, including one of J.K. Rowling’s Harry Potter novels, in a ceremony that they photographed and posted on Facebook. The books were ignited as prayers were said and a small group of people looked on. They cited in justification of the ceremony passages from Deuteronomy (“The graven images of their gods shall ye burn with fire”) and Acts (“Many of them also which used curious arts brought their books together and burned them before all men”). In August of the same year, a Roman Catholic pastor at a school in Nashville, Tennessee banned the Rowling novels: “These books present magic as both good and evil, which is not true, but in fact a clever deception. The curses and spells used in the books are actual curses and spells; which when read by a human being risk conjuring evil spirits into the presence of the person reading the text.”

    • In August 2019, the release of the film The Hunt, in which “red state” Americans are stalked for sport by “elite liberals,” was cancelled. Donald Trump had tweeted: “Liberal Hollywood is Racist at the highest level, and with great Anger and Hate! They like to call themselves ‘Elite,’ but they are not Elite. In fact, it is often the people that they so strongly oppose that are actually the Elite. The movie coming out is made in order to inflame and cause chaos. They create their own violence, and then try to blame others. They are the true Racists, and are very bad for our Country!” The studio explained: “We stand by our film-makers and will continue to distribute films in partnership with bold and visionary creators, like those associated with this satirical social thriller, but we understand that now is not the right time to release this film.” Nine months later, with a new marketing campaign, the film duly appeared. The director explained: “The film was supposed to be an absurd satire and was not supposed to be serious and boring…. It’s been a long road.”

    • In Germany, a Jewish activist has been litigating to have removed a thirteenth-century church carving of the Judensau, or “Jewish pig,” an infamous trope of medieval anti-Semitism, from the outer wall of the main church in Wittenberg. A memorial plaque installed in November 1988, containing in Hebrew words from Psalm 130, “Out of the depths, I cry to you,” does not satisfy the litigant. The district court ruled that the continued presence of the carving did not constitute evidence of “disregard for Jews living in Germany.” The judgment was upheld this year by the Higher Regional Court: the presence at the church of both a memorial to the Holocaust and an information board that explains the Judensau as part of the history of antisemitism justified retaining the carving. The campaign to remove the carving has Christian clerical support: “The Judensau grieves people because our Lord is blasphemed. And also the Jews and Israel are blasphemed by showing such a sculpture.” A local Jewish leader took a different position: “It should be seen within the context of the time period in which it was made,” he argued. “It should be kept on the church to remind people of antisemitism.”

    • Two years ago the artist Tomaz Schlegl built a wooden statue of Trump in Moravce, Slovenia. The twenty-six-foot structure had a mechanism that opened Trump’s red-painted mouth, which was full of pointy teeth. The artist explained that the figure has two faces, like populism. “One is humane and nice, the other is that of a vampire.” He had designed the statue “because people have forgotten what the Statue of Liberty stands for.” The statue, he allowed, was not really a likeness of Trump, but “I want to alert people to the rise of populism and it would be difficult to find a bigger populist in this world than Donald Trump.” It was burned down in January 2020. The mayor of the town, deploring the arson, commented: “This is an attack against art and tolerance…. against Europe’s fundamental values.”

    There is something arbitrary about this group of stories — others could have been chosen, without any loss of coherence in the picture of contemporary artistic freedom. There was the campaign against Dana Schutz’s painting of Emmett Till at the Whitney Museum, against Jeanine Cummins’ novel American Dirt, against Woody Allen’s film deal with Amazon, which was cancelled, and against his memoir, which was cancelled by one publisher but published by another one. There was the decision by the National Gallery of Art and three other major museums to delay until at least 2024 the Philip Guston retrospective planned for 2020, so that “additional perspectives and voices [can] shape how we present Guston’s work” (museum-speak for “we will submit our proposals to a panel of censors”). And though they are all recent stories, the larger narrative is not altogether new. In 1999, Mayor Rudolph Giuliani took exception to certain works in an exhibition at the Brooklyn Museum, notably Chris Ofili’s painting The Holy Virgin Mary. The mayor relied on a newspaper report: the Virgin was “splattered” with elephant dung, the painting was offensive to Catholics, the museum must cancel the show. (The museum offered to segregate some pictures and withdraw Ofili’s, but the mayor responded by withholding funds and terminating the museum’s lease. The museum obtained an injunction against him; the city appealed, and then dropped the appeal. The Jewish Orthodox Agudath Israel intervened on the mayor’s side.) In 2004, the Dutch filmmaker Theo van Gogh was shot dead in Amsterdam by Mohammed Bouyeri, a 26-year-old Dutch-born Muslim who objected to the film Submission that Van Gogh had made earlier that year, with Ayaan Hirsi Ali, about violence against women in Islamic societies; the assassin left a note to Hirsi Ali pinned by a knife to the dead man’s chest. And in 2005, in Denmark, there occurred the cartoons affair.
In response to an article about a writer’s difficulty in finding an illustrator to work on a book about Mohammed, the newspaper Jyllands-Posten published twelve editorial cartoons, most of them depicting him. There was no immediate reaction. The Egyptian newspaper Al Fagr republished them, with no objection. Five days later, and thereafter, there were protests by fax, email, and phone; cyber-attacks, death threats, demonstrations, boycotts and calls to boycott, a summons to a “day of rage,” the withdrawal of ambassadors, the burning of the Danish flag and of effigies of Danish politicians, the exploding of car bombs, appeals to the United Nations, and deaths — about 250 dead in total and more than 800 injured. A decade later, when similar cartoons were published in the French satirical magazine Charlie Hebdo, its staff was massacred in its offices in Paris.

    There is more. Behind each story, there stand others — behind the Allen stories, for example, there is the Polanski story and the Matzneff story. And behind those, some more foundational stories. In 1989, the Ayatollah Khomeini issued his fatwa against Salman Rushdie and his novel The Satanic Verses, which had already been burned in Muslim protests; there followed riots and murders, and the writer went into hiding for years. (The threat to his life subsists.) Also, in 1989, the Indian playwright and theater director Safdar Hashmi was murdered in a town near Delhi by supporters of the Indian National Congress Party; the mob beat him with iron rods and police batons, taking their time, unimpeded. In the United States in those years, there occurred, among other depredations against literature and the visual arts, the cancelling of the radio broadcast of Allen Ginsberg’s poem Howl; the campaign against Martin Scorsese’s film The Last Temptation of Christ; the political, legal, and legislative battles over Robert Mapplethorpe’s The Perfect Moment and Andres Serrano’s Piss Christ, and over Dread Scott’s What is the Proper Way to Display the United States Flag?; the dismantling of Richard Serra’s site-specific Tilted Arc; and the campaign against Bret Easton Ellis’ novel American Psycho. There were bombings, boycotts, legislation, administrative action, the ripping up on the Senate floor of a copy of Piss Christ by a senator protesting the work on behalf of “the religious community.”

    Not all these stories have the same weight, of course. But taken together these episodes suggest that new terms of engagement have been established, across political and ideological lines, in the reception of works of art. The risks associated with the literary and artistic vocation have risen. New fears, sometimes mortal fears, now deform the creative decisions of writers and artists. Literature and the visual arts have become subject to a terrible and deeply illiberal cautiousness. (As a Danish imam warned the publisher of the cartoons, “When you see what happened in Holland and then still print the cartoons, that’s quite stupid.”) The interferences with what Joseph Brodsky called literature’s natural existence have grown brutal, overt, proud. We have witnessed the emergence of something akin to a new censorship conjuncture.

    There are ironies and complications. This new era of intolerance of, and inhibition upon, literature and the visual arts has occurred in the very era when the major ideological competitor to liberalism collapsed, and with it a censorship model against which liberal democracies measured their own expressive freedom. Or more precisely and ironically, in the era when the Berlin Wall and Tiananmen Square occurred within months of each other — the former exemplifying the fall of tyranny, the latter signifying the reassertion of it. When China conceived the ambition to become the major economic competitor of the capitalist liberal democracies, it also initiated a censorship model to which over time the greatest private corporations of these same liberal democracies would defer. Since artworks are also products that sell in markets — since filmmakers need producers and distributors, and writers need publishers and booksellers, and artists need galleries and agents — they are implicated in, and thus both enabled and constrained by, relations of trade and the capitalist relations of production. Corporations will both accommodate censoring forces and be their own censors. As their respective histories with the Chinese market show, the technology corporations tend to put commercial interests before expressive freedoms. And that is another irony: this assault on art took place even as the Internet, and then the World Wide Web, were invented, with their exhilarating promises of unconfined liberty. But the new technology was soon discovered to have many uses. As Rushdie remarked in Joseph Anton, his memoir of his persecution, if Google had existed in 1989 the attack on him would have spread so swiftly and so widely that he would not have stood a chance.

    And all the while a new era of illiberalism in Western politics was coming into being, for many reasons with which we are now wrestling. 1989 marked the moment when liberalism’s agon ceased to be with communism and reverted instead to versions of its former rivals: communitarianism, nationalism, xenophobia, and religious politics. New illiberal actors and newly invigorated illiberal communities asserted themselves in Western societies, as civil society groups came to an understanding of a new kind of political activity. So if one were to ask, when did art’s new troubles begin, one could answer that they began in and around that single complex historical moment known as 1989. And these contemporary art censorship stories differ from older arts censorship stories in significant ways.

    II

    All these stories are taken from the everyday life of liberal democracies, or more or less liberal democracies. In not one of these stories does an official successfully interdict an artwork. There are no obscenity suits among them. With just one exception, there are no philistine judges, grandstanding prosecutors, or meek publishers in the dock. Customs officials are not active here, policing borders to keep out seditious material. There are no regulators, reviewing texts in advance of publication or performance. So how indeed are they censorship stories at all? We must reformulate our understanding of censorship, if we are to understand the censorship of our times.

    “Censorship” today does not operate as a veto. It operates as a cost. The question for the writer or the artist is not, Can I get this past the censor? It is instead, Am I prepared to meet the burden, the consequences, of publication and exhibition — the abuse, the personal and professional danger, the ostracism, the fusillades of digital contempt? These costs, heterogeneous in everything but their uniform ugliness, contribute to the creation of an atmosphere. It is the atmosphere in which we now live. The scandalizing work of art may survive, but few dare follow.

    Censorship today, in its specificity, must be grasped by reference to these profiles: the censoring actors, the censoring actions, and the censored. With respect to the censoring actors, we note, with pre-1989 times available as a contrast, that there has taken place a transfer of censoring energy from the state to civil society. In the West, certainly, we do not see arrests, raids, municipal and central government actions such as the defunding or closure of galleries, prosecutions and lawsuits, or legislation. Insofar as the state plays a part, it tends to be a neutral spectator (in its executive function) or a positive restraint on censorship (in its judicial function). In respect of civil society, however, there has occurred a corresponding empowerment of associations, activists, confessional groups, self-identified minority communities, and private corporations. The censors among the activists are driven by the conviction that justice will be advanced by the suppression of the artwork. Their interventions have a self-dramatizing, vigilante quality. Artworks are wished out of existence as an exercise of virtue. The groups are very diverse: “stay-at-home moms” and “military veterans” (disparaged by “liberal Hollywood”), policemen (disparaged by rapper record labels), social justice warriors, and so on. Their censorings do not comprise acts of a sovereign authority; they have a random, unpredictable, qualified character, reflecting fundamental social and confessional divisions. As for the corporations, when they are not the instrument of activists (Christian fundamentalists, say), their responses to activists, foreign governments, and so on tend towards the placatory.

    Correspondingly, with respect to censoring actions, we find a comparable miscellany of public and private (when not criminal) initiatives in place of administrative and judicial acts of the state. The activists, right and left, have available an extensive repertory of tactics: demonstrations, boycotts, public statements, digital denunciations, petitions, lethal violence, serious violence, and threats of violence, property destruction, disruptions and intimidations, mass meetings, marches, protester-confrontations, pickets, newspaper campaigns. As for corporations, the tactics, again, have become familiar: refusals to contract, and terminations of employment, publishing, and broadcasting contracts already concluded; editing books and films in accordance with the requirements of state authorities in overseas markets.

    In all these instances, the wrong kind of attention is paid to an artwork — hostile, disparaging, dismissive. There is no respect for the claims of art; there is no respect for art’s integrity; there is no respect for artmaking. Art is regarded as nothing more than a commodity, a political statement, an insult or a defamation, a tendentious misrepresentation. If it is acknowledged as art, it is mere art — someone’s self-indulgence, wrongly secured against the superior interests of the censoring actors. All these actions are intended to frighten and burden the artist. And so artists and writers increasingly, and in subtle ways, become self-censoring — and thereupon burden other artists and writers with their own silent example. Self-censorship is now the dominant form of censorship. It is a complex phenomenon and hard to assess — how does one measure an absence? But recall the Jewel of Medina affair of 2008, the novel about one of the Prophet Mohammed’s wives that was withdrawn by Random House because it was “inflammatory.” Who now would risk such an enterprise? Instead we are, with rare exceptions, living in an age of safe art — most conforming to popular understandings of the inoffensive (or of “protest”), a few naughtily transgressive, but either way without bite.

    As for the censored: what we have described as the given problem of censorship — the heterogeneity of civil society censoring actors; the retreat of the state from censoring activity; the collapse of the Soviet Union as the primary adversary of a liberal order; the emergence of China as a powerful, invasive, artworld-deforming censor; the absence of any rule-governed censorship — has meant, among other things, that the pre-1989 defenses against censorship, such as they were, no longer work. They were deployed in earlier, more forensic times, when the state, the then principal censoring actor, was open to limited reasoned challenge, and when civil society actors were subject to counter-pressure, and were embarrassable. Essential values were shared; appeals could be made to common interests; facts were still agreed upon.

    *

    Art now attracts considerable censoring energy. There is no other discourse which figures in so many distinct censorship contexts. It attracts the greatest number of justifications for censorship. We may identify them: the national justification — art, tied up with the prestige of a nation, cannot be allowed to damage that prestige; the governing-class justification — artworks must not be allowed to generate inter-group conflict; the religious justification — artworks must not blaspheme, or cause offense to believers; the capitalist justification — artworks must not alienate consumers, or otherwise damage the corporation’s commercial interests.

    Yet the properties of art that trouble censors are precisely the properties that define art. An attack on a work of art is thus always an attack on art itself. What is it about art works that gets them into so much trouble? We begin with the powerful effect that works of art have on us. We value the works that have these effects — but they also disturb us, and the trouble that art gets into derives from the trouble that art causes. The arts operate out of a radical openness. Everything is a subject for art and literature; everything can be shown; whatever can be imagined can be described. As the literary critic Terence Cave observed, fiction demands the right to go anywhere, to do anything that is humanly imaginable.

    Art works are playful, mischievous; they perplex, and are elusive, constitutively slippery, and therefore by their nature provocative. Art serves no one’s agenda. It is its own project; it has its own ends. This has an erotic aspect: playfulness has its own force, its own drive. Art preys upon the vulnerabilities of intellectual systems, especially those that demand uniformity and regimentation. Art is disrespectful and artists are antinomian. The artist responds to demands of fidelity: Non serviam. He or she is consecrated to a resolute secularity and an instinct to transgress boundaries: the writer makes the sacred merely legendary, the painter turns icons into portraits. (The religious artist does not altogether escape this effect.) It makes sense to say, “I am a Millian” or “I am a Marxist,” but it does not make sense (or not the same sense) to say, “I am a Flaubertian” or “I am a Joycean.” The opinions that may be mined are typically amenable to contradictory interpretations — they invite contradictory interpretations. And let us not overlook the obvious: parody and satire, comedy and farce, are aesthetic modes. Laughter lives inside literature.

    Identity politics tends to be fought out on the field of culture because identity is among art’s subjects. Art confers weight and depth upon identity; and so it is no wonder that identity groups now constitute themselves in part through their capacity for censoriousness. Race politics, gender politics: art has a salient place in them, as do art controversies, in which the various communities pursue cultural grievances by denying legitimacy to certain symbolic expressions. Identity warfare is attracted to art in much the same way that class warfare is attracted to factories. Politics in our day has taken a notably cultural turn, and so art has become a special focus of controversy. Of course, low politics also plays a role in these outrages against art — the Ayatollah’s fatwa was a power-play against Saudi hegemony, and Giuliani’s protest against a sacrilegious painting was a means of distracting Catholics from his pro-choice record. But the problem cannot be reduced to such politics alone.

    Unlike artists, art cannot be manipulated. Specifically, works of art are immunized against fake news, because they are all openly fabricated. Novels are openly fictional: that is their integrity. The artist is the last truth-teller. As already fictional accounts, artworks cannot be subverted by “alternative facts,” and as forms of existence with a distinctively long reach, and a distinctive endurability, they are more difficult to “scream into silence” (Ben Nimmo’s phrase for the phenomenon described by Tim Wu as “reverse censorship,” a pathology of internet inundation). But this is hardly to say that works of art — and their makers — are not vulnerable. Artworks are accessible: books can be burned, canvases can be ripped, sculptures can be pulled down. They are also susceptible to supervision — by, among others, pre-publication “sensitivity readers.” One measure of censorship’s recent advance is the phenomenon of “publishable then, but not publishable now,” and “teachable then, but not teachable now,” and “screenable then, but not screenable now.” The essayist Meghan Daum relates that when she asked a professor of modern literature whether he still taught Lolita, he replied, “It’s just not worth the risk.” This widespread attitude is of course an attack on an essential aspect of art’s existence — its life beyond the moment of its creation.

    III

    To whom should we look for the defense of art?

    Not the state. Of course, the state should provide effective protection for its citizens who are writers and artists. But the state cannot be art’s ally, in part because of its obligation of neutrality and in part because of its partisan tendencies. Even in those states which have a tradition of government patronage of the arts, the state must not take sides on aesthetic or cultural questions. Art criticism is not one of the functions of government, and the history of art under tyrannies, secular and religious, amply shows why not. Moreover, the state, or more specifically government, has its own interests that will most certainly interfere in the free and self-determined development of art and literature: its desire for civil peace, which may cause it to intervene in cultural controversy; its privileging of religious freedom, as defined by the confessional communities themselves; its desire for the soft power that art of a certain kind gives; its majoritarian prejudices; and so on.

    What is more, the arguments for state involvement in the arts usually exclude too much art, preferring instead national projects with social and economic benefits, which are usually inimical to art’s spirit. Whatever the individual artist’s debts and responsibilities to her society, as an artist she works as an individual, not a member, not a citizen. It has often, and correctly, been said that the social responsibility of the writer is to write well. When the conditions of artistic freedom are present, the artist represents only her own imagination and intellect. John Frohnmayer, the chairman of the National Endowment for the Arts during the culture wars of the late 1980s and early 1990s, misstepped when he wrote: “We must reaffirm our desire as a country to be a leader in the realm of ideas and of the spirit.” That is not an ambition that any writer or artist should endorse.

    Not the right. Simply stated, there is no decent theory of free speech (let alone free art speech) that has come from the illiberal right in any of its various, and often contradictory, reactionary and conservative versions. We will not find a defense of free intellectual and artistic speech in the counter-Enlightenment, or in the illiberal reaction to the French Revolution, or in the conservative or reactionary movements of the late nineteenth and early-to-mid-twentieth centuries. The very notion of free speech is problematic to those traditions. They promote authority’s speech over dissenting speech. They reject the Kantian injunction, sapere aude, dare to know; they reject its associated politics, the freedom to make public use of one’s reason. They esteem reason’s estrangement — prejudice — in all its social forms: superstition, hierarchy, deference, custom.

    In the United States, to be sure, the situation is different. There is, after all, the First Amendment. Conservative articulations of freedom of speech are frequent and well-established. But if one subtracts from their positions what has been borrowed from the liberal order and what is merely self-interested (it is my speech I want heard), is there anything that remains upon which the arts may rely for protection? Let us disaggregate. There are the increasingly noisy and prominent activists of the alt-right, the Trumpists, the neo-Confederates, the militia groups at the Charlottesville “Unite the Right Free Speech March,” and the like. In the matter of free speech they are the merest and most discreditable of opportunists: we should not look to the champions of statues of Confederate generals to protect free speech. Then there are the publicists and the pundits, the Fox commentators, the Breitbart journalists, and the like. They are part borrowers, part opportunists. We should not look for a renewal of free speech thinking to the authors of The New Thought Police: Inside the Left’s Assault on Free Speech and Free Minds; Bullies: How the Left’s Culture of Fear and Intimidation Silences Americans; The Silencing: How the Left is Killing Free Speech; End of Discussion: How the Left’s Outrage Industry Shuts Down Debate, Manipulates Voters, and Makes America Less Free (and Fun); Triggered: How the Left Thrives on Hate and Wants to Silence Us, and so on. Their defenses of free speech altogether lack integrity; they are merely ideological (and often paranoid) in their polemics.

    And then there are the lawyers, the right-wing academics, think tanks, and lobby groups, the administrators, legislators, and judges, and the corporations. The widely noticed “turn” of the political right towards the First Amendment has led only to its redefinition in the interests of conservative grievances and objectives: to the disadvantage of liberal causes (anti-discrimination measures, exercise of abortion rights free of harassment, university “speech codes,” and so on); to the disadvantage of trade unions (compulsory deduction of fees enlists employees in causes they may not support); to the benefit of for-profit corporations (conferring on “commercial speech” the high level of protection enjoyed by “political speech”); to the general benefit of right-wing political campaigns (disproportionately benefited by the striking down of campaign finance law in the name of corporate — or “associational” — free speech); and to the benefit of gun-rights activists (advancing Second Amendment interests with First Amendment arguments). So, again: part borrowers, part opportunists. These three prominent currents of American conservatism, united by their self-pity and their pseudo-constitutionalism, have nothing to contribute to a climate of cultural and artistic freedom. In the matter of a principled free speech doctrine, we can expect nothing from the right.

    Not the left. There is no decent theory of free speech, let alone free art speech, that has come from the left. (Rosa Luxemburg is an exception.) There are only leftist critiques of liberal doctrine, external and immanent. In the external critique, liberal rights are mere bourgeois rights; they are a fraud, of instrumental value to one class, worthless to the other class. This criticism was pioneered by Marx, and successive generations of leftists have regularly rediscovered it in their own writing. A recent example is P.E. Moskowitz’s book The Case Against Free Speech, in which we read that “the First Amendment is nearly irrelevant, except as a propaganda tool … free speech has never really existed.” In the immanent critique, liberal rights are recognized but must be dramatically enlarged, even if they put the greater liberal undertaking in jeopardy; certainly, received liberal thinking about free speech is too tender to commercial interests, while weakening the interests of non-hegemonic groups (including artists and creative writers). Free speech requires campaign finance laws (to enable effective diversity of expressed opinion), restrictions on speech that inhibits speech, and so on.

    While liberals may safely dismiss the external critique, they are obliged to engage conscientiously with the immanent critique. The elements of greatest relevance to art free speech relate to two discourses deprecated by the immanent critique. One is “hate speech,” the other is “appropriation speech.” It is frequently argued that minority groups characterized or addressed in a “hateful” way should not have their objections defeated by any free speech “trump.” Jeremy Waldron has given the most compelling (not least because it is also the most tentative) liberal critique of hate speech. He understands hate speech in terms of “expression scrawled on the walls, smeared on a leaflet, festooned on a banner, spat out onto the Internet or illuminated by the glare of a burning cross.” What then of literature and the visual arts? Here he is somewhat casual, writing in passing of “an offensive image of Jesus, like Andres Serrano’s Piss Christ.” Regarding “appropriation speech,” in this case the censor arrives on the scene as a territorialist, and addresses the over-bold artist: “This art, this subject, this style, etc. is mine. Stay in your lane. You cannot know my situation; you lack epistemic authority. You strain for authenticity, but it will always elude you.” This cultural nativism owes an unacknowledged debt to Herderian values and counter-Enlightenment ideas: the spiritual harmony of the group, the irreducible individuality of cultures, the risks of contamination and theft, and so on — in many ways a rather unfortunate provenance.

    Sometimes hate speech and appropriation speech combine: “In your mouth, this is hate speech.” Sometimes, the one is treated as an instance of the other: “Appropriation speech is hate speech.” Though this hybrid is at least as old as Lamentations (“ani manginatam,” “I am their song,” the author writes of his vanquishers), it is largely a post-1989 phenomenon. Against it, the literary artist, the visual artist, is likely to respond with Goethe: “Only by making the riches of others our own do we bring anything great into the world.” Notwithstanding all this, however, and the broader switching of sides with the right on free speech (which is often overstated), the left remains an occasional ally.

    Not the confessional communities. Religions are constitutively, even if not centrally, coercive systems. Within those systems of conformity, there are censorship sub-systems, protective of divinity and its claims, of institutions and clergy, of practices and dogmas. The master prohibition of these sub-systems relates to blasphemy. Religions are coercive of their own members, and in many cases also of non-members. Whether or not they hold political power, and no religion has been averse to it, they hold communal and social and cultural power. They certainly do not respect artistic autonomy, though they have permitted great artists to flourish in the doctrinal spaces in which they were commissioned to work. There is no decent theory of free speech that has come from any of the major religions. Certainly not from the monotheisms: they take ownership of speech. It is sacred both in its origins (“In the beginning was the Word”) and in its most elevated uses (Scripture, worship). Its lesser and other uses are denigrated or proscribed. Historically speaking, freedom of speech developed as a revolt against ecclesiastical authority.

    Religions are invested in art, and they control it when they can — both their own art and the art of non-members. They subordinate the artist to confessional and institutional purposes. Christianity does so the most — its aesthetics are theological: just as God the Father is incarnated in God the Son, so God the Son is incarnated in the Icon, writes the art historian and philosopher Thierry de Duve. The Christian work of art, though it may be breathtakingly beautiful, affirms the theological and historical truth of the Christian story. The model religious artist is the Biblical artisan Bezalel, and the model religious artwork is his sumptuous construction of the Tabernacle in the desert. “Bezalel” means, in Hebrew, “in God’s shadow.” The general stance of the church towards art may be termed Bezalelian. “Artists avoid idolizing the arts,” writes a contemporary Bezalelian, “by resisting any temptation to isolation and instead living in the Christian community, where worship is given to God alone.”

    Religion has too many red lines; it is too used to being in charge; it cleaves to the non-negotiable (“the Bible is our guide”); it must have the last word. And when the drive to subordinate art is denied, when the desired orthodoxy is frustrated or broken, a strong sense of grievance is generated, and this in turn leads repeatedly to scandalized protests — to the burning of books and the destruction of artworks. In a word, to iconoclasm, in its old and strict sense, as the doctrinally justified destruction of art with heterodox meanings, or the use of force in the name of religious intolerance.

    To be sure, confessional communities are ardent in defense of Bezalelian artists — of wedding photographers who refuse to photograph, and bakers who refuse to make cakes for, same-sex marriages. And there is some truth in the argument that religion and art have common adversaries in the everyday materialism of consumerist societies, and could make common cause against everyday philistinism and banality. The history of the association of religion with beauty is long and marvelous. But in the matter of securing artistic freedoms, the confessional communities are simply not reliable. Certainly they have not been allies in recent times.

    Not writers and artists. Though they are anti-censorship by vocation; though they named censorship (“Podsnappery,” “Mrs. Grundy”); though much of the best anti-censorship writing in modern times came from them (Wilde, Orwell, Kundera, Sinyavsky), advocacy is for writers and artists an unfair distraction and burden. It takes them away from artmaking. In 1884, the novelist George Moore, in Literature at Nurse, wrote: “My only regret is that a higher name than mine has not undertaken to wave the flag of liberalism.” Called upon to defend their work, artists get understandably irritated: “I don’t feel as though I have to defend it,” answered Ofili regarding The Holy Virgin Mary. “The people who are attacking this painting are attacking their own interpretation, not mine.” Moreover, their work is often opaque to them. It always holds more meanings than they know, than they designed. Byron cheerfully admitted as much: “Some have accused me of a strange design / Against the creed and morals of the land, / And trace it in this poem every line: / I don’t pretend that I quite understand / My own meaning when I would be very fine…” And artists are often poor advocates in their own cause. They too readily concede the principle of censorship; they pursue vendettas, and they grandstand; they turn political; they contradict themselves; they advance bad arguments, which sometimes they mix up with better ones; they misrepresent their own work. What is more, they frequently undermine in their art the defenses that are commonly deployed on their behalf.

    “But every artist has his faults,” Maupassant once said to Turgenev. “It is enough to be an artist.” In this censoring moment, that should be the beginning of wisdom.

    IV

    This leaves the liberals. Will they rise to the defense of literature and the visual arts? Freedom of speech, after all, is integral to a liberal society. As a historical matter, free speech is liberalism’s signature doctrine. It is embraced by all the major liberal thinkers; it is incorporated into all the international legal instruments that comprise the liberal order. Execrations of censorship are to be found everywhere in canonical liberal discourse — in Milton, in Jefferson, in Mill, in Hobhouse, in William James. Censorship stultifies the mind, they all affirm. It discourages learning, lowers self-respect, weakens our grasp on the truth, and hinders the discovery of truth. Liberals typically figure prominently among the champions of oppressed authors and banned books; they tend to recoil, with a certain reflex of contempt, when in the presence of affronted readers or minatory censors.

    But there is a problem. Liberalism has traditionally cast a cold eye on literature and the visual arts, and has been peculiarly unmoved by their vulnerability. Literary and artistic questions have not been pressing for liberals, in the matter of free speech. We may even speak of a failure within liberalism to value literature and the visual arts, or to value them in a way that translates into a defense of them within a broader defense of free speech.

    To begin with, there is an historical circumstance that contributes to the explanation for this peculiar neglect. The defense of free speech in the liberal tradition is significantly tied up with the political virtue of toleration of religious dissent. This is reflected, for example, in the First Amendment to the American Constitution: “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press …” The free exercise of religion requires the free exercise of speech. Liberalism was tied at its inception to the defense of confessional dissent. Starting from a position in which loyalty to the state requires loyalty to its ecclesiastical institutions (in Protestant states) or to the ecclesiastical institutions favored by it (in the case of Roman Catholic states), liberals asked: Can the state accommodate citizens who wish to give their loyalty to it, but not to its ecclesiastical institutions? They gave several reasons for their affirmative answer. Tolerance is itself a theological matter. It derives from a respect for the individual conscience. It is not just a defense of theological dissent; it is itself an act of theological dissent. But none of this, of course, has anything to do with the welcoming of art, of artists, of artmaking. What was applied to religious works, practices, beliefs, and collectives was not applied to literary works, practices, or collectives. No question of tolerance in respect of the creative writer or artist arose for liberalism, within its own historical trajectory. (Indeed, when illiberal elements sought to exercise a censoring influence over art, they were often accommodated by liberals.)

    This is not to say that liberal arguments for free speech are limited to religion. But if we look at liberalism’s arguments, we search in vain for literature and the visual arts. Instead we are instructed, quite correctly, that the validity of a proposition cannot be determined without exposing it to challenge, and that a silenced opinion may be true, and that a false opinion may yet contain a portion of the truth, and that true opinions may become mere prejudices if we are forced to defend them — and so all opinions must be permitted. We are also told, again correctly, that free speech is the precondition for controlling abuses and corruptions of state power, since it empowers citizens to act upon the government, and impedes the freedom of governments to act on citizens. It reverses the flow of power: governments do not limit citizens; citizens limit governments. And also that free speech is the precondition of deliberative democracy: autonomous citizens cannot act autonomously, that is, weigh the arguments for various courses of action, if they are denied access to relevant facts and arguments. The promotion of public discussion requires a vigorous, generous free speech regime. The liberal tradition also includes, particularly in Humboldt and Mill, the ideal of self-realization, which broaches the large realm of free communication and free culture.

    But where does art figure in all this? Almost nowhere. Alexander Meiklejohn, the American philosopher and educator who wrote authoritatively about freedom of speech, did observe that “the people need novels and dramas and paintings and poems, because they will be called upon to vote” — a defense of the arts, but not in their integrity, a utilitarian defense. (He denied that the people needed movies, which are engaged in the “enslavement of our minds and wills.”) We can instead trace a liberal indifference, and in some cases even a liberal hostility, toward literature and the visual arts. How many liberals would have endorsed Schiller’s declaration that “if man is ever to solve the problem of politics, he will have to approach it through the problem of the aesthetic, because it is only through Beauty that Man makes his way to Freedom”? The great liberal thinkers have not found artworks to be useful texts to think with. Indeed, the liberal complaint that the literary sensibility has a reactionary character dates back to the French Revolution, its adversaries and its partisans. Writers such as Paine and Cobbett directed some of their most venomous attacks against a literary imagination whose origin they saw in a morally bankrupt, libertine, aristocratic culture. The confrontation thus framed, the decades that followed merely deepened it, with creative writers fully returning fire. Poets and novelists made nineteenth-century liberalism their declared enemy (Baudelaire, Dostoyevsky); twentieth-century liberalism was modernism’s declared enemy (Joyce being the honored exception); reactionary politics and avant-garde art are taken to be, in M.H. Abrams’ phrase, mutually implicative. This ignores, of course, the enlistment of the arts in the modern revolutions; but liberals are not revolutionaries.

    It is therefore little wonder that when one surveys the modern intellectual history of liberalism, there are very few liberal thinkers for whom, in the elaboration of a theory of free speech, literature and art figured. I count two, both of them outside the Anglo-American tradition: Benjamin Constant and Alexis de Tocqueville. Here is Constant, in a direct affirmation of inclusiveness: “For forty years I have defended the same principle — freedom in all things: in religion, in philosophy, in literature, in industry, and in politics.” Constant defended this freedom against “the majority,” which in his view had the right to compel respect for public order and to prohibit expression of opinion which harmed others (by provoking physical violence or obstructing contrary opinions) but not to otherwise restrict expression. Constant was himself a man of letters, a novelist of the poignancies of love — a Romantic, who brings to mind Victor Hugo’s description of Romanticism as “liberalism in literature.”

    As for Tocqueville: in Democracy in America he wrote about democracy’s inhibiting effects on fresh and vigorous thought. “There is a general distaste for accepting any man’s word as proof of anything. So each man is narrowly shut in himself, and from there, judges the world.” This does not lead to debate. Each man mistrusts all others, but he is also no better than others. Who then to trust? “General sentiment,” by which Tocqueville means the tyrant he most fears in an open society: “public opinion.” He famously observed that “I know of no country where there is less independence of mind and true freedom of discussion than in America.” But then he went on to offer a brief account of the significance of literature in the growth of democratic sentiment, and a longer account of the type of literature that a democratic society might foster. Literature, he believed, is a counter to despotic tendencies. This is not literature passing as political theory; this is literature in its aesthetic integrity.

    Constant and Tocqueville — but not Mill. This is surprising, since it was literature — the French writer Marmontel in particular — that saved Mill from his nervous breakdown and alerted him to the emotional limitations of utilitarianism. And yet it is Mill’s name that we must give to liberalism’s defeat in its first major test in respect of arts censorship. It was in 1858 that he completed On Liberty, one of the very scriptures of modern liberalism — but which, in this context, must be remembered as the great work on freedom of expression in which the philosopher failed to address three major setbacks to artistic freedom that happened even as he was writing it: the trial for obscenity (“an outrage to public morality and religion”) of Flaubert’s Madame Bovary, the trial for obscenity (“an insult to public decency”) of Baudelaire’s Les Fleurs du Mal, and the passage in Parliament of the Obscene Publications Act, which allowed the British state to seize and destroy works of art and literature without even giving their makers a right to be heard. It must also be added that in this failing Mill had successors in the weak response of liberals to the attack on Rushdie: not only were they few in number, but their defenses of the novelist rarely included defenses of the novel, of the dignity of his aesthetic project, of the autonomy of art and its right to blaspheme. The same blindness to art and its rights disfigured many liberal interventions in the American controversies of the late 1980s and early 1990s. They attacked Jesse Helms and company for many good reasons; just not this one.

    *

    We have discovered a problem. Even liberals are not good on literature and the arts, and this matters now more than ever before. How might things improve? We could attempt to give liberals reasons why they should take literature and the visual arts seriously. We might make the case for a liberal literature — the case advanced finely by Martha Nussbaum in her discussion of The Princess Casamassima, which she reads as contending for “liberalism as the guiding principle in politics,” taken by her to include “a demand for artist’s freedom of expression.” But what about works of art that contend for a conservative politics? No, the case for artistic freedom must be made only on the grounds of art as such. Writers and artists will not find relief from their troubles unless art itself, aesthetic expression as such, is explicitly inducted into the class of protected free speech.

    There are many reasons to do so. I will give only some. Art is a human good. An attack on literature and art is an attack on capacities and practices that constitute human beings as human and allow us to flourish. When we attack writers and artists, we attack ourselves. We are species-constituted by our artmaking and art-experiencing capacities; we realize ourselves by our artmaking and art-experiencing practices. The arts aid mental development and social harmony; they offer representations of a transfigured world. Art contributes to our understanding of ourselves and of the world; art makes it easier for us to live peaceably together. That is to say, it makes us more transparent to ourselves, and it makes the world more transparent, as well as less threatening and more beautiful. Artworks are goods whose desirability cannot adequately be expressed in individual terms — that is to say, they are “public” or “communal” goods.

    We must recognize (and value) the form of existence of the writer and the artist. People who pursue the literary and artistic life are pursuing an estimable life, and the fruits of their pursuit, their literary and art works, should be secure. They have a “plan,” in the liberal sense of the word; in more heroic and Tocquevillian terms, they seek to forge their own destiny. They certainly pursue a conception of the good life. That is, to make use of a distinction drawn by Jeremy Waldron, they are to be held to account not by reference to what they have done, but rather by reference to what in general they are doing. Free speech occupies a special place in this “plan.” It is the precondition to the artistic vocation. None of this has anything to do with the seeking of privileges. That some will pursue this plan in a degraded manner is beside the point. The pornographer stands to the art world as the fundamentalist stands to the religious world. Each is reductive, blinkered, unthinking — but it would be an inconsistency to grant toleration to the one and deny it to the other.

    The makers of art (and the audiences for art) merit recognition as a distinct group. Artists are not best imagined as individuals under contract; they should be recognized as members of their own communities, with their own practices and institutions. Artmaking is the characteristic activity of art-communities. And if art-makers are in their own way a group, then they, and their art, merit the protective attention that identity- and religious-groups and their products typically receive in liberal societies. Indeed, the liberal state should take positive steps to ensure that art-making flourishes when threatened by confessional or other “identity” groups. In certain respects, the art community is the ideal community, and a model for all given communities; the free speech that it needs is the free speech that we would all need for our ideal existence.

    The art community is many communities. None is coercive. All are time-bound: specific formations do not last. They have no transcendental quality. They are fully secular. They are self-constituting: they do not require myths of origin. They are non-exclusive. They are unboundaried; there are no impassable barriers to entry. They are open to the world; they address the world; their solicitations are gentle and may always be refused. Literary and art communities are communities for anti-communitarians. They can never be a menace to society, in the sense that fanatical communities, or fanatical members of other communities, are a menace.

    Art is a liberal good: to defend literature today is to defend liberalism, not as an ideology or a political doctrine but by modelling the benefits of its freedoms. How do we name the members of a liberal society? One way is to call them citizen-readers. Among art’s forms and kinds, it is the novel — with its many standpoints, its diversity of human types, its provisionality, its interest in ambiguity and complexity — that comprises the distinctive art form of a liberal democratic society. To make war on the novel really is to make war on liberal democracy.

    Literature and the visual arts have so many things in common with liberal societies. They are both committed to a certain process of making explicit. “The liberal insistence,” writes Waldron, “[is] that all social arrangements are subject to critical scrutiny by individuals, and that men and women reveal and exercise their highest powers as free agents when they engage in this sort of scrutiny of the arrangements under which they are to live.” He goes on, “society should be a transparent order, in the sense that its workings and principles should be well-known and available for public apprehension and scrutiny.” Does this not describe the work of the writer? And they are both reflexive: for the liberal, identities should be treated as a matter for continuous exploration, receiving at best only conditional and contingent statement. And they both tend to the agonistic. By the agonistic, I mean interests or goods in irresolvable conflict — one that cannot be settled and cannot be won. There can be no resolved triumph of one over the other. The understanding by each of the other is bound up with each one’s self-understanding; neither recognizes itself in the account given of it by the other. Liberal societies exist to accommodate agonistic conflicts, and art exists to explore them. It also has its own agons — with religion, with philosophy, with science, with history. The work of art, said Calvino, is a battleground.

    Both liberal societies and the arts are committed to a flourishing civil society. Precisely because literature, in its difference from other writing, solves no problems and saves no souls, it represents a commitment to the structural openness of a wholly secular space, one which is not programmatic, not driving towards any final, settled state in which uniformity rules. The artwork, like the open society, is promiscuous in the invitation that it extends. It is available to all; all may enjoy it; all may interpret it; all may judge it. Both liberal societies and the arts, in sum, have the same necessary condition. That condition is freedom. Illiberal societies prescribe a literature and visual arts that serve both as a diversion (“bread and circuses”) and as an instrument of legitimation (“soft power”). But liberal societies need the existence of a free literature and art. Works of art are liberal public goods.

    From time to time, and in our time almost daily, events occur that prompt the question: Is liberalism equal to the challenge? I do not believe that the censorship of literature and the visual arts is the worst evil in our world, but it is a bad thing, and there is too much of it around. In these censoring times, liberals should strive to give to aesthetic expression an honored place in their theory of free speech.

    A New Politics, A New Economics

    The major political phenomenon of the past decade has been a popular revolt against the economic arrangements that took form at the end of the twentieth century. The revolt is global. It takes both left- and right-wing forms, and often presents itself as overtly anti-immigrant or otherwise ethnonationalist, but the undercurrent of deep economic dissatisfaction is always there. Inequality in the developed world has been rising steadily for forty years now. The aftermath of the financial crisis of 2008 activated the politics of economic populism: in the United States, the rise of Bernie Sanders, Elizabeth Warren, and other politicians on the economic left, plus the Tea Party movement and to some extent Donald Trump and his acolytes who rail against globalization, Wall Street, and the big technology companies. In Europe, there is Brexit, new nativist parties (even in Scandinavia), and the Five Star movement in Italy, among other examples. What all of these have in common is that they took the political establishment utterly by surprise. And all of them regard the establishment, and any consensus that it claims to represent, with contempt.

    The dynamic of this moment brings to mind the politics of the early twentieth century. During the nineteenth century, succeeding waves of the industrial revolution created (along with enormous and highly visible wealth) a great deal of displacement, exploitation, and want, which at first manifested itself in radical rebellions — in the United States these took the form of agricultural populism and labor unrest. This was followed by a series of experiments in translating economic discontent into corrective government policy. Then as now, popular sentiment and electoral politics came first, and the details of governance came later. Most of the leading intellectuals of the Progressive Era were deeply uncomfortable with populism and socialism. The young Walter Lippmann, in Drift and Mastery, called William Jennings Bryan, three-time presidential nominee of the Democratic Party, “the true Don Quixote of our politics.” But Lippmann and his colleagues shared the view that private and institutional wealth had become more powerful than the state and that the imbalance had to be righted, so they set about devising alternate solutions. We are now in the early stages of a similar period of forging a new political economy for this still young century. It is going to be a large, long-running, and not very orderly task, but those who don’t take it seriously are going to find themselves swept away.

    It shouldn’t be necessary, but it probably is, to stipulate that economies are organized by governments, not produced naturally through the operations of market forces. National economies do not fall into a simple binary of capitalist or not; each one is set up distinctively. Government rules determine how banks and financial markets are regulated, how powerful labor unions are, how international trade works, how corporations are governed, and how battles for advantage between industries are adjudicated. These arrangements have a profound effect on people’s lives. The current economic discontent is a revolt against a designed system that took shape with the general assent of elite liberal and conservative intellectuals, many of whom thought it sounded like a good idea but were too closely focused on other issues to pay close attention to the details. To begin the discussion about a new system requires first developing a clearer understanding of the origins of the current one.

    In an essay in 1964 called “What Happened to the Antitrust Movement?,” Richard Hofstadter noted that for half a century, roughly from 1890 to 1940, the organization of the economy was the primary preoccupation of liberal politics. Hofstadter meant antitrust to be understood as a synecdoche for a broader concern with the response to industrialism in general and the rise of the big corporation in particular. He was not mourning liberalism’s shift in focus; instead, he was typical of midcentury liberal intellectuals in thinking that the economic problems that had preoccupied the previous generation or two had been solved. And that view of the postwar decades still resonates even all these years later, in the economically dissatisfied political present. During last year’s presidential campaign, Donald Trump’s “Make America Great Again” and Joe Biden’s “Build Back Better,” both backward-looking slogans, shared the embedded assumption that at some time in the past, roughly when Hofstadter was writing, the American economy worked for most people in a way that it doesn’t now. But was that really true? And if it was, what went wrong?

    Most people would probably say that the economy really was better back in the mid-1960s — that it had earned, through its stellar performance, the conventional view that it was working well — and that what changed was globalization: in particular the rise of the United States’ defeated opponents in the Second World War, Japan and Germany, and previously unimaginable advances in communications and data-processing technology, and the empowerment of Saudi Arabia and other oil-producing Arab countries. But if that is what most people think, it highlights a problem we have now in addressing political economy, which is a belief that economic changes are produced by vast, irresistible, and inevitable historic forces, rather than by changes in political arrangements. That is a momentous mistake. A more specific account of the political origins of the mid-century economy, and of what blew it apart, is a necessary precondition for deciding what to do now.

    In the presidential election of 1912, Theodore Roosevelt ran on a program he called “The New Nationalism,” and Woodrow Wilson on “The New Freedom,” with the third major candidate, William Howard Taft, having a less defined position. This was the heart of the period when economic arrangements were the major topic of presidential politics. (The perennial Socialist candidate, Eugene Debs, got his highest-ever total, 6 per cent of the vote, in 1912.) Advised by Lippmann and other Progressive intellectuals, Roosevelt proposed a much bigger and more powerful federal government that would be able to tame the new corporations that seemed to have taken over the country. Wilson, advised by Louis Brandeis, called for a restoration of the economic primacy of smaller businesses, in part by breaking up big ones. It is clear that Hofstadter’s sympathies, as he looked back on this great debate, were on Roosevelt’s side; he considered Wilson’s position to be sentimental, impractical, and backward-looking, in much the way that Lippmann had thought of Bryan’s economic inclinations as quixotic. Wilson won the election, but Roosevelt probably won the argument, at least among intellectuals. (Politicians, because they represent geographical districts, have a built-in incentive to be suspicious of economic and political centralization.) The years immediately after the election of 1912 saw the advent of the Federal Reserve, the income tax, and the Federal Trade Commission — early manifestations of the idea that the national government should take responsibility for the conduct of the American economy.

    The argument between Roosevelt and Wilson never entirely went away. During the New Deal, when the economic role of the federal government grew beyond Theodore Roosevelt’s wildest dreams, there were constant intramural debates within economic liberalism, between centralizers such as Adolf Berle, the highly influential Brain Truster-without-portfolio, and decentralizers such as Thurman Arnold, the head of the antitrust division of the Justice Department. Despite major defeats, notably the Supreme Court’s striking down of the National Industrial Recovery Act in 1935, the centralizers generally had the better of it, especially after the American entry into the Second World War, when the federal government essentially took over industrial production and also set wages and prices, with an evidently happy result.

    After the war, Berle and his younger allies, John Kenneth Galbraith among them, celebrated the taming of the once menacing industrial corporation, thanks to the forceful and long-running intervention of government. Big corporations remained economically dominant, but because they were now answerable to a higher authority, they no longer ran roughshod. It is important to note that these were not the benign, socially responsible corporations one hears touted today — they were forced to be socially responsible, by government legal and regulatory decree. The liberal debate about corporations in the postwar years was primarily sociological and cultural, over whether they had eroded the American character by engendering a pervasive “conformity” — not over whether they exploited workers or dominated government. The economy was growing in ways that — in sharp contrast to today’s economy — conferred benefits at all income levels. As Hofstadter put it, “The existence and the workings of the corporations are largely accepted, and in the main they are assumed to be fundamentally benign.” Only conservatives, he asserted, with their resistance to modernity, failed to accept the reality of corporate dominance.

    Partly because the main economic problems seemed at that point to have been solved, and partly because mainstream midcentury liberal thought was almost unimaginably unaware of national problems such as race, women’s rights, and the environment that demanded urgent attention, most liberals turned their energies toward those neglected non-economic topics. Hofstadter wrote that antitrust “has ceased to be an ideology and has become a technique, interesting chiefly to a small elite of lawyers and economists.” But that glosses over a crucial element in the development of economic liberalism.

    Keynesian economics, which was in its infancy during the heyday of the New Deal, had become so prestigious by the 1960s as to be the conventional way of thinking about government’s role in addressing economic problems — not just among economists, but among anybody who had ever taken an undergraduate economics course. For Keynesians, the most potent economic tools at government’s disposal were adjusting the money supply, tax rates, and overall government spending — not directly controlling the economic activities of corporations, through antitrust, regulation, and other means. (Adolf Berle used to boast that half the industries in America were regulated by federal agencies, and it was inevitable that the other half would be soon.) So the kind of government economic role advocated by a long line of liberal intellectuals, even as they squabbled over the details, fell out of the conversation.

    It is always easy to see the vulnerabilities of a regime in retrospect. The mid-twentieth-century economic order depended on the corporation to provide a range of social benefits — good wages and salaries, employment security, pensions, health care, social services, and a measure of personal identity — that in most other developed nations would likely have come from government, or the church, or a stable local community. The American political system didn’t seem willing to expand the New Deal into a full-dress social democracy, and corporations were available to perform these quasi-state functions — but that meant they were bearing a lot of weight. They did not command the loyalty of those whom they did not enfold in their warm embrace, so they had a limited number of political allies.

    Even more important, the corporation-based social order rested on the assumption of corporations’ economic invulnerability. Corporations had to be able to afford the social burdens being imposed on them by government. What could cut into the economic resources this required? Three possibilities come to mind: a demand by shareholders that they get a higher return; a weakening of customer loyalty; or competition from other businesses. Adolf Berle’s classic work (with Gardiner Means) The Modern Corporation and Private Property, which appeared in 1932, declared that corporations’ shareholders, their supposed owners, had no power because they were so widely scattered: how could the hundreds of thousands of individual owners of stock in AT&T force management to do anything? After the Second World War, Berle only increased his estimate of the power and stability of the largest corporations, and of the irrelevance of their shareholders. So that was one potential threat assumed away. Galbraith agreed, and made the claim of corporate immortality even more capacious by observing that corporations were also invulnerable to fluctuations in consumer taste, because advertising had become so effective. There went another threat. And much of the rest of the world was still flat on its back after the Second World War, which took away the threat of competition, at least from abroad. Berle and others regularly predicted the demise of Wall Street — heavily constrained by regulation since the advent of the New Deal — as a force in the American economy, because big corporations, ever larger and more powerful, would have so much capital of their own that they would no longer need access to the financial markets. Another common claim in that era was that innovation would, and could only, come from large corporations, because only they had the resources to operate substantial research divisions.

     

    The corporate social order, taken for granted by many millions of people who lived within it, and not particularly appreciated by political thinkers on the left or the right, began to come apart spectacularly in the 1980s — which was also, not coincidentally, when the rise in inequality began. The forcing mechanism for this was the “shareholder revolution” — a great reorienting of the corporation’s priorities toward increasing its asset value in the financial markets (and therefore its shareholders’ wealth), and away from the welfare of its employees or of society. Most people credit Milton Friedman with launching the shareholder revolution, specifically with an article in the New York Times in 1970 called “The Social Responsibility of Business Is to Increase Its Profits.” This suggested an ideal for corporations that was almost precisely opposite to Adolf Berle’s, but it didn’t propose specific techniques for achieving it. The true chief theoretician of the shareholder revolution was Michael C. Jensen, a University of Chicago-trained conservative economist, who neatly reversed Berle’s life’s work by making the re-empowerment of the shareholder his own life’s work. 

    Jensen proposed such mechanisms as putting a corporation under the control of a single purchaser, at least temporarily, instead of a widely dispersed body of small stockholders (that’s the private equity business), and paying chief executives primarily in stock options rather than salary, so that they would do whatever it took to increase their companies’ share prices. Such measures would permit the corporation to attend to its new sole purpose. Jensen ceaselessly promoted these and related ideas through the 1970s, 1980s, and 1990s, with highly influential publications (he is the co-author of one of the most cited academic papers of all time), his popular teaching at Harvard Business School (whose graduates shifted from being corporate employees to corporate dismantlers), and public appearances before Congressional committees and elsewhere. This coincided with a great wave of mergers, acquisitions, and buyouts that remade corporate America in ways that stripped out the social and political functions that had been imposed on it since the New Deal.

    Since his work had large political as well as economic implications, Jensen may stand as the most under-recognized public intellectual of the late twentieth century. But his influence, like that of anyone whose ideas have consequences, was substantially a matter of context. He arrived on the scene at a time when the kinds of institutional arrangements on which the midcentury political economy rested had fallen deeply out of fashion. The large economic disruptions of the final quarter of the twentieth century, when they are not attributed to inevitable market forces, are often laid at the feet of an organized corporate-conservative effort to remake the political economy, beginning, perhaps, with the future Supreme Court Justice Lewis Powell’s famous memo to the U.S. Chamber of Commerce in 1971 suggesting the building of a new conservative infrastructure of think tanks, publications, and campus leadership training institutes. But this misses a couple of important elements. One is the tension between corporations and finance — that is, between Main Street and Wall Street. When a company like IBM or General Electric dropped its de facto guarantee of lifetime employment and its company-paid defined benefit pensions, this was “corporate” only in the sense that corporations were now being run for Wall Street investors, not in the sense of benefiting Organization Man-style corporate employees.

    Also, liberalism was changing, and many of these economic rearrangements happened with liberal (or at least elite liberal) assent. For one of many possible examples, consider that the crusade against the airline-regulating Civil Aeronautics Board, now of blessed memory, which had to approve every route and every fare (and one of whose creators was Adolf Berle), was led by Senator Ted Kennedy, with another future Supreme Court Justice, Stephen Breyer, as his chief advisor. It had the enthusiastic support of Alfred Kahn, the liberal economist who was Jimmy Carter’s appointee as the euthanasiast chairman of the CAB. (Ralph Nader, then probably the leading liberal activist in Washington, was another participant in this crusade.) There was little or no liberal opposition to the supersizing of Wall Street, which mirrored the downsizing of the industrial corporation; the shareholder revolution would not have been possible without dozens of regulatory changes that enabled it, which didn’t attract much notice because at that moment economic deregulation was seen as an uncontroversial good cause. Much of the newly emerging economic Brahmin class was populated by elite liberals: graduates of Ivy League universities who worked at McKinsey or Goldman Sachs or Google, proudly and profitably “disrupting” the old economy for a living. People at such companies became an important part of the funding base of the Democratic Party, playing the role that political machines and unions had previously played. The old instinct that the way to solve problems was to make corporatist bargains among government, labor, and business had faded away. A fluid, fast, transaction-oriented society, which proposed instead to solve problems by dismantling institutional arrangements and putting more innovative, efficient ones in their place, was now the ideal.

    I don’t want to sound facilely dismissive of these ideas. I was entranced by them when I was young. In those days one still saw people who had served in the New Deal strolling through downtown Washington — Tommy Corcoran, Ben Cohen, Joe Rauh. They appeared to me not as honored participants in a supremely successful political and economic order, but as ghosts, men who had outlived their times. “Neoliberal” had not yet become a dirty word. Books proposing to save liberalism by jettisoning its traditional formations, such as Theodore Lowi’s The End of Liberalism and Mancur Olson’s The Rise and Decline of Nations, were mesmerizing. Liberal heterodoxy was in the air. Why couldn’t liberalism off-load all those clunky appurtenances of its past, the labor unions and the interest groups and the government agencies, and just solve problems? Why did we have to defend to the death vast, wasteful, expensive programs such as Social Security and Medicare? Why couldn’t we be less political, more efficient, smarter, more attuned to real needs and less to powerful constituencies? Didn’t the sluggish economy need the kind of jump-start that deregulation and a general embrace of markets could provide?

    Maybe the Civil Aeronautics Board had indeed outlived its usefulness. The problem was that this broad antinomian logic was applied everywhere. With hardly a peep except from self-interested industry groups, the United States ended broadcast regulation, ushering in the age of hot-blooded talk radio and cable news. It set up the Internet to be an unregulated information platform that enriched a handful of immensely wealthy and powerful companies and made no effort to distinguish between truth and falsity. It declined to regulate the derivatives markets that brought down the global economy in 2008. In all those cases, policies that sounded good by the standards of the newly dominant form of economic liberalism wound up having old-fashioned libertarian effects that should have been predictable: more inequality, greater concentration of wealth and power, more disruption of social and economic arrangements that had been comfortable and familiar for many millions of people. The flaws in the new system were not immediately evident to its designers, because they were prospering. But many of the less well educated, more provincially located, and less securely employed eventually made their vehement dissent known through their voting behavior. That is where we are now.

    People get to choose how to involve themselves in politics, as participants and as voters. It would be wildly unrealistic to demand that everyone’s politics be “about” some topic that seems preeminent to you, or that their politics align with an outsider’s balance-sheet determination of their interests. If you are reading this, it’s likely that Donald Trump cut your taxes. Did you vote for him? Or did you vote because of longstanding party loyalty, or your values, or the way the candidates struck you, or what you think the American government should stand for at home and abroad? It is especially foolhardy to imagine that politics can be about economics rather than, say, race, or gender, or religion, or culture — or that it can be rigorously empirical, based on meticulous scientific determinations of the truth. Still, because democratic politics is meant to determine the activities of the state, and much of what the state does is allocate resources, in the end economics runs through just about everything in politics, including matters that do not present themselves as economic.

    Racism would not command the public attention it does if blacks and whites were economically indistinguishable, and most of the proposed remedies for racism entail big changes in how governments get and spend their money. Nativism may express itself as hatred of the other, but it takes root among people who see immigrants as competitors for jobs and government benefits. The bitter controversies over the pandemic have been powered by the highly different ways it has affected people’s health and employment depending on where they stand in the class system. So, even when politics is not obviously about economics, it is still about economics. To address the deep unfairness of the current economic order requires political solutions, but they have to be political solutions that meet people where they are — that do not seem distanced and abstract. That will be the only way to build popular support strong enough to enact them.

    The fundamental test of the American political economy ought to be whether it can offer ordinary people the plausible promise of a decent life, with a realistic hope of economic progress and their basic needs met: health care, a good education, protection from want, security in old age. The country has failed that test for a generation. Until it succeeds economically and socially, it will not function well politically. And to function well politically requires addressing an enormous economic problem, which can come across as dry and statistical, in ways that feel immediate and palpable enough to inspire passionate engagement.

    I am proposing a great remaking of the political economy as a primary task over the next generation. At this moment the most useful next step in that project is not to produce a specific policy agenda, but instead to outline an approach to politics that could create widespread popular support for the larger project. In recent years the gap between voters and technically oriented policymakers who are genuinely concerned about inequality has been very wide — wide enough for pure grievance to take up the political space that ought to be devoted to fixing the problem. I will suggest three guiding principles for how to proceed.

    Work through institutions. Consequential human activity takes place through institutions. It has been an especially self-destructive element of recent thought to exaggerate the disadvantages of “bureaucracy” and other aspects of institutional life and to overestimate how much can be accomplished without them. This turn has coincided with the severe deterioration of the traditional bulwark institutions of American liberalism, such as labor unions and churches. Media and messaging meant to influence public opinion, organizing campaigns conducted only on social media — these are the snack foods of politics, far less effective over the long term than building institutions that have more conventional functions like structured meetings, ongoing rituals, and planned campaigns aimed at specific government policy outcomes.

    It is a familiar irony that the opponents of an inclusive economy have often used anti-institutional rhetoric while building up powerful institutions of their own. During the twenty-first century, we have seen a great consolidation of one economic sector after another, always made possible by favorable political arrangements, which only become more favorable as the sector gains more economic, and therefore political, power. To curb the power of big tech, big finance, big pharma, and big agriculture will require countervailing institutions. Institutions (which are not the same thing as communities) are necessary to achieve change, and also to instantiate change. Awakening consciences and changing minds is noble and necessary, but such advances lack staying power unless they lead to the creation of consequential new laws and institutions. 

    Address inequality upstream, not downstream. It is deeply ingrained in our economic thinking that the solution to inequality is redistribution. That way, in theory, a society can have the best of both worlds: the efficiency, flexibility, and growth associated with unimpeded markets, plus the corrections to markets’ inequities that only the state can provide. The master tool for redistribution is a progressive income tax system, but there are plenty of more specific tools that address economic injustice in the same spirit: unemployment benefits for people who lost their jobs, food stamps for the hungry, retraining for people whose workplace moved abroad. All of these instruments have in common that they offer a remedy after something bad has happened to people, rather than trying to prevent something bad from happening to them in the first place.

    A decade ago the political scientist Jacob Hacker suggested “pre-distribution” instead of redistribution as a model. In this way of thinking, the aim is to throw some sand in the gears of pure market function, so that it cannot so easily disrupt people’s lives. Strong labor laws are a good example: they boost workers’ pay and benefits and make it more difficult to fire them, which is far more dignity-promoting than the Silicon Valley model of economic justice, with no unions, a gig economy, and the cold solace of a universal basic income for those who experience misfortune. Another is restrictions on absolute free trade and outsourcing of employment. Another is making it more difficult for private equity companies to load expenses onto the companies they acquire, which puts them under irresistible pressure to break whatever compact they had with their employees.

    Most working people are focused on the particular place where they live and the particular company where they work. A politician’s signal that she understands this and will try her best to keep those arrangements in place will be far more meaningful than a promise to pursue abatements after people’s lives have been pulled apart. Economic policymakers for years have regarded policies with this goal as the province of petty rent-seeking politicians, the kind who created the Smoot-Hawley tariff back in 1930: all they can accomplish is to create a static, declining society; real economic policy has to be redistributionist and Keynesian. It is a longstanding part of conservative lore that liberals scored a landmark and unfair victory when they torpedoed the Supreme Court nomination of Robert Bork in 1987 — but during the borking of Bork, his liberal opponents barely mentioned what was by far his most influential belief, which was that economic efficiency and consumer benefit were the only proper concerns for government as it regulated companies’ economic activities. They barely mentioned it because they had accepted it. That same year, the New York Times published a lead editorial titled “The Right Minimum Wage: $0.00.” (On the day this essay is going to press, the Times’ lead editorial is titled “Let’s Talk About Higher Wages.”) The economic program on which Joe Biden successfully ran for President, heavily emphasizing saving jobs and keeping small businesses open, was by far the most pre-distributionist by a Democratic candidate in decades. The tide is only just beginning to turn, and the Democrats’ relatively new economic constituencies are not going to be pushing the Biden administration to reinvent the party’s notion of an ideal political economy.

    Decentralize power. In 1909, in The Promise of American Life, which is still as good a framing device for twentieth-century American liberalism as one can find, Herbert Croly proposed that the country tack away from the political tradition of Thomas Jefferson and toward the tradition of Alexander Hamilton. In the present, it is necessary to be reminded of what Croly meant by that: to his mind, Jefferson was not primarily a plantation slaveholder, but an advocate for farmers, artisans, and other smallholders, and for localized government, and Hamilton was not primarily an immigrant who took his shot, but an advocate for centralized and nationalized government, and the father of the American financial system. For Progressives such as Croly, it was axiomatic that the world had become far too complex for a Jeffersonian approach to work. Like Theodore Roosevelt a few years later, Croly believed that the national government had to become bigger and more powerful — and also to employ technical, depoliticized expertise that would be beyond the capabilities of local governments. This way of thinking about government has an irresistibly powerful face validity for members of the category of people who would staff its upper ranks. Think about the coronavirus: wouldn’t you want trained public health professionals to have been in charge nationally, rather than governors of highly variable quality?

    Yet Croly’s position is a temptation to be avoided, for a number of reasons. Expertise is not, pace the insistence of the social-media mob and Fox News, merely a pretext for the exercise of power. Experts have both knowledge in their domains, and an obligation to set aside their pure, unruly human instincts and attempt to approach the world more dispassionately. They marshal evidence. They answer, rather than insult or stereotype, people who do not agree with them. That they operate with some degree of honor doesn’t make them infallible or supra-human, of course. Like everybody else, experts live in their own enclosed worlds, and they often operate on distinctive, non-universal, and not fully conscious assumptions that nobody they encounter ever challenges. Technocracy is not a guarantee of truth or wisdom. No matter how smart and epistemologically sophisticated they are, experts miss things. Over the past few decades, the list has been long: the collapse of the Soviet Union; the 2008 financial crisis; the dramatic rise and political empowerment of evangelical religion; the rise of populism. The problem with centralized, elite expert rule is not only that it creates an inviting target, but also that it requires a check on its power, a system built to incorporate alternative views. To paraphrase James Madison, expertise must be made to counteract expertise; and, in a democracy, experts must be prepared to respect and honor what the great majority of citizens who aren’t experts think.

    It is impossible to separate economic and political power in the way that the Progressives envisioned, and their present-day heirs still do. Great economic power, of the kind that the major technology and financial companies have today, requires favorable political arrangements; in return, it uses its economic power to enhance its political power. The gentle treatment that big finance and big tech have gotten from government, including from Democratic administrations, is closely related to their role as major political funders and employers of past and future high government officials. The federal government is no longer capable of functioning as a countervailing force to all elements of economic plutocracy at all times: a Democratic administration may be able to stand up to Koch Industries, but not to Google or Goldman Sachs.

    A far better vision for liberals should be of a pluralistic society that does not assume that one major element will be so automatically good that it should be super-empowered. Super-empowerment may be the ill that ails us the most. Over the past few decades, inequality has increased substantially not just for individuals, but for institutions. The top five banks control a higher percentage of assets than they ever have in American history. The gap between the richest universities and the struggling mass is greater. The great metropolitan newspapers of the late twentieth century — the Los Angeles Times and the Philadelphia Inquirer and the Chicago Tribune and so on — aren’t great anymore. Book publishing is in the hands of the “big four” houses. Five big companies dominate the technology business. If all these arrangements are working nicely for you personally, you should not take too much comfort from that. Think about what it would feel like if people you find abhorrent had control of these institutions — it is a much better guide than thinking about the system you would want when the good guys, by your lights, are in charge.

    Politics is the arena that allowed these inequalities to flourish, and politics will be how they can get corrected. You should think in particular about what kind of political system you would want, if the bad guys were winning. You would want checks on the power of the President and on the more politically insulated parts of the federal government, such as the Supreme Court and the Federal Reserve. You would want good state and local governments to have room to do what the national government can’t do or won’t do. You would want to prevent economic royalty, individual or corporate, from being able to control political outcomes. You would want Congress to have more power than the President, and the House of Representatives to have more power than the Senate. You would want minority groups to be organized enough to be able to impress their distinctive point of view on a majority that ignores it. In other words, squabbling, bargaining, self-interest, partisanship, and “gridlock” would be signs of political health, not dysfunction. Influence would come from the sustained effort it takes to be effective through democratic means, not from finding workarounds to open, participatory politics.

    That these are ways of structuring politics, not of assuring the victory of one side or of arriving at a policy, ought not detract from their urgency. Politics should make people feel heard and attended to. It should address pressing problems successfully. Politics manifestly is not doing those things now. If the way it is framed and conducted does not change fundamentally, democratic politics, which is to say, democratic society, will not be able to function properly. Tasks that are essential to powerful interests will get accomplished, but not tasks to which they are indifferent, even if they affect the welfare of vast numbers of people. Building a new politics will take a long time, because there is a lot to undo.

    On Playing Beethoven: Marginalia

    Interpretation? Some musicians have little patience for this word, while on the other side there is a recent surge of musicologists who strive to do it justice by elucidating its essence, its development, and its historical peculiarities. After a lengthy period of purely structural reasoning about musical works, topics such as psychology, character, and atmosphere are being considered again. Every tiny portamento or cercar la nota throughout the history of bel canto is being unearthed. Recapitulations are scrutinized with the help of the stopwatch in order to find out whether, why, and by how much they may exceed the scope of the exposition.

    The anti-interpreters consider all this to be a waste of time. All they ask for is a reliable edition of the score. The rest will be provided by their own genius. Here I would like to interpose and remind the reader of the fact that to decipher a score precisely and sympathetically is a much more demanding task than most musicians realize, and a more important one as well. Among the composers who had the skill to put on paper distinctly what they imagined, Beethoven is an outstanding example. Do not register his markings with one eye only: it will not provide you with the full picture. I am thinking of his dynamic indications in particular — Beethoven was well aware of where his crescendi and diminuendi should start or end. The metronome markings are another matter. The unhesitating adherence to Beethoven’s metronome figures even in the most dubious cases (Op. 106, Ninth Symphony) has resulted in performances that hardly leave any space for warmth, dolce, cantabile, for — in the words of the prescription in his Missa Solemnis — “from the heart — may it reach out to the heart” (von Herzen möge es wieder zu Herzen gehen). They also leave no room for Beethoven’s humor.

    While, in the past, it was the cliché of Beethoven the hero and the titan that was harmful to an appreciation of the variety of his music, the danger now comes from the predilection for breakneck speeds and virtuoso feats. Tempi are forced on the music instead of derived from it. My own experience has taught me to trust Beethoven’s markings — if not the metronome indications — almost completely, and to consider them important hints about tempo and atmosphere.

    The terms from largo to prestissimo that Beethoven uses to indicate tempo and character seem to me frequently more suggestive than metronome prescriptions. When I listen to some contemporary performances, the majority of allegros sound to me like presto possibile. The diversity of the tempi gets lost. The third movement of the Hammerklavier Sonata, called Adagio sostenuto, turns into an andante con moto. While the speed of the fugue (crotchet = 144) is technically feasible, it prevents the listener from taking in the harmonic proceedings. (For many pianists, playing too fast may come easier than slightly reining in the tempo.)

    Another bone of contention is the metronome’s unshakeable steadiness. There are musicians who do not permit themselves or their pupils to use a metronome because it purportedly contradicts the natural flexibility of feeling. Obviously music should breathe, and it presupposes, not unlike our spine and pulse, a certain amount of elasticity. Yet this does not hold true for all music: not only jazz and pop, but also a considerable part of twentieth-century music, would, without a rigorous tempo, be senseless. And there is another beneficial function of the metronome: it prevents progressive speeding up. Many young musicians are unaware of what they are doing to the tempo while practicing, and there are virtuosi who consider it their privilege to accelerate the pace while playing fast notes — a habit no orchestra or chamber ensemble could get away with.

    I cannot acquiesce in the widespread assumption that a soloist may indulge in all conceivable liberties, even the most outlandish ones, because he or she is neither a member of an ensemble nor the helpless prisoner of an orchestra. Quite a few soloists seem to adhere to the belief that only soloistic independence will issue in true music-making that emanates from their innermost interior, unfettered by the strait-jacket of ensemble playing. Any pianist who is about to play a Beethoven sonata should listen to a good performance of a Beethoven quartet — by, say, the Busch Quartet — in advance.

    And there is more to learn from the best conductors, singers, and orchestras than from all-too-soloistic soloists.

    Do you know the story of the eminent pianist who early on in his career was accused by a critic of playing semiquavers as if counting peas — with the result that, from then on, rhythmic steadfastness evaporated from his playing? Many years of appearing with orchestras and dealing with string quartets have confirmed my ideal of a rhythmic control that, in solo music, should never stray too far from ensemble playing. After all, the greatest piano composers — excepting Chopin and, in their young years, Schumann and Liszt — have all been ensemble composers as well, if not primarily. It seems highly unlikely that a composer should harbor two distinctly different concepts of rhythm and tempo, one for soloists, another for ensemble players. “Freedom” of playing should be confined to cadenzas, recitatives, and sections of an improvisatory nature. It goes without saying that Beethoven’s scores are neither entirely complete nor apt to be put into practice by a computer. To prepare the onset of a new idea, to give sufficient time to a transition, to underline the weight of an ending: these were self-evident matters that the performance of tonal music implied.

    Compared to the younger and short-lived Schubert, Beethoven had more time and opportunity to hear his own works performed, and to react to the performances. His hearing trouble was probably not so severe that it would have prevented him from perceiving certain tones and nuances. The Schuppanzigh Quartet, an institution that had already been associated with Haydn, accompanied his string quartet production to its very end. This was the first professional quartet in performance history, and it seems to have been available to Beethoven consistently. When Schuppanzigh stayed away from Vienna for a number of years, Beethoven halted his composition of string quartets, only to take it up again when Schuppanzigh returned. His quartet in E-flat Op. 127 was premiered within the series of “classical” chamber music concerts that Schuppanzigh inaugurated. This performance, however, turned out to be inadequate, and in due course several other performances with different players were organized to give connoisseurs the chance of getting better acquainted with such novel music. (The fact that this was feasible may have been due to the unparalleled triumph of Beethoven’s patriotic creations Wellington’s Victory and “Der glorreiche Augenblick,” or “The Glorious Moment,” which marked the peak of his popularity as well as the low point of his compositional output.)

    The profusion and the distinctiveness of Beethoven’s markings in the late string quartets did not result entirely from imagining them — it was connected to performance practice as well. Only in his fugues do we find a lack of detailed instructions. In these passages the players have to intervene and provide additional dynamic information, unless they are intent on drowning the listener of Beethoven’s “Grosse Fuge” in long stretches of fortissimo.

    Schuppanzigh’s concert series were mainly geared towards string quartets (regularly those of Haydn, Mozart, and Beethoven), but they also included quintets, nonets and (“to divert the ladies”) piano trios. Solo piano works were hardly performed in public until Liszt invented the piano recital in London in 1840. Like Beethoven’s late quartets, his late piano sonatas became too difficult to be executed by domestic players. In order to tackle works such as the Sonata Op. 101 you had to be as proficient as Dorothea Ertmann, Beethoven’s much-admired pupil and friend to whom the sonata is dedicated. Works such as Op. 106 and Op. 111 were deemed unplayable. Only in the second half of the nineteenth century did they start to seep into musical consciousness, thanks mainly to the advocacy of Hans von Bülow.

    In spite of the commitment of performers such as Bülow, Artur Schnabel, Edwin Fischer, and Rudolf Serkin, the appreciation of the Diabelli Variations took considerably longer. Only recently has this magnum opus turned into a parade horse of many pianists as well as an endurance test for audiences that have now learned to sit through, and even relish, a work that is fifty-five minutes long and almost entirely in the key of C major. Among the reasons for this delay was the mythological misconception of the late Beethoven as “a loner with his God,” when in fact the profane was no less available to him than the sublime, the musical past no less than the musical present and future. In the Diabelli Variations, virtuosity and introspection, humor and gracefulness, cohabit under one roof.

    According to his assistant Schindler, Beethoven conceived these variations while “in a rosy mood,” and humor (“the sublime in reverse,” according to Jean Paul) reigns over wide stretches of the work. Wilhelm von Lenz, who took piano lessons from Liszt and became the author of the first detailed appreciation of all of Beethoven’s works, calls Beethoven “the most thoroughly initiated high priest of humor.” Conveying humor in music had been one of Haydn’s great achievements, and Beethoven linked up with it. Of course, the performer of humorous music should never appear to be forcing the comical. In the Diabelli Variations, the wit ought to become apparent, as it were, by itself, while the enraptured and enigmatic pieces provide the depth of perspective.

    Beethoven had a predilection for placing the ridiculous next to the sublime. The bottomless introspection of Variation XX is followed by a piece in which a maniac and a moaner alternate. After concluding his work on the Sonata Op. 111, his last piano sonata, Beethoven turned to finishing his Diabelli Variations, the theme of which is motivically related to the Sonata’s Arietta. Once more, the sublime and the “sublime in reverse” face one another.

    It has been claimed that Beethoven’s late style narrows down into the subjective and esoteric. What I find in it, however, is expansion and synthesis. Opposites are forced together, refinement meets bluntness, the public is paired with the private, roughness stands next to childlike lyricism. Does the inclusion of the Diabelli Variations into the wider repertory suggest that, these days, we have learned to listen to Beethoven’s late music with open ears? What we can take for granted is that no amount of familiarity with these pieces is going to erase their tinge of mystery.

    The George Floyd Uprising

    I

    Overnight mass conversions to the cause of African American rights are a rare phenomenon in America, and, even so, a recurrent phenomenon, and ultimately a world-changing phenomenon. The classic instance took place in 1854 in Boston. An escaped slave from Virginia named Anthony Burns was arrested and held by United States marshals, who prepared to send him back into bondage in Virginia, in accordance with the Fugitive Slave Act and the policies of the Franklin Pierce administration. And a good many white people in Boston and environs were surprised to discover themselves erupting in violent rage, as if in mass reversion to the hot-headed instincts of their ancestors at the glorious Tea Party of 1773. Respectable worthies with three names found themselves storming the courthouse. Amos Adams Lawrence, America’s wealthiest mill owner, famously remarked, “We went to bed one night old-fashioned, conservative, Compromise Whigs & waked up stark mad Abolitionists.” John Greenleaf Whittier experienced a physical revulsion:

    I felt a sense of bitter loss, —
    Shame, tearless grief, and stifling wrath,
    And loathing fear, as if my path
    A serpent stretched across.

    Henry David Thoreau delivered a lecture a few weeks later under the scathing title, “Slavery in Massachusetts,” in support of blowing up the law: “The law will never make men free; it is men who have got to make the law free.” And in upstate New York, the businessman John Brown, taking the fateful next step, declared that “Anthony Burns must be released, or I will die in the attempt,” which sounded the note of death. Burns was not released. John Brown went to Bleeding Kansas, where the note of death produced the Pottawatomie Massacre in 1856, and thence to Harper’s Ferry and everything that followed.  

    A second instance took place in March 1965, this time in response to a police attack on John Lewis and a voting-rights march in Alabama. The event was televised. Everyone saw it. And the furor it aroused was sufficiently intense to ensure that, in our own day, the photo image of young Lewis getting beaten, though it is somewhat blurry, has emerged as a representative image of the civil-rights revolution. It was Lyndon Johnson, and not any of the business moguls or the poets, who articulated the response. Johnson delivered a speech to Congress a few days later in which, apart from calling for the Voting Rights Act to be passed, he made it clear that he himself was not entirely the same man as before. “We shall overcome,” said the president, as if, having gone to bed a mere supporter of the civil rights cause, he had waked up marching in the street and singing the anthem. He went further yet. In a speech at Howard University, he defined the goal, too: “not just equality as a right and a theory, but equality as a fact, and equality as a result,” which inched his toe further into social democratic terrain than any American presidential toe has ever ventured.  

    And, a week after the Voting Rights Act duly passed, the violent note of the 1960s, already audible, began to resound a little more loudly in the Watts district of Los Angeles, prefiguring still more to come over the next years — violence in the ghettos, and among the police, and among the white supremacists, and eventually on the radical left as well. All of which ought to suggest that, in the late spring of 2020, we saw and perhaps participated in yet another version of the same rare and powerful phenomenon: an overnight conversion to the cause of African American rights, sparked by a single, shocking, and visible instance of dreadful oppression, with massive, complicated, and, on a smaller scale, sometimes violent consequences. 

    During the several months that followed the killing of George Floyd, which occurred on May 25, 2020, close to eight thousand Black Lives Matter demonstrations are reported to have taken place in the United States, in more than two thousand locales in every part of the country. Many of those demonstrations must have drawn just a handful of people. Then again, a protest parading under my own windows in Brooklyn in early June filled eight lanes and took half an hour to pass by, and, far from being unusual, was followed by similar marches from time to time, week after week, eventually dwindling in size, then swelling up again, and never disappearing, not for several months. It is reasonable to assume that, nationwide in America, several million people took part in those demonstrations. These were the largest anti-racist demonstrations in the history of the United States, and they were echoed by still other Black Lives Matter demonstrations in a variety of other countries, which made them the largest such event in the history of the world. The scale of the phenomenon makes clear that, whatever the precise size of the crowds, enormous numbers of participants had to be people who, like Amos Adams Lawrence, went to bed as quiet citizens and waked up transformed into militants of the cause, ready to paint their own placards (a disdain for printed placards or anything else bespeaking the dead hand of top-down obedience was a style of the movement) and carry them through the streets, chanting “Black lives matter!” and other, scrappier slogans (“Why are you in riot gear? / I don’t see no riot here!”) that, until yesterday, would never have been theirs. This has been, in short, a major event not just globally, but intimately and individually, one marcher at a time. 

    The intimate and individual aspect has made itself visible, too, in the wave of professional groups and institutions of many sorts that have announced campaigns of their own to break up the segregated aspect (or worse) of institutional life in America — protests and campaigns in any number of business organizations and academic and cultural institutions, unto Kappa Alpha, the Robert E. Lee-revering college fraternity. And, in conformity with the historical pattern, the undertow of violence and destruction has likewise made itself visible, some of it a low-level political violence on the radical left, some of it in prolonged versions too (which is a fairly novel development); some of it a violence on the radical right, the ominous posturing with guns in public, the wave of right-wing car-rammings, the terrorist plots in Michigan, and some murders; and some of it outbreaks of looting, not on the urbicidal scale of the 1960s, but epidemically spread across the country, hotspot to hotspot.

    The furors of 1854, 1965, and 2020 arose in response to particular circumstances, and a glance at the circumstances makes it possible to identify more precisely the intimate and even invisible nature of the mass conversions. The circumstances in 1854 amounted to a political betrayal. The mainstream of the political class had managed for a quarter of a century to persuade the antislavery public in large parts of the North that it was possible to be antislavery and conciliatory to the slave states at the same time, in the expectation that somehow things were going to work out. Instead, the Kansas-Nebraska Act of 1854, by enabling further triumphs of the slave system, demonstrated that nothing was working out. People who thought of themselves as patient and moderate reformers concluded that they had been played. And, with the arrest of a fugitive slave in antislavery’s principal city, the patient and moderate reformers felt personally implicated, too. They erupted in wrath on behalf of Anthony Burns, who was in front of them, and on behalf of the American slaves as a whole, who were mostly far away. They erupted on behalf of America and the principles of the American Revolution, which they understood to be identical to the antislavery cause (as expressed by Walt Whitman, still another enragé, in his poem on the Burns affair, “A Boston Ballad”). But they erupted also on their own behalf, one person at a time. They were earnest Christians who discovered, to their horror, that they had allowed themselves to be duped by smooth-talking politicians into acceding for a quarter of a century, through association with the abomination of slavery, to their own moral degradation or damnation. 

    The “stifling wrath” (Whittier’s phrase) was different in 1965, but not entirely so. Opinion in large parts of the country had come around in favor of civil rights, timidly perhaps, but with a feeling of moral righteousness. The philosophical battle against segregation and invidious discrimination seemed to have been won, as shown by Johnson’s success, a year earlier, in pushing through the Civil Rights Act. Under those circumstances, to see on television the state troopers of the rejected Old South descend upon the demonstrators in Selma, quite as if the country had not, in fact, already made a national decision — to see the troopers assault young John Lewis and other people well-known and respected for their noble agitations — to see, in short, the unreconstructed bigots display yet again, unfazed, the same stupid, brutal arrogance that had just gone down to defeat — to see this was — well, it did not feel like a betrayal exactly, but neither did it feel like a simple political setback. It felt like a national insult. It was an outrage to everyone who had waked up singing “We Shall Overcome.” It was an outrage to the murdered President Kennedy. Then again, to some people the spectacle signified the futility of political action and self-restraint and, in that fashion, it opened the gates of limitless rage. 

    The political origins of the mass response to the killing of George Floyd are likewise identifiable, though I will confess that, if you had asked me a day before it started to predict the future of radical reform in America, I would have identified a different set of origins, and I would have extrapolated a different outcome. The origins that did lead to the uprising had everything to do with Black Lives Matter as an organization, and not just as a vague movement. Everyone will recall that, in 2013, a Florida vigilante named George Zimmerman was acquitted of the murder of a black teenager named Trayvon Martin, and the acquittal led to furious demonstrations in Florida, California, and New York. A politically savvy young black woman in San Francisco named Alicia Garza posted stirring responses to the incident on Facebook, which included the phrase “black lives matter,” simply as a heartbroken thought and not as a slogan, and which were reposted by others using #blacklivesmatter. Garza and a couple of her Californian friends, Patrisse Cullors and Opal Tometi, converted their hashtag into a series of social media pages and thus into a committee of sorts. 

    Garza was a professional community organizer in San Francisco, and, as she makes plain in her account of these events, The Purpose of Power: How We Come Together When We Fall Apart, she and the little committee did know how to respond to unpredicted events. The next year, when the police in Ferguson, Missouri, shot to death Michael Brown, a spontaneous local uprising broke out, which was the unpredicted event. Garza and her group made their way to Ferguson, and, by scientifically applying their time-tested skills, helped convert the spontaneous uprising into an organized protest. Similar protests broke out in other cities. The Black Lives Matter movement was launched — a decentralized movement animated by a sharply defined outrage over state violence against blacks, with encouragement and assistance from Garza and her circle, “fanning the flames of discontent,” as the Wobblies used to say, and then from other people, too, who mounted rival and schismatic claims to have founded the movement. 

    In New York City, the marches, beginning in 2014, were large and feisty — marches of young people, sometimes mostly white, sometimes multihued, with flames fanned by the New York Police Department, whose uniformed members managed to choke to death Eric Garner, guilty of the peaceable crime of selling bootleg cigarettes. I did a little marching myself, whenever an attractive cohort was passing by. Some of these marches were, in fact, attractive. Then again, some of them seemed to be youth adventures, a little daffy in their anti-police fervor. I kept expecting to discover, at the rear of one march or another, a graduate-student delegation wheeling an antique caboose loaded with dogmas of the university left, barely updated from the identity politics of the 1970s and 1980s, or shrewdly refitted for the anti-Zionist cause. And, to be sure, Angela Davis, who spent the 1970s and 1980s trying to attach the black cause in America to the larger cause of the Soviet Union, came out with a book in 2016 called Freedom Is a Constant Struggle: Ferguson, Palestine, and the Foundations of a Movement, trying to merge, on intersectionalist grounds, Black Lives Matter in Missouri to the Palestinian struggle against Israel. 

    As it happens, the anti-Zionists had some success in commandeering an umbrella group of various organizations, the Movement for Black Lives, that arose in response to the upsurge of Black Lives Matter demonstrations. But the anti-Zionists had no success, or only fleeting successes, in commandeering Black Lives Matter itself. Nor did the partisans of any other cause or organization manage to commandeer the movement. Alicia Garza makes clear in The Purpose of Power that, in regard to the maneuverings and ideological extravagances of sundry factions of the radical left, she is not a naïf, and she and her friends have known how to preserve the integrity of their cause. Still, she is not without occasional extravagances of her own. In her picture of African American history, she deems the “iconic trio” of freedom fighters to be Martin Luther King, Malcolm X, and, of all people, Huey Newton, the leader of the Black Panther Party in the 1960s and 1970s, “the Supreme Servant of the People” — though Garza’s San Francisco Bay Area is filled with any number of older people who surely remember the Supreme Servant more sourly. 

    An occasional ideological extravagance need not get in the way, however, of a well-run organizing project. In San Francisco, a black neighborhood found itself suddenly deprived of school buses, and, as Garza describes, she and her colleagues efficiently mobilized the community, even if that involved the followers of Louis Farrakhan, of whom she appears to be not too fond. And lo, bus service resumed. Mobilizing a few neighborhoods around police violence is not any different. Still, the ideological impulses are sometimes hard to repress. From Garza’s standpoint, the overriding necessity during the presidential campaign of 2016 was to denounce the Democratic Party for its evident failings. Militants of Black Lives Matter duly made dramatic interventions in the campaign — at one of Bernie Sanders’ events, in order to denounce Bernie for failing to give black issues a proper consideration; and at an event of Hillary Clinton’s, in order to denounce Hillary for her own related inadequacies. But those were less than useful interventions. They seemed likely only to dampen popular black enthusiasm for the Democratic Party, precisely at a moment when the cause of anti-Trumpism depended on black enthusiasm — which led me to suppose, back in 2016, that Black Lives Matter was bound to remain a marginal movement, brilliantly capable of promoting its single issue, but incapable of maneuvering successfully on the larger landscape. 

    The leftwing upsurges that, in my too fanciful imagination, seemed better attuned to the age were Occupy Wall Street, which got underway in 2011, and Sanders’ 2016 and 2020 presidential campaigns. Occupy mostly evaded the dismal fate that skeptical observers predicted for it (namely, a degeneration into mayhem, Portland-style); and the Sanders campaigns only partly indulged, and mostly evaded, their own most dismal possibility (namely, a degeneration into full-tilt Jeremy Corbynism). Instead, the two movements gathered up large portions of the American radical left and led them out of the political wilderness into the social mainstream — in the case of Occupy, by transforming the anti-Main Street hippie counterculture into a species of hippie populism, 1890s-style, with a Main-Street slogan about “the ninety-nine per cent”; and, in the case of Bernie’s campaigns, by convincing large portions of the protest left to lighten up on identity politics, to return to an almost forgotten working-class orientation of long ago, and to go into electoral politics. Those were historic developments, and, in my calculation, they were bound to encourage the more practical Democrats to make their own slide leftward into a renewed appreciation for the equality-of-results idea that Lyndon Johnson had tried to get at. And then, with the pandemic, a leftward slide began to look like common sense, without any need to call itself any kind of slide at all. In the early spring of 2020, that was the radical development I expected to see — a dramatic renewal of the unnamed social-democratic cause. Not an insurrection in the streets, but something larger.

    Instead, there was an insurrection in the streets. The insurrection owed nothing at all to nostalgias for the 1890s or Eugene V. Debs or LBJ. It was an antiracist uprising. What can explain this?

    The video of George Floyd explains it. Six or seven years of skillful agitations by the Black Lives Matter movement had made everyone aware of the general problem of police killings of black men, one killing after another, not in massacres, but in a grisly series. The agitations had made everyone aware of the furious resentment this was arousing in black communities everywhere. But Black Lives Matter had also tried to make the argument that police killings represent a larger underlying cruelty in American life, something built into the foundations of society. And, until that moment, the agitations had not been able to overcome a couple of widely shared objections to that last and most radical of contentions.

    There was the objection that, however ghastly the series of killings had proved to be, the series did not constitute a unified wave, and nothing in particular was responsible for it. Ijeoma Oluo is a journalist in Seattle, whose book So You Want to Talk About Race is one of several new popular tracts on these themes. And she puts it this way: 

    In this individualist nation we like to believe that systemic racism doesn’t exist. We like to believe that if there are racist cops, they are individual bad eggs acting on their own. And with this belief, we are forced to prove that each individual encounter with the police is definitively racist or it is tossed out completely as mere coincidence. And so, instead of a system imbued with the racism and oppression of greater society, instead of a system plagued by unchecked implicit bias, inadequate training, lack of accountability, racist quotas, cultural insensitivity, lack of diversity, and lack of transparency — we are told we have a collection of individuals doing their best to serve and protect outside of a few bad apples acting completely on their own, and there’s nothing we can do about it other than address those bad apples once it’s been thoroughly proven that the officer in question is indeed a bad apple.

    The second objection was the opposite of the first. It conceded Ijeoma Oluo’s points about police departments. But it went on to argue that, contrary to her contention, the failings of police work are, in fact, widely understood, and a campaign to address the failings is well underway. Perhaps the campaign has not advanced very far in the retrograde America that still flies the Confederate flag, but in other parts of the country, in the enlightened zones, where cities are liberal, and mayors likewise, and police chiefs are reform-minded, the campaign to modernize the police has been sincere, or mostly, and it has been social-scientifically sophisticated, and it has taken aim at racial biases. And if problems persist, these may amount to a failure of communication — the failure to conduct the kind of face-to-face conversations among reasonable people that President Obama promoted at the White House by having a beer with Professor Henry Louis Gates, Jr., and the police officer who had treated Gates as a burglar on his own doorstep. Minor problems, then — problems calling for articulate presentations of up-to-date civic values from liberal politicians and reform leaders.

    But the video was devastating to the first objection. And it was devastating to the second. The video shows a peaceful day on the sidewalks of enlightened Minneapolis. George Floyd is on the ground, restrained, surrounded by police officers, and Officer Derek Chauvin plants a confident knee on his neck. The officer looks calm, self-assured, and professional. Three other cops hover behind him, and they, too, seem reasonably calm, the group of them maintaining what appears to be the military discipline of a well-ordered police unit. Apart from Chauvin’s knee, nothing alarming appears to be taking place. No gunshots ring in the distance, no commotion rises from the street, no shouts against the police or anyone else — nothing that might panic the cops or enrage them or throw them into confusion. And, in that setting, the video shows the outcome. Floyd moans that he cannot breathe. Someone on the sidewalk tries to tell the oblivious Officer Chauvin that something is wrong. And, for the many millions of people who watched the video, the shocking quality was double or triple. 

    If even a firecracker had gone off in the distance, the viewers could have concluded that Officer Chauvin was overcome with fear, and his actions might be understandable, though a more skillful cop would have known how to keep his cool. Or, if only Officer Chauvin had looked wild-eyed and upset, the viewers could have concluded that here was a madman. But, no. Chauvin and the other cops, maintaining their unit discipline, plainly show that all was well, from their standpoint. The four of them make no effort to prevent the people on the sidewalk from observing the event. No one seems embarrassed. These are cops who appear to believe themselves to be operating by the book. 

    And yet, how can they believe such a thing? Everyone who watched that video was obliged to come up with an explanation. The obvious one was that, in Minneapolis, the four police officers do not look like rule-breaking rogues because they are not, in fact, breaking rules — not in their own minds, anyway. Yes, they may be going against the advice proffered by their reform-minded department chief and their hapless mayor, the bloodless liberal. But they are conforming to the real-life professional standards of their fellow officers, which are the standards upheld by the police unions everywhere, which are, in turn, the standards upheld by large parts of the country, unto the most national of politicians. “Please don’t be too nice,” said the president of the United States to the police officers of Long Island, New York, in July 2017, with specific advice to take people under arrest and bang their heads as they are shoved into police vehicles. Why, then, should the four cops in Minneapolis have considered themselves rogues? That was the revelation in the video of George Floyd’s death. 

    And a large public drew large conclusions. To draw momentous conclusions from a single video shot on the sidewalks of Minneapolis might seem excessive. Yet that is how it is with the historic moments of overnight political conversion. There were four million slaves in 1854, but the arrest of a single one proved to be the incendiary event. In the case of George Floyd, the single video sufficed for a substantial public to conclude that, over the years, the public had been lied to about the complexities of policing; had been lied to about bad apples in uniform; had been lied to about the need for patience and the slow workings of the law. The public had been lied to by conservatives, who had denied the existence of a systemic racism; and had been lied to by liberals, who had insisted that systemic racism was being systematically addressed. Or worse, a large public concluded that it had been lied to about the state of social progress generally in America, in regard to race — not just in regard to policing, but in regard to practically everything, one institution after another. Still worse, a great many people concluded, in the American style, or perhaps the Protestant style, that, upon consideration, they themselves had been terribly complicit, and, in allowing themselves to be deceived by the police and the conservatives and the liberals, they had abandoned the black protesters, and they had allowed the police violence and the larger pattern of racial oppression to persist. Those were solemn conclusions, and they were arrived at in the most solemn of fashions, by gazing at a man as he passes from life to death. 

    So masses of people marched in the streets to rectify the social wrong. But they marched also to rectify the wrong nature of their own relation to society. This of course raises the question of what would be the right nature — which happens to be the topic of the new and extraordinarily popular literature of American antiracism. 

    II

    The literary work that shaped the mass conversion to anti-racism in 1854 was Uncle Tom’s Cabin, by Harriet Beecher Stowe, from 1852 — which was much despised by James Baldwin a century later for its demeaning portrait of the very people it was meant to support. The book that, more than any other, shaped the mass conversion in 1965 was Dark Ghetto, a sociological study from that same year, by Kenneth B. Clark — which was much despised at the time by Albert Murray, the author of The Omni-Americans, for what he, too, took to be a demeaning portrait of the very people it was meant to support. The book that, more than any other, has shaped the mass conversion of our own moment is Between the World and Me, by Ta-Nehisi Coates, from 2015 — which was written in homage to Baldwin, and yet is bound to make us wonder what Murray would have thought, if he had lived another few years. 

    Between the World and Me has shaped events because, in a stroke of genius, Coates came up with the three main and heartrending tropes of the modern crisis behind the antiracist uprising — to wit, “the talk”; the killing by the police of a young black man; and the young man’s inconsolable mother. The form of the book is a frank and emotional letter from Coates to his young son, which amounts to “the talk,” advising the son on the realities of black life in a hostile white America. The killing that takes place is of an admirable young black man from Coates’ social circle at college. The inconsolable mother is the young man’s mother, whom Coates goes to visit. In laying out these elements, Coates has supplied a vocabulary for speaking about the realities of modern police violence against blacks, which is a language of family life: an intimate language, Baldwinesque and not sociological, a language of family grit and grief. 

    Then again, he speaks insistently and emotionally but also somewhat abstractly about the black body and its vulnerability — not the beauty of the black body, but, instead, its mortifications, considered historically. These are the physical horrors of slavery long ago, conceived as horrors of an ever-present era, as experienced by himself as a young boy growing up in the rough neighborhoods of Baltimore, or as a child subjected to what appear to have been his father’s disciplinary beatings. This aspect of the book, the contemplation of the body and its mortifications, amounts, in effect, to a theory of America. Or rather, it amounts to a counter-theory, offered in opposition to the doctrine that he describes as the capital-D “Dream.” The Dream, as he lays it out, is the American idea that is celebrated by white people at Memorial Day barbecues. Coates never specifies the fundamentals of the idea, but plainly he means the notion that, in its simple-minded version, regards America as an already perfect expression of the democratic ideal, a few marginal failings aside. Or he means the notion that, in a more sophisticated way, regards 1776 as the American origin, and regards America’s history as the never-ending struggle, ever-progressive and ever-victorious, a few setbacks aside, to bring 1776 to full fruition. A theory of history, in short.

    His counter-theory, by contrast, postulates that, from the very start, America has been built on the plundering of the black body, and the plundering has never come to an end. This is an expressive idea. It scatters the dark shadow of the past over every terrible thing that happens in the present, which is never wrong to do, if the proportions are guarded. Yet Coates adopts an odd posture toward his own idea, such that, in one way or another, he ends up miniaturizing certain parts of his story. When he conjures the Dream, the precise scene that he brings to life is of little blond boys playing with toy trucks and baseball cards at the Memorial Day barbecue, as if this were the spectacle that arouses his resentment. When he conjures his own adult experience with the historic mortifications, he describes a disagreeable altercation on an escalator on the Upper West Side of Manhattan, where a white lady treats him and his toddler son in a tone of haughty disdain, and is seconded by a white man, and the temperature rises — as if this were the legacy of the horrors of long ago.

    The incident on the escalator constitutes a climax of sorts in Between the World and Me — the moment when Coates himself, together with his toddler, has to confront the reality of American racism. And yet the incident is inherently ambiguous. He gives us no precise reason to share his assumption that the woman and the man are angry at him on a racist basis — an observation made by Thomas Chatterton Williams in his discussion of the scene in his own book, Self-Portrait in Black and White. Williams wonders even if Coates’ anger at the lady’s haughtiness might not have alarmed the lady and the man, with misunderstandings of every kind likely to have resulted — an easy thing to imagine in a town like New York, where sidewalk incidents happen all the time, and whites presume their own liberal innocence, and blacks do not, and correct interpretations are not always obvious. The ambiguity of the scene amounts to yet another miniaturization. The miniaturized portraits are, of course, deliberate. They allow Coates to express the contained anger of a man who, in other circumstances, would be reliably sweet-tempered. 

    He does present himself as a loving man — as a father, of course (which confers a genuine tenderness on the book), but also in regard to African American life as a whole. And yet something about this, too, his love for black America, ends up miniaturized. His principal narrative of African America is a portrait of Howard University from his own school-days, presented as an idyllic place, intellectually stimulating, pleasant, socially marvelous, affection-inspiring and filled with family meaning, too, given that his father, the Black Panther, had worked there as a research librarian — an ideal school, in sum, designed to generate graduates such as himself, therefore a splendid achievement of black America. But the argument that he makes about the ever-present universe of American slavery and the eternal vulnerability of the black body makes it seem as if, over the centuries, black America has achieved nothing at all, outside of music, perhaps, to which he devotes a handful of words. It is a picture of the black helplessness that racist whites like to imagine, supine and eternally defeated. This was Albert Murray’s objection to the black protest literature of the 1960s, with its emphasis on victimhood — the literature that was unable to see or acknowledge that, in the face of everything, black America has contributed from the very start to what Coates disparages as the Dream, or what Murray extols as the Omni-America, which is the mulatto civilization that, in spite of every racial mythology, has always been white, black, and American Indian all at once.

    I do not mean to suggest that Coates’ bitterness is inauthentic. Frank B. Wilderson III is twenty years older than Coates and, with his degrees from Dartmouth, Columbia, and Berkeley, is today the chair of the African-American Studies department at the University of California, Irvine. His recent book, Afropessimism, conjures a similar landscape of anger and bitterness, as if in confirmation of Coates, except in a version that is far more volcanic, or perhaps hysterical. Coates during his college years in the 1990s was, as he explains, an adept of Malcolm X, but then outgrew the exotic trappings of Malcolm’s doctrine, without rejecting the influence entirely. Wilderson, during his own youth in the 1970s, was a “revolutionary communist,” in an acutely intellectual, Third Worldist fashion. He was an admirer of the Black Liberation Army, which was the guerrilla tendency that emerged from Eldridge Cleaver’s faction of the Black Panthers on the West Coast (and from City College in New York). The great inspiring global example of revolutionary resistance, in Wilderson’s eyes, was the Popular Front for the Liberation of Palestine, given its uncompromising struggle against the Zionist state — which, being a man of ideologies, he imagined (and evidently still imagines) to be a white European settler colony. And the Black Liberation Army, in his view, was the PFLP’s American counterpart. 

    Revolutionary communism left him feeling betrayed, however, or perhaps singed — damaged and enraged not by his black comrades in the United States, but by everyone else: by the whites of the revolutionary communist movement (namely, the Weather Underground, who gave up the struggle and returned to their lives of white privilege), and even more so by the non-blacks “of color.” He felt especially betrayed by the Palestinians. He was horrified to discover that a Palestinian friend in his hometown of Minneapolis, who despised Israelis, reserved a particular contempt for Israel’s Ethiopian Jews. And, in despair at the notion that even Palestinians, the vanguard of the worldwide vanguard, might be racist against blacks, Wilderson turned away from revolutionary Marxism, and he distilled his objections and complaints into a doctrine of his own — it is a doctrine, though a very peculiar one — which he calls Afropessimism. 

    The doctrine is a racialized species of post-Marxism. Wilderson thinks that, instead of the world being riven by Marx’s economic class conflict, or by the imperialist versus anti-imperialist conflict of Marxism in its Third Worldist version, it is riven by the conflict between the non-blacks and the blacks. The non-blacks regard themselves as the capital-H Human race, and they do so by seeing in the blacks a sub-human race of slaves. And the non-blacks cannot give up this belief because, if they did so, they would lose their concept of themselves as the Human race. Nor is there any solution to this problem, apart from the “end of the world,” or an apocalypse. The idea is fundamentally a variant of certain twentieth-century theories about the Jews — e.g., Freud’s notion that hatred of the Jews supplies the necessary, though unstated, foundation for the Christian concept of universal love. Freud’s theory is not especially expressive, though. Wilderson’s theory expresses. It vents. But the venting is not meant to serve a constructive purpose. Wilderson tells us that he studied under Edward Said at Columbia University, and he was greatly influenced. He admired Said’s resolute refusal to accept the existence of a Jewish state in any form. But Said’s revolutionary aspiration, in conformity with the Popular Front for the Liberation of Palestine, was to replace the Jewish state with something else. Wilderson’s Afropessimism entertains no such aspirations. It is “a looter’s creed,” in his candid phrase — meaning, a lashing out, intellectually violent, without any sort of positive application. Positive applications are inconceivable because the non-black hatred of blacks is unreformable.

    Still, he does intend Afropessimism to be a demystifier, and in this regard his doctrine seems to me distinctly useful. The doctrine beams a clarifying light on the reigning dogma on the American left just now, which is intersectionalism — a dogma that is invoked by one author after another in the antiracist literature, with expressions of gratitude for how illuminating it is, and how comforting it is. Intersectionalism is a version of the belief, rooted in Marx, that a single all-encompassing oppression underlies the sufferings of the world. Marx considered the all-encompassing oppression to be capitalism. But intersectionalism considers the all-encompassing oppression to be bigotry and its consequences — the bigotry that takes a hundred forms, which are racism, misogyny, homophobia, and so forth, splintering into ever smaller subsets. Intersectionalism considers that various subsets of the all-encompassing oppression, being aspects of the larger thing, can be usefully measured and weighed in relation to one another. And the measuring and weighing should allow the victims of the many different oppressions to recognize one another, to identify with one another, and to establish the universal solidarity of the oppressed that can bring about a better world.  

    But Wilderson’s Afropessimism argues that, on the contrary, the oppression of blacks is not, in fact, a variation of some larger terrible thing. And it is not comparable to other oppressions. The oppression of blacks has special qualities of its own, different from all other oppressions. He puts this hyperbolically, as is his wont, by describing the bigotry against blacks as the “essential” oppression, not just in the United States — though it ought to be obvious that, whether it is put hyperbolically or not, the oppression of blacks throughout American history does have, in fact, special qualities. On this point he is right. He is committed to his hyperbole, however, and it leads to an added turn in his argument. He contemplates, as an exercise in abstract analysis, the situation of a black man who rapes a white woman. In his view, the black man ought to be regarded as more oppressed than his own victim. The man may have more force, but he has less power. He is the victim of the “essential” oppression, and she is not, which makes his victimhood deeper. Wilderson’s purpose in laying out this argument is to shock us into recognizing how profound black oppression is. 

    Only, the argument leads me to a different recognition. I would think that, if black oppression cannot be likened  to other oppressions — if a special quality renders the black oppression unique — the whole logic of intersectionalism collapses. For if the black oppression is sui generis, why shouldn’t other oppressions likewise be regarded as sui generis? The oppression experienced by the victims of rape, for instance — why shouldn’t that, too, be regarded as sui generis? Why not say that many kinds of oppression are genuinely terrible, and there is no point in trying to establish a system for comparing and ranking the horrible things that people undergo? There might even be a virtue in declining to compare and rank one oppression with another. A main result of comparing and ranking the various oppressions is, after all, to flatten the individual experience of each, which softens the terribleness of the oppression — an especially misguided thing to do in regard to the racial history of the United States. 

    It may be a mistake to argue with Frank Wilderson III too much. He is a brilliant man with a literary gift that is only somewhat undone by a graduate-school enthusiasm for critical theory. But, at the same time, a cloud of mental instability or imbalance drifts across his book. He explains in his opening pages that his shock at discovering a casual anti-black racism among Palestinians induced in him a serious nervous breakdown, and he appears never to have fully recovered. He describes the sinister persecution that he believes he and his lover underwent at the hands of the FBI, and his account hints of paranoia. Then, too, it is striking how insistently he goes about miniaturizing his own picture of the racism against blacks that he believes to be inherent in the whole of civilization. The great traumatic experience of Wilderson’s childhood appears to have been the moment when the mother of a white friend persisted in asking him, “How does it feel to be a Negro?” 

    He is traumatized by the poor reception of his incendiary ideas at an academic conference in Berlin, not just among the straight white males whose essence it is to be oppressive, but among the women and non-whites whose intersectional essences ought to have impelled in them a solidarity with his oppressed-of-the-oppressed outlook. Especially traumatic for him is a Chinese woman at the scholarly conference, who, in spite of being multi-intersectionally oppressed, fails to see the persuasive force of his ideas. Then, too, a fight that turns nasty with a white woman in the upstairs apartment back in Minneapolis seems to him a reversion to the social relations of slavery times. The man has no skin. Every slight is a return to the Middle Passage. His book resembles Ta-Nehisi Coates’ in this respect yet again, except with a pop-eyed excess. The shadow of slavery times darkens even his private domestic satisfactions. He appears to regard his white wife as, in some manner, his slave master, though he seems not to hold this against her. It is positively a relief to learn from his book that, during his career as a communist revolutionary, he went to South Africa to participate in the revolution (by smuggling weapons, while working as a human-rights activist for Amnesty International and Human Rights Watch), but had to flee the country because he was put on a list of “ultra-leftists” to be “neutralized” by the circle around Nelson Mandela himself — a level-headed person, at last!

    But it is dismaying also to notice that, for all his efforts to identify anti-black racism and to rail against it, the whole effect of Wilderson’s Afropessimism is to achieve something disagreeably paradoxical. He means to make a forward leap beyond Marx, and he ends up making a backward leap to the era, a generation before Marx, when Hegel felt entitled to write the black race out of capital-H History. Hegel believed that black Africa, where slavery was practiced, existed outside of the workings of historical development that functioned everywhere else — outside of the human struggles that make for civilization and progress. Hegel was, of course, hopelessly ignorant of black life. Wilderson is not, and, even so, he has talked himself into reproducing the error. Wilderson, too, believes that blacks live outside of History. It is because blacks have never ceased to be the slaves that Hegel imagined them permanently to be. Wilderson explains: “for the Slave, historical ‘time’ is not possible.” Here is the meaning of the bitterness that Wilderson expresses wildly, and that Coates expresses not wildly. It is more than a denial of the black achievement in America, along the lines that exasperated Murray half a century ago. It is a denial, in effect, of tragedy, which exists only where there is choice, which is to say, where there is history. It is an embrace of the merely pitiful, where there is no choice, but only suffering — an embrace of the pitiful in, at least, the realm of rhetoric, where it is poignant (these are literary men), but lifeless.  

    Ibram X. Kendi appears, at first glance, to offer a more satisfactory way of thinking in his two books on American racism, Stamped from the Beginning: The Definitive History of Racist Ideas in America, which runs almost six hundred pages, as befits its topic, and the much shorter How to Be an Antiracist, which distills his argument (and does so in the autobiographical vein that characterizes all of the current books on American racism). Kendi does believe in history. He thinks of the history of racism as a dialectical development instead of a single despairing story of non-progress, as in Wilderson’s despairing rejection of historical time, or a single story of ever-victorious progress, as in the naive celebration of the sunny American “Dream.” He observes that racist ideas have a history, and so do antiracist ideas, and the two sets of ideas have been in complicated conflict for centuries. He also observes that black people can be racist and white people can be antiracist. He cites the example of the antislavery American white Quakers of the eighteenth century. He is the anti-Wilderson: he knows that the history of ideas about race and the history of races are not the same.

    His fundamental approach is, in short, admirably subtle. Still, he feels the allure of simplifying definitions. Thus: “A racist idea is any idea that suggests one racial group is inferior or superior to another racial group in any way.” And, with this formula established, he sets up a structure of possible ideas about blacks in America, which turn out to be three. These are: (a) the “segregationist” idea, which holds that blacks are hopelessly inferior; (b) the “assimilationist” idea, which holds that blacks do exhibit an inferiority in some regard, but, by assimilating to white culture, can overcome it; and (c) the “antiracist” idea, which holds that no racial group is either superior or inferior to any other “in any way.” His definitions establish what he calls the “duality of racist and antiracist.” And with his definitions, three-part divisions, and dualities in hand, he goes roaming across the American centuries, seeking to label each new person or doctrine either as a species of racist, whether “segregationist” or “assimilationist,” or else as a forthright “antiracist.”

    In How to Be an Antiracist, he recalls a high school speech-contest oration that he delivered to a mostly black audience in Virginia twenty years ago, criticizing in a spirit of uplift various aspects of African-American life — which, at the time, seemed to him a great triumph of his young life. In retrospect, though, sharpened by his analytic duality of racist and antiracist, he reflects that, in criticizing African Americans, his high-school self had fallen into the “assimilationist” trap. He had ended up fortifying the white belief in black inferiority — which is to say he had therefore delivered a racist speech! Is he fair to himself in arriving at such a harsh and humiliating judgment? In those days he attended Stonewall Jackson High School in Manassas, and, though he does not dwell on how horrible such a name is, it is easy to concede that, under the shadow of the old Confederacy, a speech criticizing any aspect whatsoever of black life might, in fact, seem humiliating to recall. On the other hand, if every commentary on racial themes is going to be summoned to a high-school tribunal of racist-versus-antiracist, the spirit of nuance, which is inseparable from the spirit of truth, might have a hard time surviving.

    Kendi turns from his own mortifying student oration to the writings of W.E.B. Du Bois. He recalls Du Bois’ famous “double consciousness” in The Souls of Black Folk, which reflected a desire “to be both a Negro and an American.” In Kendi’s reasoning, an “American” must be white. But this can only mean, as per his definitions, that W.E.B. Du Bois was — the conclusion is unavoidable — a racist, in the “assimilationist” version. Du Bois was a black man who wished no longer to be entirely black. Or worse, Du Bois wanted to rescue the African Americans as a whole from their “relic of barbarism” — a racist phrase, in Kendi’s estimation — by having the African-Americans assimilate into the white majority culture. Du Bois’ intention, in short, was to inflict his own racism on everyone else. Such is the ruling of the high-school tribunal.

    It is an analytical disaster. The real Du Bois was, to the contrary, a master of complexity, who understood that complexity was the black fate in America. Du Bois did not want to become white, nor did he want to usher the black population as a whole into whiteness. He wanted black Americans to claim what was theirs, which was the reality of being black and, at the same time, the reality of being American, a very great thing, which was likewise theirs. He knew that personal identity is not a stable or biological fact: it is a fluidity, created by struggle and amalgamation, which is the meaning, rooted in Hegel’s Phenomenology of Mind, of “double consciousness.” A man compromised by “assimilationist” impulses? No, one of the most eloquent and profound enemies of racism that America has ever produced. 

    Kendi is confident of his dualities and definitions. He is profligate with them, in dialectical pairings: “Cultural racist: one who is creating a cultural standard and imposing a cultural hierarchy among racial groups.” Versus: “Cultural antiracist: One who is rejecting cultural standards and equalizing cultural differences among racial groups.” And, with his motor running, one distinguished head after another falls beneath his blade. He recalls Jesse Jackson’s condemnation, back in the 1990s, of the campaign to teach what was called Ebonics, or black dialect, to black students. “It’s teaching down to our children,” said Jackson, which strikes Kendi as another example of “assimilationist” error.  But Kendi does not seem to recognize who Jesse Jackson is. In his prime, Jesse Jackson was arguably the greatest political orator in America — the greatest not necessarily in what he said, which ran the gamut over the years, but in the magnificent way he said it. And the grandeurs of Jackson’s oratorical technique rested on the grandeurs of the black church ministry, which rest on, in turn, the heritage of the English language at its most majestic, which means the seventeenth century and the King James Bible. In condemning the promotion of Ebonics, Jackson was not attacking black culture. He was seeking to protect black culture at its loftiest, as represented by his own virtuosity at the pulpit and the podium — or so it seems to me. 

    But then, Kendi does not like the hierarchical implications of a word like “loftiest.” Naturally he disapproves of the critics of hip-hop. He singles out John McWhorter, who has seen in hip-hop “the stereotypes that long hindered blacks,” but he must also have in mind critics like the late Stanley Crouch, who condemned hip-hop on a larger basis, in order to defend the musical apotheosis that Crouch identified with Duke Ellington — condemned hip-hop, that is, in order to defend the loftiness of black culture in yet another realm. In this fashion, Kendi’s dualities of racist and antiracist turn full circle, and Ibram X. Kendi, the scourge of racism, ends up, on one page or another, the scourge of entire zones — philosophy, oratory, jazz — of black America’s greatest achievements.

    His ostensible purpose is to help good-hearted people rectify their thinking. It is a self-improvement project, addressed to earnest readers who wish to purge their imaginations of racist thoughts, in favor of antiracist thoughts. This sort of self-improvement is, of course, a fad of the moment. An early example was Race Talk and the Conspiracy of Silence: Understanding and Facilitating Difficult Dialogues on Race, by the psychologist Derald Wing Sue, from 2015, a serious book with its share of genuine insights into microaggressions and other features of the awkward conversations that Americans do have on topics of race. White Fragility: Why It’s So Hard for White People to Talk About Racism, by Robin DiAngelo, a diversity coach, is perhaps the best-known of these books — a slightly alarming book because its reliance on identity-politics analyses has the look of the right-wing race theoreticians of a century ago, except in a well-intentioned version. Ijeoma Oluo’s So You Want to Talk About Race, with its breezy air, is the most charming of the new books, though perhaps not on every page. But Kendi’s version is the most ambitious, and the most curious. 

    He does not actually believe in the possibilities of personal rectification — not, at least, as a product of education or moral suasion. In Stamped from the Beginning, he observes that “sacrifice, uplift, persuasion and education have not eradicated and will not eradicate racist ideas, let alone racist policies.” The battle of ideas does not mean a thing, and racists will not give up their racism. The people in power in the United States have an interest in maintaining racism, and they will not give it up. “Power will never self-sacrifice away from its self-interest. Power cannot be persuaded away from its self-interest. Power cannot be educated away from its self-interest.” Instead, the antiracists must force the people in power to take the right steps. But mostly the antiracists must find their own way, in his phrase, of “seizing power.” The phrase pleases Kendi. “Protesting against racist power and succeeding can never be mistaken for seizing power,” he says. “Any effective solution to eradicating American racism” — he means any effective method for eradicating it — “must involve Americans committed to antiracist policies seizing and maintaining power over institutions, neighborhoods, countries, states, nations — the world.” And then, having seized power, the antiracists will be able to impose their ideas on the powerless.

    This attitude toward the seizure of power is known, in the old-fashioned left-wing vocabulary, as putschism. But as everyone has lately been able to see, there is nothing old-fashioned about it. The manifesto that was signed not long ago by hundreds of scholars at Princeton University, calling for the university administration to ferret out racist ideas among the professors, was accepted, and the university announced its intention to set up an official mechanism for investigating and suppressing professorial error. Can this really be so? It is so, and not just at Princeton. The controversies over “cancel culture” are controversies, ultimately, over the putschist instinct of crowds who regard themselves as antiracist (or as progressive in some other way) and wish to dispense with the inconveniences of argument and persuasion, in favor of getting some disfavored person fired or otherwise shut up. And the controversies have spread from the universities to the arts organizations and the press. I would think that anyone who admires Kendi’s argument for seizing power could only be delighted by the successful staffers’ campaign at the New York Times to fire its eminently liberal op-ed editor, whose error was to adhere to the Times tradition of publishing contrarian right-wing op-eds from time to time — though other people may suppose that putsches in the newsroom and in the universities amount to one more destructive undertow in the larger constructive antiracist wave.

    A difficulty with putschism, in any case, has always been that putsch begets putsch, and the hard-liners will eventually set out to overthrow their wimpier comrades, and the reactionaries will set out to overthrow the lot of them; and truth will not be advanced. But apart from the disagreeable impracticality of the putschist proposal, what strikes me is the inadequacy of Kendi’s rhetoric to express the immensity and the depth of the American racial situation. It is a dialectical rhetoric, but not an expressive one. It amounts to a college Bolshevism, when what is required is — well, I don’t know what is required, except to remark that, when you read Du Bois, you do get a sense of the immensity and the tragedy, and the inner nature of the struggle, and the depth of the yearnings.

    Isabel Wilkerson’s alternative to this kind of thinking, presented in Caste: The Origins of Our Discontents, manages to be lucid and poetic at the same time, perhaps not in every passage, but often enough over the course of her few hundred pages. She wishes to speak principally about social structures, and not as much about ideas. Only, instead of looking at economic classes, which is what people typically think of when they think about social structures, she speaks about social castes, as in India. The caste system in traditional Indian society is a rigid and ancient social structure, which divided and still divides the population into inherited classes, whose members work at certain occupations and not others, and perhaps dress in certain ways, or are physically distinct, or have distinctive names, and are forever stuck in the eternity of their caste status. 

    There was a vogue in the 1930s and 1940s for social scientists to venture into the scary old American Deep South and, by applying surreptitiously the techniques of anthropology, to look for social structures of that kind in Jim Crow America. Isabel Wilkerson is fascinated by those people — by the anthropologist Allison Davis especially, a pioneering black scholar, to whom she devotes a few enthusiastic pages in her book. She is taken with Davis’ insights and those of his colleagues. She sets out to update the insights to our own era. And, in doing so, she comes up with a marvelous insight, though it takes her until her fourth chapter to lay it out. A caste system, as she describes it, is defined by its antiquity. It resembles a theater play that has been running for a long time, with actors who have inherited their roles and wear the costumes of their predecessors. “The people in these roles,” she explains, “are not the characters they play, but they have played the roles long enough to incorporate the roles into their very being.” They have grown accustomed to the distribution of parts in their play — accustomed to seeing who plays the lead, who plays the hero, who are the supporting actors, who plays the comic sidekick, and who constitute the “undifferentiated chorus.” The play and the roles are matters of habit, but they take them to be matters of reality.

    In a social system of that sort, custom and conformity are ultimately the animating forces. But then, in the American instance, if custom and conformity are the animating forces, there might not be much point in analyzing too deeply the ideas that people entertain, or think they entertain. And it might not be necessary to go rifling through a philosopher’s papers, looking for unsuspected error. Nor should it be necessary to set up language committees to promote new vocabularies and ban the old ones, in the belief that language-engineering will solve the social problems of past and present. That is Isabel Wilkerson’s major insight. She prefers to make social observations.

    She glances at India in search of perspective into caste structures and customs, and, although Indian civilization differs in every possible way from American civilization, she is struck by the American parallels — by the visible similarities between the African-American caste status in the United States, at the disdained or reviled bottom of American society, and the status of the lowest caste in India, the Dalits, or untouchables, at the disdained or reviled bottom of Indian society. She does seem to be onto something, too. She tells us that, in India, Dalit leaders and intellectuals have been struck by the same parallels, and they have recognized the far-away African-Americans as their own counterparts, and have felt an instinctive and sympathetic curiosity. And then, seeking to deepen her perspective, Wilkerson examines a third instance of what she believes to be a caste structure, which was the situation of the Jews under the Nazis in Germany.

    This seems to me only partly a good idea. There is no question that, in traditional Christian Europe, as well as in the traditional Muslim world, the Jews occupied the position of a marginalized or subordinate caste, with mandated clothing, sundry restrictions, humiliations, and worse. Traditionalism, however, was not the Nazi idea. Still, it is true that, on their way to achieving their non-traditional goal, the Nazis did establish a caste system of sorts, if only as a transitional state, with the Jews subjected to the old ghetto oppressions in an exaggerated form. And some of those measures drew overtly on the Jim Crow precedent in America. Wilkerson reminds us that, in preparation for establishing the Nuremberg Laws for Jews in Germany in 1935, the Nazi leaders undertook a study of American racial laws, the laws against miscegenation, the laws on blood purity, and so forth. And with the American example before them, the Nazis established their Law for the Protection of German Blood and German Honor and their larger code. She tells us that, in regard to blood purity, the Nazis even felt that America, with its “one drop” mania, had gone too far! — which is not news, but is bound to horrify us, even so.

    But she also draws another benefit from making the Nazi comparison, which has to do with the tenor and the intensity of her exposition. The Nazi comparison introduces a note from abroad, and the foreign note allows her to speak a little more freely than do some of the other commentators on the American scene. The foreign note, in this instance, is an uncontested symbol of political evil, and, having invoked it, she feels no need to miniaturize her American conclusions, and no need to introduce into them an aspect of childhood traumas. She does not draw a veil of critical theory over her presentation. Michel Foucault’s focus on the body appears to enter into her thinking not at all. Nor does she feel it necessary to toy with mental imbalances and nihilist gestures. Nor does she look for ways to shock anyone, beyond what is inherent to her topic.

    She points at the Nazis, and at the American champions of Jim Crow — points at the medical doctors in Germany, and at their medical counterparts in America, who, in the grip of their respective doctrines, felt free to conduct monstrous scientific experiments on victims from the designated inferior race. And any impulse that she may have felt to inhibit her expression or resort to euphemism or indirection disappears at once. In short chapters, one after another, she paints scenes — American scenes, not German ones — of mobs murdering and disfiguring their victims, of policemen coolly executing men accused of hardly anything, of a young boy murdered because of a love-note sent to a girl from the higher caste. She paints tiny quotidian scenes of minor cruelty as well — the black Little Leaguer who is prevented from joining his white teammates in a joyous festivity, or, then again, the Negro League career of Satchel Paige, perhaps baseball’s greatest pitcher, who watched his prime years go by without being able to display his skill in the Major Leagues. She does not twist her anger at these things into something understated, or into something crazy. Nor does she redirect her anger at secondary targets — at the white American resistance to discussing these things, or the lack of communication, or the lack of sympathy. Silence and the unspoken are not her principal themes. 

    Her theme is horror, the thing itself — the murdered victims dangling from the trees. Still, she does get around to addressing the phenomenon of denial and complacency and complicity, and, when she does so, her analytical framework allows her to be quietly ferocious. She reminds us that, apart from leading the Confederate troops in their war against the American republic, Robert E. Lee was a man who personally ordered the torture of his own slaves. He was a grotesque. She tells us that, even so, there were, as of 2017, some two hundred thirty memorials to Robert E. Lee in the United States. To underscore her point, she describes in a flat reportorial tone a public hearing in New Orleans on the matter of what to do about a statue of Lee, at which a retired Marine Corps officer spoke: “He stood up and said that Erwin Rommel was a great general, but there are no statues of Rommel in Germany. ‘They are ashamed,’ he said. ‘The question is, why aren’t we?’” — which is Isabel Wilkerson’s manner of staring her readers in the eye.  

    It would be possible to go through Caste and pick it apart, from the standpoint of social theory. But social theory is not really her theme, even if the anthropologists of the 1930s are her heroes and their concept of social caste drives her book forward. Mostly the work is an artful scrapbook of various perspectives on the black oppression in America, divided into short sections —  on the idea of caste, on the Indian social system, on Indian scholars she has met, on her visits to Germany, on Nazi legal codes, on the horrors of lynching, and still more horrors of lynching, on the severity of Jim Crow laws, on the pattern of police murders of blacks, and, then again, on her own experiences. She recounts any number of vexing or infuriating encounters that she has undergone with people at airports or restaurants, the DEA agents who decide that she is suspicious, the waiter who manages not to serve her table, together with vexing experiences that other black people have had — a distinguished black man mistaken for a bicycle messenger in his own apartment building, a student from Nigeria, whose language is English, praised for being able to speak it. 

    Certain of these incidents may seem ambiguous, and yet they do add up, such that, even if one or two of the incidents might be viewed in a kinder light by someone else, the pattern is hard to deny. The meaning of the pattern becomes identifiable, too, given the historical scenes that she has described. And yet, although she has every desire to register and express her own fury, and no desire to tamp it down, she has also no desire to drown in it. She looks for reassuring signs of a liberating potential, and she finds them here and there —  in the moral progress of the Germans and their reckoning with civic monuments. Barack Obama’s presidency strikes her as a not insignificant step forward. As for what came after Obama — well, she concludes the main text of her book with a sentimental anecdote about a surly MAGA-hatted white plumber, unhappy at having to work for a black lady in her leaky basement, who softens up after a while, which suggests the possibility of progress, in spite of everything. 

    I suppose that hard-bitten readers will figure that Wilkerson goes too far in clinging to some kind of optimism for poor old America. But then, I figure that I have some acquaintance with the potential readership for her book and the several other books that I have just discussed, if only because the readership spent several months in the spring and summer of 2020 marching around my own neighborhood. I can imagine that each of those books is bound to appeal to some of those militant readers, and to disappoint the others. Ta-Nehisi Coates will always be a popular favorite, if only because of his intimate voice, which has an attractive tone regardless of what he happens to be saying. Then again, in the course of the uprising, a carload of gangsters profited from the mayhem to break into a liquor store around the corner from my building and to carry away what they could. And those particular people, if they happen to be book readers, which is entirely possible, may look on Coates with a cold eye, given how lachrymose and virtuous he insists on being. They also won’t care for Alicia Garza’s California life-story and organizers’ tips in The Purpose of Power, and they are certainly not going to see anything of interest in the cheerful suggestions to white people in Ijeoma Oluo’s So You Want to Talk About Race. The gangsters might like Frank Wilderson III’s Afropessimism, though. Heartily I recommend it to them. Still other people, large numbers of them, will prefer the scholarly dialectics and historical research of Ibram X. Kendi. 

    And yet, I suspect that among the book-reading protesters, the largest number will prefer, as I do, Isabel Wilkerson and her Caste — prefer it because of her emotional honesty and directness, and because of her anger, which somehow ends up angrier than everyone else’s among the writers, and, then again, because it is refreshing to find someone with a greater interest in the shape of society than in the marks of interior belief, and still again, because of her streak of optimism. I cannot prove it, but, in my own perception, directness, anger, and a streak of optimism were main qualities that marched in the streets during those months — even if some people were adrift in academic leftism, and other people were looters, and still others rejoiced in singing, “Jesus is the answer / for all the world today.” The protesters chanted only a handful of slogans, which testified to the discipline that mostly dominated those enormous marches. Sometimes — not often — they chanted “George! Floyd!” — which was the most moving chant of all: the note of death, which underlay the vast national event. But mostly the protesters chanted “black lives matter” — which was and is a formidable slogan: an angry slogan, plaintive, unanswerable. And somehow “black lives matter” is a slogan flecked with a reform spirit of democratic hopefulness, not exactly as in 1854, and not exactly as in 1965, and yet, given the different circumstances, pretty much as in those other eras, in conformity with the invisible geological structures of the American civilization.

    Without

    It is a warm winter mid-afternoon.
    We must understand what happened is
    happening. The colossus stands before us with its signature
    pre-emptivity. It glints. It illustrates.
    At my feet the shadow of the winter-dead bushes wave
    their windburnt stalks. Their leaves
    cast gem-cut ex-
    foliations on the patio-stone—bushfulls of shadow
    blossoming—& different-sized
    heads—& in them leaves, flowers, shoots, burgeonings—
    though when I look up again from their grey chop & slip
    what is this winterdead bush
    to me. This is how something happens but what.
    Inside, the toddlers bend over and tap. They cannot yet
    walk or talk. They sit on the floor one in the high chair. They
    wait. They tap but make no sound. The screen they peer
    down into waiting is
    too slow. The trick
    won’t ever happen
    fast enough. They are waiting for their faces to
    dissolve, to be replaced by the
    quick game.
    If you speak to them, they don’t look up.
    The story doesn’t happen fast enough.
    The winterdead heads move in a sudden breeze.
    The wilderness grows almost giddy with alternatives
    on the cold patio. I stand barefoot in it.
    I always do this as it
    always does this.
    It lies on me. Scribbles a summer-scrawl. I watch my
    naked feet take on the shadow-blossoming without a trace
    of feeling. It feels
    good. As long as I see it it feels
    like years, invasions, legends—a thing with something at its heart—
    it moves the way the living move absent of will—
    the wind will define what is happening here—I call
    a name out—just to check—
    at the one wearing the purple jumpsuit
    with the small blue elephant
    stitched into
    it. The young
    of the elephant starve because the matriarch
    is killed before it can be passed on—where water is, where safe passage,
    how
    to forage, how remember, how mourn. But I
    was talking about the logo.
    If you try to rebuild the world you will go crazy.
    Come outside,
    come out take off your shoes.
    What did you do when the world was ending.
    Before the collapse.
    In the lull.
    They look down into the screen. I can hear
    a towhee make two notes then stop. Can hear, further off,
    a woodpecker search the hollow. Tap tap. A silence
    which goes in way too deep
    filling this valley
    I think.
    I had not heard it till
    a minute ago.
    Tap tap. Seeking the emptiness. What breeds in it. The festering.
    The nourishment.
    The whole valley echoes. Tap.
    And a single-engine plane now, like a blender.
    When it goes by the sky is much smoother.
    And the brook running through when wind dies down. There it is.

    We Refused

    amputation. Above the
    knee. You
    r so cold. Winter
    light moves up

    your neck to yr
    lips. For the duration of
    this song to u
    mother the cold

    light moves from yr
    lips to yr new
    permanently
    shut eyes. You

    can’t rave any
    more, slapping
    fury over the countdown of
    minutes, u can’t force yr

    quip in. The hills
    where the sun’s heading
    maintain their dead
    rest. No wind. No rain. The new

    wrong temps in-
    filtrate the too-dry
    grove, each stiffly curling silvery
    leaf—all up

    the slopes. All gleams
    momentarily.
    Each weed at
    the foot adds its

    quick rill of
    shimmering. Then off
    it goes. The in-
    candescent touches it, then

    off it goes.
    All afternoon day will do
    this. Touching,
    taking each thing up—no

    acceleration.
    Dry. Cold. Here
    mother is when it reaches
    yr eyes, the instant when it

    covers yr
    lids, curved to catch all
    brilliance, nothing
    wasted, carved, firm,

    while whatever

    is behind them,

    mind-light, goes.

    Maybe it will
    rain again
    the glittering says,
    but until then I

    will imitate the
    sheen of
    nourishment, of plenty, it says, I
    will be yr water,

    yr rivulet of

    likewater—while I, I, out here,

    bless you with
    this gorgeous
    uselessness
    mother, this turning

    of the planet onto
    yr eyes that refuse
    the visible now & ever
    again….

    We kept u
    as long as we
    cld whole.
    I have no idea

    what this realm is

    but it is ours,

    and as long as u
    are stuck in
    appearance I
    wish for the

    wind-glitter
    to come each day once
    to where you lie
    and wash you

    clean. Losing
    information yr gleaming
    shut lids light
    the end of the whole

    of this day

    again. Let it

    happen again.

    The Story of Dalal

    When the mighty men came back from faraway places, they were strangers in their own homes. They were catered to and kept in the dark. At some point the fathers had to be brought in, implicated if you will, in the deeds of their sons and their daughters, but until that day dawned, until a daughter’s transgressions became too public a matter to be ignored, or a son’s ways could no longer be indulged, the men were pampered and left ignorant. In the dark hours, when a reckoning could no longer be avoided, when the code of the place had been stretched to the breaking point, the women had to do things of great cruelty. It was their burden, their task.

    “She is the sister of men” was the highest compliment paid a woman who had to keep the world intact. To the women fell the task of smuggling diamonds from Sierra Leone because the skilled man of affairs who insisted that the high officials of the customs office were in his back pocket had gotten himself deported out of the country. The women were the ones who kept the constituents of a member of Parliament from finally having it out with him. They were the ones who prepared their sons for the duel and who stiffened their backs, reminded them of the hidden defects and capricious ways of their fathers. And it was their responsibility, of course, to keep the daughters in line. It was but a short distance from the daughter’s conduct, after all, to the mother herself. Better grieve for a daughter than play havoc with the order of things. This is the way things were understood here.

    It happened among us that a woman of radiant strength had to “do something” about one of our daughters. The daughter’s indiscretions had become too much to bear. The pompous and dangerous head of the household had signaled that his patience was running out. The sturdy woman would do the task that was hers to do. Dalal was taken to her father’s village for burial. The young woman, it was announced, had committed suicide. But it was commonly known that her mother had struck. It had about it an air of inevitability. Dalal had rejected all offers of help and punctured all the pretenses of her people’s code. She had taken a step into a world she could not understand, and she had not known where to draw the line. The evasions and the consolations of the old world, the world of her mother and her aunts, were denied her, but the new ways were not yet internalized by the young woman, who had just begun to see the world on the other side of the prohibitions.

    Dalal had been given the best of what a generation on the make thought their children should be given. Parents who toiled in Africa made possible boarding schools, a new prosperity, a new freedom, less encumbered and burdened by inherited ways of seeing and encountering things. The fears of the old world, the need to “walk by the wall” and to “kiss the hand that you cannot confront,” the fear of the unknown and of the alien, the need to placate and to conceal — from all this the young woman seemed released. The limits that had defined the world of her mother and her aunts had irretrievably collapsed, and with their collapse it was hard to distinguish the permissible from the impermissible.

    Dalal had ventured into the world on the other side of the divide; she was the first of her kin to venture beyond the line of the familiar sounds and customs. She developed a sudden and total disdain for the ways of her elders, for their tales, for their dire warnings. They, in turn, were unable to explain how the young woman should juggle the two worlds on the margins of which she had been placed. There came a time when she began to complain about the women from the village, the grandmothers and great-aunts who came visiting and who stayed at her home. She complained about their tattoos, about their wrinkled and toothless faces, about their prayers and the ablutions that preceded them. Above all, she complained of the smell that clung to the old women: she believed that they came with a special smell. And so she recoiled when they approached her and wanted to kiss her and wish her a life of honor and rectitude in the home of a decent God-fearing man. Yes, Dalal, if you go about doing what is asked of you, if you follow the straight path, if you remain untarnished and your reputation remains unblemished, happiness will come your way, and you will go from the home of your father to the home of your husband, an honored woman in whose reputation and whose conduct your father and brothers can take pride. No other man could humble your family by having his way with you. No ill-wisher could point to you whenever men and women sought to devour the reputations of others.

    A relative of Dalal prided herself on the fact that she had been the first to detect early signs of trouble. The world here came in very small ways and expressions. The unwashed relatives from the village noted that Dalal did not invite relatives and friends to join a meal in the way that such invitations should be extended. Dalal would only offer a single invitation. And when the guest insisted that he or she had just eaten, she always took them at their word and left them to eye the food. In the protocol of the villagers you had to extend endless invitations and drag the guest to the table. Then you watched the guests who had “just eaten” stuff themselves with abandon. But the sophisticated young woman who had broken with her world would not play the game.

    Nor would she willingly join, it was noted in retrospect, her mother and her mother’s friends and guests when she was called to do so. In those sessions, young women learned the ways of their elders and the ways of the world. When she was forced to participate, Dalal was never fully there. She would not engage in the sonorous language and its clichés, she would not play along. When a visiting friend of her mother told her that Dalal and her son Shawki would make an ideal couple, Dalal had no qualms about saying that Shawki was a buffoon, that she had no interest in him whatsoever, that she would not be traded over coffee between two women from an obsolete generation.

    A strange kind of honesty made Dalal see the hypocrisies of her elders’ world. She began to view their deeds with new eyes, and gradually she began to judge. And because she did, she made her elders self-conscious. In her presence, her tough mother and aunts would at times squirm, and animated discussions would often come to an end whenever she walked in.

    But Dalal knew many things that they thought had eluded her. She tired of hearing pieties that were betrayed in daily practice. She had seen through the falsities of her elders. A few years before the trouble began, while still a young girl, Dalal had been used as an alibi for many indiscretions by the older women in her life. She recalled the record of each of the virtuous women who later came to lecture her about her own behavior. She laughed at the pretensions of the cuckolded husbands who knew perfectly well what was going on but preferred to look the other way.

    Dalal had seen her pretentious paternal uncle Abu Hassan pass himself off as a man of the world, proudly displaying his women, letting the word out that he had finally seduced the voluptuous Leila and beaten out the competition. She then set this alongside what she knew of Abu Hassan’s wife. Fair-skinned and vain, sure of her beauty and more sure of the prerogatives of her new money, Abu Hassan’s wife exercised her own options as well. Two or three young men were in the wings, and it was rumored that they were being kept and provided for by the lady herself. Abu Hassan, Dalal knew, was both a rooster and a cuckold. In his own code, of course, he was a hunter and victorious. And in the pronouncements of his wife, the lady was queen in her house, a virtuous woman, cleaner than the ways of the cynical city.

    Dalal’s angle of vision enabled her to see the whole thing. Thus, when the virtuous woman said that she had spotted Dalal coming out of one of the furnished apartments on Hamra Street, Dalal recited what she knew of the other woman’s comings and goings. When given a chance to deny what she had been charged with, Dalal refused. She declined to participate in the charade and the theater that was Lebanese honor. Early marriage suggested itself as a remedy. A man, it was believed, could rein in this kind of passion. Dalal would have her own home, shoulder new responsibilities, and the storm would blow over. She could then begin to make her own discreet trips to the tailor and offer the excuses and the evasions of other women of honor and responsibility. A smug official of her father’s generation was the man recruited to cap the volcano. Dalal’s mother insisted that the man was Dalal’s own choice, that it was an affair of the heart.

    A respectable dowry was given to the unlikely couple. That was what money made in Africa was supposed to do — schools for the boys, dowries for the girls. All prayed that the young woman’s story was over. The determined mother had pulled it off. Dalal had walked from the home of her father to the home of her husband.

    But the hopes turned out to be short-lived. As the young woman explained it, surely she deserved something other than what she got. The man in her life was a man of reasonable distinction. He had studied on his own and risen in the bureaucracy. But like her parents, Ali was a squatter in Beirut. He had about him the kind of clumsiness that Dalal’s generation was so fond of deriding and so quick to see in a man’s speech, in the kind of tie he wore, in the way he shook hands. Ali was doomed in the young woman’s eyes: he spoke the Shia dialect of the south. His French was not refined enough. His pronunciation amused the young woman who had learned French properly. That mighty badge of distinction, the French “r,” never tripped off his tongue the way it should have.

    This was a world of mimic men. A dominant culture from afar, its acquisition and its display, its word and its jokes, were what set people apart from one another, what gave some of them a claim to power and self-worth. French pronunciation gave away the origin of men and women, the “age” of money in a particular household: new money spoke French in one way, old money in quite another way. Boys who learned it under the husk tree — or was it the oak tree? — as Ali proudly proclaimed to have done, had no chance of passing themselves off as sophisticated men of a very demanding place.

    The young Tolstoy, who grew up in a culture that borrowed the trappings and the language of France for its court and its gentry and its salons, divided the social world into two principal categories: comme il faut and comme il ne faut pas. Tolstoy’s comme il faut consisted “first and foremost in having an excellent knowledge of the French tongue, especially pronunciation. Anyone who spoke French with a bad accent at once aroused my dislike. ‘Why do you try to talk like us when you do not know how?’ I mentally inquired with biting ironies.”

    Dalal’s husband was definitely comme il ne faut pas. He knew nothing of the ups and downs of the relationship between Jacques Charrier and Brigitte Bardot. He was not familiar with the songs of Charles Aznavour and Sacha Distel. He told what for his wife and her companions were dreadfully boring stories about his triumphs in the bureaucracy, how this or that political boss needed his help and his patronage, how he had clashed with the minister and how the cabinet member had backed down because of his own superior knowledge and judgment. And he endlessly recited the familiar tale of how he had come into Beirut a quarter-century ago, how he had studied by the light of a kerosene lamp, how he had been one of the very first Shia boys from his world to graduate from the Lebanese University, how vain city boys taunted him about his village and his past, about the idiom and the twang of the countryside.

    The man of position had achieved all he could have hoped to achieve. But none of it mattered to the irreverent young woman by his side. That kind of tale would have filled the heart of a woman a generation older than Dalal with great pride. A different woman — denied, or spared, the world that Dalal had now seen — would have viewed his triumph as hers, and that of her kin. But this was not the kind of man to cut an acceptable figure in the mind of a young woman who had grown up on a diet of French novels and films, who was courted by young men who had nothing to do other than sharpen their skills for the hunt: those effeminate young men with shirts unbuttoned to their navels, those dandies with their gold chains, with their melodramatic and insinuating puffs on their Gitanes, with new cars purchased by fathers who tackled hell in remote places, were surely no match for the sturdy qualities of Ali, with his yellow socks and bad French.

    A real man, the sober official insisted, should not be compared to such flimsy material. But this flimsy material was the new world, the world to which his treasured young wife belonged. Ali could not take the chic young woman back to where he and her father had come from, to the village where women still dried and saved cow-dung for fuel, where children used the bones of dead animals for toys. How was he to communicate his world, and its wounds and its limits, to someone who had not known it? How was he to tell Dalal of his cruel and terrifying father, who humiliated him at every turn, and of the schemes of his stepmother, and of the distance he had to cover, forever on the run, unable to take anything for granted or to believe that he had anything to fall back upon? His family had thrown him into a mighty storm, and he had been denied even the possibility of a graceful, quiet failure.

    As the young woman picked at the filet mignon that was delivered to her doorstep, he very much wanted to tell her, while knowing full well how much of a bore he would be, of the white bundle that used to come to him while away at school, of the few scrawny potatoes in it, of the endless diet of lentils, of the few thin loaves of peasant bread. Child, Ali wanted to scream, and he often did, where have you been and what have you seen? You were spared such terrors and such needs. Ali’s generation had ploughed and had sown, and Dalal was the harvest. Ali’s generation, the generation of Dalal’s father, had never bothered to inquire as to the ends of such striving and such toil. With a hellish world to their backs, they had kept on the run. And now the journey had culminated in signé shirts and blouses, in spoiled daughters and sons, in endless trips to reputable tailors, in dining rooms whose décor was declared obsolete soon after it had been lavishly purchased and proudly displayed.

    The net that entangled women older than Dalal failed to entangle her. She was too far gone to submit and to accept. Hard as her husband and her mother would try to keep her within the boundaries, the young woman had become brazenly independent. She put very little if any effort and time into covering her tracks. The furious beatings administered by her mother and her husband were to no avail. On the morning after, she would plunge into it again, and ill-wishers would report her latest escapades. She was seen going into and coming out of this or that building, she had succumbed to the blandishments of yet another dandy who would proudly report his latest conquest. In the carefree city that outsiders loved so much for its freedom and its joie de vivre, the men and the women who lived there were suffocated and hemmed in by so many curious, watchful eyes. Even the trees had eyes here, wrote the sensitive novelist Hanan As-Shaykh.

    The gossips had seen it coming. The coroner’s and police reports about the terrifying day were met with the usual derision: the verdict of suicide, it was said, was secured by the payment of a large bribe. An ivory tusk, an expensive one of which Dalal’s family was proud, had changed hands and now adorned the coroner’s living room. The officials were men of this society, after all: they knew their world and what it drove men and women to do.

    When Dalal’s body was taken to her father’s village, her father and her husband were on hand to receive the condolences of those willing to treat it as a case of suicide. But the day belonged to her mother, the tower of strength, the victim and the killer, sure in her grief that she had done what she did for the sake of her other daughters, of her sons, of her home. The mother wailed, disheveled her hair, tore at her own clothes. She lined up all of Dalal’s shoes, all those elegant shoes that the young woman had bought with the new money, and she spoke to them, it was said, about the young woman who had departed at so tender an age. She wanted Dalal, her Dalal, back. The fancy shoes and the primitive code of honor: this country played them side by side.

    A new and intense piety overtook Dalal’s mother after the deed was done. A few years younger than my own mother, more exposed to the ways of the modern world, she would from then on accompany my mother to the holy shrine of Zaynab, the daughter of Imam Ali, in Damascus. When unable to do so, she would give my mother money and food to give to the poor who gather around the shrine, and to the keepers of the shrine. The “secret” was shared between the two women on one of those journeys to Damascus. My mother was of two minds. She abhorred the deed, but she respected the mighty woman and she knew what pressures and expectations had led her to do what she had done. Dalal, my mother said in defense of the woman, was a “piece of her mother’s liver”: nothing could be more precious than one’s own child. But for some time Dalal’s mother had been walking on eggshells. Dalal’s father, now a prosperous man, had become restless, and there was a danger that he would go beyond the common indiscretions, that one day a clever young huntress would lure him away from his family. A fallen daughter would serve as a convincing pretext and the honorable man would be released in his own eyes from a home that had disgraced him. Sadness and grief, my mother believed, were better than disgrace. Dalal’s mother had done a terrible duty, but decency required that those quick to judge should hold their tongues. Love, even maternal love, was a luxury here. It was given when it could be afforded, when men and women were not up against the wall, when others were not busy clawing away at their reputations, threatening them with exposure and shame, leading them into ditches where even “pieces of their liver” had to be inexorably removed.

    After the deed was done, Dalal’s mother was never as commanding as she was before, her face never quite as bright. She no longer sounded sure of herself. The tough woman who had survived hellish years in Africa, who had single-handedly built a fortune after her husband was deported out of Ghana, who had put aside enough money to aid her father and a pretentious brother who could never make ends meet, who was generous to the multitude of relatives and of stray men and women who walked to her door with a hard luck story, was transformed overnight.

    The letters that I wrote for her to friends and relatives in Africa, which previously had to be read back to her over and over again and repeatedly corrected before they met with her approval, became perfunctory. She trusted the writing, she said, there was no need to read them back. The tales she told in the letters to relatives and friends were no longer crisp and chatty. They had about them a matter-of-fact quality. One letter that she drafted to the overbearing husband, who was always in and out of the country, reported that all was well in the family, that she would see to it that all was well. This letter, and this section in particular, was read to her over and over again at her request. She wanted some hidden meaning to be transmitted, some knife perhaps to jump out of the pages, some sense of the cataclysmic deed that she had done — a reminder to the honorable man that it was she who had to keep the world intact, that he would never quite understand her sacrifice and her anguish.

    But the lines penned by the letter-writer fell short of what she wanted. Arabic, the language of cruel innuendo and hidden meanings and intricate alleyways, failed Dalal’s mother on that day. And with uncharacteristic sharpness, she told the attentive scribe that his style had deserted him, that he should be sure to plan a future that excluded a writing career. Yet she wanted the young man drafting the letters to stick around that day. She could bear no solitude. More than that: the drafter of the letters had been a friend of Dalal. The two of them had exchanged jibes and put-downs, they had tested one another about the latest fads, about books and movies. Dalal had insisted that the Arabic letters at which the young man excelled, which had brought him not only spending money but also access to the secrets of so many families here, gave him away as a product of the old culture, that the formal structure of the letters, the frequent invocation of Allah’s name and blessing and praise, confirmed the old mentality.

    Dalal and her friend who was good at Arabic letters had shared what they had shared. It was enough for Dalal’s mother that the young man stuck around that day. They both knew when they were speaking of the dead. They both knew the hidden language of lament and yearning. The mother very much wanted her daughter’s friend to know, without uttering a word about the entire matter, that it had all been a tragic act of last resort, that nothing else could have been done, that a mother’s grief exceeds the imagination of the closest, the kindest, the most outraged of friends.

    My family’s home was in the village of Arnoun, in the district of Nabatiyya. I was the son of Ali Ajami of Arnoun and Bahija Abdullah of Khiam, whose marriage soon ended in divorce. My mother had come to this ill-starred village when she married my father. She had come from a large clan, the Abdallahs, from Khiam, a town in the valley of Marju’un. Khiam was not far away. The distance could not have been more than fifteen kilometers. But that was far enough to make it seem like a distant land. Khiam was a place where children played next to running streams and women had time to tell exquisite and drawn-out tales. The men working Khiam’s fields retired to places in the shade; the exuberant women passing by gave more taunting and playful remarks than they received.

    Arnoun, at the foot of Beaufort, the Crusader castle, was a different kind of place, harsh and forbidding, surrounded by granite cliffs. There was to the place the feel of living in a quarry. Here the banter was less kind, and the men more sullen and brittle. The women struggling uphill to the wells, with jars on their heads, had little time or energy for chatter. There was a pond in Arnoun at the entrance of the village, near the grey mausoleum where my grandmother’s parents were buried, but the pond that drew in the rain always dried up in no time. It was by its cracked, wrinkled surface that I knew the pond.

    Hyenas stalked the place. But as they said in Arnoun, the sons of Adam were more frightening than the hyenas. At the edge of the village, beyond its scattered patches of tobacco, its few fig trees, was the wa’ar, the wilderness — rocky, thorny land without vegetation. The wa’ar was more than a place beyond the village. It was a point beyond censure. It was from the wa’ar, the wild heath beyond the village, that the hyenas turned up. The dreaded creature, it was said, could cast a spell on its victims. Stories were told of infants taken to the wa’ar who were never brought back. Daughters who dishonored their families were taken to the wa’ar. In the legend of the wa’ar, there was a rock where a shepherd had killed his sister who had gone astray. He had taken her there, slit her throat, and left her to die. For many years afterward Arnounis still swore they could identify the rock where the shepherd had done horror’s work.

    My beloved mother, I know of the hellish years that you spent in my father’s village, of the backbreaking toil. It hurts me to know what it was like, to think of how much you endured. I know that you spent a good deal of your life without a man’s protection and a man’s labor and a man’s support. I remember that I am a stepson, that my mother is not there to defend me against a heartless father. I know the tales of hurt you want me to remember. I live amid the tales. But all I want is for the tales to release me from their grip. If this is infidelity, so be it. I want to be your son, I shall always be so. But I do not wish to appropriate your sorrow and your defeat and make them mine. Surely in your galaxy of imams and their sayings, in your endless supply of parables and proverbs, there must exist the possibility of a life lived without man being hunter or prey.